“I Want to Contribute in a Way that Feels Fully in my Integrity…” AI Safety Leader Leaves Anthropic

Mrinank Sharma, the head of Anthropic’s safeguards research team, publicly announced his resignation on the 9th of this month. His resignation post on X garnered almost a million views shortly after it went up. In the detailed letter, shared with colleagues and the public, Sharma said that the “world is in peril,” attributing this not only to AI risks but to a “whole series of interconnected crises unfolding in this very moment.” His departure from the prominent AI firm, backed by both Amazon and Google, highlights growing tensions between safety priorities and commercial pressures.

Today is my last day at Anthropic. I resigned.

Here is the letter I shared with my colleagues, explaining my decision. pic.twitter.com/Qe4QyAFmxL

— mrinank (@MrinankSharma) February 9, 2026

Sharma joined Anthropic in August 2023 after completing his PhD in machine learning at the University of Oxford. He has led the safeguards team since its formation last year, focusing on critical AI risks such as sycophancy (AI models excessively flattering users) and defenses against AI-powered bioterrorism. In the letter, Sharma also reflected on his achievements: “I’ve achieved what I wanted to here… understanding AI sycophancy and its causes; developing defences to reduce risks from AI-assisted bioterrorism; actually putting those defences into production.” He added that he had arrived in San Francisco two years earlier with the goal of contributing to AI safety.

The letter, infused with references to poets like Rainer Maria Rilke and William Stafford, painted a philosophical picture of humanity’s crossroads. Sharma wrote, “We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences.” He emphasized broader perils beyond AI or bioweapons, including global politics and rapid technological shifts, citing these as the reasons “the moment has arrived to move forward.” Observers interpret this as a critique of how companies struggle to let employee values govern actions under investor demands.

Anthropic, once a ‘safety-first’ lab, now pursues a reported $350 billion valuation amid fierce competition with OpenAI. Sharma’s exit follows the departures of AI scientist Behnam Neyshabur and R&D specialist Harsh Mehta last week, amid speculation that rushed product development is compromising safety protocols. The company has yet to release an official statement. The move echoes earlier resignations of Jan Leike and Gretchen Krueger from OpenAI, who criticized that company’s transparency and risk mitigation.

Sharma’s resignation shines a harsh light on the challenge of balancing AI innovation against ethical standards and existential risks.

Related: Alphabet Stays Quiet on Google-Apple AI Partnership During Earnings Call

The post “I Want to Contribute in a Way that Feels Fully in my Integrity…” AI Safety Leader Leaves Anthropic appeared first on The Next Hint.
