Anthropic AI safety chief steps down amid concerns

San Francisco / Global — February 10–11, 2026: In a high-profile departure that has stirred debate across the tech world, Mrinank Sharma, head of the Safeguards Research Team at AI company Anthropic, has resigned, raising alarm about the future direction of artificial intelligence safety and ethical alignment.

A Cryptic Farewell With a Stark Warning

Sharma announced his resignation in a public post on X (formerly Twitter), sharing a long, philosophical letter he also sent internally. In it, he wrote that the “world is in peril” due to a “whole series of interconnected crises unfolding in this very moment,” arguing that humanity’s moral wisdom must grow alongside its technological power — or face serious consequences.

Though Sharma did not provide specific details about conflicts inside Anthropic, he spoke openly about the difficulty of letting values guide action in a fast-moving tech organisation, suggesting that competitive and commercial pressures sometimes conflict with safety goals.

His Role at Anthropic

Sharma joined Anthropic in 2023 and led the company’s Safeguards Research Team — a group focused on understanding and reducing risks from AI systems, including work on AI sycophancy (where models flatter users excessively), defences against AI-assisted bioterrorism, and some of the first internal AI safety cases deployed into production.

According to reports, one of his final projects involved studying how AI assistants might influence human judgment and behaviour in ways that diminish human autonomy — a concern he highlighted as part of his broader warning.

Timing and Industry Context

The resignation comes days after Anthropic rolled out a major upgrade to its flagship AI, Claude Opus 4.6, aimed at demanding tasks such as coding and workplace productivity — part of the company's push to compete with rivals and scale commercially. Observers suggest the timing has amplified discussion of whether safety and ethical considerations are being balanced effectively against growth and product deployment.

Sharma's exit isn't an isolated event: recent weeks have seen other departures from the company, further fuelling conversations about internal culture and research priorities at leading AI labs.

A Broader Ethical Debate in AI

Industry watchers see Sharma’s resignation as another flashpoint in the ongoing debate over how AI developers should manage risk, ethics, and innovation. As companies race to improve and commercialise powerful models, tensions between long-term safety research and short-term product objectives can intensify — leading some experts to question whether current approaches are adequate for managing potential harms.

While Sharma has not publicly outlined his next steps, his decision and message are expected to keep the spotlight on responsible AI development and the need for robust governance frameworks as the technology evolves.