In a significant shift reflecting the escalating competition in artificial intelligence, both Anthropic and OpenAI have revised their safety commitments. Reports indicate that Anthropic has abandoned a crucial pledge to pause training of advanced AI systems unless adequate safeguards were in place. The change comes as Anthropic aims to compete more aggressively against rivals such as Google and xAI.
According to a report by TIME, Anthropic's chief science officer, Jared Kaplan, explained that the company no longer believes it is beneficial to halt training due to a lack of comprehensive risk mitigations. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments… if competitors are blazing ahead," Kaplan stated. The decision marks a departure from Anthropic's previous stance as a leader in AI safety.
Anthropic's recent policy changes come amid a public dispute with U.S. Defense Secretary Pete Hegseth regarding access to its Claude AI, making it the only major AI lab that has not granted full access to the Pentagon. This move highlights a growing tension between governmental interests and corporate AI development.
Edward Geist, a senior policy researcher at the RAND Corporation, noted that the earlier discourse surrounding AI safety originated from an intellectual community predating the current landscape of large language models (LLMs). He remarked on how early advocates of AI safety had a different vision of advanced artificial intelligence, which contrasts sharply with the current capabilities of LLMs.
Moreover, the shift in language is not isolated to Anthropic. OpenAI has also updated its mission statement, removing the term “safely” from its earlier commitment to developing general-purpose AI that benefits humanity without a financial return requirement. The new statement focuses on ensuring that artificial general intelligence serves the interests of all humanity.
This evolution in terminology signals to investors and policymakers that these companies are prioritizing competitiveness over safety concerns. Geist emphasized that the definition of “AI safety” has become ambiguous, complicating the landscape for companies seeking to establish their commitments to responsible AI development.
Anthropic's revised policy now includes commitments to transparency, such as publishing "frontier safety roadmaps" and regular "risk reports." The company asserts that it will delay development only if it perceives a significant risk of catastrophic outcomes.
As funding surges into the AI sector, both Anthropic and OpenAI are looking to bolster their commercial standings. Recently, Anthropic announced it raised significant capital, achieving a valuation around $380 billion. In contrast, OpenAI is finalizing a funding round that could reach up to $100 billion, backed by major tech players.
Both companies have secured lucrative contracts with the U.S. Department of Defense, although Anthropic's relationship with the Pentagon may be jeopardized due to its access policies. As the geopolitical landscape becomes increasingly competitive, experts like Hamza Chaudhry from the Future of Life Institute argue that these policy changes are more about adapting to political dynamics than catering to government contracts.
Chaudhry remarked that Anthropic is shifting its narrative, indicating a need to reduce unconditional safety commitments in light of rising competitive pressures. "Anthropic is now saying, 'Look, we can't keep saying safety, we can't unconditionally pause, and we're going to push for much lighter-touch regulation,'" he stated.










































