In a striking revelation, Dario Amodei, the CEO of Anthropic, has publicly accused OpenAI and its co-founder Sam Altman of spreading misleading information regarding their recent military contract with the Department of Defense. This allegation arose from a leaked internal memo dated June 9, 2024, in which Amodei labeled OpenAI's public claims as "straight up lies."
The controversy emerged following OpenAI's announcement of a contract with the Pentagon, which purportedly includes safeguards against unethical uses of artificial intelligence. Amodei, however, contended that these assurances are merely "safety theater," designed to assuage public and employee concerns rather than enforce substantial ethical boundaries.
Amodei's critique centers on the negotiations that both companies had with the Defense Department. While Anthropic had an existing $200 million contract with the military and sought an expanded agreement for its Claude AI systems, talks fell apart when the Pentagon insisted on a broad clause permitting "any lawful use." Anthropic rejected this demand, as it wanted explicit prohibitions against uses such as domestic mass surveillance and the development of autonomous weapons.
In stark contrast, OpenAI went ahead with the contract, claiming that it included verbal assurances against such applications. Amodei argued that the core issue lies in OpenAI's approach to ethical compliance, suggesting the company favors managing perceptions over implementing concrete protections against potential abuses.
The crux of the disagreement highlights a significant legal and ethical debate within the AI industry over the interpretation of "lawful purposes." OpenAI, in its official statement, claimed that the Department of Defense clarified that mass domestic surveillance is illegal and therefore not a concern under the contract. Critics, however, pointed out the inherent flexibility of such legal terms: what counts as lawful could shift with future legislation or executive policy, leaving the contract's practical limits uncertain.
The fallout from this clash has been notable, as evidenced by a dramatic 295% increase in uninstalls of OpenAI's ChatGPT application following news of the military contract. Conversely, Anthropic's Claude app has seen a surge in popularity, climbing to the second position in the App Store, indicating a shift in public sentiment toward companies perceived as more ethically responsible.
This incident is not an isolated event but a continuation of a broader historical tension between Silicon Valley firms and the military-industrial complex. Previous controversies, such as Google's Project Maven, which faced employee backlash over the use of AI in drone warfare, have set a precedent for corporate responsibility in technology.
As the debate unfolds, technology ethicists view this dispute as a pivotal moment for the AI industry, underscoring the challenges of embedding ethical frameworks within lucrative government contracts. The divergent paths of Anthropic and OpenAI may compel other firms to align with one of two emerging ideologies: a flexible approach that prioritizes business opportunities, or a stricter adherence to ethical imperatives.
Ultimately, Amodei's accusations against OpenAI reflect a crucial discourse on the future of AI governance, highlighting the need for clear ethical standards in an industry poised for rapid growth and increasing scrutiny.