The talks between Anthropic and the Department of Defense had been intensifying for months before the stalemate became public, with neither party publicizing the tension. When the dispute finally surfaced in a meeting between Anthropic’s leadership and Secretary of Defense Pete Hegseth, the central point of contention, whether Claude could be used for fully autonomous weapons systems and widespread domestic surveillance, had already been dogging contract negotiations for some time.
Anthropic CEO Dario Amodei’s public response to the meeting’s stated ultimatum was swift and uncompromising: “These threats do not change our position: we cannot in good conscience accede to their request.” Measured against how technology companies normally address government clients whose contracts represent hundreds of millions of dollars in potential revenue, the statement is extraordinary.
Amodei’s statement lacks the carefully crafted relationship language, the emphasis on collaboration and shared objectives, and the reluctance to pick a public fight with the federal government that usually mark such communications. In their place is a plain characterization of the Pentagon’s demands as something the company has deliberately decided to reject, along with a readiness to absorb the financial fallout of doing so.
| Category | Details |
|---|---|
| Topic | Anthropic vs. Pentagon dispute over Claude’s safety restrictions |
| Anthropic CEO | Dario Amodei |
| Pentagon Contact | Secretary of Defense Pete Hegseth |
| DoD Under Secretary | Emil Michael |
| Key Meeting Date | Shortly before Amodei’s public statement (early 2026) |
| Pentagon Demand | Accept “any lawful use” of Claude, including surveillance and autonomous weapons |
| Anthropic’s Refusal | Won’t enable mass domestic surveillance or fully autonomous weapons |
| Pentagon Threats | Supply chain risk label; Defense Production Act invocation |
| Amodei’s Offer | Collaborative R&D to improve AI reliability for defense — rejected |
| Known Claude Use | Part of U.S. operation involving Venezuelan President Nicolás Maduro |
| Reference Website | anthropic.com |
According to Amodei, Anthropic has refused to support two particular use cases: fully autonomous weapons and mass domestic surveillance. The surveillance concern centers on what artificial intelligence (AI) systems can do with the dispersed, individually innocuous data every person generates in daily life, such as location data, purchase history, communication metadata, and social media activity, when that data is assembled automatically, and at scale, into a comprehensive behavioral profile.
In a corporate blog post, Amodei provided enough detail to show that the concern isn’t theoretical. The autonomous weapons issue is different in kind, but the reasoning behind it is similar: current AI systems, despite their apparent capability in controlled settings, are not reliable enough to make the contextual judgments that lethal-force decisions require, and deploying them without human oversight creates failure modes that endanger both military personnel and civilians.
In response to Amodei’s remarks, Under Secretary of Defense Emil Michael attacked him directly on social media, claiming that the CEO “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.” Michael claimed in a CBS News interview that the uses Anthropic is concerned about are already forbidden by law and Pentagon policy.
This raises an obvious question that Michael did not specifically address: if those uses are already illegal and prohibited, why would the Pentagon refuse to include explicit contractual language barring them? Anthropic’s spokesperson addressed this directly, noting that the compromise contract language on offer was undercut by legal provisions that would effectively allow the safeguards to be set aside whenever the government chose.
The threats that followed the meeting were serious. Hegseth raised the possibility of invoking the Defense Production Act, a tool designed for wartime industrial mobilization that has occasionally been applied in peacetime with varying legal credibility. The Act gives the executive branch the power to compel companies supplying goods deemed essential to national defense to meet government requirements.
Speaking to the BBC on condition of anonymity, a former DoD official called the justifications for both threats “extremely flimsy.” Of the two, the supply chain risk designation would be the more immediately consequential: it would effectively declare Anthropic insufficiently secure for government use and bar it from federal contracts well beyond the Department of Defense.
The revelation that Claude had previously been used in a U.S. operation involving Nicolás Maduro, the president of Venezuela, adds a dimension to the story that wasn’t apparent during the initial contract discussions. Although the precise nature of that use has not been made public, the disclosure indicates that the relationship between Anthropic and U.S. government operations has already moved beyond the theoretical.
Amodei’s claim that Anthropic draws the line at widespread domestic surveillance while supporting legitimate foreign intelligence and counterintelligence applications points to a distinction the firm is actually attempting to uphold rather than merely employing as rhetorical cover.
Senior defense officials’ public remarks and social media posts about this conflict give the impression that the AI industry’s relationship with the federal government is about to enter a phase that the companies developing these systems didn’t fully anticipate and haven’t yet developed frameworks for navigating.
The idea that safety restrictions are features rather than limitations served as the foundation of Anthropic’s brand. To the Pentagon, those restrictions are barriers. The two parties are now at a public impasse over which framework governs, and neither is showing any sign of yielding.
