Over the past several years, the term “woke” has been applied to a great many things. Using it to describe an AI company’s refusal to permit autonomous lethal weapons and mass surveillance of American citizens, however, is a particular rhetorical move, and one that merits analysis rather than mere reporting. In early 2026, Pete Hegseth, Secretary of War (the Trump administration’s preferred title for the head of the Defense Department), called Anthropic’s safety restrictions on Claude “woke AI” and what his office termed “defective altruism” that impairs the military’s capacity to do its job. Ethical AI principles were recast as ideological contamination, as social justice intruding on the serious business of national security. As politics, that argument persuades in some quarters. As law, it was not a sufficient foundation for what the Pentagon then attempted against Anthropic, as a federal judge later found.
| Category | Details |
|---|---|
| Topic | Anthropic vs. Pentagon “Woke AI” Dispute (2026) |
| Key Pentagon Figure | Pete Hegseth, Secretary of War (Defense) |
| Anthropic CEO | Dario Amodei |
| Anthropic’s “Red Lines” | No autonomous lethal weapons; no mass domestic surveillance |
| Pentagon Label Used | “Woke AI” / “Defective Altruism” |
| Threats Made | Supply chain risk designation, Defense Production Act, $200M contract termination |
| Venezuela Incident | Claude reportedly used in a military raid; Anthropic treated it as a breach of trust |
| Federal Judge Ruling | Preliminary injunction (March 2026); designation ruled “unlawful” |
| Microsoft’s Role | Filed supporting brief framing the dispute as a defense of democratic norms |
| Contract Value | ~$200 Million |
| Reference Website | anthropic.com/policy |
Instead of letting the “woke” framing obscure the specific limits that produced the disagreement, it is worth stating them plainly. Anthropic drew two lines: Claude would not be used to power fully autonomous lethal weapons systems that operate without human oversight, and it would not be used for mass domestic surveillance of Americans. Neither position is obscure. The first aligns with a long-running debate in the military and defense ethics community about the proper role of human judgment in decisions to use lethal force.
The second reflects the same worry about government overreach that produced the Fourth Amendment and has animated civil liberties law for decades. Hegseth demanded that Anthropic remove both restrictions and permit the DoD to use Claude for “all lawful purposes”, a phrase that, as Anthropic’s legal team pointed out, is broad enough to effectively erase the restrictions without saying so explicitly.
The Venezuela incident turned the conflict from theoretical to operational. According to reports, Claude was used in connection with a military raid involving Venezuelan President Nicolás Maduro, a use Anthropic’s terms of service did not permit, and the company learned of it only after the fact.
One source with knowledge of the negotiations described the episode as a breach of trust, one that established a pattern: limits on paper were not translating into constraints in practice. From Anthropic’s perspective, this made the contract language all the more critical. If the government was already using Claude in ways the company had not authorized, the only safeguard against further unauthorized use was an explicit contractual prohibition, which is exactly what the Pentagon refused to accept.
After the stalemate, the threats escalated and grew more targeted. Hegseth moved to designate Anthropic a “supply chain risk”, a label typically reserved for foreign entities believed to threaten national security, not for American companies declining to strip safety features from their products. The Defense Production Act was invoked as a means of forcing compliance. The $200 million contract was held out as leverage.
The Under Secretary of Defense attacked Dario Amodei personally on social media. A former Defense Department official told reporters covering the story that the legal basis for all this was “extremely flimsy”, given the unusual combination of authorities being wielded against a private company for refusing a particular contract change.
In March 2026, a federal judge blocked the supply chain risk designation, ruling it an unlawful attempt to punish Anthropic for positions protected by the First Amendment. The decision established that the government’s chosen method of pressuring Anthropic had exceeded the bounds of the law; it did not settle the underlying contract dispute.
Microsoft, whose own AI products compete with Anthropic’s in the federal market, filed a supporting brief framing the dispute as a matter of government overreach and democratic norms. That a direct competitor weighed in on Anthropic’s side suggests other tech companies read the dispute as carrying implications well beyond one firm’s contracts.
Taken together, the escalating threats, the Venezuela incident, the injunction, and the “woke AI” framing make clear that this fight is about more than one company’s refusal to amend a single contract. It is about who gets to set the boundaries on what AI systems may be used for: the governments deploying them, the companies building them, or some arrangement between the two that has not yet been worked out. The “woke AI” label frames that question so that one answer looks obviously correct. The federal judge’s order suggests the question is considerably harder than the label implies.
