Anthropic and the Pentagon: What’s Really at Stake in the AI Debate

Anthropic’s position on military AI collaboration has sparked debate about ethics, national security, and the future role of artificial intelligence in defence systems.

Mar 4, 2026 - 20:44

The last two weeks have been dominated by a high-profile standoff between Anthropic CEO Dario Amodei and Defence Secretary Pete Hegseth, as both sides spar over how the U.S. military should be allowed to use advanced AI.

Anthropic says it will not permit its AI systems to be used for mass surveillance of Americans or for fully autonomous weapons that can carry out strikes without meaningful human involvement. Secretary Hegseth, meanwhile, has argued that the Department of Defence should not be constrained by a private vendor’s usage rules, insisting that any “lawful use” should be allowed.

On Thursday, Amodei made it clear publicly that Anthropic is not retreating — even as the company faces warnings that it could be labelled a supply chain risk. With headlines shifting quickly, it’s worth stepping back to lay out what this fight is really about and why it matters.

At the heart of the dispute is a basic question: who should control powerful AI capabilities—the companies building the models or the government seeking to operationalise them?

What is Anthropic worried about?

As noted above, Anthropic does not want its models used for large-scale surveillance of Americans or for autonomous weapons where humans are removed from target selection and firing decisions. In traditional defence contracting, companies typically have limited influence over how products are ultimately deployed. Anthropic has taken a different stance from the outset, arguing that AI poses distinct risks that require stronger, specialised guardrails. From Anthropic’s viewpoint, the challenge is how to preserve those guardrails once the technology enters military environments.

The U.S. military already uses highly automated systems, and some of them are lethal. Historically, the final decision to use lethal force has remained in human hands, but there are not many clear legal barriers that outright prohibit autonomous weapons in practice. The Department of Defence does not impose a blanket ban on fully autonomous weapons systems. A 2023 DoD directive allows AI-enabled systems to select and engage targets without human intervention, provided they meet defined standards and receive review and approval by senior defence officials.

That reality is exactly what makes Anthropic uneasy. Military programs are often classified, so if the U.S. were moving toward greater automation in lethal decision-making, the public might not find out until those systems were already deployed. If Anthropic’s models were used in that process, those applications could still fall under “lawful use.”

Anthropic’s argument is not that these applications must remain permanently prohibited. Instead, it’s that today’s models aren't advanced or reliable enough to support those roles safely. Picture an autonomous system incorrectly identifying a target, triggering escalation without human authorisation, or making an irreversible lethal choice in a fraction of a second. Put an imperfect AI in charge of weapons decisions, and you risk creating a system that acts extremely fast and with extreme confidence — while remaining prone to catastrophic errors in high-stakes situations.

Beyond weapons, Anthropic is also concerned about how AI could drastically expand lawful surveillance. Under current U.S. legal frameworks, surveillance of Americans can already occur in various forms, including the collection of texts, emails, and other communications. AI changes the scale and intensity: it can automate large-scale pattern discovery, connect identities across multiple datasets, generate predictive risk scores, and enable continuous behavioural monitoring — all at a level that would be difficult to achieve manually.

What does the Pentagon want?

The Pentagon’s position is that it should be free to apply Anthropic’s technology in any lawful way deemed necessary, rather than being constrained by Anthropic’s internal policy on surveillance or autonomous weapons.

More specifically, Secretary Hegseth has maintained that the Department of Defence should not have to follow a vendor’s rules and that it intends to use the technology only in lawful ways.

On Thursday, Pentagon chief spokesperson Sean Parnell wrote on X that the department has no intention of carrying out mass domestic surveillance or deploying autonomous weapons. But he also framed the ask plainly:

“Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell said. “This is a simple, common-sense request that will prevent Anthropic from jeopardising critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions.”

Parnell added that Anthropic has until 5:01 p.m. ET on Friday to decide. “Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOW,” he wrote.

While the Department of Defence maintains its stance is grounded in the principle that a corporation should not set military policy, Hegseth’s public remarks have at times suggested that the dispute also intersects with cultural and ideological messaging. In a January speech at SpaceX and xAI offices, Hegseth criticised what he described as “woke AI,” a line that some interpreted as foreshadowing this broader clash with Anthropic.

“Department of War AI will not be woke,” Hegseth said. “We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”

So what happens next?

The Pentagon has threatened two major actions: it could label Anthropic a “supply chain risk,” effectively shutting the company out of government business, or it could invoke the Defence Production Act (DPA) to compel the company to adapt its model to military requirements. The Pentagon has set a 5:01 p.m. ET Friday deadline for Anthropic to respond. With that cutoff nearing, it remains unclear whether it will follow through.

This is also not a fight either side can abandon. Sachin Seth, a VC at Trousdale Ventures who focuses on defence technology, said that being branded as a supply chain risk could be “lights out” for Anthropic.

At the same time, he argues that removing Anthropic from the Department of Defence could create national security complications.

“[The Department] would have to wait six to 12 months for either OpenAI or xAI to catch up,” Seth said. “That leaves a window of up to a year where they might be working from not the best model, but the second or third best.”

xAI is reportedly moving toward readiness for classified work and positioning itself as a replacement for Anthropic. Given Elon Musk’s public rhetoric, it’s widely assumed the company would have little hesitation in granting the DoD broad control over how its technology is used. Meanwhile, recent reporting suggests that OpenAI may maintain similar red lines to Anthropic on issues such as autonomous weapons and mass surveillance.

Shivangi Yadav reports on startups, technology policy, and other significant technology-focused developments in India for TechAmerica.Ai. She previously worked as a research intern at ORF.