Pentagon Designates Anthropic as a Supply-Chain Risk in AI Security Review
The Pentagon has labelled AI company Anthropic a potential supply-chain risk, highlighting growing tensions between U.S. defence agencies and AI firms over military use of advanced models.
According to Bloomberg, which cited a senior defence official, the Department of Defense has formally informed Anthropic's leadership that the company and its products have been designated as a supply-chain risk. The move follows weeks of tension between the AI company and the Pentagon over how the U.S. military could use Anthropic's systems.
The designation comes after Anthropic CEO Dario Amodei refused to allow the military to use the company's AI systems for mass surveillance of Americans or for fully autonomous weapons in which no humans are involved in targeting or firing decisions. The Defense Department, by contrast, has argued that its use of AI should not be restricted by terms set by a private contractor.
Supply-chain-risk designations are typically associated with foreign adversaries, making this a highly unusual step for a U.S. AI company. Under the designation, any company or agency doing work with the Pentagon must certify that it is not using Anthropic's models in connection with that work.
The Pentagon's decision could disrupt not only Anthropic's operations but also the government's own. Anthropic has been the only frontier AI lab with classified-ready systems, and Bloomberg has reported that the U.S. military is currently relying on Claude in its Iran campaign, where American forces are using AI tools to rapidly process operational data. Claude is also among the key tools installed in Palantir's Maven Smart System, which military operators in the Middle East rely on.
Several critics have described the move as unprecedented. Dean Ball, a former Trump White House AI adviser, has characterised the supply-chain-risk designation as a "death rattle" of the American republic, arguing that the government has abandoned strategic clarity and respect in favour of what he described as "thuggish" tribalism that treats domestic innovators more harshly than foreign adversaries.
The backlash has extended beyond Anthropic itself. Hundreds of employees from OpenAI and Google have reportedly urged the Department of Defense to withdraw the designation and called on Congress to push back against what they see as a potentially improper use of government authority against an American technology company. They have also urged their own leaders to remain united in refusing Pentagon demands to use their AI models for domestic mass surveillance and for "autonomously killing people without human oversight."
Amid the dispute, OpenAI moved to strike its own agreement with the Department, allowing the military to use its AI systems for "all lawful purposes." That wording has raised concerns among some observers and employees, who worry that it is broad enough to permit the very uses Anthropic had been trying to block.
Amodei has described the Pentagon's actions as "retaliatory and punitive," and reports have said he believes his refusal to publicly praise or donate to President Donald Trump contributed to the worsening conflict with the Defense Department. OpenAI president Greg Brockman, meanwhile, has been a visible supporter of Trump and recently donated $25 million to the MAGA Inc. super PAC.