Pentagon considers labelling Anthropic a national security supply-chain risk
The Pentagon is reviewing whether AI company Anthropic should be classified as a supply-chain risk amid disputes over military use of artificial intelligence.
In a post published on Truth Social, President Trump instructed federal agencies to stop using all Anthropic products following the company’s public clash with the Department of Defence. The president permitted a six-month wind-down period for agencies currently relying on those products, but made clear that Anthropic would no longer be eligible for federal contracts.
“We don’t need it, we don’t want it, and will not do business with them again,” the president wrote in the post.
Notably, Trump’s message did not mention plans to formally designate Anthropic as a supply-chain risk, even though that possibility had previously been raised as a potential consequence. But a later post from Defence Secretary Pete Hegseth appeared to follow through on that threat.
“In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security,” Secretary Hegseth wrote. “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”
Anthropic said on Friday that although the company has not been contacted directly by the government, it intends to “challenge any supply chain risk designation in court.”
The dispute with the Pentagon centred on Anthropic’s refusal to permit its AI models to be used for either large-scale domestic surveillance or fully autonomous weapons systems, which Secretary Hegseth viewed as overly restrictive.
CEO Dario Amodei reaffirmed that position in a public post on Thursday, making clear that he would not back away from those two conditions.
“Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place,” Amodei wrote at the time. “Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions.”
OpenAI reportedly expressed support for Anthropic’s position. According to the BBC, CEO Sam Altman sent a memo to staff on Thursday stating that he shared the same “red lines,” and that any OpenAI defence contracts would likewise reject uses that were “unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.”
OpenAI co-founder Ilya Sutskever, who famously broke with Altman in November 2023 and has since launched his own AI company, also joined the discussion on Friday, writing on X: “It’s extremely good that Anthropic has not backed down, and it’s significant that OpenAI has taken a similar stance.”
But only hours after the Trump administration directed federal agencies to sever ties with Anthropic, OpenAI moved to fill the gap, announcing a Pentagon deal that Altman said preserved the same basic principles Anthropic had been defending — including bans on domestic surveillance and autonomous weapons.
According to The New York Times, OpenAI and government officials began discussing a possible partnership on Wednesday.
There will almost certainly be more developments ahead.
Anthropic, OpenAI, and Google were each awarded contracts by the U.S. Department of Defence last July. While some Google employees have voiced support for Anthropic, neither Google nor its parent company has publicly commented.