Anthropic files lawsuit against the US Defence Department over supply-chain risk label

Anthropic has sued the US Defence Department after being labelled a potential supply-chain risk, arguing that the designation could damage contracts and restrict its access to AI technology.

Mar 10, 2026 - 10:55

Anthropic has followed through on its pledge to challenge the Department of Defence in court after the agency designated the company a supply-chain risk late last week.

The maker of Claude filed two complaints against the DOD on Monday, one in California and another in Washington, D.C., after a weeks-long dispute between Anthropic and the department over whether the military should be given unrestricted access to Anthropic’s AI systems. Anthropic had drawn two firm red lines: it did not want its technology used for mass surveillance of Americans, and it did not believe the technology was ready to power fully autonomous weapons without humans making targeting and firing decisions.

Defence Secretary Pete Hegseth had argued that the Pentagon should be able to use AI systems for “any lawful purpose” and should not be constrained by a private contractor.

A supply-chain risk designation is typically reserved for foreign adversaries. It requires any company or agency doing work with the Pentagon to certify that it is not using Anthropic’s models. While several private firms continue to work with Anthropic, the company now risks losing a substantial portion of its government business.

In a complaint filed in federal court in San Francisco, Anthropic described the DOD’s actions as “unprecedented and unlawful” and accused the administration of retaliation. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the lawsuit states.

The protected speech Anthropic is referring to is its position on the “limitations of its own AI services and important issues of AI safety,” according to the complaint. The administration, including Defence Secretary Hegseth and President Trump, has criticised Anthropic and its CEO, Dario Amodei, as “woke” and “radical” for the company’s support of stronger AI safety and transparency measures.

In the filing, Anthropic argued that while the government is free to disagree with its views or decline to use its products, it cannot use state power to punish or suppress the company’s expression.

Anthropic also contended that “no federal statute authorises the actions taken here,” saying the Defence Department’s supply-chain risk designation was issued “without observance of the procedures Congress required.” Under the law, agencies are generally required to conduct a risk assessment, notify the targeted company and allow it to respond, make a written national security determination, and inform Congress before excluding a vendor from federal supply chains.

The company further argues that the president acted beyond the authority granted by Congress when he instructed every federal agency to immediately stop using Anthropic’s technology after Amodei said he would not back down from the company’s red lines. Following statements by both President Trump and Secretary Hegseth, the General Services Administration — the federal agency responsible for government purchasing and contracts — terminated Anthropic’s “OneGov” contract, cutting off the availability of Anthropic’s services to all three branches of the federal government.

“Defendants are seeking to destroy the economic value created by one of the world’s fastest-growing private companies,” the lawsuit says. “The Challenged Actions inflict immediate and irreparable harm on Anthropic; on others whose speech will be chilled; on those benefiting from the economic value the company can continue to create; and on a global public that deserves robust dialogue and debate on what AI means for warfare and surveillance.”

As part of the complaint, Anthropic asked the court to immediately halt the Defence Department's designation while the case moves forward, and ultimately to invalidate it and permanently bar the government from enforcing it.

“Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners,” an Anthropic spokesperson said in a statement. “We will continue to pursue every path toward resolution, including dialogue with the government.”

Anthropic also filed a separate complaint with the D.C. Circuit Court of Appeals, since federal procurement law allows companies to challenge supply-chain risk designations through that route. In that petition, the company asked the court to review and overturn the Defence Department’s decision to classify it as a national security supply-chain risk. Anthropic also argued there that the move was unlawful, retaliatory, and improperly carried out under federal procurement law.

Shivangi Yadav reports on startups, technology policy, and other significant technology-focused developments in India for TechAmerica.Ai. She previously worked as a research intern at ORF.