Tech employees call on Pentagon and Congress to remove Anthropic supply-chain risk designation

Technology workers urge the U.S. Department of Defense and Congress to reconsider labelling Anthropic a supply-chain risk, warning that the designation could stifle AI innovation and partnerships.

Mar 7, 2026 - 03:57

Hundreds of technology workers have signed an open letter calling on the Department of Defense to reverse its designation of Anthropic as a “supply-chain risk.” The letter also urges Congress to intervene and “examine whether the use of these extraordinary authorities against an American technology company is appropriate.”

The signatories include employees of major technology and venture capital firms such as OpenAI, Slack, IBM, Cursor, and Salesforce Ventures. The letter follows a dispute between the Department of Defense and Anthropic, which escalated last week when the AI company refused to grant the military unrestricted access to its AI systems.

Anthropic’s negotiations with the Pentagon centred on two red lines. The company said it did not want its technology used for mass surveillance of Americans or for autonomous weapons systems that could identify and fire on targets without a human in the decision-making loop. The Department of Defense said it had no plans to pursue either use case, but maintained that a private vendor’s policy choices should not constrain it.

After Anthropic CEO Dario Amodei declined to strike a deal with Defense Secretary Pete Hegseth, President Donald Trump on Friday ordered federal agencies to stop using Anthropic’s technology after a six-month transition period. Hegseth then moved to label Anthropic a supply-chain risk — a category typically reserved for foreign adversaries — which would effectively bar the AI company from working with any agency or firm that does business with the Pentagon.

In a post on X on Friday, Hegseth wrote: “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

Still, a post on X alone does not formally make Anthropic a supply-chain risk. Before that designation takes full effect, the government must complete a risk assessment and notify Congress. Only then would military contractors and partners be required to cut ties with Anthropic or stop using its products. Anthropic said in a blog post that the designation is “legally unsound” and that it intends to “challenge any supply chain risk designation in court.”

Many people in the technology industry view the administration’s response to Anthropic as severe and clearly retaliatory.

“When two parties cannot agree on terms, the normal course is to part ways and work with a competitor,” the open letter states. “This situation sets a dangerous precedent. Punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation.”

In addition to concerns about the government’s treatment of Anthropic, many in the industry remain uneasy about the possibility of government overreach and the use of AI for harmful purposes.

Boaz Barak, an OpenAI researcher, wrote in a social media post on Monday that preventing governments from using AI for mass domestic surveillance is also his “personal red line” and “it should be all of ours.”

Just moments after Trump publicly criticised Anthropic, OpenAI announced it had reached a deal allowing its models to be deployed in classified Department of Defense environments. OpenAI CEO Sam Altman said last week that the company shares the same red lines as Anthropic.

“If anything good can come out of the events of the last week, it would be if we in the AI industry start treating the issue of using AI for government abuse and surveilling its own people as a catastrophic risk of its own right,” Barak wrote. “We have done a good job of evaluations, mitigations, and processes for risks such as bioweapons and cybersecurity. Let’s use similar processes here.”

Shivangi Yadav reports on startups, technology policy, and other significant technology-focused developments in India for TechAmerica.Ai. She previously worked as a research intern at ORF.