Anthropic and the Pentagon are reportedly arguing over Claude usage
Anthropic and the U.S. Department of Defense are reportedly in discussions over how Claude can be used in government settings, highlighting tensions around AI safety, access, and military applications.
The Pentagon has been pressing top AI labs to let the U.S. military use their systems for “all lawful purposes,” but Anthropic has been resisting that request, according to a new report from Axios.
Axios says the government is making essentially the same demand of OpenAI, Google, and xAI. A Trump administration official, speaking anonymously, told Axios that one of those companies has already agreed to the “all lawful purposes” language. At the same time, the other two have reportedly shown at least some willingness to move in that direction.
Anthropic, however, is described as the most unwilling to sign off. As a result, Axios reports that the Pentagon is now threatening to cancel its $200 million contract with the company.
The tension has been building for weeks. In January, The Wall Street Journal reported that Anthropic and Defense Department officials were at odds over the boundaries of how Claude could be used. The Journal later reported that Claude was used in the U.S. military operation to capture then-Venezuelan president Nicolás Maduro.
An Anthropic spokesperson told Axios that the company has “not discussed the use of Claude for specific operations with the Department of War,” and said the conversations have instead centred on “a specific set of Usage Policy questions — namely, our hard limits around fully autonomous weapons and mass domestic surveillance.”