OpenAI shares new details about its partnership agreement with the Pentagon
OpenAI has revealed additional details about its agreement with the Pentagon, outlining how AI tools may support defence research, cybersecurity, and operational efficiency.
By Sam Altman’s own account, OpenAI’s arrangement with the Department of Defence was “definitely rushed,” and, as he put it, “the optics don’t look good.”
After talks between Anthropic and the Pentagon collapsed on Friday, President Donald Trump ordered federal agencies to stop using Anthropic's technology after a six-month transition period. At the same time, Defence Secretary Pete Hegseth said he was designating the AI company as a supply-chain risk.
Soon afterwards, OpenAI announced it had secured a deal to deploy its models in classified settings. Because Anthropic had said it was drawing hard red lines around the use of its technology for fully autonomous weapons or mass domestic surveillance, and Altman had said OpenAI shared those same red lines, some immediate questions followed: Was OpenAI being truthful about the safeguards it claimed were in place? And why was OpenAI able to reach an agreement when Anthropic could not?
As OpenAI executives moved to defend the agreement on social media, the company also published a blog post explaining its position and the framework behind the deal.
In that post, OpenAI identified three categories of use that it said remain off-limits for its models: mass domestic surveillance, autonomous weapon systems, and “high-stakes automated decisions (e.g. systems such as ‘social credit’).”
The company said that, unlike other AI firms that have “reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments,” OpenAI’s agreement is designed to protect its red lines “through a more expansive, multi-layered approach.”
“We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections,” the blog post said. “This is all in addition to the strong existing protections in U.S. law.”
The company added, "We don't know why Anthropic could not reach this deal, and we hope that they and more labs will consider it."
After the post went live, Techdirt’s Mike Masnick argued that the agreement “absolutely does allow for domestic surveillance,” citing language stating that the collection of private data would comply with Executive Order 12333, as well as other laws. Masnick described that executive order as “how the NSA hides its domestic surveillance by capturing communications by tapping into lines outside the US even if it contains info from/on US persons.”
In a post on LinkedIn, Katrina Mulligan, OpenAI’s head of national security partnerships, argued that much of the public debate over the contract language rests on the assumption that “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage policy provision in a single contract with the Department of War.”
“That’s not how any of this works,” Mulligan wrote, adding, “Deployment architecture matters more than contract language […] By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware.”
Altman also responded to questions about the arrangement on X, where he acknowledged that the agreement had been rushed and had triggered heavy criticism of OpenAI — enough so that Anthropic’s Claude passed OpenAI’s ChatGPT in Apple’s App Store rankings on Saturday. That raised a natural question: why move forward with it at all?
“We really wanted to de-escalate things, and we thought the deal on offer was good,” Altman said. “If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterised as […] rushed and uncareful.”