Lack of a Clear Strategy on How AI Companies Should Work With Governments

Experts warn there is still no clear framework guiding how AI companies should collaborate with governments on regulation, security, and responsible technology use.

Mar 7, 2026 - 04:54
As Sam Altman learned on Saturday night, this is an uneasy moment for anyone doing business with the U.S. government. At around 7 p.m., the OpenAI chief executive said he would take questions publicly on X to explain his company’s decision to accept the Pentagon contract that Anthropic had just declined.

Most of the questions focused on OpenAI’s willingness to be involved in mass surveillance and automated killing — the same categories Anthropic had refused to allow in its own negotiations with the Pentagon. Altman largely deferred to public institutions in his responses, saying it was not his place to determine national policy.

“I very deeply believe in the democratic process,” he wrote in one reply, “and that our elected leaders have the power, and that we all have to uphold the constitution.”

About an hour later, he admitted he was surprised by how many people appeared to disagree. “There is more open debate than I thought there would be,” Altman said, “about whether we should prefer a democratically elected government or unelected private companies to have more power. I guess this is something people disagree on.”

The exchange says a great deal about both OpenAI and the wider technology industry. In that public Q&A, Altman adopted a position common in the defence world, in which military leadership and corporate contractors are generally expected to defer to civilian authorities.

But what may be more revealing is that as OpenAI shifts from a wildly successful consumer startup to a part of the national security infrastructure, the company does not appear fully prepared for the responsibilities that come with that role.

Altman’s public question-and-answer session came during an especially tense moment for the company. The Pentagon had just blocked OpenAI rival Anthropic after the company insisted on contractual restrictions related to surveillance and automated weapons. Only a few hours later, OpenAI announced that it had secured the same contract Anthropic had turned down. Altman framed the agreement as a quick way to de-escalate the standoff — and it was no doubt financially attractive. But he seemed unready for the scale of the backlash it triggered from both users and employees.

OpenAI has worked with the U.S. government for years, but not in this way. When Altman testified before congressional committees in 2023, for example, he was still largely operating within the familiar social media-era playbook. He spoke in sweeping terms about the transformative potential of the company's technology, acknowledged the risks, and engaged with lawmakers. This combination was highly effective in energising investors while easing regulatory pressure.

Less than three years later, that style is no longer enough. AI is now obviously powerful, and the capital demands around it have become so intense that a more serious relationship with government is unavoidable. The striking part is how unprepared both the companies and the government appear to be for what that relationship now requires.

The most immediate conflict involves Anthropic itself and Defence Secretary Pete Hegseth’s announcement on Friday that he planned to designate the company as a supply-chain risk. That threat hangs over the broader conversation like a gun that has not yet been fired. As former Trump official Dean Ball wrote over the weekend, such a designation would cut Anthropic off from key hardware and hosting partners, effectively destroying the company. It would be an extraordinary step against an American business, and even if the move were later overturned in court, it would still inflict damage in the meantime and send shockwaves across the entire industry.

According to Ball’s account of events, Anthropic was performing an existing contract under terms agreed years earlier, only for the administration to demand changes to those terms later. That is far beyond what would typically be acceptable between private companies and sends a chilling signal to other vendors.

“Even if Secretary Hegseth backs down and narrows his extremely broad threat against Anthropic, great damage has been done,” Ball wrote. “Most corporations, political actors, and others will have to operate under the assumption that the logic of the tribe will now reign.”

That poses a direct threat to Anthropic and a serious challenge for OpenAI. The company is already under significant internal pressure from employees to maintain at least some meaningful red lines. At the same time, right-wing media outlets will be watching closely for any indication that OpenAI is anything less than a firm political ally. At the centre of it all is the Trump administration, which seems intent on making the situation as difficult as possible.

OpenAI did not originally intend to become a defence contractor. But because of the scale of its ambitions, it has effectively been pushed into the same arena as companies like Palantir and Anduril. Making progress during the Trump administration means choosing sides. There are no truly apolitical actors in this landscape, and earning support from some factions will inevitably mean alienating others. It remains unclear how high the cost will be for OpenAI, whether in lost business or departing employees. Still, it is hard to imagine the company emerging from this without taking some damage.

It may seem odd that this kind of pressure is mounting at a moment when more high-profile tech investors than ever are holding influential roles in Washington. Still, many of them appear perfectly comfortable with a tribal logic. Among venture capitalists aligned with Trump, Anthropic has long been viewed as having cultivated favour with the Biden administration in ways that could hurt the broader AI industry — a perception highlighted by Trump adviser David Sacks’ response to the current dispute. Now that the situation has flipped, few of those voices seem interested in defending the larger principle of free enterprise.

This is a difficult position for any company to find itself in. And while politically aligned players may benefit in the short term, they will be just as vulnerable when the political winds eventually change. There is a reason the defence sector was dominated for decades by slow-moving, heavily regulated giants like Raytheon and Lockheed Martin. By operating as industrial extensions of the Pentagon, they had the political cover needed to stay out of day-to-day politics, allowing them to focus on the technology without having to reset every time control of the White House changed.

The startup challengers of today may move faster than those older contractors ever did, but they are far less prepared for the long game.

Shivangi Yadav Shivangi Yadav reports on startups, technology policy, and other significant technology-focused developments in India for TechAmerica.Ai. She previously worked as a research intern at ORF.