A Practical Roadmap for Artificial Intelligence — If the World Is Ready to Listen

Experts outline a practical roadmap for the development of artificial intelligence, focusing on safety, transparency, governance, and responsible innovation.

Mar 8, 2026 - 18:35
As Washington’s rupture with Anthropic laid bare the absence of clear, coherent rules for governing artificial intelligence, a bipartisan group of thinkers has stepped forward with something the federal government has so far failed to offer: a concrete framework for what responsible AI development should entail.

The Pro-Human Declaration was completed before last week’s confrontation between the Pentagon and Anthropic, but no one involved missed the significance of how closely the two events converged.

“There’s something quite remarkable that has happened in America just in the last four months,” said Max Tegmark, the MIT physicist and AI researcher who helped organise the initiative, in a conversation with this editor. “Polling suddenly [is showing] that 95% of all Americans oppose an unregulated race to superintelligence.”

The newly released declaration, signed by hundreds of experts, former government officials, and public figures, begins with a blunt assessment that humanity has reached a pivotal crossroads. One direction, which the declaration describes as “the race to replace,” would see humans displaced first in the workforce and then in decision-making roles, while power increasingly concentrates in unaccountable institutions and the machines they control. The alternative path points toward AI that dramatically enhances human capability.

That more hopeful outcome, the declaration argues, rests on five core pillars: keeping humans in control, preventing excessive concentrations of power, protecting the human experience, preserving individual liberty, and ensuring that AI companies can be held legally accountable. Among the stronger provisions in the document are a complete ban on developing superintelligence until there is scientific consensus that it can be built safely and with real democratic approval; required off-switches for powerful AI systems; and a prohibition on architectures capable of self-replication, autonomous self-improvement, or resistance to shutdown.

The declaration arrives at a time that makes its urgency easier to grasp. On the final Friday in February, Defence Secretary Pete Hegseth designated Anthropic — whose AI systems already operate on classified military platforms — a “supply chain risk,” a label usually reserved for companies tied to China, after the company declined to grant the Pentagon unrestricted access to its technology. Within hours, OpenAI struck its own agreement with the Defence Department, one that legal experts say may prove difficult to enforce in any meaningful way. Taken together, the events underscored just how costly Congress’s failure to act on AI has become.

As Dean Ball, a senior fellow at the Foundation for American Innovation, later told The New York Times, “This is not just some dispute over a contract. This is the first conversation we have had as a country about control over AI systems.”

When we spoke, Tegmark turned to an analogy that most people can immediately understand. “You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe,” he said, “because the FDA won’t allow them to release anything until it’s safe enough.”

Political turf battles in Washington rarely generate the public pressure needed to change the law. Tegmark believes child safety is the most likely issue to break the current deadlock. In fact, the declaration specifically calls for mandatory testing before AI products are deployed — particularly chatbots and companion apps aimed at younger users — covering risks such as increased suicidal thoughts, worsening mental health conditions, and emotional manipulation.

“If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that,” Tegmark said. “We already have laws. It’s illegal. So why is it different if a machine does it?”

He argues that once the idea of pre-release testing is accepted for products aimed at children, the list of requirements will almost certainly expand. “People will come along and be like — let’s add a few other requirements. Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence can’t overthrow the U.S. government.”

It is notable that former Trump adviser Steve Bannon and Susan Rice, who served as National Security Advisor under President Obama, have signed the same document — alongside former Joint Chiefs Chairman Mike Mullen and progressive faith leaders.

“What they agree on, of course, is that they’re all human,” Tegmark said. “If it’s going to come down to whether we want a future for humans or a future for machines, of course, they’re going to be on the same side.”

Shivangi Yadav reports on startups, technology policy, and other significant technology-focused developments in India for TechAmerica.Ai. She previously worked as a research intern at ORF.