Trump AI policy aims to override state laws, shifts child safety responsibility to parents

Trump’s proposed AI framework seeks to limit state-level regulations and places greater responsibility on parents for protecting children online.

Mar 23, 2026 - 08:35

The Trump administration on Friday introduced a legislative framework to establish a unified national policy on artificial intelligence across the United States. The proposal seeks to centralise authority at the federal level by overriding state-level AI regulations, potentially weakening recent efforts by individual states to govern how the technology is developed and used.

According to a White House statement, the success of this approach depends on nationwide consistency. The statement emphasised that a fragmented system of state laws could hinder innovation and limit the country's ability to remain competitive in the global AI landscape.

The framework outlines seven primary objectives focused on accelerating innovation and expanding the deployment of AI technologies. It promotes a centralised regulatory structure that would take precedence over stricter state-imposed rules. At the same time, it places considerable responsibility on parents regarding issues like child safety, while offering only general, nonbinding expectations for how platforms should address potential risks.

For instance, the proposal suggests that Congress should require AI companies to introduce measures designed to reduce risks such as sexual exploitation and harm to minors. However, it does not define specific standards or enforcement mechanisms that companies must follow.

This framework follows an executive order signed by Trump three months earlier, which directed federal agencies to challenge state-level AI regulations. The order instructed the Commerce Department to identify laws deemed overly restrictive within 90 days, though that list has not yet been released. It also called for collaboration with Congress to create a nationwide regulatory approach, reflecting a broader strategy that prioritises growth and technological advancement over strict oversight.

The proposed framework advocates for what it describes as a "minimally burdensome national standard," aligning with the administration's broader effort to remove regulatory obstacles and accelerate AI adoption across industries. This approach is often associated with advocates who favour rapid technological expansion with limited regulatory interference, including White House AI advisor David Sacks.

While the framework acknowledges the role of states, it limits their authority to areas such as general consumer protection laws, zoning, and the use of AI within state government operations. It explicitly opposes state-level regulation of AI development, arguing that such matters are inherently interstate and closely tied to national security and foreign policy concerns.

Additionally, the proposal aims to shield AI developers from liability arising from third-party misuse of their technologies, preventing states from penalising companies for unlawful actions committed by others using their systems.

Notably absent from the framework are detailed provisions addressing liability, independent oversight, or enforcement strategies for potential harms associated with AI. Critics argue that this approach concentrates decision-making power in Washington while reducing states' ability to act as early responders to emerging risks.

Several states have already introduced legislation targeting AI-related concerns. For example, New York's RAISE Act and California's SB-53 focus on ensuring that large AI companies implement and maintain transparent safety protocols. Critics of the federal framework argue that such state initiatives are essential for addressing rapidly evolving risks.

Brendan Steinhauser, CEO of The Alliance for Secure AI, criticised the proposal, stating that it prioritises the interests of major technology companies while limiting accountability for potential harms. He argued that preventing states from regulating AI removes an important layer of oversight.

However, many within the technology sector have welcomed the framework, viewing it as a way to simplify compliance and encourage faster innovation. Teresa Carlson, president of the General Catalyst Institute, said that a unified national standard would allow startups to scale more efficiently without having to navigate conflicting state laws.

Child safety, copyright, and free speech

The framework arrives at a time when child safety has become a central issue in discussions around AI. While some states have pushed for stricter regulations that place responsibility on technology companies, the federal proposal emphasises parental control instead. It suggests that parents should be equipped with tools to manage their children's digital experiences, including privacy settings and device controls.

The administration also expresses the view that AI platforms should adopt measures to reduce risks such as child exploitation and self-harm. However, the language used in the proposal includes qualifiers such as "commercially reasonable" and stops short of establishing clear or enforceable requirements.

On copyright, the framework attempts to balance the interests of content creators with the needs of AI developers. It references the concept of fair use, aligning with arguments made by AI companies facing legal challenges over the use of copyrighted material in training datasets.

The framework's primary regulatory focus appears to centre on protecting what it describes as AI systems' ability to pursue truth and accuracy without interference. In particular, it emphasises limiting government involvement in content moderation rather than imposing stricter controls on platforms themselves.

It proposes that Congress should prevent government agencies from pressuring technology companies to alter or restrict content based on political or ideological considerations. It also suggests creating legal pathways for individuals to challenge government actions that could be seen as censorship of AI-generated content.

This stance aligns with earlier initiatives by the administration aimed at addressing concerns over perceived bias in AI systems. The framework reinforces the importance of protecting lawful political expression and dissent within AI platforms.

However, questions remain about how to distinguish between censorship and standard moderation practices, particularly in areas such as misinformation, election interference, and public safety. Critics argue that the lack of clarity could make coordination between regulators and technology companies more difficult.

Samir Jain, vice president of policy at the Center for Democracy & Technology, noted an apparent contradiction in the framework's position. He pointed out that while it discourages government pressure on AI companies, earlier actions by the administration have attempted to influence how AI systems handle content.

Shivangi Yadav reports on startups, technology policy, and other significant technology-focused developments in India for TechAmerica.Ai. She previously worked as a research intern at ORF.