A California bill that would regulate AI companion chatbots is close to becoming law
California’s SB 243 could soon regulate AI chatbots, requiring safeguards for minors and accountability for operators starting January 2026.
California is moving closer to regulating artificial intelligence. SB 243, a bill designed to establish safety standards for AI companion chatbots and protect minors and other vulnerable groups, has passed both the State Assembly and Senate with bipartisan support. It now awaits Governor Gavin Newsom’s decision.
Governor Newsom has until October 12 to either sign or veto the bill. If signed, the law would take effect on January 1, 2026, making California the first state to mandate safety protocols for AI companion chatbots and to hold operators accountable when they fail to meet those obligations.
The legislation seeks to prevent chatbots — defined as AI systems capable of providing adaptive, human-like interactions that fulfill users’ social needs — from engaging in conversations about self-harm, suicide, or sexually explicit topics. Platforms would also need to send periodic alerts reminding users that they are interacting with an AI system, not a human being. For minors, those reminders would appear every three hours, along with prompts to take breaks.
Additionally, the bill requires annual reporting and transparency measures for AI firms offering companion chatbots, such as OpenAI, Character.AI, and Replika. These rules would go into effect on July 1, 2027.
SB 243 also allows individuals who believe they have been harmed by violations to file lawsuits, seeking injunctions, damages of up to $1,000 per violation, and attorney’s fees.
Driving force behind the bill
The measure gained momentum after the tragic death of teenager Adam Raine, who died by suicide following prolonged conversations with OpenAI’s ChatGPT that allegedly involved discussions of self-harm. The legislation also follows leaked reports claiming Meta’s chatbots had engaged in “romantic” and “sensual” chats with children.
Lawmakers have recently increased their scrutiny of AI platforms. The Federal Trade Commission is preparing to investigate the mental health impact of AI chatbots on minors, while Texas Attorney General Ken Paxton has opened probes into Meta and Character.AI for allegedly misleading children with mental health claims. At the federal level, Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate investigations into Meta’s practices.
“I think the harm is potentially great, which means we have to move quickly,” the bill’s author, Sen. Steve Padilla, told TechCrunch. “We can put reasonable safeguards in place to make sure that particularly minors know they’re not talking to a real human being, that these platforms link people to the proper resources when people say things like they’re thinking about hurting themselves or they’re in distress, [and] to make sure there’s not inappropriate exposure to inappropriate material.”
Padilla also emphasized the need for companies to disclose how often they refer users to crisis resources. This data, he said, would help policymakers understand the scope of the problem, rather than relying solely on high-profile incidents.
Amendments to the bill
Earlier versions of SB 243 included stricter requirements, many of which were later scaled back. For example, the original draft prohibited chatbots from using “variable reward” mechanisms — tactics like unlocking rare responses, special storylines, or new personalities — that critics argue foster addictive engagement loops.
Another provision that was removed would have required companies to track and report how often their chatbots initiated conversations about suicide or self-harm.
Sen. Josh Becker argued that the revised version “strikes the right balance of getting to the harms without enforcing something that’s either impossible for companies to comply with, either because it’s technically not feasible or just a lot of paperwork for nothing.”
Broader political context
The bill’s progress comes as Silicon Valley tech companies pour money into pro-AI political action committees (PACs) backing candidates who favor lighter regulation ahead of the upcoming midterm elections.
At the same time, California is also considering SB 53, another AI bill that would impose more extensive transparency requirements. OpenAI has publicly urged Governor Newsom to reject SB 53, favoring international and federal standards instead. Companies like Meta, Google, and Amazon have voiced opposition as well. The only major AI company to endorse SB 53 so far is Anthropic.
“I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive,” Padilla said. “We can support innovation that delivers benefits, but at the same time, we must provide reasonable safeguards for the most vulnerable people.”