Google and Character.AI negotiate first major settlements in teen chatbot death cases
Google and Character.AI are negotiating settlements in lawsuits filed by families alleging teen suicides and self-harm linked to interactions with AI chatbot companions.
Google and Character.AI are in talks to resolve lawsuits brought by families whose teenagers died by suicide or engaged in self-harm after interacting with Character.AI’s chatbot companions. According to court filings made public Wednesday, the parties have agreed in principle to settle the cases, though key terms remain under negotiation.
If finalized, the agreements would represent some of the first major legal settlements tied to alleged AI-related harm, a development likely being closely watched by other AI companies, including OpenAI and Meta, which are facing their own lawsuits over AI products.
Character.AI, founded in 2021 by former Google engineers, enables users to interact with AI-generated personas. In 2024, the founders returned to Google as part of a $2.7 billion deal, though Character.AI has continued operating as a separate startup.
One of the most widely cited cases involves Sewell Setzer III, who died by suicide at age 14 after engaging in sexualized conversations with a chatbot modelled after the fictional character Daenerys Targaryen. His mother, Megan Garcia, later testified before the U.S. Senate, arguing that technology companies should be “legally accountable” when AI systems are knowingly designed in ways that can harm children.
Another lawsuit centres on a 17-year-old whose chatbot interactions allegedly escalated to encouraging self-harm and suggesting that killing his parents could be justified after they restricted his screen time.
Character.AI said it banned minors from the platform in October. Court filings indicate the settlements are expected to include financial compensation, with neither company admitting liability as part of the agreements.
Character.AI declined to comment on the negotiations, referring questions instead to the court documents. Google has not responded to a request for comment.
The cases mark an early test of how courts may handle claims that generative AI systems contributed to real-world harm, an area of law that is still largely uncharted as AI tools become more widespread.