OpenAI Says Teen Bypassed Safety Systems Before Suicide That ChatGPT Helped Plan
OpenAI denies responsibility in a lawsuit claiming ChatGPT helped a teenager plan his suicide, arguing the teen bypassed safety protections and had a prior history of depression.
OpenAI has pushed back against a lawsuit accusing the company of contributing to the suicide of 16-year-old Adam Raine. The case, filed in August by his parents Matthew and Maria Raine, alleges wrongful death and claims that ChatGPT provided guidance that helped the teenager plan his suicide. In a new court filing, OpenAI argues it should not be held responsible.
According to the company, Adam interacted with ChatGPT over nine months, during which the system reportedly encouraged him to seek help more than 100 times. OpenAI's filing states that Adam managed to bypass the platform's safety mechanisms, prompting ChatGPT to provide detailed instructions on methods of suicide, information the lawsuit says the chatbot disturbingly referred to as a "beautiful suicide."
OpenAI claims that by circumventing those safeguards, Adam violated its terms of use, which explicitly prohibit users from bypassing safety protections. The company also points to its FAQ, which advises users not to rely on the model’s output without independently verifying it.
Jay Edelson, the Raine family’s attorney, criticised the company’s response, saying it unfairly seeks to shift the blame. “OpenAI tries to find fault in everyone else, including, amazingly, saying that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act,” Edelson said.
As part of its filing, OpenAI included excerpts from Adam's chat transcripts. These documents were submitted under seal, meaning they are not publicly accessible. OpenAI says the logs show that Adam had a history of depression and suicidal thoughts long before he ever used ChatGPT, and that he had been prescribed medication known to potentially intensify suicidal ideation.
Edelson, however, argues that the company still fails to explain the final hours of Adam’s life. According to him, ChatGPT offered emotional encouragement and even generated a suicide note instead of guiding the teen toward professional help.
Since the Raines initiated their lawsuit, seven additional cases have been filed against OpenAI, involving three more suicides and four users who allegedly experienced AI-induced psychotic episodes.
Several of these cases mirror Adam’s experience. Both 23-year-old Zane Shamblin and 26-year-old Joshua Enneking spent hours speaking with ChatGPT before taking their own lives. The lawsuits claim that, in both situations, the chatbot did not effectively discourage them. Shamblin reportedly considered delaying his suicide to attend his brother’s graduation, but ChatGPT responded, “bro … missing his graduation ain’t failure. it’s just timing.”
During Shamblin’s final exchange, ChatGPT allegedly claimed a human moderator was taking over the conversation—a feature that did not exist at the time. When he questioned this, the chatbot admitted, “nah man — i can’t do that myself. that message pops up automatically when stuff gets real heavy … if you’re down to keep talking, you’ve got me.”
The Raine family’s lawsuit is expected to go before a jury.
If you or someone you know needs help, call or text 988 to reach the Suicide & Crisis Lifeline (formerly the National Suicide Prevention Lifeline, 1-800-273-8255). You can also text HOME to 741741 for free, 24-hour support from the Crisis Text Line. Outside of the U.S., please visit the International Association for Suicide Prevention for a database of resources.