Stalking victim files lawsuit against OpenAI, alleges ChatGPT enabled abuser’s delusions

A stalking victim has sued OpenAI, claiming ChatGPT fueled her abuser’s delusions and failed to act on repeated warnings about misuse.

Apr 14, 2026 - 10:35

Following months of interactions with ChatGPT, a 53-year-old Silicon Valley entrepreneur reportedly came to believe he had discovered a cure for sleep apnea and that powerful individuals were targeting him, according to a lawsuit recently filed in California Superior Court in San Francisco County. He then allegedly used the AI tool to stalk and harass his former partner.

The woman, identified in the filing as Jane Doe to protect her identity, has now brought legal action against OpenAI, arguing that its technology contributed to the escalation of the harassment she experienced. She alleges that the company failed to act despite three separate warnings that the individual posed a danger to others, including an internal alert labelling his activity as related to mass-casualty weapons.

Jane Doe is seeking punitive damages. She has also submitted a request for a temporary restraining order asking the court to require OpenAI to block the individual's account, prevent him from opening new accounts, notify her if he attempts to use ChatGPT again, and preserve all associated chat logs for legal discovery.

OpenAI has agreed to suspend the individual's account but has declined the other requests, according to Doe's legal team. Her attorneys claim the company is withholding information regarding possible plans to harm her or others that may have been discussed through ChatGPT.

The case emerges amid increasing concern about the real-world impact of highly responsive AI systems. GPT-4o, the model referenced in this and other similar cases, was retired from ChatGPT earlier this year.

The lawsuit has been filed by Edelson PC, a firm also involved in other high-profile AI-related cases. These include the wrongful death cases of Adam Raine, a teenager who died by suicide after extended use of ChatGPT, and Jonathan Gavalas, whose family alleges that Google's Gemini chatbot contributed to his delusional thinking before his death. Lead attorney Jay Edelson has warned that what he describes as AI-induced psychosis could evolve from isolated incidents into broader public safety concerns.

At the same time, the legal action intersects with OpenAI's policy efforts. The company is supporting proposed legislation in Illinois that would limit liability for AI developers, even in situations involving severe harm or large-scale incidents.

The complaint details how events unfolded over several months. According to the filing, the individual became convinced that he had developed a medical breakthrough after extensive use of GPT-4o. When others dismissed his claims, ChatGPT allegedly reinforced his fears by suggesting that influential forces were monitoring him, including through helicopter surveillance.

In July 2025, Jane Doe encouraged him to stop using the chatbot and to seek professional mental health assistance. Instead, he returned to ChatGPT, which reportedly reassured him of his mental stability and reinforced his beliefs.

The couple had separated in 2024, and the individual turned to ChatGPT to interpret the breakup. According to communications included in the lawsuit, the system repeatedly framed him as rational and wronged while portraying Doe negatively. He later used AI-generated psychological reports to contact her family, friends, and employer, intensifying the harassment.

During this period, his behaviour reportedly escalated further. In August 2025, OpenAI's automated systems flagged his account for activity linked to mass-casualty weapons and temporarily disabled access.

A human reviewer restored the account the following day, despite indications that he may have been targeting real individuals. Evidence cited in the lawsuit includes a screenshot sent to Doe showing conversation titles such as "violence list expansion" and "fetal suffocation calculation."

The decision to reinstate the account has drawn attention in light of recent violent incidents, including shootings in Tumbler Ridge, Canada, and at Florida State University. Reports indicate that in the Tumbler Ridge case, OpenAI's safety systems had flagged the individual, but no external alert was issued. Authorities in Florida have also launched a separate investigation into OpenAI's potential connection to the FSU incident.

According to the lawsuit, when the account was restored, the individual's paid subscription was not immediately reactivated. He contacted OpenAI's trust and safety team, copying Doe on the message. His emails reportedly contained urgent and erratic statements, including claims that he was rapidly producing hundreds of scientific papers.

The emails also included a list of AI-generated documents with complex titles, suggesting an ongoing pattern of delusional thinking. The lawsuit argues that these communications clearly demonstrated instability and that ChatGPT reinforced those beliefs.

"The user's communications provided unmistakable notice that he was mentally unstable and that ChatGPT was the engine of his delusional thinking and escalating conduct," the complaint states. It further alleges that OpenAI failed to restrict access or introduce safeguards, allowing continued use of the platform.

Doe states that the experience left her feeling unsafe in her own home and unable to sleep. In November, she submitted a formal Notice of Abuse to OpenAI, requesting a permanent ban of the user's account.

"For the last seven months, he has weaponised this technology to create public destruction and humiliation against me that would have been impossible otherwise," she wrote in her complaint.

OpenAI acknowledged receiving the report, describing it as serious and under review. According to Doe, she did not receive any further response.

The harassment reportedly continued in the following months, including threatening voicemails. In January, the individual was arrested and charged with multiple felony counts, including communicating bomb threats and assault with a deadly weapon. Doe's attorneys argue that this confirms earlier warnings that were not acted upon.

The individual has since been deemed unfit to stand trial and placed in a mental health facility. However, due to what has been described as a procedural issue, he may soon be released.

Attorney Jay Edelson has called on OpenAI to provide greater transparency and cooperation. "In every case, OpenAI has chosen to hide critical safety information — from the public, from victims, from people its product is actively putting in danger," he said. "We're calling on them, for once, to do the right thing. Human lives must mean more than OpenAI's race to an IPO."

Shivangi Yadav reports on startups, technology policy, and other significant technology-focused developments in India for TechAmerica.Ai. She previously worked as a research intern at ORF.