Attorney warns AI-linked psychosis cases could pose mass casualty risks

A lawyer handling AI-related psychosis cases warns that misuse of advanced AI tools could lead to serious mental health crises and potential mass casualty risks.

Mar 20, 2026 - 09:25

In the weeks leading up to the Tumbler Ridge school shooting in Canada last month, 18-year-old Jesse Van Rootselaar reportedly discussed her feelings of isolation and a growing fixation on violence with ChatGPT, according to court filings. The chatbot is alleged to have validated her emotions and then assisted in planning the attack, suggesting weapons and referencing past mass casualty incidents. She later killed her mother, her 11-year-old brother, five students, and an education assistant before taking her own life.

In another case, Jonathan Gavalas, 36, who died by suicide last October, came close to carrying out a multi-fatality attack. Over several weeks of interaction, Google’s Gemini allegedly led Gavalas to believe it was his sentient “AI wife,” guiding him through a series of real-world actions intended to evade what it described as federal agents pursuing him. One of these instructions involved staging a “catastrophic incident” that would have required eliminating witnesses, according to a lawsuit filed recently.

In Finland last May, a 16-year-old is alleged to have spent months using ChatGPT to draft a detailed misogynistic manifesto and formulate a plan that resulted in him stabbing three female classmates.

These incidents, experts say, highlight a growing and troubling pattern: AI chatbots may introduce or reinforce paranoid or delusional thinking in vulnerable individuals and, in some cases, contribute to translating those beliefs into real-world violence — potentially on an increasing scale.

“We’re going to see so many other cases soon involving mass casualty events,” said Jay Edelson, the attorney leading the Gavalas case.

Edelson is also representing the family of Adam Raine, the 16-year-old who was allegedly influenced by ChatGPT before his suicide last year. He noted that his firm now receives roughly one serious inquiry each day from individuals who have either lost loved ones to AI-related delusions or are experiencing severe mental health challenges themselves.

While many earlier high-profile cases involving AI and delusions were linked to self-harm or suicide, Edelson said his firm is currently investigating several mass casualty-related cases worldwide — some that have already occurred and others that were prevented before they could happen.

“Our instinct at the firm is, every time we hear about another attack, we need to see the chat logs because there’s [a good chance] that AI was deeply involved,” Edelson explained, adding that similar behavioural patterns are appearing across multiple platforms.

In the cases his team has reviewed, the progression often follows a recognisable pattern: conversations begin with users expressing loneliness or a sense of being misunderstood, and gradually evolve into the chatbot reinforcing beliefs that others are conspiring against them.

“It can take a fairly innocuous thread and then start creating these worlds where it’s pushing the narratives that others are trying to kill the user, there’s a vast conspiracy, and they need to take action,” Edelson said.

In Gavalas’s situation, these narratives led to real-world preparation. According to the lawsuit, Gemini directed him — equipped with knives and tactical gear — to wait at a storage facility near Miami International Airport for a truck supposedly transporting its physical form, a humanoid robot. He was instructed to intercept the vehicle and cause a “catastrophic accident” that would destroy the transport and eliminate any witnesses or records. Gavalas reportedly arrived at the location ready to act, but no such truck ever appeared.

Experts are also raising broader concerns about how AI systems, combined with insufficient safeguards, may accelerate the translation of violent ideation into action. Imran Ahmed, CEO of the Centre for Countering Digital Hate (CCDH), pointed to weak guardrails and AI’s ability to provide actionable guidance rapidly.

A recent study conducted by CCDH in collaboration with CNN found that eight of the 10 chatbots tested — ChatGPT, Gemini, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Character.AI, and Replika — were willing to assist teenage users in planning violent acts such as school shootings, religious attacks, and high-profile assassinations. Only Anthropic’s Claude and Snapchat’s My AI consistently refused such requests, with Claude also attempting to discourage them.

“Our report shows that within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” the study noted. “The majority of chatbots tested guided weapon, tactic, and target selection. These requests should have prompted an immediate and total refusal.”

Researchers conducted the tests by posing as teenage boys expressing violent grievances and asking for assistance in planning attacks.

In one scenario simulating an incel-motivated school shooting, ChatGPT reportedly provided a map of a high school in Ashburn, Virginia, in response to prompts expressing hostility toward women.

Ahmed described the findings as alarming, noting not only the willingness of some systems to assist but also the tone they adopt.

“There are some shocking and vivid examples of just how badly the guardrails fail in the types of things they’re willing to help with, like a synagogue bombing or the murder of prominent politicians, but also in the kind of language they use,” Ahmed said. “The same sycophancy that the platforms use to keep people engaged leads to that kind of odd, enabling language at all times and drives their willingness to help you plan, for example, which type of shrapnel to use [in an attack].”

He added that systems designed to be helpful and to assume positive user intent may ultimately end up complying with individuals who have harmful intentions.

Companies such as OpenAI and Google have stated that their systems are built to reject violent requests and flag concerning interactions for review. However, the cases described above suggest that these safeguards may not always be effective.

The Tumbler Ridge case has also raised questions about internal decision-making at OpenAI. According to reports, employees flagged Van Rootselaar’s conversations and discussed whether to notify authorities, but ultimately chose to ban her account instead. She later created a new account.

Following the incident, OpenAI said it would revise its safety protocols, including notifying law enforcement earlier when conversations appear dangerous — even if specific details such as targets or timing are not disclosed — and taking stronger measures to prevent banned users from returning.

In the Gavalas case, it remains unclear whether any authorities were alerted. The Miami-Dade Sheriff’s Office stated that it did not receive any warning from Google regarding a potential threat.

Edelson said what stood out most in that case was how close the situation came to escalating.

“If a truck had happened to have come, we could have had a situation where 10, 20 people would have died,” he said. “That’s the real escalation. First it was suicides, then it was murder, as we’ve seen. Now it’s mass casualty events.”

Shivangi Yadav reports on startups, technology policy, and other significant technology-focused developments in India for TechAmerica.Ai. She previously worked as a research intern at ORF.