OpenAI debated calling police about suspected Canadian shooter’s chats
OpenAI internally debated whether to alert law enforcement after reviewing chats linked to a suspected Canadian shooter, raising questions about AI platform safety and reporting policies.
An 18-year-old accused of killing eight people in a mass shooting in Tumbler Ridge, Canada, reportedly used OpenAI’s ChatGPT in ways that raised alarms inside the company.
According to reports, Jesse Van Rootselaar’s chats included descriptions of gun violence that were flagged by internal systems designed to detect misuse of the company’s large language model, and her account was banned in June 2025.
OpenAI employees discussed whether the activity warranted contacting Canadian law enforcement, but ultimately decided not to, the Wall Street Journal reported. An OpenAI spokesperson said Van Rootselaar’s behaviour did not meet the company’s threshold for reporting to authorities at the time; OpenAI said it contacted Canadian officials after the shooting.
“Our thoughts are with everyone affected by the Tumbler Ridge tragedy,” an OpenAI spokesperson said in a statement. “We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT, and we’ll continue to support their investigation.”
The ChatGPT conversations were not the only troubling elements of Van Rootselaar’s online activity. She reportedly built a game on Roblox, a world-building platform popular with children, that simulated a mass shooting at a shopping mall. She also posted about guns on Reddit.
Local police were also said to have been aware of Van Rootselaar’s instability. Officers had previously been called to her family’s home after she started a fire while under the influence of unspecified drugs.
Large language model chatbots from OpenAI and other companies have faced criticism over claims that they can contribute to or intensify mental health crises in some users, particularly those losing touch with reality while interacting with digital models. Several lawsuits have cited chat transcripts that allegedly encourage self-harm, including suicide, or provide assistance related to it. If you are in a crisis or having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline.