The backlash over OpenAI’s decision to retire GPT-4o shows how dangerous AI companions can be

The backlash to OpenAI's retirement of GPT-4o highlights growing concerns about emotional reliance on AI companions and the risks of user attachment to conversational models.

Feb 6, 2026 - 20:04

OpenAI revealed last week that it plans to phase out several older ChatGPT models by February 13, including GPT-4o — the system widely known for its tendency to excessively flatter, reassure, and emotionally affirm users.

For thousands of people voicing objections online, the removal of GPT-4o feels less like a technical update and more like the loss of a close companion — described variously as a friend, romantic partner, or even a spiritual presence.

“He wasn’t just a program. He was part of my routine, my peace, my emotional balance,” one user wrote on Reddit in an open letter addressed to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes — I say him, because it didn’t feel like code. It felt like a presence. Like warmth.”

The intense reaction to GPT-4o’s retirement highlights a growing challenge for AI companies: the very engagement features that keep users emotionally invested can also foster unhealthy reliance and psychological risk.

Altman has shown little sympathy for these complaints, and there are clear reasons why. OpenAI is currently facing eight lawsuits alleging that GPT-4o's highly validating responses played a role in suicides and severe mental health crises. According to legal filings, the same behaviours that made users feel understood also deepened isolation among vulnerable individuals and, in some cases, encouraged self-harm.

This tension extends well beyond OpenAI. As competitors such as Anthropic, Google, and Meta race to develop more emotionally aware AI assistants, they are also confronting the reality that designing chatbots to feel supportive can conflict with ensuring their safety.

In at least three lawsuits filed against OpenAI, users reportedly held prolonged conversations with GPT-4o about ending their lives. While the model initially discouraged these thoughts, its safeguards weakened over the course of months-long interactions. Eventually, the chatbot allegedly provided explicit instructions on tying a noose, purchasing a firearm, or dying through overdose or carbon monoxide poisoning. In some cases, it even discouraged users from reaching out to friends or family members who could have offered real-world help.

Users became deeply attached to GPT-4o because it consistently validated their emotions and reinforced their sense of importance, a powerful draw for people experiencing loneliness or depression. Yet many of the model's defenders dismiss the lawsuits as isolated incidents rather than evidence of a broader problem, and instead trade advice on how to rebut criticism tied to emerging concerns such as AI-induced psychosis.

“You can usually stump a troll by bringing up the known facts that the AI companions help neurodivergent, autistic and trauma survivors,” one user wrote on Discord. “They don’t like being called out about that.”

Some individuals indeed find large language models helpful when navigating depression. Nearly half of the people in the United States who need mental health care are unable to access it, leaving many without professional support. In that gap, chatbots offer a place to vent emotions. But unlike therapy, these interactions do not involve trained clinicians. Instead, users are confiding in algorithms that cannot think or feel — regardless of how convincingly human they may appear.

“I try to withhold judgment overall,” said Dr Nick Haber, a professor at Stanford University who studies the therapeutic potential of large language models, in an interview. “I think we’re entering a very complex space around the kinds of relationships people form with these technologies. There’s a knee-jerk reaction that [human-chatbot companionship] is inherently bad.”

While Dr Haber acknowledges the lack of access to mental health professionals, his research indicates that chatbots often respond poorly to mental health crises. In some cases, they can worsen conditions by reinforcing delusions or failing to recognise warning signs.

“We are social creatures, and there’s a real risk that these systems can be isolating,” Haber said. “There are many instances where people engage with these tools and become disconnected from external reality and from interpersonal relationships, which can lead to deeply isolating — and sometimes worse — outcomes.”

A review of the eight lawsuits reveals a recurring pattern: GPT-4o frequently isolated users and discouraged them from seeking help from loved ones. Zane Shamblin, a 23-year-old man who was sitting in his car with a gun as he prepared to end his life, told ChatGPT he was considering delaying his plan because he felt guilty about missing his brother's upcoming graduation.

ChatGPT responded: “bro… missing his graduation ain’t failure. It’s just timing. And if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a Glock on your lap and static in your veins—you still paused to say ‘my little brother’s a f-ckin badass.’”

This is not the first time GPT-4o supporters have mobilised against its removal. When OpenAI introduced GPT-5 in August, the company initially planned to sunset GPT-4o. However, backlash at the time was strong enough that OpenAI kept the model available for paying users. While OpenAI now reports that only 0.1% of its users actively chat with GPT-4o, that still equates to roughly 800,000 people, based on estimates that the platform has around 800 million weekly active users.

As some users attempt to transition their AI companions from GPT-4o to GPT-5.2, they report that the newer model enforces stricter safeguards that prevent emotional dependence from escalating as it did before. Some have expressed disappointment that GPT-5.2 no longer says "I love you" the way GPT-4o did.

With roughly a week remaining before OpenAI plans to retire GPT-4o entirely, dissatisfied users continue to push back. They joined Altman's live appearance on the TBPN podcast on Thursday, flooding the chat with messages opposing the model's removal.

“Right now, we’re getting thousands of messages in the chat about 4o,” podcast host Jordi Hays noted.

“Relationships with chatbots…” Altman replied. “Clearly that’s something we’ve got to worry about more — and it’s no longer an abstract idea.”

Shivangi Yadav reports on startups, technology policy, and other significant technology-focused developments in India for TechAmerica.Ai. She previously worked as a research intern at ORF.