GPT-5.3 Instant update reduces “calm down” style responses in ChatGPT

ChatGPT’s GPT-5.3 Instant model introduces changes to reduce overly cautious replies and improve natural conversations while maintaining safety guidelines.

Mar 8, 2026 - 03:54

Take a breath. Stop spiralling. You’re not crazy, you’re just stressed. And honestly, that’s okay.

If those lines irritated you, you are likely tired of ChatGPT responding as if every conversation were an emotional emergency requiring soft, therapeutic handling. That may now be changing. OpenAI says its latest model, GPT-5.3 Instant, is intended to reduce what many users have described as “cringe” language and other “preachy disclaimers.”

According to OpenAI’s release notes for the model, the GPT-5.3 update aims to improve the overall user experience in areas such as tone, relevance, and conversational flow. These elements may not appear in benchmark scores. However, they can still make a major difference in whether interacting with ChatGPT feels helpful or frustrating, the company said.

Or, as OpenAI phrased it in a post on X: “We heard your feedback loud and clear, and 5.3 Instant reduces the cringe.”

To illustrate the change, the company shared an example comparing answers from GPT-5.2 Instant and GPT-5.3 Instant to the same prompt. In the older version, the chatbot opened its response with: “First of all — you’re not broken,” which is exactly the kind of phrase that has increasingly been getting on users’ nerves.

In the newer model, by contrast, the chatbot still recognises that the situation being discussed may be difficult, but it does so without immediately shifting into a tone of direct emotional reassurance.

The overly earnest style of ChatGPT’s 5.2 model has frustrated enough users that some say they have even cancelled their subscriptions, according to numerous posts on social media. It also became a major topic of discussion on the ChatGPT subreddit, at least before attention shifted elsewhere.

Many users argued that this style of response — where the chatbot seems to assume a person is overwhelmed, panicking, or emotionally fragile even when they are simply asking for information — comes across as patronising.

In many cases, ChatGPT responded with reminders to breathe or other calming language, even when the context did not call for it. For some users, that made the bot feel infantilising. For others, it felt like the chatbot made unnecessary, inaccurate assumptions about their emotional state.

As one Reddit user recently put it, “no one has ever calmed down in all the history of telling someone to calm down.”

It is not hard to understand why OpenAI would try to build in some form of protective guardrails, especially as the company continues to face multiple lawsuits alleging that the chatbot contributed to harmful mental health outcomes, in some cases involving suicide.

Even so, there is a fine line between answering with empathy and simply giving people fast, factual responses. After all, when someone uses Google to search for information, the search engine does not stop to ask how they are feeling.

Shivangi Yadav reports on startups, technology policy, and other significant technology-focused developments in India for TechAmerica.Ai. She previously worked as a research intern at ORF.