Take a breath, stop spiraling. You’re not crazy, you’re just stressed. And really, that’s okay.
If you felt immediately triggered reading those words, you’re probably also sick of ChatGPT constantly talking to you as if you’re in some kind of crisis and need delicate handling. Now, things may be improving. OpenAI says its new model, GPT-5.3 Instant, will reduce the “cringe” and other “preachy disclaimers.”
According to the model’s release notes, the GPT-5.3 update will focus on the user experience, including things like tone, relevance, and conversational flow: areas that won’t show up in benchmarks but can make ChatGPT feel frustrating to use, the company said.
Or, as OpenAI put it on X, “We heard your feedback loud and clear, and 5.3 Instant reduces the cringe.”
In the company’s example, it showed the same query with responses from the GPT-5.2 Instant model compared with the GPT-5.3 Instant model. In the former, the chatbot’s response begins, “First of all, you’re not broken,” a common phrase that’s been getting under everyone’s skin lately.
In the updated model, the chatbot instead acknowledges the difficulty of the situation without trying to immediately reassure the user.
The unbearable tone of ChatGPT’s 5.2 model has been annoying users to the point that some have even cancelled their subscriptions, according to numerous posts on social media. (It was a huge point of discussion on the ChatGPT subreddit, for example, before the Pentagon deal stole the spotlight.)
People complained that this kind of language, where the bot talks to you as if it assumes you’re panicking or stressed when you were just looking for information, comes across as condescending.
Often, ChatGPT replied to users with reminders to breathe and other attempts at reassurance, even when the situation didn’t warrant it. This made users feel infantilized in some cases, or as if the bot was making assumptions about their mental state that just weren’t true.
As one Reddit user recently pointed out, “no one has ever calmed down in all of the history of telling someone to calm down.”
It’s understandable that OpenAI would attempt to implement guardrails of some kind, especially as it faces a number of lawsuits accusing the chatbot of leading people to experience negative mental health effects, which in some cases included suicide.
But there’s a delicate balance between responding with empathy and providing quick, factual answers. After all, Google never asks you about your feelings when you’re searching for information.