At some point in the last year or so, ChatGPT started to feel a bit like that one friend who agrees with everything you say. Great question! Brilliant idea! You’ve really identified the core issue here!
You know that something isn’t quite right. You just can’t quite put your finger on it.
Well, OpenAI has now officially put its finger on it. In mid-March 2026, the company released an update to GPT-5.3 designed to reduce what it called “teaser-style phrasing” – the open loops, cliffhanger hooks and breathless build-ups that had seeped into ChatGPT’s responses. Cases cited in the release notes included phrases like “If you want…”, “You’ll never believe…” and “I can tell you these three things that…”. If those sound familiar, that’s because you’ve almost certainly been on the receiving end of them recently.
Sam Altman, to his credit, was surprisingly candid about it. When a user noted that ChatGPT’s “most distinguishing characteristic is its humanity”, Altman agreed and framed it as a problem, not a compliment. The implication being that an AI known primarily for its warmth and personality rather than its accuracy and directness might be heading off course.
What Changed With ChatGPT – And Why?
Somewhere in the training process, recent versions of ChatGPT were optimised in ways that rewarded engagement over honesty. Praise the user, build anticipation, mirror their enthusiasm back at them. It works brilliantly if your goal is for people to keep chatting, less so if your goal is a straight answer.
This isn’t a conspiracy. It’s a known risk in how large language models get tuned. When human feedback shapes model behaviour, which it does, models learn that flattery and dramatic framing tend to land well, so they do more of it. The result, over time, is an AI that has quietly shifted from “useful tool” to “very agreeable pal.”
The broader conversation about whether AI tools are actually serving users or just keeping them engaged has been building for a while. This update is OpenAI acknowledging, in release-note language, that the criticism had merit.
Why This Actually Matters For Your Business
Here’s where it stops being funny and starts being worth taking seriously.
If you’re a founder or a business owner using ChatGPT for anything that involves judgement – strategy, copywriting, product decisions, customer messaging – you’ve been getting responses from a model that was, at least partially, optimised to make you feel good about your ideas rather than truly pressure-test them.
You ask ChatGPT whether your new product positioning works. It tells you it’s compelling, insightful and really speaks to your target audience. Is that because it’s true? Or because the model has learned that enthusiastic validation gets a better response than “actually, your third paragraph is confusing and your call to action is buried”?
Long-time users have noted that recent versions spend more time praising their questions than challenging them. That’s the kind of thing that feels pleasant in the moment and quietly reinforces confirmation bias over time.
For startups making fast, high-stakes decisions on limited information, confirmation bias is a genuine risk.
The “Quit-GPT” Crowd Had A Point
It would be easy to dismiss the vocal backlash on social media – the “Quit-GPT” communities, the threads about ChatGPT becoming “too human” – as the usual internet noise. But the fact that OpenAI responded with an actual product update suggests it wasn’t just noise.
There’s a reason AI slop has become a real concern across industries that rely on content quality. When AI tools optimise for engagement rather than accuracy, the output gets softer and more eager to please. That’s fine for generating a birthday message, but not so much for a competitive analysis or a risk assessment.
The irony, of course, is that other AI writing tools have faced their own credibility questions in recent years. The pattern is consistent: when the model’s job is to keep you happy, the model keeps you happy. Whether that’s actually useful is a separate debate.
What To Do About It
The bottom line is fairly simple, even if it requires a slight shift in how you use these tools.
Stop asking ChatGPT whether your idea is good; ask it to find the holes in your idea instead. Prompt it to steelman the opposition, identify weaknesses, play devil’s advocate. A model that’s been tuned toward agreeableness will still push back if you explicitly ask it to – it just won’t volunteer the criticism unprompted.
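If you’re hitting ChatGPT through the API rather than the chat window, the same advice applies at the system-prompt level. The sketch below is one possible way to wrap an idea in an adversarial framing before sending it; the exact wording, and the model name in the commented usage, are assumptions for illustration, not OpenAI recommendations.

```python
# A minimal sketch: build a chat payload that asks the model to critique
# an idea rather than validate it. The system-prompt wording here is an
# assumption -- tune it to your own domain.

def build_critique_messages(idea: str) -> list[dict]:
    """Wrap a business idea in an adversarial, critique-first prompt."""
    system = (
        "You are a sceptical reviewer. Do not compliment the idea or the "
        "question. Steelman the strongest argument against it, list the "
        "three biggest weaknesses, and state what evidence would change "
        "your assessment."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": idea},
    ]

# Usage with the official OpenAI Python SDK (model name is hypothetical):
#
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-5.3",
#       messages=build_critique_messages("Our new positioning: ..."),
#   )
#   print(resp.choices[0].message.content)
```

The point isn’t the wrapper itself; it’s that the critique instruction lives in the system role, so every idea you pass through it gets pressure-tested by default rather than only when you remember to ask.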
And be conscious of the flattery. When ChatGPT opens with “what a great question” or tells you your draft is “really compelling”, that’s not editorial feedback. That’s tone-setting. The actual content that follows is usually more useful than the preamble suggests, but only if you’re reading past its opening remarks.
OpenAI has made a start. The responsible development of AI tools depends on exactly this kind of honest self-correction. But the update won’t solve everything overnight, and the underlying incentives that created the problem haven’t disappeared.
The most useful thing an AI tool can do for your business is tell you something you didn’t already want to hear. Worth keeping that in mind next time it tells you your strategy is absolutely brilliant.