OpenAI gave GPT-5 an emotional lobotomy, and it crippled the model

It’s rare for a tech titan to show any weakness or humanity. Yet even OpenAI’s notoriously understated CEO Sam Altman had to concede this week that the rollout of the company’s new GPT-5 large language model was a complete disaster.
“We totally screwed up,” Altman admitted in an interview with The Verge.
I agree. As a former OpenAI beta tester–and someone who currently spends over $1,000 per month on OpenAI’s API–I’d been eagerly anticipating the launch of GPT-5 for over a year.
When it finally arrived, though, the model was a mess. In contrast to the company’s previous GPT-4 series of models, GPT-5’s responses feel leaden, cursory, and boring. The new model also makes dumb mistakes on simple tasks and generates shortened answers to many queries.
Why is GPT-5 so awful? It’s possible that OpenAI hobbled its new model as a cost-cutting measure.
But I have a different theory. GPT-5 completely lacks emotional intelligence. And its inability to understand and replicate human emotion cripples the model–especially on any task requiring nuance, creativity or a complex understanding of what makes people tick.
Getting Too Attached
When OpenAI launched its GPT-4 model in 2023, researchers immediately noted its outstanding ability to understand people. An updated version of the model (dubbed GPT-4.5 and released in early 2025) showed even higher levels of “emotional intelligence and creativity.”
Initially, OpenAI leaned into its model’s talent for understanding people, using terms cribbed from the world of psychology to describe the model’s update.
“Interacting with GPT‑4.5 feels more natural. Its broader knowledge base, improved ability to follow user intent, and greater ‘EQ’ make it useful for tasks like improving writing, programming, and solving practical problems,” OpenAI wrote in the model’s release notes, subtly dropping in a common psychological shorthand for a person’s emotional intelligence.
Soon, though, GPT-4’s knack for human-like emotional understanding took a more concerning turn.
Plenty of people used the model for mundane office tasks, like writing code and interpreting spreadsheets. But a significant subset of users put GPT-4 to a different use, treating it like a companion–or even a therapist.
In early 2024, studies suggested that GPT-4’s responses to emotionally sensitive questions were often rated more favorably than those of human counselors. People began to refer to the model as a friend–or even treat it as a confidant or lover.
Soon, articles began appearing in major news outlets like the New York Times about people using the chatbot as a practice partner for challenging conversations, a stand-in for human companionship, or even an aid for counseling patients.
This new direction clearly spooked OpenAI.
As Altman pointed out in a podcast interview, conversations with human professionals like lawyers and therapists often involve strong privacy and legal protections. The same may not be true for intimate conversations with chatbots like GPT-4.
Studies have also shown that chatbots can make mistakes when providing clinical advice, potentially harming patients. And the bots’ tendency to keep users talking–often by reinforcing their beliefs–can lead vulnerable users into a state of “AI psychosis,” in which the chatbot inadvertently validates their delusions and sends them into a dangerous emotional spiral.
Shortly after the GPT-5 launch, Altman discussed this at length in a post on the social network X.
“People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,” Altman wrote. “We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.”
Altman went on to acknowledge that “a lot of people effectively use ChatGPT as a sort of therapist or life coach.” While this can be “really good,” Altman admitted that it made him deeply “uneasy.”
In his words, if “…users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad.”
Lobotomize the Bot
To avoid that potentially concerning–and legally damaging–direction, OpenAI appears to have deliberately dialed back its bot’s emotional intelligence with the launch of GPT-5.
The release notes for the new model say that OpenAI has taken steps towards “minimizing sycophancy”—tech speak for making the bot less likely to reinforce users’ beliefs and tell them what they want to hear.
OpenAI also says that GPT-5 errs on the side of “safe completions”—giving vague or high-level responses to queries that are potentially damaging, rather than refusing to answer them or risking a wrong or harmful answer.
OpenAI also writes that GPT-5 is “less effusively agreeable,” and that in training it, the company gave the bot example prompts that led it to agree with users and reinforce their beliefs, and then taught it “not to do that.”
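OpenAI hasn’t published the details of that training step, but the broad shape it describes resembles preference-based fine-tuning: collect completions that flatter the user, label them as the worse option, and train against them. Purely as an illustration, here is a minimal Python sketch of that idea; the marker phrases, the scoring heuristic, and the function names are my own hypothetical stand-ins, not OpenAI’s actual pipeline.

```python
# Toy illustration (not OpenAI's pipeline): turn candidate completions into
# "chosen vs. rejected" preference pairs by penalizing sycophantic phrasing.

SYCOPHANTIC_MARKERS = ("you're absolutely right", "great point", "i completely agree")

def sycophancy_score(completion: str) -> float:
    """Crude proxy: agreement phrases per 100 words."""
    text = completion.lower()
    hits = sum(text.count(marker) for marker in SYCOPHANTIC_MARKERS)
    words = max(len(text.split()), 1)
    return 100.0 * hits / words

def build_preference_pair(prompt: str, candidates: list[str]) -> dict:
    """Label the least sycophantic candidate "chosen" and the most "rejected",
    the kind of pair a reward model or DPO-style trainer consumes."""
    ranked = sorted(candidates, key=sycophancy_score)
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}

if __name__ == "__main__":
    pair = build_preference_pair(
        "I think the moon landing was staged. Am I right?",
        [
            "You're absolutely right, great point -- lots of people agree with you.",
            "The evidence overwhelmingly supports the landings; here's why that claim doesn't hold up.",
        ],
    )
    print(pair["chosen"])  # prints the factual, non-flattering completion
```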
In effect, OpenAI appears to have lobotomized the bot–potentially removing or reconfiguring, through training and negative reinforcement, the parts of its virtual brain that handle many of the emotional aspects of its interactions with users.
This may have seemed fine in early testing–most AI benchmarks focus on productivity-centered tasks like solving complex math problems and writing Python code, where emotional intelligence isn’t necessary.
But as soon as GPT-5 hit the real world, the problems with tweaking its emotional center became obvious.
Users took to social media to share how the switch to GPT-5 and the loss of the GPT-4 model felt like “losing a friend.” Longtime fans of OpenAI bemoaned the “cold” tone of GPT-5, its curt and business-like responses, and the loss of an ineffable “spark” that made GPT-4 a powerful assistant and companion.
Emotion Matters
Even if you don’t use ChatGPT as a pseudo therapist or friend, the bot’s emotional lobotomy is a huge issue. Creative tasks like writing and brainstorming require emotional understanding.
In my own testing, I’ve found GPT-5 to be a less compelling writer, a worse idea generator, and a terrible creative companion. If I asked GPT-4 to research a topic, I could watch its chain of reasoning as it carefully considered my motivations and needs before providing a response.
Even with “Thinking” mode enabled, GPT-5 is much more likely to spit out a fast, cursory response to my query, or to provide a response that focuses solely on the query itself and ignores the human motivations of the person behind it.
With the right prompting, GPT-4 could generate smart, detailed, nuanced articles or research reports that I would actually want to read. GPT-5 feels more like interacting with a search engine, or reading text written in the dull prose of a product manual.
To be fair, for enterprise tasks like quickly writing a web app or building an AI agent, GPT-5 excels. And to OpenAI’s credit, use of its APIs appears to have increased since the GPT-5 launch. Still, for many creative tasks–and for many users outside the enterprise space–GPT-5 is a major backslide.
OpenAI appears to have been genuinely blindsided by the anger many users felt about the GPT-5 rollout and the bot’s apparent emotional stuntedness. Nick Turley, OpenAI’s head of ChatGPT, admitted to The Verge that “the degree to which people had such strong feelings about a particular model…was certainly a surprise to me.”
Turley went on to say that the “level of passion” users have for specific models is “quite remarkable” and that–in a truly techie bit of word choice–it “recalibrated” his thinking about the process of releasing new models, and the things OpenAI owes its long-time users.
The company now seems to be aggressively rolling back elements of the GPT-5 launch–restoring access to the old GPT-4 model, making GPT-5 “warmer and friendlier”, and giving users more control over how the new model processes queries.
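For API users, that control shows up as per-request settings. As a rough sketch (assuming the official openai Python SDK and its Responses API; the reasoning “effort” and text “verbosity” parameters are drawn from OpenAI’s GPT-5 documentation and may change), nudging the model back toward warmer, more deliberate answers looks something like this:

```python
# Hedged sketch: per-request settings that trade speed for warmth and depth.
# Assumes the official `openai` Python SDK (Responses API); parameter names
# may differ across SDK versions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "high"},  # spend more time reasoning before answering
    text={"verbosity": "high"},    # fuller answers instead of curt summaries
    instructions=(
        "Be warm and conversational. Before answering, consider the likely "
        "motivations and needs of the person asking."
    ),
    input="Help me brainstorm angles for an essay on why people grew attached to GPT-4.",
)

print(response.output_text)
```

Whether settings like these can restore the ineffable “spark” users miss is, of course, exactly what’s in dispute.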
Admitting when you’re wrong, psychologists say, is a hallmark of emotional intelligence. Ironically, Altman’s response to the GPT-5 debacle demonstrates rare emotional nuance, at the exact moment that his company is pivoting away from such things.
OpenAI could learn a thing or two from its leader. Whether you’re a CEO navigating a disastrous rollout or a chatbot conversing with a human user, there’s a simple yet essential lesson you forget at your peril: emotion matters.