Most people are using ChatGPT totally wrong—and OpenAI’s CEO just proved it

Aug 13, 2025 - 17:46
How did you react to the August 7 release of GPT-5, OpenAI’s latest version of ChatGPT? The company behind the model heralded it as a world-changing development, with weeks of hype and a glitzy livestreamed unveiling of its capabilities. Social media users’ reactions were more muted, marked by confusion and anger at the removal of many key models people had grown attached to.

In the aftermath, CEO Sam Altman unwittingly revealed why the gulf between OpenAI’s expectations for GPT-5’s reception and the reality was so wide: large numbers of us aren’t using AI to its fullest extent. In a post on X explaining why OpenAI appeared to be shortchanging fee-paying Plus users (full disclosure: that includes me), who hand over $20 per month for the second-highest tier of access, by drastically reducing their rate limits for the chatbot, Altman revealed that before GPT-5’s release, just 1% of nonpaying users had ever queried a reasoning model like o3. Among paying users, only 7% had.

Reasoning models are those that “think” through problems before answering them (though we should never remove those air quotes: AI models are not human, and do not act as humans do). Not using them—as was the case with the overwhelming majority of users, paying and nonpaying alike—is like buying a car, using only first and second gear, and wondering why it’s not easy to drive, or going on a quiz show and blurting out the first thing that comes to mind for every question.

Many users prioritize speed and convenience over quality in AI chatbot interactions. That’s why so many lamented the loss of GPT-4o, a legacy model that was later restored to paying ChatGPT users after a concerted campaign. But when you’re querying a chatbot for answers, you want good ones. It’s better to be a little slower—and often it is only a little—and right than quick and completely wrong.

Reasoning models are built to spend more computational effort planning, checking, and iterating before answering. This extra deliberation improves results on tasks where getting the logic right matters. But it’s slower and costlier, which is why providers tend to serve the “non-thinky” versions by default and require users to opt in to the alternatives via a drop-down menu. Then there were OpenAI’s previously impenetrable model-naming conventions—a problem GPT-5 attempted to fix, not altogether successfully. Users still can’t easily tell whether they’re getting the “good thinky” GPT-5 or the less-capable version. After receiving complaints, the company is now tweaking that.

To me, waiting a minute rather than a second isn’t an issue. You set an AI model off and do something else while you wait. But evidently, it’s a wait too long for some. Even after GPT-5’s release—in which the difference between the “flagship model” GPT-5 and GPT-5 Thinking, which offers to “get more thorough answers,” is more obvious—only one in four paying users is asking for thoroughness.

This quickly tossed-out data answers one big question I had about AI adoption: Why do only a third of Americans who have ever used a chatbot say it’s extremely or very useful (half the rate among AI experts), while one in five says it’s not useful at all (twice the rate among experts)? The answer is clearer now: Most folks are using AI wrong. They’re asking a chatbot to handle tough, multipart questions without pausing for thought or breath. They’re blurting out “What is macaroni cheese?” on The Price Is Right and “$42” on Jeopardy!

So if you’re going to try a chatbot, take advantage of OpenAI’s moves to keep users from canceling their subscriptions by opening up more access to models. Set them “thinking” while remembering they’re not actually doing that—and see if you stick around. It’s the right way to use generative AI.
