A GPT-5 Reality Check: When AI Progress Meets Human Attachment
OpenAI dropped GPT-5 last week with the usual fanfare, and I’ve been diving into what it actually delivers versus what the marketing promises. Spoiler alert: it’s complicated, and not always in the ways you’d expect.
What Actually Changed
GPT-5 is a solid incremental upgrade, with the emphasis on incremental. The numbers are real: 94.6% on the AIME 2025 math competition, 74.9% on the SWE-bench Verified coding benchmark, and significantly fewer hallucinations. These aren’t revolutionary leaps, but they matter for anyone using these tools for real work.
The most interesting technical change is the integration of reasoning capabilities directly into ChatGPT’s free tier. Instead of just spitting out responses, GPT-5 can now “think through” complex problems step-by-step. It’s not magic, but it’s measurably more thoughtful.
What OpenAI built is actually a system that automatically routes your questions to different model variants based on complexity. Simple requests are handled by the fast model, while complex ones receive the reasoning treatment. It’s clever engineering disguised as a single product.
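To make that routing idea concrete, here’s a minimal sketch of what a complexity-based dispatcher could look like. To be clear, this is my own toy illustration, not OpenAI’s implementation: the keyword hints, weights, threshold, and model names are invented for the example, and the real system presumably uses a trained classifier rather than string matching.

```python
# Illustrative only: a toy complexity router, not OpenAI's actual system.
# The hints, weights, and threshold are made-up assumptions for this sketch.

REASONING_HINTS = ("prove", "step by step", "debug", "optimize", "derive", "why")

def estimate_complexity(prompt: str) -> float:
    """Crude score: longer prompts and reasoning-style keywords rank higher."""
    text = prompt.lower()
    length_score = min(len(text.split()) / 200, 1.0)      # saturates at 200 words
    hits = sum(hint in text for hint in REASONING_HINTS)
    keyword_score = min(hits / 2, 1.0)                    # two keyword hits saturate
    return 0.6 * length_score + 0.4 * keyword_score

def route_prompt(prompt: str, threshold: float = 0.25) -> str:
    """Send simple prompts to a fast model, complex ones to a reasoning model."""
    return "reasoning-model" if estimate_complexity(prompt) >= threshold else "fast-model"

if __name__ == "__main__":
    print(route_prompt("What's the capital of France?"))                   # fast-model
    print(route_prompt("Prove step by step that sqrt(2) is irrational."))  # reasoning-model
```

The shape of the design is the interesting part: a cheap pre-check decides which, much more expensive, model actually answers, so most traffic never touches the reasoning path.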
The Personality Wars
Here’s where things get interesting—and where OpenAI might have miscalculated. The internet is pissed about GPT-5’s personality changes. One Reddit thread titled “GPT-5 is horrible” attracted nearly 3,000 upvotes and more than 1,200 comments filled with criticism of the new release.
This caught me off guard initially, but the more I think about it, the more it makes sense. People have built real workflows around GPT-4o’s particular style of interaction. They’ve crafted custom prompts, developed debugging routines, and even formed what they describe as emotional attachments to how the model responds.
OpenAI deliberately made GPT-5 less “sycophantic,” their word for excessively agreeable. In their evaluations, the rate of sycophantic replies fell from 14.5% to under 6%. They wanted something that felt “less like talking to AI and more like chatting with a helpful friend with PhD-level intelligence.”
But here’s the thing: many users liked that enthusiastic, overly helpful personality. They describe GPT-5 as feeling like “an overworked secretary” compared to GPT-4o’s “enthusiastic buddy” vibe. Some are threatening to cancel subscriptions over this.
The Business Reality Behind the Tech
OpenAI isn’t profitable despite an estimated $20 billion in annualized revenue. That’s a staggering number that puts their pricing decisions in context. The GPT-5 rollout is designed to push people toward paid subscriptions: $20/month for Plus, $200/month for Pro.
The API pricing is more interesting. GPT-5-nano at $0.05 per million input tokens directly undercuts Google’s pricing. This feels like a volume play: sacrifice margin on basic usage to capture market share, then upsell to premium features.
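To put that number in perspective, here’s a quick back-of-the-envelope cost sketch. Only the $0.05-per-million-token figure comes from the announcement; the monthly volume and the rival price below are placeholder assumptions, not real quotes.

```python
# Back-of-the-envelope token cost estimate. Only the $0.05/M figure is from
# the GPT-5-nano pricing; the volume and rival price are placeholders.

def monthly_cost(tokens_per_month: int, price_per_million_usd: float) -> float:
    """Dollar cost for a month of usage at a given per-million-token price."""
    return tokens_per_month / 1_000_000 * price_per_million_usd

if __name__ == "__main__":
    volume = 500_000_000  # hypothetical workload: 500M input tokens per month
    print(f"GPT-5-nano:        ${monthly_cost(volume, 0.05):,.2f}")  # $25.00
    print(f"Rival at $0.10/M:  ${monthly_cost(volume, 0.10):,.2f}")  # $50.00, assumed price
```

At those volumes the absolute dollars are tiny either way, which is exactly why a price war at the low end looks like a bid to lock in developers rather than to generate near-term revenue.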
What This Actually Means
I’ve been covering AI long enough to recognize the pattern: breathless announcements followed by gradual reality checks. GPT-5 is genuinely better than GPT-4o in measurable ways, but it’s not the leap toward AGI that Sam Altman keeps hinting at.
The initial user backlash reveals something more profound about how AI tools integrate into our lives. These aren’t just utilities—people develop preferences, workflows, and yes, attachments to specific interaction styles. As these systems become more central to how we work, the challenge of improving them while maintaining consistency becomes genuinely difficult.
I was struck by how many of the complaint threads appear to be written with AI assistance themselves. There’s something meta about using AI to articulate frustration with AI personality changes.
The Competition Context
DeepSeek rattled Silicon Valley in January by releasing a free reasoning model that performed competitively with OpenAI’s paid offerings. That was a genuine wake-up call about where innovation might come from. Meanwhile, Anthropic just revoked OpenAI’s access to its Claude API, citing terms-of-service violations, a sign that the cozy relationships between AI companies are fracturing.
Google, Meta, and others continue developing their own models. Musk claims Grok is “better than PhD level in everything.” The pace is relentless, which means today’s breakthrough becomes tomorrow’s baseline expectation.
My Take
GPT-5 represents steady progress in AI capabilities. The reasoning integration is significant, the reliability improvements matter, and the pricing changes could democratize AI access. But it’s not the revolutionary leap the marketing suggests.
The social media complaints underscore a challenge AI companies didn’t fully anticipate: once these tools are woven into daily workflows, people develop genuine preferences for specific interaction styles, and improving the model without breaking the style they rely on becomes a user experience problem as much as a technical one.
For now, GPT-5 moves the ball downfield without fundamentally changing the game. The AI revolution continues, but it’s happening in smaller, more complex steps than the headlines suggest. And apparently, some of those steps involve making users angry about personality changes they didn’t ask for.
The future of AI might be as much about managing user expectations and attachments as it is about advancing the underlying technology. That’s a very human problem for a very inhuman technology to solve.
What’s your take on the GPT-5 release? Have you noticed the personality changes? Drop me a line—I’m curious about real-world experiences beyond the Reddit drama.
Posted by John K. Waters on August 13, 2025
Source: adtmag.com