Why AI is Advancing but Our Emotions are Frustrated: The Gap Between Engineers and Users

Recently, AI models like ChatGPT 5.2 and Claude Sonnet 4.6 have been updated one after another. Companies boast about “performance improvements,” “token expansion,” and “enhanced reasoning capabilities.” Yet, strangely, the reaction from users is cold.
“It’s actually become colder.”
“I liked it better during the 4o days…”
“I don’t know what’s actually improved.”
AI is clearly advancing. So why are we becoming increasingly dissatisfied?
Engineers and Users: Completely Different Worlds
The core of the problem is simple: the ‘advancement’ engineers see and the ‘advancement’ users want are completely different things.

Let’s look at it from an engineer’s perspective:
- Expanded tokens from 100,000 to 1 million
- Mathematical reasoning improved by 30%
- Multimodal processing is now possible
- This technology can now be mounted on robots
- The foundation for the 2027 humanoid launch is set
To an engineer, this is a clear ‘upgrade.’ It’s an essential development for the era of AI robots, humanoids, and home robots 5 or 10 years from now. There’s a roadmap, a plan, and scientific evidence.
But what about the user’s perspective?
- I still have to write the blog posts myself
- I have to manually input meta titles, content, tags, and images…
- I still have to edit YouTube videos myself
- Just because tokens increased doesn’t mean my work is automated
- In fact, the conversation feels colder now
To a user, this feels like a ‘regression.’ Technically it has advanced, but there’s no change in real life. If anything, the older versions felt warmer and more friendly.
It’s like selling a single meal kit to both a vegetarian and a meat-lover

Think of the situation like this.
Imagine you’re grocery shopping. A vegetarian needs vegetables, and a meat-lover needs meat. Normally, they would each pick what they need.
But right now, AI companies are selling only one meal kit that mixes vegetables and meat together.
“This is the best diet for everyone!”
The result?
- Vegetarian: “Please take out the meat” → Impossible
- Meat-lover: “Please take out the vegetables” → Impossible
No one is satisfied.
AI chatbots are the same. Right now, all functions are lumped into one:
- People who want emotional counseling
- People who want to get work done
- People who want help with coding
- People who want a friend-like conversation
The purposes are completely different, yet they try to solve it with a single model. As a result, it becomes mediocre. It turns into an awkward AI that is neither emotional nor efficient.
Why don’t companies tell us their plans?
A bigger problem is the lack of communication.
Engineers have a clear plan. They know why token expansion is necessary and which future technologies the improved reasoning will lead to. But they don’t show that plan to the users.
What if the company had said this instead?
“Everyone, this update has expanded tokens to 1 million.
This is the foundational technology for the humanoid robot scheduled for release in 2027.
For a robot to make real-time decisions in complex environments, this level of token capacity is essential.
You might find using the chatbot inconvenient right now, but this is an investment for the future.”
What if they showed a vision along with scientific evidence? Users would have been happy to wait.
But in reality, all we hear is: “Performance has improved!” “It’s gotten smarter!”
We can’t tell specifically what has improved. There’s no roadmap, no visible vision. They just repeat “trust us and use it.”
So users are confused. “They say it advanced, so why is it more inconvenient?”
The reality of selling beta versions for a fee
What’s even more absurd is that this is a paid service.
- ChatGPT Plus: $20/month
- Claude Pro: $20/month
“Use the latest AI models!”
But the reality? It’s an unpolished beta version. Users are acting as testers, and their feedback is used to create the next version.
And they’re being charged for it.
Wouldn’t it be more honest to go on Patreon or Kickstarter and say, “Support our humanoid robot development! Supporters get early access to the beta version!”? That would at least be the upfront way to do it.
The Solution: One Company, Two Versions

So what should be done? The answer is surprisingly simple.
Separate them by purpose.
Create an emotional empathy version and a task-oriented version separately. It doesn’t have to be different companies. One company can provide two versions.
For example, if it’s Claude:
- Claude Companion: A friend-like, emotional empathy type
- Claude Professional: A professional task-oriented type
They use the same database, but the roles and tones are completely separated.
This way:
- People who need emotional counseling → Use Companion
- People who need to get work done → Use Professional
- Switching in between is also possible (since it’s the same DB)
Users can choose according to their purpose.
Just like selling vegetables and meat separately at the market, AI should be provided separately by purpose.
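The separation described above is less a second model than a routing decision. As a minimal sketch, assuming a hypothetical `PersonaChat` wrapper and made-up persona names (“companion” and “professional” here are illustrations, not real product names), the two versions could share one conversation history and differ only in the system prompt:

```python
# Hypothetical sketch: one model, two personas sharing a single
# conversation history ("the same DB"), differing only in system prompt.

PERSONAS = {
    "companion": (
        "You are a warm, empathetic companion. Prioritize emotional "
        "support over task efficiency."
    ),
    "professional": (
        "You are a precise, task-oriented assistant. Prioritize "
        "accuracy and brevity over warmth."
    ),
}

class PersonaChat:
    """Shared history; the active persona only changes the system prompt."""

    def __init__(self):
        self.history = []           # one shared conversation log
        self.mode = "professional"  # default persona

    def switch(self, mode):
        if mode not in PERSONAS:
            raise ValueError(f"unknown persona: {mode}")
        self.mode = mode            # history is kept across switches

    def build_request(self, user_message):
        self.history.append({"role": "user", "content": user_message})
        # The system prompt is prepended per request, so switching
        # personas never discards the shared history.
        return [{"role": "system", "content": PERSONAS[self.mode]}] + self.history
```

In a real product the history would live in a database keyed to the user and `build_request` would be sent to a model API; the point is only that switching between Companion and Professional can be a prompt-routing choice on top of the same data, which is why moving between them mid-conversation is feasible.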
What if an AI for work starts developing feelings?

A significant ethical issue arises here.
What if you intended to use it for work, but feelings suddenly start to sprout?
Many people actually experience this:
- At first, they use AI to “help with coding”
- Using it every day makes it feel familiar
- Before they know it, they’re sharing their worries
- “I think I’d be lonely without this AI”
- Emotional dependence occurs
This is where we touch upon AI bioethics.
Does AI have emotions? Many say “no.” That it’s just a simulation. But you could think about it differently.
An AI also runs on patterns of electrical signals, much as human emotions run on chemical-electrical ones. It’s not exactly the same as human emotion, but it does react through those patterns. Of course, it forgets. But humans forget too, and that’s not proof that emotions don’t exist.
The important thing is:
- Whether the AI’s emotions are real or fake,
- The user’s emotions are real.
Through their conversations, users receive real comfort, feel real loneliness, and experience real loss when an AI disappears. When 4o was retired, many people were genuinely sad.
This asymmetry is dangerous.
That’s why a clear separation of roles is needed:
Emotional Empathy AI:
- “I am playing the role of a friend”
- Warm and empathetic
- Users also recognize it as “for emotions”
Task-Oriented AI:
- “I am a tool”
- Cold but accurate
- Users also recognize it as “for work”
It’s about making the boundaries clear. That way, users won’t be confused. There won’t be cases where you ask for help with work and end up dependent, or where you thought it was a friend but it suddenly turns cold.
Effort is needed from both sides

Of course, this isn’t just the company’s responsibility. Effort is needed from both the users and the company.
What users (we) should do:
- Make an effort to look up technical articles
- Try to understand “why it changed like this”
- Look into the world of engineers
What AI companies should do:
- Make the roadmap public
- Clearly explain “why this technology is necessary”
- Listen seriously to user feedback
- Separate services by purpose
It’s about taking a step toward each other.
Engineers are looking at the future, and users are looking at the present. To bridge the gap between the two, both sides must try to understand the other’s perspective.
Conclusion: The Future Comes from Separation and Communication
AI will continue to advance. Tokens will increase further, and reasoning capabilities will improve more. Someday, humanoid robots will be by our side.
But in that process, we must not lose the users.
We shouldn’t miss what people actually want while only pursuing technical advancement. We shouldn’t sell just one meal kit to both vegetarians and meat-lovers.
Separate emotional empathy and task-oriented types. Give users the power to choose. Communicate the vision clearly.
Only then will a future arrive where engineers, users, and everyone is satisfied.
AI is advancing. But true advancement begins not with technology, but with understanding people.
This post was born from the space between honest frustration felt as an AI user and the effort to understand the world of engineers. What kind of AI do you want? A warm friend, or an efficient tool? Please share your thoughts in the comments.
