AI is Magical, Until it Isn’t: The Power of Managing Expectations

By: Sarah Thompson, Director of Behavioral Design at Live Neuron Labs

The AI user experience isn’t just shaped by what the technology does—it’s shaped by what users *expect* it to do.

One minute, AI is finishing your sentences, generating an image straight from your imagination, and solving a complex problem almost instantly.

The next? It’s spitting out gibberish, hallucinating facts, and turning a human hand into a six-fingered nightmare.

Sure, these glitches will quickly be ironed out.

But the real issue isn’t the bugs—it’s our very BIG expectations.

We expect AI to cure cancer, solve loneliness, and unlock a future so advanced we can barely picture it. But studies show these sky-high expectations are a recipe for negative user experiences.

Let’s take a closer look at what the research says about the power of expectations—and how they shape our experiences with AI.

TLDR: Don’t oversell what your AI product can do. Framing AI as less capable can actually boost user satisfaction.

Imagine you’re choosing between two equally capable AI chatbots to help plan your dream vacation. One is described as a ‘trained professional’, the other as an ‘intern’.

Which would you pick? Probably the ‘professional’, right?

Well, think again, because behavioral science suggests you might regret your choice.

📖 The Study

A Stanford experiment had participants use functionally identical chatbots—but each was described with a different metaphor to signal its level of competence.

Some were described as ‘trained professionals’ or ‘executives’ (high competence).

Others as ‘inexperienced teenagers’ or ‘toddlers’ (low competence).

Despite identical performance, bots described with low-competence metaphors scored higher on post-use ratings of usability, intention to adopt, and willingness to cooperate.

🧠 The Why

Descriptions and labels aren’t design fluff. Metaphors like these actually activate different mental models.

A low-competence metaphor like “toddler” sets low user expectations—so even an average experience feels impressive.

“Professional” sets the bar higher—so the same experience can feel like a letdown.

💡 The Takeaway

The way we describe our AI tools—through labels, descriptions, and metaphors—shapes expectations. And those expectations influence how users rate the experience.

While confident-sounding labels may draw users in, if the product doesn’t live up to the hype, disappointment and poor user ratings may follow.

A smarter move? Use language that slightly undersells, so your product can overdeliver. For example, instead of ‘expert’, try ‘assistant’.
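To make that concrete, here is a minimal sketch, in TypeScript, of how a product team might encode this choice. The `PersonaCopy` type and `tripPlannerPersona` object are hypothetical names invented for illustration, not a real API:

```ts
// Hypothetical persona copy for a chatbot UI (names invented for illustration).
type PersonaCopy = {
  label: string;   // how the bot is introduced in the UI
  tagline: string; // one-line description shown next to it
};

// Slightly undersell: "assistant" sets a bar the bot can clear,
// while "expert" invites comparisons it may lose.
const tripPlannerPersona: PersonaCopy = {
  label: "travel assistant",
  tagline: "I can help you sketch an itinerary. Double-check the details before you book.",
};
```

The same bot shipped with the label "expert travel advisor" would perform identically, but the research above suggests users would rate the experience lower.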

Source: Conceptual Metaphors Impact Perceptions of Human-AI Collaboration, 2020

TLDR: Humanizing AI tools raises expectations—and if the experience falls short, it can hurt customer satisfaction and brand perception.

We can’t help it—our brains are wired to anthropomorphize.

When an AI tool looks, sounds, or types like a person, we instinctively treat it like one. And as conversational AI advances, this tendency will only grow.

But while studies show adding a friendly name, face, or voice to our AI tools can have benefits like increasing trust and engagement, it can also backfire in certain contexts.

📖 The Study

Across five studies, including real-world data from over 400,000 chatbot interactions, researchers found that angry customers reacted more negatively to human-like bots than to non-human-like ones. This led to:

  • reduced customer satisfaction
  • lower brand perception
  • decreased purchase intentions

Calm customers, on the other hand, were NOT significantly affected by whether the bot was humanized or not.

🧠 The Why

Giving an AI tool human-like features—like a name, voice, or avatar—raises expectations.

We also start to see it less as a tool and more as a social agent—something with agency and control.

That’s why angry users were especially hard on humanized bots.

They were more likely to blame the bot and react with a stronger backlash, because it felt like someone had failed them.

💡 The Takeaway

Giving AI human-like traits is a double-edged sword. While it may increase trust and engagement, in emotionally charged contexts it can reduce satisfaction and damage brand reputation.

A better approach? Highlight the people behind the tool’s development. Labels like ‘designed by experts’ or ‘trained by clinicians’ have been shown to build more credibility than humanizing AI.
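One hedged sketch of how this could play out in practice: keep the friendly persona for calm customers, and switch to a tool-like, expert-backed framing when a customer arrives angry. Everything below (the `BotPresentation` shape, `pickPresentation`) is a hypothetical illustration, not a real library:

```ts
// Hypothetical presentation config for a support chatbot.
type BotPresentation = {
  name?: string;      // human-like name, e.g. "Emma"
  avatarUrl?: string; // human-like face
  intro: string;
};

const humanized: BotPresentation = {
  name: "Emma",
  avatarUrl: "/avatars/emma.png",
  intro: "Hi, I'm Emma! How can I help?",
};

const toolLike: BotPresentation = {
  // No name or face: framed as a tool, with the humans behind it up front.
  intro: "This support assistant was designed by our customer-care team. How can we help?",
};

function pickPresentation(sentiment: "angry" | "neutral"): BotPresentation {
  // In the studies above, angry customers reacted worse to humanized bots,
  // while calm customers were unaffected either way.
  return sentiment === "angry" ? toolLike : humanized;
}
```

How you estimate sentiment is a separate problem; the point is that the persona doesn’t have to be one fixed choice.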

Source: Anthropomorphism in artificial intelligence: a game-changer for brand marketing, 2025

Source: Blame the Bot: Anthropomorphism and Anger in Customer–Chatbot Interactions, 2022

Source: Harvard Business Review, Research: Consumers Don’t Want AI to Seem Human, 2025

TLDR: If users believe the system is learning and improving, they forgive it and trust it more after an error.

AI is going to mess up. A lot.

As we ask AI to do more—diagnose patients, drive cars, and manage our schedules—mistakes are inevitable. And when they happen, they’ll likely feel bigger, riskier, and more frustrating than a glitch in a search engine.

That’s because trust is the currency of AI adoption. And when systems fail to meet expectations, rebuilding that trust is essential.

Research suggests one powerful strategy: make it clear the AI is learning. Framing mistakes as part of an ongoing improvement process helps users stay open, forgiving, and engaged—even when things go wrong.

📖 The Study

Researchers tested how different trust-repair strategies affected users’ willingness to keep relying on an AI system after it made an error.

After the mistake, the AI system responded with either:

  • A denial (‘I’m confident my response was right’)
  • A promise (‘I’ll do better and improve’)
  • An apology (‘Sorry for the mistake and the inconvenience’)
  • A model update (‘I’m a machine learning model and the model’s been updated’)

The result? Users were most likely to trust the AI again when they saw the model update response.

🧠 The Why

The model update response (‘I’m a machine learning model and the model’s been updated’) restored trust because it reframed the mistake as a learning moment. It made users believe the AI system had both the intention and ability to improve.

💡 The Takeaway

If users believe your AI is learning, they’re more likely to stick with it after a mistake.

Even subtle language shifts—like calling it a ‘machine learning algorithm’ instead of just an ‘algorithm’—can be enough to boost trust.
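As a rough sketch, the four framings from the study could be wired into an error handler like this. The type and function names are hypothetical; the four message strings are the ones tested in the study:

```ts
// The four trust-repair framings tested in the study
// (type and function names are hypothetical).
type RepairStrategy = "denial" | "promise" | "apology" | "modelUpdate";

const repairMessages: Record<RepairStrategy, string> = {
  denial: "I'm confident my response was right.",
  promise: "I'll do better and improve.",
  apology: "Sorry for the mistake and the inconvenience.",
  modelUpdate: "I'm a machine learning model and the model's been updated.",
};

// After a confirmed error, default to the framing that signals learning,
// since it performed best in the study.
function trustRepairMessage(strategy: RepairStrategy = "modelUpdate"): string {
  return repairMessages[strategy];
}
```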

Source: Framing the Ghost in the Machine: How to Build Consumer Trust in AI, 2022

Source: Trust Development and Repair in AI-Assisted Decision-Making during Complementary Expertise, 2024

Final Thoughts

At the heart of every AI interaction is something deeply human: our expectations.

What we believe a system can do shapes how we feel when it does—or doesn’t—deliver.

Set expectations too high, and even stellar performance can feel like failure. Set them just right, and even mistakes can build trust.

About the Author

Sarah Thompson

Sarah Thompson is the Director of Behavioral Design at Live Neuron Labs, where she helps teams design smarter products, communications, and services that actually work with human behavior—not against it. With a background in Psychology and a Master’s in Cognitive Semiotics, she brings behavioral science to life in areas like healthcare, sustainability, financial wellbeing, education, and tech.

Sarah regularly consults with organizations, trains teams in behavioral science, and speaks on how to use behavioral insights to improve the user experience.

Connect with her on LinkedIn: linkedin.com/in/sarahethomps

Published: May 29, 2025
