Humanising Chatbots – When Does Anthropomorphism Work?

This episode explores the benefits and risks of making chatbots more human-like. Many businesses assume that giving chatbots human traits—such as personalities, emotions, and natural conversation patterns—will increase engagement. But does this always work? We discuss when anthropomorphism enhances trust and when it creates discomfort, the risks of the "uncanny valley," and how chatbot gender and self-congruence influence consumer interactions. Learn best practices for designing AI-powered chatbots that feel engaging without misleading users.



Chapter 1

The Rise of Human-Like Chatbots

Christina

Welcome to The Customer Code, where we decode the future of marketing, AI, and digital strategy. I’m Christina, and today we’re tackling a fascinating question: Should chatbots sound and act like humans?

Andreas

Let’s start with a simple observation: chatbots are everywhere. From customer service to personal assistants, they're becoming a staple of our interactions with technology. But what’s particularly fascinating is this growing trend to make them human-like—giving them voices, personalities, even emotions.

Christina

Yeah, more and more businesses are designing chatbots with human-like traits—giving them personalities, emotions, and even conversational quirks. The assumption is that the more human a chatbot appears, the more engaging and effective it will be. But does this always work?

Andreas

Well, the assumption definitely seems logical at first glance. Human-like chatbots—those that mimic natural language or express empathy—can create a sense of personal connection. And data supports this: over 70% of consumers prefer chatbots that sound human-like. But—but, and this is important—they also expect these chatbots to act human. If the experience doesn’t match the presentation, users feel what we call—

Christina

Let me guess, the uncanny valley?

Andreas

Exactly.

Andreas

When a chatbot looks or sounds almost human but fails to interact naturally, it can feel unsettling. In fact, the same studies show trust drops significantly when users realise they're talking to AI, not a person.

Christina

I mean, it makes sense. If a chatbot says “I totally understand how frustrating this is” but then can’t actually fix the issue, it doesn’t just feel inadequate—it feels almost… deceptive.

Andreas

Precisely. And this mismatch between human-like expectations and limited chatbot capabilities is where businesses often struggle. They risk losing trust, which ironically undermines the very reason for humanising their chatbots in the first place.

Christina

Okay, so here’s a question: is this a case of companies going too far? Or maybe just not understanding where to draw the line?

Andreas

That's an excellent point. It’s not necessarily about doing less, but rather about calibrating—matching the chatbot’s level of realism to its actual capabilities. Take anthropomorphism, for example: it's not inherently bad. When done subtly—

Christina

Like using natural phrasing or a friendly tone?

Andreas

Yes, to make interactions feel intuitive. But overdoing it—a hyper-realistic voice or too much empathy—can lead to heightened expectations. And that’s where frustration creeps in.

Christina

Okay, so on one hand, people clearly like the human touch, but on the other, it seems like we’re walking a tightrope here. Too little, and it feels robotic. Too much, and it’s weird or even off-putting.

Andreas

Precisely. The challenge lies in finding that balance. It’s not a one-size-fits-all solution—different contexts demand different approaches. But one thing is clear: businesses need to understand that more human doesn’t always mean better performance.

Chapter 2

The Paradox of Anthropomorphism – When Human-Like Chatbots Help vs. Hurt

Christina

But that raises an interesting question—why do we even feel the need to make our chatbots so much like us in the first place?

Andreas

It's actually rooted in psychology. Humans have this natural tendency to anthropomorphise, meaning we assign human traits to non-human things. It’s how we make sense of the world. You see it in kids naming their toys, or even adults cursing at their computers when they don’t work.

Christina

Right, it's like that instinct to treat something as if it’s alive just to make it easier to relate to.

Andreas

Exactly. And the same principle works with chatbots. When they use warmth and natural phrasing—like saying “How can I help you today?” instead of “Specify your query”—it instantly feels more intuitive. It’s easier for users to engage because it mirrors human interaction patterns.

Christina

Alright, but doesn’t that set us up for disappointment? I mean, if a chatbot sounds human, people are naturally going to expect it to act human too, right?

Andreas

That's the crux of the problem. When users expect social intelligence—a real understanding of context, emotions, and intent—and the chatbot can’t deliver, it leads to immediate frustration. And that frustration feels bigger than if the chatbot had never pretended to be human in the first place.

Christina

Yeah, because the stakes feel higher. A robotic chatbot might miss the mark, but at least you knew it wasn’t going to “get” you in the first place.

Andreas

Exactly. Research supports this too. When a chatbot sounds human but fails to show emotional or social intelligence, trust declines by 30%. And that’s a pretty significant penalty—

Christina

For a design choice that was supposed to do the opposite!

Andreas

Right. And when these failings are particularly stark, it triggers what we call the uncanny valley effect. This is when something—a chatbot, a robot, whatever—looks or sounds almost human but just misses the mark, creating discomfort instead of trust.

Christina

So the uncanny valley is basically like those AI faces that look real until you realise the eyes don’t blink at the same time. Creepy, but sort of fascinating.

Andreas

It’s a perfect analogy. And for chatbots, this happens when natural language fails to mask the limitations of the technology. A chatbot might craft a friendly response like “I understand how frustrating this is,” but if it doesn’t follow through, users feel deceived.

Christina

It’s funny how just being a little off—too human and still not human enough—can be worse than being straight-up robotic.

Andreas

That’s why moderation is key. Chatbots designed with subtle anthropomorphism—warm tones, polite phrasing—strike the best balance. They engage without over-promising. But hyper-realistic chatbots? They often raise expectations that they simply cannot meet.

Chapter 3

Gender, Self-Congruence, and Chatbot Perceptions

Christina

So, we’ve talked about how human-like traits can shape user expectations, but here’s another angle: does chatbot gender really matter when it comes to how people engage with them?

Andreas

Actually, yes, it does. Research tells us that gendered chatbots shape user perceptions in pretty significant ways. For example, female chatbots are generally perceived as warmer, more approachable. You’ll see them dominating customer service roles where friendliness is key.

Christina

And let me guess—male chatbots get used more in, like, finance or tech where they want a sense of authority?

Andreas

Exactly. Users tend to associate male chatbots with competence and reliability, especially in sectors where expertise is valued. It’s interesting how these associations come into play even when there’s no explicit need for a chatbot to have a gender at all.

Christina

But isn’t that kind of... I don’t know, reinforcing stereotypes? Like assigning emotional labour to “female” chatbots and technical expertise to “male” ones?

Andreas

That’s a valid concern. And it’s why many companies are opting for gender-neutral chatbots to sidestep those issues entirely. This approach not only avoids perpetuating biases but also aligns with growing user preferences for inclusivity in design.

Christina

Yeah, especially since not everyone fits neatly into those boxes to begin with. Plus, a gender-neutral chatbot just feels more universal—it’s like it’s trying less to assume what we need based on our own biases.

Andreas

Precisely. And to take that further, there’s the idea of self-congruence—users prefer chatbots that reflect their own personality or communication style. Someone extroverted might enjoy a chatbot that’s conversational, even expressive. On the other hand, introverted users might prefer something more professional and straightforward.

Christina

Makes sense. Nobody wants a chatbot cracking jokes if you just need to get something done. But how do businesses even begin to match a user’s personality like that?

Andreas

It’s more about subtle adaptation. For instance, chatbots can learn from prior interactions, tailoring their tone or language to align with a user’s preferences. But there’s a fine line—if it feels too close, too mimicked, it can actually feel intrusive rather than helpful.

Christina

Right, like no one wants an AI that’s trying way too hard to be your best friend. You just want it to get you, not... copy you.

Andreas

Exactly. Over-personalisation can make interactions uncomfortable, which is why adaptive but neutral chatbots are increasingly popular. They focus on being efficient and inclusive rather than overly tailored or gendered, especially in areas like healthcare where professionalism is paramount.
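The subtle adaptation Andreas describes could look something like the following toy sketch. Everything here is an assumption made for illustration: the style markers, the three-message window, and the two canned responses are invented, not a real chatbot API.

```python
# Toy illustration of subtle tone adaptation: match the user's recent
# style, default to neutral, and only adapt on a consistent signal.
# Markers and thresholds below are invented for the example.

def detect_style(message: str) -> str:
    """Crude proxy for a user's communication style."""
    casual_markers = ("!", ":)", "haha", "hey")
    if any(m in message.lower() for m in casual_markers):
        return "casual"
    return "neutral"

def respond(message: str, history: list[str]) -> str:
    """Pick a tone based on the last few user messages."""
    recent = history[-3:] + [message]
    casual_votes = sum(detect_style(m) == "casual" for m in recent)
    if casual_votes >= 2:  # adapt only after a consistent signal
        return "Sure thing! Let me sort that out for you."
    return "Certainly. I will look into that for you."
```

The point of the vote threshold is exactly the "fine line" mentioned above: one exclamation mark should not flip the bot into best-friend mode, which would feel mimicked rather than helpful.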

Christina

So really, it’s about dialing things back a little—not trying to be everything to everyone, but just, I guess, getting it right where it counts?

Andreas

That’s the key. Whether it’s gender, tone, or personality, the goal is to meet user expectations without overstepping. When design choices don’t align with the interaction’s purpose, trust erodes quickly.

Christina

And when trust breaks, it’s game over for that chatbot, right?

Chapter 4

The Role of Empathy – Can Chatbots Offer Emotional Support?

Christina

Speaking of getting it right where it counts, Andreas, let’s talk about chatbots showing empathy. It’s kinda fascinating but also... well, tricky. Where do we even start untangling it?

Andreas

You're absolutely right. Empathy in AI really is a double-edged sword. On one hand, users appreciate warmth and politeness in routine interactions—it can make things feel more human, less transactional. But—but in high-pressure or emotionally charged situations, it often backfires.

Christina

How so? Like, what’s the tipping point where it stops being helpful?

Andreas

Consider this scenario: a chatbot says, “I’m really sorry to hear that. I understand how frustrating this must be.” It sounds empathetic, right? But if it follows up with an unhelpful, generic solution—or worse, no solution at all—users feel patronised. It’s like... like offering sympathy without substance.

Christina

Ugh, yeah, that’s gotta be frustrating. It’s like, “Thanks for nothing, bot.”

Andreas

Exactly. Research actually highlights this frustration. Around 45% of consumers respond positively to empathetic language in casual interactions. But—

Christina

But the other 55%?

Andreas

Prefer factual, solution-oriented responses, especially in stressful scenarios. The more serious the situation—healthcare, finance, urgent customer service—the less tolerance there is for anything that feels insincere or... inadequate.

Christina

So basically, the higher the stakes, the less people care about empathy—they just want answers.

Andreas

Precisely. And when chatbots fail to deliver those answers, their expressions of empathy can feel hollow, even aggravating. It’s a mismatch between what users need and what the AI provides.

Christina

Right, because it’s trying to act like a person, but we know it’s not really one. So the “I totally get you” thing comes across as... fake?

Andreas

Exactly. Unlike humans, chatbots don’t truly “feel” empathy—they simulate it. And users are more willing to accept that in low-stakes interactions, where it’s just a nice touch. But in high-stakes situations, it’s competence they value, not emotional flourishes.

Christina

So does that mean businesses should just ditch chatbot empathy altogether?

Andreas

Not necessarily. It’s about balance. Empathy works when it’s subtle and context-appropriate—like acknowledging a simple issue with warmth but quickly transitioning to a clear, practical solution. The chatbot's job is to be helpful first and human-like second.

Christina

Got it. So, empathy’s fine in small doses, but only if it doesn’t get in the way of actually solving the problem.

Andreas

Exactly. And that’s the challenge for developers: finding the right calibration point. Expressing just enough warmth to engage users, without overstepping into territory that risks feeling insincere or, worse, frustrating.

Chapter 5

Best Practices for Human-Like Chatbot Design

Christina

Exactly—and finding that balance, striking just the right tone, sounds like it takes a lot of fine-tuning. Honestly, it feels like designing a chatbot is more of an art than a science, doesn’t it?

Andreas

You're absolutely right, Christina. Designing human-like chatbots requires a delicate blend of technical expertise and psychological insight. The goal isn't simply to make AI appear human, but to enhance user experience in a way that feels natural and effective.

Christina

Okay, so where’s the line? How do you humanise a chatbot just enough without crossing into that weird, unsettling territory?

Andreas

It starts with subtle anthropomorphism—using natural language and a polite tone to make the interaction less robotic. For example, phrases like “How can I assist you today?” are far more engaging than a cold “Specify your request.” Small touches like these help users feel at ease.

Christina

Right, so like, warmth and friendliness instead of rigid, mechanical responses?

Andreas

Exactly. Chatbots that strike this balance see up to a 40% increase in user engagement. But—but here's the key—engagement only rises when the chatbot’s responses remain solution-oriented. If a chatbot is friendly but unhelpful, that initial boost quickly turns into frustration.

Christina

Makes sense. A chatbot isn't there to crack jokes or be your best friend; it’s supposed to actually help, right?

Andreas

Precisely. And that’s why functional design takes precedence over exaggerated personality traits. Chatbots should enhance the user experience, not distract from it. It’s about creating value—not just adding layers of artificial charm.

Christina

So what about transparency? I know earlier we touched on trust dropping when users feel tricked. How do you prevent that?

Andreas

This is where progressive disclosure comes in. Instead of trying to “pass” as human, the chatbot makes its AI nature clear from the outset or at an appropriate point in the conversation. For example, saying something like “I’m your virtual assistant here to help” sets realistic expectations from the start.

Christina

Right, so users know what to expect and don’t feel... deceived later on?

Andreas

Exactly. Transparency builds trust. If users understand the chatbot’s limitations but find it helpful, they’re far more likely to engage positively. Misleading design, on the other hand, sets users up for disappointment and erodes trust.
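In code, the disclosure principle Andreas outlines can be sketched very simply: disclose AI status in the first message, and answer honestly if asked. The bot name, greeting text, and keyword check below are illustrative assumptions, not a real product's behaviour.

```python
# Sketch of progressive disclosure: state the bot's AI nature up front
# and never let it "pass" as human when asked directly.
# Names and phrasings are assumptions made for the example.

def greet(bot_name: str = "Ava") -> str:
    # Disclose AI status in the very first message.
    return (f"Hi, I'm {bot_name}, a virtual assistant. "
            "I can help with orders, returns, and account questions.")

def handle(message: str) -> str:
    lowered = message.lower()
    if "are you human" in lowered or "real person" in lowered:
        # Answer honestly and offer a route to a human.
        return ("I'm an automated assistant. "
                "I can connect you to a human agent if you'd prefer.")
    return "How can I help you today?"
```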

Christina

But isn’t all this dependent on the context? Like, what works in customer service might not fly in healthcare, right?

Andreas

Absolutely. Context is everything. For example, in casual scenarios, a chatbot can afford to show some personality—light humour, conversational warmth—but in professional settings like financial services or healthcare, the tone needs to remain neutral and efficient. It’s about tailoring the experience to the user’s needs.

Christina

Yeah, no one wants a chatbot cracking jokes when they’re trying to dispute a bank charge or schedule surgery.

Andreas

Precisely. And that’s where businesses get it wrong—they over-humanise in situations where professionalism should come first. Matching tone, warmth, and functionality to the context is critical for success.

Chapter 6

Key Takeaways and Business Implementation

Andreas

You're absolutely right about context, Christina, and I think that brings us to an important takeaway for our listeners. To design effective chatbots, businesses must tread carefully, striking a balance between warmth, functionality, and transparency. It’s all about adapting to the specific needs of the situation.

Christina

Right. So, step one: don’t overpromise. It’s all about setting realistic expectations. Like we said earlier, people appreciate human touches—they like warmth and natural phrasing—but they’re not looking for the chatbot to become their best friend. They just want it to work.

Andreas

Exactly. Progressive disclosure is key here. When users know they’re engaging with AI—and understand its limitations—they’ll trust the interaction more. Chatbots shouldn’t set themselves up to fail by trying too hard to be human. Simply being approachable and efficient is enough.

Christina

Got it. And step two is keeping things in context. A chatbot for online shopping can be casual, even funny. But in healthcare or finance? Keep it neutral, keep it professional.

Andreas

Precisely. Context dictates everything. And step three, of course, is functionality. A chatbot has to do what it’s designed to do—solve problems, answer questions. Emotional flourishes, like showing empathy or humour, should only enhance the experience, not distract from it.

Christina

And when it can’t handle something on its own, it’s gotta hand it off seamlessly, right? Escalating things to a human agent when necessary is a huge part of maintaining trust.

Andreas

Exactly. And businesses need to remember that AI is here to enhance human interactions, not replace them. The most successful chatbots are the ones that know their limits and work within them.
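The "know your limits" hand-off the hosts describe is often implemented as a simple escalation rule. The sketch below is a minimal illustration under assumed inputs: the keyword list, the confidence score (imagined as coming from an intent classifier), and both thresholds are invented for the example.

```python
# Minimal escalation sketch: hand off to a human when the user asks,
# when the bot is unsure, or when it keeps failing to resolve the issue.
# Keywords, scores, and thresholds are illustrative assumptions.

ESCALATION_KEYWORDS = {"agent", "human", "complaint", "urgent"}
MAX_FAILED_ATTEMPTS = 2

def should_escalate(message: str, confidence: float,
                    failed_attempts: int) -> bool:
    if any(word in message.lower() for word in ESCALATION_KEYWORDS):
        return True  # explicit request: escalate immediately
    if confidence < 0.5:
        return True  # low confidence in the intent match
    return failed_attempts >= MAX_FAILED_ATTEMPTS
```

A design note in the spirit of the episode: escalating early on an explicit request costs little, while forcing a frustrated user through more failed bot turns is exactly the trust-eroding mismatch discussed above.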

Christina

So, bottom line: if companies get the balance right—realistic design, clear functionality, and smart use of anthropomorphism—they’re not just building better chatbots. They’re building better customer relationships.

Andreas

And that better relationship can provide a real competitive edge. As customers increasingly engage with AI-powered assistants, trust will separate the successful businesses from the rest. Those who master chatbot design will not only meet user expectations but exceed them.

Christina

Absolutely. And speaking of success, I’m actually excited about our next episode. It feels like we’ve been focusing on trust and engagement so far, but next week… we’re diving into a whole other kind of value.

Andreas

Exactly. We’ll be navigating how chatbots don’t just support businesses, but actually drive them forward. Think AI that outperforms humans in certain areas of productivity and insights—it’s a fascinating shift. But for now, I think we’ve given our listeners plenty to think about.

Christina

Oh, absolutely. Thanks for tuning in, everybody! We’ll see you in the next one.

Andreas

Until next time.