
Picture this: you're scrolling through a mental health app, and instead of a blank chatbot interface, you see a warm smile and kind eyes and hear a gentle voice offering support. Feels more comforting, right? But what happens when we dress up artificial intelligence to look and sound human, especially when it comes to something as personal as therapy?

A recent study dove deep into this question by asking 20 young adults to create their ideal AI psychologist using Character AI, a popular platform where users can design chatbot personalities. What researchers discovered reveals fascinating truths about how we relate to technology and the hidden biases shaping these digital relationships.

The Face of Digital Comfort

When given free rein to design an AI therapist, most participants gravitated toward remarkably similar choices. The overwhelming favorite? A middle-aged woman with a gentle smile. Out of 15 participants who chose human avatars, 10 created female characters. The voices selected were typically mature, slow-paced, and lower-pitched, qualities participants associated with trustworthiness and professionalism.

This wasn't random. Participants explained their reasoning: female avatars felt less aggressive, more approachable, and somehow more naturally empathetic. One participant admitted worrying that a male voice might question what they said, acknowledging this as a personal bias but choosing accordingly anyway.

The age factor mattered too. Participants consistently avoided youthful appearances, associating them with inexperience and lack of credibility. The sweet spot landed between 30 and 50 years old, old enough to seem professional but young enough to feel relatable.

Why We Humanize Our Digital Helpers

Making AI seem human isn't just about aesthetics. Research shows that anthropomorphism, the technical term for giving human characteristics to non-human things, can significantly impact how much we trust and engage with technology. When participants in this study created personalized, human-like AI psychologists, many reported feeling greater initial trust compared to generic chatbot interfaces.

The logic makes sense: we're hardwired to interact with other humans. When faced with something unfamiliar like an AI system, we naturally apply what we know about people to make sense of it. A smiling face, a warm voice, professional attire on an avatar: these cues tap into our existing social knowledge and make the technology feel less alien.

However, the picture isn't entirely simple. Some participants remained skeptical regardless of how human-like their creation appeared. One person noted they wouldn't trust an AI psychologist for real problems, preferring instead to create a fun character like a duck that could be a supportive friend rather than pretending to be a doctor.

The Invisible Hand of System Bias

Here's where things get interesting and a bit concerning. While participants thought they were freely designing their ideal therapist, the AI system itself was quietly shaping their choices through biased suggestions.

Several participants who initially used neutral, professional descriptors like "rational," "professional," or "doctor" were shown only male avatar options by the system. These participants then created male AI psychologists, even though they hadn't specifically wanted a male character. When one woman described wanting someone "in a suit," the system suggested men, forcing her to revise her description to "female in a suit" to get what she actually wanted.

The pattern revealed itself clearly: feminine descriptors like "gentle" and "empathetic" triggered female avatar suggestions, while "rational" and "professional" brought up male options. The system was reinforcing traditional gender stereotypes, and users largely accepted these suggestions without question.

Even more troubling was the issue of race. Nearly all created characters were white, despite most participants being from Asian countries. Only one person specifically requested an Asian character. When someone typed "African," the system generated images of people in tribal settings with extensive jewelry, a harmful stereotype. Yet participants seemed largely unaware of this bias, accepting whatever the system offered.

What This Means for Digital Mental Health

The findings paint a complex picture of the growing field of AI mental health support. On one hand, anthropomorphism clearly helps some people feel more comfortable engaging with digital tools. Making AI seem more human can lower barriers and make technology feel accessible to those who might otherwise avoid it.

The ethical questions are harder to ignore, though. When we make AI seem too human, do we risk creating false expectations? Some participants worried about this, preferring their AI to clearly identify itself as artificial to avoid confusion. There's also the concern of dependency: if people form strong emotional connections with AI therapists, they might rely too heavily on systems that, despite appearances, lack true understanding or emotional capacity.

The gender and racial biases embedded in these systems present another layer of concern. If AI mental health tools reinforce stereotypes, they could perpetuate harmful assumptions about who makes a good caregiver or authority figure. They might also fail to represent diverse communities adequately, making the technology feel less welcoming to people from underrepresented groups.

As AI continues to play a larger role in mental health support, serving as everything from crisis intervention tools to ongoing therapy supplements, understanding how people relate to these systems becomes crucial. The study reveals that anthropomorphism is a double-edged sword: powerful enough to build trust and engagement, but risky in its potential to mislead or reinforce biases.

The challenge ahead isn't whether to make AI more human-like; it's how to do so responsibly. That means being transparent about what AI can and cannot do, actively working to identify and eliminate bias in how these systems present themselves, and keeping humans firmly in the loop when it comes to serious mental health care.

For now, the middle-aged, smiling woman with the gentle voice remains the face of AI therapy in many users' minds. Whether that's who should represent the future of digital mental health support is a question worth continuing to examine closely.


Guo, Y. (2025). How do users interpret the anthropomorphic features of the "AI psychologist"?
