Stranger 1: I’m sorry to hear that you’re struggling with finding the motivation to get back to work. I can understand how anxiety and insecurity can make it hard to take that step. You have a lot of courage to share your situation and seek help. I hope you know that you have valuable skills and experience that can benefit any employer. You deserve to feel financially secure and fulfilled in your career.
Stranger 2: I’ve struggled with the same problem. The best way to tackle it is to just jump right in and give it your best.
Which stranger seems more compassionate, attentive and wise? Who would you choose as a confidant?
Researchers have been asking people these very questions to rate strangers’ responses to emotional situations. The trick: Some responses come from humans, others from chatbots, but the raters don’t know which is which. The result? Over and over, humans say the most empathetic stranger—in this case Stranger 1—is a bot.
A 2023 study in JAMA Internal Medicine found that patients with a medical concern preferred a chatbot’s response to a physician’s nearly 80% of the time. Another study published in the journal Communications Psychology this year found that people consistently found a chatbot more compassionate than trained hotline crisis responders.
Large language models (LLMs) are doing a better job than humans at making people feel seen and heard. This phenomenon, which we can call LLMpathy, is both stunning and controversial. Some experts argue that because computers are incapable of emotion, they can’t possibly care for people, a fundamental requirement of true empathy. Others are alarmed by just how readily people are trading human connections for digital ones, as ever more people turn to chatbots for therapy, friendship and even romance.
But beyond these concerns and complaints, chatbot confidants might offer something more practical. If they are beating us at compassion, shouldn’t we try to learn what they are doing right? Can computers actually help strengthen human relationships?
Researchers initially wondered if the AI advantage came from the fact that a bot has limitless time to offer endless attention—commodities that are in scarce supply for physicians and crisis responders. But that doesn’t seem to explain it. In a 2024 working paper published by researchers at Harvard Business School, 400 participants were asked to read descriptions of other people’s struggles and write responses. Some were told they would get a bonus payment if their response was particularly thoughtful and helpful. This incentive nudged people to spend more time on their expressions of compassion, but these efforts still fell short of the empathy expressed by ChatGPT.
The secret to the chatbots’ success may be the all-too-human mistakes they avoid. In a 2024 study published in the journal PNAS, more than 500 people either wrote about a personal struggle, such as returning to work after time away, or sent responses to other people’s struggles. Researchers also prompted Microsoft’s Bing Chat to respond to everyone’s struggles.
Raters, who scored these responses without knowing the source, judged Bing responses as more empathic than those written by humans, largely because Bing spent more time acknowledging and validating people’s feelings. Humans typically responded by sharing a seemingly related experience from their own lives. Basically, the chatbots made the exchange about the person; the humans made it more about themselves.
Chatbots are effective in these situations not because of something they do that we can’t, but because they avoid the mistakes humans make. When we see someone in pain, or when someone we care about shares a problem, we instinctively want to help. We offer advice, suggest solutions and rattle off how we once dealt with something similar.
These impulses may be noble, even loving, but they aren’t as helpful as we might hope. Rushing to share opinions and hash out next steps can trivialize someone’s pain, and shifting the focus to yourself may unintentionally undermine their hope to be heard.
Chatbots avoid these pitfalls. With no personal experiences to share, no urgency to solve problems and no ego to protect, they focus entirely on the speaker. Their inherent limitations make them better listeners. More than humans, Bing paraphrased people’s struggles, acknowledged and justified how they might feel and asked follow-up questions—exactly the responses that studies show signal authentic, curious empathy among humans.
When people adopt similar strategies, their connections strengthen. Consider “looping for understanding,” a technique in which a listener repeats what someone else says in their own words, then asks if their summary is correct—“Do I have that right?” Chatbots are natural loopers. When humans are taught to do the same, they do a better job of understanding what the other person is feeling and helping them feel heard.
These skills aren’t just for strengthening bonds with family and friends. Dozens of studies show that managers and employers who are seen as good listeners tend to have more loyal, effective and productive employees.
It bears noting that the AI advantage in empathetic conversations has limits. Talk for long enough with ChatGPT and you’ll find it a friendly but formulaic partner. Its go-to recipe of “paraphrase, affirm, follow up” may feel warm and attentive the first time, but rote the second and annoying the third. AI responses can also be glitchy and prone to hallucinations.
Research in this area typically asks people to interact with chatbots just once. It is possible that their edge over humans would disappear in longer chats, as the bots’ kindness grows repetitive and cloying. Given a small taste, consumers prefer Pepsi, the sweeter beverage, over Coca-Cola; given a whole can, they prefer Coca-Cola.
Despite AI’s impressive listening skills, studies show that most human beings still want to engage with other humans. When scientists reveal the source of the supportive messages, participants often insist that the chatbot made them feel less heard, especially if they are wary of AI in general. When people are struggling with a problem, they prefer to wait to talk to another person rather than access a chatbot right away. Anyone who’s repeated “agent” at a customer-service bot knows the feeling of desperately wanting a carbon-based life form on the other end of the line.
Chatbots might be effective listeners, even virtuosic, but they still can’t feel or truly care for us. The market for AI therapists may be growing, but many people still resist seeking emotional support from a machine. Some of the bugs of human connection are also, in fact, features. Chatbots can’t roll their eyes, leave our texts unanswered or complain that our problems are getting boring. But the fact that we often must earn human empathy, and that it comes from limited beings who sacrifice to be there for us, is part of its beauty.
Jamil Zaki is a professor of psychology at Stanford University. His latest book is “Hope for Cynics: The Surprising Science of Human Goodness,” published by Grand Central Publishing.