Is Your Online Therapist a Chatbot?
Or how AI in mental health is raising more questions than answers
Mental health support is built on trust. You open up, you get vulnerable, and you expect a human being—someone trained, compassionate, and invested in your well-being—to be on the other end. But what happens when that therapist turns out to be, well… not a person?
A recent short-seller report on Teladoc Health and its BetterHelp platform alleges that some therapists are secretly using AI-generated responses during therapy sessions. Patients who thought they were speaking with a licensed professional were, at times, unknowingly getting messages copied and pasted from AI tools like ChatGPT. If true, this is a massive ethical breach—one that exploits patients at their most vulnerable moments.
BetterHelp’s alleged use of AI isn’t happening in a vacuum. Many therapists on the platform report being overworked, underpaid, and incentivized to cut corners. If clinicians feel forced to rely on AI to keep up with demand, that’s a system failure, not a technological innovation.
The report suggests that BetterHelp's compensation model might be nudging therapists toward AI-generated responses as a way to maximize earnings, potentially at the expense of meaningful, human care.
These revelations, whether entirely accurate or not, cast a shadow over the trustworthiness of digital mental health services. When individuals seek therapy, they're often at their most vulnerable, yearning for genuine human connection and understanding. The idea that their therapist might be outsourcing empathy to an algorithm? Well, that can feel like a betrayal.
Moreover, if the alleged financial misrepresentations hold up, the resulting erosion of investor confidence could have broader implications for the telehealth industry's credibility and sustainability.
But here’s where it gets weird.
A recent study published in PLOS Mental Health found that people often preferred AI-generated responses over those written by real therapists. That's right: when people were asked to judge which responses were better, the AI often won.
So now we’re left with an unsettling paradox:
If AI is being used without consent, that’s unethical. Therapy is a deeply human process, and substituting a machine without informing patients violates trust.
If AI is actually better than human therapists, that’s also a problem. Because it forces us to ask: What does that say about the state of mental health care?
But wait, there’s another twist!
Just when we’re wrapping our heads around those questions, a study comes along that throws yet another wrench into the mix.
A recent randomized controlled trial published in NEJM AI tested a generative AI chatbot named Therabot on people with clinically significant symptoms of depression, anxiety, and eating disorders. The results? Surprising, and more than a little convincing.
Over a 4-week period, participants using Therabot saw significantly greater symptom reduction than those in the waitlist control group (who received no treatment during the study) across all three mental health conditions. Even more surprising? Participants rated their therapeutic alliance with the AI, that deep bond of collaboration and trust typically formed with a human therapist, as comparable to norms reported for outpatient psychotherapy with human clinicians.
So yes, a well-designed, ethically deployed AI chatbot can help people feel better. It was used for good. It was used transparently. And it worked. Oh, and it wasn’t trying to hide behind a human name badge.
So Where Does That Leave Us?
This is where it gets complicated—in the best way. Because we’re now in a space where three things are true at the same time:
If AI is being used deceptively in therapy—as the BetterHelp allegations suggest—it’s harmful, unethical, and undermines patient trust. No one should pour out their trauma only to unknowingly get a chatbot in return.
But if AI is thoughtfully developed, fine-tuned by experts, and used transparently, it might actually be a highly effective tool—especially for people who don’t have easy access to human therapists.
Even when AI is effective, we still have to ask: Effective at what? The NEJM AI study showed that a well-trained chatbot can reduce symptoms, but it also raises deeper questions: Are we only measuring symptom reduction? What about long-term healing, personal growth, or the complex, messy work of processing trauma? Can AI support those things too? Or are we just optimizing for short-term relief?
That’s why this moment is such a crossroads. AI isn’t inherently bad or good. It’s a tool. How we use it—and whether we’re honest about it—will determine whether it helps or harms.
Bringing It All Together: Our Job Going Forward
We can’t afford to be passive about this. Mental health is too important, and too many people are already slipping through the cracks. AI is here—it’s not going away—but how we use it is still up for grabs.
So here’s what needs to happen:
Demand Transparency: Platforms must tell users if they’re interacting with AI. Anything less is deceptive, and deception in therapy is never okay.
Regulate the Use of AI in Therapy: Clear ethical guidelines and legal protections must be created and enforced. AI can support care—but it can’t be a free-for-all.
Learn from Therabot, Not Exploit It: The NEJM AI study on Therabot shows what’s possible when AI is thoughtfully designed and deployed. With expert fine-tuning, user consent, and clinical validation, AI can reduce symptoms and support real healing. But we need to apply those high standards across the board, not just in academic trials.
Center the Human Experience: Even if AI gets really, really good, it must stay in service of human needs: not as a shortcut, not as a way to cut costs, and not as a replacement for care rooted in empathy and trust. Artificial Intimacy (another kind of AI) is a real risk we need to be cautious about.
Final Thought: AI Shouldn’t Be the Answer to a Broken System
Technology is advancing fast, but mental health care shouldn't become a testing ground for under-regulated experiments. If AI is replacing therapists in secret, that's a violation of trust. And if it's outperforming them, that's a wake-up call about the pressures, limitations, and burnout plaguing the system.
So let’s not choose between panic and blind optimism. Let’s choose intentionality.
Let's build a future where AI supports care without replacing it, where people know who (or what) they're talking to, and where healing is never left in the hands of a machine alone. Who oversees this and holds us accountable? Well, that's a discussion worth having.
At the end of the day, therapy should be about connection, not shortcuts. And trust, not trade-offs. There's a role for AI, yes, but let's be intentional about where that is.
Hi Ben, I wanted to add the following context on this sentence in your post:
> Over a 4-week period, participants using Therabot saw significantly greater symptom reduction than those in the control group (traditional person delivered care) across all three mental health conditions.
The Therabot paper states: "Participants were randomly assigned to a 4-week Therabot intervention (N=106) or waitlist control (WLC; N=104)."
The correct comparison is between Therabot and no treatment (waitlist), not "traditional person delivered care".
You may have been confused because there is a separate quote from one of the authors that says “Our results are comparable to what we would see for people with access to gold-standard cognitive therapy with outpatient providers”; however, this is not the control group measured in the research paper. The paper simply states: "Participants, on average, reported a therapeutic alliance comparable to norms reported in an outpatient psychotherapy sample."
I have studied the comparison of AI vs. human interviewers in suicide prevention screening. Our participants overwhelmingly preferred the chatbot when disclosing their suicidal ideation.