20 Comments
Ben:

Hi Ben, I wanted to add the following context on this sentence in your post:

> Over a 4-week period, participants using Therabot saw significantly greater symptom reduction than those in the control group (traditional person delivered care) across all three mental health conditions.

The Therabot paper states: "Participants were randomly assigned to a 4-week Therabot intervention (N=106) or waitlist control (WLC; N=104)."

The correct comparison is between Therabot and no treatment (waitlist), not "traditional person delivered care".

You may have been confused because there is a separate quote from one of the authors that says "Our results are comparable to what we would see for people with access to gold-standard cognitive therapy with outpatient providers"; however, this is not the control group measured in the research paper. The paper simply states: "Participants, on average, reported a therapeutic alliance comparable to norms reported in an outpatient psychotherapy sample."

Ben Miller:

Yes! You are so right, and that's totally my bad for miscommunicating that. Very important to communicate these nuances and not overgeneralize! Thanks for the fix.

Jennifer Wright-Berryman, PhD:

I have studied the comparison of AI vs. human interviewers in suicide prevention screening. Our participants overwhelmingly preferred the chatbot when disclosing their suicidal ideation (SI).

Jason Whitehead:

Love this, and love the sense of limitations. As an early adopter of most technologies, I use AI personally to get things started, but rarely to share a finished idea or product. It's a useful organizational tool.

However, your points about therapy are things I've wondered about for years, not just in the AI space. Are we really just about symptom relief? A lot of evidence-based treatment privileges that over other markers around attachment, relationship, and longer-term growth and development. I'm reminded of a supervisor who once told me that "it's a shame therapy is wasted on the sick."

While I don't fully share that sentiment, I would amend it to say that it's a shame therapy is solely focused on sickness and symptom relief. Our paradigms need to change in relationship with AI and with other tools necessary to co-create good human beings rather than just non-pained ones.

Ben Miller:

+1 Jason. Well said!

Rachna L:

Love everything about this article! Your points on transparency and intentionality of use are so important.

Another thought for discussion might be the argument that AI, or computer algorithms more broadly, have been used in the practice of medicine for many years without necessarily involving patients' consent (radiology, pathology, and much more).

So there is a story there about why it feels so bad here in mental health when a therapist uses an AI tool.

1) Because ChatGPT is not an FDA-approved or validated tool, it feels bad.

2) Therapy data, being personal stories and history, is something different from images, blood work, and lab values.

3) The use of AI in mental health is not yet standard of care. Eventually, the best therapists will likely be the ones who use excellent AI tools to supplement their care and education and fill gaps in their knowledge. And patients aren't in on this ride to see that standard of care being built. Some don't care, but many, many do, and it's time to be more transparent about the entire messy process.

Ben Miller:

Yes! It's important to recognize that other fields of medicine adopted AI tools long before the mental health field even thought about it. It feels that without transparency, we are doomed to create something in service of goals that may differ from those of the people we are attempting to serve. Really great points! Thanks for commenting - and love that you have written about the ethics of this so eloquently here: https://publish.obsidian.md/ai-in-mental-health/Inseparable/Enhancing+Mental+Health+Care+with+AI

Benjamin Kyle, LCPC:

I’m a therapist and wrote on this issue last week. I don’t think I feel threatened by the existence of AI in my field. As you say, it has its uses. I have qualms with the way AI obtained its information, but it is arguably useful in the way it packages that information for the user. I think treading carefully with transparency is best. I agree that more regulation is needed when AI is used for medical purposes. I asked ChatGPT questions about this for my post, and the screenshots are at the end. https://open.substack.com/pub/benjaminkyle/p/doctor-ai-is-in-can-chat-gpt-replace?r=4lt1s6&utm_medium=ios

Ben Miller:

I love your article! So well done - and I appreciate your take on the issue. Thank you for weighing in here.

Benjamin Kyle, LCPC:

Thanks for the kind words! This is such an important topic, and it doesn’t get much attention.

Zoe Siegel:

Thanks. This is so important.

Spherical Phil - Phil Lawson:

Thank you for your insightful summary of the complexities of using chatbots as therapists. While AI holds tremendous potential for mental health support, its development is far more nuanced than it may seem.

Two key points stand out:

1. Current chatbot models focus on identifying conditions (sadness, anxiety, etc.) and addressing symptoms, yet therapy's true goal is deeper than symptom management.

2. Chatbots lack the personal understanding that therapists gain through dedicated time spent learning their patients' experiences and underlying concerns.

The future lies in AI that genuinely knows and supports individuals, empowering their decisions about health and well-being. As personal responsibility for health grows, this need is more pressing than ever. A new, advanced kind of AI could be transformative.

Especially if the AI is used upstream to support individuals before they have clinical issues.

Ben Miller:

Phil, I love this, as it's both guidance and hope at the same time. So well said. Your call-out for more context and understanding is spot on. I'd love to see more people in this space invest heavily in understanding the issues and going beyond the superficial pieces.

Paola Santiago:

Thanks for bringing up the BetterHelp article. This article was actually such a fascinating read! I agree with you that AI is a neutral, albeit very powerful, tool. The fact that it can even beat out actual therapists in terms of symptom reduction is already a reason why we can't just easily remove AI from mental health spaces. But at the same time, being ethical about it is a step toward making it more humanistic.

I actually did cover something similar to yours, especially around therapy and how people humanize AI. My biggest takeaway, though, is that no matter what, AI does not understand you, and my biggest concern is that people are becoming more inclined to share their most sensitive topics with AI and over-rely on it emotionally.

https://psproductpersonpapers.substack.com/p/why-ai-feels-human-even-when-far

Ben Miller:

"AI does not understand you..." Such a powerful statement. Thank you so much for the comment and for sharing your work. I look forward to reading it!

When Freud Meets AI:

Hey Ben, thank you for this thoughtful and measured article. I truly appreciate your approach of "learning from the Therabot study and not exploiting it." I touched upon several aspects of the study (questionnaires used, effect size...) in an article and would appreciate your feedback on it:

https://wfmai.substack.com/p/no-psychiatry-did-not-just-experience?r=3row1i

Ben Miller:

Thank you for this excellent commentary on the study. I absolutely agree with you and love how you highlight other "movements" within the field that should have swayed/influenced psychiatry. Folks should read your post as you do a great job getting more into the study, which I did not. Great work!

When Freud Meets AI:

Dear Ben, thank you for your feedback, I really appreciate it!

Promise:

Thank you for your thoughtful article. I appreciate you mentioning both the importance of trust and how LLMs can help improve people's mental health. At the same time, I strongly disagree with your conclusion, "At the end of the day, therapy should be about connection, not shortcuts." To me, therapy should be about improving mental health, not the exact way we go about doing this. People are suffering, and the role of a therapist, IMHO, is to find the most effective way to help a person improve their mental health.

Daniel Eisenberg:

Very interesting essay, thank you! This makes me wonder: how much does it matter whether we think responses are coming from AI versus humans, independent of the content of those responses? In other words, imagine a randomized trial with two groups. Both groups are given identical feedback, except one group is told it's from AI and the other group is told it's from humans. Which group does better? I guess this type of experiment would be tricky to conduct, because ideally the feedback would depend on what each individual says they are struggling with, so you can't ensure that the feedback is identical between the two groups. But perhaps some version of this experiment has been run?
