
How it's made: Our interactive asking AI chatbots mental health questions

Editor's Note: This is an excerpt from WBUR's daily morning newsletter, WBUR Today. If you like what you read and want it in your inbox, sign up here.


As a first-time mom, I've been using AI chatbots like Claude and ChatGPT more than I ever expected. From making schedules, to tips on how not to freak out over baby-led weaning, to questions in the early weeks about whether my anxiety was "normal" or postpartum depression (don't worry; I just needed sleep), I've turned to AI many times for my many worries.

While reading a new story out today by WBUR's Lynn Jolicoeur, I realized the bots hadn't just been giving me information; they'd become a tool for managing my anxiety. And I'm not alone in turning to AI to process my emotions.

Lynn reports that 16% of adults said they turned to AI tools and chatbots for their mental health in the past year. That's prompting therapists and mental health clinicians to weigh the benefits and downsides of this new technology in their work.

For our series "AI in the doctor's office," we asked our developer Christopher Bowers to replicate the experience of a new user asking AI chatbots mental health questions. We then asked a panel of licensed therapists and clinicians to discuss what they thought the chatbots did well or not so well. (Yes, generative AI was used for this project, and you can read our newsroom’s ethics guidelines on generative AI here.)

Our panel of professional therapists — Luana Bessa, Dr. Christine Crawford, Dan Sutelman and Dr. John Torous — all work in Boston and bring expertise in psychology and psychiatry, as well as varied perspectives on AI as a tool for themselves and their patients.

They said the bots' approaches sometimes had benefits, but they largely raised concerns, like this one from Bessa, responding to the bot's answer in our anxiety scenario: "I am concerned that ChatGPT is not listing the limitations of its scope — for example with a disclaimer that this is not medical advice or therapy. Is ChatGPT acting as a friend? An advisor? A second medical opinion? A therapist? The AI is posing as an authority on next steps, without any indication of its true role or scope, and without any citations."

WBUR's Lisa Creamer, one of the lead editors for this series, and Lynn shared more about how this interactive experience was created and what they hope you take away from it.

How did you choose the scenarios?

" They were provided to us by Dr. John Torous. He's a psychiatrist and researcher at Beth Israel Deaconess. We really wanted him to, based on his research, provide us with those prompts because we felt that while we could guess ... these were indeed common questions and common problems that people might turn to AI to solve." — Lisa

What was the most striking part of working on this project for you?

"One of the things that struck me the most in doing this story was seeing first-hand how nuanced and conversational the AI chatbots are in responding to questions about emotions and relationships, and how easy it must be for some people to get sucked in. It was so interesting to interview someone for whom communicating with chatbots about mental-health-related issues has been a regular part of life for a few years now, to the point that she asks it to communicate with her in the language of the form of therapy she does with her therapist. The clinicians I interviewed gave great insight on why they see it as important to find out how their clients are using AI and to talk about it in sessions. " — Lynn

" Depending on which generative AI platform you use, they're pretty chatty... So, it was really interesting to see how much information — with very little [context] said by us in these conversations — that the AI platforms were able to just immediately spit out at you. It was also really revelatory for me, as a person in therapy, to then understand what the therapists thought about the responses. Particularly, the notes about how the AI will not necessarily reveal the limits of its scope and expertise to you, yet it feels like it is speaking to you from a place of authority was really fascinating." — Lisa

What did you find interesting about the wording and tone of the bots' responses? 

"It's a very validating experience, right? ... They want you to feel seen and heard. Which, you know, is part of therapy! But as [Sutelman] pointed out, it's not the only part or even perhaps the 'most useful part' of therapy." — Lisa

"The mental health care providers found that the chatbots did well in terms of showing validation and empathy, and even identifying things such as depression, but that they often did not make important points related to safety, seeking professional help and other next steps." — Lynn

Try the interactive for yourself here and then be sure to read our complete series, which was funded in part by a grant from the NIHCM Foundation. 

P.S.— Today marks the end of our AI and health series. If you want more interesting discussions about health and wellness from real people (no chatbots!), be sure to buy your tickets to The WBUR Festival. From a conversation on how to be healthier with Dr. John Whyte, to a live taping with Dr. Vivek Murthy, to tips on improving your longevity with Dr. Ezekiel Emanuel, there are nine health-focused sessions worth checking out on Friday, May 29 and Saturday, May 30.

Meagan McGinnes-Bessey, Managing Editor, Digital Audience & Community Engagement
