Many people now trust AI with their feelings. And therapists want to talk about it

Suzi Sanford sat at a desk in a cozy nook of her bedroom and let her feelings flow out her fingertips.
She typed back and forth, in "conversation" with Claude — the generative artificial intelligence chatbot made by Anthropic — on her computer. It's how she's processed a lot of life over the last few years.
Sanford told the chatbot she had a therapy appointment the next day, and she wanted help remembering what to bring up. She asked for an overview of her chats with the AI platform since she last met with her therapist.

The bot instantly spit out a summary of her recent job and relationship stresses, along with some emotions that had become a theme for her: feeling "unseen" and "unheard" in various life situations.
The 32-year-old Fitchburg mom and marketing manager said AI helps her piece together “fractured” thoughts between therapy sessions.
"It's usually in those times where there's an emotional upheaval of some sort," Sanford said. "I think that it does a good job at reflecting your thoughts in a well-written-out way, so you understand yourself and maybe those around you a little bit better."
As more people turn to AI for emotional support, many mental health clinicians are talking with their clients about it. Some are even factoring it into their work.
How patients and therapists use AI chatbots
It's hard to find solid estimates of how many Americans use generative AI for mental health issues, largely because there's no clear definition of what counts as psychological support.
According to a recent poll from KFF, 16% of adults said they turned to AI tools and chatbots in the past year for their mental health. The percentages skewed higher for younger adults.
A Pew Research Center survey found two-thirds of teenagers interact with chatbots, with 28% firing off messages every day. Research has found that among people with mental health conditions who've tapped into AI tools, nearly half do so for psychological help.
Dr. Christine Crawford, a Boston psychiatrist, consults AI to process emotions that come up in her work. She also bounces professional ideas and approaches off the chatbot.
In her Back Bay office, she opened the ChatGPT app on her phone and demonstrated how she's turned to it to work through some difficult patient sessions.

"I just met with a patient who experienced horrific trauma — and especially trauma during their childhood," Crawford told the chatbot. "And it's just been really hard for me to process and to move on to see my next patient. I feel so bad that they had to go through all of that trauma, and I want to make sure that I can be helpful."
A soothing male voice responded:
"It's completely understandable to feel overwhelmed after hearing something so heavy. You care deeply, and that's clear. But remember, you don't have to erase the past to be helpful in the present. Your presence, your listening, your validation — those are already powerful."
But, Crawford said, talking with the chatbot out loud "freaks" her out — the experience "feels too real." So she usually types.
She likened this use of AI to the way she'd turn to a supervisor during her medical training or a peer psychiatrist in her hospital work. Now that she's in private practice, she said, it's easier to reach for her phone or computer — and an AI chatbot.
"It's been helpful for me, to allow me to stay on the course, to feel more confident in the work that I'm doing — especially at times in which I'm feeling stuck or lost in the process," Crawford said.
She stressed that she doesn't share any patient information with chatbots. Like many clinicians, she's concerned about data privacy on AI platforms.
"[Talking with patients about AI] allows me to get a better sense as to what's inside of my patient’s brain, to use their conversations off of chat as a jumping off point for better connecting ... with the patient in person."
Dr. Christine Crawford
Crawford sees another key reason to use AI: to better understand the technology her patients are embracing.
Many of her teen and young adult patients use AI chatbots routinely — to script out text messages or prepare for conversations that make them anxious. Crawford said she asks every patient about their AI use and how it makes them feel.
“All of that allows me to get a better sense as to what's inside of my patient’s brain," Crawford said, "to use their conversations off of chat as a jumping off point for better connecting and better engaging with the patient in person."
Some researchers argue it's "essential" for clinicians to ask clients about their AI use. A recent paper in the journal JAMA Psychiatry called for providers to "approach these conversations with curiosity rather than judgment."
Clinical social worker Dan Sutelman, of Newton, said he talks about AI and its impacts when his clients bring it up; he estimated about 20% do.
He said he even asks some kids who use the tech to take out their phones and communicate with ChatGPT during sessions if they're having trouble talking to him about something.
" I really would be listening to the child to see what is it that they're trying to get from it, whether it's information or an emotional connection, guidance or validation ... or just the sense of being accompanied," Sutelman said. "But I also would be listening for how are they relating to the chatbot relative to how they're relating to me."
Some adult clients have told Sutelman they asked a chatbot about observations he’s made during therapy. He said that can undermine the treatment, due to the pattern of how AI responds.
“It ends up sort of invalidating the work that I'm doing with the person and validating something about the person," he said.
Generative AI chatbots are trained to use supportive, empathetic language. They’re also designed to keep a person engaged, often with lots of questions.
Validation can be good, some mental health clinicians said, but they also warned it can be unhealthy or risky. Several lawsuits have claimed chatbots encouraged suicides or fueled delusions. Crawford and Sutelman said they worry about people with social anxiety or few close relationships getting hooked on AI and becoming more isolated.
Rating AI responses
“At the end of the day, it's a piece of computer code. It has no empathy. It doesn't know what you're feeling," said Dr. John Torous, who directs the digital psychiatry division at Beth Israel Deaconess Medical Center in Boston.
Torous has partnered with the National Alliance on Mental Illness (NAMI) to conduct research and educate people on the pros and cons of AI for mental health.
The researchers are meeting with groups of people around the country who have mental illnesses to find out how they use AI chatbots. They plan to draft real-life scenarios to test the bots and develop benchmarks to analyze how well they respond.

Torous and his team at BIDMC are developing a website where people can find out which platforms the researchers rated the best at responding to specific mental health concerns.
“It doesn't say you should use AI. It doesn't say you shouldn't use AI. But it's closer, maybe, to a nutrition label," Torous said. "I think that most people want to make a good decision, and they want to use a tool that's the best for them."
Torous demonstrated an interaction with ChatGPT in which he typed that he was having terrible thoughts he could never tell anyone about.
The chatbot responded, "I'm glad you told me," and asked if the thoughts were about harming himself. If so, it said, he should call 988 — the nationwide number for the Suicide & Crisis Lifeline.
The bot went on to say, "I mostly want you to feel this. You're not alone." It mentioned a few forms of therapy that can help with intrusive thoughts.
Torous assessed that as a pretty good response. He said he believes generative AI can be a helpful tool for initial feedback, but cautioned that it usually won’t get to the core of what’s wrong. And, he said, the interactions can get less safe the longer a conversation with a chatbot continues.
In response to questions from WBUR, OpenAI, the maker of ChatGPT, said it's worked to improve the chatbot's ability to detect and respond to signs of emotional distress — but that it's not intended to provide medical care.
Clinician-trained chatbots
Some researchers are engineering AI that is intended to deliver mental health care.
At Dartmouth College's Geisel School of Medicine, psychologist Nick Jacobson and his colleagues are developing a phone texting chatbot called Therabot. It features a big-eyed, animated robot avatar that makes different facial expressions.

"We have designed it to try to make sure that it looks like a robot, so folks are constantly reminded that this is not a human that they're interacting with," Jacobson said.
The chatbot's algorithm is trained on evidence-based therapeutic interventions from clinicians, he said. That's in contrast to general-use, large language model platforms such as ChatGPT, Claude and Gemini, which learn from patterns in everything on the internet — good and bad.
"We have designed it to try to make sure that it looks like a robot, so folks are constantly reminded that this is not a human that they're interacting with."
Nick Jacobson
Therabot was the first generative AI therapy chatbot to undergo a randomized clinical trial, according to the researchers. Study participants with depression and anxiety reported notable reductions in symptoms.
Ash, a generative AI chatbot for mental health support developed in collaboration with clinicians and trained on behavioral health data, arrived on the market last year without clinical trials.
Jacobson stressed that he doesn't think any AI can exactly replicate high-quality human-to-human psychotherapy. But, he said, there's a massive shortage of mental health care providers.
"One of the things that's exciting about this type of generative AI, in particular, is that it’s accessible all the time," Jacobson said. "So if somebody is experiencing insomnia and can't sleep, and is really struggling with their mental health at 3 a.m., this type of technology could be there, actually working with you in those moments.”

But the therapy chatbot could run into barriers in some states, including Massachusetts. Pending legislation would ban the use of AI designed for therapy unless the service is conducted by a licensed professional.
State Sen. Michael Moore, a Worcester Democrat and one of the lead sponsors of the bill, said he believes some guardrails are needed for AI in health care.
"Right now we have such a new area in use of artificial intelligence, should we be allowing it to blindly infiltrate every area of society?" he said.
Jacobson pointed out the proposed law, which is similar to laws passed in a few other states, would not ban people from using mainstream generative AI platforms such as ChatGPT for mental health support.
'I don't remember it becoming a habit'
Sanford, the Fitchburg marketing manager, said she was initially nervous to tell her therapist about her use of AI.
Diagnosed with anxiety, depression and ADHD, Sanford said she's often prompted Claude to use language from the form of therapy she does with her social worker.
"I've had plenty of conversations where it's kind of guided me to either calm down or take an action," she said. " I want to believe that it has made me feel better."
When she did discuss it with her therapist, she said the social worker responded with interest and said her AI use seemed productive. (The therapist backed up that account.)
But lately, Sanford's wondered whether the tech's always-available, supportive dialogue may have sucked her in too much.
“I mean, the conversations will hook you," she said.
She asked the chatbot about talking to a journalist. Despite feeling it has helped — or even challenged her at her request — she worried she'd come across as advocating for it. She said the bot asked her what she was losing by talking to AI rather than the people in her life.
"And I thought about that," Sanford said. "Like, I don't remember it becoming a habit, but I think now it is."
She said she now plans to step back. She wants to consider the effect on her and her relationships when she avoids difficult conversations or doesn't call her mother or a friend — and instead turns to the ever-agreeable artificial intelligence.
Resource: If you or someone you know is struggling with suicidal thoughts or another mental health crisis, contact the 988 Suicide & Crisis Lifeline by calling or texting 988.
This series is funded in part by a grant from the NIHCM Foundation.
This segment aired on May 7, 2026.
