Good Bot, Bad Bot | Part I: Mental Health and Bot Therapy

A snapshot at the Computer History Museum in Mountain View, California. All of the objects you see here are robots. (Ben Brock Johnson/WBUR)

For the next few weeks, the Endless Thread team will be sharing stories about the rise of bots. They're all over social media platforms, chatrooms, phone apps, and more. How are these pieces of software — which are meant to imitate human behavior and language — influencing our daily lives in sneaky, surprising ways?

First up, our co-hosts delve into the history of ELIZA, the world's first chatbot therapist. Why did this computer's creator have a lot of complicated feelings about the development of AI? We also contemplate the bigger question: can AI help us cope with mental health issues?

Show notes

Support the show: 

We love making Endless Thread, and we want to be able to keep making it far into the future. If you want that too, we would deeply appreciate your contribution to our work in any amount. Everyone who makes a monthly donation will get access to exclusive bonus content. Click here for the donation page. Thank you!

Full Transcript:

This content was originally created for audio. The transcript has been edited from our original script for clarity. Heads up that some elements (e.g., music, sound effects, tone) are harder to translate to text.

Ben Brock Johnson: I am back at college for the fall semester. Or at least for today. Trying to figure out where I’m going in order to meet a professor during office hours.

Hey can you tell me, is this the new computer science building? Do you know?

Helpful campus pedestrian: Yes it’s this building. This one that you’re looking at right here.

Amory Sivertson: I wouldn’t have pegged you for an office hours student Ben, I gotta say.

Ben: Yeah, but I love hanging out with everyone. Even professors. And especially professors who work in very nice new buildings.

Ben: Beautiful building.

Helpful campus pedestrian: Have you been here before?

Ben: No.

Amory: Hm, okay, that does check out. I did not like school…per se.

Ben: You’re a “school drools” kind of person?

Amory: All the time. So much drool. What were you talking to Dartmouth College assistant professor Soroush Vosoughi about?

Ben: Well first, how cool the atrium of his building is.

Ben: What does it look like to you?

Soroush Vosoughi: Huh. Like a beehive. (Laughs.)

Ben: It does, right? Yeah.

Soroush: Yeah.

Ben: Lots of individual cells that make up a whole or something.

Soroush: Exactly. Yeah. A whole that's greater than the sum of the individuals. Yeah.

Amory: A nice sounding observation, but from his answer I’m guessing Soroush is not a professor of architecture.

Ben: Correct. Though in a way, he does deal with certain kinds of architecture. The careful assembly of things.

Soroush: I work on machine learning and natural language processing, and I do a lot of work with social media data.

Ben: I’ve come here to get Soroush to tell me about a project he and some grad students recently worked on that’s kind of on the academic bleeding edge of what machines can do with social media data. A program designed to predict the onset of serious mental health challenges.

He takes me up a floating staircase to the top floor of the building. Past floors of hardware labs and software labs. An expensive looking remote controlled sailboat.

Soroush: It's actually an autonomous sailboat.

Ben: Is it really?

Soroush: Yes. It learns how to sail itself.

Ben: We get into Soroush’s office, where the central air is on. Never good for an audio interview.

Ben: Is there any way to turn that air off?

Soroush: Umm.

Ben: The air is controlled by software. Centrally. And ironically, this computer science professor is currently powerless to change it. He says there’s an angry email thread about this very issue in the Computer Science school’s listserv right now.

I start to do what any student does when they’re looking for extra credit with a professor: complimenting him and asking about the items on his bookshelf. Important books about algorithms, machine learning, and Isaac Asimov’s Foundation series, which any sci-fi nerd knows. He’s also got something made by a 3-D printer on there.

Soroush: This is a prototype of a Hodor holding the door. Like it's a doorstop, actually.

Ben: Solid GameStop memes reference.

Soroush: Yes, exactly.

Ben: There’s a homemade radar built with coffee cans, brain puzzles, and a Stirling engine, which uses a temperature differential to turn heat into mechanical energy.

Soroush also has this beautifully designed hand-sized box. With a simple mechanical switch on it. No explanation. When you flip the switch on, a robotic hand pops up immediately and flips the switch back off.

Ben: (Laughs.) I love those.

Soroush: There are a lot of metaphors around this. One is, you know, maybe the uselessness of technology. You're solving a problem that doesn't exist, right? I mean, it just goes back.

Ben: It's a reminder of what not to do.

Soroush: What not to do, exactly. It's a useless box. That's why it is, you know.

Amory: Alright, I think you’ve got your extra credit, Ben.

Ben: True true true true true. Time to get down to business. We start with 100-level stuff.

Ben: What's a robot?

Soroush: Hmm.

Ben: Amory, care to answer before Soroush does?

Amory: Hm. I’d say a robot performs a task mechanically and automatically and maybe sometimes more efficiently than we can?

Ben: Not bad, not bad.

Soroush: So a robot, I think most people will think of a mechanical being, but the definition of a robot is actually more general than that. Anything that does a task that a human does, but in an automated fashion, I would call a robot.

Ben: Soroush started out working on mechanical beings at MIT. Robots that lifted things, performed physical feats. But now, he’s more focused on a particular part of robots. What he would maybe describe as the brain. And this whole host of programs, which often get called by a slightly shorter name: bots.

Earlier this year, Soroush and some of his students started scraping data off of Reddit. A massive number of comments, from thousands and thousands of real Reddit users, to look for signs of mental illness among those users. They were doing this thing in the online world because of something Soroush was seeing in his offline world.

Soroush: As a professor at Dartmouth, I've had a lot of conversations with students, both graduate and undergraduate, who have told me that the culture they come from is such that they still don't feel comfortable talking about mental health issues, and they feel stigmatized to actually even say that, hey, I might, you know, be feeling anxious or maybe slightly, slightly depressed.

Ben: Can you say more about the specific cultures, or would you rather keep it general, it's up to you.

Soroush: Well, I can, I can give you — so generally speaking, I think a lot of Asian cultures, and I mean both East and West Asia, not just East Asia. So people in the Middle East, in East Asia, South Asia.

Amory: Soroush and other researchers built a bot to help people from Asian cultures acknowledge they were having a mental wellness challenge? By searching their posting data and looking for signs of mental stress? That is wild. And also tricky.

Ben: Yes. And one of the things that’s so fascinating about this. Is that millions of people all over the internet are going around spending their days, I think mostly thinking that they’re interacting with other people online. Sure everybody’s heard someone like Elon Musk complain that there might be too many Twitter bots. But more and more people are part of this complicated, massive, teeming ecosystem of humans and virtual machines interacting with each other. In obvious ways and kind of sneaky ways. For better and for worse. And we want to talk about that.

Amory: I’m Amory “totally not a robot” Sivertson.

Ben: I’m Ben “not a robot” Johnson and you’re listening to Endless Thread.

Amory: We’re coming to you from WBUR, Boston’s NPR station. And we’re bringing you a new series about the rise of the machines. Good bot.

Ben: Bad bot. Today’s episode: Bot therapy.

Amory: OK Ben. If a bot lives on the internet, is it really a robot?

Ben: I think by Soroush’s definition, yes. A robot does something a human does but in an automated fashion. But Soroush, who works at the college where the term artificial intelligence was first coined, might not even call his creation a bot. He might call it a model.

Soroush: The model itself is the core of the bot. The other part, the input and output, is just plugging it into some kind of a, you know, platform and having it run in real time. So yeah, go ahead.

Ben: Are those, are those the three parts of this kind of bot: input, output, model?

Soroush: That's right.

Ben: And is the model sort of like a road map or an instruction manual or something like that? How would you further describe the model?

Soroush: Yeah, that's a really good question. The, a model, the simplest way to think of it is as a mathematical function that maps the input to the output. So here the input is raw data collected from the real world. You have a mathematical model, that's what we call the model, that can then map it through some transformation to a meaningful output.

Amory: I don’t know man. I don’t know. Model? Meaningful output? Input? Bleep bloop.

Ben: OK so think of a kind of really complicated flowchart right? The input is the beginning of that complicated flowchart and the output is the very end of it. The model is the middle.

Amory: Bleep bloop boop.
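For the curious, here is a rough sketch in Python of that input-model-output idea. It's a toy, not Soroush's actual model; the features and the scoring rule are made up purely to show how a "model" is just a function sitting between raw input and a meaningful output.

```python
# A toy illustration of the input -> model -> output flowchart described above.
# This is NOT the Dartmouth team's model; the features and thresholds below
# are invented for illustration only.

def extract_features(post: str) -> dict:
    """Input step: turn raw text into numbers the model can work with."""
    words = post.lower().split()
    return {
        "num_words": len(words),
        "num_first_person": sum(w in {"i", "me", "my", "myself"} for w in words),
    }

def model(features: dict) -> float:
    """Model step: a made-up function mapping features to a score in [0, 1]."""
    if features["num_words"] == 0:
        return 0.0
    return min(1.0, features["num_first_person"] / features["num_words"] * 5)

def label(score: float) -> str:
    """Output step: turn the score into something meaningful to a human."""
    return "worth a closer look" if score > 0.5 else "nothing notable"

post = "I keep telling myself I will finish this, but I never do."
print(label(model(extract_features(post))))
```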

Ben: So let’s actually try out a real world example. And it’s a relevant one. It’s called ELIZA. ELIZA was a natural language processing computer program that was created by this legendary MIT professor named Joseph Weizenbaum.

Amory: Natural language processing is…

Soroush: Natural language processing, which, all that means is you're basically creating models that deal with naturally spoken languages, uh, has really advanced in the last decade.

Amory: And ELIZA was a computer that you talked to. But not just any old Regular Joe computer program. A therapist computer. And it wasn’t built in the last decade. It was built in 1964.

Ben: So to try it we need to get into a time machine, but just kidding.

Amory: I mean, I wish. But I would go into the future. Which is female, for the record.

Ben: Fair fair, but so was ELIZA. Supposedly. And people are still running her code on the web. Amory, shall we go to the therapist?

Amory: Let’s go to the therapist.

You sent me an Eliza link. Okay, it says talk to Eliza by typing your questions and answers in the input box. She says, "Hello, I'm Eliza. I'll be your therapist today."

Ben: What shall we say?

Amory: (Typing.) Hi, Eliza. I'm feeling pretty lazy today. Okay, she says, "Do you believe it is normal to be feeling pretty lazy today?" What do you want to say back?

Ben: No.

Amory: (Typing.) No, not one bit. She says, "Are you saying no just to be negative?"

Ben and Amory: (Laugh.)

Ben: Let's say, maybe I don't know.

Amory: (Typing.) Maybe, I don't know. That's just how I feel.

She says, "Do you wish to be able to know that's just how you feel? Question mark." All right, I'm onto you ELIZA. This is the thing where you're gonna say, you keep asking a question back. (Typing.) Yes, I wish to know the source of what's making me feel lazy, and like I can't complete things.

Wow, that was a rapid fire response. She says, "How do you know you can't complete things."

Ben: Just ask my editor.

Amory: (Laughs.) Just ask my boss. She says, "Would you say that you have psychological problems?"

Ben: Well, how about you say, would you say that you have psychological problems?

Amory: (Typing.)

Ben: What do you got?

Amory: Okay. She says, "Oh, dot dot dot. Let's move on to something else for a bit."
Say that, "I have psychological problems? Question mark."

Ben: How about, you're making me depressed Eliza?

Amory: (Typing.) Oh my God, "What makes you think I'm making you depressed, Eliza?" The bot is self destructing as far as I'm concerned.

Ben: (Laughs.)

Amory: Like, she doesn't know her name. She's, you know, it's like I know I am. But what are you? What's going on?

Ben: She's kind of negative in this therapy session. Kind of a negative vibe, no?

Amory: Yeah. I mean, we weren't necessarily giving her the best material to work with, but the most helpful thing that I read in this interaction is her saying, "How do you know you can't complete things?" Yeah, maybe I'll just say that to myself throughout the day.

Ben: OK. And we’ll get back to ELIZA and why that experience is not great. But think of Soroush’s project as an evolution of this decades-old idea, that conversations between humans and chatbots can be helpful. Because maybe a bot can help us see things that we wouldn’t normally see ourselves.

Amory: And if ELIZA was built something like 60 years ago, then bots should be amazing experts at this! Right? Except no, absolutely not. In fact they suck at it. Because we humans are nuanced as hell. And while robots have been processing human language for a while, truly understanding meaning from that language is a lot more tricky.
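To see why that ELIZA session felt so shallow, here's a stripped-down, loose imitation of the kind of trick ELIZA relies on: match a keyword pattern, "reflect" some pronouns, and fill in a canned response. This is a sketch of the general idea in Python, not Weizenbaum's original code, and the rules below are invented for illustration.

```python
import re
import random

# A loose imitation of ELIZA-style pattern matching: reflect pronouns,
# then fire a canned response keyed to a regex. No understanding involved.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

RULES = [
    (r"i feel (.+)", ["Do you often feel {0}?", "Why do you feel {0}?"]),
    (r"i can't (.+)", ["How do you know you can't {0}?"]),
    (r"no\b.*", ["Are you saying no just to be negative?"]),
    (r"(.*)", ["Please tell me more.", "Let's move on to something else for a bit."]),
]

def reflect(text: str) -> str:
    """Swap first- and second-person words so the echo sounds like a reply."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza_reply(user_input: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, user_input.lower().strip())
        if match:
            reflected = [reflect(g) for g in match.groups()]
            return random.choice(responses).format(*reflected)
    return "Please go on."

print(eliza_reply("I feel pretty lazy today"))
print(eliza_reply("I can't complete things"))
```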

Soroush: So it's easy to, well, relatively easy, I’m going to put that in quotes, to analyze what people say in terms of what they actually say explicitly. But it's a much harder scientific question to use what people are saying to infer their internal mental state. People know how to infer other people's states based on the way they talk and emotions and facial expressions; bots don't. And so that's a very important ability for bots to learn: to infer people's internal states.

Ben: That's really interesting. So in a way, you're talking about a foundational need that bots have, which is interpreting and understanding humans’ underlying emotions.

Soroush: In cognitive science, people sometimes refer to this as theory of mind. And so humans, of course, evolved to do that. So did monkeys, for example. And other primates.

Ben: Over a really, really, really, really, really, really, really long time.

Soroush: Exactly.

Ben: Soroush points back to his office bookshelf where there’s a rock polisher. A tumbler that accelerates a natural process somewhat unnaturally.

Soroush: We're doing something very similar, where we are doing what evolution does in hundreds of millions of years. But in a few years, basically.

Amory: Some might say this feels a little like playing God. Accelerating a piece of software’s understanding of the mental state of humans. It’s a bit … yikes?

But we’ve been reaching for the stars on this stuff for a long time, ever since we imagined the future, or imagined people imagining the future.

Soroush: I'm a big science fiction fan, so pretty much all of my research is actually inspired at some level, you know, by science fiction. But this particular line of research, looking at mental states and, more importantly, being able to predict people's behavior, was inspired mainly by reading the Foundation series by Asimov. The core of the series is that there's a mathematician called Hari Seldon who develops a model, or a field of study actually, called psychohistory...

[Foundation clip audio: Psychohistory is a predictive model designed to predict the behavior of very large populations]

Soroush: That is able to predict how societies will evolve in the future based on past historical data.

Amory: In a minute, how Soroush is following in the footsteps of Hari Seldon, making psychohistory real with the individual commenting histories of Redditors.

[SPONSOR BREAK]

Ben: Something really important to say here: Soroush and his graduate students stopped short of assembling a bot, because of the potential implications of building one that might detect mental health or mental illness challenges in individual users.

Amory: This is good! We’re learning. Don’t build Skynet. Maybe just write a paper that imagines what might happen if we did build Skynet.

[Terminator movie clip audio: If we upload now, Skynet will be in control of your military…but you’ll be in control of Skynet, right?!]

Ben: What Soroush did instead was chart how to build the bot. Run the model, the input, the output, and also how to tune that output.

Amory: Tune?

Ben: We’ll get there. For now know this. The team at Dartmouth looked at tons of Reddit users’ publicly available data over time.

Soroush: It's thousands.

Ben: Okay. Tens of thousands or just thousands?

Soroush: Tens of thousands.

Amory: But their goal wasn’t to have a bot or computer model tell if a bunch of people were having mental health challenges in the aggregate. Rather, at an individual level. Which again, is hard. Because we’re all, well, individuals. In this computer science area of study, natural language processing, the model has to account for different people communicating differently. For example, sarcasm.

Ben: Sarcasm is super hard. Which is why Soroush’s team was applying natural language models in a really specific way.

Soroush: So the model learns that idiosyncratic use of language by each person.

Amory: This, admittedly, is very similar to what an individual therapist might do over time: learn the complexities of communication in a given patient. But it's also, let's be honest, a massive, massive use of time. Hence that computational speeding up of evolution.

Ben: The first thing the team’s model, or bot, does with these massive data sets on a user’s entire Reddit posting history is remove certain kinds of things, like references to particular events and people.

Amory: Like say, a pandemic.

Soroush: Because we want to make sure we're not capturing emotions directed towards particular events. But, you know, we want to capture the person's internal emotions.
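As a rough illustration of that scrubbing step, here's a sketch that uses spaCy's off-the-shelf named entity recognizer to blank out people, places, organizations, dates, and events before any further analysis. The Dartmouth team's actual preprocessing is surely more involved; the entity labels chosen here are just one plausible set.

```python
import spacy

# Sketch: strip mentions of specific people, places, organizations, dates,
# and events from a post, so a downstream model focuses on how someone
# writes rather than what news they're reacting to.
# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

DROP_LABELS = {"PERSON", "ORG", "GPE", "EVENT", "DATE", "NORP"}

def scrub(post: str) -> str:
    doc = nlp(post)
    out = []
    for token in doc:
        if token.ent_type_ in DROP_LABELS:
            out.append("[REDACTED]")
        else:
            out.append(token.text)
    return " ".join(out)

print(scrub("Ever since March 2020 I just can't focus on anything."))
```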

Amory: Then, the model uses some pretty complex natural language analysis to discern meaning, or the signal, from the posts.

Ben: This is an area where natural language processing in computer science has really leapt forward in the last decade or so. And Soroush’s team is using the latest and greatest programs to help the bot understand what the user is really saying.

Amory: Where previous computer programs could detect keywords and phrases, the new computer programs are way more sophisticated.

Soroush: Words and phrases are, of course, informative, but we can actually look at, for instance, the syntactic structure of a post and, uh, look at long-range dependencies between words, and what that means is that a word you say at the beginning of the sentence can still matter much later. Language is complicated like that.

Ben: Right, but there's a huge difference between saying, “I'm thinking about killing myself.” And, “Wow, this, uh, high quality gif-maker is really killing it. He reminds me of myself.”

Soroush: Exactly.
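That distinction, same keyword, very different meaning, is roughly what modern contextual language models are built to capture. Here's a small sketch using sentence embeddings; the model checkpoint named below is a common public one, not necessarily anything Soroush's team used, and the example sentences echo Ben's.

```python
from sentence_transformers import SentenceTransformer, util

# Sketch: contextual sentence embeddings vs. keyword matching.
# Two of these sentences share the word "killing" but mean very different
# things; a contextual model should separate them by meaning.
model = SentenceTransformer("all-MiniLM-L6-v2")

a = "I'm thinking about killing myself."
b = "Wow, this gif-maker is really killing it. He reminds me of myself."
c = "I've been feeling hopeless and I don't see a way out."

emb_a, emb_b, emb_c = model.encode([a, b, c])

print("a vs b (shared keyword):", float(util.cos_sim(emb_a, emb_b)))
print("a vs c (shared meaning):", float(util.cos_sim(emb_a, emb_c)))
# Typically the second similarity comes out higher, which is the point:
# meaning, not keywords.
```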

Ben: Here’s a big question though.

How do you know if the bot you built works?

Soroush: Yes, that's a really good question. So for these kinds of projects, evaluating, uh, your, your bot is probably actually the most challenging part.

Ben: Before the team looked at measuring success, they did a lot of testing and tuning of the model. They gave the bot test inputs and waited for the model to give outputs, and if the outputs were off, they applied another layer of calculation on the outputs after the model to get more accurate results. Then they looked at two measures of success: whether the bot predicted a user had a mental health issue and that user later joined a mental health-focused subreddit, and whether users self-reported mental health challenges.

Soroush: Surprisingly, a lot of people self-report after a while, saying that, hey, I just got diagnosed. You know, they go to these forums and these subreddits and they say, I got diagnosed with bipolar, for instance. Right.

Ben: So, the two markers for success from your point of view are: a user joins a mental health-related subreddit, or a user self-reports that either they've been diagnosed with a mental health disorder or they're dealing with a mental health challenge.

Soroush: Exactly. And our model would have been successful if we predicted that way before the user actually reports. Again, if we detect it afterwards, it's meaningless, of course. So it's about how far in advance you can detect that.
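In code terms, that evaluation boils down to lead time: did the model flag a user before the first observable marker, like joining a mental health subreddit or self-reporting a diagnosis? Here's a sketch with invented field names and dates, just to show the shape of the check.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Sketch of the "how far in advance" idea. The fields and data below are
# invented for illustration; the study's actual pipeline and metrics are
# more involved.

@dataclass
class UserRecord:
    user_id: str
    flagged_on: Optional[date]   # when the model first flagged the user
    marker_on: Optional[date]    # first mental-health subreddit join or self-report

def lead_time_days(rec: UserRecord) -> Optional[int]:
    """Positive = flagged before the marker (useful); <= 0 = too late."""
    if rec.flagged_on is None or rec.marker_on is None:
        return None
    return (rec.marker_on - rec.flagged_on).days

records = [
    UserRecord("u1", date(2021, 3, 1), date(2021, 6, 15)),
    UserRecord("u2", date(2021, 8, 1), date(2021, 7, 1)),   # flagged after: meaningless
    UserRecord("u3", None, date(2021, 5, 5)),               # missed entirely
]

for rec in records:
    print(rec.user_id, lead_time_days(rec))
```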

Amory: This information is of course anonymized in the team's work. And because Soroush and his team had to get clearance from an ethics board to even do the work, we didn't look at specific users or ask to interview any of them. The team chose Reddit in part because the user post history is publicly available and Reddit provides this data in easy ways for researchers to use without strings attached, a key distinction between Reddit and Meta’s Facebook. But you do have to wonder a bit how people might feel about being part of this study.

Ben: To be clear, Soroush isn’t actually trying to replace therapists or create the latest, greatest ELIZA. He’s trying to address the challenges he spoke about earlier in certain cultures and build a bot that might help counteract what he and some of his students see as unhealthy cultural norms around discussing or acknowledging mental illness. It could be more of an early warning system.

Soroush: I came to the conclusion that having a, a way for people not to have to voluntarily say, hey, I feel depressed, would be a huge help to people coming from those cultures.

Ben: Amory, how would you feel about getting a nudge that you might be depressed by a bot that was reading your entire history of posting on social media?

Amory: Honestly, I’m not as wary of the kind of Big Brother thing that most people are, and maybe that’s a bad thing. But I don’t think it would hurt to have a light shine on my posting behaviors and just to take another look back at them and go oh yeah, I did post some things or say some things because we just don’t have that perspective ourselves, you know?

Ben: So let’s actually go back to ELIZA for a minute. And ELIZA’s creator.

Amory: Hmm. Joseph Weizenbaum.

Dag Spicer: Joseph Weizenbaum and his family emigrated to the United States in the 1930s. They saw what was coming with the Nazi Party and Hitler.

Ben: That is Dag Spicer, who we hung out with for a while. He’s not at Dartmouth in New Hampshire. He’s on the opposite side of the country from Soroush.

Dag: I'm Dag Spicer, senior curator at the Computer History Museum. And we're in Mountain View, California, right now.

Amory: Dag is kind of a special guy with a kind of a special name. He’s been at the Computer History Museum for almost 30 years. And he knows everything about computers. And he also knows a good bit about ELIZA and about ELIZA’s creator, Joseph Weizenbaum, who worked on a few computers which had a significant impact on how we live and interact with machines. Even before ELIZA.

Dag: Weizenbaum and others worked on this computer called ERMA, which was a machine for processing checks. Well, how did it do that? Well, the really cool thing they came up with was this font called MICR, magnetic ink character recognition, that we can all still see on the bottom of our checks. It's those weird little shape numbers that you see at the bottom of your check. Those come from ERMA, circa 1953.

Ben: Dag says that ERMA’s impact wasn’t just on those little funny numbers on the bottom of a check. It also put thousands and thousands of check processors, human check processors, out of work.

Amory: And Dag says this had an impact on Weizenbaum.

Dag: He was a technologist who really cared how his work was being used and how the discipline that he was a part of was being used.

Amory: Weizenbaum, who became a foundational mind in artificial intelligence and human computer communication, was worried about the things we might try to solve, or build, with tech.

Ben: And here’s the funny part. ELIZA, which has been called the very first chatbot, wasn’t actually a serious project. ELIZA was built as a satire. Meant to demonstrate to humans how chatterbots, as they were originally called, might behave poorly.

Amory: Mind. Blown.

Ben: That’s why our therapy session didn’t go so well, Amory!

Amory: We have been played!

Ben: Joseph Weizenbaum died in 2008, a year after the iPhone was released. But Dag says this skepticism of technology was a running theme throughout Weizenbaum’s life.

Dag: It really started, most notably, with Robert Oppenheimer, who, you know, after he created the atomic bomb, lived the rest of his life in regret at what he had done. Right? And he said, you know, technologists have to be on their guard for what he called technologically sweet, quote unquote, problems, because they actually attract you with their challenge. But if you look at them from a more humane perspective, they may actually be quite harmful.

Amory: We asked Dag what Weizenbaum might think about Soroush Vosoughi’s project looking at Reddit post histories to get a sense of whether users were struggling with mental health issues. He didn’t want to speak out of turn on behalf of Weizenbaum. So we asked him just what he thought.

Dag: My first gut reaction is it's, it's a bit scary because they're essentially mood watching. And, you know, there are AIs now that read people's faces and do the same thing. They're like, oh, you're in a bad mood today. You know, they just look at your face. And it’s just such a slippery slope, you know, from there to intervention by, by the state or by somebody. So, you know, it's always the tradeoff, right? Well, if it saves one life, is it worth—? But, you know, I think I think in this case, I don't think it's a good idea.

Ben: Soroush built the model scraping Reddit to find signs of mental illness in individual users’ posts. So he’s not so skeptical. But he does have a big caveat.

Soroush: It shouldn't be the platforms or government or any other external entity that's running these things and, you know, telling people to go see a therapist or whatnot. It should be a choice by people to run these things privately, and the communications should be private between that tool and the person.

Ben: Whether you support Soroush’s team in imagining a world where an opt-in program could help people acknowledge their own mental health needs and challenges, or you’re more cynical about how a program like that could be used, like Dag Spicer or even, maybe, Joseph Weizenbaum, this stuff is already happening.

Amory: Bots are already dutifully harvesting massive, publicly available datasets, interacting with users, and much more. Sometimes we don’t even realize that our experience of the internet isn’t just people talking to people. It’s increasingly mediated by little pieces of software, trained on the latest and greatest programs. To do all sorts of things. Today: practicing how to predict your mental health issue. Tomorrow: running for political office?

Ben: Next week…

[Preview audio: And of course, being digital, you can keep a record of, of everything that you say and do. So it creates a level of accountability that the current politicians just don't, don't have.]

Ben: Good bot.

Amory: Bad bot.

Ben: Endless Thread is a production of WBUR in Boston.

Amory: This episode was written and produced by my co-host Ben Brock Johnson with help from Dean Russell. And co-hosted by yours truly. Mix and sound design by Paul Vaitkus.

Ben: Our web producer is Megan Cattel. The rest of our team is Nora Saks, Quincy Walters, Grace Tatter, Matt Reed, and Emily Jankowski.

Amory: Endless Thread is a show about the blurred lines between digital communities and a useless box. If you’ve got an untold history, an unsolved mystery, or a wild story from the internet that you want us to tell, hit us up. Email Endless Thread at WBUR dot org.

Ben Brock Johnson Executive Producer, Podcasts
Ben Brock Johnson is the executive producer of podcasts at WBUR and co-host of the podcast Endless Thread.
