
AI red teams see the internet's worst so we don't have to

AI red teams find and root out the worst of AI by feeding chatbots prompts to see what they will come up with, then alerting companies so they can add guardrails. (Business Wire)

We’ve heard about the dark side of artificial intelligence: chatbots that suggest people’s spouses don’t love them, the proliferation of conspiracy theories and even suggestions of violence and self-harm.

That’s where so-called red teams come in. Hired by companies, they find and root out the worst of AI by feeding chatbots prompts to see what they will come up with, then alerting the companies so they can add guardrails. But the work can be grueling and traumatic, prompting some red team members to advocate for more support.

Evan Selinger is a philosophy professor at the Rochester Institute of Technology, and Brenda Leong is a partner at Luminos.Law specializing in AI governance. Both are red team members who teamed up to write about their experiences in a recent Boston Globe article, “Getting AI ready for the real world takes a terrible human toll.”

They join host Robin Young to discuss the issue.

This segment aired on February 5, 2024.
