Why AI models hallucinate

Artificial intelligence chatbots will confidently give you an answer for just about anything you ask them. But those answers aren’t always right.

AI companies call these confident, incorrect responses “hallucinations.” Researchers at OpenAI have been digging into why large language models hallucinate, and say part of the problem is that rankings of AI models reward guesses while penalizing uncertainty.
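The incentive problem the researchers describe can be sketched with a toy benchmark. In the simulation below, all numbers (the count of uncertain questions, the chance a blind guess is right) are illustrative assumptions, not figures from the segment: under accuracy-only grading, a model that always guesses outscores one that honestly answers "I don't know," because an abstention earns the same zero as a wrong answer.

```python
import random

random.seed(0)

# Toy benchmark: accuracy-only grading gives 1 point for a correct
# answer and 0 otherwise, so abstaining scores no better than being
# wrong. The numbers below are illustrative assumptions.

N = 10_000      # questions the model is genuinely unsure about
P_LUCKY = 0.25  # chance a blind guess happens to be right

# Model A always guesses; Model B honestly abstains when unsure.
guesser_score = sum(1 for _ in range(N) if random.random() < P_LUCKY)
abstainer_score = 0  # "I don't know" earns no credit here

print(guesser_score > abstainer_score)  # guessing wins the leaderboard
```

A leaderboard built this way rewards confident guessing, which is one mechanism by which training and evaluation can encourage hallucinations.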

Here & Now's Scott Tong speaks with Ina Fried, chief technology correspondent for Axios.

This segment aired on September 8, 2025.
