
AI makes us overestimate our knowledge and performance

Saron Henok, 10, uses Ed, an AI-assisted learning platform, during the official launch event at Edward R. Roybal Learning Center in Los Angeles on March 20, 2024. (Christina House / Los Angeles Times via Getty Images)

We’ve all encountered someone so utterly certain of their skill or knowledge of a subject that they can’t fathom the possibility they might be wrong. The comedian who repeats the same cringeworthy punchline over and over because he’s sure he’s killing it. The student who protests an essay grade by asserting the undeniable brilliance of their paper, despite its glaring errors and failure to meet requirements. You know the type.

Such people lack metacognition, or the ability to accurately assess their own knowledge or performance on a specific task. According to new research, using artificial intelligence (AI) diminishes people’s metacognitive ability, which is even more frightening than AI diminishing cognitive abilities because it’s harder for users to recognize. This research sheds light on the pervasive effects of using AI, as well as its role in current events, polarization, misinformation and confirmation bias.

If people struggle to recognize what they don’t know, they’re unlikely to learn. In my classes at Boston University, students practice metacognition by writing self-reflections, assessing their performance on each essay (which should roughly align with their grade), as well as setting goals for upcoming assignments. Their final course grade matters far less than what they think they learned and how they feel they performed. That’s what sticks and what transfers to other classes, situations and — ultimately — their careers.

Which is also why the recent paper is so alarming. An international group of researchers conducted two studies evaluating the effects of using AI for task completion on metacognition. In the first study, participants solved 20 logical reasoning questions from the Law School Admission Test (LSAT). Those who used AI scored an average of three points higher than subjects who didn’t. Seems like a win, right?

The problem is that the subjects who used AI tended to overestimate how well they did by an average of four points. In other words, AI users misjudge their own knowledge and performance — a trend observable just about everywhere in real life, from the empty boasts of incompetent politicians and citizen crime sleuths to WebMD-assisted armchair physicians, to students who use ChatGPT to write their essays.

What happens when massive numbers of people think they know more about a subject or are better at something than they actually are? They provide and circulate bad advice and misinformation — the same kind of “slop” ChatGPT dispenses.

In a recent “South Park” episode, Randy and Towelie use ChatGPT to help them generate a business idea. The AI tells them their idea — a series of vague descriptors such as “global,” “local,” and “entertainment industry” — is “innovative” and “fantastic.” “South Park” satirizes not only ChatGPT’s sycophantic responses, but also Randy and Towelie’s immediate, delusional belief that they’ve learned something:

Towelie: “I feel smarter already!”

Randy: “Do you feel smarter? I feel smarter. She’s making us smarter.”

Towelie: “AI is incredible!”

Perhaps AI can draft a business plan, but if it facilitates delusion about our own capabilities, how helpful is it?

The recent study also showed a correlation between higher AI literacy and a lower ability to accurately assess performance. In other words, people who possess greater technical knowledge about AI and how it works tend to be more confident about their own performance, but less able to gauge it accurately. This finding raises questions about the divide between basic AI literacy and the specific understanding of how AI does (or doesn’t) function in a particular situation. Does technical understanding distort users’ perceptions and metacognitive abilities in specific situations because power users are less willing than the average person to admit they don’t know something or might be wrong? How can we make those users more aware of their own metacognitive shortcomings when they use AI?

The obvious answer is unfortunately unlikely: Companies such as OpenAI could address these findings through the programming of their models. Just as OpenAI can (and should) fix the programming that dictates ChatGPT’s dangerous and irresponsible responses to users expressing mental health concerns or crises, AI companies should examine ways to make the interactions between users and AI less metacognitively damaging.

ChatGPT handles more than 2 billion queries every day and 800 million people use it each week. If the effects noted in this study apply on that scale, it becomes clear how adversely impacted our daily discourse will be — especially online.

President Trump hands a pen to Senior White House Policy Advisor on AI Sriram Krishnan after signing an executive order that curbs states' ability to regulate AI on December 11, 2025. (Alex Wong/Getty Images)

This lack of metacognitive ability overlaps with something called the Dunning-Kruger Effect, the tendency of someone who lacks skill or knowledge about a certain task or subject matter to overestimate their skill or knowledge. The effect also encompasses the tendency of those who are knowledgeable or skilled to underestimate their proficiency. The Dunning-Kruger Effect turns knowledge and understanding upside-down, muffling experts and amplifying those who are confidently incompetent. Sound familiar?

But in this particular study, computational modeling revealed a twist: Almost everyone who used AI overestimated their performance, regardless of skill level. AI flattens the usual pattern, but not in a good, egalitarian way. Rather, everyone loses some amount of healthy perspective on their own performance.

Exactly how AI warps people’s ability to self-assess remains unclear. There are, after all, a lot of complicated ways metacognition intersects with other factors, including knowledge level, confidence, sex, age, education, experience and biases. What’s clear, however, is that in addition to outsourcing thinking, researching, and writing to AI, humans are also outsourcing — and losing — their ability to assess accurately what they know and how they know it.

According to this study, most people trust AI, often without asking any follow-up questions. More than 12% of the participants saw AI as a “partner,” rather than as a tool. This perspective overvalues AI and undervalues the human user, eroding our abilities — both cognitive and metacognitive.

Teachers, particularly those in the humanities, have been grappling with these concerns with increasing urgency since ChatGPT’s debut. Research and lived experience provide countless reasons to be concerned about the future of learning as well as knowledge production and dissemination. Education exists because no one knows everything.

If AI use, especially among the technologically literate, convinces people otherwise and makes it more difficult for people to ask questions and think critically, then how is it helpful? Or more importantly, helpful to whom and why? If people aren’t willing to ask — and answer — these questions, then perhaps we deserve the slop AI serves us.

Joelle Renstrom Cognoscenti contributor

Joelle Renstrom is a science writer whose work has appeared in Slate, The Guardian, Aeon, Undark and other publications. She also wrote the essay collection "Closing the Book: Travels in Life, Loss, and Literature." She teaches at Boston University.
