Support WBUR
How to make AI work for us

AI has become unavoidable. It prompts us when we write emails, handles our customer service issues, answers our internet queries. How AI is changing our lives and what we can do to ensure it's actually helping, not harming us.
Guests
Gary Marcus, leading AI expert. Author of “Taming Silicon Valley: How we can Ensure that AI Works for Us.” He runs the Substack “Marcus on AI.” He also co-founded a machine-learning startup, Geometric Intelligence, which was acquired by Uber in 2016.
The version of our broadcast available at the top of this page and via podcast apps is a condensed version of the full show. You can listen to the full, unedited broadcast here:
Transcript
Part I
DEBORAH BECKER: Artificial intelligence is increasingly pervasive and many of us may not even consciously realize that we're using it. It prompts us in internet searches, when we write emails or deal with customer service issues. It chooses our music, even makes some music.
We asked you, our On Point listeners, what you make of the rapid rise of AI. Here's Todd Anderson of Salt Lake City, Utah.
TODD ANDERSON: For me, AI has helped me conduct research and analysis for complex decisions relating to my career, health, personal finance, my car, and my pet.
BECKER: Todd says AI has been like a personal assistant, saving him time and energy, but then there's On Point listener Shelly Smith of Ypsilanti, Michigan, who looks at it differently.
SHELLY SMITH: I think AI is being built to not understand consent. Do you want to have this thing track you? Do you want to talk to someone? No, I want to talk to a human. It's pretty much getting in the way and being shoved down my throat.
BECKER: Two polarizing thoughts there. One listener enjoys using AI in his daily life.
The other listener frankly hates it, but Todd and Shelly do agree on something.
TODD: I do not have confidence that corporations and the billionaires that run them will put enough guardrails on AI software to protect humans.
BECKER: And as Shelly puts it.
SHELLY: There's gotta be policies that turn this around.
BECKER: So whether we like AI or not, it seems it is here to stay.
How can we make sure that AI better serves us all? Joining me is AI expert Gary Marcus. He's author of the book Taming Silicon Valley: How we can ensure that AI works for us. He also runs the Substack Marcus on AI. And he co-founded two AI companies, including the machine learning startup Geometric Intelligence, which was acquired by Uber in 2016.
He joins us from Vancouver, Canada. Welcome, Gary, to On Point.
GARY MARCUS: Thanks for having me.
BECKER: So your book came out in 2024, and you listed then 12 threats posed by AI, disinformation, misinformation, market manipulation, deep fakes, crime, cybersecurity, and bioweapons. That's just a few of them. And you also said we were heading toward this sort of AI oligarchy.
How would you describe some of those threats today?
MARCUS: They've materialized. The book was, in a certain way, speculative. I was trying to read the writing on the wall and warn people about it. And I think all of those things that you just mentioned are here now, including the kind of AI oligarchy. And you mentioned policy. A quarter of all lobbyists in D.C. now are working on AI and that was last year, and even more money was just announced in New York Times article I think yesterday. So we are very much moving to the AI oligarchy. We have massive problems with cybercrime, with deepfakes, with misinformation.
Everything that I tried to warn the world about has come.
BECKER: But give me a specific, because you also warned about how AI was going to affect the 2024 elections and we really didn't see substantive use of AI harming the election according to most studies. So give me a specific example of how you feel that your predictions came to pass.
MARCUS: Misinformation, we can see it a lot in the Iran war. The U.S. election I think was not hurt too much by misinformation. But some other elections were hurt at least a little bit, and these things are just getting cheaper and faster and used more and more. So we dodged a bullet in 2024.
We won't dodge that bullet in 2026, and we're already seeing lots of deep fake commercials and things like that. We've already seen large scale misinformation operations. I think the 2024 election itself in the U.S. was not significantly harmed, but that threat is getting worse.
BECKER: But I wonder, what do you say to folks who, many people feel they don't have to worry about artificial intelligence, right? They say, I don't use it. I'm not in an industry where I'm writing a lot, drafting papers or analyzing case law as a lawyer or something like that. So why should I care about artificial intelligence?
What do you tell them, Gary?
MARCUS: First, I would question your premise. I think most people are actually worried about AI risk in some form or another. So there are, as you said, and as I said, multiple risks. So not everybody's job personally is immediately threatened. So for example, if you're a plumber, there's no robot that's gonna do your job anytime soon, and you shouldn't worry about that.
If you're a paralegal, you should be worried about the threat of AI taking your job. But there are other worries. Like for example, we're now using AI in war, and that may contribute to mistargeting. It may have been involved in the school. We don't really know. But I could talk about that if you'd like.
But we have, for example, the possibility that we will have wars that are accidentally accelerated by AI making bad decisions or decisions too fast for humans to verify. I think the average person, we should also still be worried about misinformation and what it means for democracy. Many people have teenage daughters who should be worried about non-consensual deep fake porn.
So there's a whole array of risks. In some professions, I think it's true that there's no immediate threat for AI taking those jobs. But there's threats all around society. Another example is you maybe have a 17-year-old in high school. They're probably not learning as much as they used to, or 21-year-old in college because people use ChatGPT.
And there's a phenomenon that's now being called cognitive surrender where people don't really learn anything. They do well in the immediate, doing practice items or something like that, but they don't really absorb the content when you're using.
BECKER: Because there's no critical thinking is what you're saying.
They're just relying on the machine.
MARCUS: There's basically no critical thinking in high school and college anymore. And that's going to be bad for all of society regardless of what you do and whether you have kids and so forth.
BECKER: Now, lawmakers have been aware of this. You actually testified in 2023 before a Senate Committee looking at oversight of artificial intelligence.
And at that time, Missouri Republican Senator Josh Hawley called AI one of the most significant technological innovations in history. And he said then, and again, this was about three years ago, that the country was at a crucial turning point. We have a clip from that hearing. Let's listen.
Is it going to be like the printing press, that diffused knowledge and power and learning widely across the landscape that empowered ordinary, everyday individuals, that led to greater flourishing, that led above all to greater liberty?
Or is it going to be more like the atom bomb? Huge technological breakthrough, but the consequences, severe, terrible, continue to haunt us to this day. I don't know the answer to that question. I don't think any of us in the room know the answer to that question. Because I think the answer has not yet been written.
And to a certain extent, it's up to us here and to us as the American people to write the answer. What kind of technology will this be?
BECKER: Gary Marcus, have we gotten any closer to answering those types of questions since that 2023 hearing where Senator Hawley made those comments?
MARCUS: I think a couple things have happened.
It's become clearer, first of all, that there are both printing press and atomic bomb elements. Obviously, I'm speaking metaphorically. But it's clear that there are some real advantages and some real costs, and they're significant on both sides. Kind of the answer to A or B was both.
The other thing that's happened is that in that room, Hawley was actually very typical. Everybody in that room wanted to have strong regulation on AI. We didn't know exactly what that would consist of. And that was what the discussion was about. But now the U.S. has basically abandoned having any strong regulation around AI, and the lobbyists are pouring in literally hundreds of millions of dollars to keep that from happening.
And Hawley is now one of the very few Republicans, though not the only one. For example, Ron DeSantis is on the same side, who are resisting having AI completely take over with no regulation.
BECKER: And of course, the argument for that is that regulations could stifle innovation. And there could be a lot of competing regulations that could really get in the way of U.S. competitiveness in artificial intelligence.
What do you say to that?
MARCUS: I think that counterargument is wildly overblown and historically not well grounded. So lots of technologies have actually advanced because there was regulation, or in part because there was regulation, and we have better safety because of regulation. That's why we have seat belts, and that's why we have airbags and so forth, is because of regulation.
Regulation actually often sponsors or fosters, I should say, innovation. And many of the innovations that we're talking about are very minor costs of compliance relative to the massive amounts of money that are being put in. So you have people whining about compliance costs that might be $10, $20 million when their companies are valued at trillions of dollars.
And so the math just doesn't make sense. We have compliance in every industry, like airlines and baking and just everywhere else, but these people have been so successful, in part because they control social media or have a strong influence on social media. Have been very successful at painting this straw man, where it's if you have any regulation, you can't possibly innovate that.
That's just nonsense.
BECKER: But is a part of this perhaps that this is something very difficult to understand? I would bet that many people in Congress probably don't fully understand artificial intelligence and it's moving so fast. Nobody wants to stop the train, if you will, because there's so much money involved.
And so those factors make it difficult to try to figure out how to regulate this particular industry. We talked about all of that, when I was there at the Senate in 2023, the speed and how fast things are moving, the limits of what the senators themselves can understand.
Senator Durbin raised all of that very directly. What I suggest is that we have an AI agency that's at least a little bit bipartisan. We've managed to do that in other domains. Here what's happened is Trump has had an AI and crypto czar who actually just stepped down. David Sacks, who I would say is pretty far from bipartisan, and it's not an agency that's like that.
I think we need a bipartisan agency as representation from both sides that tries to figure out what would be appropriate here. Instead, what we're having is no regulation at all, and that leaves our citizens vulnerable to cybercrime and misinformation and non-consensual deep fake porn. There are problems with education.
There's bias in hiring, et cetera, et cetera. All of this stuff is happening. The companies have essentially no liability or they're largely ducking liability. One of the things we talked about in the Senate was Section 230 which gave the social media platforms essentially immunity from anything that they might do.
The consensus in, the bipartisan consensus in the room, was that was a terrible mistake, and now we're doing exactly the same thing in AI. Were basically giving these companies complete immunity from all of the obvious negative effects that they're causing.
Part II
BECKER: Gary, you've been talking a lot about some of the real dangers of AI and what is at risk here. And I want to go a little bit deeper into some of the things you mentioned in the book. One of them was inaccuracy and hallucinations, which you've actually been warning about for quite some time.
But you tell a story in the book about how artificial intelligence actually made up things about you. I'm wondering if you could tell our listeners that story.
MARCUS: That one was about Henrietta. But I'll tell you an even better story actually. People often send me biographies that are written by ChatGPT in similar systems, and one day it, they sent me a system, because I don't go using it myself, to do this, but they sent me one that claimed that I had a pet chicken named Henrietta.
And this became a running joke. Because I don't actually have a pet chicken, let alone one named Henrietta. So here's where the story gets better. One day, Harry Shearer, who's a pretty well known actor, he was in Spinal Tap. He does the voices for a bunch of characters in The Simpsons. And he's a friend of mine.
One day someone sent him a biography that made mistakes about him. He sends it to me with the subject header, No Henrietta, and then, it claims that he's a British voiceover actor, comedian, et cetera. Only thing is he's born in Los Angeles. In fact, he sent it to me on the day we were gonna meet in Los Angeles by coincidence.
Now what's amazing about this is that because he's a pretty well-known actor, it's very easy to find the facts about whether he's American or British. You look at his Wikipedia entry, it says he was born in Los Angeles, right? And he's in IMDB and Rotten Tomatoes, all of those kinds of places.
So the information is readily available, and yet the system made a mistake. I wrote a whole Substack about this. You can look up my name and Harry Shearer. And you find it. And it's really fascinating that the AI can't just take this publicly available information. So that raises the question, why does it make a mistake?
What these systems do is they, to use an old term that's roughly right, they cluster different bits of information together. So they take everything they've seen, they break it into little bits, and they try to find what's similar bits there might be, and then they predict things.
So there's a cluster, I'm inferring this, but it's likely the case. There's a cluster of, let's say, actor-comedians including, Ricky Gervais and John Cleese and so forth who are actually British. And it's lumped him together with that, it can't keep track of individuals. You said, I've been warning about this for a long time.
I actually originally used an example in my 2001 book of my Aunt Esther, my late Aunt Esther, who lived in Concord, Mass. And I said, suppose she wins the lottery. These systems might think then other people who also live in the same area or a female or have similar jobs also won the lottery. I said, these systems cannot help but over generalize.
That was in 2001. 25 years later, we still haven't solved this problem. I was looking at a slightly older system. We have newer systems, but the newer systems have the same problem. They've inherited the same thing, because of how they're built. It's inherent in how they work that they will hallucinate.
BECKER: But we're hearing constantly about improvements in artificial intelligence and all of the things that can happen with artificial intelligence and how now basically the machine can write the code to improve itself. And is almost intuitively thinking as a person would, to be able to improve the product and get rid of some of these glitches that you're talking about.
Maybe not completely, what do you say to that when you've got such a rapidly developing technology?
MARCUS: The first thing I would say is that the hype is endless and people should be very careful about what they believe. The second is if you actually understand cognitive science, which is what I studied when I was in Cambridge, Mass. at MIT when I did my PhD.
If you study cognitive science, you realize that different things are being lumped together and that intelligence is multidimensional. There are different aspects of it. So these kinds of systems can improve certain things about themselves and not others. They can't actually improve their fundamental architecture.
And the fact is hallucinations are still here. They haven't even declined precipitously. They maybe declined a little bit, but it's hard to do the right measures and it depends on the measure and so forth. It is very clear that they still do these. I'll give you an astonishing example that Stanford just reported just a couple days ago, which is you can feed in a radiology file, including the images and the text, and it will give you an analysis. You can do the same thing without the image, and it may give you the same analysis. And it will say that it's looking at the image. It doesn't even have it, this is 2026, and you have systems that will claim to be reading images that can't actually see and make comments about them.
This hallucination problem has in no way disappeared. Despite all the hype about they can self-improve and so forth. The reality is they can self, sort of self-improve in small ways, but not in fundamental architectural ways.
BECKER: And of course, medicine is where AI is really being used. Probably the industry that most of us might actually see it being used firsthand.
Right now, in health care, there are benefits and we hear from a lot of folks who are founders of AI companies and workers in AI companies, tell us how this is going to be transformational for health care. It's going to help us find cures. It's going to make doctors' lives easier. It's going to consolidate all of our medical records so we can get a more accurate diagnosis.
There's a lot of hype around that, but is health care in particular a field where you think it could be very useful?
MARCUS: Let's break down different parts of that. And also, let's take a step back. So there's the AI that we are mostly talking about, are things like chat bots, generative AI.
That's a popular technique. There are other more special purpose systems that actually know something about biology aren't just scraping the whole thing, scraping the entire internet. Those may eventually do better. I wrote an essay recently called F Cancer and your listeners can guess what the F stands for.
And this was about two weeks ago, and in it I referred to another paper and I have the link there that had an astonishing stat. I may not get it perfectly, but I'll come pretty close, which is in something like 13 years of efforts to develop a cure with AI, there've been many drug candidates tried, but none of them have passed phase three clinical trials such that you could actually use them.
So there's been enormous amount of discussion, but at least on the cure side, there has not been that much progress coming directly about, directly from AI. Now, that doesn't mean that AI can't help health care in other ways, so for example, doctors need to write down their notes. That takes a bunch of time.
And now AI can help with that quite a bit. So certainly on the margins, I think AI can already --
BECKER: But still hallucinate, right? Still runs the risk of hallucinating.
MARCUS: And there is a risk there. And so even there, there are problematics. I was trying to throw them a bone, but you're right, even where I'm throwing them a bone the hallucination problem is still there.
Now there are some techniques that people can use in AI that are not as sexy, they don't hallucinate. So your GPS system is probably not going to hallucinate. It uses a different technology that is, I think, much more stable and reliable than large language models, which are the core technology here.
Go ahead.
BECKER: No, go ahead. Go ahead.
MARCUS: There will be impact in medicine. The first really big impact will be when we solve the radiology reading problems, and we're making some progress there. So I expect some progress in the next five years. I do not expect, like Dario Amodei does, that we will like double our lifespan in the next decade. That's not actually happening, because you still have to do the clinical work no matter how good the AI is. And so there's a lot of exaggerated promise around, but there will be some improvement in health care over the next, say, five years.
BECKER: We should say Dario Amodei is co-founder of Anthropic, the AI company.
MARCUS: And CEO.
BECKER: Yeah. Yeah. And CEO. Thank you. But you, Gary, have also worked in artificial intelligence, right? Co-founding two AI companies. I wonder how did you make sure that those companies were responsible and looked at some of these risks that you're now warning us about?
MARCUS: The first company was over a decade ago. And the technology was not so widely implemented or as advanced in certain ways such that I don't think these problems really arose back then. I was already concerned about them, but they weren't practical things that you could do something about, part of the problem is in 2020 or so, people started taking technologies that we all knew were flawed and started rolling them out into production. Especially once ChatGPT came out in 2022, such that they affected a lot of people. Until then, they were basically laboratory studies or they were used in very constrained ways.
The problem started to come in when people started rolling out these lab technologies that we were still trying to study on a much larger scale, both in terms of how many people they were affecting, and also the range of problems that they were affecting. When we take a narrow technology and we focus that on a single thing, like people are trying to build AI to understand sepsis and to try to predict sepsis.
Even that is extremely difficult. People have been working on it for many years, but if you focus it on that one problem, that's different than saying, I'm going to have a chat bot that can do anything. When you have a chat bot that can do anything, you wind up with these weird, unexpected problems, like, causes delusions and so forth.
If you have something like a GPS system, it's not gonna cause people to have delusions by having conversations and telling them how great they are and being sycophantic. One kind of answer is it wasn't really a problem. Because people were not so ambitious as to take on and try to boil the ocean.
When you boil the ocean it's problematic. The other thing is we need now, for sure, to be building new kinds of technology that focus on AI safety. There's a problem in the industry called the alignment problem. How do you get a machine to obey instructions basically, and ideally make it compatible with humans.
And so far, we've made almost no progress on that. We've made progress on some aspects of AI. We as a field. But we as a field have made almost no progress on alignment. And that may actually be what I work on next. I don't think it can be done with the large language models that everybody's so obsessed with now.
I think we need other technologies altogether to work on that.
BECKER: And so basically, you do think that AI can be useful if it's crafted in such a way to focus on something very specific and that more work needs to be done to get the most youthfulness out of it with the least dangerous consequences.
Is that?
MARCUS: And can I restate that in a different way? Which is, I'm still very pro AI. I found a piece of paper the other day from when I was 14 years old, saying I wanted to be an AI researcher influenced by cognitive science. Like I've been interested in this forever. I'm in my fifties now, four decades.
I really want AI to succeed. I still think it's possible, but I think we have two problems now. One's technical and one's kind of political economics. So the technical one is the large language models that people are working on are just not the droids we should be looking for. They are just not reliable and calculators are reliable.
We should aspire to an AI that is just as reliable. So that's a technical problem. I don't think that the tech we're using right now is the right answer. The sociopolitical problem is we've given so much power and so much money to people who are pushing this unreliable technology without a lot of ethical fiber, shall we say.
And those people have so much money that they're influencing the government. This is really what taming Silicon Valley was about, and it's only gotten much, much worse since I wrote that book two years ago. In order to get to an AI that we can trust, we need better technical foundations than we have right now.
And we also need a society that says, okay, we're going to hold these companies responsible when they prematurely push out technology that causes lots of problems in society. Right now, we're not. And so they're just running wild. And so we're being led to AI being used for mass surveillance, AI that causes problems in colleges and so forth.
With just no constraint on that and that's not a way to get to a positive outcome for AI.
BECKER: I want to play a piece of tape from an AI leader who you have disagreed with publicly several times. But this is OpenAI CEO Sam Altman, who went before the Senate in May of last year, and this is what he told a senate committee about AI regulations.
Let's listen.
SAM ALTMAN: We need to make sure that companies like OpenAI and others have legal clarity on how we're going to operate. Of course, there will be rules, of course, there need to be some guardrails. This is a very impactful technology, but we need to be able to be competitive globally. We need to be able to train, we need to be able to understand how we're gonna offer services and where the rules of the road are gonna be.
So clarity there and I think an approach like the internet, which did lead to flourishing of this country in a very big way. We need that again.
BECKER: Gary Marcus, I'm wondering what you think of that. That was OpenAI CEO Sam Altman before the Senate in May of 2025. Like the internet, regulation, yes. But like the internet, do you agree? Or what do you make of that?
MARCUS: First of all, it's worth noting that his first appearance in the Senate was next to me in 2023, and that he sang a somewhat different tune then he was much more favorable towards AI regulation. He actually pointed to my proposals about an international AI agency and so forth, and he's walking them back. Like the internet is sneaky words, for saying, we love Section 230 in which there was almost no regulation.
The details matter. A lot of what that book was about was making specific proposals. I'll give you just one, although there were, I think, 11 there. Which is we need to have an external agency not inside the companies, that has some say over whether technologies are dangerous, just the way that we do for drugs.
So for drugs, you have to show to the FDA. Things may have changed recently, but you have to show to FDA that not only does your thing have a positive effect, but it doesn't have massive negative effects. And so there's a cost benefit trade off that is decided for every important drug.
We need to do that for any software that is going to be released to hundreds of millions of people. Now, if it's a GPS system, the benefits are strong and the costs seem very minor. But if it's something like large language models, we already know from OpenAI's own work that it's, for example, increasingly leaving us vulnerable to bioweapons risks where untutored people can use these tools in order to help them make bioweapons. OpenAI has acknowledged this, and the problem is that right now only one person makes the decision about whether OpenAI releases something that they themselves think is dangerous, and that person is Sam Altman.
There is no external procedure, and as we all know, he's losing tons and tons of money, and so he's deeply incentivized to get things out. Even if they might be risky, it should not be just up to him, the same way it shouldn't be just up to a drug company. There should be some external independent evaluation.
Part III
Gary, I know we were talking, we've been talking quite a bit about this hearing in May of 2023 on the oversight of artificial intelligence. And you were at that hearing. We have a bit of tape from this hearing, because it's something that you're still recommending now, that an external agency be created to try to oversee some of the risks of AI and see how to regulate it best.
So this clip we have is Republican senator from South Carolina Senator Lindsey Graham with you, and also OpenAI, CEO, Sam Altman, and IBM Vice President Christina Montgomery. Here's a bit.
GRAHAM: Do you agree with me that the simplest way and the most effective way is have an agency that is more nimble and smarter than Congress, which should be easy to create overlooking what you do?
ALTMAN: Yes. We'd be enthusiastic about that.
GRAHAM: You agree with that, Mr. Marcus?
MARCUS: Absolutely.
GRAHAM: You agree with that, Ms. Montgomery?
MONTGOMERY: I would have some nuances. I think we need to build on what we have in place already today.
GRAHAM: We don't have an agency as regulated --
MONTGOMERY: Regulators.
GRAHAM: Wait a minute. Nope. Nope. No.
MONTGOMERY: We don't have an agency that regulates the technology.
GRAHAM: So should we have one?
MONTGOMERY: But a lot of the issues I don't think so.
GRAHAM: I just don't understand how you could say that you don't need an agency to deal with the most transformative technology maybe ever.
BECKER: Gary Marcus, do you recall that conversation that you were a part of where you and Sam Altman did agree that there should be some sort of regulation, but Christina Montgomery had some reservations about that.
How do you make sense of that today?
MARCUS: Of course, I remember the conversation. I still absolutely think we need such an agency. I think Lindsey Graham had a nice observation there. Of course, we should need such a thing when it's the most disruptive technology, maybe of all time, or at least, in the top 10.
Of course, we should have an agency for it. I don't think Montgomery really gave a good argument against it and Altman at that time was completely in favor. Then again, maybe he read the room and thought that's what people wanted him to say, I don't know what he really believes at this point, because when he went back later, he was clearly not in favor of it.
And it turns out, in fact, at the same time, more or less, his company was lobbying the EU to water down the AI Act.
BECKER: So let's explain that a little bit, because let's explain what's happened in other countries and that AI act in Europe. How do you think that so far is going to work?
And is it something that the U.S. should model?
MARCUS: I think it's been watered down a little bit. A lot of the implementational details are not yet entirely clear to me. I think the sentiment of it is right, that you want to watch out for high-risk applications and have some regulation there.
I think if the U.S. just borrowed that AI, we would be in much better shape than we are. I'm not saying we should, in fact, use exactly that, but I would say that we have abdicated our role in trying to lead to an international consensus around AI regulation by having none. And so people are going to look to the EU to see how their model is going.
Obviously, it's going to take some years to see whether they got it right. It's going to take some iteration to improve. But we have seeded that role pretty much entirely to the EU.
BECKER: And we should say here in this country, President Trump signed an executive order, right? Just in December. So just a couple of months ago to create a national policy framework for artificial intelligence.
And we have a clip of President Trump at the signing. Let's listen.
DONALD TRUMP: There's only going to be one winner here, and that's probably going to be the U.S. or China. And right now, we're winning by a lot. China has a central source of approval. I don't think they have any approval. It's going, but people want to be in the United States and they want to do it here, and we have the big investment coming.
But if they had to get 50 different approvals from 50 different states, you could forget it, because it's not possible to do, especially if you have some hostile, all you need is one hostile actor and you wouldn't be able to do it. So it doesn't make sense. I didn't have to be briefed on this, by the way.
This is real easy business. This is simple.
BECKER: And so Gary Marcus, how do you react to President Trump on this executive order regarding national regulations for artificial intelligence and saying, you know, those will hamper competitiveness on the part of the United States.
MARCUS: So there's, I could spend the whole show on that one clip, but I'll talk about just a couple things there.
One is the opening premise that there's going to be only one winner. I think that's almost certainly false. I think that both the U.S. and China are going to be winners in the sense that both will have companies that make large language models. Google is not going to stop making large language models because China makes progress and Alibaba and Tencent are not gonna stop making large language models because Google makes progress.
It's going to be more like Coke and Pepsi. It's not going to be that Coke completely displaces Pepsi. That's just not happening here. So that fundamental axiom's just wrong. The second point is, I agree we don't really want 50 states doing things that the reason we have 50 states doing things, or at least a dozen or whatever is because the federal government is falling short.
Absolutely. If we could come up with good federal standards that both parties agreed on, then I'm fine with having that instead of the states. But the reality is federal government is failing and that's why the states are trying to step in and it is a violation of states' rights to say they can't do anything and they should leave their citizens in harm's way.
BECKER: What about, what do you suggest people do, right? We're the ones who are sometimes unwittingly, right? Using artificial intelligence. What can people do if in fact this is a potentially dangerous tool that might be unregulated? Where do you see this going? And is there something that regular people can do?
MARCUS: What I'm increasingly starting to see regular people do is boycott. So a lot of people are boycotting OpenAI now. I think the number's up to 5 million or maybe more. That's in part around the controversy with mass surveillance and so forth. Ultimately, that's really the only power we have is to say we won't use these things until the companies clean up their act.
And that was how I ended the book. And that is becoming an increasingly popular social movement. And its time might have come. We could have had a different social media world if people had said, Hey, hold on, clean up your act and then we'll start using that stuff. But people did not unite. They may around AI, and if they don't, these systems are becoming increasingly embedded in our lives.
Increasingly hard to control and we may wind up with an outcome we don't like. It's very tempting for people to use this stuff, but there is an argument that if we simply seed our power to these large companies, we're not going to like where we wind up, not going to like it with where we wind up with employment.
There may be catastrophes, bad actors may knock down our power grids and so forth. But if we just let it all ride. That's maybe what happens.
BECKER: But I think it is, as you said, already so embedded that it's in some cases almost impossible to opt out of the use of artificial intelligence in our lives, because then you might lose the tool entirely that you were hoping to use unless you agree to allow the AI in on it.
So what, how do people get around that?
MARCUS: Consumers don't have to use it. People might have to use it in the workplace. Consumers don't have to, they do have a choice. And people tend to make decisions in the short term, is it going to help me this minute? And the answer might be yes.
But the question is, what is it going to do for society in the long run? Is that really where we want to be?
BECKER: And so you, as someone who works in this industry and has children, what do you do? To mitigate how you interact with AI at home?
MARCUS: I don't actually use it that much except as a scientist to see like what the status of the current systems are.
Though I'm interested in Claude code, which is I think a more sophisticated system in some ways than these other things. My kids have a pretty healthy skepticism towards AI, as you might imagine. And watch out for hallucinations even when they were little, they would make fun of Siri when it made mistakes and things like that.
So I think my kids in particular have a healthy skepticism around these things. I don't know that's true in general.
BECKER: But I do think it is increasingly showing up. If you go to Google or other search engines on the internet, AI automatically comes up. There are ways to get around it, but it's increasingly difficult for people.
MARCUS: The companies are pushing it hard. We haven't talked at all about the economics here, but the fact is the companies are mostly losing money from running this stuff, trying to find a use case to justify the massive amount of investment. And for a while they had Wall Street sold on that.
But in the last six months, Nvidia has declined. CoreWeave has dropped almost 50%. Oracle has dropped almost 50%. The companies are desperate to prove that there is economic value in the things that they have built and the things they want to build. And so they are pushing it down everybody's throats.
It's not always working out. OpenAI just canceled Sora, for example. They put in a huge investment in it and they decided they weren't going to make their money back on it.
BECKER: So if AI were to work in a positive way, let's think of some positive things that AI could do and if it were properly regulated, what would that world look like for you?
MARCUS: And what would we, how would we have AI, use AI and regulate AI? So we would, first of all, as I said, look at things before we release them on wide scale to try to understand the consequences. It would also be, an after something is released way of holding the companies liable if they cause problems or force them to make changes like we have recalls on cars and stuff like that.
We have none of that right now. The second is we would spend more of our effort on researching new approaches where reliability and contributions to science were the things that we really cared about. Not, can I make cute graphics or can I make near porn as Elon Musk does.
But, can this technology be used for science and medicine and so forth, or can we build a better technology that'd be better for those problems.
I still absolutely think that there is a possible outcome where on balance, AI is net positive and maybe even net very positive. We are on a path where that's not the case, but if we pushed more towards the scientific applications and less towards the AI slop and focused less on the chatbots that have these problems that nobody has been able to solve and is likely to be able to solve, I think we could do well.
BECKER: But I would imagine that many folks say it's the slop that sells quickly to give us the resources to be able to perfect the tool for the higher order things such as science and medicine. So maybe we need to do some of the slop as quickly as we can and figure out how to regulate it.
I would imagine.
MARCUS: Maybe, but I think that there's a couple of very dark scenarios for the people playing that game, right? The game is to hype the slop; make it sound like it's more than it is to make it sound like it's artificial general intelligence when it's not and so forth.
So there are two different kinds of forms of backlash. One is the public is getting upset about this. There is now a public backlash against AI. As I said, 5 million people as I understand it, boycotting OpenAI is part of a backlash. Public opinion of AI has consistently trended downwards.
So these companies have overplayed their hand and then there's a very real possibility that this stuff might crash the economy. A lot of people have written about this; I've been warning about it since August '23. If it crashes the economy, then the public backlash is going to be even stronger.
BECKER: And crashes the economy because so much is invested in this. Because how it will affect the markets.
MARCUS: Exactly. We've invested trillions of dollars. The only company that's really profitable at large scale is Nvidia. They're selling shovels in a gold rush. They have the chips that everybody else uses, but companies like OpenAI and Anthropic are losing lots of money.
And that's not, excuse me, that's not long term sustainable. And then you have the secondary problem that like banks have loaned a lot of money. We don't know exactly how much, to companies that have invested, we don't know about leverage and so forth, but many people have started to draw parallels to banking crisis and so forth.
It's a question of what the blast radius will be and how bad, but I think the bubble has already started to decline. If you just look at Nvidia, lost 10% over the last six months. Even though it had gone up by a factor of 12 in three or four years before that.
That's a radical change.
BECKER: Is that in part because we were hearing so much about the dangers of AI and AI taking over the world and machines destroying humanity?
MARCUS: No, I don't think so.
BECKER: You don't think so? That didn't sort of temper things.
MARCUS: I don't think that affected Wall Street. What I think affected Wall Street was that GPT 5.0 was not what it was promised to be. I kept warning that it was gonna be late and it was gonna be not AGI. And Altman kept hinting that he had AGI and people believed him.
Things really changed on August 7th when he released GPT 5.0, in the release, he said that it could do anything a PhD could do, and within hours, people realized that was not true. That was, I think, the single biggest change in how people have perceived AI. And they started to realize that Altman is over promising.
Altman backed down a couple days later and said, AGI, I don't know what the definition is, after having said earlier that year that we now know how to build AGI. And so he changed his tune. And a lot of people, I think, woke up. I think by November, really, a lot of people had woken up that they had been sold a bill of goods and that the technology was not as strong as people had said, and that maybe the economics were not there.
BECKER: And we should say AGI. Artificial general intelligence, which is a step above what many of us might think of when we think of artificial intelligence, more than a chat bot essentially.
MARCUS: A future chat bot might be, AGI is supposed to be AI that can do essentially anything a person can do can be just as resourceful and flexible.
And you have studies every day that show that there's still lots of problems. Basic things like visual comprehension that they're not actually good at.
BECKER: Okay. In our last couple of seconds here, Gary Marcus, what I'd like to say is that, you know, what I'd like you to end on here is you've told us that there are some positive uses here, and what about the dangers?
Were those overhyped as well? Than when we were hearing about dangers of AI and can we put those aside and now think about how we move forward safely?
MARCUS: I don't think so. I think that the dangers are still with us. You could make an argument that misinformation wasn't as bad in the 2024 election, but I think in general the dangerous people have warned about are here and only getting worse.
The first draft of this transcript was created by Descript, an AI transcription tool. An On Point producer then thoroughly reviewed, corrected, and reformatted the transcript before publication. The use of this AI tool creates the capacity to provide these transcripts.
This program aired on March 30, 2026.

