
Can AI be regulated?

A visitor watches an AI (Artificial Intelligence) sign on an animated screen at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona. (Josep Lago/AFP via Getty Images)


Artificial intelligence systems are permeating everyday life faster than ever before.

"The AI systems that are currently being developed and the ones that have been released recently represent a type of technology that is intrinsically very difficult to understand and very difficult to guarantee that it’s going to behave in a safe way," Stuart Russell says.

That's why thousands of researchers who develop AI recently wrote an open letter pleading for help regulating the very technology they're creating.

Today, On Point: Can AI be regulated?

Guests

Stuart Russell, professor of computer science at the University of California, Berkeley. His textbook Artificial Intelligence: A Modern Approach is the leading AI textbook worldwide. He co-signed the Future of Life Institute letter titled “Pause Giant AI Experiments: An Open Letter.”

Peter Stone, professor of computer science and director of robotics at the University of Texas at Austin, and executive director of Sony AI America. He chairs the standing committee of the One Hundred Year Study on Artificial Intelligence. He co-signed the Future of Life Institute letter titled “Pause Giant AI Experiments: An Open Letter.”

Also Featured

Louis Rosenberg, CEO and Chief Scientist of Unanimous AI.

Laura Grego, senior scientist and the research director of the Global Security Program at the Union of Concerned Scientists.

Interview Highlights

On access to greater AI intelligence

Stuart Russell: "Our entire civilization is the result of our intelligence. We're not particularly big, we're not particularly strong. We don't have particularly long teeth and claws, but we have intelligence. And that's what's given us dominance over the planet, over all the other species. And it's led to everything that you see around you, all the knowledge that we've accumulated. So if we have access to much more intelligence, then we could have a hopefully much better civilization. But the thing that is causing concern and I think you mentioned this in your introduction, is that if we build systems that are more powerful than human beings, because after all, you know, it's intelligence that gives us the power.

"If we have systems that are more powerful than us, how do we maintain power over them forever? And that's the underlying concern that even Alan Turing, the founder of Computer Science back in 1951, he thought about this problem. And his conclusion was we should have to expect the machines to take control. So that's why this is the most important invention. It would either be the beginning of a golden age for humanity, or it could be the end of human history if we don't get it right."

On the tech boom in the 1980s

Stuart Russell: "In the mid-eighties when I was doing my Ph.D. and then started as a faculty member at Berkeley, that was a boom in technology called Expert Systems, which were systems sometimes called rule-based systems, where you interviewed experts on a particular topic. Let's say, you know, how to configure a computer system or how to diagnose disease. And then you would write down the experts' knowledge in the form of rules, and then a reasoning engine would take those rules and then diagnose disease or configure computers for you.

"And the semester that I finished at Stanford, so that was the summer of 1986. In one semester, 10% of the student body took the course. So that tells you something about how popular it was. There were hundreds of startup companies, all the big companies that created air divisions to apply this technology to their to their work and so on. And within about three years, that had completely fizzled out because the technology was not ready for primetime. So mostly during the history of AI, we've taken what you might call the reductionist approach.

"We tried to figure out how intelligence works, what are the pieces, how do you put them together? How does each piece work? Can we build a mathematical theory underlying that piece? So to give one example, when you think about reasoning, which most people would say is part of what intelligence systems have to do, we built on logic, which goes back at least to the ancient Greeks. So 2,500 years of history of development, of logic and created logical reasoning systems that were quite powerful and have been, for example, used to prove mathematical theorems that human beings were not able to prove. We developed on probability theory so that now systems can reason under uncertainty. And that was a big step forward.

"And I'd say in that area, I actually have contributed the bulk of what we understand about how to reason under uncertainty, which is a huge contribution. But starting around 2012, an approach called deep learning started to become dominant in many areas. For example, in speech recognition, and in computer vision and machine translation. And deep learning doesn't say, okay, let's study this task.

"You know, for example, computer vision, the task of recognizing objects, you might think, okay, well, we've got to look for, you know, edges and textures and regions and then try to figure out based on, you know, the shading and light and dark in a region, you know, what is the shape of that region and then gradually piece together all those clues to recognize objects. Deep learning just says, let's provide lots and lots of training, data of images with labels saying, this is a giraffe, this is a school bus, this is an ostrich. And then the learning system figures out how to recognize the object."

On fears that AI could be used for disinformation campaigns

Stuart Russell: "It's not only personalized, but it's adaptive. The AI system is going to start to see your responses, see your reservations, and it's going to adjust its tactics. To overcome those reservations and overcome that resistance instead of, you know, an influence campaign being buckshot that's sprayed out there into the world, it will become these heat seeking missiles that are targeted at you personally.

"You talk into that piece of content and there are already parties out there working on that for advertising. There are third parties who want to be able to talk you into buying a car or buying a computer through conversational influence. And that's creepy. But when it's misinformation or disinformation or propaganda, it's dangerous. And that capability now exists."

On avoiding the negative impacts of AI 

Peter Stone: "I think it is the time to be thinking about these things. I think it's absolutely worth thinking about how we can, if and when those sorts of discoveries are made, how we can ensure as much as possible that the values of the systems are aligned with those of humanity. But I don't know that I'd say it's safe to assume that those discoveries will be made. I think it's quite plausible that we will get to a point of AGI or artificial general intelligence, but we don't really know what that will look like. It's not likely to be just a scaling up of current large language models.

"And so, you know, I think it's not plausible to me that it would happen without us seeing it coming, without us being able to prepare and to try to harness, I think to harness it for good. And ... there's thousands of people around the world trying to make artificial intelligence systems more intelligent. Because, of course, if we're going to try to make farming and food production more efficient and we're trying to make discovery of vaccines and managing of health care better, we want the systems that are doing that to be as intelligent as possible.

"But we also need to think about what are the scenarios in which there could be loss of control. And I think we'll get to know that better as the technology is developed. And as we keep it in mind as people who are doing this become more and more trained in the humanities and social sciences, the sciences and the risks as well as the technology."

On regulation of AI 

Peter Stone: "One of the most crucial things to do is to make sure and to help governments get up to speed on what are the realistic threats and what are the realistic possible uses, positive uses. And absolutely, I think regulation of specific sectors of AI technologies in different sectors is going to be an essential part of our path forwards. And one of the reasons I think that a short pause is useful is to give time for governments to figure out how to do this.

"I don't think that it's going to be a winning strategy to try to squash or stop progress, technological progress, or to regulate AI as a whole. But I think it's essential to think about how should we regulate current AI technologies on specific use cases such as transportation, on health care. And the answers are going to be different. What should we put into place when we're thinking about AI technologies for radiology versus AI technologies for food production? And this is just an urgent and essential conversation that we all need to be having. And I think it's great to have conversations like this one for people to start thinking about it."

Read: How societies grapple with transformational technology

MEGHNA CHAKRABARTI: The atomic bomb. First detonated at the Trinity test site in New Mexico on July 16, 1945.

Less than a month after the Trinity test, President Harry Truman authorized the bombing of Hiroshima and Nagasaki.

More than 200,000 people were killed in Hiroshima and Nagasaki. The Cold War and threats of mutually assured destruction soon followed.

Though atomic weapons were developed in wartime, the technology's developers were not in lockstep about its use.

Two months before the U.S. bombed Japan, and a month before the Trinity test, an influential group of scientists wrote a letter to Truman, warning the president of what the country was creating.

LAURA GREGO: The Franck report was one instance of a semi-regular drumbeat by nuclear scientists to try to raise visibility about the dangers of these weapons.

CHAKRABARTI: Laura Grego is senior scientist and research director of the Global Security Program at the Union of Concerned Scientists. The Franck report, named after James Franck, the Nobel Prize-winning scientist who chaired the committee that wrote it, was sent to President Truman in June of 1945.

GREGO: The Franck report came out of the group at the University of Chicago whose technical job in the Manhattan Project was to develop the methods to produce plutonium for the American bombs. In 1945, they'd completed a lot of that work. In other parts of the Manhattan Project, they were still really busy completing the bomb work.

But a lot of that had been done, and they had some time to sit back and consider the effects of the technology that they had produced. And a group of seven really eminent physicists, and I think one was a biologist and one was a chemist, sat and thought through these ideas, and they produced this report called the Franck Report, which warned that if the United States used the bomb on Japan, it would unleash a set of results that would be really bad.

The Franck report noted that by the summer of 1945, the war in Europe had ended. That changed the stakes, they believed, writing:

“If the United States were to be the first to release this new means of indiscriminate destruction upon mankind, she would sacrifice public support throughout the world, precipitate the race for armaments and prejudice the possibility of reaching an international agreement on the future control of such weapons.”

CHAKRABARTI: In fact, even J. Robert Oppenheimer noted in 1945:

J. ROBERT OPPENHEIMER: There seem to be two great views among scientists and no doubt would be among others if people knew about it. On the one hand, they hoped that this instrument would never be used in war, and therefore they hope that we would not start out by using it. On the other hand, and on the whole, we were inclined to think that if it was needed to put an end to the war and had a chance of so doing, we thought that was the right thing to do.

Laura Grego says the Franck report urged even more action:

“We therefore feel it is our duty to urge that the political problems, arising from the mastering of nuclear power, be recognized in all their gravity, and that appropriate steps be taken for their study and the preparation of necessary decisions.”

GREGO: We ended up at one point during the Cold War with more than 60,000 weapons, each of which was much larger than those used in Hiroshima and Nagasaki. Even today, the U.S. is prepared to spend $1 trillion over the next 30 years to modernize and upgrade its nuclear arsenal.

In 20 years, we'll have had 100 years of the atomic bomb. And we're not close to controlling that. We are still organized around these technologies of mass destruction. So I do think, had we been better able to control that right at the very beginning of the technology, we would be in such a better place today.

Related Reading

The Guardian: "AI has much to offer humanity. It could also wreak terrible harm. It must be controlled" — "In case you have been somewhere else in the solar system, here is a brief AI news update. My apologies if it sounds like the opening paragraph of a bad science fiction novel."

This program aired on May 1, 2023.

Hilary McQuilkin Producer, On Point
Hilary McQuilkin is a producer for On Point.

Meghna Chakrabarti Host, On Point
Meghna Chakrabarti is the host of On Point.
