Leading minds in artificial intelligence are raising concerns about the very technology they're creating.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” Sam Altman says.
Two of the biggest tech companies in the world, Microsoft and Google, are warning about the dangers of unregulated AI development. At the same time, they’re racing each other to push AI into their most popular products.
“This technology does not have any of the complexity of human understanding, but it will affect us profoundly in the way that it’s rolled out into the world,” Sarah Myers West says.
So, how could that change us?
Today, On Point: The Microsoft-Google AI war.
Dina Bass, tech and AI reporter for Bloomberg News.
Will Knight, senior writer for WIRED, covering artificial intelligence.
Sarah Myers West, managing director of the AI Now Institute, which studies the social implications of artificial intelligence.
MEGHNA CHAKRABARTI: Here comes another open letter from the world of AI developers warning about the AI they're developing. Though this one is less of a statement and more a single sentence.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
CHAKRABARTI: The statement is hosted by the Center for AI Safety and signed by politicians and scientists, including researchers at the forefront of AI technology. So first of all, when technologists are literally, repeatedly begging for regulation, we, the public, and our political representatives should listen to them and should do something about it, because they're saying they cannot self-regulate.
And when it comes to civilization-changing technology, tech folks probably shouldn't be relied on to self-regulate. Because it's the citizens, the civilization, i.e. the rest of humanity, who should have a say in how that very civilization should be changed. Or at least I think so.
The tech world is also saying it will not self-regulate. Because there's more than a little talking out of both sides of their mouths going on, isn't there? On the one hand, Google DeepMind CEO Demis Hassabis and Microsoft's chief scientific officer, Eric Horvitz, are among the warning letter's signatories. On the other hand:
SATYA NADELLA: The age of AI is upon us and Microsoft's powering it.
CHAKRABARTI: That is Microsoft CEO Satya Nadella announcing earlier this year that even as his company is warning about the dangers of unregulated AI, his company is also pushing new AI technologies into almost every aspect of Microsoft's massive product reach.
NADELLA: We are witnessing nonlinear improvements in capability of foundation models, which we are making available as platforms. And as customers select their cloud providers and invest in new workloads, we are well positioned to capture that opportunity as a leader in AI.
CHAKRABARTI: For example, Microsoft now has a generative AI tool in its search engine Bing. Not to be outdone, Google, also a signatory to that warning letter, is pushing hard into generative AI. Google CEO Sundar Pichai told CBS's 60 Minutes:
SUNDAR PICHAI: This is going to impact every product across every company. And so that's why I think it's a very, very profound technology. And so we are just in early days.
CHAKRABARTI: And just last month, Google Vice President of Engineering Cathy Edwards announced the company is rolling out generative AI into the most popular search engine in the world: Google Search, which has 80% of global search market share and handles 99,000 searches per second, more than 8.5 billion searches every day.
CATHY EDWARDS: These new generative AI capabilities will make Search smarter and searching simpler. And as you've seen, this is really especially helpful when you need to make sense of something complex with multiple angles to explore. You know, those times when even your question has questions.
CHAKRABARTI: Google, Microsoft and AI. These two companies are so big, so consequential, that their near simultaneous public push into generative AI is being called the Microsoft-Google AI war. And that, of course, raises the question: Exactly how will this war impact you and every other human being? Well, joining us now is Dina Bass. She's tech and AI reporter for Bloomberg News who's covered Microsoft for 20 years, and she joins us from Seattle. Dina, welcome back to the show.
DINA BASS: Hi, Meghna.
CHAKRABARTI: Also with us today is Will Knight. He's a senior writer for WIRED covering artificial intelligence, and he joins us here in the studio. Welcome to On Point.
WILL KNIGHT: Hello. Thanks for having me.
CHAKRABARTI: So, first of all, I just want to get a sense from both of you about whether this framing of the Google-Microsoft AI war is an accurate one, because it seems from the outside kind of significant that, at least in search, we have these two nearly simultaneous announcements from these two companies. So, Dina, is this a big war between the two?
BASS: There is definitely a strong competition between the two. I think the kind of wizard behind the curtain that you're leaving out here is OpenAI. The Microsoft technology, at its heart, is basically OpenAI's technology. What we're looking at is Microsoft adopting OpenAI's language and image generation technology, which also writes code, into every single one of its products.
And because Microsoft was a little quicker to get some of these things out, it seemed to put Google a bit on the back foot, even though many of these technologies were actually kind of invented and pioneered at Google. And so it has, as a result, turned into a bit of sort of, you know, Google trying to catch Microsoft even though it's early days.
But I think that also leaves out a lot of other companies, a lot of other startups, a lot of open-source work that may end up passing both of these. You know, there was a leaked memo a couple of weeks ago from a Google engineer saying that both OpenAI and Google have no moat and will be surpassed by the open-source work. So the framing is correct. But also there's a lot more going on.
CHAKRABARTI: Good. We'll get to that. We'll get to a lot more that's going on in a few minutes. But I have to say, given the reach and the size of both of these companies, I want to focus for a little bit longer on how their push into AI, and specifically generative AI, is going to impact all of us. So Will, same question to you. I mean, is this kind of a new front in the years-long competition between Microsoft and Google?
KNIGHT: I would say yes, absolutely. I mean, as Dina is saying, you know, this is definitely a new era of competition between the two, driven by what is quite a profound kind of step forward in some of the capabilities in AI led by these models like GPT. And there are still tons and tons of limitations.
But whenever there's a big technological shift, we've seen it before with the internet, with mobile. Big companies see an opportunity to kind of get ahead of each other and maybe also worry about falling behind.
CHAKRABARTI: Now, we heard both Sundar Pichai and Satya Nadella, the CEOs of both companies, say essentially that AI is going to transform everything they do. When a CEO says that, I tend to believe them. But Will, I mean, do you see the same, that it's going to transform Microsoft and Google? Maybe not as we know it, but how they operate and what they do?
KNIGHT: I think absolutely. I mean, you know, we should also, you know, should be wary that there are limitations. This technology is being rushed out very quickly. There are issues with it. But I look at it as something as sort of fundamental as software. And we've seen that with the previous era of AI. Machine learning has transformed so many products, so many companies. And what this is, is kind of a step change, quite a significant change really, in what you can potentially do with machine learning.
We're seeing it primarily in these chat bots and image generation, but it represents a sort of new set of capabilities that you can give to computers. And they are very general purpose. So yes, they're going to, I think try to apply it everywhere. There may be many problems along the way as they do that as well.
CHAKRABARTI: Okay, Dina. So what's really fascinating to me is, you know, hot on the heels of ChatGPT, one of the first public points of access that people have to Microsoft and Google's use of generative AI is through search. Right? And literally everybody uses search all the time, every day. Have you used the AI-powered search on Bing?
BASS: I have, and it's interesting that you point that out, because search up until a couple of months ago was really a sleepy corner of the Internet in terms of competition. I mean, in fact, I can't imagine you thought we'd be talking about Microsoft and Google refighting the search battle that Microsoft essentially lost ten years ago. This is not an area that we thought was ripe for innovation.
I have used it. It is much better with open ended questions similar to ChatGPT. It can, you know, generate content, fun content, recipes, shopping lists, but also answer more open-ended questions. If you're trying to figure out what products to buy or where to travel.
But as Will is pointing out, it makes a lot of mistakes. Both companies are trying to get around that by having citations, so you can click and see where the data is coming from and catch the mistake. I just don't know if users are going to do that. The term of art for these mistakes, quote-unquote, is hallucinations, which seems to be a euphemism for, if I can put this politely in radio speak, making stuff up. So that's still an issue.
CHAKRABARTI: Are you guys up for a live demo? Should we try this? Because I have Microsoft Edge open here in front of me, which, by the way, interestingly, if I have this right, you can't use the Bing AI-powered search in any other browser but Edge. Is that right, Will?
KNIGHT: Yes, that's right.
CHAKRABARTI: Ah. Hmm ... keeping us in the Microsoft ecosystem ... (LAUGHS)
BASS: It's coming to others. But not yet ...
CHAKRABARTI: Ok, so I've got it here open in front of me. I have no idea what's going to happen. Well, I kind of do, because I've asked this question before, but it's my favorite question for summertime travel. And it's open ended enough. Dina, I'm going to ask the Bing AI search. (TYPING) Why is airline travel so horrible? Is that open ended enough, Dina?
CHAKRABARTI: Okay, so here, enter. And it's thinking. Oh, it's not that fast, okay? It's still going. Is that normal? Yeah.
KNIGHT: Well, what is running on the back end is a giant neural network that's trying to come up with the answer. So it's quite different to a regular search.
CHAKRABARTI: Oh, okay. Here it comes with the answer. Wow. It's a long answer. It says there are several reasons why airline travel can be unpleasant. According to a CNN Business article, some reasons include a lack of enough pilots and flight attendants. Okay, that is true, especially on the pilot front. There was a pilot shortage.
Worries about vaccine rules, fewer seats, higher fares, a rising number of unruly, unhappy passengers. Then it quotes Investopedia and a Time article about pandemic-induced pain after the airline industry ground to a halt and is now struggling to catch up with surging demand. "I hope this helps." Okay. Now, if I didn't know anything about the airline industry, I would say these answers make sense. But Will, it's leaving out massive, massive causes of ...
KNIGHT: Yeah, I think there may be some hallucinations there. Like the vaccine problems. What it's doing is taking a huge amount of stuff from the web and then just trying to kind of guess what would be a plausible answer, not necessarily the right answer.
CHAKRABARTI: So Dina, knowing this, and knowing that even people inside the industry are calling them hallucinations (we of course are going to talk about some of the good that AI can do), do you have any concerns about these products being rolled out as fast as they are for everyone to use?
BASS: Ladies and gentlemen, you have to bear in mind that you are the beta testers. This is being experimented on you. The companies will tell you that that's necessary, that in order to refine it, to have it work well, they need large volumes of data.
CHAKRABARTI: Great, so we are testing the product for them. (TYPING) Right now I'm asking Microsoft's new AI-powered search: Should you listen to the amazing radio show and podcast called On Point from WBUR? I'm a little afraid to press enter, but I will. (LAUGHS) And we are talking about the Microsoft-Google AI war as both of these huge companies start rolling generative AI technologies into their products. Oh look! It answered. It said:
Yes. On Point is a radio show and podcast produced by WBUR in Boston. It covers a wide range of topics from the economy and health care to politics and the environment. The show speaks with newsmakers and everyday people about the issues that matter most.
So, no hallucinations there. Okay. But in terms of the importance of AI to companies like Microsoft and Google, and therefore to us as the users of their technology, here is what Google CEO Sundar Pichai said to CBS's 60 Minutes.
PICHAI: I've always thought of AI as the most profound technology humanity's working on, more profound than fire or electricity or anything that we have done in the past.
SCOTT PELLEY: Why so?
PICHAI: It gets at the essence of what intelligence is, what humanity is. You know, we are developing technology which for sure one day will be far more capable than anything we have ever seen before.
CHAKRABARTI: And here is Microsoft CEO Satya Nadella in conversation last month with Andrew Ross Sorkin talking about the fact that he agrees AI might be moving, quote, too fast, but not in the way that some people think.
NADELLA: A lot of technology, a lot of AI, is already there at scale, right? Every newsfeed, every sort of social media feed, search as we know it before ... they're all on AI. And if anything, the black boxes, I'd describe them as the autopilot era. So in an interesting way, we're moving from the autopilot era of AI to the copilot era of AI. So if anything, I feel, yes, it's moving fast, but moving fast in the right direction.
CHAKRABARTI: We talked a lot about Microsoft. But I want to just shift quickly to Google. Will, can you describe this? I have felt that, between the two, Google has been roughly in the lead on AI for quite some time, if that's an accurate statement.
KNIGHT: Yeah, absolutely. They invented a lot of the stuff that has led to some of these leaps forward, but they were hesitant to release some things because they could misbehave, which we are seeing now. And so they got a little bit blindsided by what Microsoft and OpenAI are doing in releasing some of these big language models and, you know, showing what you could potentially do with search.
And so there was this great panic inside Google, it seems, where they were suddenly like, we've got to catch up. We've got to throw everything at this. They've merged their two AI units, Google Brain and DeepMind, and now Demis Hassabis, as you mentioned, leads the combined group. And this is kind of one of those big moments, like the Internet wave that Bill Gates talked about years ago, where they suddenly realized, we need to catch up.
CHAKRABARTI: I saw a story in the New York Times that said, you know, there were memos, I think, circulating. I can't remember whether it was within Google or Microsoft. But that said that, like, you know, this race could be won or lost in a matter of weeks.
BASS: That might have been Google. That's not what I hear from Microsoft. If anything, and perhaps it's because they feel like they're in the lead, when I asked Satya Nadella about that, he focused a lot on this being an initial lead, not a permanent one, being very cognizant of the fact that lots of markets are being disrupted, including some of the ones that Microsoft needs.
And, you know, we focused a lot here on chat, but one of the other major vectors for adding AI, and for competition around it, is going to be office software, again between Microsoft and Google. It's one thing for Microsoft to experiment with Bing, which has about 2% share of the market. It's another thing for them to start putting significant AI assistants into their office products, which are, you know, a flagship, dominant product. And they are doing that, albeit rolling it out a little more carefully and a little more slowly than perhaps the Bing stuff.
CHAKRABARTI: Dina, is this one of those winning-the-competition-at-all-costs moments?
BASS: Maybe this is Clippy 2.0. I mean, there was a meme going around, I think it was probably around May 4th, that was sort of a "Clippy, I am your father" kind of meme involving Bing. On some level, this is a fulfillment of what Microsoft wanted to do with Clippy, and what it wanted to do around 2016 with the conversational AI strategy that it rolled out, which didn't really go anywhere at the time because the technology wasn't very good then.
CHAKRABARTI: Okay. Well, going back for just a moment to the memo that I was quoting, I have the story here from the New York Times. They say that Sam Schillace, a technology executive at Microsoft, had written an internal email, which the Times says it's viewed. And in that email, Schillace said that it was, quote, an absolutely fatal error in this moment to worry about things that can be fixed later.
And that shifting toward a new kind of technology was essential because, as Schillace wrote in this memo, again viewed by The New York Times, the first company to introduce a product is the long-term winner just because it got started first. Sometimes the difference is measured in weeks. Dina, since you're the one who covers Microsoft directly, how does that sound to you?
BASS: Look, obviously, Microsoft is moving very quickly here. And I mean, nary a week has gone by without them introducing a new AI product. And that brings up the question that you started the show with, around if they're so concerned about what the impacts of this might be, why continue to roll it out at great speed?
You know, we do know that the company has been moving very quickly, you know, to try to get these things out. But again, what I hear from them is that they do not believe that the fact that they introduced things first means that they are the winner. They feel that there's a lot of potential for disruption here. And, you know, we mentioned office. There's also issues of disruption around the cloud.
CHAKRABARTI: Around the cloud. Okay. So hang onto that thought for just a moment, because we do have just a little bit of tape here. You had talked about the other products that Microsoft has. At an event in March called the Future of Work with AI, Microsoft introduced Copilot, or, as we'll call it here in this conversation, Clippy 2.0. It's an AI assistant feature for Microsoft 365.
Copilot combines the power of large language models with your data in the Microsoft Graph and the Microsoft 365 apps to turn your words into the most powerful productivity tool on the planet.
CHAKRABARTI: So Will, do you anticipate Google also rolling out AI into its products beyond search?
KNIGHT: Yeah, in fact, they've already started doing that. At Google I/O, they demonstrated some stuff similar to what we're seeing with Office, where it will help you write a document, help you generate an email, go into a spreadsheet and do stuff for you.
CHAKRABARTI: Okay. So, you know, both of you have raised an interesting question. Obviously, the products that these things get rolled out in is only the surface part of the story. What's really, I think, more deeply going on is that people are talking about this being as fundamental as software. ... How much of the frenetic activity that we're seeing now between Google and Microsoft is about whether they can be the leaders in this new technology, in generative AI and how it's used?
KNIGHT: I think a huge amount of it is about that. And what I hear from people inside some of these companies is that sometimes the executives are not listening to even, you know, the technologists and those who have concerns about them. So they're just desperate to get this stuff out and to gain a lead in what they see as such a foundational technology that's going to sort of be rolled out to billions of computers.
CHAKRABARTI: And Dina, what do you think about that question?
BASS: I think that's fair. And they're doing a bit of a dance between saying we're trying to be careful and we know that these products don't work perfectly, and we want you to know the ways in which they don't work. But here's another one and here's another one. And you should try this, and by the way, you should regulate us. It is a little bit of trying to do a bit of a tap dance.
CHAKRABARTI: Going back to search again, just because I think it's the touch point that people will understand most intrinsically. How do you think that what Google and Microsoft are doing will change the way people use search or the information they get from it?
KNIGHT: Yeah, I think that remains to be seen. I mean, when I try using this generative search, it's amazing to me that I have something that will hallucinate. Make up stuff, right? That's not what we expect from a search engine. And I think it still remains to be seen whether the benefits will completely outweigh the limits. Undoubtedly, I think it's going to sort of creep into search in some ways, but maybe not completely supplant it.
CHAKRABARTI: Yeah. You know, we heard a little earlier in that tape from Google's release of their AI-enabled search, Cathy Edwards talking about how it can be particularly useful for your questions that also have questions. What do you think she's getting at there?
KNIGHT: I think she's talking about follow up questions where you will ask something. You have the ability to sort of ask a clarifying question of, say, Bing chat or Google's chat bot. So you can kind of get into this more humanlike dialog with these things. But that also raises a ton of problems in my mind because you're sort of anthropomorphizing these things to a level that they don't justify and it causes a lot of confusion and can ... lead people to kind of misinterpret what they're talking to and it can say things that come across as very weird.
CHAKRABARTI: So we're going to get to more of these hallucinations and concerns in just a minute here. But, Dina, you had mentioned a little bit earlier, isn't it surprising that once again we're talking about Microsoft v. Google? As far as I understand, though, they had, for at least some period of time around 2015, reached some kind of legal and regulatory truce. Would you describe it that way?
BASS: Yeah. There was a kind of a formal detente between the two companies. It was not that they wouldn't compete. They were still competing very vigorously. Again, particularly as Google tries to take more business from Microsoft in the office and productivity space. But what they agreed to do was not to complain about each other to regulators. That's fallen apart in the last few years. They've both been vociferously complaining about each other to regulators. And so that adds another dimension. As we look at this AI battle, another dimension to the hostility between the two companies.
CHAKRABARTI: Okay, so, Dina and Will, hang on here for just a second, because I want to bring Sarah Myers West into the conversation. She's managing director of the AI Now Institute, which studies the social implications of artificial intelligence. Welcome to On Point, Sarah.
SARAH MYERS WEST: Thanks for having me.
CHAKRABARTI: So as I introduced the show, we have very senior people at both Microsoft and Google among the signatories of yet another AI warning letter, saying that it could be an existential threat, that we need to be regulated. Then, as Dina and Will have very carefully laid out for us, they're still pushing these products, which billions of people really use every day, knowing that sometimes those generative AI products in search hallucinate, which will be my favorite scary word of the week here. I mean, isn't it somewhat irresponsible for these companies to be doing this, Sarah?
MYERS WEST: I mean, it seems like we're back in the move fast break things era of tech. Where they're essentially experimenting in the wild with these technologies, even as they acknowledge themselves that they're not really, you know, fully validated or tested or ready for market.
I mean, Dina put it perfectly that we're all the beta testers for this. But it's one thing to be beta testing a new version of an OS, or beta testing, I don't know, a new app or the latest version of Word. It's something different entirely to be beta testing a product that's giving you information in return that you can't actually be sure is truthful or not.
And this is a product that you're relying on to give you truthful answers. I mean, it seems like more than playing with fire a little bit to me.
CHAKRABARTI: I mean, are there any systems in place to get companies to think more about this in the world of AI before they roll out the products?
MYERS WEST: It's a really good question. And these companies have been aware of these concerns for a long time. Will already mentioned that Google has been working on the underlying technology since 2015 and has been hesitant to roll it out because of these risks. OpenAI similarly has been working on this technology for some time. And if you remember, back in 2019, when they were about to release the GPT-2 model, which is kind of the precursor to GPT-4, the engine that powers ChatGPT, OpenAI said that this was too dangerous for public release and they were going to hold back.
Now it changed its mind on that. But you know, if you look back to that moment in time, they were expressing many of these same concerns about the risk of fraud, of the spread of misinformation, cybersecurity risks. And those underlying risks clearly haven't gone away. What I think is really critical about this moment is that the regulators and enforcement agencies are being very clear that they have every intention to apply existing law to these systems.
And there's a pretty clear framework for how those rules apply. Things like the FTC's authorities on deception, for instance, they've said pretty clearly that it's not okay, even if it's not the intention of the system. If you're producing something that enables fraud at scale, that violates the law.
CHAKRABARTI: So, I mean, if it's as clear as that, that would seem that these products are violating the law right now because they're putting out information that isn't always accurate.
MYERS WEST: I mean, it raises the question why the companies would be willing to put out systems that could very well be illegal or why there aren't sufficient guardrails, at the very least, to prevent that kind of conduct.
CHAKRABARTI: Will, let me ask you a question here. Is part of the irresistible urge to release these products, even when there seems to be pretty vigorous discussion within both Microsoft and Google about whether or not they should, that in order for these large language models to get better, they need more information? And the best way to do that is to, like, unleash them on people.
KNIGHT: That's a great point. And it's actually a really big reason why they're rushing so much. One of the reasons these models are so good is because they've been fine-tuned on feedback from people, on whether they think it's a good answer or a bad answer. And you can hire people to do that, or you can do that through users. So, yes, one of the advantages that Microsoft and OpenAI have is that they have this enormous user base ... and they're not going to slow down, because they're building upon that advantage.
CHAKRABARTI: You can imagine, you know, let's say Google, with more than 8.5 billion searches a day: Even if a small fraction of those searches are done through their AI-enabled search, that's going to be a massive amount of information.
KNIGHT: That's right. That's one of the things behind this whole generative AI boom is the more data you can get from users, the more you can feed to it, the better.
CHAKRABARTI: So let's listen to a little bit more of some big voices from the world of AI. You've probably heard over the past few weeks of a man named Geoffrey Hinton. He's considered one of the three godfathers of AI, and he was at Google until he resigned just last month.
GEOFFREY HINTON: What I've been talking about mainly is what I call the existential threat, which is the chance that they get more intelligent than us and they'll take over from us. They'll get control.
CHAKRABARTI: So that's Geoffrey Hinton. There's also Sam Altman. He's the CEO of OpenAI, and he recently spoke to the New Yorker Radio Hour and shared a fundamental question that he thinks about. Is AI a tool or a creature?
SAM ALTMAN: What this system is, is a system that takes in some text, does some complicated statistics on it, and puts out some more text. And amazing emergent behavior can happen from that. As we've seen, that can significantly influence a person's thinking. And we need a lot of constraints on that. But I don't believe we're on a path to build a creature here. Now, humans can really misuse the tool in very big ways, and I worry a lot about that. Much more than I worry about currently the sci fi-esque kind of stuff of this thing, you know, wakes up and loses control.
CHAKRABARTI: So Sarah Myers West, respond to what Sam Altman says there. Because, of course, it's the existential warnings that are grabbing all the headlines, including our attention. But there's a whole other world of concerns that would come before the loss-of-control scenario that maybe we're not thinking enough about.
MYERS WEST: Absolutely. You know, that clip just reminded me of this really great interview that I read over the weekend with the science fiction writer Ted Chiang, where he was asked, you know, what term would you use to describe AI if it weren't "AI"? And his immediate response was "applied statistics." There's a lot of meaning imbued in the term artificial intelligence that gives it this sort of cultural power that's making us all excited and anxious about its effects.
But what we've heard from a number of the corporate leaders in the clips we've played today is a consistent refrain: one, that these are essentially data prediction engines, and two, that they're already in wide use. What's different about this moment is that now there's an interface that people can interact with, and that gives a really different tenor to our understanding of AI.
But if they're essentially, you know, statistical mechanisms used to look for patterns, one thing they've demonstrated themselves to be really good at is reflecting back and amplifying historical patterns of social inequality. And it's those kinds of patterns where we need to be placing our greatest focus: where AI is being used in health care, in education, in whether you're going to get called in for a job interview or what the rate on your mortgage will be. What we're seeing consistently is a tendency for AI systems to amplify historical patterns of racial discrimination and gender-based discrimination. And that's, I think, where we need to place most of our focus.
CHAKRABARTI: So, you know, in thinking about, again, the call for regulation, or at least more vigorous discussion around regulation, that these very same technologists are repeatedly putting out this year, Dina, let me turn to you on something. Because I imagine that we are already in an era of a new kind of inequality, I would call it information inequality. And we could be hurtling toward an even greater inequality on that front at hyper speed.
On the other hand, if regulators tomorrow announced, let's say the FTC tomorrow announced, well, you know, we've tried out your AI, your generative AI search. We see that it's generating inaccurate information or answers out of context or hallucinating. And it's clearly breaking the law.
We're going to stop you from using it. We're going to say you cannot have this product out in the world right now. You know, even though Microsoft says, hey, maybe we should regulate AI. Dina, I mean, Microsoft, and Google too, let's be honest, have historically done everything in their power to combat any kind of regulation against these same companies. I mean, how would they react to actually being regulated on AI?
BASS: Microsoft has been calling for regulations on AI for several years now. But as you point out, it's the regulations that they want. They want it to go to a certain point and no further. But we have regulations on the safety of all sorts of products. People often use the example of a car. You don't just allow someone to design an electric vehicle and go driving around, you know, the streets.
And, you know, I want to second this notion that these AI systems already have significant problems around racial bias, around gender bias. The things that we need to be concerned about right now are not necessarily what Dr. Hinton is focusing on, this possibility of some superhuman power in the future that we're not even sure AI can achieve. The issues are the things we already see that aren't being dealt with. You know, we're coming up on an election in the U.S., and that's an issue. The concern is not that you get a sort of goofy answer from Bing when you ask it why the airlines are so bad.
The concern is that these are generative tools, which means they can create new content that can be misleading, deepfakes, you know, pictures that aren't real. And that it may not be possible for people to tell that these things are fake. These are issues that, you know, AI researchers and ethicists have been pointing out for years within companies like Google and Microsoft.
And they have not gotten a lot of traction in terms of being listened to. To take exception with what Dr. Hinton is saying: when he was at Google, he did not support people like Dr. Timnit Gebru and Dr. Margaret Mitchell, who were working on these very issues of immediate concern in AI.
CHAKRABARTI: Good point. Will, I know you want to jump in here.
KNIGHT: I just wanted to say, I think you're absolutely right to bring up issues of bias and disinformation. But one of the other things that just leapt out when Sam Altman was talking: he talked about these things influencing people.
And when you have these agents, yes, they're statistical engines, but they are quite good at mimicking people, and these new models are much better than anything we've seen. There's the possibility for systems that are going to become better engines of disinformation, rather than just writing fake news, actually getting into dialogue with people on social media. And we may also see products and chatbots like Siri and so on that are much better at persuading and influencing us. And that's something I think we're sleepwalking towards.
CHAKRABARTI: I mean, the Turing Test will be a thing of the past, essentially.
KNIGHT: I think it already is.
CHAKRABARTI: It already is? Okay. Well, so Sarah, Dina actually mentioned something which I think is a powerful point of comparison when she talked about self-driving cars. What is so interesting to me about that is, yes, here's another technology that can really transform transportation. It will transform transportation. We know that. But it's actually going at a much slower pace, I would say, than the integration of AI into technologies like search.
MYERS WEST: We're seeing proposals to that effect starting to emerge and I think increasing consensus that we need greater regulatory scrutiny and particularly that the burden needs to be more on the companies than it is at present to be doing some kind of evaluation of their systems before they're being deployed out into the world, yes.
I think the other thing that we're seeing emerge, especially out of the European Union, which is in the process of finalizing its AI Act, is that in that regulation, there is a set of types of AI systems that the EU has essentially said have such detrimental effects on society that we just don't think they should be in use at all. Things like predictive policing systems, social scoring, facial recognition systems in public spaces. The EU has essentially said, you know what, we have enough evidence that these have detrimental effects on the public that aren't outweighed by the benefits of this technology. And we're comfortable with creating a bright-line rule around that.
So I think that's a key thing that needs to be on the table as well. The question is, across the very many different types of AI systems currently in development and available for commercial use, which ones do we actually want to have out in public use? And are there some ... where the benefits just really don't outweigh the risks?
CHAKRABARTI: Dina, what do you think about the EU AI Act that Sarah talked about?
BASS: It's been kicking around for a couple of years, but it seems like it's making progress. And I think that's gotten, you know, U.S. members of Congress thinking about whether they ought to be moving a little quicker. There was a mention at the hearing in the Senate a couple of weeks ago ... Sam Altman was one of the people who testified.
And there was this sense from some of the senators that the U.S. needs to lead. Altman said the same thing, but the U.S. is already behind. The other challenge will be coming up with a way that these sorts of legal regimes can be global, because obviously all of these companies are global and that's also still an open question. So in the U.S., at least we saw this on the Senate side, a lot of discussion about how we should regulate, but ... thus far not a lot of movement.
The other issue is how do you regulate in such a way that it doesn't entrench the current incumbents? We started off this conversation talking about Microsoft, followed by Google, as well as OpenAI, being in the lead there. You know, there's been a fair bit of discussion about regulating in a way that doesn't prevent academic work, doesn't prevent open-source work, doesn't prevent startups from developing, and doesn't have the opposite effect of what people probably want when they say we should regulate companies: a system that actually entrenches the companies that have already done the work and released it, and prevents new ones from doing the same thing.
CHAKRABARTI: Oh, a really good point. Really good point. You know, Will, I feel like AI presents us with a particular regulatory challenge. Because, to put it simply, I would say the standard should be, you know, does it cause harm to society? That's easy for me to say, but it's hard to measure. So I wonder, like, is it possible that we might end up in a world where the technologies are developed, introduced, unleashed into the wild, and only thereafter do we try to use regulatory clawback tools?
KNIGHT: I mean, that feels like the way we're headed, and I think that is not really what we want. And, you know, a couple of really interesting things came up. I think one is that the U.S. government is going to be very wary of regulating this to some degree. Because they see it as something that could be an enormous benefit to the economy, to its competition with other countries. A similar thing happened with self-driving cars, to be honest.
And, you know, the fact that companies like Google were allowed to test these cars on public roads has, I think, raised some questions of its own. But one of the things I'd like to see, which I think is not being talked about enough, is companies being forced to make their models more transparent so that scientists can study them. If there's as much of a risk as nuclear war, then scientists should be able to see what the potential problems are.
CHAKRABARTI: Okay. That is an excellent point. Dina, let me just quickly turn back to you here, because I have been reading that even on this actually pretty excellent idea of transparency, Google and Microsoft have already run into a little bit of trouble. I've heard that internally, Google's and Microsoft's own ethical-AI groups have had trouble getting information about how the products were being developed.
BASS: Both companies have significant internal ethical AI groups, and when you talk to people in those groups, they feel that they do not get what they need. They do not necessarily get listened to. And outside the company, it's very hard to get details. Microsoft has had AI ethics policies for a number of years, but we don't have a ton of specifics. And they announced recently that they're going to start publishing ... an AI transparency report, although, again, I don't know what's involved in that. When customers come to them asking for certain powerful AI tools, they have an internal group that looks at whether the customer's use case is one that should be granted access or whether it's problematic.
Again, we don't know what those criteria are. We don't know who they say yes to and who they say no to. And as we start getting into some of the problematic scenarios that were mentioned in the EU law, when we start to get into military use, potentially, more transparency is necessary: both about how the models work and about what they're being used for and who companies are reselling them to.
CHAKRABARTI: Oh, yeah. I mean, it seems to me a no-brainer that when you have these major inflection points in human history that are caused by technology, it's absolutely incumbent on us to think of new regulatory schemes. And for the companies themselves to let go of old habits. And I think we would all benefit from having more transparency, both internal to the companies and external as well. Sarah Myers West, we have 30 seconds left in this conversation, unfortunately.
I want to give you the last thought. What would you say to people listening right now? Because ultimately, you know, they're going to be logging on to their computers and phones and might be able to use these search products right now.
MYERS WEST: I think one thing that's really key to keep in our minds, especially with all of the sort of frenzy around AI, is that there's nothing about this technology that ultimately is inevitable. There is tremendous scope for us to shape the direction of the technology's future through regulation, through organizing, like we're seeing at the WGA. And I think that that's where we need to be placing our energy right now.
This program aired on June 6, 2023.