AI's influence on election 2024

A Senate Rules and Administration Committee hearing titled "AI and The Future of Our Elections" on Capitol Hill September 27, 2023 in Washington, DC. The hearing focused on what effect Artificial Intelligence can have on the 2024 election and future elections in America. (Photo by Drew Angerer/Getty Images)

First there was fake news on social media. Now, there's AI, and its power to shape American politics.

"It is offering new ways of spreading disinformation, like the audio and video content, especially, but it's mostly just turbocharging existing efforts and making it a lot cheaper and easier," Nicole Gill, co-founder and executive director at the watchdog group Accountable Tech, says.

AI has the power to make audio and videos of people saying anything their creators want.

As the 2024 presidential election approaches, several states are trying to pass laws to stop the spread of deceitful AI generated political content. But very few have been able to do so.

Today, On Point: AI and its influence on election 2024.

Guest

Darrell West, senior fellow at the Center for Technology Innovation within the governance studies program at the Brookings Institution. Author of "How AI will transform the 2024 elections."

Nicole Gill, co-founder and executive director at the watchdog group Accountable Tech.

Also Featured

Steve Simon, Secretary of State of Minnesota.

Transcript

Part I

On October 17th, 2023, Michigan State Representative Penelope Tsernoglou gave a presentation on Michigan House Bill 5141. The presentation was sparsely attended. The meeting room was largely empty. And that's too bad. Because Representative Tsernoglou drew back the curtain on one of the biggest new forces working its way into American politics.

She started with this.

MR. BEAST DEEPFAKE: If you're watching this video, you're one of the 10,000 lucky people who will get an iPhone 15 Pro for just $2. I'm Mr. Beast, and I'm doing the world's largest iPhone 15 giveaway. Click the link below to claim yours now.

CHAKRABARTI: Now if you don't know who Mr. Beast is, ask your kids. Because he is a YouTube superstar with more than 230 million followers.

So why did Mr. Beast have a cameo in Representative Tsernoglou's bill presentation? She explained.

REP. TSERNOGLOU: So it turns out Mr. Beast didn't actually do that video. That's not him. If you click that link, something bad probably happened. You did not get a free iPhone. So that's just one of the better examples out there.

CHAKRABARTI: It's an example of video and audio content created by AI. To be clear, it is fake. Mr. Beast himself disavowed it, even though the voice sounds like his and the words match the lip movements in the video. Mr. Beast took to social media and posted the following, quote, "Lots of people are getting this deepfake scam ad of me.

The future is going to continue to get weirder. Are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem."

So what's the connection between a deepfake video of Mr. Beast and politics? Representative Tsernoglou then played this.

JOE BIDEN DEEPFAKE: Hi, Representative Tsernoglou, it's your buddy Joe. I really like your bill that requires disclaimers on political ads that use artificial intelligence. No more malarkey. As my dad used to say, 'Joey, you can't believe everything you hear.' Not a joke. Anyway, thank you and your committee for your leadership in the drive for more democratic elections, and give your daughter a hug for me. By the way, this statement was created using artificial intelligence.

CHAKRABARTI: Okay, be honest. Look at your radio, or your smartphone, or however you're listening right now, and be honest. Tell me, until the very end there, did you think that was the real President Joe Biden? The tempo's right. So is the tone. Even the papery edge to the voice.

Also, there are the Biden-esque idiosyncrasies in there. The story about his dad calling him Joey, and heck, even malarkey got a shout-out in that AI generated content. But as Representative Tsernoglou emphasized, that was not, I repeat, not Joe Biden. It is completely AI generated, never uttered by the actual President of the United States.

TSERNOGLOU: And this audio took approximately three minutes to make.

CHAKRABARTI: Three minutes to successfully replicate a statement using the president's voice, but a statement the president never made. Now, Representative Tsernoglou was trying to make clear, fake news has already wreaked havoc on American politics and elections.

TSERNOGLOU: Imagine what easily accessible artificial intelligence can and will do, even in this election year.

AI generated content is currently indistinguishable from real life images and sounds. The threat of AI generated content to influence and deeply impact elections and voters is imminent. Michigan can take the lead in regulating misleading election related content and protecting democracy.

CHAKRABARTI: And that is what Michigan House Bill 5141 is all about. A state level attempt to regulate and identify AI generated content that could impact American elections. Now, the bill passed in November. Michigan is not the only state to do this. But is it enough? Just how much can AI positively and negatively influence our elections?

This is On Point. I'm Meghna Chakrabarti. I really am. I'm not AI just yet. And that's what we're talking about today. And we're going to start with Darrell West. He's a senior fellow in the Center for Technology Innovation at the Brookings Institution, and he joins us from Washington, D.C. Darrell, welcome to On Point.

DARRELL WEST: Thank you very much. It's nice to be with you.

CHAKRABARTI: First of all, are we already seeing use of AI, whether positive or negative in American politics last year and this year?

WEST: We are seeing a lot of use of AI in campaign communications. There have been fake videos, fake audio tapes. The United States is not unique.

We just had presidential elections in Slovakia and Argentina. There was misuse of the technology there. There were fake audio tapes alleging that one of the presidential candidates was corrupt in taking bribes. In Argentina, one opponent, a conservative, tried to turn his adversary into a Marxist, with Marxist clothes, rhetoric and so on.

And so the problem is we've reached a point where the technology can create video and audio that sounds completely authentic. Even though it is fake.

CHAKRABARTI: How much has the technology progressed since a couple of years ago? And I ask that because Representative Tsernoglou in her presentation also played a video, a deepfake video, of the actor Morgan Freeman.

The voice was pretty close to accurate. He's got such a singular, recognizable voice. There was a little mismatch in the lip movement. But that, she noted, that was from a couple of years ago. How indistinguishable from reality is it now, Darrell?

WEST: It's virtually indistinguishable from reality. And of course, that is part of the problem.

The technology has advanced considerably just in the last six months. It used to be, if you wanted to use sophisticated AI tools, you needed some degree of a technical background. Today, because the new generative AI tools are prompt driven and template driven, anybody can use them. So we have democratized the technology at the very time that American society is highly polarized.

There's a lot of extreme rhetoric, and actions are taking place all across the political landscape. People are upset. And this is like the worst time to put this type of technology in the hands of everyone, because people are expecting this to be a close race, and people have incentives to do bad things with this technology.

CHAKRABARTI: Let's play another couple of examples here, and I'm going to reiterate over and over again: This is AI generated audio. Okay, it is AI generated. It was not said by the actual person whose simulated voice you're about to hear. It's Hillary Clinton, and she is purportedly supporting Florida Governor Ron DeSantis in this AI generated audio misinformation.

So here it is.

HILLARY CLINTON DEEPFAKE: People might be surprised to hear me say this, but I actually like Ron DeSantis. A lot. Yeah, I know. I'd say he's just the kind of guy this country needs, and I really mean that. If Ron DeSantis got installed as president, I'd be fine with that. The one thing I know about Ron is that when push comes to shove, Ron does what he's told.

And I can't think of anything more important than that.

CHAKRABARTI: Now again, that is not Hillary Clinton. It's an AI generated audio sample spreading misinformation about her purported support of Ron DeSantis. Here's another example. This is from a pro-DeSantis super PAC. They used AI to create an attack ad featuring Donald Trump's voice disrespecting Iowa.

Now that was the first state, of course, to go to the caucuses in the 2024 presidential election cycle, which just happened. And again, to be clear, AI generated, and Trump did not say what you're about to hear.

AD NARRATOR: Governor Kim Reynolds is a conservative champion. She signed the heartbeat bill and stands up for Iowans every day.

So why is Donald Trump attacking her?

DONALD TRUMP DEEPFAKE: I opened up the governor position for Kim Reynolds and when she fell behind, I endorsed her. Did big rallies and she won. Now she wants to remain neutral. I don't invite her to events.

AD NARRATOR: Trump should fight Democrats, not Republicans. What happened to Donald Trump?

Never Back Down is responsible for the content of this advertising.

CHAKRABARTI: And again, that's a pro-Ron DeSantis super PAC. Darrell, we'll return to how domestic groups might use AI in the election, but recalling 2016, there was so much concern about, and evidence of, foreign use of fake news on social media. Are you also concerned about the same thing regarding AI?

WEST: There almost certainly is going to be misuse of AI by foreign entities.

So you mentioned Russia in 2016. In previous elections we've had content saying the Pope had endorsed Donald Trump, which of course never happened. When you think about the number of foreign countries that see this as a high-stakes election, several of them actually have a preferred candidate, oftentimes Donald Trump. Russia certainly would like Trump to win.

Russia's had difficulty beating Ukraine on the military battlefield, but if they can elect more American politicians who are willing to cut off U.S. military assistance to Ukraine, Putin wins the war. He has a clear stake in this election. China is obviously interested. Iran, North Korea, the Saudis, even Israel.

The stakes of this election for all these foreign countries have gone up dramatically. Many of these countries have very sophisticated technology capabilities, and a number of them have well developed propaganda operations as well. So we need to make sure that there is transparency about the use of these communications.

We want the 2024 American elections to be decided by Americans and not foreign entities.

CHAKRABARTI: But as we learned from 2016, you can't necessarily regulate your way around this. We're going to talk in a bit about the efforts at the state level to tag AI generated misinformation or disinformation, but that has limited effectiveness, doesn't it, Darrell?

Doesn't the same problem apply to AI?

WEST: We can't regulate what Russia does. We can't regulate what China does. They are outside the American borders. And even if we tried, they of course would not pay any attention to our regulations. So voters need to be aware in this campaign about the risks facing them.

If they start to hear content that seems a little off, if the voice sounds a little tinny, if the video image seems a little shady, it probably is. And so people just need to be on guard. It's going to be a very difficult next 10 months leading up to our general election. There's not a whole lot we can do about the bad actors out there who are seeking to misuse these tools in order to influence the election.

Part II

CHAKRABARTI: Today, we're talking about artificial intelligence. It's having an impact on every aspect of human life around the world. We're focusing on what it could do to American politics and American democracy today. So here's a couple more examples.

First of all, I'm gonna take you back in time a little bit and have you listen to the voice of Paul Harvey. Now, many of you might remember him. He was a much beloved and hugely popular syndicated radio broadcaster. His signature line was, "And that's the rest of the story." He had a really distinctive voice. Back in 1978, Harvey gave a speech to the Future Farmers of America.

This is real. And it was titled, "And So God Made a Farmer." And here's a bit of it.

PAUL HARVEY: And on the eighth day, God looked down on his planned paradise and said, I need a caretaker. So God made a farmer. God said, I need somebody willing to get up before dawn, milk cows, work all day in the fields, milk cows again, eat supper, then go to town and stay past midnight at a meeting of the school board.

So God made a farmer.

CHAKRABARTI: Now that was Paul Harvey in 1978. Harvey died in 2009. He's not with us anymore, but just recently an online ad was released. And by the way, the Trump campaign played this ad just before he took to the stage at many rallies before the Iowa caucuses. So here's a segment of that.

ADVERTISEMENT: And on June 14th, 1946, God looked down on his planned paradise and said, I need a caretaker. So God gave us Trump. God said, I need somebody willing to get up before dawn, fix this country, work all day, fight the Marxists, eat supper, then go to the Oval Office and stay past midnight at a meeting of the heads of state.

So God made Trump.

CHAKRABARTI: So that's the God Made Trump online video. Now, to be clear, we don't know if Paul Harvey would ever have actually willingly uttered those words, because he is dead, but all the reporting around this ad says it was most likely created by artificial intelligence. And we know the group that created it: a very pro-MAGA group that describes itself as producing politics and entertainment content online. So that's yet another example of how far this technology can go right now. Darrell West is joining us today, and I'd like to bring Nicole Gill into the conversation. She's co-founder and executive director at the watchdog group Accountable Tech. Nicole, welcome to you.

NICOLE GILL: Thanks for having me.

CHAKRABARTI: So I just want to talk a little bit more about the ease with which these things are made. Here we have an example of a person who's dead, but whose voice lives on, appearing in an AI generated ad. Is it as simple as taking that recording of Paul Harvey's 1978 speech, rewriting it a bit, and just, like, plugging it into an AI tool? And boom, out comes this new content?

GILL: In one answer, in one word, sorry: Yes. It really is. And that's the beauty and the downfall of these tools: the more content they have, the better they are. And so for someone who has spent their entire career on the radio airwaves, there are hours and hours, right? Probably hundreds of hours of footage, of audio recording.

And even if it's from the seventies, much of that type of footage has now been digitized. You can find lots of old content like that on YouTube, even. And so in some ways, the internet has really carried older content into this modern era. And with these LLMs, these AI chatbots, the more content you have of someone, the better the product is going to be. That's why they're so effective with the clips you played earlier, with Hillary Clinton, with Joe Biden, because we all know what they sound like.

And we have hours and hours of audio of them or footage of them to pull from.

CHAKRABARTI: Oh, interesting. I would say thousands of hours.

GILL: Sure. Right.

CHAKRABARTI: For some of them. So the tools can get trained on a vast amount of material and come up with these near perfect replicas.

GILL: The more material you have, the better the product is going to be.

CHAKRABARTI: Okay. And just to be clear, these tools are available to anyone? If I just went and Googled something, I could just use one of these tools?

GILL: Yeah, they are. There are limits being put in place by some of the companies, with regards to how the content can be used. OpenAI, the company behind the popular ChatGPT tool, just Monday announced that they were limiting the use of their technology in creating applications to be used for campaigning. So what that means is if I'm a candidate running for office, I can't use their technology to create maybe an app that helps me find voters that are most likely to support me. Now that's good.

That's a good first step. But the proof is in the pudding, right? And OpenAI, many of these companies, they don't have the kind of staff required to really effectively ensure that their policies are being followed. And so it's one thing to have a policy, it's another thing to enforce it.

CHAKRABARTI: Yeah. Okay. So we're going to come back to that, because there's definitely a sense that there needs to be cooperation amongst the private sector and lawmakers to come up with some kind of effective solution if it exists. Now, on that point, Darrell, I want to move back to you because we started this hour with the example of a bill that was passed in Michigan.

Now there are other states trying to figure out a regulatory way to curb the use of harmful AI content in politics, California, Texas, Minnesota, Michigan, Washington state, at least. What are the kinds of things that they're putting into these bills that they're trying to get passed?

WEST: There are a number of different things taking place, both at the state and the federal level. First of all, these bills require greater transparency: if people use AI generated content in campaign communications, there needs to be disclosure, in the same way that we have disclosure of campaign contributions and of who paid for particular campaign ads. But the real issue is going to be the harmful impact on voters. Does propaganda actually work? Do fake videos work? If people are fed a false narrative, are they going to be persuaded?

Most of the bills do not get into the issue of harmful impact, just because it's very hard in an election campaign to regulate that, given Supreme Court decisions basically equating campaign communications with freedom of speech. But the legislation that is moving forward is a good first step, in the sense that at least if we have transparency requirements, we will know when AI generated content is being used to try to influence us.

CHAKRABARTI: Okay, but tell me a little bit more about that, because that requires the group or person making it to identify themselves, right? In order for the transparency to work. As you mentioned earlier, there's nothing that would force the foreign actors to identify themselves.

But don't we have that same problem here in the United States? Because it's not just super PACs, right? We're talking about even, let's say, grassroots-level AI generated content that can get out there.

WEST: Yeah, that is certainly a problem. And it's interesting you highlighted the Paul Harvey example and how Trump is using 'God Made Trump.' The Lincoln Project, which is an anti-Trump group, actually just put out their response ad, and it's entitled 'God Made a Dictator.'

And it basically incorporates Trump images with other prominent dictators in history, as well as around the world. So we are seeing a kind of a response already on that front, but it is an issue just in the sense of how you identify the use of AI generated content in campaign advertising. Some groups are voluntarily disclosing, many are not.

Technology may end up becoming part of the solution, in the sense that there are algorithms that are now seeking to be able to identify the use of AI generated content in campaign communication. So even if the sponsor does not admit they have used that type of content, there may be ways to detect the use of that, with or without them.

CHAKRABARTI: I understand there are also some punitive measures in some of these bills, fines, possibly even jail time. Do you think that will be enough of a deterrent, Darrell?

WEST: It certainly helps. As in other areas where companies, organizations, or individuals misbehave, we fine them. And sometimes that is a sufficient deterrent.

But the problem in an election setting is that the enforcement of those types of infractions comes after the fact. We can't wait until after the election to fine somebody $25,000 because they misused AI in a campaign communication. By then people have made up their minds, they've voted, and somebody has won the election.

So civil enforcement of that sort is useful, but it's probably not going to help us in terms of this particular election.

CHAKRABARTI: Nicole Gill, let me turn back to you. What do you think about this legislation, a noble attempt, but with possibly limited impact on this problem?

GILL: Yeah, it is limited in the sense that we have five states so far, right?

That have taken these steps. And they are for the most part bipartisan, which is encouraging. There has also been some action at the federal level, but let's remember that because of the way that elections work in the United States, any federal legislation would only relate to federal elections, and state laws to state elections.

And you have this inconsistency across states, and also between the federal and state level, that can make enforcement a bit challenging and creates loopholes. And so I think that this is a good first step, but I also think we should look to the companies who are creating these tools to take some responsibility here and be a part of the solution.

CHAKRABARTI: Yeah, we're going to talk about that, because it's a really important part of the overall picture here, but you mentioned some action at the federal level, Nicole. On September 27th, there was a Senate hearing held on the use of AI in elections. And it was also a chance to debate possible safeguards against AI deceiving voters.

And Ari Cohn of Tech Freedom was one of the speakers, and he warned lawmakers while he was testifying that legislation that was too restrictive could actually be harmful.

ARI COHN: Reflexive legislation prompted by fear of the next technological boogeyman will not safeguard us. Free and unfettered discourse has been the lifeblood of our democracy and it has kept us free.

If we sacrifice that fundamental liberty and discard that tried-and-true wisdom that the best remedy for false or bad speech is true or better speech, no law will save our democratic institutions. They will already have been lost.

CHAKRABARTI: Now, Nicole and Darrell, and Nicole, I'll start with you first. This is a very important counterargument here, because it really does go to another fundamental aspect of our democracy: that ideally the federal government should not be regulating speech. And also, how hard it is to determine what harmful speech is, right?

Because what's the harm that we're defining here? It does seem to me that some of these bills could run up against that wall. How would you respond to that?

GILL: When it comes to elections, there are some non-content-related actions that can be taken in order to limit the spread of disinformation and some of the harms from these tools, and I would look to those rather than anything that regulates speech. I also think that a lot of these laws are being designed based on what we know about the technology right now, and we're taking as much as we can into account.

But ultimately, I'm more worried about what happens in the days leading up to the election than about what we can think of right now and proactively regulate. And that's the harm when it comes to these tools: they're cheap and they're fast. And so it's quite easy to come up with content that can affect an election, and do it quite quickly, and have an effect.

CHAKRABARTI: Yeah. So an AI generated October surprise kind of thing.

GILL: Exactly.

CHAKRABARTI: So we will definitely come back to that. But Darrell, I want to stay with this issue of the potential downsides, and even possibly anti-democratic aspects, of laws surrounding AI and communications during elections. Just let me use a very analog metaphor here.

Take, first of all, the idea of slowing down the virality of AI generated political content. That is definitely something we will explore, how that might be possible with social media and tech companies. But my analog metaphor: isn't that kind of like the government telling a newspaper, you can't put out a daily, but you can put out a weekly? No one would really stand for that, Darrell.

WEST: Absolutely. And that type of provision is never going to pass legal muster. But on the freedom of speech argument: all of us support freedom of speech, but we've never had unlimited freedom of speech. You cannot yell fire in a crowded theater, because it creates harms to other individuals. You cannot advocate violence. You cannot engage in illegal activities. You can't use your voice to engage in hate speech. There are all sorts of limits in the non-digital world that we have already accepted as a country.

Companies cannot engage in fraudulent advertising; they get fined for consumer fraud in that situation. So my argument is, we've litigated freedom of speech cases for decades. We actually have rules of the road in that area. We just need to apply those rules to the digital space. Right now, there are no guardrails. There are no rules in that area.

It's a wild west, anything goes. That creates a lot of dangers for us. We know that we are facing choices that are very fundamental in this election, perhaps even the future of American democracy. My greatest fear is this election gets decided based on disinformation.

CHAKRABARTI: But let me push on this a little bit, because lies may be odious, but they're not illegal.

And in the examples that you put out there, the most easily graspable one is that you can't yell fire in a crowded theater. The harm is pretty well defined, right? The harm is causing panic; people might get injured running out of the theater. With campaign or election misinformation and disinformation, what is the harm?

It might be shaping what people believe, but lots of forms of advertising do that. And ultimately people are casting votes. That's a perfectly legal and desirable thing. What is the defined harm that might justify curbing AI generated speech, Darrell?

WEST: There are defined harms in the election area. For example, it is illegal to buy or sell votes. That is illegal all across the country, and yet there are websites that have been doing that. We should take down those websites. That is perfectly legal, and it is not a restriction on freedom of expression on the part of those individuals, because they are advocating illegal behavior.

The election is going to be on a Tuesday in November. You can't go around telling people, "Oh, we changed the election date. It's actually going to be on Thursday," while targeting Black voters with that message, knowing it would harm Democrats. That type of thing actually is illegal as well.

So there are a number of defined harms, and we just need to apply those rules to the digital space. Because we already have them in the non-digital world.

Part III

CHAKRABARTI: Now, to be fair, I've been focusing almost entirely on the negative impacts that AI could have, but I just want to give a little bit of voice to some of the positive impacts that campaigns are seeing with AI, because it's not just disinformation that AI can make.

There are lots of different tools: data analytics, for example, and also ways to reach out to more voters, perhaps even in their native languages. AI is also an easy way to give voters more of the information they ask for. And here's one example. Shamaine Daniels is a Democratic House candidate in Pennsylvania.

And the Daniels campaign has launched an AI volunteer caller named Ashley. Now this AI tool, Ashley, calls voters and interacts with them about her campaign. And Shamaine Daniels said the tool is being used to better access a greater diversity of voters who usually are harder to reach. So here's an example of Ashley, the AI tool, speaking with a campaign volunteer.

ASHLEY: Hey there. I hope you're doing well. My name is Ashley, and I'm an artificial intelligence volunteer for Shamaine Daniels' run for Congress in Pennsylvania's 10th District, calling on a recorded line. May I ask who I'm speaking with?

CASSANDRA: This is Cassandra.

ASHLEY: Hi, Cassandra.

Great to meet you.

Are you familiar with Shamaine Daniels and her campaign for Congress?

CHAKRABARTI: Okay, so that's actually an AI tool that Shamaine Daniels, a House candidate, a Democratic House candidate in Pennsylvania, is using to reach a greater diversity of voters. Darrell, just talk to me for a minute about this other side of AI.

With the data heavy and contact dependent campaigns that we have now, there could definitely be a positive use here.

WEST: Ashley is a great example of a positive use of AI. A chatbot like that can answer the basic questions people often call up candidates with, or ask them on the campaign trail: What's your position on abortion?

What do you think about Ukraine? How are you going to handle budget issues? A chatbot can provide basic material to people in ways that are very accessible. They're interactive tools. You can have a conversation with them in the manner that you just illustrated. But there are also other positive uses. One way in which AI can be helpful is in starting to level the playing field between wealthy and less wealthy organizations.

It used to be that to play in the field of campaign communications, you needed a lot of money and a large professional staff to design video ads and so on. Today, AI can do that. So we're bringing powerful and very accessible tools to organizations that may not have a lot of money and may not have a large staff. That can be helpful.

It's not that we're repealing the role of money and the inequities that exist in our system, but it offers the potential to start to help. You also can target voters. If you're interested in a congressional race or a gubernatorial race, you may have a 10% undecided vote. You need to figure out how to reach those people who are undecided; AI can help with that. And then, as you mentioned, language translation: being able to reach out to people who are non-native, non-English speakers. AI can help with translation functions.
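To make the chatbot idea concrete, here is a minimal sketch of the kind of question-and-answer loop West describes. It is illustrative only: a real tool like Ashley pairs speech recognition with a generative model, and every topic, answer, and function name below is a placeholder assumption, not anything from the Daniels campaign.

```python
# Illustrative sketch only: a toy campaign Q&A responder in the spirit of
# the "Ashley" volunteer caller described above. Real systems pair speech
# recognition with a generative model; this keyword matcher just shows the
# basic retrieve-and-answer loop. All topics and answers are placeholders.

POSITIONS = {
    "abortion": "Placeholder for the candidate's statement on abortion.",
    "ukraine": "Placeholder for the candidate's statement on Ukraine.",
    "budget": "Placeholder for the candidate's statement on budget issues.",
}

def answer(question: str) -> str:
    """Match a voter's question to a stated position, if one exists."""
    q = question.lower()
    for topic, statement in POSITIONS.items():
        if topic in q:
            return statement
    return "I don't have that on file, but the campaign can follow up."

if __name__ == "__main__":
    print(answer("What's your position on Ukraine?"))
```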

CHAKRABARTI: Yeah. Never in my life did I think that I'd be living in a time where the fictional universal translator from Star Trek actually exists in our real lives.

A very powerful tool there. So let's go back now to state-level efforts, because then I want to connect those efforts, Nicole, with the actual tech companies themselves. We spoke with Steve Simon, secretary of state of Minnesota, and he said his office is already preparing for how AI could influence November's presidential election. And once again, it really makes sense on the state level, because that is where the votes are cast. Just last week, dozens of election officials from around Minnesota met with members of the Department of Homeland Security, the FBI, and law enforcement for training on cybersecurity.

STEVE SIMON: And we had everyone in the room for a day, and we analyzed multiple scenarios, including AI driven scenarios that might reveal weaknesses in our system, whether it's individual systems at the local level, whether it's state systems, whether it's communications gaps between our office and the offices of our partners at the local level.

CHAKRABARTI: Now, Secretary Simon expects AI generated misinformation to spread as the election approaches. His goal is to make sure that real information is also out there and easily accessible.

SIMON: If you have questions about how it operates, who is eligible to vote, how a person can vote, how a person should get registered, what the criteria are for registration, all of that, go to a trusted source.

Seek out trusted information. We'd like to think it's secretaries of state, and we all have really good, strong websites that are one-stop shops. Don't give in to the temptation to rely exclusively on what's in your social media feed or what friends and neighbors tell you. That doesn't mean it's automatically unreliable.

It just means don't rely on it exclusively. Go to those trusted sources if you have questions about how the system really operates.

CHAKRABARTI: So that's for real information or reality, but regarding AI misinformation, last November, Minnesota passed a law to help crack down on AI generated misinformation in elections.

SIMON: It is unlawful within 90 days of an election to post an AI driven depiction, audio or video, if it's without the permission of the subject and if it's intended to influence the outcome of an election. We'll see how that goes in this election cycle in 2024. But that's one example. Purveyors of this information are on notice that within 90 days, there is now a statute in place in Minnesota that says you can't do that.

CHAKRABARTI: And if found guilty, violators could face up to five years in prison or $10,000 in fines. Now, interestingly, Secretary Simon says he does not see AI as a new threat, but more of an improved way to amplify the existing threat of misinformation.

SIMON: And so seen within that context, it doesn't mean reinventing the wheel.

It means being wary, and it means being precise, and it means being on guard. But many of the same trusted strategies to combat disinformation will have to be in play here. Namely stand up and speak out. Say what the truth is, lead with the truth, be as proactive as you can, be as open and as transparent as possible.

CHAKRABARTI: So that was Minnesota Secretary of State Steve Simon. Nicole Gill, regarding when he said it's not really new, it's not really reinventing the wheel. Of course, that brings my mind back to what we learned from misinformation spreading on social media and companies and their willingness or unwillingness to do something about it.

So just tell me quickly. How would you grade Facebook, well now Meta, Twitter, et cetera, in terms of the efforts they put in to regulate misinformation in previous elections?

GILL: I think we can look at history. And the reporting of history shows that there's a lot more they could have done. Of course, 2016 and Facebook's infiltration by Russian disinformation campaigns is the first instance that comes to mind. But every platform has had issues in relation to elections. And as they've also cut back on staff that oversee these types of issues, we're sure to see more, right?

The technology is getting better, and the staff are fewer and further between.

CHAKRABARTI: How much would you say they're doing now regarding the AI threat?

GILL: I think they're doing the minimal amount possible to look like they are doing something without having to really expend a ton of energy. And I say that because what we've seen is a lot of policy releases. We've seen, tech companies love blog posts. And they've all started to release policies about how they'll regulate AI with regards to elections. What we haven't seen is a real investment in the type of trust and safety teams that are needed in order to ensure that those policies are actually working and that they are enforcing them.

CHAKRABARTI: Darrell West, Facebook would say in response that in the 2016 era, they did have a civic and elections integrity team that was rather large. I will note that team got dissolved by Facebook, but Mark Zuckerberg has since said that those folks are now integrated into the other aspects of Facebook's business.

So they're still keeping up or looking after the public's interest regarding misinformation. Is that adequate, Darrell?

WEST: No. These companies need to be doing a lot more than what they are doing. As Nicole pointed out, they have cut staff. They actually used to invest much more in human overseers who would look at content and take things down if they thought it was illegal or unethical.

It's great to put out policy principles explaining what you want to do, but you need humans to actually help oversee and implement. Algorithms can identify some types of problematic content, but they are not good enough that we can rely on them alone. You need humans who can actually look at things, look at the context, see what is being said, what is being done, or whether images are being misused.

Are there fake videos and fake audio tapes there? Humans need to be the ones who make these decisions. And the companies seem to be pulling back from the content moderation role that they themselves acted on in 2020. I think in 2020, they took that more seriously. Now they're just worried about all the controversy, all the divisiveness, American polarization.

They don't want to be in the middle of it. And so they are giving up some of their responsibility and the result is going to be a tsunami of disinformation in 2024.

CHAKRABARTI: But Nicole, doesn't relying on human-based content moderation as one of the major tools here introduce a major problem, in that content moderation happens after the fact most of the time?

It relies on those posts or pieces of information being flagged by users. And because of that, it does nothing to solve the velocity problem, right? Or how fast misinformation spreads through these networks.

GILL: So something that Darrell just touched on is that the companies could be using AI technology in order to flag instances of virality.

And so something that we've called for, and a number of researchers have called for, is something called a circuit breaker, like the ones used in financial markets. It would trigger when there are unusual levels of activity or virality around a post, often from someone you might not expect to get such a reaction.

Say, me posting versus Hillary Clinton posting, and I'm getting hundreds of thousands of re-shares and likes. You could have a circuit breaker system that would put a temporary pause in place and stop that type of post from being reshared while a content moderator takes a look and ensures that there's nothing funny going on.

And so what I'd hoped we would have been seeing this year is the companies using these technologies in order to help them better police their own systems, really.
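To make the circuit breaker concrete, here is a minimal sketch of how such a pause might work, assuming a platform can compare a post's share velocity against the author's historical baseline. The 10x threshold and every name below are illustrative assumptions, not any platform's actual system.

```python
# Minimal sketch of the virality "circuit breaker" Gill describes, assuming
# a platform tracks each author's typical share rate. The 10x threshold and
# all names here are illustrative assumptions, not any platform's real API.

import time
from dataclasses import dataclass, field

@dataclass
class Post:
    author_id: str
    author_baseline: float  # author's typical shares per hour
    shares: int = 0
    paused: bool = False
    created_at: float = field(default_factory=time.time)

def queue_for_review(post: Post) -> None:
    # Stand-in for a real content-moderation queue.
    print(f"Post by {post.author_id} paused pending moderator review")

def on_share(post: Post) -> bool:
    """Record a share; return False once the breaker has tripped."""
    if post.paused:
        return False
    post.shares += 1
    hours = max((time.time() - post.created_at) / 3600.0, 1e-6)
    if post.shares / hours > 10 * post.author_baseline:
        post.paused = True  # pause resharing until a moderator reviews it
        queue_for_review(post)
        return False
    return True
```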

CHAKRABARTI: Yeah, so you're right, we've seen it in financial markets. The New York Stock Exchange has actually halted trading when those unusual, high-volume trades happen.

Okay, so that's a great example. But Darrell, you heard Nicole use the word could. The companies could be doing this, and she had hoped, meaning that they're not doing it, or at least they're not using these tools adequately. It seems like the obvious reason is that they have no financial incentive to do so, right?

I know that there are ideas and tools in these social media companies that could cut velocity of misinformation by, I don't know, 20%, 30%, 40% even, but they just won't use them because there's a potential loss of a tiny amount, like a minuscule fraction of engagement, aka money. In the absence of regulation forcing these companies to use those tools, how else can we incentivize them to do it?

WEST: The problem is, as you point out, we have very bad financial incentives right now: the more engagement on social media sites, the more advertising revenue. And I think this is a big problem at Twitter right now. We know that a number of mainstream advertisers have pulled or slowed down their advertising because they're worried about the kinds of content that is appearing there.

And so the problem is, Musk's response has been to basically open the floodgates and, you know, allow a wide range of content. There's racist content, there's antisemitic content, and so that is a very dangerous situation. I worry about the situation in October, as we're coming to the closing days of this campaign. Everybody expects it to be a close race.

It'll probably come down to four or five states, maybe 50,000 or 100,000 votes in those states. Is there going to be some surprise message of a fraudulent nature that twists that election and ends up deciding this election based on disinformation? I view that as a catastrophe, if that is what happens.

CHAKRABARTI: But again, the Elon Musk and Twitter slash X example is a good one. Because in order to come up with the right regulation, it has to comport, Darrell, with what you were talking about earlier, about existing laws around how it's legal to curb speech, or even the speed of communication. Nicole, we've got about 30 seconds left.

I'm thinking of all the listeners who are wondering what can they do just as consumers of this information to try and be aware of whether or not it's AI generated. Is there anything?

GILL: Be a critical consumer. Similar to what Darrell was talking about earlier. Question the sources. Question where you're getting information from.

Triple check information.

By all means, please use the secretary of state and federal government official websites when you're looking for official information about the elections.

This program aired on January 17, 2024.

Paige Sutherland Producer, On Point
Paige Sutherland is a producer for On Point.

Meghna Chakrabarti Host, On Point
Meghna Chakrabarti is the host of On Point.
