Reality wars: Deepfakes and national security

A visitor watches an AI sign. (Josep Lago/AFP via Getty Images)

A prominent Kremlin critic says the Russian government invited him to a Zoom call that turned out to be a deepfake.

But what happens when governments start using deepfakes against each other?

"The U.S. government is cognizant that that senior leaders, political elected officials and the like, might have their images and likenesses manipulated," Jamil Jaffer, founder and executive director of the National Security Institute at the Antonin Scalia Law School at George Mason University, says.

"So, one would assume that we're looking into how to affect the views of foreign audiences. In fact, it would be stupid if we weren't doing it."

Today, On Point: 'The reality wars': Deepfakes and national security.

Guests

Hany Farid, professor at the University of California, Berkeley’s schools of information and electrical engineering and computer sciences. He specializes in digital forensics, generative AI and deepfakes.

Jamil Jaffer, founder and executive director of the National Security Institute at the Antonin Scalia Law School at George Mason University. Venture partner with Paladin Capital Group, which invests in dual-use national security technologies.

Also Featured

Bill Browder, head of the Global Magnitsky Justice Campaign.

Wil Corvey, program manager for the Semantic Forensics (SemaFor) program at the Defense Advanced Research Projects Agency (DARPA), which aims to develop technologies to detect and analyze deepfakes.

Transcript

MEGHNA CHAKRABARTI: Bill Browder was in London watching helplessly. It was late 2008. His friend and lawyer, 37-year-old Sergei Magnitsky, had been arrested and thrown into Moscow's Butyrka prison.

BILL BROWDER: I mean, I can't even describe how upsetting it is to have somebody who works for you taken hostage because there's not a moment that you can feel happiness or relaxation or anything. Because you just know that while you're in your own bed, he's sleeping on a stone cot. While you're taking a shower, he's not allowed a shower. You know, while you're sitting in a warm room, he's sitting in a room nearly freezing to death.

CHAKRABARTI: This is Browder telling the story to the independent media company London Real. Up until 2005, Bill Browder had been a hedge fund manager who worked in Moscow and was among the largest private investors in all of Russia. But then his lawyer Sergei Magnitsky found evidence implicating Russian officials in massive corruption, and in having connections with the Russian mafia. Magnitsky was jailed, held for more than 350 days without trial and killed. Cause of death: blunt trauma to the head.

BROWDER: When he died, when they killed him, it was so far outside of my own expectations of the worst-case scenario. I couldn't even process that. It was just so horrible. Well, I processed it the only way I knew how, which was to take responsibility, to go after the people that killed him.

CHAKRABARTI: Bill Browder pushed hard. He has constantly advocated for sanctions against Russia. And in 2012, he was instrumental in Congress's passing of the Magnitsky Act, which bars Russian human rights abusers from entering the United States. Browder is also one of Russian President Vladimir Putin's most forceful critics.

BROWDER: He is truly one of the most cynical, aggressive, evil dictators on the planet. He's a killer. And as a result of being his enemy and as a result of his homicidal tendencies, I've had to adjust my life very profoundly. I'm still here, which is a good thing.

CHAKRABARTI: As Bill Browder says, as a result of his constant criticism of Vladimir Putin, he has had to protect every aspect of his life: his physical safety, his financial safety, even his digital safety. Browder told us he's always on guard against any way, in real life or online, that Putin might get to him.

But he's also still criticizing the Russian regime, and most recently, he's been vocally supporting sanctions against Russia for its attack on Ukraine. So just a few weeks ago, Browder told us he wasn't surprised at all to get an email that seemed to come from former Ukrainian President Petro Poroshenko, asking if Browder would schedule a call to talk about sanctions.

BROWDER: And so that seemed like a perfectly appropriate approach. The Ukrainians are very interested in sanctions against Russia. And so, I asked one of my team members to check it out, make sure it's legit, and then schedule it. I guess in the rush of things that were going on that week, this person didn't actually do anything other than call the number on the email. The person seemed very pleasant and reasonable. The call was scheduled, and I joined the call a little bit late.

I'm on like 10 minutes after it started because of some transportation issues, and apparently before I joined there was an individual who showed up on the screen saying, I'm the simultaneous translator. I'm going to be translating for former President Poroshenko. And there's an image of Petro Poroshenko as I know him to look. And he starts talking. It was odd because everybody else, as they were talking, you could see them talking.

And he was talking, and there was this weird delay, which I attributed to the simultaneous translation. It was as if you're watching some type of foreign film that was dubbed. So, you know, you're watching the person's lips move, and it doesn't correspond with the words coming out of the mouth. Then it started getting a little odd. The Ukrainians, of course, are under fire, under attack by the Russians. And this fellow who portrayed himself as Petro Poroshenko started to ask the question, "Don't you think it would be better if we released some of the Russian oligarchs from sanctions if they were to give us a little bit of money?"

And it just seemed completely odd. And I gave the answer which I would give in any public setting. And I said, "No, I think the oligarchs should be punished to the full extent of the sanctions." And then he did something even stranger, which is he said, "Well, what do others think on this call?" And that's a very unusual thing. If it's sort of principal to principal, people don't usually ask the principal's aides what they think of the situation.

But my colleagues then chimed in and said various things, and I didn't think that it wasn't Poroshenko. I just thought, what an unimpressive guy. All these crazy and unhelpful ideas he's coming up with. No wonder he's no longer president. That was my first reaction. And then it got really weird. And as the call was coming to an end, he said, "I'd like to play the Ukrainian national anthem, and will you please put your hands on your heart?"

And again, we weren't convinced it wasn't Petro Poroshenko. And so, we all put our hands on our heart. Listening to the Ukrainian national anthem, I had some reaction that maybe this wasn't for real, but there he was this Petro Poroshenko guy. Then the final moment that I knew that this was a trick was when he put on some rap song, in Ukrainian, that I don't know what it said. And asked us to continue putting our hands on our hearts. And at that point, it was obvious that we had been tricked into some kind of deepfake.

Well, this was done by the Russians. Why would the Russians do this? Well, the Russians have been trying to discredit me for a long time, in every different possible way. And I think what they were hoping to do is to get me in some type of setting where I would say something differently than I had said publicly.

I've been under attack. Under death threat, under a kidnapping threat by the Russians since the Magnitsky Act was passed in 2012. And so the fact that they've actually penetrated my defenses is very worrying. The fact that we didn't pick it up is extremely worrying. And I think thankfully, I mean, in a certain way, this is a very cheap lesson. Because nobody was hurt, nobody was killed, nobody was kidnapped. You know, we all just looked a little stupid. And I'm glad they taught me this lesson because since then, we've dramatically heightened our vigilance and our security. Maybe we've just gotten too relaxed, but we aren't anymore.

CHAKRABARTI: Bill Browder, a prominent critic of the Russian government. Now, Browder also told us that he and his staff finally confirmed that the call was indeed a deepfake when they took a much closer look at where the email, the message supposedly from Poroshenko, had come from. Turns out they traced the email back to a domain in Russia that had only recently been created.

So, Browder's experience raises the question once again about what happens when deepfakes move from the realm of making a celebrity appear to say something they never said, and into the realm of governments using deepfakes against each other. Well, that's what we're talking about today. And joining us now is Hany Farid. He's a professor at the University of California, Berkeley's schools of information and electrical engineering and computer sciences. And he specializes in digital forensics, generative AI and deepfakes. Professor Farid, welcome to you.

HANY FARID: Good to be with you again, Meghna.

CHAKRABARTI: So how emblematic would you say Bill Browder's story is of the kinds of uses that we might see of deepfakes in the national security sphere?

FARID: First, that's a chilling story. I'm also not surprised to hear it. We have been seeing over the last five years the deepfake generative AI technology continue to improve in quality. And the democratization of the technology, that is, it's not just state sponsored actors, but it's anybody. And what's particularly chilling about this example is it's only fairly recently that we've seen live deepfakes. It's one thing to go to YouTube or TikTok and say, okay, somebody has offline created a deepfake.

But this is happening in real time now over a video call. And I think this is yet another problematic world we are entering where we can't believe what we read and see online. We can't believe the Zoom calls. We can't believe the phone calls. And the question you got to ask yourself is, "How do we get through the world? How do we get through the day?" And I think that is a really concerning aspect of deepfakes, is now everything is suspect. You got to know that on every call Bill Browder gets on, there's going to be this nagging suspicion of like, "Is this happening again?" And that's a tough world to enter into.

We have been seeing over the last five years the deepfake generative AI technology continue to improve in quality.

CHAKRABARTI: Okay. Wow. So can you just tell me a little bit more about what's allowed in the past couple of years for the deepfakes to get so much better, so much more convincing?

FARID: Yeah, there's a couple of things going on with the deepfakes technology. So first of all, there's just a lot of people out there developing really powerful algorithms that are faster and create higher quality. So there's just a big body of literature, both academic and in the private sector. We have more and more data. There's more and more images and videos of the people that you want to create deepfakes of.

And of course, we have more and more computing power. Computing power is becoming more ubiquitous and easier to get a hold of. And so it's the natural evolution of almost every technology that we have seen over the last two or three decades. The technology gets better, it gets faster and it gets cheaper and it gets more ubiquitous. And the deepfakes are following that same basic trend.

CHAKRABARTI: And so, I mean, obviously, we've talked a lot about deepfakes in sort of the commercial and social media sphere, social media being the way that these things go viral, of course. But our focus today is on national security. So do we already have evidence beyond, you know, Browder's one experience that governments are perhaps using deepfakes as a means to undermine other countries in various ways?

FARID: This is not the first example we have seen of the Russians using deepfakes. We saw one in the early days of the invasion of Ukraine, where they had created a deepfake of President Zelenskyy saying, "We surrender, put down your weapons."

The mayors of Madrid, Vienna and Berlin each separately had a Zoom call, very much like Bill Browder's, where they thought they were talking to the mayor of Kyiv. And in fact, it was a deepfake. Our very own chairman of the Fed was on a phone call a few weeks ago with someone he thought was President Zelenskyy. And it was not. It was a deepfake. So we are seeing this weaponization impacting global leaders around the world.

CHAKRABARTI: Today, we're talking about the use and threat of deepfakes when governments deploy them against each other, so the potential national security threats of deepfakes. And Professor Farid, just before the break, you had talked about an example of Ukrainian President Volodymyr Zelenskyy, his voice being deepfaked. And I want to actually just walk folks more specifically through that example.

So this happened back in March of 2022, and it was apparently a video of Zelenskyy telling his soldiers to lay down their arms and surrender to Russia. The deepfake video in total is about a minute long, and it circulated on social media quite extensively. We're going to play the deepfake in just a second. But first, I wanted people to once again hear Volodymyr Zelenskyy's real voice. Now we can confirm this is really him. ... Because you're about to hear a moment from a speech Zelenskyy gave before a joint session of the United States Congress in December of 2022. So here's what he sounds like, and this is Zelenskyy's real voice.

ZELENSKYY: Dear Americans, in all states, cities and communities, all those who value freedom and justice, who cherish it as strongly as we Ukrainians. In our cities, in each and every family, I hope my words of respect and gratitude resonate in each American heart.

CHAKRABARTI: So once again, that's Volodymyr Zelenskyy in December of 2022, when he spoke before the United States Congress. So here's a really short clip of the deepfake video of Zelenskyy that appeared earlier that year. It's in Ukrainian. So it's going to sound different from Zelenskyy speaking in English. But here's what that deepfake sounded like.

(CLIP OF ZELENSKYY DEEPFAKE)

CHAKRABARTI: So we just wanted to play only a few seconds of that. Professor Farid, does the technology exist right now to quickly be able to tell the difference?

FARID: Yeah, that's the right question to ask. First, let me mention that there are three ways of creating fake audio. One is you just have an impersonator, somebody who's just good at impersonating them. Two is that you clone the voice from just a few minutes of audio. So I can, for example, upload a few minutes of audio of you, Meghna. And I can clone your voice, and then I can type, and then it will synthesize an audio of you saying what I want you to say. Let's consider that an offline process.

And three, there's also real-time voice cloning where, as I'm speaking, with about a half-second delay, it will be converted into another person's voice. Your voice, President Zelenskyy's voice, whoever. And so those are the three threat vectors. And now the question you want to ask is, can we detect it? And the answer is yes. But six months from now, who knows? This is very much an adversarial game. The technology is constantly changing. And so we build defenses. That's what we do here in my lab at UC Berkeley. And usually, we get 6 to 12 months of defense, and then a new technology comes out and we have to build another defense and then another defense.

And it's very much that cat-and-mouse game, an arms race. And it's very difficult because my starter gun goes off after my adversary has already released their offensive weapon. And so I'm always playing catch up by design. So it's a very hard task. But here's what's also hard. It's a big Internet. There are billions of uploads a day and we can't analyze every single piece of content. I can't be on a private call between Bill Browder and whomever he's speaking with. So even the defenses are not enough. They're necessary, but they're not enough to solve the problem in its entirety.

CHAKRABARTI: So, you know, a little bit later in the show, I'm going to return to this "What can we do?" question. Because as you noted, so much has changed in the past few years and so much will change even in the next six months that the cat-and-mouse game is almost infinite here. But I wanted to also take a moment to understand with greater clarity the kinds of uses for these deepfakes when it comes to national security. The wartime use is one of them, obviously. But we've seen other examples. For example, I believe there was an image or a video that was faked of the Pentagon recently that actually had an effect on the stock market. Can you tell us about that?

FARID: So this was just a couple of weeks ago. There was a not very good fake image purportedly showing the Pentagon being bombed. And it was absolutely AI generated. There were the telltale signs that we could see. That image was posted on Twitter on a verified account, so that blue checkmark. Thank you, Elon Musk. And went viral. It was retweeted by, wait for it, RT, the Russian government's propaganda machine. And the stock market dropped half a trillion dollars in a period of 2 minutes. That's insane.

CHAKRABARTI: And we can directly link it back to that image?

FARID: Yeah. Yeah. The tick-tock, you can look at the timing of how the image went up, the reaction of the market. It plummeted. The image was debunked, the market rebounded. And the thing that's fascinating about it was it [was] not a particularly good deepfake. And it was a confluence of things. It was the fact that it was a verified account that looked like Bloomberg News, that it was retweeted by a number of different outlets, and people move fast on social media. And so that's the other aspect of this, is before anybody thought about this problem, there was the sell-off.

Now, look, everything rebounded. It was fine. But did somebody make billions of dollars in that dip? We don't know. But you know somebody's looking at that reaction. Thinking, well, if one image can drop the market, surely another one can. So is somebody going to try to manipulate our markets using simple fake images? Probably. And that's, I think, something that we need to start taking very seriously.

CHAKRABARTI: Well, so in this example, though, isn't part of the problem not just on the side of the proliferation of the deepfake. But also, I mean, if the markets respond so quickly, part of that must be because of all the automated trading that goes on. The system itself has some weaknesses.

FARID: This is a fascinating question because I don't think we've done the full postmortem. But there is a scenario here where an AI-generated image caused predictive algorithms to respond to the market. And what a weird world we're living in now where AI's manipulating AI. But yes, I think there was almost certainly some role of automated trading here, which panicked when it saw what it thought was breaking news on Twitter about a bombing at the Pentagon.

What a weird world we're living in now where AI's manipulating AI.

CHAKRABARTI: Okay. Well, the reason why I bring that up is because, again, in a few minutes, I want to explore more deeply how it's not just the deepfake itself. It's the environments that those deepfakes are deployed into that also have to be strengthened when it comes to economic and national security.

So hang on here for just a second, Professor Farid, because I want to walk through a couple more examples of how other experts see the threat to national security when it comes to synthetic media or deepfakes. So we have a moment here from a June 2019 House Intelligence Committee hearing on national security challenges of deepfakes. And by the way, your research was actually quoted quite extensively at this hearing, but we're about to hear Clint Watts from the Center for Cyber and Homeland Security at George Washington University. He described to the House committee some of the national security threats he sees presented by manipulated media.

CLINT WATTS: Deepfake proliferation presents two clear dangers. Over the long-term, deliberate development of false synthetic media will target U.S. officials, institutions and democratic processes with an enduring goal of subverting democracy and demoralizing the American constituency. U.S. diplomats and military personnel deployed overseas will be prime targets for deepfake disinformation conspiracies planted by adversaries. Three examples would be mobilization at the U.S. Embassy in Cairo, the consulate in Benghazi, and rumors of protests at Incirlik Air Base. Had they been accompanied by fake audio or video content, they could have been far more damaging.

CHAKRABARTI: So Clint Watts there describing how deepfakes could have an impact politically on national security internally to the United States and also have an impact on U.S. interests abroad. Well, joining us now is Jamil Jaffer. He's founder and executive director of the National Security Institute at the Antonin Scalia Law School at George Mason University. And also a venture partner with Paladin Capital Group, which invests in dual use national security technologies. Jamil Jaffer, welcome to you.

JAMIL JAFFER: Thanks for having me, Meghna.

CHAKRABARTI: Okay. So just want to get a quick sort of temperature check from you. How concerned are you about the use of deepfakes or synthetic media, as they're more broadly called, as a potential threat to U.S. national security?

JAFFER: Well, look, I think we should all be concerned. And I think, you know, Professor Farid has laid out some great examples of where this can go wrong, how these tools can be utilized by nation states and by a broader audience of folks to generate, you know, concerns, generate economic change in the marketplace and generate a political response.

You know, we know about what happened in 2016 with the efforts by the Russians to manipulate our elections. And we now know that nation states are aware of and capable of using these technologies, as they get faster and better and more efficient, to engage in things that could potentially affect U.S. and allied national security.

CHAKRABARTI: You know, it seems to me, though, that if we take sort of a 30,000-foot view of the situation, that deepfakes are just the latest means of what has always been with us, when it comes to nations battling each other, using information to undermine other countries. So what would you think makes synthetic media different from what nations used to do to each other before?

JAFFER: No, I think you're exactly right, Meghna. We've had information operations, you know, going back thousands of years in wartime to affect adversaries' perceptions of our capabilities, or the other side of it. And so you're exactly right. What I think is important here about this new trend, which Professor Farid has also identified, is the rapidity, the speed, the efficiency, the ability to deliver messaging in real time as events are happening and to shape people's perception of what's going on. You know, we heard about what happened with that call that Bill Browder had. Where in the moment, as Professor Farid laid out, you can change what people are perceiving.

Is this in fact Petro Poroshenko on the other side of this conversation? Is President Zelenskyy saying put down our weapons? Is the President of the United States going to order an attack on another nation? You know, there was a video generated by a research institution at Northwestern University of a terrorist, actually a dead terrorist, Mohammad al-Adnani, saying something that Bashar al-Assad said.

And so, you know, these are fairly early days of this technology. And so certainly being able to determine whether somebody is, in fact, who they say they are. Whether, in fact, a video is what it purports to be. And how you can tell when things aren't that. These are going to be critical going forward. Not just in the political arena, but in the business arena, for markets and the like.

CHAKRABARTI: So, Professor Farid, let me go back to you for a moment. Pick up on what Jamil Jaffer is saying. And let's just focus for a second on how the deepfakes could potentially undermine not just national security vis-à-vis U.S. officials, but national security in terms of our belief, meaning Americans' belief in, you know, in their own democracy. So the demoralization question that Clint Watts, in that clip I had played earlier, had mentioned.

FARID: Yeah. What's amazing about Clint's comments back in 2019, four years ago, is that he was quite prescient. I think he got it just about right. And I think there's a number of things that we are starting to see. So one is, as Jamil was just saying, the technology is getting better and better, more ubiquitous. People are starting to use it. And what that means is that we've eroded trust, that when you see a video of President Biden saying something, there's this, is this real or is it not real? And the question you got to ask yourself is, how do we have a democracy? How do we have a society when people are fundamentally skeptical about everything they read, see and hear online?

We've eroded trust. ... When you see a video of President Biden saying something, there's this, is this real or is it not real?

What happens, for example, when there really is an audio recording of a politician saying something illegal or offensive? They have plausible deniability. They can deny reality at this point. Reality starts to become very weird when things can be synthesized and manipulated. And suddenly it's getting very difficult to even reason about basic facts of what's going on in the world, from police violence to human rights violations to elections. Everything that happens in the world is suddenly, well, are we sure about that?

And I worry about our very democracy because, as you were saying, we've already seen the impact of misinformation and disinformation [leading] to things like the Jan. 6 insurrection, leading to spectacular conspiracy theories that have disrupted our response to COVID. And what happens when you inject deepfake videos and audio into that already existing ecosystem? That should be of real concern to us as a society.

CHAKRABARTI: Yeah, I think I'm seeing some experts call it the liar's dividend, right? So Jamil Jaffer, picking it up from there. Let's look internationally. Could you imagine a scenario in which deepfakes are used to, let's say, mislead U.S. military personnel abroad? Because, of course, you know, our U.S. soldiers and airmen and Marines, they're all connected to the Internet as well in certain ways. Is that a potential threat?

JAFFER: Certainly, a possibility. As Hany lays out, exactly right. You know, this all comes at a time when we were already seeing an erosion, not just in America but around the world, in reliance on the rule of law and rule of law institutions, right here in the United States. People question our elections. We question whether law enforcement is doing the right thing, whether what we're seeing on television or what the president [is saying] is real news or fake news. Are they facts? Are they alternative facts?

So this is all coming at a time when our entire society is questioning these topics and the things that are in the public debate. And you add on to that, you know, what are the perceptions of us overseas? What are our military members thinking? Now, one would assume that we have ways of verifying that messages being passed through official channels are legitimate. You know, everyone knows the scene of a nuclear launch code being passed on, where you break open the package and you verify the numbers and there are multiple people verifying it.

So there are ways, you know, that we have, through encryption and the like, to verify whether messages are legitimate. The problem is that it's all taking place in the context of a political environment where there's a bit of decay in truth and a decay in facts. And, you know, people are inclined to think that what they're seeing might not be real, that their own eyes are lying to them. And that's what creates the possibility for this liar's dividend that Bobby Chesney and Danielle Citron have come up with. This idea of people saying even something that's true isn't true, right? President Trump says, "I didn't say that," even though there's a video of him saying it. "I didn't say that. That's a deepfake." Right?

CHAKRABARTI: So, Jamil, I mean, how much do we know about whether the United States is interested in, or even currently deploying, these same tools that we're talking about? We've been framing them as a threat to the U.S., but something that's potentially effective and hard to combat must be a powerful tool for the U.S. to use as well?

JAFFER: Well, you know, certainly the United States, like every other nation state, has engaged in information operations and psychological operations historically, as part of our military operations in war and in battle. And even, you know, in the lead up to a conflict, we might do it through covert action or the like. And so this is not an area that's unknown to the United States. In fact, you know, some would argue that the United States, Russia, China are the best at this type of information and psychological operations.

And so the question becomes, you know, has the U.S. looked at this capability? And there's no doubt that our special operations forces, our intelligence agencies are actively exploring these capabilities. They're also actively exploring defenses against these technologies. And we know that DARPA has programs on semantic forensics and the like to identify, you know, these kinds of tools. And we've seen, you know, American companies partner with government institutions to help figure out what a deepfake is, and the like.

The United States, Russia, China are the best at this type of information and psychological operations.

And the private sector is investing in the space. You know, at Paladin Capital, we're spending time looking at how do you ensure that algorithms are strong and well and defensible? How do you identify these capabilities? And so, you know, both in the private and public marketplace, we're looking both at the offensive capability, but also how to defend against it. And by the way, it's worth noting, and Hany can talk about this, AI can both create these capabilities and also help defend against them.

CHAKRABARTI: Well, you mentioned special operations, Jamil, and just to put a finer point on it, reporting from The Intercept a little earlier this year, back in March, found that U.S. Special Operations Command, in fact, is openly signaling its interest in developing synthetic media as a tool for the U.S. military. There's a document that The Intercept has taken a look at and actually published now, a SOCOM document that signals SOCOM's desire to use deepfakes.

Specifically, the document says that they're looking for next generation capability to do things like take over the Internet of things and to deal with the digital space, social media analysis, deceptive technologies. So, I mean, it's out there in terms of the U.S.'s desire to use these technologies. Hany, did you have a response to that?

FARID: Yeah, I can't speak to the offensive part, but I think Jamil is absolutely right that it would defy credibility that the U.S. is not at least looking at this. I can certainly speak to the defensive part, because it is absolutely true that now for many years the U.S. government, through DARPA and through other funding agencies, has been funding research, the kind of research we do here and in many other places in the world, to try to build defenses.

And Jamil is right too, that it's fascinating to see essentially the same basic tools being used for offense and defense. And that's actually why this task is so hard, is because many times the defensive tool can be used against you. If you are in the business of playing offense, you want to be in the business of also playing defense, because that's the only way to know whether your techniques are going to work or not.

CHAKRABARTI: Yeah, that's an excellent point. So I just wanted to add the specific language from SOCOM's document about what they're trying to learn and gather and contract out. They want to quote, 'Improve means of carrying out influence operations, digital deception, communication disruption and disinformation campaigns at the tactical edge and operational levels.'

Now, to circle back to what both of you just said, about if the same tools are needed for offensive capabilities and defensive capabilities when it comes to synthetic media, we did actually reach out to folks at DARPA who are currently at work on this.

And the reason why we did it is because back in 2019, at that same congressional hearing that I had mentioned earlier, we remembered hearing from the founder of the Media Forensics Research Project, also known as MediFor, which is at DARPA, the Defense Advanced Research Projects Agency. The head of that program used to be a gentleman named David Dorman. And here's what he said to Congress in 2019.

DAVID DORMAN: When the MediFor program was conceived at DARPA, one thing that kept me up at night was the concern that someday our adversaries would be able to create entire events with minimal effort. These events might include images of scenes from different angles, video content that appears to come from different devices and text that is delivered through various mediums, providing an overwhelming amount of evidence that an event has occurred. And this could lead to social unrest or retaliation before it gets countered. If the past five years are any indication, that someday is not very far in the future.

CHAKRABARTI: So that's David Dorman, almost exactly four years ago, speaking to Congress about DARPA's MediFor program. So we recently spoke to the current head of that program, Wil Corvey. DARPA has actually replaced MediFor with another project called Semantic Forensics, which both of you have mentioned. And Wil Corvey told us that a lot has changed since that 2019 congressional hearing, namely the explosion of generative AI.

WIL CORVEY: We've gone from a media landscape where it was a bit of an outlier, right, to find a piece of created or synthesized media, to now where we would expect, actually maybe very soon, the bulk of Internet media to at least have been retouched by one of these computational models. And so it really becomes much more of a characterization problem for us now, as opposed to merely a detection problem or primarily a detection problem.

CHAKRABARTI: Corvey also told us that means DARPA has to speed up how quickly deepfakes can be detected, and get to a place where the technology can pick apart a piece of synthetic media to see how it was created. That's because, of course, not all deepfakes are meant to be malicious.

CORVEY: You may have seen, like, the image of a squirrel riding a skateboard in Central Park, but imagine then that I did some Photoshop editing on top of it, right? That might now be the state of the art for sort of computer-aided art. So moving forward as a culture of makers online, that might be a signature or something that we would want to set aside as a completely benign purpose of a couple of different computational techniques. Unfortunately, though, those very same computational techniques could be utilized for propaganda purposes, right? And so differentiating between those kinds of stacks of analytics is another part of the scaling.

CHAKRABARTI: Corvey also told us that the technologies SemaFor is working on could be used to flag potentially harmful deepfakes for moderators to review. Human moderators. But the actual policy implementation would be up to the social media companies or to government regulators. Good news, though, Corvey says. SemaFor has been able to use older, generative AI models to train AI to detect the things made by those same models.

CORVEY: So a lot of models that are the best performing models at this moment have a predecessor model that is related in the way that it was implemented. It has a similar model architecture, and so it turns out the computer systems that are tooled for the detection of these kinds of models can use those architectural similarities, in addition to human expertise in order to achieve really good detection accuracies, even on unseen models.

And so we had a particular collaboration within video, within the program, where we were able to show that for face generation models, for instance, it works if you train it on a predecessor model and then deploy it on sort of the current model. And in that case, we were able to release a detection model at the same time that Nvidia released their new face generation model. So basically, it would be much less likely that someone could use that particular model for a nefarious purpose.
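To make the idea Corvey describes a little more concrete, here is a minimal, purely illustrative sketch, not SemaFor's actual code, of cross-generator transfer: a detector is trained on "real" examples and on fakes from an older generator, then evaluated on fakes from a newer, architecturally similar generator. The feature vectors below are random stand-ins for whatever forensic features a real system would extract from images.

# Toy illustration only: synthetic stand-in features, not real image forensics.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 32

def real_features(count):
    # Hypothetical features of authentic images.
    return rng.normal(0.0, 1.0, size=(count, d))

fake_old = rng.normal(0.6, 1.0, size=(n, d))   # artifacts of a predecessor generator
fake_new = rng.normal(0.5, 1.0, size=(n, d))   # successor generator: similar, slightly shifted artifacts

# Train the detector only on the predecessor model's output.
X_train = np.vstack([real_features(n), fake_old])
y_train = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = real, 1 = fake
detector = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on output of the unseen successor model.
X_test = np.vstack([real_features(n), fake_new])
y_test = np.concatenate([np.zeros(n), np.ones(n)])
print("accuracy on successor-model fakes:", detector.score(X_test, y_test))

Because the two generators' artifacts overlap in this toy setup, the detector carries over reasonably well; when a genuinely new architecture produces very different artifacts, the transfer breaks down, which is the 6-to-12-month arms race Farid describes earlier in the hour.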

CHAKRABARTI: Professor Farid, what I'm still curious about regarding the tools used to detect deepfakes in the national security realm, is that several years ago the concern was, yes, we can do it, but there was a speed and scaling problem in terms of could we analyze everything that could potentially be a deepfake. Now, earlier in the hour, I thought I heard you say that that scaling problem is still real.

FARID: Yeah, very much so. The way I think about these defenses that we've just been talking about, that Wil very nicely described, is that they are necessary but not sufficient. We need the tools to detect manipulated media, but we also need regulation to force companies like Twitter, like Facebook, like TikTok to do better on moderating content.

We need more education. We need people to understand what these threats are and how to reason about a very complex and fast-moving world. And we need people to slow down on the Internet. Part of the problem is that people are moving so fast on the internet, resharing, retweeting without really thinking about the implications of what they're doing. So I think there are many aspects of what we need to do to start to regain some trust.

And I'll make a little pitch, also. That there is some other technology that is being developed called the Content Authenticity Initiative, where synthetic media will be signed and watermarked at the point of creation. And so one way to think about this problem is if you wait until something is in the wild, it's very hard to rein it back in. But if you are at the point of creation, either a real image or a synthetic image, you can sign and watermark and fingerprint that content so that when it does go into the wild, it can be very quickly detected. And there's some very nice technology coming out that I think is going to help us regain some trust. But it is still part of a larger ecosystem that we need.
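As a rough illustration of the sign-at-creation idea Farid is describing, and not the actual Content Authenticity Initiative or C2PA specification, which involves signed manifests, certificate chains and watermarks, here is a minimal Python sketch using the third-party cryptography library: the capturing or generating device signs a hash of the media bytes, and anyone holding the matching public key can later check whether the bytes they received are unchanged.

# Minimal sketch of point-of-creation signing; real provenance systems embed
# signed manifests and certificate chains rather than bare signatures.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At the point of creation (camera, editing tool, or generative model service):
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

media_bytes = b"...raw image or video bytes at creation time..."
signature = signing_key.sign(hashlib.sha256(media_bytes).digest())

# Later, "in the wild": verify the content is untouched since it was signed.
def looks_authentic(content: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

print(looks_authentic(media_bytes, signature))                 # True
print(looks_authentic(media_bytes + b"tampered", signature))   # False

A single altered byte invalidates the check, which is why this kind of provenance signal complements, rather than replaces, the after-the-fact detectors discussed above.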

We also need regulation to force companies like Twitter, like Facebook, like TikTok to do better on moderating content.

CHAKRABARTI: Oh, interesting. Okay. Well, Jamil Jaffer, the other issue that seems to continue to dog folks who are worried about democratic and national security is the velocity part, right? Like the deepfakes wouldn't be as effective in changing many minds if they didn't get to as many minds as quickly as they did. So can they be slowed down by the social media companies?

I have to say I possess a great deal of pessimism about this. Because we've already seen unwillingness from the major social media companies in the recent past to even slow down the velocity of known misinformation and disinformation, let alone harder-to-detect deepfakes. So should we be putting more pressure on the platforms by which these synthetic media proliferate to try and cut the velocity of them?

JAFFER: Well, I think there's no doubt that we're going to see some amount of regulatory moves by the government. You've already heard discussion about it on Capitol Hill. You've got Europe trying to make some moves in this space. They've got the AI Act and the like. They've got other forms of regulation that they've been considering with respect to some of the larger social media companies. A lot of it, frankly, wrongheaded and not particularly sophisticated.

But there is some amount of activity going on in governments that we will see going forward. The question becomes, where's that balance? You definitely are going to need some amount of government action and regulation in this space. But you also don't want to disincentivize innovation. You want to ensure that people are moving forward and are using technology in the long run. In the longer scheme of things, artificial intelligence, the machine learning, the kind that Hany spends a lot of time looking at.

This is going to be transformative for our society and create huge benefits for society. There are certainly downsides and there is a need for the government to get engaged. The question is at what level, how often, where. When it comes to this issue of misinformation and disinformation, I think the best methodology is truthful content and clarity about what is and what is not true.

And things like this authenticity initiative that Hany's talking about, that's where the real opportunity is. Because remember, the platforms and creators benefit when they're able to say, This is my content. This is legitimate content. It's to the benefit of the platforms and the creators when you're able to generate authenticity and demonstrate authenticity.

CHAKRABARTI: You know, I want to come back, circle back around to something that Professor Farid, that you said a little earlier. And that is really we're moving towards a world in which it's not unreasonable to, in a sense, have a little bit of constant doubt about almost everything that we're absorbing, digitally. Honestly, I think that's going to mess with the human mind. Because we're not really evolved to the point of not being able to believe in our own realities.

And then contemporaneous with that is kind of one of the neurological and emotional reasons deepfakes work. And what got me thinking about that is I went back and watched a 2019 conference that happened at the Notre Dame Technology Ethics Center, and they were talking about deepfakes and national security. And at that conference, Boston University law professor Jessica Silbey said she thought the challenge in combating deepfakes wasn't exclusively technological.

JESSICA SILBEY: Deepfakes. Many of them, their purpose is to denigrate and to dominate, ideologically or physically. So, what about our sociology feeds those stories more than others? I think we have to think hard about that. It's a cultural problem as much as it is a technological problem.

CHAKRABARTI: Professor Farid, what do you think about that?

FARID: Yeah, I think Professor Silbey is right, by the way. There is very much a human component to this, which is that we respond to the most outrageous, salacious, hateful and conspiratorial content online. The reason why social media keeps recommending this content to us is because we keep clicking on it. And Professor Silbey is right.

We need to look hard inside of ourselves and ask, What is wrong with us? Why do we keep enabling this type of content? You can absolutely blame the social media companies. You can blame YouTube for saying this week that it will no longer take down content that denies the last national election here in the U.S. We should criticize them for that. But we should also ask why is it that we keep migrating to this content, over trusted content, over truthful content? And I don't have an answer for that, but I think that has to be part of the solution.

We need to look hard inside of ourselves and ask, What is wrong with us? Why do we keep enabling this type of content?

CHAKRABARTI: Well, we've got about 30 seconds left. And Jamil, I'm going to give you the last word here. I mean, what would you recommend that the United States do to prepare itself, again, in the national security realm, for the near-continuous threat now of synthetic media?

JAFFER: Yeah. Look, I think we need a way of getting authorized and approved content, content that's authentic, out more often, more regularly. We need to think about regulation, and we need to think about it in the context of innovation, ensuring that we promote that innovation. And at the end of the day, we have to recognize, and the American public has to recognize, that our adversaries are using this technology. They're going to use it against us. We need to be skeptical of the content we see and ensure it's authentic when we rely upon it and pass it on to others.

This program aired on June 7, 2023.

Claire Donnelly Producer, On Point
Claire Donnelly is a producer at On Point.

Meghna Chakrabarti Host, On Point
Meghna Chakrabarti is the host of On Point.

Tim Skoog Sound Designer and Producer, On Point
Tim Skoog is a sound designer and producer for On Point.
