How social media companies are preparing for misinformation after Election Day

Computer monitors and a laptop display the X, formerly known as Twitter, sign-in page, July 24, 2023, in Belgrade, Serbia. The secretaries of state from Michigan, Minnesota, New Mexico, Pennsylvania and Washington are urging Elon Musk to fix an AI chatbot on his social media platform X, saying in a letter Monday, Aug. 5, 2024, that it has spread election misinformation. (AP Photo/Darko Vojinovic, File)

False claims and threats about the 2024 election are expected to get worse after Election Day. How are social media companies preparing?

Guests

Tess Owen, journalist, currently a contributor at WIRED. Her latest piece is titled “Facebook Is Auto-Generating Militia Group Pages as Extremists Continue to Organize in Plain Sight.”

Yael Eisenstat, senior policy fellow at Cybersecurity for Democracy, a research center based out of New York University and Northeastern University.

Cynthia Miller-Idriss, founding director of the Polarization and Extremism Research and Innovation Lab, known as PERIL, at American University.

Transcript

Part I

MEGHNA CHAKRABARTI: Last month, in a public Facebook group called U.S.A. Militia We The People, someone posted quote:

"I'm looking for a group of people that see and understand the dire situation our country is in. A group of people that understand that civil war is at our doorstep if Kamala makes it into office. Am I in the right place?"

End quote.

Beneath that post, someone responded, quote:

"Same here, brother. We better start getting our bleep together."

End quote.

This Facebook group is small, around 160 members. It only has a few posts, and they're mostly people unsuccessfully trying to link up with others to do military style training.

It's also not clear who started this group, but it appears to have been created roughly three months ago. U.S.A. Militia is among the groups that reporter Tess Owen came across while looking on Facebook for misinformation, disinformation, and plans for possible violence after the 2024 election. Tess is a journalist and contributor for WIRED, and she joins us now.

Tess, welcome to On Point.

TESS OWEN: Hi, Meghna. Thanks so much for having me.

CHAKRABARTI: And I also see that just yesterday another WIRED article came out from a colleague of yours saying election fraud conspiracy theories are already thriving online. So you guys are doing quite a bit of reporting on this. Let's start off with this U.S.A. Militia We The People group.


How did you find them?

OWEN: Sure. Earlier this year at WIRED, we had another story about how, after lying low for a spell after January 6th, the paramilitary movement, the militia movement, had been quietly reorganizing and rebuilding on Facebook of all places, despite longstanding bans that Meta has against many of the groups in that movement.

And what it looked like was that these groups, these actors were really trying to harness, take advantage of the mainstreaming of anti-government sentiment. That's come in part due to the prosecutions of January 6th rioters and the indictments of Donald Trump, and Facebook is really important to these groups.

They need it to reach and recruit new people. And so that story came out in May, with help from the Tech Transparency Project, which is a watchdog that monitors extremism and disinfo, among other things. And the following month, Meta took down, I think they said, about 900 assets linked to one single militia.

We had another story come out yesterday again using TTP data, showing that militias are still relying on Facebook to recruit and organize and share worrying rhetoric about what they believe is a coming civil conflict around the election. And, yes, many of these groups are very small. Many of them are hyperlocal, which is actually typical for militias, to be honest.

There are some that I saw, some county-level militias in Virginia, for example, that have just formed in recent weeks and are organizing meetups in the coming days.

CHAKRABARTI: Okay, so meetups with the hopes of accomplishing what?

OWEN: I've actually, I've been to some in my capacity as a reporter before.

They're called militia musters. That's like a rallying of the troops: seeing who's there, talking about the issues at hand, seeing who's ready to be part of this movement.

CHAKRABARTI: Okay. So these are then attempts, whether nascent or not, to actually physically organize after the election, depending on the outcome.

And tell me if I'm reading this wrong, but I also see in part of your reporting the role of Facebook in making it very easy, or even auto-generating, that's in the headline of the piece, auto-generating some pages for these groups. What does that mean?

OWEN: This has been an ongoing issue with Meta that's been flagged many times over the years.

And there were a few instances that TTP identified. We looked at screenshots and saw that they were linked to real profiles, where Facebook will auto-generate a page linked to a group. And in two cases, we saw that they had auto-generated pages for American Patriots 3%, which is a militia group that was banned by Facebook in 2020.

CHAKRABARTI: And so auto, I suppose maybe I'm dating myself here, but auto generate, auto generating means that even though the group is banned, Facebook is still making a page for that group that's publicly available.

OWEN: That's right. And the pages had maybe one follower each; they don't get a lot of traction. But it does show that their moderation mechanisms are not really working the way they should.

And Meta's defense when we spoke to them about this was, oh, AP3 could mean anything. It doesn't necessarily mean something nefarious. But in one of those examples, the AP3 page they generated was for a shooting range in New Mexico that the group frequents.

CHAKRABARTI: Okay. So these are some examples of how groups are either trying to organize or, as you were talking about, how Facebook is just generating stuff.

I have to note that you are properly calling the company Meta, which is, of course, the overall company that owns Facebook, but I just can't stop calling it Facebook.

OWEN: I know. It's tough.

CHAKRABARTI: But I appreciate your accuracy in calling it Meta. So you also have been following how, gosh, even more than in 2016, even more than in 2020, the speed at which very virulent misinformation and disinformation is spreading through social media is faster than ever.

And I think you pointed out to our producer that we saw an example of that with what happened just in the immediate aftermath of Hurricane Milton.

OWEN: That's right. I think that especially now that Elon Musk is at the helm of X, formerly Twitter, which I also have trouble remembering, and he's campaigning for Trump, the guardrails are really off. And a good test of how that particular platform functions during a fast-moving news event came a few weeks back, during Hurricanes Helene and Milton, when we saw baseless narratives designed to essentially erode faith in government authorities and agencies proliferate very quickly online.

And we've since learned that the rapid spread of disinformation was also aided in part by Russian disinformation campaigns. And so that was, I think, a test of what we might see in the coming weeks. And you mentioned at the beginning the story that my colleague David Gilbert at WIRED actually wrote, which found that there is now a well-coordinated network of election denial groups. Just in the last week or so, for example, they spread a viral video that claimed to show ballots in Bucks County, Pennsylvania, being destroyed. That was determined to be fake, but the goal was to undermine confidence in the election system. There was also a viral rumor that spread very quickly, claiming that the Pentagon had quietly issued a directive giving the U.S. military the authority to use lethal force against civilians.

Another viral claim, baseless as well, was that some areas of California were allowing non-citizens to vote this year. And these are just some examples that have been shared across social media, on X in particular, but also other platforms. And they're getting boosted by some of the biggest names imaginable.

CHAKRABARTI: Mhm. So when we think about the run up to November 5th, it's a horribly cynical thing to say, but I'm not surprised that we have these like established machines, these misinformation machines, even domestically. But I'm thinking back to your original reporting about groups that are trying to organize.

Have you found that there's reason to be concerned about the amplification of such misinformation and distrust in government, even in the moments after the election?

OWEN: In terms of extremism, it's a very different landscape than what we were dealing with, for example, back in 2020.

Back then, the threat of extremist organized groups was very visible. We'd seen extremist groups, from the militias to the Proud Boys, out in the streets throughout that year, in part because of COVID lockdown protests, and then to counter the George Floyd protests. And so they were really out and present.

And that hasn't really been the case for the last few years, but it doesn't mean the threat has gone away. It's just morphed. My colleagues and I recently reported on a series of U.S. intelligence memos that have expressed concern about online chatter from extremists trying to encourage attacks on voting infrastructure, for example, or leveraging fantasies of a coming conflict around the election to promote violence or try to mobilize their followers to violence.

And perhaps most concerningly, in one memo that we viewed, analysts say that they are dealing with what they call a threat gap, since extremists have moved a lot of their communications to encrypted platforms. They're still using Facebook and X, and that's really important for their reach and recruitment, but in terms of their internal chats, we're just not really seeing everything we once did, assuming that there is more to see, because they are using encrypted platforms.

CHAKRABARTI: I see. Okay, so there's enough concern there about online chatter turning into reality that the U.S. intelligence services are trying to, at least within their own groups, talk about it. In the last minute that we have with you, Tess, and I really appreciate your reporting on this and helping us set up an understanding of the scope of the problem.

I know you've taken your findings to Meta slash Facebook. What has their response been?

OWEN: Their response is typically that this is an adversarial space, that bad actors will often rebrand or try to come back to the platform under different names. And that is something that we've heard from them over and over again throughout the years.

In terms of the most recent story, they contend that just because something calls itself a militia doesn't necessarily mean it is a militia. Which, again, I'm very careful about. I know that the word militia is not always used in reference to a paramilitary group, but we were very careful to filter out the ones that aren't.

But I think the bottom line is that it feels as if the moderation efforts have been a little lackadaisical over the years, seeing as this is such a repeated and pervasive problem.

Part II

CHAKRABARTI: It's highly likely that we will not know exactly who won on the evening of November 5th. We also know that if Donald Trump does not win, he has said he will not concede. We also know that about a third of all Americans, and two thirds of Republican voters, still believe that the 2020 election was stolen from Donald Trump, so their faith in elections is already at rock bottom.

And this is a ripe environment for the rapid and perhaps dangerous spread of misinformation about Trump this election. So what are social media companies doing about it? That's what we're going to find out now. And joining us is Yael Eisenstat. She's senior policy fellow at Cybersecurity for Democracy, a research center based out of New York University and Northeastern University.

Yael, welcome to On Point.

YAEL EISENSTAT: Thank you for having me.

CHAKRABARTI: Okay. So first of all, how would you describe, given what I summarized, the importance of understanding the potential impact that election disinformation could have after voting is finished on November 5th?

EISENSTAT: So I am so glad at least that we're talking about this because, frankly, four years ago, this was not part of the public conversation nearly the way it is now.

All of these narratives, whether it's election denialism, whether it's still saying that the 2020 election was stolen, whether it's spreading disinformation or misinformation about who's voting, if illegal aliens are voting, all these things, these are narratives that are really going to help people who are already primed to not believe in the election results get even more angry if the election doesn't go the way they want it to.

And, four years ago, the major platforms like Facebook and Twitter really failed to even think about the period after the vote, and started relaxing their policies immediately. But this period right now, from November 5th onwards, is no longer about campaigning. It's about inspiring. It's about recruiting, mobilizing, using all that anger that has been fomented through disinformation over the months and years to possibly call people to action.

And that's what social media companies have to grapple with this time around.

CHAKRABARTI: And of course, we have seen that call to action can be very successful, a la January 6th, 2021. By the way, I do want to note for listeners that On Point reached out to 13 different social media companies in order to get statements or on the record interviews with them.

11 of those 13 social media companies did not respond, and two, Discord and TikTok, declined our interview request. TikTok did send us a statement, though, saying, quote:

"We protect the integrity of our platform by removing harmful misinformation about electoral processes, partnering with independent fact checkers to assess content and labeling claims that can't be verified.

We also direct people to reliable election information at our election center in the app, which has been viewed more than 11 million times this year."

End quote. That's from TikTok. Yael, parse that for us. Is that evidence of TikTok doing good work in protecting information post-November 5th?

EISENSTAT: So I'm actually really surprised that so many companies ignored this request. Ironically, many who study these platforms actually believe TikTok does enforce election integrity policies better than most of the companies. They may have other reasons to; they're under scrutiny for other issues here in the U.S. But they do seem to have stricter policies; they don't allow political advertising, for example. Now, whether or not they're enforcing those rules perfectly is a different question. What's hard in answering your question is that there really is a lack of transparency for journalists and researchers and independent civil society orgs to truly study what is happening on these platforms.

But from what I have seen, TikTok does seem to actually be a little bit more forward-leaning in terms of the election integrity work.

CHAKRABARTI: Why are you surprised that no one talked to us? Quite frankly, in this day and age for us, we're pretty used to it, even though we push as hard as we can.

But is there a specific reason why you're surprised?

EISENSTAT: Yeah, we all saw what happened four years ago. And what was really interesting then is you had executives, including Sheryl Sandberg, go out and say, these things were not organized on our platforms. They really tried to abdicate their responsibility for helping fuel some of what happened on January 6th.

And I would think that four years later, they would want to assure the public that they're doing everything they can to make sure they don't actually contribute to potential violence again. So I do find it surprising that they don't feel the need to communicate with the public. I do know that it's like a hot political football right now, that many of these companies, they don't want to anger either side.

There's still a business incentive here to play all sides, in case of whoever ends up winning the election. But I do think they have a broader responsibility to the public to tell us: despite everything you've heard, despite the fact that we have rolled back a lot of our policies, here is what we're doing. Let me just give you a really quick example. 2020 election denialism and election lies used to be prohibited on the major platforms for a time. And last year, all of the platforms, YouTube, Meta, X, rolled back that policy. And why does that matter? Because people who are continuing to lie about the 2020 election being stolen are fueling the flames, making people angry, priming people, so that if someone says the 2024 election was stolen, they will have been so steeped in this narrative that they are going to believe it.

And you would hope that the companies would want to tell people, no, we know this. We take it seriously, despite everything you're hearing from us. So that's why I'm surprised.

CHAKRABARTI: Perhaps I just have a more cynical take on this. Let's stick with Meta here for a second. For years now, not even in the misinformation or disinformation space, let's take something even more basic, like privacy.

We've seen Mark Zuckerberg repeatedly say, I care about your privacy, but then act as if he doesn't, and make decisions for the company that show he doesn't care about people's privacy. Oh, I care about kids. Actually, he doesn't. I don't think I've seen a pattern of, let's say, a true sense of service to the public good from any of these tech companies for as long as they've been around.

That might be a harsh assessment, but it does not surprise me that at this critical point, they don't want to take any chances and actually admit that they have some kind of responsibility here, in terms of helping to ensure faith in the electoral process of the United States.

EISENSTAT: I think that last point is really critical. It's ... do they even believe that they have a responsibility here? And I worked at the company. I worked at Facebook specifically on election integrity in 2018. And at the time, even then, I was trying to get us to fact check political advertising, to think about voter suppression and political ads.

And what's really interesting is these companies, and Meta in particular is one good example, would have you believe that they are just neutral pipes through which speech travels, so it's not their fault, it's the people engaging in this speech. But in the earlier segment, I heard you specifically talking to Tess about this auto-generation concept.

I want to drill in on a point about that. Auto-generation of content is Facebook itself creating content for these militia pages. They cannot hide behind saying it is third-party speech, it's free speech, it's other people who are saying this, not us. Their own tools are creating pages. So yes, you're correct.

They're not going to want to speak too publicly about the fact that they are actively playing a role, with some of their tools, in fanning the flames and recommending content, in letting militias and people who are very openly talking about election denialism organize on their platforms, and that they haven't fixed that problem from four years ago.

I don't know. You called it a cynical take. I think it's just a realistic take. They have yet to convince us in any way possible that they have fixed these problems and that they are going to rise to the challenge and live up to their responsibility to keep us safe and to not harm democracy.

CHAKRABARTI: I'm going to bring another voice into this conversation in just a second, but you mentioned that you had worked on Facebook's civic integrity team. Does that team actually still exist?

EISENSTAT: That's a great question, because it sounds like it doesn't. I worked on actually the business integrity team, which is the sister org.

But what it means is I was working on the political advertising integrity policies while the civic integrity team was working more on sort of the newsfeed and the organic content. So similar work, just different parts of the business. And that is one of the key questions. We've seen that they've had mass layoffs across the industry.

We've seen lots of friends and former colleagues from Civic Integrity no longer employed at the companies. Now, publicly, Meta will state that they still have, what, 35,000 trust and safety people around the world. But it's very odd because most of the Civic Integrity teams do seem to have been disbanded.

And one other point: we're focusing a lot on Meta. Obviously, a lot of the narratives are also being spread on X now, by the owner of the platform himself. But one of the reasons we can't take our eye off of Meta is that X drives a lot of how journalists find trends to report on and serves a certain sort of niche audience, whereas Meta is much broader.

It's where people organize. It's where people find each other. It is so much broader than what you're seeing on X, and I feel that if you have built a company to have that large of an impact, you are absolutely responsible for being able to protect against what is happening on your platform.

CHAKRABARTI: Well, Yael, hang on here for just a second because I want to bring Cynthia Miller-Idriss into the conversation now. She's founding director of the Polarization and Extremism Research and Innovation Lab, also known as PERIL, at American University, and is a professor at American's School of Public Affairs and School of Education.

CHAKRABARTI: Professor Miller-Idriss, welcome to On Point.

CYNTHIA MILLER-IDRISS: Thanks. Hi, Meghna.

CHAKRABARTI: I am pretty open about my dubiousness around the efforts that these companies are putting forth. And I just want to back that up with the notion that, or my belief that they have the money to do more.

But let me ask you, from your examination thus far of some of the efforts that the tech companies, or social media companies, have been making: what do you think they're doing to prepare for the days after November 5th?

MILLER-IDRISS: I think that we are holding them to the lowest standard possible. And what I mean by that is our expectations for social media companies, even in the best of circumstances right now, are just that they be very good at removing bad content reactively, right? And so what we're asking them to do is to be more efficient and more accurate at taking down harmful conspiratorial misinformation and disinformation once it's already appeared, once it's often been viewed millions and millions of times, and sometimes downloaded and then saved and shared.

So it becomes a whack-a-mole situation anyway, instead of considering the possibility that they might also have an obligation to prevent people from being persuaded by that propaganda in the first place. What if we held them to a standard where we said, you should help people access good information or be more skeptical of the potential for online manipulation to occur?


So the most exciting thing I heard you say in that TikTok statement was that they actually were referring people at least to accurate information about elections. That to me is a minimum viable sort of signal that there's some proactive work going on, at least on one platform. So I'm not excited at all about the removal of bad content.

I think that's like the lowest hanging fruit we could possibly expect. And I'd like to see them go after some higher hanging fruit.

CHAKRABARTI: So let me argue some basics here. Of any argument that the social media companies have put forward about the limits of what they can do in the U.S. context, which is something I want to come back to in a second, the most persuasive is this: why should we take down posts where people say, I believe such and such? They have the freedom of speech to say that. It's just a person's particular belief. And if that becomes viral and a lot of people engage with that content, there's really nothing we can do that's legal or in the spirit of what American freedom of information is supposed to look like.

MILLER-IDRISS: Yeah, they're not wrong. This is also the U.S. government's approach to violent extremism and preventing it, which is to say that the federal government takes the stance that it's not its job to prevent someone from becoming radicalized, but it is its job to prevent a radicalized person from picking up a gun. The stance has always been that preventing someone from becoming radicalized would infringe on free speech.

And I think that's a terrible mistake. And I think the social media companies are making the same mistake. We've seen in the public health world, with things like diabetes or cardiac disease, that you can invest in public health approaches upstream, 20 years earlier, to teach people about the choices they could make, healthier eating habits or exercise habits, that reduce their incidence of disease.

And the same thing is true for the media ecosystem, the information ecosystem that we encounter every day. We can teach people how to be more skeptical of media, how to access better information, how to be savvier. We can do things that reduce the production of bad content or that clear out structural problems; in the public health world, the equivalent would be making sure there are not food deserts, right?

We can do those kinds of things without saying you have to make those choices. So I think it's a failure of imagination for us not to think about what it looks like to have a more socially cohesive democracy. A democracy where people are resilient within this ecosystem that presents them with lots of bad information, and where they know how to recognize and reject manipulative content and tactics.

We're not even considering that yet. We're considering it for violent extremism a little bit. The Biden administration has now officially adopted a public health approach to the prevention of violent extremism in its national strategy. But we're not really thinking about what that looks like in the tech sector or on the social media platforms in terms of our expectations.

Part III

CHAKRABARTI: So I'm joined today to talk about social media with Yael Eisenstat and Cynthia Miller-Idriss. And, Yael, I wanted to pick up on something that Professor Miller-Idriss said about how just blocking overtly dangerous content is low-hanging fruit. So what other tools or strategies are available to social media companies, or should they consider, in order to create what Professor Miller-Idriss called a healthier information atmosphere around elections?

EISENSTAT: Yeah. I could not agree more that just focusing on content moderation, what to take down and what to leave up, is the lowest-hanging fruit. I do think it's important: especially if you say you have rules around disinformation, around election lies, or whatever your rules are, you should actually live up to those rules, and that should be the bare minimum.

But what I have been focused on for years is actually not the speech itself, but what the companies are doing with that speech. Because they continue to paint this as a question of free speech. So just to give you a few examples of what they could do. One of the problems, especially in a contested election or a really heated time, is this idea of what I call frictionless virality.

It's what spreads quickly, what gets attention, what you can click on, share, boost. And people understand that the way they get more reach on a platform is to get more and more shares and engagement. A few years ago, when it was still Twitter, they did an experiment ahead of an election, saying, we're going to build friction into the process to see if it slows down mis- and disinformation.

You may recall this: when you wanted to retweet a news article, for example, you got a pop-up that said, Have you read this article? It's such a tiny little blip of an intervention, but it makes your brain have that quick second reaction of, wait a minute, that's a good question. You react emotionally; it's very different when you actually have to stop and let your brain catch up to your emotions.

That's what the concept of friction is. So they ran this experiment. Some of the pop-ups asked, have you read this article? Some of them actually asked you to put in your own thoughts before you were able to quote tweet somebody else's piece. And they found that this actually did slow down the spread of disinformation ahead of that election.

But what happened after the election? They scrapped the whole program. Why? Because if you're slowing down your platform, even if it's just by a few seconds, that is antithetical to the business model that wants to keep you scrolling and engaging as fast as you can. So of course, I would always argue for ways to build friction into the platform.

But there are also other things, such as the fact that these platforms usually give what I would call high-value users more leeway to break the rules than the rest of us. Somebody who is intentionally engaging in inflammatory rhetoric or clickbaity information so they can build up a huge audience now actually has more influence over public perception than someone with a smaller audience. Why are they getting to break the rules more than everyone else on the platform? So I would hold all users to the same standards. And then, to the point that Dr. Miller-Idriss just brought up, you mentioned how TikTok is pushing people towards good information.

We've been advocating this for years: flood the zone with good information. Now Meta and even X will say, we do, we have these pop-ups at the top of your page that say, for information about the election, click here. I don't think having to click through to a different information center is the right answer.

You can actually, and I hate to use the term flood the zone, because that was Stephen Bannon's term, but you could. For example, if you are not going to enforce your rules against certain high-value users, but you know that they engage in a lot of conspiracy theories, how about right after their post, the next post is something that your company has put out, like, would you like to understand better how to find your polling place? Or whatever it is, rather than saying, click through to this separate center. How about making our feeds a little bit more, I don't want to use the word balanced, but if you go to the X For You feed right now, it is shockingly bad how much it is flooded with very particular viewpoints.

This program aired on October 30, 2024.
