
How to stop online extremism from becoming offline violence

Blurred hands typing on a laptop in the dark, with an illuminated keyboard and illegible code on the screen. (Courtesy of Getty Images)

In the wake of the mass shooting in Buffalo, New York, this past week, social media platforms immediately faced public scrutiny for their role in the attack. Footage of the suspect's livestream circulated from Twitch to Streamable, Twitter, and Facebook. Despite best efforts to scrub the video from social media sites, the shooting was reportedly viewed over 3 million times. The shooter's digital footprint also included Reddit, 4chan, and Discord discussion boards in the lead-up to the racially motivated attack in a predominantly Black neighborhood of Buffalo.

The proliferation of extremist content online is an issue Silicon Valley has struggled to tamp down in recent years. But accountability and solutions need to involve everyone, not just Big Tech. This week on Endless Thread, we talk to two experts about whether and how online platforms can better moderate content to prevent offline violence in the future.

Nora Saks, an Endless Thread producer, is filling in for Amory Sivertson as co-host for this episode.

Show notes

Support the show: 

We love making Endless Thread, and we want to be able to keep making it far into the future. If you want that too, we would deeply appreciate your contribution to our work in any amount. Everyone who makes a monthly donation will get access to exclusive bonus content. Click here for the donation page. Thank you!

Full Transcript:

This content was originally created for audio. The transcript has been edited from our original script for clarity. Heads up that some elements (e.g., music, sound effects, tone) are harder to translate to text. 

Ben Brock Johnson: Hi, Nora.

Nora Saks: Hi, Ben.

Ben: Bye, Amory. Who is on vacation.

Nora: She absolutely deserves it!

Ben: She definitely does. But thank goodness we have you here with us this week to talk about the hard stuff.

Nora: Ooof. It has not been a great week for America, and guns, and racism, and the internet. Not that any particular week feels particularly great for those things these days—but this one was real bad.

Ben: True. And the shooting that happened in Buffalo, New York, in which an 18-year-old man targeted Black people in a grocery store, killing ten, has had a LOT of connections to the internet. The suspected terrorist in the shooting may have fed his racist beliefs with the help of 4chan and Reddit. He may have used Discord, the voice and instant messaging application popular with video gamers; he streamed the shooting on Twitch; the video later got posted to Facebook; the list goes on here.

So today, instead of our regular planned programming, we’re going to talk about all of that and content moderation.

Nora: With the help of some people who know a LOT more about it than us. Starting with Joan Donovan.

Dr. Joan Donovan: Hi, my name is Joan Donovan, and I'm the research director of Harvard Kennedy School's Shorenstein Center on Media, Politics and Public Policy.

Nora: Joan told me she was on vacation, at her nephew’s birthday party, when her phone lit up with texts about the shooting.

And since she's a researcher of the far right, she's in a race to get data before the tech platforms remove it. Which means she has to review the original content.

Joan: This is my job. And so I have read the manifesto, watched the video, and also read the chat logs from Discord, where he essentially makes a list of everything he needs to get done before he goes and does this extremely hideous and violent act.

Ben: More from Joan in a second here. But there are some important things to talk about in terms of HOW we talk about this stuff.

For instance, while we are recording this early in the week, we don't yet have a full picture of this shooter's online presence. We do know, for example, that a Reddit username similar to the shooter's handle on Twitch was suspended on May 15, and so on. As Joan said, a lot of people are going through this information right now and trying to figure out where this person was active and how they were active.

Also, we should say we don’t plan on mentioning the name of the suspect. We’re not going to go deep on the content of the suspect’s posts. We ARE going to pay attention to content moderation more broadly, and how platforms, unfortunately, make some of this stuff easier to spread.

Nora: Yeah, and what Joan Donovan says we need to pay attention to INSTEAD of things like the specifics of this individual's actions, or their manifesto, is the power of inspiring others to do the same, thanks to social media channels.

Joan: It's a hard thing because we can no longer subtract what happens online from real life.

These live streams have become a new weapon of the far right. To me, it's most reminiscent of cross burnings and public hangings. Because you do all of this devastation, not necessarily because any one particular person that you might murder has any kind of power, but rather that it is the power of the terror itself that is delivered en masse.

Ben: Joan says one of the scary things about online radicalization, which is more and more often leading to terrorism in the real world, is that to the person being radicalized, it feels like they are in a process of discovering something, some kind of path to enlightenment. The idea of getting red-pilled is based on actually choosing the red pill — the idea of a person having a choice.

Joan: They feel as if they’re self-discovering this information because they're able to browse it. And it's set up a bit like forbidden knowledge in the sense that they believe that this information has been hidden from them for their entire lives, but they don't realize that it's essentially bait in these environments.

Nora: In other words, if a person is open to digital hate culture, the result is really a predetermined path. Something that’s designed, not an individual discovery.

Ben: Which means we need designed solutions, too. Not just thoughts and prayers.

Joan: It's really going to take a whole-of-society solution to fix the problems endemic to the internet. And so if people do care about free and fair elections, if they care about high quality scientific information, if they care about free speech, then they're going to want to advocate for an internet that is producing content in the public interest and is circulating the best information that we have. And we should be incentivizing these corporations to ensure that these kinds of events, when they do happen, are such an anomaly that we can't expect it to happen again in the same way, just like we have from Christchurch till now.

Nora: You heard Joan reference the Christchurch mass shooting there, in New Zealand. The one where an Australian gunman, who had been radicalized online, carried out racist, violent attacks against members of two mosques in Christchurch. And he live streamed the attack as well. People are saying the Buffalo attack is potentially a copycat attack.

Ben: Someone else who looks carefully at this stuff and knows a lot about that previous set of attacks is evelyn douek.

evelyn douek: My name is evelyn douek, and I am the senior research fellow at the Knight First Amendment Institute at Columbia University. And I study content moderation and regulation of online speech.

Ben: When you first heard about the mass shooting in Buffalo, what went through your mind? What was your first action? What did you do first?

evelyn: “Not again,” is the first thought that pops into one's head at this moment. It's sort of surprise and despair, but also not surprise. And then, you know, fairly quickly, given what I study, there's the question of "Is there going to be a social media aspect to this?" And the answer is almost inevitably "yes" at this point.

Ben: evelyn, where are you from originally?

evelyn: (Laughs.) Yeah. Yeah, I don't sound exactly like you. I'm from Australia. G'day.

Ben: (Laughs.) Is there anything about how this stuff has appeared or not appeared in Australia that informs how you come to this work?

evelyn: I mean Australia is actually a really interesting case study in how governments or the government responded to the Christchurch massacre. The perpetrator in that case was an Australian, and he was radicalized in Australia. And so it raised all of these questions about what is Australia doing to address its extremism problem. That includes, you know, education and reaching out to vulnerable communities and, you know, counter speech and things like that.

Instead of examining a lot of that stuff, what Australia did in the direct aftermath of the Christchurch massacre was pass this completely performative piece of legislation that threatened to punish the platforms for, you know, this piece of video, this piece of content on their sites in a way that was totally, you know, dumb to how these platforms actually work.

The problem was not that the platforms were not trying to remove the video. And the legislation had all of these, like, threats of criminal sanctions, you know, to lock executives up. And it's never been used. It's never been used in the years since it was passed, this legislation.

And I think it also speaks to, you know, the way that governments can be performative in these moments where they can, you know, leverage it for political gain and make these bold statements. But they're not really thinking about the ways that they need to address these problems. You know, saying that it's all the tech platforms' fault... I mean, the tech platforms, don't get me wrong, they need to take responsibility for their contribution and the way that their systems failed in these cases. But to say that they are the only people that have responsibility or that they're the only people that can fix these problems, that's completely wrong. And we need to think about this much more broadly and not just, you know, do political grandstanding.

Ben: We’ll be back in a minute.

[SPONSOR BREAK]

Nora: So, evelyn, you have a paper coming out in the Harvard Law Review? I don't think it's come out yet. The one that's called “Content Moderation as Administration.” And in it, it seems like you're arguing that the public's understanding of content moderation is misleading and incomplete, and that has a lot of consequences for how we govern it or don't. So, big picture, can you tell us what we are getting wrong, what we fundamentally misunderstand? And we're going to ask you how that connects to this recent mass shooting.

evelyn: Sure. I mean, I think when most of us think about content moderation, we think about individual pieces of content. Individual decisions. Was a certain tweet, a certain post, a certain video taken down or left up? What are the rules? How did this one post compare to those rules? Were they applied effectively? Did that individual user get an appeal? It's very legalistic, right? That's the kind of thing that we think about when we think about court cases, about free speech. Which makes sense, because lawyers are the people that talk about content moderation most of the time, and we're to blame, because when you put speech in front of lawyers, we end up with this sort of First Amendment, court-based, court-centric way of thinking about it.

But really what's more important than those individual decisions are the systems behind them. And all of the decisions upstream from that individual post that get made before any piece of content is posted — about how a platform is designed, how a content moderation system is designed, what systems they have in place to protect against failure — those are all the decisions that really matter. And for regulators who are thinking about how do we rein in platforms, how do we make them accountable for the decisions that they're making, those are the kinds of questions they should be asking, not looking at those final decisions way downstream where it's sort of basically too late.

Nora: So what jumps out to you in terms of the roles tech platforms and content moderation played in the mass shooting in Buffalo? Before, during, and after the actual incident?

evelyn: This is the kind of event that platforms should be and, you know, have been preparing for. There was this mass reflection after the Christchurch massacre, which was live streamed, about why platforms had so fundamentally failed to contain the spread of that video.

And after that, you know, there was a lot of pressure, justified pressure from lawmakers to try and make them do better. A lot of, sort of, thinking about what could be done, what systems could be put in place to do that.

And platforms sort of did, they did, you know, put in some systems, make some agreements, look at the technology involved to clean it up. And, you know, in this case, in some sense, that's a little bit of a success story, right? Twitch took down the stream within two minutes of it starting. That's phenomenal. I can't remember the exact figure, but it was upwards of 40 minutes, around 50 minutes for the Christchurch massacre live stream. So, you know, in some sense that's a big improvement.

But in another sense that two minutes was enough, that two minutes was enough for someone to download the video and for it now to be spread on many different platforms. And in terms of thinking about containing it, I mean, this is a story about how it's not enough to look at an individual platform. It's not enough to look at what was Twitch's response. But we need to think about it as an ecosystem. You know, this is where people are coordinating on other platforms that don't have moderation, that aren't even trying to contain the spread of this content. And then they're, you know, leveraging different aspects of different platforms. You know, they're not just posting the video to Facebook or Twitter anymore because there's systems in place to try and identify those videos more quickly. But they'll post links to where the video is stored on other platforms. Which means that, you know, they're leveraging different aspects of different platforms and finding ways to get around the content moderation systems.

Ben: And this is a game of cat and mouse that basically is like, you know, a tale as old as time in some ways, right? You know, in the conversation about this particular shooting and others, we've heard, you know, 8chan mentioned, 4chan mentioned, Reddit mentioned, Discord, Twitch, YouTube, Facebook, Twitter. Can you just go through them a little bit for us and talk about, you know, what is remarkable or not on the different platforms?

evelyn: Yeah. I mean, you're absolutely right that, at this point, you know, trying to stem the spread of this is like sticking your fingers in holes in a dam wall. It's just, you know, the flood in the aftermath of something like this is extremely hard, perhaps impossible, to contain. You know, it's important at this stage, when we're talking, to sort of resist any firm conclusions about what happened. There's still a lot up in the air. We don't necessarily know what happened. We don't know where the systems failed, because they surely have failed.

And so there's this question, you know. There's this organization that a bunch of platforms set up in the aftermath of the Christchurch massacre called the Global Internet Forum to Counter Terrorism, or the GIFCT, as the cool kids call it, which is a place, you know, sort of specifically designed for platforms to work together to counter this kind of thing. They have a crisis incident protocol that they activate in situations like this, and they did indeed activate it this time. And the way it works is that they will upload a hash, so basically a digital fingerprint, of that video to a common database that platforms can then run any uploads to their platform against. And you know, if it hits, if the fingerprint matches, they'll take it down. That's a voluntary organization. They're not mandated to be part of that. And there are many, many more members than there were. And many of the platforms that you mentioned, the most prominent ones, are members of the GIFCT. But 8chan isn't. 8chan is not running its uploads against that database. There are a number of, like, sort of what we tend to call "dark corners of the web" that aren't trying to stem the flow of this. In fact, you know, there are people on them trying to help the flow of this. And it's really hard to work out what to do about that in this case, where, you know, a lot of the actions that platforms take, whether they do it effectively or not, are voluntary, and they are trying to stem the flow of it, even if they are often failing and maybe don't anticipate these events as well as they should.
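For readers who want a concrete picture of the hash-matching evelyn is describing, here is a minimal sketch, not a depiction of any platform's actual pipeline. It assumes a shared set of known fingerprints and a hypothetical `screen_upload` helper; GIFCT's real database relies on perceptual hashes designed to survive re-encoding, while this illustration uses SHA-256, which only flags byte-for-byte identical copies.

```python
# Minimal sketch of the hash-sharing flow: a common database of "digital
# fingerprints" that platforms check uploads against. Real systems use
# perceptual hashes; SHA-256 here is purely illustrative. All names are
# hypothetical.
import hashlib
from pathlib import Path

# Fingerprints of known violating content, as shared across member platforms.
SHARED_HASH_DATABASE: set[str] = {
    # e.g. "9f86d081884c7d659a2feaa0c55ad015...",
}

def fingerprint(path: Path) -> str:
    """Compute a hash ("digital fingerprint") of an uploaded file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def screen_upload(path: Path) -> bool:
    """Return True if the upload matches the shared database and should be blocked."""
    return fingerprint(path) in SHARED_HASH_DATABASE

if __name__ == "__main__":
    upload = Path("incoming_upload.mp4")  # hypothetical incoming file
    if upload.exists() and screen_upload(upload):
        print("Match found in shared hash database: blocking upload.")
    else:
        print("No match: upload proceeds to normal moderation.")
```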

Ben: Let's talk about the Facebook example briefly for a second. You know, the reporting on it, or at least some of the Twitter conversation I've seen about it, was that the video was getting flagged by people. But Facebook's content moderation system was sending messages back. And I don't know if it was algorithm generated or human generated, I don't know the details of how Facebook does its content moderation in terms of layers. But the messages that were coming back were that essentially this video of a mass shooting did not go against their content moderation rules. Can you talk about that?

evelyn: Yeah. I mean, so that's obviously an epic fail, right? This is in the midst of a crisis. And the video in question, the one that the platforms are ostensibly looking out for, was being flagged. You know, it's not that they had to fish it out for themselves. Users were drawing it to the attention of the platform, and they were still getting that response. You said, you know, you don't know what happened.

And the truth is, no one outside Facebook at this stage really knows what happened within the platform. And that's part of the problem. They don't have any particular obligation to tell us. I mean, presumably in the aftermath of this, there will be some sort of reflection and perhaps more transparency, as there was in the aftermath of the Christchurch massacre, to talk about how it failed.

The thing is, you know, platforms do heavily rely on artificial intelligence. You know, if a person was looking at that video, they're probably pretty likely to identify that it's the video in question and take it down. And so, you know, it's likely that that was the result of automated decision making, which, you know, for all of the sort of discussion about the magic of AI and how it's going to take over the world — it is actually pretty dumb and is pretty easy to circumvent.

So in the example of these videos, I talked about the hash database, which, you know, sounds pretty airtight, but actually it's pretty easy to get around, because people can make minor alterations to the video, whether it's the ratio or the color, or sort of put a watermark on it or something like that. Things like that can fool an algorithm that's pretty dumb and looking for one thing and one thing alone. So, we don't know what happened, but there may have been over-reliance on artificial intelligence, which is cheap and fast and, you know, might make sense when operating at the scale of Facebook. But also sometimes you just need some people. You just need to throw some people at the problem. And it seemed, you know, highly likely in this case, if there were humans looking at that video, that they would have been able to see it for exactly what it was.
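To make the "minor alterations" point concrete, here is a minimal sketch under the simplifying assumption that a platform matches uploads by exact cryptographic hash: flipping a single byte of the file, a stand-in for nudging the color balance or stamping on a watermark, yields a completely different fingerprint, so the database lookup misses. Perceptual hashes tolerate small edits better, but, as evelyn notes, bigger changes can still slip past them.

```python
# Why "minor alterations" defeat exact hash matching: a one-byte change
# produces an entirely different SHA-256 fingerprint, so a lookup for the
# original hash no longer matches the altered copy.
import hashlib

original = b"...video bytes..." * 1000   # stand-in for the original video file
altered = bytearray(original)
altered[0] ^= 0x01                        # a tiny "edit" to the first byte

h_original = hashlib.sha256(original).hexdigest()
h_altered = hashlib.sha256(bytes(altered)).hexdigest()

print("original :", h_original)
print("altered  :", h_altered)
print("exact match would catch altered copy?", h_original == h_altered)  # False
```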

Nora: Hm, wow. How can lawmakers make tech platforms more accountable for the spread of this kind of thing?

evelyn: So this is actually a really tricky question. You know, there's a lot of talk about Section 230, which immunizes platforms for the content on their platform and, you know, discussion about "just repeal 230 and that will fix the problem." But it's actually not necessarily 230, Section 230 of the Communications Decency Act that protects platforms. The problem often is this pesky thing called the First Amendment. It's a sad fact that things like hate speech and graphic content aren't illegal and can't be made illegal under the First Amendment.

So there has to be different kinds of law making rather than just making things illegal or making platforms liable for content like hate speech, which they can't be liable for because it's not illegal.

And so the kinds of tools that lawmakers need to be thinking about, you know, we were talking about transparency. It would be great to know what Facebook did in this case and what exactly failed, to put pressure on them to reform that. It would be great to talk about, you know, mandates for things like due process. For the people that flagged this video, why are they not getting an explanation of what exactly happened or why the video wasn't being taken down? Were they given an opportunity to appeal? And if so, was it to a human who might, again, be able to see exactly what it was? All of these things about what systems Facebook has in place, more information about, you know, this crisis incident protocol, I think would be valuable in holding platforms to account for the systems that they have in place.

Ben: I still kind of arrive at this fundamental question, which is that platforms want content. They want that content to be highly engaged with. They want the least amount of friction for the user inputting that content onto the Internet. And that seems incredibly fundamental to the way that they work and the way that they make money. We are asking for things that are fundamentally opposed to that. How do we deal with that fundamental opposition? Is that possible?

evelyn: So I think this is a great example of why we need to think about systems rather than individual pieces of content, right? Because platforms don't want this video on their platform. It's not like they are like, “Yes! All the extra hits from this horrific video. You know, this is so engaging.”

Advertisers don't want to appear next to this kind of content. So I do genuinely believe that they are trying their best to remove this video. I don't think that there's some sort of nefarious motivation in sort of keeping eyeballs on the site. If anything, it might drive away a lot of users, such as myself, who, you know, avoid places where this video might crop up.

But what you're pointing to is the systems that are designed to spread engaging content, which in an event like this can be leveraged to do real harm. And so it's not that platforms want this video to spread, but the people who are spreading the video are leveraging the systems that the platforms already have in place, which make it easier to do so. And again, this is a really tricky question about what lawmakers can do within the confines of the Constitution to change those systems, the extent to which the First Amendment proscribes, you know, forcing them to change their business model.

But I think those kinds of questions around introducing friction that are content neutral, that are saying, look, whether this video is good or bad, you should have more safeguards in place to prevent, you know, the quick spread of content, you know, not looking at trying to, you know, hamper specific kinds of speech, but just trying to reform the system. Maybe that's somewhere where lawmakers can get more done and can make more progress.

Ben: So Nora, as a former tech reporter and current co-host of Endless Thread, I guess I think about this stuff a fair amount, just in terms of content moderation. So, it was interesting to me to hear evelyn both hold platforms accountable for the solution and also point out that they don't want this stuff on their platforms either. But to me, we also so clearly need an outsized response to this stuff at this point. Because I feel everything that's being done is clearly not enough.

Nora: It's clearly not enough, because this isn't an anomaly, right, as some of the experts have said. That landed with me. And what I also took away from talking to both Joan and evelyn, who kindly shared their expertise with us, is that there's only so much tech platforms can do to moderate content downstream of the event. It's very reactive, even in the best scenarios. And so I'm gonna think a lot more about some of the things evelyn was advocating for in terms of what kinds of laws we can make to increase transparency and to get out in front of this problem more than we have.

Ben: It was interesting, too, to hear her talk about the idea of actually going into these dark corners of the internet and pushing back against hate speech, against advocacy for violence, real or imagined. It sounds like we need a lot more proactive behavior from people who are not advocating for race-based murder.

Nora: Amen to that.

Ben: Amen indeed. Thank y'all for listening. It was not our plan to make this episode this week, but we felt like it was an important conversation to have with people who know about this stuff. And to think about it. We hope it's thought provoking for you. Take care of each other. Contribute to good things on the internet. Put good things into the world, please.

Nora: Hear, hear. Like, like. Share, share. I’m Nora Saks.

Ben: And I’m Ben Brock Johnson. And we will be back next week with more.

Endless Thread is a production of WBUR in Boston.

Our team is me, Nora Saks, Amory Sivertson, Dean Russell, Quincy Walters, Matt Reed, Megan Cattel, Emily Jankowski, Grace Tatter, and Paul Vaitkus.


Nora Saks, Producer
Nora Saks was a producer with WBUR's podcast team. 



Ben Brock Johnson, Executive Producer, Podcasts
Ben Brock Johnson is the executive producer of podcasts at WBUR and co-host of the podcast Endless Thread.


