What A Facebook Exec Is Teaching Harvard Law Students About Hate Speech And Internet Trolls

The Facebook logo in 2012 is displayed on an iPad in Philadelphia. (Matt Rourke, AP file photo)

When faced with the worst of the internet — trolls, Russian propaganda machines or global terrorism — Facebook turns to Monika Bickert.

Bickert is Facebook's vice president of global policy management. Her team oversees the social media giant's rules for what types of content are and aren't allowed on the platform. This past fall, Bickert taught Harvard Law School students about the challenges of managing online content in a course called "Social Media and the Law."

The issue is one of Facebook's biggest challenges. The company has repeatedly faced criticism for allowing misinformation and hate speech to spread on its platform, and has shifted strategies under public pressure and rising scrutiny — especially since the 2016 election.

Here's what Bickert, a Harvard Law School alum, had to say about her class, and how Facebook is now tackling the problems she discusses. (This conversation has been edited for length and clarity.)

What are you teaching students in this class? 

Monika Bickert is Facebook's vice president for global policy management and counterterrorism. (courtesy Facebook)

This semester we covered topics like hate speech, terror propaganda and misinformation. We also looked at the sorts of topics that law students are used to addressing, like due process and systems of adjudication and appeals. But we're looking at them through the lens of how a social media company should apply its policies.

A lot of these issues are things that Facebook has struggled with — and there have been failures in some cases. What can these students learn from you?

I remember when I was a student that I loved the opportunity to talk to a practitioner — somebody who was actually out doing the work — and to see what it looked like in practice. The dialogue that we have in class is designed to help them put those ideas in the context of what this really looks like behind the scenes at a tech company trying to enforce our policies.

So you mentioned hate speech. How do you respond to the criticism that Facebook hasn't done enough to tamp down on hate speech on the platform? In Myanmar, for example, the genocide was tied back to communication on Facebook.

Well, there's always more we can do, of course. I will say that we've made tremendous strides over the past six years that I've been at the company at enforcing our longstanding rules against hate speech. And we're now, especially in the past few years, at the point where we proactively detect most hate speech that we remove from the site before anybody reports it to us.

We're also now doing things like, if somebody searches for white supremacy-related terms on Facebook, we will surface resources that can actually help them get away from hateful ideologies.

We're now providing content from groups like Life After Hate in the United States. And these are basically groups that have made a study of how to help people get away from hateful ideologies. We want to make sure that people have access to that content if they're looking for hateful content on Facebook.

(Editor's note: A study by the European Commission found that Facebook and other tech companies have gotten faster at removing hate speech. However, Time Magazine has reported those improvements may be uneven depending on language. And reviews by The Guardian and BuzzFeed have found that despite the company's tougher policies, in some cases white supremacists continued to operate on Facebook.)  

You said Facebook is proactively taking down hate speech. How do you go about doing that?

It's not as easy as people sometimes think. Sometimes people will say, 'well, can't you just have a filter that looks for, for instance, racial slurs?' But a lot of the content that we see that contains slurs is activists or others saying we need to talk about this word in society, or this morning somebody called me this on the subway, and we want to allow that conversation to happen.

So what we do instead is use image matching and matching on groups of terms to identify speech that might violate our hate speech policies. That content is then sent to our content reviewers.

Since the 2016 election, there's obviously been a lot of concern and conversation about disinformation, fake news and misinformation on Facebook. Why, in that climate, was the doctored video of Nancy Pelosi left up on Facebook?

When we find that content is not true on Facebook — when that determination has been made by one of the third-party fact-checking groups that we work with — then we don't remove the false content. What we do is label that it has been fact-checked, so that anybody who sees it will see, 'Oh, this has been marked false by a fact-checker,' and then we also put the fact-checker's article next to it. And that's what we did with the Nancy Pelosi doctored video.

Do you think that's enough, though?

If something is in the public discourse, we think what's most important is making sure that people have the context to understand what they are seeing, and what they should be thinking about it.

The fact-checking entities we work with are not Facebook; they're outside groups that have been certified by the Poynter Institute for meeting certain criteria in how they do their fact-checks. We work with them so that we can get more truthful information out there, and we label as false any content that they have rated false.

So what's the line for Facebook when it comes to misinformation?

We have clear policies against hate and violence and threats, and we have for years. Any of that content that we find, we remove. We now put out a report every six months, our community standards enforcement report, where we show not only how much of that sort of speech we have found and removed, but also how good we were at finding it before anybody reported it to us. And then we do a study of how prevalent this sort of content is on Facebook.

We're in a new presidential election season now where many of these issues will likely come up again as they did in 2016. And we know from 2016 that Facebook had a big influence on people in that election. Are you concerned at all about the influence of Facebook this time around?

We're focused on making sure that we have the right policies and teams in place to do what we can to promote free and fair elections around the world. We now have more than 300 people at the company who are focused on elections integrity. And we're also focused on removing inauthentic voices from elections. That includes fake accounts; we've removed billions of fake accounts this year.

But we're also focused on finding more sophisticated networks. Around the 2016 election, we found and removed a group of actors operating out of Russia, the Internet Research Agency. We're looking for other groups in elections around the world who are trying to engage in coordinated inauthentic behavior like that. In the past year, we removed groups like that in more than 20 instances around the world.

So it sounds like there is some type of political speech that Facebook will remove?

When it comes to misinformation, in general, we don't remove content simply for being false. Now, there are some times where we will actually remove false content from the site. Overwhelmingly, that's where we're talking about an imminent risk of physical harm and where a safety group on the ground in that region has confirmed for us that the content is false and that it could, in fact, lead to harm on the ground. We've removed information, for instance, in Sri Lanka, in Myanmar, in Bangladesh, where we have seen a threat from that kind of misinformation.

So in areas where we are able to assess the content like threats and terror propaganda and hate speech, then we can go after that, find it and remove it. When it comes to deciding what is true and what is false all over the world — and keep in mind about 87% of Facebook users are outside the United States — then we don't think it's appropriate for a private company to be the truth police.

Twitter has said it won't allow political ads. Why won't Facebook fact check political ads?

We know that ads are an important way for candidates to reach their audiences. At the same time, we want to make sure that there is transparency and people can really see how this is being done. A couple of years ago, we launched a political ads library where people can see who is running what ads.

We know that political speech is heavily scrutinized. In fact, not a day goes by in the United States that we don't see coverage of what politicians are saying and criticism of what they're saying. We want to make sure that if people are seeing that on Facebook in an ad that they're able to trace who's saying this, who are they saying it to, and who is paying for all of this.

Is there anything that you've learned from your students this past semester that you will take back to Facebook?

The students in this class are not just brilliant and hardworking, they're also very willing to approach a topic openly. At the end of the class, I will ask them to put their CEO hat on or do a show of hands and say, 'how many of you would take this approach? How many of you would take this other approach?' And that process of watching them examine an issue they're not familiar with and really get comfortable discussing it and weighing it is fun for me every time.

Zeninjor Enwemeka, Reporter
Zeninjor Enwemeka is a reporter who covers business, tech and culture as part of WBUR's Bostonomix team, which focuses on the innovation economy.
