
Only Humans Can Fix Facebook’s Fake News Problem

Facebook CEO Mark Zuckerberg makes the keynote speech at F8, Facebook's developer conference, Tuesday, May 1, 2018, in San Jose, Calif. (AP Photo/Marcio Jose Sanchez)

At Facebook's recent F8 conference, Mark Zuckerberg announced the company’s next step to fight fake news. Facebook has surveyed the public on how trustworthy they believe news sites to be. And going forward, Facebook will promote or demote news posts based on popular opinion. Your human judgment, powering their algorithms.

Clever, but as a software engineer, I find it misguided. When I was at Google, we always wanted to use the best data available. That was never random people on the street, not even for Google Maps, when we were literally trying to figure out what was on the street. The point is, the source matters. If Facebook users reliably knew which sites to trust, there wouldn’t be a fake news problem in the first place.


Facebook already partners with five journalism organizations to fact-check individual stories. That work must have generated a wealth of data for predicting future accuracy. Instead of mining Facebook’s users for their opinions about media sources, the company should use the data it has already collected about which outlets run false or misleading stories. It could even expand the mission of its existing team to measure balance and bias.

But Facebook can’t announce that. As a culture, we would rather trust machines we don’t understand than humans whose flaws we know too well. The cardinal sin for an Internet platform is to be caught exercising judgment.

We want an all-knowing, impartial, robotic decision-maker, who democratically consumes the world’s information and produces the one true answer. Internet platforms have to meet that expectation — even when it’s humans that are, or should be, behind the curtain.

Our faith in software has been shaken over the last two years, now that we’ve seen how algorithms can have their own biases and vulnerabilities. With software shaping so much of the internet, the public is demanding better outcomes, more transparency, less abuse. And I agree. Scrutiny will make engineers like me create better products.

There’s one solution we don’t talk about often enough, though: Bring back the humans.

Do you remember how Facebook's fake news crisis started? Two years ago, a technology reporter at Gizmodo discovered that Facebook’s Trending Topics team had hired journalists to correct their internal news algorithm when the journalists thought it had made a mistake. They could promote or hide stories in the trending section, just like a newsroom editor.


Within a week of that news, a former Facebook contractor alleged a systematic liberal bias. Republican senators launched an inquiry. Facebook fired the entire team a few months later, and the quality of the promoted articles dipped immediately.

On its first unsupervised weekend, the algorithm promoted a fake article about Fox News firing Megyn Kelly and a real article about a man doing unmentionable things with a McChicken sandwich.

Concerns about bias — liberal or conservative — are warranted. But prioritizing a list of results always requires decision-making based on judgment. Asking tech companies to hide the decision-making deep inside their machines — where we can’t see it happen — didn’t remove the decision or ensure nonpartisan accuracy. Without our guidance, the algorithms just take whichever side seems likely to keep our attention longer, regardless of whether a story is true or false.

At first, that was a true (if disturbing) story about a McChicken sandwich.

Soon it was a fake story about Pope Francis endorsing Trump.

Then it was a tale about the Supreme Court secretly impeaching Trump.

The problem is not lies on the internet. It’s the algorithms that deliver those lies, fueled by users with bad intentions.

Attendees roam the showroom floor during F8, Facebook's developer conference, Tuesday, May 1, 2018, in San Jose, Calif. (Marcio Jose Sanchez/AP)

Algorithms can only work off the data they have — clicks, likes and reshares — which measure our immediate impulses. The algorithms built on our observed behavior risk promoting our worst biases. They also naturally steer viewers towards extreme content. (One example: 88 percent of Americans believe the benefits of vaccines outweigh the risks, but only 21 percent of YouTube videos about vaccines are positive.) And algorithms can be manipulated. Most people know that ISIS and Russia have long manufactured crowds of fake users to boost stories. Last year the University of Oxford documented 28 countries engaged in some form of social media manipulation.

From Facebook’s description, their popularity contest is likely to suffer from three biases: 1) Response bias, where respondents pretend not to trust outlets in order to get them demoted; 2) In-group bias, where respondents subconsciously favor sites that share their political views; and 3) Participation bias, where the only respondents familiar with an obscure site are in its target audience. If there is significant bias, using the data could make Facebook worse, not better.

When the stakes are high and the decisions few, we should prefer humans over machines. Facebook knew that truth two years ago, but walked away from it under pressure.

If Facebook is serious about fighting back against fake stories, it should go back to using journalists to cover the news. If we want the best results, we should let them.


Brian Lefler, Cognoscenti contributor
Brian Lefler has worked as a software engineer at Google, Amazon and the U.S. Digital Service.
