Facebook Has Only Itself To Blame

(Joel Saget/AFP via Getty Images)

In the exhaustion that followed the 2020 election and the 2021 runoffs for Georgia’s Senate seats, many people reported suffering from Facebook fatigue, and the decline in daily active user (DAU) figures bears that out. But it’s becoming clear that this isn’t a temporary phenomenon. Attrition, or at least reduced usage, has persisted, and growth in the U.S. user base has stalled, with 2021 likely to have the lowest annual growth rate in the company’s history.

Perhaps some drop-off was inevitable. I know that once I reconnected with those old friends from high school, I realized that I’d lost touch with many of them for good reason, and found ways to renew my relationships with others via more private channels. As delightful as it is to see photos of my friends’ children and pets, I’m content to see them every few weeks. And just as too much innocuous geniality is tiresome, so too is the extreme irascibility that characterizes so much online discourse. As Roxane Gay and others have noted, otherwise decent people, feeling helpless offline, seek some sense of agency in social media through endless patrolling and correcting that leave their readers unenlightened and drained.

These are just the personal, quotidian reasons for tuning out. But Facebook is guilty of more than just boring or enervating us. Most recently, the company disabled the accounts and data access for two researchers who were studying how misinformation spreads on the platform.

While it’s premature to declare that the social media behemoth has foundered, let alone sunk, it certainly finds itself cruising in icier waters. And for good reason. Mark Zuckerberg’s groovy, vacuous rhetoric about “connecting the world” notwithstanding, Facebook is an amoral platform willingly exploited by a small but mighty band of immoral actors.

To understand how it got that way – and to map out a path to regulating it appropriately – it’s worth revisiting the goal Mark Zuckerberg articulated back in 2014, before the platform’s power to influence was well understood.

“Our goal is to build the perfect personalized newspaper for every person in the world,” Zuckerberg said. “We’re trying to personalize it and show you the stuff that’s going to be most interesting to you.”

Of course what’s most interesting to us tends to be statements and stories that validate our existing beliefs or, better still, make us feel privy to some insider knowledge that only those with our superior minds can fully appreciate. (Yes, QAnon proponents and Mike Lindell followers, I’m looking at you.) More insidious, though, is the underlying belief that being shown only what we want to see is somehow a good thing. Few people want to see photos and articles about climate catastrophes, genocide, and imperiled refugees, but if humanity and the planet we live on are to survive, we need to.

That’s why newspapers and news sites exist. The good ones – the credible ones – take it upon themselves to show us what we don’t want to see, as factually and impartially as possible. And that’s why Mark Zuckerberg’s characterization of Facebook as a “newspaper” is more relevant than ever, even if he now chooses to define his corporate mission in far squishier terms.

The platform’s role in propagating misinformation – most notably during the 2016 election and the COVID-19 pandemic – is well-documented. The consequences of lies at such a massive scale have led 64% of Americans to say social media have a mostly negative effect on the way things are going in the country today. (Perhaps not-so-coincidentally, 69% of American adults say that they use Facebook, suggesting that they know whereof they speak.)

In response to these criticisms, Facebook launched a fact-checking program in 2016 and expanded it two years later. But this initiative falls short on multiple fronts. For starters, while moderators are free to remove posts that violate the company’s “community standards,” fact-checkers do not remove posts that are demonstrably false. Rather, as Facebook explains on its Publisher page, “We take action against misinformation by reducing its spread so that fewer people see it.” This raises the question of why it’s better to show misinformation to fewer people than to prevent it from spreading at all.

And while fact-checkers can flag specific instances of misinformation, Facebook’s algorithms for identifying variations on each rotten apple are grossly inadequate. Alter a graphic, change a word, and misinformation hustlers can feel fairly certain that their lies will spread undetected. As the Columbia Journalism Review recently reported, “Our analysis found that Facebook still struggles to recognize similar content and propagate its fact-check labels (False, Altered, Partly False or Missing Context) at scale. For example, we found 23 instances of a meme that attributed the murder of a Black Chicago teenager to ‘other Black kids’ … Facebook was unable to recognize other iterations of the meme using similarity-matching algorithms ...”
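To make concrete why a trivial edit defeats exact matching, here is a minimal Python sketch – illustrative only, and assuming nothing about Facebook’s actual pipeline. A cryptographic hash changes completely when a single word does, while a similarity score over overlapping word windows (“shingles”) still flags the altered post as a near-duplicate:

    # Illustrative sketch only, not Facebook's system: compares exact
    # fingerprinting, which a one-word edit defeats, with shingle-based
    # Jaccard similarity, which still catches the near-duplicate.
    import hashlib

    def exact_fingerprint(text: str) -> str:
        """Byte-exact hash: any single-word edit yields a new value."""
        return hashlib.sha256(text.lower().encode()).hexdigest()

    def shingles(text: str, k: int = 3) -> set:
        """Overlapping k-word windows, a common unit for near-duplicate detection."""
        words = text.lower().split()
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

    def jaccard(a: set, b: set) -> float:
        """Similarity from 0 to 1: shared shingles over all distinct shingles."""
        return len(a & b) / len(a | b) if a | b else 0.0

    original = "this shocking photo proves the vaccine was never tested on humans"
    altered = "this shocking image proves the vaccine was never tested on humans"

    # One changed word defeats the exact hash entirely ...
    print(exact_fingerprint(original) == exact_fingerprint(altered))  # False

    # ... but the two posts still share most of their shingles.
    print(jaccard(shingles(original), shingles(altered)))  # 0.5

Doing this robustly at Facebook’s scale – across images, video and text in dozens of languages – is of course a far harder problem, and precisely the gap the Columbia Journalism Review analysis identified.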


Furthermore, algorithms for spreading content work much more quickly than the human beings evaluating it. In January 2020, for example, there were 302 fact checks of Facebook content in the U.S. But according to a study by Popular Information, “much of that work was conducted far too slowly to make a difference.” Just nine of PolitiFact’s 54 fact checks were completed within 24 hours of the content being posted to Facebook, and fewer than half were completed within a week.

Perhaps Facebook’s fact-checking partners (whose ranks amazingly include the Tucker Carlson-founded mendacity mill, The Daily Caller) will be able to move beyond a demoralizing, uselessly slow cycle of Whack-a-Mole. But until these algorithms are improved, Facebook should more explicitly and systematically distinguish ordinary people sharing their (real or fabricated) lives with friends and family from those seeking not so much to connect as to broadcast – that is, distinguish individuals from publishers.

Anthropologist Robin Dunbar hypothesized that most people can maintain meaningful interpersonal relationships with a maximum of about 150 people. And while that number may be larger for some in the age of social media, nobody is contesting the notion that we have limited room in our heads and hearts for genuine, sustained connections. So why not decree that anyone with more than 150 or 300 or even 500 followers be considered a publisher, subject to a different set of rules? Make them pay to post, which will at least diminish the overall noise. Require them to submit their content to fact-checking before it goes live, just as real, credible news organizations do.
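The proposed rule itself is simple enough to sketch in a few lines of Python. The threshold and the obligations attached to “publisher” status below are this essay’s hypotheticals, not any existing platform policy:

    # A minimal sketch of the essay's proposal. The threshold and the
    # obligations attached to "publisher" status are hypotheticals from
    # this essay, not any real platform policy.
    PUBLISHER_THRESHOLD = 150  # or 300, or even 500

    def classify_account(follower_count: int) -> str:
        """Label an account an 'individual' or a 'publisher'."""
        if follower_count > PUBLISHER_THRESHOLD:
            # Publishers would pay to post and submit content for
            # fact-checking before it goes live.
            return "publisher"
        # Individuals keep sharing with friends and family as they do now.
        return "individual"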

If Facebook is ever to rise beyond acting as a cultural mirror infinitely replicating only the images we want to see, if it is to ever realize Mark Zuckerberg’s new alleged mission to “bring the world closer together,” it has to do so on the basis of truth, not Likes.

Julie Wittes Schlack, Cognoscenti contributor
Julie Wittes Schlack writes essays, short stories and book reviews for various publications, including WBUR's Cognoscenti and The ARTery, and is the author of “This All-at-Onceness” and “Burning and Dodging.”
