The unwavering commitment to freedom of speech in the United States has, thus far, quashed any meaningful effort by regulators to curb hateful and violent ideas expressed on social media. Congressional committees have taken social media companies to task for their complicity in online election meddling and their lax posture toward hate speech, but these inquiries have resulted in zero federal laws and no significant or comprehensive regulations. A few bills have been proposed, including Sen. Ed Markey's (D-Mass.) "Kids Internet Design and Safety (KIDS) Act," but none have advanced beyond early committee review.
Social media companies have also been vocal about taking action; Facebook CEO Mark Zuckerberg has even asked the federal government to tell him and his industry what kinds of content are prohibited. But this enthusiasm for government oversight is tepid: most social media companies have invested heavily in lobbying efforts to keep regulation at bay.
The result is a veritable Wild West where terrorists, drug dealers, white nationalists and others have almost complete free rein to converse, compare, share and incite one another. The latest prominent example is a hateful anti-immigrant manifesto posted to 8chan by the perpetrator of the mass shooting in El Paso, Texas. It is worth noting that after the shooting, internet service providers blocked services to 8chan, in direct response to public outrage — that's hardly the kind of comprehensive, proactive and fair system that we want or need.
To be clear, these fringe groups stewing online represent a minuscule slice of the American population, but they find common cause on social media. Individual platforms have historically implemented codes of conduct and periodically acted to evict scofflaws, but new sites emerge, new sub-topics are populated and the hate and violence continue.
Surprisingly, a new model for confronting this anarchy online — and its resulting violence in the real world — comes from the United Kingdom, where lawmakers in 2017 passed the Digital Economy Act, the most far-reaching and restrictive internet regulation law in the Western, democratic world. While intended as a means to limit children's exposure to pornography, the law also offers a template for regulating hate speech.
U.K. lawmakers sketched out a model whereby porn websites are required to present a landing page where visitors must demonstrate that they are at least 18 years old. If sites fail to properly "card" visitors, the government is empowered to ask advertisers to pull their ads, restrict payment providers from processing payments or direct internet service providers to cut off the websites altogether.
The law, passed two years ago, has still not been fully implemented due to challenges in deciding which mechanisms the government will use to get site visitors to prove their age. Early critics suggested this would take the form of a physical license. Today the agency charged with implementing the law, the British Board of Film Classification, has outlined only rough guidelines on how an independent third-party age validator might operate (with some worried that those independent validators might actually be the porn websites themselves). Either way, full rollout of the program is not expected until 2020.
While it is too early to fully assess the U.K. experiment, it gives us much to consider as we weigh the role the internet plays in serving as a breeding ground for hate and violence.
The notion of a barrier to complete and unfettered access to the internet feels un-American. But just as free speech has been and continues to be regulated in the physical world (try yelling "fire" in a crowded theater or handing out child pornography on a city street), today's era demands a similar pivot toward fair and reasonable restrictions on speech in the online world. We must find ways to restrict access to the deep, dark reaches of forums and chat rooms where hate festers. The U.K. model suggests a path for us here in the U.S.
The use of an age validator or third-party license grantor eliminates the anonymity that fuels rage and animus on much of the dark web. Posters and readers of such vitriol would need to register their internet access, as the U.K. law demands of porn users, and the government would be empowered to respond to misuse by revoking or severely curtailing that access. We know that in the physical world, without the shield of anonymity, hatemongers tend to stay quiet and on the fringes — it is that same shield of anonymity online that allows them to thrive.
An identity validator or license ties a person’s social media activity to their home base — or more precisely, the location where they registered themselves for the license. This provides a key link between hateful and violent activity online and the local infrastructure of regulators and police investigators. It makes the aspatial internet spatial, and empowers the vast network of local, county, and state governments and law enforcement officials to regulate, monitor and enforce behavior in online public spaces.
With this kind of model, the internet can be a safer place, consequently making the real world safer, too.