Protecting Free Speech Or Hate Speech? Shootings Intensify A Cybersecurity Dilemma

Rene Aguilar and Jackie Flores prayed Sunday at a makeshift memorial for victims of Saturday's mass shooting at a shopping complex in El Paso. (Andres Leighton/AP)

The weekend's mass shootings in El Paso and Dayton have intensified a long-running debate about gun control — and a newer debate about internet control.

In Greater Boston's large cybersecurity industry, some executives are thinking hard about the decision one of their counterparts in California has made to cut service to a website where the alleged El Paso gunman posted racist comments shortly before the shooting.

There is no consensus on how to handle the competing interests of free speech and public safety.

"We're not content police, but I think we do have a responsibility to our employees and our customers and our shareholders and, as a company, we have a cultural responsibility," said Tom Leighton, chief executive of Cambridge-based Akamai Technologies.

Akamai's services include protecting websites from hackers. That's what San Francisco-based Cloudflare did for the website 8chan until late Sunday, when Cloudflare Chief Executive Matthew Prince wrote in a blog post that his company is "terminating 8chan as a customer."

Prince noted that the suspected shooter in El Paso is the third person this year to post hateful messages on 8chan before allegedly carrying out a murderous rampage. Cloudflare's move left 8chan vulnerable to cyberattacks, effectively forcing the site offline temporarily.

Leighton, who co-founded Akamai in 1998, said he and his partners decided from the outset "not to accept certain kinds of customers. That included certain kinds of adult content, illegal gambling, criminal activity, terrorism and related content. We don't take that business and, if we discover that kind of content somehow got onto the platform, then we discontinue it."

For Waltham-based Carbon Black, the line is "any overt hate speech that could lead to terrorism or direct harm to others," according to the company's chief cybersecurity officer, Tom Kellermann.

He added that Carbon Black aims to "protect data across a broad range of the political spectrum" but has a blanket policy not to do business in countries such as Iran, China, North Korea and Russia, "all of whom are constantly attacking the U.S."

Kevin O'Brien, chief executive of Waltham-based GreatHorn, said he respects the prerogative of any company "to make ethical decisions about whom they will and won't work with, when it comes to security."

But the former philosophy major added that the central question, as he sees it, is: "Do you believe that people who have views that you might judge to be morally reprehensible should be silenced? And I would say the answer is no, as difficult as that is."

O'Brien pondered a hypothetical scenario in which his firm, which specializes in email security, discovered that it had shielded messages used to plot violence. Could he live with that?

"It's a powerful question, and it is something we think about," he said. "The question I would then ask is: Is the root problem here that there were secure communications? Or is there some other cause that we should be focusing on? I think it's a natural question to say, 'Have we done everything possible to stop these things?' But it's not Cloudflare or GreatHorn or security services that are creating this problem, and I think that we're shifting the blame and the focus to the wrong thing, if that's what we're talking about.

"So, that's the answer that I come up with. But do I grapple with it? Is it difficult? Do I find these things incredibly troubling and hard to sleep at night because of them? Absolutely."

This segment aired on August 6, 2019.


Callum Borchers Reporter
Callum covered the Greater Boston business community for Bostonomix.


