
Child sexual abuse material is on the rise online. Will lawmakers and big tech finally act?



In just one year, reports of suspected child sexual abuse material online jumped by 35%. A big spike in an ongoing trend.

"What I think has been particularly troubling over the last 20 years is that the victims are getting younger and younger and younger. The acts are getting more and more violent and becoming more prevalent," Hany Farid, a professor of computer science at UC Berkeley, says.

The content is easy to find, because it's not hiding. It's circulating on the largest social media platforms in the world.

"It is not in the dark recesses of the Internet. It's on Facebook, it's on Instagram and TikTok. It's on social media. You don't have to look hard to find this material," Farid says.

Critics and child advocates say that companies are not doing enough to take down the content.

"Nobody wants to hear about this. It's awful. But we can't bury our heads in the sand and think, well, this isn't going to happen," Farid says.

Today, On Point: Child sexual abuse material online, and what lawmakers and the tech industry can do to stop it.

Guests

Yiota Souras, senior vice president and general counsel at the National Center for Missing and Exploited Children, a private, non-profit organization established in 1984 by the U.S. Congress.

Hany Farid, professor of computer science at the University of California, Berkeley. Professor Farid, along with Microsoft, invented PhotoDNA, the main method for detecting illegal imagery online.

Also Featured

Jane Doe, whose son John Doe was coerced and threatened by a predator into making and sharing child sexual abuse material online at age 13. The family is currently suing Twitter for allegedly not taking the material down once notified. They are being represented by the National Center on Sexual Exploitation Law Center and The Haba Law Firm.

Damon King, principal deputy chief of the Child Exploitation and Obscenity Section (CEOS) within the U.S. Department of Justice, Criminal Division.

Transcript

MEGHNA CHAKRABARTI: Jane remembers looking at the message on her phone. It was January 20, 2020.

JANE DOE: It was like 11:30 at night, and I got a text from ... an associate at work asking me, is your son so-and-so? And I said yes. And they said, ‘Well, he is on the phone with my niece right now and he's discussing that he doesn't want to live anymore and he is suicidal.’

CHAKRABARTI: Jane's son, John, was 16. He was thriving in class, in baseball, at church. So she thought, You’ve got the wrong kid here.

But that night, John told her, yes, he was suicidal because of a video that was being passed around in school.

DOE: So, you know, the first kick in the gut is that I think I know my kid and he's suicidal. And the second kick in the gut is that what is this video?

CHAKRABARTI: The video was made three years earlier. It was child pornography. And her son was in it. This is On Point. I’m Meghna Chakrabarti.

This story is about child exploitation and may not be appropriate for all listeners. Jane and John Doe are not their real names. They have filed a lawsuit, and the court granted them anonymity because John was a minor when the sexually explicit video of him was made.

The video was not hidden away on the dark web. It was posted on one of the largest public social media platforms in the world, Twitter, where it was watched more than 167,000 times.

DOE: Repeatedly, we reported it as child pornography material, tried to tell his age. They weren't going to take it down.

Repeatedly, we reported it as child pornography material, tried to tell his age. They weren't going to take it down.

Jane Doe

CHAKRABARTI: There has been an explosion of child sexual abuse material online, on almost every social media platform in the world.

In 2021, there were almost 30 million reports of suspected online child abuse material, a 35% increase from the previous year, according to the National Center for Missing and Exploited Children. Millions more images and videos continue to circulate that have never been reported.

Tech companies know this. They also know possessing or distributing these materials is a federal crime.

In November of 2022, Twitter's owner, Elon Musk, posted that addressing child exploitation on the platform was his top priority.

CHAKRABARTI: More recently, Twitter announced it had suspended more than 400,000 accounts associated with child abuse material in January 2023 alone, saying:

“Not only are we detecting more bad actors faster, we’re building new defenses that proactively reduce the discoverability of Tweets that contain this type of content.”

But even with such efforts, child sexual abuse content continues to swarm social media sites. And critics claim it’s because the companies themselves are willfully not doing enough.

In John’s case, even after he and his mother made multiple reports to Twitter about the explicit video, the company made a choice.

"We've reviewed the content," Twitter said in an email to the family. "[We] didn't find a violation of our policies, so no action will be taken at this time."

Today, we're going to talk about why.

John is Jane’s youngest child. She says he's a straight A student, played baseball since he was five and was very involved in their local church.

Home was their sanctuary.

DOE: My husband worked night shift. I worked day shift so that our kids never had to go to daycare. They've never ridden a bus because I felt like if we kept them safe, then they would always be safe.

They've never ridden a bus because I felt like if we kept them safe, then they would always be safe.

Jane Doe

Jane says they allowed only limited use of social media and had family rules about what kind of websites the kids could access.

But that night, in January 2020, when 16-year-old John was suicidal, Jane discovered that a trafficker had in fact reached into their home and found their son three years earlier … through Snapchat.

John was 13 at the time. And the person sent him Snaps pretending to be a 16-year-old girl. The messages were convincing. John thought she went to his school.

DOE: And he said to me, mom, she had to have known me because she knew that I played baseball. She knew that I went to this specific high school. So I thought it was somebody that he knew.

CHAKRABARTI: One night, John had a friend from the baseball team sleep over. He was also 13. And they both started messaging with the person. When the person realized John and his friend were in the same room, the person sent them photos – nude photos that John thought were coming from a 16-year-old girl.

DOE: He honestly was like, Mom, I was 13. I had never seen a naked girl before. Like, I hadn't.

CHAKRABARTI: Jane's heart still breaks, because of what happened next. The person asked John and his friend to send back nude photos and videos of themselves together. And they did.

DOE: I wish that he hadn't responded to it, but he was, you know, 13. And how many 13-year-old boys that have never seen anything like that, will do the same thing?

CHAKRABARTI: Suddenly, things changed. The person wanted much more graphic videos of the boys performing sexual acts, and directly threatened the boys if they didn't comply.

DOE: The threats continued, and the blackmail continued. To the point that they told him that if he didn't continue to escalate the things sent, that the predator threatens to send what was already taken to his parents, to his pastor, to his coach, place it online.

CHAKRABARTI: John was being pulled into an alarming new trend: child exploitation via self-generated content, in which what at first seems like an innocuous contact soon leads a minor to create and send sexually explicit images of themselves to child predators. Children have reported being manipulated, threatened, coerced and even blackmailed into sending increasingly explicit photos and videos of themselves.

At this point, John realized the person was lying about being a 16-year-old girl. He knew what was happening was wrong. But he didn't know how they got his number or Snapchat username.

And he was scared. The threats continued for weeks.

And then, the trafficker asked to meet in person. John refused.

DOE: He did not meet them. He said he just stopped communicating with them.

CHAKRABARTI: The trafficker stopped messaging him. And John prayed it was over. It was not.

Three years later, John was in high school. And suddenly, a compilation of the explicit videos from that sleepover was posted on Twitter. It went viral. Recall, the videos had initially been sent over Snapchat — a platform whose servers are designed to automatically delete messages after they've been viewed by all recipients.

DOE: Teens think that those go away, but the person was keeping the videos somehow. And then they compiled several videos together to make one significant video, and then that’s what was on Twitter.

Teens think that those go away, but the person was keeping the videos somehow.

Jane Doe

CHAKRABARTI: Meaning, the video was now circulating widely on one of the largest public social media platforms in the world.

Twitter’s content standards clearly state that the company has a "zero-tolerance child sexual exploitation policy," which forbids "visual depictions of a child engaging in sexually explicit, or sexually suggestive acts."

According to court records, on December 25, 2019, someone sent Twitter a content alert about specific accounts that were posting child sexual abuse material. Twitter did not take action against those accounts.

A few weeks later, one of those same accounts posted the video of John and his friend. By January 20, 2020, John’s classmates had seen it. That was when Jane received the text message about her son wanting to take his own life.

DOE: I think the most disheartening thing is that from a parent’s standpoint is to know that your child was going through all of this for so long and that you didn’t even know until years later. And I couldn't help him because he didn't feel like he could tell me. So, you know, when I found all this out in January of 2020, you know, I don't know — it's hard to even explain. You just feel like the whole world was ripped out from under you.

CHAKRABARTI: And Jane says John felt doubly victimized. Because the videos ended up on social media, people thought John had willingly made them, even though in reality, he'd been coerced.

Jane and her son contacted school officials, law enforcement, and Twitter. They reported the video to Twitter on January 21st. Twitter replied the same day, and asked for documentation proving John's identity. The family sent a copy of his driver's license.

Jane sent two more content complaints to Twitter on January 22nd. The company replied with an automated message, and then did not respond for days. On January 26th, Jane contacted Twitter again. And finally, on January 28th — Twitter emailed its decision:

"We've reviewed the content, and didn't find a violation of our policies, so no action will be taken at this time."

DOE: Basically their end result was that if we wanted to fill out ... another complaint, we could. But to say that they weren't going to do anything, and it didn't break any of their policies was mind blowing to me.

To say that they weren't going to do anything, and it didn't break any of their policies was mind blowing to me.

Jane Doe

CHAKRABARTI: It was also shocking to an agent from the Department of Homeland Security, whom Jane had contacted via a connection from her pastor.

DOE: His words were, it's, you know, black market material that's being allowed on there, and he was blown away.

CHAKRABARTI: The agent approached Twitter, and on January 30th, after receiving his direct request on behalf of the United States Federal Government, Twitter took down the video.

But by that time, it had been retweeted more than 2,000 times, and watched more than 167,000 times.

DOE: What was on Twitter was taken down, but I have no idea if it's on other platforms or if someone took it from Twitter and put it somewhere else. To this day, we have no idea where it is and how many people are still viewing it.

CHAKRABARTI: Jane says, it also means that John will have to live with the fact that the video may be out there for the rest of his life.

Jane and John Doe are suing Twitter. They claim the company benefited from violating federal law, specifically the Trafficking Victims Protection Reauthorization Act. The case is currently pending before the U.S. 9th Circuit Court of Appeals.

We contacted Twitter for comment on Jane and John’s case. The company did not respond to our requests.

In court filings, Twitter makes several specific arguments for its innocence, most notably invoking Section 230 of the Communications Decency Act. Tech companies often assert that the section shields them from liability for user-generated content on their platforms.

However, a 2018 federal law amended Section 230 and made it illegal to "knowingly assist, facilitate, or support sex trafficking."

Twitter rejects the allegation that it knowingly assisted trafficking in John’s case. In a motion to dismiss, Twitter says it had no role in the creation of the videos and that it never "knowingly provided any assistance to the perpetrators."

The company also claims that there's no evidence that "Twitter's failure to remove the videos was a knowing and affirmative act to aid a sex trafficking venture, rather than mere error."

So, in the age of social media, what, exactly, constitutes knowing assistance of child sexual abuse? How much time must pass before a decision not to take down such content goes from mere error to facilitating its spread?

These are the questions now before judges in the 9th Circuit.

There is one more question that haunts Jane and John. Jane thought she'd done everything right to protect her son. So, why did the trafficker target him? How did they get his number?

DOE: Did I ever think this could happen in my own house? No, we never thought this could happen to us. I mean, we don't know who he or she is, and they haven't been stopped, so that's my greatest fear, that other children are being hurt.

Did I ever think this could happen in my own house? No, we never thought this could happen to us.

Jane Doe

CHAKRABARTI: Local law enforcement did open an investigation, but never positively identified a perpetrator.

Digital forensics led to a house barely 20 minutes away from John and Jane's home, but police said they lacked sufficient evidence to make an arrest.

When we come back, we'll hear more about the sharp rise in child sexual abuse materials online, and what needs to change with technology, politics and the law to do something about it.

CHAKRABARTI: Let's turn ... to Hany Farid, who is a professor of computer science at the University of California, Berkeley. And Professor Farid, along with Microsoft, invented PhotoDNA, the main method for detecting illegal imagery online. Professor Farid, welcome to you.

HANY FARID: It's good to be with you.

CHAKRABARTI: We'll get back to what this kind of content actually is and what's driving its unpleasant growth online. But at first glance, how would you evaluate or how would you judge what the platforms are doing in order to curb the content online?

FARID: The absolute bare minimum. And this is not a new trend. I got involved with Microsoft back in 2008 to try to combat the then-growing problem, which has only exploded in the last decade and change. And the technology companies have been dragging their feet already for five years and they continue to drag their feet. They have good, don't get me wrong, they have good talking points. We have zero tolerance. We do X, we do Y.

But the reality is that you can go on to Twitter, you can go on to Facebook, and you can go on to these mainstream social media sites and find this horrific content. And these technologies are being weaponized against our children, which are getting, of course, younger and younger and younger, and the problems are getting worse. So you just heard about this extortion, which is now a rising problem.

The live streaming is now a rising problem, in addition to the sharing of images and video to the tune of tens of millions a year. And the tech companies keep saying the same thing: We have zero tolerance for this. But the problem persists and is getting worse.

CHAKRABARTI: Hang on here for just a moment, because I think we have Yiota Souras on the line now. You are with us?

YIOTA SOURAS: I am. I'm here.

CHAKRABARTI: Thank you so much for joining us today. And once again, Yiota is senior vice president and general counsel at the National Center for Missing and Exploited Children. So you heard Professor Farid there describing how much of this child sexual abuse material is online on these public and open platforms. Yiota, how would you describe the growth in the amount of this content, not just over the past year or two, but even prior to that?

SOURAS: The volume has really increased exponentially. So you heard in the opening that in 2021, the National Center for Missing and Exploited Children received over 29 million reports. It's a tremendous number. It's a heartbreaking number. In 2022, we received over 32 million reports. And as Professor Farid was just detailing, not only do we see the numbers increasing, but we see the types and the ways that children are victimized evolving as well.

So we certainly heard in the opening about the blackmail or the extortion that is at issue in the Twitter case. Well, we are also seeing abuse that is getting increasingly violent and egregious, and many of the children in the images are very young infants and toddlers, often too young to even call for help.

CHAKRABARTI: So I just want to focus for another moment on this self-generated content, because I think it's something that maybe a lot of people listening right now may not have heard of as even a possibility. So, Yiota, can you tell us more about that? Because, I mean, even self-generated content is kind of a misnomer, right? We're talking about children who are, you know, sending images of themselves, but often under duress.

SOURAS: Absolutely. So we have seen alarming increases in the amount of self-generated reports of exploitation that are being received here at NCMEC. And I'll explain a little bit about how that generally arises. You know, much as in the case of the Twitter example with John that we heard in the opening, children can be enticed under false pretenses. So, a child can be lured into believing that they are communicating with a same-age or similarly aged child, that they are trading imagery with someone who they think they may know, or who may be a peer in some way.

We have seen alarming increases in the amount of self-generated reports of exploitation.

Yiota Souras

But in fact, it is an adult. It is an offender who knows how to manipulate that child, how to get images from them. And then what? Once the images are shared, much as in the case that we heard in the opening, things turn very quickly. So we often see cases where children are then pressured, blackmailed into sending increasingly egregious imagery, sending more imagery, sending imagery where they are involved in abuse with other children. Upon threat that the original image will be shared online. So at this point, the offender has made clear to the child that they know their friend group, they know their family group, their school activities. So they can make that threat very, very real to children.

The other way we're seeing this evolve is financial sextortion. And that is a much newer trend that we're seeing. It's primarily targeted at teenage boys, and it involves the same opening as a general sextortion crime. A boy might be approached online, again, enticed or lured into sharing an image. And then the threat is not for more images, but it is for money. And we've seen some tragic outcomes where young boys have taken their lives as a result of financial sextortion.

CHAKRABARTI: Professor Farid, this may sound like a completely gauche question, but I still have to ask it anyway. In many of these cases of the quote-unquote, self-generated content, how are the perpetrators getting access to the children? I mean, is it because the platforms are so open? Or how?

FARID: Yeah, that's exactly right. It's that we have created, if you will, a sandbox where kids as young as eight, nine, ten, 12, 13 are intermixing with adults and with predators. And this is by design. The platforms just have completely opened up. There are very few guardrails. Settings have, up until fairly recently, always defaulted to the most open, even for minors. And it's relatively easy for predators to find and get access to young kids and then extort them.

The platforms just have completely opened up. There are very few guardrails.

Hany Farid

And so this is not a bug. This is a feature. This is the way the systems were designed. They were designed to be frictionless, to allow maximum user generated content to be uploaded, to allow maximum engagement by users. And then the side effect is this horrific contact that you are seeing where predators very easily find these young kids. And it's not just one platform. Understand that this is all the platforms. I mean, I've no problem jumping up and down on Twitter's head on this, but all the platforms have this problem.

CHAKRABARTI: So, Yiota, as we said earlier, obviously it is illegal to possess and disseminate child sexual abuse material. It's a crime, and yet thus far the platforms, all of them, to Professor Farid's point, have successfully evaded liability. And even in the Twitter case that we talked about with John Doe, almost all of John's claims were dismissed by a judge because of Section 230 of the Communications Decency Act.

There's one claim that's still continuing to work its way through the courts. So why is it that Section 230 continues to provide this wall of protection for the platforms, even in the face of, you know, obviously illegal content?

SOURAS: Well, hopefully that's about to change with some pending legislation and some legislation that we'll see, I think, a little bit later this year. But as you noted with Section 230, quite simply, there is no legal recourse for a victim of this crime or their families when sexually explicit images of a child are circulated online. There is no legal recourse against a technology company that is facilitating, that is knowingly distributing, that is keeping that content online. As we saw again in the Twitter case where they had actual complaints and notices regarding that content and still kept it up. Section 230, as we know, has been around for quite a while.

There are some benefits to it societally and for the technology industry, but it has been misinterpreted at this point to create this perverse environment, which Professor Farid detailed. It enables children and adults to have private conversations and exchange imagery in total solitude, in a way that we would never tolerate in any other aspect of our lives. There is nowhere in the world that we would allow children and adults to be together, to have intimate conversations, to share photos with no adult supervision, with no review, with no responsibility by the platform that is actually hosting those interactions. And yet Section 230 permits that currently.

CHAKRABARTI: So, Professor Farid, I'd like to hear you on this. Because as we know, the platforms, the tech companies, frequently point to Section 230 not just as a shield from legal liability, but also in terms of, well, they say they don't want to get in the game of infringing on speech. Or be eventually forced to, say, do content moderation on other variously questionable activities, say drugs, weapons, terrorism, things like that.

FARID: I think that's exactly right. I always like when Mark Zuckerberg says how much he loves the First Amendment and then bans legal adult pornography from his platform. Same thing for YouTube. These platforms have no problem finding and removing perfectly legal content when it's in their business interests. Why did Facebook and YouTube ban adult pornography? Because they knew if they didn't, their sites would be littered with this content. And advertisers don't want to run ads against sexually explicit material, even if it's legal. And so what did the companies do?

It is not in their business interests to remove all sorts of harm.

Hany Farid

They got really good at it. And they're good at it. They develop technology and they're very aggressive about removing it because it's in their business interest, but it is not in their business interests to remove all sorts of harm. So then when it comes to harms to individuals and societies and democracies, the companies say, Well, you don't want me to be the arbiter of truth, you don't want me to be policing these networks.

Well, that's awfully convenient, isn't it? This is not a technological limitation. This is a business decision that the companies have made, that they are in the business of maximizing the creation and the upload of user generated content so they can monetize it. And taking down content that is not in their interest simply is not a priority for them.

CHAKRABARTI: I want to just focus for a minute or two on Meta/Facebook, because it's still the biggest out there. And I just want everyone to know that we did reach out to all of the tech companies that we're discussing this hour. And aside from Apple, which sent us kind of a standard statement, we did not hear back from any of them, including Meta. However, on Meta/Facebook's website, they do have a statement about online child protection, and they claim that, basically, they collaborate with industry, child safety experts and civil society around the world.

And I think we should also highlight the global nature of this effort to fight the online exploitation of children. They also say they have specially trained teams with backgrounds in law enforcement, online safety, analytics and forensic investigations. So, Yiota, my first question to you is, given what you know about what Meta has set up right now in terms of its staffing and the resources that it's putting towards fighting this problem, how would you evaluate it? Are they doing an adequate job?

SOURAS: So Meta is certainly doing more than most companies, I would say, to detect and report and remove this content. That being said, every company, including Meta, can and must do much, much more. And I just want to point out that what Meta does is reactive. So when content is already on the site, they do have tools where they can detect it and report it to NCMEC's CyberTipline. They are the largest reporter, as you might imagine. They are a tremendously global social media company. Everybody is on one of their properties, WhatsApp, Instagram, Facebook, etc.

Meta is certainly doing more than most companies, I would say, to detect and report and remove this content. That being said, every company, including Meta, can and must do much, much more.

Yiota Souras

However, what they're not doing is making any efforts to curtail the creation of that content, the initial sharing of that content, the sextortion, the blackmail, all of the other crimes that we've discussed. Those all occur, and Meta, like all the companies, simply reacts after the fact to remove them, after the harm to the child, after the offender has those images, after they are going to use those images for their own gratification or to entice another child into abuse. They don't stop the victimization of children, children who have been abused, whose images now continue to circulate for years and years in tremendously large numbers, thousands and tens of thousands of times a year.

CHAKRABARTI: So I want to dig a little bit deeper into what Meta says it's doing, because there's a section of the child safety portion of their website which confuses me a little, to be perfectly honest. It's this whole section called Understanding Offenders. Meta says that they did this in-depth analysis of the content that they reported to NCMEC, the content over a two-month period of time. And here's what they found, that in that two-month period of time, more than 90% of the content was the same as or visually similar to previously reported content.

Copies of just six videos were responsible for more than half the child exploitative content in that time. And then they say that while the data indicates the number of pieces of content does not equal the number of victims, the same content, potentially slightly altered, is obviously being shared over and over and over again. How do you read that?

SOURAS: Well, we certainly have read that statement on Meta's website before and had discussions with them about it. And I want to point out a couple of items where they are trying to diminish the harm and the impact of the proliferation of this content online. First, I cannot think of any other crime where we would say, Well, this is just the same crime. This is just another incident of a drug purchase or of a gun crime. And yet Meta is basically saying this, in this instance.

They're saying, well, this is just the same sexually abusive image that is being traded to another offender or among multiple other offenders. And they're trying to diminish it when the impact to the child is tremendously real every time that image is recirculated. It is a crime. Every time that image is recirculated. The child is entitled to restitution. That offender should be criminally prosecuted. So this attempt to diminish recirculation as if it is a lesser crime or not even a crime at all.

... You know, NCMEC absolutely objected to on principle and certainly in this instance. The other issue regarding the number of images, again, that it comes down to a small handful of images. Again, an attempt, I think, to diminish the tremendous volume of the problem and the proliferation of this problem. The images at issue here are child pornography. So, again, whether there are six or 600 different images, they are sexually abusive, egregious images of children being sexually abused. And that in and of itself, you know, has to be reported again, is criminal and is not to be diminished.

CHAKRABARTI: It also, again, confounds me that Meta, which arguably has the largest data set of almost any company in the world, only chose two representative months to do this analysis on. But anyway, we have a lot more to discuss about what more, if anything, the tech companies can and should be doing to curb child sexual abuse material online.

Now, let's listen to a little bit of what Meta CEO Mark Zuckerberg has said somewhat recently about the problem of this material on his platforms. Meta, of course, owns Facebook, Instagram and WhatsApp. We contacted Meta with some questions and with a request for an interview and statement. They did not respond. So instead, we're going to play you a little bit of what Zuckerberg said while being questioned about exploitative material online. He was questioned by Republican Representative Ann Wagner of Missouri in October of 2019.

MARK ZUCKERBERG: Congresswoman, thanks. Child exploitation is one of the most serious threats that we focus on.

ANN WAGNER: What is Facebook doing? 16.8 of the 18.4 million.

ZUCKERBERG: Well, Congresswoman, the reason why the vast majority come from Facebook is because I think we work harder than any other company to identify this behavior.

WAGNER: What are you doing to shut this down? These accounts peddle horrific illegal content that exploits women and children. What are you doing, Mr. Zuckerberg, to shut this down?

ZUCKERBERG: Well, Congresswoman, we build sophisticated systems to find this behavior.

WAGNER: 16.8 million and growing, of the 18.4 million images.

ZUCKERBERG: Well, Congresswoman, I don't think Facebook is the only place on the Internet where this behavior is happening. I think the fact that the vast majority of those reports come from us reflects the fact that we actually do a better job than everyone else of finding it and acting on it.

CHAKRABARTI: Mark Zuckerberg in a congressional hearing from October 2019. Professor Farid, from your perspective as a computer scientist, can you tell us, to be fair, what achievements, if any, companies like Facebook have made in taking down the content, and how the content has also changed in ways that now make it even harder to do so today?

FARID: I think it's important to understand that this is a fast-moving space. So back in 2008, I worked with Microsoft to develop PhotoDNA, whose goal was to find and remove previously identified images. Because images were the dominant medium in the mid-2000s. That technology is still being used at Facebook and other companies. But for example, despite calls in 2008 that we needed to build a video standard that allows us to find and remove previously identified videos, there is still no industry standard. We still don't have effective technology to deal with live streaming.

We still don't have effective technology to deal with grooming, with sextortion. And that's not, in my opinion, because we don't have the technological abilities to do it. It's because it's just not a priority. I think it's perfectly fair for Meta to say, yes, the majority of reports come from us because we are looking. I have no problem with that.

We still don't have effective technology to deal with grooming, with sextortion. And that's not, in my opinion, because we don't have the technological abilities to do it.

Hany Farid

But the problem, and if I can just come back to something you were talking about before the break, when Meta says, well, 90% of the videos are the same and there's only six videos. The problem is that they actually don't know that. Because all they can tell you is of the material we have found. And that's important because it changes the denominator in this equation. They have no idea what's happening in vast majorities of their platforms, like WhatsApp, for example, which is end-to-end encrypted.

So it's really disingenuous for them to say we are finding X percent of the content, because they don't know what the denominator is. So the reality is we don't really even know how bad the problem is. But we know that it's getting worse and that it's morphing, that the predators are constantly adapting to anything that the tech companies do, and that they're not adapting quickly enough.
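For readers curious about the mechanics Farid is describing: PhotoDNA itself is proprietary, but the general idea it represents, matching newly uploaded images against hashes of previously identified abuse imagery, can be sketched. The snippet below is only a rough illustration using the open-source imagehash library, not PhotoDNA or any platform's actual pipeline; the file names and the distance threshold are hypothetical placeholders.

```python
# Illustrative sketch of hash-based matching against previously identified
# images, in the spirit of (but not identical to) PhotoDNA.
from PIL import Image
import imagehash

# Hashes of previously identified images. In practice, such hash lists are
# maintained by clearinghouses like NCMEC and shared with platforms; these
# file names are hypothetical placeholders.
known_hashes = [
    imagehash.phash(Image.open(path))
    for path in ["known_image_1.png", "known_image_2.png"]
]

def matches_known(upload_path, max_distance=8):
    """Return True if an uploaded image is perceptually close to a known one.

    Perceptual hashes change little under resizing, re-encoding, or small
    edits, so a small Hamming distance suggests the same underlying image.
    The threshold of 8 is an illustrative choice, not a vetted value.
    """
    upload_hash = imagehash.phash(Image.open(upload_path))
    return any(upload_hash - known <= max_distance for known in known_hashes)

if matches_known("new_upload.jpg"):
    print("Possible match: flag for human review and reporting.")
```

The gap Farid goes on to describe is that no comparable industry-wide standard exists for video or live streams, and hash matching by definition only catches material that has already been identified and reported.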

CHAKRABARTI: Meta also says, and again, this is from what they have posted online in the child protection section of their website, that they're working to understand intent, that understanding intent is somehow part of how they're going to build effective solutions to stop the spread of this material. Because they say that, along with NCMEC, they developed a research-based taxonomy to categorize a person's intent in sharing this content, and they evaluated some 150 accounts that were reported for uploading abusive and exploitative material.

And then they say they estimate that more than 75% did not exhibit malicious intent, i.e. did not intend to harm a child. Instead, they appeared to share for other reasons, such as outrage or poor humor. ... Why does intent even matter if the content itself is prima facie illegal?

FARID: It doesn't. And Yiota got it exactly right earlier, which is they are trying to minimize what is actually happening on their platforms. And it's a particularly offensive thing to say to a victim. It's like, Oh, the person sharing this didn't really intend to harm you. First of all, you have no idea what their intent is. So stop pretending you do. This is a silly thing to say, and it doesn't matter. The material is illegal. And ... I've read exactly what you're talking about, particularly offensive. And by the way, 150 accounts out of 3 billion. Really? Oh, come on.

CHAKRABARTI: Well, Yiota, I'd love to hear you on this. And also, Hany beforehand was talking about how WhatsApp has end-to-end encryption. I'm thinking of Apple. That's where they put a lot of their efforts, too, and they are a source of a minuscule number of reports to NCMEC. So you can talk about Apple, but if you want to talk about Facebook for a little bit more first, go ahead.

SOURAS: Sure. Well, just absolutely agree with Professor Farid. The issue of intent and then characterizing intent as outrage or some sort of poor humor is incredibly offensive and damaging to survivors. The intent does not matter when the crime has occurred. As you noted, this is per se illegal. This is content that there's no context for, there's no subjectivity around.

The issue of intent and then characterizing intent as outrage or some sort of poor humor is incredibly offensive and damaging to survivors.

Yiota Souras

Child sexual abuse imagery is always illegal. So to try to categorize it or again, minimize it in some way by saying somebody didn't really mean it in an illegal sense is, you know, is really quite illogical and again, misses the point of what is actually going on on the platform.

CHAKRABARTI: Okay. So what I would like to do now is just hear a little bit from someone in law enforcement who works on this issue. A few weeks ago, we spoke with Damon King, who is the principal deputy chief of the Child Exploitation and Obscenity Section within the U.S. Department of Justice. And he has been prosecuting cases on child sexual abuse material since the year 2000. And he says the problem has grown exponentially.

DAMON KING: To be honest, I can't even imagine having enough resources to prosecute every case that can be identified, and followed from inception through investigation to prosecution, because there are that many. We definitely have to pick and choose what we focus on, and we have to triage what we investigate and go after based on our resources. So that is saying that we go after the worst of the worst. That's certainly true. These individuals can come from any walk of life, everything from, you know, an individual who might be, you know, the CEO of a multinational corporation down to a person who is homeless.

CHAKRABARTI: So once again, that's Damon King at the Department of Justice. And by the way, on average, there are about 3,000 federal child exploitation prosecutions a year. That does not include prosecutions going on at the state and local level. And in 2021, the Internet Crimes Against Children Task Force Program, which works with state and local law enforcement, arrested more than 10,000 people.

Now, we're going to move to talking more about what, if anything, the platforms can do more proactively to get this stuff off of their sites. Because listening to Damon King, it doesn't sound like relying on the enforcement end is going to be an adequate solution. But Yiota, I just want to clarify one thing about what Section 230 actually allows for. So let's pull it out of the digital world and into the paper world.

Let's say a community newspaper, like a community alt-type of paper that relied on submissions from the community, a.k.a. analog user-generated content, published it, and some of it was this child sexual abuse material, like images of children. Would that not automatically be illegal, and would the government not be completely within the law to shut them down?

SOURAS: It would be. It absolutely would be.

CHAKRABARTI: So then the difference between that and a digital platform is what?

SOURAS: Under the law, it's simply what you just said: the fact that there is a physicality to it, whether it's a newspaper or some other context, versus the online world. The CDA, Section 230, has chosen to treat the online world, and crimes that occur in the online world, very differently.

CHAKRABARTI: And is this a legacy of when the CDA was first written? I mean, because the world has become very, very different in the intervening years.

SOURAS: It absolutely is. You know, 1996, when the CDA was first implemented, was, you know, the beginning years of the Internet. We obviously wanted the Internet in the United States to prosper. We wanted to have tremendous growth. We wanted to see what the opportunity was. Those are all incredibly positive reasons to have tried to protect that industry at the very beginning. But as you noted, the world is very different, and we have seen what an unregulated Internet brings us.

It brings us 32 million reports of child sexual exploitation every year. And I'll just note on that number, that is only what is detected. A company has no obligation to try to detect, report or remove any of this content. So, again, the 32 million is only what a company has chosen to take efforts to find and to report.

32 million reports of child sexual exploitation every year. And I'll just note on that number, that is only what is detected.

Yiota Souras

CHAKRABARTI: Okay. Well, let's talk about the EARN IT Act, because I believe, Yiota, you were hinting at that a little bit earlier. In March of 2020, a Senate hearing was held on this act, because senators from across the aisle agreed that the tech industry could do more.

Now, Elizabeth Banker, deputy general counsel of the Internet Association, which represents more than 40 of the world's largest tech companies, including everyone from Amazon to Facebook, Twitter, Snapchat, Google, etc., testified at this hearing. And here is what Senator Richard Blumenthal of Connecticut, one of the bill's sponsors, had to say at the end of the hearing.

SEN. BLUMENTHAL: I think you should take back to your clients a message here, and you're hearing it, I think, loud and clear that the tech companies, big or small, can be part of the solution. Or they will be part of the problem, and they will lose all Section 230 protection if they don't become part of the solution here. And I think you'll see a whirlwind of opposition to Section 230 writ large across the board unless we deal with this discrete and very solvable problem.

CHAKRABARTI: So that was three years ago. Just this week, on Tuesday, the Senate Judiciary Committee held another hearing on child safety online and they discussed child sexual abuse material. Here's Republican Senator Josh Hawley of Missouri expressing frustration about why there's been so little progress in the past three years.

SEN. HAWLEY: We have these hearings every so often. A lot of these hearings are great. Everybody talks tough on the companies and then later on watch, we'll have votes in this committee, real votes. And people have to put their names to stuff. And oh, lo and behold, when that happens, we can't pass real tough stuff. So I had to say this to my colleagues. This has been great. Thank you, Mr. Chairman, for holding this hearing. This has been great. But it's time to vote. It's time to stand up and be counted. I've been here for four years. It's been four years of talk. That has got to change.

CHAKRABARTI: Senator Josh Hawley. Now the EARN IT Act, if it passes, would hold companies accountable by getting rid of Section 230 immunity if they were found to knowingly have child sexual abuse material on their platforms. And so that would open them up to liability. Professor Farid, what is preventing the EARN IT Act from advancing?

FARID: Yeah, that's a great question. I've worked with folks on Capitol Hill for years on this. They're very good at holding hearings. They're not so good at actually doing anything about it. I think part of the issue is that the tech lobby is incredibly powerful, make no mistake about it. On the one hand, they will say we want to be regulated. We think this is important. We have zero tolerance for this.

But behind closed doors, they are lobbying to squash any regulation that is going to hold their feet to the fire. And I think that's what's slowing this down. The good news is there is some very aggressive legislation coming out of Brussels, the E.U., and out of the U.K. and out of Australia. I think we should be embarrassed that those on the other side of the pond are much, much more progressive in trying to hold these companies responsible for the harms that they are creating.

CHAKRABARTI: So you say that the Europeans are showing us an example of how to craft finely tailored legislation that does something meaningful about child sexual abuse material, while also, you know, protecting the speech issues that are very real in this country.

FARID: I think that's exactly right. There's tension here, and we should admit there is tension. Those who say we should have an open and free Internet. I understand that. I understand where that point of view comes from. But at the same time, we are seeing measurable harm from child sexual abuse to terrorism, to illegal drugs, illegal weapons, disinformation campaigns, non-consensual sexual imagery. We can't ignore that.

And I think as we have done in the physical world, we have to find a balance. There has to be a balance. And we have to put rules of the road and guardrails on to make sure that we are protecting the most vulnerable among us. And we have not done that for 20 years. And I think shame on us for that.

CHAKRABARTI: But you're saying that there's an example potentially of how to do it. Well, Yiota, we've only got about a minute left here. And I keep thinking about all the parents of children who have been exploited, and about Jane at the top of the show, who felt so personally devastated that she felt like she hadn't protected John. So for parents listening right now, I mean, I want to actually end with a very practical question. What do you advise they do, in this age of, you know, people being able to reach their kids through things like Snapchat, in order to protect their kids?

SOURAS: It's such a great question. So, you know, I would say first and foremost, to the extent parents can know what your children are doing online, have dialogs with them about it. See the apps they are on, share that experience with them to the extent possible. And, you know, most importantly, know that your child can come to you if they are approached online, if something makes them feel uncomfortable, if one of their friends is having this experience, know that they can reach out to trusted adults and actually share this information and get help. And of course, NCMEC has many resources for parents on these issues, as well.

Know that your child can come to you if they are approached online. If something makes them feel uncomfortable, if one of their friends is having this experience, know that they can reach out to trusted adults and actually share this information and get help.  

Yiota Souras

This program aired on February 16, 2023.


Paige Sutherland Producer, On Point
Paige Sutherland is a producer for On Point.



Meghna Chakrabarti Host, On Point
Meghna Chakrabarti is the host of On Point.


Tim Skoog Sound Designer and Producer, On Point
Tim Skoog is a sound designer and producer for On Point.


