
The ongoing saga of the Kids Online Safety Act

Students work on a laptop computer at Stonewall Elementary in Lexington, Ky., Feb. 6, 2023. (AP Photo/Timothy D. Easley, File)

The last time Congress passed a law to protect children on the internet was 26 years ago. That’s before Facebook or the iPhone was even created.

Lawmakers on both sides of the aisle agree regulation is long overdue. It’s the 'how' that’s the question.

Today, On Point: The ongoing saga of the Kids Online Safety Act.

Guests

Lauren Feiner, senior policy reporter at The Verge. She covers the intersection of Silicon Valley and Capitol Hill.

Bailey Sanchez, senior counsel for U.S. legislation at Future of Privacy Forum, a nonprofit organization based in D.C.

Lorna Woods, Professor of Internet Law at the University of Essex. Her work influenced the UK’s historic Online Safety Act, which passed in 2023.

Transcript

Part I

MEGHNA CHAKRABARTI: Members of Congress don't agree on much these days, even on basic facts. But if there's one thing they dislike more than each other, it's the power of big tech. And that mutual distrust brought about a recent, rare, overwhelming bipartisan consensus.

The ayes are 91, the nays are 3, and the motion is agreed to.

CHAKRABARTI: On July 30th, the Senate overwhelmingly passed a couple of bills to protect kids online. The bills call for tech companies to take quote, reasonable steps to mitigate harm to children on their platforms and to prevent some kinds of data collection on minors. Republican Senator Marsha Blackburn of Tennessee, right after the vote.

MARSHA BLACKBURN: A message that we're sending to big tech. Kids are not your product. Kids are not your profit source, and we are going to protect them in the virtual space.

CHAKRABARTI: It's been two years since Blackburn and Democratic Senator Richard Blumenthal of Connecticut first introduced the legislation. And since then, bipartisan support has grown dramatically.

By the time the bill made it to the Senate floor for a vote, it had 62 cosponsors, making for all manner of strange bedfellows, such as Massachusetts Senator Elizabeth Warren and South Carolina Senator Lindsey Graham.

LINDSEY GRAHAM: Now, Elizabeth Warren and Lindsey Graham have almost nothing in common, I promised her I would say that publicly. The only thing worse than me doing a bill with Elizabeth Warren is her doing a bill with me. We have parted that because Elizabeth and I see an abuse here that needs to be dealt with.

CHAKRABARTI: This is On Point, I'm Meghna Chakrabarti, and let's set aside the bipartisan bonhomie for just a moment. Because the dangers faced by kids on the internet are very real. And it's that awful reality that drove grieving parents to lobby Congress for months in support of this legislation. Parents like Mary Rodee, who lost her 15-year-old son Riley to suicide in 2021.

She told her story to Fox News back in January.

MARY RODEE: I let him get Facebook because he wanted to buy a snowmobile on Marketplace. And little did I know that a criminal from across the world could contact him, send him child sexual abuse material, get him to send it in return, and then threaten him for money, scaring him to death.

And they haven't been stopped. There's nothing stopping them from continuing to come into our children's phones while we're in the kitchen making dinner and make these types of threats on them.

CHAKRABARTI: Now recall, there were three senators who voted against the bill. They are Democratic Senator Ron Wyden of Oregon, Republican Senator Mike Lee of Utah, and Republican Senator Rand Paul of Kentucky.

Senator Paul explained his nay vote on the Senate floor.

RAND PAUL: You're going to see it pass overwhelmingly today because of the title, Kids Online Safety Act. Who could oppose that? And there are some tragic stories of people who committed suicide or died because of things that happened on the internet. No one is here to discount that, but it has to be thoughtful, how we fix it.

Is removing all discussion of climate change, abortion, gambling ads, and beer ads going to do anything that would have addressed the life of any of the children who tragically have lost their lives? I think not. This is a ham-fisted bill that will not fix the problem, but will be the first big bill to regulate speech online.

CHAKRABARTI: And it's this concern, that the government is taking concrete steps that could limit access to certain online information, even if it's demonstrably harmful for children, that has united groups such as NetChoice, the lobbying group for Big Tech, and the ACLU. And those groups have been joined by several LGBTQ advocacy organizations such as TransOhio.

Dara Adkison is TransOhio's Executive Director.

DARA ADKISON: What concerns me most about the KOSA bill, and bills generally seeking to limit the scope of what is accessible on the internet, is who decides what should be accessible and how it's accessible. Being the head of a trans organization in a state where we've actively had legislators tell us to our faces that they would like to see our website and the services that we provide to youth and adults across the state restricted, it's not a leap and a bound, and it's anything but hyperbole, to see how a bill like KOSA could lead to our website and other websites being limited in their access on the internet.

CHAKRABARTI: Now the legislation still has to clear the House, which is in recess until September.

Now I should note one more thing. Congress has not created a single new law on child safety on the internet since 1998. That's back when barely 40% of U.S. households had computers, and only 25% of households had access to the internet, according to the U.S. Census Bureau. And for those lucky 25%, we're talking about a time when dial-up was king.

And young folks, young listeners, if you have no idea what I'm talking about, ask your elders. So there is general political agreement right now that some kind of modernization is desperately needed when it comes to kids' safety online, but major political differences over exactly how to do that could mean modernizing legislation is never enacted at all.

So let's try to understand exactly the dynamic on the Hill right now. And we'll start with Lauren Feiner. She's senior policy reporter at The Verge and she covers the intersection of Silicon Valley and Capitol Hill and joins us from Washington. Lauren, welcome to On Point.

LAUREN FEINER: Thanks for having me.

CHAKRABARTI: Okay, first of all, let's just get straight to the nuts and bolts.

What's your sense about what might happen in the House if this bill is allowed to come to the floor when the House comes back from recess in September?

FEINER: Yeah, I think anytime you have a bill passed out of the Senate with more than 90 votes, that's something that the House really has to take seriously.

And Speaker Mike Johnson has previously said, I believe before the vote, that it was a topic he was interested in. He didn't really go into specifics on the bill, but seemed generally open to it. That said, there was a report that maybe this wouldn't reach the House. I think it's a bit early to really know for sure.

But certainly, the dynamics in the House could be a bit different. It's a much bigger body. There's a lot of different voices and personalities there, and there's a lot more time until the House comes back for some of these concerns to play out among members.

CHAKRABARTI: And we should note that as far as I understand, and Lauren, correct me if I'm wrong, but President Biden has said that if the bill reaches his desk before his term in office ends, that he would sign it, correct?

FEINER: That's correct.

CHAKRABARTI: Okay. Now, what's fascinating to me is, again, just so that it's clear exactly what we're talking about, there are really two bills under the microscope. Now, I've lumped them together as one big body of legislation, but I just want to talk through what the two are very briefly, so folks know. First of all, there's one that may be of lesser immediate interest, but also important, called the Children's Online Privacy Protection Act 2.0. Can you tell us just a little bit about that?

FEINER: Yeah. So this is essentially an update to an earlier children's online privacy bill.

That's the bill that we got last time we had major legislation for kids online, the one you mentioned. So basically, the existing law protects kids under 13 with certain privacy protections. This would increase the age to 17 and add some new provisions, like banning targeted advertising to that group.

CHAKRABARTI: Okay, we're going to come back to that later, because data collection, of course, is actually very important, but this is not the part of the legislation that's really gotten a lot of attention. The other is the Kids Online Safety Act, aka KOSA, a phrase we'll hear a lot.

What is it that KOSA proposes to do that's actually given so much cause for concern amongst a really interesting, diverse group of advocates out there?

FEINER: Yeah. So there's a few different things that this bill does, but really the key feature, the one that has been promoted by the advocates of the bill and that is of concern to people who oppose it, is the duty of care. That's basically a responsibility that would be put on online platforms that are accessed by minors to take reasonable measures in how they design their products to mitigate a set list of harms: cyberbullying, sexual exploitation, or the promotion of eating disorders, for example.

CHAKRABARTI: Okay. So we're gonna talk in a minute about exactly how platforms or internet companies would be expected to accomplish that. But let's stick with sort of the scene in Washington for another minute here, Lauren, because a lot of interesting groups have come together saying, this is a bad idea.

You heard Rand Paul, first of all, who's famously libertarian, being concerned about free speech and the government perhaps infringing on online free speech rights. Then the ACLU recently sent 300 high school students to Capitol Hill to lobby against the bill. And there's been concern from, as I mentioned earlier, LGBT activist groups that the government could use the legislation to prevent young people from receiving supportive content online.

Some conservative groups are very concerned as well. They're worried that anti-abortion information, for example, could be censored in various states if that's something a government wished to do. What do you make of the real big diversity of advocacy groups opposed to this Online Safety Act?

FEINER: Yeah, I think often we see in tech policy that there are strange bedfellows. It doesn't really fall along partisan lines. A lot of the time there are different coalitions that make up the advocates and the opponents. And I think that's the case here as well. We see tech companies, or tech groups, concerned about how this would be implemented, the impact on their business, what would potentially put them in the way of more legal risk.

And then we see groups like the ACLU that are concerned about a chilling effect on speech and the ability to have free expression on the internet. So I think here we're having this strange overlapping of interests, but maybe for slightly different reasons.

CHAKRABARTI: And meanwhile, does that mean that the tech companies, Meta et al., are like, this is great? The heat's coming off of us. And we may actually walk out of all this without having to do a thing?

FEINER: I think we're seeing some of the big tech companies not directly voicing their opinions on this bill. I think they're tending to leave that to groups like NetChoice.

But we are seeing some smaller platforms saying that maybe we do support this bill.

Part II

CHAKRABARTI: I'm joined today by Lauren Feiner. She's a reporter who covers Silicon Valley and Congress with The Verge. And let me bring Bailey Sanchez into the conversation now.

Bailey is Senior Counsel for U.S. Legislation at the Future of Privacy Forum, a nonprofit organization based in Washington. Bailey Sanchez, welcome to On Point.

BAILEY SANCHEZ: Hello. Great to be here.

CHAKRABARTI: So first of all, how would you describe what the Future of Privacy Forum's stance is on KOSA?

SANCHEZ: So we do not take positions for or against bills. We are not like a lobbying group, and we're also not like an advocacy group. What we try to do is just bring, like, down-the-middle analysis. Our sweet spot is really either explaining technology, like how targeted advertising works, or doing comparative analysis, looking at how maybe KOSA compares to COPPA, the current law, or how it compares to what the states are doing.

'Cause the states have been quite active on kids' privacy and safety as well.

CHAKRABARTI: Okay, so we're going to talk about states in just a minute. And a little bit later, we'll take an international look as well, because this is obviously a global issue and some other countries are a bit ahead of the United States. So we'll do that in a second.

But let's get into more of the specifics here. Lauren got us started in understanding what the bill would require tech companies to do. Can you give us more detail? Is there specific language in the bill that says certain kinds of content would need to be regulated?

SANCHEZ: Sure. So just stepping back a bit, we've mentioned this already, but COPPA was passed in 1998. A lot has changed about the internet since then. And COPPA was also focused specifically on privacy, and by that, it really means, like, data collection and data use, not so much the safety or design of a service.

And also, it only applied to children under 13. So there are really two big things that this new bill that came out of the Senate does. First, it creates protections for teenagers as well. It seems that Congress really thinks that your need for privacy protections and safety protections online does not stop on your 13th birthday.

And then it also is more focused on like design and safety as the name suggests. It's not just looking at what you can do with the data, but really taking a step back and looking at the design. So to your question, yeah, there's nothing that really specifies, like what types of content would be restricted.

I think Congress has tried to stay away from any content First Amendment questions, but of course those are still there. I think where the concern comes from advocacy groups is that the duty of care is very novel. There are two states that have passed similar but different duties of care.

Those have not gone into effect yet. In KOSA's duty of care, you have a duty to exercise reasonable care in the creation and implementation of any design feature, to mitigate harms such as depression and anxiety. So it doesn't say, you need to mitigate like either anti-abortion or pro-abortion content, but I think there is a thought that like it could be interpreted quite expansively, that seeing something anti-abortion could be tied to anxiety or depression in some way or similarly, like seeing like pro or anti gun content.

CHAKRABARTI: So that's why this has become political, right? Because depending on a group's political viewpoint, they can see certain types of content as anxiety inducing, for example, or not.

But who would be in charge of regulating this?

SANCHEZ: So that's actually a really interesting question. KOSA in part can be enforced by the state attorneys general as well, but over the course of this being amended, because KOSA has been worked on for over two years at this point, they actually took away the ability for state attorneys general to enforce the duty of care.

So the duty of care would just be enforced by the FTC, and that was directly in response to advocacy groups that were saying, Florida might have one very particular interpretation of the duty of care, but California might have a completely different one. So the FTC being the only one that would enforce that part, I think, is a step in the right direction.

There's, of course, just questions about what that would mean, just because of the uncertainty and because a president can appoint new FTC chairs. And so what does that look like when we're in an election year?

CHAKRABARTI: Okay, so hang on here for just a second, because Lauren, I want to turn back to you and lean on your expertise a little here. I don't want this conversation to be totally out of context regarding what the internet companies, the tech companies themselves, know about some of the harms that have come especially from social media. There have been several hearings on Capitol Hill where tech company CEOs have had to sit in front of really frustrated legislators. And Frances Haugen, formerly of Facebook, was the whistleblower who brought all of this data, saying: Here's how much Facebook knew about the harms, specifically regarding mental health, it was causing amongst young people.

Can you just remind us about what the known backdrop is to the bills that we're talking about right now?

FEINER: Yeah. I think the Frances Haugen revelations were really critical in making this a big issue on Capitol Hill, which kind of uncovered that Meta had understood some of the harmful impacts of its platform, particularly on young teenage girls.

And we've seen things like this pop up throughout the years, about how companies are aware of the harms that their designs cause kids. But at the same time, I think those companies would say that they do take certain measures to protect kids, and there are also positive benefits of their platforms that teens can get when they're used safely.

But I think that's what makes a bill like this attractive to such a wide group of lawmakers, is that it feels like something that's attacking the core issue and putting the responsibility on tech platforms. I think the issue that some of these advocacy groups have is: Who will this really end up protecting? And is this really the best way to achieve that goal?

CHAKRABARTI: Lauren Feiner with The Verge, thank you so very much for joining us today.

FEINER: Thanks for having me.

CHAKRABARTI: Okay. So let me turn back to this issue of how exactly this would work, Bailey Sanchez.

Let me just take a step back here and ask: Are we living in a world where, because the internet is so vast and so profoundly woven into the fabric of our daily lives, there will never be a single piece of legislation that will satisfy everyone? I'm sounding tongue in cheek, but I don't mean to be. Because of that, no matter what piece or idea comes forth, some advocacy group out there is going to be very bothered or troubled by it.

And so therefore, does that mean that new online regulations, specifically for kids, are just basically going to be an impossibility for the vast majority of people?

SANCHEZ: I think what's tough is that if you step back and think about it, if there are concerns about, like, design or advertising practices on a service, I think to myself, maybe some of these are protections that I would like as well.

I think a lot of advocacy groups are interested in what we call federal comprehensive privacy. So it would not be specific to an age group; it would be privacy for everyone. But I think kids' privacy and online safety tend to be a more bipartisan issue. So it seems like Congress is interested in moving forward on something where they can get broader consensus.

And again, this is not just about privacy. This is about safety and design. And as Lauren mentioned, there's been quite a focus on just like the harms that kids have experienced. But obviously, there's always going to be tradeoffs when you're focused on one specific group versus the whole country.

CHAKRABARTI: Yeah, but we're living in a political environment now where it seems as if lots of groups are unwilling to make any tradeoffs at all. But it's interesting that you said part of the issue is the focus on kids, right? Because to be fair, I understand that there actually are some other proposals out there that are more broad in their scope, which, ironically, have some support from the very groups that we're talking about, the ones concerned about the kids legislation.

There are pieces of legislation like the American Innovation and Choice Online Act, the Open App Markets Act, or the American Privacy Rights Act. Is that correct?

SANCHEZ: Yeah, that's correct. But this is the legislation that I think has gone the farthest in quite some time. And so it is very significant.

Again, we don't know what'll happen in the House, but the fact that this passed just so overwhelmingly, I think is a very huge deal, to be quite honest.

CHAKRABARTI: Okay, but those other acts are not child specific, are they? They're more broad in terms of regulation of the internet.

SANCHEZ: Yes.

CHAKRABARTI: Do I have that right?

SANCHEZ: Yeah. And then, so some of them are focused on privacy. Some might be more focused on, like, an AI angle. 'Cause of course, this isn't the AI podcast, but it's hard not to talk about AI. But I think the Kids Online Safety and Privacy Act is trying to do a lot. It's trying to do privacy, it's trying to do safety, it's trying to do design.

There's absolutely, like, a filter bubble transparency section in it that I think is not very discussed. But yeah, it is focused just on kids. But because it's trying to do so much, it is hard to make the language perfect so that every single group will love it. I think there will always be tradeoffs.

CHAKRABARTI: Again, forgive me, I'm in a slightly editorializing mood, but because of the unwillingness, or the challenge, when it comes to finding acceptable political compromise, does it not mean that it is hard to take meaningful steps forward in the world of technology? At a time when the technology itself, you mentioned AI, is racing ahead so quickly that we may never catch up to it in terms of how we as a society want to put some rules around how that tech impacts our lives.

SANCHEZ: Yeah. I think it's challenging. 'Cause my understanding is, for a lot of the advocacy groups, it's just the duty of care that they are hung up on. That's mostly what we've talked about today, but KOSA would also create some safeguards for minors: the covered services, like social media, would need to have limits on strangers contacting kids.

They'd need to have an ability for kids and teens to easily delete their account or raise concerns about what they're seeing on the platform. I think those are fairly non-controversial provisions, so it's just this very small section that seems to be the big sticking point for a lot of people, but it doesn't seem like an area that Congress is willing to budge on too much either.

As I mentioned, they took away the ability for state attorneys general to enforce that section. But at this point, I think that's as far as the compromise is going to go.

CHAKRABARTI: Okay, so this is really important. Help me understand this; I want to be sure that I understand it in detail. This duty of care, essentially the idea is that the duty of care first rests on the tech companies, right?

And that they would have to make changes to protect children from harmful online content. And if they failed to do that, then the FTC could come in, or I presume maybe even court cases could be used to enforce that duty of care, correct?

SANCHEZ: Yes. So what often happens in privacy law is we have something called a risk assessment.

So when a new service or feature is going to be launched, a platform will walk through their risk assessment. They're going to look at all the data flows, all the potential impacts, and try to mitigate those. So under KOSA's duty of care, they would not just need to ask, are we using data that could result in harmful decisions, but also, is our service likely to create anxiety, depression, online violence, or bullying?

So it starts with them initially, but yes, the FTC could then come and say, perhaps we think that you did not act in the best interests of children. So you're just taking your best compliance steps and hoping that the FTC does not knock at your door. But for what it's worth, I think the FTC tends to go for cases that they feel confident they could win.

I am not sure that an FTC with limited resources would go after a platform solely on the duty of care. I would guess they might try to prioritize several violations within the act. But of course, I think folks are just very anxious about what the unintended consequences could be.

CHAKRABARTI: Oh, so it could potentially be all bark and no bite regarding the duty of care portion. Okay. That's really interesting. Can we get super nerdy for a second, Bailey?

SANCHEZ: Yes, I'd love to get super nerdy.

CHAKRABARTI: Because you know how you said like the definitions of harms are we're vague in the bill, right? Depression, anxiety, etc.

It got me thinking about the recent Chevron case in the Supreme Court, right?

SANCHEZ: (LAUGHS)

CHAKRABARTI: Because in that case, the Supreme Court literally said, regarding the Chevron deference doctrine, that federal agencies now don't have the power to make specific rules if those specific rules are not enumerated in the originating law, right? The Clean Air Act. And so it would seem like KOSA is just ripe for that kind of criticism, that there's no specific list of, this is what constitutes harm. So how could the FTC even make a decision about whether something's harmful? It would end up in the courts, wouldn't it?

SANCHEZ: Yeah, I think that's a fair question. It's funny you asked that, because I was going to write about that in my blog and then I was like, I think I'm getting too in the weeds, getting too nerdy.

CHAKRABARTI: No, never.

SANCHEZ: And yeah, so KOSA at one point did include some specific FTC rulemaking that was taken out, actually before the Chevron decision, but that really piqued my attention.

But yeah, I think that's a fair question to ask. If this is, I think, too vague of a mandate, could it be enforced by the FTC anyway?

CHAKRABARTI: Isn't that interesting, though? Because if the criticism is that we're worried politics could get in the way of FTC enforcement, with Chevron now in play, the Supreme Court has said the best place for things to get worked out, in terms of their legality, is in the courts.

I don't think advocates would see courts as being any less politicized nowadays.

SANCHEZ: No, I think that's a very fair question. And with a lot of the recent state privacy and safety laws, that debate is now being played out in the courts.

CHAKRABARTI: So how different is KOSA then from some of the state privacy laws that have passed?

SANCHEZ: Yeah, so Maryland actually recently passed an age-appropriate design code that is based on California's. California's is in the courts; it was just heard in the Ninth Circuit a couple weeks ago. So KOSA has a duty of care to mitigate harms, whereas Maryland has a duty of care to act in the best interest of children.

So it's like a positive versus a negative. How that plays out practically, again, none of these laws with duties of care have gone into effect, so we're just speculating how compliance might work. But KOSA's is similar to Connecticut's and Colorado's. Connecticut and Colorado passed comprehensive privacy laws that apply to all individuals.

And then they decided to go back and add amendments specific to children. They said, here's our baseline for everyone, and now that we have laid that groundwork, here's the extra provisions we want to do on top. And so theirs are more tied to specific, traditional privacy harms.

Like, they use phrases like intrusion upon seclusion, unfair and deceptive treatment. So those are harms that we have seen played out in courts over the years, whereas KOSA's, I think, are more amorphous, but they're more modern.

Part III

CHAKRABARTI: Before we jump to taking an international look, Bailey, I just have to say that if you do want to write that blog post connecting KOSA and Chevron, be my guest.

We will absolutely link to it at our website. (LAUGHS)

SANCHEZ: Good to hear.

CHAKRABARTI: Because, no, look, I'm glad to hear that you were even thinking about that, because nothing happens in a vacuum, right? A Supreme Court case over there really does connect to potential legislation over here. And helping people understand those connections?

It's not nerdery. It's public service, Bailey. So you've got my support in writing that. But let's take a look at how similar legislation and rules have unfolded elsewhere in the world. And for that, I'd like to introduce Lorna Woods into the conversation. Lorna is a professor of internet law at the University of Essex in the UK, and her work influenced the UK's historic Online Safety Act, which passed in 2023, so just last year.

Professor Woods, welcome back to On Point.

LORNA WOODS: Hi there, Meghna. Thank you for inviting me.

CHAKRABARTI: First of all, give us a quick summary of what the Online Safety Act in the UK is attempting to do.

WOODS: It sounds awfully similar to some of what you've been talking about. It wants to regulate the system design.

So to do that, it imposes two broad duties on social media and on search. The first one is a risk assessment, and the second one is what is called a safety duty, that is, a duty to mitigate harms relating to certain sorts of content. And there are two main categories of content at issue. The first one is illegal content, and the second is content harmful to children. And those duties only apply to services which have a certain number of child users.

So it's this idea of, take reasonable steps to prevent foreseeable harm, except the act puts a bit more detail in as to what those duties look like. In particular, it gives the regulator, Ofcom, the obligation to set down guidance as to how to do a risk assessment and to provide codes of practice as to how you comply with the duties.

And it has to come up with what's called a risk register, which has been, I think, a trawl through the existing research and information that's out there on potential causes and consequences of harm.

CHAKRABARTI: May I just jump in here, Professor? Sorry, just for a second, to clarify: Does the UK's law specify what would be defined as a risk in order to do a risk assessment? Or perhaps more importantly, does it specify what constitutes content that is harmful to children? Because as you've been hearing, that's the debate here in the United States right now.

WOODS: Yes, to both, but at quite a level of generality, with more detail coming in the statutory guidance and codes of practice. The Act does identify certain categories of content.

We're not talking about depression here. What we've got is pornographic content; content providing or encouraging instructions for suicide, acts of deliberate self-injury, or eating disorders; but also abusive content in relation to a range of characteristics: race, religion, sex, sexual orientation, disability, or gender reassignment.

So there is a certain amount of information there, and then Ofcom has been tasked again with providing guidance on where the boundary sits. So the difference between pornography and sex education, for example.

CHAKRABARTI: I see. Bailey, I'm going to come back to you here in just a second. But Professor Woods, my understanding is that even though the law was passed in the UK last year, it has not yet been fully implemented, yes?

WOODS: That's right. And that's because of the obligation on Ofcom to provide more detail. Shortly after the act was enacted and received royal assent, Ofcom started with a series of consultations, which have been open over the first half of this year. So this is getting various stakeholders feeding into what the rules should look like.

CHAKRABARTI: Okay.

WOODS: And Ofcom at the moment is reviewing those responses, and we're expecting the first code, that's the one dealing with illegal content, towards the end of this year, so in force at the beginning of next year.

CHAKRABARTI: Okay. And Ofcom, just to remind American listeners, is the British regulator in this case, for this law.

So Professor Woods, again, just to be very cognizant of things happening within a certain context: right now the UK is being completely roiled by massive protests in several cities. They were triggered by a horrible stabbing incident, but the protests have become about migration and immigration.

And, I believe, Prime Minister Starmer himself blamed social media in part for inflaming the passions of many Britons. So it makes me wonder, with a category as general as abusive content, which you said has to do with everything from race to disability to origin, sexuality, et cetera, is regulating that going to be very controversial in the UK?

Because it sounds like, again, what Ofcom might consider abusive content, others in the UK might consider just an expression of their conservative values, for example.

WOODS: I think some of the content in the UK goes beyond priority content harmful to children, that's the phraseology in the act, and actually would be criminal content. Some of the more, if you like, coded stuff could well fall within this regime. There are some weaknesses in the way the Act has been structured. It doesn't really cover misinformation, and some commentators are saying the starting point for some of the unrest in the UK was misreporting about who the person who stabbed the three children was, saying it was an immigrant or someone of the Muslim faith, which was completely false. I don't think that will be easily caught by this regime. The other issue is it's not really designed for direct instructions to companies to say, take this down, act on this.

The regime is very much aimed at setting up a system and then measuring the response of that system. So it's not going to expect perfection. And it certainly doesn't expect Ofcom or the government to be telling platforms, stop this content, kick this person off the platform. It's not designed for that.

CHAKRABARTI: I see. Okay, so Bailey Sanchez, let me turn back to you. What are your thoughts about the UK's child online safety efforts, I shouldn't even say efforts, because the law has been passed, as compared to what we're still debating here in the United States?

SANCHEZ: I think one thing to keep in mind is that the United States is unique in that we have the First Amendment. We are very focused on our free speech rights. And so I think this actually came up in that California Age-Appropriate Design Code case that I mentioned, that was just heard in the Ninth Circuit.

The California Age-Appropriate Design Code actually was modeled off of a UK code of practice called the UK Age-Appropriate Design Code, and where the California code ran into some trouble is that it did use those words, like content. It said platforms need to mitigate content.

And so once you bring content into the debate like that, say, harmful content, there are questions of what is harmful content? And that immediately implicates the First Amendment. And so I think in the United States, you need to be much more specific, perhaps, about what types of harms, and make sure that there's, like, a means-ends fit.

You can't be so much principles-based, like, I feel, in the UK. You can have those more general principles, and they might work, but here in the U.S. we love the First Amendment, and we love litigation. So those are two pieces, I think, that really change the debate here.

CHAKRABARTI: So let me turn back to you, Professor Woods. First of all, if you have a response to what Bailey Sanchez was saying, I'd love to hear it. But second of all, once the law in the UK is fully enacted and Ofcom, as you said, has come up with its more specific rules on how it might assist tech companies in complying with the law, if companies are found not to be in compliance, what does the UK's regulation say would happen?

WOODS: Okay, I agree with Bailey; just a couple of points on that. The Online Safety Act does talk about the need to protect freedom of expression and privacy. So there is that specific reference in there, but it refers to the European Convention on Human Rights and that version of freedom of expression, which allows more intrusions into free speech to be justified than I think would happen under the First Amendment. But the act does recognize the importance of free speech.

In terms of enforcement, the act envisages that, in a way, there will be a gradual buildup. So Ofcom can issue notices telling the services how they could improve their performance, that there's been a problem, that they're not doing enough, but then it gradually ratchets up to the possibility of fines, and Ofcom doesn't have to go to a court or anything.

It can levy fines of up to 10% of global qualifying revenue. There are criminal provisions in relation to non-compliance with investigations, so separate possibilities for enforcement around the process of carrying out an investigation. And then finally, as the ultimate fallback, there is something called business disruption measures.

And in these cases, Ofcom would have to go to a court to get an order. They fall into two categories. One relates to what's called ancillary services. So Ofcom can go and get an order that tells people providing services to a platform not to do that. So that could be advertising services or payment services.

And then the final possibility is blocking the service. But that is envisaged very much as a last resort when everything else has failed.

CHAKRABARTI: Oh, interesting. Professor Woods, I would love to have you back as and when the law is fully implemented in the UK, perhaps giving it a little time to play out to see really what its impacts are.

But until then, let me thank you so very much for joining us today.

WOODS: It's a pleasure. Thank you for inviting me.

CHAKRABARTI: So that was Lorna Woods. She's a professor of internet law at the University of Essex in the UK. By the way, Bailey Sanchez, before we wrap up, we've been talking rather generally, but there are some specific things in KOSA that I think social media platforms especially would be expected to do, right?

Essentially, default to the safest settings possible for accounts that belong to minors, or that the platform thinks belong to minors. Disabling addictive product features, being able to opt out of personalized algorithmic recommendations, turning off features like autoplay for videos, or rewards on platforms.

So these are specific things that the companies would be expected to do. That actually sounds rather significant to me.

SANCHEZ: Yeah, that's correct. And for what it's worth, some platforms do this already, but I think KOSA would be setting the expectation: This is what Congress thinks the baseline safeguards need to be.

Like you mentioned, there are some default settings focused on manipulative design or engagement techniques. There are also quite a few parent tools, like the ability to manage a minor's privacy and account settings. Not read their messages or anything like that, but being able to set, like, friends-only posts and such.

And then the ability to restrict purchases and financial transactions. 'Cause that's something we've seen a lot of stories about over the last several years, kids not realizing that the money in the game they're playing is real, is their parents' money. And so a parent, by default, would now be able to manage those types of things.

CHAKRABARTI: You mentioned parents, and inevitably, I understand, we will get a lot of feedback from people saying the first line of responsibility is with parents. I don't disagree with that. That's also a part of this picture here. But given the ubiquity of the internet in our lives, maybe some kind of common rulemaking about child safety is also worth our time and effort.

This program aired on August 6, 2024.

Paige Sutherland Producer, On Point

Meghna Chakrabarti Host, On Point