FTC's Latest Fine Of YouTube Over COPPA Violations Shows That COPPA And Section 230 Are On A Collision Course

from the this-could-be-an-issue dept

As you probably heard, earlier this week, the FTC fined Google/YouTube for alleged COPPA violations regarding how it collected data on kids. You can read the details of the complaint and proposed settlement (which still needs to be approved by a judge, but that’s mostly a formality). For the most part, people responded to this the same way they responded to the FTC’s big Facebook fine: basically everyone hates it, though for potentially different reasons. Most people hate it because they think it’s a slap on the wrist that won’t stop such practices and just isn’t painful enough for YouTube to care about. On the flip side, some people hate it because it will force YouTube to change its offerings for no good reason at all, and in a manner that might actually lead to more privacy risks and less content for children.

They might all be right. As I wrote about the Facebook fine and other issues related to privacy, almost every attempt to regulate privacy tends to make things worse, in part because people keep misunderstanding how privacy works. Also, most of the complaints about how this "isn’t enough" are really directed not at the FTC but at Congress, because the FTC can only do so much under its current mandate.

Separately, since this fine focused on COPPA violations, I’ll note that COPPA has always been a ridiculous law that makes no real sense beyond letting politicians and bureaucrats pretend they’re "protecting the children," while creating massive unintended consequences that do nothing to protect children or privacy, and quite a bit to make the internet a worse place.

But… I’m not even going to rehash all of that today. Feel free to dig into the past links yourselves. What’s interesting to me is something specific to this settlement, as noted by former FCC and Senate staffer (and current Princeton professor), Jonathan Mayer: the FTC, in this decision, appears to have significantly changed its interpretation of COPPA, and done so in a manner that is going to set up something of a clash with Section 230. What happened is a little bit subtle, so it requires some background.

The key feature of COPPA, and the one you’re probably aware of whether or not you know it, is that it has specific rules if a site is targeting children under the age of 13. This is why tons of sites (including us) say that you need to be at least 13 to use them, in an attempt to avoid dealing with many of the more insane parts of COPPA compliance. Of course, in practice, this just means that many people lie. Indeed, as danah boyd famously wrote nearly a decade ago, COPPA seems to be training parents to help their kids lie online, which is kinda dumb.

Of course, the key point under COPPA is not actually the "under 13" users, but rather whether or not a website or online service is "directed to children under 13 years of age." Indeed, in talking about it with various lawyers, we’ve been told that most sites (including our own) shouldn’t even worry about COPPA because it’s obvious that such sites aren’t "directed to children" as a whole, and therefore even if a few kids sneak in, they still wouldn’t be violating COPPA. In other words, the way the world has mostly interpreted COPPA is that it’s not about whether any particular piece (or pieces) of content is aimed at children, but whether the larger site itself is aimed at children.

This new FTC settlement agreement changes that.

Basically, the FTC has decided that, under COPPA, it no longer needs to view the service as a whole, but can divide it up into discrete chunks, and determine if any of those chunks are targeted at kids. To be fair, this is well within the law. The text of COPPA clearly says in definitional section (10)(A)(ii) that “a website or online service directed to children” includes “that portion of a commercial website or online service that is targeted to children.” It’s just that, historically, most of the focus has been on the overall website — or something that is more distinctly a “portion” rather than an individual user’s channel.
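To see the shift concretely, here’s a toy sketch (in Python, with hypothetical names; nothing below comes from the rule’s actual text beyond the "portion" language) of the difference between the two readings:

```python
# Toy illustration of the two readings of COPPA's "directed to children"
# language. All names here are hypothetical; only the "portion" concept
# comes from the statute itself.

from dataclasses import dataclass
from typing import List

@dataclass
class Channel:
    name: str
    child_directed: bool  # per the Section 312.2 factors (subject matter, etc.)

def covered_old_reading(site_directed_at_children: bool) -> bool:
    # Historical focus: is the site, taken as a whole, aimed at kids?
    return site_directed_at_children

def covered_new_reading(channels: List[Channel]) -> bool:
    # The settlement's approach: any child-directed "portion" (here, a
    # channel) triggers COPPA obligations for that portion.
    return any(ch.child_directed for ch in channels)

channels = [
    Channel("general news", child_directed=False),
    Channel("nursery rhymes", child_directed=True),
]

# A general-audience platform escapes under the old reading...
assert covered_old_reading(site_directed_at_children=False) is False
# ...but not under the per-portion reading.
assert covered_new_reading(channels) is True
```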

Except that, under the law, it seems it should be the channel operator who is held liable for violations of COPPA on that channel, rather than the larger platform. In fact, back in 2013, the last time the FTC announced rules around COPPA, it appears to have explicitly stated that it would apply COPPA to the specific content provider whose content was directed at children, and not to the general platform they used. This text is directly from that FTC rule, which went through years of public review and comment before being agreed upon:

… the Commission never intended the language describing "on whose behalf" to encompass platforms, such as Google Play or the App Store, when such stores merely offer the public access to someone else’s child-directed content. In these instances, the Commission meant the language to cover only those entities that designed and controlled the content…

But that’s not what the FTC is doing here. And so it appears that the FTC is changing the definition of things, but without the required comment and rulemaking process. Here, the FTC admits that channels are “operators” but then does a bit of a two-step to say that it’s YouTube who is liable.

YouTube hosts numerous channels that are "directed to children" under the COPPA Rule. Pursuant to Section 312.2 of the COPPA Rule, the determination of whether a website or online service is directed to children depends on factors such as the subject matter, visual content, language, and use of animated characters or child-oriented activities and incentives. An assessment of these factors demonstrates that numerous channels on YouTube have content directed to children under the age of 13, including those described below in Paragraphs 29-40. Many of these channels self-identify as being for children as they specifically state, for example in the "About" section of their YouTube channel webpage or in communications with Defendants, that they are intended for children. In addition, many of the channels include other indicia of child-directed content, such as the use of animated characters and/or depictions of children playing with toys and engaging in other child-oriented activities. Moreover, Defendants’ automated system selected content from each of the channels described in Paragraphs 29-40 to appear in YouTube Kids, and in many cases, Defendants manually curated content from these channels to feature on the YouTube Kids home canvas.

Indeed, part of the evidence that the FTC relies on is the fact that YouTube “rates” certain channels for kids.

In addition to marketing YouTube as a top destination for kids, Defendants have a content rating system that categorizes content into age groups and includes categories for children under 13 years old. In order to align with content policies for advertising, Defendants rate all videos uploaded to YouTube, as well as the channels as a whole. Defendants assign each channel and video a rating of Y (generally intended for ages 0-7); G (intended for any age); PG (generally intended for ages 10+); Teen (generally intended for ages 13+); MA (generally intended for ages 16+); and X (generally intended for ages 18+). Defendants assign these ratings through both automated and manual review. Previously, Defendants also used a classification for certain videos shown on YouTube as "Made for Kids."
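To make the inference concrete, here’s a minimal sketch (again in Python, with hypothetical names; this is not YouTube’s actual system, just the tiers as the complaint describes them) of how a rating taxonomy like that maps onto COPPA’s under-13 line:

```python
# Minimal sketch of the inference the FTC draws from the rating tiers
# quoted above. Hypothetical names throughout; only the tiers and their
# age ranges come from the complaint's description.

# Youngest age each rating is "generally intended" for.
RATING_MIN_AGE = {
    "Y": 0,      # generally intended for ages 0-7
    "G": 0,      # intended for any age
    "PG": 10,    # generally intended for ages 10+
    "Teen": 13,  # generally intended for ages 13+
    "MA": 16,    # generally intended for ages 16+
    "X": 18,     # generally intended for ages 18+
}

COPPA_THRESHOLD = 13

def rating_reaches_under_13(rating: str) -> bool:
    # The FTC's inference: a rating whose intended audience starts below
    # 13 is evidence that the rated channel is "directed to children."
    return RATING_MIN_AGE[rating] < COPPA_THRESHOLD

assert rating_reaches_under_13("Y")         # ages 0-7: squarely under 13
assert not rating_reaches_under_13("Teen")  # ages 13+: outside COPPA's scope
```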

Those ratings are a key point that the FTC uses to argue that YouTube knows its site is "directed at" children. But here’s the problem with that. Section 230 of the Communications Decency Act, specifically the often forgotten (or ignored) (c)(2), is explicit that no provider shall be held liable for any moderation actions, including "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." One way to do that is through content labeling policies, such as the ones YouTube used and the FTC described.

So here, YouTube is being found partially liable because of its content ratings, which are being presented as evidence that it’s covered by COPPA. But CDA 230 makes it clear that there can’t be any liability arising from such a rating system.

This won’t get challenged in court (here) since Google/YouTube have agreed to settle, but it certainly does present a big potential future battle. And, frankly, given the way that some courts have been willing to twist and bend CDA 230 lately, combined with the general “for the children!” rhetoric, I have very little confidence that CDA 230 would win.

Companies: google, youtube


Comments on “FTC's Latest Fine Of YouTube Over COPPA Violations Shows That COPPA And Section 230 Are On A Collision Course”

16 Comments
Stephen T. Stone (profile) says:

given the way that some courts have been willing to twist and bend CDA 230 lately, combined with the general "for the children!" rhetoric, I have very little confidence that CDA 230 would win

Makes me wonder how many sites/comments sections would go dark within 24 hours after a ruling that takes down or severely limits CDA 230.

Anonymous Coward says:

Re: Re: Re: Re:

"/." on it’s own, tells us nothing.

Actually, it does. It’s a direct reference to a specific site. Perhaps a bit unconventional and obscure to the public, but he has stated the exact site he is asking you to try posting anonymously on.

To wit, he wrote a "/" and a ".", or, in English, a "slash" and a "dot". Pronounce them in the sentence and you get:

Try to post an anonymous comment on slash dot

This is obviously a reference to slashdot.org, which does not currently allow anonymous comments.

That One Guy (profile) says:

Probably shouldn't open that can...

So here, YouTube is being found partially liable because of its content ratings, which are being presented as evidence that it’s covered by COPPA. But CDA 230 makes it clear that there can’t be any liability arising from such a rating system.

This won’t get challenged in court (here) since Google/YouTube have agreed to settle, but it certainly does present a big potential future battle. And, frankly, given the way that some courts have been willing to twist and bend CDA 230 lately, combined with the general "for the children!" rhetoric, I have very little confidence that CDA 230 would win.

Funny thing about gutting 230 ‘for the children’: it would be insanely counter-productive. If providing a ratings system opens a platform up to liability, then they’ve basically got two options: ditch the rating system, or ditch the content. And as the content is what makes the platform worthwhile, guess which is likely to see the axe first?

For those complaining about how sites like YT aren’t ‘doing enough’ to be kid friendly: wait until the ratings system goes up in smoke, and either the parents themselves have to actually do the work of going through videos to see what is and is not kid friendly, or accept that letting their kids on the site could result in them seeing some very not ‘kid friendly’ content.

urza9814 (profile) says:

...what?

How exactly do you get from "YouTube specifically is paying people to create content and pages on their website that are explicitly marketed towards children" to "YouTube is not intentionally marketing anything to children"? Am I missing some crucial part of the argument here?

YouTube Kids is not moderation, it’s a service. They aren’t purging offensive content, they’re creating a curated list of non-offensive content. Those are very different actions. They’re also marketing this service towards parents claiming that it is explicitly designed for children. That’s not moderation, it’s MARKETING. It’s YouTube’s own speech saying these things, not users or user content.

aerinai (profile) says:

My head hurts

So maybe I’ve missed some of this nuance… Wouldn’t the account that is actively viewing the content be the account that is considered ‘over 13’, ‘under 13’ or unknown (anonymous / no account)? I literally only watch baby shows on YouTube for my kid. I don’t really YouTube much, but I don’t log out of Chrome, log back in as my 1 year old… that sounds insane. So I’m assuming that they are targeting me with ads in this case. I get it. Awesome.

So, let’s say my kid is 10. He creates his own Google account. He is under 13. He watches another video… COPPA doesn’t say you can’t advertise, it just says you can’t use his ‘private’ data to target, correct? Generalized advertisements (like you see on Disney Channel, Cartoon Network, etc.) are still allowed.

Now, let’s say my kid jumps on my computer and stays logged into my account. I don’t see how any sane person could ‘blame Google’ for the actions of the user (in this case my kid) for Google sending him targeted ads (albeit based primarily on my data). Hell, it probably thinks I’m totally into Power Rangers and Powerpuff Girls or whatever happens to be popular (he is using my account after all).

So… while I agree that this does seem like an unnecessary expansion of the law, I don’t know why they had to expand it in the first place. Either Google knowingly targeted children’s accounts (content is irrelevant), or they didn’t.

If Google saw an account flagged as sub-13 and ignored the law, that is kind of on them.

If they allow ‘anonymous’ browsing, I’d assume that their TOS would cover them (you must be 13 or older).

If you have an adult’s account and are actually a child… this seems like yet another example of intermediary liability… which is dumb. If you are mortally upset, then the government should sue the users for… using a service?

What am I missing?!

bob says:

Right target, wrong rationale.

I agree the channels, not the service, are responsible for the content they provide. However, the control of targeted ads is not managed by the channels; it’s managed by the service. A channel (though I could be wrong) can decide to monetize and stuff, but unless they are marketing within the video itself, they have no control over what YT/Google shows for ads.
So I can see why, in this case, people would go after the service and not the channels individually. However, their justification via the ratings system is bogus due to CDA 230.

Anonymous Coward says:

Section 230 of the Communications Decency Act, specifically the often forgotten (or ignored) (c)(2) is explicit that no provider shall be held liable for any moderation actions, including "any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable."

True, but Section 230 is intended to address certain specific issues with content, such as the endless defamation claims which are used by the unscrupulous to silence critics. It’s not a panacea.

There are some sorts of content issues (such as the failure to warn of violent criminals on a site, which Internet Brands learned the hard way with their "model mayhem" site) which aren’t protected by 230.

Stephen T. Stone (profile) says:

Re:

Section 230 is intended to address certain specific issues with content

47 U.S.C. § 230 doesn’t apply to content so much as it applies to who posts it. It places legal liability on those most responsible for the creation and publication of that content; that may or may not include whoever owns/operates the platform on which that content is published.

But if you don’t believe me, read what Chris Cox said on the Congressional record about the intent and purpose of 230:

We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see.

[O]ur amendment will do two basic things: First, it will protect computer Good Samaritans, online service providers, anyone who provides a front end to the Internet, let us say, who takes steps to screen indecency and offensive material for their customers. It will protect them from taking on liability such as occurred in the Prodigy case in New York that they should not face for helping us and for helping us solve this problem. Second, it will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the Internet because frankly the Internet has grown up to be what it is without that kind of help from the Government. In this fashion we can encourage what is right now the most energetic technological revolution that any of us has ever witnessed. We can make it better. We can make sure that it operates more quickly to solve our problem of keeping pornography away from our kids, keeping offensive material away from our kids, and I am very excited about it.

Anonymous Coward says:

Re: Re:

True, but Section 230 is intended to address certain specific issues with content, such as the endless defamation claims which are used by the unscrupulous to silence critics. It’s not a panacea.

You are objectively and factually wrong. That is one of its intended uses; it is far from the only one.

There are some sorts of content issues (such as the failure to warn of violent criminals on a site, which Internet Brands learned the hard way with their "model mayhem" site) which aren’t protected by 230.

Which are all covered under separate laws. Indeed, the intro to that section explicitly states it does not supersede other federal laws. If it’s not covered under a separate law, then it falls under 230. Tada! Panacea!

ECA (profile) says:

Again... I love it..

Privacy of children??
HOW IN HELL CAN YT/GOOGLE know everything??

Everyone demands privacy, and the only way to do that is NOT to pay attention to the data..
Don’t sort it, don’t do this/that or the other..
Treat it all the same, to show that you are keeping it private.

Are you REALLY going to enter your child’s info on the net??
REALLY???
When almost any site on the net can read it off of your machine, while you are wandering the net??

GET A HINT.
