- The Los Angeles Litigation Over Platform Design
- FTC Authority Under Existing Law
- FTC Enforcement Actions to Date
- The Regulatory Gap for Teenagers Ages 13-17
- First Amendment Constraints on FTC Action
- Enforcement Actions Available to the FTC Now
- Why the FTC Has Been Cautious
- How the Litigation Could Shift FTC Strategy
- The Core Issue: FTC Power Versus FTC Will
Internal company documents uncovered in litigation show employees discussing the psychological effects of their platforms on teens, suggesting the companies understood the potential harms. Meta’s Instagram and Google’s YouTube face claims in a California courtroom that they deliberately engineered their platforms to addict children, using techniques borrowed from slot machines and Big Tobacco. The Federal Trade Commission has explicit authority to protect children online and has extracted billions in penalties from tech companies before, yet it hasn’t brought aggressive enforcement action despite this evidence.
The FTC has the power to act but hasn’t used it. The agency has the legal tools and the mandate; what it seems to lack is the will, or the strategic clarity, to use them at the scale the problem requires.
The Los Angeles Litigation Over Platform Design
The plaintiffs in the Los Angeles case aren’t arguing about content moderation or free speech. They’re arguing about design. The lawsuit alleges that Meta and Google embedded psychological tricks into their platforms to create addictive engagement patterns, including features like infinite scrolling, algorithmic feeds, and strategically timed notifications designed to pull users back into compulsive engagement cycles.
What makes this litigation particularly damaging is the documents. Newly released internal Meta emails show employees explicitly comparing their platform to addictive substances. A 2019 internal presentation indicated that teens were compelled to spend time on the app despite awareness of how it affected their mental health.
The judge rejected the companies’ attempts to dismiss the case on First Amendment and Section 230 grounds. The reasoning draws a line that matters: Section 230 shields platforms from liability for what users post, but it does not shield the platforms’ own design choices. That distinction creates legal space for FTC enforcement.
FTC Authority Under Existing Law
Section 5 of the FTC Act empowers the agency to stop unfair business practices. An unfair practice is one that causes substantial injury to consumers, that consumers cannot reasonably avoid, and whose harm is not outweighed by countervailing benefits. No fraud or misrepresentation is required; conduct can violate the statute simply because it is unfair in its effects.
Apply that standard to infinite scrolling combined with algorithms designed to maximize teenagers’ engagement. The injury is substantial: psychological dependence and mental health harms. Teenagers can’t reasonably avoid it because the features are engineered to be hard to stop using. And the benefits accrue mostly to the companies’ advertising revenue, not to the kids.
That’s a plausible unfairness case under existing law. The FTC wouldn’t need Congress to pass new legislation.
Then there’s the Children’s Online Privacy Protection Act (COPPA), which is supposed to protect children under thirteen online. COPPA requires websites and online services directed to children under thirteen to notify parents and obtain verifiable consent before collecting personal information. The law explicitly recognizes that children are vulnerable to manipulation in ways adults aren’t. It gives the FTC authority to define what counts as personal information, to approve consent methods, and to investigate violations.
But COPPA has a major gap: it covers only children under thirteen, leaving teenagers in a regulatory vacuum. Within that under-thirteen scope, though, the law gives the FTC real enforcement power. Platforms can be fined tens of thousands of dollars per day for each violation, and as the rough calculation below shows, that arithmetic compounds quickly.
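For a sense of scale, here is a back-of-the-envelope sketch. The account and day counts are hypothetical; the only anchor is that the inflation-adjusted civil penalty cap under the FTC Act has been on the order of $50,000 per violation in recent years, and each day a violation continues can count as a separate violation.

```latex
% Hypothetical maximum-exposure arithmetic (illustrative numbers only):
% assume 1,000 under-13 accounts, each a continuing violation for 100 days,
% at an assumed cap of roughly $50,000 per violation per day.
\[
  1{,}000 \ \text{accounts} \times 100 \ \text{days} \times \$50{,}000
  \;=\; \$5{,}000{,}000{,}000
\]
```

Courts and settlements rarely approach the statutory maximum, but the arithmetic helps explain why enforcement actions against large platforms can plausibly involve penalties in the billions.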
FTC Enforcement Actions to Date
In 2019, the FTC settled with YouTube and Google for $170 million over COPPA violations. YouTube had collected data from viewers of child-directed channels without parental consent and used it to serve targeted ads.
But look at what the settlement required. YouTube had to develop systems for channel owners to identify child-directed content, notify creators about COPPA, and provide annual training. These are compliance measures, not fundamental changes to how the platform works. The settlement addressed how YouTube made money from children’s viewing, not the architecture of its recommendation algorithm or notification system.
The FTC also reached a $5 billion settlement with Meta in 2019 over broader privacy violations. The resulting 2020 agreement included some rules about minors—privacy reviews of new products, greater security for personal information, restrictions on certain data practices. But that order predated the emergence of extensive evidence about Meta’s internal research on Instagram’s effects on teen mental health.
In May 2023, the FTC proposed modifications to Meta’s consent order that would have been genuinely aggressive. The proposal would bar Meta entirely from monetizing data collected from users under eighteen. It would require Meta to obtain written confirmation from a third-party assessor that its privacy program complies with the order before launching new products or services. It would expand restrictions on facial recognition. However, the proceedings were subsequently stayed, and the modification has not been finalized.
More recently, the FTC sued TikTok in August 2024 for sweeping COPPA violations. The complaint alleged that TikTok knowingly allowed millions of children under thirteen to create accounts, collected their personal information without parental consent, and failed to delete children’s data when parents requested it. TikTok also allegedly created ways for children to bypass age verification, producing accounts classified as “age unknown” that grew to number in the millions. The case represents aggressive enforcement with potential penalties in the billions, though the legal theory targets technical rules about age verification and parental consent rather than the broader business model of engagement maximization.
The Regulatory Gap for Teenagers Ages 13-17
COPPA only protects children under thirteen. Teenagers aged thirteen to seventeen—the demographic most engaged with social media and most vulnerable to its psychological effects—fall outside federal protection entirely.
This isn’t an oversight the FTC can fix on its own. The age cutoff is written into the statute. The agency’s own staff report noted that social media companies often treat teenagers the same as adults, despite extensive research showing that digital platforms contribute to negative mental health outcomes for young users. The report explicitly recommended that Congress pass a federal privacy law protecting teenagers over the age of thirteen.
Congress hasn’t done that. Multiple proposals exist—COPPA 2.0 would expand protections to include teenagers under seventeen—but they remain pending without clear paths to passage. The House version differs from the Senate version. Both face First Amendment concerns and disagreement about specific rules versus giving regulators flexibility to decide case-by-case.
Meanwhile, states are filling the vacuum. California’s Age-Appropriate Design Code Act, scheduled to take effect in July 2024, applies privacy and safety requirements to online services likely to be accessed by children under eighteen, though courts have enjoined key provisions on First Amendment grounds. Maryland’s Kids Code imposes similar requirements. Multiple other states have enacted laws imposing age verification, parental consent provisions, or restrictions on features like algorithmic feeds targeting minors.
State regulation creates a patchwork: platforms face different requirements in different jurisdictions. The FTC, as a federal agency with nationwide jurisdiction, could theoretically establish uniform standards.
First Amendment Constraints on FTC Action
The central constraint is the First Amendment. When platforms choose what content to display and how to present it, they are making editorial judgments that courts have recognized as protected expression.
If the FTC tries to restrict algorithmic feeds, infinite scrolling, or notification timing, it is arguably restricting how platforms communicate information, activity that may be constitutionally protected. In 2024, in Moody v. NetChoice, the Supreme Court reviewed Texas and Florida laws restricting how platforms moderate and arrange content; although it sent the cases back to the lower courts, the majority reasoned that a platform’s selection and arrangement of content constitutes protected speech, akin to a newspaper’s editorial judgments.
But there’s a potential distinction that matters. One justice suggested in a concurrence that feeds assembled purely by engagement-maximizing algorithms, rather than by human editorial judgment, might not enjoy the same free speech protection as editorial choices. The difference between algorithms that amplify content a platform believes is valuable and algorithms designed solely to maximize engagement metrics could create space for FTC regulation.
Some legal scholars argue that the government can regulate commercially motivated design features even when they involve speech. When platforms employ engagement-maximization features not as editorial choices but as mechanisms to exploit user psychology for advertising revenue, those design choices may constitute commercial conduct subject to regulation rather than protected expression. On this view, infinite scrolling isn’t a form of expression; it’s a behavioral mechanism, closer to the way casinos design their floors to keep gamblers playing, and something the government can permissibly regulate as a consumer protection matter.
Enforcement Actions Available to the FTC Now
Several concrete enforcement pathways exist using the FTC’s current authority. None require congressional action.
First, the agency could bring Section 5 unfairness cases against specific design features. Such a complaint would charge that infinite scrolling combined with algorithmic content selection designed to maximize engagement is an unfair practice: it causes substantial injury to minors who can’t reasonably avoid it, and the injury isn’t outweighed by countervailing benefits. The internal company documents emerging in the current litigation provide exactly the kind of evidence the FTC would need.
Second, more aggressive COPPA enforcement remains available. The FTC can sue platforms that knowingly let children under thirteen use their services despite claiming age restrictions. The TikTok action demonstrates this approach.
Third, the FTC could pursue rulemaking to establish industry-wide standards. Instead of litigating case by case, the agency could adopt rules requiring platforms to disable infinite scrolling by default for minors (with the option to turn it back on), implement mandatory “time-out” periods, add friction to notifications, or meet other design requirements that reduce addictive potential. Rulemaking has advantages: clear prospective rules, the same standards for every platform, and no arguments about whether past conduct broke the law. But it is extraordinarily time-consuming, typically requiring years of notice-and-comment proceedings in which the agency publishes proposed rules and responds to public input, and it would face litigation over both the FTC’s authority and the First Amendment.
Fourth, consent orders could impose stricter requirements on companies the FTC has previously sanctioned. The 2020 Meta consent order, and future orders with other social media companies, could include specific prohibitions on engagement-maximizing design features for minors, requirements for parental controls, and monitoring provisions.
Fifth, the FTC could use COPPA’s safe harbor program. COPPA allows industry groups to create self-regulatory programs, subject to FTC approval, whose protections must be at least as strong as the rule itself. The agency could condition approval on requirements that covered companies implement specific design features reducing addictiveness for minors.
Why the FTC Has Been Cautious
Legal uncertainty explains some of the FTC’s caution. The First Amendment questions are real, the scope of Section 230 is contested, and the line between regulating conduct and restricting speech is genuinely hard to draw.
But legal uncertainty isn’t the whole story. The FTC is resource-constrained and juggling competing priorities. In recent years it has devoted substantial resources to antitrust enforcement against large tech companies, merger review, and artificial intelligence policy. Children’s privacy is one of many issues on its docket.
FTC leadership also matters. Under Chair Lina Khan, the agency took a more aggressive posture toward tech companies. Khan argued that children should be able to use the internet without being constantly tracked and having their information monetized. Under her leadership, the FTC finalized major amendments to the COPPA Rule in January 2025, including a requirement of separate parental opt-in consent before companies can use children’s data for targeted advertising.
FTC leadership transitioned in 2025 to Andrew Ferguson as Chair. While Ferguson has said that protecting kids online is a priority, his enforcement approach remains unclear. Commissioners’ views about the proper scope of FTC authority shape how they enforce the law: those skeptical of expansive agency power tend to favor narrower enforcement, while those with a broader view of the agency’s mandate push to use it more aggressively.
How the Litigation Could Shift FTC Strategy
The ongoing litigation could alter the FTC’s calculus in several ways. The lawsuit is surfacing extensive internal company documents showing what platforms knew about their effects on youth mental health and when they knew it. If these documents show that companies understood the harms but chose addictive designs anyway, that evidence would directly support FTC unfair-practices claims.
The case is also testing legal theories about platform liability for design features. If courts accept that addictive design can amount to a product defect or negligence, that reasoning bolsters the FTC’s authority to regulate the same features. If courts rule against the plaintiffs, the FTC’s room to maneuver narrows.
The case’s high profile will also focus public attention on why federal regulators aren’t leading enforcement. A substantial plaintiffs’ verdict would intensify pressure on the FTC to protect children; a defense win would embolden platforms to resist regulatory action.
The Core Issue: FTC Power Versus FTC Will
The FTC has the legal power to address many of these practices now being litigated in Los Angeles. It could bring enforcement actions against platforms for unfair practices in designing addictive features targeting minors. It could propose expanded rules under COPPA or create new rules addressing engagement-maximizing design. It could use settlement agreements to require stricter rules.
What the FTC hasn’t done is use this power aggressively. The reasons include legal uncertainty, disagreement among commissioners, limited resources, and genuine debate about the best approach. But mounting research on mental health harms and increasingly active state attorneys general make growing pressure on the agency almost certain.
As company documents continue to reveal what platforms knew about harms to young users, that pressure will only intensify. The real question isn’t whether the FTC has the power; it’s whether it has the will to use it. The answer will determine what regulatory framework governs the relationship between social media companies and their youngest users for years to come.