
The digital lives of American children and teenagers have become a focal point of intense national debate and legislative action.

As young people spend increasing hours on platforms like TikTok, Instagram, and YouTube, concerns among parents, pediatricians, and policymakers have mounted over potential harms ranging from mental health struggles and addiction to data exploitation and online predators.

This has inspired a complex and rapidly evolving response from federal and state governments, creating a fragmented landscape of rules that often clash with constitutional principles of free speech and present significant technical challenges.

At the center of this storm are the technology companies themselves, which are simultaneously rolling out new safety features, lobbying to shape future laws, and legally challenging regulations they deem unworkable or unconstitutional.

The Federal Foundation: The Children’s Online Privacy Protection Act (COPPA)

For over two decades, the primary federal law governing children’s online privacy has been the Children’s Online Privacy Protection Act (COPPA). While it remains the bedrock of U.S. policy, its limited scope and the rapid evolution of technology have prompted a significant modernization effort by the Federal Trade Commission (FTC), the agency responsible for its enforcement.

What is COPPA? The Original Framework

Enacted by Congress in 1998 and put into effect by the FTC in 2000, COPPA has a central mission: to put parents in control of what personal information websites and online services can collect from their children. The law was a direct response to the early internet’s largely unregulated data collection practices.

Its rules apply to operators of commercial websites and online services, including mobile apps, that are either directed to children under the age of 13 or have “actual knowledge” that they are collecting personal information from users in that age group. This “actual knowledge” standard means the law can apply even to general audience sites if they are aware of young users on their platform. Most non-profit organizations, however, are exempt from COPPA’s requirements.

The core pillars of the original COPPA rule include:

Privacy Policy: Operators must post a clear, comprehensive, and easily accessible privacy policy that details their practices for collecting, using, and disclosing children’s personal information.

Verifiable Parental Consent: Before collecting, using, or disclosing any personal information from a child under 13, an operator must first provide direct notice to the parent and obtain “verifiable parental consent”. This is the hallmark of COPPA, designed to ensure parents are the gatekeepers of their child’s data.

Parental Rights: The law grants parents specific rights, including the ability to review the personal information collected from their child, the right to refuse its further collection or use, and the right to direct its deletion at any time.

Data Minimization: Operators are prohibited from conditioning a child’s participation in a game, the offering of a prize, or another activity on the child disclosing more personal information than is reasonably necessary to participate in that activity.

The very stringency of these requirements has had a profound, if unintended, effect on the internet. The significant cost and effort involved in complying with COPPA have led many websites and social media platforms to simply bar children under 13 from using their services altogether, producing the now-ubiquitous “you must be 13 or older” age gates.
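
To illustrate the mechanics (and the weakness) of these age gates, here is a minimal, hypothetical sketch of a self-declared birthday check of the kind most platforms use. The 13-year threshold comes from COPPA; the function and field names are illustrative and not drawn from any platform’s actual code.

```python
from datetime import date

COPPA_MIN_AGE = 13  # the "you must be 13 or older" threshold driven by COPPA

def age_on(birth_date: date, today: date) -> int:
    """Whole years elapsed between birth_date and today."""
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

def passes_age_gate(self_reported_birth_date: date, today: date | None = None) -> bool:
    """Self-declared age gate: trivially defeated by entering a false birthday,
    which is why regulators and platforms are exploring stronger age assurance."""
    today = today or date.today()
    return age_on(self_reported_birth_date, today) >= COPPA_MIN_AGE

# Example: a 12-year-old is blocked only if they report their real birthday.
print(passes_age_gate(date(2014, 5, 1), today=date(2025, 6, 1)))  # False
```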

This has created a regulatory “cliff,” where a child gains access to the vast, unregulated digital world on their 13th birthday with virtually no federal data privacy protections. This gap is the primary motivation behind the recent wave of state and federal legislative proposals aimed at protecting teenagers.

The 2025 COPPA Overhaul: What’s New?

Recognizing that the digital world of the 2020s is vastly different from that of 2000, the FTC finalized a major update to the COPPA Rule on January 16, 2025. The amendments, which become effective on June 23, 2025, with a full compliance deadline of April 22, 2026, are designed to address modern technologies and data practices.

Key changes in the 2025 Final Rule include:

Expanded Definition of “Personal Information”: The rule’s scope now explicitly includes biometric identifiers such as fingerprints, retina and iris patterns, voice data, and facial data. It also covers government-issued identifiers like Social Security numbers. This update responds to the rise of biometric technologies and ensures the data they generate falls under COPPA’s protections.

While the FTC chose not to include “data derived from voice data, gait data, or facial data,” it clarified that templates created from such data, like facial templates used for recognition, are considered personal information.

Stricter Data Retention & Security Mandates: The updated rule formalizes and strengthens data management requirements. It explicitly prohibits the indefinite retention of children’s personal information, mandating that it be retained only for “as long as reasonably necessary” to fulfill the specific purpose for which it was collected. Furthermore, companies are now required to establish, implement, and maintain a comprehensive, written children’s personal information security program.
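
As a rough illustration of what a purpose-limited retention rule can look like in practice, the sketch below deletes records once the retention window tied to their stated collection purpose has elapsed. The purposes, windows, and record structure are hypothetical; the amended rule does not prescribe specific periods, only that data not be kept longer than reasonably necessary.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical purpose-specific retention windows; COPPA requires only that data
# be kept "as long as reasonably necessary" for the purpose it was collected for.
RETENTION_WINDOWS = {
    "account_services": timedelta(days=365),
    "contest_entry": timedelta(days=30),
    "support_ticket": timedelta(days=90),
}

@dataclass
class ChildRecord:
    child_id: str
    purpose: str
    collected_at: datetime

def expired(record: ChildRecord, now: datetime) -> bool:
    """A record is expired once its purpose-specific window has passed.
    Unknown purposes get no window at all, so they expire immediately."""
    window = RETENTION_WINDOWS.get(record.purpose, timedelta(0))
    return now - record.collected_at >= window

def purge_expired(records: list[ChildRecord], now: datetime) -> list[ChildRecord]:
    """Return only the records still within their retention window."""
    return [r for r in records if not expired(r, now)]
```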

Enhanced Notice and Consent for Third-Party Disclosures: In a significant move to counter the opaque data-sharing economy, operators must now provide much greater transparency. Their online privacy notices must disclose the specific identities and categories of any third parties with whom they share data, as well as the purposes for that disclosure.

Critically, the rule requires operators to obtain a separate, affirmative parental opt-in to disclose a child’s information to third parties for purposes like targeted advertising, unless that disclosure is integral to the site’s core function. This gives parents granular control over not just data collection, but data dissemination.

New Methods for Verifiable Parental Consent: To make compliance more practical, the FTC has officially approved several new methods for obtaining verifiable parental consent. These include using knowledge-based authentication questions that would be difficult for a child to answer, allowing a parent to submit a government-issued photo ID for verification, and using text messaging coupled with additional confirmation steps.
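
The snippet below sketches, in purely structural terms, how an operator might route a consent request to one of the newly approved methods. The method names mirror the rule’s descriptions; the verification functions themselves are stubs, since real implementations depend on third-party identity and messaging services.

```python
from enum import Enum

class ConsentMethod(Enum):
    KNOWLEDGE_BASED_AUTH = "knowledge_based_authentication"
    GOVERNMENT_PHOTO_ID = "government_photo_id"
    TEXT_PLUS_CONFIRMATION = "text_message_plus_confirmation"

# Stubs standing in for integrations with real verification providers.
def run_kba_quiz(parent_contact: str) -> bool:
    raise NotImplementedError("questions a child could not easily answer")

def verify_photo_id(parent_contact: str) -> bool:
    raise NotImplementedError("ID checked against the parent, then promptly deleted")

def text_then_confirm(parent_contact: str) -> bool:
    raise NotImplementedError("SMS plus an additional confirmation step")

def obtain_verifiable_parental_consent(parent_contact: str, method: ConsentMethod) -> bool:
    """Dispatch to one of the FTC-approved consent flows (stubbed here)."""
    if method is ConsentMethod.KNOWLEDGE_BASED_AUTH:
        return run_kba_quiz(parent_contact)
    if method is ConsentMethod.GOVERNMENT_PHOTO_ID:
        return verify_photo_id(parent_contact)
    if method is ConsentMethod.TEXT_PLUS_CONFIRMATION:
        return text_then_confirm(parent_contact)
    return False
```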

Greater Transparency for Safe Harbor Programs: The rule heightens requirements for COPPA Safe Harbor programs—self-regulatory bodies that certify companies for COPPA compliance. These programs must now publicly disclose their membership lists and submit more detailed reports to the FTC, increasing their accountability and transparency.

COPPA in Action: High-Stakes Enforcement

The FTC enforces COPPA vigorously, and violations are treated as “unfair or deceptive acts or practices” under Section 5 of the FTC Act, which can result in substantial financial penalties. These enforcement actions often serve as powerful catalysts for industry-wide change, sending a clear message about the FTC’s expectations.

Notable enforcement cases include:

Google and YouTube (2019): In a landmark case, the companies paid a record $170 million penalty to settle allegations that YouTube illegally collected personal information from children without parental consent. The FTC alleged that YouTube profited from using this data to deliver targeted ads on child-directed channels. The settlement forced YouTube to implement a system requiring creators to identify their content as “made for kids,” which fundamentally altered the business model for a massive segment of the platform’s creators and demonstrated how targeted enforcement can reshape an entire ecosystem.

Epic Games (2022): The creator of the popular game Fortnite agreed to pay $275 million to settle allegations of COPPA violations. The FTC charged that Epic collected personal information from players under 13 without notifying parents or obtaining their consent, and that its default settings enabled live voice and text communications for children, putting them at risk.

TikTok: The platform’s predecessor, Musical.ly, paid a $5.7 million penalty in 2019 for illegally collecting data from users under 13. However, the Department of Justice and the FTC filed a new lawsuit in August 2024, alleging that TikTok and its parent company, ByteDance, have continued to violate COPPA and a 2019 court order by knowingly allowing children under 13 on the platform and collecting their data without parental consent.

Other Major Platforms: The FTC has also brought enforcement actions against other major tech companies for COPPA violations, including Microsoft for its Xbox services and Amazon for its Alexa voice assistant, underscoring the broad applicability of the law across different types of online services.

The State-Level Surge: A Patchwork of New Rules

In the absence of new federal laws extending privacy protections to teenagers, a growing number of states have begun to act as “laboratories of democracy,” passing their own aggressive and often conflicting regulations. This has created a complex and challenging compliance patchwork for technology companies operating nationwide.

California’s “Safety by Design” Model

California has positioned itself at the forefront of this legislative wave with a suite of laws that champion a “safety by design” philosophy, moving beyond simple data privacy to regulate the very architecture of online services.

California Age-Appropriate Design Code Act (AADC / AB 2273): This landmark law, which was scheduled to take effect on July 1, 2024, is modeled on a similar regulation in the United Kingdom. Its approach represents a fundamental shift in regulatory thinking.

Expanded Scope: The AADC’s protections apply to all minors under the age of 18, a significant expansion from COPPA’s under-13 focus. It covers any business providing an online service, product, or feature that is “likely to be accessed by children”.

Core Philosophy: The law’s central tenet is that businesses must prioritize the “best interests of children” over their own commercial interests when designing and operating their services.

Key Requirements: The AADC imposes several novel obligations on businesses:

Data Protection Impact Assessments (DPIAs): Before launching any new product or feature likely to be accessed by children, businesses must complete a comprehensive DPIA. This assessment must identify potential risks of harm—such as exposure to harmful content, contact from predators, or the use of features that encourage extended use (like autoplay)—and create a timed plan to mitigate or eliminate those risks.

High Default Privacy Settings: All default privacy settings for minor users must be configured to the highest level of privacy available.

Prohibited Activities: The law explicitly prohibits several practices, including profiling a child by default, using a child’s personal information in a way known to be “materially detrimental” to their physical or mental well-being, and using “dark patterns” to trick or nudge children into giving up their privacy rights. The collection of precise geolocation data is also heavily restricted.

California’s Social Media Addiction and Transparency Laws:

Protecting Our Kids from Social Media Addiction Act (SB 976): Signed into law in September 2024, this bill directly targets the addictive nature of social media feeds. It prohibits platforms from providing an “addictive feed” to a minor without first obtaining verifiable parental consent. It also restricts platforms from sending notifications to minors during overnight hours and typical school hours.


Social Media Transparency Law (AB 587): This 2022 law requires large social media companies to be more transparent about their content moderation policies and practices by filing detailed semiannual reports with the California Attorney General.

Other Related Laws: California has also passed legislation to protect the earnings of child “vloggers,” ensuring a portion of their income is set aside in a trust, similar to protections for traditional child actors. The California Consumer Privacy Act (CCPA) also contains specific protections for minors, prohibiting the sale of personal information of children under 16 without consent.

Utah’s “Parental Control & Liability” Model

Utah has taken a different, though equally aggressive, approach. Rather than focusing on platform design, Utah’s laws emphasize parental control over access and introduce a novel liability framework for mental health harms. The state’s original 2023 laws were replaced with an updated version (SB 194 and HB 464) that took effect in October 2024, following legal challenges.

Age Assurance and Parental Consent: The law requires social media companies to implement an “age assurance system” with at least 95% accuracy to identify all Utah account holders who are minors (under 18). It then mandates that companies obtain verifiable parental consent before a minor can create or maintain an account.

Default Settings and Feature Disabling: For all identified minor accounts, the law requires strict default privacy settings, such as making accounts private and restricting direct messaging to connected accounts only. It also forces platforms to disable features designed to prolong user engagement, such as autoplay and infinite scroll.
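
The settings the law mandates amount to a default configuration applied whenever the age-assurance step flags an account as belonging to a minor. The sketch below shows one hypothetical way to represent that; the field names are illustrative and not drawn from the statute or any platform.

```python
from dataclasses import dataclass

@dataclass
class AccountSettings:
    account_private: bool
    dms_limited_to_connections: bool
    autoplay_enabled: bool
    infinite_scroll_enabled: bool

def default_settings(identified_as_minor: bool) -> AccountSettings:
    """Pick default account settings based on the age-assurance result."""
    if identified_as_minor:
        return AccountSettings(
            account_private=True,              # account private by default
            dms_limited_to_connections=True,   # DMs only from connected accounts
            autoplay_enabled=False,            # engagement-prolonging features off
            infinite_scroll_enabled=False,
        )
    return AccountSettings(
        account_private=False,
        dms_limited_to_connections=False,
        autoplay_enabled=True,
        infinite_scroll_enabled=True,
    )
```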

Parental Supervision and Curfews: Platforms must provide parents with supervisory tools, including the ability to set time limits and view their teen’s contacts. The law also establishes a statewide default social media curfew for minors, restricting access between 10:30 p.m. and 6:30 a.m. unless a parent overrides it.
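
To make the curfew provision concrete, here is a small, hypothetical helper that checks whether a given local time falls inside Utah’s default 10:30 p.m.–6:30 a.m. window. A real system would also need the user’s timezone and persisted parental-override state, which are omitted here.

```python
from datetime import time

CURFEW_START = time(22, 30)  # 10:30 p.m. local time
CURFEW_END = time(6, 30)     # 6:30 a.m. local time

def within_default_curfew(local_time: time, parent_override: bool = False) -> bool:
    """True if access should be blocked under the default minor curfew.
    The window wraps past midnight, so it is the union of two intervals."""
    if parent_override:
        return False
    return local_time >= CURFEW_START or local_time < CURFEW_END

# 11:15 p.m. is inside the curfew window; 7:00 a.m. is not.
print(within_default_curfew(time(23, 15)))  # True
print(within_default_curfew(time(7, 0)))    # False
```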

Private Right of Action (HB 464): The most groundbreaking and controversial part of Utah’s model is the creation of a private right of action. This provision allows a minor or their parent to sue a social media company for damages if the minor suffers an “adverse mental health outcome”—such as depression or anxiety—that is caused, in whole or in part, by their “excessive use” of an “algorithmically curated social media service.”

The law establishes a rebuttable presumption that the platform’s addictive design caused the harm, effectively shifting the burden of proof onto the company to prove it was not responsible. This moves beyond data privacy into a new legal territory of product liability for psychological harm, which, if upheld, could fundamentally reshape the risk calculus and business models of social media platforms.

The Growing Patchwork: Other States to Watch

The actions in California and Utah are part of a broader national trend, with numerous other states attempting to enact their own rules.

Arkansas: The state’s Social Media Safety Act (originally SB 396, later Act 689) required third-party age verification for all users and explicit parental consent for minors to create an account. However, in a key development that signals the legal hurdles these laws face, it was blocked by a federal court in 2023 and again in 2025 after being challenged by the tech industry trade group NetChoice, which argued it was an unconstitutional burden on free speech.

Florida, Texas, Louisiana, and others: A wave of similar laws has been passed or introduced in states across the country. Florida’s HB 3, for example, requires age verification and bans social media accounts for children under 14. Texas and Louisiana have also enacted laws requiring parental consent for minors to create accounts.

This proliferation of disparate state laws has created a significant compliance headache for national platforms and is a primary driver for the industry’s push for a single, preemptive federal standard.

The divergent approaches taken by the states—from California’s focus on platform design to Utah’s on parental access and liability—reveal a fundamental disagreement about where the responsibility for children’s online safety should lie.

California
  • Law(s): Age-Appropriate Design Code Act (AADC / AB 2273); Protecting Our Kids from Social Media Addiction Act (SB 976)
  • Age Scope: Under 18
  • Key Requirements: Prioritize the child’s best interests; conduct Data Protection Impact Assessments (DPIAs); high default privacy settings; parental consent for “addictive feeds”
  • Key Prohibitions: Profiling children by default; using “dark patterns”; collecting data detrimental to a child’s well-being; late-night and school-hour notifications
  • Current Legal Status: AADC blocked by preliminary injunction amid the ongoing NetChoice v. Bonta litigation; SB 976 signed into law

Utah
  • Law(s): Social Media Regulation Act (SB 194 & HB 464)
  • Age Scope: Under 18
  • Key Requirements: Age assurance system (95% accuracy); verifiable parental consent for accounts; default time-of-day curfew (10:30 p.m.–6:30 a.m.); private right of action for mental health harm
  • Key Prohibitions: Algorithmic features like autoplay and infinite scroll for minor accounts; data collection without parental consent
  • Current Legal Status: Passed, but enforcement blocked by a preliminary injunction in the ongoing NetChoice v. Reyes litigation

Arkansas
  • Law(s): Social Media Safety Act (SB 396 / Act 689)
  • Age Scope: Under 18
  • Key Requirements: Third-party age verification for all users; express parental consent for minors to create an account
  • Key Prohibitions: Minors creating accounts without parental consent; retaining ID data after verification
  • Current Legal Status: Permanently enjoined (blocked) by a federal court as unconstitutional

Florida
  • Law(s): HB 3
  • Age Scope: Under 14 (ban); 14-15 (consent)
  • Key Requirements: Age verification; parental consent for users aged 14 and 15
  • Key Prohibitions: Accounts for children under 14; anonymous account creation
  • Current Legal Status: Signed into law, effective July 1, 2025

The Next Generation of Federal Law: Congress Weighs In

As states forge ahead with their own regulations, a parallel effort is underway in the U.S. Congress to establish a national standard for protecting minors online. Several major bipartisan bills have been introduced, each proposing a different framework for tackling the issue.

This congressional activity is largely a reaction to the growing state-level patchwork and the perceived shortcomings of the existing COPPA framework for teenagers.

The Kids Online Safety Act (KOSA): A “Duty of Care”

The Kids Online Safety Act (KOSA) has emerged as one of the most prominent and heavily debated federal proposals. With strong bipartisan sponsorship, it represents a significant philosophical shift from existing law by introducing a “duty of care” for online platforms.

Core Concept: KOSA (introduced as S.1409, H.R.7891, and S.1748 across different sessions) would legally require covered platforms (including social media, online gaming, and messaging apps) to “act in the best interests” of users they know to be minors. This duty would compel them to take “reasonable measures” in the design and operation of their services to prevent and mitigate a list of specified harms, including anxiety, depression, eating disorders, substance abuse, bullying, and sexual exploitation.

This approach focuses on improving the online environment itself rather than simply controlling access to it.

Key Provisions:

Safeguards for Minors: Platforms would have to provide minors with readily accessible and easy-to-use tools. These would include options to limit who can communicate with them, restrict public access to their personal data, disable “addictive” design features like autoplay and infinite scroll, and opt out of personalized algorithmic recommendation systems in favor of a chronological feed.

Parental Tools: The bill mandates that platforms provide parents with tools to help supervise their child’s use of the service. These tools must include the ability to manage their minor’s privacy and account settings, restrict in-app purchases, and view metrics on time spent on the platform.

Transparency and Audits: KOSA would require platforms to undergo independent, third-party audits and issue annual public transparency reports detailing the foreseeable risks of harm to minors on their service and the steps they are taking to mitigate them.

Legislative Status: KOSA has garnered significant bipartisan support, particularly in the Senate, where it passed with a 91-3 vote in July 2024. However, it has faced a more difficult path in the House of Representatives, where concerns about its potential impact on free speech have slowed its progress. If passed, the law would be enforced by the Federal Trade Commission and state attorneys general.

The “Hard Gate” Bills: PKOSMA & KOSMA

A competing legislative strategy focuses less on platform design and more on controlling access through strict age gates and consent requirements. Two prominent bills exemplify this approach.

Protecting Kids on Social Media Act (PKOSMA): This bill (introduced as S.1291 and H.R.6149) proposes a direct “access control” framework. Its key provisions would:

  • Require social media platforms to verify the age of their users
  • Outright ban children under the age of 13 from creating or maintaining an account
  • Require affirmative parental consent for teenagers aged 13 to 17 to use the platform
  • Prohibit the use of algorithmic recommendation systems to serve content to any user under the age of 18

The bill also proposes that the Department of Commerce establish a voluntary pilot program for a secure digital identification credential to facilitate age verification.

Kids Off Social Media Act (KOSMA): This similar bill (introduced as S.4213 and S.278) also focuses on hard age limits and curbing algorithmic influence. It would prohibit users under 13 from having accounts, require the deletion of existing accounts belonging to children, and ban the use of personalized recommendation systems on users under the age of 17.

Uniquely, KOSMA also seeks to limit social media use in schools by tying eligibility for the federal E-Rate program (which provides discounts on telecommunications services) to schools implementing policies that block social media access on their networks.

These “hard gate” bills represent a fundamentally different philosophy from KOSA. While KOSA aims to make the digital environment safer for minors who are already there, PKOSMA and KOSMA aim to keep younger users out of that environment entirely and require parental permission for teens to enter.

The Push for COPPA 2.0

Alongside these new legislative frameworks, there is a strong bipartisan effort to simply update and expand the existing COPPA law. This proposal, known as “COPPA 2.0,” is seen by some as a more pragmatic and legally sound approach.

Core Goal: The primary objective of COPPA 2.0 is to extend the proven privacy protections of the original COPPA to cover teenagers, specifically those aged 13 through 16.

Key Provisions: The proposed update would:

  • Ban targeted advertising to children and teens
  • Create a digital “eraser button,” giving parents and teens the right to easily delete personal information collected by a platform
  • Strengthen the enforcement standard by changing COPPA’s “actual knowledge” requirement to a “constructive knowledge” standard (e.g., covering platforms that are “reasonably likely to be used” by minors), closing a loophole that allows some platforms to claim ignorance of their user demographics

Legislative Status: COPPA 2.0 has been introduced in the Senate, often alongside KOSA, and enjoys broad support. Notably, it has been endorsed by major tech companies like Google, which may see it as a more predictable and manageable form of regulation than the novel “duty of care” in KOSA or the technically daunting age gates in PKOSMA. Like KOSA, it passed the Senate in July 2024 but stalled in the House.


The existence of these three distinct legislative pushes—KOSA’s “duty of care,” the “hard gate” approach of PKOSMA/KOSMA, and the “privacy extension” of COPPA 2.0—reveals a deep philosophical divide in Congress over the best way to protect young people online.

Kids Online Safety Act (KOSA)
  • Primary Goal/Mechanism: Establish a “duty of care” for platforms to prevent and mitigate harms to minors
  • Age Scope: Under 17
  • Key Provisions: Duty of care; default high-privacy settings; tools to disable addictive features and algorithmic feeds; parental supervision tools; transparency reports
  • Current Status (as of late 2024/early 2025): Passed Senate; stalled in House; reintroduced in new sessions

Protecting Kids on Social Media Act (PKOSMA)
  • Primary Goal/Mechanism: Control access through age gates and parental consent
  • Age Scope: Under 13 (ban); 13-17 (consent)
  • Key Provisions: Mandatory age verification; ban on users under 13; parental consent for teens 13-17; ban on algorithmic recommendations for users under 18
  • Current Status (as of late 2024/early 2025): Introduced in Senate and House; referred to committee

Kids Off Social Media Act (KOSMA)
  • Primary Goal/Mechanism: Ban young users and limit algorithmic influence
  • Age Scope: Under 13 (ban); under 17 (algorithms)
  • Key Provisions: Ban on users under 13; deletion of child accounts; ban on personalized recommendation systems for users under 17; links E-Rate funding to school social media blocks
  • Current Status (as of late 2024/early 2025): Introduced in Senate; referred to committee

COPPA 2.0
  • Primary Goal/Mechanism: Extend existing data privacy protections to teenagers
  • Age Scope: Under 17
  • Key Provisions: Extends COPPA protections to teens 13-16; bans targeted advertising to minors; creates a data “eraser button”; strengthens the “knowledge” standard
  • Current Status (as of late 2024/early 2025): Passed Senate as part of a larger bill; stalled in House; endorsed by Google

The Constitutional Crossroads: Free Speech vs. State Regulation

While lawmakers at the state and federal levels are responding to public pressure to protect children, their efforts are running headlong into a formidable obstacle: the First Amendment of the U.S. Constitution. The legal challenges mounted against these new laws are not mere technicalities; they represent a fundamental clash between the government’s interest in protecting minors and the constitutional rights to free expression and access to information.

Opponents of these new regulations, primarily civil liberties groups and tech industry associations, base their legal challenges on several core First Amendment principles.

Social Media as Protected Speech: The foundational argument is that activity on social media platforms—posting text, sharing images, watching videos—is a form of speech. As such, it is overwhelmingly protected by the First Amendment. These laws are therefore not merely regulating commercial conduct; they are regulating expression in what has become the modern public square.

The Strict Scrutiny Standard: Because many of these state laws are “content-based”—meaning they regulate online services based on the type of content they host or the potential harm that content might cause to minors—they are subject to the highest level of judicial review, known as “strict scrutiny”. To survive this test, the government must prove that its law is “narrowly tailored” to achieve a “compelling state interest” and that it is the least restrictive means of doing so. This is an exceptionally difficult standard to meet.

The “Chilling Effect” on Speech: A major concern is that vague or overly broad laws will have a “chilling effect” on constitutionally protected speech for everyone, not just minors. For example, a law that holds a platform liable for content that is “materially detrimental” to a child’s well-being, without a clear definition of that term, could lead the platform to proactively censor a wide range of lawful content for all users to avoid the risk of litigation.

This would effectively reduce the online world for adults to only what is deemed fit for children, a concept the Supreme Court has repeatedly rejected.

Minors’ Own First Amendment Rights: It is a critical, though often overlooked, point that minors themselves possess significant First Amendment rights to speak, receive information, and explore ideas. While these rights are not absolute, particularly within the school setting, courts have consistently affirmed that the government cannot enact blanket bans that prevent minors from accessing protected materials.

This is especially true for marginalized youth, such as LGBTQ+ teens, who may rely on online platforms to find community and support that is unavailable to them offline.

The primary vehicle for these constitutional challenges has been a series of lawsuits filed by NetChoice, a technology industry trade association whose members include Meta, Google, TikTok, and Amazon. These cases have become the central battleground for defining the legal limits of social media regulation.

NetChoice v. Bonta (California): NetChoice filed suit to block California’s Age-Appropriate Design Code Act (AADC). In September 2023, a federal district court granted a preliminary injunction, halting the law from taking effect on the grounds that it likely violated the First Amendment by compelling and restricting speech. In August 2024, the Ninth Circuit Court of Appeals largely agreed, upholding the injunction against the law’s most speech-restrictive provisions and sending the case back to the lower court for further review. In March 2025, the district court again granted a preliminary injunction against the entire law, reaffirming its constitutional concerns.

NetChoice v. Reyes (Utah): NetChoice also challenged Utah’s Social Media Regulation Act. In September 2024, a federal judge granted a preliminary injunction, preventing the law from being enforced. The judge’s order stated that the law was likely an unconstitutional content-based restriction on speech and that the state had failed to demonstrate a compelling interest that would justify such a broad intrusion on First Amendment rights.

A Pattern of Success: This pattern has been repeated across the country. NetChoice has successfully obtained court injunctions blocking similar laws in Arkansas, Ohio, and Texas, among others. The consistency of these rulings demonstrates that, as currently interpreted by federal courts, the First Amendment poses a formidable barrier to state laws that impose broad age-gating and content-based restrictions on social media.

The Age Verification Dilemma: The Technical and Privacy Lynchpin

At the heart of many of these laws—and the legal challenges against them—is the thorny issue of age verification. The requirement that platforms reliably determine the age of their users is a lynchpin for enforcement, but it presents immense technical and privacy-related problems.

The Technical Challenge: There is currently no easy, reliable, and privacy-preserving method to verify a user’s age online at the scale required by social media platforms.

Self-Declaration: The simplest method, having users self-report their birthday, is notoriously unreliable and easily circumvented by children.

Facial Age Estimation: Platforms such as Meta’s Instagram are experimenting with AI-powered tools that estimate a user’s age from a video selfie. However, these systems are not perfectly accurate, with studies showing average error rates of one to three years. This could result in a 14-year-old being granted access while a 17-year-old is locked out.
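
The practical consequence of a one-to-three-year error band is easy to see in a short sketch: once an uncertainty buffer is applied around a legal threshold, an estimator cannot confidently classify users whose estimated age falls near the line, so they must be let through, blocked, or pushed to a more invasive verification step. The threshold, margin, and function names below are illustrative only.

```python
def classify_by_estimated_age(estimated_age: float, threshold: int = 18,
                              error_margin: float = 2.0) -> str:
    """Classify a user from a facial age estimate with a +/- error_margin buffer.
    Anyone inside the uncertainty band cannot be classified reliably and would
    need a fallback such as ID verification (itself privacy-invasive)."""
    if estimated_age - error_margin >= threshold:
        return "treat_as_adult"
    if estimated_age + error_margin < threshold:
        return "treat_as_minor"
    return "uncertain_needs_fallback"

# A 17-year-old estimated at 19 and a 19-year-old estimated at 17 both land in
# the uncertainty band -- the misclassification risk the studies describe.
print(classify_by_estimated_age(19.0))  # uncertain_needs_fallback
print(classify_by_estimated_age(17.0))  # uncertain_needs_fallback
print(classify_by_estimated_age(21.0))  # treat_as_adult
```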

Government ID Verification: Requiring users to upload a copy of a driver’s license or other government-issued ID is perhaps the most accurate method, but it is also the most invasive. It would force platforms to collect highly sensitive data from all users, creating a massive “honeypot” for data breaches and cybercriminals. It would also effectively eliminate the possibility of anonymous or pseudonymous speech, which is a critical aspect of online expression for many, including activists and whistleblowers.

The Privacy Paradox: This leads to a central paradox: in an effort to protect children’s privacy and safety, these laws often require platforms to collect more sensitive data from all users. This trade-off is a primary objection for civil liberties groups like the ACLU and the Electronic Frontier Foundation (EFF), who argue that the cure may be worse than the disease.

The “Unconstitutional Catch-22”: The legal challenges have exposed a Catch-22 for lawmakers. If a law mandates a specific, intrusive age verification method like ID scans, it is likely to be struck down for chilling adult speech and violating privacy. If, however, a law is vague on the method, it can be struck down for vagueness and for creating a situation where the only practical way for a company to protect itself from liability is to implement intrusive verification, which again chills speech.

This technical and legal impasse is a key reason why so many “hard gate” laws are failing in court.

The ongoing court battles suggest an emerging legal distinction. Laws that focus on controlling access to content (like Utah’s parental consent mandate) are facing the strongest First Amendment headwinds. In contrast, laws that focus on regulating the underlying design and data practices of a platform (like parts of California’s AADC or the federal KOSA) may have a better, though still challenging, path to surviving judicial review, as they can be framed as regulating harmful business practices rather than speech itself.

The Great Debate: Voices on All Sides

The flurry of legislative and legal activity is a reflection of a deep and passionate societal debate over the role of social media in the lives of young people. This debate is not just about legal technicalities; it is about fundamentally different views on risk, freedom, and responsibility in the digital age.

The Case for Stricter Regulation: Protecting a Vulnerable Generation

Proponents of stronger government regulation—a coalition that includes many parents, pediatricians, child safety advocates, and bipartisan lawmakers—argue that intervention is urgently needed to address a public health crisis.

The Youth Mental Health Crisis: A primary driver of the regulatory push is the widespread belief that social media is a significant contributor to rising rates of anxiety, depression, eating disorders, and suicidal ideation among teenagers. Advocates point to the unique vulnerabilities of the adolescent brain, where the regions associated with peer feedback and social reward are highly sensitive, while the prefrontal cortex, responsible for self-control and long-term thinking, is not yet fully mature.

Platforms “Addictive by Design”: A core tenet of this argument is that social media platforms are not neutral public squares. They are sophisticated, profit-driven systems engineered to maximize user engagement. Features like infinite scroll, autoplaying videos, notifications, and personalized algorithmic recommendations are seen as “addictive by design,” created to hook users and keep them on the platform for as long as possible, often to the detriment of their well-being.

An Unfair Fight for Parents: From this perspective, it is unrealistic and unfair to place the entire burden of protection on parents. No matter how diligent, a parent is outmatched by a multi-billion-dollar corporation with teams of engineers and psychologists designing these persuasive systems. As Senator Brian Schatz argued, “Few parents, no matter how diligent or tech savvy, can confidently answer ‘yes'” to whether they understand how these apps might harm their children.


Risks of Exploitation and Harm: Beyond mental health, advocates highlight the risks of cyberbullying, contact with online predators, and exposure to content that promotes dangerous behaviors like self-harm or substance abuse. There is also a growing concern around the economic exploitation of children in “family vlogs,” where they are effectively child performers without the legal and financial protections afforded to traditional child actors.

The Case for Civil Liberties: Defending Free Speech and Privacy

In opposition stand civil liberties organizations like the American Civil Liberties Union (ACLU) and the Electronic Frontier Foundation (EFF), as well as free speech groups like the Foundation for Individual Rights and Expression (FIRE). They argue that while the goals of protecting children are laudable, the methods proposed in many of these laws are unconstitutional and counterproductive.

Censorship and Viewpoint Discrimination: The central argument is that many of these bills, particularly KOSA, are “internet censorship bills in disguise”. Vague mandates requiring platforms to mitigate “harm” or prevent content that could lead to “anxiety” or “depression” are seen as invitations for censorship.

They fear that state attorneys general with particular political agendas could use these laws to pressure platforms into removing constitutionally protected speech on controversial topics like gender identity, racial justice, or reproductive rights, under the guise of protecting children.

Harming Marginalized Youth: A critical point raised by these groups is that such laws could disproportionately harm the very young people they claim to protect. LGBTQ+ youth, for example, often rely on online communities to find support, access health information, and connect with peers in ways that may not be safe or possible for them offline.

Laws that require parental consent or that lead to the censorship of “sensitive” topics could cut these young people off from life-saving resources.

Constitutional Overreach: These groups argue that the government cannot usurp the role of parents in deciding what their children can see and read. While the state has an interest in protecting children, that interest does not justify broad, sweeping bans that infringe on the First Amendment rights of both minors and adults. They contend that these laws use an “ax” where a “scalpel” is required, failing the “narrowly tailored” test of strict scrutiny.

The Threat of a Surveillance State: The age verification requirements embedded in many of these laws are seen as a grave threat to privacy and anonymity online. Forcing every user to verify their identity with a government ID or biometric scan would create an unprecedented system of digital surveillance and chill anonymous speech, which is vital for whistleblowers, activists, and ordinary citizens discussing sensitive topics.

Beyond Bans: The Role of Education and Empowerment

A third perspective in this debate argues that the focus on legislative bans and restrictions is misplaced. Proponents of this view contend that such measures are often ineffective, as teens are adept at circumventing them, and can even be counterproductive by driving usage underground and stifling the development of crucial digital literacy skills.

Instead, this approach emphasizes:

Digital Literacy Education: Investing in robust educational programs that teach young people how to critically evaluate the information they see online, understand how algorithms work, recognize manipulative design, and navigate online social dynamics safely.

Parental Guidance and Co-engagement: Empowering parents with better, easier-to-use tools and encouraging them to engage directly with their children’s digital lives, rather than simply imposing restrictions. This involves open communication and helping teens develop their own skills for self-regulation.

This complex debate is ultimately a clash of deeply held values. One side prioritizes the state’s duty to protect children from perceived harms, while the other prioritizes the constitutional principles of free expression and individual autonomy, fearing that the tools of protection will inevitably become tools of censorship.

The fact that both sides frequently invoke the concept of “parental rights” to support their opposing positions illustrates the politically charged and malleable nature of the conversation.

The Industry Response: Platform Tools and Public Stances

Caught between intense public pressure, a chaotic legislative environment, and the threat of massive liability, major technology companies are engaged in a sophisticated, multi-front response. This involves rolling out a host of new safety features to demonstrate self-regulation while simultaneously lobbying and litigating to shape the regulatory landscape in their favor.

Inside the Apps: A Look at Platform Safety Features

In response to the growing scrutiny, platforms have significantly expanded their suites of safety tools and parental controls.

Meta (Facebook, Instagram, Quest):

Teen Accounts and Default Settings: Meta has implemented “Teen Accounts” for users under 16 (or 18 in some regions). These accounts automatically default to more private settings, such as preventing unknown adults from sending them messages, hiding their friend/follower lists, and restricting who can tag or mention them.

Parental Supervision: Through the “Family Center,” parents can link their account to their teen’s to monitor time spent, schedule breaks, see their teen’s contacts, and receive notifications if their teen reports someone. For the Meta Quest VR platform, parents can approve app downloads and set usage limits for parent-managed accounts, which are now available for pre-teens aged 10-12.

Well-being and Content Controls: Features like “Take a Break” and “Quiet Mode” nudge teens to manage their time. Tools like “Hidden Words” allow users to filter out offensive content from their direct message requests. In a recent update, Meta now requires parental permission for teens under 16 to go Live on Instagram or to disable a feature that blurs images containing suspected nudity in DMs.

TikTok:

Strict Age-Gated Features: TikTok’s approach heavily relies on hard-wired, age-based restrictions. Accounts for users aged 13-15 are set to private by default, their videos are ineligible for recommendation in the “For You” feed, and direct messaging is disabled entirely. Features like going LIVE and sending or receiving virtual gifts are restricted to users 18 and older.

Family Pairing: This is TikTok’s robust parental supervision suite. It allows a parent to link their account to their teen’s to set daily screen time limits (customizable by day of the week), filter the “For You” feed using keywords, control who can comment on their teen’s videos, and view their teen’s follower and following lists.

Digital Well-being Nudges: By default, all accounts for users under 18 have a 60-minute daily screen time limit, after which a passcode is needed to continue watching. The platform also has a “wind down” feature that interrupts the feed with calming music for teens using the app late at night and disables push notifications for teens during nighttime hours.
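
A simplified sketch of how a default daily limit with a passcode override can work is shown below. The 60-minute default mirrors TikTok’s publicly described behavior for under-18 accounts; everything else (names, state handling) is hypothetical.

```python
from dataclasses import dataclass

DEFAULT_DAILY_LIMIT_MINUTES = 60  # publicly described default for under-18 accounts

@dataclass
class ScreenTime:
    minutes_watched_today: int = 0
    passcode_entered: bool = False

def can_keep_watching(state: ScreenTime, is_minor: bool) -> bool:
    """Once a minor hits the daily limit, a passcode is required to continue."""
    if not is_minor:
        return True
    if state.minutes_watched_today < DEFAULT_DAILY_LIMIT_MINUTES:
        return True
    return state.passcode_entered

# Example: 75 minutes watched -> blocked until the passcode is entered.
state = ScreenTime(minutes_watched_today=75)
print(can_keep_watching(state, is_minor=True))   # False
state.passcode_entered = True
print(can_keep_watching(state, is_minor=True))   # True
```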

YouTube (Google):

YouTube Kids: For its youngest audience (generally under 13), Google offers a completely separate, walled-garden app called YouTube Kids. It contains a much smaller, filtered library of content. Parents can select from three content levels (Preschool, Younger, Older) or, for maximum control, use the “Approved Content Only” mode, which allows a child to watch only the specific videos, channels, and playlists that a parent has hand-picked.

Supervised Experiences: For parents who feel their pre-teen or teen is ready to move to the main YouTube platform, Google offers “Supervised Experiences.” This allows a parent to create a supervised Google account for their child, which grants them access to the main YouTube app but with significant restrictions. Features like creating content, commenting, and live chat are disabled, and parents can choose a content filter level (“Explore,” “Explore more,” or “Most of YouTube”) and manage their child’s watch and search history via the Family Link app.

Teen Well-being Features: For all teen users, YouTube has turned on “Take a Break” and bedtime reminders by default, disabled autoplay by default, and implemented systems to limit recommendations of videos that, while not violating policies, could be problematic if viewed repetitively (e.g., content focused on social aggression or body comparison).

This proliferation of complex controls, while offering more options, also risks creating a “safety gap.” Tech-savvy and highly engaged parents may be able to navigate these settings, but those with lower digital literacy may be unaware of or overwhelmed by them, potentially leaving their children less protected.

Feature comparison across Meta (Facebook/Instagram), TikTok, and YouTube (Google):

Default Private Accounts
  • Meta: Yes, for new users under 16
  • TikTok: Yes, for new users under 16
  • YouTube: N/A (uses separate supervised experiences)

Parental Consent for Settings Changes
  • Meta: Yes, for teens under 16 to make settings less strict
  • TikTok: Yes, via Family Pairing for many settings
  • YouTube: Yes, parent controls all settings in YouTube Kids and Supervised Accounts

Screen Time Limits
  • Meta: Yes, via Parental Supervision (daily limits, scheduled breaks)
  • TikTok: Yes, via Family Pairing (customizable daily limits) and a 60-minute default for all users under 18
  • YouTube: Yes, via Family Link for Supervised Accounts and a timer in YouTube Kids

Content Filtering/Controls
  • Meta: Content controls with “Standard,” “Less,” and “More” settings; “Hidden Words” for DMs
  • TikTok: “Restricted Mode” to filter mature themes; keyword filtering for the For You feed via Family Pairing
  • YouTube: Three content levels in Supervised Accounts (“Explore,” “Explore More,” “Most of YouTube”); “Approved Content Only” in YouTube Kids

Direct Message Restrictions
  • Meta: Adults cannot message teens who don’t follow them; text-only message requests required
  • TikTok: Disabled for users under 16; restricted for users 16-17
  • YouTube: Disabled in all supervised experiences

Parental Supervision Dashboard
  • Meta: Yes (“Family Center”)
  • TikTok: Yes (“Family Pairing”)
  • YouTube: Yes (“Family Link”)

Algorithmic Feed Controls
  • Meta: Can choose a chronological feed; can hide topics
  • TikTok: STEM feed available; keyword filtering via Family Pairing
  • YouTube: Recommendations filtered based on content level; autoplay off by default for teens

Corporate Policy and Lobbying: Shaping the Debate

While platforms publicly tout their safety features, they are simultaneously engaged in a high-stakes effort to influence the laws that will govern them. This is a two-pronged strategy of proactive self-regulation combined with aggressive lobbying and litigation.

The Push for an App-Store-Level Standard: The industry’s preferred solution is a uniform federal law that places the primary responsibility for age verification and parental consent on the major app stores, namely Apple’s App Store and Google’s Play Store. Meta, in particular, has been a vocal proponent of this approach.

From their perspective, this would create a consistent, industry-wide standard, simplify compliance, and leverage the existing parental approval systems that app stores already have for purchases. This strategy, however, is also a competitive one. Google has publicly pushed back, arguing that Meta’s proposal is an attempt to “offload their own responsibilities” and would create new privacy risks by forcing app stores to share age information with all app developers, whether they need it or not.

This reveals a clear schism between the major tech players on what the “right” federal solution should be.

Official Stances on Legislation:

Meta: Publicly supports the idea of federal legislation to avoid the state-by-state patchwork but has not fully endorsed KOSA, instead promoting its app-store-centric solution. This has led to accusations from lawmakers that the company is trying to deflect responsibility from its own platform design.

Google: Has taken a different tack, publicly endorsing COPPA 2.0. This move aligns the company with a more straightforward privacy-focused bill and positions it as a responsible actor in contrast to its rivals.

TikTok: Has offered qualified support for some provisions in KOSA but not others, while highlighting its own internal safety features and partnerships as evidence of its commitment to youth safety.

Lobbying and Litigation: The most direct way the industry is shaping regulation is through its trade association, NetChoice. While individual companies maintain public postures of cooperation, NetChoice is waging a highly successful legal campaign to block state laws in court on First Amendment grounds.

This legal strategy runs in parallel with direct lobbying efforts in Washington. For example, in the first nine months of 2024, Meta alone spent nearly $19 million on lobbying, with some of that effort directed at opposing KOSA. This dual strategy allows platforms to appear responsive to public concerns through product updates while working behind the scenes to defeat or shape regulations they view as an existential threat to their business models.

Economic and Technological Realities

Beyond the legal and social dimensions of the debate, there are significant economic and technological challenges that shape what solutions are actually feasible. These practical considerations often receive less attention but can determine whether well-intentioned policies actually succeed.

The Cost of Compliance

Age verification and parental consent systems are expensive to implement and maintain. For large platforms like Meta or TikTok, the cost may be manageable, but for smaller platforms and emerging companies, these requirements could create prohibitive barriers to entry.

This dynamic could inadvertently consolidate the social media market around a few major players who can afford compliance costs, potentially stifling innovation and competition. Critics argue this could actually make the online environment less safe by reducing the diversity of platforms and giving existing giants even more market power.

Technical Limitations

Current age verification technologies are imperfect. Traditional methods like credit card verification or government ID checks can be circumvented by determined teenagers and create privacy risks for all users. More sophisticated biometric approaches raise even greater privacy concerns and may not be reliable enough for widespread use.

The fundamental challenge is that any system robust enough to reliably verify age is also invasive enough to raise serious privacy and civil liberties concerns. This technical paradox underlies many of the constitutional challenges to proposed regulations.

Global Coordination Challenges

Social media platforms operate globally, but regulations are enacted locally. A platform might face different age verification requirements in Utah, Texas, the European Union, and Australia, creating a complex compliance puzzle.

Some platforms may choose to implement the most restrictive standards globally rather than maintain separate systems for different jurisdictions. Others might withdraw from certain markets entirely if compliance costs become too high. These business decisions can have far-reaching effects on what services are available to young people in different locations.

The debate over social media regulation for minors reflects broader tensions in American society about technology, childhood, privacy, and the role of government. As this issue continues to evolve, it will likely require ongoing dialogue among parents, young people, policymakers, technologists, and civil liberties advocates to find solutions that effectively protect young people while preserving important rights and freedoms.
