The internet that parents knew as teenagers no longer exists. What began as a frontier governed by self-regulation has become a battleground where states are rapidly imposing new rules about what children can see and do online. The result is a patchwork of conflicting laws that vary dramatically depending on which side of a state line you live on.
This is a reactive scramble by lawmakers who believe there’s a crisis in youth mental health and safety, fueled by a lack of updated federal guidance. The surge in state legislation represents a fundamental shift in how America thinks about the internet and children’s place on it.
At the center of this regulatory upheaval sits a basic tension: the government’s interest in protecting children clashes with constitutional guarantees of free speech and privacy. States are passing laws faster than courts can review them, creating legal chaos that affects everyone from tech giants to individual users.
The legal framework governing children’s online experiences has remained largely unchanged for a quarter-century. Two federal laws—the Children’s Online Privacy Protection Act and Section 230 of the Communications Decency Act—created the foundation, but their narrow scope left massive gaps. COPPA focuses only on data privacy for children under 13. Section 230 provides broad liability protection for platforms. Together, they left the issue of content access for teenagers almost entirely unaddressed at the federal level.
That vacuum explains why regulatory focus has shifted so dramatically toward access control, as lawmakers attempt to legislate in spaces these foundational laws left behind.
Federal Foundation: The Rules
COPPA: Protecting Data, Not Content
The Children’s Online Privacy Protection Act, enacted by Congress in 1998, stands as the primary federal law governing young children’s online privacy. Its central goal isn’t to police content but to place parents in control over what personal information is collected from their children online. The law applies specifically to children under age 13.
The law is implemented through the COPPA Rule, enforced by the Federal Trade Commission. The Rule’s requirements apply to operators of commercial websites and online services (including mobile apps) that are either directed to children under 13 or have “actual knowledge” that they’re collecting personal information from this age group.
COPPA’s core provisions mandate that covered operators must:
Post Clear Privacy Policies: Websites must provide clear, comprehensive, and understandable notice of their information-collection practices regarding children.
Obtain Verifiable Parental Consent: Before collecting, using, or disclosing a child’s personal information, operators must obtain “verifiable parental consent.” This involves making reasonable effort, taking into account available technology, to ensure a child’s parent authorizes the data collection.
Provide Parental Rights: Parents must be given the right to review personal information collected from their child, request its deletion, and refuse to permit its further collection or use.
The definition of “personal information” under COPPA is broad. It includes a child’s name, address, and email, but also modern identifiers like photos, videos, audio files containing a child’s voice, geolocation data, and persistent identifiers like cookies that can track users across different sites. The rule was amended in 2013 to strengthen these protections and account for the rise of mobile apps and new online technologies.
The FTC and State Attorneys General enforce COPPA. The FTC has brought numerous high-profile enforcement actions, levying significant fines against major tech companies. TikTok (formerly Musical.ly) paid a $5.7 million penalty, and Epic Games, creator of Fortnite, agreed to a $275 million penalty for COPPA violations.
Despite its importance, COPPA’s limitations are critical to understanding the current regulatory landscape. The act’s protections cease once a child turns 13, leaving teenagers in a regulatory gray area. Most importantly, COPPA is fundamentally a data privacy law, not a content moderation law. It governs what information can be collected from a child, but it wasn’t designed to prevent a child from viewing or accessing any particular type of content, including pornography.
Section 230: The Internet’s Shield
Section 230 of the Communications Decency Act of 1996 is arguably the most consequential law governing the modern internet, often called “the twenty-six words that created the internet.” It provides broad legal immunity to providers and users of “interactive computer services”—a term that includes everything from social media giants and search engines to small blogs and review sites—from laws that might otherwise hold them legally responsible for content created by others.
Section 230’s original intent was twofold. First, it was designed to resolve the “moderator’s dilemma” that emerged from early internet case law. Courts had ruled that platforms doing no content moderation weren’t liable for user content, but platforms that attempted to moderate their sites could be held liable as a “publisher” for anything they missed. This created a perverse incentive for platforms to do nothing to remove harmful content. Section 230 solved this by encouraging “Good Samaritan” blocking and filtering of offensive material.
Second, it was intended to promote free development of the internet, unfettered by government regulation, by ensuring that the person who creates speech is the one held responsible, not the service that hosts it.
The law’s core provision is found in subsection (c)(1), which states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Courts have interpreted this provision expansively. In the seminal case Zeran v. America Online, Inc., a federal appeals court held that Section 230(c)(1) bars lawsuits that seek to hold a service provider liable for its exercise of traditional editorial functions, such as deciding whether to publish, withdraw, or alter content. This broad immunity protects platforms from a wide variety of claims based on third-party content, including defamation, negligence, and other torts.
This immunity isn’t absolute. Section 230(e) contains several key exceptions, stating that the shield doesn’t apply to federal criminal law, laws pertaining to intellectual property (like copyright and trademark), or the Electronic Communications Privacy Act. In 2018, Congress amended the law with FOSTA-SESTA, adding an exception for civil and criminal actions related to online sex trafficking.
However, for purposes of accessing online pornography, Section 230’s immunity is paramount. It generally shields websites that host user-generated adult content, as well as social media platforms where such content might be shared, from being held liable for its presence. This legal reality has forced regulators and lawmakers concerned about minors’ access to pornography to shift their focus away from holding platforms liable for hosting content and toward imposing direct access controls on users, such as age verification.
New Federal Push: KOSA and COPPA 2.0
In response to perceived shortcomings of the existing federal framework and growing public concern over social media’s impact on youth mental health, Congress has seen a renewed push for comprehensive new regulations. The two most prominent pieces of legislation are the Kids Online Safety Act (KOSA) and the Children and Teens’ Online Privacy Protection Act (COPPA 2.0).
These bills represent two distinct, though sometimes overlapping, approaches to updating federal law. The protracted and intense debate surrounding KOSA reveals a fundamental ideological split in how to regulate the modern internet. One side believes the very architecture and design of online platforms are the source of harm and must be regulated. The other fears that any such regulation is a Trojan horse for government-led content censorship that would violate the First Amendment.
This deep philosophical divide, more than simple partisan politics, explains why a bill with broad bipartisan support has become a multi-year legislative saga.
Kids Online Safety Act: A New “Duty of Care”
The Kids Online Safety Act represents a significant departure from previous federal approaches. Instead of focusing on data privacy or liability shields, KOSA seeks to impose a proactive “duty of care” on covered online platforms. This would legally require platforms to act in the best interests of minors by taking “reasonable measures in the design and operation” of their products to prevent and mitigate a list of specified harms.
These enumerated harms include medically recognized mental health disorders such as anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors, as well as child sexual exploitation, online bullying, and the promotion of illicit products like tobacco and alcohol. The bill’s focus is on design features that platforms use to increase engagement, such as infinite scrolling, autoplay functions, personalized algorithmic recommendations, and push notifications.
In addition to the duty of care, KOSA includes several other key provisions:
Safeguards for Minors: Platforms would be required to provide minors with easy-to-use tools to protect their information, disable addictive features, and opt out of algorithmic recommendations. The strongest privacy settings would be enabled by default.
Parental Tools: The bill would give parents new controls to help support their children, including the ability to restrict purchases, view time-spent metrics, and manage their child’s privacy settings.
Transparency and Research: KOSA mandates independent audits and public reports on the risks to minors and requires the National Academies of Sciences to study the impact of social media on youth well-being.
KOSA has followed a tumultuous legislative path. It has garnered significant bipartisan support, passing the Senate Commerce Committee and the full Senate with overwhelming votes in different sessions, and has been endorsed by President Biden. However, it has repeatedly stalled before becoming law due to intense debate and opposition.
The debate around KOSA is fierce and multifaceted:
Proponents, including a broad coalition of over 250 organizations, parents’ groups, and medical associations like the American Academy of Pediatrics and the American Psychological Association, argue that the bill is a necessary step to hold Big Tech accountable for designing products that are harmful to children. They contend that platforms have a responsibility to prevent foreseeable harms caused by their design choices, just like companies in any other industry. The bill has also gained endorsements from some tech companies, including Microsoft, Snap, and X (formerly Twitter).
Opponents, led by civil liberties groups like the American Civil Liberties Union and digital rights organizations like the Electronic Frontier Foundation, argue that KOSA is a dangerous censorship bill in disguise. They claim that the “duty of care” provision, despite its focus on “design,” would inevitably force platforms to over-filter and censor vast amounts of lawful speech to avoid liability. They express particular concern that the law could be used by politically motivated state attorneys general to target content related to LGBTQ+ issues, reproductive health, or racial justice, under the pretext that such topics are “harmful” to minors. These groups argue that KOSA would ultimately harm the very young people it purports to protect by cutting them off from vital online resources and communities.
COPPA 2.0: Extending Protections to Teenagers
A more straightforward legislative proposal is the Children and Teens’ Online Privacy Protection Act, commonly known as COPPA 2.0. Rather than creating a new regulatory framework like KOSA, COPPA 2.0 aims to modernize and expand the existing Children’s Online Privacy Protection Act.
The core provisions of COPPA 2.0 would:
Extend Age Protections: The bill would raise the age of protection for data privacy from children under 13 to include teenagers aged 13 to 16, requiring online services to obtain their consent before collecting their personal information.
Ban Targeted Advertising: It would prohibit internet companies from delivering targeted advertising to children and teens.
Create an “Eraser Button”: The bill would establish a right for users to demand the deletion of personal information collected from a child or teen, creating a so-called “Eraser Button.”
Strengthen the “Actual Knowledge” Standard: It would revise COPPA’s “actual knowledge” standard to close a loophole that allows platforms to effectively ignore the presence of minors on their sites.
COPPA 2.0 has often been advanced in tandem with KOSA, passing the Senate Commerce Committee in July 2023. While it’s less controversial than KOSA, it’s part of the same broad legislative effort to update federal online safety laws for the modern era.
States Take the Lead: Age Verification Laws
In the absence of comprehensive new federal legislation, individual states have aggressively stepped into the regulatory void, creating a complex and often conflicting “patchwork” of laws aimed at protecting minors online. This state-level activity has generally fallen into two distinct categories: laws that mandate age verification for access to pornographic websites, and broader regulations that impose restrictions on how minors can use social media platforms.
This flurry of state action has triggered a wave of legal challenges, primarily from tech industry groups arguing that these laws violate the First Amendment.
Pornographic Website Mandates
Beginning in 2022, a growing number of states have enacted laws that require commercial websites containing a “substantial portion” of material “harmful to minors” to verify the age of their users. The threshold for what constitutes a “substantial portion” is often defined as one-third of the site’s content.
Louisiana Leads the Way
Louisiana became the first state to pass such a law, with Act 440 taking effect on January 1, 2023. The law established a model that many other states would follow. It requires any commercial entity that publishes or distributes a website on which more than 33.3% of the content is “harmful to minors” to use “reasonable age verification methods” to ensure users are 18 or older.
The law specifies acceptable verification methods, including checking a digitized identification card or using a commercial system that relies on government-issued ID or public/private transactional data. Enforcement is primarily through a private right of action, making a non-compliant website “liable to an individual for damages resulting from a minor’s accessing the material.” Subsequent legislation gave the Louisiana Attorney General the power to investigate non-compliant sites and impose fines of up to $5,000 per day.
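As a rough illustration of how that coverage trigger works, the sketch below applies the one-third test in Python. How a site would actually count its “material” is not specified by these statutes, so the item counts here are purely hypothetical.

```python
from fractions import Fraction

# Hypothetical illustration of the "substantial portion" trigger used by
# Louisiana Act 440 and similar laws: more than one-third of a site's
# material "harmful to minors" brings the site within the mandate.
COVERAGE_THRESHOLD = Fraction(1, 3)

def requires_age_verification(harmful_items: int, total_items: int) -> bool:
    """Return True if the share of harmful material exceeds one-third."""
    if total_items == 0:
        return False
    return Fraction(harmful_items, total_items) > COVERAGE_THRESHOLD

# 400 of 1,000 items harmful (40%) -> covered; 300 of 1,000 (30%) -> not.
print(requires_age_verification(400, 1000))  # True
print(requires_age_verification(300, 1000))  # False
```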
Texas Adds Health Warnings
Texas followed with House Bill 1181, a law similar in structure to Louisiana’s but with a unique and highly controversial provision. In addition to requiring age verification for sites where more than one-third of the content is “sexual material harmful to minors,” HB 1181 mandated that these websites display specific health warnings on their landing pages in 14-point font or larger.
The required warnings state, for example, that “Pornography is potentially biologically addictive, is proven to harm human brain development,” and “increases the demand for prostitution, child exploitation, and child pornography.”
This law was immediately challenged in court but became the subject of a landmark Supreme Court decision in June 2025, Free Speech Coalition v. Paxton, which upheld the age verification requirement and fundamentally altered the legal landscape for these types of laws across the country.
Social Media Regulations for Minors
A second, more recent trend in state legislation involves imposing broader regulations on mainstream social media platforms. These laws go beyond pornography sites and aim to control how minors interact with services like TikTok, Instagram, and Facebook.
Utah’s Aggressive Approach
Utah has been at the forefront of this movement. In 2023, the state passed the Utah Social Media Regulation Act (comprising SB 152 and HB 311), which was one of the most aggressive attempts to regulate minors’ social media use in the country. The original law, set to take effect in March 2024, included sweeping provisions that would have required social media companies to:
- Obtain express parental consent for any Utah resident under 18 to open or maintain an account
- Verify the age of all new and existing Utah account holders
- Impose a default nighttime “curfew,” blocking minors’ access between 10:30 p.m. and 6:30 a.m. unless overridden by a parent
- Give parents full access to their child’s account, including all posts and messages
- Create a private right of action allowing parents to sue platforms for harms caused by “addiction” to the platform, with a rebuttable presumption of harm for minors under 16
Facing an immediate lawsuit from the tech industry trade group NetChoice, which argued the law was unconstitutional, the Utah legislature took the unusual step of repealing and replacing the entire act during its 2024 session. The new laws (SB 194 and HB 464) softened some of the original provisions but maintained a core focus on age assurance, default privacy settings for minors, and liability for adverse mental health outcomes caused by “excessive use” of algorithmically curated services.
Despite these changes, the new laws continue to face legal challenges on First Amendment grounds.
Legal Challenges Everywhere
The experience in Utah isn’t unique. NetChoice has launched a coordinated legal campaign against these state laws across the country, achieving significant success in blocking them from taking effect. Federal courts have granted preliminary injunctions against similar social media laws in Arkansas, California, Ohio, Florida, and Mississippi, with judges consistently ruling that the laws likely violate the First Amendment by restricting access to protected speech for both minors and adults.
In a major victory for the group, a federal court issued a permanent injunction against the Arkansas law in March 2025, declaring it unconstitutional. These legal battles underscore the profound constitutional questions raised by the states’ attempts to regulate the digital sphere.
State-by-State Breakdown
The following table provides a snapshot of the rapidly evolving landscape of state laws mandating age verification and regulating social media for minors. The specifics of each law and its legal status are subject to frequent change.
State | Law(s) (Bill Number) | Type | Key Requirements | Current Legal Status (as of late 2025)
---|---|---|---|---
Alabama | HB 164 | Pornography Site AV | Requires reasonable age verification for sites with substantial harmful material | Enacted, effective Oct. 1, 2024 |
Arizona | HB 2112 | Pornography Site AV | Requires reasonable age verification for sites with at least one-third pornographic content | Enacted |
Arkansas | SB 66 | Pornography Site AV | Requires reasonable age verification for sites with substantial harmful material | Enacted, effective July 31, 2023 |
Arkansas | SB 396 (Social Media Safety Act) | Social Media Regulation | Required age verification and parental consent for users under 18 | Permanently Enjoined by federal court as unconstitutional |
California | AB 2273 (CAADCA) | Social Media Regulation | Requires services likely used by children to estimate user age and configure default high-privacy settings | Partially Enjoined by federal court; Data Protection Impact Assessment requirement blocked |
Florida | HB 3 | Social Media Regulation | Bans social media accounts for children under 14 and requires parental consent for 14- and 15-year-olds | Preliminarily Enjoined by federal court |
Georgia | SB 351 | Pornography Site AV | Requires age verification for sites with substantial harmful material | Enacted, effective July 1, 2025 |
Idaho | H 498 | Pornography Site AV | Requires reasonable age verification for sites with at least one-third pornographic content | Enacted, effective July 1, 2024 |
Indiana | SB 17 | Pornography Site AV | Requires reasonable age verification for sites with substantial harmful material | Enacted, effective Aug. 16, 2024 |
Kansas | SB 394 | Pornography Site AV | Requires reasonable age verification for sites with substantial harmful material | Enacted, effective July 1, 2024 |
Kentucky | HB 278 | Pornography Site AV | Requires reasonable age verification for sites with substantial harmful material | Enacted, effective July 15, 2024 |
Louisiana | Act 440 (2022) | Pornography Site AV | Requires age verification using government ID or transactional data for sites with over one-third harmful content | Enacted, effective Jan. 1, 2023. Legal challenge dismissed on procedural grounds, appeal likely |
Louisiana | SB 162 (Secure Online Child Interaction and Age Limitation Act) | Social Media Regulation | Requires parental consent for users under 16 | Enacted, effective July 1, 2024. Challenged in federal court by NetChoice |
Mississippi | SB 2346 | Pornography Site AV | Requires reasonable age verification for sites with substantial harmful material | Enacted, effective July 1, 2023. Social media portion preliminarily enjoined |
Missouri | 15 CSR 60-18 | Pornography Site AV | Requires reasonable age verification | Enacted, effective May 7, 2025 |
Montana | SB 544 | Pornography Site AV | Requires reasonable age verification for sites with substantial harmful material | Enacted, effective Jan. 1, 2024 |
Nebraska | LB 1092 | Pornography Site AV | Requires reasonable age verification | Enacted, effective July 19, 2024 |
North Carolina | HB 8 | Pornography Site AV | Requires reasonable age verification for sites with substantial harmful material | Enacted, effective Jan. 1, 2024 |
Ohio | N/A (Parental Notification by Online Operators Act) | Social Media Regulation | Required parental consent for users under 16 | Permanently Enjoined by federal court as unconstitutional |
Oklahoma | SB 1959 | Pornography Site AV | Requires reasonable age verification | Enacted, effective Nov. 1, 2024 |
South Carolina | HB 3424 | Pornography Site AV | Requires reasonable age verification | Enacted, effective Jan. 1, 2025 |
South Dakota | HB 1053 | Pornography Site AV | Requires reasonable age verification | Enacted, effective July 1, 2025 |
Tennessee | SB 1792 | Pornography Site AV | Requires reasonable age verification for sites with substantial harmful material | Enacted, effective Jan. 1, 2025 |
Texas | HB 1181 | Pornography Site AV | Requires age verification for sites with over one-third harmful content; includes mandatory health warnings | Enacted. Age verification requirement upheld by U.S. Supreme Court. Health warning requirement struck down by 5th Circuit |
Utah | SB 152 / HB 311 (2023) | Social Media Regulation | Required parental consent, nighttime curfews, etc. | Repealed and replaced in 2024 |
Utah | SB 194 / HB 464 (2024) | Social Media Regulation | Requires age assurance, default privacy settings, creates liability for mental health harms | Enacted, effective Oct. 1, 2024. Challenged in federal court |
Virginia | SB 1515 | Pornography Site AV | Requires age and identity verification for sites with substantial harmful material | Enacted, effective July 1, 2023 |
Wyoming | HB 43 | Pornography Site AV | Requires reasonable age verification for sites with at least one-third pornographic content | Enacted, effective July 1, 2025 |
How Age Verification Actually Works
The proliferation of state laws mandating age checks has thrust the underlying technology of age verification into the spotlight. These systems, often referred to under the broader umbrella of “age assurance,” are the technical means by which online platforms attempt to enforce age-based access rules.
However, the methods vary widely in their sophistication, effectiveness, and intrusiveness. This has created a fundamental dilemma: the most effective and reliable methods of verification are also the most invasive of user privacy, while the most privacy-preserving methods are often the easiest to circumvent.
This technological trade-off poses a significant challenge for lawmakers, platforms, and users alike.
Methods of Age Verification
Online age verification isn’t a single technology but a spectrum of approaches, each with distinct mechanisms and levels of certainty.
Self-Declaration (Age Gates): This is the most basic form of age check, where a user is simply asked to enter their date of birth or check a box confirming they’re over a certain age. It relies entirely on the honor system and is widely considered ineffective for preventing access by determined minors, as it’s trivial to enter a false birthdate.
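To show just how little a self-declared age gate actually checks, here is a minimal Python sketch; the cutoff age and the way the birthdate is collected are assumptions, and the “verification” consists entirely of trusting what the user typed.

```python
from datetime import date

def age_in_years(birthdate: date, today: date | None = None) -> int:
    """Compute age in whole years from a user-supplied birthdate."""
    today = today or date.today()
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return today.year - birthdate.year - (0 if had_birthday else 1)

def passes_age_gate(claimed_birthdate: date, minimum_age: int = 18) -> bool:
    """Self-declaration: the site trusts whatever date the user enters."""
    return age_in_years(claimed_birthdate) >= minimum_age

# A minor who types a 1990 birthdate sails through unchallenged.
print(passes_age_gate(date(1990, 1, 1)))  # True
```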
Document-Based Verification: This is a much more robust method where users are required to upload an image of a government-issued identification document, such as a driver’s license or passport. Technology like Optical Character Recognition (OCR) is used to extract the date of birth from the document. To prevent fraud (e.g., using a photo of someone else’s ID), this process is often combined with a “liveness check,” which requires the user to take a real-time selfie or short video that can be compared to the photo on the ID.
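A simplified version of that document-plus-liveness flow might look like the sketch below. The two helper functions are hypothetical placeholders for commercial OCR and face-matching services, not real library calls.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VerificationResult:
    over_18: bool
    reason: str

def read_birthdate_from_id(id_image: bytes) -> date | None:
    """Hypothetical OCR step that extracts the date of birth from a
    government-issued ID; a real deployment would call a vendor service."""
    raise NotImplementedError

def selfie_matches_id_photo(id_image: bytes, selfie_video: bytes) -> bool:
    """Hypothetical liveness check comparing a live selfie to the ID photo."""
    raise NotImplementedError

def verify_with_document(id_image: bytes, selfie_video: bytes) -> VerificationResult:
    """Document-based verification: read the birthdate, confirm the person
    presenting the ID is its owner, then apply the age cutoff."""
    birthdate = read_birthdate_from_id(id_image)
    if birthdate is None:
        return VerificationResult(False, "date of birth could not be read")
    if not selfie_matches_id_photo(id_image, selfie_video):
        return VerificationResult(False, "liveness check failed")
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return VerificationResult(age >= 18, f"document shows age {age}")
```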
Data-Based Verification: This method leverages third-party databases to confirm a user’s age. This can involve checking a user’s name, address, and date of birth against records held by credit reference agencies or other private sector databases. Another approach involves checking with a user’s mobile network operator to see if parental controls or adult content blocks are active on their account, which can serve as a proxy for age.
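A data-based check can be sketched as a lookup rather than a document scan; the record store and field names below are invented for illustration, standing in for a credit-reference agency or mobile-carrier query.

```python
from datetime import date

# Invented stand-in for a third-party records database (e.g., a credit
# reference agency); real systems query a vendor API rather than a dict.
KNOWN_RECORDS = {
    ("Jane Doe", "123 Main St"): date(1980, 5, 17),
}

def record_match_confirms_age(name: str, address: str, claimed_dob: date) -> bool:
    """Confirm the claimed birthdate against an independently held record,
    without ever handling an ID document."""
    known_dob = KNOWN_RECORDS.get((name, address))
    return known_dob is not None and known_dob == claimed_dob

def carrier_adult_block_active(account_id: str) -> bool:
    """Hypothetical proxy signal: if the mobile account still has a
    child-safety content block enabled, treat the user as a likely minor."""
    raise NotImplementedError

print(record_match_confirms_age("Jane Doe", "123 Main St", date(1980, 5, 17)))  # True
```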
Biometric Age Estimation: This emerging technology uses artificial intelligence and machine learning to estimate a user’s age based on their physical characteristics. The most common form is facial age estimation, where an AI analyzes the geometry of a user’s face from a selfie to place them within an age range. It’s important to distinguish this from facial recognition, which matches a face to a specific identity; age estimation simply provides a probability of age. While popular, its accuracy can be inconsistent, particularly for people of color, transgender individuals, and others outside of the AI’s training data norms.
Privacy-Preserving Technologies (Zero-Knowledge Proofs): In development are advanced cryptographic methods like Zero-Knowledge Proofs (ZKPs). A ZKP would theoretically allow a trusted third party (like a government agency or bank) to issue a cryptographic “proof” that a user is over 18 without revealing the user’s actual identity, birthdate, or any other personal information to the website they’re visiting. This offers a potential path to strong verification with minimal privacy invasion, but the technology isn’t yet widely implemented or standardized.
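Because production-grade ZKP tooling is still emerging, the sketch below illustrates only the data-minimization goal: a trusted issuer hands the user a token asserting nothing but “over 18,” and the website verifies the token without learning a name or birthdate. The shared-secret HMAC here is a deliberate simplification, not actual zero-knowledge cryptography; a real system would use public-key signatures or genuine ZKP circuits, and every name in the sketch is hypothetical.

```python
import hashlib
import hmac
import json
import secrets
import time

# Simplification: issuer and website share a secret so the example runs
# end to end. Real deployments would use asymmetric signatures or ZKPs so
# the website never holds the issuer's signing key.
ISSUER_KEY = secrets.token_bytes(32)

def issue_over_18_token(user_is_over_18: bool) -> str | None:
    """Issuer side: after checking its own records, emit a token carrying
    only the boolean claim and an expiry, nothing identifying."""
    if not user_is_over_18:
        return None
    claim = json.dumps({"over_18": True, "expires": int(time.time()) + 3600},
                       sort_keys=True)
    tag = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + tag

def website_accepts_token(token: str) -> bool:
    """Relying website: check the issuer's tag and the expiry; it learns
    only that some trusted issuer vouched the visitor is over 18."""
    claim, _, tag = token.rpartition(".")
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    data = json.loads(claim)
    return data.get("over_18") is True and data["expires"] > time.time()

token = issue_over_18_token(True)
print(website_accepts_token(token))  # True
```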
Effectiveness and Challenges
While these technologies offer the promise of control, their real-world application is fraught with challenges related to effectiveness, the potential for circumvention, and profound risks to privacy and security.
Effectiveness and Accuracy: No current system is foolproof. Studies and expert analyses have concluded that there’s currently no single solution that provides perfectly reliable verification while fully respecting user privacy. Research on the sale of e-cigarettes online, for example, found that sites relying on simple age gates had a youth purchase success rate of over 93%. Even more advanced methods like facial age estimation aren’t perfectly accurate and can be set with “buffers” (e.g., classifying anyone estimated under 25 as a minor) to reduce false positives, which can inadvertently block access for eligible young adults.
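The “buffer” idea mentioned above reduces to a single comparison once an age estimate exists; the estimator below is a placeholder, and the 25-year cutoff simply mirrors the example in the text.

```python
def estimate_age_from_selfie(selfie: bytes) -> float:
    """Placeholder for a facial age-estimation model; a real system would
    run a trained computer-vision model and return an estimate in years."""
    raise NotImplementedError

def clears_estimation_check(selfie: bytes, legal_age: int = 18,
                            buffer_years: int = 7) -> bool:
    """Anyone estimated below legal_age + buffer (25 here) is treated as a
    possible minor and routed to a stronger check. The buffer cuts the risk
    of minors slipping through but also misclassifies some young adults."""
    return estimate_age_from_selfie(selfie) >= legal_age + buffer_years
```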
Circumvention: Determined users have numerous ways to bypass age verification systems. These include using Virtual Private Networks (VPNs) to appear as if they’re accessing a site from a country without such laws, using a parent’s or another adult’s identification, or purchasing or trading verified adult accounts online. Furthermore, the response of some major adult content providers, like Pornhub, has been to completely block access to their services in states with these laws. This action doesn’t eliminate demand but rather pushes users toward thousands of other non-compliant sites that may have far fewer safety and security measures in place.
Privacy and Security Risks: This is perhaps the most significant concern raised by opponents of age verification mandates. Requiring users to submit sensitive personal data—such as scans of government IDs, financial information, or biometric data like a facial scan—creates enormous, centralized databases of highly personal information. These databases become prime targets for hackers and data breaches. A breach could expose not only a user’s identity but also their association with visiting a pornographic or other sensitive website, leading to risks of blackmail, identity theft, stalking, and other forms of exploitation. Even without a breach, there are concerns about “function creep,” where data collected for age verification could be repurposed for commercial surveillance, advertising, or government monitoring.
Constitutional Battles in the Courts
The rapid enactment of state-level age verification laws has ignited a fierce constitutional battle, pitting the government’s stated interest in protecting children against the First Amendment’s guarantees of free speech and the implicit right to privacy. This conflict is being fought primarily in federal courts, where tech industry groups and civil liberties organizations have mounted robust legal opposition.
The landscape of this battle was fundamentally reshaped in 2025 by a landmark Supreme Court decision, which has provided a constitutional roadmap for states wishing to regulate access to adult content, while simultaneously narrowing the avenues for legal challenges.
First Amendment vs. Child Protection
The central legal argument against age verification laws is that they violate the First Amendment. Opponents contend that these laws impose an unconstitutional “prior restraint” on speech by requiring adults to overcome significant hurdles before they can access lawful content. The process of having to provide a government ID or other sensitive personal information creates a “chilling effect,” deterring adults from exercising their right to view constitutionally protected material for fear of their privacy being compromised or their viewing habits being tracked.
Furthermore, these laws are challenged on the grounds that they’re overly broad. While the state has a recognized interest in protecting young children from obscene material, the Supreme Court has previously ruled that this doesn’t give the government the power to “reduce the adult population… to reading only what is fit for children” (Butler v. Michigan, 1957). Critics argue that by setting a single age gate at 18, these laws unconstitutionally block older minors (e.g., 16- and 17-year-olds) from accessing a wide range of content that isn’t legally obscene for them, such as information about sexual health, reproductive rights, or LGBTQ+ identity and resources.
This legal debate often hinges on the level of judicial review, or “scrutiny,” a court applies to the law. Historically, laws that regulate speech based on its content are subject to “strict scrutiny,” the highest legal standard. To survive, the government must prove the law is “narrowly tailored” to achieve a “compelling government interest” and is the “least restrictive means” of doing so. This is an extremely difficult bar to clear. Proponents of age verification laws have argued for a lower standard, such as “intermediate scrutiny” or “rational basis review,” which give the government much more deference.
Key Legal Players
The legal fight against these laws has been spearheaded by a coalition of tech industry and civil rights groups.
NetChoice: This technology trade association, whose members include Meta, Google, X, and Amazon, has been the most active litigant, filing lawsuits to block age verification and social media regulation laws in numerous states, including Utah, Arkansas, Ohio, California, Georgia, and Louisiana. NetChoice’s arguments consistently focus on the First Amendment, asserting that these laws unconstitutionally restrict access to lawful speech and force users to surrender their right to anonymity. They also raise arguments under the Commerce Clause, contending that a patchwork of state laws places an undue burden on interstate commerce.
ACLU and EFF: The American Civil Liberties Union and the Electronic Frontier Foundation have been powerful voices in opposition, often filing amicus briefs in support of NetChoice’s lawsuits. Their arguments tend to focus more on the impact on individual users, emphasizing the chilling effect on speech, the right to access information anonymously, and the specific dangers these laws pose to marginalized communities, particularly LGBTQ+ youth who rely on the internet for support and information that may be censored under vague “harmful to minors” standards.
The Supreme Court Changes Everything
In June 2025, the U.S. Supreme Court issued a landmark 6-3 decision in Free Speech Coalition v. Paxton, upholding the core age verification requirement of Texas’s HB 1181. This ruling dramatically altered the constitutional landscape for these laws.
Writing for the majority, Justice Clarence Thomas made several key determinations that provide a constitutional blueprint for states:
Intermediate Scrutiny Applies: The Court rejected the argument that the law should be subject to strict scrutiny. Instead, it ruled that because the law’s purpose is to regulate access to speech that is obscene from the perspective of minors (which is unprotected speech for that audience), any effect on adults’ access to protected speech is merely “incidental.” Therefore, the law only needed to survive the lower bar of intermediate scrutiny.
No Right to Avoid Age Verification: The majority opinion stated unequivocally that “adults have no First Amendment right to avoid age verification” when seeking to access content that is lawfully restricted for minors. The Court framed age verification as a traditional and necessary tool for enforcing age-based lines, similar to requirements for purchasing alcohol or firearms.
Privacy Concerns Minimized: The Court gave short shrift to the privacy arguments, acknowledging that verification may require submitting personal information but concluding that these risks weren’t sufficient to render the law unconstitutional.
The dissenting justices, led by Justice Elena Kagan, argued forcefully that the majority had departed from precedent. They contended that the law is a content-based restriction that directly burdens adult speech and should have been subjected to strict scrutiny. The dissent argued that the state should be required to prove that there are no less restrictive ways to protect children without so significantly impeding the rights of adults.
The Paxton decision serves as a green light for states to enact and enforce age verification laws for websites with sexually explicit content. It significantly weakens the First Amendment arguments that had been the primary basis for successful legal challenges in lower courts. While these laws can still be challenged on other grounds—such as being unconstitutionally vague, violating the Commerce Clause, or having flawed enforcement mechanisms—the core question of whether a state can mandate age verification for adult content has, for now, been answered in the affirmative.