How 20 States Are Now Regulating Deepfakes—and What It Means for Elections

GovFacts


Forty-seven states have passed some form of deepfake legislation since 2019, addressing various contexts from non-consensual intimate imagery to fraud. Only Alaska, Missouri, and Ohio have no deepfake laws at all.

The technology works now. Free apps let anyone swap faces in videos. Even trained observers struggle to spot sophisticated deepfakes without specialized software.

A fake released three days before voting doesn’t need to fool everyone forever. It needs to fool enough people long enough. By the time fact-checkers debunk it, ballots are already cast.

Twenty States, Twenty Different Approaches

The twenty states now regulating synthetic media in campaigns aren’t doing it the same way.

Most states with political deepfake laws require disclosure. If you create or share AI-generated content about candidates, you have to label it as artificial. Something like: “This video has been manipulated by artificial intelligence and depicts speech or conduct that did not occur.”

The timeframes vary too. Thirty days before an election in one state; sixty, ninety, or one hundred twenty days in others. A few states regulate synthetic media year-round, with heightened scrutiny during campaign season.

Then there’s the question of who can be prosecuted. Creators, obviously. Distributors, usually. But what about platforms that host the content? What about people who share it without knowing it’s fake? What about journalists who show synthetic media while reporting on it? Most states include exceptions for news reporting, satire, and parody. But those exceptions are defined differently across states, and the boundaries are fuzzy.

Content that’s legal to distribute in one state might be a felony to distribute in the next. A disclosure that satisfies one state’s requirements might not meet another’s. Campaigns operating nationally have to manage twenty different sets of rules, with different penalties, different timeframes, and different definitions of what counts as synthetic media in the first place.
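To make that compliance burden concrete, here is a minimal Python sketch of the kind of rule table a national campaign’s compliance team might maintain. The state names, window lengths, and disclosure flags are hypothetical placeholders for illustration, not summaries of any actual statute.

```python
from datetime import date, timedelta

# Purely illustrative rule lookup. The window lengths and disclosure flags
# below are hypothetical placeholders, NOT summaries of any actual statute.
STATE_RULES = {
    "State A": {"window_days": 30,   "disclosure_required": True},
    "State B": {"window_days": 90,   "disclosure_required": True},
    "State C": {"window_days": 120,  "disclosure_required": True},
    "State D": {"window_days": None, "disclosure_required": True},  # year-round
}

def rules_triggered(post_date: date, election_date: date) -> list[str]:
    """Return the hypothetical states whose regulated window covers post_date."""
    triggered = []
    for state, rule in STATE_RULES.items():
        window = rule["window_days"]
        in_window = (window is None or
                     election_date - timedelta(days=window) <= post_date <= election_date)
        if in_window:
            triggered.append(state)
    return triggered

# A post 45 days before the election trips the 90-day, 120-day, and
# year-round rules, but not the 30-day one.
print(rules_triggered(date(2026, 9, 19), date(2026, 11, 3)))
# -> ['State B', 'State C', 'State D']
```

Even this toy version shows the problem: the same post is regulated in some states and unregulated in others, and the answer changes week by week as the election approaches.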

California Tried to Ban Them. A Judge Said No.

California went further than most states. AB 2839, the Defending Democracy from Deepfake Deception Act of 2024, prohibited distributing “election content that seriously misleads voters” created or modified by AI. The law let anyone who saw the content file a lawsuit. It defined harm broadly—anything “reasonably likely” to hurt a candidate’s prospects. And it required that even satire and parody include specific disclaimers identifying themselves as satire.

A federal judge struck down the law as unconstitutional.

Political speech gets the highest level of First Amendment protection. Laws restricting it face strict scrutiny: the government must show a compelling interest and pursue it through the least restrictive means available. Protecting the integrity of campaigns plausibly qualifies as compelling. But is banning AI-generated content the least restrictive means? Lies and exaggerations have always been part of politics. Selectively edited videos have been misleading people for decades. What makes AI-generated content different enough to justify prohibition?

Disclosure laws are more likely to hold up in court. Requiring labels on synthetic content is less restrictive than banning it outright. But even disclosure requirements raise questions. If selectively edited videos made with traditional tools require no disclosure, why require one only for AI-generated content? Is the distinction about protecting people, or is it about regulating a specific technology?

Courts will draw lines over the next few years. Some state laws will survive. Others won’t. The strictest prohibition laws face the toughest scrutiny. Disclosure laws have better odds but aren’t guaranteed to stand either.

Enforcement Is Where Theory Meets Reality

Laws on the books matter little if nobody enforces them. Enforcement of these laws faces serious practical obstacles.

Most state attorneys general handle many different areas of law—consumer protection, civil rights, antitrust, environmental enforcement. A single complaint during a campaign might not rise to the top of the priority list. Limited staff and competing priorities mean enforcement depends on whether violations are serious enough to demand attention.

Synthetic media created in one state gets distributed through platforms accessible everywhere. IP addresses can be masked through VPNs. Content gets shared and reshared until source information disappears. Even after identifying a perpetrator, prosecuting across state lines requires coordination that state prosecutors aren’t equipped to provide. Foreign actors creating fakes to manipulate American campaigns fall outside state jurisdiction entirely.

A fake released three days before voting can’t realistically be prosecuted before ballots are cast. Courts can theoretically order content removed faster, but obtaining a court order within hours requires judges available and willing to issue emergency orders based on minimal briefing. Some states have established expedited procedures. Most haven’t.

Synthetic media released in the final days before an election is unlikely to face consequences before its impact is felt. The New Hampshire robocall perpetrator did get charged, but by then the damage was done.

Some state laws impose removal obligations for campaign-related fakes. But platforms struggle with implementation. Identifying synthetic media at scale is technically difficult. Verifying reports takes time. Removing content raises free speech concerns. Even when platforms commit to compliance, the practical difficulties of detecting and removing fakes within mandated timeframes remain formidable.

By the time content gets removed, it’s often been downloaded, reuploaded elsewhere, and seen by millions. Platform liability creates incentives for removal. But accountability only works if violations result in enforcement actions and damages.

The Satire Problem Nobody’s Solved

Most laws include exceptions for satire and parody. They have to—the First Amendment protects satirical speech, and courts have developed extensive doctrine about where those boundaries lie. But applying satire exceptions to AI-generated content in practice is harder than it sounds.

What counts as satire? A video showing a candidate saying something absurd might be obvious parody to some people and convincing fakery to others. Satire often mixes serious critique with humor. The Onion publishes satire. So does Saturday Night Live. But what about a random Twitter account posting AI-generated videos of candidates? Is that satire or disinformation?

California’s law tried to solve this by requiring satire to include specific disclaimers explicitly identifying itself as satire. The federal judge who struck down the law found that requirement unconstitutional—requiring satire to announce itself defeats the purpose. But without such requirements, how do you distinguish satire from deception?

Several states define satire as content that’s “clearly” or “obviously” satirical. But clearly to whom? Obvious by what standard? Media literacy varies widely. Content that seems obviously fake to one person might fool another. And intent matters: someone might create content as satire that people interpret literally and share as if it’s real.

Early enforcement actions will define these boundaries through litigation. During 2026 campaigns, the status of borderline satirical content remains ambiguous. Prosecutors and platforms will have to make judgment calls without clear guidance. Some satirical content will probably get removed or prosecuted. Some deceptive content will probably escape enforcement by claiming satirical intent.

Detection Technology Isn’t Good Enough Yet

The entire regulatory framework assumes that people, platforms, and officials can identify synthetic media with reasonable accuracy. That assumption is shaky.

Universal detectors now achieve 98.3 percent accuracy in laboratory conditions, analyzing tiny visual details and patterns that reveal AI generation. That’s an improvement over the 80 to 95 percent accuracy typical of tools a few years ago.

But the remaining 1.7 percent error rate matters. With millions of videos uploaded daily, even that small miss rate means thousands of fakes slip through undetected. False positives create the opposite problem: authentic videos wrongly flagged as AI-generated might get removed, suppressing legitimate speech.
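To see how even a small error rate bites at platform scale, here is a back-of-the-envelope calculation in Python. Only the 98.3 percent accuracy figure comes from the research described above; the upload volume and the share of uploads that are synthetic are illustrative assumptions, and the outputs scale with them.

```python
# Back-of-the-envelope arithmetic for detector errors at platform scale.
# The 1.7% error rate mirrors the 98.3% laboratory accuracy cited above;
# the upload volume and fake prevalence are assumptions, not measurements.

daily_uploads = 20_000_000   # assumed uploads per day on a large platform
fake_share = 0.01            # assumed: 1 in 100 uploads is synthetic
error_rate = 0.017           # misses 1.7% of fakes; flags 1.7% of real videos

fakes = daily_uploads * fake_share      # 200,000 synthetic uploads
reals = daily_uploads - fakes           # 19,800,000 authentic uploads

missed_fakes = fakes * error_rate       # false negatives: fakes that slip through
flagged_reals = reals * error_rate      # false positives: real videos flagged
caught_fakes = fakes - missed_fakes     # fakes correctly flagged

precision = caught_fakes / (caught_fakes + flagged_reals)
print(f"Fakes slipping through per day:             {missed_fakes:,.0f}")
print(f"Authentic videos wrongly flagged per day:   {flagged_reals:,.0f}")
print(f"Share of flagged videos actually synthetic: {precision:.0%}")
```

Under these assumptions, roughly 3,400 fakes get through each day while about 337,000 authentic videos are wrongly flagged, so most flagged videos are real. Lower the assumed prevalence and the imbalance grows: at one fake per thousand uploads, only about one flagged video in twenty would actually be synthetic.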

Those high accuracy rates assume optimal conditions: full-resolution videos, careful examination by trained analysts, access to specialized software. In real-world conditions, fakes circulate in compressed formats across multiple platforms, often analyzed only by human viewers without specialized training. Technology for identifying them isn’t universally deployed. Many platforms implement detection tools inconsistently, and smaller platforms remain largely unregulated.

A 2022 study found that human accuracy rates, typically 50 to 65 percent, were comparable to or worse than leading computer vision models. Many fakes are poorly made and easily spotted. But as generation technology has improved, the visual artifacts that previously revealed fakes have largely disappeared.

Advice encouraging people to look for telltale signs may mislead them, creating false confidence in abilities that have become unreliable as technology has advanced.

If people can’t reliably distinguish authentic from synthetic content even when told something is fake, then labeling provides less protection than disclosure advocates assume. Research showing people believe information they see repeatedly, even when warned it’s false, suggests that a fake labeled as artificial that circulates widely may still influence behavior despite the label.

The Federal Government Is Watching, Sort Of

States have moved faster than the federal government on campaign-related synthetic media. But federal action is starting to happen, in ways that might help or might complicate the state-level patchwork.

The TAKE IT DOWN Act, signed by President Trump on May 19, 2025, addresses non-consensual intimate images—both authentic and synthetic. It criminalizes online publication of intimate visual depictions without consent, with mandatory restitution and criminal penalties up to two years in prison. Covered platforms must establish notification processes and remove reported content within forty-eight hours or face liability.

That’s significant for sexually explicit fakes. But it doesn’t directly address campaign-related ones. The Protect Elections from Deceptive AI Act would criminalize seriously misleading AI-generated media about federal candidates. The NO FAKES Act would criminalize unauthorized AI-generated copies of someone’s voice or likeness, with exceptions for parody and commentary. Neither has passed yet.

An executive order issued by President Trump directs federal agencies to evaluate state AI laws for potential conflicts with federal law and First Amendment protections. The order establishes a task force to identify problematic state laws, potentially setting up legal challenges to them. The Commerce Secretary is directed to condition certain federal funding on states avoiding “onerous” AI laws. The FCC is directed to consider federal standards that would override conflicting state rules.

If federal standards emerge, they might replace state laws—potentially eliminating some protections while providing uniform national rules. If federal standards fail to materialize, the patchwork persists. Either way, there’s tension between state action and federal policy that hasn’t been resolved.

The Bigger Threat: When Nothing Seems Real Anymore

Regulations try to address a specific problem—synthetic media designed to deceive people. But experts increasingly worry about a broader threat: the general erosion of trust in all visual and audio evidence.

Once people know convincing fakes are possible, bad actors can claim that authentic footage is fake. A video of a candidate making embarrassing statements can be dismissed as “probably AI-generated” even when it’s real. Scholars call this the “liar’s dividend”: the ability to deny reality by labeling it AI-generated.

Even when people are explicitly told content is authentic, awareness of AI technology makes them less likely to believe it. During the Israel-Hamas war in 2023, the “mere possibility that A.I. content could be circulating” led people to dismiss genuine images and videos as inauthentic. That dynamic will repeat in American campaigns, where the stakes of being deceived are high and people are motivated to believe information that fits their views.

If people reflexively distrust all video and audio evidence, both authentic and synthetic media lose power as information sources. That outcome harms election integrity by preventing people from accessing true information about candidates and issues. Regulation can’t solve this problem. You can’t legislate trust back into existence.

Addressing this requires sustained public education about information evaluation, media literacy initiatives, and collective commitment to truth-telling by candidates, campaigns, and news organizations. Public trust in institutions—particularly officials and journalists—plays a protective role. Citizens who trust officials’ assurances that reported fakes are fabricated are less likely to spread disinformation. Citizens who trust particular news sources are more likely to accept those sources’ factual findings.

Trust in institutions has eroded significantly in recent years. That erosion undermines these protective mechanisms. Laws can’t work if people don’t trust institutions that enforce them. You can make creating fakes a crime. You can’t make people believe their own eyes again.

How These Laws Will Affect 2026 Elections

Twenty states now have laws on the books regulating campaign-related synthetic media. Seven of those laws took effect this month. Creating or sharing campaign-related fakes now carries risk in most states. That risk varies—misdemeanor in some places, felony in others—but it exists. For campaigns and operatives, that creates deterrence. For platforms, it creates motivation to develop systems for identification and removal. For victims of fakes, it provides remedies that didn’t previously exist.

The patchwork creates confusion. Content allowed in one state might be criminal in another. Disclosure requirements vary. Timeframes differ. Campaigns operating nationally have to manage multiple sets of rules. Platforms hosting content accessible everywhere have to decide which state’s law applies. People in different states get different levels of protection.

Enforcement will be uneven. Resource constraints limit prosecution. Jurisdictional issues complicate interstate cases. Timing problems mean pre-campaign fakes often escape consequences until after voting. The most serious violations—sophisticated fakes created by well-funded actors—might get investigated. Smaller-scale violations probably won’t.

Courts will strike down some of these laws; others will survive with narrowed scope. As with the California litigation, outright prohibitions will draw the toughest scrutiny, while disclosure mandates have better odds without being guaranteed to stand. Expect several years of line-drawing.

Generation tools will get better. Tools for identification will try to keep pace. Whether identification can maintain parity with generation remains uncertain. If generation capabilities significantly outpace identification, regulatory schemes depending on spotting synthetic media will become less effective.

For individual voters, regulation now exists and varies by state, providing some protection but no guarantees. The responsibility for detecting and rejecting fakes can’t rest entirely on individuals, whose ability to distinguish authentic from synthetic content is limited. Platform moderation needs to improve. Officials need to be equipped to respond to incidents. Campaigns need to resist the temptation to use synthetic media for deception.

The regulatory framework now in place represents a necessary first step. Protecting democratic processes from synthetic media threats will require sustained commitment from people, platforms, government, and institutions across society. Laws help. They’re not sufficient by themselves.

The Patchwork Will Keep Evolving

The regulatory framework will continue changing as courts address constitutional questions, states refine their approaches, and technology keeps advancing.

The California decision striking down AB 2839 provides a template. Other states will likely see litigation too. As courts engage with regulation, they’ll explain what the Constitution allows and forbids. Those decisions will probably push states toward disclosure-based approaches rather than outright prohibitions, since disclosure raises fewer First Amendment concerns. But even disclosure laws might face challenges.

Whether Congress passes a comprehensive law addressing fakes or a series of narrower statutes targeting specific harms remains to be seen. The executive order approach suggesting federal preemption could shift things considerably if agencies follow through. Federal law might override state protections while replacing them with uniform national standards. Or it might simply prevent states from advancing regulations deemed unconstitutional or economically harmful.

Generative AI systems are improving at remarkable speed. Technology for identification is also improving, and new technical standards for content authentication are being developed through private sector initiatives. Whether identification can maintain pace with generation remains an open question.

Different platforms will likely settle on different approaches. Some may implement aggressive systems for identification and removal. Others may take a lighter touch. Platform-specific differences will further complicate things.

Celebrity fakes, non-consensual sexual imagery, and impersonation used in fraud schemes all create harms similar to campaign-related ones. Some states have enacted separate legislation addressing these categories. A framework addressing synthetic media across all contexts remains absent. As specific incidents highlight harms outside the sphere of campaigns, pressure will mount for broader regulation.

The twenty states now regulating campaign-related synthetic media have built something. It’s imperfect, inconsistent, and incomplete. But it’s more than existed two years ago. Whether it’s enough to protect election integrity in an age of sophisticated synthetic media will emerge through the 2026 campaigns and beyond, as these laws face their first real tests.

