As generative AI affects every sector of the economy, from healthcare diagnostics to creative industries, the legal frameworks governing these technologies are in a state of flux.
A powerful coalition of federal legislators, executive branch officials, and multinational technology corporations has coalesced around a strategy of “federal preemption,” seeking to strip state governments of their ability to enact or enforce AI safety laws.
This movement, often framed through the rhetoric of maintaining American innovation and preventing a “patchwork” of conflicting rules, also faces fierce opposition.
An array of critics, including state attorneys general, civil rights organizations, open-source advocates, and academic institutions, argues that this push for federal supremacy is a calculated maneuver to create a “regulatory vacuum.” They argue that by preempting aggressive state laws in favor of a theoretically uniform but practically nonexistent or weak federal standard, the proponents of preemption are effectively immunizing the AI industry from oversight.
The Federal Push
The campaign to override state authority is not a monolithic effort but a synchronized pincer movement involving the legislative branch, the executive office, and the private sector. The shared objective is the establishment of a singular national regulatory environment, or, as President Donald Trump has termed it, a “One Rule” standard.
H.R. 5388
In September 2025, the philosophical preference for federal control crystallized into specific legislation with the introduction of H.R. 5388, the “American Artificial Intelligence Leadership and Uniformity Act,” by Representative Michael Baumgartner (R-WA). This bill represents the most direct legislative assault on state police powers regarding technology governance in the 119th Congress.
The text of H.R. 5388 is explicit in its preemptive intent. Section 6 of the Act establishes a “temporary moratorium preempting certain State laws that restrict artificial intelligence models and systems engaged in interstate commerce.” The bill’s architects argue that the digital nature of AI renders state borders obsolete. An AI model trained in a data center in Virginia, fine-tuned by engineers in California, and deployed to users in New York cannot, they argue, be subject to fifty divergent regulatory regimes without causing a systemic failure of the American tech sector.
The legislation carves out a broad zone of immunity. It prohibits states from enforcing laws that impose “substantive design, performance, data-handling, documentation, or civil-liability requirements” on AI models. This phrasing is carefully calibrated to nullify the specific types of regulations emerging from state capitals. For instance, California’s attempts to mandate safety testing for “frontier models” (SB 1047) and Colorado’s requirements for “algorithmic impact assessments” (SB 205) would fall squarely under the prohibitions of H.R. 5388.
The rationale provided by the bill’s sponsors is rooted in the “immaturity” of current governance standards. The text cites the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework as a developing standard, arguing that “a uniform national approach during this build-out supports responsible adoption.” By framing the moratorium as “temporary” and necessary during a “build-out” phase, proponents aim to provide the industry with a window of unrestricted growth, free from the friction of state-level compliance.
Senator Ted Cruz
While H.R. 5388 advances in the House, the Senate’s push for preemption has been anchored by Senator Ted Cruz (R-TX), the influential Chairman of the Senate Commerce Committee. Throughout 2025, Senator Cruz has championed a “light-touch” regulatory philosophy, positing that heavy-handed regulation constitutes a national security risk by potentially ceding AI dominance to China.
Senator Cruz’s legislative strategy has involved attempting to attach preemption riders to “must-pass” legislative vehicles. In July 2025, during negotiations for the budget reconciliation package, colloquially known as the “One Big Beautiful Bill,” Cruz sought to insert a provision that would have imposed a ten-year moratorium on state AI laws. This maneuver was designed to bypass the traditional committee process and enact preemption as a fiscal imperative. Although this specific provision was stripped from the final bill following a decisive 99-1 vote, the attempt signaled the intensity of the desire to centralize control.
Undeterred, Cruz and House GOP leaders targeted the National Defense Authorization Act (NDAA) for Fiscal Year 2026 in late 2025. By framing AI preemption as a matter of defense policy, arguing that state regulations could impede the military’s ability to partner with commercial AI vendors, they sought to “backdoor” the moratorium into law. Cruz’s overarching argument is that the U.S. is in a “race” with the Chinese Communist Party for technological superiority. In this worldview, California’s safety regulations are not merely local ordinances but geopolitical liabilities that act as “speed bumps” for American companies while Chinese firms race ahead.
Trump’s “One Rule”
Following his inauguration in 2025, President Donald Trump moved aggressively to operationalize preemption through the executive branch, bypassing the legislative gridlock. His administration’s approach is codified in the “One Rule” Executive Order, announced in December 2025.
The President’s rhetoric on Truth Social underscores the administration’s view of state regulators as adversaries. “We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS,” Trump wrote, declaring that “AI WILL BE DESTROYED IN ITS INFANCY” without federal intervention.
The “One Rule” doctrine utilizes administrative levers to coerce state deregulation. Leaked drafts of the executive order reveal a multi-pronged strategy:
Conditioning of Federal Funds: The order directs the Office of Science and Technology Policy (OSTP) to link federal AI research and infrastructure funding to regulatory alignment. States with “burdensome” AI laws could find themselves ineligible for billions of dollars in federal grants, effectively holding state budgets hostage to force deregulation.
Litigation Task Force: The Department of Justice is tasked with establishing a dedicated task force to challenge the constitutionality of state AI laws. By arguing that such laws violate the Commerce Clause or the First Amendment, the administration seeks to invalidate them through the federal courts.
FTC Preemption: The Federal Trade Commission is directed to issue policy statements asserting that federal consumer protection statutes preempt conflicting state laws regarding AI deception, further narrowing the lane for state enforcement.
This executive strategy is bolstered by the “America’s AI Action Plan,” released by the White House in mid-2025. The plan prioritizes “removing regulatory barriers to innovation” and streamlining the permitting process for critical AI infrastructure, such as data centers. Industry groups like the U.S. Chamber of Commerce have hailed this plan as a necessary corrective to “activist-driven overreach” by state legislatures.
The Money
The legislative and executive maneuvers described above are fueled by a financial influence campaign of historic proportions. The technology sector, perceiving an existential threat from the proliferation of state-level regulations, has deployed vast resources to secure federal preemption.
$1.1 Billion
A comprehensive analysis by Public Citizen reveals that Big Tech executives, corporations, and investors spent at least $1.1 billion during the 2024 election cycle and throughout 2025 to shape the regulatory landscape. This figure encompasses direct lobbying expenditures, campaign contributions, and donations to Super PACs, marking a level of political spending that rivals the pharmaceutical industry.
The spending is highly concentrated among the industry’s titans, who have the most to lose from a fragmented regulatory environment. In the first three quarters of 2025 alone, the lobbying expenditures of key players were staggering:
- Oracle: $8.66 million
- Apple: $7.27 million
- Microsoft: $6.94 million
- ByteDance (TikTok): $6.65 million
Collectively, the technology sector spent $314 million on federal lobbying in the first nine months of 2025. This capital was directed not just at promoting specific bills but at creating a political environment hostile to state regulation.
Super PACs
Beyond traditional lobbying, the industry has utilized Super PACs to influence the electoral map. “Leading the Future,” a Super PAC funded by major AI interests, raised over $100 million to intervene in congressional races. The PAC’s strategy focused on opposing candidates who favored strict AI liability regimes and supporting those who espoused “innovation-first” policies. Critics describe this as a “pay-to-play” dynamic, where the industry effectively purchases regulatory relief by installing sympathetic legislators.
Trade Associations
Trade associations serve as the force multipliers for the industry’s preemption agenda, often employing more aggressive rhetoric than the individual companies they represent.
NetChoice: This organization has been the tip of the spear in the “patchwork” narrative. NetChoice argues that a “50-state patchwork” of AI laws creates a “compliance nightmare” that stifles small businesses and startups. They contend that the internet is inherently borderless and that state regulations effectively break the digital economy. NetChoice has heavily endorsed the Trump administration’s “One Rule” executive order and the preemption riders in the NDAA.
The U.S. Chamber of Commerce: The Chamber has positioned state AI regulations as “activist-driven overreach.” In its 2025 filings with the OSTP, the Chamber argued that “conflicting state-level laws” create uncertainty that deters capital investment. They advocate for a federal framework that preempts state rules to “facilitate the development and deployment of innovative AI technologies.”
The Data Center Coalition: As the physical footprint of AI expands, the Data Center Coalition has lobbied to prevent states from imposing environmental or energy restrictions on data centers. Their lobbying spending surged in 2025, more than doubling in the third quarter to $360,000, as they fought state-level attempts to regulate the massive energy consumption of AI training clusters.
The Silicon Valley Split
While the tech sector is largely unified in its opposition to state regulation, distinct fissures exist regarding the type of federal regulation that should replace it. This internal conflict, often described as a “civil war,” influences the nuances of their lobbying strategies.
Divergent Lobbying Strategies Within the Tech Sector
| Faction | Key Companies | Core Philosophy | Stance on State Regulation | Stance on Open Source |
|---|---|---|---|---|
| The Centralizers | OpenAI, Microsoft | AI is a national security risk requiring federal licensing | Oppose: State rules are incompetent to handle “catastrophic risk” | Skeptical: Favor strict controls that may disadvantage open weights |
| The Open Source Coalition | Meta, a16z, Hugging Face | AI should be open; regulation stifles innovation (“Little Tech”) | Strongly Oppose: Liability for downstream use kills open source | Supportive: View open weights as essential for democratization |
| The Pragmatists | Anthropic | AI carries risks; regulation is inevitable but must be workable | Moderate Opposition: Willing to negotiate (e.g., sought amendments to SB 1047) | Nuanced: Support some safety checks but wary of heavy burdens |
OpenAI and Microsoft have pushed for a federal licensing regime, arguing that powerful AI models pose risks akin to nuclear weapons or pandemics. They contend that such “catastrophic risks” are the exclusive domain of the federal government. Critics argue this position is a form of “regulatory capture,” designed to erect high barriers to entry that protect their market dominance.
Meta and Andreessen Horowitz, conversely, lobby against any regulation that imposes liability on the developers of “open weights” models. They argue that laws like California’s SB 1047, which proposed holding developers liable if their models were modified and used for harm, would effectively ban open-source AI. Their push for preemption is driven by a desire to preserve the open ecosystem from what they view as existential legal threats.
The Arguments
The proponents of federal preemption rely on three primary narratives to justify the invalidation of state laws. These arguments are deployed across white papers, congressional testimony, and media appearances to build a consensus that state regulation is not just inconvenient, but dangerous.
The “Patchwork”
The most pervasive argument is the “patchwork” theory. Proponents argue that because digital products cross state lines instantaneously, it is impossible for a company to comply with fifty different regulatory regimes.
The “California Effect”: Industry advocates argue that a strict law in a large market like California effectively sets national policy. If California mandates a “kill switch” for AI models, companies will likely implement that feature universally to avoid maintaining separate product lines. This, they argue, allows the voters of one state to dictate policy for the entire nation, violating the principles of federalism.
Burden on “Little Tech”: NetChoice and venture capital firms like a16z argue that complex compliance regimes disproportionately harm startups (“Little Tech”). While giants like Google or Microsoft have armies of lawyers to navigate a 50-state legal maze, a three-person startup cannot. Therefore, they argue, preemption is necessary to protect competition and prevent the entrenchment of incumbents.
The China Race
This narrative frames AI regulation as a zero-sum game between the United States and China.
Speed Bumps to Innovation: Senator Cruz and the Trump administration argue that excessive regulation acts as a drag on innovation. They posit that while the U.S. deliberates over safety protocols, Chinese firms, backed by the state and unencumbered by democratic process, will race ahead. “We are beating ALL COUNTRIES,” Trump asserted, implying that state rules are the only obstacle to maintaining that lead.
Values Competition: The argument extends to the nature of the AI itself. Proponents of preemption argue that the U.S. must win the AI race to ensure that the technology reflects “Western values” of freedom and openness, rather than the authoritarian values of the CCP. They contend that crippling American companies with state-level “red tape” aids the adversary.
National Security
This argument, favored by companies like OpenAI, elevates AI governance to the level of national defense.
Catastrophic Risk: Proponents argue that the risks posed by “frontier” AI models, such as the potential to aid in the creation of biological weapons or cyberattacks, are matters of national security. Therefore, they argue, oversight must be centralized in federal agencies with security clearances and technical expertise, rather than dispersed among state consumer protection bureaus.
The State Battlegrounds
The push for federal preemption is a direct response to a surge of legislative activity in the states. By late 2025, over 1,000 AI-related bills had been introduced across the country. Four states have become the primary theaters of this conflict, illustrating the specific types of regulations the industry is fighting to preempt.
California: SB 1047
The conflict reached a fever pitch in California with Senate Bill 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.” Authored by Senator Scott Wiener, the bill targeted the largest AI models: those costing over $100 million to train and requiring more than 10^26 floating-point operations (FLOPs) of compute.
The Mandates: SB 1047 would have required developers to:
- Implement a “kill switch” capable of fully shutting down a model in an emergency
- Perform rigorous safety testing prior to release
- Certify that the model would not cause “critical harm” (defined as mass casualties or damages exceeding $500 million)
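The compute trigger above is a fixed numeric threshold, so it can be illustrated with simple arithmetic. The sketch below uses the common rule-of-thumb estimate of roughly 6 × parameters × training tokens for total training FLOPs; this heuristic, and the example model sizes, are illustrative assumptions, not the statute’s own accounting method.

```python
# Illustrative check of SB 1047's 10^26-operation compute trigger.
# The 6*N*D approximation and the example model sizes below are
# assumptions for demonstration, not the bill's legal definition.

THRESHOLD_FLOPS = 1e26  # SB 1047's training-compute threshold


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the ~6 * N * D rule of thumb."""
    return 6.0 * n_params * n_tokens


def crosses_threshold(n_params: float, n_tokens: float) -> bool:
    """Would a model of this size fall under the bill's compute trigger?"""
    return estimated_training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 15T tokens stays under
# the bar: 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs.
print(crosses_threshold(7e10, 1.5e13))    # False

# A hypothetical 1.8T-parameter model on the same data would cross it:
# 6 * 1.8e12 * 1.5e13 = 1.62e26 FLOPs.
print(crosses_threshold(1.8e12, 1.5e13))  # True
```

The point of the fixed threshold, and of Newsom’s later objection to it, is visible here: the trigger depends only on raw compute, not on what the model can actually do.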
The Resistance: The bill faced a “weather system” of lobbying.
Nancy Pelosi, the former House Speaker, publicly opposed the bill, calling it “well-intentioned but ill-informed” and citing the concerns of local tech constituents.
Governor Newsom’s Veto: In September 2024, Governor Gavin Newsom vetoed the bill. In his veto message, he argued that the bill was “not informed by an empirical trajectory analysis” and that regulating based on a compute threshold was a flawed approach. However, Newsom’s veto was not a rejection of regulation per se. He committed to working with the legislature on a better path forward, keeping the threat of state regulation alive and fueling the industry’s urgency for federal preemption in 2025.
Colorado: SB 205
While California garnered the headlines, Colorado enacted the nation’s first comprehensive AI law, SB 24-205, in May 2024.
The Framework: The law focuses on “high-risk” AI systems that make consequential decisions in areas like employment, housing, and lending. It mandates that deployers conduct “algorithmic impact assessments” and provide notices to consumers when AI is used.
The Industry Counter-Attack: Immediately upon signing the bill, Governor Jared Polis expressed reservations, urging the legislature to “simplify” the law before its February 2026 effective date. Throughout 2025, industry groups lobbied aggressively to water down the bill. The failure of amendments (SB 318) in mid-2025 left the original, stricter law in place, intensifying the industry’s call for federal preemption to wipe it off the books before it goes live.
New York: The RAISE Act
As of December 2025, the battle has shifted to New York. The legislature passed the Responsible AI Safety and Education (RAISE) Act, which incorporates elements of California’s failed SB 1047 but tailors them to the New York market.
The Provisions: The RAISE Act targets models trained with over $100 million in compute that are deployed in New York. It requires safety policies and prohibits the deployment of models that create an “unreasonable risk of critical harm.”
The Status: The bill currently sits on Governor Kathy Hochul’s desk. Industry groups like the Computer & Communications Industry Association are pressuring her to veto it, using arguments identical to those deployed in California. Conversely, civil society groups are urging her to sign it to fill the regulatory gap.
Connecticut: The Veto Threat
In Connecticut, comprehensive AI legislation (SB 2) passed the Senate but stalled in the House after Governor Ned Lamont threatened a veto. Lamont, echoing the “patchwork” argument, stated that Connecticut should not act alone and should instead wait for a regional consortium or federal rules. This outcome was widely seen as a victory for the preemption lobby, which successfully argued that state-level action creates competitive disadvantages.
The Opposition
The coalition opposing federal preemption is diverse, comprising state officials, civil rights activists, consumer watchdogs, and factions of the tech community itself. Their central argument is that preemption without a strong federal replacement is a dereliction of duty that exposes the public to unchecked corporate power.
The “Regulatory Vacuum”
Critics argue that the industry’s call for a “national standard” is a cynical ploy. They point out that while industry groups demand preemption, they simultaneously lobby against substantive federal regulations.
Preempt First, Legislate Never: Organizations like Public Citizen and the AI Now Institute warn against the “preempt first, legislate later” strategy. They argue that bills like H.R. 5388 wipe out state laws immediately while promising federal rules that may never materialize due to congressional gridlock. This creates a “regulatory vacuum” where neither the state nor the federal government has oversight authority.
The Enforcement Gap: Critics contend that federal agencies like the FTC and DOJ lack the resources to police the entire AI economy. State Attorneys General serve as the “cops on the beat,” enforcing consumer protection laws. Preemption, they argue, strips these local enforcers of their authority, leaving consumers with no recourse when they are harmed by AI fraud or discrimination.
Civil Rights Concerns
The ACLU and the Leadership Conference on Civil and Human Rights emphasize that state laws are currently the primary line of defense against algorithmic bias.
Concrete Harms: AI systems are already being used to screen job applicants, determine creditworthiness, and allocate housing. Evidence suggests these systems can perpetuate historical biases. State laws in Colorado and California explicitly define and prohibit “algorithmic discrimination.”
The Threat to Justice: If federal law preempts these state protections without establishing a private right of action or strong anti-bias rules, citizens will lose the ability to seek legal redress. The ACLU has sounded the alarm that “exempting AI from federal laws” via the Trump administration’s deregulation push effectively rolls back civil rights protections in the digital age.
“Laboratories of Democracy”
Legal scholars and state legislators argue that the federal system was designed to allow states to experiment with policy solutions.
The Historical Precedent: California Senator Scott Wiener rebuts the “patchwork” argument by pointing to history. He notes that California’s leadership on data privacy (CCPA) and auto emissions standards forced the federal government to adopt higher standards. He argues that state action raises the regulatory floor, whereas preemption lowers the ceiling.
Responsiveness: Critics argue that states can adapt to rapid technological changes much faster than the federal government. Freezing state laws for a decade, as proposed by Senator Cruz, would lock the U.S. into a regulatory stasis while AI technology evolves exponentially.
The Open Source Defense
The position of the open-source community provides a nuanced counter-narrative. While often wary of strict state liabilities, they are equally opposed to broad federal preemption that favors closed systems.
Transparency as Safety: Organizations like Mozilla and Hugging Face argue that openness and transparency are essential for safety. They oppose federal frameworks that would require expensive licensing (favored by OpenAI), as this would criminalize open research. They argue that a “One Rule” standard written by Big Tech incumbents would likely ban open-weights models under the guise of national security, consolidating power in the hands of a few corporations.
The Money in Detail
The scale of financial resources deployed to achieve preemption is a defining feature of the 2025 political landscape. The influence machine of Big Tech is not merely reacting to policy. It is actively shaping it through massive capital injection.
By the Numbers
Public Citizen’s analysis highlights a “gold rush” of influence peddling.
Total Tech Spending (2024-2025): The combined political spending of the tech sector surpassed $1.1 billion.
Microsoft: Spent approximately $6.94 million on lobbying in the first three quarters of 2025.
Meta: Along with Alphabet and Nvidia, Meta contributed to a combined lobbying spend of $36 million in the first half of 2025 alone.
OpenAI: The company’s lobbying expenditure increased by 44% in the first half of 2025 compared to the previous year, reaching $1.2 million, largely driven by the fight against SB 1047.
Return on Investment
Critics argue that this spending has yielded a substantial return on investment.
Vetoes Secured: The gubernatorial vetoes in California and the threat in Connecticut align perfectly with the industry’s lobbying objectives.
Policy Capture: The Trump administration’s “One Rule” executive order contains provisions that mirror the policy recommendations of NetChoice and the Chamber of Commerce almost verbatim.
Enforcement Retreat: In the first six months of the Trump administration, 47 enforcement actions against tech companies were halted or withdrawn. Public Citizen attributes this retreat directly to the industry’s political support and the administration’s deregulatory agenda.
Public Opinion
While the political and corporate elites battle over preemption, the American public is increasingly searching for clarity. Analysis of search trends and SEO data in 2025 reveals a disconnect between the public’s concerns and the deregulatory push.
What People Are Asking
Search data from late 2025 indicates a surge in queries related to AI safety and regulation.
Rising Anxiety: Keywords like “AI regulation 2025,” “is AI safe,” and “laws against deepfakes” have surged in search volume.
The “Safety Gap”: Pew Research polling indicates that a majority of Americans, across party lines, worry that the government is not doing enough to regulate AI. About 60% of U.S. adults express concern that the government will not go far enough, contradicting the industry narrative that regulation is unpopular.
The SEO of Dissent: Civil society groups have optimized their content to answer these queries, driving traffic to pages that explain the risks of preemption. This digital battleground is shaping public opinion, creating a well of support for regulation that politicians ignore at their peril.
What Happens Next
The conflict over AI regulation has reached a stalemate that is likely to be broken by the courts or the midterms. The failure of legislative moratoriums to pass Congress has forced the Trump administration to rely on executive power, a strategy fraught with legal peril.
Legal Challenges
State Attorneys General are preparing to challenge the “One Rule” executive order. They argue that the Tenth Amendment protects their police powers to ensure public safety and that the executive branch cannot unilaterally withhold congressionally appropriated funds to coerce policy changes. The DOJ’s Litigation Task Force will likely face years of battles in federal court over the constitutionality of preemption.
The “Splinternet”
If H.R. 5388 fails to pass and the executive order is enjoined by the courts, the “patchwork” that industry fears will become reality. Colorado’s law goes into effect in early 2026. If New York enacts the RAISE Act, companies will face strict regimes in two major markets. This may force a de facto national standard, but one set by Albany and Denver rather than Washington, precisely the outcome Big Tech has spent billions to prevent.
The Democracy Question
The “War over the Code” exposes a profound democratic deficit. While public opinion favors guardrails on powerful new technologies, the combined weight of federal power and corporate capital is pushing in the opposite direction.