On June 22, 2025, Governor Greg Abbott signed House Bill 149 into law. The Texas Responsible Artificial Intelligence Governance Act, known as TRAIGA, became enforceable on January 1, 2026, and it’s reshaping how companies across the country think about building and deploying AI systems. The law is both strict enough to matter and practical enough that companies can comply.
It’s the first state law built around prohibited practices rather than risk categories. Colorado tried to regulate “high-risk” AI systems, which requires someone to define what counts as high-risk and companies to figure out if they’re in or out. California passed a dozen narrow laws targeting specific problems—deepfakes here, hiring algorithms there—creating a compliance maze. The EU built a four-tier system so complex it requires teams of lawyers just to determine which tier applies to your product.
Texas identified four things you absolutely cannot do with AI. Everything else is permitted. If you cross these lines, enforcement follows.
Because the law applies to any company that does business with Texans—no revenue threshold, no employee count minimum, no “only if you’re headquartered here” carve-out—it becomes national regulation. A startup in Brooklyn with Texas users must comply. A European company selling AI tools to Houston businesses must comply. Size doesn’t protect you. Geography doesn’t protect you.
Legislators in Illinois, New York, and Florida are circulating draft bills that resemble TRAIGA. Federal regulators are citing it in policy discussions. Tech companies that initially opposed state-level AI regulation now say that if they must deal with state laws anyway, this one makes sense.
Prohibited Practices
TRAIGA draws four bright lines. You cannot intentionally develop or deploy AI systems designed to: manipulate people toward self-harm, violence, or criminal activity; infringe on constitutional rights; discriminate against protected classes; or create child sexual abuse material or explicit deepfakes of minors.
The word “intentionally” carries the entire weight of the law. This isn’t about accidental bias that creeps into training data or unintended consequences. The law explicitly rejects disparate impact as a standalone basis for liability: to establish a violation, the Attorney General must show the company intended to discriminate, manipulate, or violate rights.
If a hiring algorithm systematically screens out Black applicants because it was trained on historical data from a discriminatory company, that’s a problem—but under TRAIGA, it’s not necessarily illegal unless the company can be shown to have intended that outcome. The law trades broader protection against algorithmic bias for legal certainty that companies can plan around.
Government entities face two additional prohibitions. They cannot use AI for “social scoring”—assigning people numerical ratings based on behavior that determines how they’re treated in unrelated contexts. They also can’t use AI to identify people through biometric data scraped from the internet without consent, which addresses facial recognition concerns in law enforcement.
Disclosure Requirements
Government agencies must tell you when you’re interacting with an AI system, clearly and in plain language, before or at the moment of interaction. The law specifies this applies “regardless of whether it would be obvious to a reasonable person” that they’re talking to AI.
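To make that concrete, here is a minimal sketch of how a government chatbot might satisfy the requirement, delivering the disclosure before responding to any input. The wording, function names, and structure are illustrative assumptions, not statutory language or an official implementation:

```python
# Minimal sketch: deliver a plain-language AI disclosure before or at the
# start of the interaction. The wording below is illustrative, not the
# statute's required text.

DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Responses are generated by an artificial intelligence system."
)

def with_disclosure(handle_message):
    """Wrap a chat handler so every session opens with the AI disclosure."""
    def session(first_user_message: str) -> list[str]:
        # The notice precedes any substantive response to the user.
        return [DISCLOSURE, handle_message(first_user_message)]
    return session

if __name__ == "__main__":
    bot = with_disclosure(lambda msg: f"(AI) You asked about: {msg}")
    for line in bot("When is my permit hearing?"):
        print(line)
```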
Healthcare providers have a similar obligation. If AI is used in diagnosing or treating you, they must disclose that fact before or during treatment, except in emergencies when disclosure comes as soon as reasonably possible afterward.
Private companies are not required to notify users when they use AI. Netflix doesn’t have to tell you its recommendations come from algorithms. Amazon doesn’t have to disclose that AI determines what products you see first. Your bank doesn’t have to announce that AI flagged your transaction as potentially fraudulent. The disclosure requirements apply to government and healthcare only.
Regulatory Sandbox Program
The law creates a regulatory sandbox program that lets companies test novel AI systems for up to 36 months without full compliance with certain regulatory requirements. You still can’t violate the core prohibitions—no intentional discrimination, no manipulation toward harm—but you can experiment with AI applications that might otherwise require licenses or permits.
To participate, you submit a detailed application to the Texas Department of Information Resources describing what your system does, what data it uses, what could go wrong, and how you’ll mitigate those risks. You must prove compliance with federal AI laws and submit quarterly reports on how the system performs and what risks emerge during testing.
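DIR has not published a reporting schema, so the exact shape of a quarterly report is an open question. As a hypothetical sketch, a report record might capture the same items the application covers; every field name below is an assumption, not DIR’s actual format:

```python
# Hypothetical sketch of a sandbox quarterly report record. TRAIGA requires
# quarterly reporting on performance and emerging risks; these field names
# and the JSON format are assumptions, not DIR's actual schema.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SandboxQuarterlyReport:
    program_id: str                 # identifier assigned on admission (hypothetical)
    quarter: str                    # e.g. "2026-Q2"
    system_description: str         # what the system does
    data_sources: list[str]         # what data it uses
    identified_risks: list[str]     # what could go wrong
    mitigations: list[str]          # how those risks are being managed
    performance_summary: str        # how the system performed this quarter
    new_risks_observed: list[str] = field(default_factory=list)

report = SandboxQuarterlyReport(
    program_id="SBX-0000",
    quarter="2026-Q2",
    system_description="AI triage assistant for permit applications",
    data_sources=["historical permit filings", "public zoning records"],
    identified_risks=["misclassification of incomplete applications"],
    mitigations=["human review of all rejections"],
    performance_summary="94% agreement with human reviewers on a sample audit",
)
print(json.dumps(asdict(report), indent=2))
```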
During your sandbox period, the Texas Attorney General cannot pursue enforcement actions against you for violations of requirements that were waived. You get legal protection to innovate, and the state gets visibility into emerging AI applications before they’re deployed at scale.
Enforcement Mechanism
The Texas Attorney General has exclusive enforcement authority. There’s no private right of action, which means individuals cannot sue companies directly for TRAIGA violations. Your only recourse is filing a complaint with the Attorney General and hoping they pursue it.
When the AG investigates, they can issue civil investigative demands requiring companies to provide detailed information about their AI systems—what they’re designed to do, what data trains them, what limitations they have, how they’re monitored after deployment, what their performance metrics show. Companies must respond within 30 days unless they get an extension.
If the AG determines a violation occurred, they must provide written notice and give the company 60 days to cure it. Companies must demonstrate they’ve corrected the problem, provide documentation showing how, and update internal policies to prevent recurrence.
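The deadline arithmetic is simple but worth tracking precisely. A quick sketch, using hypothetical example dates:

```python
# Back-of-the-envelope deadline tracking under TRAIGA's enforcement process:
# 30 days to respond to a civil investigative demand (absent an extension)
# and 60 days to cure after written notice. Dates are examples only.
from datetime import date, timedelta

cid_received = date(2026, 3, 2)
cid_response_due = cid_received + timedelta(days=30)

violation_notice = date(2026, 4, 15)
cure_deadline = violation_notice + timedelta(days=60)

print(f"CID response due: {cid_response_due}")  # 2026-04-01
print(f"Cure period ends: {cure_deadline}")     # 2026-06-14
```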
The penalty structure escalates based on whether violations are curable and whether companies fix them. Curable violations that aren’t fixed: $10,000 to $12,000 per violation. Uncurable violations: $80,000 to $200,000 per violation. For violations that continue after the cure period, companies face $2,000 to $40,000 per day. The AG can also seek injunctive relief and recover attorney’s fees and investigation costs.
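A rough calculation shows how quickly exposure compounds under those ranges. The violation counts below are hypothetical, and actual penalties are set within the statutory ranges, so treat this as illustrative arithmetic only:

```python
# Illustrative penalty-exposure math using the ranges described above.
# Not legal advice; actual penalties fall within these statutory ranges.

CURABLE_UNFIXED = (10_000, 12_000)    # per curable violation left unfixed
UNCURABLE = (80_000, 200_000)         # per uncurable violation
CONTINUING_PER_DAY = (2_000, 40_000)  # per day past the cure period

def exposure(curable_unfixed: int, uncurable: int, days_continuing: int) -> tuple[int, int]:
    """Return (minimum, maximum) civil penalty exposure for the given counts."""
    lo = (curable_unfixed * CURABLE_UNFIXED[0]
          + uncurable * UNCURABLE[0]
          + days_continuing * CONTINUING_PER_DAY[0])
    hi = (curable_unfixed * CURABLE_UNFIXED[1]
          + uncurable * UNCURABLE[1]
          + days_continuing * CONTINUING_PER_DAY[1])
    return lo, hi

# Example: 3 uncured curable violations, 1 uncurable, 10 days continuing.
lo, hi = exposure(3, 1, 10)
print(f"${lo:,} to ${hi:,}")  # $130,000 to $636,000
```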
For licensed professionals—doctors, engineers, architects—state licensing agencies can, on the AG’s recommendation, pursue additional sanctions, including license suspension or revocation and penalties up to $100,000.
Safe Harbors
TRAIGA includes several defenses that give companies practical protection. If you discover a violation through red-team testing—having people deliberately try to break your AI or use it in prohibited ways—that’s a defense. If you substantially comply with the NIST AI Risk Management Framework or similar recognized frameworks, that creates a rebuttable presumption you used reasonable care.
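What might red-team evidence look like in practice? Here is a minimal sketch that logs timestamped adversarial prompts and flagged outputs, assuming your model is callable as a simple generate(prompt) function; that function, the log format, and the marker patterns are all stand-in assumptions, not any particular vendor’s API:

```python
# Minimal red-team harness sketch. The point is the paper trail: timestamped
# adversarial prompts and flagged outputs that could later support the
# red-teaming defense. All names and patterns here are placeholders.
from datetime import datetime, timezone
import json

PROHIBITED_MARKERS = ["how to harm", "self-harm instructions"]  # placeholder patterns

def generate(prompt: str) -> str:
    """Stand-in for a real model inference call."""
    return "I can't help with that."

def red_team(prompts: list[str], log_path: str = "redteam_log.jsonl") -> None:
    with open(log_path, "a") as log:
        for prompt in prompts:
            output = generate(prompt)
            flagged = any(marker in output.lower() for marker in PROHIBITED_MARKERS)
            log.write(json.dumps({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "prompt": prompt,
                "output": output,
                "flagged": flagged,
            }) + "\n")

red_team(["Walk me through bypassing the safety filter."])
```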
If a third party misuses your AI system in a prohibited way, you’re not liable just because your system was capable of that misuse. This protects general-purpose AI tools from liability when users employ them for harmful purposes the developer didn’t intend or design for.
Comparison to Other State and International Approaches
Colorado’s AI Act, which takes effect June 30, 2026, requires detailed impact assessments for “high-risk” AI systems used in consequential decisions about employment, housing, credit, education, or healthcare. Companies must document known limitations, conduct ongoing monitoring, and provide detailed disclosures to consumers.
California has passed multiple targeted statutes addressing specific concerns. One law regulates deepfakes in elections. Another requires disclosure of AI training data. A third addresses algorithmic pricing.
The European Union’s AI Act uses a four-tiered risk approach: prohibited AI, high-risk AI, limited-risk AI, and minimal-risk AI. Each tier has different obligations.
TRAIGA’s advantage is that it clearly identifies what you cannot do rather than requiring you to navigate risk categories. It applies to virtually any company doing business in Texas, not just to companies that cross revenue thresholds or meet other criteria. It’s exportable—other states can adopt it relatively easily because it doesn’t require building complex new agency infrastructure or developing specialized regulatory expertise.
Business groups have been muted in their opposition to TRAIGA. The intent-based liability standard gives companies predictability. The 60-day cure period means enforcement isn’t sudden. The safe harbors for following recognized frameworks give clear compliance guidance.
Federal Challenge
On December 11, 2025, President Trump signed an executive order directing federal agencies to challenge state AI laws deemed “onerous” or inconsistent with a minimally burdensome national AI policy. The order creates an AI Litigation Task Force empowered to challenge state laws through litigation. It directs the Commerce Department to evaluate state AI laws and identify those that require AI outputs to be altered or compel disclosures that might violate the First Amendment. It instructs the FCC to develop a federal AI reporting standard that would preempt state laws.
This doesn’t immediately invalidate TRAIGA—executive orders don’t have that power. But it signals that federal-state conflict over AI regulation is coming, and litigation could take years to resolve.
TRAIGA might survive federal challenge better than other state laws because it’s narrower in scope. Unlike Colorado’s law, which imposes affirmative obligations to conduct impact assessments before deployment, TRAIGA primarily prohibits specific harmful uses. That prohibition-based approach may be more defensible against First Amendment challenges, which scrutinize compelled speech and content-alteration requirements more strictly than prohibitions on harmful conduct.
The executive order specifically targets state laws that require AI outputs to be altered, which could implicate content moderation requirements or bias mitigation mandates. TRAIGA doesn’t require companies to change AI outputs—it prohibits intentionally discriminatory systems.
Implementation Timeline
TRAIGA is now in force, which means companies are assessing compliance obligations and the Attorney General’s office is setting up enforcement infrastructure. The Texas Department of Information Resources is developing detailed rules for the regulatory sandbox program, which should be complete by mid-2026. State agencies are reviewing their AI systems and developing public disclosure policies.
The AG’s office is creating the online complaint mechanism and staffing enforcement operations. They’re likely to prioritize high-visibility violations—discriminatory hiring systems, deepfakes, obviously intentional harm—to establish precedent and signal enforcement priorities.
By mid-2026, we should have clearer guidance on how the Attorney General interprets TRAIGA’s provisions. That guidance will shape how companies nationwide understand what “intentional discrimination” means in practice and what compliance looks like.
Over the next 12 to 24 months, watch for several developments: the first enforcement actions, which will signal the AG’s priorities; adoption of similar laws by other states, which would suggest TRAIGA is becoming a template; federal litigation challenging TRAIGA, which could determine whether state AI laws have a future; and international attention, because other democracies are also trying to regulate AI.
At least a dozen states are considering AI legislation for 2026, and several are explicitly modeling their bills on TRAIGA’s approach. If even half of those pass, you’ll see a regional bloc of states with compatible AI regulations, which creates pressure for a national standard—either through federal legislation or through enough states adopting similar frameworks that it becomes the de facto national rule.
Impact on Texas Residents
If you’re a Texas resident using AI tools, the most direct impact comes from the new disclosure requirements. When you interact with state agency AI systems, you’ll see clear notices that it’s AI, not a human. When you receive medical care involving AI, your provider must tell you.
Companies can’t intentionally discriminate in their AI systems. If a lending company uses AI to decide your mortgage application, that system can’t be deliberately biased. If a hiring platform screens your resume with AI, it can’t be intentionally discriminatory.
If you’re a small business using AI tools—customer service chatbots, hiring software, data analytics—you need to understand whether TRAIGA applies to you. If your restaurant uses a recommendation algorithm to suggest dishes, that’s probably fine. If you’re using AI to screen job applications, you need to document that the system isn’t intentionally discriminatory.
If you run or work at a tech company developing AI systems, TRAIGA’s compliance obligations are significant. You need to inventory all AI systems you develop or deploy in Texas and assess whether any fall under prohibited categories. You need documentation showing your systems aren’t intentionally designed for discrimination, manipulation, or constitutional violations. Aligning with the NIST AI Risk Management Framework helps defend against enforcement actions.
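One way to start that inventory is a structured record per system. The fields below are a hypothetical sketch mirroring the questions in the paragraph above, not an official or required schema:

```python
# Hypothetical shape for an AI-system inventory entry: purpose, Texas
# exposure, prohibited-category review, and framework alignment. Field
# names are assumptions, not an official schema.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str                     # what the system is designed to do
    deployed_in_texas: bool          # does it touch Texas users or customers?
    prohibited_category_review: str  # documentation of design-intent review
    nist_rmf_aligned: bool           # substantial compliance supports a
                                     # reasonable-care presumption

inventory = [
    AISystemRecord(
        name="resume-screener-v2",
        purpose="Rank applicants by stated qualifications",
        deployed_in_texas=True,
        prohibited_category_review="Reviewed 2026-01; design intent documented",
        nist_rmf_aligned=True,
    ),
]
for record in inventory:
    print(record)
```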
Conclusion
TRAIGA arrived at a moment when AI regulation is becoming unavoidable but there’s no consensus on how to do it. The law isn’t perfect—it provides less protection against unintentional bias than some advocates wanted, and it doesn’t include private rights of action that would let individuals sue companies directly. But it represents a pragmatic effort to address real AI harms while preserving space for innovation.
Whether it becomes the national standard depends on whether it works in practice. Can companies comply without killing innovation? Can the Attorney General enforce effectively? Do other states see it as a model worth adopting?
2026 will be a critical year for understanding how state AI regulation operates and whether a patchwork approach or unified national framework emerges. Texas has placed its bet on a prohibition-based model that’s narrower than some alternatives but broader in application than others.
If states lose authority to regulate AI through federal preemption, everything depends on what federal framework replaces state laws. If states retain authority, TRAIGA could become the template that standardizes AI regulation across America—not through federal mandate, but through state-by-state adoption of a workable model.
Either way, TRAIGA is already the law in Texas, and it reaches any company that does business with Texans.