What Is the Texas Responsible AI Governance Act (TRAIGA)?

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) is a comprehensive state law that establishes the legal framework for the development, deployment, and use of artificial intelligence within the State of Texas. Signed into law on June 22, 2025, and effective January 1, 2026, the Act positions Texas as one of the leading AI regulators in the United States.

Unlike the European Union's risk-based, tiered approach, TRAIGA adopts an "intent-based" liability standard: legal consequences under the Act turn primarily on whether a developer or deployer intended to cause harm, rather than on accidental or incidental outcomes of an AI system.

Scope and Applicability

TRAIGA has a broad jurisdictional reach. It applies to any individual or legal entity that:

  • Conducts, promotes, or advertises business within the State of Texas.
  • Offers products or services that are used by residents of Texas.
  • Develops or deploys an AI system within the state.

This effectively covers both Texas-based startups and global corporations whose AI systems reach Texas residents.

Prohibited AI Practices

The Act identifies several unacceptable uses of AI. It is illegal to develop or deploy an AI system with the intentional goal of:

  • Behavioral Manipulation: Inciting or encouraging individuals to commit physical self-harm, harm others, or engage in criminal activity.
  • Unlawful Discrimination: Intentionally discriminating against a protected class (based on race, color, sex, religion, disability, etc.) in violation of existing state or federal civil rights laws.
  • Constitutional Infringement: Purposefully impairing or restricting rights guaranteed under the U.S. Constitution.
  • Unlawful Content Generation: Producing or distributing child sexual abuse material (CSAM) or non-consensual sexually explicit deepfakes of adults.

Disclosure Requirements for Public and Health Sectors

While the Act is relatively permissive for general private-sector software, it imposes strict transparency mandates on government and healthcare entities:

  • Governmental Agencies: Any state or local agency using an AI system to interact with the public must provide a clear and conspicuous notice. This disclosure is required even if it would be obvious to a reasonable person that they are interacting with an AI, such as a clearly labeled chatbot (a minimal implementation sketch follows this list).
  • Healthcare Providers: Healthcare providers in Texas must disclose the use of AI if the system is used during the diagnosis or treatment of a patient. This must be communicated to the patient or their legal representative no later than the date the treatment is first provided.
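
TRAIGA does not prescribe exact wording or delivery for these notices. Purely as an illustrative sketch, the snippet below shows one way a state-agency chatbot might surface a disclosure before any other content; the message text and the start_session helper are assumptions, not statutory language.

```python
# Minimal sketch of a "clear and conspicuous" AI disclosure for a
# state-agency chatbot. The wording and helper are illustrative only;
# TRAIGA does not mandate any particular text or mechanism.

AI_DISCLOSURE = (
    "Notice: You are interacting with an artificial intelligence system, "
    "not a human representative."
)

def start_session(send) -> None:
    """Deliver the disclosure before any other content in a new session."""
    # Surfacing the notice first, rather than in a footer or terms page,
    # is one way to make it conspicuous rather than buried.
    send(AI_DISCLOSURE)

start_session(print)  # example wiring with a trivial sender
```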

The Texas AI Regulatory Sandbox

To balance regulation with innovation, TRAIGA established a state AI regulatory sandbox. Managed by the Texas Department of Information Resources (DIR) in consultation with the Texas Artificial Intelligence Council, this program allows participants to:

  • Test Innovations: Pilot new AI technologies for up to 36 months in a controlled environment.
  • Receive Regulatory Relief: Operate without fear of regulatory action for potential TRAIGA violations while participating in the program.
  • Mandatory Reporting: In exchange for this relief, participants must submit regular reports to the state detailing system performance metrics and risk mitigation efforts (see the sketch after this list).
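
The Act does not fix a report format. As a hypothetical illustration only of the kind of structured record a participant might maintain, here is a sketch in Python; every field name and value is invented:

```python
# Hypothetical shape for a periodic sandbox report. TRAIGA requires regular
# reporting on performance and risk mitigation but specifies no schema;
# all fields below are invented illustrations.

quarterly_report = {
    "program_id": "sandbox-0042",      # hypothetical identifier
    "period": "2026-Q1",
    "performance": {
        "sessions_served": 12_500,
        "error_rate": 0.012,           # share of interactions flagged as erroneous
    },
    "incidents": 0,                    # harms or near-misses observed
    "mitigations": [
        "human review for low-confidence responses",
        "expanded refusal tests before each release",
    ],
}

print(quarterly_report["performance"]["error_rate"])
```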

Enforcement and Penalties

The Texas Office of the Attorney General (OAG) holds exclusive authority to enforce TRAIGA. There is no private right of action, meaning individual citizens cannot sue companies directly under this Act.

  • Cure Period: Before an enforcement action begins, the Attorney General must provide the entity with a 60-day written notice and an opportunity to cure the violation.
  • Curable Violations: $10,000 to $12,000 per violation.
  • Uncurable Violations: $80,000 to $200,000 per violation.
  • Continuing Violations: Up to $40,000 per day for as long as the violation persists (a worked example follows below).
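
To make these figures concrete, the sketch below computes a hypothetical exposure range. The per-violation amounts come from the Act as summarized above; the violation counts and day count are invented inputs for illustration.

```python
# Hypothetical exposure estimate using TRAIGA's statutory penalty ranges.

CURABLE_RANGE = (10_000, 12_000)      # per uncured curable violation
UNCURABLE_RANGE = (80_000, 200_000)   # per uncurable violation
CONTINUING_MAX_PER_DAY = 40_000       # cap per day a violation persists

def exposure(curable: int, uncurable: int, continuing_days: int) -> tuple[int, int]:
    """Return (low, high) bounds on potential civil penalties.

    The low bound omits per-day penalties, since the Act sets only an
    upper daily amount here."""
    low = curable * CURABLE_RANGE[0] + uncurable * UNCURABLE_RANGE[0]
    high = (curable * CURABLE_RANGE[1]
            + uncurable * UNCURABLE_RANGE[1]
            + continuing_days * CONTINUING_MAX_PER_DAY)
    return low, high

# Example: 3 uncured curable violations, 1 uncurable violation,
# persisting for 10 days after notice.
print(exposure(curable=3, uncurable=1, continuing_days=10))  # (110000, 636000)
```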

Safe Harbors

The Act provides a significant safe harbor for organizations that can demonstrate substantial compliance with recognized AI risk management frameworks, such as the NIST AI Risk Management Framework. Documentation of internal testing, red-teaming, and adversarial audits can establish a rebuttable presumption of reasonable care if an investigation is launched.
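
Neither TRAIGA nor the frameworks it references mandate a documentation format. As a hypothetical sketch only, the record below shows one way an organization might log red-team evidence against the NIST AI RMF functions (Govern, Map, Measure, Manage); all field names and values are invented.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical evidence record for safe-harbor documentation. The fields
# simply mirror the artifacts mentioned above (internal testing,
# red-teaming, adversarial audits); none are required by the Act.

@dataclass
class RiskEvidence:
    system_name: str
    activity: str            # e.g. "red-team exercise", "adversarial audit"
    nist_rmf_function: str   # "Govern", "Map", "Measure", or "Manage"
    performed_on: date
    findings: str
    mitigations: list[str] = field(default_factory=list)

record = RiskEvidence(
    system_name="support-chatbot-v2",  # invented example name
    activity="red-team exercise",
    nist_rmf_function="Measure",
    performed_on=date(2026, 1, 15),
    findings="Prompt-injection attempts did not elicit prohibited outputs.",
    mitigations=["input filtering", "refusal-policy regression tests"],
)
print(record.nist_rmf_function)
```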
