State AI Laws Take Center Stage

Jul 29, 2025

An update on the evolving AI regulatory landscape following Congress's rejection of the federal moratorium on state AI regulation.

With federal preemption now off the table, states can continue in their role as laboratories of AI governance. Two states have emerged as particularly important case studies: Texas and Colorado. Both have enacted sweeping AI laws set to take effect in early 2026, but with notably different approaches.

Colorado's High-Risk Focus

Colorado's Artificial Intelligence Act (CAIA), enacted in May 2024, focuses on the development and deployment of "high-risk" AI systems and their potential to cause "algorithmic discrimination," reflecting a common understanding that certain areas of AI use pose the greatest potential for harm.

Colorado defines a "high-risk" AI system as one that makes, or is a substantial factor in making, a "consequential decision," generally one involving education, employment, financial services, housing, health care, or legal services. This definition is deliberately narrow, recognizing that not all AI applications present equal risks.

The law's focus on algorithmic discrimination is equally precise. Colorado defines "algorithmic discrimination" as any condition in which the use of an AI system results in unlawful differential treatment or impact that disfavors an individual or group on the basis of protected characteristics, including age, color, disability, ethnicity, genetic information, limited English proficiency, national origin, race, religion, reproductive health, sex, veteran status, or other classifications protected under Colorado or federal law.
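To make these definitions concrete, here is a minimal triage sketch in Python. The domain list mirrors the article's summary above (the statute's own enumeration is longer and more precise); the enum, function name, and substantial-factor flag are illustrative assumptions, not statutory terms of art.

```python
from enum import Enum, auto

class Domain(Enum):
    """Decision areas summarized above as 'consequential' under the CAIA."""
    EDUCATION = auto()
    EMPLOYMENT = auto()
    FINANCIAL_SERVICES = auto()
    HOUSING = auto()
    HEALTH_CARE = auto()
    LEGAL_SERVICES = auto()
    OTHER = auto()

def is_high_risk(domain: Domain, is_substantial_factor: bool) -> bool:
    """A system is 'high-risk' when it makes, or is a substantial factor in
    making, a consequential decision in one of the enumerated domains."""
    return is_substantial_factor and domain is not Domain.OTHER

# A resume screener that materially shapes hiring decisions qualifies:
assert is_high_risk(Domain.EMPLOYMENT, is_substantial_factor=True)
# A chatbot answering general product questions does not:
assert not is_high_risk(Domain.OTHER, is_substantial_factor=True)
```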

Colorado's Requirements: Detailed but Targeted

Colorado imposes extensive obligations on both developers and deployers of high-risk AI systems; a minimal tracking sketch follows the two lists below:

For Developers:

  • Provide detailed product descriptions including reasonably foreseeable uses and known harmful uses

  • Supply high-level summaries of training data and known limitations

  • Make publicly available statements summarizing types of high-risk systems developed and how they manage discrimination risks

  • Disclose to the attorney general any discovered algorithmic discrimination within 90 days

For Deployers:

  • Implement risk management policies and programs governing deployment

  • Complete annual impact assessments

  • Provide consumers notice when AI makes consequential decisions, and an opportunity to appeal adverse decisions

  • Review deployment annually to ensure systems are not causing algorithmic discrimination
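As a rough sketch of how a compliance team might operationalize these obligations, the Python below tracks the two recurring deadlines mentioned above: the developer's 90-day disclosure window and the deployer's annual review cycle. All class and field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta

def ag_disclosure_deadline(discovery_date: date) -> date:
    """Developers must disclose discovered algorithmic discrimination to the
    Colorado attorney general within 90 days of discovery."""
    return discovery_date + timedelta(days=90)

@dataclass
class DeployerChecklist:
    """Hypothetical tracker for the deployer obligations listed above."""
    risk_program_in_place: bool = False
    consumer_notice_enabled: bool = False
    last_impact_assessment: date | None = None
    last_discrimination_review: date | None = None

    def overdue_annual_items(self, today: date) -> list[str]:
        """Impact assessments and discrimination reviews recur annually."""
        overdue = []
        for label, last_done in (
            ("impact assessment", self.last_impact_assessment),
            ("discrimination review", self.last_discrimination_review),
        ):
            if last_done is None or (today - last_done).days > 365:
                overdue.append(label)
        return overdue

checklist = DeployerChecklist(risk_program_in_place=True)
print(checklist.overdue_annual_items(date(2026, 3, 1)))
# -> ['impact assessment', 'discrimination review']
```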

Safe Harbors and Enforcement

Colorado provides a safe harbor for businesses that discover and cure violations through their own actions (rather than through complaints) and that follow specified AI risk frameworks such as the NIST AI Risk Management Framework. The law provides no private right of action and is enforced by the Colorado attorney general, with violations constituting unfair trade practices punishable by fines of up to $20,000 per violation.

Texas's Comprehensive Approach

The Texas Responsible Artificial Intelligence Governance Act (TRAIGA) was signed into law on June 22, 2025. Where Colorado takes a surgical approach, Texas has created what may be the most comprehensive AI regulation in the United States.

TRAIGA's Expansive Scope

Unlike Colorado's regulation of only "high-risk" AI systems, TRAIGA captures any machine-based system that "infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments." This definition is extremely broad, with significant practical implications for businesses trying to determine their compliance obligations.

Prohibited Uses: The EU AI Act Influence

Echoing the EU AI Act, TRAIGA expressly prohibits a set of AI practices outright rather than merely imposing risk-based duties.

The prohibited uses include the following (a simple screening sketch follows the list):

  • AI systems designed to intentionally incite or encourage self-harm or criminal activity

  • Systems that manipulate human behavior through deceptive means

  • Government use of AI for social scoring

  • AI that unlawfully discriminates or infringes constitutional rights

  • Biometric data capture without consent
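Because these are categorical prohibitions rather than risk-tiered duties, a first-pass screen can be modeled as simple set membership, as in the sketch below. The category labels are shorthand for the bullets above, not statutory language.

```python
# Shorthand labels for TRAIGA's prohibited-use categories (illustrative only).
TRAIGA_PROHIBITED = {
    "incite_self_harm_or_crime",
    "deceptive_behavioral_manipulation",
    "government_social_scoring",
    "unlawful_discrimination",
    "constitutional_rights_infringement",
    "biometric_capture_without_consent",
}

def traiga_screen(use_case_flags: set[str]) -> set[str]:
    """Return any characteristics of a proposed use that match a prohibited
    category; a non-empty result means the use is categorically off-limits."""
    return use_case_flags & TRAIGA_PROHIBITED

# A government pilot that scores residents' "trustworthiness":
print(traiga_screen({"government_social_scoring", "uses_public_data"}))
# -> {'government_social_scoring'}
```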

Enforcement and Preemption

TRAIGA grants the Texas attorney general exclusive enforcement authority, and while there is no private right of action under the Act, the AG is required to create an online reporting mechanism by which individuals can report potential TRAIGA violations. The Act expressly nullifies any city or county ordinances regulating AI, aiming to prevent a local patchwork.

Comparing Approaches

The contrast between Colorado and Texas reflects two fundamentally different philosophies of AI governance.

Colorado's approach demonstrates surgical precision in AI regulation. By focusing specifically on high-risk systems in consequential decision-making contexts, the law minimizes the compliance burden for low-risk AI applications and concentrates regulatory attention where AI can cause the most harm.

The law's focus on algorithmic discrimination in consequential decisions reflects a mature understanding that the primary harms from AI come not from the technology itself, but from its deployment in high-stakes contexts where biased decisions can perpetuate or amplify existing inequalities.

Texas, meanwhile, takes a comprehensive approach that attempts to regulate the entire AI ecosystem. This broader strategy addresses a wider range of potential harms through categorical prohibitions, creates bright-line rules about entire categories of AI applications that are forbidden, and establishes government-wide standards for AI use across all state agencies.

Critics worry that the definition of an AI system is so broad it could capture basic computational tools, creating confusion about compliance obligations and potentially sweeping routine business software into the law's reach.

Practical Implications for Businesses

Multi-State Operations

Companies deploying AI across multiple states must now navigate increasingly complex compliance matrices. A hiring algorithm that's perfectly compliant in most states might trigger extensive documentation and testing requirements in Colorado if it's used for "consequential decisions," while the same system might face categorical restrictions in Texas if it falls under TRAIGA's prohibited uses.
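One way to keep such a matrix manageable is to encode each state's trigger and duty list as data and derive per-deployment obligations from it. The sketch below compresses this article's summaries of the two laws; it is illustrative only, not a statement of either statute's full requirements.

```python
# Illustrative multi-state compliance matrix distilled from this article.
STATE_RULES = {
    "CO": {
        "trigger": "high-risk system making a consequential decision",
        "duties": ["annual impact assessment", "bias testing",
                   "consumer notice and appeal"],
    },
    "TX": {
        "trigger": "any covered AI system; screen prohibited uses first",
        "duties": ["prohibited-use screening", "system documentation"],
    },
}

def duties_for(deployment_states: list[str]) -> dict[str, list[str]]:
    """Collect each deployment state's duty list for a given AI system."""
    return {state: STATE_RULES[state]["duties"]
            for state in deployment_states if state in STATE_RULES}

# A hiring algorithm rolled out in both Colorado and Texas:
for state, duties in duties_for(["CO", "TX"]).items():
    print(state, "->", ", ".join(duties))
```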

Risk Assessment Frameworks

The different approaches also require different risk assessment strategies:

  • Colorado compliance demands deep analysis of whether AI systems make "consequential decisions" and whether they pose algorithmic discrimination risks

  • Texas compliance requires broader assessment of whether AI systems fall under prohibited uses, regardless of risk level

Documentation and Testing

Both states require extensive documentation, but with different focuses:

  • Colorado emphasizes impact assessments and bias testing for high-risk systems

  • Texas demands broader system documentation

Looking Ahead: Lessons for Federal Policy

The contrast between Texas and Colorado offers valuable lessons for eventual federal AI regulation. Colorado's precision demonstrates that effective AI regulation doesn't require broad prohibitions – carefully targeted rules can address the most significant harms while preserving space for innovation. Texas's comprehensive approach will, over time, reveal both the benefits and the risks of trying to regulate AI holistically.

States are serving exactly the function they should in our federal system – testing different approaches and generating evidence about what works. Far from stifling innovation, as some advocates of the proposed moratorium claimed, Colorado and Texas both show, in their own ways, that innovation is still possible. Even under the more restrictive Texas law, the regulatory sandbox program and safe harbors leave room for new ideas to flourish.

The Stakes Remain High

In 2025, every single state has introduced AI-related legislation, and over half of the states have enacted some kind of AI-related laws.

For compliance professionals, the message is clear: while we avoided the dangerous vacuum that a federal moratorium would have created, we now face the challenge of navigating an increasingly complex patchwork of state regulations. For smaller companies especially, the cost of multi-state compliance could become prohibitive, potentially creating competitive advantages for larger firms with dedicated compliance teams.


As these laws take effect in 2026, we'll also begin to see which approaches prove most effective at balancing innovation with protection. That real-world evidence will be invaluable as federal policymakers eventually turn their attention to national AI standards. In the meantime, businesses must prepare for a regulatory environment that demands both technical sophistication and legal precision.

About the Author

Matt Kalmick, J.D.

I'm a strategic and collaborative leader passionate about building compliance programs that reduce risk and remove regulatory barriers.

From financial services to FinTech and SaaS to cannabis, I have been managing risk and compliance in highly regulated environments for the last 15 years.

I received my Juris Doctor from Boston College Law School, my bachelor's degree from Drew University, and my Certified Information Privacy Professional (CIPP) certification from the International Association of Privacy Professionals (IAPP).
