AI Regulation: A Federal Moratorium on State Laws Is a Misguided Approach

Jun 10, 2025

As someone who has spent over a decade navigating complex regulatory landscapes, I've seen firsthand how effective regulation requires a delicate balance. It must protect consumers while facilitating innovation, establish clear standards while allowing for growth, and pursue consistency across jurisdictions while respecting local autonomy. That balance is why the recent push by Congress to impose a 10-year federal moratorium on state AI laws is misguided and shortsighted.

The Current State of AI Regulation: A Tale of Federal Inaction

The U.S. House of Representatives narrowly passed a bill in May 2025 that would ban states from enforcing any laws regulating artificial intelligence for a decade. This sweeping provision, tucked into the so-called "big, beautiful bill," represents one of the most aggressive attempts to preempt state regulatory authority in recent memory.

What makes the moratorium particularly troubling is that many states are already well down this path. In 2024 alone, at least 40 states introduced AI-related legislation, and six states enacted new laws. In 2025, the trend has accelerated dramatically: some 41 states have enacted 107 pieces of AI-related legislation this year, covering everything from algorithmic discrimination to deepfake prevention.

Meanwhile, though the U.S. Congress has introduced scores of AI bills, few have seen meaningful advancement. This stark contrast between state action and federal inaction tells us everything we need to know about where effective AI governance is actually happening. Creating a moratorium makes no sense if there is no plan to fill the void that will inevitably be left behind.

The Regulatory Vacuum Problem

One of my favorite parts of the compliance role is the ability to provide the clarity a thriving business requires, but the proposed moratorium offers the opposite: a dangerous vacuum. As a bipartisan group of 40 state attorneys general noted recently, "This bill does not propose any regulatory scheme to replace or supplement the laws enacted or currently under consideration by the states, leaving Americans entirely unprotected from the potential harms of AI."

This is exactly the kind of "preemption without protection" approach that I've seen fail repeatedly in other industries. When regulators remove existing safeguards without replacing them with equivalent protections, or fail to put any safeguards in place to begin with, the result is inevitably consumer harm and market instability. 

Why States Are (and Should Be) Leading the Way

Across industries, I've seen regulatory innovation emerge at the state level before being adopted federally. This isn't a bug in our system; it's a feature. States serve as laboratories of democracy, testing different approaches and generating evidence about what works.

The federal Take It Down Act, which criminalizes the non-consensual publication of intimate imagery, including AI-generated deepfakes, was undoubtedly influenced by similar laws that dozens of states had debated and enacted over the past five years. This is exactly how effective federal regulation should emerge: informed by state experimentation and experience.

The cannabis industry offers another instructive example. States developed sophisticated regulatory frameworks for medical and recreational cannabis while federal law remained static. When federal policy eventually evolves, it will undoubtedly draw heavily on lessons learned from state programs.

The Real Motivation Behind the Moratorium

Let's be clear about what's driving this proposal. Major tech firms have lobbied hard for a unified, and presumably friendlier, federal approach to AI, arguing they don't want a "patchwork" of state rules that could stifle innovation and drive up compliance costs. OpenAI CEO Sam Altman has testified that a "patchwork" of AI regulations "would be quite burdensome and significantly impair our ability to do what we need to do." That concern is reasonable, but only if a unified federal alternative actually comes to pass.

Of course, uniformity is just one benefit a federal regulatory scheme offers the industry. Another is that the current administration will likely bring a lighter touch than many states' regulators would. And that's only if Congress manages to get something done. If Congress fails to act at all, which is all too likely in the current climate, Big Tech will have an open runway to do as it pleases.

A Better Path Forward

Rather than leaving this dangerous field wide open, imagine a pragmatic regulatory system that harnesses the best of AI to improve our collective lives while ensuring no one is harmed in the process. What would that system look like?

Stakeholder Engagement: Letting states manage the process enables broader and more urgent engagement. The best compliance frameworks emerge from genuine dialogue among regulators, industry, and affected communities. The moratorium shuts down this conversation just as it's becoming productive.

Iterative Development: Complex regulatory challenges require testing, refinement, and adaptation. State-level experimentation provides invaluable, ground-level data for developing effective federal approaches.

Risk-Based Approaches: Different AI applications pose different risks and require tailored responses. States are developing nuanced approaches that recognize these differences and reflect their own communities and industries.

Enforcement Mechanisms: Effective regulation requires clear enforcement pathways. As the state attorneys general noted, the moratorium "would make it virtually impossible to achieve a level of transparency into the AI system necessary for state regulators to even enforce laws of general applicability." At a time when the federal government is being pared down, it only makes sense for states to fill the enforcement gap.

Instead of a blanket moratorium, and if the industry's "patchwork" concerns are genuine, Congress should move quickly on federal AI legislation that establishes minimum standards while allowing states to exceed them where appropriate.

Conclusion: The Stakes Are Too High for Inaction

As Colorado Attorney General Phil Weiser noted, "In an ideal world, Congress would be driving the conversation forward on artificial intelligence, and their failure to lead on AI and other critical technology policy issues—like data privacy and oversight of social media—is forcing states to act."

The artificial intelligence revolution is happening now, not in ten years. Every day, AI is making critical decisions about hiring, lending, healthcare, criminal justice, and countless other areas that profoundly impact our lives. Enacting a 10-year ban on state action just as we are beginning to grasp AI's potential benefits and harms would be a huge mistake.

As privacy and compliance professionals, we understand that regulation is not the enemy of innovation—it's the foundation that enables sustainable innovation. The choice is not between regulation and innovation; it's between thoughtful, adaptive governance and a regulatory race to the bottom.

About the Author

Matt Kalmick, J.D.

I'm a strategic and collaborative leader passionate about building compliance programs that reduce risk and remove regulatory barriers.

From financial services to FinTech and SaaS to cannabis, I have been managing risk and compliance in highly regulated environments for the last 15 years.

I received my Juris Doctor from Boston College Law School, my Bachelor's Degree from Drew University, and my Certified Information Privacy Professional (CIPP) certification from the International Association of Privacy Professionals (IAPP).
