A comprehensive federal regulatory framework for AI is neither politically nor technically feasible. A more flexible approach is needed.
Red Cell
Editor’s Note: The Red Cell series is published in collaboration with the Stimson Center. Drawing upon the legacy of the CIA’s Red Cell—established following the September 11 attacks to avoid similar analytic failures in the future—the project works to challenge assumptions, misperceptions, and groupthink with a view to encouraging alternative approaches to America’s foreign and national security policy challenges. For more information about the Stimson Center’s Red Cell Project, see here.
Since the 2024 passage of the EU’s AI Act, top-down approaches to AI governance have been gaining popularity. Europe’s AI regulation was the first to address all AI applications in one cohesive framework, and various jurisdictions, including US states as well as other countries, are currently working to emulate that approach. Even the US’s geopolitical rival, China, which was previously the textbook example of a fragmented regulatory landscape for AI, is working on a far-reaching Artificial Intelligence Law.
Justifications for a systemic approach to AI vary. In China, a broad approach is probably partly intended to increase government control over the AI sector. In Europe, some proponents emphasize civil rights, while others argue that the 2024 act was intended to inspire a “Brussels effect” and give Europe a first-mover advantage in the global race over AI governance.
US policymakers and business leaders hold varying views on how far AI governance should go. Many favor a unified, broad national policy because navigating the currently fragmented US regulatory environment can be complex or even prohibitive for AI companies. Others, like venture capitalist Marc Andreessen, believe that AI should not be regulated at all, in order to maximize innovation.
A heated debate erupted in the United States in June 2025 with the inclusion of a 10-year moratorium on the enforcement of state AI regulations in the Trump administration’s budget bill. Advocates of the provision felt that the current patchwork of state policies would only stifle innovation and thus reduce American competitiveness internationally.
At the same time, many of those same advocates saw the moratorium as a temporary measure, necessary only until a comprehensive federal AI regulation could be developed to address risks while minimizing uncertainty for businesses. Though the moratorium was ultimately stripped from the bill, thinking critically about patchwork AI regulation remains important.
At this point, the debate over the merits of comprehensively regulating AI in the United States is academic. Given lobbying efforts by companies like Andreessen’s, discord around AI regulation in Congress, and the failure of similar regulatory efforts for digital privacy, a comprehensive federal law is highly unlikely in the near future.
Nonetheless, some members of Congress agree on the need to regulate certain aspects of AI. Notably, a consensus emerged on non-consensual intimate imagery (NCII), leading to the passage of the TAKE IT DOWN Act in early 2025. Absent a comprehensive congressional mandate, the US regulatory landscape is likely to remain a set of similarly narrow federal regulations layered over an expanding hodgepodge of state rules.
Though this landscape will certainly create regulatory complexity for businesses, the lack of comprehensiveness could be an asset for the United States. Comprehensive regulation risks sacrificing flexibility, which is essential when governing a technology as heterogeneous as AI. An inflexible, all-encompassing approach is structurally ill-suited to AI and could leave both companies and consumers unprotected from emerging risks.
The Case for AI Regulatory Flexibility
The breakneck speed of technological advancement in AI, as well as a lack of consensus on regulation, makes it difficult to keep up with risks on the horizon.
First, AI is not one technology; it is a nexus of methods and applications, and no regulatory regime can presently be designed that adequately addresses every application of AI or anticipates every eventuality. The difficulty of defining AI and predicting its development makes it hard to gauge the importance or scale of the associated risks, which is usually the first step in formulating regulation.
Without this understanding, regulators cannot create an appropriate governance structure. In the absence of firm definitions or predictions, AI governance necessitates an approach that prioritizes flexibility and specificity over breadth or uniformity. Regulation can still be imposed from the top, but the structure of any selected policies must account for the inherent variability of AI itself.
AI researcher Atoosa Kasirzadeh’s accumulative x-risk hypothesis posits that the risks of AI will accumulate gradually, as AI technology advances and societies repeatedly fail to adjust to new challenges, eventually leading to catastrophic societal impacts. This phenomenon is already unfolding. In the United States, regulators are just now addressing issues like deepfakes and election security.
Meanwhile, capabilities like agentic AI—which promises to create AI assistants that can interface directly with the world on behalf of AI users—wait in the wings to pose new challenges. For this reason, some scholars advocate a flexible, technology-neutral approach to AI governance that focuses on addressing harmful outcomes as they occur, rather than wasting regulators’ time and energy in trying to predict or address specific technological capabilities.
Second, the wide variety of AI applications and capabilities requires different kinds of regulation. Colloquially, many people use the term “AI” to refer to large language models, neural networks, and other applications with direct user-facing interfaces. AI, however, is an umbrella term encompassing many different model types and functions, from a chatbot like ChatGPT to a technical design tool for new chemical compounds. This variety complicates the analysis of AI risks and impacts. For example, two widely discussed AI risks are labor market collapse and the potential to create novel bioweapons; these differ dramatically in scale, context, and mechanism, and they should not be lumped together in a single policy framework.
Third, there is bitter disagreement between two camps of AI experts about the future of the technology. Some believe that artificial general intelligence (AGI), meaning AI that matches or exceeds human intelligence, will soon become a reality, and that policy efforts should aim to mitigate its potential harms. Others believe AGI is a myth designed to distract from AI’s present-day risks, which include environmental degradation, racial discrimination, disruptive job losses, and other threats to global security.
These two camps are both large and vocal, making it extremely difficult for policymakers, who generally are not AI experts, to determine which risks to target and which policies to pursue. Attempts to find common ground are further complicated by the financial interests of the private sector, which have spurred concerted lobbying efforts to limit and shape regulation. Developing a comprehensive approach is difficult in such a contentious atmosphere because the methods and interests of each side are almost diametrically opposed.
Taken together, these considerations make clear that any policy designed to address AI comprehensively must contain a wide variety of flexible provisions accounting for technological change, a broad range of capabilities and applications, and deep intellectual disagreements. Any comprehensive policy inherently requires specificity at many points, and in the case of AI, achieving that precision while accommodating the factors above will be a challenge. A patchwork of flexible, regularly updated policies might therefore prove more effective than a single comprehensive policy for a technology as varied as AI.
Legislating AI Flexibility
Some policymakers have endeavored to create flexible yet comprehensive policy. For example, the European Parliament explicitly noted that it hoped the EU AI Act would “establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.” The act pioneers a model of AI regulation that assigns AI applications to risk tiers and sets rules for each tier, applying capacity-limiting requirements only to applications deemed to pose sufficient risk. This tiered-risk model has already inspired a flurry of similar regulations and proposals in US states such as California and Colorado.
The efficacy of the law, however, is yet to be determined; the EU’s AI Office has been mired in disagreements about the implementation of the act. The tech sector, which is lobbying against the enforcement of the law, continues to argue that it will hamper innovation and investment. Partly as a result of this pressure, the European Commission has been significantly delayed in producing specific standards and rules as directed under the act. In June 2025, Henna Virkkunen, the European Commission’s executive vice president for technological sovereignty, security, and democracy, suggested that the enforcement of some sections of the act could be postponed.
Meanwhile, in the United States, narrower, more targeted policies like the ones already being produced by Congress and the states stand a better chance of clearing legislative and consensus-building hurdles, and they may also be better suited to addressing the complexity of AI. Regulations specific to particular issues and jurisdictions create a piecemeal network of protections and restrictions in the AI space. Though the present regulatory landscape is complex, it is far preferable to one that forgoes regulation today to await an eventual, unlikely consensus among policymakers and stakeholders on a comprehensive national policy.
Working Within a Patchwork Regulatory Framework
The American AI regulatory patchwork poses challenges for businesses, but polarization is more likely to leave AI unregulated than it is to produce a comprehensive policy anytime soon. In the meantime, the United States can leverage the existing regulatory landscape as a testing ground for various approaches to AI policy. This could set the stage for a thoughtful, effective, and comprehensive federal approach.
On the technical side, one approach to encouraging AI development while limiting risks is to create so-called “regulatory sandboxes”: development environments where AI companies can build new technologies under minimal regulation but may deploy them only on a limited basis. Leaving the states to experiment with different policies could turn the country itself into a kind of regulatory sandbox, testing approaches that could inform an eventual comprehensive framework at the federal level.
While legislators in Colorado test a tiered-risk approach, New Hampshire criminalizes deepfake fraud, and Hawaii funds an AI-powered wildfire forecasting system. In the long run, this experimentation will demonstrate which approaches to AI regulation are most effective across various metrics, including fostering innovation and reducing user risks.
In the short run, this variety in state regulations may mean that innovation is restricted or civil rights are inadequately protected from jurisdiction to jurisdiction. However, if Congress’ 10-year moratorium had succeeded and dismantled the state patchwork, unregulated AI risks would have been free to produce long-lasting downstream harms to American society and the US economy. With the moratorium’s rejection, states may instead feel empowered to enact even more AI regulations that strengthen protections for citizens, benefiting at-risk individuals and Americans at large.
A 2025 report from the Pew Research Center found that a majority of American adults fear the government will not do enough to regulate AI, underscoring the public’s trepidation about the technology. The risks of AI are varied and well-documented, ranging from present-day harms to people, the environment, and markets, to long-term risks such as the development of an AI superintelligence with the potential to destroy humanity. Such risks ought to be addressed through regulation and supportive policies, such as reskilling workers for an AI-driven economy.
In the absence of such regulation, corporations and AI developers would do well to adopt business practices that mitigate risks. The Biden administration embraced this idea, emphasizing voluntary commitments from tech companies over formal AI regulation. Many critics, however, view the AI giants’ voluntary self-governance efforts as mere virtue-signaling. That view has gained credence as many companies have backtracked on risk and privacy measures since the beginning of President Donald Trump’s second term. As deregulation has gained political favor, AI companies have quickly jumped on board.
If AI companies cannot be relied upon to self-regulate, then ignoring the scope and severity of AI risks while awaiting an improbable comprehensive federal rule endangers the American public. Reducing regulation, as Congress’ failed moratorium would have done, would only increase Americans’ exposure to AI’s pressing risks. As those risks continue to expand, states and Congress must urgently prepare tailored regulations to manage them. Until the hypothetical day when a comprehensive regime can be successfully developed and passed, we need decentralized regulation, and we may even benefit from it in unexpected ways.
About the Author: Giulia Neaher
Giulia Neaher is a research analyst with the Stimson Center’s Strategic Foresight Hub, where she specializes in responsible AI policy and the sociopolitical impacts of emerging technologies. She holds a Master’s in Public Policy from Harvard Kennedy School, where she was a public service fellow, and previously held roles at the Atlantic Council, Center for AI & Digital Policy, and the Center for Strategic and International Studies (CSIS).
Image: Deemerwha Studio / Shutterstock.com.