
Why Donald Trump’s AI Strategy Needs More Safeguards

Like nuclear energy, AI is a transformative technology that could face a severe backlash if the right precautions are not taken.

President Donald Trump has revealed his highly anticipated Artificial Intelligence Action Plan, which declares that “America is in a race to achieve global dominance in artificial intelligence (AI),” and that winning this race “will usher in a new era of human flourishing, economic competitiveness, and national security for the American people.” The AI Action Plan, in line with Vice President JD Vance’s remarks at the AI Action Summit in Paris earlier this year, envisions a future where powerful AI is ubiquitous, fueling unprecedented prosperity for Americans. 

But brushing off safety concerns is more likely to undermine that vision than accelerate it. Decades ago, many people were just as optimistic about nuclear power—until a high-profile accident at the Three Mile Island plant sparked intense backlash and nearly killed the industry. To avoid making the same mistake, US AI leaders should learn from the post-Three Mile Island transformation of nuclear power and adopt a learning-focused approach to self-regulation, a proven strategy for reducing the risks associated with powerful, complex technology and making it viable to deploy at scale.

Although AI has enormous potential for enhancing American economic competitiveness and national security, simply developing better AI models isn’t enough. History shows that new technologies must be deployed broadly in order to strengthen a nation’s overall power. For instance, the Soviet Union graduated twice as many scientists and engineers as the United States during the early Cold War, but struggled to introduce new technologies into its economy, contributing to its collapse. 

In contrast, the United States trailed many European countries in scientific discovery during the Second Industrial Revolution, but soon pulled far ahead by mobilizing public universities and industrial firms to commercialize and deploy new technologies rapidly.

Given this, the United States will not enjoy the benefits of its state-of-the-art AI models until they are deployed broadly. Still, AI adoption continues to face hurdles in the United States—often for good reasons. Concerns over inaccuracy, intellectual property infringement, and cybersecurity have left companies hesitant to integrate AI into their business operations, and some have even banned their employees from using it. Personal injury firm Morgan & Morgan recently warned its employees about the consequences of using AI after two lawyers at the firm cited fake case law generated by AI; those lawyers are far from the only ones to have done so.

These current harms and frictions pale in comparison to what might be coming. Leading AI developers are already warning about catastrophic risks from advanced AI models, which could result in “hundreds of billions of dollars in economic damage” or lead to the deaths of untold numbers of people. For example, both OpenAI and Anthropic now warn that their current models could (without mitigations in place) help terrorists develop bio-weapons, and it’s likely just a matter of time before similar AI capabilities spread widely. 

While these risks remain mostly theoretical for now, they will become increasingly salient to the wider public as AI grows more powerful and more integrated into critical systems. Polls indicate that more Americans are already concerned than excited about AI, and 82 percent of Americans would prefer slowing down AI development rather than speeding it up. A single highly visible incident of harm caused by an AI-enabled system could provoke severe public backlash and halt further deployment or development of AI.

We’ve seen the public lose faith in promising technologies before. Just like AI, nuclear energy was once believed to hold great potential. But when operators at the Three Mile Island nuclear plant failed to prevent a partial meltdown that released radioactive materials into the environment in 1979, the massive public outcry over the risks of nuclear power resulted in the cancellation of 67 planned builds between 1979 and 1988. Contemporary polls indicated that 43 percent of Americans thought all existing nuclear power plants should be closed down until action was taken to prevent another accident.

In the face of existential threats to their industry, nuclear power companies established the Institute of Nuclear Power Operations (INPO) to coordinate a learning-focused approach to self-regulation. INPO’s key actions included defining industry-wide standards and best practices for personnel management, establishing mechanisms such as standardized incident reporting to enable plant operators to learn from each other’s experiences, and evaluating nuclear power companies on their safety performance. Plant operators quickly adopted INPO’s standards and best practices to avoid ranking last among their peers.

While INPO’s efforts helped improve the safety and reliability of nuclear power plants, the industry never recovered the momentum it had before Three Mile Island. To ensure the United States maintains its competitive advantage in AI, the federal government must take action before an accident occurs.

INPO’s model of industry-led self-regulation shows promise for AI, and striking similarities have already begun to emerge. The call to shut down existing nuclear power plants echoes the call for a six-month pause on AI development, as well as growing public sentiment against the technology. The formation of INPO mirrors that of organizations like the Frontier Model Forum and the Partnership on AI, which seek to build a similar community among AI developers. Much like modern AI models, nuclear plant technologies were new, complex, and poorly understood at the time.

That being said, nuclear plants differ from modern AI models in important ways. One cannot buy access to a nuclear plant or download fissile materials from open-source repositories. Additionally, there are simply more opportunities for accidents with AI systems than with nuclear plants, access to which is generally restricted to trained and certified plant employees. Nonetheless, managing the emergent risks of nuclear plants required a process of iterative regulation, focused on learning from experience and adjusting over time, that may be apt for AI as well. Conventional regulatory bodies lack the domain knowledge and agility necessary to implement this approach effectively for AI. However, these problems can be avoided if the industry takes the initiative to regulate itself.

Without an accident like Three Mile Island to spur action, the AI industry lacks the urgency it needs to implement effective self-regulation. When public fears over nuclear safety threatened to end the industry altogether, nuclear power operators had good reason to hold each other accountable for their safety performance; AI has not yet reached that point.

True, many AI companies have made voluntary commitments towards improving the security and reliability of their models. Still, talk is cheap, and no authority or critical mass of public opinion is holding them accountable. Additionally, the objective rankings and safety measures that INPO implemented for nuclear plants are difficult to replicate with AI. Existing benchmarks and evaluations for model capabilities are inadequate, and problems like AI hallucination and data poisoning remain fundamentally unsolved. 

Surprisingly, government involvement may be the missing ingredient. While AI companies currently lack strong incentives to regulate themselves, federal action can catalyze the adoption and coordination of a learning-focused regulatory approach to prevent a catastrophic accident before it happens. There are several promising avenues.

First, the US government should require AI companies to develop, publicize, and adhere to credible risk management plans, without requiring specific actions or policies to be included in those plans. This would give companies the flexibility to tailor their approach to their particular circumstances and products while providing accountability and transparency to the public. Many leading AI developers have already published risk management plans, but these plans must be made legally binding in order to prevent companies from compromising on safety measures when they become inconvenient. 

Agencies like the Center for AI Standards and Innovation (CAISI) are well-positioned to vet risk management plans and verify company compliance. These agencies can also work with AI developers to reflect and iterate on their risk management plans as new risks and risk management techniques for AI emerge over time. Effective implementation and iteration of these policies and practices would help mitigate the risks associated with advanced AI models and improve public confidence in their use.

Second, the US government should push AI companies to share safety-relevant knowledge with one another and other stakeholders, thereby fueling collective learning from experience. This could include establishing reporting requirements, mandating public disclosure of unexpected or concerning capabilities found in new models, or setting up channels for sharing threat intelligence between AI companies and the government. These measures would help build a shared learning environment among leading AI developers, overcoming the narrow commercial incentives that lead each to keep valuable information to themselves.

Finally, the US government should support the development of evaluations, including benchmarks and other measurement techniques, for AI capabilities and risks, with the goal of eventually establishing performance requirements based on those benchmarks. Appropriate benchmarks might test a model’s resistance to jailbreak attacks, usefulness for developing chemical, biological, radiological, and nuclear (CBRN) weapons, capacity for deception, and other relevant attributes.

As these benchmarks mature, regulators should consider placing restrictions on the development and deployment of models that fail to meet industry-standard performance levels. Having a strong understanding of the risks associated with deploying a given model would allow AI developers to gauge the effectiveness of their risk mitigations and enable regulators to prevent the release and deployment of hazardous models.

These recommendations do not replace industry self-regulation; instead, they enable it. By requiring AI companies to adhere to their own risk management plans and supporting knowledge-sharing and evaluation efforts, the US government can ensure that AI systems are developed and deployed safely, while leaving the details of implementation in the hands of those most familiar with the technology.

While there will always be risks associated with deploying new technology, nuclear power has shown that a process of iterative, learning-focused regulation can help prevent accidents from derailing promising technologies and propel future innovation. By removing barriers to effective self-regulation, the government can prevent an AI Three Mile Island before it happens and position the United States as a leader in both AI safety and opportunity.

About the Authors: Adrian Thinnyun and Zachary Arnold

Adrian Thinnyun is a Horizon junior fellow at Georgetown University’s Center for Security and Emerging Technology (CSET), where he works on the Emerging Technology Observatory (ETO) team.

Zachary Arnold is the former project lead for the Emerging Technology Observatory (ETO) at Georgetown University’s Center for Security and Emerging Technology (CSET).

Image: Brian Jason / Shutterstock.com.
