The United States faces an AI challenge similar to that of the Atomic Age and needs clear federal rules to secure leadership and manage proliferation risks amid great power competition.
On December 8, 1953, President Dwight D. Eisenhower stood before the United Nations (UN) to explain the dawn of the “Atomic Age”—and its risks. He cautioned that “the fearful engines of atomic might are not ours alone,” noting that the Soviet Union’s extensive investments in nuclear weapons had destroyed America’s short-lived monopoly over atomic power. The stakes were unprecedented. There was a risk of “surprise aggression.” There would be proliferation.
Today, a new artificial intelligence (AI) era is dawning—bringing with it challenges similar to those of the Atomic Age. As it looks to craft a responsive policy, the US government would do well to learn from the Atomic Age example. In particular, one regulatory lesson is very clear: When facing a great power challenger and grappling with a disruptive technology, the United States needs a centralized, clear regulatory environment that encourages the positive commercial development on which American global leadership is based.
Learning from the Atomic Age’s Domestic Regulatory Blueprint
Ike’s December 1953 message targeted an international audience. But the domestic policy machine was operating in parallel, generating Congress’s 1954 revision of the Atomic Energy Act. That legislation initiated a novel regulatory framework that has guided American nuclear development ever since. The framework has helped America to innovate, commercialize, and lead in both the civilian and military applications of nuclear technology for successive generations—and to do so safely and securely, despite a perilous global security environment and the disruptive potential of nuclear technology.
The Atomic Energy Act of 1954 revised the original 1946 Atomic Energy Act to enable private-sector commercialization of nuclear technology. But the 1954 Act ensured that such commercialization would take place under the oversight of a centralized suite of federal regulatory authorities, anchored in a single federal regulator, the Atomic Energy Commission (AEC).
Guiding this policy was the logic that commercial development of safe and secure nuclear energy technology would advance both American interests and a global security order imperiled by an era of proliferation. Through regulated commercial development, the United States could harness the power of its market and market norms to head off rampant, unpredictable proliferation—and, potentially even worse, Soviet dominance of the technology.
Washington also used its centralized federal regulatory authority to advance international nuclear cooperation. The AEC directly contributed to peaceful research partnerships—backed by funding under the Mutual Security Act—with countries including not only America’s closest atomic partner, the United Kingdom, but also counterparts as diverse as Brazil and Turkey. Ultimately, the AEC’s mandate contributed directly to the establishment of the International Atomic Energy Agency (IAEA) in 1957.
The fears that Eisenhower raised in December 1953 never materialized. On the contrary, the decades that followed saw relative peace paired with the rapid maturation of civilian nuclear applications—and corresponding American leadership, both commercial and military. The federal government’s clear regulatory approach contributed to this outcome. So did the clarifying effect of the Cold War threat posed by the Soviet Union.
Why Today’s AI Moment Mirrors 1953—and What Washington Should Learn
AI and nuclear energy are very different technologies. Nuclear technology emerged from government-led, military-oriented efforts, and its initial uses demonstrated destructive force and claimed a massive human toll. By contrast, AI has scaled in commercial markets first, under the hand of private actors, and with far less demonstrated destructive force.
But ultimately, both technologies are inherently dual-use. And both are prone, as Eisenhower cautioned back in 1953, to surprise applications and proliferation. Moreover, today’s security environment has much in common with Eisenhower’s. AI’s technological disruption has emerged just as the United States stares down a Communist challenger in a Cold War struggle.
In 1953, faced with the dual challenges of technological upheaval and great power conflict, Washington responded with clear federal regulation that both empowered and oversaw the private sector. Today, the United States should take a similar approach to AI. Fragmented, state-level regulation can neither harness nor restrain the technology’s disruptive potential.
Eisenhower was clear-eyed about the challenges of the Atomic Age. But he also saw the opportunity. His “Atoms for Peace” initiative pursued strategic—and moral—US interests. He sought not only to protect against downside risk but also to propel American nuclear leadership. As he said, “occasional pages of history do record the faces of the ‘Great Destroyers’ but the whole book of history reveals mankind’s never-ending quest for peace, and mankind’s God-given capacity to build.” America’s builders can lead in the AI era, but they’ll need a clear regulatory hand from the US federal government.
About the Authors: Emily de La Bruyère and Nathan Picarsic
Emily de La Bruyère is a senior fellow at the Foundation for Defense of Democracies, focusing on China policy, and a cofounder of Horizon Advisory, a supply chain data firm. She has a BS from Princeton University and an MA from Sciences Po, where she was a Michel David-Weill Fellow.
Nathan Picarsic is a senior fellow at the Foundation for Defense of Democracies, focusing on Chinese global and economic strategy. He is also a cofounder of Horizon Advisory, a supply chain data firm. He has a BA from Harvard College.
Image: US government/Wikimedia Commons