The development of artificial intelligence cannot be stopped. Yet, that doesn’t mean it cannot be realigned with human-centered priorities.
Editor’s Note: The Red Cell series is published in collaboration with the Stimson Center. Drawing upon the legacy of the CIA’s Red Cell—established following the September 11 attacks to avoid similar analytic failures in the future—the project works to challenge assumptions, misperceptions, and groupthink with a view to encouraging alternative approaches to America’s foreign and national security policy challenges.
Red Cell
The belief in the inevitability of artificial intelligence (AI), the promise of boundless benefits, and the fear of losing to China or to each other are driving American AI industry rivals into a furious race for AI dominance. The race is accelerating with insufficient regard for the risks, many of which AI creators and corporate leaders themselves have warned about. Concerns about safety, human impact, and the inability to prevent catastrophic outcomes have been downplayed in the pursuit of speed and supremacy. The challenge is not to stop the train or even slow it down—rather, it is to shift direction to ensure that innovation remains aligned with human objectives.
Techno-Utopia: “More Everything, Forever”
“We are building a brain for the world,” explains OpenAI CEO Sam Altman in a blog post about artificial superintelligence, defined as AI that exceeds human intelligence. Robots could one day run entire supply chains by extracting raw materials, driving trucks, and operating factories. More remarkably still, some robots could manufacture other robots to build chip fabrication plants, data centers, and other infrastructure. Machines will not just power the future; they will build it—indefinitely.
Tech leaders and venture capitalists, such as Elon Musk, Mark Zuckerberg, and Marc Andreessen, envision a technologically determined future where AI eradicates disease, reverses environmental collapse, discovers limitless energy, and ushers in an era of abundance. What’s not to like? However, tech leaders also envision AI transforming society so completely that people will retreat into the virtual metaverse, form AI friendships, transact in cryptocurrencies, and rely on a universal basic income (UBI) funded partly by tech companies.
Some see an extreme, darker vision taking shape, in which nation-states gradually dissolve into network states governed by technocratic elites—a modern version of Plato’s philosopher kings. Realistic or not, this vision is wholly transformative in its ambition, yet it does not adequately address the downside consequences. It resembles Aldous Huxley’s dystopian vision in Brave New World, with its core conviction that technology, especially AI, will deliver, as Adam Becker put it, “more everything, forever.”
Technocracy Reborn
Such a world echoes the 1930s Technocracy movement, whose leaders included Elon Musk’s grandfather, Joshua N. Haldeman, and which advocated rule by engineers and scientists. Not surprisingly, it failed to gain traction. In his 2023 blog post, the “Techno-Optimist Manifesto,” venture capitalist Marc Andreessen argues that all problems, natural and technological, can be solved with more technology. The same mindset had already ignited a fierce backlash in 2018, when technology companies’ unchecked power over user data and the content on their platforms shattered consumer trust and the illusion that more technology inevitably leads to progress.
Andreessen identifies “the enemy” of progress as statism, collectivism, central planning, and monopolies. This, too, is familiar: free-market capitalism with minimal regulation is the same environment that fostered the early growth of the Internet. Andreessen’s manifesto treats society like a broken app: flawed, but fixable, as long as developers keep upgrading the technology stack. Yet the enemies he identifies are complex outcomes of human history, power struggles, and clashing ideologies. They are not malfunctions to be overcome with a software patch; they are part of the human condition.
Conspicuously absent from this picture is where humans fit when machines do all the heavy lifting. According to this vision, machines think, build, and make decisions for people—not with them. It is a future of engineered consensus, where politics and the consent of the governed disappear alongside human agency. No wonder some in the industry, like Google CEO Sundar Pichai, support integrating social scientists, philosophers, and ethicists into the conversation—not least to remind Silicon Valley’s data scientists and venture capitalists that human agency must remain intact. That is the part of the software that needs to be patched.
Social Dislocation and the Rise of the Techno-State
It would be a mistake, however, to see the Techno-Utopians as merely profit-seekers or ideologues. They have good reason to believe that AI can solve previously intractable problems. DeepMind’s AlphaFold has already revolutionized biology, earning its creators a 2024 Nobel Prize for accurately predicting protein structures, an advancement that accelerates drug discovery. Google’s earthquake alerts improve public safety, and AI-driven cooling can cut commercial energy consumption by 9–13 percent. AI unequivocally has enormous potential to improve human life.
However, AI also promises to transform civilization at unprecedented speed and with significant disruption. Historically, even in societies with abundant leisure time, people have had purposeful work; talk of universal basic income suggests a future without it. When the Industrial Revolution displaced pre-industrial and rural physical labor, new jobs were created, and change unfolded over a few centuries, giving societies time to adjust. The AI revolution, in contrast, threatens to rapidly replace knowledge workers without offering immediate replacement jobs. Half of entry-level white-collar jobs might vanish within five years.
Today’s business school graduates cannot become tomorrow’s bankers if the path from apprenticeship to employment collapses without something to replace it. Hollowing out the professional class destroys upward mobility by replacing higher-wage, middle-income jobs with fewer and lower-wage roles that merely support automated systems. Furthermore, when AI automates jobs that rely on reasoning, judgment, accountability, ethics, and morality—all human faculties it lacks—performance and safety could suffer.
The relentless push for speed overshadows the essential task of preparing for change. AI is already remaking society, transforming how Americans communicate, work, wage war, and engage in politics. It has become the leading edge of the national securitization of economic decisionmaking and geostrategic competition. The transformation accelerates as the technology hooks users and spreads faster than society can adapt.
China’s rapid deployment of AI in surveillance, governance, and military applications is often invoked to justify further acceleration. However, if America leads in developing cutting-edge AI models while China deploys less advanced ones, the challenge is not just to deploy faster. It is to lead by deploying differently, namely by prioritizing safety, transparency, and preparedness for disruption, thus anchoring American AI deployment in human-centric principles rather than geopolitical urgency.
In a recent Foreign Affairs essay, American political scientist Ian Bremmer describes this new world as “technopolar,” where the technology industry is an omnipresent, non-state actor that shapes geopolitics and public life. Without question, the major US technology companies, or “Big Tech” firms, are assuming roles that were historically the domain of nation-states, and in some respects, have become intertwined with the state. While it is unclear whether they constitute a global “pole” like China, Russia, or the European Union (EU), Big Tech certainly represents a new—largely unaccountable—form of power.
Big Tech Channels the British East India Company
A better analogy for Big Tech’s weight is the British East India Company, which, from 1600 to the early 1800s, acted as a sovereign entity with immense autonomy backed by the British Crown. The East India Company forged a large part of the British Empire, acquiring territory with its own army and managing it with its own administration, while giving the British government a cut of the profits.
It took about two centuries for the nation-state to reassert itself and dissolve the company. Likewise, today’s technology giants have amassed enormous power, influencing state behavior through their control of digital infrastructure and platforms that have become integral to modern society. Can nation-states reassert sufficient control?
Probably, but not anytime soon. Big Tech is shaping US government policy (as reflected in the administration’s recently released AI Action Plan), partly because Congress has only recently begun to act and partly because of the rapid pace of change.
Furthermore, since the Chinese startup DeepSeek debuted its AI model, geopolitics has fostered a symbiosis between government and industry. Unlike in China, where technology companies are firmly under the control of the party-state, in the United States, Big Tech is influencing the government’s approach. The industry’s political footprint in Washington has expanded even as public scrutiny has grown: US technology giants spent $61 million on lobbying in 2024. Musk alone donated $288 million in campaign funds last year.
As AI’s full social, economic, and geostrategic impact begins to be felt, AI’s creators are increasingly acknowledging that they do not fully understand how these systems function. Many systems perform well in low-stakes, error-tolerant environments. However, engineering control becomes urgent in high-impact domains, such as finance, law, healthcare, and defense, where failures have severe consequences. Hallucinations—false or fabricated responses—remain common in complex reasoning tasks, occurring in nearly 80 percent of cases on some evaluations.
If the engineers who design these systems cannot satisfactorily explain how they work, can AI truly be controlled? DeepMind cofounder Mustafa Suleyman is unsure. Others say that identifying failure modes is difficult, rendering a “kill switch” (to shut down AI if it fails) unrealistic. Yoshua Bengio, a pioneer in the field, has called for investment in advanced monitoring to control agentic AI, which can act autonomously, before “we build things that can destroy us.”
Not everyone agrees. Meta’s chief AI scientist, Yann LeCun, another foundational voice, rejects AI “doomerism.” Current models are far from achieving autonomy or general intelligence, he contends, and safe, controllable AI is fundamentally an engineering problem. Yet his proposed solution—open-source innovation to improve testing and debugging—would also give China access to the resulting models. For the moment, proprietary models still dominate the US ecosystem, though interest in open-source alternatives is growing.
No Guardrails, Fragmented Governance
Even as risks have become more visible and concerns more widespread, regulation has not kept pace. International efforts, such as the United Kingdom (UK)-hosted Bletchley Declaration in 2023, offered hope for coordinated governance among major AI powers. The United States, China, and the EU committed to safe, human-centric development. But at the 2025 Paris AI Action Summit, both the United States and the UK declined to sign a follow-up declaration on ethics and safety. In fact, Vice President JD Vance publicly pushed back against “excessive” safety-oriented regulations.
The reality, however, is that the science for evaluating performance and safety is inadequate. This is particularly concerning as AI advances from being a tool that augments human capabilities—such as generative AI (think ChatGPT)—to being “agentic” and acting autonomously without human supervision (think driverless vehicles). Despite the associated exponential increase in risk, only 2 percent of global AI research focuses on safety, and only 0.3 percent of total AI funding goes to human alignment research.
Many leading safety researchers have exited Big Tech firms in frustration. Meanwhile, binding global standards and enforceable oversight mechanisms for agentic AI do not exist. There are no technical protocols to address misalignment (when AI pursues goals its designers did not intend) and no trusted mechanisms for fail-safe intervention (e.g., a kill switch).
Although the pace of AI development suggests the need for flexible regulations, the United States appears to be an outlier in this regard. Both China and the EU, despite their different governance systems, are more focused on safety. In this fragmented regulatory landscape, global coordination becomes both more difficult and more urgent.
Ignoring Warnings About AI
Now is the moment to step back and revisit first-order questions: Where is AI taking humanity? Will it replace people? Even Techno-Utopians like Sam Altman and Elon Musk have expressed alarm. Musk estimates there is a 20 percent chance that AI could destroy humanity; Altman believes it could overpower people entirely.
In March 2023, Musk, along with technology entrepreneur Steve Wozniak and dozens of AI scientists, signed an open letter calling for a pause in the training of more advanced AI systems, asking bluntly: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” Altman did not sign the letter, but it quotes OpenAI’s statement that advanced AI projects should limit the rate at which computational resources are increased for model training.
Nevertheless, AI development is racing ahead at warp speed. Developers are engaged in a dizzying contest to seize the future, fueled by belief in AI’s inevitability and fear of missing out on a financial and geopolitical bonanza. Irrational exuberance has overtaken market discipline. Despite delayed returns and unresolved risks, capital is pouring into AI development with near-religious fervor: $471 billion flowed into US AI ventures between 2013 and 2024, with another $300 billion projected for 2025 alone.
Yet the risks are not hypothetical, and the warnings are clear, often issued by those building the systems themselves. Accountability is missing, sacrificed at the altar of acceleration, leaving little room for oversight or correction. Without common global rules to safeguard human-centered innovation, AI governance risks being shaped solely by power and profit. The longer this dynamic persists, the harder it will become to integrate ethics and foresight into technological development, with likely profound consequences for humanity.
The Future, Unsupervised
Rarely has a technology with such transformative power—and such visible, well-understood risks and disruptions—been unleashed with so little preparation. As Anthropic CEO Dario Amodei recently warned, “You can’t just step in front of the train and stop it.” But you can—and must—“steer it 10 degrees in a different direction from where it was going.” That is the central challenge for the United States in the AI race.
About the Authors: Robert Manning and Ferial Ara Saeed
Robert A. Manning is a Distinguished Fellow at the Stimson Center, working on Strategic Foresight, China, and great power competition.
Ferial Ara Saeed is a nonresident fellow at the Stimson Center and a former senior American diplomat. She is the founder of Telegraph Strategies LLC, a consulting firm.
Image: Gorodenkoff / Shutterstock.com.