
Beyond the Space Race: Collaboration and Competition in the Future of AI Governance

The AI race mirrors Cold War rivalries but lacks a clear finish line. Fragmented governance and zero-sum narratives risk overshadowing inclusive, collaborative approaches to innovation.

Editor’s note: The Red Cell series is published in collaboration with the Stimson Center. Drawing upon the legacy of the CIA’s Red Cell—established following the September 11 attacks to avoid similar analytic failures in the future—the project works to challenge assumptions, misperceptions, and groupthink with a view to encouraging alternative approaches to America’s foreign and national security policy challenges. For more information about the Stimson Center’s Red Cell Project, see here.


Is there a finish line for the artificial intelligence (AI) race? The relentless pursuit of artificial intelligence is reshaping our world, challenging our ethics, and redefining what it means to be human. But what does the AI race mean for geopolitics? And what are we racing for or towards? 

AI and the Zero-Sum Mindset 

On July 23, the US government released a document entitled “Winning the Race: America’s AI Action Plan.” This plan identifies over 90 federal policy actions that the US government will take in the coming months with the express goal of “winning” the AI race. The “AI race” concept is rooted in Western notions of winner-takes-all capitalism, framing innovation as a zero-sum game. While that paradigm may mirror elements of the Cold War, it is poorly suited to the complexities of a technology that demands cross-sector, cross-cultural collaboration and sustained ethical reflection. 

Policymakers on both sides of the aisle argue that the AI race must be “won,” while business leaders and innovators sell a narrative that AI will empower everyone. By portraying AI development as a high-stakes competition with clear winners and losers, this framing amplifies fears of being left behind or dominated by others, which may prove counterproductive for US technology leadership in the long run. So is there necessarily a “finish line” in AI innovation, or is progress more of a continuum?

Lessons from the Space Race 

The AI race is often compared to the space race between the United States and the Soviet Union as part of a wider narrative that highlights how zero-sum competition can drive rapid innovation while also heightening global tensions. During the Cold War, the United States and the Union of Soviet Socialist Republics (USSR) engaged in a two-player contest for aerospace dominance, driven by national pride and security concerns. While the space race produced landmark achievements and ultimately catalyzed international agreements like the United Nations’ (UN) Outer Space Treaty in 1967, which declared space the “province of all mankind,” contemporary rhetoric reflects a retreat from that cooperative spirit. Statements made by US officials as recently as this year emphasize the need to return to the moon before China and exemplify a resurgence of strategic nationalism. For example, Scott Pace, Executive Director of the US National Space Council under Trump, stated: “Outer space is not a ‘global commons,’ not the ‘common heritage of mankind,’ not ‘res communis,’ nor is it a public good.” This framing mirrors the zero-sum logic now applied to AI, where concepts like the global commons are sidelined in favor of national dominance and control.

Beyond Cold War Analogies: The Nature of AI Progress 

Just as the space race stirred visions of scientific progress and fears of planetary catastrophe, the AI race has become a battleground of competing futures. Cold War-era utopianism imagined space as a shared frontier, while dystopian anxieties warned of mutually assured destruction. Today, AI evokes similar contradictions. Promises of empowerment and abundance stand alongside warnings of surveillance, inequality, and existential risk. Both eras reveal how technological ambition can amplify collective anxiety, especially when global cooperation is sidelined by strategic rivalry. The current public narrative about the future is characterized by uncertainty, anxiety, and reactive governance, much of it driven by speculative scenarios rather than the present realities observed by the majority of people on Earth.

However, the analogy breaks down when we consider the scope and pace of innovation. The impact of AI innovation is expected to be felt across all sectors of the economy and society, while the impact of the space race was largely limited to the telecommunications and aerospace sectors. Cold War-era technological progress unfolded over years and decades, with clear milestones like entering orbit or landing on the moon. These moments provided closure and public validation. In contrast, AI development is continuous and diffuse, evolving over days and weeks rather than years, with no definitive endpoint, making it harder to declare a “winner” or measure success in absolute terms. While first-mover advantage in the space race translated into operational milestones, the advantage in the AI race may be more ephemeral, given the nature of AI development. 

This ambiguity is especially pronounced in discussions of artificial general intelligence (AGI), which is often described as the goal of the AI race. While many consider AGI to be an AI system that can understand, learn, and apply knowledge across a wide range of tasks at a human-like level, there is no universally agreed-upon definition of the concept, and vibrant debate continues among researchers over whether, how, and when it can be achieved. This definitional ambiguity means that AGI may arrive as a gradual realization rather than a definitive milestone. As we are already seeing, each AI innovation builds upon previous achievements, paving the way for new possibilities and applications and creating a continuous cycle of improvement and refinement. Framing AGI as a final destination oversimplifies the reality of AI innovation as a complex, evolving endeavor driven by iterative breakthroughs and shaped by diverse global actors across public, private, and civic domains.

Fragmented Global Governance 

As the pace of this innovation accelerates without a clear endpoint, the global governance landscape for AI is fragmenting rather than converging as the United States, China, and the European Union (EU) advance distinct regulatory and technological agendas. Aiming to replicate its success with the General Data Protection Regulation (GDPR), the EU’s approach prioritizes risk management and aims to set a global benchmark for safety, but its stringent requirements may struggle to keep pace with rapid innovation. China’s state-led model, steeped in “core socialist values,” blends centralized control with rapid industrial scaling through firms like Tencent, ByteDance, and Alibaba. Despite US efforts to curb Chinese advancement through export controls, companies like DeepSeek continue to innovate around constraints. Meanwhile, the United States excels in frontier model development and private sector-led investment, but its fragmented regulatory landscape, characterized by sectoral gaps and state-level initiatives such as California’s SB-1047, hinders its capacity to create a cohesive global vision for responsible AI innovation. 

The US decision in 2022 to impose unilateral export restrictions on advanced chips and AI software signaled a decisive turn toward technological decoupling, with significant consequences for the tech industry. Instead of harmonizing global standards, it spurred the EU to enact the European Chips Act in 2023. The result is overlapping export control regimes that compel businesses to certify compliance with multiple international rulesets.

Back home, the lack of a single federal roadmap has produced its own challenges. In fiscal year 2025 (FY25), five US federal agencies, namely the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the Department of Energy (DOE), the National Institutes of Health (NIH), and the United States Department of Agriculture (USDA), launched overlapping AI trustworthiness pilots, duplicating efforts that could have benefited from shared resources. These initiatives, while well-intentioned, reflect a broader challenge faced by many governments: balancing innovation with regulation in a rapidly evolving landscape.

The discourse around open-source AI further reveals the complexity of American and Chinese approaches to AI governance. Open-source AI models and tools are poised to provide significant benefits for developing countries by offering accessible and affordable tools for AI innovation. While China promotes open-source AI as inclusive and cooperative, models like DeepSeek-R1 and Spark X1 serve strategic interests by shaping technical standards and reducing reliance on the US technology stack. In contrast, American AI leaders advocate for open-source AI but with significant commercial, legal, and strategic caution. After consulting with over 200 government officials, specialists, and employees from leading AI companies, a 2024 report released by Gladstone AI recommended banning the open sourcing of model weights for frontier models, with violations potentially punishable by jail time. The controversial report was commissioned by the US government and sparked fierce debate in the AI industry.

Rethinking the AI Narrative 

So how should we navigate the rapid innovation and anxiety of a changing world? How can we think clearly in the AI hype cycle we find ourselves trapped in? One place to start is with the stories we tell about AI innovation. The narratives, priorities, metaphors, and ethical frameworks that dominate discussions are primarily shaped by Western institutions, corporations, and policymakers. This narrow lens marginalizes the voices, experiences, and innovations of non-Western communities, perpetuating a skewed understanding of AI’s global impact. 

By framing AI innovation as a binary endeavor, the Western-centric perspective on AI creates winners and losers, failing to account for the complex cultural, social, and economic contexts in which AI innovation is occurring globally. In response, governments across the Global South are rallying for digital sovereignty and creating AI governance frameworks more tailored to local concerns. For example, Kenya and Rwanda have passed data-localization laws to keep personal and organizational data under strict local control. Meanwhile, countries like India and Brazil have published national AI strategies that emphasize a more pluralistic approach to AI governance rooted in regional priorities such as agriculture and healthcare. 

Inclusive AI Innovation 

You can’t win an AI race, but you can promote better outcomes for more people simply by including them and ensuring that the benefits of AI innovation are broadly shared. Although dystopian futures may emerge from a variety of conditions stemming from unchecked competition, bias, and imbalanced power dynamics, the most promising outcomes in AI arise from inclusive collaboration. This is because AI systems are only as good as the data they were trained on, and that data must be representative of the people they serve. An AI system trained by and for California residents, for example, is unlikely to work the same when deployed in Cameroon. Fragmented standards and regulatory regimes exacerbate common problems such as inadequate human oversight, jailbreak risk, and a lack of transparency around how AI systems function. Therefore, if we are innovating ways for AI to empower everyone, then inclusive design is not just a moral imperative: it is a strategic advantage. If AI continues to be framed as the next frontier of great-power competition rather than a shared global endeavor, we risk designing systems that are ill-equipped to solve the complex, interdependent problems of a pluralistic world. Instead, AI systems will perpetuate historical cycles of dependency, where the benefits of AI are unevenly distributed and the burdens disproportionately borne.

The internet and the global positioning system (GPS) are prime examples of innovation born from global collaboration. These technologies emerged through a collective effort among people from diverse backgrounds, ensuring their benefits were widely shared. The internet, initially a military project, evolved into a global communication network thanks to contributions from engineers, scientists, and policymakers across continents. Its open architecture and decentralized design reflect the input of diverse voices, enabling it to become a platform for innovation, education, and social connection. Similarly, GPS, originally developed for military navigation, was transformed into a tool for global accessibility through international collaboration. By integrating insights from various fields, including geospatial science, engineering, and user experience design, GPS became a cornerstone of modern life, supporting applications from transportation to disaster response.

Building a Global AI Ecosystem 

To foster a truly global ecosystem for AI development and deployment, it is essential to create institutions that actively support participation from underserved regions and communities. This means investing in digital connectivity, localized AI education, and the development of increasingly representative data sets that empower more people to seize opportunities to innovate. For instance, the African Union’s AI Data Policy Framework (2023) represents a regional effort to pool resources and set homegrown standards, while the Association of Southeast Asian Nations’ (ASEAN) AI governance working group, launched in early 2025, seeks to build cross-border data commons in Southeast Asia. US senators from both parties have floated similar “AI for Good” funding vehicles, reinforcing that inclusivity in AI is a shared US value. Open-source collaboration platforms and regional data commons can help break down silos and promote knowledge exchange across geopolitical divides. Inclusive policy design also entails recognizing pluralistic ethical frameworks beyond dominant Western models, such as Ubuntu from southern Africa or Buddhist relational thinking in East Asia, to inform more culturally resonant approaches to governance. 

Ultimately, a globally representative AI ecosystem is not just about redistribution—it’s about redefining who gets to imagine, build, and decide the future of intelligent systems. These are profound and unprecedented times. As we navigate the AI era, we must resist the temptation to frame progress as a zero-sum game or a metaphorical race to be decided by a select few. Instead, we should embrace the diverse perspectives and talents of the Global South, whose contributions are vital to shaping a future that is equitable and sustainable. By fostering global inclusivity, we can ensure that AI becomes a tool for empowerment rather than division, unlocking its full potential to benefit humanity as a whole.

About the Author: Max Scott

Max Scott is a leading expert in responsible AI and serves as CTO of StratAlliance Global, an AI systems integrator operating at the intersection of federal procurement, frontier technologies, and global policy. As a U.S. diplomat, he helped shape international norms for ethical innovation and later oversaw the safe deployment of some of the world’s highest-risk AI models and applications at Microsoft’s Office of Responsible AI. 
