Sovereign AI is less about isolation than about ensuring local control, trust, and value alignment in a globally interdependent ecosystem.
A willingness to be vulnerable is central to building trust. Yet for nations competing in the AI ecosystem, a key tension has emerged: how to manage reliance on US and Chinese infrastructure while upholding local norms and values. This challenge has pushed decision-makers to design risk management strategies that avoid compromising domestic ideals—especially amid legal conflicts like those between the EU’s GDPR and the US CLOUD Act.
In practice, full AI sovereignty is unrealistic due to the deep interdependencies within the global AI ecosystem. Still, efforts to assert sovereignty reflect underlying trust concerns, national security interests, and a desire for innovation-led prestige. For its part, the US should acknowledge that its allies may seek autonomy in targeted AI domains to achieve national wins without severing ties to the broader ecosystem.
Meanwhile, US hyperscalers must confront persistent skepticism about data privacy, particularly given the US government’s legal authority to access data stored by American service providers. Recognizing this limitation can foster a more honest dialogue and mutual understanding. Choosing local infrastructure, in this light, is often less about rejecting global collaboration and more about aligning AI development with national security priorities and cultural values. That, too, is a form of sovereignty.
Nations Weigh Competition Against Norms and Risk
Three main approaches to sovereign AI are emerging. The first seeks to eliminate as much external influence as possible, aiming for full national control. The second favors a more open model based on strategic autonomy—balancing collaboration with independence. The third focuses on leveraging local norms and regulations to shape the behavior of external hyperscalers, even amid unresolved data privacy concerns. More broadly, the idea of sovereign technology spans multiple domains, including cybersecurity, digital infrastructure, data governance, and AI.
China Pursues Total Control of Its AI Infrastructure
The People’s Republic of China aims to minimize foreign influence over its technological development. Under the Personal Information Protection Law (PIPL) and Data Security Law (DSL), all personal data must be stored within China, and cross-border data transfers are tightly regulated. Such transfers are only permitted in limited cases—typically when the data is classified as general rather than personal or “important.” These measures reflect China’s broader sovereign cybersecurity strategy, which includes strict control of its internet domain through the Great Firewall, blocking access to selected foreign websites and cross-border internet traffic.
Europe Pushes for Autonomy While Managing Dependence
The European approach to sovereign AI emphasizes stronger protections for data-sensitive institutions, such as government agencies, while acknowledging ongoing dependencies on US technology providers. Recent research has called for a more nationalistic—or Euro-centric—model of technology adoption, citing initiatives like the “Buy European” framework and President Macron’s call for prioritizing local AI firms in public procurement.
Additional policy proposals urge EU governments to contract with European-based cloud providers over US companies, reflecting divergent interpretations of key values like data privacy. The European Data Protection Board has also raised concerns about the extraterritorial reach of the US CLOUD Act, which can conflict with GDPR and create legal uncertainty around the cross-border transfer of personal data.
The UAE Embraces Partnerships to Balance Sovereignty
The United Arab Emirates, by contrast, approaches sovereign AI through carefully managed partnerships. The collaboration among the Abu Dhabi government, Microsoft, and domestic firm Core42 exemplifies a strategic blend of local and global capabilities. While the UAE’s sovereign public cloud runs on Microsoft Azure infrastructure, it integrates Core42’s Insight offering to meet data sovereignty needs—providing control sets, audit and assurance tools, and streamlined compliance features. By emphasizing technical safeguards such as strong encryption and strict access controls, the UAE builds trust through robust data security. This model addresses sovereignty concerns through technical means, without resorting to full isolation.
Technical Innovation Can Help Rebuild Cross-Border Trust
The ability to share and analyze data across borders—so its benefits can be more widely realized—depends on minimizing privacy risks to personal data. Emerging technical solutions like differential privacy and synthetic data offer promising alternatives to traditional de-identification methods. Establishing trust through credible internal safeguards and procedures can help bridge value misalignments stemming from differing interpretations of data privacy.
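To make the differential privacy idea concrete, the sketch below shows the classic Laplace mechanism applied to a count query: an analyst learns an approximate statistic while any single individual's contribution is masked by calibrated noise. This is a minimal illustration, not a production implementation; the function names (`dp_count`) and the example records are hypothetical, and real deployments would use a vetted library rather than hand-rolled noise sampling.

```python
import math
import random

def dp_count(records, predicate, epsilon: float) -> float:
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon gives epsilon-differential privacy for this query.
    """
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon  # noise scale = sensitivity / epsilon
    # Sample Laplace(0, scale) noise by inverse-CDF transform of a
    # uniform draw on (-0.5, 0.5).
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical usage: count adults in a toy dataset. Smaller epsilon
# means more noise and stronger privacy; larger epsilon means a more
# accurate but less private answer.
records = [{"age": a} for a in (21, 34, 45, 17, 60)]
noisy = dp_count(records, lambda r: r["age"] >= 18, epsilon=0.5)
```

The key design point, and the reason such techniques matter for cross-border trust, is that the privacy guarantee holds regardless of who receives the output or what auxiliary data they hold, which is a stronger promise than traditional de-identification can make.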
One possible model of AI sovereignty involves partnering with American hyperscalers alongside domestic companies, while ensuring alignment with local norms—supported by the latest advancements in data protection. This approach reflects recent research showing that “AI middle countries,” navigating between Chinese and US cloud providers, are increasingly diversifying their dependencies to safeguard against supply chain vulnerabilities.
AI Sovereignty Must Be Defined Locally, Not Imposed
Varying interpretations of sovereign AI should empower countries to assert agency over their own norms and values, developing governance frameworks tailored to their unique contexts and concerns. This kind of local agency helps guard against soft power influences that seek to impose a singular interpretation of shared principles, such as data protection. For the US, a key consideration in promoting its AI technologies abroad is to respect these local contexts and pursue collaborative, mutually beneficial solutions that align with commercial interests.
As more nations seek access to advanced AI technologies and a voice in shaping global governance, both governments and multinational tech companies will need to remain flexible—developing home-grown solutions that reflect diverse value systems. Encouragingly, a range of models for AI sovereignty is emerging, reinforcing the idea that sovereignty is self-defined, not externally dictated.
One path to responsible AI in a deeply interconnected world may be granting nations the authority to determine how technology companies operate within their borders—allowing them to implement AI in ways that reflect their own values while engaging in the global ecosystem.
About the Author: Averill Campion
Averill Campion is a consultant at the New Lines Institute for its portfolio on tech sovereignty and security. Originally from Texas, she has spent the past decade gaining international experience, earning a PhD from ESADE Business School in Barcelona, Spain, and an MPA and MSc from University College London and Aston Business School in the UK. Her research focuses on international collaboration in the context of AI adoption, and her scholarly work has been published in international peer-reviewed journals.