A Note on “Sovereignty”

Sovereignty is a word with political, historical, and emotional baggage. Despite this, I use the term “sovereign AI” because it captures the considerations I most want to see discussed: thinking about AI at a national level with a view to the national interest; considering it through a lens of autonomy, self-determination, agency, and control; and situating it within geopolitics and international interdependence. Sovereignty doesn’t mean isolationism.

In my discussion paper, I identify a set of “sovereignty factors” that any AI system can be assessed against: hosting and jurisdiction, cost and equity of access, independence and interruption, privacy and information security, governance and oversight, design and intrinsic properties, customisability, national sovereignty, and upstream and downstream determinants. These factors are a way of opening up what sovereignty actually requires in practice, rather than treating it as a single binary question.

That leads to a practical sequence. Start with AI literacy — a sovereign model that people use incorrectly, or cannot deploy at all, is not sovereign. Build out domestic compute and digital infrastructure — a sovereign model hosted overseas or subject to another nation’s jurisdiction is not sovereign. Invest in fine-tuning and customisation — a sovereign model with little local awareness is not sovereign. Ensure sovereign context and data pipelines are in place. Then, if the foundations are there and the case is strong, consider training new models — or several. Pluralism over monolith.

Sovereign AI is a label for a coordinating policy vision. It sets a direction that helps people understand what regulation might be required, which initiatives could be funded, and how existing rules will be interpreted. If we can agree on a vision for sovereign AI, we can begin to address the information, coordination, economic, and policy problems that are otherwise holding us back.


CC BY 4.0 — Tom Barraclough · tom@brainbox.institute