About the Diagram
This interactive 3D model visualises two contrasting approaches to sovereign AI policy architecture. It was developed as part of ongoing work by Brainbox Institute on AI sovereignty and regulation in New Zealand, drawing on a growing body of international research on how countries can — and cannot — pursue meaningful agency over AI systems.
The core problem
AI is not a single technology. It is a transnational stack of interdependent components: talent, funding, compute hardware, energy, data, models, applications, governance, and more. Each layer carries its own dependencies, concentration risks, and policy trade-offs (Kerry et al., 2026; WEF, 2026).
But the difficulties are not only structural. Any claim to build a “sovereign” or nationally representative AI system immediately imports contested social and political questions — about whose values it reflects, whose data it trains on, whose interests it serves, and who bears the consequences when it fails. Where public funds or public data are involved, these questions become fiscal, legal, and democratic accountability problems as well.
When discussions turn to “sovereign AI,” a common assumption is that achieving it means building a complete domestic AI stack — a single entity (typically government) orchestrating everything from human capital and funding through to data acquisition, model training, legal compliance, and deployment. This is what the diagram calls the Single Vertical Model.
The Single Vertical Model (Monolithic Approach)
The left side of the diagram depicts the monolithic approach. A single pipe must pass cleanly through every layer of the stack — the “perfect shot.” Every layer crossing is a potential friction point, marked with a red X.
The concept emerged from a simple observation: if you list all the things a single sovereign AI entity would need to simultaneously align — human capability, funding, compute, environmental footprint, data acquisition and consent, curation, training, fine-tuning, legal compliance, corporate governance, access control, and ongoing model innovation — the resulting complexity is almost certainly beyond any single entity's capacity to manage (Barraclough, 2025). Attempting to orchestrate all these factors simultaneously creates what the diagram labels the Systemic Alignment Trap.
The trap is not only technical. A monolithic sovereign AI that claims to be representative or to carry national character must answer fundamentally political questions that have no clean resolution: What if the model is wrong, or biased, or racist? What if it causes harm or financial loss? Who decides the right blend of training data for a “New Zealand” response? Which dialects, which accents, which cultural perspectives are included or excluded? How does such a system contend with Māori data sovereignty? These are not engineering problems — they are political disputes that a single entity would have to internalise and resolve, making the monolithic approach not only structurally infeasible but also politically perilous and unlikely to attract broad support.
This finding is consistent with the conclusions of the Brookings-CEPS report Is AI Sovereignty Possible?, which found that full-stack AI sovereignty is structurally infeasible for almost any country (Kerry et al., 2026). This is often framed in terms of hardware supply chains and chip fabrication choke points — but the value-laden and socio-political layers of the stack are equally difficult to navigate, and far harder to resolve by procurement alone. The Stanford HAI analysis similarly concluded that achieving domestic development across the entire AI stack is prohibitively costly and unnecessary for most countries (Pava et al., 2026).
The Pluralistic Ecosystem (Networked Approach)
The right side presents an alternative. The same horizontal layers remain — these are non-negotiable structural requirements for any AI system. But instead of one pipe threading through all of them, there are many pipes of varying heights, spread across three dimensions.
Each pipe represents a different entity — a cloud provider, a university consortium, a startup specialising in fine-tuning, a public data trust, a regulator, an iwi or community organisation, a funding agency, an industry body. Each covers only the layers where it has genuine capability or mandate. Some are broad-stack players spanning most layers; others are narrow specialists contributing deep expertise in just two or three.
Critically, building in pluralism from the start opens wider access to the sovereign AI ecosystem by more people and groups, each bringing different legitimate approaches. Rather than requiring consensus on a single vision of what “sovereign” means, the ecosystem model accommodates diverse perspectives — cultural, commercial, academic, community-led — operating in parallel within a shared architecture. This is both a practical advantage and a democratic one.
The key structural features of this model are:
- Within-layer networks (green diagonal lines): at each layer, entities that are co-present can connect and collaborate — sharing compute, exchanging data, coordinating governance.
- Cross-layer handoffs (red diagonal lines): entities operating at different layers can connect across layer boundaries. A talent and funding specialist hands off to a compute provider, who hands off to a data curation body, who connects to a training entity. No single actor needs to span the entire stack.
- Targeted oversight rather than blanket friction: regulatory attention focuses on specific high-risk intersections rather than imposing a uniform compliance burden at every layer.
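For readers who think in data structures, these features can be sketched as a small model — purely illustrative, not the diagram's actual implementation. Each hypothetical entity covers a contiguous span of stack layers; the ecosystem succeeds if the union of spans covers the full stack, with cross-layer handoffs wherever one entity's top layer meets another's bottom layer. All entity names and the simplified layer list below are assumptions for illustration only.

```python
# Illustrative sketch only: the pluralistic ecosystem as layer-span coverage.
LAYERS = ["talent", "funding", "compute", "energy", "data",
          "models", "applications", "governance"]  # simplified stack

# Hypothetical entities, each mapped to the (lowest, highest) layer it spans.
ENTITIES = {
    "university_consortium": ("talent", "funding"),
    "cloud_provider": ("compute", "energy"),
    "public_data_trust": ("data", "data"),
    "finetuning_startup": ("models", "applications"),
    "regulator": ("governance", "governance"),
}

_IDX = {name: i for i, name in enumerate(LAYERS)}

def full_stack_covered(entities):
    """True if the union of all entity spans covers every layer —
    no single actor needs to span the stack alone."""
    covered = set()
    for lo, hi in entities.values():
        covered.update(range(_IDX[lo], _IDX[hi] + 1))
    return covered == set(range(len(LAYERS)))

def handoffs(entities):
    """Pairs whose spans meet at adjacent layers — the diagram's
    cross-layer handoffs (red diagonal lines)."""
    return [
        (a, b)
        for a, (_, a_hi) in entities.items()
        for b, (b_lo, _) in entities.items()
        if a != b and _IDX[b_lo] == _IDX[a_hi] + 1
    ]
```

In this toy model the five narrow entities jointly cover all eight layers, and each consecutive pair forms a handoff — capturing the structural claim that specialisation plus coordination substitutes for a single full-stack actor.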
This model draws on the concept of “managed interdependence” advanced in the Brookings-CEPS report — an approach that relies on strategic alliances and partnerships to reduce risks throughout the AI stack, rather than pursuing autarky (Kerry et al., 2026). It is also informed by the WEF’s concept of “ecosystem builders” — smaller advanced economies that pursue balanced, government-driven investment strategies across selected elements of the AI value chain while forming international partnerships for the rest (WEF, 2026). Australia’s Tech Policy Design Institute has developed a complementary framework treating the AI ecosystem as a system of six interdependent layers covering 101 capabilities (TPDi, 2025).
Why it matters for New Zealand
New Zealand has adopted a distributed approach to AI regulation — no dedicated cross-cutting AI Act, but reliance on existing general-purpose legislation (Barraclough, 2025). This is a sensible decision, but it creates significant challenges around information, coordination, funding, and policy direction.
The pluralistic ecosystem model provides a way of thinking about how a small country can participate meaningfully in sovereign AI without falling into the trap of trying to build everything from scratch. It shows that sovereignty is not about building a monolith. It is about enabling a network — coordinating diverse actors across shared layers, enabling specialisation, distributing risk, and making deliberate choices about where to invest and where to rely on partners.
Pursuing sovereign AI is necessarily political. Decisions about data, training, moderation, access, and funding inevitably reflect and affect competing values and interests. Pretending otherwise — or delegating these decisions to a single entity receiving public support — does not remove the politics; it concentrates it. A more open, distributed, and pluralistic approach does not eliminate these tensions, but it creates the conditions for them to be navigated by diverse actors with different mandates, capabilities, and accountabilities, rather than resolved by fiat.
The enablers of this ecosystem include: a coordinating vision that treats it as a system; surfacing and sharing relevant information; meaningful whole-of-society collaboration; and regulation that provides certainty, stability, and democratic ways of navigating competing rights and interests (Barraclough, 2025).
References
- Barraclough, T. (2025). Sovereign AI for New Zealand. Discussion paper v3.0. Brainbox Institute. Available at docref.org.
- Barraclough, T. (2025). Wasting Time on AI Regulation and Sovereign AI in New Zealand. Brainbox Institute. Available at brainbox.institute.
- Donahoe, E. & Komaitis, K. (2026). Why AI Sovereignty Depends on Interoperability Standards. Tech Policy Press.
- Kerry, C.F., Meltzer, J.P., & Engler, A. (2026). Is AI Sovereignty Possible? Balancing Autonomy and Interdependence. Brookings Institution & CEPS.
- Kerry, C.F. & Mishra, S. (2025). The Myth of the Monolith: AI Is Not One Thing. Brookings.
- MBIE. (2025). Artificial Intelligence Research Platform programme. New Zealand Government.
- McBride, K., Langengen, T., West, D., Bradley, J., & Mökander, J. (2025). Sovereignty, Security, Scale: A UK Strategy for AI Infrastructure. Tony Blair Institute for Global Change.
- NZTech. (2025). Empowering Aotearoa New Zealand’s Digital Future: Our National Data Centre Infrastructure.
- Pava, J.N., Meinhardt, C., Cryst, E., & Landay, J.A. (2026). AI Sovereignty’s Definitional Dilemma. Stanford HAI. Available at hai.stanford.edu.
- Tech Policy Design Institute. (2025). From AI Sovereignty to AI Agency: Measuring Capability, Agency & Power. Discussion Paper. TPDi, Australia.
- The Public AI Network. (2024). Public AI: Infrastructure for the Common Good. White paper (updated October 2024).
- World Economic Forum. (2026). Rethinking AI Sovereignty: Pathways to Competitiveness through Strategic Investments. WEF AI Global Alliance, in collaboration with Bain & Company.
CC BY 4.0 — Tom Barraclough · tom@brainbox.institute