The Layers
The diagram organises the AI stack into twelve layers, ordered to follow the logical flow of how an AI system is developed and deployed. Foundational enablers sit at the top; deployment and governance outcomes sit at the bottom.
These layers synthesise and adapt frameworks from several sources, including the Brookings-CEPS AI stack analysis (Kerry et al., 2026), the WEF AI value chain (WEF, 2026), the TPDi six-layer AI capability typology (TPDi, 2025), and work by Brainbox Institute on New Zealand sovereign AI factors (Barraclough, 2025).
Foundational Enablers
1. Human Capability & Talent Pool
Everything starts with people. Advanced AI research and engineering talent is concentrated in a small number of institutions and countries. Compensation differentials and compute access drive further concentration (Kerry et al., 2026). For a country like New Zealand, sovereign AI depends on investing in AI literacy across society — not just technical training, but governance, assurance, and risk management skills (Barraclough, 2025). The TPDi framework further distinguishes between skills for building AI, skills for deploying it, skills for governing and securing it, and general public AI literacy (TPDi, 2025).
2. Funding & Economic Policy
AI development requires sustained capital investment. In the monolithic model, centralised funding creates decision gridlock and forces zero-sum trade-offs — what do you cut to fund sovereign AI? (Barraclough, 2025). The use of public funds to build a system that claims a sovereign or national character also triggers political accountability questions: who authorises the expenditure, what outcomes are promised, and who is answerable when the system underperforms or causes harm? The pluralistic ecosystem distributes this across diverse, competitive funding streams: public investment, private venture capital, public-private partnerships, international collaboration, and blended financing models. The WEF projects annual investment in AI applications alone will reach approximately US$1.5 trillion by 2030 (WEF, 2026). The question for smaller economies is how to participate in these flows rather than trying to replicate them domestically.
3. Computing Capital & Infrastructure
The physical backbone of AI. This encompasses compute hardware (GPUs, TPUs, accelerators), cloud and data centre infrastructure, and connectivity. Single-firm choke points exist in advanced chip fabrication, and vendor lock-in to dominant cloud providers creates pricing and jurisdictional exposure (Kerry et al., 2026). While compute supply chains attract significant policy attention, the socio-political and value-laden layers further down the stack present challenges that are at least as difficult — and less amenable to resolution through procurement or infrastructure investment alone. Mitigation strategies include multi-cloud approaches, interoperability requirements, and open tooling. New Zealand’s data centre sector is growing, with hyperscale investment from Microsoft and carrier-neutral facilities enabling onshore sovereign cloud solutions (NZTech, 2025). The NZ Government has also invested up to $70 million over seven years in an AI Research Platform under the NZ Institute for Advanced Technology (MBIE, 2025).
4. Environmental Footprint
AI infrastructure requires reliable, low-cost power at scale and intensive cooling. Energy and water usage constrain where data centres can be built and how fast they can expand. New Zealand has advantages here — high renewable energy share and facilities using air-based rather than water-intensive cooling (NZTech, 2025). But environmental sustainability cannot be treated as an afterthought; it is a structural layer that shapes what is physically possible.
Data Pipeline
5. Data Acquisition, IP & Consent
The raw inputs for AI systems. Proprietary dataset control by large platforms and legal uncertainty around training data reuse create significant risks (Kerry et al., 2026). Repurposing data collected for one purpose as AI training input raises consent and intellectual property challenges. In the monolithic model, this creates massive compliance friction because a single entity must resolve all consent and IP issues across every data source. In the pluralistic model, specialised data trusts, public data releases, and trusted data-sharing infrastructure can distribute this responsibility.
In a New Zealand context, data acquisition raises particularly acute questions. Who decides the right blend of training data for a system that claims to produce a “New Zealand” response? How does such a system contend with Māori data sovereignty — the principle that Māori data should be subject to Māori governance? These are not implementation details; they are foundational political questions that any sovereign AI programme must address before a single model is trained.
6. Data Curation & Lifecycle Management
Raw data must be cleaned, structured, annotated, maintained, and eventually retired. The quality, diversity, recency, and integrity of curated data directly determine AI performance (WEF, 2026). This is labour-intensive, domain-specific work that benefits from specialisation — different entities curating medical data, legal data, cultural and linguistic data, environmental data, each with appropriate expertise and governance arrangements.
Model Development
7. Model Training & Content Moderation
Where curated data becomes a functioning AI model. Frontier-scale training capacity is concentrated in a small number of firms and jurisdictions (Kerry et al., 2026). This layer also encompasses content moderation decisions that shape what a model will and will not produce. In a monolithic sovereign context, having government or government-aligned entities responsible for content moderation raises serious concerns about free expression, cultural representation (including the role of te Reo Māori and Treaty of Waitangi perspectives), and the risk of “hall monitor” dynamics (Barraclough, 2025). The questions sharpen further: what if the sovereign model is simply wrong? What if it produces racist or culturally harmful outputs? What if it causes measurable loss? A monolithic approach concentrates these risks — and the political fallout — in a single entity.
8. Model Fine-tuning & Optimisation
Before investing heavily in training new foundation models from scratch, the pluralistic approach asks: how far can we get by customising existing models? Fine-tuning allows domain adaptation, cultural and linguistic alignment, and performance optimisation for specific use cases without the enormous cost of full training runs. This is an area where smaller economies and specialist organisations can contribute significant value (Barraclough, 2025; WEF, 2026).
Fine-tuning is also where cultural and linguistic specificity becomes concrete: which accents are supported, which dialects are prioritised, which cultural norms inform the system’s behaviour? In a pluralistic model, different organisations can pursue different answers to these questions simultaneously — a university consortium optimising for te Reo Māori, a commercial provider fine-tuning for industry-specific New Zealand English, a community organisation adapting models for Pacific language communities. No single entity needs to resolve all of these choices at once.
9. Future Model Innovation
New types of models, new architectures, new approaches beyond current generative AI. AI spans interlocking techniques, tools, and capabilities across different domains rather than a single monolithic system (Kerry & Mishra, 2025). Investment at this layer includes fundamental R&D, open-source contributions, and participation in global research networks. The pluralistic model enables university consortia, research institutes, and open-source collectives to operate at this layer without needing to span the entire stack.
Governance & Deployment
10. Legal Compliance & Liability
IP ownership, model governance, usage liability, and regulatory compliance. Regulatory divergence across jurisdictions increases compliance complexity and may advantage large incumbents (Kerry et al., 2026). In New Zealand, at least ten existing statutes apply to AI in various ways, alongside guidance from multiple agencies (Barraclough, 2025). The distributed regulatory approach means the compliance picture is fragmented — but the ecosystem model handles this by allowing specialised legal and regulatory entities to operate at this layer and connect with entities at other layers.
11. Corporate Governance & Cybersecurity
Internal organisational controls, board oversight, risk management, and cybersecurity protections. The TPDi framework identifies cybersecurity and technical robustness as distinct skills required alongside broader governance capacity (TPDi, 2025). Donahoe and Komaitis (2026) argue that sovereignty in the AI era runs through infrastructure design rather than territorial control, with open interoperability standards essential for maintaining governance leverage.
12. Access Control & Deployment
The final layer: who gets to use the AI system, under what conditions, at what cost, and with what controls. Downstream actors inherit upstream concentration risks — dependence on external models and platforms limits operational autonomy (Kerry et al., 2026). The pluralistic ecosystem enables deployment through multiple channels: public services, commercial offerings, open-source distributions, sector-specific applications, and community-governed systems. Access can be differentiated by sensitivity, purpose, and user capability.
Reading the layers together
In the monolithic model, every one of these layers must be aligned by a single entity, simultaneously. That is the systemic alignment trap — the “perfect shot” problem.
In the pluralistic ecosystem, the layers remain the same. They are structural features of any AI system. But responsibility is distributed across multiple actors, each spanning only the layers where they have genuine capability. Within-layer connections let parallel actors collaborate at the same stratum. Cross-layer handoffs let entities at different layers connect to provide the constituent parts of a functioning sovereign AI ecosystem, without any single entity needing to control everything.
Reading these layers together also reveals that sovereign AI is not a technical programme that happens to have some political dimensions — it is a necessarily political undertaking. Decisions at every layer reflect and shape competing values, interests, and power relationships. Planning for an ecosystem that supports diversity of approaches, led by diverse entities with different mandates and accountabilities, is not merely more efficient than the monolithic alternative — it is more legitimate, more resilient, and more likely to attract the broad support needed to sustain a long-term national capability.
The policy task is not to eliminate these layers or pretend they don’t exist. It is to design the regulatory and institutional settings that enable a healthy ecosystem across them — with coordination, information, collaboration, and regulation as the foundational enablers (Barraclough, 2025).
CC BY 4.0 — Tom Barraclough · tom@brainbox.institute