Technology, policy, and the geopolitics of what comes next.
I am a Senior Policy Advisor at the UK AI Security Institute (AISI), currently on leave as a Kennedy Scholar at Harvard University, where I focus on the geopolitics of frontier AI.
I have previously worked as a consultant, open-source intelligence analyst, and at the United Nations in Geneva.
Notes, essays, and works in progress. Personal views only. Often updated with new information.
Sovereign AI in Practice
An investment strategy for leverage in 2026.
Personal views only — not UK government policy.
National power is a product of technological change and economic growth. Nations that develop and deploy new technologies grow faster. Differential growth rates shift the economic balance of power. The political and military balance shifts soon after.
The capabilities of frontier AI systems are advancing at pace. Continued capability progress and diffusion are likely to drive global economic growth in the coming years. Yet the UK still lacks a stake in frontier AI development. We are on track to outsource the fundamental factors of national power to American AI companies.
Sovereign AI is our response. 2026 should be the year the UK starts to deliver Sovereign AI in practice.
Nvidia defines Sovereign AI as 'a nation's capabilities to produce AI using its own infrastructure, data, workforce and business networks'. Oracle defines Sovereign AI as 'the domestic production of AI'. Both definitions entail buying rapidly depreciating compute in a Red Queen's race to keep up with the frontier. The UK needs a broader definition.
Sovereign AI is the capability to ensure frontier AI development and deployment adheres to the norms, laws, and values of the UK and creates domestic economic value. The AI Opportunities Action Plan uses a similar definition: 'Sovereign AI should ensure economic upside from, and influence on, the governance of frontier AI for the UK'.
Sovereign AI does not mean complete autonomy. It is neither feasible nor economically efficient for the UK to onshore frontier AI development. Instead, Sovereign AI sets an aspiration. The UK should become indispensable to the frontier AI value chain.
Sovereign assets occupy central nodes in the frontier AI value chain and have low elasticity of demand. More specifically, Sovereign AI should entail investing in assets that meet both criteria: centrality in the value chain, and low demand elasticity.
A number of assets could still meet these criteria. The Sovereign AI Unit therefore needs an investment strategy. We can identify at least three plausible approaches.
Control Chokepoints: Software runs on hardware. Sovereign AI should entail identifying central nodes in the semiconductor supply chain under UK jurisdiction and investing to increase asset specificity. In practice: Arm, Imagination Technologies and Alphawave Semi all generate core semiconductor IP. A chokepoint investment strategy would deepen UK comparative advantage in IP.
Complementary Assets: Frontier AI is general-purpose. AI companies cannot own every complementary asset that enables value to be realised from frontier AI. Sovereign AI should entail investing at the interface between AI capabilities and the real economy. In practice: this means identifying bottlenecks to realising value from frontier AI in advance — targeted investments in tools to navigate the digital-to-physical divide or third-party support services.
Asymmetric Innovation: The UK is dispensable under the current paradigm. Sovereign AI should entail investing in new paradigms of frontier AI development that make assets held by the UK chokepoints or complementary to continued progress. In practice: an asymmetric innovation strategy would seek to enable research breakthroughs — crowding funding into ARIA, making high-risk high-reward investments into AI startups through the National Wealth Fund.
A chokepoint investment strategy is attractive because it starts from a point of strength. Yet controlling chokepoints in the semiconductor supply chain creates limited leverage for middle powers. The US exercises structural control over the entire supply chain, and will invest to challenge chokepoints.
ASML illustrates the extent of US structural power. ASML has a monopoly on EUV photolithography machines. Yet the first EUV machine was built at Lawrence Livermore National Laboratory. R&D was underwritten by a consortium led by Intel. ASML manufactures EUV machines under a 1999 licence from the US Department of Energy. This US nexus limits Dutch leverage, and it is why the Dutch Government found it impossible to resist unilateral US export controls on ASML in 2023.
Leading powers will also respond to attempts by middle powers to deepen control of chokepoints. The greater the leverage held by middle powers, the greater the incentive for leading powers to invest in onshoring. Control of chokepoints becomes fundamentally unstable.
This leaves the investment strategies of complementary assets and asymmetric innovation. More specifically, the UK should:
Invest in Complementary Assets. Frontier AI is already 'good enough' for many applications. Realising economic value from advanced capabilities will be a priority for AI companies in 2026. The Sovereign AI Unit should focus on making targeted investments at the interface between advanced capabilities and the real economy — leveraging its partnerships with AI companies to build the best understanding of the jagged frontier of capabilities globally, and investing in bottlenecks to realising value from the capabilities of tomorrow.
Enable Asymmetric Innovations. Owning complementary assets improves our position today. But lasting leverage requires betting on the next paradigm of AI development. The UK should mandate ARIA to find the 'next Transformer', with ring-fenced funding for architectural innovations. The UK should also scale the AIRR — the announcement of £1bn to expand it is welcome, but the expansion must be prioritised and compute dedicated to UK startups challenging the current paradigm.
Protect Sovereign Assets. Sovereign assets are valuable only as long as they remain under UK control. But it is not clear the UK has the economic security architecture to respond if an American AI company were to bid for a sovereign asset. What if DeepMind happened all over again? In an age of weaponised interdependence, the UK needs an economic security architecture to match. Without one, the UK risks subsidising the R&D of American AI companies.
The UK faces a choice. We can accept the default outcome: outsourcing the fundamental factors of national power. Or we can become indispensable — investing in complementary assets, enabling asymmetric innovation, and strengthening the economic security architecture. Sovereign AI sets an aspiration. 2026 should be the year the UK starts to deliver.
Allied Scale for AI
A net assessment approach to the global AI competition.
Personal views only — draft — not UK government policy.
'After being dismissed as a phenomenon of an earlier century, great power competition has returned'. The National Security Strategy that the Trump Administration released in 2017 signalled a shift in international relations. From the end of the Cold War, the US generally pursued cooperation with other powers. Now a new Cold War had begun, and the US was engaged in great power competition with China. This is the consensus view in Washington today, so much so that criticising 'Cold War mentality' has become standard Chinese diplomatic rhetoric.
On some metrics China is winning the new Cold War. China has twice the manufacturing capacity of the US. China leads on technologies from hypersonic missiles to electric batteries. The PLA Navy is twice as large as the US Navy — and China has 200 times the shipbuilding capacity. China's GDP is only 63% of US GDP — but China passed the US on purchasing power parity (PPP) in 2014.
This narrative can be compelling. As Graham Allison told Congress in 2015, 'never has a state risen so far, so fast on so many different dimensions'. Yet it fails to account for an important US advantage. Allies.
Taken together, the US, EU, UK, Australia, Canada, India, Korea, Japan, Mexico and New Zealand have a combined economy double the size of China's, even adjusted for PPP. They spend a combined $1.5tn on defence, double China's annual military budget. On net there is no competition.
China has no comparable network of allies. Russia, Belarus, North Korea, Venezuela and Iran are closely aligned with China. Yet they are relative economic minnows. The Spanish economy is larger than the Russian economy. And China's legitimacy has not grown in proportion to its power. Countries in its near abroad rely on it for trade but still prefer the US for security guarantees. Dumping and debt-trap diplomacy are constant risks for nearby countries.
However, realising the US advantage in allies requires coordination. The US must collaborate with close partners in key domains to outclass China.
AI is a key domain for great power competition between the US and China. President Trump has demanded the US 'be laser-focused on competing to win' against China on AI. President Xi has called for China to 'recognise gaps' and 'redouble efforts' against the US. Policymakers in both countries recognise that the leader on AI development and diffusion could win a strategic advantage.
The US should leverage its network of international partners to achieve allied scale on AI. At a minimum, this group should include the countries listed above. Activities within this group should span capacity building, technology transfer, and adversary exclusion.
The US-UAE deal is a rough blueprint for what allied scale could look like in practice. The UAE is investing $1.4tn in the US over ten years, including projects to increase aluminium production and establish secure, non-Chinese sources of gallium. US companies are meanwhile investing in energy infrastructure in the UAE. In return, the UAE wins the right to import 500,000 Nvidia GPUs per year. However, it must 'align their national security regulations with the US' and 'prevent the diversion of US-origin technology'. Capacity building, plus technology transfer, plus measures for adversary exclusion.
This approach has clear benefits to US allies. They benefit from access to potentially transformational US technology. The costs of export controls can be negotiated at each update to the proposed shared control list. Indeed, it seems feasible that the US could secure an enduring lead on AI without leveraging its network of allies. What, then, are the benefits to the US?
Countries will only make mutual investments if they think they will benefit. Take the EU's 'AI Factories' initiative as an example. Ideally, the EU would concentrate datacentres where input costs are lowest. However, there is no mechanism for distributing the benefits of concentrated datacentres across member states, so each state weighs local benefits above collective efficiency, and the result is every member state hosting its own cluster.
Overcoming this problem is critical to unlocking allied scale for AI. A new model of partnership will be needed, where countries can confidently offshore critical industries in the knowledge that they will benefit from each country playing to their comparative advantage. Reaching a new model of partnership will require US sponsorship and a collective recognition that the best route to leading on AI development and diffusion against China involves close collaboration between allies. On net there is no competition.
Two Concepts of AI and Geopolitics
Maybe AGI is the wrong milestone to track.
Two broad ways of thinking about the impact of Artificial General Intelligence (AGI) on geopolitics are emerging. They differ in the extent to which model capabilities are deemed sufficient for real-world impact.
The 'capabilities are sufficient for impact' story sounds like this. There is a 'takeoff' point in model capabilities. There is a sharp discontinuity after this point. The takeoff point is not AGI, but superhuman software engineering capabilities that close the loop on automated AI R&D. The country that reaches the takeoff point first gains a compounding advantage. The leading country can convert its advantage to prevent others from reaching the takeoff point.
The 'capabilities are insufficient for impact' story sounds like this. There is no binary 'takeoff' point in model capabilities. The country that leads in model capabilities has an advantage but not a decisive one. Automated R&D hits diminishing returns, and companies cannot dedicate entire compute budgets to recursive self-improvement. Implementation challenges further limit the advantage of the leading country. Lagging countries can plausibly outcompete the leader by adopting 'good enough' AI.
These stories matter because they imply radically different policy options. If the country that reaches a takeoff point in model capabilities first gains a decisive advantage, the priority becomes reaching the takeoff point first and preventing others from doing the same.
But if a country that diffuses 'good enough' model capabilities can outcompete the leader, the priority becomes adoption: diffusing capabilities across the economy faster than competitors.
As ever, the reality is likely in the messy middle. Having a lead in specific, 'spiky' capabilities matters. A country with a significant advantage in cyber operations could exercise influence over its competitors. Take Stuxnet as an example.
Yet capability does not equal deployment. It is not a given that spiky capabilities will be leveraged in national security applications. Leaving aside bureaucratic and regulatory hurdles, there is a risk that the current paradigm of AI development forces national security agencies to stay behind the curve. Agencies might be unwilling to adopt closed-source systems built by private companies based in other countries, due to the risk of backdoors. Yet the best open-source systems pose similar risks. And the advantage from bootstrapping below-par open-source models that are also available to adversaries is likely to be limited.
And achieving a sustained advantage will depend on diffusion. Building on Paul Kennedy, Jeff Ding makes the case that technological change creates differential growth rates between countries, which lead to shifts in the economic and ultimately military and political balance of power. Successful adoption of technology builds a foundation for economic growth and creates spillover effects that allow lagging countries to compete. Higher long-term growth rates also create opportunities to invest in capabilities that compensate for 'spiky' disadvantages: what the PRC lacks in aircraft carriers it compensates for with A2/AD capabilities that raise the costs of US action in the Taiwan Strait.
This points to a world where AGI is the wrong milestone to track. We define AGI as systems that achieve human-level or higher performance across most cognitive tasks. AGI is less relevant as an inflection point if you are tracking the impact on geopolitics. What matters instead is (a) specific spiky capabilities, particularly in national security domains, and (b) adoption rates. AGI is relevant mostly as a milestone that signals greater adoption is now possible, due to substitution effects.