Resilient and Sovereign AI

Recent developments have made me think a lot about the resilience and sovereignty of AI systems.

I will leave aside the ethical implications for this post (should AI be allowed for mass surveillance of American citizens? should lethal autonomous weapons operate without a human in the loop?) and focus on the more mundane, practical consequences. Many on LinkedIn and other social media are quick to show their support for Anthropic by cancelling their ChatGPT subscriptions and signing up for Claude instead. I am sympathetic to this (I became a Claude MAX subscriber just a few weeks ago), but I think it is short-term thinking. A longer-term view leads to different conclusions about what it means to depend on chat-like services and applications from American tech companies for access to intelligence, and about what resilient and sovereign AI actually requires.

Last week, following an increasingly heated negotiation between Anthropic and the American Department of Defence over the terms of use of Claude, the Secretary of Defence not only cancelled the contract with Anthropic but directed the Department to label the company a supply-chain risk:

“In conjunction with the President’s directive for the Federal Government to cease all use of Anthropic’s technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

Secretary of Defence, @SecWar

The implication is not merely a lost contract for Anthropic (bad, but not catastrophic) — it is a constraint on how any company with Department of Defence contracts may do business with them. Taken at face value, this would force companies like AWS and Microsoft to choose between their DoD business and any commercial relationship with Anthropic, however well-separated.

This is almost certainly illegal and will be challenged in court. But until it is settled, Anthropic’s partners and customers face real risk: they may need to replace Anthropic entirely, or exit their Department of Defence business. For Anthropic — a company with an acute need to raise capital and generate revenue to keep scaling — this is more than bad. It is an existential threat. Dean W. Ball, a principal contributor to the Trump White House AI Action Plan, put it bluntly:

“(…) This is obviously a psychotic power grab. It is almost surely illegal, but the message it sends is that the United States Government is a completely unreliable partner for any kind of business. The damage done to our business environment is profound. No amount of deregulatory vibes sent by this administration matters compared to this arson.”

Dean W. Ball, @deanwball

So what are my takeaways?

  1. AI companies in countries with authoritarian governments can be destroyed at a whim. I don’t use that description of the United States lightly. If you depend on such a company for business-critical services, you need a mitigation path. For me — a very happy Claude Code user these past few weeks — that means ensuring I can switch my daily work between Claude Code and Codex at short notice, and actively working toward an open source model harness as a longer-term replacement. Projects like Cline and OpenCode — open source coding agents that work with any model provider — show that path is already viable.

  2. There has been a flood of support for Anthropic. Famously, Katy Perry shared on Twitter that she is now subscribing to Claude Pro. Anthropic moved quickly to capitalise on this, publishing a guide on how to import your personal context (“memory”) from other vendors (https://claude.com/import-memory). I consider my personal context a core asset that I want to keep entirely under my own control. My current setup is a lightweight database on a Raspberry Pi, accessible via MCP from Claude Code, Claude Desktop, Claude Web, Claude Mobile, or any equivalent environment from another provider. (https://github.com/Magnus-Gille/munin-memory for those interested.) While memory is easily portable today, I do not want to depend on these companies choosing to expose my data in a portable format; I want it under my own control from the start.

  3. LLM access should be thought of as fundamental infrastructure — more like electricity or heating than a typical IT service. My response is to route through services like OpenRouter, giving me access to a wide range of providers rather than a single one. I will also keep evaluating use cases that can be served by a local LLM — on a Raspberry Pi, my laptop, or eventually a more powerful server I own outright. I am not going fully “off grid” from flagship models, but I need to actively manage the risk of losing access to intelligence or to my own data. I already do this where I can — see for example SagaScript, a 100% local, free, open source Swedish and Norwegian speech-to-text application (https://github.com/Magnus-Gille/sagascript).

  4. I will look carefully at where it makes sense to rely on infrastructure owned by European or Swedish companies, hosted within the EU or Sweden. On this note, Berget AI — a Swedish startup building sovereign AI infrastructure on open models, keeping data within Swedish and EU jurisdiction — is doing important work worth keeping an eye on.
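The memory setup described in point 2 above can be sketched in a few lines. This is not the actual munin-memory implementation, just a minimal illustration of the core idea using Python's standard sqlite3 module: a single local database file you own outright, which an MCP server (or any other integration) can then expose to whichever assistant you happen to use. The `MemoryStore`, `remember`, and `recall` names are illustrative.

```python
import sqlite3
from datetime import datetime, timezone

class MemoryStore:
    """Minimal personal-memory store backed by a local SQLite file."""

    def __init__(self, path="memory.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS memories ("
            "  id INTEGER PRIMARY KEY,"
            "  created_at TEXT NOT NULL,"
            "  topic TEXT NOT NULL,"
            "  content TEXT NOT NULL)"
        )

    def remember(self, topic, content):
        # Store a new memory with a UTC timestamp.
        self.conn.execute(
            "INSERT INTO memories (created_at, topic, content) VALUES (?, ?, ?)",
            (datetime.now(timezone.utc).isoformat(), topic, content),
        )
        self.conn.commit()

    def recall(self, query):
        # Naive substring search; a real store might use FTS or embeddings.
        rows = self.conn.execute(
            "SELECT topic, content FROM memories "
            "WHERE topic LIKE ? OR content LIKE ? ORDER BY created_at",
            (f"%{query}%", f"%{query}%"),
        )
        return rows.fetchall()
```

The point of the design is that portability is never in question: the data lives in a plain SQLite file on hardware you own, and every provider is just another client.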
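The routing idea from point 3 boils down to a fallback chain: prefer a hosted flagship model, but degrade gracefully to another provider or a local model if access is lost. The sketch below is generic and illustrative, not a specific OpenRouter feature; each callable would in practice wrap an API client or a local inference endpoint.

```python
def complete_with_fallback(prompt, providers):
    """Try each provider in order; return (name, answer) from the first success.

    `providers` is an ordered list of (name, callable) pairs, where each
    callable takes a prompt string and returns an answer string, raising
    an exception on failure (network error, revoked access, etc.).
    """
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            # Record the failure and move on to the next provider.
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

With a local model as the last entry in the list, losing a vendor becomes a quality degradation rather than an outage.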


AI has entered the arena of geopolitics. Questions of AI implementation, ROI, legal constraints, and governance need to be top-of-mind for individuals, organisations, and nation states alike. Access to this technology simply cannot be left to the whims of the American administration or the big American tech companies.