On April 7, 2026, Anthropic announced Mythos. The numbers were striking: thousands of zero-day vulnerabilities across every major operating system and browser - many of them critical, and some decades old, having survived repeated expert review. On one benchmark against the Firefox JavaScript engine, the previous best model (Opus 4.6) had produced working exploits only twice out of hundreds of attempts. By contrast, Mythos succeeded 181 times.
Anthropic didn’t specifically train Mythos to find these exploits. The capability just emerged as a downstream consequence of the model getting better at coding and reasoning. The exact same properties that make a model better at patching vulnerabilities also make that model better at creating them. That is not something you can opt out of by simply choosing not to train for it - you get this for free, whether you want it or not.
Anthropic made a responsible call that’s also a savvy marketing move. They didn’t release Mythos publicly. Instead they created Project Glasswing, giving roughly 40 partner organisations (Amazon, Apple, Microsoft, Google, CrowdStrike, and others) restricted access to use Mythos defensively across the world’s most important software. The aim is to catch and close exploitable bugs in the systems that we all depend on.
But it only covers code-level vulnerabilities in this critical software subset. And that’s just one wave of a much larger problem.
The Online World Has Changed
My first instinct was to call this an arms race, but this doesn’t really fit. An arms race has two parties. They usually have mutual deterrence. And they have the possibility of finding an equilibrium. The Cold War eventually found stability because both sides had roughly the same capabilities, and strong incentives not to escalate.
This new AI security situation has none of that. There are many more than two sides. Anyone motivated enough, with enough compute, can join in. And there’s no deterrence mechanism, no equilibrium point around which the system can settle.
The underlying asymmetry is also pretty brutal. An attacker only needs to find one exploitable vulnerability. But a defender has to find and fix all of them. That’s always been true in security, and now AI amplifies it, because automated vulnerability discovery and exploit generation can scale far more easily than any comprehensive defence.
The Barriers Keep Dropping
A key point is that you don’t need a Mythos-class model to make this work - Mythos just highlighted the change. A good-enough open-weight model running inside the right agentic harness can multiply effectiveness massively. Recent research into areas like autoresearch, agentic coding, and AI scaffolding has made this clear:
> Better results don’t require better models. Sometimes they just need a better harness.
This means that the barrier to entry for AI-augmented exploit discovery is not “build a Mythos-class model”. It’s actually “put a good-enough open-weight model in a well-designed agentic harness and let it run”.
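To make the harness idea concrete, here is a deliberately minimal sketch of what “a well-designed agentic harness” means structurally: a propose-execute-score-retry loop that feeds failure information back into the next attempt. Everything here is illustrative - the `Harness`, `toy_model`, and `toy_check` names are invented for this example, and the “model” is a stub, not a real LLM call.

```python
# Hypothetical sketch of a minimal agentic harness. The loop structure
# (propose -> check -> feed failure back -> retry) is what multiplies a
# fixed model's effectiveness, independent of the model itself.

from dataclasses import dataclass, field

@dataclass
class Harness:
    model: callable            # any callable: prompt -> candidate solution
    check: callable            # candidate -> (passed: bool, feedback: str)
    max_iters: int = 5
    history: list = field(default_factory=list)

    def run(self, task: str):
        prompt = task
        for _ in range(self.max_iters):
            candidate = self.model(prompt)
            passed, feedback = self.check(candidate)
            self.history.append((candidate, passed, feedback))
            if passed:
                return candidate
            # The harness, not the model, improves across iterations:
            # each retry carries the previous failure as context.
            prompt = f"{task}\nPrevious attempt: {candidate}\nFailure: {feedback}"
        return None

# Toy stand-in: a "model" that only succeeds once it sees failure feedback.
def toy_model(prompt):
    return "fixed" if "Failure" in prompt else "broken"

def toy_check(candidate):
    return (candidate == "fixed", "output was broken")

result = Harness(toy_model, toy_check).run("repair the widget")
```

The point of the sketch is that the intelligence sits partly in the loop: the same stub “model” fails on attempt one and succeeds on attempt two purely because the harness routed the failure back in.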
This is Sutton’s Bitter Lesson applied to security. General methods that leverage computation will eventually win over hand-crafted, human-knowledge approaches. And once you add self-improving loops, a state actor and a motivated teenager with a laptop both end up with access to the same class of offensive tooling.
And every time a new open-weight model is released, the bar drops again.
But This Is Not Just About Code
On March 31, 2026 (one week before the Mythos announcement), alleged North Korean state-sponsored hackers compromised the Axios npm package. Axios is a very popular JavaScript library, with roughly 100 million weekly downloads. The attack was multi-layered - social engineering to compromise one of the maintainers’ accounts, a pre-staged malicious dependency, cross-platform payloads that targeted Windows, macOS, and Linux simultaneously, plus built-in forensic self-destruction. The whole attack hit both release branches in under 40 minutes.
It’s exactly this blend - social engineering, code-level exploitation and speed - that AI is expected to automate. The same capabilities that find code exploits also extend naturally to workflow exploits, process exploits, and personal social engineering - think voice clones, live video generation, facial replacement, AI-driven modelling of how you communicate and, of course, of who you trust.
You might be able to fuzz a codebase. But you can’t fuzz an approval process. And you definitely can’t fuzz your mother’s voice.
What Defence Does Scale?
If the attacker has AI and it moves at machine speed, then the defender has to match that too. Humans don’t scale to this volume. IT departments don’t scale. And per-app security solutions don’t either. The only thing that can match AI-augmented attack throughput is an AI-augmented defence.
When the threat is coming from everywhere and moving too fast for humans, then the only defence that scales is another AI - watching your whole digital life.
That’s what makes this VPN-type product seem inevitable.
The Frontier Lab VPN
Imagine a frontier-class model that sits between you and everything else. Every one of your network connections routes through it. Every one of your applications is watched. Every incoming file, link, call, and message is evaluated to keep you safe.
The model monitors your activity, projects risks based on what you’re actually trying to do, and blocks attacks before they cause damage. That’s an AI VPN. The product doesn’t quite exist yet, but our networked world now seems to demand it.
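The control loop of such a product can be sketched in a few lines: every event is scored before it reaches you, and the score maps to a verdict. This is purely illustrative - the `Event`, `risk_score`, and `mediate` names are invented, and the crude keyword heuristic stands in for what would, in the imagined product, be a frontier-class model.

```python
# Illustrative skeleton of the "AI VPN" mediation loop: score every
# incoming event, then allow, warn, or block. All names are hypothetical.

from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    WARN = "warn"
    BLOCK = "block"

@dataclass
class Event:
    kind: str       # e.g. "connection", "file", "message", "call"
    source: str
    payload: str

def risk_score(event: Event) -> float:
    """Stand-in for the model: a crude keyword heuristic, 0.0 to 1.0."""
    signals = ["urgent wire transfer", "disable antivirus", ".scr"]
    return min(1.0, sum(0.5 for s in signals if s in event.payload.lower()))

def mediate(event: Event) -> Verdict:
    score = risk_score(event)
    if score >= 0.8:
        return Verdict.BLOCK
    if score >= 0.4:
        return Verdict.WARN
    return Verdict.ALLOW
```

The architectural claim is in the shape, not the heuristic: a single chokepoint through which every event flows, with a scoring model deciding what you get to see.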
The alternative would be like running through the internet naked.
Why This Is The Path Of Least Resistance
A Frontier Lab VPN is a product that can really be sold. It’s simple to explain. It transfers responsibility away from the user. And it mirrors every previous infrastructure transition - Gmail beat self-hosted email, the cloud beat local, and SaaS beat on-premise. People choose convenience over autonomy, every single time.
There is a possibly healthier alternative, in theory. Detection and exploitation are not the same task. Finding a novel zero-day takes real compute and effort, while detecting anomalous behaviour against a baseline is closer to pattern matching - and good-enough open-weight models can handle that. A local “risk copilot” that runs open-weight models to project risks and inform your decisions (rather than block actions on your behalf) is technically possible today.
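The “detection is pattern matching” claim is easy to ground: flagging deviations from a behavioural baseline needs nothing more than basic statistics. Here is a minimal sketch using a z-score over hourly request counts - the metric and threshold are illustrative assumptions, not a proposed product design.

```python
# Minimal sketch of the "risk copilot" idea: learn a baseline of normal
# behaviour, flag deviations, and inform rather than block. Pure pattern
# matching - no frontier model required.

from statistics import mean, stdev

def flag_anomalies(baseline, observed, z_threshold=3.0):
    """Return the indices in `observed` that deviate more than
    z_threshold standard deviations from the baseline's mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        sigma = 1e-9  # avoid division by zero on a perfectly flat baseline
    return [i for i, x in enumerate(observed)
            if abs(x - mu) / sigma > z_threshold]

# A week of "normal" hourly outbound request counts vs a day with one burst.
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
observed = [13, 12, 90, 14]   # index 2 is the burst
```

A real copilot would track many such baselines (processes, destinations, login times) and surface the flags to the user - but the core detection step is this cheap.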
It’s just that this is unlikely to happen at scale. MIT Sloan analysed OpenRouter usage data and found closed models account for roughly 80% of token usage and 96% of revenue, even though open models average about 90% of closed-model performance, and at dramatically lower cost. As they put it: “When grocery shoppers find a generic product that’s 90% as good as the brand name version but costs 87% less, they usually put it in their carts. But when it comes to large language models, most artificial intelligence users pick the more expensive option.”
When generic open models work and users still pick the branded closed models, then the distributed alternative just remains a small niche for the technically literate. The market likes to concentrate activity around the frontier labs, and this just reinforces the VPN model even further.
Even if you do not want to interact with these AI models through chat, or run them as agents, this new security layer is likely something you will want (and need) just to safely use the mobile phones and computers you’ve grown to rely on.
The Prototype Already Exists
You can already see a prototype of what this product will look like: Claude Cowork.
Cowork lets you funnel your computer use through a single Anthropic-managed and sandboxed application. AI operates for you and alongside you, watching what you’re doing, helping, and taking actions on your behalf. Today, it’s framed as productivity software. But it already demonstrates the architectural starting point for the VPN you need tomorrow.
The same routing. The same monitoring surface. And the same trust model. Swap “productivity” for “protection” and you’ve already built most of the product.
You might even already have it installed.
Bigger Revenue And The Biggest Training Data
If this does become a near-universal product category, and the pressures for this are intense, then it absolutely dwarfs chat, API, and agentic coding revenue. A recurring, and extremely sticky, subscription tied to “safely using your devices” is a much bigger revenue stream than selling completions by the million. And lock-in compounds through continuously accumulating context - your preferences, your patterns and your risk history.
But revenue is only the first half of this powerful new flywheel. The other half is data.
Everything you do through the VPN creates potential training data. Every keystroke. Every decision. Even every hesitation. The frontier labs already have the best reasoning corpora available. A Frontier Lab VPN then delivers the best behavioural corpus possible - a live stream of how real humans actually use computers, communicate, and respond under pressure.
No advertising company has ever had data like this. No state surveillance program has ever had access like this. And users will pay to have it collected. The revenue line is massive. The data moat is bigger. The combination is mind-blowing.
The Security And Surveillance Layers Are The Same Infrastructure
There is no meaningful technical distinction between an “AI security system monitoring all your activity to protect you” and an “AI surveillance system monitoring all your activity to control you”. The infrastructure is identical. The only difference is intent, marketing and, of course, governance.
Unfortunately, the historical track record of maintaining this distinction is poor.
The Providers Are Already Under Pressure
The entities best positioned to offer this service are clearly the frontier AI labs, but the frontier AI labs are themselves under serious political and economic pressure. In the same week that Anthropic demonstrated Mythos’s defensive capabilities, a US federal appeals court allowed the US Department of War to maintain its classification of Anthropic as a supply chain risk. Crucially, the classification stands because Anthropic drew red lines around mass surveillance.
The crystal clear message to other frontier labs is: comply without conditions, or face consequences.
If the new centralised security architecture that everyone ends up depending on can easily fold under state pressure, without any meaningful constraint, then the infrastructure itself becomes the point of leverage. The AI VPN isn’t just between you and the internet. It’s between you and whatever your provider is being pressured to do this quarter.
This Centralises The ‘Thinking’ Layer
Earlier infrastructure centralisation was just physical. Electricity, water, roads - you rely on them, but they don’t shape what you perceive as real. The power company can charge you more, but it can’t change what you see and think.
In contrast, a Frontier Lab VPN sits between you and everything you read, watch, click, say, and even hear through a networked device. It mediates your interactions with other people and systems. It has real-time access to your behaviour and intent.
No institution in human history has previously had that level of access to this many people at once. Not governments, not churches, not broadcast networks, not advertising platforms. This is new.
What’s Actually Being Decided
The real question is not whether you’ll use an AI VPN. Once the threat surface gets bad enough, most people won’t see any other safe option, and the market will simply converge. The more important question sits below that one:
> Which provider. Under whose jurisdiction. Accumulating what data about you. With what accountability when they’re pressured.
The ClaudeVPN in the title of this post is not outlandish speculation. It’s just an obvious “next product” for any frontier lab, and the starting point is already running on millions of machines. And as I’ve said - you might even already have it installed.


