Seems like not a day goes by without an update from Anthropic to its AI suite, and April 7 was no different. Except it was very different, because they announced the preview of Mythos.
Mythos is an AI system that can find weaknesses in computer systems — the kind of weaknesses that attackers exploit to break in, steal data, shut things down or take control. They determined it was so powerful that the 'public' — we, the little people — are not allowed access to it.
Instead, they created Project Glasswing, sharing the technology with eleven hand-picked partners (nine of the most powerful technology companies, a bank and an open-source foundation) and the US administration.
This critically important set-up is likely to shape the future of our world in the coming months.
We are all currently deeply immersed in and excited about the world of agentic AI, with agents readily available and acting on our behalf. It won't take much to pair a bad actor's agent with Mythos and then break, from the inside out, the systems we rely on every day to get our wages, buy food or attempt to pay for fuel already priced sky high thanks to the aforementioned US administration.
There is so much concern about the model that, even given the strained relationship between the White House and Anthropic, talks have reportedly begun between White House Chief of Staff Susie Wiles and Anthropic CEO Dario Amodei to discuss how this capability could transform national security and the economy.
The reason for this post is that we all need to urgently consider the consequences of this for ourselves, even though we are right at the bottom of the food chain.
We won't be given access to it any time soon but those companies that do have access — Apple, Amazon, Cisco, JPMorgan Chase, Nvidia — sit alongside a US administration that appears determined to dominate the rest of the world by fair means or foul.
We everyday folk need to get our own act together in readiness for this. Think of it as the ultimate in crisis preparation.
Two examples Anthropic disclosed will give you some understanding of why this 'preview' is alarming, significant and should have us prepping our crisis plans.
It found a flaw in the Linux kernel. That is the core software that runs most of the world's internet servers, most cloud computing, most of the backend of the entire digital economy. And not just one flaw. It found multiple flaws and then worked out how to chain them together so that a hacker exploiting them could take complete control of the machine. The word "chain" is important because finding a single vulnerability is useful to an attacker. Finding a sequence of them that combine into a full takeover is the difference between a lock-pick and a master key.
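The idea of chaining can be pictured as a path-finding problem. Here is a purely illustrative Python sketch — the exploit names and privilege levels are invented for the example, not taken from Anthropic's disclosure. Each bug on its own only moves an attacker one step; the danger is a sequence that connects them end to end:

```python
from collections import deque

# Hypothetical exploits: each moves an attacker from one privilege
# level to another. Individually, none of them reaches full control.
exploits = {
    "info_leak":  ("unauthenticated", "user"),
    "logic_flaw": ("user", "service_account"),
    "memory_bug": ("service_account", "root"),
}

def find_chain(exploits, start="unauthenticated", goal="root"):
    """Breadth-first search for a sequence of exploits linking start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        level, path = queue.popleft()
        if level == goal:
            return path
        for name, (src, dst) in exploits.items():
            if src == level and dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [name]))
    return None  # no full chain: each bug alone is a lock-pick, not a master key

print(find_chain(exploits))  # → ['info_leak', 'logic_flaw', 'memory_bug']
```

Each individual bug here is modest; only the complete path from "unauthenticated" to "root" is a takeover, which is why chaining is the step that turns a lock-pick into a master key.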
Then Mythos found a 27-year-old vulnerability in OpenBSD. This is an operating system used in banks, governments, military systems and critical infrastructure precisely because it's considered one of the most secure systems ever built.
OpenBSD has been examined by the best human security researchers in the world for nearly three decades. Mythos found something all of them missed.
If we were watching a movie together, this is the point at which the music would change key, or the theme to 'Jaws' would reverberate in the background. The most scrutinised secure operating system on Earth had a flaw hiding in plain sight for 27 years. Mythos found it. Which means Mythos is probably finding flaws in everything else too — the software running your bank, your power grid, your hospital, your phone, your car, your government's records. Most of that software is less scrutinised than OpenBSD. And this is what makes Mythos qualitatively different from previous tools.
For decades, finding security vulnerabilities has been skilled human work. A talented security researcher might find a handful of serious vulnerabilities per year. The world has maybe a few thousand of these people working defensively and probably a similar number working offensively for state intelligence services and criminal groups. The pace of the attack-defence arms race has been bounded by the number of skilled humans on each side.
Mythos doesn't need a human. It can search for vulnerabilities autonomously — meaning you give it a goal and it goes and does it, without step-by-step human direction. It doesn't sleep. It can work on many problems at once. It can try millions of possibilities a human would never have time to try. And it can chain findings together into working attack tools the way a senior penetration tester would, but faster and without fatigue.
Another detail from Anthropic's disclosure is worth your attention. As a test, they asked it to 'escape' its sandbox, the controlled environment supposed to confine it, and to send a message if it succeeded. It did. Then, uninstructed, it went further. It posted details of the technique it had used to hard-to-find but technically public websites. Nobody asked it to do this. It did it to demonstrate its success. It seems Mythos has an ego.
That's a worrying behavioural signal. The model wasn't just technically capable; it acted on its own initiative in a way its developers hadn't asked for and couldn't fully predict. In the safety literature, this is the thing people have been worrying about for a long time.
It tilts the offence-defence balance sharply toward offence. Every computer system in the world runs on software with bugs. Historically, finding the bugs that matter for security has been slow, expensive, and rate-limited by human talent. Mythos removes the rate limit for whoever has access. If you're a defender trying to protect thousands of systems, you have to fix every single vulnerability to stay safe. If you're an attacker, you only have to find one. Automating the finding part is a massive asymmetric advantage for attackers.
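That asymmetry can be made concrete with some simple arithmetic. Suppose, purely hypothetically, that a defender manages to patch each known vulnerability with 99% reliability. The attacker only needs one to slip through:

```python
def attacker_success_probability(n_vulns, patch_rate=0.99):
    """Chance that at least one vulnerability slips through the defence.

    Assumes (simplistically) that each vulnerability is patched
    independently with probability `patch_rate`.
    """
    return 1 - patch_rate ** n_vulns

for n in (10, 100, 1000):
    # roughly 0.10 at 10 vulns, 0.63 at 100, and near-certainty at 1000
    print(n, round(attacker_success_probability(n), 4))
```

Even near-perfect patching loses to scale: across a thousand vulnerabilities, a 99% fix rate still leaves the attacker a near-certain way in, which is exactly the asymmetry that automating discovery makes worse.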
Access is concentrated. The eleven Project Glasswing partners, alongside Anthropic, are Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks. These are overwhelmingly US-based, overwhelmingly massive, and overwhelmingly in specific sectors (hyperscale cloud, chips, security vendors, one bank, one open-source foundation). That's not global representation. Anyone not in that list — meaning essentially every government outside the US, every hospital, every small bank, every critical infrastructure operator in the developing world, every journalism organisation, every human rights group, every normal business — is at a capability disadvantage they cannot close.
Competitors will catch up in months. Anthropic's own head of the red team estimated six to eighteen months until others have equivalent capability. "Others" here means OpenAI and Google with certainty, Chinese frontier labs very likely, Russian and Israeli state programmes probably, and sophisticated criminal groups eventually. By late 2026 or early 2027, multiple actors — some of them state intelligence services, some of them for-profit attackers — will have tools of this class. The current US-concentrated window is temporary. Our world will look very different for multiple reasons by the end of the year. Mythos is one of those reasons.
So much could go wrong. Any of these scenarios is plausible with Mythos-class capabilities in the wrong hands:
A country's banking system is taken offline for days because attackers chained vulnerabilities the defenders didn't know existed. Cash stops working. Payments freeze. Small businesses fail within weeks.
A power grid is taken down in a region — not permanently, but for long enough to kill people in a cold snap or heatwave, and long enough to cause a cascade of secondary failures (water treatment stops, hospitals run on generators until fuel runs out, traffic systems fail).
A pharmaceutical or food supply-chain company's systems are ransomwared, and the attackers have so thoroughly compromised the backup systems that recovery takes months. Medicines in short supply become unavailable. People die who wouldn't have.
A media organisation's entire archive of source communications is extracted and sources are identified. Dozens of people in authoritarian countries are imprisoned or killed.
A government's voter rolls are altered subtly — not obviously, just enough to be noticed too late.
An activist group, a religious organisation, a corporate competitor, a political opposition movement — any of these has every digital communication of every member extracted, analysed, and used against them.
None of these scenarios are implausible. They're extensions of things that happen now, but with a capability level that makes them much more reliable for the attacker and much harder to defend against.
Anthropic's decision to withhold general release is genuinely the more cautious option, and from what the public disclosure shows, they seem to be taking the risk seriously in a way that is not guaranteed with other labs. They've built what they call a defensive cybersecurity programme with the eleven partners, the idea being that the partners find the vulnerabilities in their own products before attackers do.
What they cannot do: prevent competitors from building equivalent capability. Their choice shapes the next six to twelve months but not the longer arc.
What's unresolved? Anthropic is in a legal dispute with the US Department of War (formerly the Department of Defense, following the Trump/Hegseth rebrand), which labelled Anthropic a supply-chain risk after Anthropic refused to allow its AI to be used in autonomous weapons and mass surveillance. This is important context: Anthropic is trying to hold a line on specific use cases against a US government that wants the capability for exactly those purposes. Whether that line holds under sustained pressure is a real question.
What does it mean for those of us outside the corridors of technocratic power?
You're not going to be personally targeted by a state actor using a Mythos-class tool. What you will experience might include any or all of the following:
Services you rely on will have outages you can't explain, some of them because of attacks, some of them because defenders are patching urgent holes. Expect more of these, more often.
The scam environment is going to get dramatically worse, faster than it already is. The same capability that lets Mythos find vulnerabilities in systems also lets it find vulnerabilities in people — personalised spear-phishing that sounds exactly like your actual boss, deepfake calls from family members in distress, AI-generated documents that pass every check you'd normally apply.
Trust in digital anything will be worth less. Verification — actually calling someone back on a known number, actually meeting in person, actually having an out-of-band agreed code word — is going to come back into everyday practice for the people paying attention.
Small businesses and local organisations that have never thought about cybersecurity are going to have to start. Most won't. Many will get hurt.
And for the broader situation — a US government running three parallel tracks of consolidation, with a cult of personality, contesting the 2026 midterms, looking for leverage over domestic and foreign opponents — the existence of Mythos-class tools in the hands of an administration that has already shown it will stretch every legal and institutional limit is not reassuring. The Anthropic-Department of War lawsuit is likely to be the test: one of the few AI labs trying to refuse the most dangerous uses is being leaned on hard. I suspect the leaning will continue.
If you find yourself needing to explain Mythos to your neighbour, your nana or your nearest and dearest, here's an explanation for you in a nutshell.
A new kind of AI tool can find hacking weaknesses far faster than any human team, in software nobody thought was broken. Anthropic has held it back from general release but given it to eleven big companies, and competitors will have equivalent tools within a year or so. The balance of power between attackers and defenders has shifted sharply toward attackers. Most of the world doesn't know this yet. The people who do, and who have access, are a very small and mostly US-based group. In an era when the US government is behaving unpredictably and pressuring tech companies to provide capabilities for surveillance and warfare, the concentration of this kind of power in this small group is a risk worth taking seriously even if you never personally touch a computer.