Anthropic unveiled Claude Mythos Preview on April 7, 2026, drawing attention well beyond the cybersecurity community: finance ministers raised it at the IMF, the Bank of England governor stressed its seriousness, and the UK government issued an open letter to business leaders nationwide.
Mythos independently identifies thousands of critical- and high-severity vulnerabilities in major operating systems and web browsers, including a 27-year-old flaw in OpenBSD, and produces working exploits without human input. The UK’s AI Security Institute (AISI) evaluated it and confirmed that, in lab settings, Mythos can execute a 32-step simulated corporate network attack, from reconnaissance to complete control, a task that typically takes human experts about 20 hours.
These findings stem from controlled lab environments lacking active defenses, security monitoring, or defensive tools. Firefox tests omitted the browser’s process sandbox. While Mythos demonstrates strong capabilities, it has yet to face fortified, defended systems. AISI projects that frontier AI cyber abilities double every four months.
Security as Economics
AISI allocated 100 million tokens per simulated network attack attempt. Over ten trials, Mythos completed the full 32-step sequence three times. Performance did not decline at higher token budgets; results simply improved with more compute, which means attackers gain an edge by spending more. Defenders must outpace this by dedicating greater computational power to vulnerability discovery. Experts recommend running continuous AI-driven vulnerability operations across all software assets, replacing annual penetration tests, which can no longer keep pace with real-time threats.
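The economics above can be made concrete with a back-of-the-envelope model. Using the figures reported by AISI (100 million tokens per attempt, 3 successes in 10 trials) and treating each attempt as an independent trial, the expected spend until the first full-chain success follows directly; the per-token price used here is a hypothetical placeholder, not a published figure.

```python
# Back-of-the-envelope cost model for AISI-style attack trials.
# Figures from the article: 100M tokens per attempt, 3 successes in 10 trials.
# PRICE_PER_MTOK is a HYPOTHETICAL placeholder, not a published price.

TOKENS_PER_ATTEMPT = 100_000_000
SUCCESS_RATE = 3 / 10
PRICE_PER_MTOK = 3.00  # hypothetical $/million tokens

def expected_cost_per_success(tokens_per_attempt: int,
                              success_rate: float,
                              price_per_mtok: float) -> float:
    """Expected spend until the first full-chain success (geometric model)."""
    expected_attempts = 1 / success_rate
    tokens = tokens_per_attempt * expected_attempts
    return tokens / 1_000_000 * price_per_mtok

cost = expected_cost_per_success(TOKENS_PER_ATTEMPT, SUCCESS_RATE, PRICE_PER_MTOK)
print(f"Expected attempts: {1 / SUCCESS_RATE:.1f}")      # → 3.3
print(f"Expected cost per successful chain: ${cost:,.0f}")  # → $1,000
```

Even if the assumed price is off by an order of magnitude, the point stands: a full compromise chain costs far less than the roughly 20 expert-hours it replaces, and the cost falls further as attackers buy more attempts.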
Patches as Attack Signals
Project Glasswing grants early Mythos access to around 40 major software vendors for codebase reviews, leading to coordinated disclosures. Each patch, however, also signals the underlying weakness to adversaries: AI tools now rapidly analyze patch diffs to reconstruct exploits. The Zero Day Clock tracks exploit development time falling from 2.3 years in 2018 to about 20 hours in 2026. Every day a system goes unpatched heightens risk, making mean-time-to-remediate for exposed vulnerabilities a key security metric.
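The first step of patch-diff analysis is mechanical: find which files and functions a security fix touches, then reason about what the change prevents. A minimal sketch of that triage step, using a fabricated sample diff (the file, function, and fix shown are illustrative, not from any real patch):

```python
# Sketch: the triage step of patch diffing. Parses a unified diff and lists
# the files and enclosing-function contexts the patch touches, which is where
# exploit reconstruction starts. SAMPLE_DIFF is fabricated for illustration.

import re

SAMPLE_DIFF = """\
--- a/net/parser.c
+++ b/net/parser.c
@@ -118,7 +118,9 @@ static int parse_header(struct pkt *p)
-    memcpy(buf, p->data, p->len);
+    if (p->len > sizeof(buf))
+        return -EINVAL;
+    memcpy(buf, p->data, p->len);
"""

def touched_contexts(diff: str) -> list[tuple[str, str]]:
    """Return (file, hunk context) pairs from a unified diff."""
    results, current_file = [], None
    for line in diff.splitlines():
        if line.startswith("+++ b/"):
            current_file = line[6:]
        elif line.startswith("@@"):
            # Hunk header: "@@ -start,count +start,count @@ <enclosing context>"
            context = re.sub(r"^@@[^@]*@@\s*", "", line)
            results.append((current_file, context))
    return results

for path, ctx in touched_contexts(SAMPLE_DIFF):
    print(f"{path}: {ctx}")
# → net/parser.c: static int parse_header(struct pkt *p)
```

A bounds check inserted before a `memcpy`, as in the sample, is a classic overflow-fix signature; the unpatched version of that exact line is the exploit target. This is why the patch itself functions as an attack signal.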
Open-Source Transparency’s Risks
Mythos scans source code for flaws, directly analyzing open-source projects while partnering with closed-source vendors. This raises concerns for open development policies, such as the UK government’s open-source commitments. Public codebases invite AI scrutiny, turning repositories into exploit targets. Linux kernel vulnerabilities have risen to 10 verified reports weekly. Organizations using open source, especially in critical infrastructure, must weigh transparency against exposure.
Defense in Depth and Diversity
The UK government’s letter stresses that countermeasures against AI threats mirror standard cyber hygiene: segmentation, identity controls, egress filtering, and phishing-resistant MFA. Not all flaws pose equal danger: a CVE on an internal system carries far less risk than one on an internet-facing service. Diverse architectures also complicate end-to-end attacks, since exploits rarely transfer across stacks. NCSC guidance on protocol breaks exemplifies this, forcing attackers to master multiple technologies.
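The protocol-break idea can be sketched in a few lines: an intermediary fully parses untrusted input, keeps only fields it recognizes, and re-serializes the result with its own trusted code, so a payload crafted against the inner system never reaches it verbatim. The field names and schema below are hypothetical; real deployments apply the same principle at network boundaries with dedicated hardware or gateways.

```python
# Sketch of a protocol break: parse untrusted input, keep only known fields
# of known types, and re-emit via trusted serialization. An exploit payload
# aimed at the inner parser is stripped or rejected at the boundary.
# ALLOWED_FIELDS is a hypothetical schema for illustration.

import json

ALLOWED_FIELDS = {"sensor_id": str, "reading": float}

def protocol_break(raw: bytes) -> bytes:
    """Re-serialize untrusted JSON, dropping anything outside the schema."""
    data = json.loads(raw)
    clean = {k: v for k, v in data.items()
             if k in ALLOWED_FIELDS and type(v) is ALLOWED_FIELDS[k]}
    if clean.keys() != ALLOWED_FIELDS.keys():
        raise ValueError("rejected: missing or malformed fields")
    return json.dumps(clean).encode()

safe = protocol_break(b'{"sensor_id": "s-17", "reading": 4.2, "extra": "..."}')
print(safe)  # unknown field stripped; bytes rebuilt by trusted code
```

An attacker who compromises the outer network now needs a second, different exploit against the break itself: the end-to-end chain must cross two unrelated stacks, which is exactly the property the NCSC guidance aims for.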
AI as Geopolitical Tool
Anthropic limits Mythos access via Project Glasswing to select partners and governments, with the US Treasury briefing major banks. Such models function as strategic assets, akin to past US encryption export controls used for influence. Restricted access could shape geopolitics or trade deals, creating dependencies for international organizations reliant on controlled AI for defense.
Leaders advise prioritizing vulnerabilities, shrinking attack surfaces, and automating security through architecture rather than expanding staff. This marks the start of escalating AI-driven challenges in cybersecurity.
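Prioritization of the kind described above can be expressed as a simple scoring rule that weights exposure and exploit availability over raw severity, consistent with the earlier point that internet-facing flaws matter most when exploit development takes hours. The weights and sample CVE records below are hypothetical, not a standard formula.

```python
# Sketch: vulnerability prioritization weighting exposure over raw severity.
# Multipliers and the sample CVE records are HYPOTHETICAL illustrations.

from dataclasses import dataclass

@dataclass
class Vuln:
    cve: str
    cvss: float          # base severity, 0-10
    internet_facing: bool
    exploit_public: bool

def priority(v: Vuln) -> float:
    """Higher score = patch sooner."""
    score = v.cvss
    if v.internet_facing:
        score *= 2.0     # reachable by automated, internet-wide attack
    if v.exploit_public:
        score *= 1.5     # a working exploit shortens the safe patch window
    return score

backlog = [
    Vuln("CVE-2026-0001", 9.8, internet_facing=False, exploit_public=False),
    Vuln("CVE-2026-0002", 7.5, internet_facing=True,  exploit_public=True),
]
for v in sorted(backlog, key=priority, reverse=True):
    print(f"{v.cve}: priority {priority(v):.1f}")
```

Under this rule the exposed, exploitable 7.5 outranks the internal 9.8 (22.5 vs 9.8), which is the behavioral change the guidance asks for: triage by reachability and exploit speed, not CVSS alone.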

