Mythos Outside, Agents Inside: The Zero Trust Answer to AI on Both Sides of the Firewall
On April 7, 2026, Anthropic announced Claude Mythos Preview and changed how the world thinks about cybersecurity. Merely a week later, the Federal Reserve Chairman was meeting with the CEOs of the largest U.S. banks to discuss what it meant for financial sector cybersecurity. Within two weeks, every major analyst firm, security vendor, and national cyber agency had published a response. The UK's AI Security Institute ran independent evaluations. Bain, NCC Group, and the Cloud Security Alliance issued guidance. Boards started asking their CISOs for a briefing.
What triggered this level of attention was not a breach but simply a capability disclosure.
Mythos demonstrated the ability to autonomously discover thousands of zero-day vulnerabilities across major operating systems and web browsers, some of which had existed undetected for more than two decades. More significantly, it generated working exploit code for those vulnerabilities without expert human guidance, in hours rather than weeks, and at a fraction of the historical cost.
For highly skilled researchers, finding and weaponizing vulnerabilities is nothing new. What is new is the speed, scale, and accessibility at which this capability can now operate. Anthropic has restricted Mythos to a defensive consortium, but other open-weight models with comparable capabilities are expected to emerge within the next 12 to 18 months. When that happens, adversaries will not ask permission before using them against your environment.
What Mythos ultimately signals is the collapse of a long‑standing assumption that enterprise security has quietly depended on for years. The gap between vulnerability disclosure and exploit weaponization used to be measured in weeks. Security programs, patch cycles, and detection workflows were built around that gap. Today, that window is measured in hours and trending toward minutes, at a cost lower than many routine software licenses.
The question for security leaders in 2026 is no longer whether AI will change the threat landscape; that debate is already settled. The question is whether the organization's security architecture is designed to address threats operating at both human and machine speed.
The AI threat has two faces, and they share a spine
The external threat is the one dominating the headlines. A Mythos-class capability in the hands of an adversary can continuously scan internet-facing infrastructure, identify exploitable weaknesses faster than vendors can patch them, chain vulnerabilities into complete attack paths, and reverse engineer closed-source binaries where no public source code exists. What once required elite teams now runs almost effortlessly, at cloud scale.
However, there is also the internal threat, and this is the one most organizations already live alongside yet rarely understand.
Gartner projects that 40% of enterprise applications will embed task specific AI agents by the end of 2026, up from less than 5% in 2025. These agents read databases, invoke APIs, generate content, orchestrate workflows, and take actions that directly affect business operations.
They are, for all practical purposes, a new class of non-human user. One that reasons about what to do next, operates at machine speed, and almost entirely lacks identity registration, scoped permissions, or a revocation path. Shadow AI, unauthorized AI tools adopted by employees without security oversight, was a factor in roughly one in five AI-related incidents in 2025.
At first glance, Mythos and Shadow AI appear to be different problems. An external adversary equipped with Mythos is a weaponized threat. An internal AI agent misusing access is a governance failure.
Architecturally, though, they are the same problem.
Both involve entities that reason, act, and adapt faster than human review can intervene. Both encounter security architecture built on assumptions that no longer hold. That authenticated users are trusted users. That internal traffic is safer than external traffic. That detection precedes damage. That the perimeter is the primary control point.
Why Zero Trust is the only architecture that addresses both
Zero Trust Architecture, when implemented as an architecture rather than a label, is the only model that holds against both sides of this equation. Not because it is new, but because the conditions that once made it optional no longer exist.
Independent testing by the UK's AI Security Institute confirmed the point directly: Mythos cannot reliably execute autonomous attacks against organizations with well-hardened defenses. The controls that constitute strong cybersecurity fundamentals (robust access controls, network segmentation, automated patching, Zero Trust architecture, anomaly detection) already provide significant protection against AI-enabled attacks. Most organizations, however, simply have not built those fundamentals to the required standard yet.
What Zero Trust Architecture must deliver in an AI speed environment
Mature Zero Trust Architecture is not a collection of security tools. It is a system in which identity, endpoints, networks, data, and security operations respond to each other in real time. That system level behavior is what makes it effective against AI speed threats.
Against an external adversary, Zero Trust architecture reduces exposure by design. Internet-facing applications are not directly reachable. There is no public IP, open port, or DNS record for automated scanners to enumerate. When an opening is found, segmentation, least privilege access, and data classification ensure the impact is contained rather than systemic.
Against internal AI agents, whether sanctioned, shadow, or manipulated through prompt injection, the same architecture applies. Every agent operates under a registered identity with scoped permissions, a human owner, and a revocation path. A finance function agent cannot access engineering infrastructure. An agent that begins exfiltrating data loses its session before the next action completes.
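To make the idea concrete, here is a minimal sketch, with entirely hypothetical names, of what a registered agent identity with a human owner, scoped permissions, and an immediate revocation path could look like. It is an illustration of the pattern, not a reference to any particular product.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a registry of non-human (agent) identities, each with
# an accountable human owner, an explicit permission scope, and a revocation
# path. Deny-by-default: unknown, revoked, or out-of-scope agents fail.

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                                 # accountable human owner
    scopes: set = field(default_factory=set)   # e.g. {"finance:read"}
    revoked: bool = False

class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, agent: AgentIdentity):
        self._agents[agent.agent_id] = agent

    def authorize(self, agent_id: str, required_scope: str) -> bool:
        """Every action is checked; there is no standing trust."""
        agent = self._agents.get(agent_id)
        if agent is None or agent.revoked:
            return False
        return required_scope in agent.scopes

    def revoke(self, agent_id: str):
        """Revocation takes effect on the very next authorization check."""
        if agent_id in self._agents:
            self._agents[agent_id].revoked = True

registry = AgentRegistry()
registry.register(AgentIdentity("finance-bot", owner="cfo@example.com",
                                scopes={"finance:read"}))

assert registry.authorize("finance-bot", "finance:read")      # in scope
assert not registry.authorize("finance-bot", "eng:deploy")    # out of scope
registry.revoke("finance-bot")
assert not registry.authorize("finance-bot", "finance:read")  # revoked
```

The design choice that matters here is deny-by-default: a finance agent never acquires engineering scope by accident, and revocation does not wait for a session to expire.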
The architecture holds in both cases for the same reason. Signals flow across security components at machine speed. When identity risk is discovered, access changes immediately. Data loss events trigger automatic containment. Human oversight shifts to reviewing outcomes rather than gating every decision.
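That signal-to-enforcement loop can be sketched in a few lines. The following is a toy illustration, with hypothetical names, of risk signals propagating directly to containment actions, so a session is gone before the agent's next action, and humans review outcomes afterward.

```python
from typing import Callable

# Hypothetical sketch: security signals routed to containment handlers at
# machine speed, with no human in the loop for each individual decision.

class SignalBus:
    """Delivers risk signals to containment actions as soon as they arrive."""
    def __init__(self):
        self._handlers: dict[str, list[Callable[[dict], None]]] = {}

    def subscribe(self, signal: str, handler: Callable[[dict], None]):
        self._handlers.setdefault(signal, []).append(handler)

    def publish(self, signal: str, event: dict):
        for handler in self._handlers.get(signal, []):
            handler(event)  # containment runs inline, before the next action

active_sessions = {"agent-42"}
audit_log = []  # humans review outcomes here, after containment

def contain_identity_risk(event):
    # Identity risk discovered -> access changes immediately.
    active_sessions.discard(event["subject"])
    audit_log.append(("session_revoked", event["subject"]))

def contain_data_loss(event):
    # Data exfiltration signal -> automatic containment.
    active_sessions.discard(event["subject"])
    audit_log.append(("egress_blocked", event["subject"]))

bus = SignalBus()
bus.subscribe("identity_risk", contain_identity_risk)
bus.subscribe("data_loss", contain_data_loss)

bus.publish("data_loss", {"subject": "agent-42", "bytes": 10**9})
# The agent's session is revoked before it can take another action.
```

In a real deployment the bus would be the integration layer between identity, endpoint, network, and data controls; the point of the sketch is only that enforcement is event-driven rather than review-driven.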
This is the distinction between Zero Trust as an aspiration and Zero Trust as a working system.
Where this leaves security leaders
Modern security environments are complex enough that many CISOs cannot easily determine whether their controls operate as an integrated architecture or merely coexist. Identity, endpoint, network, data, and security operations often sit under different owners, with different tooling and limited shared visibility. Investment decisions are made without a clear understanding of how the system behaves under real pressure.
What organizations increasingly lack is observability of the architecture itself.
A structured Zero Trust Architecture assessment provides that view. It shows where the system is resilient, where it is brittle, and where AI speed threats will find leverage. For security leaders, that visibility becomes the foundation for prioritization, governance, and informed decision making in an environment where both attackers and internal systems now operate at machine speed. Learn more about how Atos helps organizations design and operationalize a Zero Trust Architecture built for AI‑driven threats.
