Anthropic’s Mythos Announcement: The Logic Behind the Magic
I attended the HumanX conference in San Francisco this week with a singular mission: to figure out what is really going on in enterprise AI deployments, and determine how the threat landscape is evolving in this new world.
What became very evident to me is that there is a disconnect between the reality of what’s happening in enterprise AI, what venture investors and startups think is happening, and what public equity investors think is happening.
I’m going to be writing a series of pieces diving into key takeaways from the conference for our customers at Marker, but something I think everyone should understand is how we’ve arrived at a place where a new AI model is capable of identifying vulnerabilities in code that have flown under the radar - outside the purview of some of the most skilled engineers and security professionals in the world - for, in some cases, nearly three decades.
How AI Is Detecting Vulnerabilities Humans Couldn’t
The tasks AI agents are best at today are tasks that are already well-defined within human-driven environments. Take AI coding agents as an example.
The software development process and the best practices surrounding it were well established and documented long before the advent of AI. Humans have been building software for decades; we know how to do it. The largest businesses in the world already run on this pre-AI era software.
AI coding agents have exploded in popularity because the software development process is a natural fit for what AI does best: automating well-defined tasks.
What’s happening now is that AI coding agents are getting very good at writing technically sound code.
Now, before the developers reading this reach for their pitchforks, let me be more specific: the bugs in AI-written code tend to be business logic bugs - weirdo bugs that emerge because AI coding agents don’t have all the context that human developers have when writing software as a business solution. That is different from technical vulnerabilities in the code or in the software deployment lifecycle itself.
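To make the distinction concrete, here is a minimal hypothetical sketch (the function names and the discount rule are invented for illustration, not drawn from any real deployment). The first function is technically sound but encodes a business logic bug; the second contains a classic technical vulnerability of the kind vulnerability detection targets.

```python
# Business logic bug: the code runs correctly, but the business rule is wrong.
# Hypothetical policy: the loyalty discount should only apply to orders over
# $100 - but without that context, the agent applies it to every order.
def apply_loyalty_discount(order_total: float) -> float:
    return round(order_total * 0.90, 2)  # executes fine, silently loses revenue

# Technical vulnerability: SQL injection via string interpolation.
# This is the category of flaw automated vulnerability detection can catch.
def find_user_insecure(cursor, username: str):
    query = f"SELECT * FROM users WHERE name = '{username}'"  # injectable
    return cursor.execute(query)

# The remediated version uses a parameterized query instead:
def find_user_safe(cursor, username: str):
    return cursor.execute("SELECT * FROM users WHERE name = ?", (username,))
```

The business logic bug is invisible to a purely technical scan - only someone who knows the pricing policy can spot it - while the injection flaw is detectable from the code alone.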
As these AI coding agents - drawing on the intelligence of frontier models - have become excellent developers purely in a technical sense, it naturally follows that they are also better at finding vulnerabilities in pre-existing software.
The Progression of Vulnerability Detection
The progression of vulnerability detection so far looks like this:
- Junior Developer / Security Researcher: Not well suited to finding vulnerabilities
- Senior Developer / Security Researcher: Highly skilled at finding vulnerabilities, but limited by time, bandwidth, and human error
- Claude Mythos: The next evolution - combines technical accuracy with velocity and scale to detect vulnerabilities humans can’t
It’s impressive, but not all that surprising, that Anthropic’s Mythos model is finding zero-day vulnerabilities in legacy, human-written codebases faster than humans can. It’s no secret that humans make mistakes, and velocity and scale are well-understood benefits of agentic AI systems. Put those two known quantities into an equation and the vulnerability detection outcome we’re seeing is a logical result.
Anthropic didn’t set out to build a vulnerability detection tool; the Mythos model’s capabilities emerged as a result of better reasoning and coding, just as a developer’s reasoning and coding improve as they progress from junior to senior.
Project Glasswing and Why Anthropic Needs Help
A question many still have is: If Mythos is so good at finding vulnerabilities, why does Anthropic need CrowdStrike, Palo Alto Networks, Amazon, Microsoft, and others involved in Project Glasswing to help solve the problem?
The answer is one that will pervade all aspects of agentic AI deployments across organizations the world over, and not just in cybersecurity. I touched on it earlier: AI needs business context to be truly effective.
Without an understanding of the business problem an organization is trying to solve, and without humans providing expertise and knowledge of the business’s priorities as context, AI can’t implement solutions to business problems in ways that are operationally safe and effective.
In the case of Mythos, it can find vulnerabilities at a speed and scale no human can match. But finding a vulnerability and safely remediating it across thousands of production systems worldwide are very different problems.
The partners in Project Glasswing are instrumental in solving this problem not only because they own many of the critical codebases that need securing, but because they have the domain expertise to triage and prioritize what gets fixed first, and the operational expertise and capacity to deploy patches to millions of systems without breaking the businesses that run on them.
Mythos would be unable to implement a real solution to the problem without the help of these partners - an operational constraint, not a technical one.
The Human + AI Partnership
What we’re seeing with Project Glasswing is the evolution of an ideal Human + AI partnership, one born out of necessity:
- Mythos identifying zero-day vulnerabilities at high velocity and at scale
- Senior Cybersecurity Experts providing domain expertise to (a.) further refine Mythos’ detection priorities, and (b.) provide the operational context needed to implement fixes without disrupting the businesses that depend on that software
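The division of labor above could be sketched as a simple triage pipeline. Everything here is hypothetical - the field names, scores, and weighting scheme are illustrative inventions, not anything Anthropic or its partners have described:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str            # identifier for the detected vulnerability
    ai_severity: float     # 0-10 technical severity from the model's analysis
    human_priority: float  # 0-1 business-impact weight set by a domain expert

def triage(findings: list[Finding]) -> list[Finding]:
    """Order the remediation queue by combining AI detection with human context."""
    return sorted(findings, key=lambda f: f.ai_severity * f.human_priority,
                  reverse=True)

queue = triage([
    Finding("CVE-A", ai_severity=9.8, human_priority=0.2),  # severe, low-impact system
    Finding("CVE-B", ai_severity=6.5, human_priority=1.0),  # moderate, business-critical
])
# CVE-B (6.5) outranks CVE-A (1.96): human context reorders the model's raw findings
```

The point of the sketch is the shape of the partnership: the model supplies the raw findings at scale, and the human-supplied weight determines what actually gets fixed first.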
This Human + AI partnership model is emerging as a central theme for enterprises successfully rolling out AI deployments across their organizations. It’s not a workaround for AI’s limitations; it’s the intended architecture.
The software that gets built to deliver the solution to the vulnerability problem (and many future problems) will no doubt rely on the same Human + AI partnership model.
I’ll be writing more about this in the future. For now, we should be thinking about future cybersecurity announcements from Anthropic and other frontier model labs within the Human + AI partnership context. That framing will give us the clearest picture of how all of this unfolds.
Attackers are already using this partnership model. It’s time for defenders to do it too.