Anthropic’s Project Glasswing highlights how advanced AI models may rival top human experts in finding and exploiting software vulnerabilities.
Early claims from the company suggest these models, such as Claude Mythos Preview, can operate at large scale and find vulnerabilities faster than human researchers.
However, security leaders share mixed views on the claims.
“Mythos appears to materially change the economics and cadence of exploitation and compromise,” said Jared Atkinson, CTO at SpecterOps and a former U.S. Air Force Hunt Team member, in an email to eSecurityPlanet.
“The action layer is where this plays out. Defensive AI is a vital step, but it only works if the full agentic stack (LLMs, MCP servers, and APIs) is visible and secured end to end,” said Michael Callahan, VP of Strategy at Salt Security, in an email to eSecurityPlanet.
“Project Glasswing will hopefully help eliminate some of the low-hanging fruit at the largest organizations, so when competing models are available to attackers, most have already been found and patched,” said Rob Babb, Exposure Management Strategist at Seemplicity in an email to eSecurityPlanet.
Steven Swift, Managing Director at Suzu Labs, cautioned, “Anthropic has a reputation for exaggerating the capabilities of their models, especially around their ability to find novel vulnerabilities.”
He added, “For example, their models have flagged lines of code that could be vulnerable, but only if you ignored the preceding lines of code, which properly handled the risk and left no residual vulnerability.”
Inside Project Glasswing’s AI Security Push
Project Glasswing marks a coordinated industry effort to secure critical infrastructure at scale using AI.
At the center is Claude Mythos Preview, a frontier model that Anthropic claims has identified thousands of high-severity vulnerabilities across major operating systems and web browsers, many previously undetected.
What sets this development apart is not just the volume of findings, but the capability behind them.
Claude Mythos Preview uses advanced agentic coding and reasoning to autonomously analyze code, uncover vulnerabilities, and map potential exploits.
In testing, the model reportedly uncovered decades-old flaws missed by extensive human review and automated scans.
For security leaders, this marks a fundamental shift as AI lowers the barrier to discovering and exploiting vulnerabilities, compressing the timeline from discovery to attack from months to potentially minutes.
Because Project Glasswing is limited to select partners, Anthropic’s claims about Mythos Preview’s large-scale vulnerability discovery remain unverified by the broader security community.
Reducing Risk from AI-Powered Attacks
While Anthropic’s claims cannot be independently verified, organizations can still take fundamental steps to secure their code:
- Integrate AI-assisted and DevSecOps tools, along with advanced fuzzing, to proactively detect vulnerabilities in code and dependencies.
- Strengthen patch and vulnerability management programs to reduce time-to-remediation at scale.
- Enforce least privilege, segmentation, and privileged access management to limit the impact of exploitation.
- Secure the software supply chain with SBOMs, dependency scanning, and code integrity verification.
- Harden CI/CD pipelines and restrict access to repositories, APIs, and sensitive codebases.
- Monitor runtime activity and logs for anomalous behavior, rapid exploit attempts, and AI-driven attack patterns.
- Test incident response plans and run attack simulations covering code compromise and AI-powered attack scenarios.
Collectively, these steps help build resilience against AI-driven threats while minimizing overall blast radius.
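The dependency-scanning step above can be sketched in a few lines. This is a minimal, hypothetical illustration, not a production scanner: real tooling would query a live advisory feed (such as the OSV database) rather than the made-up `KNOWN_VULNERABLE` table and `EXAMPLE-2024-0001` advisory ID used here for demonstration.

```python
# Hypothetical advisory data: (package, version) -> advisory ID.
# Real scanners pull this from a maintained vulnerability feed.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "EXAMPLE-2024-0001",
}

def parse_requirements(text):
    """Parse simple 'name==version' manifest lines into (name, version) pairs."""
    pairs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip blanks, comments, and unpinned entries
        name, version = line.split("==", 1)
        pairs.append((name.strip().lower(), version.strip()))
    return pairs

def find_vulnerable(requirements_text):
    """Return (name, version, advisory) for each pinned dependency with a known advisory."""
    return [
        (name, version, KNOWN_VULNERABLE[(name, version)])
        for name, version in parse_requirements(requirements_text)
        if (name, version) in KNOWN_VULNERABLE
    ]

if __name__ == "__main__":
    manifest = "examplelib==1.2.0\nsafelib==2.0.1\n"
    for name, version, advisory in find_vulnerable(manifest):
        print(f"{name} {version} -> {advisory}")
```

Wiring a check like this into a CI/CD pipeline, and failing the build on any match, is one concrete way to act on the supply chain and pipeline-hardening recommendations above before code ships.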
AI Changes the Cybersecurity Playbook
Project Glasswing reflects a broader shift in cybersecurity, where AI is moving beyond productivity use cases to play a central role in both identifying and exploiting vulnerabilities.
As these capabilities advance, outcomes will increasingly depend on how effectively organizations integrate AI into their defensive strategies.
The initiative also reinforces the need for coordinated efforts across industry and government, as addressing the systemic risks introduced by AI-driven cyber capabilities requires shared visibility, standards, and collaboration.
As AI becomes more deeply embedded in both offensive and defensive security operations, the need for clear governance is becoming critical.
