Anthropic has just pointed one of the most powerful AI models in existence directly at the internet’s most critical security problems, and it’s brought half of Silicon Valley along for the ride.
Project Glasswing is a cybersecurity initiative using Anthropic’s unreleased Claude Mythos Preview model to proactively find and patch critical vulnerabilities across widely used operating systems, browsers and open-source projects. The coalition behind it includes Amazon Web Services, Apple, Broadcom, Cisco, Google, Microsoft, CrowdStrike, JPMorganChase, the Linux Foundation, NVIDIA and Palo Alto Networks, among more than 40 other organisations.
The scale of the partner list deserves a closer look. This isn’t a research collaboration between a handful of security firms. The organisations involved collectively ship or depend on the core operating systems, cloud stacks, networking hardware, chips and open-source foundations that most of the world’s digital infrastructure runs on. Patches generated through Project Glasswing will flow into software updates used by most enterprises and consumers. That makes this less a product announcement and more an infrastructure play.
The timing matters. Anthropic has stated publicly that comparable offensive capabilities, AI tools able to systematically find and exploit software vulnerabilities across large codebases at speed, will likely emerge in the hands of malicious actors soon. Project Glasswing is a direct attempt to harden the world’s software before that window closes.
Anthropic has committed up to $100 million in usage credits for Claude Mythos Preview across the coalition, plus $4 million in direct donations to open-source security organisations.
Meet The AI That’s Been Finding 27-Year-Old Bugs
Claude Mythos Preview isn’t being released publicly. Access is tightly restricted to defensive security partners, a deliberate design choice given that the model can do something that most AI systems can’t: systematically discover zero-day vulnerabilities and, in some test cases, auto-generate working exploits. Keeping that capability out of general circulation while deploying it defensively is the core logic of the initiative.
Early runs have reportedly uncovered thousands of serious vulnerabilities, including bugs in widely used software that had survived between 16 and 27 years of human and automated review without being caught. The cited examples include OpenBSD and FFmpeg, software that runs in critical infrastructure and consumer devices worldwide.
Surfacing bugs that lay dormant for decades in heavily scrutinised codebases is a signal of what AI-assisted security analysis can do that years of human review could not.
The mechanism is simple in principle, if technically formidable in practice. Claude Mythos Preview scans codebases systematically for high-severity bugs, identifies the vulnerability and in many cases produces a patch. The patch then flows through the relevant partner’s normal update and distribution process.
Anthropic’s role is to provide the model capability and the coordination layer; the partners own the software and control the deployment.
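Anthropic has not published Glasswing’s internals, so the following is a purely illustrative sketch of the scan-patch-route loop described above. A single hand-written rule (flagging C’s unbounded `gets()`, a textbook buffer-overflow source, and proposing the bounded `fgets()` replacement) stands in for the model’s analysis; the function and type names are invented for this example.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    line: int
    issue: str
    patched_line: str

# Toy rule standing in for the model: flag calls to the unbounded
# gets(), a classic overflow source, and propose a bounded fgets().
UNSAFE_GETS = re.compile(r"\bgets\s*\(\s*(\w+)\s*\)")

def scan(file_name: str, source: str) -> list[Finding]:
    """Scan one file and emit findings paired with candidate patches."""
    findings = []
    for n, line in enumerate(source.splitlines(), start=1):
        match = UNSAFE_GETS.search(line)
        if match:
            buf = match.group(1)
            fixed = UNSAFE_GETS.sub(f"fgets({buf}, sizeof({buf}), stdin)", line)
            findings.append(Finding(file_name, n, "unbounded gets()", fixed))
    return findings

def route_to_partner(finding: Finding) -> str:
    """Hand the candidate patch to the owning partner's update pipeline."""
    return f"{finding.file}:{finding.line} [{finding.issue}] -> {finding.patched_line.strip()}"

if __name__ == "__main__":
    demo = "int main(void) { char buf[64]; gets(buf); return 0; }"
    for finding in scan("demo.c", demo):
        print(route_to_partner(finding))
```

The division of labour in the real initiative mirrors the two functions here: the model side produces findings and candidate patches, while distribution stays with the partner that owns the software.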
Rising Tide Or Big Tech Moving In On Your Market?
The question that matters for cybersecurity startups and founders operating in this space is straightforward: does Project Glasswing make the market bigger, or does it compress the opportunity for independent players? The answer is probably both, depending on where you’re positioned.
The rising-tide argument has substance. Glasswing addresses a systemic problem: mass-scale latent vulnerabilities across shared infrastructure that no individual company can fix alone, and that create background noise for every security product on the market. Closing thousands of long-standing bugs in widely used operating systems and browsers raises the baseline security of the entire digital environment. That should reduce the volume of commodity exploits that smaller vendors have to help their customers defend against, freeing up capacity for higher-order work.
That said, the less comfortable reading deserves airtime too. The coalition includes most of the platforms that host the world’s software and supply its chips and networks. If AI-driven vulnerability discovery becomes embedded in the standard update process for major cloud providers, chip manufacturers and OS vendors, the market for independent tools that do similar things at the infrastructure layer becomes more crowded.
Large enterprise customers who already rely on AWS, Azure or Cisco for core infrastructure may not need a separate vendor offering AI-assisted scanning of the same stack.
Don’t Compete With The Coalition – Build Around It
For cybersecurity startups, the strategic response to something like Project Glasswing isn’t to compete directly on vulnerability discovery at infrastructure scale. That race is now being run by a coalition with $100 million in committed AI credits and the most advanced security model currently in existence.
The more durable position is in the areas that large platform players are structurally unlikely to prioritise: vertical-specific compliance workflows, incident response tooling, identity-centric defences, and the integration work that turns AI-generated findings into action within specific enterprise environments.
This pattern has played out before in security. Big Tech has consistently absorbed the most horizontally applicable layers of security, from antivirus to endpoint protection to cloud-native firewalls, while specialist startups continued to build valuable businesses in the more specific, workflow-adjacent, compliance-heavy work that doesn’t lend itself to platform-scale automation. AI-native security startups asking whether this changes their market are asking the wrong question. The better question is: which parts of the market does it not reach?
Project Glasswing is a serious initiative from a serious coalition, and it addresses a real and underappreciated problem in global software infrastructure. The window for patching decades-old vulnerabilities before offensive AI tools arrive is narrow, and the argument for moving fast is sound.
Whether it ultimately benefits the security industry or concentrates value upward depends less on the initiative itself and more on whether the partners treat it as a foundation to build on or a moat to defend.