Anthropic, one of the world’s most prominent AI safety companies, has a problem it probably didn’t expect to be dealing with this week.
A human error during a packaging release for the npm registry accidentally exposed Claude Code’s source code – approximately 1,900 TypeScript files totalling over 500,000 lines, including internal architecture and telemetry logic. By the time anyone caught it, the internet had already done what the internet does best.
Security researcher Chaofan Shou flagged the exposure publicly on 31 March 2026. Within hours, GitHub was full of forks and mirrors. Anthropic has since issued over 8,000 copyright takedown requests to remove copies and adaptations that spread across platforms before the error was caught.
The company described it as a ‘release packaging issue’ and confirmed that no model weights or customer data were compromised. A source-map file in version 2.1.88 had inadvertently linked to a 60MB ZIP archive on Anthropic’s Cloudflare storage, making full reconstruction of the proprietary code straightforward for anyone who knew where to look. This was, notably, the second such incident – a similar exposure happened in February 2025.
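Source maps are a well-known vector for this kind of exposure. When a bundler is configured to embed `sourcesContent`, the map file carries the original source files verbatim alongside the minified bundle – anyone who downloads it can reconstruct the code. As a hedged sketch (not Anthropic's actual exposure path, which involved a linked archive rather than inline sources), recovering embedded sources from a leaked map takes only a few lines:

```python
import json
import pathlib


def dump_sources(map_path: str, out_dir: str) -> list[pathlib.Path]:
    """Extract original source files embedded in a JavaScript source map.

    Maps built with `sourcesContent` carry each original file inline;
    this writes them back out to disk and returns the recovered paths.
    """
    smap = json.loads(pathlib.Path(map_path).read_text())
    out = pathlib.Path(out_dir)
    recovered = []
    sources = smap.get("sources", [])
    contents = smap.get("sourcesContent") or []
    for name, content in zip(sources, contents):
        if content is None:
            continue
        # Strip bundler URL prefixes and leading slashes before writing.
        dest = out / name.replace("webpack://", "").lstrip("/")
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        recovered.append(dest)
    return recovered
```

This is the general mechanism behind "source-map leaks": the map exists to aid debugging, but published alongside production code it is functionally equivalent to publishing the source.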
The leak itself is embarrassing but containable. The more interesting part is what followed: 8,000 takedown requests, an open-source community that moves faster than any legal process, and an unresolved question about whether copyright law is even the right tool for the situation.
You Can’t Un-Ring The Internet’s Bell
Copyright takedowns are a reasonable first response to leaked proprietary code. They’re also, in the context of 2026’s decentralised internet, somewhat like trying to stop a rainstorm with a bucket.
Anthropic’s 8,000 requests will remove many of the copies from GitHub and other major platforms. They won’t touch the mirrors, the forks that moved to alternative hosts, the copies that were downloaded before the requests went out, or the cached versions sitting in developer environments around the world.
This isn’t a criticism of Anthropic’s response; it was the correct one. But it illustrates a structural reality that any company building proprietary software in 2026 needs to understand: once code is public, the practical window for containing it is measured in minutes. The open-source community is fast and deeply familiar with the mechanics of replication. Thousands of copies within a few hours was inevitable.
There’s also a noteworthy legal wrinkle in all of this. Some commentators have questioned whether Claude Code, if it was substantially generated by AI, is even eligible for copyright protection – current US law requires human authorship for copyright to attach. If the code was written primarily by Claude rather than by Anthropic’s engineers, the takedown requests rest on shakier ground than they might appear.
Anthropic hasn’t addressed this directly, and it’s unlikely to become a live legal dispute, but it’s the kind of question that proprietary AI code is going to face more of as the technology matures.
A Lesson For Founders Building Proprietary AI Tools
The lesson here isn’t that Anthropic is careless – this was a human error in a packaging process, and those happen at every company, at every scale. The lesson is about what happens when they do, and how much of your protection strategy relies on things that can be undone by a single mis-configured release.
For founders building proprietary AI tools, the IP protection question is becoming more pressing and more complicated at the same time. Traditional code security measures – keeping code off public servers, controlling the build pipeline, limiting access – are necessary but not sufficient.
What Anthropic’s incident illustrates is that the release process itself is a vulnerability point. An otherwise correct release that accidentally includes a source map, a debug build that ships with symbols intact, a deployment that logs more than it should – any of these can expose code that was otherwise well-protected.
The practical mitigations aren’t exotic: code obfuscation for client-side tools, automated pre-release checks that scan for unintended file inclusions, strict separation between debug and production builds, access controls on storage buckets that hold build artefacts. None of this is new advice, but Anthropic’s incident is a useful reminder that even well-resourced teams with strong security cultures can miss something in the release pipeline if the checks aren’t automated and mandatory.
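Those pre-release checks need not be heavy tooling. As a minimal sketch – the blocklist patterns and function name here are illustrative, not any particular company's actual process – a CI gate that scans a staged release directory for files that should never ship might look like this:

```python
import pathlib

# File patterns that should never appear in a production package.
# Adjust to your own build artefacts: source maps, archives,
# environment files, and private keys are common offenders.
BLOCKLIST = ("*.map", "*.zip", "*.env", "*.pem", "*.key")


def find_unintended_files(package_dir: str) -> list[pathlib.Path]:
    """Scan a staged release directory for blocked file patterns.

    Returns the offending paths so a CI step can print them and
    fail the publish before anything reaches the registry.
    """
    root = pathlib.Path(package_dir)
    hits: list[pathlib.Path] = []
    for pattern in BLOCKLIST:
        hits.extend(p for p in root.rglob(pattern) if p.is_file())
    return sorted(hits)
```

The point is that the check is automated and mandatory: wired into the publish pipeline (for an npm package, a `prepublishOnly` script is one natural place), it fails the release rather than relying on a human to remember to look.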
The Irony Isn’t Lost On Anyone
It’s worth dwelling on the specific company involved here. Anthropic is built around the idea that AI should be safe, controlled and carefully deployed. Its entire public positioning rests on being the responsible actor in a race where responsibility is rare.
Having its own tool’s source code leak – twice now – and then watching the internet replicate it faster than 8,000 takedown requests could contain is, at minimum, a lesson in the gap between principle and implementation.
It’s also, to be fair, a very human kind of mistake to make: a packaging error, a mis-configured file path. The sort of thing that happens in every engineering team eventually, regardless of how good the culture is. The difference is that when it happens at a company of Anthropic’s profile, building tools of this sensitivity, the internet is watching and the forks start immediately.
For the rest of the AI tool sector, the takeaway is simple: your release process is part of your security posture. Treat it that way before you need 8,000 takedown requests to make the point for you.