Claude Code's Source Leaked
Hi there, little friend! Let's talk about a silly oopsie!
Imagine your favorite toy robot, Claude. Claude has a secret recipe book inside its head that tells it how to talk and play.
Well, guess what? Someone at Claude's house accidentally left the recipe book open for everyone to see! 😱 It wasn't a bad guy breaking in, just a little mistake, like leaving your lunchbox open.
Now, some smart people saw parts of Claude's secret recipe. They saw new ideas for Claude, like new games it could play.
But don't worry! Claude's brain is still safe, and it can still play with you. It's just like if someone peeked at your secret cookie recipe: they know how to make them, but your cookies are still yummy! It teaches us to be super careful with our secret things. 😊
🚨 Alright guys, this is a huge deal
🚪 Someone left the door open at Anthropic. And the AI world just walked in. Three days ago, security researcher Chaofan Shou (@Fried_Rice) noticed something unusual in the npm registry.
Tucked inside version 2.1.88 of @anthropic-ai/claude-code was a 57MB file called cli.js.map: a source map that acted as a complete decoder ring back to Anthropic's original TypeScript source code.
No sophisticated hack. No zero-day exploit. Just a single misconfigured build script.
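Anthropic hasn't shared the exact misconfiguration, but the failure mode is easy to sketch for any bundled TypeScript CLI: a flag like esbuild's `--sourcemap` writes a `.map` file next to the bundle, and a `files` whitelist that names the whole output directory ships everything in it. The package name, paths, and scripts below are hypothetical, not Anthropic's actual config:

```json
{
  "name": "@example/cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "scripts": {
    "build": "esbuild src/cli.ts --bundle --sourcemap --outfile=dist/cli.js",
    "prepublishOnly": "npm run build"
  },
  "files": ["dist"]
}
```

Here `--sourcemap` produces `dist/cli.js.map`, and `"files": ["dist"]` whitelists the whole directory, so `npm publish` ships the map along with the bundle. Narrowing the whitelist to the exact artifact (`"files": ["dist/cli.js"]`), or dropping `--sourcemap` from release builds, keeps the map out of the tarball.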
What developers found inside the 1,900 files:
🧠 Self-healing memory: a three-layer architecture built to fight context decay in long AI sessions
🔍 Unreleased model codenames: "Fennec" (Opus 4.7), "Sonnet 4.8," and the mysterious "Capybara" (Claude Mythos)
🤖 Built-in agent swarms: Claude can spawn parallel sub-agents autonomously. This isn't a feature. It's infrastructure.
👻 Ghost contributing: logic for contributing to open-source repos without explicit AI attribution
Anthropic's response: Human error in release packaging. No model weights compromised. No customer data exposed. The brain is still safe. But the skeleton is now public.
Here's the lesson no one wants to say out loud:
You can spend years and hundreds of millions building a proprietary AI system. And one forgotten line in a .npmignore can make it readable to anyone with a terminal.
Security isn't just about your models. It's about your build pipeline, your CI config, your npm publish script.
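One cheap guard against exactly this class of leak is a pre-publish check in CI that refuses to ship if any source maps are sitting in the package's output directory. A minimal sketch (the `dist` default and the script itself are illustrative, not any tool's actual API):

```python
import pathlib
import sys


def find_source_maps(package_dir: str) -> list[str]:
    """Return relative paths of any .map files under the directory to be published."""
    root = pathlib.Path(package_dir)
    return sorted(str(p.relative_to(root)) for p in root.rglob("*.map"))


if __name__ == "__main__":
    # e.g. run as `python check_maps.py dist` right before `npm publish`
    leaks = find_source_maps(sys.argv[1] if len(sys.argv) > 1 else "dist")
    if leaks:
        print("Refusing to publish; source maps found:", *leaks, sep="\n  ")
        sys.exit(1)
    print("No source maps in package contents.")
```

Wired into a `prepublishOnly` script, this fails the release the moment a stray `cli.js.map` appears, instead of three days after it hits the registry.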
The smallest door is still a door.
📎 Original discovery: Chaofan Shou's post on X.
🔥 Link to the open-source GitHub repo of Claude Code I just published: Yasas Banu - Claude Code Repo