The Claude Code Leak
Article URL: https://build.ms/2026/4/1/the-claude-code-leak/
Apr 1, 2026 | 6 min read
Much of the tech world is gushing about the accidental leak of Claude Code’s source code yesterday, but for different reasons than I find it interesting. I began jotting down my thoughts and came up with five distinct observations that have less to do with the leak itself and more to do with what it tells us.
1. The Code Is Garbage
Tired: Omg the Claude Code leak is a bunch of vibe coded garbage
Wired: Vibe coded garbage can get you to $2.5 billion annualized recurring revenue in under a year if the product market fit is there
— Joe Fabisevich (@mergesort.me) April 1, 2026 at 10:25 AM
Claude Code is a beloved product, to the point where developers, designers, product managers, marketers, and even CEOs are obsessed with it! And yet the code that powers Claude Code is kind of garbage. So of course the first thing people did was point and laugh. But step back for a second and think, what does that tell us about the actual value of code?
I argued in AI Agents Are Starting To Eat SaaS (Really) that the barrier to entry for creating a product is going down. That seems like a statement about toy apps like todo lists and habit trackers — but it applies to all software. The success of Claude Code and Cursor at the higher end of the market shows that even the people pickiest about their software (developers) will use your software regardless of how good the code is.
Many software developers have argued that working like a pack of hyenas and shipping hundreds of commits a day without reading your code is an unsustainable way to build valuable software, but this leak suggests that maybe this isn’t true — bad code can build well-regarded products.
2. It’s Not About The Code
It should serve as a warning to developers that the code doesn’t seem to matter, even in a product built for developers. This interview with Boris Cherny (the creator of Claude Code) was eye-opening for me. He describes how they build software at Anthropic and explains why the code matters — just not in the way developers typically assume. What matters is what the code does, not how it does it at the character-by-character level. Anthropic isn’t only building better systems to write better code, they’re building better observability systems to monitor the effects of code changes.
Imagine you’ve built a feature and now it’s time to QA it. You notice that an email textfield doesn’t respond well to the @ character, so you go back to the code, read it, and with enough debugging you figure out a fix. But that doesn’t scale as well as a system that yells at you to say “users can’t log in right now”, and then goes back to automatically change or revert the code that broke your auth flow. If you can build a good self-healing system and are willing to take on a little risk of things breaking as you go, you can move a whole lot faster — not just a bit.
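The self-healing loop described above is easy to sketch. Here is a minimal, hypothetical version in Python — the `health_check` probe and the revert step are illustrative placeholders, not anything from Anthropic's actual system:

```python
import subprocess
import time


def health_check() -> bool:
    """Hypothetical probe: returns False when e.g. the auth flow is broken.

    A real system would hit a /healthz endpoint or run a synthetic login."""
    return True


def revert_last_deploy() -> None:
    """Roll back the most recent commit (illustrative only)."""
    subprocess.run(["git", "revert", "--no-edit", "HEAD"], check=True)


def watch(interval_s: float = 30.0, max_checks: int = 0) -> int:
    """Poll the health check; revert automatically on any failure.

    max_checks=0 means run forever. Returns the number of reverts performed."""
    reverts = 0
    checks = 0
    while max_checks == 0 or checks < max_checks:
        checks += 1
        if not health_check():
            revert_last_deploy()
            reverts += 1
        time.sleep(interval_s)
    return reverts
```

The point isn't the loop itself but the inversion of responsibility: instead of a developer reading code to find the bug, the system observes a broken user-facing behavior and acts on it directly.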
3. It’s About Product Market Fit
As always, product market fit is the only thing users care about. If the product works, very few people care how it works under the hood. Heck, most people don’t even have an inkling of what’s actually happening behind the scenes.
There’s always a chance that Claude goes to shit (or just goes down every day because Anthropic’s servers are under-provisioned due to poor demand prediction). If that happens, OpenAI can jump in with their equally good (if not better) model and leverage the ridiculous amount of servers they have to serve the latent demand. Or maybe Google will eventually figure out how to ship a good coding product. There’s plenty of opportunity here, and ultimately we’re supply-constrained in meeting consumer demand.
4. Copyright Is Still A Touchy Subject
The whole copyright situation is very funny to me, and feels a bit like Anthropic is getting a taste of their own medicine. But I think there’s more to it than just what comes around goes around.
The first thing Anthropic did when their code leaked was send a bunch of DMCA notices on GitHub to have the repos taken down. True to their commitment to vibing, Anthropic ended up sending DMCA notices to forks of their own claude-code repo that hosts their skills, tutorials, and example code.
But then the clean-room implementations started showing up. People had taken Anthropic’s source code and rewritten Claude Code from scratch in other languages like Python and Rust. The whole AI industry — Anthropic included — has been arguing that using AI to rewrite something is not derivative work and doesn’t violate copyright, because that is how they themselves train their models.
Now this part really does feel like Anthropic’s getting a taste of their own medicine. But my higher-level reading is that this further entrenches the idea that code should be free, just with a more libertarian bent than the Free Software Foundation expected.
5. This All Doesn’t Matter
All of this is interesting, but I think Claude Code’s source code being leaked won’t matter as much as people seem to think it will. The real value in the AI ecosystem isn’t the model or the harness — it’s the integration of both working seamlessly together. Anthropic could open source Claude Code tomorrow and it wouldn’t change a thing, because what people are paying for is the great results, not the underlying code. Codex has been open source since launch, and Gemini is too. Neither has captured Claude Code’s mindshare even though many people prefer Codex — because what Anthropic is selling is a complete service.
Lately I’ve been using the pi coding agent a lot, and I love it. Pi is a coding agent with just four tools: read, write, edit, and bash. It works with every major model provider — including Claude — and it works brilliantly. The reason is that it’s optimized for working through problems the way a developer would solve them — by writing code. This is a different approach from Claude Code’s abundance of tools, which goes to show there’s a diversity of ways to create an integrated experience across model and harness.
So… Where Does That Leave Us?
I’ve had to question the value of code a lot over the last couple of years, and this leak continues to reinforce the notion that I’ve vastly overestimated it my entire career. What matters is integration. Whether that’s product market fit or how well a model and harness work together, users have always cared about having their problems solved — solved well, really.
You can build something great by making it simple or complex, open or proprietary, but it has to work seamlessly. A clean codebase only matters if it delivers better results for users. This leak changes the perception of Claude Code more than it changes anything tangible, but perception is reality. And the reality is that the code was never what made Claude Code valuable in the first place — everything happening around the code matters more.