Claude Code leak puts Anthropic on the other side of the copyright battle
Anthropic sent a copyright takedown after a segment of the code for Claude Code was leaked online. Anthropic has faced its own copyright issues.
By Lakshmi Varanasi
Some of Anthropic's secrets were exposed this week, giving competitors a window into how its popular AI agent, Claude Code, works.
Bloomberg via Getty Images
April 1, 2026
- Anthropic accidentally leaked the source code for its Claude Code AI agent this week.
- The leaked source code went viral, garnering millions of views and GitHub adaptations.
- Anthropic sent a copyright takedown request to control the spread.
When a segment of the source code for Anthropic's celebrated AI agent, Claude Code, ended up on GitHub on Tuesday, it was a ravenous free-for-all.
Engineers of all stripes soaked it up as quickly as they could, hoping to learn from it and perhaps use it to improve their own projects.
If relying on content made by others to improve intelligence sounds familiar, that's because it's exactly what the big AI companies have been doing for years as they compete to train their large language models — Anthropic included.
So it was not without a hint of irony that, to prevent engineers from accessing the leaked code, Anthropic swiftly issued a copyright takedown notice to the GitHub repository hosting it.
"We issued a DMCA takedown against one repository hosting leaked Claude Code source code and its forks," an Anthropic spokesperson said, referring to the Digital Millennium Copyright Act.
Anthropic, OpenAI, and Google have all faced lawsuits over their use of copyrighted material — including published books, articles, scientific journals, and other content found online — without explicit permission. In response, authors, artists, and publishers have used copyright law to seek accountability and, often, payment.
In September, a court ordered Anthropic to pay $1.5 billion in damages in a class-action lawsuit brought by authors and publishers — including lead plaintiffs Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson — over allegations it used pirated books and shadow libraries to train Claude.
Reddit sued Anthropic last June for scraping volumes of user-generated content to train its models without authorization or compensation to users.
And, last month, Universal Music Group, Concord, and ABKCO filed a suit against Anthropic for illegally downloading over 20,000 copyrighted songs, also for training its models.
Now the tables have turned, and Anthropic is leaning on copyright laws to protect its own creations. "We're rolling out measures to prevent this from happening again," a spokesperson for Anthropic said.
Fortunately for the company, the leak may not be as bad as some thought.
Paul Price, a cybersecurity specialist and founder of the ethical hacking firm Code Wall — which recently uncovered vulnerabilities in McKinsey's internal chatbot, Lilli — said the Anthropic leak didn't expose anything critical.
"It's more embarrassing than detrimental. Most of the real juicy stuff is in their internal source models and that wasn't leaked," he told Business Insider.
He said the company inadvertently exposed its "harness" — the software scaffolding that connects a large language model to the tools, files, and broader context it operates in.
"Claude Code is one of the best-designed agent harnesses out there, and now we can see how they approach the hard problems," Price added, noting that it could also prove useful intel for competitors.
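To make the "harness" idea concrete, here is a minimal, hypothetical sketch of the kind of loop such software runs: it shuttles messages between a model and the tools the model can invoke. The function names and message format below are illustrative assumptions, not Anthropic's actual implementation.

```python
# Hypothetical agent-harness loop. The harness doesn't contain the model's
# intelligence; it decides when to call tools on the model's behalf and
# feeds the results back until the model produces a final answer.

def run_harness(model, tools, user_prompt, max_turns=5):
    """Drive the model until it returns a final answer or turns run out."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_turns):
        reply = model(messages)            # model decides: answer, or ask for a tool
        if reply.get("tool") is None:
            return reply["content"]        # final answer ends the loop
        result = tools[reply["tool"]](reply["args"])   # harness executes the tool
        messages.append({"role": "tool", "name": reply["tool"], "content": result})
    return "max turns exceeded"

# Toy stand-ins so the sketch runs end to end (no real model involved).
def fake_model(messages):
    # First turn: request a file read; once a tool result appears, answer.
    if any(m["role"] == "tool" for m in messages):
        return {"tool": None, "content": "The file says: hello"}
    return {"tool": "read_file", "args": "notes.txt"}

tools = {"read_file": lambda path: "hello"}
print(run_harness(fake_model, tools, "What does notes.txt say?"))
```

The hard problems Price alludes to live in loops like this one: deciding which context to include in `messages`, when to stop, and how to keep tool output from overwhelming the model.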
The leak also highlighted a paradox of the AI hype cycle: the same tools that make it faster than ever to build and ship products also make it easier for information — sensitive or not — to leak, replicate, and spread instantly.