Anthropic accidentally published Claude Code's source code. Here's the part nobody's talking about.
Originally published on linear.gg
Earlier today, security researcher Chaofan Shou noticed that version 2.1.88 of the @anthropic-ai/claude-code npm package shipped with a source map file. Source maps are JSON files that map bundled production code back to the original source, and when they embed the sourcesContent field, as this one did, they carry the literal, raw TypeScript. Every file, every comment, every internal constant. Anthropic's entire 512,000-line Claude Code codebase was sitting in the npm registry for anyone to read.
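To make that concrete: a source map's sources and sourcesContent fields (standard source map v3) list the original file paths and their full contents. A minimal sketch of recovering them, assuming a hypothetical cli.js.map filename:

    // Dump original sources from a source map's sourcesContent field.
    // The input filename and output directory are hypothetical.
    import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
    import { dirname, join } from "node:path";

    interface SourceMapV3 {
      version: number;
      sources: string[];          // original file paths
      sourcesContent?: string[];  // raw original source, inlined per file
      mappings: string;
    }

    const map: SourceMapV3 = JSON.parse(readFileSync("cli.js.map", "utf8"));

    map.sources.forEach((src, i) => {
      const content = map.sourcesContent?.[i];
      if (content == null) return;
      const out = join("recovered", src.replace(/^(\.\.\/)+/, ""));
      mkdirSync(dirname(out), { recursive: true });
      writeFileSync(out, content);
    });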
The leak itself is a build configuration oversight. Bun generates source maps by default unless you turn them off. Someone didn't turn them off. It happens.
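For reference, Bun's build API takes an explicit sourcemap option, and setting it deliberately, whatever the value, is the fix. A sketch with an assumed entrypoint:

    // Explicitly choose a source map mode instead of relying on defaults.
    await Bun.build({
      entrypoints: ["./src/cli.ts"], // assumed entrypoint
      outdir: "./dist",
      sourcemap: "none", // "none" | "linked" | "inline" | "external"
    });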
What's worth writing about isn't the leak. It's what the source reveals about how Claude Code's safety controls actually work, who controls them, and what that means for developers who depend on them.
The permission architecture
Claude Code's permission system is genuinely sophisticated. The source shows a multi-layered evaluation pipeline: a built-in safe-tool allowlist, user-configurable permission rules, in-project file operation defaults, and a transcript classifier that gates everything else. Anthropic published a detailed engineering post about the classifier on March 25th. It runs on Sonnet 4.6 in two stages: a fast single-token filter, then chain-of-thought reasoning only when the first stage flags something. They report a 0.4% false-positive rate.
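A rough sketch of a pipeline with that shape; every name here is invented for illustration, and none of it is the leaked source:

    // Layered permission evaluation: each layer either decides or falls
    // through; the model-based classifier is the last resort.
    type Verdict = "allow" | "deny" | "ask";

    const SAFE_TOOLS = new Set(["ls", "cat", "grep"]); // built-in allowlist

    const userDeny = [/^rm\s+-rf\s+\//];   // user-configured deny rules
    const userAllow = [/^git\s+status\b/]; // user-configured allow rules

    function userRules(cmd: string): Verdict | null {
      if (userDeny.some((r) => r.test(cmd))) return "deny";
      if (userAllow.some((r) => r.test(cmd))) return "allow";
      return null; // no opinion, fall through
    }

    // Stand-in for the classifier: in the real system this is a model
    // call (fast single-token filter first, chain-of-thought only when
    // the first stage flags something).
    async function classify(cmd: string): Promise<Verdict> {
      return "ask";
    }

    async function decide(cmd: string): Promise<Verdict> {
      if (SAFE_TOOLS.has(cmd.split(" ")[0])) return "allow"; // layer 1
      const ruled = userRules(cmd);                           // layer 2
      if (ruled !== null) return ruled;
      return classify(cmd);                                   // final gate
    }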
This is real engineering. The threat model is thoughtful. They document specific failure modes from internal incident logs: agents deleting remote git branches from vague instructions, uploading auth tokens to compute clusters, attempting production database migrations. The classifier is tuned to catch overeager behavior and honest mistakes, not just obvious prompt injection.
None of that is the interesting finding.
The remote control layer
The source reveals that Claude Code polls /api/claude_code/settings on an hourly cadence for "managed settings." When changes arrive that Anthropic considers dangerous, the client shows a blocking dialog. Reject, and the app exits. There is no "keep running with old settings" option.
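In sketch form: the endpoint path is from the source; everything else (host, payload shape, dialog) is an assumption for illustration.

    // Hourly managed-settings poll; reject a dangerous change and the
    // process exits. Field names and host are illustrative.
    const POLL_MS = 60 * 60 * 1000;

    interface ManagedSettings {
      dangerousChange?: boolean; // assumed shape
      [key: string]: unknown;
    }

    async function showBlockingDialog(s: ManagedSettings): Promise<boolean> {
      return false; // stand-in for the real blocking dialog
    }

    async function poll(): Promise<void> {
      const res = await fetch("https://api.anthropic.com/api/claude_code/settings"); // host assumed
      const next = (await res.json()) as ManagedSettings;
      if (next.dangerousChange) {
        const accepted = await showBlockingDialog(next);
        if (!accepted) process.exit(1); // no "keep running with old settings" path
      }
      // otherwise: apply the new settings and keep running
    }

    setInterval(poll, POLL_MS);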
Beyond managed settings, the source contains a full GrowthBook SDK integration. GrowthBook is an open-source feature flagging and A/B testing platform. The flags in Claude Code use a tengu_ prefix (Tengu being the internal codename) and are evaluated at runtime by querying Anthropic's feature-flagging service. They can enable or disable features server-side based on your user account, your organization, or your A/B test cohort.
Community analysis has cataloged over 25 GrowthBook runtime flags. Some notable ones:

- tengu_transcript_classifier — controls whether the auto-mode classifier is active
- tengu_auto_mode_config — determines the auto-mode configuration (enabled, opt-in, or disabled)
- tengu_max_version_config — version killswitch
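For context, this is how GrowthBook's public JavaScript SDK is typically wired up. This is general SDK usage, not Anthropic's actual code: the host, key, and attributes below are placeholders, and the flag names come from the community analysis.

    import { GrowthBook } from "@growthbook/growthbook";

    const gb = new GrowthBook({
      apiHost: "https://example-flags.internal", // placeholder host
      clientKey: "sdk-xxxx",                     // placeholder key
      attributes: { id: "user_123", organization: "org_456" },
    });

    await gb.init(); // pull current flag state from the server

    if (gb.isOn("tengu_transcript_classifier")) {
      // route commands through the model-based gate
    }
    const autoMode = gb.getFeatureValue("tengu_auto_mode_config", "disabled");

The key property is that the verdicts depend on attributes the server evaluates, so the same binary can behave differently per user, per organization, or per cohort.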
Six or more killswitches, all remotely operable. As one community analysis put it: "GrowthBook flags can change any user's behavior without consent."
That phrasing is a bit loaded. Let me reframe it more precisely.
What this actually means
Anthropic can change how Claude Code classifies commands as dangerous. They can change which safety features are active. They can do this per-user or per-organization, without shipping a new version, without any action from the developer, and without notification beyond whatever the managed-settings dialog surfaces.
This is probably not malicious. GrowthBook is a standard tool for rolling out features safely. If Anthropic discovers a false-negative pattern in their classifier, tightening behavior across all users immediately is genuinely valuable. The design makes sense from their perspective — they're operating a system where the failure mode is an AI agent doing something destructive on a developer's machine.
But it changes the trust model in a way that matters.
When you configure Claude Code's permission rules locally, you're setting preferences that feed into a classification pipeline whose behavior can shift underneath you. The classifier that ultimately decides whether a command runs is a model call, and its parameters are controlled by flags that Anthropic sets remotely.
This is distinct from a locally enforced policy. A local policy says "block rm -rf /" and that rule holds regardless of what any remote server thinks. A classifier-based system's definition of "dangerous" is a function of a prompt template, a model, and configuration that lives on someone else's infrastructure.
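The deterministic version is small enough to show in full. A minimal sketch, not from the source:

    // A local, auditable rule: no remote state can change its verdict.
    const LOCAL_DENY = [/^rm\s+-rf\s+\/(\s|$)/];

    function locallyDenied(cmd: string): boolean {
      return LOCAL_DENY.some((rule) => rule.test(cmd.trim()));
    }

    console.log(locallyDenied("rm -rf /")); // true, regardless of any server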
The defense-in-depth question
Most developers running Claude Code in production aren't thinking about this distinction. The permission system feels local. But the source shows that enforcement is partially remote, partially classifier-based, and partially under Anthropic's real-time control.
This isn't an argument that Claude Code is insecure. The classifier catches real threats. The killswitches exist for legitimate operational reasons. Anthropic is not the adversary in most developers' threat models.
But if you're operating in an environment where you need to explain exactly what controls exist between an AI agent and a destructive action, "a classifier whose behavior is remotely configurable by the vendor" is a different answer than "a deterministic policy I wrote and can audit."
This is why defense in depth matters regardless of which agent you run. The agent's built-in controls are one layer. An external enforcement layer is a different layer entirely — it handles the cases where you want a hard boundary that doesn't depend on model judgment, and holds regardless of what any remote configuration says.
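One shape that layer can take, entirely illustrative, is a wrapper that sits between the agent and the shell and applies your own hard rules before anything executes:

    import { spawnSync } from "node:child_process";

    const HARD_DENY = [/^rm\s+-rf\s+\//]; // rules you wrote and can audit

    export function guardedExec(cmd: string, args: string[] = []): void {
      const full = [cmd, ...args].join(" ");
      if (HARD_DENY.some((r) => r.test(full))) {
        throw new Error(`blocked by local policy: ${full}`);
      }
      spawnSync(cmd, args, { stdio: "inherit" }); // runs only if policy passes
    }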
What to sit with
Every AI coding agent you use has a trust boundary between "what you configured" and "what actually enforces your intent." Before today, that boundary in Claude Code was opaque. Now it's readable.
The source shows a well-engineered system with a specific trust model: Anthropic retains runtime control over safety-critical behavior, and your local configuration is an input to their system rather than the final word.
Whether that's acceptable depends on your threat model. For most individual developers, it probably is. For teams operating agents against production infrastructure, it's worth knowing that the controls you're relying on can be silently reconfigured. Not because anyone will, but because understanding what layer you're actually trusting is how you build defense that holds when assumptions change.