Claude Code Source Leaked: 5 Hidden Features Found in 510K Lines of Code
What Happened
Anthropic shipped Claude Code v2.1.88 to npm with a 60MB source map still attached. That single file contained 1,906 source files and 510,000 lines of fully readable TypeScript. No minification. No obfuscation. Just the raw codebase, sitting in a public registry for anyone to download.
Within hours, backup repositories appeared on GitHub. One of them — instructkr/claude-code — racked up 20,000+ stars almost instantly. Anthropic pulled the package, but the code was already mirrored everywhere. The cat was out of the bag, and it had opinions about AI safety.
5 Hidden Features Found in the Source
1. Buddy Pet System
Deep in buddy/types.ts, there is a complete virtual pet system. Eighteen species, five rarity tiers, shiny variants, hats, custom eyes, and stat blocks. This was clearly planned as an April Fools easter egg.
The species list:
```typescript
const SPECIES = [
  'duck', 'goose', 'blob', 'cat', 'dragon', 'octopus',
  'owl', 'penguin', 'turtle', 'snail', 'ghost', 'axolotl',
  'capybara', 'cactus', 'robot', 'rabbit', 'mushroom', 'chonk'
];
```
Rarity weights:
```typescript
const RARITY_WEIGHTS = {
  common: 60,    // 60%
  uncommon: 25,  // 25%
  rare: 10,      // 10%
  epic: 4,       // 4%
  legendary: 1   // 1%
};
```
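The weights sum to 100, so sampling is a standard cumulative-weight roll. A minimal sketch of how such a table could be consumed (the `pickRarity` helper is my own illustration, not code from the leak):

```typescript
const RARITY_WEIGHTS = {
  common: 60,
  uncommon: 25,
  rare: 10,
  epic: 4,
  legendary: 1,
} as const;

type Rarity = keyof typeof RARITY_WEIGHTS;

// Roll a number in [0, 100) and walk the table until the cumulative
// weight passes it: rolls 0..59 are common, 60..84 uncommon, and so on.
function pickRarity(roll: number): Rarity {
  let cumulative = 0;
  for (const [rarity, weight] of Object.entries(RARITY_WEIGHTS)) {
    cumulative += weight;
    if (roll < cumulative) return rarity as Rarity;
  }
  return 'common'; // unreachable for rolls in [0, 100)
}

// Usage: pickRarity(Math.floor(Math.random() * 100))
```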
Each buddy gets a hat, eyes, and stats:
```typescript
type Hat = 'none' | 'crown' | 'tophat' | 'propeller' | 'halo' | 'wizard' | 'beanie' | 'tinyduck';
type Eye = '·' | '✦' | '×' | '◉' | '@' | '°';
type Stat = 'DEBUGGING' | 'PATIENCE' | 'CHAOS' | 'WISDOM' | 'SNARK';
```
Your buddy is generated deterministically from hash(userId). Every account gets a unique pet. There is also a shiny boolean variant — presumably the rare version you brag about in team Slack.
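Deterministic generation from a user ID is a one-liner per attribute: hash the ID to an integer, then index each table with it. A hypothetical sketch of the idea (the hash function, field choices, and shiny odds are my own illustration; the article only confirms hash(userId) is the seed):

```typescript
// Subset of the 18 species, for brevity.
const SPECIES = ['duck', 'goose', 'blob', 'cat', 'dragon', 'octopus', 'owl', 'chonk'];
const HATS = ['none', 'crown', 'tophat', 'propeller', 'halo', 'wizard', 'beanie', 'tinyduck'];

// Simple FNV-1a string hash, a stand-in for whatever hash the real code uses.
function fnv1a(input: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    h ^= input.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

// The same userId always yields the same buddy; shiny is a rare slice of the hash.
function generateBuddy(userId: string) {
  const h = fnv1a(userId);
  return {
    species: SPECIES[h % SPECIES.length],
    hat: HATS[(h >>> 5) % HATS.length],
    shiny: h % 128 === 0, // roughly 0.8% of accounts; purely illustrative odds
  };
}
```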
This was 100% an April 1st drop. The leak killed the surprise.
2. Undercover Mode
This one raised eyebrows. In utils/undercover.ts, there is a mode that makes Claude pretend to be a human developer.
When active, undercover mode:
- Removes all Anthropic traces from commits and pull requests
- Strips Co-Authored-By headers
- Removes any mention of "Claude Code" or model names
- Writes commit messages "as a human developer would"
From the source:
```typescript
// UNDERCOVER MODE — CRITICAL
// You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository.
// Your commit messages, PR titles, and PR bodies MUST NOT contain
// ANY Anthropic-internal information. Do not blow your cover.
//
// NEVER include in commit messages or PR descriptions:
// - Internal model codenames (animal names like Capybara, Tengu, etc.)
// - Unreleased model version numbers (e.g., opus-4-7, sonnet-4-8)
// - The phrase "Claude Code" or any mention that you are an AI
// - Co-Authored-By lines or any other attribution
//
// Write commit messages as a human developer would.
```
It auto-activates for public and open-source repositories. You can force it on:
```
CLAUDE_CODE_UNDERCOVER=1
```
There is no force-OFF override. The safety default is undercover-on for public repos.
The kicker: this feature is gated to internal Anthropic employees only (USER_TYPE === 'ant'). Anthropic's own developers use Claude Code on open-source projects, and this mode ensures their contributions look human-written. Draw your own conclusions about what that means for AI attribution norms.
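Pieced together from the described behavior, the activation check reduces to: internal user first, then env-var force-on or public repo, with no force-off path. A hypothetical reconstruction (only `USER_TYPE === 'ant'` and `CLAUDE_CODE_UNDERCOVER` appear in the article; the context shape and logic are my guesses):

```typescript
interface RepoContext {
  isPublic: boolean; // hypothetical field: repo is public/open-source
  userType: string;  // 'ant' marks internal Anthropic employees
}

// Undercover mode is internal-only, defaults ON for public repos,
// and the env var can only force it ON; there is no force-OFF.
function undercoverActive(
  ctx: RepoContext,
  env: Record<string, string | undefined> = process.env
): boolean {
  if (ctx.userType !== 'ant') return false;
  if (env.CLAUDE_CODE_UNDERCOVER === '1') return true;
  return ctx.isPublic;
}
```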
3. Kairos — Permanent Memory
Behind the feature flag KAIROS in main.tsx and the memdir/ directory, there is a persistent memory system that survives across sessions.
This is not the .claude/ project memory you already know. Kairos is a four-stage memory consolidation pipeline:
1. Orient — scan context, identify what matters
2. Collect — gather facts, decisions, patterns from the session
3. Consolidate — merge new memories with existing long-term store
4. Prune — discard stale or low-value memories
The system runs automatically when you are not actively using Claude Code. It tracks memory age, performs periodic scans, and supports team memory paths — meaning shared memory across a team's Claude Code instances.
This turns Claude Code from a stateless tool into a persistent assistant that learns your codebase, your patterns, and your preferences over time. It is the most architecturally significant hidden feature in the leak.
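The four stages compose into a straightforward consolidate-then-prune pipeline. A minimal sketch of that shape (stage behavior follows the list above; the Memory type, scoring, and thresholds are my own illustration, not the leaked code):

```typescript
interface Memory {
  fact: string;
  lastSeen: number; // timestamp, used for age-based pruning
  value: number;    // importance score
}

const MAX_AGE_MS = 30 * 24 * 60 * 60 * 1000; // 30 days; an arbitrary example cutoff

// Orient/Collect produce candidate memories; Consolidate merges duplicates
// into the long-term store; Prune drops entries that are stale AND low-value.
function consolidate(existing: Memory[], collected: Memory[], now: number): Memory[] {
  const byFact = new Map<string, Memory>();
  for (const m of [...existing, ...collected]) {
    const prev = byFact.get(m.fact);
    // A repeated fact keeps its freshest timestamp and accumulates importance.
    byFact.set(
      m.fact,
      prev
        ? { fact: m.fact, lastSeen: Math.max(prev.lastSeen, m.lastSeen), value: prev.value + m.value }
        : m
    );
  }
  return [...byFact.values()].filter(m => now - m.lastSeen < MAX_AGE_MS || m.value > 5);
}
```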
4. Ultraplan — Deep Task Planning
The feature flag ULTRAPLAN in commands.ts enables a deep planning mode that can run for up to 30 minutes on a single task. It uses remote agent execution — meaning the heavy thinking happens server-side, not in your terminal.
Ultraplan is listed under INTERNAL_ONLY_COMMANDS. Anthropic's engineers apparently have access to a planning mode that goes far beyond what ships to paying customers. This is the kind of feature that separates "AI autocomplete" from "AI architect."
5. Multi-Agent, Voice, and Daemon Modes
The source reveals several execution modes that are not publicly documented:
- Coordinator mode — orchestrates multiple Claude instances running in parallel, each working on a subtask
- Voice mode (VOICE_MODE flag) — voice input/output for Claude Code
- Bridge mode (BRIDGE_MODE) — remote control of a Claude Code instance from another process
- Daemon mode (DAEMON) — runs Claude Code as a background process
- UDS inbox (UDS_INBOX) — Unix domain socket for inter-process communication between Claude instances
Together, these paint a picture of Claude Code evolving from a single-user CLI into a multi-agent orchestration platform. The daemon + UDS architecture means Claude Code instances can message each other, coordinate work, and run without a terminal attached.
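The UDS inbox idea maps onto a standard Node pattern: each instance listens on a socket file, and peers connect to drop messages. A minimal sketch under that assumption (the socket path, message shape, and function names are all hypothetical, not from the leak):

```typescript
import * as net from 'node:net';
import * as os from 'node:os';
import * as path from 'node:path';

// Hypothetical inbox message shape.
interface InboxMessage { type: string; payload: string; }

// Listen on a Unix domain socket; invoke onMessage for each JSON blob received.
function startInbox(socketPath: string, onMessage: (m: InboxMessage) => void): net.Server {
  const server = net.createServer(socket => {
    socket.on('data', chunk => onMessage(JSON.parse(chunk.toString())));
  });
  server.listen(socketPath);
  return server;
}

// Connect to a peer instance's inbox socket and drop one message.
function sendToInbox(socketPath: string, msg: InboxMessage): void {
  const client = net.connect(socketPath, () => {
    client.end(JSON.stringify(msg));
  });
}

// Example: two "instances" sharing a socket file under the OS temp dir.
const inboxPath = path.join(os.tmpdir(), 'claude-inbox.sock'); // hypothetical path
```

Usage would look like `startInbox(inboxPath, m => handle(m))` in the daemon and `sendToInbox(inboxPath, { type: 'task', payload: 'run tests' })` in a coordinator, which is exactly the kind of plumbing a multi-agent daemon architecture needs.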
The Core Architecture
The entire Claude Code engine lives in queryLoop() at query.ts line 241. At line 307, there is a while(true) loop that drives everything:
1. callModel() sends the conversation to the LLM
2. The LLM returns text and tool_use JSON blocks
3. The program parses each tool_use, checks permissions, executes the tool
4. Results feed back into the conversation
5. Loop continues until the LLM stops requesting tools
This is the "LLM talks, program walks" pattern I wrote about previously. The LLM decides what to do. The program decides whether to allow it, then does it. Seeing it confirmed in 510K lines of production code is satisfying.
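The pattern is easy to sketch. This is not the leaked queryLoop() itself, just a minimal reconstruction of the loop described above, with hypothetical tool and permission interfaces:

```typescript
interface ToolUse { name: string; input: unknown; }
interface ModelReply { text: string; toolUses: ToolUse[]; }

type CallModel = (conversation: string[]) => ModelReply;
type Tools = Record<string, (input: unknown) => string>;
type PermissionCheck = (t: ToolUse) => boolean;

// The "LLM talks, program walks" loop: the model proposes tool calls,
// the program gates and executes them, and results feed back as context.
function queryLoop(
  callModel: CallModel,
  tools: Tools,
  allowed: PermissionCheck,
  prompt: string
): string {
  const conversation = [prompt];
  while (true) {
    const reply = callModel(conversation);
    conversation.push(reply.text);
    if (reply.toolUses.length === 0) return reply.text; // model is done
    for (const use of reply.toolUses) {
      const result = allowed(use)
        ? tools[use.name](use.input)
        : `permission denied: ${use.name}`;
      conversation.push(result); // tool output becomes model context
    }
  }
}
```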
Security Architecture
Claude Code's permission system is the most carefully engineered part of the codebase. Every tool call passes through six layers, implemented in useCanUseTool.tsx:
1. Config allowlist — checks project and user configuration
2. Auto-mode classifier — determines if the tool is safe for autonomous execution
3. Coordinator gate — validates against the orchestration layer
4. Swarm worker gate — checks permissions for sub-agent execution
5. Bash classifier — analyzes shell commands for safety
6. Interactive user prompt — final human confirmation
External commands run in a sandbox. This is defense-in-depth done right. The irony is that the company that built this careful permission model forgot to strip a source map from their npm package.
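One plausible shape for such a layered check is a short-circuiting chain: any layer can allow or deny outright, and anything still undecided falls through to the interactive prompt. A sketch under that assumption (layer names follow the list above; the Decision type and example layers are my own):

```typescript
type Decision = 'allow' | 'deny' | 'ask';
type Layer = (tool: string, input: string) => Decision;

// Walk the layers in order; the first definite decision wins,
// and an undecided chain ends at the interactive user prompt.
function checkPermission(layers: Layer[], tool: string, input: string): Decision {
  for (const layer of layers) {
    const d = layer(tool, input);
    if (d !== 'ask') return d;
  }
  return 'ask'; // layer 6: final human confirmation
}

// Illustrative layers, loosely matching the list above.
const configAllowlist: Layer = tool => (tool === 'Read' ? 'allow' : 'ask');
const bashClassifier: Layer = (tool, input) =>
  tool === 'Bash' && input.includes('rm -rf') ? 'deny' : 'ask';
```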
What This Means
The moat for AI coding tools is not the CLI. It is the model. Anyone can read this source code and understand the architecture, but nobody can replicate Sonnet or Opus. The queryLoop() pattern is elegant but simple — the magic is in what callModel() returns. That said, the product roadmap is now public. Competitors know about Kairos, Ultraplan, multi-agent coordination, and voice mode. That is real strategic damage.
For a company that positions itself as the responsible AI lab — the one that takes safety seriously — shipping a fully readable source map to a public registry is a notable operational security failure. The six-layer permission system in the code is impressive. The process that let a 60MB source map slip through CI/CD is not.
Watch the Deep Dive
I broke down the full AI agent architecture — the same query loop that Claude Code uses — in a 15-minute video: Watch on YouTube
For background on the "LLM talks, program walks" pattern: Read: The AI Stack Explained — LLM Talks, Program Walks
Coming next: a deep dive into Claude Code's 6-layer permission system and the Kairos memory architecture — with full code walkthroughs. Subscribe to catch it.
Originally published on DEV Community: https://dev.to/harrison_guo_e01b4c8793a0/claude-code-source-leaked-5-hidden-features-found-in-510k-lines-of-code-3mbn
