I Moved a Folder. Claude Code Told Me Not to Copy My Own Secrets.
Committing your .env is a serious mistake. API keys stolen, credentials exposed, the whole disaster. The best practice is clear: put your .env in .gitignore and you're covered.
Except .gitignore protects your repo. Not your machine.
A malicious CLI package doesn't read your .gitignore. It reads your disk. It grabs ~/.aws/credentials, your shell history, your SSH keys, your crypto wallets. And now it reads .claude/settings.local.json too (a file you probably never opened).
I discovered this while moving my entire dev folder off iCloud after it corrupted my Node.js cache. Claude Code told me not to copy my own secrets. I audited 50 projects. No secret lives in cleartext on my machine anymore.
TLDR: .gitignore is a lock on the front door. Malware comes through the window. Infisical behind a mesh VPN injects secrets at runtime. Nothing on disk. Here's the setup.
The .gitignore Theater
Everyone laughs at the dev who commits a .env file. The consensus is settled, five words long, repeated in every bootcamp and every Twitter thread: put your .env in .gitignore.
And sure. .gitignore is crucial. First line of defense. Nobody's saying otherwise.
The problem is treating it as the last one.
.gitignore tells Git what to skip. That's literally all it does. It has zero authority over anything else running on your machine. A compromised npm package, a poisoned pip dependency, a rogue VS Code extension: none of them check your .gitignore before reading your files. They don't need to. They have disk access. You gave it to them the moment you ran npm install.
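To see how little .gitignore buys you outside of Git, here's a self-contained sketch. It builds a throwaway project in a temp directory (with a fake key, nothing real) and shows that an ignored .env is found instantly by any process with disk access:

```shell
# Demonstration in a throwaway directory; no real secrets are touched.
WORKDIR=$(mktemp -d)
mkdir -p "$WORKDIR/project/.git"
echo 'API_KEY=not-actually-secret' > "$WORKDIR/project/.env"
echo '.env' > "$WORKDIR/project/.gitignore"

# The .gitignore is irrelevant here: a recursive search finds the file anyway,
# exactly the way a malicious postinstall script would.
FOUND=$(find "$WORKDIR" -name '.env' -type f)
echo "$FOUND"
```

Git respects the ignore rule; `find` never consults it. Neither does anything else on your machine.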
Andrej Karpathy, former head of AI at Tesla and one of the founders of OpenAI, discovered this the hard way with LiteLLM. He flagged a typosquatted package that had full disk read capabilities: shell history, cloud credentials, SSH keys, Docker configs, Kubernetes tokens. Everything sitting in well-known paths, in cleartext, on developer machines worldwide.
A threat group calling themselves TeamPCP claimed 500,000 stolen credentials across multiple campaigns (self-reported numbers, not independently verified, but the exfiltration technique is documented and reproducible).
Everybody showed the fire. Nobody showed the extinguisher.
What Your Machine Actually Exposes
Your .gitignore covers one directory. Malware covers your entire home folder.
The documented exfiltration from supply chain attacks like LiteLLM reads like a shopping list: ~/.aws/credentials, ~/.ssh/ contents, every .env file across every project (recursive search, takes milliseconds), shell history with every token you pasted because you were in a hurry and thought "I'll rotate it later" (you didn't), Docker registry credentials, Kubernetes tokens, crypto wallets.
And the new entry nobody monitors: .claude/settings.local.json.
That last one is how I found out. I was migrating a project to ~/dev after the iCloud mess forced me to move everything. Claude Code flagged the copy operation. Paraphrasing: "Don't copy your settings file, there are Supabase secrets in plaintext in your auto-approved commands."
The tool that created the leak warned me about the leak.
I had written a whole article about how CLAUDE.md is the new .env and how most developers treat it like a README. While I was writing that, the .claude/ folder was quietly leaking the actual secrets. Not the CLAUDE.md file itself. The settings file next to it, where Claude Code stores every command you auto-approved, verbatim, including the ones with your database credentials.
I found this on a Saturday morning while my kids were arguing about who gets the last pancake. My screen was showing seven Supabase keys in cleartext, and I was sitting there thinking "how long has this been like this." Not a great pancake moment.
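If you want to run the same check on your own machine, a quick grep-based audit along these lines works. The demo below builds a sample settings file so it's self-contained; the JSON shape mirrors what Claude Code wrote on my setup, but treat the paths and patterns as assumptions and adapt them:

```shell
# Self-contained demo: build a sample settings file, then scan it.
# On a real machine, point the grep at .claude/settings.local.json instead.
SETTINGS=$(mktemp)
cat > "$SETTINGS" <<'EOF'
{
  "permissions": {
    "allow": [
      "Bash(supabase login --token sbp_0000000000000000)"
    ]
  }
}
EOF

# Flag anything in the auto-approved commands that looks like a pasted credential.
MATCHES=$(grep -Ein '(token|secret|password|api[_-]?key|sbp_|sk-)' "$SETTINGS" || true)
echo "$MATCHES"
```

Any hit means a secret went through an approved command verbatim and is now sitting in cleartext, waiting for the next filesystem scan.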
The Week Everything Cascaded
This wasn't planned. I moved a folder, and each fix peeled back the next layer.
iCloud was corrupting my Node.js cache. Mysterious crashes on a machine that should have handled anything I threw at it. The fix was simple: move my projects out of ~/Documents/dev to ~/dev, outside iCloud's sync radius. Problem solved.
Except moving 50 projects means looking at 50 projects.
That's when I started opening folders I hadn't touched in months. Old .env files everywhere. Hardcoded tokens in scripts I forgot existed. And then the dependency audit, which led me straight to a supply chain attack vector hiding in my pip dependencies that I had been blindly auto-approving for eight months. My AI agent was running pip install on whatever it wanted, no questions asked.
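Claude Code's permission rules can close that specific gap. A hedged sketch of a project-level settings file that stops auto-approved package installs (the exact patterns here are illustrative, not a copy of my config):

```json
{
  "permissions": {
    "deny": [
      "Bash(pip install:*)",
      "Bash(npm install:*)"
    ]
  }
}
```

With deny rules like these, the agent has to ask before pulling anything new, and the prompt is your moment to actually read the package name.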
Locking down the dependencies meant locking down the network. I set up a self-hosted mesh VPN that makes my entire infrastructure invisible from the public internet. No open ports, no exposed services, no attack surface for scanners to find.
And the mesh VPN exposed the last piece. The secrets themselves. Still sitting in cleartext on disk, still readable by anything with file access. Four problems stacked on top of each other, each one invisible until you fixed the one above it.
Like removing wallpaper in an old house and discovering the wall behind it is held together by optimism.
Infisical Behind the Mesh
So what replaces cleartext secrets on disk?
Infisical. Open source secrets manager, self-hosted, running inside the mesh VPN behind Traefik. Accessible only at infisical.mesh:8080. Not exposed to the internet. If you don't have a Netbird client authenticated and connected, that address simply doesn't exist.
Docker Compose, Postgres, Redis, sitting on the same VPS that runs the rest of my stuff. The secrets manager itself is invisible from outside the mesh. This matters. A secrets vault exposed on a public IP with a login page is just a fancier target. Mine has no public IP. No login page reachable from the outside. No DNS entry pointing to it.
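For orientation, a minimal sketch of that shape of deployment. This is not my exact file: image tags, credentials, and the mesh interface IP are placeholders, and my real setup fronts this with Traefik.

```yaml
# Sketch only: pin real image versions and set strong credentials before use.
services:
  infisical:
    image: infisical/infisical:latest   # pin a specific tag in practice
    ports:
      - "100.92.0.5:8080:8080"          # bind to the mesh interface IP, never 0.0.0.0
    environment:
      - DB_CONNECTION_URI=postgres://infisical:changeme@db:5432/infisical
      - REDIS_URL=redis://redis:6379
    depends_on: [db, redis]
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=infisical
      - POSTGRES_PASSWORD=changeme
      - POSTGRES_DB=infisical
  redis:
    image: redis:7
```

The detail that matters is the port binding: publishing on the mesh interface's IP instead of 0.0.0.0 is what keeps the login page off the public internet.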
I migrated seven secrets from one project alone. N8N_BASIC_AUTH_USER, SUPABASE_SERVICE_ROLE_KEY, VERCEL_OIDC_TOKEN, and four more. All previously in .env files on my disk. Now they live in Infisical, organized by project and environment.
The workflow barely changes: infisical run -- npm start. Secrets fetched at runtime, injected as environment variables. Nothing written to disk. Nothing persists after the process stops.
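The runtime-injection idea itself is plain POSIX: the secret exists only in the child process's environment, never in the parent shell and never on disk. A minimal sketch, with a fake value and `env` standing in for `infisical run`:

```shell
# Fake value standing in for a secret fetched at runtime.
SECRET="sbp_example_not_real"

# Inject into the child process only, the way `infisical run -- cmd` does.
CHILD_SAW=$(env SUPABASE_SERVICE_ROLE_KEY="$SECRET" sh -c 'printf "%s" "$SUPABASE_SERVICE_ROLE_KEY"')
echo "child saw: $CHILD_SAW"

# The parent shell never exported it, and nothing was written to disk.
echo "parent sees: ${SUPABASE_SERVICE_ROLE_KEY:-unset}"
```

When the child process exits, the secret is gone with it. There is no file to rotate, shred, or forget about.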
If malware scans your filesystem now, there's no .env to find. No credentials in shell history. No tokens in auto-approved commands.
While cleaning up node_modules after the move, I also switched to Bun. Smaller footprint, no giant node_modules directory sitting there like a buffet for anything scanning the filesystem. Not a security decision, just a side effect of the cleanup. Less surface is less surface.
Between scanning your code for dependency vulnerabilities and removing secrets from disk, you cover two faces of the same posture. One watches what gets in. The other makes sure there's nothing to steal if something does.
The Lock on the Door and the Open Window
The LiteLLM supply chain attack showed what a single poisoned package can do. Shell history read, AWS credentials grabbed, SSH keys copied. Documented, reproducible, affecting thousands of developers. Nobody showed the extinguisher.
Open your last three projects. Ask yourself where your secrets live. In cleartext on disk? In a .claude/settings.local.json you never opened? In shell history that a malicious pip install can read in three seconds?
I locked down my dev environment. No more cleartext secrets. No more Node.js cache corrupted by cloud sync. No more git repositories broken by filesystem interference.
One folder move. One week of cascade. The machine is clean.
Sources
Andrej Karpathy's public disclosure of the LiteLLM typosquatting attack. IntCyberDigest documentation on TeamPCP exfiltration techniques and the Trivy supply chain incidents.
The cover is AI-generated. The crab wasn't harmed during production, but your .env file was.