Show HN: Zerobox – Sandbox any command with file and network restrictions
Sandbox any command with file, network, and credential controls.
Lightweight, cross-platform process sandboxing powered by OpenAI Codex's sandbox runtime.
- Deny by default: Writes, network, and environment variables are blocked unless you allow them
- Credential injection: Pass API keys that the process never sees. Zerobox injects real values only for approved hosts
- File access control: Allow or deny reads and writes to specific paths
- Network filtering: Allow or deny outbound traffic by domain
- Clean environment: Only essential env vars (PATH, HOME, etc.) are inherited by default
- TypeScript SDK: import { Sandbox } from "zerobox" with a Deno-style API
- Cross-platform: macOS and Linux. Windows support planned
- Single binary: No Docker, no VMs, ~10ms overhead
Install
Shell (macOS / Linux)
curl -fsSL https://raw.githubusercontent.com/afshinm/zerobox/main/install.sh | sh
npm
npm install -g zerobox
From source
Quick start
Run a command with no writes and no network access:
zerobox -- node -e "console.log('hello')"
Allow writes to a specific directory:
zerobox --allow-write=. -- node script.js
Allow network to a specific domain:
zerobox --allow-net=api.openai.com -- node agent.js
Pass a secret scoped to a specific host; the inner process never sees the real value:
zerobox --secret OPENAI_API_KEY=sk-proj-123 --secret-host OPENAI_API_KEY=api.openai.com -- node agent.js
Same thing with the TypeScript SDK:
```typescript
import { Sandbox } from "zerobox";

const sandbox = Sandbox.create({
  secrets: {
    OPENAI_API_KEY: {
      value: process.env.OPENAI_API_KEY,
      hosts: ["api.openai.com"],
    },
  },
});

const output = await sandbox.sh`node agent.js`.text();
```
Architecture
Secrets
Secrets are API keys, tokens, or credentials that should never be visible inside the sandbox. The sandboxed process sees a placeholder in the environment variable and the real value is substituted at the network proxy level only for requested hosts:
sandbox process: curl -H "Authorization: Bearer $OPENAI_API_KEY" https://api.openai.com/...
  -> proxy intercepts, replaces placeholder with real key
  -> server receives: Authorization: Bearer sk-proj-123
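The host-gated substitution step can be sketched in a few lines. This is an illustrative reconstruction, not Zerobox's actual proxy code, and `substituteSecrets` is a hypothetical helper:

```typescript
// Hypothetical helper illustrating host-gated secret substitution.
type Secret = { placeholder: string; value: string; hosts: string[] };

function substituteSecrets(host: string, headerValue: string, secrets: Secret[]): string {
  let result = headerValue;
  for (const s of secrets) {
    // Inject the real value only for approved hosts; any other host
    // receives the placeholder, so the secret never leaks to it.
    if (s.hosts.includes(host)) {
      result = result.split(s.placeholder).join(s.value);
    }
  }
  return result;
}

const secrets: Secret[] = [
  { placeholder: "ZB_SECRET_1", value: "sk-proj-123", hosts: ["api.openai.com"] },
];

substituteSecrets("api.openai.com", "Bearer ZB_SECRET_1", secrets); // → "Bearer sk-proj-123"
substituteSecrets("evil.com", "Bearer ZB_SECRET_1", secrets);       // → "Bearer ZB_SECRET_1"
```

The key property: the decision is made per destination host at the proxy, so a compromised process cannot exfiltrate the real key to an unapproved domain.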
Using the CLI
Pass a secret with --secret and restrict it to a specific domain with --secret-host:
zerobox --secret OPENAI_API_KEY=sk-proj-123 --secret-host OPENAI_API_KEY=api.openai.com -- node app.js
Without --secret-host, the secret is substituted for all domains:
zerobox --secret TOKEN=abc123 -- node app.js
You can also pass multiple secrets, each restricted to its own domain, by repeating the flags, for example:
zerobox --secret OPENAI_API_KEY=sk-proj-123 --secret-host OPENAI_API_KEY=api.openai.com --secret GITHUB_TOKEN=ghp-456 --secret-host GITHUB_TOKEN=api.github.com -- node app.js
Node.js fetch does not respect HTTPS_PROXY by default. When running Node.js inside a sandbox with secrets, make sure to pass the --use-env-proxy argument.
TypeScript SDK
```typescript
import { Sandbox } from "zerobox";

const sandbox = Sandbox.create({
  secrets: {
    OPENAI_API_KEY: {
      value: process.env.OPENAI_API_KEY,
      hosts: ["api.openai.com"],
    },
    GITHUB_TOKEN: {
      value: process.env.GITHUB_TOKEN,
      hosts: ["api.github.com"],
    },
  },
});

await sandbox.sh`node agent.js`.text();
```
Environment variables
By default, only essential variables are passed to the sandbox, e.g. PATH, HOME, USER, SHELL, TERM, and LANG.
Inherit all parent env vars
The --allow-env flag allows all parent environment variables to be inherited by the sandboxed process:
zerobox --allow-env -- node app.js
Inherit specific env vars only
zerobox --allow-env=PATH,HOME,DATABASE_URL -- node app.js
Block specific env vars
zerobox --allow-env --deny-env=AWS_SECRET_ACCESS_KEY -- node app.js
Or set specific variables explicitly:
zerobox --env NODE_ENV=production --env DEBUG=false -- node app.js
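Putting the flags together, the resulting environment can be modeled roughly like this. This is an illustrative sketch with a hypothetical `buildEnv` helper, not Zerobox's implementation:

```typescript
// Default allow-list, per the docs above.
const DEFAULT_KEYS = ["PATH", "HOME", "USER", "SHELL", "TERM", "LANG"];

// Hypothetical model: start from the allow-list (or everything with
// --allow-env), drop --deny-env keys, apply --env overrides last.
function buildEnv(
  parent: Record<string, string>,
  allow: string[] | "all" | null,    // --allow-env / --allow-env=KEYS / absent
  deny: string[],                    // --deny-env
  overrides: Record<string, string>, // --env KEY=VALUE
): Record<string, string> {
  const keys = allow === "all" ? Object.keys(parent) : allow ?? DEFAULT_KEYS;
  const env: Record<string, string> = {};
  for (const k of keys) {
    if (k in parent && !deny.includes(k)) env[k] = parent[k];
  }
  return { ...env, ...overrides };
}

const parent = { PATH: "/usr/bin", HOME: "/home/u", AWS_SECRET_ACCESS_KEY: "x" };
buildEnv(parent, "all", ["AWS_SECRET_ACCESS_KEY"], {}); // PATH and HOME survive, the AWS key is dropped
buildEnv(parent, null, [], { NODE_ENV: "production" }); // defaults only, plus the NODE_ENV override
```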
TypeScript SDK
Examples
Run AI-generated code safely
Run AI-generated code without risking file corruption or data leaks:
zerobox -- python3 /tmp/task.py
Or allow writes only to an output directory:
zerobox --allow-write=/tmp/output -- python3 /tmp/task.py
Or via the TypeScript SDK:
```typescript
import { Sandbox } from "zerobox";

const sandbox = Sandbox.create({
  allowWrite: ["/tmp/output"],
  allowNet: ["api.openai.com"],
});

const result = await sandbox.sh`python3 /tmp/task.py`.output();
console.log(result.code, result.stdout);
```
Restrict LLM tool calls
Each AI tool call can also be sandboxed individually: the parent agent process runs normally, and only selected operations run inside a sandbox:
```typescript
import { Sandbox } from "zerobox";

const reader = Sandbox.create();
const writer = Sandbox.create({ allowWrite: ["/tmp"] });
const fetcher = Sandbox.create({ allowNet: ["example.com"] });

const data = await reader.js`...`;
await writer.js`...`;
const result = await fetcher.js`...`;
```
Full working examples:
- examples/ai-agent-sandboxed - Entire agent process sandboxed with secrets (API key never visible)
- examples/ai-agent - Vercel AI SDK with per-tool sandboxing and secrets
- examples/workflow - Vercel Workflow with sandboxed durable steps
Protect your repo during builds
Run a build script with network access:
zerobox --allow-write=./dist --allow-net -- npm run build
Run tests with no network and catch accidental external calls:
zerobox --allow-write=/tmp -- npm test
SDK reference
npm install zerobox
Shell commands
```typescript
import { Sandbox } from "zerobox";

const sandbox = Sandbox.create({ allowWrite: ["/tmp"] });
const output = await sandbox.sh`echo hello`.text();
```
JSON output
```typescript
const data = await sandbox.sh`cat data.json`.json();
```
Raw output (doesn't throw on non-zero exit)
```typescript
const result = await sandbox.sh`exit 42`.output();
```
Explicit command + args
await sandbox.exec("node", ["-e", "console.log('hi')"]).text();
Inline JavaScript
```typescript
const data = await sandbox.js`
  console.log(JSON.stringify({ sum: 1 + 2 }));
`.json();
```
Error handling
Non-zero exit codes throw SandboxCommandError:
```typescript
import { Sandbox, SandboxCommandError } from "zerobox";

const sandbox = Sandbox.create();

try {
  await sandbox.sh`exit 1`.text();
} catch (e) {
  if (e instanceof SandboxCommandError) {
    console.log(e.code); // 1
    console.log(e.stderr);
  }
}
```
Cancellation
Pass an AbortSignal to cancel a running command (the setup around the original snippet is reconstructed here):

```typescript
const controller = new AbortController();
const promise = sandbox.sh`sleep 30`.text({ signal: controller.signal });
controller.abort(); // cancels the sandboxed command
```

Performance
Sandbox overhead is minimal, typically ~10ms and ~7MB:
| Command | Bare | Sandboxed | Overhead | Bare Mem | Sandbox Mem |
|---|---|---|---|---|---|
| echo hello | <1ms | 10ms | +10ms | 1.2 MB | 8.4 MB |
| node -e '...' | 10ms | 20ms | +10ms | 39.3 MB | 39.1 MB |
| python3 -c '...' | 10ms | 20ms | +10ms | 12.9 MB | 13.0 MB |
| cat 10MB file | <1ms | 10ms | +10ms | 1.9 MB | 8.4 MB |
| curl https://... | 50ms | 60ms | +10ms | 7.2 MB | 8.4 MB |
Best of 10 runs with warmup on Apple M5 Pro. Run ./bench/run.sh to reproduce.
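The best-of-N-with-warmup methodology can be sketched in a few lines; this is our reconstruction in TypeScript, not the actual bench/run.sh:

```typescript
import { spawnSync } from "node:child_process";

// Time a command n times after one discarded warmup run and keep the
// fastest wall-clock result, mirroring the methodology quoted above.
function bestOfN(cmd: string, args: string[], n = 10): number {
  spawnSync(cmd, args); // warmup, discarded
  let best = Infinity;
  for (let i = 0; i < n; i++) {
    const start = performance.now();
    spawnSync(cmd, args);
    best = Math.min(best, performance.now() - start);
  }
  return best; // fastest run, in milliseconds
}

const bareMs = bestOfN("echo", ["hello"], 3);
console.log(`echo hello: ${bareMs.toFixed(2)}ms`);
```

Taking the fastest run rather than the mean filters out scheduler noise, which matters when the effect being measured is on the order of 10ms.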
Platform support
| Platform | Backend | Status |
|---|---|---|
| macOS | Seatbelt (sandbox-exec) | Fully supported |
| Linux | Bubblewrap + Seccomp + Namespaces | Fully supported |
| Windows | Restricted Tokens + ACLs + Firewall | Planned |
CLI reference
| Flag | Example | Description |
|---|---|---|
| --allow-read | --allow-read=/tmp,/data | Restrict readable user data to listed paths. System libraries remain accessible. Default: all reads allowed. |
| --deny-read | --deny-read=/secret | Block reading from these paths. Takes precedence over --allow-read. |
| --allow-write [paths] | --allow-write=. | Allow writing to these paths. Without a value, allows writing everywhere. Default: no writes. |
| --deny-write | --deny-write=./.git | Block writing to these paths. Takes precedence over --allow-write. |
| --allow-net [domains] | --allow-net=example.com | Allow outbound network. Without a value, allows all domains. Default: no network. |
| --deny-net | --deny-net=evil.com | Block network to these domains. Takes precedence over --allow-net. |
| --env | --env NODE_ENV=prod | Set an env var in the sandbox. Can be repeated. |
| --allow-env [keys] | --allow-env=PATH,HOME | Inherit parent env vars. Without a value, inherits all. Default: only PATH, HOME, USER, SHELL, TERM, LANG. |
| --deny-env | --deny-env=SECRET | Drop these parent env vars. Takes precedence over --allow-env. |
| --secret | --secret API_KEY=sk-123 | Pass a secret. The process sees a placeholder; the real value is injected at the proxy for approved hosts. |
| --secret-host | --secret-host API_KEY=api.openai.com | Restrict a secret to specific hosts. Without this, the secret is substituted for all hosts. |
| -A, --allow-all | -A | Grant all filesystem and network permissions. Env and secrets still apply. |
| --no-sandbox | --no-sandbox | Disable the sandbox entirely. |
| --strict-sandbox | --strict-sandbox | Require the full sandbox (bubblewrap); fail instead of falling back to weaker isolation. |
| --debug | --debug | Print sandbox config and proxy decisions to stderr. |
| -C | -C /workspace | Set the working directory for the sandboxed command. |
| -V, --version | --version | Print the version. |
| -h, --help | --help | Print help. |
License
Apache-2.0