We sped up bun by 100x
How to code cannon yourself
Install vers CLI
$ curl -fsSL https://raw.githubusercontent.com/hdresearch/vers-cli/main/install.sh | sh
$ vers login
Configure environment variables
$ vers env set GITHUB_API_KEY github_pat_...
$ vers env set VERS_API_KEY abc123...
$ vers env set ANTHROPIC_API_KEY sk-ant-...
Write your initial plan
$ pi "Read plan.md and let me know when I can quit this session"
Let it start running
Check where it's at
Repeat running and checking in
How we rewrote git in Zig
Environment
The initial plan
The goal is to make a modern version control software like git or jj but written in zig
ALL SYSTEMS AND AGENTS MUST use this github -> https://github.com/hdresearch/ziggit.git
For each of the below goals, create a VM and run code like the following:

```shell
while true; do
  pi -run "GOAL"
done
```
NOTE - pi is running on the VM itself rather than running on the host machine and then ssh'ing commands. This should be done so we can quit this pi session
So agents are just infinitely running since there is always something to improve in a piece of software. Include pi-vers extension so each infinite loop can provision further VMs or agents.
- first person like jj but does not have a `jj git` subcommand; instead it is drop-in replaceable with git, so `ziggit checkout`, not `ziggit git checkout`
- feature compatibility with git (copy over the test suite from the git source)
- can compile to webassembly
- can yield performance improvements to oven-sh/bun codebase by using directly with zig integration instead of libgit2 or git cli
Maybe wait for some progress before starting on replacing bun's usage of the git cli (which they use over libgit2 for performance reasons, our suspicion is that a modern solution in zig could be better). Every VM should have the env vars VERS_API_KEY, ANTHROPIC_API_KEY, GITHUB_API_KEY. Also use the hdresearch/bun fork with changes so a real PR can be created pointing at oven-sh/bun BUT DO NOT MAKE THIS PR YOURSELF. Provide instructions for a person to validate the benchmark results with ziggit usage first
The produced agent loop
```bash
#!/bin/bash
set -a; source /etc/environment 2>/dev/null; set +a
export HOME=/root
export NODE_OPTIONS="--max-old-space-size=256"

cd /root/myproject || exit 1

while true; do
    echo "$(date): === Starting agent run ==="

    # 1. SYNC — save dirty work, pull latest from other agents
    git add -A
    git diff --cached --quiet || git commit -m "auto-save before sync"
    git fetch origin master
    git rebase origin/master || {
        git rebase --abort
        git reset --hard origin/master   # nuclear option on conflicts
    }

    # 2. BUILD — rebuild the project
    zig build   # or whatever your build command is

    # 3. RUN PI — the actual agent work
    pi --no-session -p "$(cat /root/prompt.txt)"

    # 4. PUSH — commit and push whatever pi did
    git add -A
    git diff --cached --quiet || git commit -m "auto-save after pi run"
    for attempt in 1 2 3; do
        git pull --rebase origin master || {
            git rebase --abort
            git reset --hard origin/master
        }
        git push origin master && break
        sleep 5
    done

    sleep 10
done
```
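The retry-push dance at the end of the loop is what lets many agents share one branch: the loser of a push race rebases onto whatever landed and tries again. A self-contained sketch of why it works; the temp dirs, names, and two-clone setup here are ours for illustration, not from the post:

```shell
#!/bin/sh
# Two clones of one bare repo race to push master; the loser runs the same
# rebase-then-retry sequence as the agent loop instead of failing outright.
set -e
base=$(mktemp -d)
git init -q --bare "$base/origin.git"
git clone -q "$base/origin.git" "$base/a" 2>/dev/null
git clone -q "$base/origin.git" "$base/b" 2>/dev/null
for r in a b; do
  git -C "$base/$r" symbolic-ref HEAD refs/heads/master
  git -C "$base/$r" config user.email "$r@example.com"
  git -C "$base/$r" config user.name "$r"
done

# agent A lands its work first
echo one > "$base/a/a.txt"
git -C "$base/a" add -A
git -C "$base/a" commit -q -m "a: work"
git -C "$base/a" push -q origin master

# agent B commits without knowing about A; a plain push would be rejected,
# so it rebases and retries, exactly like step 4 of the loop
echo two > "$base/b/b.txt"
git -C "$base/b" add -A
git -C "$base/b" commit -q -m "b: work"
for attempt in 1 2 3; do
  git -C "$base/b" pull -q --rebase origin master || {
    git -C "$base/b" rebase --abort
    git -C "$base/b" reset --hard origin/master
  }
  if git -C "$base/b" push -q origin master; then break; fi
  sleep 5
done

git -C "$base/b" log --oneline master   # both commits are on master
```

The `reset --hard origin/master` fallback means an agent can lose local work on a bad conflict, which is an acceptable trade here because anything important was already pushed by a previous iteration.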
Meta note
What it cost
The final results
bun improvements
git drop-in
WebAssembly
Succinct mode
```
$ git commit -m "chore: add another file"
[master b6eeb42] chore: add staged file
 1 file changed, 1 insertion(+)

$ ziggit commit -m "chore: add another file"
ok master 640fe38 "chore: add another file"

--- normal ---
On branch master
Changes to be committed:
  (use "git restore --staged ..." ...)
        new file:   staged.txt
Changes not staged for commit:
  (use "git add ..." ...)
  (use "git restore ..." ...)
        modified:   README.md

--- succinct ---
* master*
- Staged: 1 files
    staged.txt
~ Modified: 1 files
    README.md
```
$ ziggit --no-succinct status
$ GIT_SUCCINCT=0 ziggit status
Theory
Agents spawning agents is like being a manager of managers
For scenarios where we figured one agent was not going to fulfill some capability in a reasonable amount of time (mind you, this stuff is eating up billions of tokens, so it's not like it's absurdly unreasonable in the first place), we'd have multiple agents working in the same part of the codebase. In the logic wrapping the agent itself (both in the prompt and in literal shell scripts), we use git to rebase, stash, or push changes along the way. This both ensures agents don't tunnel-vision themselves into work that's never pushed, and makes agents failure-tolerant when one gets a task that was already handled by another agent.
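The "save dirty work" half of that wrapping hinges on one idiom from the agent scripts: `git diff --cached --quiet` exits zero when nothing is staged, so the auto-save commit only fires when there is real work and re-running the wrapper is a harmless no-op. A minimal demo; the temp repo and file names are ours for illustration:

```shell
#!/bin/sh
# Run the auto-save step twice: the first pass commits, the second is a no-op
# because `git diff --cached --quiet` succeeds when the index is clean.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email agent@example.com
git config user.name agent

echo hello > work.txt
git add -A
git diff --cached --quiet || git commit -q -m "auto-save before sync"

git add -A    # nothing new to stage this time
git diff --cached --quiet || git commit -q -m "auto-save before sync"

git log --oneline   # exactly one auto-save commit, no empty commits
```

Without the guard, the second pass would fail with "nothing to commit" and, under `set -e`, kill the whole agent loop.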
Why we think this works
The classic classroom exercise goes like this: you have all the ingredients and tools you'd use to prepare a PB&J (bread, peanut butter, jelly, plates, and so on) as well as something to write on and something to write with (a blackboard, whiteboard, paper, or text editor). You instruct the group to write down the instructions for preparing a PB&J while you follow those instructions extremely literally, such that a sandwich never gets made (unless you're nice about it). The goal isn't to demoralize your students into thinking they can't define steps; it's to emphasize how "dumb" computers can be and how explicit code needs to be for a program to do what you expect.
If you prompt an LLM to make a PB&J, assuming it has access to whatever's needed in the real world (robot arms plus all the cool hijinks), you'll likely end up with a sandwich, much like prompting a coding agent to make some program will likely end up with a program. If you want to ensure that every sandwich uses apricot jam, that's something to specify in the instructions. If you want to ensure some web app generation always uses a certain component library, that's something to specify in the instructions as well. LLMs are great because they can do things, but whichever details you care about must be specified, similar to how a human doing the PB&J exercise would need the orientation of the knife and so on spelled out.
What was funny about steering this system of agents is that it was reminiscent of watching the demands on engineering teams evolve over time at the startups we've been at: when the group needs to focus on a refactor, or tasks can be divided in parallel, agents can be redirected toward something or spawned/killed according to the codebase's demands. The point being there wasn't a single organizational structure or scaffold that was the "best"; our orchestration stayed dynamic as we went along with the project.