WorkerMill – open-source AI coding team, multi-expert orchestration
Article URL: https://github.com/jarod-rosenthal/workermill Comments URL: https://news.ycombinator.com/item?id=47616600 Points: 1 # Comments: 1
Point at a ticket. Get a pull request.

A single model writes bad code and approves its own bad code. WorkerMill separates planning, coding, and review — each with a different model, different strengths, different blind spots.
Website · Docs · Discussions · npm
Get Started
npx workermill
No API key required — select Ollama during setup to run fully local. Or bring your own keys for Anthropic, OpenAI, Google, LM Studio, or any OpenAI-compatible provider. Setup takes 60 seconds.
Point at a ticket. Get a pull request.
Point WorkerMill at your GitHub Issues, Jira, or Linear tickets. It plans the work, assigns specialist AI personas — backend, frontend, devops, security — writes the code, runs your tests, reviews with a separate model, and opens a PR.
> /ship #42
coordinator Fetched #42: Add product export to CSV
planner Reading codebase... 38 files analyzed
planner 2 stories: [backend_developer] CSV export endpoint with filters, auth, tests [frontend_developer] Export button on Products page
backend_developer Created src/routers/products.py export endpoint
backend_developer Created tests/test_products.py — 4 new tests
frontend_developer Created frontend/src/pages/ProductsPage.tsx — Export CSV button
tech_lead Reviewing against original spec...
tech_lead Score: 5/10 — N+1 database query, JSX parsing error
tech_lead Revision needed
backend_developer Fixed N+1 with selectinload, updated tests
frontend_developer Fixed JSX structure, verified build
tech_lead Score: 8/10 — approved
system Branch: workermill/add-product-export (4 commits)
Push and open PR? (y/n)
Cost: ~$2.50 (planner + reviewer only — workers ran locally for free)
The reviewer caught a real N+1 database query. The workers fixed it. The re-review passed. No human intervention. That's the difference between one model approving its own work and a team with independent review.
Works with GitHub Issues (/ship #42), Jira (/ship PROJ-123), Linear (/ship TEAM-42), spec files (/ship spec.md), or just a description (/ship add dark mode).
Review didn't pass? /retry picks up where you left off
> /retry
coordinator Retrying on branch: workermill/user-auth — 2 done, 1 remaining
coordinator Story 1/1: [backend_developer] Auth service
backend_developer Modified src/routes/auth.ts — cookie-based token, blacklist on /logout
backend_developer Modified src/middleware/requireAuth.ts — read token from cookie
backend_developer Running quality gates... vitest ✓ (23 passed)
tech_lead Score: 9/10 — approved
/retry doesn't start over. It loads the existing plan from disk, skips planning entirely, and resumes from the first incomplete story. No wasted tokens replanning or rebuilding what already works.
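The resume behavior amounts to a filter over the plan saved on disk. The sketch below is illustrative only: the `Story` shape and the `resumePoint` helper are hypothetical, not WorkerMill internals.

```typescript
// Hypothetical story record, as a /ship plan might be persisted to disk.
interface Story {
  id: number;
  persona: string; // e.g. "backend_developer"
  description: string;
  status: "done" | "pending";
}

// Resuming skips planning entirely: load the saved plan and keep only
// the stories that never completed, in their original order.
function resumePoint(plan: Story[]): Story[] {
  return plan.filter((story) => story.status !== "done");
}

const savedPlan: Story[] = [
  { id: 1, persona: "backend_developer", description: "Login endpoint", status: "done" },
  { id: 2, persona: "backend_developer", description: "Session middleware", status: "done" },
  { id: 3, persona: "backend_developer", description: "Auth service", status: "pending" },
];

// 2 done, 1 remaining: only the incomplete story is re-run.
const remaining = resumePoint(savedPlan);
```

No tokens go to replanning; the worker picks up story 3 directly.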
Review your code. Fix what it finds.
/review runs a standalone Tech Lead review on your current work. If it finds issues, WorkerMill offers to create a GitHub issue with the findings and immediately kicks off /ship to fix them.
> /review branch
tech_lead Reading diff against main... 14 files changed
tech_lead Score: 6/10
tech_lead Issues:
- API key passed as query parameter — use headers
- No input validation on POST /api/webhooks
- Error responses leak stack traces in production
Create a GitHub issue with these findings and fix them? (y/n) y
coordinator Created issue #18 — starting fix...
Works with branch (full diff vs main), diff (uncommitted changes), or a PR number (/review #42).
Target a single expert
/as sends one specialist with full tool access — no planning step, no review loop.
/as security_engineer audit this repository for injection and broken auth
/as backend_developer add pagination to the /api/tasks endpoint
/as devops_engineer set up a GitHub Actions CI pipeline
/as qa_engineer write integration tests for the checkout flow

Or just chat
Ask it to fix a bug, explain a function, or refactor a module. It reads your code, makes changes, runs your tests.
How It Works
Unlike single-model tools, WorkerMill never lets the same model review its own code.
- A planner reads your codebase and decomposes the task into scoped stories with specific files and implementation guidance.
- Specialist workers build one story at a time — a backend expert writes the API, a frontend expert wires the UI. Workers run locally via Ollama (free) or on any cloud provider.
- A reviewer on a different model reads the actual diffs against the original spec. It rejects bad work with specific feedback — including real code examples — until the code meets the standard.
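The build-then-review loop can be sketched as follows. Everything here is an illustration, not WorkerMill source: the `Worker` and `Reviewer` types, the 7/10 approval threshold, and `buildWithReview` are assumptions standing in for the real orchestration.

```typescript
interface Review {
  score: number;
  feedback: string;
}

// Hypothetical role interfaces. The key property is that the reviewer
// is a different model, so it never grades its own output.
type Worker = (task: string, feedback?: string) => string; // returns a diff
type Reviewer = (task: string, diff: string) => Review;

// Loop until the independent reviewer approves, or give up after maxRounds.
function buildWithReview(
  task: string,
  work: Worker,
  review: Reviewer,
  threshold = 7,
  maxRounds = 3,
): { diff: string; review: Review } {
  let feedback: string | undefined;
  let diff = "";
  let verdict: Review = { score: 0, feedback: "" };
  for (let round = 0; round < maxRounds; round++) {
    diff = work(task, feedback); // worker addresses prior feedback, if any
    verdict = review(task, diff); // separate model scores the actual diff
    if (verdict.score >= threshold) break; // approved
    feedback = verdict.feedback; // rejected with specific feedback
  }
  return { diff, review: verdict };
}
```

With stubbed roles, a first-round 5/10 rejection (say, an N+1 query) feeds back into a second attempt that passes review, mirroring the transcript above.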
{
  "providers": {
    "ollama": { "model": "qwen3-coder:30b" },
    "openai": { "apiKey": "{env:OPENAI_API_KEY}" },
    "google": { "apiKey": "{env:GOOGLE_API_KEY}" }
  },
  "default": "ollama",
  "routing": { "planner": "openai", "tech_lead": "google" }
}

Use expensive models for judgment. Free local models for volume.
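Role routing of this kind reduces to a lookup with a fallback. A minimal sketch in TypeScript, where the `Config` shape mirrors the JSON above but the `resolveProvider` function is hypothetical, not a WorkerMill API:

```typescript
interface Config {
  default: string; // provider used when no route matches
  routing: Record<string, string>; // role -> provider overrides
}

// An explicit route wins; every other role falls back to the default
// provider (e.g. a free local model for high-volume worker roles).
function resolveProvider(config: Config, role: string): string {
  return config.routing[role] ?? config.default;
}

const config: Config = {
  default: "ollama",
  routing: { planner: "openai", tech_lead: "google" },
};
```

So `planner` and `tech_lead` go to paid models for judgment, while `backend_developer` and friends run on the local default.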
AI Provider Support
Bring your own keys. Mix and match per role. WorkerMill uses the Vercel AI SDK — any compatible provider works out of the box.
| Provider | Models | Notes |
| --- | --- | --- |
| Ollama | Any local model | Auto-detected, including WSL. Fully offline |
| LM Studio | Any local model | Auto-detected |
| Anthropic | Claude Opus 4.6, Sonnet 4.6, Haiku 4.5 | |
| OpenAI | GPT-5.4, GPT-5.4 Mini, GPT-5.3 Codex | |
| Google | Gemini 3.1 Pro, Gemini 2.5 Flash | |
Any provider with an OpenAI-compatible API also works — Groq, DeepSeek, Mistral, OpenRouter, Together AI, xAI, Fireworks, or your own custom endpoint.
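As an illustration, a custom OpenAI-compatible endpoint might sit alongside the providers shown earlier. Note this fragment is a guess at the shape: the `baseURL` key and exact field names are assumptions, not documented WorkerMill schema, so check the docs for the real keys.

```json
{
  "providers": {
    "groq": {
      "baseURL": "https://api.groq.com/openai/v1",
      "apiKey": "{env:GROQ_API_KEY}",
      "model": "<model-id>"
    }
  },
  "default": "groq"
}
```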
Install
# Run without installing (recommended)
npx workermill

Or install globally
npm install -g workermill
Check your setup
wm doctor
No server, no Docker, no account. First run walks you through provider setup — pick a model, add a key (or point at Ollama), and you're building.
Requirements: Node.js 20+, Git, and an LLM provider (Ollama for local, or an API key). GitHub CLI (gh) is optional but needed for automatic PR creation.
All Commands
Build
| Command | What it does |
| --- | --- |
| /ship | Full team: plan, execute with experts, review, commit to branch |
| /ship spec.md | Same, but read the task from a file |
| /ship GH-42 / PROJ-123 / TEAM-42 | Fetch a ticket from GitHub Issues, Jira, or Linear |
| /as | One expert, full tools, no planning overhead |
| /retry | Resume last /ship — skips planning, picks up from the first incomplete story |
| /review branch | Tech lead review of feature branch diff vs main |
| /review diff | Review uncommitted changes only |
| /review #42 | Review a GitHub PR by number |
Session
| Command | What it does |
| --- | --- |
| /model provider/model [ctx] | Hot-swap model mid-session (e.g. /model google/gemini-3.1-pro) |
| /compact [focus] | Compress conversation — optionally preserve specific context |
| /cost | Session cost estimate and token usage |
| /sessions | List past conversations (resume with --resume on next launch) |
| /clear | Reset the conversation |
| /editor | Open $EDITOR for longer input |
Project
| Command | What it does |
| --- | --- |
| /init | Generate WORKERMILL.md from codebase analysis |
| /remember | Save a persistent memory |
| /forget | Remove a memory |
| /memories | View all saved project memories |
| /personas | List, view, or create expert personas |
| /skills | List custom skills from .workermill/skills/ |
Safety
| Command | What it does |
| --- | --- |
| /undo | Revert file changes — per-file, per-step, or everything |
| /diff | Preview uncommitted changes |
| /git | Branch and status |
| /permissions | Manage tool allow/deny rules |
| /trust | Auto-approve all tools for this session |
Config
| Command | What it does |
| --- | --- |
| /settings | View and change configuration inline |
| /settings key | Add an API key without leaving the session |
| /setup | Re-run the provider setup wizard |
| /hooks | View configured pre/post tool hooks |
| /mcp | MCP server connection status |
Experimental
| Command | What it does |
| --- | --- |
| /chrome | Headless Chrome for testing and scraping |
| /voice | Voice input — speak your task |
| /schedule | Scheduled recurring tasks |
Shortcuts: !command runs shell directly · ESC cancels · ESC ESC rolls back last exchange · Shift+Tab cycles permission mode · @file.ts inlines code · @dir/ inlines tree · @url fetches content · @image.png sends to vision models
For teams that need a web dashboard, VS Code extension, and managed cloud workers, see the WorkerMill Platform.
Apache License 2.0 — see LICENSE for details.