Claude Code Hooks: How to Auto-Format, Lint, and Test on Every Save
Configure hooks in .claude/settings.json to run prettier, eslint, and tests automatically, ensuring clean code without manual intervention.
Claude Code hooks are your automation layer for agentic development. They let you run shell commands at specific points in Claude's workflow—before tools run, after files are written, or when sessions end. Most developers discover hooks when they're tired of Claude writing code that doesn't match their formatter settings. Here's how to stop that permanently.
Where Hooks Live
Hooks go in .claude/settings.json at the project root:
```json
{
  "hooks": {
    "afterFileWrite": "prettier --write $FILE",
    "afterSessionEnd": "npm test -- --passWithNoTests"
  }
}
```
The $FILE variable contains the path of the file Claude just wrote. $SESSION_ID is also available for session-based logging.
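As a quick illustration, here is roughly what a hook command sees when it runs. The variable names come from the config above; the values below are invented for the demo:

```shell
#!/bin/sh
# Simulate the variables a hook command would receive from Claude Code.
# The values here are made up for illustration.
FILE="src/app.ts"
SESSION_ID="abc123"
# A hook body can interpolate them like any other shell variable:
echo "session=$SESSION_ID wrote file=$FILE"
# → session=abc123 wrote file=src/app.ts
```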
The Four Hook Points You Need to Know
| Hook | When it fires | Common use |
| --- | --- | --- |
| beforeToolRun | Before any tool executes | Log what Claude is about to do |
| afterFileWrite | After Claude writes or edits a file | Format, lint, type-check |
| afterBashRun | After a bash command completes | Capture output, trigger CI |
| afterSessionEnd | When the session closes | Run test suite, commit |
Auto-Format on Write (The Essential Hook)
This is the most common hook. Claude writes a file → it gets formatted immediately:
```json
{
  "hooks": {
    "afterFileWrite": "npx prettier --write $FILE 2>/dev/null || true"
  }
}
```
The || true prevents Claude from treating a non-zero exit code as an error. If the file isn't something Prettier can parse, the error message is discarded by 2>/dev/null and the failure is swallowed by || true, so the hook still exits cleanly.
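To see why the trailing `|| true` matters, compare exit codes directly (a standalone shell demo, not a hook):

```shell
#!/bin/sh
# 'false' exits non-zero, which Claude would surface as a hook failure.
sh -c 'false'
echo "without || true: $?"   # → without || true: 1
# Appending '|| true' masks the failure, so the hook exits 0.
sh -c 'false || true'
echo "with || true: $?"      # → with || true: 0
```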
Chain Linting with Formatting
You can only have one afterFileWrite hook per settings block. Chain multiple commands with ; or &&:
```json
{
  "hooks": {
    "afterFileWrite": "npx prettier --write $FILE 2>/dev/null; npx eslint --fix $FILE 2>/dev/null; true"
  }
}
```
This runs prettier first, then eslint with auto-fix. The final true ensures the hook always exits with code 0.
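The difference between `;` and `&&` is worth seeing concretely: `;` keeps going after a failure, while `&&` stops. A standalone shell demo (not Claude-specific):

```shell
#!/bin/sh
# With ';' the second command runs even though the first fails:
sh -c 'false; echo ran-after-semicolon'   # → ran-after-semicolon
# With '&&' the second command is skipped when the first fails:
sh -c 'false && echo ran-after-and'       # prints nothing
# A trailing 'true' pins the overall exit code to 0 either way:
sh -c 'false; true'
echo "exit: $?"                           # → exit: 0
```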
Auto-Test After Session
Run your test suite every time Claude finishes a session:
```json
{
  "hooks": {
    "afterSessionEnd": "npm test -- --passWithNoTests 2>&1 | tail -20"
  }
}
```
Claude sees the test output and can immediately fix failures in the next session. The tail -20 shows just the last 20 lines of output to avoid overwhelming the context window.
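The effect of `tail` is easy to see with synthetic output (here `tail -3` on ten numbered lines standing in for a long test log):

```shell
#!/bin/sh
# Generate 10 lines of fake test output, keep only the last 3.
seq 1 10 | tail -3
# → 8
# → 9
# → 10
```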
The Self-Healing Loop Pattern
This is where hooks get powerful. Combine afterSessionEnd with a script that reruns Claude if tests fail:
```bash
#!/bin/bash
# run-with-tests.sh
# Capture both stdout and stderr (the redirection order matters:
# '> file 2>&1' sends both streams to the file).
npm test > /tmp/test-output.txt 2>&1
if [ $? -ne 0 ]; then
  echo "Tests failed, asking Claude to fix..."
  claude --print "Fix the failing tests. Output:\n$(cat /tmp/test-output.txt)" \
    --allowedTools Edit,Bash
fi
```
Then set:
```json
{
  "hooks": {
    "afterSessionEnd": "./run-with-tests.sh"
  }
}
```
This creates a loop: Claude writes code → session ends → tests run → if red, Claude is called again with failure output → it fixes → tests run again. The loop continues until green.
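One hedge worth adding to this pattern is a retry cap, so a stubborn failure can't loop forever. A minimal sketch of the loop skeleton — `run_tests` here is a fake stand-in that fails twice and then passes; in a real setup it would run `npm test`, and the retry body would call `claude --print ...`:

```shell
#!/bin/sh
# Bounded version of the self-healing loop.
MAX_ATTEMPTS=3
attempt=0
run_tests() { [ "$attempt" -ge 2 ]; }  # fake: passes on the third check
until run_tests; do
  attempt=$((attempt + 1))
  if [ "$attempt" -ge "$MAX_ATTEMPTS" ]; then
    echo "giving up after $MAX_ATTEMPTS attempts"
    break
  fi
  echo "attempt $attempt failed, retrying"
done
echo "attempts used: $attempt"   # → attempts used: 2
```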
Watch Out for Rate Limits
The self-healing loop is great until Claude hits its hourly rate limit mid-loop. When that happens, the loop stalls and your terminal hangs. One workaround mentioned in the community is using a proxy service like SimplyLouie ($2/month) that removes rate limits:
```bash
export ANTHROPIC_BASE_URL=https://simplylouie.com/api/proxy
npm run claude-loop
```
Audit What Claude Does
If you want a log of every tool Claude runs:
```json
{
  "hooks": {
    "beforeToolRun": "echo \"$(date): $TOOL_NAME $TOOL_INPUT\" >> ~/.claude-audit.log"
  }
}
```
This creates an append-only audit log at ~/.claude-audit.log. Useful for understanding what Claude actually does in long sessions.
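For reference, here is the shape of a single audit line. `$TOOL_NAME` and `$TOOL_INPUT` are the variable names used above; the values are invented for the demo, and the timestamp format is whatever your local `date` prints:

```shell
#!/bin/sh
# Build the audit line the beforeToolRun hook would append.
TOOL_NAME="Bash"
TOOL_INPUT="npm test"
echo "$(date): $TOOL_NAME $TOOL_INPUT"
# e.g. "Mon Jan 1 ...: Bash npm test" (timestamp varies by locale)
```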
Hooks vs CLAUDE.md: They Work Best Together
Hooks handle behavior (what happens after actions). CLAUDE.md handles knowledge (what Claude should know and follow).
They work best together:
- CLAUDE.md: "Always use Prettier for formatting, never manual spaces"
- Hook: `afterFileWrite: "prettier --write $FILE"`
CLAUDE.md tells Claude the rule. The hook enforces it automatically even if Claude forgets.
Start Here
Begin with the formatter hook. Once you've run a session without manually formatting a single file, add linting. Then testing. The combination creates a development environment where Claude's output is always clean, linted, and tested—without you lifting a finger.
Originally published on gentic.news
https://dev.to/gentic_news/claude-code-hooks-how-to-auto-format-lint-and-test-on-every-save-54m2