🔥 NousResearch/hermes-agent
The agent that grows with you — Trending on GitHub today with 713 new stars.
The self-improving AI agent built by Nous Research. It's the only agent with a built-in learning loop — it creates skills from experience, improves them during use, nudges itself to persist knowledge, searches its own past conversations, and builds a deepening model of who you are across sessions. Run it on a $5 VPS, a GPU cluster, or serverless infrastructure that costs nearly nothing when idle. It's not tied to your laptop — talk to it from Telegram while it works on a cloud VM.
Use any model you want — Nous Portal, OpenRouter (200+ models), z.ai/GLM, Kimi/Moonshot, MiniMax, OpenAI, or your own endpoint. Switch with `hermes model` — no code changes, no lock-in.
- **A real terminal interface.** Full TUI with multiline editing, slash-command autocomplete, conversation history, interrupt-and-redirect, and streaming tool output.
- **Lives where you do.** Telegram, Discord, Slack, WhatsApp, Signal, and CLI — all from a single gateway process. Voice memo transcription, cross-platform conversation continuity.
- **A closed learning loop.** Agent-curated memory with periodic nudges. Autonomous skill creation after complex tasks. Skills self-improve during use. FTS5 session search with LLM summarization for cross-session recall. Honcho dialectic user modeling. Compatible with the agentskills.io open standard.
- **Scheduled automations.** Built-in cron scheduler with delivery to any platform. Daily reports, nightly backups, weekly audits — all in natural language, running unattended.
- **Delegates and parallelizes.** Spawn isolated subagents for parallel workstreams. Write Python scripts that call tools via RPC, collapsing multi-step pipelines into zero-context-cost turns.
- **Runs anywhere, not just your laptop.** Six terminal backends — local, Docker, SSH, Daytona, Singularity, and Modal. Daytona and Modal offer serverless persistence — your agent's environment hibernates when idle and wakes on demand, costing nearly nothing between sessions. Run it on a $5 VPS or a GPU cluster.
- **Research-ready.** Batch trajectory generation, Atropos RL environments, trajectory compression for training the next generation of tool-calling models.
Quick Install
```bash
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
```
Works on Linux, macOS, and WSL2. The installer handles everything — Python, Node.js, dependencies, and the hermes command. No prerequisites except git.
Windows: Native Windows is not supported. Please install WSL2 and run the command above.
After installation:
```bash
source ~/.bashrc   # reload shell (or: source ~/.zshrc)
hermes             # start chatting!
```

Getting Started
```bash
hermes              # Interactive CLI — start a conversation
hermes model        # Choose your LLM provider and model
hermes tools        # Configure which tools are enabled
hermes config set   # Set individual config values
hermes gateway      # Start the messaging gateway (Telegram, Discord, etc.)
hermes setup        # Run the full setup wizard (configures everything at once)
hermes claw migrate # Migrate from OpenClaw
hermes update       # Update to the latest version
hermes doctor       # Diagnose any issues
```

📖 Full documentation →
CLI vs Messaging Quick Reference
Hermes has two entry points: start the terminal UI with hermes, or run the gateway and talk to it from Telegram, Discord, Slack, WhatsApp, Signal, or Email. Once you're in a conversation, many slash commands are shared across both interfaces.
| Action | CLI | Messaging platforms |
|---|---|---|
| Start chatting | `hermes` | Run `hermes gateway setup` + `hermes gateway start`, then send the bot a message |
| Start fresh conversation | `/new` or `/reset` | `/new` or `/reset` |
| Change model | `/model [provider:model]` | `/model [provider:model]` |
| Set a personality | `/personality [name]` | `/personality [name]` |
| Retry or undo the last turn | `/retry`, `/undo` | `/retry`, `/undo` |
| Compress context / check usage | `/compress`, `/usage`, `/insights [--days N]` | `/compress`, `/usage`, `/insights [days]` |
| Browse skills | `/skills` or `/` | `/skills` or `/` |
| Interrupt current work | Ctrl+C or send a new message | `/stop` or send a new message |
| Platform-specific status | `/platforms` | `/status`, `/sethome` |
For the full command lists, see the CLI guide and the Messaging Gateway guide.
Documentation
All documentation lives at hermes-agent.nousresearch.com/docs:
| Section | What's Covered |
|---|---|
| Quickstart | Install → setup → first conversation in 2 minutes |
| CLI Usage | Commands, keybindings, personalities, sessions |
| Configuration | Config file, providers, models, all options |
| Messaging Gateway | Telegram, Discord, Slack, WhatsApp, Signal, Home Assistant |
| Security | Command approval, DM pairing, container isolation |
| Tools & Toolsets | 40+ tools, toolset system, terminal backends |
| Skills System | Procedural memory, Skills Hub, creating skills |
| Memory | Persistent memory, user profiles, best practices |
| MCP Integration | Connect any MCP server for extended capabilities |
| Cron Scheduling | Scheduled tasks with platform delivery |
| Context Files | Project context that shapes every conversation |
| Architecture | Project structure, agent loop, key classes |
| Contributing | Development setup, PR process, code style |
| CLI Reference | All commands and flags |
| Environment Variables | Complete env var reference |
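The Cron Scheduling docs cover scheduled tasks; cron schedules are conventionally written as five fields (minute, hour, day-of-month, month, day-of-week). As a minimal sketch of how such an expression is evaluated — generic cron semantics only, simplified (no ranges or step values), and not Hermes' scheduler code:

```python
from datetime import datetime

def field_matches(field: str, value: int) -> bool:
    """Match one cron field: '*', a single number, or a comma-separated list."""
    if field == "*":
        return True
    return value in {int(part) for part in field.split(",")}

def cron_matches(expr: str, when: datetime) -> bool:
    """Check a five-field cron expression against a datetime."""
    minute, hour, dom, month, dow = expr.split()
    return (
        field_matches(minute, when.minute)
        and field_matches(hour, when.hour)
        and field_matches(dom, when.day)
        and field_matches(month, when.month)
        # cron day-of-week: 0 = Sunday; Python weekday(): 0 = Monday
        and field_matches(dow, (when.weekday() + 1) % 7)
    )

# "Every day at 09:00" — a daily-report style schedule.
print(cron_matches("0 9 * * *", datetime(2025, 1, 6, 9, 0)))   # True
print(cron_matches("0 9 * * *", datetime(2025, 1, 6, 10, 0)))  # False
```

A scheduler loop would evaluate each stored expression once per minute and dispatch matching jobs to their delivery platform.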
Migrating from OpenClaw
If you're coming from OpenClaw, Hermes can automatically import your settings, memories, skills, and API keys.
During first-time setup: The setup wizard (hermes setup) automatically detects ~/.openclaw and offers to migrate before configuration begins.
Anytime after install:
```bash
hermes claw migrate                    # Interactive migration (full preset)
hermes claw migrate --dry-run          # Preview what would be migrated
hermes claw migrate --preset user-data # Migrate without secrets
hermes claw migrate --overwrite        # Overwrite existing conflicts
```

What gets imported:
- `SOUL.md` — persona file
- Memories — `MEMORY.md` and `USER.md` entries
- Skills — user-created skills → `~/.hermes/skills/openclaw-imports/`
- Command allowlist — approval patterns
- Messaging settings — platform configs, allowed users, working directory
- API keys — allowlisted secrets (Telegram, OpenRouter, OpenAI, Anthropic, ElevenLabs)
- TTS assets — workspace audio files
- Workspace instructions — `AGENTS.md` (with `--workspace-target`)
See hermes claw migrate --help for all options, or use the openclaw-migration skill for an interactive agent-guided migration with dry-run previews.
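The `--dry-run` idea — report what would change without changing anything — is a common migration pattern. A generic sketch of it, with paths and file names invented for illustration (this is not Hermes' migration code):

```python
from pathlib import Path
import shutil
import tempfile

def plan_migration(src: Path, dst: Path, names: list[str]) -> list[tuple[Path, Path]]:
    """Return (source, destination) pairs for files that exist and would be copied."""
    return [(src / n, dst / n) for n in names if (src / n).exists()]

def migrate(src: Path, dst: Path, names: list[str], dry_run: bool = True) -> list[str]:
    """Copy planned files; with dry_run=True, only report the plan."""
    actions = []
    for s, d in plan_migration(src, dst, names):
        actions.append(f"{s} -> {d}")
        if not dry_run:
            d.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(s, d)
    return actions

# Hypothetical legacy layout, mirroring the "what gets imported" list above.
tmp = Path(tempfile.mkdtemp())
old, new = tmp / ".openclaw", tmp / ".hermes"
old.mkdir()
(old / "SOUL.md").write_text("persona")

plan = migrate(old, new, ["SOUL.md", "MEMORY.md"])            # dry run: plan only
done = migrate(old, new, ["SOUL.md", "MEMORY.md"], dry_run=False)
print(plan)
print(done)
```

The key property is that the dry run and the real run share the same planning code, so the preview cannot drift from what actually happens.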
Contributing
We welcome contributions! See the Contributing Guide for development setup, code style, and PR process.
Quick start for contributors:
```bash
git clone https://github.com/NousResearch/hermes-agent.git
cd hermes-agent
curl -LsSf https://astral.sh/uv/install.sh | sh
uv venv venv --python 3.11
source venv/bin/activate
uv pip install -e ".[all,dev]"
python -m pytest tests/ -q
```

RL Training (optional): To work on the RL/Tinker-Atropos integration:
```bash
git submodule update --init tinker-atropos
uv pip install -e "./tinker-atropos"
```
Community
- 💬 Discord
- 📚 Skills Hub
- 🐛 Issues
- 💡 Discussions
License
MIT — see LICENSE.
Built by Nous Research.