Project AIRI
Re-creating Neuro-sama, a soul container of AI waifu / virtual characters to bring them into our world.
[Join Discord Server] [Try it] [简体中文] [日本語] [Русский] [Tiếng Việt] [Français] [한국어]
Heavily inspired by Neuro-sama
Tip
On Windows, you can also install AIRI with Scoop:
scoop bucket add airi https://github.com/moeru-ai/airi
scoop install airi/airi

Warning
Attention: We do not have any officially minted cryptocurrency or token associated with this project. Please check the information and proceed with caution.
Note
We've got a whole dedicated organization @proj-airi for all the sub-projects born from Project AIRI. Check it out!
RAG, memory system, embedded database, icons, Live2D utilities, and more!
Tip
We have a translation project on Crowdin. If you find any inaccurate translations, feel free to contribute improvements there.
Have you dreamed about having a cyber living being (cyber waifu, digital pet) or digital companion that could play with and talk to you?
With the power of modern large language models like ChatGPT and Claude, asking a virtual being to roleplay and chat with us is now easy for everyone. Platforms like Character.ai (a.k.a. c.ai) and JanitorAI, as well as local playgrounds like SillyTavern, are already good-enough solutions for a chat-based or visual-adventure-game-like experience.
But what about the ability to play games? To see what you are coding? To chat while playing games, watching videos, and doing many other things?
Perhaps you already know Neuro-sama. She is currently the best virtual streamer capable of playing games, chatting, and interacting with you and other participants; some also call this kind of being a "digital human." Sadly, since she is not open source, you cannot interact with her once her live streams go offline.
Therefore, this project, AIRI, offers another possibility: letting you own your digital life, your cyber living being, easily, anywhere, anytime.
DevLogs We Posted & Recent Updates

- DevLog @ 2026.03.14 on March 14, 2026
- DevLog @ 2026.02.16 on February 16, 2026
- DevLog @ 2026.01.01 on January 1, 2026
- DevLog @ 2025.10.20 on October 20, 2025
- DevLog @ 2025.08.05 on August 5, 2025
- DevLog @ 2025.08.01 on August 1, 2025
- DreamLog 0x1 on June 16, 2025
- ...more on the documentation site
What's So Special About This Project?
Unlike other open-source AI-driven VTuber projects, アイリ was built from day one with support for many Web technologies, such as WebGPU, Web Audio, Web Workers, WebAssembly, and WebSocket.
Tip
Worried about a performance drop because we use Web-related technologies?

Don't worry. While the Web browser version is meant to show how much we can push and do inside browsers and webviews, we will never rely on it entirely: the desktop version of AIRI can use native NVIDIA CUDA and Apple Metal by default (thanks to Hugging Face and the beloved candle project), without any complex dependency management. Given that tradeoff, the desktop version is only partially powered by Web technologies: for graphics, layouts, animations, and the WIP plugin system that will let everyone integrate things.
This means that アイリ can run on modern browsers and devices, even mobile ones (already done, with PWA support). This opens many possibilities for us (the developers) to build and extend アイリ VTuber to the next level, while still leaving users the flexibility to enable features that require TCP connections or other non-Web technologies, such as connecting to a Discord voice channel or playing Minecraft and Factorio with friends.
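As a minimal illustration of the kind of browser-side feature detection this implies (the names here are hypothetical sketches, not AIRI's actual API), a check for WebGPU availability before choosing an in-browser inference path might look like:

```typescript
// Hypothetical sketch: decide whether in-browser WebGPU inference is possible,
// falling back to another backend otherwise. Not AIRI's actual API.
type InferenceBackend = 'webgpu' | 'fallback'

// Accepts a navigator-like object so the logic is testable outside a browser.
function pickInferenceBackend(nav: { gpu?: unknown }): InferenceBackend {
  // navigator.gpu is only defined in browsers with WebGPU enabled.
  return nav.gpu ? 'webgpu' : 'fallback'
}
```

In a real page this would be called as `pickInferenceBackend(navigator)`, degrading gracefully on browsers without WebGPU.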
Note
We are still in an early stage of development and are seeking talented developers to join us and help make アイリ a reality.

It's OK if you are not familiar with Vue.js, TypeScript, or the devtools this project requires; you can join us as an artist or designer, or even help us launch our first live stream.

Even if you are a big fan of React, Svelte, or Solid, you are welcome. You can open a sub-directory to add features you want to see in アイリ, or that you would like to experiment with.
Fields (and related projects) that we are looking for:

- Live2D modeller
- VRM modeller
- VRChat avatar designer
- Computer Vision
- Reinforcement Learning
- Speech Recognition
- Speech Synthesis
- ONNX Runtime
- Transformers.js
- vLLM
- WebGPU
- Three.js
- WebXR (check out another project we have under the @moeru-ai organization)

If you are interested, why not introduce yourself here and join us in building AIRI?
Current Progress
Capable of:

- Brain
  - Play Minecraft
  - Play Factorio (WIP, but PoC and demo available)
  - Play Kerbal Space Program (announcement TBD)
  - Co-play Helldivers 2 (WIP)
  - Chat in Telegram
  - Chat in Discord
  - Memory
    - Pure in-browser database support (DuckDB WASM | pglite)
    - Memory Alaya (WIP)
  - Pure in-browser local (WebGPU) inference
- Ears
  - Audio input from browser
  - Audio input from Discord
  - Client-side speech recognition
  - Client-side talking detection
- Mouth
  - ElevenLabs voice synthesis
- Body
  - VRM support
    - Control VRM model
    - VRM model animations
      - Auto blink
      - Auto look at
      - Idle eye movement
  - Live2D support
    - Control Live2D model
    - Live2D model animations
      - Auto blink
      - Auto look at
      - Idle eye movement
Development
For detailed instructions to develop this project, follow CONTRIBUTING.md
Note
By default, pnpm dev will start the development server for the Stage Web (browser version). If you would like to try developing the desktop version, please make sure you read CONTRIBUTING.md to setup the environment correctly.
pnpm i
pnpm dev

Stage Web (Browser Version at airi.moeru.ai)

pnpm dev
Stage Tamagotchi (Desktop Version)
pnpm dev:tamagotchi
A Nix package for Tamagotchi is included. To run AIRI with Nix, first make sure flakes are enabled, then run:
nix run github:moeru-ai/airi
NixOS
Electron requires shared libraries that aren't in standard paths on NixOS. Use the FHS shell defined in flake.nix:
nix develop .#fhs
pnpm dev:tamagotchi

Stage Pocket (Mobile Version)
Start the Capacitor development server:
pnpm dev:pocket:ios --target

Or:

CAPACITOR_DEVICE_ID_IOS= pnpm dev:pocket:ios
You can see the list of available devices and simulators by running pnpm exec cap run ios --list.
If you need to connect to the server channel from Pocket wirelessly, you need to start Tamagotchi as root:
sudo pnpm dev:tamagotchi
Then enable Secure WebSocket in Tamagotchi under Settings → Connections.
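The secure-websocket switch roughly corresponds to choosing a wss:// URL instead of ws:// when the client page itself is served over HTTPS, since browsers block insecure WebSocket connections from secure pages. A sketch with hypothetical names (the path and function are illustrative, not AIRI's actual settings code):

```typescript
// Hypothetical sketch: build a server-channel URL, upgrading to wss://
// when the page is served over https (mixed-content rules require this).
function serverChannelUrl(pageProtocol: string, host: string): string {
  const scheme = pageProtocol === 'https:' ? 'wss' : 'ws'
  // '/ws' is an illustrative path, not the project's real endpoint.
  return `${scheme}://${host}/ws`
}
```

In a browser this would typically be called as `serverChannelUrl(location.protocol, location.host)`.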
Documentation Site
pnpm dev:docs
Publish
Please update the version in Cargo.toml after running bumpp:
npx bumpp --no-commit --no-tag
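Keeping the Rust side in sync is a small string rewrite. The following is an illustrative sketch only (the function name and file handling are assumptions, not the repository's actual release tooling): read the version bumpp wrote into package.json and mirror it into Cargo.toml.

```typescript
// Illustrative sketch (not the repository's actual release scripts): after
// `npx bumpp` updates package.json, mirror that version into Cargo.toml.
function syncCargoVersion(pkgJson: string, cargoToml: string): string {
  const { version } = JSON.parse(pkgJson) as { version: string }
  // Replace the `version = "..."` line in the TOML [package] section.
  return cargoToml.replace(/^version = ".*"$/m, `version = "${version}"`)
}
```

A wrapper script would read both files, call this, and write Cargo.toml back out.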
Support of LLM API Providers (powered by xsai)

- AIHubMix (recommended)
- OpenRouter
- vLLM
- SGLang
- Ollama
- 302.AI (sponsored)
- OpenAI
  - Azure OpenAI API
- Anthropic Claude
  - AWS Claude (PR welcome)
- DeepSeek
- Qwen
- Google Gemini
- xAI
- Groq
- Mistral
- Cloudflare Workers AI
- Together.ai
- Fireworks.ai
- Novita
- Zhipu
- SiliconFlow
- Stepfun
- Baichuan
- Minimax
- Moonshot AI
- ModelScope
- Player2
- Tencent Cloud
- Sparks (PR welcome)
- Volcano Engine (PR welcome)
Sub-projects Born from This Project

- Awesome AI VTuber: A curated list of AI VTubers and related projects
- unspeech: Universal endpoint proxy server for /audio/transcriptions and /audio/speech, like LiteLLM but for any ASR and TTS
- hfup: Tools to help with deploying and bundling to HuggingFace Spaces
- xsai-transformers: Experimental 🤗 Transformers.js provider for xsAI
- WebAI: Realtime Voice Chat: Full example of implementing ChatGPT-style realtime voice from scratch with VAD + STT + LLM + TTS
- @proj-airi/drizzle-duckdb-wasm: Drizzle ORM driver for DuckDB WASM
- @proj-airi/duckdb-wasm: Easy-to-use wrapper for @duckdb/duckdb-wasm
- tauri-plugin-mcp: A Tauri plugin for interacting with MCP servers
- AIRI Factorio: Allows AIRI to play Factorio
- AIRI DomeKeeper: Allows AIRI to play DomeKeeper
- Factorio RCON API: RESTful API wrapper for the Factorio headless server console
- autorio: Factorio automation library
- tstl-plugin-reload-factorio-mod: Reloads Factorio mods during development
- Velin: Use Vue SFC and Markdown to write easy-to-manage stateful prompts for LLMs
- demodel: Easily boost the speed of pulling models and datasets from various inference runtimes
- inventory: Centralized model catalog and default provider configuration backend service
- MCP Launcher: Easy-to-use MCP builder & launcher for all possible MCP servers, just like Ollama for models!
- 🥺 SAD: Documentation and notes for self-hosting and running LLMs in the browser
```mermaid
%%{ init: { 'flowchart': { 'curve': 'catmullRom' } } }%%
flowchart TD
  Core("Core")
  Unspeech("unspeech")
  DBDriver("@proj-airi/drizzle-duckdb-wasm")
  MemoryDriver("[WIP] Memory Alaya")
  DB1("@proj-airi/duckdb-wasm")
  SVRT("@proj-airi/server-runtime")
  Memory("Memory")
  STT("STT")
  Stage("Stage")
  StageUI("@proj-airi/stage-ui")
  UI("@proj-airi/ui")

  subgraph AIRI
    DB1 --> DBDriver --> MemoryDriver --> Memory --> Core
    UI --> StageUI --> Stage --> Core
    Core --> STT
    Core --> SVRT
  end

  subgraph UI_Components
    UI --> StageUI
    UITransitions("@proj-airi/ui-transitions") --> StageUI
    UILoadingScreens("@proj-airi/ui-loading-screens") --> StageUI
    FontCJK("@proj-airi/font-cjkfonts-allseto") --> StageUI
    FontXiaolai("@proj-airi/font-xiaolai") --> StageUI
  end

  subgraph Apps
    Stage --> StageWeb("@proj-airi/stage-web")
    Stage --> StageTamagotchi("@proj-airi/stage-tamagotchi")
    Core --> RealtimeAudio("@proj-airi/realtime-audio")
    Core --> PromptEngineering("@proj-airi/playground-prompt-engineering")
  end

  subgraph Server_Components
    Core --> ServerSDK("@proj-airi/server-sdk")
    ServerShared("@proj-airi/server-shared") --> SVRT
    ServerShared --> ServerSDK
  end

  STT -->|Speaking| Unspeech
  SVRT -->|Playing Factorio| F_AGENT
  SVRT -->|Playing Minecraft| MC_AGENT

  subgraph Factorio_Agent
    F_AGENT("Factorio Agent")
    F_API("Factorio RCON API")
    factorio-server("factorio-server")
    F_MOD1("autorio")
    F_AGENT --> F_API -.-> factorio-server
    F_MOD1 -.-> factorio-server
  end

  subgraph Minecraft_Agent
    MC_AGENT("Minecraft Agent")
    Mineflayer("Mineflayer")
    minecraft-server("minecraft-server")
    MC_AGENT --> Mineflayer -.-> minecraft-server
  end

  XSAI("xsAI") --> Core
  XSAI --> F_AGENT
  XSAI --> MC_AGENT

  Core --> TauriMCP("@proj-airi/tauri-plugin-mcp")
  Memory_PGVector("@proj-airi/memory-pgvector") --> Memory

  style Core fill:#f9d4d4,stroke:#333,stroke-width:1px
  style AIRI fill:#fcf7f7,stroke:#333,stroke-width:1px
  style UI fill:#d4f9d4,stroke:#333,stroke-width:1px
  style Stage fill:#d4f9d4,stroke:#333,stroke-width:1px
  style UI_Components fill:#d4f9d4,stroke:#333,stroke-width:1px
  style Server_Components fill:#d4e6f9,stroke:#333,stroke-width:1px
  style Apps fill:#d4d4f9,stroke:#333,stroke-width:1px
  style Factorio_Agent fill:#f9d4f2,stroke:#333,stroke-width:1px
  style Minecraft_Agent fill:#f9d4f2,stroke:#333,stroke-width:1px
  style DBDriver fill:#f9f9d4,stroke:#333,stroke-width:1px
  style MemoryDriver fill:#f9f9d4,stroke:#333,stroke-width:1px
  style DB1 fill:#f9f9d4,stroke:#333,stroke-width:1px
  style Memory fill:#f9f9d4,stroke:#333,stroke-width:1px
  style Memory_PGVector fill:#f9f9d4,stroke:#333,stroke-width:1px
```
Similar Projects

Open-source ones

- kimjammer/Neuro: A recreation of Neuro-Sama, originally created in 7 days; a very complete implementation
- SugarcaneDefender/z-waif: Great at gaming, autonomy, and prompt engineering
- semperai/amica: Great at VRM and WebXR
- elizaOS/eliza: Great examples and software engineering for integrating agents into various systems and APIs
- ardha27/AI-Waifu-Vtuber: Great Twitch API integrations
- InsanityLabs/AIVTuber: Nice UI and UX
- IRedDragonICY/vixevia
- t41372/Open-LLM-VTuber
- PeterH0323/Streamer-Sales
Non-open-source ones

- https://clips.twitch.tv/WanderingCaringDeerDxCat-Qt55xtiGDSoNmDDr
- https://www.youtube.com/watch?v=8Giv5mupJNE
- https://clips.twitch.tv/TriangularAthleticBunnySoonerLater-SXpBk1dFso21VcWD
Project Status
Acknowledgements

- Reka UI: for the design of the documentation site (the new landing page is based on it) and for implementing a massive number of UI components (shadcn-vue uses Reka UI as its headless layer; do check it out!)
- pixiv/ChatVRM
- josephrocca/ChatVRM-js: A JS conversion/adaptation of parts of the ChatVRM (TypeScript) code for standalone use in OpenCharacters and elsewhere
- The UI design and style were inspired by Cookard, UNBEATABLE, and Sensei! I like you so much!, as well as the artworks Ayame and Wish by Mercedes Bazan
- mallorbc/whisper_mic
- xsai: Implemented a decent number of packages for interacting with LLMs and models, like the Vercel AI SDK but much smaller
Supporters
Thank you for supporting Project AIRI through OpenCollective, Patreon, and Ko-fi.
Special Thanks
Special thanks to all contributors for their contributions to Project AIRI ❤️
Star History