
Building a LEGO-like remote Agent - Jean2

DEV Community · by Daniel Bílek · March 31, 2026 · 4 min read

I'm a huge fan of coding agents. My daily consumption is at about 7 million tokens and growing. I started on Cursor, fell in love with OpenCode, got into customizing setups with MCPs, Skills, and subagent orchestration — all while daily-driving GLM, Minimax, GPT models, and everything I could get my hands on at OpenRouter just for the new and shiny.

I love squeezing the best possible answers from small models with the right prompts. At some point, I wanted to use OpenCode for everything, not just coding.

And then I kinda hit a wall.

The Wall

Baked-in steering

Coding agents come with baked-in prompts that already steer them in a certain direction — great for an out-of-the-box solution, not so great when you want full control. You can create your own agent with custom steering, but it's always appended to whatever the system's baked-in prompt already says.

Rigid tooling

Built-in tools are already great, but you can't alter, remove, or replace them — not their prompts, not the tools themselves. Adding your own usually doesn't feel the same. These systems aren't really built around the concept of bringing your own tools. Want an extra capability? Bring an MCP or a skill.

Session management

The TUI is great, but managing multiple projects or conversations not tied to any specific project just wasn't a comfortable experience. There are now GUI and Web options that handle it much better, but at the time, there weren't really stable, comfortable choices.

Multi-device continuity

Throw the phone into the mix, and it wasn't really there yet either. I just wanted to be able to create a spec doc with the agent on my phone, hand it off for execution, and review it on my laptop — seamlessly.

Enter Jean2

(Why Jean2 and not just Jean? Because Jean was kinda "not good.")

So I started building. Here's what came out of it:

Always running, seamless experience

Jean2 runs as a daemon with sockets and HTTP. The goal is a seamless experience — open a desktop client on your laptop, prompt what you want, pick up your phone, watch progress, and handle permissions. You can close the clients and leave the daemon running; it idles at an 80–100 MB memory footprint across three different projects.
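To make the daemon idea concrete, here's a minimal sketch of the pattern: an always-on process serving agent state over HTTP so any client (laptop GUI, phone app) can attach and detach freely. Everything here — the `/sessions` endpoint, the session states, the `start_daemon` helper — is hypothetical illustration, not Jean2's actual wire protocol.

```python
# Hypothetical sketch of an always-on agent daemon: clients attach over
# HTTP to read session state, and the process keeps running after every
# client disconnects.
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# project -> agent state (illustrative in-memory store)
SESSIONS = {"blog": "idle", "api": "running"}

class DaemonHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/sessions":
            body = json.dumps(SESSIONS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the daemon quiet

def start_daemon(port: int = 0) -> ThreadingHTTPServer:
    """Start the daemon in a background thread; port 0 picks a free port."""
    server = ThreadingHTTPServer(("127.0.0.1", port), DaemonHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A phone and a laptop client would both just poll the same endpoint — the daemon, not any single client, owns the conversation state.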

Absolutely and completely dumb

No baked-in prompts. Your agent prompt and your AGENT.md files are the only things that form the system message. This means you're not limited to building "your coding agent" — you can build "your [anything] agent."
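The "absolutely dumb" assembly could look something like this — the framework concatenates only what the user supplies and prepends nothing of its own. The function name and joining scheme are my assumptions, not Jean2's actual implementation:

```python
# Hypothetical sketch: build the system message purely from the user's
# agent prompt plus any AGENT.md files under the project, with no
# framework-supplied steering prepended.
from pathlib import Path

def build_system_message(agent_prompt: str, project_root: str) -> str:
    parts = [agent_prompt.strip()]
    # Collect every AGENT.md in the project tree, in a stable order.
    for agent_md in sorted(Path(project_root).rglob("AGENT.md")):
        parts.append(agent_md.read_text().strip())
    return "\n\n".join(p for p in parts if p)
```

Because nothing else is injected, swapping the agent prompt from "you are a coding assistant" to "you are a travel planner" genuinely changes the whole agent, not just a suffix.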

No baked-in tools

Well — that's not completely true. Technically, MCP handling, skill lookup, and subagents are implemented as tools, but nothing else is baked in. Tools are completely language-agnostic. They're spawned on demand and undergo an explicit, upfront security check that you define (if you want). You can yank anything out and drop anything in. All input is sent to tools via JSON stdin, and all results are expected in JSON stdout.

There's also a built-in visualization property that lets the tool decide how to display its results: diff, code, markdown, table, todo list — you name it. (This display data is omitted when making API requests to the LLM, naturally.)
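A tool under this contract is just a standalone script: JSON in on stdin, JSON out on stdout, optionally tagging how the client should render the result. The field names below (`result`, `visualization`) are illustrative guesses at the schema, not Jean2's documented one:

```python
# Hypothetical standalone tool following the JSON-stdin/JSON-stdout
# contract described in the post. The agent runtime would spawn this
# script on demand and parse its stdout.
import json
import sys

def run(payload: dict) -> dict:
    """Count word occurrences in the input text."""
    words = payload.get("text", "").split()
    counts = {w: words.count(w) for w in set(words)}
    return {
        "result": counts,
        # Rendering hint for the client UI; per the post, this kind of
        # display data is stripped before the request goes to the LLM.
        "visualization": "table",
    }

if __name__ == "__main__":
    request = json.load(sys.stdin)       # all input arrives as JSON on stdin
    json.dump(run(request), sys.stdout)  # all results go back as JSON on stdout
```

Since the script only touches stdin and stdout, the same tool could be rewritten in Go, Rust, or a shell one-liner without the runtime caring — that's the language-agnostic part.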

Important: You are in control of the tool definition and its description. If you notice a model not using a tool properly, you can just change it.

What's Next

Jean2 is still in active development — but it's very usable. I've been coding with it for over a week.

The next steps are:

  • SDK & structured output — so you can create your own apps with agent orchestration

  • Multimodal model support

  • Expanded tool offering — fal.ai tools for image generation seem like a natural fit

If this sounds even a little interesting, check out jean2.ai.

Got questions, ideas for tools, or tried it out and got lost in the setup? Hit me up at @danielbilekq0.
