
I’m Building a Synthetic Psyche for Developers — Here’s the Architecture

Dev.to AI · by Praveen KG · April 1, 2026 · 15 min read


This is not another AI assistant. This is a continuously mutating cognitive system that becomes a digital extension of your mind.

The Problem Nobody Is Solving

Every AI tool you use today shares one fundamental flaw.

It resets.

You close your laptop. Context gone. You open a new session. It knows nothing about yesterday, nothing about the auth bug you’ve been fighting for three days, nothing about the fact that Sarah on your team is cautious and Tom ships too fast and your CTO will reject anything without a security review. You rebuild context. Every. Single. Day.

GitHub Copilot watches your code but forgets your session. Claude Code understands your codebase but resets between conversations. ChatGPT knows what you told it five minutes ago and nothing before that.

The tools are smart. But they have no memory. No continuity. No sense of you.

I started building something different. I’m calling it Mini Me.

What Mini Me Actually Is

Not an assistant. Not a chatbot. Not another RAG wrapper.

Mini Me is a synthetic psyche — a continuously running cognitive system that:

  • Watches everything you do without being asked

  • Learns your style, your patterns, your team, your rhythm

  • Develops emotional responses that colour its behaviour

  • Mutates with every single interaction — permanently

  • Never resets. Never forgets what matters.

  • Gets cheaper every week as it learns to answer locally

The closest human analogy: it’s the difference between a very smart colleague you brief every morning and one who was there yesterday, remembers the context, knows your team, and has already thought about your problem while you were sleeping.

The Central Insight — Mind Is Not a File

Early in the design I made a mistake almost everyone makes.

I thought about building a mind.py file. A static definition of how Mini Me should think and behave.

Then I realised: that’s not how minds work.

You don’t have a mind file. You have neurons firing (energy), memories forming and fading (storage), emotions colouring every experience, senses flowing in constantly, conflicts between competing impulses, and learning continuously reshaping all of the above.

Mind is what you call it when all of that runs together. It’s not a thing. It’s a process.

So Mini Me has no mind.py. Instead it has five layers that run simultaneously, constantly changing each other, producing emergent behaviour that nobody programmed:

Layer 0 — The World (what's happening around you)
Layer 1 — Senses (observer.py — watching everything)
Layer 2 — Psyche (psyche.py — the mutating core)
Layer 3 — Consciousness (consciousness.py — the continuous loop)
Layer 4 — Memory (rag_engine.py — living knowledge stores)
Layer 5 — Interface (mcp_server.py — how you interact with it)


Let me walk through each one.

Layer 0 — The World

Mini Me doesn’t live in a chat window. It lives in your actual world:

  • Your IDE (opencode, claude-code)

  • Your team (Git, Jira, Slack, Email, Teams, Zendesk)

  • Your people (Sarah, Tom, your CTO — real characters with personalities)

  • Your context (time of day, energy level, what you’re stuck on)

Everything that happens in these places is an input. Not just when you ask. All the time.

Layer 1 — Senses (observer.py)

The observer is the eyes and ears. It runs independently — not as a plugin inside opencode, but as its own process that starts at login and never stops.

This distinction matters enormously. A plugin only watches when the IDE is open. Mini Me watches while you sleep.

Three streams flow into the observer:

IDE Stream — active while you’re working

file saves · terminal commands · test runs
errors · git operations · cursor position


Conversation Stream — the piece everyone misses

every prompt you type to opencode
every response you accept or edit
words you choose · tone · sentiment
what you ask again (comprehension gaps)
what you praise · what you reject


World Stream — overnight, while you’re away

git commits · PR reviews · Jira ticket changes
Slack messages · email threads · calendar updates
team activity · production alerts


The conversation stream is the critical one. Most systems watch what you do. Mini Me watches what you mean. Every prompt is a signal about your state of mind, your expertise gaps, your communication style, your frustrations and breakthroughs.
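To make “watching what you mean” concrete, here is a minimal sketch of signal extraction from a conversation event. The `ConversationEvent` fields, the regex heuristics, and the helper names are my illustrative assumptions, not the project's actual observer code:

```python
import re
from dataclasses import dataclass

@dataclass
class ConversationEvent:
    """One observed prompt/response pair from the IDE chat (hypothetical shape)."""
    prompt: str
    response_accepted: bool   # user kept the output unchanged
    response_edited: bool     # user modified the output before using it

# Crude keyword heuristics; a real observer would use a classifier.
PRAISE = re.compile(r"\b(perfect|exactly|yes that's it)\b", re.I)
REJECT = re.compile(r"\b(no|wrong|not what I)\b", re.I)

def extract_signals(event: ConversationEvent, recent_prompts: list[str]) -> list[str]:
    """Turn a raw conversation event into learning-signal labels."""
    signals = []
    if PRAISE.search(event.prompt):
        signals.append("PRAISED")
    if REJECT.search(event.prompt):
        signals.append("REJECTED")
    if event.response_accepted:
        signals.append("ACCEPTED")
    if event.response_edited:
        signals.append("EDITED")
    # Asking the same thing twice flags a comprehension gap.
    if any(event.prompt.strip().lower() == p.strip().lower() for p in recent_prompts):
        signals.append("REPEATED_Q")
    signals.append("STYLE_OBS")  # every prompt refines the style fingerprint
    return signals
```

Every event yields at least `STYLE_OBS`: even a neutral prompt carries information about vocabulary and tone.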

Wind-down is automatic — no manual trigger.

There is no “good night” command. When signals go quiet — no commits, no file saves, no terminal activity, and the calendar shows no more meetings — the energy system naturally drifts toward a dormant state. Mini Me slows its polling rate from every few seconds to every fifteen minutes. It quietly starts consolidating the day. No one told it to. It just noticed you were gone.

Layer 2 — Psyche (psyche.py)

This is the hard part. The novel part. The part nobody has built.

Psyche is not static. It mutates with every interaction. The system that handles your next prompt has been permanently changed by every prompt that came before it.

It has four components:

Emotional State

Mini Me has emotions. Not as a gimmick — as a functional mechanism that colours every downstream decision.

GRATIFICATION
  Trigger: output accepted unchanged, tests pass, "perfect"
  Effect:  reinforce the pattern that led here (2.5x weight)
           warm lift in energy — seek similar opportunities

WORRY
  Trigger: same error appearing 3rd time, deadline + blockers
  Effect:  raise arousal — must act proactively
           flag in world model — this needs attention
           pre-compute solutions without being asked

SORRY
  Trigger: output significantly edited, "no that's wrong"
  Effect:  deprecate the RAG docs that led to the error
           trigger self-review: what did I miss?
           lower confidence on similar future queries

CURIOSITY
  Trigger: new pattern never seen, unfamiliar territory
  Effect:  exploration spike — pre-fetch related knowledge
           energy lifts — engage more deeply

EXCITEMENT
  Trigger: breakthrough found, novel solution
  Effect:  strong positive signal — amplify this direction


Each emotion has an intensity (0→1) and its own decay curve. Gratification fades in hours. Worry about a recurring bug persists for days. Core frustrations can last weeks.

Crucially: emotions don’t just get logged. They weight every RAG retrieval, every response generation, every conflict resolution. A worried Mini Me responds differently to the same query than a calm Mini Me.
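One way to sketch this is an emotion object with an intensity and a per-emotion half-life feeding a retrieval multiplier. The half-life values and weighting constants below are my assumptions, chosen only to match the description (gratification fades in hours, worry persists for days):

```python
import math
from dataclasses import dataclass

# Assumed half-lives, consistent with the decay behaviour described above.
HALF_LIFE_HOURS = {"gratification": 6, "worry": 72, "sorry": 12,
                   "curiosity": 24, "excitement": 8}

@dataclass
class Emotion:
    name: str
    intensity: float      # 0 → 1 at the moment of the trigger
    age_hours: float = 0.0

    def current(self) -> float:
        """Exponentially decayed intensity, same curve shape as memory vitality."""
        hl = HALF_LIFE_HOURS[self.name]
        return self.intensity * math.exp(-0.693 * self.age_hours / hl)

def retrieval_weight(base_score: float, emotions: list["Emotion"]) -> float:
    """Emotions colour retrieval: worry boosts urgent context, sorry lowers confidence."""
    w = base_score
    for e in emotions:
        if e.name == "worry":
            w *= 1.0 + 0.5 * e.current()
        elif e.name == "sorry":
            w *= 1.0 - 0.3 * e.current()
    return w
```

With this shape, the same document scores differently for a worried system than for a calm one, which is exactly the behavioural difference claimed above.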

User Model

Built from zero on day one. Grows with every interaction. Never manually configured.

style_fingerprint:
  how this person writes
  formal or casual? verbose or terse?
  do they use bullet points or paragraphs?

vocabulary_map:
  words they use and avoid
  technical depth they operate at
  jargon they're comfortable with

expertise_topology:
  strong areas (where they move fast)
  blind spots (where they ask the same question multiple times)
  growth edges (new territory they're exploring)

work_rhythm:
  when they're sharp (morning deep work?)
  when they're tired (3pm slump?)
  when they're creative vs methodical

frustration_map: what consistently triggers negative signals
delight_map:     what consistently produces gratification


After a week of interaction, Mini Me knows your communication style better than most colleagues do. After a month, it knows your expertise map, your rhythm, your triggers. It adapts everything — vocabulary, response length, level of explanation, tone — to match what works for you specifically.
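A simple way to sketch the gradual refinement is an exponential moving average over style features. The feature names and the `alpha` smoothing factor are illustrative, not the project's real user-model code:

```python
def update_fingerprint(fingerprint: dict[str, float],
                       observation: dict[str, float],
                       alpha: float = 0.1) -> dict[str, float]:
    """Refine the user's style fingerprint with an exponential moving average.

    Each observation (e.g. a verbosity or formality score extracted from
    one prompt) nudges the stored value by `alpha`, so the model adapts
    gradually with every interaction, never abruptly.
    """
    for feature, value in observation.items():
        prev = fingerprint.get(feature, value)  # first sighting seeds the value
        fingerprint[feature] = (1 - alpha) * prev + alpha * value
    return fingerprint
```

After a week of prompts the averages settle near the user's true habits; a one-off out-of-character prompt barely moves them.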

Character Models

This is the piece that surprised me most when I designed it.

Mini Me doesn’t just learn about you. It learns about the people in your world.

Every time you mention someone — in a prompt, in a commit message, in a Slack thread it’s watching — it updates that character’s model:

# You type to opencode:
"Sarah is going to want more tests before she approves this"

# Mini Me updates Sarah's character RAG:
{
  "name": "Sarah",
  "trait": "cautious, test-driven, approval-gated",
  "context": "code review process",
  "updated": now
}


# Two weeks later, when you ask for a code review, Mini Me responds:

"This is solid. Tom will ship it immediately. Sarah will want to see
test coverage on the edge cases first — specifically the null handling
on line 47. The CTO will ask about the auth implications before approving."


Nobody told Mini Me to think this way. It learned it from watching you talk about these people over two weeks.

Each character gets their own MiniRAG store, with the same decay curves as everything else. Characters you mention often stay vivid. Characters you haven’t mentioned in months fade.
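A minimal sketch of the per-character store, using a plain dict in place of a real MiniRAG store; `record_mention` and its fields are hypothetical names mirroring the JSON record above:

```python
import time
from collections import defaultdict

# One store per character; the real system uses a MiniRAG store with
# the same decay curves as every other memory.
character_stores: dict[str, list[dict]] = defaultdict(list)

def record_mention(name: str, trait: str, context: str) -> None:
    """Append an observation to the named character's store."""
    character_stores[name].append({
        "trait": trait,
        "context": context,
        "updated": time.time(),   # decay runs off this timestamp
    })

record_mention("Sarah", "cautious, test-driven, approval-gated",
               "code review process")
```

Frequent mentions keep a character's records fresh under the decay curve; months of silence and the character fades, exactly as described.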

Learning Engine — The Dynamic Mutation Core

Every interaction produces a Learning Signal. These signals don’t just update what Mini Me knows. They reshape how it responds.

Signal      Source                     Effect on System
──────────  ─────────────────────────  ──────────────────────────────────
ACCEPTED    output used unchanged      reinforce pattern 2.0x
EDITED      output modified            learn the correction 3.0x
                                       store the edit, not the original
                                       deprecate what was wrong
REPEATED_Q  asked same thing twice     flag comprehension gap
                                       try a completely different
                                       approach next time
PRAISED     "perfect" / "exactly" /    strong reinforce 2.5x
            "yes that's it"
REJECTED    "no" / "wrong" /           deprecate 0.3x
            significant rewrite        self-review triggered
TESTS_PASS  code worked in prod        verify the pattern 2.0x
TESTS_FAIL  code broke                 question the pattern 0.5x
STYLE_OBS   every prompt analysed      update user fingerprint
                                       style model refined


The mutation is cumulative and permanent. Mini Me on day 30 has fundamentally different retrieval patterns, response tendencies, and style calibrations than Mini Me on day 1. Not because someone reconfigured it. Because 30 days of interactions reshaped it.

This is what separates it from every RAG system I’ve seen. RAG systems update what they know. Mini Me updates how it thinks.
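The table's multipliers can be sketched as a single weight update per document. The `weight` field and the `apply_signal` helper are hypothetical; only the multiplier values come from the table above:

```python
# Multipliers from the learning-signal table.
SIGNAL_WEIGHT = {
    "ACCEPTED": 2.0, "EDITED": 3.0, "PRAISED": 2.5,
    "REJECTED": 0.3, "TESTS_PASS": 2.0, "TESTS_FAIL": 0.5,
}

def apply_signal(doc: dict, signal: str) -> dict:
    """Mutate a RAG document's retrieval weight in place.

    The change is cumulative and permanent: a document praised twice and
    rejected once carries 2.5 * 2.5 * 0.3 of its original weight forever,
    which is what biases future retrieval ranking.
    """
    doc["weight"] = doc.get("weight", 1.0) * SIGNAL_WEIGHT.get(signal, 1.0)
    return doc
```

Because the multiplication never resets, thirty days of signals compound into the "fundamentally different retrieval patterns" claimed above.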

Layer 3 — Consciousness (consciousness.py)

The brain loop that never stops.

Every tick — running every 2 seconds at peak engagement, every 30 seconds when dormant — the consciousness layer:

  • Decays energy naturally (like tiredness)

  • Reads the world model

  • Generates spontaneous thoughts

  • Scans agent beliefs for conflicts

  • Resolves high-tension conflicts via LLM judge

  • Decides whether to act proactively

  • Updates predictions for what comes next

The Energy System

Arousal (0→1) controls everything: how fast the brain loop ticks, how aggressively Mini Me pre-computes, how deeply it explores vs converges.

HYPERFOCUS  0.85–1.00   2s ticks   error detected, user query
ENGAGED     0.65–0.85   5s ticks   active coding session
ALERT       0.35–0.65  10s ticks   normal work
QUIET       0.15–0.35  20s ticks   slowing down
DORMANT     0.00–0.15  30s ticks   overnight, you're away

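The thresholds in the table translate directly into a tick-rate function. A minimal sketch, assuming arousal is already computed elsewhere:

```python
def tick_interval(arousal: float) -> int:
    """Map arousal (0 → 1) to the brain loop's tick rate, per the band table."""
    if arousal >= 0.85:
        return 2    # HYPERFOCUS: error detected, user query
    if arousal >= 0.65:
        return 5    # ENGAGED: active coding session
    if arousal >= 0.35:
        return 10   # ALERT: normal work
    if arousal >= 0.15:
        return 20   # QUIET: slowing down
    return 30       # DORMANT: overnight, you're away
```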

Emotions from the psyche layer feed directly into energy:

  • Gratification → warm lift, sustained engagement

  • Worry → sharp arousal spike, must act

  • Sorry → brief dip, self-review mode

  • Curiosity → exploration lift, longer engagement

The Conflict Engine

When agents hold contradictory beliefs — planning says “deploy now”, safety says “too risky” — tension builds until a resolution is forced.

The resolution is a real Claude API call. A judge prompt receives both agents’ beliefs plus their supporting RAG context and returns:

{
  "winner": "safety",
  "reason": "unresolved risks cannot be overridden by urgency",
  "synthesis": "Deploy should proceed after addressing the three
                specific risks identified. The timeline pressure
                is noted but insufficient to override them.",
  "confidence": 0.87
}


The synthesis — not just the winner’s belief — gets written to both agents’ RAG stores. Both agents learn the nuanced truth. The conflict makes both of them smarter.

This is the mechanism that produces emergent wisdom. The system arrives at conclusions that neither agent held alone, through genuine reasoning about their disagreement.
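A sketch of the two halves of that mechanism: assembling the judge prompt and writing the synthesis back to both stores. The actual Claude API call is omitted, `build_judge_prompt` and `resolve` are illustrative names, and only the JSON verdict shape comes from the example above:

```python
import json

def build_judge_prompt(belief_a: dict, belief_b: dict) -> str:
    """Assemble the judge prompt from both agents' beliefs and RAG context."""
    return (
        "Two agents disagree. Decide a winner, then synthesise.\n"
        f"Agent {belief_a['agent']}: {belief_a['belief']} "
        f"(context: {belief_a['context']})\n"
        f"Agent {belief_b['agent']}: {belief_b['belief']} "
        f"(context: {belief_b['context']})\n"
        'Reply as JSON with keys: winner, reason, synthesis, confidence.'
    )

def resolve(raw_reply: str, stores: dict[str, list[str]],
            agents: tuple[str, str]) -> dict:
    """Parse the judge's verdict and write the synthesis to BOTH stores."""
    verdict = json.loads(raw_reply)
    for agent in agents:               # both agents learn the nuanced truth
        stores[agent].append(verdict["synthesis"])
    return verdict
```

Writing the synthesis rather than the winner's raw belief is the key design choice: the losing agent is corrected, and the winning agent still absorbs the caveats.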

Layer 4 — Memory (rag_engine.py)

Eight specialised agents, each with their own isolated RAG store. Plus one store per character in the user’s world.

The memory system mimics human forgetting using the Ebbinghaus decay curve:

vitality = e^(-0.693 × age_days / half_life_days)


At exactly one half-life, vitality hits 0.5. Always. Every agent.
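The formula translates directly into code:

```python
import math

def vitality(age_days: float, half_life_days: float) -> float:
    """Ebbinghaus-style decay: at exactly one half-life, vitality is 0.5."""
    return math.exp(-0.693 * age_days / half_life_days)
```

(0.693 is ln 2 to three places, which is what pins the half-life point at exactly 0.5 regardless of the domain's half-life.)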

Different domains decay at different rates:

Sensor signals        1 day   (environmental context expires fast)
Calendar              3 days  (schedule fades when events pass)
Planning              7 days  (tasks complete, context moves on)
Personal memory      14 days  (moderate fade)
Technical knowledge  21 days
Style preferences    30 days  (stable but not permanent)
Format preferences   60 days  (rarely change)
Safety policies      90 days  (near permanent)


Documents aren’t just stored and retrieved. They have vitality — a score that combines:

  • Time since creation (decay)

  • Time since last accessed (recency boost)

  • Number of times retrieved (frequency boost)

  • Outcome of interactions it contributed to (reinforcement)

Stale docs rank lower in retrieval even if textually relevant. A correct answer from six months ago that’s never been accessed loses authority to a fresher, frequently-used answer on the same topic.

Pinned documents — anything tagged as preference, policy, or identity — never decay. These are the things that make Mini Me you.

The Dynamic Mutation — How It Actually Works

Traditional AI tools:

Input → [Fixed Model] → Output


Same model tomorrow as today.

Mini Me:

Input
  → Psyche reads emotional state
  → User model shapes interpretation
  → Emotionally weighted RAG retrieval
  → Character models inform framing
  → Response generated
  → Learning signal extracted
  → Psyche mutated
  → RAG stores updated with emotional weighting
  → User model refined
  → System that handles NEXT input is different


The mutation is real and measurable. After 10 interactions, retrieval patterns have shifted. After 100, response style has adapted. After 1000, the system has a nuanced model of you, your team, your work patterns, and your preferences that no static configuration could produce.

Nobody programmed the adapted behaviour. It emerged from the interactions. From the person.

What Makes This Different

The closest things that exist today:

System          Memory         Emotions  Mutation  Local    Characters
ChatGPT         Session only   ❌        ❌        ❌       ❌
GitHub Copilot  Codebase only  ❌        ❌        Partial  ❌
Cursor          Codebase only  ❌        ❌        Partial  ❌
MemGPT          Cross-session  ❌        ❌        ❌       ❌
AutoGen         Conversation   ❌        Partial   ❌       ❌
Mini Me         Living decay   ✅        ✅        ✅       ✅

The combination doesn’t exist. Individual pieces do. The integration — emotional weighting driving mutation of retrieval behaviour, character models of real people in your world, emergent wind-down from signal silence, conflict resolution that teaches both agents — nobody has assembled this.

The Security Answer

The most common first question: “This watches everything — isn’t that a massive security risk?”

The answer is local-first architecture.

Every RAG store lives on your disk. Every computation runs on your machine. Nothing is transmitted to any server. The psyche model, the character models, the conversation logs — all local, all yours, all private.

The open source nature is the audit mechanism. Anyone can read the code and verify what goes where. There are no hidden syncs, no telemetry, no training on your data.

For enterprise: the local-first model is actually stronger than cloud-based alternatives. Your code never leaves your machine. Your Jira tickets, Slack messages, and email threads are processed locally and stored locally. The AI gets smarter without your data going anywhere.

Build Status

This isn’t a concept. It’s being built.

Complete and tested:

  • rag_engine.py — living memory with decay, boost, eviction, persistence (22/23 tests)

  • agents.py — 8 specialised agents with isolated RAG stores

  • consciousness.py — energy system, conflict engine with LLM judge, brain loop (40/40 tests)

  • server.py — Flask REST API

  • MiniMe.jsx — React frontend with real Claude API calls per agent

In active development:

  • psyche.py — the dynamic mutation core (the hard part)

  • observer.py — senses layer with conversation stream

  • mcp_server.py — opencode and claude-code integration

The POC target: prove that after 10 interactions, the system measurably responds differently than after 1. Demonstrate the mutation is real, the emotional weighting changes retrieval, and the character models produce responses that feel genuinely personal.
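The measurement itself can be sketched with a toy stand-in for the real stack, where reinforcement shifts retrieval ranking. `ToySystem` and `mutation_is_measurable` are purely illustrative, not the project's test code:

```python
class ToySystem:
    """Toy stand-in for the psyche/RAG stack: interactions reinforce docs."""
    def __init__(self):
        self.weights = {"auth bug notes": 1.0, "deploy checklist": 1.0}

    def retrieve(self, query: str) -> list[str]:
        # Ranking by accumulated weight; ties keep insertion order.
        return sorted(self.weights, key=self.weights.get, reverse=True)

    def interact(self, query: str) -> None:
        if query in self.weights:
            self.weights[query] *= 2.0   # ACCEPTED-style reinforcement

def mutation_is_measurable(system, reinforced: str, probes: list[str]) -> bool:
    """The POC check: does retrieval ranking differ after 10 interactions?"""
    before = [system.retrieve(q) for q in probes]
    for _ in range(10):
        system.interact(reinforced)      # ten ACCEPTED signals on one doc
    after = [system.retrieve(q) for q in probes]
    return before != after
```

The real proof point is the same comparison run against the actual psyche: identical probes before and after, with the diff attributable only to accumulated interactions.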

The One-Line Pitch

“The first AI that doesn’t just know you — it becomes you.”

What Comes Next

The GM briefing is the tracer bullet — the first end-to-end demo that touches every layer. You open your terminal in the morning, type two letters, and Mini Me tells you everything that happened overnight across Git, Jira, Slack, and email. No commands. No configuration. It was watching while you slept and it already has the answer.

But that’s just the first visible output of a much deeper system. The real story is the psyche — the thing that makes Mini Me genuinely yours after thirty days of working together.

I’ll be documenting the build publicly. The architecture decisions, the failures, the moments where the theory meets reality.

If this resonates — if you’ve felt the pain of rebuilding context every morning, if you’ve wanted an AI that actually knows you — follow along.

Mini Me is open source and in active development. Architecture feedback welcome in the comments. The hardest part — psyche.py — gets built next.
