Clanker – 452KB emotional scoring engine strapped to Llama-1B
Built a deterministic engine that computes 7D emotional coordinates (VADUGWI) from text structure, hooked up to Llama-3.2-1B in a Gradio Space. The model generates dialogue between two characters; the engine scores every line on 7 dimensions and tracks how each message shifts the other character's emotional baseline. State carries forward through A+B=C transitions.

What the engine does that the model can't:
- Detects 26 structural patterns (VICTIMIZATION, SELF_NULLIFY, SARCASM_INVERSION, CHOPPER_SPLIT, etc.)
- Tracks self-worth (W) separately from valence: blaming yourself reads differently than blaming the world
- Tracks intent direction (I): reaching out vs. pulling away vs. commanding
- Runs at 0.15ms/sentence on CPU, ~452KB total

The Space has two tabs: two characters argue (pick personalit…
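The state-carrying idea above can be sketched in a few lines. This is a minimal illustration, not Clanker's code: the dimension letters follow the VADUGWI label from the post, but the pattern rules, weights, and the `score_line`/`receive` names are invented stand-ins for the actual 26-pattern rule set.

```python
# Hypothetical sketch: per-line 7D scoring plus baseline carry-over (A+B=C).
# Pattern rules and update weights are invented for illustration only.
from dataclasses import dataclass, field

DIMS = ("V", "A", "D", "U", "G", "W", "I")

def score_line(text: str) -> dict:
    """Toy structural scorer with a couple of stand-in patterns."""
    s = {d: 0.0 for d in DIMS}
    low = text.lower()
    if "my fault" in low or "i ruined" in low:
        s["W"] -= 0.6          # self-worth drops: self-blame, not world-blame
        s["V"] -= 0.3
    if "you always" in low or "you never" in low:
        s["V"] -= 0.4          # valence drops, blame aimed outward
        s["I"] -= 0.5          # intent pulls away / accuses
    if low.endswith("?"):
        s["I"] += 0.3          # intent reaches out
    return s

@dataclass
class Character:
    baseline: dict = field(default_factory=lambda: {d: 0.0 for d in DIMS})

    def receive(self, line_scores: dict, alpha: float = 0.3) -> None:
        """A + B = C: new baseline blends prior state with the incoming line."""
        for d in DIMS:
            self.baseline[d] = (1 - alpha) * self.baseline[d] + alpha * line_scores[d]

bob = Character()
bob.receive(score_line("You always do this!"))
print(bob.baseline["V"], bob.baseline["I"])
```

The point of the blend is that a single hostile line shifts the baseline rather than replacing it, so repeated messages accumulate.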
Read on discuss.huggingface.co: https://discuss.huggingface.co/t/clanker-452kb-emotional-scoring-engine-strapped-to-llama-1b/174857

Gemma 4 E2B as a multi-agent coordinator: task decomposition, tool-calling, multi-turn — it works
Wanted to see if Gemma 4 E2B could handle the coordinator role in a multi-agent setup, and not just chat but the actual hard part: take a goal, break it into a task graph, assign agents, call tools, and stitch results together. Short answer: it works. Tested with my framework open-multi-agent (TypeScript, open-source, Ollama via OpenAI-compatible API).

What the coordinator has to do:
- Receive a natural-language goal + agent roster
- Output a JSON task array (title, description, assignee, dependencies)
- Each agent executes with tool-calling (bash, file read/write)
- Coordinator synthesizes all results

Quick note on E2B: "Effective 2B" means 2.3B effective params, 5.1B total. The extra ~2.8B is the embedding layer for 140+ language / multimodal support, so the actual compute is 2.3B.

What I tested: Gav…
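The JSON task array described above implies a dependency-ordered execution step. A minimal sketch of that step, in Python for brevity (open-multi-agent itself is TypeScript, and this is not its code): the field names come from the post, the plan contents and the use of `graphlib` are illustrative.

```python
# Hedged sketch: turning a coordinator-produced task array (title,
# description, assignee, dependencies) into a valid execution order.
import json
from graphlib import TopologicalSorter

# Example plan of the shape the coordinator is asked to emit (contents invented).
plan = json.loads("""[
  {"title": "research", "description": "gather facts", "assignee": "researcher", "dependencies": []},
  {"title": "draft",    "description": "write report", "assignee": "writer",     "dependencies": ["research"]},
  {"title": "review",   "description": "check draft",  "assignee": "critic",     "dependencies": ["draft"]}
]""")

# Map each task to its prerequisite set, then topologically sort.
graph = {t["title"]: set(t["dependencies"]) for t in plan}
order = list(TopologicalSorter(graph).static_order())
print(order)
```

A real coordinator loop would dispatch each task to its assignee as its dependencies complete; the sort above is just the scheduling skeleton.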

Netflix just dropped their first public model on Hugging Face: VOID: Video Object and Interaction Deletion
Hugging Face model: https://huggingface.co/netflix/void-model
GitHub (project page): https://github.com/Netflix/void-model
Demo: https://huggingface.co/spaces/sam-motamed/VOID
submitted by /u/Nunki08

Gemma 4 is fine, great even …
Been playing with the new Gemma 4 models; it's amazing, great even, but boy did it make me appreciate the level of quality the Qwen team produced, and I'm able to have much larger context windows on my standard consumer hardware. submitted by /u/ThinkExtension2328
More in Models


MorphoGuard: A Morphology-Based Whole-Body Interactive Motion Controller
arXiv:2604.01517v1 Announce Type: cross Abstract: Whole-body control (WBC) has demonstrated significant advantages in complex interactive movements of high-dimensional robotic systems. However, when a robot is required to handle dynamic multi-contact combinations along a single kinematic chain, such as pushing open a door with its elbow while grasping an object, it faces major obstacles in terms of complex contact representation and joint configuration coupling. To address this, we propose a new control approach that explicitly manages arbitrary contact combinations, aiming to endow robots with whole-body interactive capabilities. We develop a morphology-constrained WBC network (MorphoGuard), which is trained on a self-constructed dual-arm physical and simulation platform. A series of model r…

HyVGGT-VO: Tightly Coupled Hybrid Dense Visual Odometry with Feed-Forward Models
arXiv:2604.02107v1 Announce Type: new Abstract: Dense visual odometry (VO), which provides pose estimation and dense 3D reconstruction, serves as the cornerstone for applications ranging from robotics to augmented reality. Recently, feed-forward models have demonstrated remarkable capabilities in dense mapping. However, when these models are used in dense visual SLAM systems, their heavy computational burden restricts them to yielding sparse pose outputs at keyframes while still failing to achieve real-time pose estimation. In contrast, traditional sparse methods provide high computational efficiency and high-frequency pose outputs, but lack the capability for dense reconstruction. To address these limitations, we propose HyVGGT-VO, a novel framework that combines the computational efficie…
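The division of labor the abstract describes (a cheap sparse tracker on every frame, a heavy feed-forward dense model only at keyframes) can be shown as a scheduling skeleton. This is an illustration of the idea only, not the paper's tightly coupled method; `run_sparse`, `run_dense`, and the keyframe interval are hypothetical stand-ins.

```python
# Illustrative hybrid-VO scheduling sketch: high-frequency sparse pose
# updates every frame, dense feed-forward mapping only at keyframes.
KEYFRAME_EVERY = 5  # hypothetical keyframe interval

def run_sparse(frame: int) -> str:
    """Stand-in for a fast sparse tracker (runs on every frame)."""
    return f"pose[{frame}]"

def run_dense(frame: int) -> str:
    """Stand-in for a heavy feed-forward dense model (keyframes only)."""
    return f"map[{frame}]"

poses, maps = [], []
for frame in range(12):
    poses.append(run_sparse(frame))      # high-frequency odometry output
    if frame % KEYFRAME_EVERY == 0:
        maps.append(run_dense(frame))    # dense reconstruction, sparse in time

print(len(poses), len(maps))
```

The real contribution is in coupling the two estimates tightly rather than just interleaving them, but the interleave is the computational motivation.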
