b8637: model, mtmd: fix gguf conversion for audio/vision mmproj (#21309)
fix gguf conversion for audio/vision mmproj; fix test





