Google Open Sources Gemma 4 For Private Local AI Workloads - Open Source For You

More about: open source
ARCUS-H: Full Evaluation Results — 979,200 Episodes, 51 RL Policies
We completed a large behavioral stability evaluation of trained RL policies: 979,200 evaluation episodes across 51 policy configurations, 12 environments, 8 algorithms, and 8 structured stress schedules. Here are three findings that matter for deployment. Finding 1: Reward explains 5.7% of behavioral stability variance. The primary correlation between ARCUS-H stability scores and normalized reward is r = +0.240 [0.111, 0.354], p = 1.1×10⁻⁴ (n = 255 policy-level observations, 2,550 seed-level). R² = 0.057. 94.3% of the variance in how a policy behaves under sensor noise, actuator failure, or reward corruption is not captured by its return in clean conditions. 87% of policies rank differently under ARCUS-H vs. reward rankings, with a mean rank shift of 74.4 positions. Finding 2: SAC's e…
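The arithmetic behind Finding 1 can be sketched directly: with r = 0.240, the explained share of variance is r², and the unexplained share is 1 − r². A minimal sketch (not the authors' code; the data here is synthetic, and `pearson_r` is an illustrative helper):

```python
# Sketch: Pearson correlation between clean-condition reward and a
# stability score, and the share of stability variance left unexplained.
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient over two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic example: reward only weakly tracks stability.
reward    = [0.1, 0.4, 0.35, 0.8, 0.6, 0.2, 0.9, 0.5]
stability = [0.5, 0.2, 0.7, 0.4, 0.9, 0.3, 0.6, 0.8]

r = pearson_r(reward, stability)
unexplained = 1 - r ** 2
# At the reported r = 0.240, 1 - r^2 ≈ 0.94: roughly 94% of stability
# variance is not captured by clean-condition return.
```

This is why the rank shifts in the excerpt are so large: a metric that explains only a few percent of the variance will reorder most of the leaderboard.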

I Renamed All 43 Tools in My MCP Server. Here's Why I Did It Now.
Charlotte has 111 stars. That's not a lot, but it's enough that a breaking change will annoy real people. I shipped one anyway. The naming problem: when I started building Charlotte in February, I named every tool with a colon separator: charlotte:navigate, charlotte:observe, charlotte:click. It looked clean. It felt namespaced. Every tool call in every session used it. The problem: the MCP spec restricts tool names to [A-Za-z0-9_.-]. The colon character isn't in that set. It never was. I either didn't check or didn't care at the time. The MCP SDK was lenient about it until v1.26.0, which started emitting validation warnings on every tool registration. I had two options: fix it now with 111 stars and a handful of active users, or fix it later with more stars, more users, more documentat…
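The check the SDK now enforces is just a character-class match. A minimal sketch of that validation, with a colon-to-underscore rename as one possible migration (the rename scheme is illustrative, not Charlotte's actual migration code):

```python
# Validate tool names against the MCP spec's allowed character set
# [A-Za-z0-9_.-], and rewrite the illegal colon separator.
import re

VALID_TOOL_NAME = re.compile(r"^[A-Za-z0-9_.-]+$")

def is_valid_tool_name(name: str) -> bool:
    """True if the name uses only spec-allowed characters."""
    return bool(VALID_TOOL_NAME.fullmatch(name))

def migrate_name(name: str) -> str:
    """Replace the colon separator with an underscore, which is allowed."""
    return name.replace(":", "_")

assert not is_valid_tool_name("charlotte:navigate")            # colon rejected
assert is_valid_tool_name(migrate_name("charlotte:navigate"))  # charlotte_navigate
```

Running every registered name through a check like this at startup catches the problem before the SDK's validation warnings do.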
More in Releases

Claude Code subagent patterns: how to break big tasks into bounded scopes
Claude Code Subagent Patterns: How to Break Big Tasks into Bounded Scopes. If you've ever given Claude Code a massive task — "refactor the entire auth system" — and watched it spiral into confusion after 20 minutes, you've hit the core problem: unbounded scope kills context. The solution is subagent patterns: structured ways to decompose work into bounded, parallelizable units. Why big tasks fail in Claude Code: Claude Code has a finite context window. When you give it a large task:
- It reads lots of files → context fills up
- It loses track of what it read first
- It starts making contradictory changes
- You hit the context limit mid-task
- The session crashes and you lose progress
The fix isn't a bigger context window — it's smaller tasks. The subagent pattern: instead of one Claude session doing e…
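The decomposition idea can be sketched in a few lines. This is a hypothetical illustration, not Claude Code's API: each subtask carries an explicit file scope, so each subagent session starts with a fresh, bounded context (`Subtask` and `decompose` are invented names):

```python
# Sketch: turn one unbounded task plus an explicit scope map into
# bounded subtasks, each of which could be handed to its own session.
from dataclasses import dataclass

@dataclass
class Subtask:
    name: str
    files: list  # the bounded scope: only these files enter the context

def decompose(task, scopes):
    """Build one bounded subtask per scope area."""
    return [Subtask(name=f"{task}: {area}", files=files)
            for area, files in scopes.items()]

subtasks = decompose(
    "refactor auth system",
    {
        "token handling": ["auth/tokens.py"],
        "session storage": ["auth/sessions.py", "auth/store.py"],
        "login routes": ["routes/login.py"],
    },
)
# Each subtask loads only its own files, so no single context fills up.
```

The work of the pattern is in writing the scope map well: areas should be small enough to fit in one context and independent enough to run in parallel.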

I Started Building a Roguelike RPG — Powered by On-Device AI #2
Running On-Device LLM in Unity Android — Everything That Broke (and How I Fixed It)
In my last post, I mentioned I was building a roguelike RPG powered by an on-device LLM. This time I'll cover exactly how I did it, what broke, and what the numbers look like. The short version: I got Phi-4-mini running in Unity on a real Android device in one day. It generated valid JSON. It took 8 minutes and 43 seconds.
0. Why This Tech Stack
Before the details, here's why I made each choice. Why Phi-4-mini (3.8B)? Microsoft officially distributes it in ONNX format — no conversion work needed. The INT4 quantized version fits in 4.9GB, which is manageable on a 12GB RAM device. At 3.8B parameters, it's roughly the minimum size that can reliably produce structured JSON output. Smaller models tend to fall ap…
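The generation stack itself (Phi-4-mini in ONNX inside Unity) is out of scope here, but the "did the model emit valid JSON?" check that any such pipeline needs is generic. A sketch, with `generate` standing in for whatever function calls the model:

```python
# Sketch: parse model output as JSON and retry on malformed output.
import json

def generate_valid_json(generate, prompt, retries=3):
    """Call the model, parse its output as JSON, retry on failure."""
    last_error = None
    for _ in range(retries):
        raw = generate(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as e:
            last_error = e  # malformed output: try again
    raise ValueError(f"no valid JSON after {retries} attempts: {last_error}")

# Usage with a stub model whose first output is cut off mid-object:
outputs = iter(['{"hp": 10,', '{"hp": 10, "name": "slime"}'])
result = generate_valid_json(lambda p: next(outputs), "spawn an enemy")
```

On-device this retry loop is expensive (each attempt is another multi-minute generation at the speeds quoted above), which is one reason picking a model that reliably emits structured JSON on the first try matters.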



