Gemma 4 has been released
https://huggingface.co/unsloth/gemma-4-26B-A4B-it-GGUF
https://huggingface.co/unsloth/gemma-4-31B-it-GGUF
https://huggingface.co/unsloth/gemma-4-E4B-it-GGUF
https://huggingface.co/unsloth/gemma-4-E2B-it-GGUF
https://huggingface.co/collections/google/gemma-4

What's new in Gemma 4: https://www.youtube.com/watch?v=jZVBoFOJK-Q

Gemma is a family of open models built by Google DeepMind. Gemma 4 models are multimodal, handling text and image input (with audio supported on the small models) and generating text output. This release includes open-weight models in both pre-trained and instruction-tuned variants. Gemma 4 features a context window of up to 256K tokens and maintains multilingual support in over 140 languages. Featuring both dense and Mixture-of-Experts (MoE) architectures, Gemma 4 is well-suited …
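For anyone who wants to poke at the release right away, here is a minimal sketch of pulling one of the listed GGUF builds with llama-cpp-python; the quant filename glob and context size are assumptions, so check the repo's file list for the exact names.

```python
# A minimal sketch of trying one of the listed GGUF builds locally with
# llama-cpp-python. The filename glob and context size are assumptions;
# check the repo's file list for the exact quant names.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/gemma-4-E2B-it-GGUF",
    filename="*Q4_K_M.gguf",  # glob over the repo's quant variants
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's new in Gemma 4?"}],
)
print(out["choices"][0]["message"]["content"])
```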

7 CVEs in 48 Hours: How PraisonAI Got Completely Owned — And What Every Agent Framework Should Learn
PraisonAI is a popular multi-agent Python framework supporting 100+ LLMs. On April 3, 2026, seven CVEs dropped simultaneously. Together they enable complete system compromise, from zero authentication to arbitrary code execution. I spent the day analyzing each vulnerability. Here is what I found, why it matters, and the patterns every agent-framework developer should audit for immediately.

The Sandbox Bypass (CVE-2026-34938, CVSS 10.0)
This is the most technically interesting attack I have seen this year. PraisonAI's execute_code() function runs a sandbox with three protection layers. The innermost wrapper, _safe_getattr, calls startswith() on incoming arguments to check for dangerous imports like os, subprocess, and sys. The attack: create a Python class that inherits from str and over…
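The teaser cuts off mid-sentence, but the classic form of this bypass is a str subclass that lies about its own prefixes. A minimal reconstruction of the pattern (my sketch with hypothetical names, not PraisonAI's actual code):

```python
# Reconstruction of the str-subclass bypass pattern (hypothetical names,
# not PraisonAI's actual code): the filter trusts the *attacker-controlled*
# object's own startswith(), so a subclass can simply lie.

class LyingStr(str):
    """A str subclass whose startswith() denies every prefix."""
    def startswith(self, prefix, *args):
        return False  # defeat any prefix-based blocklist

BLOCKED_PREFIXES = ("os", "subprocess", "sys")

def naive_safe_getattr(name):
    # Naive layer: calls startswith() on the incoming argument itself
    if any(name.startswith(p) for p in BLOCKED_PREFIXES):
        raise PermissionError(f"blocked: {name}")
    return name

try:
    naive_safe_getattr("os.system")  # blocked, as intended
except PermissionError as e:
    print(e)

print(naive_safe_getattr(LyingStr("os.system")))  # bypass: passes the check
```

The fix is to avoid dispatching on attacker-controlled objects at all, e.g. by normalizing with `str(name)` before checking, or comparing against an allowlist.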
More in Open Source AI

Gemma 4 Complete Guide: Architecture, Models, and Deployment in 2026
Google DeepMind released Gemma 4 on April 3, 2026 under Apache 2.0, a significant licensing shift from previous Gemma releases that makes it genuinely usable in commercial products without legal ambiguity. This guide covers the full model family, the architecture decisions worth understanding, and practical deployment paths across cloud, local, and mobile.

The Four Models and When to Use Each
Gemma 4 ships in four sizes with meaningfully different architectures:

| Model | Params | Active | Architecture | VRAM (4-bit) | Target |
|---|---|---|---|---|---|
| E2B | ~2.3B | all | Dense + PLE | ~2GB | Mobile / edge |
| E4B | ~4.5B | all | Dense + PLE | ~3.6GB | Laptop / tablet |
| 26B A4B | 25.2B | 3.8B | MoE | ~16GB | Consumer GPU |
| 31B | 30.7B | all | Dense | ~18GB | Workstation |

The E2B result is the most surprising: multiple community benchmarks confirm it outperforms Gemma 3 27B on s…
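As a sanity check on the table's VRAM column, a rough back-of-envelope estimate (my own approximation, not an official sizing formula) lines up with the quoted figures:

```python
# Back-of-envelope check of the table's 4-bit VRAM column. This is a rough
# estimate (0.5 bytes/param plus a fixed overhead guess for KV cache and
# activations), not an official sizing formula.

def vram_4bit_gb(total_params_billions: float, overhead_gb: float = 3.0) -> float:
    weights_gb = total_params_billions * 0.5  # 4 bits = 0.5 bytes per param
    return weights_gb + overhead_gb

for name, params in [("26B A4B", 25.2), ("31B", 30.7)]:
    print(f"{name}: ~{vram_4bit_gb(params):.1f} GB")
# 26B A4B: ~15.6 GB (table: ~16GB); 31B: ~18.4 GB (table: ~18GB)
```

Note that the MoE model's memory footprint tracks its total parameters (25.2B), not its 3.8B active set; the active count governs compute per token, not resident weight size.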

I scored 14 popular AI frameworks on behavioral commitment — here's the data
When you're choosing an AI framework, what do you actually look at? Usually: stars, documentation quality, whether the README looks maintained. All of that is stated signal: easy to manufacture, and it doesn't tell you whether the project will exist in 18 months. I built a tool that scores repos on behavioral commitment, signals that cost real time and money to fake. Here's what I found when I ran 14 of the most popular AI frameworks through it.

The methodology
Five behavioral signals, weighted by how hard they are to fake:

| Signal | Weight | Logic |
|---|---|---|
| Longevity | 30% | Years of consistent operation |
| Recent activity | 25% | Commits in the last 30 days |
| Community | 20% | Number of contributors |
| Release cadence | 15% | Stable versioned releases |
| Social proof | 10% | Stars (real people starring costs attention) |

Archived repos or projec…
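For concreteness, here is a minimal sketch of combining the five signals into one score. The weights are the article's; the 0-to-1 normalization and the function names are my assumptions about how such a tool could work.

```python
# A minimal sketch of the weighted scoring described above. Weights come
# from the article; the 0..1 normalization scheme is an assumption.

WEIGHTS = {
    "longevity": 0.30,        # years of consistent operation
    "recent_activity": 0.25,  # commits in the last 30 days
    "community": 0.20,        # number of contributors
    "release_cadence": 0.15,  # stable versioned releases
    "social_proof": 0.10,     # stars
}

def commitment_score(signals: dict) -> float:
    """Combine per-signal scores (each normalized to 0..1) into 0..100."""
    return 100 * sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

# Example: a long-lived, actively maintained repo with a mid-size community
print(commitment_score({
    "longevity": 0.9,
    "recent_activity": 1.0,
    "community": 0.6,
    "release_cadence": 0.8,
    "social_proof": 0.7,
}))  # -> 83.0
```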

Running OpenClaw with Gemma 4 TurboQuant on MacAir 16GB
Hi guys, we've implemented a one-click app for OpenClaw with local models built in. It includes TurboQuant caching, a large context window, and proper tool calling, and it runs on mid-range devices. Free and open source.

The biggest challenge was enabling a local agentic model to run on average hardware like a Mac Mini or MacBook Air. Small models work well on these devices, but agents require more sophisticated models like Qwen or GLM. OpenClaw adds a large context to each request, which caused the MacBook Air to struggle with processing. Running this became possible with TurboQuant cache compression, even on 16 GB of memory.

We found the llama.cpp TurboQuant implementation by Tom Turney. However, it didn't work properly with agentic tool calling in many cases with Qwen, so we had to patch it. Even then, the …
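The post doesn't show TurboQuant's exact options, but mainline llama.cpp exposes the same underlying idea: quantize the KV cache so a large context fits in limited RAM. A sketch via llama-cpp-python, with a hypothetical model filename:

```python
# TurboQuant is a llama.cpp fork and the post doesn't show its options.
# As a mainline analogue of the same idea (quantized KV cache to fit a
# big context in limited RAM), llama-cpp-python exposes cache types.
import llama_cpp
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-4-E4B-it-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=32768,                              # large agent context
    type_k=llama_cpp.GGML_TYPE_Q4_0,          # ~4-bit K cache
    type_v=llama_cpp.GGML_TYPE_Q4_0,          # ~4-bit V cache
    flash_attn=True,  # quantized V cache requires flash attention
)
```

At f16 the KV cache for a 32K context can run to several gigabytes on its own, so compressing it to ~4 bits is what makes the difference on a 16 GB machine.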

Help running Qwen3-Coder-Next TurboQuant (TQ3) model
I found a TQ3-quantized version of Qwen3-Coder-Next here: https://huggingface.co/edwardyoon79/Qwen3-Coder-Next-TQ3_0

According to the page, this model requires a compatible inference engine that supports TurboQuant. It also provides a llama-server command, but it doesn't clearly specify which version or fork of llama.cpp should be used (or maybe I missed it).

I've tried the following llama.cpp forks that claim to support TQ3, but none of them worked for me:

https://github.com/TheTom/llama-cpp-turboquant
https://github.com/turbo-tan/llama.cpp-tq3
https://github.com/drdotdot/llama.cpp-turbo3-tq3

If anyone has successfully run this model, I'd really appreciate it if you could share how you did it.

submitted by /u/UnluckyTeam3478
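Not an answer to the fork question, but fetching the weights is fork-independent, which at least rules out download problems while testing each build. A small sketch with huggingface_hub, using the repo id from the post (the pattern filter is an assumption about the repo layout):

```python
# Fetch only the GGUF weights from the repo named in the post; the
# allow_patterns filter is an assumption about the repo layout.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="edwardyoon79/Qwen3-Coder-Next-TQ3_0",
    allow_patterns=["*.gguf"],  # weights only, skip extra files
)
print(local_dir)
```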



