Google’s Gemma AI models surpass 150M downloads - TechCrunch
https://news.google.com/rss/articles/CBMiiAFBVV95cUxPNVJHS3BPR1lHZTJ3cVkyc3ZRWFVzR2hLQzVBVURmTWlfamVKcVRvTU5ZWjRKRkVmLWJKMHp4bWhKYWtYV0xXTklMT2xFcHJjVkx2ZUhRYUhzdVRqak9YdVlyQ0syVGZELUhvQ0lrSDJwS0dfR3R5WG01Ymw2akp2TmEtdWRMUTRo?oc=5

More in Models

Training mRNA Language Models Across 25 Species for $165
We built an end-to-end protein AI pipeline covering structure prediction, sequence design, and codon optimization. After comparing multiple transformer architectures for codon-level language modeling, CodonRoBERTa-large-v2 emerged as the clear winner, with a perplexity of 4.10 and a Spearman CAI correlation of 0…
https://www.reddit.com/r/LocalLLaMA/comments/1s912ur/training_mrna_language_models_across_25_species/
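
The two metrics quoted are standard ones: perplexity is the exponential of the mean per-codon cross-entropy, and the CAI correlation is a Spearman rank correlation between per-sequence model scores and reference Codon Adaptation Index values. A minimal sketch of how both are computed, using placeholder arrays (the data and values below are hypothetical, not from the post):

import numpy as np
from scipy.stats import spearmanr

def perplexity(token_nll):
    """Perplexity is exp of the mean per-token negative log-likelihood (nats)."""
    return float(np.exp(np.mean(token_nll)))

# Placeholder evaluation data; a real run would use held-out mRNA sequences.
rng = np.random.default_rng(0)
nll = rng.uniform(1.2, 1.6, size=10_000)   # per-codon NLLs from the model
model_scores = rng.random(500)             # e.g. mean log-prob per sequence
reference_cai = rng.random(500)            # Codon Adaptation Index per sequence

print("perplexity:", perplexity(nll))      # exp(1.4) ~ 4.06, near the quoted 4.10
rho, p = spearmanr(model_scores, reference_cai)
print(f"Spearman CAI correlation: {rho:.3f}")
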
[llama.cpp] New TurboQuant 3-bit KV Cache is insane! 17 t/s on Nemotron 30B using only 8GB VRAM (Full Windows/MSVC Build Guide + Auto-Script)
Hi everyone. If you a…
https://www.reddit.com/r/LocalLLaMA/comments/1s931oz/llamacpp_new_turboquant_3bit_kv_cache_is_insane/
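
The headline claim is at least plausible on back-of-envelope math: KV-cache memory scales linearly with bits per element, so dropping from fp16 to roughly 3 bits shrinks the cache about fivefold. A rough sketch, where the layer and head dimensions are assumptions for a generic 30B-class model with grouped-query attention (not confirmed Nemotron specs), and quantized formats pay a small overhead for per-block scales:

def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, bits_per_elem):
    """Approximate KV-cache size: one K and one V tensor per layer."""
    elems = 2 * n_layers * n_kv_heads * head_dim * ctx_len
    return elems * bits_per_elem / 8 / 2**30

# Assumed dimensions for a generic 30B-class model.
cfg = dict(n_layers=48, n_kv_heads=8, head_dim=128, ctx_len=32_768)

for label, bits in [("fp16", 16), ("8-bit (+ scales)", 8.5), ("3-bit (+ scales)", 3.5)]:
    print(f"{label:>18}: {kv_cache_gib(**cfg, bits_per_elem=bits):5.2f} GiB")

Under these assumptions the cache drops from about 6.0 GiB at fp16 to about 1.3 GiB at ~3.5 effective bits, which is what makes a 30B model with a long context viable in an 8GB VRAM budget once the weights themselves are also quantized.
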
You guys seen this? 1-bit model with an MMLU-R of 65.7, 8B params
This is nuts: prism-ml/Bonsai-8B-gguf on Hugging Face (https://huggingface.co/prism-ml/Bonsai-8B-gguf). Has anyone tested this thing?
Submitted by /u/OmarBessa
https://www.reddit.com/r/LocalLLaMA/comments/1s91jxl/you_guys_seen_this_1bit_model_with_an_mmlur_of/
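
For context on why a 1-bit 8B model is striking: binarized weights store only a sign per parameter plus a shared floating-point scale, so 8B parameters fit in roughly 1 GB instead of ~16 GB at fp16. A toy sketch of BitNet-style binarization (the post does not say which scheme Bonsai actually uses):

import numpy as np

def binarize(w):
    """1-bit quantization: keep sign(w) plus one scale (mean |w|), BitNet-style."""
    alpha = np.abs(w).mean()
    return np.sign(w).astype(np.int8), alpha

rng = np.random.default_rng(0)
W = rng.normal(size=(1024, 1024)).astype(np.float32)
x = rng.normal(size=(1, 1024)).astype(np.float32)

w_sign, alpha = binarize(W)
approx = (x @ w_sign) * alpha   # matmul against signs needs only add/subtract
exact = x @ W
print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))
# Storage for this one matrix: 1024*1024 bits = 128 KiB vs 2 MiB in fp16.

Note the large error: post-hoc binarization like this loses far too much accuracy, so real 1-bit models are trained quantization-aware, which is exactly what would make a 65.7 MMLU-R at 8B params notable if it holds up.
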
[P] Looking for people who have had training runs fail unexpectedly to beta test a stability monitor. Free, takes 5 minutes to add to your existing loop. DM me.
Anyone actively training models want to try a stability monitor on a real run? Trying to get real-world validation outside my own benchmarks.
Submitted by /u/Turbulent-Tap6723
https://www.reddit.com/r/MachineLearning/comments/1s93kzm/p_looking_for_people_who_have_had_training_runs/
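
The post doesn't describe how the monitor works, but the simplest version of the idea is a rolling-statistics spike detector on the training loss, which really can be added to an existing loop in a few minutes. A minimal sketch (the class and handler names are hypothetical, not the poster's tool):

from collections import deque
import math

class LossSpikeMonitor:
    """Flags a step when the loss jumps k sigma above its recent rolling mean."""

    def __init__(self, window=200, k=4.0, warmup=20):
        self.losses = deque(maxlen=window)
        self.k, self.warmup = k, warmup

    def update(self, loss):
        spike = False
        if len(self.losses) >= self.warmup:   # wait until a baseline exists
            mean = sum(self.losses) / len(self.losses)
            var = sum((v - mean) ** 2 for v in self.losses) / len(self.losses)
            spike = loss > mean + self.k * math.sqrt(var + 1e-12)
        self.losses.append(loss)
        return spike

# Inside an existing training loop:
# if monitor.update(loss.item()):
#     checkpoint_and_alert()   # hypothetical handler: save state, notify yourself
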
FOR ME, Qwen3.5-27B is better than Gemini 3.1 Pro and GPT-5.3 Codex
There's something I hate about the big SOTA proprietary models. To make them better for people who don't know how to program, they're optimized to solve problems entirely autonomously. Yeah, this makes people over on /r/ChatGPT soypog when one writes a 7z parser in Python because the binary is missing, but for me this makes them suck. If something isn't matching up, Qwen3.5-27B will just give up. If you're trying to vibecode some slop this is annoying, but for me this is much, much better. I'm forced to use GitHub Copilot in university, and whenever there's a problem, it goes completely off the rails and does some absolute hogwash. For example, it was struggling to write to a file that had some broken permission…