SIMA 2: A Gemini-Powered AI Agent for 3D Virtual Worlds - Google DeepMind
SIMA 2: A Gemini-Powered AI Agent for 3D Virtual Worlds — Google DeepMind: https://news.google.com/rss/articles/CBMiqAFBVV95cUxPNUFPX2NVSk5Hc3hDdVVORGFvdnNUR29BMzd6a3JqUFY0cy1WTVRXZkNrNlpIc0s0VGZPazNCbUFDbkQwNW5URTkyUUNnRDV3OVVkMElxOEZPTk44V1VtRUJtellVM0pjbkplT1hFOWlPN29nLTQ0V2U2cGNObjJmUWl4QzA3VFlzb25RUkdoejJFTkg0REhWUk00Z1RLdmt4TDJ0M2JvWHk?oc=5
Could not retrieve the full article text.
Read on Google News: DeepMind

Celebrating Excellence: The 2025 MongoDB Global Partner Awards
In a world being reshaped by AI and rapid technological change, one thing is clear: our partners are powering the future with MongoDB. Together, we help customers modernize legacy systems, solve challenges ranging from security to budget constraints, and build the next wave of AI-powered applications. That is why we are proud to announce the annual MongoDB Global Partner Awards, celebrating the partners who led the way in 2025. From pioneering AI and modernization, to advancing public-sector innovation, to forging bold go-to-market collaborations, these partners set the standard for excellence. Their leadership doesn't just make a difference; it redefines what…
MongoDB.local NYC 2025: Defining the Ideal Database for the AI Era
Yesterday we welcomed thousands of developers and executives to MongoDB.local NYC, the latest stop in our .local series. Over the past year, we have connected with tens of thousands of partners and customers across 20 cities around the world. But it is especially meaningful to be in New York, where MongoDB was founded and where we are still headquartered. During the event, we introduced new capabilities that reinforce MongoDB's position as the world's leading modern database. With MongoDB 8.2, our most feature-rich and best-performing release yet, we are raising the bar for what developers can achieve. We also shared more about our Voyage AI embedding and reranking models, which bring state-of-the-art precision and efficiency…
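As a rough illustration of how embedding models and MongoDB's vector search fit together (this is not code from the announcement; the connection string, collection, index name, and model choice are placeholders), a query flow might look like this:

```python
import voyageai
from pymongo import MongoClient

vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment
# Placeholder URI, database, and collection names.
coll = MongoClient("mongodb+srv://...")["demo"]["articles"]

query = "databases built for AI workloads"
query_vec = vo.embed([query], model="voyage-3").embeddings[0]

# Atlas Vector Search aggregation stage over a pre-built vector index.
results = coll.aggregate([
    {"$vectorSearch": {
        "index": "articles_vector_index",  # assumed index name
        "path": "embedding",
        "queryVector": query_vec,
        "numCandidates": 100,
        "limit": 5,
    }},
    {"$project": {"title": 1, "score": {"$meta": "vectorSearchScore"}}},
])
for doc in results:
    print(doc["title"], doc["score"])
```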
More in Models

Training mRNA Language Models Across 25 Species for $165
We built an end-to-end protein AI pipeline covering structure prediction, sequence design, and codon optimization. After comparing multiple transformer architectures for codon-level language modeling, CodonRoBERTa-large-v2 emerged as the clear winner with a perplexity of 4.10 and a Spearman CAI correlation of 0… — https://www.reddit.com/r/LocalLLaMA/comments/1s912ur/training_mrna_language_models_across_25_species/
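For readers unfamiliar with the two metrics quoted above, here is a minimal sketch of how perplexity and a Spearman correlation against the Codon Adaptation Index (CAI) are typically computed; the scores and CAI values are illustrative, not taken from the post:

```python
import math
from scipy.stats import spearmanr

def perplexity_from_losses(token_losses):
    """Perplexity is exp of the mean per-token cross-entropy loss."""
    return math.exp(sum(token_losses) / len(token_losses))

print(perplexity_from_losses([1.35, 1.42, 1.46, 1.40]))  # ~4.1

# Hypothetical evaluation: model scores vs. reference CAI values for held-out sequences.
model_scores  = [0.71, 0.55, 0.83, 0.40]  # e.g. mean log-likelihood per coding sequence
reference_cai = [0.78, 0.52, 0.90, 0.35]  # CAI computed from a codon usage table

rho, _ = spearmanr(model_scores, reference_cai)
print(f"Spearman CAI correlation: {rho:.2f}")
```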
You guys seen this? 1-bit model with an MMLU-R of 65.7, 8B params
This is nuts. prism-ml/Bonsai-8B-gguf · Hugging Face: https://huggingface.co/prism-ml/Bonsai-8B-gguf — has anyone tested this thing? Submitted by /u/OmarBessa — https://www.reddit.com/r/LocalLLaMA/comments/1s91jxl/you_guys_seen_this_1bit_model_with_an_mmlur_of/
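For anyone who wants to try the checkpoint locally, a minimal sketch using llama-cpp-python follows; the GGUF file name is a placeholder, so check the repo for the quantization files it actually ships:

```python
from llama_cpp import Llama

# Download and load a GGUF file from the Hugging Face repo.
llm = Llama.from_pretrained(
    repo_id="prism-ml/Bonsai-8B-gguf",
    filename="bonsai-8b.gguf",  # placeholder; use the actual file name listed in the repo
    n_ctx=4096,
)

out = llm("Explain 1-bit quantization in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```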
[P] Looking for people who have had training runs fail unexpectedly to beta test a stability monitor. Free, takes 5 minutes to add to your existing loop. DM me.
Anyone actively training models want to try a stability monitor on a real run? Trying to get real-world validation outside my own benchmarks. Submitted by /u/Turbulent-Tap6723 — https://www.reddit.com/r/MachineLearning/comments/1s93kzm/p_looking_for_people_who_have_had_training_runs/
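The post doesn't describe how the monitor works, but the general idea of a training-run stability monitor (flag non-finite losses and sudden gradient-norm spikes) can be sketched generically in PyTorch; this is an illustrative sketch, not the poster's tool:

```python
import math
import torch

class StabilityMonitor:
    """Flags NaN/Inf losses and gradient-norm spikes during training."""
    def __init__(self, spike_factor=5.0, warmup_steps=50):
        self.spike_factor = spike_factor
        self.warmup_steps = warmup_steps
        self.ema_grad_norm = None
        self.step = 0

    def check(self, loss, model):
        self.step += 1
        if not math.isfinite(loss.item()):
            return f"step {self.step}: non-finite loss {loss.item()}"
        # max_norm=inf measures the total gradient norm without clipping.
        grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), float("inf")).item()
        if self.ema_grad_norm is None:
            self.ema_grad_norm = grad_norm
        elif self.step > self.warmup_steps and grad_norm > self.spike_factor * self.ema_grad_norm:
            return f"step {self.step}: grad norm {grad_norm:.2f} >> EMA {self.ema_grad_norm:.2f}"
        self.ema_grad_norm = 0.99 * self.ema_grad_norm + 0.01 * grad_norm
        return None

# Usage inside an existing loop, after loss.backward() and before optimizer.step():
#   warning = monitor.check(loss, model)
#   if warning: print(warning)
```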
FOR ME, Qwen3.5-27B is better than Gemini 3.1 Pro and GPT-5.3 Codex
There's something I hate about the big SOTA proprietary models. In order to make them better for people who don't know how to program, they're optimized to solve problems entirely autonomously. Yeah, this makes people over on /r/ChatGPT soypog when it writes a 7z parser in Python because the binary is missing; for me, though, this makes them suck. If something isn't matching up, Qwen3.5-27B will just give up. If you're trying to vibecode some slop this is annoying, but for me it is much, much better. I'm forced to use GitHub Copilot in university, and whenever there's a problem, it goes completely off the rails and does some absolute hogwash. For example, it was struggling to write to a file that had some broken permission…