<a href="https://news.google.com/rss/articles/CBMi3AFBVV95cUxNbDdxX1lRVGVMaUJQMGJ5SnNqLXpuRjBzVzJYcERuaFh4NlB1dW9pRi1vTXRLZmExWndvU3JOX1J6U2RuSFFYTmk0X3dNaGFaUDdONjNqWlNITldkNV8xNU81b3pFSUxmMkYtMUNXbm1TbHBab0EwNlQ4UllEWnBhaVZqR2dCYk1lUG5RbE9sQ3FJYjVCM2xwb1JRbWNudmtVTzE1X2V6Y29aRTE2cEs2U0hsR1diWWlSeTR2QXJyaUZWUE5EUkhnTDNkNTlWdjhVY1FYVEtCNy1EOWpr?oc=5" target="_blank">DoControl Sets the Bar for AI Security with Industry-First Control of Google Gemini Gems</a> <font color="#6f6f6f">prnewswire.com</font>

More about Gemini
Google Gemma 4: Everything Developers Need to Know
Google dropped Gemma 4 on April 2, 2026: a full generational jump in what open models can do at their parameter range, and the first time in the Gemma family's history that one ships under Apache 2.0, meaning commercial use without permission-seeking. Some context: since Gemma's first generation, developers have downloaded the models over 400 million times and built more than 100,000 variants. Four Models, One Family. Gemma 4 is a family of four, each aimed at a different point in the hardware spectrum. E2B: Effective 2 billion active parameters. Runs on smartphones, Raspberry Pi, Jetson Orin Nano. 128K context window. Handles images, video, and audio. Built for battery and memory efficiency. E4B: Effective 4 billion active parameters. Same hardware targets, higher reasoning quality. About
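The E2B/E4B split above is essentially a memory-budget decision. As a minimal sketch of how a deployment script might encode it (the `pick_gemma4_variant` helper and the RAM thresholds are illustrative assumptions, not Google's official sizing guidance):

```python
# Hedged sketch: choosing between the on-device Gemma 4 variants described
# above based on free RAM. Thresholds are assumed rules of thumb, not specs.

def pick_gemma4_variant(free_ram_gb: float) -> str:
    """Return the Gemma 4 on-device variant that plausibly fits in RAM."""
    E2B_MIN_GB = 6.0   # ~2B active params + context/KV-cache headroom (assumed)
    E4B_MIN_GB = 12.0  # ~4B active params + context/KV-cache headroom (assumed)

    if free_ram_gb >= E4B_MIN_GB:
        return "E4B"   # same hardware targets, higher reasoning quality
    if free_ram_gb >= E2B_MIN_GB:
        return "E2B"   # built for battery and memory efficiency
    return "none"      # below the smallest on-device variant's footprint

# A Raspberry Pi 5 with 8 GB would land on E2B under these assumptions:
print(pick_gemma4_variant(8.0))  # -> E2B
```

The point is only that "effective active parameters" translates directly into a runtime footprint you can gate on before downloading weights.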

Gemma 4 is efficient with thinking tokens, but it will also happily reason for 10+ minutes if you prompt it to do so.
Tested both the 26B and 31B in AI Studio. The task I gave them was to crack a cypher. The top closed-source models can crack this cypher at maximum thinking settings, and Kimi 2.5 Thinking and Deepseek 3.2 are the only open-source models to crack it without tool use. (Of course, with the closed models you can't rule out 'secret' tool use on the backend.) When I first asked these models to crack the cypher, they thought for a short amount of time and then both hallucinated false 'translations' of it. I added this to my prompt: "Spare no effort to solve this, the stakes are high. Increase your thinking length to maximum in order to solve it. Double check and verify your results to rule out hallucination of an incorrect response." I did not expect dramatic results (we all laugh at p
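The technique above is just appending an effort-boosting instruction to the task before sending it. A minimal sketch (the `with_max_effort` helper name is an assumption; the suffix text is quoted from the comment):

```python
# Hedged sketch: wrap a task prompt with the commenter's effort-boosting
# instruction. The helper is illustrative, not any model API's feature.

EFFORT_SUFFIX = (
    "Spare no effort to solve this, the stakes are high. "
    "Increase your thinking length to maximum in order to solve it. "
    "Double check and verify your results to rule out hallucination "
    "of an incorrect response."
)

def with_max_effort(task_prompt: str) -> str:
    """Return the task prompt with the effort-boosting suffix appended."""
    return f"{task_prompt.rstrip()}\n\n{EFFORT_SUFFIX}"

prompt = with_max_effort("Crack this cypher: Wkh txlfn eurzq ira.")
print(prompt.endswith("incorrect response."))  # -> True
```

Nothing model-specific happens here; the observation is that a plain-text nudge, not a sampling parameter, was enough to lengthen the models' reasoning.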
More in Models

AtomGen: Streamlining Atomistic Modeling through Dataset and Benchmark Integration
By Ali Kore, Amrit Krishnan, and David Emerson. The AtomGen project focuses on enhancing atomistic modeling capabilities through advanced machine-learning techniques. By integrating deep-learning models with extensive atomistic datasets, AtomGen aims […] The post AtomGen: Streamlining Atomistic Modeling through Dataset and Benchmark Integration appeared first on Vector Institute for Artificial Intelligence.

Vector researcher Wenhu Chen on improving and benchmarking foundation models
By Wenhu Chen. The past year has seen great progress in foundation models as they achieve expert-level performance in solving challenging, real-world problems. In early 2023, the best open-source 7B […] The post Vector researcher Wenhu Chen on improving and benchmarking foundation models appeared first on Vector Institute for Artificial Intelligence.

