Google unveils open-source AI models Gemma 4 - Breakingthenews.net
The web can still be wonderful, and Flipboard’s Surf proves it
Hello again, and welcome back to Fast Company's Plugged In. More than 15 months ago, I wrote about Surf, a discovery engine for the social web from Flipboard—itself an earlier twist on the same concept dating to the early days of the iPad. At the time, it was still a rough draft, and in private beta. Rather than rushing it out to a broader audience, Flipboard took its time. The app went through a series of numerous, substantial revisions, ending up significantly different from the intriguing prototype I tried in December 2024. This week, the company finally deemed Surf ready for prime time. It's now live in web form at Surf.social; a beta Android version is in the Google Play store. (The iPhone and iPad versions still have a waitlist.) If you've grown jaded about so
[R] VOID: Video Object and Interaction Deletion (physically-consistent video inpainting)
We present VOID, a model for video object removal that aims to handle *physical interactions*, not just appearance. Most existing video inpainting / object removal methods can fill in pixels behind an object (e.g., removing its shadows or reflections), but they often fail when the removed object affects the dynamics of the scene. For example:

- A domino chain is falling → removing the middle blocks should stop the chain
- Two cars are about to crash → removing one car should prevent the collision

Current models typically remove the object but leave its effects unchanged, resulting in physically implausible outputs. VOID addresses this by modeling counterfactual scene evolution: “What would the video look like if the object had never been there?”

Key ideas:

- Counterfactual training data: paire

Gemma-4 26B-A4B + Opencode on M5 MacBook is *actually good*
TL;DR: a 32 GB M5 MacBook Air can run gemma-4-26B-A4B-it-UD-IQ4_XS at 300 t/s prompt processing and 12 t/s generation (running in low power mode it draws 8 W, making it the first laptop I've used that doesn't get warm and noisy whilst running LLMs). Fast prompt processing + short thinking traces + can actually handle agentic behaviour = Opencode is actually usable from my laptop!

Previously I've been running LLMs off my M1 Max 64 GB. And whilst it's been good enough for tinkering and toy use cases, it's never really been great for running anything that requires longer context... i.e. it could be useful as a simple chatbot but not much else. Making a single Snake game in Python was fine, but anything where I might want to do agentic coding / contribute to a larger codebase has always been a bit janky. And unless I ar
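To get a feel for what those throughput numbers mean in practice, here's a back-of-the-envelope timing sketch using the rates quoted above (300 t/s prompt processing, 12 t/s generation). The function name and the token counts in the example are hypothetical, and real latency will vary with context caching and batching:

```python
def estimate_seconds(prompt_tokens: int, output_tokens: int,
                     pp_rate: float = 300.0, gen_rate: float = 12.0) -> float:
    """Rough wall-clock estimate: prefill time plus decode time.

    Rates default to the figures quoted in the post; token counts
    for any given request are up to you.
    """
    return prompt_tokens / pp_rate + output_tokens / gen_rate

# A hypothetical 6,000-token agentic context producing a 240-token reply:
# 6000/300 = 20 s prefill + 240/12 = 20 s decode ≈ 40 s total.
print(round(estimate_seconds(6000, 240), 1))  # → 40.0
```

The takeaway is that fast prompt processing matters most for agentic use: the context an agent re-feeds on each turn dominates, so prefill speed, not generation speed, is often the bottleneck.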
More in Models

OpenAI takes over tech podcast TBPN to improve public sentiment about AI
OpenAI is buying Technology Business Programming Network, the maker of the podcast of the same name. It is not known how much the ChatGPT maker is paying. The company wants to use the podcast to make the mood around generative AI more positive.

