Bringing AI Closer to the Edge and On-Device with Gemma 4
The Gemmaverse expands with the launch of the latest Gemma 4 multimodal and multilingual models, designed to scale across the full spectrum of deployments, from NVIDIA Blackwell in the data center to Jetson at the edge. These models meet the growing demand for local AI development and prototyping, secure on-premises deployment, cost efficiency, and latency-sensitive use cases. The newest generation improves both efficiency and accuracy, making these general-purpose models well suited to a wide range of common tasks:
- Reasoning: Strong performance on complex problem-solving tasks.
- Coding: Code generation and debugging for developer workflows.
- Agents: Native support for structured tool use (function calling).
- Vision, video, and audio capability: Enables rich multimodal interactions for use cases such as object recognition, automated speech recognition (ASR), document and video intelligence, and more.
- Interleaved multimodal input: Freely mix text and images in any order within a single prompt.
- Multilingual: Out-of-the-box support for over 35 languages, and pre-trained on over 140 languages.
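As a sketch of what structured tool use looks like in practice, the snippet below defines a tool schema in the common OpenAI-style format and parses a hypothetical tool call a model might emit. The schema shape and the JSON output format are illustrative assumptions; the exact format Gemma 4 expects is documented in its model card.

```python
import json

# Illustrative tool schema in the common OpenAI-style format; the exact
# schema Gemma 4 expects may differ, so consult the model card.
WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def parse_tool_call(model_output: str) -> tuple[str, dict]:
    """Parse a JSON tool call emitted by the model into (name, arguments)."""
    call = json.loads(model_output)
    return call["name"], call["arguments"]

# A hypothetical model response requesting a tool invocation:
name, args = parse_tool_call('{"name": "get_weather", "arguments": {"city": "Austin"}}')
print(name, args)  # get_weather {'city': 'Austin'}
```

The application then executes the named function with the parsed arguments and feeds the result back to the model as the next turn.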
The release includes four models, among them Gemma's first MoE model; all of them fit on a single NVIDIA H100 GPU and support over 140 languages. The 31B and 26B-A4B variants are high-performing reasoning models suitable for both local and data center environments. The E4B and E2B are the newest editions of the on-device and mobile-oriented models first launched with Gemma 3n.
| Model Name | Architecture Type | Total Parameters | Active or Effective Parameters | Input Context Length (Tokens) | Sliding Window (Tokens) | Modalities |
| --- | --- | --- | --- | --- | --- | --- |
| Gemma-4-31B | Dense Transformer | 31B | — | 256K | 1024 | |
| Gemma-4-26B-A4B | MoE – 128 Experts | 26B | 3.8B | 256K | — | |
| Gemma-4-E4B | Dense Transformer | 7.9B with embeddings | 4.5B effective | 128K | 512 | Text, Audio, Vision, Video |
| Gemma-4-E2B | Dense Transformer | 5.1B with embeddings | 2.3B effective | 128K | 512 | Text, Audio, Vision, Video |

Table 1. Overview of the Gemma 4 model family, summarizing architecture types, parameter sizes, effective parameters, supported context lengths, and available modalities to help developers choose the right model for data center, edge, and on-device deployments.
Each model is available on Hugging Face with BF16 checkpoints, and an NVFP4 quantized checkpoint for Gemma-4-31B will be available soon for NVIDIA Blackwell developers.
Run intelligent workloads on-device
As AI workflows and agents become more integrated into everyday applications, the ability to run these models beyond traditional data center environments is becoming critical. The NVIDIA suite of client and edge systems, from RTX GPUs and DGX Spark to Jetson Nano, provides developers with the flexibility to manage cost and latency while supporting security requirements for highly regulated industries such as healthcare and finance.
We collaborated with vLLM, Ollama and llama.cpp to provide the best local deployment experience for each of the Gemma 4 models. Unsloth also provides day-one support with optimized and quantized models for efficient local deployment via Unsloth Studio.
Check out the RTX AI Garage blog post to get started with Gemma 4 on RTX GPUs and DGX Spark.
| | DGX Spark | Jetson | RTX / RTX PRO |
| --- | --- | --- | --- |
| Use Case | AI research and prototyping | Edge AI and robotics | Desktop apps and Windows development |
| Key Highlights | A preinstalled NVIDIA AI software stack and 128 GB of unified memory power local prototyping, fine-tuning, and fully local OpenClaw workflows | Near-zero latency thanks to architecture features such as conditional parameter loading and per-layer embeddings, which can be cached for faster inference and reduced memory use | Optimized performance for local inference for hobbyists, creators, and professionals |
| Getting Started | DGX Spark Playbooks for vLLM, Ollama, Unsloth, and llama.cpp deployment guides; NeMo Automodel for fine-tuning on Spark | Jetson AI Lab for tutorials and custom Gemma containers | RTX AI Garage for Ollama and llama.cpp guides; RTX PRO owners can use vLLM as well |

Table 2. Comparison of local deployment options across NVIDIA platforms, highlighting primary use cases, key capabilities, and recommended getting-started resources for DGX Spark, Jetson, and RTX / RTX PRO systems running Gemma 4 models.
Build secure agentic AI workflows with DGX Spark
AI developers and enthusiasts benefit from the GB10 Grace Blackwell Superchip paired with 128 GB of unified memory in DGX Spark, providing the resources needed to run Gemma 4 31B with BF16 model weights. Combined with DGX Linux OS and the full NVIDIA software stack, developers can efficiently prototype and build agentic AI workflows with Gemma 4 while maintaining private, secure on-device execution.
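As a rough back-of-the-envelope check (weights only, ignoring the KV cache and activations), BF16 stores two bytes per parameter, so the 31B model's weights alone land comfortably inside 128 GB of unified memory:

```python
def weight_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate weight-only memory footprint in GB (1 GB = 1e9 bytes).

    Defaults to 2 bytes/parameter for BF16; real usage is higher once the
    KV cache and activations are included.
    """
    return params_billion * bytes_per_param

print(weight_memory_gb(31))  # 62 -- GB of BF16 weights, well within 128 GB
```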
The vLLM inference engine is designed to run LLMs efficiently, maximizing throughput while minimizing memory usage. vLLM's high-throughput serving on DGX Spark provides a high-performance platform for the largest Gemma 4 models; the vLLM for Inference DGX Spark playbook details how to get vLLM running with Gemma 4 on your DGX Spark. Alternatively, get started with Gemma 4 using Ollama or llama.cpp. Users can further fine-tune the models on DGX Spark with NeMo Automodel.
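A minimal offline-inference sketch with vLLM's Python API is shown below. The model id and the Gemma-style turn markers are assumptions based on earlier Gemma releases; verify both against the published Hugging Face checkpoint and its chat template before relying on them.

```python
import os

def gemma_chat_prompt(user_message: str) -> str:
    """Build a single-turn chat prompt with Gemma-style turn markers.

    The markers follow earlier Gemma releases and are an assumption here;
    check the Gemma 4 chat template on Hugging Face for the real format.
    """
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

# Set RUN_VLLM=1 on a machine with vLLM installed and a suitable GPU.
if os.environ.get("RUN_VLLM"):
    from vllm import LLM, SamplingParams

    # Hypothetical model id; use the checkpoint name published on Hugging Face.
    llm = LLM(model="google/gemma-4-31b", dtype="bfloat16")
    params = SamplingParams(max_tokens=256, temperature=0.7)
    outputs = llm.generate([gemma_chat_prompt("Summarize vLLM in one sentence.")], params)
    print(outputs[0].outputs[0].text)
```

For serving rather than offline batches, the same model can be exposed over vLLM's OpenAI-compatible HTTP server, which is the path the DGX Spark playbook walks through.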
Power physical AI agents with Jetson
Modern physical AI agents are evolving rapidly with Gemma 4 models that integrate audio, multimodal perception, and deep reasoning capabilities. These advanced models enable robotics systems to move beyond simple task execution, allowing them to understand speech, interpret visual context, and reason intelligently before taking action. On NVIDIA Jetson, developers can run Gemma 4 inference at the edge using llama.cpp and vLLM. Jetson Orin Nano supports the Gemma 4 E2B and E4B variants, enabling multimodal inference on small, embedded, and power-constrained systems, with the same model family scaling across the Jetson platform up to Jetson Thor.
This supports scalable deployment across robotics, smart machines, and industrial automation use cases that depend on low-latency performance and on-device intelligence.
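To make the scaling claim concrete, here is an illustrative helper that maps a device's free memory budget to one of the on-device variants, using the effective parameter counts from Table 1. The thresholds are rough assumptions (about 2 bytes per parameter plus KV-cache headroom), not official sizing guidance.

```python
def pick_gemma_variant(free_memory_gb: float) -> str:
    """Rough rule of thumb for choosing an on-device Gemma 4 variant.

    Thresholds assume ~2 bytes/parameter plus headroom for the KV cache;
    tune them for your quantization level and workload.
    """
    if free_memory_gb >= 12:
        return "gemma-4-e4b"   # 4.5B effective parameters
    if free_memory_gb >= 7:
        return "gemma-4-e2b"   # 2.3B effective parameters
    raise ValueError("Consider a quantized (e.g. GGUF) build for this budget")

print(pick_gemma_variant(8))   # gemma-4-e2b
```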
Jetson developers can check out the tutorial and download the container to get started from the Jetson AI Lab.
Video 1. Demo of Gemma 4 31B on build.nvidia.com
Production-ready deployment with NVIDIA NIM
Enterprise developers can try the Gemma 4 31B model for free using an NVIDIA-hosted NIM API available in the NVIDIA API catalog for prototyping. For production deployment, they can use prepackaged and optimized NIM microservices for secure, self-hosted deployment with an NVIDIA Enterprise License.
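The hosted endpoint is OpenAI-compatible, so a plain HTTP request is enough to try it. The sketch below uses only the Python standard library; the model id is an assumption, so copy the exact id from the model page on build.nvidia.com.

```python
import json
import os
import urllib.request

def build_chat_request(model: str, prompt: str) -> dict:
    """OpenAI-style chat-completions payload accepted by NVIDIA-hosted NIM endpoints."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

# The request only runs when an API key from build.nvidia.com is configured.
if os.environ.get("NVIDIA_API_KEY"):
    # Hypothetical model id; use the exact id listed in the NVIDIA API catalog.
    payload = build_chat_request("google/gemma-4-31b", "Hello, Gemma!")
    req = urllib.request.Request(
        "https://integrate.api.nvidia.com/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the interface is OpenAI-compatible, the same payload works unchanged against a self-hosted NIM microservice by swapping the base URL.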
Day 0 fine-tuning with NeMo Framework
Developers can customize Gemma 4 with their own domain data using the NVIDIA NeMo framework, specifically the NeMo Automodel library, which combines native PyTorch ease of use with optimized performance. Using this fine‑tuning recipe for Gemma 4, developers can apply techniques such as supervised fine‑tuning (SFT) and memory‑efficient LoRA to perform day‑0 fine‑tuning starting from Hugging Face model checkpoints without the need for conversion.
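LoRA's memory efficiency comes from factoring each weight update as two low-rank matrices: for a d_out x d_in projection, rank-r adapters train r * (d_in + d_out) parameters instead of d_out * d_in. With illustrative (not Gemma-specific) dimensions:

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters for one LoRA-adapted weight matrix.

    The update is factored as B @ A, with A: rank x d_in and B: d_out x rank.
    """
    return rank * (d_in + d_out)

full = 8192 * 8192                               # hypothetical full projection matrix
lora = lora_trainable_params(8192, 8192, rank=16)
print(lora / full)  # 0.00390625 -- under 0.4% of the full matrix is trained
```

This is why LoRA fine-tuning fits in far less memory than full supervised fine-tuning: gradients and optimizer states are only kept for the small adapter matrices.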
Get started today
No matter which NVIDIA GPU you are using, Gemma 4 is supported across the entire NVIDIA AI platform and is available under the commercial-friendly Apache 2.0 license. From Blackwell, with NVFP4 quantized checkpoints coming soon, to Jetson platforms, developers can quickly get started deploying these high-accuracy multimodal models, with the flexibility to meet their speed, security, and cost requirements.
Check out Gemma on Hugging Face, or test Gemma 4 31B for free using NVIDIA APIs at build.nvidia.com.
About the Authors
NVIDIA Tech Blog
https://developer.nvidia.com/blog/bringing-ai-closer-to-the-edge-and-on-device-with-gemma-4/
