Bringing AI Closer to the Edge and On-Device with Gemma 4

NVIDIA Tech Blog, by Anu Srivastava, April 2, 2026


The Gemmaverse expands with the launch of the latest Gemma 4 multimodal and multilingual models, designed to scale across the full spectrum of deployments, from NVIDIA Blackwell in the data center to Jetson at the edge. These models meet the growing demand for local deployment in AI development and prototyping, secure on-prem requirements, cost efficiency, and latency-sensitive use cases. The newest generation improves both efficiency and accuracy, making these general-purpose models well suited for a wide range of common tasks:

  • Reasoning: Strong performance on complex problem-solving tasks.

  • Coding: Code generation and debugging for developer workflows.

  • Agents: Native support for structured tool use (function calling).

  • Vision, video and audio capability: Enables rich multimodal interactions for use cases such as object recognition, automated speech recognition (ASR), document and video intelligence, and more.

  • Interleaved multimodal input: Freely mix text and images in any order within a single prompt.

  • Multilingual: Out-of-the-box support for over 35 languages, and pre-trained on over 140 languages.
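
The structured tool use called out above follows the same function-calling round trip most serving stacks implement. Below is a minimal sketch in Python: the tool schema uses the OpenAI-style format that common local servers accept, and `get_weather` plus the sample tool call are invented for illustration, not part of the Gemma 4 release.

```python
import json

# Hypothetical weather lookup exposed to the model as a tool.
def get_weather(city: str) -> str:
    return f"22 C and clear in {city}"

TOOLS = {"get_weather": get_weather}

# OpenAI-style tool schema, the format most serving stacks accept
# for structured tool use (function calling).
tool_schema = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching local function."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return fn(**args)

# A tool call shaped the way a function-calling model would emit it.
sample_call = {"name": "get_weather", "arguments": json.dumps({"city": "Berlin"})}
print(dispatch(sample_call))  # → 22 C and clear in Berlin
```

In a real agent loop, the model's response would contain the tool call, the dispatched result would be appended to the conversation, and the model would be queried again to produce the final answer.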

The release includes four models, among them Gemma’s first MoE model; each fits on a single NVIDIA H100 GPU, and all support over 140 languages. The 31B and 26B-A4B variants are high-performing reasoning models suited to both local and data center environments. The E4B and E2B are the newest editions of the on-device and mobile-oriented models first launched with Gemma 3n.

| Model Name | Architecture Type | Total Parameters | Active or Effective Parameters | Input Context Length (Tokens) | Sliding Window (Tokens) | Modalities |
| --- | --- | --- | --- | --- | --- | --- |
| Gemma-4-31B | Dense Transformer | 31B | — | 256K | 1024 | |
| Gemma-4-26B-A4B | MoE – 128 Experts | 26B | 3.8B | 256K | — | |
| Gemma-4-E4B | Dense Transformer | 7.9B with embeddings | 4.5B effective | 128K | 512 | Text, Audio, Vision, Video |
| Gemma-4-E2B | Dense Transformer | 5.1B with embeddings | 2.3B effective | 128K | 512 | Text, Audio, Vision, Video |

Table 1. Overview of the Gemma 4 model family, summarizing architecture types, parameter sizes, effective parameters, supported context lengths, and available modalities to help developers choose the right model for data center, edge, and on-device deployments.

Each model is available on Hugging Face with BF16 checkpoints, and an NVFP4 quantized checkpoint for Gemma-4-31B will be available soon for NVIDIA Blackwell developers.

Run intelligent workloads on-device

As AI workflows and agents become more integrated into everyday applications, the ability to run these models beyond traditional data center environments is becoming critical. The NVIDIA suite of client and edge systems, from RTX GPUs and DGX Spark to Jetson Nano, provides developers with the flexibility to manage cost and latency while supporting security requirements for highly regulated industries such as healthcare and finance.

We collaborated with vLLM, Ollama and llama.cpp to provide the best local deployment experience for each of the Gemma 4 models. Unsloth also provides day-one support with optimized and quantized models for efficient local deployment via Unsloth Studio.
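
As a concrete starting point for local deployment, the snippet below builds the JSON body for Ollama's local `/api/chat` endpoint (`POST http://localhost:11434/api/chat`). The `gemma4` model tag is a placeholder assumption; use whatever tag `ollama list` reports after you pull the model.

```python
import json

# Build the request body for Ollama's local /api/chat endpoint
# (POST http://localhost:11434/api/chat). The "gemma4" tag is a
# placeholder -- use the tag `ollama list` shows after pulling.
def build_chat_request(model: str, prompt: str) -> str:
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete response instead of chunks
    }
    return json.dumps(body)

payload = build_chat_request("gemma4", "Summarize LoRA in one sentence.")
print(payload)
```

The same message shape works against llama.cpp's and vLLM's OpenAI-compatible servers, so a prompt tested locally with Ollama can be reused largely unchanged across the other backends.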

Check out the RTX AI Garage blog post to get started with Gemma 4 on RTX GPUs and DGX Spark.

| | DGX Spark | Jetson | RTX / RTX PRO |
| --- | --- | --- | --- |
| Use Case | AI research and prototyping | Edge AI and robotics | Desktop apps and Windows development |
| Key Highlights | A preinstalled NVIDIA AI software stack and 128 GB of unified memory power local prototyping, fine-tuning, and fully local OpenClaw workflows | Near-zero latency due to architecture features such as conditional parameter loading and per-layer embeddings, which can be cached for faster inference and reduced memory use (more info) | Optimized performance for local inference for hobbyists, creators, and professionals |
| Getting Started Guide | DGX Spark Playbooks for vLLM, Ollama, Unsloth, and llama.cpp deployment guides; NeMo Automodel for fine-tuning on Spark guide | Jetson AI Lab for tutorials and custom Gemma containers | RTX AI Garage for Ollama and llama.cpp guides. RTX PRO owners can use vLLM as well. |

Table 2. Comparison of local deployment options across NVIDIA platforms, highlighting primary use cases, key capabilities, and recommended getting-started resources for DGX Spark, Jetson, and RTX / RTX PRO systems running Gemma 4 models.

Build secure agentic AI workflows with DGX Spark

AI developers and enthusiasts benefit from the GB10 Grace Blackwell Superchip paired with 128 GB of unified memory in DGX Spark, providing the resources needed to run Gemma 4 31B with BF16 model weights. Combined with DGX Linux OS and the full NVIDIA software stack, developers can efficiently prototype and build agentic AI workflows with Gemma 4 while maintaining private, secure on-device execution.

The vLLM inference engine is designed to run LLMs efficiently, maximizing throughput while minimizing memory usage. Using vLLM’s high-throughput LLM serving on DGX Spark provides a high-performance platform for the largest Gemma 4 models; the vLLM for Inference DGX Spark playbook provides the details to get vLLM running with Gemma 4 on your DGX Spark. Or get started with Gemma 4 using Ollama or llama.cpp. Users can further fine-tune the models on DGX Spark with NeMo Automodel.
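
A quick back-of-the-envelope calculation shows why 128 GB of unified memory comfortably holds the 31B model in BF16: weights need roughly 2 bytes per parameter, leaving ample headroom for the KV cache and runtime overhead. The figures below are estimates, not measured numbers.

```python
# Estimate the BF16 weight footprint of Gemma 4 31B and the memory
# left over on a 128 GB unified-memory DGX Spark. Rough figures only;
# actual usage depends on the KV cache, batch size, and runtime.
PARAMS = 31e9
BYTES_PER_PARAM_BF16 = 2  # bfloat16 stores each parameter in 2 bytes

weight_gb = PARAMS * BYTES_PER_PARAM_BF16 / 1e9
headroom_gb = 128 - weight_gb

print(f"weights: ~{weight_gb:.0f} GB, headroom: ~{headroom_gb:.0f} GB")
```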

Power physical AI agents with Jetson

Modern physical AI agents are evolving rapidly with Gemma 4 models that integrate audio, multimodal perception, and deep reasoning capabilities. These advanced models enable robotics systems to move beyond simple task execution, allowing them to understand speech, interpret visual context, and reason intelligently before taking action. On NVIDIA Jetson, developers can run Gemma 4 inference at the edge using llama.cpp and vLLM. Jetson Orin Nano supports the Gemma 4 E2B and E4B variants, enabling multimodal inference on small, embedded, and power-constrained systems, with the same model family scaling across the Jetson platform up to Jetson Thor.

This supports scalable deployment across robotics, smart machines, and industrial automation use cases that depend on low-latency performance and on-device intelligence.

Jetson developers can check out the tutorial and download the container to get started from the Jetson AI Lab.

Video 1. Demo of Gemma 4 31B on build.nvidia.com

Production-ready deployment with NVIDIA NIM

Enterprise developers can try the Gemma 4 31B model for free using an NVIDIA-hosted NIM API available in the NVIDIA API catalog for prototyping. For production deployment, they can use prepackaged and optimized NIM microservices for secure, self-hosted deployment with an NVIDIA Enterprise License.
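
For prototyping against the hosted endpoint, the NVIDIA API catalog exposes an OpenAI-compatible chat completions route. The sketch below only constructs the request with the Python standard library; the model id `google/gemma-4-31b` is an assumption, so check the catalog entry for the exact id before actually sending it with `urlopen`.

```python
import json
import os
import urllib.request

# Prepare an OpenAI-compatible request against the NVIDIA API catalog.
# The model id "google/gemma-4-31b" is assumed for illustration; check
# the catalog entry for the exact id. Nothing is sent until urlopen().
def build_nim_request(prompt: str, api_key: str) -> urllib.request.Request:
    body = json.dumps({
        "model": "google/gemma-4-31b",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }).encode()
    return urllib.request.Request(
        "https://integrate.api.nvidia.com/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_nim_request("Hello", os.environ.get("NVIDIA_API_KEY", "demo-key"))
print(req.full_url)
```

Because the route speaks the OpenAI wire format, the same payload can later be pointed at a self-hosted NIM microservice by swapping the base URL.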

Day 0 fine-tuning with NeMo Framework

Developers can customize Gemma 4 with their own domain data using the NVIDIA NeMo framework, specifically the NeMo Automodel library, which combines native PyTorch ease of use with optimized performance. Using this fine-tuning recipe for Gemma 4, developers can apply techniques such as supervised fine-tuning (SFT) and memory-efficient LoRA to perform day-0 fine-tuning starting from Hugging Face model checkpoints without the need for conversion.
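
The memory-efficient LoRA technique mentioned above freezes the base weight matrix W and learns a low-rank update scale * (B @ A) instead, so only a small fraction of parameters is trained. The toy numeric sketch below shows that arithmetic on hand-picked sizes; it is plain Python for illustration, not NeMo Automodel code.

```python
# Minimal numeric sketch of a LoRA update: instead of modifying the
# full weight matrix W, train two small matrices A (r x d_in) and
# B (d_out x r) and add scale * (B @ A) to the frozen weights.

def matmul(X, Y):
    """Naive matrix multiply on nested lists (toy sizes only)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

d_out, d_in, r, alpha = 4, 4, 2, 4
scale = alpha / r  # standard LoRA scaling factor alpha / r

W = [[1.0] * d_in for _ in range(d_out)]  # frozen base weights
B = [[0.1] * r for _ in range(d_out)]     # trainable, d_out x r
A = [[0.5] * d_in for _ in range(r)]      # trainable, r x d_in

delta = matmul(B, A)                      # low-rank update, d_out x d_in
W_adapted = [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

# Each adapted entry is 1.0 + 2 * (0.1*0.5 + 0.1*0.5) = 1.2
print(W_adapted[0][0])
```

With r much smaller than the weight dimensions, the trainable parameter count drops from d_out * d_in to r * (d_out + d_in), which is what makes the approach memory-efficient on a single device.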

Get started today

No matter which NVIDIA GPU you are using, Gemma 4 is supported across the entire NVIDIA AI platform and is available under the commercial-friendly Apache 2.0 license. From Blackwell, with NVFP4 quantized checkpoints coming soon, to Jetson platforms, developers can quickly get started deploying these high-accuracy multimodal models, with the flexibility to meet their speed, security, and cost requirements.

Check out Gemma on Hugging Face, or test Gemma 4 31B for free using NVIDIA APIs at build.nvidia.com.
