
April 2026 TLDR Setup for Ollama and Gemma 4 26B on a Mac mini

Hacker News Top · by 262588213843476 · April 3, 2026 · 4 min read

Article URL: https://gist.github.com/greenstevester/fc49b4e60a4fef9effc79066c1033ae5
Comments URL: https://news.ycombinator.com/item?id=47624731
Points: 26 · Comments: 8

April 2026 TLDR setup for Ollama + Gemma 4 26B on a Mac mini (Apple Silicon) — auto-start, preload, and keep-alive

Prerequisites

  • Mac mini with Apple Silicon (M1/M2/M3/M4/M5)

  • At least 24GB unified memory for Gemma 4 26B

  • macOS with Homebrew installed

Step 1: Install Ollama

Install the Ollama macOS app via Homebrew cask (includes auto-updates and MLX backend):

brew install --cask ollama-app

This installs:

  • Ollama.app in /Applications/

  • ollama CLI at /opt/homebrew/bin/ollama

Step 2: Start Ollama

open -a Ollama

The Ollama icon will appear in the menu bar. Wait a few seconds for the server to initialize.

Verify it's running:

ollama list

Step 3: Pull Gemma 4 26B

ollama pull gemma4:26b

This downloads ~17GB. Verify:

ollama list

NAME          ID            SIZE    MODIFIED
gemma4:26b    5571076f3d70  17 GB   ...

Step 4: Test the Model

ollama run gemma4:26b "Hello, what model are you?"

Check that it's using GPU acceleration:

ollama ps

The output should show the CPU/GPU split, e.g. 14%/86% CPU/GPU.

Step 5: Configure Auto-Start on Login

5a. Ollama App — Launch at Login

Click the Ollama icon in the menu bar > Launch at Login (enable it).

Alternatively, go to System Settings > General > Login Items and add Ollama.

5b. Auto-Preload Gemma 4 on Startup

Create a launch agent that loads the model into memory after Ollama starts and keeps it warm:

cat << 'EOF' > ~/Library/LaunchAgents/com.ollama.preload-gemma4.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.ollama.preload-gemma4</string>
    <key>ProgramArguments</key>
    <array>
        <string>/opt/homebrew/bin/ollama</string>
        <string>run</string>
        <string>gemma4:26b</string>
        <string></string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>StartInterval</key>
    <integer>300</integer>
    <key>StandardOutPath</key>
    <string>/tmp/ollama-preload.log</string>
    <key>StandardErrorPath</key>
    <string>/tmp/ollama-preload.log</string>
</dict>
</plist>
EOF

Load the agent:

launchctl load ~/Library/LaunchAgents/com.ollama.preload-gemma4.plist

This sends an empty prompt to ollama run every 5 minutes, keeping the model warm in memory.

5c. Keep Models Loaded Indefinitely

By default, Ollama unloads models after 5 minutes of inactivity. To keep them loaded forever:

launchctl setenv OLLAMA_KEEP_ALIVE "-1"

Then restart Ollama for the change to take effect.

Note: This environment variable is session-scoped. To persist across reboots, add export OLLAMA_KEEP_ALIVE="-1" to your ~/.zshrc, or set it via a dedicated launch agent.
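If you prefer the launch-agent route, one option is a tiny agent that runs `launchctl setenv` at each login, so GUI-launched apps like Ollama.app also see the variable (a shell export in ~/.zshrc only covers terminal sessions). This is a sketch; the label and filename `com.ollama.env` are arbitrary names chosen for this example, not an official convention:

```shell
# Create a login-time agent that exports OLLAMA_KEEP_ALIVE system-wide.
mkdir -p ~/Library/LaunchAgents
cat << 'EOF' > ~/Library/LaunchAgents/com.ollama.env.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.ollama.env</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/launchctl</string>
        <string>setenv</string>
        <string>OLLAMA_KEEP_ALIVE</string>
        <string>-1</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
EOF
```

Load it once with `launchctl load ~/Library/LaunchAgents/com.ollama.env.plist`; it re-runs automatically at every login.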

Step 6: Verify Everything Works

# Check Ollama server is running
ollama list

# Check model is loaded in memory
ollama ps

# Check launch agent is registered
launchctl list | grep ollama

Expected output from ollama ps:

NAME          ID            SIZE    PROCESSOR        CONTEXT  UNTIL
gemma4:26b    5571076f3d70  20 GB   14%/86% CPU/GPU  4096     Forever
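For scripted monitoring, the UNTIL column can be pulled out with awk. A hypothetical helper, shown here against a canned sample of the output above (in practice you would feed it `ollama ps` directly):

```shell
# Report the UNTIL column for a given model from `ollama ps`-style output.
# A canned sample stands in for the live command here.
ps_output='NAME          ID            SIZE    PROCESSOR        CONTEXT  UNTIL
gemma4:26b    5571076f3d70  20 GB   14%/86% CPU/GPU  4096     Forever'

model_until() {
  echo "$ps_output" | awk -v m="$1" '$1 == m { print $NF }'
}

model_until gemma4:26b   # prints: Forever
```

For a live check, replace the canned sample with `ps_output=$(ollama ps)`; an output of `Forever` confirms the keep-alive setting took effect.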

API Access

Ollama exposes a local API at http://localhost:11434. Use it with coding agents:

# Chat completion (OpenAI-compatible)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma4:26b",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
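To use the reply in a script, you need to extract the message text from the JSON response. The `choices[0].message.content` path is the standard OpenAI-compatible response shape; the canned JSON below is an illustrative stand-in for real model output:

```shell
# Extract the assistant's reply from an OpenAI-compatible response
# using python3 (avoids a jq dependency). The response is canned for
# illustration; pipe in the curl output instead for real use.
response='{"choices":[{"message":{"role":"assistant","content":"Hi there"}}]}'

echo "$response" | python3 -c 'import json,sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'
# prints: Hi there
```

In practice, pipe the curl command above straight into the same python3 one-liner.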

Useful Commands

Command                    Description
ollama list                List downloaded models
ollama ps                  Show running models & memory usage
ollama run gemma4:26b      Interactive chat
ollama stop gemma4:26b     Unload model from memory
ollama pull gemma4:26b     Update model to latest version
ollama rm gemma4:26b       Delete model

Uninstall / Remove Auto-Start

# Remove the preload agent
launchctl unload ~/Library/LaunchAgents/com.ollama.preload-gemma4.plist
rm ~/Library/LaunchAgents/com.ollama.preload-gemma4.plist

Uninstall Ollama

brew uninstall --cask ollama-app

What's New in Ollama v0.19+ (March 31, 2026)

MLX Backend on Apple Silicon

On Apple Silicon, Ollama automatically uses Apple's MLX framework for faster inference — no manual configuration needed. M5/M5 Pro/M5 Max chips get additional acceleration via GPU Neural Accelerators. M4 and earlier still benefit from general MLX speedups.

NVFP4 Support (NVIDIA)

Ollama now supports NVIDIA's NVFP4 format, which preserves model accuracy while reducing memory bandwidth and storage requirements for inference workloads. As more inference providers scale with NVFP4, Ollama users get the same results locally as they would in a production environment. It also enables Ollama to run models optimized with NVIDIA's Model Optimizer.

Improved Caching for Coding and Agentic Tasks

  • Lower memory utilization: Ollama reuses its cache across conversations, meaning less memory utilization and more cache hits when branching with a shared system prompt — especially useful with tools like Claude Code.

  • Intelligent checkpoints: Ollama stores snapshots of its cache at intelligent locations in the prompt, resulting in less prompt processing and faster responses.

  • Smarter eviction: Shared prefixes survive longer even when older branches are dropped.

Notes

  • Memory: Gemma 4 26B uses ~20GB when loaded. On a 24GB Mac mini, this leaves ~4GB for the system — close memory-heavy apps before running.

References

  • Ollama MLX Blog Post — Ollama Newsletter, March 31, 2026

  • Ollama v0.20.0 Release

  • Gemma 4 Announcement — Google DeepMind
