Show HN: Apfel – The free AI already on your Mac
Article URL: https://apfel.franzai.com Comments URL: https://news.ycombinator.com/item?id=47624645 Points: 11 # Comments: 2
v0.6.13 · macOS 26+ · MIT
The free AI already on your Mac.
Every Apple Silicon Mac running macOS 26 has a built-in LLM. Apple locked it behind Siri. apfel sets it free - as a CLI tool, an OpenAI-compatible server, and an interactive chat.
100% On-Device · Zero Cost · OpenAI Compatible
$ brew install Arthur-Ficial/tap/apfel
Apple Silicon · macOS Tahoe · Apple Intelligence enabled
Zero everything.
The AI is already installed on your Mac. Apple ships it with macOS. apfel just gives you a way to talk to it - from your terminal, from your code, from anywhere.
$0 cost: No API keys. No subscriptions. No per-token billing. It's your hardware - use it.
100% on-device: Every token generated locally on your Apple Silicon. Nothing leaves your machine. Ever.
4,096 tokens: Context window for input and output combined. Enough for most single-turn tasks and short chats.
UNIX Tool
Pipe-friendly and composable. Works with jq, xargs, and your shell scripts. stdin, stdout, JSON output, file attachments, proper exit codes.
$ apfel "What is the capital of Austria?"
The capital of Austria is Vienna.
$ apfel -o json "Translate to German: hello" | jq .content
"Hallo"
OpenAI Server
Drop-in replacement at localhost:11434. Point any OpenAI SDK at it and go. Streaming, tool calling, CORS, response formats.
$ apfel --serve
Server running on http://127.0.0.1:11434
$ curl localhost:11434/v1/chat/completions ...
...
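No SDK required either. A minimal sketch with Python's standard library against apfel --serve, borrowing the apple-foundationmodel model name from the example further down and reading the response in the standard OpenAI choices[0].message.content shape:

import json
import urllib.request

# Standard OpenAI-style chat completions request against the local server.
payload = {
    "model": "apple-foundationmodel",
    "messages": [{"role": "user", "content": "What is the capital of Austria?"}],
}
req = urllib.request.Request(
    "http://127.0.0.1:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},  # no auth needed
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])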
Interactive Chat
Multi-turn conversations with automatic context management. Five trimming strategies. System prompt support. All on your Mac.
$ apfel --chat -s "You are a coding assistant"
Chat started. Type /quit to exit.
How do I reverse a list in Python?
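The five strategies aren't named here, but the idea behind context trimming is easy to picture. A generic oldest-first sketch, not apfel's actual implementation, with a made-up count_tokens helper standing in for the SDK's real token counting:

def trim_oldest(messages, count_tokens, budget=4096):
    # Illustration only: keep the system prompt, drop the oldest turns
    # until the conversation fits the 4,096-token window.
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(count_tokens(m["content"]) for m in system + rest) > budget:
        rest.pop(0)  # oldest turn goes first
    return system + rest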
Apple ships an on-device LLM
Starting with macOS 26 (Tahoe), every Apple Silicon Mac includes a language model as part of Apple Intelligence. Apple exposes it through the FoundationModels framework - a Swift API that gives apps access to SystemLanguageModel. All inference runs on the Neural Engine and GPU. No network calls, no cloud, no API keys. The model is just there.
But Apple only uses it for Siri
Out of the box, the on-device model powers Siri, Writing Tools, and system features. There is no terminal command, no HTTP endpoint, no way to pipe text through it. The FoundationModels framework exists, but you need to write a Swift app to use it. That is what apfel does.
What apfel adds
apfel is a Swift 6.3 binary that wraps LanguageModelSession and exposes it three ways: as a UNIX command-line tool with stdin/stdout, as an OpenAI-compatible HTTP server (built on Hummingbird), and as an interactive chat with context management.
It handles the things Apple's raw API does not: proper exit codes, JSON output, file attachments, five context trimming strategies for the small 4096-token window, real token counting via the SDK, and conversion of OpenAI tool schemas to Apple's native Transcript.ToolDefinition format.
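On the wire, a tool call looks like any other OpenAI request; the schema conversion happens inside apfel. A sketch with a made-up get_weather tool, reusing the base_url and model name from the Python example below:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

# A hypothetical tool in standard OpenAI function-calling format; apfel
# converts schemas like this to Apple's Transcript.ToolDefinition.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="apple-foundationmodel",
    messages=[{"role": "user", "content": "What's the weather in Vienna?"}],
    tools=tools,
)
# If the model decides to call the tool, the call shows up here.
print(resp.choices[0].message.tool_calls)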
hardware: Apple Silicon (Neural Engine + GPU)
|
model: Apple's on-device LLM (shipped with macOS)
|
sdk: FoundationModels.framework (Swift API)
|
apfel: CLI + HTTP server + context management
|
you: Terminal, shell scripts, OpenAI SDKs, curl
4,096 tokens: Context window (input + output)
1 model: Fixed, not configurable
Swift 6.3: Strict concurrency, no Xcode
MIT license: Open source
cmd
Natural language to shell command. Say what you want, get the command.
$ cmd "find all .log files modified today"
oneliner
Pipe chains from plain English. awk, sed, sort, uniq - generated for you.
$ oneliner "count unique IPs in access.log"
mac-narrator
Narrates your Mac's system activity like a nature documentary.
$ mac-narrator --watch
explain
Explain any command, error message, or code snippet in plain English.
$ explain "awk '{print $1}' file | sort -u"
wtd
What's this directory? Instant project orientation for any codebase.
$ wtd
gitsum
Summarize recent git commits in a few sentences.
$ gitsum
Drop-in replacement
apfel speaks the OpenAI API. Any client library, any framework, any tool that talks to OpenAI can talk to your Mac's AI instead. Just change the base URL.
✓ POST /v1/chat/completions
✓ Streaming (SSE)
✓ Tool calling / function calling
✓ GET /v1/models
✓ response_format: json_object
✓ temperature, max_tokens, seed
✓ CORS for browser clients
Just change the base_url. That's it.

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="unused",  # no auth needed
)

resp = client.chat.completions.create(
    model="apple-foundationmodel",
    messages=[{"role": "user", "content": "What is 1+1?"}],
)
print(resp.choices[0].message.content)
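Streaming works the same way as against the hosted API: pass stream=True and read the SSE chunks as they arrive. A sketch against the same local server:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

# Tokens stream back as SSE chunks while they are generated on-device.
stream = client.chat.completions.create(
    model="apple-foundationmodel",
    messages=[{"role": "user", "content": "Write a haiku about Vienna."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()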
214 GitHub stars (as of April 3, 2026)
123 stars on March 31 alone. Created March 24, 2026 - first public release of Apple's on-device LLM as a command-line tool.
Homebrew (recommended)
$ brew install Arthur-Ficial/tap/apfel
$ apfel "Hello, Mac!"

Build from source (requires CLT + macOS 26.4 SDK)
$ git clone https://github.com/Arthur-Ficial/apfel.git
$ cd apfel && make install