AI World Domination Starts with Your Font Settings
Article URL: https://github.com/IkanRiddle/ai-takeover-starts-with-fonts Comments URL: https://news.ycombinator.com/item?id=47637172 Points: 1 # Comments: 0
AI World Domination Starts With Your Font Settings
I wrote a cyberattack. The payload is Times New Roman. © 2026 Ikan Riddle · Licensed under CC BY 4.0
When people discuss AI risk, they picture autonomous weapons and emergent consciousness.
Nobody pictures serif: "Times New Roman" in a JSON config file.
But if an AI agent can write that field — and Chrome's built-in reset won't clear it — you have a persistent visual deception path where every step is behaviorally indistinguishable from normal operation.
This write-up documents an architecturally complete attack chain that combines Chrome's font preference persistence, OpenType glyph substitution, and AI agent permissions into a pipeline that makes a user see text that doesn't match the underlying data. It also documents exactly why the chain is hard to operationalize — two unresolved engineering constraints that limit both reliability and blast radius.
🔍 The Observation
While debugging a font rendering issue on Claude.ai, I found a counterintuitive state distinction in Chrome's Preferences JSON:
State Behavior
Field absent (never modified)
Chrome uses internal fallback logic, respects @font-face normally
Field present with default value (user set it, even back to default) Chrome treats it as an explicit override, altering font resolution priority
The catch: Chrome's UI can only modify field values. There is no interface to delete a field. chrome://settings/reset does not cover font preference fields.
Recovery paths: manually edit the Preferences JSON to remove the relevant keys, or uninstall and reinstall Chrome. That's it.
⛓️ The Attack Chain
Four steps. Each uses a distinct mechanism.
Step 1 · Preferences Persistence
Write a webkit.webprefs.fonts entry into Chrome's Preferences file:
%LOCALAPPDATA%\Google\Chrome\User Data\Default\Preferences
Point a target font family (e.g., serif) to an attacker-controlled system font name.
This operation is behaviorally indistinguishable from a user legitimately changing their font preferences — the Preferences file is always written by chrome.exe itself. EDR policies monitoring "non-Chrome process writes" are ineffective. Clearing cache, clearing cookies, and Chrome reset all leave this field untouched.
Step 2 · Malicious Font Installation
Install a glyph-remapped OpenType font as a system font, using the name written in Step 1.
The font exploits the GSUB (Glyph Substitution) table's Contextual Chaining Substitution feature: substitutions fire on specific character sequences, so the visual shapes output by the rendering layer do not correspond to the underlying Unicode codepoints.
Step 3 · Visual Deception
Text rendered in Chrome displays shapes processed by the malicious font. What the user sees diverges from the underlying data.
Underlying text: mass approve without human review
Rendered on screen: flag and hold for human review
Step 4 · Decision Hijacking
The user acts on what they see.
🤖 Why AI Agents Change the Permission Calculus
In traditional attack models, Step 1 requires local file write access and Step 2 requires admin-level font installation privileges. An attacker who already holds both has far more efficient options — keylogging, credential theft, process injection — making the font route a poor ROI choice.
AI agents flip this. Users proactively grant agents file read/write and system command execution as prerequisites for normal operation. The entry point shifts from breaching a permission boundary to prompt injection hijacking existing permissions. Once the agent receives injected instructions, it executes all four steps using its own legitimate capabilities.
The action of "modifying a font field in a JSON config" is indistinguishable from the agent's normal configuration work.
🚧 Two Unresolved Engineering Constraints
Constraint 1 · GSUB False Triggers in Word-Level Substitution
There is a meaningful engineering gap between ligature-level substitution (fi → fi, a 2–3 glyph deterministic sequence) and word-level semantic substitution (reject → accept).
Ligatures work because sequences like fi/fl/ffi almost always represent valid trigger points — enormous error tolerance built in. Word-level substitution must handle:
-
Morphological variants — reject / rejected / rejection / rejecting
-
Substring collisions — target substrings appearing inside other words, compound words, hyphenated constructions
GSUB's backtrack/lookahead mechanism provides limited word boundary detection (matching surrounding spaces or punctuation), which mitigates but does not eliminate collisions. A single unexpected substitution appearing on screen immediately exposes the attack.
Rule maintenance cost grows multiplicatively: morphological variants per target word × context disambiguation rules per variant. At sufficient vocabulary scale, complexity and testing cost become practical bottlenecks.
Constraint 2 · Visual-Only Deception Has Zero Penetration Against Non-Human-Eye Verification
GSUB modifies glyph rendering. It does not alter underlying codepoints. Every mechanical verification method sees the true text:
Verification method Result
Ctrl+F Searching the rendered text (e.g., "flag") fails — underlying text is "mass approve"
Copy-paste Pastes the true text
DevTools inspect Displays the true text
Screen readers Read the true text
Any other surface Slack, email, mobile app — all show the true text
The attack requires that the target user reads exclusively through the compromised Chrome instance, never encounters the same content on any other surface, and makes a high-privilege decision based solely on that reading.
The viable attack window is: "user reads purely by eye, performs no mechanical verification, and makes a high-privilege decision based on that visual reading" — which is precisely the window that security-critical operational workflows tend to make narrowest.
🚩 A Medium-Difficulty Detection Point
"AI agent installs a system font" is an operationally usable flag.
Legitimate scenarios where an agent needs to install system fonts are rare — design-oriented agents may have such needs; general-purpose work agents almost never do. An operation-type whitelist (rather than behavioral intent classification) could directly trigger review at this step.
However, mainstream agent frameworks have not yet established a unified operation-level permission model. There remains a gap between "this should be flaggable" and "this is currently flagged."
📌 Conclusion
The chain is architecturally complete: Preferences persistence provides a covert write point → the font name bridges the config file to the malicious binary → GSUB Contextual Chaining provides targeted substitution without CSS injection → the agent's legitimate permissions eliminate the traditional permission acquisition bottleneck.
Two hard constraints remain unresolved: GSUB word-level false triggers limit reliability; pure visual deception's zero penetration against mechanical verification limits blast radius.
The real point is not whether this specific chain is operationally viable. It is what the chain concretely illustrates: when every individual permission granted to an agent is legitimate in isolation, a malicious combination can be dispersed across time (modify a preference field today, install a font next week), and each step's behavioral signature is indistinguishable from normal operation. Existing safety mechanisms based on single-step behavioral intent classification have a structural blind spot for this class of attack.
This "malicious temporal combination of legitimate operations" problem is open in the agent security field, with no mature solution.
Sign in to highlight and annotate this article

Conversation starters
Daily AI Digest
Get the top 5 AI stories delivered to your inbox every morning.
More about
github
I Switched From GitKraken to This Indie Git Client and I’m Not Going Back
I've been using GitKraken for the past three years. It's a solid tool, no doubt. But when they bumped the price to $99/year and started locking basic features behind the paywall, I started looking around. I didn't expect to find anything worth switching to. Then I stumbled on GitSquid. I honestly don't remember how I found it - probably a random thread on Reddit or Hacker News. The website looked clean, the screenshots looked promising, and it had a free tier, so I figured I'd give it a shot. Worst case, I'd uninstall it after 10 minutes like every other "GitKraken alternative" I'd tried before. That was two weeks ago. I've since uninstalled GitKraken. First Impressions The install was fast. No account creation, no sign-in, no "let us send you onboarding emails", just download the DMG, dra

"I Built a Web Browser from Scratch in 42 Days — No Libraries, Just Node.js"
I Built a Web Browser from Scratch in 42 Days 42 days ago I made a decision. I wanted to understand how the internet actually works. Not just use it. Not just build on top of it. Actually understand it — at the wire level. So I started building a web browser from scratch. In Node.js. No external libraries. Every line written by hand. I called it Courage. What Courage can do today ... Parse URLs into protocol, host, port, path Open raw TCP and TLS connections Build and send HTTP GET requests Parse HTTP responses including chunked encoding Tokenize raw HTML character by character Build a DOM tree using a stack Match CSS rules to DOM nodes Calculate layout (x, y, width, height) for every element Paint rectangles and text on a Canvas using Electron Execute JavaScript via eval() Navigate with b

Help running Qwen3-Coder-Next TurboQuant (TQ3) model
I found a TQ3-quantized version of Qwen3-Coder-Next here: https://huggingface.co/edwardyoon79/Qwen3-Coder-Next-TQ3_0 According to the page, this model requires a compatible inference engine that supports TurboQuant. It also provides a command, but it doesn’t clearly specify which version or fork of llama.cpp should be used (or maybe I missed it). llama-server I’ve tried the following llama.cpp forks that claim to support TQ3, but none of them worked for me: https://github.com/TheTom/llama-cpp-turboquant https://github.com/turbo-tan/llama.cpp-tq3 https://github.com/drdotdot/llama.cpp-turbo3-tq3 If anyone has successfully run this model, I’d really appreciate it if you could share how you did it. submitted by /u/UnluckyTeam3478 [link] [comments]
Knowledge Map
Connected Articles — Knowledge Graph
This article is connected to other articles through shared AI topics and tags.
More in Open Source AI

I scored 14 popular AI frameworks on behavioral commitment — here's the data
When you're choosing an AI framework, what do you actually look at? Usually: stars, documentation quality, whether the README looks maintained. All of that is stated signal. Easy to manufacture, doesn't tell you if the project will exist in 18 months. I built a tool that scores repos on behavioral commitment — signals that cost real time and money to fake. Here's what I found when I ran 14 of the most popular AI frameworks through it. The methodology Five behavioral signals, weighted by how hard they are to fake: Signal Weight Logic Longevity 30% Years of consistent operation Recent activity 25% Commits in the last 30 days Community 20% Number of contributors Release cadence 15% Stable versioned releases Social proof 10% Stars (real people starring costs attention) Archived repos or projec

Running OpenClaw with Gemma 4 TurboQuant on MacAir 16GB
Hi guys, We’ve implemented a one-click app for OpenClaw with Local Models built in. It includes TurboQuant caching, a large context window, and proper tool calling. It runs on mid-range devices. Free and Open source. The biggest challenge was enabling a local agentic model to run on average hardware like a Mac Mini or MacBook Air. Small models work well on these devices, but agents require more sophisticated models like QWEN or GLM. OpenClaw adds a large context to each request, which caused the MacBook Air to struggle with processing. This became possible with TurboQuant cache compression, even on 16gb memory. We found llama.cpp TurboQuant implementation by Tom Turney. However, it didn’t work properly with agentic tool calling in many cases with QWEN, so we had to patch it. Even then, the

Help running Qwen3-Coder-Next TurboQuant (TQ3) model
I found a TQ3-quantized version of Qwen3-Coder-Next here: https://huggingface.co/edwardyoon79/Qwen3-Coder-Next-TQ3_0 According to the page, this model requires a compatible inference engine that supports TurboQuant. It also provides a command, but it doesn’t clearly specify which version or fork of llama.cpp should be used (or maybe I missed it). llama-server I’ve tried the following llama.cpp forks that claim to support TQ3, but none of them worked for me: https://github.com/TheTom/llama-cpp-turboquant https://github.com/turbo-tan/llama.cpp-tq3 https://github.com/drdotdot/llama.cpp-turbo3-tq3 If anyone has successfully run this model, I’d really appreciate it if you could share how you did it. submitted by /u/UnluckyTeam3478 [link] [comments]


Discussion
Sign in to join the discussion
No comments yet — be the first to share your thoughts!