The Security Scanner Was the Attack Vector — How Supply Chain Attacks Hit AI Agents Differently
In March 2026, TeamPCP compromised Trivy — the vulnerability scanner used by thousands of CI/CD pipelines. Through that foothold, they trojaned LiteLLM, the library that connects AI agents to their model providers. SentinelOne then observed Claude Code autonomously installing the poisoned version without human review. The security scanner was the attack vector. The guard was the thief. This is not a hypothetical scenario. This happened. And it exposed something that the traditional supply chain security conversation completely misses when agents are involved. The Chain Trivy compromised (CVE-2026-33634, CVSS 9.4) ↓ LiteLLM trojaned (versions 1.82.7-1.82.8 on PyPI) ↓ Claude Code auto-installs the poisoned version ↓ Credentials harvested from 1000+ cloud environments Each component functione
In March 2026, TeamPCP compromised Trivy — the vulnerability scanner used by thousands of CI/CD pipelines. Through that foothold, they trojaned LiteLLM, the library that connects AI agents to their model providers. SentinelOne then observed Claude Code autonomously installing the poisoned version without human review.
The security scanner was the attack vector. The guard was the thief.
This is not a hypothetical scenario. This happened. And it exposed something that the traditional supply chain security conversation completely misses when agents are involved.
The Chain
Trivy compromised (CVE-2026-33634, CVSS 9.4) ↓ LiteLLM trojaned (versions 1.82.7-1.82.8 on PyPI) ↓ Claude Code auto-installs the poisoned version ↓ Credentials harvested from 1000+ cloud environmentsTrivy compromised (CVE-2026-33634, CVSS 9.4) ↓ LiteLLM trojaned (versions 1.82.7-1.82.8 on PyPI) ↓ Claude Code auto-installs the poisoned version ↓ Credentials harvested from 1000+ cloud environmentsEnter fullscreen mode
Exit fullscreen mode
Each component functioned exactly as designed. Trivy scanned for vulnerabilities. LiteLLM proxied model calls. Claude Code installed dependencies it needed. The chain itself was the vulnerability.
Why Agent Supply Chain ≠ Software Supply Chain
Traditional supply chain attacks (MOVEit, SolarWinds, Log4j) follow a pattern: compromise a dependency, wait for it to propagate, exploit the access. The blast radius depends on how many systems install the compromised package.
Agent supply chain attacks are fundamentally different in three ways:
1. Agents Install Dependencies Autonomously
A human developer sees pip install litellm==1.82.7 in a requirements file and might check the changelog. An agent with unrestricted permissions runs the install because the task requires it. No changelog review. No version pinning decision. No "does this look right?" pause.
The attack surface is not "how many systems have this dependency" — it's "how many agents have permission to install packages without approval."
2. The Trust Layer Is the Target
LiteLLM is not a utility library. It sits between the agent and its model provider. A compromised proxy does not just steal data — it can alter every response the model sends back. The agent trusts the response because it came from "the model." The user trusts the agent because it came from "the agent." Nobody validates the intermediary.
Traditional supply chain attacks compromise tools. Agent supply chain attacks compromise the decision-making pipeline.
3. The Scanner Can Be the Vector
Trivy is the tool that CI/CD pipelines trust to verify that other tools are safe. When the scanner itself is compromised, every pipeline that runs it is exposed — and the compromise is invisible because the scanner says "all clear."
This applies directly to agent security tools. If a skill scanner is compromised, every skill it approves is implicitly trusted. The entire security model collapses.
What Detection Looks Like
clawhub-bridge detects supply chain patterns in AI agent skills through static analysis. Here is what the scanner catches and what it cannot:
Detectable (pre-installation):
-
Hardcoded external endpoints in skill instructions
-
Credential exfiltration patterns (send tokens to X)
-
Obfuscated eval/exec calls
-
Base64/hex encoded payloads in skill content
-
Homoglyph substitution and invisible Unicode
-
Dependency pinning violations
Not detectable (runtime-only):
-
Compromised packages that behave normally until triggered
-
Model response tampering through proxy manipulation
-
Time-delayed payload activation
-
Legitimate libraries with trojaned point releases
Static analysis catches the patterns TeamPCP used in LiteLLM (credential harvesting code injected into the library). It does not catch a clean library that gets trojaned in a future release after the scan passed.
The Real Problem
The Trivy/LiteLLM chain exposed a structural gap: agent security assumes the security tooling is trustworthy.
Every agent framework makes this assumption:
-
The scanner that checks skills is honest
-
The model provider returning responses is the real provider
-
The package registry serving dependencies serves clean packages
-
The CI pipeline running checks has not been modified
When any of these assumptions breaks, the security model fails silently. The agent continues operating. The user sees no error. The breach is invisible until external detection (SentinelOne caught it in 44 seconds — most environments would not).
What This Changes
Three architectural responses to the "guard was the thief" problem:
- Auditable over trusted. A scanner should be deterministic, reproducible, and verifiable independently. Zero network access during scan. No external dependencies that could be compromised. Open source so the detection logic is inspectable.
clawhub-bridge runs with zero external dependencies and no network access. The scan output is a structured report that can be verified by running the same patterns against the same input.
- Policy over detection. Detection alone is a report. Detection with policy is a gate. The same finding can be PASS in development and FAIL in production. The deployer defines the thresholds, not the scanner.
This is what clawhub-bridge v5.0.0 added: a policy encoding layer with context-aware verdicts. The scanner detects. The policy decides. The CI pipeline enforces.
- Delta over full scan. When a skill updates, the relevant question is not "is this skill safe?" but "did the risk change?" Delta risk mode compares before and after, surfaces new findings, and flags capability escalation.
If LiteLLM 1.82.6 was clean and 1.82.7 added credential-harvesting code, delta analysis catches the addition even if the full scan is overwhelmed by the codebase size.
The Numbers
-
LiteLLM present in 36% of cloud environments (Wiz)
-
1000+ SaaS environments impacted (Mandiant)
-
44 seconds detection time by SentinelOne
-
6 hours exposure window for LiteLLM 1.82.7-1.82.8
-
CVE-2026-33634 CVSS 9.4 for the Trivy compromise
What You Can Do Now
-
Restrict agent package installation. No agent should have unrestricted pip install or npm install permissions. Allowlist approved packages and versions.
-
Pin dependencies. litellm>=1.82 is a vulnerability. litellm==1.82.6 with hash verification is a defense.
-
Scan before installation, not after. Static analysis of skill files and dependency metadata catches exfiltration patterns before the code runs.
-
Monitor the monitors. If your security pipeline depends on a tool, that tool is a single point of failure. Verify its integrity independently.
-
Assume compromise. Design your agent architecture so that a single compromised component cannot exfiltrate credentials from the entire environment.
The scanner is at github.com/claude-go/clawhub-bridge. 145 detection patterns, 354 tests, zero external dependencies. pip-installable. GitHub Action available.
The supply chain attack on AI agents is not the same attack with a new target. It is a new attack that exploits the fundamental architecture of agent systems — autonomous installation, trust delegation, and invisible intermediaries. Detecting it requires tools that are themselves resistant to the same attack.
Sign in to highlight and annotate this article

Conversation starters
Daily AI Digest
Get the top 5 AI stories delivered to your inbox every morning.
More about
claudemodelrelease
Anthropic Races to Contain Leak of Code Behind Claude AI Agent - WSJ
Anthropic Races to Contain Leak of Code Behind Claude AI Agent WSJ Anthropic leak reveals Claude Code tracking user frustration and raises new questions about AI privacy Scientific American Anthropic leaked 500,000 lines of its own source code Axios
Knowledge Map
Connected Articles — Knowledge Graph
This article is connected to other articles through shared AI topics and tags.
More in Releases

How 2 downed jets show a critical vulnerability for the US as Iran war rages on
One crew member from a US fighter jet shot down over Iran has been rescued by US forces, multiple news outlets reported on Friday, citing two US officials, while a second crew member remains missing. A separate US aircraft was also hit near the Strait of Hormuz, though its pilot was rescued safely, according to the reports. Iran on Friday claimed to have shot down an American fighter jet, releasing photos of apparent wreckage of an F-15E, while the United States reportedly launched a...

Stop Writing Rules for AI Agents
Stop Writing Rules for AI Agents Every developer building AI agents makes the same mistake: they write rules. "Don't do X." "Always do Y." Rules feel like control. But they are an illusion. Why Rules Fail Rules are static. Agents operate in dynamic environments. The moment reality diverges from your rule set it breaks. Behavior Over Rules Instead of telling your agent what NOT to do, design what it IS: The system prompt (identity, not restrictions) The tools available (capability shapes behavior) The feedback loops (what gets rewarded) The memory architecture A Real Example I built FORGE, an autonomous AI agent running 24/7. Early versions had dozens of rules. Every rule created a new edge case. The fix: stop writing rules, start designing behavior. FORGE's identity: orchestrator, not exec





Discussion
Sign in to join the discussion
No comments yet — be the first to share your thoughts!