Anthropic cuts off third-party tools like OpenClaw for Claude subscribers, citing unsustainable demand
Anthropic is cutting off Claude usage through external tools like OpenClaw for subscription customers. The decision exposes a core problem in the AI industry: flat-rate pricing and agent-driven nonstop usage don't mix. This article first appeared on The Decoder.
Anthropic's Claude Code creator Boris Cherny announced that starting April 5, 2026 (12pm PT, 9pm CEST), Claude subscriptions will no longer cover usage through third-party tools like OpenClaw. Users can still log in with their Claude credentials, but they'll need to either buy additional usage packages or use a Claude API key.
The reason, according to Cherny, is capacity. "We've been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools," he writes on X. The company wants to "manage thoughtfully" and is prioritizing customers who use its own products and API.
To ease the transition, subscribers get a one-time credit equal to their monthly plan price and discounted usage packages. Full refunds are also available via email.
Cherny framed the decision as a strategic call. "We want to be intentional in managing our growth to continue to serve our customers sustainably long-term. This change is a step toward that," he wrote.
Flat-rate AI subscriptions can't keep up with agent-driven usage
Behind the move is a fundamental tension in the AI industry, and Anthropic is likely the first to feel it. Subscription models assume average usage patterns, but agent systems that hammer Claude with requests around the clock through third-party tools blow that math apart. Put simply, OpenClaw is like a sumo wrestler at an all-you-can-eat buffet.
The decision also plays into a bigger debate about how AI providers relate to the third-party tool ecosystem. As models get more capable and pricier to run, providers face growing pressure to control usage and steer it back to their own products.
Steinberger says Anthropic copied features, then locked out the competition
OpenClaw creator Peter Steinberger fired back at the announcement. He and investor Dave Morin tried to talk Anthropic out of it, but the best they got was a one-week delay, Steinberger wrote on X.
His accusation: Anthropic first absorbed popular features from his software into its own closed system, then shut out open-source alternatives. "Funny how timings match up," he commented.
Steinberger did add some nuance, though. He called the move "sad for the ecosystem" but gave Cherny credit for softening the blow. He also announced that the latest OpenClaw release includes improvements for more efficient cache usage, which should lower costs for users who now have to fall back on the API. The improvements actually come from Cherny himself, a gesture meant to show that Anthropic still supports open source.
OpenAI employee Thibault Sottiaux hints that OpenAI will pick up where Anthropic leaves off. | Screenshot via X
Steinberger's criticism might be jumping the gun, though. He notes that other providers, including Chinese companies and OpenAI, where he now works, still support OpenClaw. But Anthropic likely handles the bulk of OpenClaw traffic because it currently ships the strongest models on the market, and Steinberger himself built the software around Claude when he first released it.
His anti-open-source charge also comes with some baggage: Anthropic previously lawyered up and forced him to drop the original name of his software, "OpenClawd," a nod to Anthropic's "Claude" that sounded a bit too close for comfort. Whether OpenAI can hold its pricing if it faces a similar flood of demand for comparable models is another question entirely. "We're working on it," Steinberger says.