Anthropic says Claude subscriptions will no longer support OpenClaw because it puts an 'outsized strain' on systems
Anthropic said third-party tools like OpenClaw put an "outsized strain" on its systems. OpenClaw's founder said cutting support would be a loss.
Anthropic is cutting off support for OpenClaw, a popular AI agent platform that's developed a cult following.
Adek Berry/AFP via Getty Images
April 4, 2026
- Claude users have been using their subscriptions to deploy AI agents with OpenClaw.
- Anthropic said that such usage is placing an "outsized strain on our systems."
- OpenClaw's creator told Business Insider that "it'd be a loss" to cut users off.
Anthropic is cutting off support for the popular AI agent platform OpenClaw from Claude subscriptions, as it grapples with soaring demand for its chatbot.
Boris Cherny, head of Claude Code, said in an X post on Friday evening that Claude subscriptions will no longer support third-party tools like OpenClaw starting at 12 p.m. PT on Saturday. Users will instead need to pay through discounted "extra usage bundles" tied to their Claude login or use a separate Claude API key through Anthropic's developer platform, Cherny said.
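For users moving off subscription-based access, the alternative Cherny described is direct API access through Anthropic's developer platform. A minimal sketch of what that looks like with the official Anthropic Python SDK follows; the model id, the helper function, and the environment-variable handling are illustrative assumptions, not part of Anthropic's announcement:

```python
# Sketch: calling Claude with a developer-platform API key via the official
# Anthropic Python SDK, rather than through a Claude subscription login.
import os


def build_request(prompt: str) -> dict:
    # Payload shape passed to client.messages.create(**request).
    # The model id below is a placeholder; use whichever model your key can access.
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }


def main() -> None:
    api_key = os.environ.get("ANTHROPIC_API_KEY")
    if not api_key:
        print("Set ANTHROPIC_API_KEY to run this example.")
        return
    # Requires `pip install anthropic`.
    from anthropic import Anthropic

    client = Anthropic(api_key=api_key)
    response = client.messages.create(**build_request("Hello, Claude"))
    print(response.content[0].text)


if __name__ == "__main__":
    main()
```

Billing for requests made this way goes to the API key's developer account rather than to a Claude subscription, which is the distinction Anthropic is enforcing.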
The Anthropic executive said the move was driven by the compute demand Anthropic is seeing from users.
Claude has surged in popularity in recent weeks, briefly topping the US Apple App Store in March. Last week, Anthropic tightened Claude usage limits for subscribers because of the demand.
"We've been working hard to meet the increase in demand for Claude, and our subscriptions weren't built for the usage patterns of these third-party tools. Capacity is a resource we manage thoughtfully and we are prioritizing our customers using our products and API," Cherny wrote in the X post.
An Anthropic spokesperson told Business Insider in a statement that using Claude subscriptions with third-party tools is against the company's terms of service and that those tools put an "outsized strain on our systems."
Peter Steinberger, the creator of OpenClaw, said on X that he and Dave Morin, a board member of the OpenClaw foundation, tried to "talk sense into Anthropic" and that they delayed Anthropic's move for a week.
"We told Anthropic that we have many users who only signed up for their sub because of OpenClaw and that it'd be a loss if they cut them off," Steinberger told Business Insider in a text message. "Now they try to bury the news on a Friday night."
OpenClaw is a fast-rising AI agent platform that connects to platforms like Claude, enabling users to deploy personal AI assistants. Those assistants can then carry out tasks on other apps and workflows.
The popularity of OpenClaw has sparked an AI agent craze. Some users have deployed AI assistants to manage their entire day-to-day workflow. One founder said she built nine AI agents to handle administrative work and personal household logistics.
Anthropic is not alone in putting restrictions on third-party tools.
Google recently took action against Gemini CLI users who relied on third-party tools, though it framed the move as a terms-of-service violation rather than a capacity issue.
Business Insider
https://www.businessinsider.com/anthropic-cuts-off-openclaw-support-claude-subscriptions-2026-4
