
AI subscriptions are subsidized. Here's what happens when that stops.

DEV Community · by Dzhuneyt · April 4, 2026 · 4 min read

Right now, every time you send a query to ChatGPT, Claude, or Gemini, the company behind it is losing money on you. Not breaking even. Losing money.

OpenAI spent $1.69 for every dollar of revenue it generated in 2025 and is projecting $25 billion in cash burn this year. Even its $200/month Pro plan - the most expensive consumer AI subscription on the market - loses money on heavy users. Anthropic's gross margins were negative 94% in 2024, and its CEO has said publicly that if growth slips from 10x to 5x per year, the company goes bankrupt. These aren't scrappy startups - OpenAI just closed $122 billion at an $852 billion valuation - but even at that scale, the math is tight.

We've all seen subsidized tech before. The question that keeps coming up is what happens when this subsidy stops.

Here's where I think this goes:

  1. The blunt approach: higher prices, lower limits. The most obvious move. Either your subscription goes up, or your usage limits go down. Both are ways to close the gap between what you pay and what it costs to serve you. It's simple but it's also risky. Push too hard and you lose the users you spent billions acquiring.

  2. Keep the cheap entry price, charge heavy users more. Instead of a flat monthly subscription with unlimited usage, companies shift to usage-based pricing. Your base subscription stays affordable, but the moment you start burning through tokens - writing code, doing research, running agents - you start paying per query or per token. The casual user barely notices. The power user gets a €300 bill.

  3. Bring ads into the mix. Think about it - most entry-level AI usage today is basically a fancy Google search replacement. Google search has been full of ads forever, and nobody bats an eye. OpenAI already launched an ads pilot that hit $100M in annualized revenue in under six weeks. If the free and cheap tiers start showing sponsored results, most consumers will shrug. They've been trained to expect it.

  4. Make it cheaper to run. The optimist's answer. Better models that do more with less compute. Custom chips like Google's TPUs or Amazon's Trainium that slash inference costs by an order of magnitude. If the cost per query drops 10x, current pricing might actually sustain itself. In practice, though, efficiency gains keep getting reinvested into bigger, more capable models rather than cheaper ones. Every generation of hardware buys you more intelligence, not lower prices. I'd love this to be the answer, but the track record says otherwise.

  5. Let enterprise customers foot the bill. Keep the consumer product cheap, maybe even free, and treat it as a marketing funnel. The real money comes from enterprise contracts at €50-100 per seat per month. Your €20/month subscription isn't the business. It's the demo. 70% of Fortune 100 companies already use Claude. That's where the margin lives.

  6. Smaller, local models chip away at the big players. This is the wildcard that these companies don't control. The gap between frontier models and open-weight alternatives has compressed to 6-12 months. Running capable models locally on a laptop is real, not theoretical. If enough people and companies shift to smaller, focused LLMs for everyday tasks, the giants lose market share. Though even in this scenario, the remaining users still cost more to serve than they pay. It's the same problem at a smaller scale.
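The trade-off in option 2 is easy to see with back-of-envelope numbers. Here's a minimal sketch, with entirely hypothetical prices (a €20 base fee, a bundled token allowance, and a per-million-token overage rate - none of these figures come from any real provider), showing how a flat-looking plan and a usage-based plan diverge for casual versus heavy users:

```python
def monthly_bill(tokens_used, base=20.0, included_tokens=2_000_000,
                 overage_per_million=8.0):
    """Hypothetical usage-based plan: the flat base fee covers an
    included token allowance; usage beyond it is billed per million
    tokens. All rates are made up for illustration."""
    overage = max(0, tokens_used - included_tokens)
    return base + (overage / 1_000_000) * overage_per_million

# A casual user stays inside the allowance and only pays the base fee.
casual = monthly_bill(500_000)        # 20.0

# A power user running agents burns through far more:
# 40M - 2M = 38M overage tokens -> 38 * 8 + 20 = 324.0
heavy = monthly_bill(40_000_000)

print(f"casual: EUR {casual:.2f}, heavy: EUR {heavy:.2f}")
```

With these made-up rates, the heavy user lands in the ~€300 range mentioned above while the casual user never sees an overage line - which is exactly why this model is attractive to providers: the subsidy quietly disappears only for the users who were consuming it.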

Will it be one of these? Probably a mix of several, playing out differently across companies. OpenAI seems to be leaning into ads and enterprise. Anthropic is betting heavily on enterprise. Meta is keeping things free because it makes money from ads elsewhere. Google can subsidize AI from search revenue indefinitely - or at least until search revenue starts declining.

The one thing I'm fairly sure of: the €20/month all-you-can-eat era won't last forever. Whether that means higher prices, usage caps, or ads in your chat window depends on which company you're using - but the direction is the same across all of them.
