AI subscriptions are subsidized. Here's what happens when that stops.
Right now, every time you send a query to ChatGPT, Claude, or Gemini, the company behind it is losing money on you. Not breaking even. Losing money.
OpenAI spent $1.69 for every dollar of revenue it generated in 2025 and is projecting $25 billion in cash burn this year. Even its $200/month Pro plan - the most expensive consumer AI subscription on the market - loses money on heavy users. Anthropic's gross margins were negative 94% in 2024, and its CEO has said publicly that if growth slips from 10x to 5x per year, the company goes bankrupt. These aren't scrappy startups - OpenAI just closed $122 billion at an $852 billion valuation - but even at that scale, the math is tight.
We've all seen subsidized tech before. The question that keeps coming up is what happens when this subsidy stops.
Here's where I think this goes:
- The blunt approach: higher prices, lower limits. The most obvious move. Either your subscription goes up, or your usage limits go down. Both are ways to close the gap between what you pay and what it costs to serve you. It's simple, but it's also risky: push too hard and you lose the users you spent billions acquiring.
- Keep the cheap entry price, charge heavy users more. Instead of a flat monthly subscription with unlimited usage, companies shift to usage-based pricing. Your base subscription stays affordable, but the moment you start burning through tokens - writing code, doing research, running agents - you start paying per query or per token. The casual user barely notices. The power user gets a €300 bill.
- Bring ads into the mix. Think about it: most entry-level AI usage today is basically a fancy Google search replacement, and Google search has been full of ads forever without anyone batting an eye. OpenAI already launched an ads pilot that hit $100M in annualized revenue in under six weeks. If the free and cheap tiers start showing sponsored results, most consumers will shrug. They've been trained to expect it.
- Make it cheaper to run. The optimist's answer. Better models that do more with less compute. Custom chips like Google's TPUs or Amazon's Trainium that slash inference costs by an order of magnitude. If the cost per query drops 10x, current pricing might actually sustain itself. In practice, though, efficiency gains keep getting reinvested into bigger, more capable models rather than cheaper ones. Every generation of hardware buys you more intelligence, not lower prices. I'd love this to be the answer, but the track record says otherwise.
- Let enterprise customers foot the bill. Keep the consumer product cheap, maybe even free, and treat it as a marketing funnel. The real money comes from enterprise contracts at €50-100 per seat per month. Your €20/month subscription isn't the business. It's the demo. 70% of Fortune 100 companies already use Claude. That's where the margin lives.
- Smaller, local models chip away at the big players. This is the wildcard that these companies don't control. The gap between frontier models and open-weight alternatives has compressed to 6-12 months. Running capable models locally on a laptop is real, not theoretical. If enough people and companies shift to smaller, focused LLMs for everyday tasks, the giants lose market share. Though even in this scenario, the remaining users still cost more to serve than they pay. It's the same problem at a smaller scale.
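To make the usage-based scenario concrete, here's a minimal sketch of how such a tariff could work. All the numbers below - the base fee, the included allowance, the overage rate, the usage figures - are made-up illustrations, not any provider's actual pricing:

```python
def monthly_bill(base_fee, included_tokens, price_per_1k_tokens, tokens_used):
    """Hypothetical usage-based tariff: a flat base fee covers an included
    token allowance; anything beyond it is billed per 1,000 tokens."""
    overage = max(0, tokens_used - included_tokens)
    return base_fee + (overage / 1000) * price_per_1k_tokens

# Illustrative numbers only: €20 base, 2M tokens included, €0.01 per 1k overage.
casual = monthly_bill(20, 2_000_000, 0.01, 1_500_000)   # stays under the allowance
power = monthly_bill(20, 2_000_000, 0.01, 30_000_000)   # agents running all month

print(casual)  # 20    -> the casual user barely notices
print(power)   # 300.0 -> the power user gets a €300 bill
```

The design choice that matters is the size of the included allowance: set it high enough that the typical user never sees an overage line, and the subsidy quietly shifts from "everyone" to "only the heavy users."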
Will it be one of these? Probably a mix of several, playing out differently across companies. OpenAI seems to be leaning into ads and enterprise. Anthropic is betting heavily on enterprise. Meta is keeping things free because it makes money from ads elsewhere. Google can subsidize AI from search revenue indefinitely - or at least until search revenue starts declining.
The one thing I'm fairly sure of: the €20/month all-you-can-eat era won't last forever. Whether that means higher prices, usage caps, or ads in your chat window depends on which company you're using - but the direction is the same across all of them.
DEV Community
https://dev.to/dzhuneyt/ai-subscriptions-are-subsidized-heres-what-happens-when-that-stops-293f