Compute Curse
Epistemic status: romantic speculation.
The core claim: I stumbled, almost by accident, on the thought that compute growth can be rather neatly analogized to natural resource abundance.
Before the compute curse, there was the resource curse
Countries that discover oil often end up worse off than countries that don't, which is known as the resource curse. The mechanisms are well-understood: a booming resource sector draws capital and labor away from other industries, creates incentives for rent-seeking over productive investment, crowds out human capital development, and corrodes the institutions needed to sustain long-term growth.
I argue that something structurally similar has been happening with compute. The exponential growth of available computation over the past several decades, and, critically, the widespread expectation that this growth would continue, has created a pattern of resource allocation, talent distribution, and research prioritization that mirrors the resource curse in specific and non-metaphorical ways.
Note: this is not a claim that extensive compute growth has been net negative (nor is it the opposite claim).
Dutch disease comes for ASML
The original Dutch disease mechanism is straightforward: when a booming sector (say, natural gas extraction) generates high returns, it pulls capital and labor out of other sectors (say, manufacturing), causing them to atrophy. The non-booming sectors don't decline because they have become less valuable in absolute terms; they decline because the booming sector offers relatively better returns, and resources flow accordingly.
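To see how self-reinforcing this can be, here is a minimal toy simulation, with every parameter invented purely for illustration: capital migrates toward whichever sector offers the higher return, and the neglected sector's returns erode as investment leaves it.

```python
# Toy Dutch-disease dynamics: capital chases relative returns, and the
# neglected sector's productivity decays. Every number here is made up.

def simulate(periods=20, boom_return=0.15, other_return=0.08,
             mobility=0.3, atrophy=0.05):
    boom_capital, other_capital = 50.0, 50.0
    for _ in range(periods):
        # Each period, a fraction of the lower-return sector's capital
        # migrates to the higher-return sector.
        if boom_return > other_return:
            moved = mobility * other_capital
            other_capital -= moved
            boom_capital += moved
        # Underinvestment erodes the neglected sector's returns,
        # which makes the outflow self-reinforcing.
        other_return *= (1 - atrophy)
    return boom_capital, other_capital

boom, other = simulate()
print(f"booming sector: {boom:.1f}, non-booming sector: {other:.1f}")
# -> booming sector: 100.0, non-booming sector: 0.0
# Both sectors started equally capitalized; the gap is pure dynamics.
```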
A trivial version of "compute Dutch disease" goes like this: because scaling compute yields such reliable, legible, and fundable returns (train a bigger model, get a better benchmark score, publish the paper, raise the round), it systematically starves research directions that are harder to fund, harder to evaluate, and slower to produce results, even when those directions might be more consequential in the long run.
So, "The Bitter Lesson" can be seen as the Dutch disease of AI research, if we add to it that the fact that scaling works doesn't mean the crowding-out of alternatives is costless. Or, in other words, the fact that scaling works better is rather a fact about our ability to do programming or even, if we go to the very end of this line of reasoning, about our economic and educational institutions, than about computer science in general.
However, I consider it only the most recent and most prominent manifestation of a phenomenon that has been unfolding for decades.
Since at least the late 1990s, the reliable cheapening of compute has made it consistently more profitable to build compute-intensive solutions to problems than to invest in the kind of deep, careful engineering that produces efficient, well-understood systems. When you can always count on next year's hardware being faster and cheaper, the rational business decision is to ship bloated software now and let Moore's Law clean up after you, rather than spending the additional engineering time to make something lean and correct. This created an entire economy of applications, business models, and platform architectures that are, in a meaningful sense, the technological equivalent of an oil-dependent monoculture: they exist not because they represent the best way to solve a problem, but because abundant compute made them the cheapest way to ship a product.
The consequences are visible across the entire stack. Web applications that would have run comfortably on a 2005-era machine now require gigabytes of RAM to render what is essentially styled text. Electron-based desktop apps ship an entire browser engine to display a chat window. Backend services that could be handled by a well-designed program running on a single server are instead distributed across sprawling microservice architectures that consume orders of magnitude more compute. Cory Doctorow's "enshittification" framework is about the user-facing result of this dynamic, but the deeper structural story is about how compute abundance degraded the craft of software engineering itself, well before anyone started worrying about ChatGPT replacing programmers.
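A deliberately tiny caricature of that engineering trade-off, in hypothetical Python: both functions give identical answers, but the first is what gets shipped when you can count on next year's hardware absorbing the waste.

```python
# A caricature of the trade-off. Both functions compute word frequencies
# and agree on every input; only the engineering differs.

def frequencies_bloated(words):
    # O(n^2): rescans the whole list once per distinct word. Fast enough
    # on a modern machine with a demo-sized input, so it ships.
    return {w: words.count(w) for w in set(words)}

def frequencies_lean(words):
    # O(n): one pass, one dict. A little more thought, vastly less compute.
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

words = ["compute", "curse", "compute", "abundance", "compute"]
assert frequencies_bloated(words) == frequencies_lean(words)
```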
This is the Dutch disease pattern operating at the level of the entire technology economy: the booming sector (scale-dependent applications) drew capital and talent away from the non-booming sector (careful engineering, deep technical innovation, computationally parsimonious approaches, and overall hardware tech economy - biotech, spacetech, materials, etc.), and the non-booming sector atrophied accordingly.
Because of the advantage conferred by abundant compute, it became more financially attractive to allocate resources toward software than toward physical engineering and deeptech, on top of software being easier to update, replicate, diffuse, and incrementally improve. And so deeptech stagnated.
But of course the AI case is qualitatively different, and the most sorrowful, because it has resulted in humanity trying to build superintelligence with giant instructable deep learning models.
Human capital crowding-out
Resource curse economies characteristically underinvest in education and human capital development. The relative returns to education are lower in resource-dependent economies because the booming sector doesn't require a broadly educated population.
The compute version of this story has been playing out for at least a decade, well before the current discourse about AI replacing jobs and destroying university education. The entire trajectory of computer science education shifted from "understand the fundamentals deeply" toward "learn to use frameworks and APIs that abstract over compute." At the same time, education in the natural sciences and engineering became steadily less attractive and less rewarding compared to computer science.
There is also a more direct talent-siphoning effect: the IT economy has been pulling the most capable technical minds into a narrow set of activities and away from a much broader set of technical and scientific challenges.
The voracity effect and race dynamics
In the resource curse literature, there is a so-called "voracity effect": when competing interest groups face a resource windfall, they respond by extracting more aggressively, leading to worse outcomes than moderate scarcity would produce. Rather than investing the windfall prudently, competing factions race to capture as much of it as possible before others do.
I leave this without a direct comment and let the reader have their own pleasure of meditating on this.
But compute growth is endogenous!
The resource curse in its classical form operates on an exogenous endowment: countries don't choose to have oil reserves, they discover them, and then the political economy warps around that windfall. Much of the pathology comes from the unearned nature of the wealth: it enables rent-seeking, weakens the link between effort and reward, and corrodes institutions.
Compute, by contrast, is endogenously produced through deliberate R&D and engineering investment. Moore's Law was never a law of nature.
Right?
I mean, to me Moore's Law looks like a strong default of any humanity-like civilization. It is created by humans, yes, but created in a way that is very hard to avoid.
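For a sense of how strong the default is once the cadence exists, the compounding is worth writing out; this is nothing but textbook doubling arithmetic under the classic assumption of one doubling every two years:

```python
# Textbook doubling arithmetic: one density doubling every two years.
years = 30
doublings = years // 2
print(f"{years} years -> 2**{doublings} = {2**doublings:,}x")
# 30 years -> 2**15 = 32,768x
```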
The counterfactual question
The resource curse literature has natural counterfactuals (resource-poor countries that developed strong institutions and diversified economies: Japan, South Korea, Singapore). What's the compute-curse counterfactual? A world where compute grew more slowly and we consequently invested more in elegant algorithms, interpretable models, and formal methods?
It's plausible, but it's also possible that slower compute growth would have simply meant less progress overall rather than differently-directed progress. I don't know; as I said at the beginning, this is speculation.
However, one can trivially note that in a world with less compute abundance, the relative returns to algorithmic cleverness, interpretability research, and formal verification would have been higher, because you couldn't just solve problems by throwing more FLOPs at them. And that may or may not lead to better outcomes in the long run (I am setting aside the question of ASI development here and talking only about "normal" tech and science R&D).
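A back-of-the-envelope sketch of that point, with toy numbers and the same assumed two-year doubling cadence: swapping an O(n^2) algorithm for an O(n log n) one buys a speedup of roughly n / log2(n), which can be priced in hardware doublings you no longer need.

```python
import math

# How much hardware progress does one algorithmic improvement substitute for?
# At input size n, replacing O(n^2) with O(n log n) speeds things up by about
# n / log2(n); every factor of 2 in that speedup is one hardware doubling
# you no longer need. Toy numbers throughout.

def doublings_bought(n):
    speedup = n / math.log2(n)
    return math.log2(speedup)

for n in (10**4, 10**6, 10**9):
    d = doublings_bought(n)
    print(f"n = {n:>13,}: ~{d:.0f} doublings, ~{2 * d:.0f} years of Moore's Law")

# At n = 10**9 the swap is worth ~25 doublings, i.e. roughly fifty years
# of hardware progress at the classic cadence. In a compute-poor world,
# that cleverness is the only way to get there.
```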
People actually thought about this!
Two existing frameworks come close to what I'm describing, but each points the analogy in a different direction.
The Intelligence Curse (Luke Drago and Rudolf Laine, 2025) uses the resource curse analogy to argue that AGI will create rentier-state-like incentives: powerful actors who control AI will lose their incentive to invest in regular people, just as petrostates lose their incentive to invest in citizens. This is a compelling argument about the distributional consequences of AGI, but it's about what happens after AGI arrives. The compute curse is about what's happening now, during the process of building toward AGI, and about how the abundance of compute is distorting that process itself.
The Generalized Dutch Disease (Policy Tensor, Feb 2026) is about the macroeconomic effects of the compute capex boom on US manufacturing competitiveness, showing that it operates through the same channels as the fracking boom and the pre-2008 financial boom. This is the closest existing work to what I'm describing, but it stays within the macroeconomic framing (factor prices, unit labor costs, exchange rate effects) and doesn't address the innovation-direction distortion, human capital crowding-out in the intellectual sense, or the AI safety implications.
But: the compute curse may actually be worse than the resource curse
Some of the negative downstream effects of compute abundance don't map onto the resource curse framework directly but are worth including for completeness, since they stem from the same underlying cause (cheap, abundant compute enabling activities that wouldn't otherwise be viable):
- Social media and attention economy pathologies
- Surveillance infrastructure
- Targeted public opinion manipulation
- And of course many AI safety issues
These are not Dutch disease effects, just straightforward negative externalities of cheap compute. But they suggest that the full accounting of compute abundance's costs is substantially larger than what the resource curse analogy alone would indicate.