Systematically dismantle the AI compute supply chain.
This is not an April Fools’ joke: I’m participating in Inkhaven, which means I need to write a blog post every day. I recently watched The AI Doc. It’s the first big documentary featuring AI safety. It’s playing in theatres across America. It’s got a bunch of my friends and colleagues in it. It starts out with a sick Neil Young deep cut. I thought it was pretty great overall.
I liked the discussion of AI risks, but I was really disappointed by the way it discussed solutions to those risks. The documentary presents us with a choice: “Lock It Down” or “Let It Rip”. In other words: Do you want the government spying on everything you do? Or do you want terrorists to kill billions of people with bio-weapons?1
And this is after the filmmaker asks, “Why don’t we just stop?” It’s frustrating. But I get it. I used to think the same way. It wasn’t until 2023 that I grasped one of the most significant implications of the “scaling paradigm” for building AI, in which progress is driven by trillions of dollars of investment in hardware: there is a third way, one that doesn’t force a choice between two unacceptable risks. We can get rid of the “compute”, that is, the advanced AI chips and the factories that build them.
I started calling this intervention “Systematically dismantle the compute supply chain”, and the name has sort of stuck with me (although it could probably be improved). Let’s break it down.
Systematically. Dismantle. The compute supply chain.
The compute supply chain
Starting at the end, the compute supply chain includes, at a minimum, the factories that make the most advanced computer chips, often called “fabs” (short for “fabrication plants”). There’s really just one of these, the Taiwan Semiconductor Manufacturing Company (TSMC), in Taiwan. TSMC relies on very specialized knowledge, which no individual employee possesses; only the company as a team can produce such advanced chips. As a result, it has resisted concerted Chinese industrial espionage efforts of the kind that have successfully stolen secrets from other tech industries. There are a few competitors like Samsung and Intel that aren’t too far behind, maybe a few years. China’s best effort, SMIC, lags further.

The fabs in turn rely on “extreme ultraviolet lithography” (EUV) machines to actually physically manufacture the computer chips. The ones used to make advanced chips are produced exclusively by the Dutch company ASML, using lenses created by the German company Zeiss. Computer chips are made out of high-quality silicon, the large majority of which comes from the Spruce Pine Mining District in North Carolina.
Once they are made, the chips end up in datacenters. The rush to build massive datacenters across America, which has met with so much local resistance, is driven by AI. There are lots of reasons AI chips end up packed into datacenters — it makes it easier to maintain and cool them, and often it’s good if the chips can send messages back and forth as fast as possible. The good news is that this means we know where the vast majority of AI chips are — datacenters are not easy to hide, because of their intense demands for energy and cooling.
Dismantle the compute supply chain
A simple way to bring AI progress to a grinding halt would be to get rid of the AI chips and the factories that build them. This would go beyond Bernie Sanders’s call for a moratorium on datacenter construction, and include shutting down much of the operations of the leading companies building advanced AI hardware and related technology.
The compute supply chain is extremely concentrated, and all of its main pieces are in the US or its allies: Taiwan, the Netherlands, and Germany. It would be easy enough to shut down the operations of these factories and mines if governments wanted to. We might need to cast a broader net in order to capture enough of the compute to make this solution robust, and it gets harder the further back in time we go with computing technology. From a consumer perspective, though, the main thing we’d give up by reversing a decade of progress in AI computer hardware is just AI — smartphones and laptops haven’t really improved much recently.
Systematically dismantle the compute supply chain
The most durably effective way of getting rid of this technology would be through a serious international commitment to do so. This would need to involve ways for countries to check that other countries were complying.
The US and China agreeing to something like this might be enough to get other countries to go along. The US seems like the main sticking point, given how far ahead the US is in AI and AI hardware, and how aggressively the US is racing to build superintelligence.
There could be government buyouts, where the companies involved are compensated, and government buybacks, where people who hand over unaccounted-for chips are compensated, perhaps at a big mark-up.

There could also be programs to prevent anyone from resuming development of more powerful AI hardware. These might require surveilling the people who have been involved in the relevant industries, to make sure they aren’t setting up shop in secret somewhere else. But it wouldn’t require anything approaching mass surveillance.
Bad futures this helps avoid
If AI really is incredibly dangerous, and governments realize this in the future, their default reactions could be much worse than what I’ve outlined. Two scenarios I worry about:
- If risks emerge slowly, and dangerous AI gets cheaper and cheaper to build over time, governments drift further and further towards mass surveillance, “locking it down” by slowly tightening their grip.
- If risks emerge rapidly and catch governments off guard, an “AI Chernobyl”-type event leaves many dead and governments suddenly scrambling to “lock it down” immediately, by any means necessary, e.g. bombing datacenters and potentially triggering WWIII.
Importantly, “systematically dismantling the compute supply chain” becomes less effective the more time passes. Technology for building advanced chips advances and proliferates, and there are more advanced computer chips already out there. The effort is bound to miss something, and open-weights models (and, for that matter, proprietary ones) are getting more powerful over time. So it will become harder and harder to rule out the possibility that someone, somewhere, is using some of the remaining compute to build a dangerously powerful AI system. And with that come the dangerous temptations to spy on people to make sure they’re not doing it, and to do it yourself.

In 2023, AI safety pioneer and “original doom guy” Eliezer Yudkowsky notoriously argued in Time Magazine that countries should be willing to bomb each other to stop the development of superintelligent AI; at the time this was considered a fringe view. A few years later, researcher Dan Hendrycks was joined by no less than a former Google CEO (Eric Schmidt) and Meta’s current head of superintelligence (Alexandr Wang) as authors on a paper called “Superintelligence Strategy”, arguing that countries might go to war to stop the development of superintelligence.
If governments can’t cooperate to prevent the development of superintelligent AI peacefully, they might resort to doing so violently.
1
To be fair, it ultimately says we should chart some sort of middle path. But what does that actually look like? The regulations they talk about are sensible, but they’re not nearly enough, and I worry that might not come through.