Here's what 'cracking' bitcoin in 9 minutes by quantum computers actually means
Google's quantum paper made headlines with that number. Here's what it means, what's actually at risk, and why 6.9 million bitcoin are more exposed than the rest.
Apr 4, 2026, 2:30 a.m.
Google's Quantum AI team said earlier this week that a future quantum computer could derive a bitcoin private key from a public key in roughly nine minutes. The number ricocheted across social media and spooked markets.
But what does it actually mean in practice?
Let's start with how bitcoin transactions work. When you send bitcoin, your wallet signs the transaction with a private key, a secret number that proves you own the coins.
That signature also reveals your public key, the shareable half of your key pair (your address is derived from a hash of it). The signed transaction gets broadcast to the network and sits in a waiting area called the mempool until a miner includes it in a block. On average, that confirmation takes about 10 minutes.
Your private key and public key are linked by a math problem called the elliptic curve discrete logarithm problem. Classical computers can't reverse that math in any useful timeframe, while a sufficiently powerful future quantum computer running an algorithm called Shor's could.
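The one-way relationship can be sketched with ordinary modular exponentiation, which plays the same role as the elliptic-curve math in this simplified stand-in (the prime and generator below are toy parameters, not anything bitcoin uses):

```python
# Toy stand-in for the private-key / public-key relationship.
# Real bitcoin uses elliptic-curve math on the secp256k1 curve; here
# modular exponentiation illustrates the same asymmetry:
# easy to compute forward, infeasible to reverse classically.

P = 2**127 - 1  # a large prime modulus (toy parameter)
G = 5           # a generator (toy parameter)

def public_from_private(private_key: int) -> int:
    """Forward direction: fast even for enormous exponents."""
    return pow(G, private_key, P)

private_key = 123456789987654321
public_key = public_from_private(private_key)

# Computing public_key took microseconds. Recovering private_key from
# public_key means solving a discrete logarithm -- the problem Shor's
# algorithm attacks, and the one classical computers can't crack at
# real-world key sizes.
```

The forward computation is near-instant even for 256-bit exponents; the reverse is the discrete logarithm problem the article describes.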
Here's where the nine minutes part comes in. Google's paper found that a quantum computer could be "primed" in advance by pre-computing the parts of the attack that don't depend on any specific public key.
Once your public key appears in the mempool, the machine only needs about nine minutes to finish the job and derive your private key. Bitcoin's average confirmation time is 10 minutes. That gives the attacker a roughly 41% chance of deriving your key and redirecting your funds before the original transaction confirms.
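The 41% figure falls out of the statistics of block times. Block arrivals are well modeled as a Poisson process, so the wait for the next block follows an exponential distribution; the attacker wins whenever the block takes longer than the attack. A quick check of the arithmetic:

```python
import math

ATTACK_MINUTES = 9.0       # quantum key-derivation time from Google's paper
MEAN_BLOCK_MINUTES = 10.0  # bitcoin's average confirmation interval

# With exponentially distributed block times, the probability that the
# next block takes longer than t minutes is exp(-t / mean).
p_attacker_wins = math.exp(-ATTACK_MINUTES / MEAN_BLOCK_MINUTES)

print(f"{p_attacker_wins:.1%}")  # prints "40.7%" -- the roughly 41% figure
```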
Think of it like a thief spending hours building a universal safe-cracking machine (pre-computation). The machine works for any safe, but each time a new safe appears, it only needs a few final adjustments — and that last step is what takes about nine minutes.
That's the mempool attack. It's alarming but requires a quantum computer that doesn't exist yet. Google's paper estimates such a machine would need fewer than 500,000 physical qubits. Today's largest quantum processors have around 1,000.
The bigger and more immediate concern is the 6.9 million bitcoin, roughly one-third of total supply, that already sit in wallets where the public key has been permanently exposed.
This includes early bitcoin addresses from the network's first years that used a format called pay-to-public-key, where the public key is visible on the blockchain by default. It also includes any wallet that has reused an address, since spending from an address reveals the public key for all remaining funds.
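Why does spending expose a previously hidden key? In the common pay-to-public-key-hash format, the chain stores only a hash of the public key until you spend. A simplified sketch (real bitcoin hashes with SHA-256 then RIPEMD-160 and encodes the result; plain SHA-256 stands in here, and the key bytes are made up):

```python
import hashlib

# Hypothetical compressed public key (33 bytes, made-up value).
public_key = bytes.fromhex("02" + "11" * 32)

# A pay-to-public-key-hash address commits only to a hash of the key.
address_commitment = hashlib.sha256(public_key).hexdigest()

# Before the first spend, the chain holds only address_commitment:
# a hash gives a quantum attacker nothing to run Shor's algorithm on.
# Spending reveals public_key itself inside the signature data -- and
# once revealed, any coins left at that same address stay exposed.
```

That's why address reuse matters: the first spend from an address permanently publishes the public key guarding whatever funds remain there.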
These coins don't need the nine-minute race. An attacker with a sufficiently powerful quantum computer could crack them at leisure, working through exposed keys one by one without any time pressure.
Bitcoin's 2021 Taproot upgrade made this worse, as CoinDesk reported earlier Tuesday. Taproot changed how addresses work so that public keys are visible on-chain by default, inadvertently expanding the pool of wallets that would be vulnerable to a future quantum attack.
The bitcoin network itself would keep running. Mining uses a different algorithm called SHA-256 that quantum computers can't meaningfully speed up with current approaches. Blocks would still be produced.
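Mining is a different kind of problem: repeated hashing until the output falls below a target, which Shor's algorithm doesn't apply to. A minimal sketch of the idea, with a toy header and an artificially low difficulty:

```python
import hashlib

def meets_target(block_header: bytes, difficulty_bits: int) -> bool:
    """Bitcoin-style check: is the double SHA-256 of the header
    below the target (i.e., does it start with enough zero bits)?"""
    digest = hashlib.sha256(hashlib.sha256(block_header).digest()).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

# Miners grind through nonces until a hash clears the target. Quantum
# search (Grover's algorithm) offers at best a quadratic speedup here,
# which bitcoin's difficulty adjustment could absorb -- unlike Shor's
# exponential speedup against the signature math.
nonce = 0
while not meets_target(b"toy-header" + nonce.to_bytes(8, "big"), 16):
    nonce += 1
```

At 16 difficulty bits this loop finishes in a fraction of a second; real mining targets are astronomically harder, but the structure is the same.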
The ledger would still exist. But if private keys can be derived from public keys, the ownership guarantees that make bitcoin valuable break down. Anyone with exposed keys is at risk of theft, and institutional trust in the network's security model collapses.
The fix is post-quantum cryptography, which replaces the vulnerable math with algorithms that quantum computers can't crack. Ethereum has spent eight years building toward that migration. Bitcoin hasn't even started.