The race to solve the biggest problem in quantum computing
The errors that quantum computers make are holding the technology back. But recent progress in quantum error correction has excited many researchers
Quantum computers won’t be truly useful until they can correct their mistakes
Quantum computers are already here, but they make far too many errors. This is arguably the biggest obstacle to the technology really becoming useful, but recent breakthroughs suggest a solution may be on the horizon.
Errors creep into traditional computers too, but there are well-established techniques for correcting them. They rely on redundancy, where extra bits are used to detect when 0s incorrectly swap to 1s or vice versa. In the quantum world, however, it is a lot more challenging.
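The classical idea is simple enough to sketch. Here is a minimal, illustrative three-bit repetition code in Python: each logical bit is stored three times, and a majority vote detects and corrects any single flip. (Real-world codes, such as Hamming codes, achieve the same protection far more efficiently; this is just to show the principle of redundancy.)

```python
# Classical error correction via redundancy: a three-bit repetition code.
# One logical bit is stored as three physical copies; a majority vote
# corrects any single flipped bit.

def encode(bit: int) -> list[int]:
    """Store one logical bit as three physical copies."""
    return [bit, bit, bit]

def decode(copies: list[int]) -> int:
    """Recover the logical bit by majority vote, correcting one flip."""
    return 1 if sum(copies) >= 2 else 0

codeword = encode(1)        # [1, 1, 1]
codeword[0] ^= 1            # noise flips the first copy -> [0, 1, 1]
print(decode(codeword))     # -> 1: the single error is corrected
```

The catch, as the next paragraph explains, is that this scheme starts by copying the bit, and copying is exactly what quantum mechanics forbids.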
The laws of quantum mechanics forbid information from being duplicated inside a quantum computer, so redundancy must be achieved by spreading information across groups of qubits – the building blocks of quantum computers – and utilising phenomena that only exist in quantum settings, such as when pairs of particles become linked via quantum entanglement. These qubit groups are called logical qubits and figuring out the optimal way to build and use them is crucial for determining how best to eliminate errors.
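The trick that makes this possible can be sketched with the simplest logical qubit, the three-qubit bit-flip code. Pairwise parity checks (the "syndrome") reveal where a flip occurred without ever reading out the encoded value itself. The toy simulation below is purely classical, so it only models bit flips, not the phase errors a real quantum code must also handle; it is an illustration of the idea, not a quantum simulation.

```python
# The three-qubit bit-flip code, simulated classically. Two parity checks
# between neighbouring qubits locate a single flip without revealing
# whether the encoded logical state is 0 or 1 -- redundancy without copying.

def syndrome(qubits: list[int]) -> tuple[int, int]:
    """Parity checks: qubit 0 vs qubit 1, and qubit 1 vs qubit 2."""
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def correct(qubits: list[int]) -> list[int]:
    """Use the syndrome pattern to locate and undo a single bit flip."""
    flip_at = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(qubits))
    if flip_at is not None:
        qubits[flip_at] ^= 1
    return qubits

logical_one = [1, 1, 1]        # logical 1 spread across three qubits
logical_one[1] ^= 1            # noise flips the middle qubit -> [1, 0, 1]
print(syndrome(logical_one))   # -> (1, 1): both checks trip, so qubit 1 flipped
print(correct(logical_one))    # -> [1, 1, 1]
```

Note that the checks only ever compare qubits with each other: the same syndrome pattern arises whether the encoded state is 0, 1 or a superposition of both, which is how a logical qubit sidesteps the ban on duplicating quantum information.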
A recent surge in progress has made researchers optimistic. “It’s a very exciting time in error correction. For the first time, theory and practice are really making contact,” says Robert Schoelkopf at Yale University.
One of the stumbling blocks for quantum error correction has been that the number of qubits needed to make a logical qubit tends to be large, which makes the whole quantum computer costly and challenging to build. But Xiayu Linpeng at the International Quantum Academy in China and his team have recently demonstrated that this doesn’t have to be the case.
The researchers found that just two superconducting qubits can be combined with a tiny resonator to make one larger qubit that both makes fewer errors and can automatically flag an error when it happens. They then went a step further, showing how three such qubits can be linked through quantum entanglement to build up computational power without errors slipping through undetected.
Schoelkopf’s team also recently demonstrated how several operations necessary for quantum computer programs could be implemented with the same type of qubit and exceptionally low error rates, with some errors occurring as rarely as once in a million qubit manipulations.
Even though approaches like this will catch many errors, useful quantum computers will have to contain thousands of logical qubits, meaning some will still creep in. So Arian Vezvaee at start-up Quantum Elements and his colleagues have tested a way to add further error protection to logical qubits, like wearing a raincoat under an umbrella.
The key idea is to not let any qubits sit idle for too long, as that makes them lose their special quantum properties and become corrupted. The team showed that giving idle qubits extra “kicks” of electromagnetic radiation can create the most reliable entanglement between logical qubits to date.
The exact recipe for how to combine physical qubits into logical ones really matters for some of the most precise calculations, as David Muñoz Ramo at quantum computing firm Quantinuum and his colleagues found when investigating an algorithm that determines the lowest possible energy that a hydrogen molecule can have. There, the precision needed is so high that basic error-correcting methods aren’t enough.
Such innovation in error-correcting programs will be crucial to whether quantum computers succeed or fail, says James Wootton at start-up Moth Quantum. “We’re still in a phase where researchers are learning how all the pieces of error correction fit together.” Quantum computers can’t yet operate effectively without errors, but the engineering foundations for that are starting to appear, he says.