AI Could Become 2,000 Times More Efficient by Copying the Brain: Study
Researchers from Loughborough University are looking into a new type of computer chip that could make AI far more energy efficient.
In brief
- A new chip inspired by the mechanics of the brain could make AI far more efficient at specific tasks, researchers from Loughborough University found.
- This could be instrumental in reducing the energy drain of AI applied to weather systems, biological processes, and more.
- The team focused on physical processes rather than software when designing the device, suggesting the possibility of reworking how AI is built.
AI systems like ChatGPT and Claude are known for their intensive energy usage. They need to store data in one place and then process it elsewhere, constantly moving it back and forth. It’s a problem that new research may now help fix.
A team of physicists from Loughborough University has designed a device that processes time-varying data directly inside the hardware, a job that traditional systems have handled with software-based methods.
The researchers argue that the new chip could be up to 2,000 times more energy efficient than existing methods for these tasks.
“This is exciting because it shows we can rethink how AI systems are built,” said Dr. Pavel Borisov, lead author of the study, in a statement. “By using physical processes instead of relying entirely on software, we can dramatically reduce the energy needed for these kinds of tasks.”
Where conventional AI systems are akin to sending documents back and forth between two offices (memory and processor) over and over, the new chip is like having one smarter office that works on everything in one place.
Brain gain
At the heart of the chip is a memory resistor, or memristor: an electronic component whose resistance depends on the signals that have previously passed through it. That memory alters how it responds to new signals; in other words, it's not just following instructions, but learning from history. This is an idea modelled on the human brain.
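The effect can be sketched in a few lines of code. The toy model below is purely illustrative (the class name, constants, and update rule are invented for this example, not taken from the niobium-oxide device in the study); it just shows how a state variable shaped by past inputs changes the response to new ones.

```python
class ToyMemristor:
    """Toy model: conductance g acts as the device's memory of past inputs."""

    def __init__(self, g=0.1, g_min=0.01, g_max=1.0, rate=0.1):
        self.g, self.g_min, self.g_max, self.rate = g, g_min, g_max, rate

    def step(self, voltage):
        # Each input nudges the conductance, so the response to a given
        # voltage depends on everything the device has seen before.
        self.g += self.rate * voltage * (self.g_max - self.g) * (self.g - self.g_min)
        self.g = min(max(self.g, self.g_min), self.g_max)
        return self.g * voltage  # current = conductance x voltage

m = ToyMemristor()
currents = [m.step(1.0) for _ in range(5)]
# Identical inputs draw increasing current as the device "remembers".
```

Feeding the same voltage repeatedly produces a different current each time, which is exactly the history-dependence that a conventional, stateless circuit element lacks.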
“Inspired by the way the human brain forms very numerous and seemingly random neuronal connections between all its neurons, we created complex, random, physical connections in an artificial neural network by designing pores in nanometre-thin films of niobium oxide as part of a novel electronic device,” said Dr. Borisov.
“We showed how one can predict the future evolution of a complex time series using these devices at up to two thousand-times lower energy consumption compared to a standard software-based solution.”
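In software, the closest analogue to this physical approach is reservoir computing: a fixed web of random connections transforms the input signal, and only a simple linear readout is trained. The sketch below is a minimal echo state network for one-step-ahead time-series prediction; the sizes, scaling constants, and sine-wave task are illustrative assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                    # reservoir size (illustrative)
W_in = rng.uniform(-0.5, 0.5, n)           # fixed random input weights
W = rng.uniform(-0.5, 0.5, (n, n))         # fixed random recurrent connections
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale for stable dynamics

def run_reservoir(u):
    """Drive the fixed random reservoir with signal u; collect its states."""
    x = np.zeros(n)
    states = []
    for u_t in u:
        x = np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Task: predict the next value of a simple periodic signal.
t = np.arange(600)
u = np.sin(0.1 * t)
X = run_reservoir(u[:-1])                  # reservoir states at each step
y = u[1:]                                  # targets: the next value
w_out = np.linalg.lstsq(X, y, rcond=None)[0]  # only the readout is trained
pred = X @ w_out
```

Because the random reservoir itself is never trained, it can in principle be replaced by physics; in the Loughborough design, the random pores in the nanometre-thin films play the role that the random weight matrix plays here.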
AI is often used to process data that changes over time, such as weather readings, stock prices, or wave measurements. Such signals may not be random, but they are sensitive to small changes.
For these more chaotic kinds of measurements, traditional AI systems use huge amounts of energy to keep up with all the small changes, shuttling information back and forth between memory and processor. The new chip could be well suited to exactly these systems.

By drawing on past measurements, the chip learns to track and predict these chaotic signals while reducing the energy required.
While we often think of AI as something like ChatGPT or facial recognition software, it is now found in a wide range of applications. This tool is aimed not at static information, like a chatbot's, but at time-dependent information.
“Heart beat rates, brain electric activity, the outside temperature. These are all changing every day. There are capable applications that track these, but they are energy-intensive and they require a stable online connection to a server,” Dr. Borisov told Decrypt.
These are the kinds of areas where the chip could be deployed, creating smarter systems for data that changes over time.
“My end goal would be for this kind of technology to be used in a time-dependent signal. Whether that’s in a car, a robot, a nuclear power plant, or in a smart watch,” he added. “For example, to monitor if someone has a stroke or not, to monitor the health of a car engine, or that the nuclear reactor is operating normally, this sort of thing.”