A single beam of light runs AI with supercomputer power
Aalto University researchers have developed a method to execute AI tensor operations using just one pass of light. By encoding data directly into light waves, they enable calculations to occur naturally and simultaneously. The approach works passively, without electronics, and could soon be integrated into photonic chips. If adopted, it promises dramatically faster and more energy-efficient AI systems.
Tensor operations are multidimensional generalizations of matrix algebra that underpin many modern technologies, especially artificial intelligence. These operations go far beyond the simple calculations most people encounter. A helpful way to picture them is to imagine manipulating a Rubik's cube in several dimensions at once by rotating, slicing, or rearranging its layers. Humans and traditional computers must break these tasks into sequences of steps, but light can perform all of them at the same time.
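To make this concrete, here is a minimal Python sketch (using NumPy; the example and its dimensions are illustrative, not taken from the study) of one such tensor operation, a batched matrix multiplication of the kind deep learning relies on. On digital hardware this contraction is ultimately carried out as many sequential multiply-accumulate steps:

```python
import numpy as np

# A rank-3 tensor operation common in deep learning: a batched
# matrix multiplication (e.g., one attention head per batch entry).
batch, n, m, k = 4, 8, 8, 8
A = np.random.rand(batch, n, m)
B = np.random.rand(batch, m, k)

# A CPU or GPU evaluates this contraction as a long sequence of
# multiply-accumulates; the optical approach described in the article
# aims to perform the equivalent operation in a single pass of light.
C = np.einsum('bnm,bmk->bnk', A, B)
print(C.shape)  # (4, 8, 8)
```

Each output entry here already requires `m` multiplications and additions, which is why the workload grows so quickly with data size on conventional hardware.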
Today, tensor operations are essential for AI systems involved in image processing, language understanding, and countless other tasks. As the amount of data continues to grow, conventional digital hardware such as GPUs faces increasing strain in speed, energy use, and scalability.
Researchers Demonstrate Single-Shot Tensor Computing With Light
To address these challenges, an international team led by Dr. Yufeng Zhang from the Photonics Group at Aalto University's Department of Electronics and Nanoengineering has developed a fundamentally new approach. Their method allows complex tensor calculations to be completed within a single movement of light through an optical system. The process, described as single-shot tensor computing, functions at the speed of light.
"Our method performs the same kinds of operations that today's GPUs handle, like convolutions and attention layers, but does them all at the speed of light," says Dr. Zhang. "Instead of relying on electronic circuits, we use the physical properties of light to perform many computations simultaneously."
Encoding Information Into Light for High-Speed Computation
The team accomplished this by embedding digital information into the amplitude and phase of light waves, transforming numerical data into physical variations within the optical field. As these light waves interact, they automatically carry out mathematical procedures such as matrix and tensor multiplication, which form the basis of deep learning. By working with multiple wavelengths of light, the researchers expanded their technique to support even more complex, higher-order tensor operations.
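As a rough numerical illustration of the general principle (a toy sketch, not the authors' actual optical design): when two coherent fields are superposed, the measured intensity contains a cross term proportional to the product of their amplitudes, which is one way interference can carry out multiplication, and summing such terms across parallel channels yields a dot product:

```python
import numpy as np

def interfere(a, b):
    """Intensity of the superposition of two complex fields a and b."""
    return np.abs(a + b) ** 2

# Encode two numbers as amplitudes of coherent fields (phase 0 here).
x, y = 0.6, 0.8
Ex = x * np.exp(1j * 0.0)
Ey = y * np.exp(1j * 0.0)

# |Ex + Ey|^2 = x^2 + y^2 + 2xy  ->  the cross term carries the product.
cross_term = interfere(Ex, Ey) - x**2 - y**2
print(cross_term / 2)  # ≈ x * y = 0.48

# The same idea extends to a dot product: encode vectors element-wise
# in parallel channels (e.g., different wavelengths) and sum the
# cross terms across channels.
u = np.array([0.1, 0.2, 0.3])
v = np.array([0.4, 0.5, 0.6])
dot = (interfere(u, v) - u**2 - v**2).sum() / 2
print(np.isclose(dot, u @ v))  # True
```

Dot products of this kind are the building blocks of the matrix and tensor multiplications mentioned above; the published scheme goes further by using multiple wavelengths to realize higher-order tensor contractions in one optical pass.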
"Imagine you're a customs officer who must inspect every parcel through multiple machines with different functions and then sort them into the right bins," Zhang says. "Normally, you'd process each parcel one by one. Our optical computing method merges all parcels and all machines together -- we create multiple 'optical hooks' that connect each input to its correct output. With just one operation, one pass of light, all inspections and sorting happen instantly and in parallel."
Passive Optical Processing With Wide Compatibility
One of the most striking benefits of this method is how little intervention it requires. The necessary operations occur on their own as the light travels, so the system does not need active control or electronic switching during computation.
"This approach can be implemented on almost any optical platform," says Professor Zhipei Sun, leader of Aalto University's Photonics Group. "In the future, we plan to integrate this computational framework directly onto photonic chips, enabling light-based processors to perform complex AI tasks with extremely low power consumption."
Path Toward Future Light-Based AI Hardware
Zhang notes that the ultimate objective is to adapt the technique to existing hardware and platforms used by major technology companies. He estimates that the method could be incorporated into such systems within 3 to 5 years.
"This will create a new generation of optical computing systems, significantly accelerating complex AI tasks across a myriad of fields," he concludes.
The study was published in Nature Photonics on November 14th, 2025.