Measuring the Representational Alignment of Neural Systems in Superposition
Abstract: Comparing the internal representations of neural networks is a central goal in both neuroscience and machine learning. Standard alignment metrics operate on raw neural activations, implicitly assuming that similar representations produce similar activity patterns. However, neural systems frequently operate in superposition, encoding more features than they have neurons via linear compression. We derive closed-form expressions showing that superposition systematically deflates Representational Similarity Analysis, Centered Kernel Alignment, and linear regression, causing networks with identical feature content to appear dissimilar. The root cause is that these metrics depend on the cross-similarity between the two systems' respective superposition matrices (which, under a random-projection assumption, typically differ substantially) rather than on the latent features themselves: alignment scores conflate what a system represents with how it represents it. Under partial feature overlap, this confound can invert the expected ordering, making systems that share fewer features appear more aligned than systems that share more. Crucially, the apparent misalignment need not reflect a loss of information; compressed sensing guarantees that the original features remain recoverable from the lower-dimensional activity, provided they are sparse. We therefore argue that comparing neural systems in superposition requires extracting and aligning the underlying features rather than comparing the raw neural mixtures.
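The deflation the abstract describes can be illustrated numerically. The following is an independent sketch (not code from the paper): two systems share identical sparse latent features, but each mixes them into a smaller number of neurons through its own random superposition matrix. Linear CKA computed on the raw activations comes out far below the CKA of the shared features, even though the two systems represent exactly the same content. All variable names and the sparsity level are illustrative assumptions.

```python
import numpy as np


def linear_cka(X, Y):
    """Linear CKA between two (samples x units) activation matrices."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den


rng = np.random.default_rng(0)
n_samples, n_features, n_neurons = 2000, 512, 64  # more features than neurons

# Sparse latent features shared by BOTH systems (identical content).
Z = rng.normal(size=(n_samples, n_features))
Z *= rng.random((n_samples, n_features)) < 0.05  # ~5% of features active

# Each system compresses the same features with its own random matrix,
# i.e. the same representation stored in two different superpositions.
W1 = rng.normal(size=(n_features, n_neurons)) / np.sqrt(n_neurons)
W2 = rng.normal(size=(n_features, n_neurons)) / np.sqrt(n_neurons)

cka_features = linear_cka(Z, Z)           # 1.0 by construction
cka_activations = linear_cka(Z @ W1, Z @ W2)  # deflated by the mismatched mixtures

print(f"CKA on shared latent features: {cka_features:.3f}")
print(f"CKA on raw activations:        {cka_activations:.3f}")
```

Under these assumptions the activation-level CKA lands well below 1 despite perfect feature overlap, consistent with the paper's claim that raw-activation metrics conflate what is represented with how it is superposed.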
Comments: 17 pages, 4 figures
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2604.00208 [cs.LG]
(or arXiv:2604.00208v1 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2604.00208
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Sunny Liu [v1] Tue, 31 Mar 2026 20:23:07 UTC (2,474 KB)