AI News Hub by Eigenvector

Measuring the Representational Alignment of Neural Systems in Superposition

arXiv cs.LG · by Sunny Liu, Habon Issa, André Longon, Liv Gorton, Meenakshi Khosla, David Klindt · April 2, 2026


Abstract: Comparing the internal representations of neural networks is a central goal in both neuroscience and machine learning. Standard alignment metrics operate on raw neural activations, implicitly assuming that similar representations produce similar activity patterns. However, neural systems frequently operate in superposition, encoding more features than they have neurons via linear compression. We derive closed-form expressions showing that superposition systematically deflates Representational Similarity Analysis, Centered Kernel Alignment, and linear regression, causing networks with identical feature content to appear dissimilar. The root cause is that these metrics depend on the cross-similarity between the two systems' superposition matrices (which, under the assumption of random projections, usually differ significantly), not on the latent features themselves: alignment scores conflate what a system represents with how it represents it. Under partial feature overlap, this confound can invert the expected ordering, making systems that share fewer features appear more aligned than systems that share more. Crucially, the apparent misalignment need not reflect a loss of information; compressed sensing guarantees that the original features remain recoverable from the lower-dimensional activity, provided they are sparse. We therefore argue that comparing neural systems in superposition requires extracting and aligning the underlying latent features rather than comparing the raw neural mixtures.
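The deflation effect the abstract describes can be reproduced in a few lines. The following is a minimal NumPy sketch, not code from the paper: two "systems" share identical sparse latent features but superpose them through independent random projection matrices, and linear CKA between the raw activations comes out far below the perfect score obtained on the features themselves. All names and parameter choices (feature count, neuron count, sparsity level) are illustrative assumptions.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape (samples, units)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

rng = np.random.default_rng(0)
n_samples, n_features, n_neurons = 2000, 512, 64

# Sparse latent features shared by both systems (~5% of features active).
Z = rng.standard_normal((n_samples, n_features))
Z *= rng.random((n_samples, n_features)) < 0.05

# Two independent random projections into fewer neurons: superposition.
W1 = rng.standard_normal((n_features, n_neurons)) / np.sqrt(n_neurons)
W2 = rng.standard_normal((n_features, n_neurons)) / np.sqrt(n_neurons)
A1, A2 = Z @ W1, Z @ W2

print(linear_cka(Z, Z))    # 1.0: the latent feature content is identical
print(linear_cka(A1, A2))  # well below 1.0, despite identical features
```

The score on the raw activations is driven by the overlap of the two random projection matrices rather than by the shared features, which is precisely the confound the paper formalizes.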

Comments: 17 pages, 4 figures

Subjects:

Machine Learning (cs.LG)

Cite as: arXiv:2604.00208 [cs.LG]

(or arXiv:2604.00208v1 [cs.LG] for this version)

https://doi.org/10.48550/arXiv.2604.00208

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Sunny Liu [v1] Tue, 31 Mar 2026 20:23:07 UTC (2,474 KB)
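The abstract's compressed-sensing claim, that sparse latent features remain recoverable from the lower-dimensional neural activity, can also be sketched directly. Below is an illustrative NumPy implementation of orthogonal matching pursuit (a standard sparse-recovery algorithm, not one the paper specifies); the dictionary `D`, sparsity `k`, and all dimensions are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_neurons, k = 512, 64, 3

# Random mixing matrix: each column is one feature's direction in neuron space.
D = rng.standard_normal((n_neurons, n_features)) / np.sqrt(n_neurons)

# A k-sparse latent feature vector and its compressed neural activity a = D z.
z = np.zeros(n_features)
z[rng.choice(n_features, size=k, replace=False)] = rng.standard_normal(k)
a = D @ z

def omp(D, a, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse z from a = D z."""
    residual, support = a.copy(), []
    for _ in range(k):
        # Pick the feature direction most correlated with the residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit all selected coefficients by least squares, update residual.
        coef, *_ = np.linalg.lstsq(D[:, support], a, rcond=None)
        residual = a - D[:, support] @ coef
    z_hat = np.zeros(D.shape[1])
    z_hat[support] = coef
    return z_hat

z_hat = omp(D, a, k)
print(np.max(np.abs(z_hat - z)))  # recovery error: tiny in the noiseless case
```

With 64 neurons, 512 features, and only 3 active features, recovery succeeds with high probability for a random Gaussian dictionary, which is the sense in which superposition loses no information about sparse features.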
