AI News Hub by Eigenvector

Low-Rank Compression of Pretrained Models via Randomized Subspace Iteration

arXiv stat.ML · Farhad Pourkamali-Anaraki · April 6, 2026

arXiv:2604.02659v1 Announce Type: cross

Abstract: The massive scale of pretrained models has made efficient compression essential for practical deployment. Low-rank decomposition based on the singular value decomposition (SVD) provides a principled approach for model reduction, but its exact computation is expensive for large weight matrices. Randomized alternatives such as randomized SVD (RSVD) improve efficiency, yet they can suffer from poor approximation quality when the singular value spectrum decays slowly, a regime commonly observed in modern pretrained models. In this work, we address this limitation from both theoretical and empirical perspectives. First, we establish a connection between low-rank approximation error and predictive performance by analyzing softmax perturbations, showing that deviations in class probabilities are controlled by the spectral error of the compressed weights. Second, we demonstrate that RSVD is inadequate, and we propose randomized subspace iteration (RSI) as a more effective alternative. By incorporating multiple power iterations, RSI improves spectral separation and provides a controllable mechanism for enhancing approximation quality. We evaluate our approach on both convolutional networks and transformer-based architectures. Our results show that RSI achieves near-optimal approximation quality while outperforming RSVD in predictive accuracy under aggressive compression, enabling efficient model compression.
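The RSI procedure the abstract describes can be sketched in plain NumPy: starting from a random Gaussian sketch of the weight matrix, alternating multiplications by W and Wᵀ (with re-orthonormalization for stability) sharpen the dominant subspace before an exact SVD of the small projected matrix is taken; setting the iteration count to zero recovers plain RSVD. The matrix sizes, rank, oversampling, and iteration counts below are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

def rsi_lowrank(W, rank, n_iter=4, oversample=10, seed=0):
    """Rank-`rank` approximation of W via randomized subspace iteration.

    n_iter=0 reduces to plain randomized SVD (RSVD); additional power
    iterations help when the singular value spectrum decays slowly.
    """
    rng = np.random.default_rng(seed)
    k = rank + oversample
    # Sketch the range of W with a Gaussian test matrix.
    Q, _ = np.linalg.qr(W @ rng.standard_normal((W.shape[1], k)))
    for _ in range(n_iter):
        # One power iteration (W W^T), re-orthonormalized each step.
        Z, _ = np.linalg.qr(W.T @ Q)
        Q, _ = np.linalg.qr(W @ Z)
    # Exact SVD of the small projected matrix B = Q^T W, then lift back.
    Ub, s, Vt = np.linalg.svd(Q.T @ W, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Synthetic "weight matrix" with a slowly decaying spectrum,
# the regime where plain RSVD degrades.
rng = np.random.default_rng(1)
U0, _ = np.linalg.qr(rng.standard_normal((512, 256)))
V0, _ = np.linalg.qr(rng.standard_normal((256, 256)))
spectrum = 1.0 / np.sqrt(np.arange(1, 257))
W = (U0 * spectrum) @ V0.T

def spectral_error(n_iter, rank=32):
    U, s, Vt = rsi_lowrank(W, rank, n_iter=n_iter)
    return np.linalg.norm(W - (U * s) @ Vt, 2)

err_rsvd = spectral_error(n_iter=0)  # plain RSVD
err_rsi = spectral_error(n_iter=4)   # subspace iteration

# Softmax probabilities under the compressed weights stay close when the
# spectral error is small, mirroring the paper's perturbation argument.
x = rng.standard_normal(256)
U, s, Vt = rsi_lowrank(W, 32, n_iter=4)
prob_gap = np.abs(softmax(W @ x) - softmax((U * s) @ Vt @ x)).max()

print(f"RSVD error {err_rsvd:.3f}, RSI error {err_rsi:.3f}, "
      f"max probability gap {prob_gap:.3f}")
```

With a slowly decaying spectrum like the synthetic one above, the RSI error approaches the optimal rank-32 error (the 33rd singular value) after a handful of power iterations, while the zero-iteration RSVD error stays noticeably higher; both are lower-bounded by that optimal error.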

Comments: 13 pages

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Numerical Analysis (math.NA); Machine Learning (stat.ML)

Cite as: arXiv:2604.02659 [cs.LG]

(or arXiv:2604.02659v1 [cs.LG] for this version)

https://doi.org/10.48550/arXiv.2604.02659

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Farhad Pourkamali-Anaraki [view email] [v1] Fri, 3 Apr 2026 02:47:03 UTC (645 KB)
