
Covariance-Domain Near-Field Channel Estimation under Hybrid Compression: USW/Fresnel Model, Curvature Learning, and KL Covariance Fitting

arXiv eess.SP · by Rıfat Volkan Şenyuva · April 1, 2026 · 2 min read

Abstract: Near-field propagation in extremely large aperture arrays requires joint angle-range estimation. In hybrid architectures, only $N_\mathrm{RF}\ll M$ compressed snapshots are available per slot, making the $N_\mathrm{RF}\times N_\mathrm{RF}$ compressed sample covariance the natural sufficient statistic. We propose the Curvature-Learning KL (CL-KL) estimator, which grids only the angle dimension and \emph{learns the per-angle inverse range} directly from the compressed covariance via KL divergence minimisation. CL-KL uses a $Q_\theta$-element dictionary instead of the $Q_\theta Q_r$ atoms of 2-D polar gridding, eliminating the range-dimension dictionary coherence that plagues polar codebooks in the strong near-field regime, and operates entirely on the compressed covariance for full compatibility with hybrid front-ends. At $N_\mathrm{MC}=400$ ($f_c=28$ GHz, $M=64$, $N_\mathrm{RF}=8$, $N=64$, $d=3$, $r\in[0.05,1.0]\,r_\mathrm{RD}$), CL-KL achieves the lowest channel NMSE among all six evaluated methods -- including four full-array baselines using $64\times$ more data -- at $\mathrm{SNR}\in\{-5,0,+5,+10\}$ dB. Running in approximately 70 ms per trial (vs. 5 ms for the compressed-domain peer P-SOMP), CL-KL's dominant cost is the $N_\mathrm{RF}\times N_\mathrm{RF}$ inversion rather than $M$: measured runtime stays near 70 ms across $M\in\{32,64,128,256\}$, making it aperture-scalable for XL-MIMO deployments. CL-KL is further validated against a derived compressed-domain Cramér-Rao bound and confirmed robust to non-Gaussian (QPSK) source distributions, with a maximum NMSE gap below 0.6 dB.
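The two objects the abstract leans on -- a near-field steering vector parametrised by a per-angle curvature $\kappa = 1/r$, and the small $N_\mathrm{RF}\times N_\mathrm{RF}$ compressed sample covariance -- are easy to sketch. Below is a minimal NumPy illustration assuming a standard second-order Fresnel (USW) phase expansion and a random phase-only analog combiner; all function names and the toy setup are ours for illustration, not the paper's released code.

```python
import numpy as np

def fresnel_steering(M, wavelength, spacing, theta, inv_range):
    """USW/Fresnel near-field steering vector for an M-element ULA.

    The phase has a linear term in sin(theta) plus a quadratic term
    scaled by the curvature kappa = 1/r -- the per-angle quantity CL-KL
    learns. kappa -> 0 recovers the far-field plane-wave model.
    """
    m = np.arange(M) - (M - 1) / 2                  # symmetric element indices
    d = m * spacing                                 # element positions (m)
    k = 2 * np.pi / wavelength                      # wavenumber
    phase = k * (d * np.sin(theta)
                 - 0.5 * d**2 * np.cos(theta) ** 2 * inv_range)
    return np.exp(-1j * phase) / np.sqrt(M)

def compressed_covariance(X, W):
    """N_RF x N_RF sample covariance of hybrid-compressed snapshots.

    X : M x N full-array snapshots; W : M x N_RF analog combiner.
    Only Y = W^H X is observable, so R_hat = Y Y^H / N is the
    statistic the estimator works with.
    """
    Y = W.conj().T @ X
    return Y @ Y.conj().T / X.shape[1]

# Toy setup at the paper's scale: M = 64, N_RF = 8, N = 64, f_c = 28 GHz.
M, N_RF, N = 64, 8, 64
wavelength = 3e8 / 28e9
spacing = wavelength / 2
rng = np.random.default_rng(0)
W = np.exp(1j * 2 * np.pi * rng.random((M, N_RF))) / np.sqrt(M)  # phase-only combiner
a = fresnel_steering(M, wavelength, spacing, theta=0.3, inv_range=1 / 5.0)
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = np.outer(a, s) + noise
R_hat = compressed_covariance(X, W)                 # 8 x 8, independent of M
```

Because $\hat{R}$ stays $N_\mathrm{RF}\times N_\mathrm{RF}$ regardless of $M$, the matrix inversions in the fitting stage cost the same at $M=256$ as at $M=32$ -- which is the aperture-scalability argument the abstract makes from its measured runtimes.

The KL covariance fitting step then compares $\hat{R}$ to a parametric model covariance under a zero-mean circularly symmetric Gaussian assumption, for which $\mathrm{KL} = \mathrm{tr}(R^{-1}\hat{R}) - \log\det(R^{-1}\hat{R}) - N_\mathrm{RF}$. Continuing the setup above, here is a sketch of that objective, with a brute-force 1-D sweep standing in for the curvature-learning update (the abstract does not specify how CL-KL minimises over the inverse range, so the grid search is purely illustrative):

```python
import numpy as np

def kl_gauss(R_hat, R_model):
    """KL( CN(0, R_hat) || CN(0, R_model) ) for zero-mean complex Gaussians:
    tr(R_model^{-1} R_hat) - log det(R_model^{-1} R_hat) - n."""
    n = R_hat.shape[0]
    S = np.linalg.solve(R_model, R_hat)             # R_model^{-1} R_hat
    _, logdet = np.linalg.slogdet(S)
    return np.real(np.trace(S)) - logdet - n

def model_covariance(W, thetas, inv_ranges, powers, noise_var, steer):
    """Compressed model covariance sum_q p_q b_q b_q^H + sigma^2 W^H W,
    where b_q = W^H a(theta_q, kappa_q)."""
    R = noise_var * (W.conj().T @ W)
    for th, kap, p in zip(thetas, inv_ranges, powers):
        b = W.conj().T @ steer(th, kap)
        R = R + p * np.outer(b, b.conj())
    return R

# Per-angle curvature search on a 1-D kappa grid (illustrative only):
# one gridded angle, sweeping the inverse range kappa = 1/r.
steer = lambda th, kap: fresnel_steering(M, wavelength, spacing, th, kap)
kappa_grid = np.linspace(0.0, 1.0, 41)              # assumed 1/m grid
kl_vals = [kl_gauss(R_hat, model_covariance(W, [0.3], [kap], [1.0], 0.01, steer))
           for kap in kappa_grid]
kappa_hat = kappa_grid[int(np.argmin(kl_vals))]     # learned curvature at this angle
```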
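Learning one curvature per gridded angle is what shrinks the dictionary from $Q_\theta Q_r$ polar atoms to $Q_\theta$ atoms: the range dimension is never gridded, so the near-duplicate range atoms that make polar codebooks coherent in the strong near-field regime simply never enter the dictionary.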

Comments: 13 pages, 9 figures. Submitted to IEEE Transactions on Wireless Communications, March 2026. Code and data: this https URL

Subjects: Signal Processing (eess.SP); Information Theory (cs.IT)

Cite as: arXiv:2603.28918 [eess.SP]

(or arXiv:2603.28918v1 [eess.SP] for this version)

https://doi.org/10.48550/arXiv.2603.28918

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Rıfat Volkan Şenyuva [v1] Mon, 30 Mar 2026 18:49:45 UTC (301 KB)
