Covariance-Domain Near-Field Channel Estimation under Hybrid Compression: USW/Fresnel Model, Curvature Learning, and KL Covariance Fitting
Abstract:Near-field propagation in extremely large aperture arrays requires joint angle-range estimation. In hybrid architectures, only $N_\mathrm{RF}\ll M$ compressed snapshots are available per slot, making the $N_\mathrm{RF}\times N_\mathrm{RF}$ compressed sample covariance the natural sufficient statistic. We propose the Curvature-Learning KL (CL-KL) estimator, which grids only the angle dimension and \emph{learns the per-angle inverse range} directly from the compressed covariance via KL divergence minimisation. CL-KL uses a $Q_\theta$-element dictionary instead of the $Q_\theta Q_r$ atoms of 2-D polar gridding, eliminating the range-dimension dictionary coherence that plagues polar codebooks in the strong near-field regime, and operates entirely on the compressed covariance for full compatibility with hybrid front-ends. At $N_\mathrm{MC}=400$ ($f_c=28$
GHz, $M=64$, $N_\mathrm{RF}=8$, $N=64$, $d=3$, $r\in[0.05,1.0]\,r_\mathrm{RD}$), CL-KL achieves the lowest channel NMSE among all six evaluated methods -- including four full-array baselines using $64\times$ more data -- at $\mathrm{SNR}\in\{-5,0,+5,+10\}$ dB. Running in approximately 70 ms per trial (vs.\ 5 ms for the compressed-domain peer P-SOMP), CL-KL's dominant cost is the $N_\mathrm{RF}\times N_\mathrm{RF}$ inversion rather than $M$: measured runtime stays near 70 ms across $M\in\{32,64,128,256\}$, making it aperture-scalable for XL-MIMO deployments. CL-KL is further validated against a derived compressed-domain Cramér-Rao bound and confirmed robust to non-Gaussian (QPSK) source distributions, with a maximum NMSE gap below 0.6 dB.
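The pipeline the abstract describes -- form the $N_\mathrm{RF}\times N_\mathrm{RF}$ compressed sample covariance, grid only the angle, and learn a per-angle inverse range (curvature) by minimising a Gaussian KL objective -- can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the Fresnel steering model, the random phase-shifter combiner `W`, the known source power and noise variance, and the brute-force 1-D curvature search are all assumptions made here for a runnable minimal example.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N_RF, N = 64, 8, 64            # antennas, RF chains, snapshots
fc, c = 28e9, 3e8
lam = c / fc
dx = lam / 2                       # element spacing (half wavelength)
k = 2 * np.pi / lam

def steer(theta, r):
    """Fresnel/USW near-field steering vector: linear phase in the angle
    plus a quadratic 'curvature' term proportional to 1/r."""
    m = np.arange(M) - (M - 1) / 2
    phase = k * (dx * m * np.sin(theta)
                 - (dx * m) ** 2 * np.cos(theta) ** 2 / (2 * r))
    return np.exp(1j * phase) / np.sqrt(M)

# Hybrid front-end: random phase-shifter combiner, N_RF x M (assumed).
W = np.exp(1j * 2 * np.pi * rng.random((N_RF, M))) / np.sqrt(M)

# One near-field source at (theta0, r0) plus noise; y_t = W (a s_t + n_t).
theta0, r0, snr_db = 0.3, 5.0, 10.0
a = steer(theta0, r0)
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
sigma2 = 10 ** (-snr_db / 10)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, N))
                           + 1j * rng.standard_normal((M, N)))
Y = W @ (np.outer(a, s) + n)

# Compressed N_RF x N_RF sample covariance: the sufficient statistic.
R_hat = Y @ Y.conj().T / N

def kl_fit(theta, nu, p=1.0):
    """Gaussian KL divergence D(R_hat || R(theta, nu)) up to constants,
    with nu = 1/r the per-angle inverse range (curvature)."""
    aW = W @ steer(theta, 1.0 / nu)
    R = p * np.outer(aW, aW.conj()) + sigma2 * (W @ W.conj().T)
    Ri = np.linalg.inv(R)
    _, ld = np.linalg.slogdet(Ri @ R_hat)
    return np.real(np.trace(Ri @ R_hat)) - ld

# Angle-only grid (Q_theta atoms); per angle, a 1-D search over nu
# replaces the Q_theta * Q_r atoms of a 2-D polar codebook.
thetas = np.linspace(-np.pi / 3, np.pi / 3, 121)
nus = np.linspace(1 / 20.0, 1 / 1.0, 60)
best = min((kl_fit(t, v), t, v) for t in thetas for v in nus)
print(f"theta_hat={best[1]:.3f} (true {theta0}), "
      f"r_hat={1 / best[2]:.2f} m (true {r0} m)")
```

Note that every matrix the objective touches is $N_\mathrm{RF}\times N_\mathrm{RF}$, so the per-atom cost is governed by the small inversion rather than by $M$, which is the source of the aperture-scalability claim.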
Comments: 13 pages, 9 figures. Submitted to IEEE Transactions on Wireless Communications, March 2026. Code and data: this https URL
Subjects:
Signal Processing (eess.SP); Information Theory (cs.IT)
Cite as: arXiv:2603.28918 [eess.SP]
(or arXiv:2603.28918v1 [eess.SP] for this version)
https://doi.org/10.48550/arXiv.2603.28918
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Rıfat Volkan Şenyuva [view email] [v1] Mon, 30 Mar 2026 18:49:45 UTC (301 KB)

