A Novel Near-Field Dictionary Design for Hybrid MIMO with Uniform Planar Arrays
arXiv:2602.17202v2 Announce Type: replace
Abstract: Near-field ultra-massive MIMO (U-MIMO) systems provide enhanced spatial resolution but present challenges for channel estimation, particularly when hybrid architectures are employed. Within this framework, dictionary-based channel estimation schemes are needed to achieve accurate reconstruction from a reduced set of measurements. However, existing near-field dictionaries generally provide full three-dimensional coverage, which is unnecessary when user equipments are primarily located on the ground. In this paper, we propose a novel near-field grid design tailored to this common scenario. Specifically, grid points lie on a reference plane located at an arbitrary height with respect to the U-MIMO system, which is equipped with a uniform planar array. Furthermore, a channel accuracy metric is used to improve codebook performance and to highlight the limitations of traditional far-field angular sampling in the near field. Results show that, as long as user equipments are not far from the reference plane, the proposed grid outperforms state-of-the-art designs in both channel estimation accuracy and spectral efficiency.
Comments: Submitted to Transactions on Wireless Communications (TWC)
Subjects:
Signal Processing (eess.SP)
Cite as: arXiv:2602.17202 [eess.SP]
(or arXiv:2602.17202v2 [eess.SP] for this version)
https://doi.org/10.48550/arXiv.2602.17202
arXiv-issued DOI via DataCite
Submission history
From: Luca Antonelli [v1] Thu, 19 Feb 2026 09:46:56 UTC (739 KB) [v2] Wed, 1 Apr 2026 16:49:57 UTC (1,317 KB)
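The abstract describes a dictionary whose atoms are near-field (spherical-wavefront) array responses of grid points constrained to a single reference plane below a uniform planar array. A minimal sketch of that construction is shown below; the array geometry, carrier frequency, plane height, and all function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def upa_positions(nx, ny, d):
    """Element positions of an nx-by-ny uniform planar array in the z = 0 plane."""
    x = (np.arange(nx) - (nx - 1) / 2) * d
    y = (np.arange(ny) - (ny - 1) / 2) * d
    X, Y = np.meshgrid(x, y, indexing="ij")
    return np.stack([X.ravel(), Y.ravel(), np.zeros(nx * ny)], axis=1)  # shape (N, 3)

def near_field_steering(ants, p, wavelength):
    """Spherical-wave array response for a point source at p (no far-field approximation)."""
    r = np.linalg.norm(ants - p, axis=1)          # exact per-element distances
    a = np.exp(-1j * 2 * np.pi * r / wavelength)  # spherical-wavefront phases
    return a / np.sqrt(len(a))                    # unit-norm atom

def planar_grid_dictionary(ants, wavelength, h, xs, ys):
    """Dictionary whose atoms are steering vectors of grid points on the plane z = h."""
    pts = np.array([[x, y, h] for x in xs for y in ys])
    D = np.stack([near_field_steering(ants, p, wavelength) for p in pts], axis=1)
    return D, pts

# Illustrative example: 8x8 half-wavelength UPA at 28 GHz, grid on a plane 10 m below
wl = 3e8 / 28e9
ants = upa_positions(8, 8, wl / 2)
xs = np.linspace(-5.0, 5.0, 11)
ys = np.linspace(-5.0, 5.0, 11)
D, pts = planar_grid_dictionary(ants, wl, -10.0, xs, ys)
print(D.shape)  # (64, 121): one unit-norm atom per grid point on the plane
```

Such a dictionary would then feed a standard sparse-recovery channel estimator (e.g. orthogonal matching pursuit) operating on the reduced hybrid-architecture measurements; the key point of the paper's design is that sampling only the reference plane keeps the dictionary small compared to full 3-D near-field grids.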