MIMO Capacity Enhancement by Grating Walls: A Physics-Based Proof of Principle
arXiv:2604.01786v1 Announce Type: new Abstract: This paper investigates the passive enhancement of MIMO spectral efficiency through boundary engineering in a simplified two-dimensional indoor proof-of-principle model. The propagation channel is constructed from the electromagnetic Green's function of a room with boundaries modeled as free space, drywall, perfect electric conductor (PEC), or binary gratings. Within this framework, grating-coated walls enrich the non-line-of-sight (NLoS) multipath field, reduce channel correlation, and enhance spatial multiplexing over a broad range of receiver locations. Comparisons with the drywall and PEC reference cases further reveal that the observed capacity enhancement arises not from diffraction alone, but from the combined effects of effective wall reflectivity, which confines and reradiates energy within the room, and diffraction-induced angular redistribution, which enriches the channel eigenstructure.
Comments: 10 pages, 12 figures
Subjects: Signal Processing (eess.SP)
Cite as: arXiv:2604.01786 [eess.SP]
(or arXiv:2604.01786v1 [eess.SP] for this version)
https://doi.org/10.48550/arXiv.2604.01786
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Xiaolu Yang [v1] Thu, 2 Apr 2026 08:55:40 UTC (6,229 KB)
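To make the capacity comparison in the abstract concrete, here is a minimal sketch (not from the paper) of the standard equal-power MIMO spectral-efficiency formula, C = log2 det(I + (SNR/Nt) H Hᴴ), evaluated on two synthetic channels: an i.i.d. Rayleigh matrix as a stand-in for a rich NLoS multipath field, and a rank-1 matrix as a stand-in for a correlated, LoS-dominated channel. The Green's-function channel construction and the boundary models are the paper's; the channel matrices below are hypothetical placeholders.

```python
import numpy as np

def mimo_capacity(H: np.ndarray, snr_linear: float) -> float:
    """Equal-power MIMO spectral efficiency log2 det(I + (snr/Nt) H H^H), in bit/s/Hz."""
    nr, nt = H.shape
    m = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
    _, logabsdet = np.linalg.slogdet(m)  # m is Hermitian positive definite
    return logabsdet / np.log(2)

rng = np.random.default_rng(0)
nt = nr = 4
snr = 10.0  # linear SNR (10 dB)

# Stand-in channels (assumption; the paper derives H from the room's Green's function):
# rich scattering -> i.i.d. complex Gaussian entries; correlated LoS-like -> rank 1.
H_rich = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
H_los = np.ones((nr, nt), dtype=complex)

# Equalize Frobenius norms so the comparison isolates eigenstructure
# (angular richness), not received power.
for H in (H_rich, H_los):
    H *= np.sqrt(nt * nr) / np.linalg.norm(H)

print(f"rich multipath:  {mimo_capacity(H_rich, snr):.2f} bit/s/Hz")
print(f"rank-1 LoS-like: {mimo_capacity(H_los, snr):.2f} bit/s/Hz")

# Channel correlation / eigenstructure: singular-value spread of H.
for name, H in [("rich", H_rich), ("LoS-like", H_los)]:
    sv = np.linalg.svd(H, compute_uv=False)
    print(f"{name} singular values: {np.round(sv, 2)}")
```

For the rank-1 channel all but one singular value is numerically zero, so the log-det collapses to a single-stream term, while the rich channel spreads power across several comparable eigenmodes. This is the eigenstructure effect the abstract attributes to grating-coated walls: pushing a real indoor channel away from the correlated regime and toward the rich-multipath one.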

LAI #121: The single-agent sweet spot nobody wants to admit
Author(s): Towards AI Editorial Team Originally published on Towards AI. Good morning, AI enthusiasts! Your next AI system is probably too complicated, and you haven’t even built it yet. This week, we co-published a piece with Paul Iusztin that gives you a mental model for catching overengineering before it starts. Here’s what’s inside: Agent or workflow? Getting it wrong is where most production headaches begin. Do biases amplify as agents get more autonomous? What actually changes and how to control it at the system level. Claude Code’s three most ignored slash commands: /btw, /fork, and /rewind, and why they matter more the longer your session runs. The community voted on where coding agents are headed. Terminal-based tools are pulling ahead, but that 17% “Other” bucket is hiding something...

J-CHAT: Japanese Large-scale Spoken Dialogue Corpus for Spoken Dialogue Language Modeling
arXiv:2407.15828v2 Announce Type: replace-cross Abstract: Spoken dialogue is essential for human-AI interactions, providing expressive capabilities beyond text. Developing effective spoken dialogue systems (SDSs) requires large-scale, high-quality, and diverse spoken dialogue corpora. However, existing datasets are often limited in size, spontaneity, or linguistic coherence. To address these limitations, we introduce J-CHAT, a 76,000-hour open-source Japanese spoken dialogue corpus. Constructed using an automated, language-independent methodology, J-CHAT ensures acoustic cleanliness, diversity, and natural spontaneity. The corpus is built from YouTube and podcast data, with extensive filtering and denoising to enhance quality. Experimental results with generative spoken dialogue language models...
Do Phone-Use Agents Respect Your Privacy?
We study whether phone-use agents respect privacy while completing benign mobile tasks. This question has remained hard to answer because privacy-compliant behavior is not operationalized for phone-use agents, and ordinary apps do not reveal exactly what data agents type into which form entries during execution. To make this question measurable, we introduce MyPhoneBench, a verifiable evaluation framework for privacy behavior in mobile agents. We operationalize privacy-respecting phone use as pe... (3 upvotes on HuggingFace)
More in Research Papers
Friends and Grandmothers in Silico: Localizing Entity Cells in Language Models
Entity-centric factual question answering involves localized MLP neurons that can be causally intervened on to recover entity-consistent predictions; these neurons are robust to varied linguistic phrasings but show limited universality across entities. (0 upvotes on HuggingFace)
Automatic Image-Level Morphological Trait Annotation for Organismal Images
Sparse autoencoders trained on foundation-model features produce monosemantic neurons that enable scalable extraction of morphological traits from biological images through a modular annotation pipeline. (1 upvote on HuggingFace)

