Empirical and Statistical Characterisation of 28 GHz mmWave Propagation in Office Environments
arXiv:2604.01814v1 Announce Type: new
Abstract: Millimeter wave (mmWave) technology at 28 GHz is vital for beyond-5G systems, but indoor deployment remains challenging due to limited statistical evidence on propagation. This study investigates path loss, material penetration, and coverage enhancement using TMYTEK-based measurements. Statistical tests and confidence interval analysis show that path loss aligns with free-space theory, with an exponent of n = 2.07 ± 0.073 (p = 0.385), confirming the suitability of classical models. Material analysis reveals significant variation: desk dividers introduce 3.4 dB more attenuation than display boards (95% CI: 1.81 to 4.98 dB, p < 0.01), contradicting thickness-based assumptions. Reflector optimisation yields a significant mean gain of 2.17 ± 2.33 dB (p < 0.05), enhancing coverage. The results provide new empirical benchmarks and practical design insights for reliable indoor mmWave deployment.
Comments: 6 pages, 3 figures. Paper presented at the IEEE International Conference on Communication Networks and Computing
Subjects: Signal Processing (eess.SP)
Cite as: arXiv:2604.01814 [eess.SP]
(or arXiv:2604.01814v1 [eess.SP] for this version)
https://doi.org/10.48550/arXiv.2604.01814
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Sokipriala Jonah [view email] [v1] Thu, 2 Apr 2026 09:26:41 UTC (1,072 KB)
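As a supplementary illustration (not part of the paper): the exponent n = 2.07 reported in the abstract comes from fitting the standard log-distance model, PL(d) = PL(d0) + 10 n log10(d/d0), where the slope of received path loss versus log-distance is the exponent n. A minimal sketch of such a fit, using synthetic free-space data at 28 GHz rather than the paper's actual measurements (the distances and the ~61.4 dB reference loss at 1 m are illustrative assumptions):

```python
import math

def fit_path_loss_exponent(dist_m, pl_db, d0=1.0):
    """Least-squares fit of the log-distance model
    PL(d) = PL(d0) + 10 * n * log10(d / d0).

    Returns (n, PL(d0)): the path loss exponent and the
    fitted loss at the reference distance d0 (metres)."""
    xs = [10.0 * math.log10(d / d0) for d in dist_m]
    ys = list(pl_db)
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    # Slope of the regression line is the path loss exponent n.
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    pl_d0 = my - n * mx  # intercept = path loss at d0
    return n, pl_d0

# Synthetic free-space data at 28 GHz: FSPL(1 m) is roughly 61.4 dB,
# and free space implies n = 2 exactly.
dists = [1.0, 2.0, 4.0, 8.0, 16.0]
fspl = [61.4 + 20.0 * math.log10(d) for d in dists]
n, pl0 = fit_path_loss_exponent(dists, fspl)
print(round(n, 2), round(pl0, 1))  # → 2.0 61.4
```

On real indoor measurements the fitted slope would deviate from 2 (here the paper reports 2.07 ± 0.073), and the residual spread around the fit gives the shadowing term.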