A Tutorial on Learning-Based Radio Map Construction: Data, Paradigms, and Physics-Awareness
arXiv:2603.17499v5 Announce Type: replace-cross
Abstract: The integration of artificial intelligence into next-generation wireless networks necessitates the accurate construction of radio maps (RMs) as a foundational prerequisite for electromagnetic digital twins. An RM provides the digital representation of the wireless propagation environment, mapping complex geographical and topological boundary conditions to critical spatial-spectral metrics that range from received signal strength to full channel state information matrices. This tutorial presents a comprehensive survey of learning-based RM construction, systematically addressing three intertwined dimensions: data, paradigms, and physics-awareness. From the data perspective, we review physical measurement campaigns, ray tracing simulation engines, and publicly available benchmark datasets, identifying their respective strengths and fundamental limitations. From the paradigm perspective, we establish a core taxonomy that categorizes RM construction into source-aware forward prediction and source-agnostic inverse reconstruction, and examine five principal neural architecture families spanning convolutional neural networks, vision transformers, graph neural networks, generative adversarial networks, and diffusion models. We further survey optics-inspired methods adapted from neural radiance fields and 3D Gaussian splatting for continuous wireless radiation field modeling. From the physics-awareness perspective, we introduce a three-level integration framework encompassing data-level feature engineering, loss-level partial differential equation regularization, and architecture-level structural isomorphism. Open challenges including foundation model development, physical hallucination detection, and amortized inference for real-time deployment are discussed to outline future research directions.
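To make the loss-level PDE regularization mentioned in the abstract concrete, the following is a minimal sketch (not the paper's implementation) of how a Helmholtz-equation residual can be added as a penalty to a standard data-fitting loss for a predicted 2D radio field. The function names, the free-space Helmholtz form (Laplacian of u plus k-squared times u equals zero), and the weighting parameter `lam` are illustrative assumptions; a real training pipeline would compute this penalty on the network's output inside its framework of choice.

```python
import numpy as np

def helmholtz_residual(u, h, k):
    """Discrete residual of the homogeneous Helmholtz equation,
    lap(u) + k^2 * u, evaluated on the interior points of a 2D
    grid with spacing h, using a 5-point finite-difference Laplacian."""
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1]) / h**2
    return lap + (k**2) * u[1:-1, 1:-1]

def physics_regularized_loss(u_pred, u_meas, h, k, lam=1e-3):
    """Data-fit MSE plus a PDE penalty (hypothetical weighting lam)
    that pushes the predicted field toward physically consistent
    solutions of the wave equation in source-free regions."""
    data_term = np.mean(np.abs(u_pred - u_meas) ** 2)
    pde_term = np.mean(np.abs(helmholtz_residual(u_pred, h, k)) ** 2)
    return data_term + lam * pde_term

# Sanity check: a plane wave exp(i*k*x) satisfies the Helmholtz
# equation, so its discrete residual should be small on a fine grid.
xs = np.linspace(0.0, 1.0, 101)
h, k = xs[1] - xs[0], 1.0
X, _ = np.meshgrid(xs, xs, indexing="ij")
u = np.exp(1j * k * X)
max_residual = np.max(np.abs(helmholtz_residual(u, h, k)))
```

The design intent is that `pde_term` vanishes exactly when the prediction obeys the wave physics, so the penalty steers the learned map without overriding the measured data; `lam` trades off the two terms and would be tuned per dataset.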
Subjects:
Systems and Control (eess.SY); Signal Processing (eess.SP)
Cite as: arXiv:2603.17499 [eess.SY]
(or arXiv:2603.17499v5 [eess.SY] for this version)
https://doi.org/10.48550/arXiv.2603.17499
arXiv-issued DOI via DataCite
Submission history
From: Xiucheng Wang
[v1] Wed, 18 Mar 2026 09:00:25 UTC (1,604 KB)
[v2] Tue, 24 Mar 2026 03:04:20 UTC (4,173 KB)
[v3] Thu, 26 Mar 2026 11:58:33 UTC (4,197 KB)
[v4] Sat, 28 Mar 2026 12:22:40 UTC (4,201 KB)
[v5] Tue, 31 Mar 2026 08:46:29 UTC (5,091 KB)