Semantic Zone-Based Map Management for Stable AI-Integrated Mobile Robots
Abstract: Recent advances in large AI models (VLMs and LLMs), combined with 3D dense maps, enable mobile robots to provide more powerful and interactive services grounded in rich spatial context. However, deploying both heavy AI models and dense maps on edge robots is challenging under strict memory budgets. When the memory budget is exceeded, required keyframes may not be loaded in time, which can degrade the stability of position estimation and impair model performance. We propose a semantic zone-based map management approach to stabilize dense-map utilization under memory constraints. We associate keyframes with semantic indoor regions (e.g., rooms and corridors), and keyframe management at the semantic zone level prioritizes spatially relevant map content while respecting memory constraints. This reduces keyframe loading and unloading frequency as well as memory usage. We evaluate the proposed approach in large-scale simulated indoor environments and on an NVIDIA Jetson Orin Nano under concurrent SLAM-VLM execution. With Qwen3.5:0.8b, the proposed method improves throughput by 3.3 tokens/s and reduces latency by 21.7% relative to a geometric map-management strategy. Furthermore, while the geometric strategy suffers from out-of-memory failures and stalled execution under memory pressure, the proposed method eliminates both issues, preserving localization stability and enabling robust VLM operation. These results demonstrate that the proposed approach enables efficient dense-map utilization for memory-constrained, AI-integrated mobile robots. Code is available at: this https URL
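The core idea in the abstract, grouping keyframes by semantic zone and loading or evicting whole zones so that spatially relevant map content stays resident under a fixed memory budget, can be sketched as a small zone-level cache. This is an illustrative sketch only, not the paper's implementation: the class name, the byte-size accounting, and the LRU eviction policy are all assumptions.

```python
from collections import OrderedDict

class ZoneMapCache:
    """Sketch of zone-level keyframe management: keyframes are grouped by
    semantic zone (room, corridor), and the cache loads or evicts whole
    zones rather than individual keyframes, so the map content around the
    robot's current zone stays resident under a fixed memory budget.
    Names and sizes are hypothetical."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        # zone_id -> total bytes of that zone's keyframes, in LRU order
        self.loaded = OrderedDict()

    def used(self):
        return sum(self.loaded.values())

    def enter_zone(self, zone_id, zone_bytes):
        """Called when the robot's pose enters a semantic zone."""
        if zone_id in self.loaded:
            self.loaded.move_to_end(zone_id)  # mark as most recently used
            return
        # Evict least-recently-used zones (whole zones, not keyframes)
        # until the newly entered zone fits within the budget.
        while self.loaded and self.used() + zone_bytes > self.budget:
            self.loaded.popitem(last=False)
        self.loaded[zone_id] = zone_bytes


cache = ZoneMapCache(budget_bytes=100)
cache.enter_zone("room_a", 60)
cache.enter_zone("corridor_1", 30)
cache.enter_zone("room_b", 50)   # exceeds budget, evicts room_a first
```

Evicting at zone granularity is what reduces load/unload churn relative to a purely geometric (per-keyframe, distance-based) policy: the robot tends to stay within one zone for a while, so whole-zone residency avoids repeated loading of keyframes near zone boundaries.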
Subjects: Robotics (cs.RO)
Cite as: arXiv:2603.29627 [cs.RO]
(or arXiv:2603.29627v1 [cs.RO] for this version)
https://doi.org/10.48550/arXiv.2603.29627
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Seungho Yoo [view email] [v1] Tue, 31 Mar 2026 11:50:14 UTC (919 KB)