Communication Outage-Resistant UUV State Estimation: A Variational History Distillation Approach
Abstract: The reliable operation of Unmanned Underwater Vehicle (UUV) clusters depends heavily on continuous acoustic communication. However, this communication channel is highly susceptible to intermittent interruptions. During a communication outage, standard state estimators such as the Unscented Kalman Filter (UKF) are forced into open-loop prediction. If the environment contains unmodeled dynamics, such as unknown ocean currents, the estimation error grows rapidly and may ultimately lead to mission failure. To address this critical issue, this paper proposes a Variational History Distillation (VHD) approach. VHD casts trajectory prediction as approximate Bayesian inference, linking a standard physics-based motion model with motion patterns extracted directly from the UUV's past trajectory. This is achieved by synthesizing "virtual measurements" distilled from historical trajectories. Recognizing that the reliability of extrapolated historical trends degrades over extended prediction horizons, an adaptive confidence mechanism is introduced: the filter gradually reduces its trust in the virtual measurements as the communication outage lengthens. Extensive Monte Carlo simulations in a high-fidelity environment demonstrate that the proposed method achieves a 91% reduction in prediction Root Mean Square Error (RMSE), reducing the error from approximately 170 m to 15 m during a 40-second communication outage. These results demonstrate that VHD maintains robust state estimation performance even under complete communication loss.
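The mechanism the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: a linear Kalman filter stands in for the UKF, a straight-line fit to recent positions stands in for the learned history distillation, and an exponentially inflating noise variance stands in for the adaptive confidence mechanism. All function names and parameter values below are illustrative assumptions.

```python
import numpy as np

def virtual_measurement(history, steps_ahead):
    """Distill a 'virtual measurement' from past positions.

    Stand-in for VHD's history distillation: a straight-line fit to the
    recent trajectory, extrapolated steps_ahead into the outage.
    """
    ts = np.arange(len(history), dtype=float)
    slope, intercept = np.polyfit(ts, history, deg=1)
    return slope * (ts[-1] + steps_ahead) + intercept

def adaptive_r(r0, outage_steps, growth=0.2):
    """Adaptive confidence: inflate the virtual-measurement noise variance
    as the outage lengthens, so the filter gradually stops trusting the
    extrapolated historical trend."""
    return r0 * np.exp(growth * outage_steps)

def kf_step(x, P, z, R, F, Q, H):
    """One linear Kalman predict + update (stands in for the UKF)."""
    # Predict with the physics-based motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z (real, or virtual during an outage).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# During an outage, fuse virtual measurements with inflating noise:
F = np.array([[1.0, 1.0], [0.0, 1.0]])    # constant-velocity model
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])                # position-only "measurement"
x, P = np.array([3.0, 1.0]), np.eye(2)
history = np.array([0.0, 1.1, 1.9, 3.0])  # positions logged pre-outage
for k in range(1, 6):                     # 5 outage steps
    z = np.array([virtual_measurement(history, k)])
    R = np.array([[adaptive_r(0.5, k)]])
    x, P = kf_step(x, P, z, R, F, Q, H)
```

Because R grows with the outage step count, the Kalman gain on the virtual channel shrinks over time, so the filter smoothly reverts toward open-loop prediction rather than continuing to follow a stale historical trend.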
Comments: 7 pages, 2 figures, conference
Subjects:
Robotics (cs.RO); Systems and Control (eess.SY)
Cite as: arXiv:2603.29512 [cs.RO]
(or arXiv:2603.29512v1 [cs.RO] for this version)
https://doi.org/10.48550/arXiv.2603.29512
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Shuyue Li [view email] [v1] Tue, 31 Mar 2026 09:53:54 UTC (94 KB)
