See Something, Say Something: Context-Criticality-Aware Mobile Robot Communication for Hazard Mitigations
arXiv:2603.28901v1 Announce Type: new
Abstract: The proverb ``see something, say something'' captures a core responsibility of autonomous mobile robots (AMRs) in safety-critical situations: when they detect a hazard, they must communicate, and do so quickly. In emergency scenarios, delayed or miscalibrated responses directly increase the time to action and the risk of damage. We argue that a systematic, context-sensitive assessment of the criticality level, time sensitivity, and feasibility of mitigation is necessary for AMRs to reduce time to action and respond effectively. This paper presents a framework in which VLM/LLM-based perception drives adaptive message generation: for example, a knife in a kitchen produces a calm acknowledgment, while the same object in a corridor triggers an urgent coordinated alert. Validation across 60+ runs with a patrolling mobile robot not only enables faster responses but also raises user trust to 82%, compared to fixed-priority baselines, indicating that structured criticality assessment improves both response speed and mitigation effectiveness.
Subjects:
Robotics (cs.RO)
Cite as: arXiv:2603.28901 [cs.RO]
(or arXiv:2603.28901v1 [cs.RO] for this version)
https://doi.org/10.48550/arXiv.2603.28901
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Aliasghar Arab [view email] [v1] Mon, 30 Mar 2026 18:28:05 UTC (1,963 KB)
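The abstract describes context-dependent criticality assessment: the same detected object maps to different urgency levels and messages depending on where it appears. The following is a minimal, hypothetical sketch of that idea, assuming a simple rule-based lookup in place of the paper's VLM/LLM-based perception; all names (`CRITICALITY`, `Assessment`, `assess`, `message`) and the feasibility rule are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

# Illustrative criticality table: the same object maps to different urgency
# levels depending on the context in which it is detected. In the paper this
# judgment comes from VLM/LLM-based perception; here it is a fixed lookup.
CRITICALITY = {
    ("knife", "kitchen"): "low",    # expected location -> calm acknowledgment
    ("knife", "corridor"): "high",  # unexpected location -> urgent alert
    ("smoke", "kitchen"): "medium",
    ("smoke", "corridor"): "high",
}

@dataclass
class Assessment:
    criticality: str       # criticality level
    time_sensitive: bool   # time sensitivity
    mitigable: bool        # feasibility of mitigation

def assess(obj: str, context: str) -> Assessment:
    """Combine the three factors named in the abstract (illustrative rules)."""
    level = CRITICALITY.get((obj, context), "medium")
    return Assessment(
        criticality=level,
        time_sensitive=(level == "high"),
        mitigable=(context == "kitchen"),  # placeholder feasibility heuristic
    )

def message(obj: str, context: str) -> str:
    """Adaptive message generation: tone follows the assessed criticality."""
    a = assess(obj, context)
    if a.criticality == "high":
        return f"URGENT: {obj} detected in {context}; coordinating immediate response."
    return f"Note: {obj} observed in {context}; no immediate action required."
```

Under these assumed rules, `message("knife", "kitchen")` yields a calm acknowledgment while `message("knife", "corridor")` yields an urgent coordinated alert, mirroring the example in the abstract.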

More in Research Papers

Locating Risk: Task Designers and the Challenge of Risk Disclosure in RAI Content Work
arXiv:2505.24246v4 Announce Type: replace Abstract: As AI systems are increasingly tested and deployed in open-ended and high-stakes domains, crowdworkers are often tasked with responsible AI (RAI) content work. These tasks include labeling violent content, moderating disturbing text, or simulating harmful behavior for red-teaming exercises that shape AI system behaviors. While prior research has highlighted the risks to worker well-being associated with RAI content work, far less attention has been paid to how these risks are communicated to workers by task designers (the individuals who design and post RAI tasks). Existing transparency frameworks and guidelines, such as model cards, datasheets, and crowdworksheets, focus on documenting model information and dataset collection process

Togedule: Scheduling Meetings with Large Language Models and Adaptive Representations of Group Availability
arXiv:2505.01000v5 Announce Type: replace Abstract: Scheduling is a perennial, and often challenging, problem for many groups. Existing tools are mostly static, showing an identical set of choices to everyone, regardless of the current status of attendees' inputs and preferences. In this paper, we propose Togedule, an adaptive scheduling tool that uses large language models to dynamically adjust the pool of choices and their presentation format. With the initial prototype, we conducted a formative study (N=10) and identified the potential benefits and risks of such an adaptive scheduling tool. Then, after enhancing the system, we conducted two controlled experiments, one each for attendees and organizers (total N=66). For each experiment, we compared scheduling with verbal messages, shared c