
PRISM: A Multi-View Multi-Capability Retail Video Dataset for Embodied Vision-Language Models

arXiv cs.CV · by Amirreza Rouhi, Parikshit Sakurikar, Satya Sai Reddy, Narsimha Menga, Anirudh Govil, Sri Harsha Chittajallu, Rajat Aggarwal, Anoop Namboodiri, Sashi Reddi · April 1, 2026 · 2 min read

Abstract: A critical gap exists between the general-purpose visual understanding of state-of-the-art physical AI models and the specialized perceptual demands of structured real-world deployment environments. We present PRISM, a 270K-sample multi-view video supervised fine-tuning (SFT) corpus for embodied vision-language models (VLMs) in real-world retail environments. PRISM is motivated by a simple observation: physical AI systems fail not because of poor visual recognition, but because they do not understand space, physical dynamics, and embodied action well enough to operate reliably in the world. To this end, PRISM is grounded in a novel three-dimensional knowledge ontology that spans spatial knowledge, temporal and physical knowledge, and embodied action knowledge. It covers 20+ capability probes across four evaluation dimensions: Embodied Reasoning (ER), Common Sense (CS), Spatial Perception (SP), and Intuitive Physics (IP). To our knowledge, PRISM is the first dataset to instantiate all three knowledge dimensions within a single real-world deployment domain. The corpus captures data from egocentric, exocentric, and 360° viewpoints across five supermarket locations and includes open-ended, chain-of-thought, and multiple-choice supervision. At 4 fps, PRISM spans approximately 11.8M video frames and approximately 730M tokens, placing it among the largest domain-specific video SFT corpora. Fine-tuning on PRISM reduces the error rate across all 20+ probes by 66.6% over the pre-trained baseline, with significant gains in embodied action understanding, where accuracy improves by 36.4%. Our results suggest that ontology-structured, domain-specific SFT can meaningfully strengthen embodied VLMs for real-world settings. The PRISM dataset and more details are available at this https URL
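
As a sanity check on those headline figures, here is a minimal Python sketch that rederives the dataset's scale and shows how a "66.6% error-rate reduction" is conventionally computed as a relative reduction. Only the frame count, fps, token count, and sample count come from the abstract; the example error rates are hypothetical.

    # Back-of-the-envelope check of PRISM's headline statistics.
    FRAMES = 11.8e6     # ~11.8M video frames (from the abstract)
    FPS = 4             # sampling rate (from the abstract)
    TOKENS = 730e6      # ~730M tokens (from the abstract)
    SAMPLES = 270_000   # 270K SFT samples (from the abstract)

    video_hours = FRAMES / FPS / 3600     # ~819 hours of footage
    tokens_per_sample = TOKENS / SAMPLES  # ~2,704 tokens per sample
    frames_per_sample = FRAMES / SAMPLES  # ~44 frames per sample

    # A "66.6% error-rate reduction" reads as a relative reduction:
    # (baseline_error - finetuned_error) / baseline_error.
    def relative_reduction(baseline_err: float, finetuned_err: float) -> float:
        return (baseline_err - finetuned_err) / baseline_err

    # Hypothetical illustration: a 45% baseline error falling to 15%
    # would be reported as a ~66.7% reduction.
    print(f"{video_hours:.0f} h video, {tokens_per_sample:.0f} tokens/sample, "
          f"{frames_per_sample:.0f} frames/sample")
    print(f"example reduction: {relative_reduction(0.45, 0.15):.1%}")

At roughly 44 frames per sample, each SFT example would correspond to about 11 seconds of 4 fps video, consistent with short, clip-level supervision.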

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Robotics (cs.RO)

Cite as: arXiv:2603.29281 [cs.CV]

(or arXiv:2603.29281v1 [cs.CV] for this version)

https://doi.org/10.48550/arXiv.2603.29281

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Amirreza Rouhi [v1] Tue, 31 Mar 2026 05:29:22 UTC (10,341 KB)
