RAAP: Retrieval-Augmented Affordance Prediction with Cross-Image Action Alignment
Abstract: Understanding object affordances is essential for enabling robots to perform purposeful and fine-grained interactions in diverse and unstructured environments. However, existing approaches either rely on retrieval, which is fragile due to sparsity and coverage gaps, or on large-scale models, which frequently mislocalize contact points and mispredict post-contact actions when applied to unseen categories, thereby hindering robust generalization. We introduce Retrieval-Augmented Affordance Prediction (RAAP), a framework that unifies affordance retrieval with alignment-based learning. By decoupling static contact localization and dynamic action direction, RAAP transfers contact points via dense correspondence and predicts action directions through a retrieval-augmented alignment model that consolidates multiple references with dual-weighted attention. Trained on compact subsets of DROID and HOI4D with as few as tens of samples per task, RAAP achieves consistent performance across unseen objects and categories, and enables zero-shot robotic manipulation in both simulation and the real world. Project website: this https URL.
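The abstract does not spell out the implementation, but a minimal sketch of the two pieces it names is given below: transferring a contact point via dense correspondence, and consolidating retrieved references with a dual weighting for the action direction. Everything here is an assumption for illustration only; the nearest-neighbour descriptor matching, the choice of similarity-plus-confidence weights, and all function and variable names are hypothetical and not taken from the paper.

import torch
import torch.nn.functional as F

def transfer_contact_point(ref_point_uv, ref_desc_map, tgt_desc_map):
    """Transfer a 2-D contact point from a reference image to the target image
    by nearest-neighbour matching of dense descriptors (hypothetical sketch;
    the paper only states that dense correspondence is used).
    ref_point_uv: (u, v) pixel of the contact point in the reference image.
    ref_desc_map: (H, W, D) dense descriptors of the reference image.
    tgt_desc_map: (Ht, Wt, D) dense descriptors of the target image.
    """
    ref_desc = ref_desc_map[ref_point_uv[1], ref_point_uv[0]]              # (D,)
    Ht, Wt, D = tgt_desc_map.shape
    sims = F.cosine_similarity(tgt_desc_map.reshape(-1, D),
                               ref_desc.unsqueeze(0), dim=1)               # (Ht*Wt,)
    idx = sims.argmax().item()
    return (idx % Wt, idx // Wt)                                           # (u, v) in target

def dual_weighted_direction(query_feat, ref_feats, ref_dirs, ref_scores):
    """Aggregate post-contact action directions from K retrieved references.
    The 'dual weighting' here is assumed to combine a query-reference attention
    weight with a retrieval-confidence weight; this is an illustrative guess,
    not the authors' definition.
    query_feat: (D,) embedding of the target scene.
    ref_feats:  (K, D) embeddings of the retrieved references.
    ref_dirs:   (K, 3) reference action directions.
    ref_scores: (K,) retrieval similarity scores.
    """
    # Weight 1: scaled dot-product attention between query and references.
    attn = F.softmax(ref_feats @ query_feat / query_feat.shape[0] ** 0.5, dim=0)
    # Weight 2: retrieval confidence, normalised to a distribution.
    conf = F.softmax(ref_scores, dim=0)
    # Combine both distributions and renormalise.
    w = attn * conf
    w = w / w.sum()
    # Weighted sum of reference directions, returned as a unit vector.
    direction = (w.unsqueeze(-1) * ref_dirs).sum(dim=0)
    return F.normalize(direction, dim=0)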
Comments: Accepted to ICRA 2026
Subjects: Robotics (cs.RO); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.29419 [cs.RO]
(or arXiv:2603.29419v1 [cs.RO] for this version)
https://doi.org/10.48550/arXiv.2603.29419
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Qiyuan Zhuang [v1] Tue, 31 Mar 2026 08:25:22 UTC (2,151 KB)