
MolmoPoint: Better Pointing for VLMs with Grounding Tokens

HuggingFace Papers · March 30, 2026 · 8 min read


Published on Mar 30

Authors:


Abstract

AI-generated summary

A vision-language model approach for grounding that directly selects visual tokens containing target concepts through specialized pointing tokens, achieving superior performance in image, GUI, video pointing, and tracking tasks.

Grounding has become a fundamental capability of vision-language models (VLMs). Most existing VLMs point by generating coordinates as part of their text output, which requires learning a complicated coordinate system and results in a high token count. Instead, we propose a more intuitive pointing mechanism that directly selects the visual tokens that contain the target concept. Our model generates a special pointing token that cross-attends to the input image or video tokens and selects the appropriate one. To make this model more fine-grained, we follow these pointing tokens with an additional special token that selects a fine-grained subpatch within the initially selected region, and then a third token that specifies a location within that subpatch. We further show that performance improves by generating points sequentially in a consistent order, encoding the relative position of the previously selected point, and including a special no-more-points class when selecting visual tokens. Using this method, we set a new state-of-the-art on image pointing (70.7% on PointBench), set a new state-of-the-art among fully open models on GUI pointing (61.1% on ScreenSpotPro), and improve video pointing (59.1% human preference win rate vs. a text coordinate baseline) and tracking (+6.3% gain on Molmo2Track). We additionally show that our method achieves much higher sample efficiency and discuss the qualitative differences that emerge from this design change.
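The three-stage selection described in the abstract (a pointing token picks a visual token, a second token narrows to a subpatch, a third specifies a location within that subpatch) can be sketched as a toy hard-selection loop. Everything below is an illustrative assumption rather than the paper's implementation: the function names, the sentinel value, and the convention of treating the last visual key as the no-more-points class are all invented for this sketch.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def select_index(query, keys):
    """Hard cross-attention: score each key against the query, take the argmax."""
    return max(range(len(keys)), key=lambda i: dot(query, keys[i]))

NO_MORE_POINTS = -1  # sentinel returned when the no-more-points class wins

def point_once(q_point, q_sub, q_loc, visual_keys, subpatch_keys, loc_keys):
    """One pointing step: coarse visual token -> subpatch -> location.

    By convention in this sketch, the last entry of visual_keys plays the
    role of the no-more-points class mentioned in the abstract.
    """
    coarse = select_index(q_point, visual_keys)
    if coarse == len(visual_keys) - 1:  # no-more-points class selected
        return NO_MORE_POINTS
    sub = select_index(q_sub, subpatch_keys[coarse])
    loc = select_index(q_loc, loc_keys)
    return (coarse, sub, loc)
```

In the actual model these selections would be made by learned cross-attention over image or video tokens, with points emitted sequentially in a consistent order; the hard argmax here only illustrates the hierarchy of coarse-to-fine choices.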


Get this paper in your agent:

hf papers read 2603.28069

Don't have the latest CLI?

curl -LsSf https://hf.co/cli/install.sh | bash

