Self-Consistency for LLM-Based Motion Trajectory Generation and Verification
Abstract: Self-consistency has proven to be an effective technique for improving LLM performance on natural language reasoning tasks in a lightweight, unsupervised manner. In this work, we study how to adapt self-consistency to visual domains. Specifically, we consider the generation and verification of LLM-produced motion graphics trajectories. Given a prompt (e.g., "Move the circle in a spiral path"), we first sample diverse motion trajectories from an LLM, and then identify groups of consistent trajectories via clustering. Our key insight is to model the family of shapes associated with a prompt as a prototype trajectory paired with a group of geometric transformations (e.g., rigid, similarity, and affine). Two trajectories can then be considered consistent if one can be transformed into the other under the warps allowable by the transformation group. We propose an algorithm that automatically recovers a shape family, using hierarchical relationships between a set of candidate transformation groups. Our approach improves the accuracy of LLM-based trajectory generation by 4-6%. We further extend our method to support verification, observing 11% precision gains over VLM baselines. Our code and dataset are available at this https URL.
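The abstract only summarizes the method, so the following is a minimal, hypothetical sketch of the pairwise consistency test it describes: fit the best warp in each candidate transformation group (rigid, then similarity, then affine, following the subgroup hierarchy) and accept the most restrictive group whose fitting residual is small, then cluster sampled trajectories by pairwise consistency. This is an illustrative reconstruction, not the authors' code: the threshold TOL, the Umeyama/least-squares fitters, the greedy clustering, and the assumption that trajectories arrive as matched (N, 2) point arrays are all assumptions made here.

import numpy as np

TOL = 0.05  # assumed residual threshold, as a fraction of trajectory scale

def fit_affine(src, dst):
    """Least-squares affine map src -> dst."""
    H = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(H, dst, rcond=None)    # 3x2 affine matrix
    return lambda p: np.hstack([p, np.ones((len(p), 1))]) @ M

def fit_similarity(src, dst, allow_scale=True):
    """Umeyama alignment: rotation (+ optional uniform scale) + translation.
    With allow_scale=False this reduces to the rigid (Kabsch) fit."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    U, sig, Vt = np.linalg.svd(D.T @ S / len(src))
    flip = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # avoid reflections
    R = U @ np.diag([1.0, flip]) @ Vt
    var_s = (S ** 2).sum() / len(src)
    c = (sig[0] + flip * sig[1]) / var_s if allow_scale else 1.0
    return lambda p: c * (p - mu_s) @ R.T + mu_d

def residual(T, src, dst):
    """RMS warp error, normalized by the target trajectory's spread."""
    err = np.sqrt(((T(src) - dst) ** 2).sum(1).mean())
    scale = np.sqrt(((dst - dst.mean(0)) ** 2).sum(1).mean()) + 1e-9
    return err / scale

GROUPS = [  # most restrictive first: rigid is a subgroup of similarity, similarity of affine
    ("rigid", lambda s, d: fit_similarity(s, d, allow_scale=False)),
    ("similarity", fit_similarity),
    ("affine", fit_affine),
]

def consistency_group(traj_a, traj_b, tol=TOL):
    """Return the tightest candidate group under which traj_a warps onto traj_b."""
    for name, fit in GROUPS:
        if residual(fit(traj_a, traj_b), traj_a, traj_b) < tol:
            return name
    return None

def largest_consistent_cluster(trajs, tol=TOL):
    """Greedy self-consistency vote: cluster sampled trajectories by pairwise
    consistency and return the indices of the largest cluster."""
    clusters = []
    for i, t in enumerate(trajs):
        for cl in clusters:
            if consistency_group(trajs[cl[0]], t, tol) is not None:
                cl.append(i)
                break
        else:
            clusters.append([i])
    return max(clusters, key=len)

The subgroup ordering is the point of testing groups from most to least restrictive: because every rigid warp is also a similarity and every similarity is affine, the first group that explains a pair is the tightest family, which mirrors the hierarchical relationships between candidate transformation groups that the abstract says the algorithm exploits.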
Comments: Accepted to CVPR 2026
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2603.29301 [cs.CV]
(or arXiv:2603.29301v1 [cs.CV] for this version)
https://doi.org/10.48550/arXiv.2603.29301
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Jiaju Ma [v1] Tue, 31 Mar 2026 06:08:13 UTC (28,346 KB)