
TrajectoryMover: Generative Movement of Object Trajectories in Videos

arXiv cs.CV · Kiran Chhatre, Hyeonho Jeong, Yulia Gryaditskaya, Christopher E. Peters, Chun-Hao Paul Huang, Paul Guerrero · April 1, 2026



Abstract: Generative video editing has enabled several intuitive editing operations for short video clips that would previously have been difficult to achieve, especially for non-expert editors. Existing methods focus on prescribing an object's 3D or 2D motion trajectory in a video, or on altering the appearance of an object or a scene, while preserving both the video's plausibility and identity. Yet a method to move an object's 3D motion trajectory in a video, i.e., moving an object while preserving its relative 3D motion, is still missing. The main challenge lies in obtaining paired video data for this scenario. Previous methods typically rely on clever data generation approaches to construct plausible paired data from unpaired videos, but this approach fails if one of the videos in a pair cannot easily be constructed from the other. Instead, we introduce TrajectoryAtlas, a new data generation pipeline for large-scale synthetic paired video data, and a video generator, TrajectoryMover, fine-tuned with this data. We show that this successfully enables generative movement of object trajectories. Project page: this https URL
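The core editing operation the abstract describes, relocating an object while preserving its relative 3D motion, amounts to translating a trajectory to a new anchor point without changing the frame-to-frame displacements. The sketch below illustrates that invariant on a toy 3D trajectory; the function name and array representation are hypothetical illustrations, not the paper's actual pipeline.

```python
import numpy as np

def move_trajectory(traj, new_start):
    """Translate an (N, 3) trajectory so it begins at new_start,
    keeping every frame-to-frame displacement identical
    (i.e., the object's relative 3D motion is preserved)."""
    traj = np.asarray(traj, dtype=float)
    offset = np.asarray(new_start, dtype=float) - traj[0]
    return traj + offset

# A short toy trajectory: three frames of 3D positions.
orig = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.5, 0.0],
                 [2.0, 1.0, 0.5]])

moved = move_trajectory(orig, [5.0, 0.0, 2.0])

# The per-frame displacements (relative motion) are unchanged,
# only the absolute position differs.
assert np.allclose(np.diff(moved, axis=0), np.diff(orig, axis=0))
assert np.allclose(moved[0], [5.0, 0.0, 2.0])
```

The hard part the paper addresses is not this geometric translation, which is trivial, but producing plausible paired *videos* that realize it, which is why the authors build a synthetic paired-data pipeline (TrajectoryAtlas) rather than mining unpaired footage.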

Comments: 24 pages, 8 figures. Project page: this https URL

Subjects: Computer Vision and Pattern Recognition (cs.CV)

Cite as: arXiv:2603.29092 [cs.CV]

(or arXiv:2603.29092v1 [cs.CV] for this version)

https://doi.org/10.48550/arXiv.2603.29092

arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Kiran Chhatre [view email] [v1] Tue, 31 Mar 2026 00:15:36 UTC (8,432 KB)
