Geometric Visual Servo Via Optimal Transport
arXiv:2506.02768v2 Announce Type: replace
Abstract: When developing control laws for robotic systems, the principal factor in evaluating their performance is choosing inputs that allow smooth tracking of a reference input. In the context of robotic manipulation, this involves translating an object or end-effector from an initial pose to a target pose. Robotic manipulation control laws frequently use vision systems as an error generator to track features and produce control inputs. However, current control algorithms do not take into account the probabilistic features that are extracted, and instead rely on hand-tuned feature extraction methods. Furthermore, the target features can exist in a static pose, allowing a combined pose and feature error to be used for control generation. We present a geometric control law for the visual servoing problem for robotic manipulators. The input from the camera constitutes a probability measure on the 3-dimensional Special Euclidean task-space group, where the Wasserstein distance between the current and desired poses is analogous to the geometric geodesic. From this, we develop a controller that allows for both pose- and image-based visual servoing by combining classical PD control and gravity compensation with error minimization through geodesic flows on the 3-dimensional Special Euclidean group. We present our results on a set of test cases demonstrating the generalisation ability of our approach across a variety of initial positions.
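The abstract's two ingredients, a Wasserstein distance between pose measures and a geodesic error on SE(3) driving a PD law with gravity compensation, can be sketched in a few lines. This is an illustrative reconstruction under simplifying assumptions (Gaussian pose measures, so the 2-Wasserstein distance has a closed form), not the authors' implementation; the function names, gain matrices, and gravity term are placeholders.

```python
import numpy as np
from scipy.linalg import logm, sqrtm

def gaussian_w2(m1, S1, m2, S2):
    """Closed-form 2-Wasserstein distance between Gaussian measures
    N(m1, S1) and N(m2, S2) -- a stand-in for the distance between
    current and desired pose distributions."""
    S2_half = sqrtm(S2)
    cross = sqrtm(S2_half @ S1 @ S2_half)
    w2_sq = np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross)
    return float(np.sqrt(max(w2_sq.real, 0.0)))

def se3_geodesic_error(T_current, T_desired):
    """Geodesic (log-map) error between two 4x4 homogeneous transforms
    in SE(3), returned as a 6-vector twist (v, omega)."""
    T_rel = np.linalg.inv(T_current) @ T_desired
    xi_hat = logm(T_rel)  # 4x4 element of the Lie algebra se(3)
    v = xi_hat[:3, 3].real
    omega = np.array([xi_hat[2, 1], xi_hat[0, 2], xi_hat[1, 0]]).real
    return np.concatenate([v, omega])

def pd_gravity_control(error, error_dot, g_tau, Kp, Kd):
    """Classical PD law with gravity compensation g_tau acting on the
    geodesic error and its rate."""
    return Kp @ error + Kd @ error_dot + g_tau
```

For identical orientations and a pure translation, the geodesic error reduces to the translation vector itself; for identical Gaussians, the Wasserstein distance is zero, so the controller drives both pose and feature-distribution error toward the target.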
Comments: 19 pages, 5 figures. Accepted to Control Engineering Practice
Subjects:
Robotics (cs.RO); Systems and Control (eess.SY)
Cite as: arXiv:2506.02768 [cs.RO]
(or arXiv:2506.02768v2 [cs.RO] for this version)
https://doi.org/10.48550/arXiv.2506.02768
arXiv-issued DOI via DataCite
Related DOI:
https://doi.org/10.1016/j.conengprac.2026.106966
DOI(s) linking to related resources
Submission history
From: Eytan Canzini
[v1] Tue, 3 Jun 2025 11:38:09 UTC (13,742 KB)
[v2] Wed, 1 Apr 2026 14:25:20 UTC (13,721 KB)