RoboTAG: End-to-end Robot Configuration Estimation via Topological Alignment Graph

📅 2025-11-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Monocular RGB robot pose estimation suffers from scarce annotated data, a significant sim-to-real gap, and insufficient use of 3D geometric priors. Method: a topology-aligned dual-branch graph network that couples a 2D visual backbone with 3D geometric reasoning. It models node states and edge dependencies to align representations across domains, introduces loop-closure consistency as a self-supervised signal for training on unlabeled real-world data, and designs a topology-aligned graph structure that lets the 2D and 3D branches co-evolve while explicitly embedding 3D geometric priors. Contribution/Results: experiments across multiple robotic platforms demonstrate substantial improvements in pose estimation accuracy over existing unsupervised and weakly supervised methods. Annotation dependency is reduced by over 70%, and performance on fully unlabeled real images approaches that of fully supervised approaches.

📝 Abstract
Estimating robot pose from a monocular RGB image is a long-standing challenge in robotics and computer vision. Existing methods typically build networks on top of 2D visual backbones and depend heavily on labeled training data, which is scarce in real-world scenarios, leading to a sim-to-real gap. Moreover, these approaches reduce an inherently 3D problem to the 2D domain, neglecting 3D priors. To address these issues, we propose the Robot Topological Alignment Graph (RoboTAG), which incorporates a 3D branch to inject 3D priors while enabling co-evolution of the 2D and 3D representations, alleviating the reliance on labels. Specifically, RoboTAG consists of a 3D branch and a 2D branch, where nodes represent the states of the camera and robot system, and edges either capture dependencies between these variables or denote alignments between them. Closed loops are then defined in the graph, on which a consistency supervision across branches can be applied. This design allows us to use in-the-wild images as training data without annotations. Experimental results demonstrate that our method is effective across robot types, highlighting its potential to alleviate the data bottleneck in robotics.
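The loop-closure idea can be illustrated with a toy sketch: one loop compares the 2D branch's predicted keypoints against the reprojection of the 3D branch's estimated joint configuration through forward kinematics and a camera model. Everything below is an illustrative assumption, not the paper's implementation: a planar 2-link arm stands in for the robot, a simple pinhole model for the camera, and mean keypoint distance for the consistency loss.

```python
import numpy as np

def forward_kinematics(joint_angles, link_lengths):
    """3D keypoint positions of a planar 2-link arm (toy stand-in for the robot model)."""
    theta1, theta2 = joint_angles
    p0 = np.zeros(3)
    p1 = p0 + np.array([link_lengths[0] * np.cos(theta1),
                        link_lengths[0] * np.sin(theta1), 0.0])
    p2 = p1 + np.array([link_lengths[1] * np.cos(theta1 + theta2),
                        link_lengths[1] * np.sin(theta1 + theta2), 0.0])
    return np.stack([p0, p1, p2])

def project(points_3d, focal=500.0, z_offset=2.0, center=(320.0, 240.0)):
    """Pinhole projection of 3D keypoints into the image plane (assumed intrinsics)."""
    z = points_3d[:, 2] + z_offset
    u = focal * points_3d[:, 0] / z + center[0]
    v = focal * points_3d[:, 1] / z + center[1]
    return np.stack([u, v], axis=1)

def loop_closure_loss(kps_2d_pred, joints_pred, link_lengths):
    """Consistency between the 2D branch's keypoints and the reprojection
    of the 3D branch's estimated configuration; needs no ground-truth labels."""
    reproj = project(forward_kinematics(joints_pred, link_lengths))
    return float(np.mean(np.linalg.norm(kps_2d_pred - reproj, axis=1)))

# When both branches agree, the loop closes and the loss vanishes;
# a perturbed joint estimate breaks the loop and incurs a penalty.
links = (0.5, 0.4)
true_joints = np.array([0.3, -0.6])
kps_2d = project(forward_kinematics(true_joints, links))
print(loop_closure_loss(kps_2d, true_joints, links))
print(loop_closure_loss(kps_2d, true_joints + np.array([0.1, 0.0]), links))
```

Because both branches observe the same image, this residual can supervise training on unlabeled in-the-wild photos, which is the mechanism the abstract describes.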
Problem

Research questions and friction points this paper is trying to address.

Estimating robot pose from monocular RGB images
Reducing reliance on labeled training data
Incorporating 3D priors into pose estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates 3D branch to inject 3D priors
Enables co-evolution of 2D and 3D representations
Uses topological graph with consistency supervision