RoMeO: Robust Metric Visual Odometry

📅 2024-12-16
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Monocular RGB visual odometry (VO) suffers from poor robustness in outdoor environments, unobservable metric scale, and limited generalization capability. To address these limitations, we propose a metric-scale VO method that operates without IMU or 3D sensors. Our approach is the first to jointly leverage pre-trained monocular depth estimation and multi-view stereo (MVS) reconstruction to establish cooperative geometric constraints. We further introduce noise-injection training and an adaptive depth-prior filtering mechanism to enhance robustness under challenging in-the-wild conditions. Additionally, we design a depth-prior-guided bundle adjustment (BA) framework that jointly optimizes camera poses and scene depth in a unified manner. Evaluated on six indoor and outdoor benchmarks, our method reduces relative and absolute trajectory errors by over 50% compared to the state-of-the-art DPVO, while significantly improving generalization and metric-scale accuracy. The framework seamlessly integrates into full SLAM systems.
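The depth-prior-guided BA described above can be pictured as a standard bundle-adjustment residual augmented with a depth-regularization term. The following is a minimal illustrative sketch, not the paper's actual formulation: the log-depth residual, its weight `lam`, and all function names here are assumptions, since the summary does not specify the cost function.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D points X (N,3) into pixels with intrinsics K and pose (R, t)."""
    Xc = X @ R.T + t                   # world -> camera frame
    uv = Xc @ K.T                      # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3], Xc[:, 2]

def ba_residuals(K, R, t, X, uv_obs, depth_prior, lam=0.1):
    """Joint residual: reprojection error plus a depth-prior term.

    `lam` weights a hypothetical log-depth regularizer pulling estimated
    depths toward the pre-trained model's prior; the real system's
    weighting and robust loss are not given in the summary.
    """
    uv, z = project(K, R, t, X)
    r_reproj = (uv - uv_obs).ravel()                   # 2N reprojection residuals
    r_depth = lam * (np.log(z) - np.log(depth_prior))  # N log-depth residuals
    return np.concatenate([r_reproj, r_depth])
```

In a real pipeline such residuals would be fed to a nonlinear least-squares solver that updates poses and depths jointly, which is what "optimizes camera poses and scene depth in a unified manner" refers to.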

📝 Abstract
Visual odometry (VO) aims to estimate camera poses from visual inputs -- a fundamental building block for many applications such as VR/AR and robotics. This work focuses on monocular RGB VO where the input is a monocular RGB video without IMU or 3D sensors. Existing approaches lack robustness under this challenging scenario and fail to generalize to unseen data (especially outdoors); they also cannot recover metric-scale poses. We propose Robust Metric Visual Odometry (RoMeO), a novel method that resolves these issues leveraging priors from pre-trained depth models. RoMeO incorporates both monocular metric depth and multi-view stereo (MVS) models to recover metric scale, simplify correspondence search, provide better initialization and regularize optimization. Effective strategies are proposed to inject noise during training and adaptively filter noisy depth priors, which ensure the robustness of RoMeO on in-the-wild data. As shown in Fig. 1, RoMeO advances the state-of-the-art (SOTA) by a large margin across 6 diverse datasets covering both indoor and outdoor scenes. Compared to the current SOTA DPVO, RoMeO reduces the relative (align the trajectory scale with GT) and absolute trajectory errors both by >50%. The performance gain also transfers to the full SLAM pipeline (with global BA & loop closure). Code will be released upon acceptance.
Problem

Research questions and friction points this paper is trying to address.

Estimating camera poses from monocular RGB video alone, without IMU or 3D sensors
Poor robustness and generalization of existing VO methods, especially outdoors
Unobservable metric scale in monocular VO, addressed here via depth priors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages pre-trained depth models for robustness
Combines monocular depth and multi-view stereo
Uses noise-injection training and adaptive depth-prior filtering for robustness
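The last two innovations can be sketched in a few lines. This is a hypothetical illustration only: the noise model (global scale perturbation plus per-pixel Gaussian noise), the relative-error threshold, and all parameter names are assumptions, as the summary does not describe the actual training recipe or filtering rule.

```python
import numpy as np

def inject_depth_noise(depth, scale_sigma=0.05, pixel_sigma=0.02, rng=None):
    """Corrupt a depth map during training (hypothetical noise model).

    Applies a global scale perturbation plus per-pixel Gaussian noise,
    mimicking the systematic and local errors of pre-trained depth models.
    """
    rng = rng or np.random.default_rng()
    scale = 1.0 + rng.normal(0.0, scale_sigma)          # global scale error
    noise = rng.normal(0.0, pixel_sigma, depth.shape)   # per-pixel error
    return depth * scale * (1.0 + noise)

def filter_depth_prior(prior, estimate, rel_thresh=0.15):
    """Mask out prior depths that disagree with the current estimate.

    Returns a boolean mask: True where the prior is kept. The threshold
    would be adapted per scene in a full system; here it is fixed.
    """
    rel_err = np.abs(prior - estimate) / np.maximum(estimate, 1e-6)
    return rel_err < rel_thresh
```

Training on corrupted priors while discarding inconsistent ones at inference is one plausible reading of how the method stays robust on in-the-wild data.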