🤖 AI Summary
Monocular depth estimation often suffers from temporal jitter and inconsistency due to scale drift and dynamic scene content. This work proposes a temporally consistent depth estimation framework that fuses wheel odometry with optical flow: sparse depth cues are generated via flow-based triangulation, while a recursive Bayesian filter continuously estimates the metric scale, resolving the scale ambiguity and enabling rescaling of outputs from a pretrained monocular depth model. Evaluated on KITTI, TartanAir, MS2, and a custom dataset, the method demonstrates robust and accurate performance, significantly improving both the temporal consistency and geometric fidelity of depth predictions in dynamic scenes.
📝 Abstract
Monocular depth estimation (MDE) has been widely adopted in the perception systems of autonomous vehicles and mobile robots. However, existing approaches often struggle to maintain temporal consistency across consecutive frames. This inconsistency not only causes jitter but can also lead to estimation failures when the depth range changes abruptly. To address these challenges, this paper proposes a consistency-aware monocular depth estimation framework that leverages wheel odometry from a mobile robot to achieve stable and coherent depth predictions over time. Specifically, we estimate camera pose and sparse depth via triangulation using optical flow between consecutive frames. The sparse depth estimates are used to update a recursive Bayesian estimate of the metric scale, which is then applied to rescale the relative depth predicted by a pre-trained depth estimation foundation model. The proposed method is evaluated on KITTI, TartanAir, MS2, and our own dataset, demonstrating robust and accurate depth estimation performance.
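To make the scale-recovery step concrete, here is a minimal sketch of the kind of recursive Bayesian update the abstract describes: a scalar Kalman-style filter tracks the metric scale from per-frame observations (ratios of triangulated sparse metric depth to the model's relative depth), and the filtered scale rescales the dense relative prediction. All class and function names, noise parameters, and the median-ratio observation model are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


class RecursiveScaleFilter:
    """Hypothetical scalar Kalman filter tracking the metric scale over time."""

    def __init__(self, s0=1.0, var0=1.0, process_var=1e-3, obs_var=1e-2):
        self.s = s0                    # current scale estimate
        self.var = var0                # estimate variance
        self.process_var = process_var # how much the scale may drift per frame
        self.obs_var = obs_var         # noise of a single scale observation

    def update(self, scale_obs):
        # Predict: assume scale is roughly constant, inflate uncertainty.
        self.var += self.process_var
        # Correct: blend in the new observation by the Kalman gain.
        k = self.var / (self.var + self.obs_var)
        self.s += k * (scale_obs - self.s)
        self.var *= (1.0 - k)
        return self.s


def rescale_depth(relative_depth, sparse_metric, sparse_relative, filt):
    """Rescale a dense relative-depth map using sparse triangulated points.

    sparse_metric/sparse_relative are values at the same sparse pixel
    locations; their median ratio serves as the per-frame scale observation.
    """
    obs = np.median(sparse_metric / sparse_relative)
    s = filt.update(obs)
    return s * relative_depth
```

The recursive form means each frame only refines the running scale estimate rather than re-solving it from scratch, which is what suppresses frame-to-frame jitter when the sparse observations are noisy.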