GVDepth: Zero-Shot Monocular Depth Estimation for Ground Vehicles based on Probabilistic Cue Fusion

πŸ“… 2024-12-08
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Monocular depth estimation for autonomous driving generalizes poorly in zero-shot cross-dataset settings because camera intrinsics and depth are entangled. Method: The paper proposes a camera-agnostic canonical depth representation that explicitly decouples depth from intrinsic parameters, designs a dual-cue depth regression framework leveraging object scale and vertical image position, and introduces an adaptive Bayesian weighting network that fuses the two cues probabilistically, with uncertainty awareness. Crucially, the method requires no multi-camera or multi-dataset joint training: zero-shot transfer is achieved from a single source dataset. Contribution/Results: Evaluated on five mainstream autonomous driving benchmarks, the approach achieves accurate, well-generalizing metric depth estimation without fine-tuning, reaching accuracy comparable to existing zero-shot state-of-the-art methods despite the single-dataset, single-camera training setup.
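
The canonical representation is the core of the decoupling, and the summary does not spell out its exact form, so the following is a minimal sketch of one common normalization: rescaling metric depth by the focal length so that training targets depend only on apparent pixel size, not on the source camera. The function names and the `F_CANONICAL` constant are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical canonical focal length, chosen for illustration only.
F_CANONICAL = 700.0  # pixels

def to_canonical_depth(depth_m: float, fy: float) -> float:
    """Map metric depth into a camera-agnostic canonical space.

    Under a pinhole model an object of metric height H at depth d
    spans h = fy * H / d pixels, so rescaling depth by F_CANONICAL / fy
    makes the training target depend only on apparent pixel size,
    not on the source camera's focal length.
    """
    return depth_m * F_CANONICAL / fy

def from_canonical_depth(depth_canonical: float, fy: float) -> float:
    """Invert the normalization for a new camera at inference time."""
    return depth_canonical * fy / F_CANONICAL

# Two cameras that image an object at the same pixel height agree in
# canonical space: a 20 m object at fy=1400 and a 10 m object at fy=700
# both map to a canonical depth of 10.
print(to_canonical_depth(20.0, fy=1400.0))  # 10.0
print(to_canonical_depth(10.0, fy=700.0))   # 10.0
```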

πŸ“ Abstract
Generalizing metric monocular depth estimation presents a significant challenge due to its ill-posed nature, while the entanglement between camera parameters and depth amplifies issues further, hindering multi-dataset training and zero-shot accuracy. This challenge is particularly evident in autonomous vehicles and mobile robotics, where data is collected with fixed camera setups, limiting the geometric diversity. Yet, this context also presents an opportunity: the fixed relationship between the camera and the ground plane imposes additional perspective geometry constraints, enabling depth regression via vertical image positions of objects. However, this cue is highly susceptible to overfitting, thus we propose a novel canonical representation that maintains consistency across varied camera setups, effectively disentangling depth from specific parameters and enhancing generalization across datasets. We also propose a novel architecture that adaptively and probabilistically fuses depths estimated via object size and vertical image position cues. A comprehensive evaluation demonstrates the effectiveness of the proposed approach on five autonomous driving datasets, achieving accurate metric depth estimation for varying resolutions, aspect ratios and camera setups. Notably, we achieve comparable accuracy to existing zero-shot methods, despite training on a single dataset with a single-camera setup.
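
To make the two cues from the abstract concrete, the closed-form pinhole geometry behind them can be written down directly. This is a sketch under simplifying assumptions (zero camera pitch, flat ground, known object height); the paper regresses these quantities with a network rather than computing them analytically, and all names below are hypothetical.

```python
def depth_from_vertical_position(v_row: float, v_horizon: float,
                                 fy: float, cam_height: float) -> float:
    """Ground-plane cue: with a level pinhole camera mounted at height
    cam_height, a ground contact point imaged v_row - v_horizon rows
    below the horizon lies at depth z = fy * cam_height / (v - v0)."""
    return fy * cam_height / (v_row - v_horizon)

def depth_from_object_size(h_pixels: float, h_metric: float,
                           fy: float) -> float:
    """Object-scale cue: an object of known metric height h_metric
    spanning h_pixels in the image lies at depth z = fy * H / h."""
    return fy * h_metric / h_pixels

# A 1.5 m tall car spanning 50 px, with its ground contact point
# 40 rows below the horizon, seen from a camera 1.6 m above the road.
fy = 1200.0
print(depth_from_object_size(50.0, 1.5, fy))                # 36.0 m
print(depth_from_vertical_position(540.0, 500.0, fy, 1.6))  # 48.0 m
```

Note how the fixed camera-to-ground relationship makes the vertical-position cue usable at all: `cam_height` is constant for a ground vehicle, which is exactly the constraint the abstract says the method exploits.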
Problem

Research questions and friction points this paper is trying to address.

Generalizing metric monocular depth estimation for ground vehicles
Disentangling depth from specific camera parameters and setups
Achieving accurate zero-shot depth estimation across diverse datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Canonical representation disentangles depth from camera parameters
Probabilistic fusion of object size and vertical image position cues (see the fusion sketch after this list)
Single-dataset training generalizes to multiple autonomous driving datasets
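
A minimal sketch of uncertainty-aware cue fusion, assuming each cue's depth is treated as a Gaussian with a predicted variance, so the fused estimate is the inverse-variance weighted mean. The paper's adaptive Bayesian weighting network learns this behavior per pixel rather than applying a fixed closed form; the function below is illustrative.

```python
import numpy as np

def fuse_depth_cues(d_size, var_size, d_vert, var_vert):
    """Inverse-variance (Gaussian) fusion of two per-pixel depth maps.

    Treating each cue as an independent Gaussian N(d_i, var_i), the
    posterior mean is the inverse-variance weighted average, so the
    less certain cue is automatically down-weighted at every pixel.
    """
    w_size, w_vert = 1.0 / var_size, 1.0 / var_vert
    d_fused = (w_size * d_size + w_vert * d_vert) / (w_size + w_vert)
    var_fused = 1.0 / (w_size + w_vert)
    return d_fused, var_fused

# Toy per-pixel example: the vertical cue is trusted near the vehicle
# (low variance) but unreliable at range (high variance).
d_size = np.array([36.0, 50.0])
d_vert = np.array([38.0, 70.0])
fused, var = fuse_depth_cues(d_size, np.array([4.0, 4.0]),
                             d_vert, np.array([1.0, 25.0]))
print(fused)  # [37.6, ~52.76]
```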