🤖 AI Summary
Omnidirectional stereo depth estimation has long been held back by a scarcity of real-world training data, which limits model generalization. To address this, we introduce Helvipad, a large-scale real-scene dataset for omnidirectional stereo depth estimation comprising 40K equirectangular video frames with depth and disparity ground truth obtained by projecting LiDAR point clouds onto the images, plus an augmented training set whose labels are densified via depth completion. The data were captured with two co-located 360° cameras in a top-bottom configuration together with a LiDAR sensor, covering diverse indoor and outdoor scenes. Benchmarking shows that existing stereo methods degrade noticeably on omnidirectional imagery; adapting these models to the spherical geometry of equirectangular images yields clear accuracy gains. Helvipad thereby provides a standardized benchmark and a reproducible pipeline for omnidirectional stereo depth estimation.
📝 Abstract
Despite considerable progress in stereo depth estimation, omnidirectional imaging remains underexplored, mainly due to the lack of appropriate data. We introduce Helvipad, a real-world dataset for omnidirectional stereo depth estimation, consisting of 40K frames from video sequences captured in diverse environments, including crowded indoor and outdoor scenes under varying lighting conditions. Collected using two 360° cameras in a top-bottom setup and a LiDAR sensor, the dataset includes accurate depth and disparity labels obtained by projecting 3D point clouds onto equirectangular images. Additionally, we provide an augmented training set with significantly increased label density obtained through depth completion. We benchmark leading stereo depth estimation models for both standard and omnidirectional images. The results show that, while recent stereo methods perform decently, accurately estimating depth in omnidirectional imaging remains a significant challenge. To address this, we introduce the necessary adaptations to stereo models, achieving improved performance.
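To make the labeling pipeline more concrete, below is a minimal sketch of how LiDAR points can be projected onto an equirectangular image and how radial depth relates to angular disparity in a top-bottom 360° rig. The axis convention, image indexing, and the law-of-sines depth–disparity relation are common choices for vertical spherical stereo, not details confirmed by the abstract; the function names and parameters (`baseline`, `H`, `W`) are illustrative.

```python
import numpy as np


def project_to_equirectangular(points, H, W):
    """Project 3D points (N, 3), given in the camera frame, onto an
    equirectangular image of size H x W.

    Assumed convention (illustrative): the rig's vertical axis is +y,
    azimuth sweeps the image width and the polar angle the image height.
    Returns pixel coordinates (u, v), the polar angle, and radial depth.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                   # radial depth per point
    azimuth = np.arctan2(x, z)                           # in [-pi, pi]
    polar = np.arccos(np.clip(y / np.maximum(r, 1e-9), -1.0, 1.0))  # in [0, pi], from +y
    u = (azimuth / (2.0 * np.pi) + 0.5) * W              # column index
    v = (polar / np.pi) * H                              # row index
    return u, v, polar, r


def depth_from_vertical_disparity(polar_top, disparity, baseline):
    """Law-of-sines relation for a top-bottom spherical stereo pair:
    depth from the bottom camera, given the polar angle observed by the
    top camera and the angular disparity (difference of polar angles)."""
    return baseline * np.sin(polar_top) / np.sin(disparity)


# Example usage: rasterize a sparse depth map from projected LiDAR points.
if __name__ == "__main__":
    H, W = 512, 1024                                     # illustrative image size
    points = np.random.uniform(-10, 10, size=(1000, 3))  # stand-in for a LiDAR scan
    u, v, _, r = project_to_equirectangular(points, H, W)
    depth_map = np.zeros((H, W), dtype=np.float32)
    cols = np.clip(u.astype(int), 0, W - 1)
    rows = np.clip(v.astype(int), 0, H - 1)
    depth_map[rows, cols] = r                            # sparse labels; depth completion
                                                         # would densify such a map
```

A map produced this way is inherently sparse, which is why the abstract's depth-completion-augmented training set matters: it fills in pixels the LiDAR never hits, giving denser supervision for stereo models.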