🤖 AI Summary
Stereo matching suffers from poor generalization in ill-posed regions—such as occlusions and non-Lambertian surfaces—while monocular depth priors trained on small-scale datasets exhibit systematic bias. To address this, we propose an unbiased monocular depth prior grounded in Vision Foundation Models (VFMs), specifically tackling three key fusion bottlenecks: (1) absolute scale misalignment between monocular depth and stereo disparity, (2) overconfident convergence during iterative refinement, and (3) misleading guidance from noisy initial disparity estimates. Our method introduces binary local ordinal maps to unify relative and absolute depth representation; designs an ordinal-map-guided disparity reweighting update mechanism; and formulates mono-stereo fusion as a pixel-wise adaptive registration problem. Evaluated on cross-domain benchmarks (SceneFlow → Middlebury/Booster), our approach achieves significant accuracy gains while preserving near-original inference efficiency.
📝 Abstract
Its matching formulation makes stereo matching inherently ill-suited to ill-posed regions such as occlusions and non-Lambertian surfaces. Fusing monocular priors has proven helpful for ill-posed matching, but a biased monocular prior learned from small stereo datasets limits generalization. Recently, stereo matching has progressed by leveraging the unbiased monocular prior from vision foundation models (VFMs) to improve generalization in ill-posed regions. We dive into the fusion process and identify three main problems that limit the fusion of the VFM monocular prior. The first is the misalignment between the affine-invariant relative depth of the monocular prior and the absolute depth implied by disparity. Second, when the monocular feature is used in an iterative update structure, over-confidence in the disparity update leads to local optima. Directly fusing a monocular depth map could alleviate this local-optima problem, but the noisy disparity computed in the first few iterations misguides the fusion. In this paper, we propose a binary local ordering map to guide the fusion: it converts the depth map into a binary relative format, unifying the relative and absolute depth representations. The computed local ordering map is also used to re-weight the initial disparity update, resolving the local-optima and noise problems. In addition, we formulate the final direct fusion of monocular depth into disparity as a registration problem, where a pixel-wise linear regression module globally and adaptively aligns them. Our method fully exploits the monocular prior to support stereo matching effectively and efficiently. Experiments show significant performance gains when generalizing from SceneFlow to the Middlebury and Booster datasets, with little loss of efficiency.
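The binary local ordering map described in the abstract can be illustrated with a minimal sketch: each pixel is compared against its neighbors in a small window, producing a binary code that depends only on local depth order. This is an illustrative reconstruction, not the authors' code; the window radius, neighbor set, and edge padding are assumptions.

```python
import numpy as np

def binary_local_ordering_map(depth, radius=1):
    """For each pixel, compare its depth against every neighbor in a
    (2*radius+1)^2 window (center excluded). Output is 1 where the
    neighbor is farther than the center, 0 otherwise, so the map depends
    only on local ordering, not on absolute scale or affine offset."""
    h, w = depth.shape
    offsets = [(dy, dx)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if (dy, dx) != (0, 0)]
    maps = np.zeros((len(offsets), h, w), dtype=np.uint8)
    # edge padding so border pixels compare against replicated values
    padded = np.pad(depth, radius, mode='edge')
    for k, (dy, dx) in enumerate(offsets):
        shifted = padded[radius + dy: radius + dy + h,
                         radius + dx: radius + dx + w]
        maps[k] = (shifted > depth).astype(np.uint8)
    return maps
```

Because any affine transform with positive scale preserves local ordering, the same map can be computed from affine-invariant monocular depth and from absolute disparity (noting that disparity varies inversely with depth, so the comparison direction flips between the two), which is what lets the binary format unify the two representations.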
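The registration formulation of the final fusion step can be sketched with a simplified global version: a confidence-weighted linear regression that fits a scale and shift mapping monocular inverse depth onto disparity. This is a hypothetical stand-in for the paper's pixel-wise adaptive module, not the authors' implementation; the function name, the use of inverse depth, and the confidence weights are assumptions.

```python
import numpy as np

def align_mono_to_disparity(mono_inv_depth, disparity, weights):
    """Weighted least-squares fit of scale a and shift b so that
    a * mono_inv_depth + b approximates disparity, down-weighting
    pixels with unreliable disparity (e.g. early-iteration noise)."""
    x = mono_inv_depth.ravel()
    y = disparity.ravel()
    w = weights.ravel()
    # closed-form weighted linear regression
    sw = w.sum()
    mx = (w * x).sum() / sw
    my = (w * y).sum() / sw
    cov = (w * (x - mx) * (y - my)).sum()
    var = (w * (x - mx) ** 2).sum()
    a = cov / var
    b = my - a * mx
    return a * mono_inv_depth + b, (a, b)
```

A pixel-wise adaptive variant, as described in the abstract, would let the regression coefficients vary spatially instead of fitting one global pair, but the weighted closed form above conveys the core idea of registering the relative prior to the absolute disparity.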