🤖 AI Summary
Existing stereo matching methods fail in transparent scenes due to violation of the single-depth assumption and inability to jointly estimate depths of transparent objects and occluded backgrounds. To address this, we reformulate stereo matching as a pixel-wise multi-label regression task. We propose a pixel-wise multivariate Gaussian representation: its mean vector explicitly encodes multiple depth hypotheses, while its covariance matrix adaptively determines the necessity of multi-label prediction. Furthermore, we design a GRU-based iterative framework that jointly optimizes mean updates and covariance estimation. We introduce the first synthetic transparent-scene dataset, comprising 10 diverse scenes with 89 transparent objects. Experiments demonstrate substantial improvements in depth accuracy for transparent surfaces, complete preservation of background structure, and robust support for high-fidelity 3D reconstruction of transparent scenes.
📝 Abstract
In this paper, we present a multi-label stereo matching method that simultaneously estimates the depths of transparent objects and the occluded background in transparent scenes. Unlike previous methods, which assume a unimodal distribution along the disparity dimension and formulate matching as a single-label regression problem, we propose a multi-label regression formulation that estimates multiple depth values at the same pixel. To solve this multi-label regression problem, we introduce a pixel-wise multivariate Gaussian representation, in which the mean vector encodes multiple depth values at a pixel and the covariance matrix determines whether a multi-label representation is necessary for that pixel. The representation is predicted iteratively within a GRU framework: in each iteration, we first predict the update step for the mean parameters, and then use both the update step and the updated mean parameters to estimate the covariance matrix. We also synthesize a dataset containing 10 scenes and 89 objects to validate transparent-scene depth estimation. Experiments show that our method greatly improves performance on transparent surfaces while preserving background information for scene reconstruction. Code is available at https://github.com/BFZD233/TranScene.
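The two-stage iteration described above (predict a mean update step, then estimate the covariance from both the step and the updated mean) can be sketched in a toy NumPy form. Everything here is an illustrative assumption: the shapes, the stand-in update rule, and the variance-based multi-label gate are hypothetical placeholders for the learned GRU components, not the authors' implementation.

```python
import numpy as np

K = 2          # number of depth hypotheses per pixel (e.g. transparent surface + background)
H, W = 4, 4    # toy image size

rng = np.random.default_rng(0)
mean = rng.uniform(1.0, 10.0, size=(H, W, K))   # K depth labels per pixel

def iterate(mean, num_iters=3):
    """GRU-style refinement loop (toy stand-in): first predict an update
    step for the mean, then estimate a per-pixel covariance from both the
    step and the updated mean, mirroring the order in the abstract."""
    for _ in range(num_iters):
        # stand-in for the learned mean-update step
        delta = 0.1 * (mean.mean(axis=-1, keepdims=True) - mean)
        mean = mean + delta
        # toy covariance estimate built from the step and the updated mean;
        # a near-zero trace would indicate the hypotheses have collapsed,
        # i.e. a single-label pixel suffices
        diff = mean - mean.mean(axis=-1, keepdims=True)
        cov = np.einsum('hwi,hwj->hwij', diff, diff) + \
              np.einsum('hwi,hwj->hwij', delta, delta)
        multi_label = cov[..., 0, 0] + cov[..., 1, 1] > 1e-3  # per-pixel gate
    return mean, cov, multi_label

mean_out, cov_out, gate = iterate(mean)
print(mean_out.shape, cov_out.shape, gate.shape)
```

In the real method these updates are produced by a trained GRU over cost-volume features; the sketch only shows how a per-pixel mean vector and covariance matrix can coexist and how the covariance can gate multi-label prediction.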