AI Summary
RAFT-Stereo suffers from inconsistent convergence across frequency bands during iterative optimization because it applies uniform full-spectrum updates, leading to severe degradation of high-frequency details such as object boundaries and fine structures. To address this, we propose a wavelet-decomposition-based dual-path stereo matching framework that, for the first time, explicitly decouples frequency components: a low-frequency branch captures global scene structure, while a high-frequency branch employs an LSTM-driven adaptive update operator to dynamically preserve and refine fine details. An iterative frequency adapter further harmonizes optimization between the two paths. Our approach overcomes the limitations of the conventional iterative paradigm, achieving state-of-the-art performance on multiple metrics of the KITTI 2015 and KITTI 2012 benchmarks. It notably enhances texture recovery in distant regions and significantly improves reconstruction accuracy of fine-grained structures.
Abstract
We find that the EPE evaluation metric of RAFT-Stereo converges inconsistently between low- and high-frequency regions, resulting in high-frequency degradation (e.g., edges and thin objects) during the iterative process. The underlying reason for the limited performance of current iterative methods is that they optimize all frequency components together without distinguishing between high and low frequencies. We propose a wavelet-based stereo matching framework (Wavelet-Stereo) to resolve this frequency convergence inconsistency. Specifically, we first explicitly decompose an image into high- and low-frequency components using the discrete wavelet transform. The high-frequency and low-frequency components are then fed into two separate multi-scale frequency feature extractors. Finally, we propose a novel LSTM-based high-frequency preservation update operator containing an iterative frequency adapter, which provides adaptively refined high-frequency features at different iteration steps by fine-tuning the initial high-frequency features. By processing high- and low-frequency components separately, our framework can simultaneously refine high-frequency information at edges and low-frequency information in smooth regions, which is especially suitable for challenging scenes with fine details and distant textures. Extensive experiments demonstrate that Wavelet-Stereo outperforms state-of-the-art methods and ranks 1st on both the KITTI 2015 and KITTI 2012 leaderboards on almost all metrics. We will provide code and pre-trained models to encourage further exploration, application, and development of our framework (https://github.com/SIA-IDE/Wavelet-Stereo).
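To make the decomposition step concrete, here is a minimal, dependency-free sketch (not the repository's actual code; `haar_dwt2` and the toy image are illustrative) of the single-level 2D Haar DWT that underlies the idea: the LL sub-band carries the low-frequency global structure, while the LH/HL/HH sub-bands carry the high-frequency detail that the dedicated branch would refine.

```python
# Illustrative single-level 2D Haar DWT: split an image into one
# low-frequency band (LL) and three high-frequency bands (LH, HL, HH).
# Names and the toy image are assumptions for this sketch, not paper code.

def haar_dwt2(img):
    """img: 2D list of numbers with even height and width.
    Returns (LL, LH, HL, HH), each half the size of img."""
    h, w = len(img), len(img[0])
    # Row pass: low-pass = pairwise average, high-pass = pairwise difference.
    lo = [[(row[2*j] + row[2*j+1]) / 2 for j in range(w // 2)] for row in img]
    hi = [[(row[2*j] - row[2*j+1]) / 2 for j in range(w // 2)] for row in img]

    # Column pass: apply the same pairwise operator down each column.
    def cols(mat, op):
        return [[op(mat[2*i][j], mat[2*i+1][j]) for j in range(len(mat[0]))]
                for i in range(h // 2)]

    avg = lambda a, b: (a + b) / 2
    dif = lambda a, b: (a - b) / 2
    LL, LH = cols(lo, avg), cols(lo, dif)   # low-pass rows -> LL, LH
    HL, HH = cols(hi, avg), cols(hi, dif)   # high-pass rows -> HL, HH
    return LL, LH, HL, HH

# A vertical step edge: the smooth structure survives in LL, while the
# detail bands of this piecewise-constant image are exactly zero.
img = [
    [0, 0, 8, 8],
    [0, 0, 8, 8],
    [0, 0, 8, 8],
    [0, 0, 8, 8],
]
LL, LH, HL, HH = haar_dwt2(img)
print(LL)  # [[0.0, 8.0], [0.0, 8.0]]
print(HH)  # [[0.0, 0.0], [0.0, 0.0]]
```

In the full framework the role of LL corresponds to the low-frequency branch input and LH/HL/HH to the high-frequency branch input; libraries such as PyWavelets (`pywt.dwt2`) provide the same decomposition for real images.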