SMFormer: Empowering Self-supervised Stereo Matching via Foundation Models and Data Augmentation

📅 2026-04-11
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the performance gap between self-supervised and supervised stereo matching methods, which stems from existing self-supervised approaches' reliance on photometric consistency, an assumption often violated in real-world scenes due to illumination variations. To overcome this limitation, we propose the first integration of Vision Foundation Models (VFMs) into self-supervised stereo matching, combined with a Feature Pyramid Network (FPN). Our approach leverages illumination-invariant data augmentation and enforces feature-disparity consistency regularization to construct robust self-supervised signals. This effectively mitigates photometric inconsistency and achieves state-of-the-art performance among self-supervised methods across multiple standard benchmarks. Notably, our model even surpasses certain supervised counterparts, such as CFNet, on challenging datasets like Booster, significantly narrowing the performance gap with fully supervised approaches.
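To see why photometric consistency breaks down, consider a minimal sketch of the standard warping-based loss. The toy images, nearest-neighbor warp, and brightness offset below are illustrative assumptions, not the paper's implementation; real pipelines use bilinear sampling and learned networks.

```python
import numpy as np

def warp_right_to_left(right, disparity):
    """Reconstruct the left view by sampling the right view at x - d.
    (Nearest-neighbor sampling for simplicity; real methods use bilinear.)"""
    h, w = right.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    src = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
    return np.take_along_axis(right, src, axis=1)

def photometric_loss(left, right, disparity):
    """Mean absolute photometric error between the left view and its
    reconstruction from the right view -- the usual self-supervised signal."""
    return np.abs(left - warp_right_to_left(right, disparity)).mean()

# Toy stereo pair: the right view is the left view shifted by 3 pixels.
left = np.tile(np.arange(16, dtype=float), (8, 1))
right = np.roll(left, -3, axis=1)
d_true = np.full_like(left, 3.0)

loss_clean = photometric_loss(left, right, d_true)
# An illumination change (brightness offset on one view) inflates the loss
# even though the disparity is correct -- the failure mode described above.
loss_bright = photometric_loss(left, right + 20.0, d_true)
```

With the correct disparity, `loss_clean` is near zero (small residue only at the occluded border), while `loss_bright` is large: the supervisory signal punishes a correct prediction whenever the two views differ in illumination.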

๐Ÿ“ Abstract
Recent self-supervised stereo matching methods have made significant progress. They typically rely on the photometric consistency assumption, which presumes that corresponding points across views share the same appearance. However, this assumption can be violated by real-world disturbances, resulting in invalid supervisory signals and a significant accuracy gap relative to supervised methods. To address this issue, we propose SMFormer, a framework that integrates more reliable self-supervision guided by a Vision Foundation Model (VFM) and data augmentation. We first combine the VFM with a Feature Pyramid Network (FPN), providing a discriminative feature representation that is robust to disturbances in various scenarios. We then devise an effective data augmentation mechanism that ensures robustness to various transformations: it explicitly enforces consistency between learned features and those affected by illumination variations, and it regularizes the consistency between disparity predictions from strongly augmented samples and those from standard samples. Experiments on multiple mainstream benchmarks demonstrate that SMFormer achieves state-of-the-art (SOTA) performance among self-supervised methods and even performs on par with supervised ones. Remarkably, on the challenging Booster benchmark, SMFormer outperforms some SOTA supervised methods, such as CFNet.
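The consistency regularization described in the abstract can be sketched as follows. The random arrays standing in for network outputs, the L1 form of both terms, and the weights `lambda_feat`/`lambda_disp` are all assumptions for illustration; the paper's exact loss formulation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def l1(a, b):
    """Mean absolute difference, used for both consistency terms."""
    return np.abs(a - b).mean()

# Stand-ins for network outputs (C x H x W features, H x W disparity).
feat_std = rng.normal(size=(32, 8, 16))                          # standard sample
feat_aug = feat_std + 0.05 * rng.normal(size=feat_std.shape)     # illumination-augmented
disp_std = rng.uniform(0.0, 64.0, size=(8, 16))                  # disparity, standard
disp_aug = disp_std + 0.5 * rng.normal(size=disp_std.shape)      # disparity, augmented

# Feature consistency: features of the augmented image should match those of
# the standard image. Disparity consistency: predictions on strongly augmented
# samples should match (stop-gradient) predictions on standard samples.
lambda_feat, lambda_disp = 1.0, 1.0   # assumed weights, not from the paper
loss = lambda_feat * l1(feat_aug, feat_std) + lambda_disp * l1(disp_aug, disp_std)
```

The key design point is that neither term depends on photometric agreement between the two stereo views, so the supervision remains valid even when illumination differs across cameras.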
Problem

Research questions and friction points this paper is trying to address.

self-supervised stereo matching
photometric consistency
real-world disturbances
accuracy gap
supervisory signals
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision Foundation Model
Self-supervised Stereo Matching
Feature Pyramid Network
Data Augmentation
Disparity Consistency