🤖 AI Summary
Monocular depth estimation (MDE) generalizes poorly to complex scenes, fine-grained structures, and occluded objects. To address this, we propose a geometry–semantics co-modeling framework. Our method freezes pre-trained depth and semantic segmentation foundation models and introduces a learnable Bridging Gate to enable cross-modal feature alignment; it further incorporates Attention Temperature Scaling to make feature selection more robust. Only lightweight modules are fine-tuned, balancing accuracy, generalization, and training efficiency. Extensive experiments show that our approach significantly outperforms state-of-the-art methods on multiple out-of-distribution benchmarks, exhibiting strong robustness and generalization in disambiguating occluded objects, recovering boundary details, and adapting across diverse domains, all without additional scene-specific supervision or architectural modifications.
📝 Abstract
We present Bridging Geometric and Semantic (BriGeS), an effective method that fuses geometric and semantic information within foundation models to enhance Monocular Depth Estimation (MDE). Central to BriGeS is the Bridging Gate, which integrates the complementary strengths of depth and segmentation foundation models. This integration is further refined by our Attention Temperature Scaling technique, which adjusts the sharpness of the attention distributions to prevent over-concentration on specific features, ensuring balanced performance across diverse inputs. BriGeS builds on frozen pre-trained foundation models and trains only the Bridging Gate. This strategy significantly reduces resource demands and training time while preserving the models' ability to generalize. Extensive experiments across multiple challenging datasets demonstrate that BriGeS outperforms state-of-the-art methods in MDE for complex scenes, effectively handling intricate structures and overlapping objects.
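The two mechanisms named above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the gate parameterization (`gate_params`), feature shapes, and the choice of a sigmoid blend are assumptions made purely for illustration. The key ideas shown are (1) dividing attention logits by a temperature greater than 1 to flatten the attention distribution and discourage over-concentration on a few keys, and (2) a learnable gate that blends a depth-feature stream with a semantic-feature stream.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def temperature_scaled_attention(q, k, v, tau=1.5):
    # Scaled dot-product attention with an extra temperature tau.
    # tau > 1 flattens the attention weights, spreading focus across
    # more keys instead of concentrating on a few features.
    d = q.shape[-1]
    scores = q @ k.T / (np.sqrt(d) * tau)
    return softmax(scores, axis=-1) @ v

def bridging_gate(depth_feat, sem_feat, gate_params):
    # Hypothetical gate: a learned sigmoid over the concatenated
    # features decides, per dimension, how much of each stream to keep.
    z = np.concatenate([depth_feat, sem_feat], axis=-1) @ gate_params
    g = 1.0 / (1.0 + np.exp(-z))
    return g * depth_feat + (1.0 - g) * sem_feat
```

Only `gate_params` would be trained; the features themselves come from the frozen depth and segmentation backbones, which is what keeps the fine-tuning lightweight.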