🤖 AI Summary
Cross-image feature matching is a fundamental problem underlying geometric alignment, semantic correspondence, and temporal tracking in video; however, existing methods struggle to unify these diverse matching requirements within a single framework. This paper introduces a unified matching framework grounded in a single, homogeneous feature space. The approach integrates diffusion-model priors, an attention-driven dynamic fusion mechanism that jointly aggregates multi-level features, and a DINOv2-guided, object-level semantic matching paradigm. By jointly optimizing feature representation, cross-scale alignment, and matching inference, the method achieves state-of-the-art performance across geometric, semantic (SPair-71k, PF-Pascal), and temporal (DAVIS) matching benchmarks. It markedly improves matching robustness and generalization, validating both the effectiveness and the scalability of a unified matching paradigm.
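Once a shared feature space is in place, the matching-inference step reduces to finding, for each feature in one image, its most similar counterpart in the other. The sketch below illustrates this generic step with cosine-similarity nearest-neighbour search; it is a simplified stand-in, not the paper's exact inference procedure, and the function name and shapes are illustrative assumptions.

```python
import numpy as np

def match_features(feats_a, feats_b):
    """Nearest-neighbour correspondence via cosine similarity.

    A generic dense-matching step (illustrative, not MATCHA's exact
    inference). feats_a: (Na, D) features from image A; feats_b:
    (Nb, D) features from image B. Returns, for each row of feats_a,
    the index of its best match in feats_b.
    """
    # L2-normalise so the dot product equals cosine similarity.
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T  # (Na, Nb) similarity matrix
    return sim.argmax(axis=1)

# Toy example: three near-axis-aligned features match the axes.
a = np.array([[0.9, 0.1, 0.0], [0.0, 1.0, 0.2], [0.1, 0.0, 0.8]])
print(match_features(a, np.eye(3)))  # [0 1 2]
```

In practice this search runs over dense per-pixel feature maps, often with mutual-nearest-neighbour filtering to discard ambiguous matches.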
📝 Abstract
Establishing correspondences across images is a fundamental challenge in computer vision, underpinning tasks like Structure-from-Motion, image editing, and point tracking. Traditional methods are often specialized for a specific correspondence type (geometric, semantic, or temporal), whereas humans naturally identify alignments across these domains. Inspired by this flexibility, we propose MATCHA, a unified feature model designed to "rule them all" by establishing robust correspondences across diverse matching tasks. Building on the insight that diffusion model features can encode multiple correspondence types, MATCHA augments this capacity by dynamically fusing high-level semantic and low-level geometric features through an attention-based module, creating expressive, versatile, and robust features. Additionally, MATCHA integrates object-level features from DINOv2 to further boost generalization, yielding a single feature capable of matching anything. Extensive experiments validate that MATCHA consistently surpasses state-of-the-art methods across geometric, semantic, and temporal matching tasks, setting a new foundation for a unified approach to the fundamental correspondence problem in computer vision. To the best of our knowledge, MATCHA is the first approach able to effectively tackle diverse matching tasks with a single unified feature.
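The abstract's central mechanism, attention-based fusion of high-level semantic and low-level geometric features, can be sketched as a per-pixel weighted sum whose weights come from a softmax over learned level scores. The code below is a minimal numpy illustration under assumed shapes; the scoring vector `w_score` stands in for a learned attention head and is not the paper's actual module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_features(feat_semantic, feat_geometric, w_score):
    """Attention-weighted fusion of two feature levels (illustrative).

    feat_semantic, feat_geometric: (N, D) per-pixel features from a
    high-level (semantic) and a low-level (geometric) source.
    w_score: (D,) scoring vector, a stand-in for a learned attention
    head. Returns (N, D) fused features.
    """
    # Stack the two levels: (N, 2, D)
    stack = np.stack([feat_semantic, feat_geometric], axis=1)
    # One scalar score per level per pixel: (N, 2)
    scores = stack @ w_score
    # Softmax over levels gives per-pixel fusion weights.
    weights = softmax(scores, axis=1)[..., None]  # (N, 2, 1)
    return (weights * stack).sum(axis=1)

rng = np.random.default_rng(0)
sem = rng.normal(size=(4, 8))
geo = rng.normal(size=(4, 8))
fused = fuse_features(sem, geo, rng.normal(size=8))
print(fused.shape)  # (4, 8)
```

The key design point is that the fusion weights are input-dependent: each pixel can lean on semantic features in textureless regions and geometric features near fine structure, rather than using one fixed blend everywhere.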