🤖 AI Summary
Self-supervised dense correspondence learning typically relies on costly video data or suffers from insufficient pose variation when built on simple image cropping. Method: We propose a diffusion-based multi-anchor masked autoencoder framework that uses conditional image-to-image diffusion models to synthesize diverse, pose-varied views from a single input image, thereby constructing more challenging self-supervised pretraining tasks. We further introduce a quantitative evaluation of the local and global consistency of generated views and a multi-anchor training strategy that modulates pretext-task difficulty and strengthens cross-view representation alignment. Contribution/Results: Experiments demonstrate that our method significantly outperforms self-supervised approaches trained only on static images across multiple benchmarks. Its dense correspondence and semantic segmentation transfer performance approaches that of video-supervised models, providing the first empirical validation of the effectiveness and scalability of generative views for dense correspondence learning.
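The summary describes the training objective only at a high level; the following PyTorch sketch illustrates one plausible multi-anchor masked-autoencoder step, in which masked target patches are decoded while cross-attending to tokens from several generated anchor views. All module names, dimensions, the masking scheme, and the embedding-space reconstruction loss are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiAnchorMAESketch(nn.Module):
    """Toy multi-anchor MAE: reconstruct masked target patches from anchors."""

    def __init__(self, dim=256, num_patches=196, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.anchor_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        # Decoder queries are target tokens; memory is the anchor tokens.
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True),
            num_layers=2,
        )

    def forward(self, target_tokens, anchor_tokens_list):
        # target_tokens: (B, N, D) patch embeddings of the target view.
        # anchor_tokens_list: one (B, N, D) tensor per generated anchor view.
        B, N, D = target_tokens.shape
        num_masked = int(N * self.mask_ratio)
        perm = torch.rand(B, N, device=target_tokens.device).argsort(dim=1)
        masked_idx = perm[:, :num_masked].unsqueeze(-1).expand(-1, -1, D)

        # Concatenating more anchors gives the decoder more evidence, so the
        # anchor count acts as a knob on pretext-task difficulty.
        memory = self.anchor_encoder(torch.cat(anchor_tokens_list, dim=1))

        # Swap masked target tokens for a learnable mask token, then add
        # positional embeddings so the decoder knows where each patch sits.
        mask_tok = self.mask_token.expand(B, num_masked, -1)
        tokens = target_tokens.scatter(1, masked_idx, mask_tok) + self.pos

        recon = self.decoder(tokens, memory)
        # Loss only on masked positions, against the clean target embeddings
        # (a full implementation would typically regress pixel patches).
        return F.mse_loss(recon.gather(1, masked_idx),
                          target_tokens.gather(1, masked_idx))
```

A call like `loss = model(target_tokens, [anchor_a, anchor_b, anchor_c])` trains against three diffusion-generated anchors; dropping to a single anchor recovers the standard single-anchor setting.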
📝 Abstract
Learning dense correspondences, critical for applications such as video label propagation, is hindered by tedious and unscalable manual annotation. Self-supervised methods address this with a cross-view pretext task, often modeled as a masked autoencoder in which a masked target view is reconstructed from an anchor view. However, acquiring effective training data remains a challenge: collecting diverse video datasets is difficult and costly, while simple image crops lack the necessary pose variation. This paper introduces CDG-MAE, a novel MAE-based self-supervised method that uses diverse synthetic views generated from static images via an image-conditioned diffusion model. These generated views exhibit substantial changes in pose and perspective, providing a rich training signal that overcomes the limitations of video- and crop-based anchors. We present a quantitative method to evaluate the local and global consistency of generated images and discuss their suitability for cross-view self-supervised pretraining. Furthermore, we extend the standard single-anchor MAE setting to a multi-anchor strategy that effectively modulates the difficulty of the pretext task. CDG-MAE significantly outperforms state-of-the-art MAE methods that rely only on images and substantially narrows the performance gap to video-based approaches.
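The abstract does not pin down the generator here; as an illustrative stand-in, an off-the-shelf image-to-image diffusion pipeline from Hugging Face `diffusers` can synthesize pose-varied anchors from one static image. The model ID, prompt, and strength values below are assumptions rather than the paper's configuration; the re-noising `strength` roughly controls how far generated views depart from the source.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Illustrative model choice; the paper's image-conditioned generator may differ.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input.jpg").convert("RGB").resize((512, 512))

# Higher strength re-noises more of the input, yielding larger pose and
# viewpoint changes at the cost of fidelity to the source image.
anchors = [
    pipe(
        prompt="the same object seen from a different viewpoint",
        image=image,
        strength=s,
        guidance_scale=7.5,
    ).images[0]
    for s in (0.4, 0.6, 0.8)
]
```

A screening step along the lines of the paper's local-global consistency evaluation would then filter out generated anchors that drift too far from the source before pretraining.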