🤖 AI Summary
To address source-free domain adaptation (SFDA), where source data are inaccessible, this paper proposes Discriminative Vicinity Diffusion (DVD). DVD explicitly transfers decision boundaries by generating pseudo-source samples in latent space to align target-domain features, leveraging a label-consistent Gaussian prior over k-nearest neighborhoods and operating solely with a pretrained classifier and a frozen latent diffusion module. DVD is the first method to integrate latent diffusion models into discriminative SFDA, jointly enforcing k-NN neighborhood constraints, an InfoNCE contrastive loss, and implicit reconstruction of source features, thereby balancing privacy preservation with effective knowledge transfer. On standard SFDA benchmarks, DVD significantly outperforms state-of-the-art methods while also improving source-domain classification accuracy, and it demonstrates strong generalization on both supervised classification and domain generalization tasks.
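The "label-consistent Gaussian prior over k-nearest neighborhoods" can be illustrated with a minimal numpy sketch: fit a Gaussian over a feature's k nearest neighbors in latent space and draw vicinal samples from it. The function names and the diagonal-Gaussian choice here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def vicinity_gaussian(features, idx, k=5):
    """Fit a diagonal Gaussian over the k nearest neighbors of features[idx].

    features: (N, D) array of latent features; idx: anchor index.
    Returns the mean and per-dimension std of the neighborhood.
    """
    anchor = features[idx]
    dists = np.linalg.norm(features - anchor, axis=1)
    nn = np.argsort(dists)[1:k + 1]          # skip the anchor itself
    neigh = features[nn]
    mu = neigh.mean(axis=0)
    sigma = neigh.std(axis=0) + 1e-6         # floor to keep sigma positive
    return mu, sigma

def sample_vicinity(mu, sigma, n=4, seed=None):
    """Draw n vicinal samples from the fitted Gaussian prior."""
    rng = np.random.default_rng(seed)
    return rng.normal(mu, sigma, size=(n, mu.shape[0]))
```

In the paper's pipeline such vicinal samples would be noised and passed to the diffusion network, which is trained to drift them back to label-consistent representations; that step is omitted here.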
📝 Abstract
Recent work on latent diffusion models (LDMs) has focused almost exclusively on generative tasks, leaving their potential for discriminative transfer largely unexplored. We introduce Discriminative Vicinity Diffusion (DVD), a novel LDM-based framework for a more practical variant of source-free domain adaptation (SFDA): the source provider may share not only a pre-trained classifier but also an auxiliary latent diffusion module, trained once on the source data and never exposing raw source samples. DVD encodes each source feature's label information into its latent vicinity by fitting a Gaussian prior over its k-nearest neighbors and training the diffusion network to drift noisy samples back to label-consistent representations. During adaptation, we sample from each target feature's latent vicinity, apply the frozen diffusion module to generate source-like cues, and use a simple InfoNCE loss to align the target encoder to these cues, explicitly transferring decision boundaries without source access. Across standard SFDA benchmarks, DVD outperforms state-of-the-art methods. We further show that the same latent diffusion module enhances the source classifier's accuracy on in-domain data and boosts performance in supervised classification and domain generalization experiments. DVD thus reinterprets LDMs as practical, privacy-preserving bridges for explicit knowledge transfer, addressing a core challenge in source-free domain adaptation that prior methods have yet to solve.
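The adaptation step described above aligns target features to diffusion-generated source-like cues with "a simple InfoNCE loss". A minimal numpy sketch of that loss, assuming each target feature's positive is its own cue and all other cues in the batch serve as negatives (a standard InfoNCE setup; the batching and temperature are illustrative, not taken from the paper):

```python
import numpy as np

def info_nce(target_feats, cues, temperature=0.1):
    """InfoNCE loss aligning target features to source-like cues.

    target_feats, cues: (B, D) arrays; row i of cues is the positive
    for row i of target_feats, and the other rows act as negatives.
    """
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    c = cues / np.linalg.norm(cues, axis=1, keepdims=True)
    logits = (t @ c.T) / temperature                      # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                    # cross-entropy on the diagonal
```

Minimizing this loss pulls each target feature toward its frozen-diffusion cue while pushing it away from cues for other samples, which is how the decision boundary is transferred without source access.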