🤖 AI Summary
This work addresses the interpretability challenge in cross-dataset image generation by proposing unsupervised Contrastive Analysis (CA): a novel paradigm that automatically disentangles shared generative factors from dataset-specific factors between two image collections—without requiring attribute annotations. We formally define and solve the CA problem under weak supervision for the first time. Our general-purpose framework supports both GANs and diffusion models. Key technical innovations include multi-scale feature alignment, discriminative latent-space regularization, and a novel disentanglement loss, jointly optimizing factor separation quality and generation fidelity. Extensive experiments on face, animal, and medical imaging datasets demonstrate state-of-the-art disentanglement accuracy and high-fidelity synthesis, significantly outperforming existing conditional editing and unsupervised disentanglement approaches.
📝 Abstract
Recent advancements in image synthesis have enabled high-quality image generation and manipulation. Most works focus on: 1) conditional manipulation, where an image is modified conditioned on a given attribute, or 2) disentangled representation learning, where each latent direction should represent a distinct semantic attribute. In this paper, we focus on a different and less studied research problem, called Contrastive Analysis (CA). Given two image datasets, we want to separate the common generative factors, shared across the two datasets, from the salient ones, specific to only one dataset. Compared to existing methods, which use attributes as supervised signals for editing (e.g., glasses, gender), the proposed method is weaker, since it only uses the dataset signal. We propose a novel framework for CA, that can be adapted to both GAN and Diffusion models, to learn both common and salient factors. By defining new and well-adapted learning strategies and losses, we ensure a relevant separation between common and salient factors, preserving a high-quality generation. We evaluate our approach on diverse datasets, covering human faces, animal images and medical scans. Our framework demonstrates superior separation ability and image quality synthesis compared to prior methods.