Learning Common and Salient Generative Factors Between Two Image Datasets

📅 2025-12-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the interpretability challenge in cross-dataset image generation by proposing unsupervised Contrastive Analysis (CA): a novel paradigm that automatically disentangles shared generative factors from dataset-specific factors between two image collections—without requiring attribute annotations. We formally define and solve the CA problem under weak supervision for the first time. Our general-purpose framework supports both GANs and diffusion models. Key technical innovations include multi-scale feature alignment, discriminative latent-space regularization, and a novel disentanglement loss, jointly optimizing factor separation quality and generation fidelity. Extensive experiments on face, animal, and medical imaging datasets demonstrate state-of-the-art disentanglement accuracy and high-fidelity synthesis, significantly outperforming existing conditional editing and unsupervised disentanglement approaches.

Technology Category

Application Category

📝 Abstract
Recent advancements in image synthesis have enabled high-quality image generation and manipulation. Most works focus on: 1) conditional manipulation, where an image is modified conditioned on a given attribute, or 2) disentangled representation learning, where each latent direction should represent a distinct semantic attribute. In this paper, we focus on a different and less studied research problem, called Contrastive Analysis (CA). Given two image datasets, we want to separate the common generative factors, shared across the two datasets, from the salient ones, specific to only one dataset. Compared to existing methods, which use attributes as supervised signals for editing (e.g., glasses, gender), the proposed method is weaker, since it only uses the dataset signal. We propose a novel framework for CA, that can be adapted to both GAN and Diffusion models, to learn both common and salient factors. By defining new and well-adapted learning strategies and losses, we ensure a relevant separation between common and salient factors, preserving a high-quality generation. We evaluate our approach on diverse datasets, covering human faces, animal images and medical scans. Our framework demonstrates superior separation ability and image quality synthesis compared to prior methods.
Problem

Research questions and friction points this paper is trying to address.

Separate common and salient generative factors between two image datasets
Use only dataset signals without supervised attribute labels
Adapt framework to both GAN and Diffusion models for high-quality generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning common and salient generative factors between datasets
Adapting framework to both GAN and Diffusion models
Using dataset signals without attribute supervision for separation
🔎 Similar Papers
No similar papers found.
Yunlong He
Yunlong He
LTCI, Télécom Paris, Institut Polytechnique de Paris, 19 Place Marguerite Perey, 91120 Palaiseau, France
G
Gwilherm Lesné
LTCI, Télécom Paris, Institut Polytechnique de Paris, 19 Place Marguerite Perey, 91120 Palaiseau, France
Z
Ziqian Liu
LTCI, Télécom Paris, Institut Polytechnique de Paris, 19 Place Marguerite Perey, 91120 Palaiseau, France
M
Michaël Soumm
LTCI, Télécom Paris, Institut Polytechnique de Paris, 19 Place Marguerite Perey, 91120 Palaiseau, France
Pietro Gori
Pietro Gori
Télécom Paris (IPParis)
Representation learningmachine learningmedical imagingcomputational anatomy