DebiasDiff: Debiasing Text-to-image Diffusion Models with Self-discovering Latent Attribute Directions

📅 2024-12-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models often inherit and amplify societal biases—such as gender and racial biases—present in training data. Existing debiasing approaches typically require costly retraining, manually annotated reference sets, or external classifiers, limiting scalability and generalizability. This paper proposes a plug-and-play, training-free debiasing framework that operates without labeled data or auxiliary models. Our method introduces two key innovations: (1) a self-discovered implicit attribute direction mechanism enabling joint disentanglement of gender, race, and their intersections; and (2) a noise-synthesis-optimized architecture comprising attribute adapters and distribution indicators, facilitating attribute disentanglement and controllable guidance within the diffusion latent space. Evaluated across multiple bias benchmark datasets, our approach significantly outperforms state-of-the-art methods while preserving generation quality, computational efficiency, and model agnosticism.

📝 Abstract
While Diffusion Models (DMs) exhibit remarkable performance across various image generation tasks, they nonetheless reflect the inherent bias present in the training set. As DMs are now widely used in real-world applications, these biases could perpetuate a distorted worldview and hinder opportunities for minority groups. Existing methods for debiasing DMs usually require model re-training with a human-crafted reference dataset or additional classifiers, which suffer from two major limitations: (1) collecting reference datasets incurs expensive annotation costs; (2) the debiasing performance is heavily constrained by the quality of the reference dataset or the additional classifier. To address the above limitations, we propose DebiasDiff, a plug-and-play method that learns attribute latent directions in a self-discovering manner, thus eliminating the reliance on such reference datasets. Specifically, DebiasDiff consists of two parts: a set of attribute adapters and a distribution indicator. Each adapter in the set aims to learn an attribute latent direction, and is optimized via noise composition through a self-discovering process. Then, the distribution indicator is multiplied by the set of adapters to guide the generation process towards the prescribed distribution. Our method enables debiasing multiple attributes in DMs simultaneously, while remaining lightweight and easily integrable with other DMs, eliminating the need for re-training. Extensive experiments on debiasing gender, racial, and their intersectional biases show that our method outperforms previous SOTA by a large margin.
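The abstract describes a two-part mechanism: attribute adapters that each encode a latent attribute direction, and a distribution indicator that weights those adapters so generations follow a prescribed attribute distribution. The paper does not give implementation details here, so the following is only a minimal NumPy sketch of that weighting idea; all names (`adapters`, `sample_indicator`, `compose_guidance`, the two attribute classes, the guidance `scale`) are hypothetical, and the adapter directions are random stand-ins for learned ones.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 8

# Hypothetical learned attribute directions, one per attribute class.
# In the paper these are adapters trained via noise composition; here
# they are random vectors purely for illustration.
adapters = {
    "class_a": rng.normal(size=latent_dim),
    "class_b": rng.normal(size=latent_dim),
}
# Prescribed target distribution over the attribute (e.g. a 50/50 split).
target_dist = {"class_a": 0.5, "class_b": 0.5}

def sample_indicator(dist, rng):
    """Draw a one-hot indicator over attribute classes from `dist`,
    so that, over many generations, classes appear at the target rates."""
    classes = list(dist)
    probs = [dist[c] for c in classes]
    choice = rng.choice(len(classes), p=probs)
    return {c: float(i == choice) for i, c in enumerate(classes)}

def compose_guidance(adapters, indicator, scale=1.0):
    """Indicator-weighted combination of adapter directions."""
    out = np.zeros(latent_dim)
    for name, direction in adapters.items():
        out += indicator[name] * direction
    return scale * out

# Stand-in for the diffusion model's noise prediction at one step;
# the composed adapter direction nudges it toward the sampled class.
indicator = sample_indicator(target_dist, rng)
base_noise = rng.normal(size=latent_dim)
guided_noise = base_noise + compose_guidance(adapters, indicator, scale=0.3)
```

Because the indicator is sampled from the prescribed distribution rather than predicted by a classifier, this sketch mirrors the paper's claim of needing no reference dataset or auxiliary model at inference time.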
Problem

Research questions and friction points this paper is trying to address.

Bias Amplification
Diffusion Models
Fairness in Image Generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

DebiasDiff
Bias Correction
Attribute Modulation