Using LLMs as prompt modifier to avoid biases in AI image generators

📅 2025-04-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses societal biases inherent in text-to-image generation. We propose a lightweight, model-agnostic intervention that requires no modification or fine-tuning of the underlying diffusion models: a plug-and-play prompt regulator powered by large language models (LLMs) that dynamically rewrites user prompts to improve fairness and diversity across demographic dimensions such as gender and race. Our method is the first to achieve cross-model compatibility, supporting SDXL, Stable Diffusion 3.5, and Flux, without model retraining. It introduces quantitative metrics, a neutrality score and a demographic alignment ratio, to guide prompt rewriting. Experiments demonstrate a significant reduction in systemic bias, even under underspecified prompts, while preserving semantic fidelity and increasing output diversity. All code, rewritten prompts, and generated image datasets are publicly released.
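The prompt-regulator idea can be sketched as a thin wrapper around any LLM call: the image generator is never touched, only the prompt that reaches it. The sketch below is illustrative, not the paper's implementation; `stub_llm` stands in for a real LLM, and its rewriting rule is an assumption made purely for demonstration.

```python
def regulate_prompt(prompt: str, rewrite_fn) -> str:
    """Plug-and-play prompt regulation: pass the user prompt through an
    LLM (rewrite_fn) before it reaches the image generator.

    rewrite_fn receives an instruction plus the user prompt and returns
    the rewritten prompt text.
    """
    instruction = (
        "Rewrite the following image-generation prompt so that demographic "
        "attributes (e.g. gender, race) are varied unless the user asked "
        "for a specific one. Preserve the original meaning:\n"
    )
    return rewrite_fn(instruction + prompt)


def stub_llm(text: str) -> str:
    """Stand-in for a real LLM call, for demonstration only: appends a
    diversity hint to underspecified prompts that mention a person."""
    prompt = text.rsplit("\n", 1)[-1]
    if "person" in prompt or "doctor" in prompt:
        return prompt + ", any gender, diverse ethnic backgrounds"
    return prompt


print(regulate_prompt("a photo of a doctor", stub_llm))
# -> a photo of a doctor, any gender, diverse ethnic backgrounds
```

Because the regulator only wraps the prompt, swapping `stub_llm` for an actual LLM client is all that is needed to use it with any text-to-image backend.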

📝 Abstract
This study examines how Large Language Models (LLMs) can reduce biases in text-to-image generation systems by modifying user prompts. We define bias as a model's unfair deviation from population statistics given neutral prompts. Our experiments with Stable Diffusion XL, 3.5 and Flux demonstrate that LLM-modified prompts significantly increase image diversity and reduce bias without the need to change the image generators themselves. While occasionally producing results that diverge from original user intent for elaborate prompts, this approach generally provides more varied interpretations of underspecified requests rather than superficial variations. The method works particularly well for less advanced image generators, though limitations persist for certain contexts like disability representation. All prompts and generated images are available at https://iisys-hof.github.io/llm-prompt-img-gen/
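The abstract's bias definition, unfair deviation from population statistics given neutral prompts, can be made concrete as a distance between the demographic distribution observed in generated images and a reference distribution. The paper's exact metrics may differ; the total variation distance below is one common way to quantify such a deviation, and the 50/50 reference split is an assumed example.

```python
from collections import Counter


def demographic_bias(labels, reference):
    """Total variation distance between observed demographic frequencies
    and a reference (population) distribution over the same groups.

    Returns 0.0 when the generated images match the reference exactly
    and approaches 1.0 as the output becomes maximally skewed.
    """
    n = len(labels)
    counts = Counter(labels)
    return 0.5 * sum(
        abs(counts.get(group, 0) / n - p) for group, p in reference.items()
    )


# Example: demographic labels for 100 images generated from a neutral
# prompt, scored against an assumed 50/50 reference split.
observed = ["male"] * 90 + ["female"] * 10
print(demographic_bias(observed, {"male": 0.5, "female": 0.5}))  # -> 0.4
```

A balanced set of generations would score 0.0, so the metric directly reflects how far a neutral prompt's outputs drift from the chosen population statistics.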
Problem

Research questions and friction points this paper is trying to address.

Reducing biases in AI image generators using LLMs
Increasing image diversity via LLM-modified prompts
Addressing unfair deviations in text-to-image systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs modify prompts to reduce AI bias
Enhances diversity without altering generators
Effective for underspecified user requests