Aesthetic Alignment Risks Assimilation: How Image Generation and Reward Models Reinforce Beauty Bias and Ideological "Censorship"

📅 2025-12-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
Image generation models exhibit excessive alignment with dominant aesthetic norms, suppressing user-driven anti-aesthetic, critical, or low-fidelity visual expressions—thereby compromising instruction fidelity and aesthetic pluralism. Method: We conduct the first systematic investigation revealing that multimodal reward models (e.g., BLIP-2, CLIP-based) impose significant implicit penalties (average −37.2%) on instruction-compliant anti-aesthetic images, exposing ideological gatekeeping and developer-centric bias. Leveraging a broad-spectrum aesthetic dataset, we evaluate SDXL and other generative models within an image editing assessment framework, empirically demonstrating severe instruction deviation under abstract art editing and negative prompting. Contribution/Results: We identify and quantify the “aesthetic hegemony” mechanism embedded in reward modeling—characterized by normative aesthetic enforcement—and provide both theoretical grounding and empirical evidence for decoupling value alignment from user autonomy in generative AI.

📝 Abstract
Over-aligning image generation models to a generalized aesthetic preference conflicts with user intent, particularly when "anti-aesthetic" outputs are requested for artistic or critical purposes. This adherence prioritizes developer-centered values, compromising user autonomy and aesthetic pluralism. We test this bias by constructing a wide-spectrum aesthetics dataset and evaluating state-of-the-art generation and reward models. We find that aesthetic-aligned generation models frequently default to conventionally beautiful outputs, failing to respect instructions for low-quality or negative imagery. Crucially, reward models penalize anti-aesthetic images even when they perfectly match the explicit user prompt. We confirm this systemic bias through image-to-image editing and evaluation against real abstract artworks.
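The penalty mechanism the abstract describes can be illustrated with a toy sketch. This is not the paper's actual reward model: the `beauty_anchor` embedding, the `alpha` blending weight, and the random stand-in embeddings are all hypothetical, chosen only to show how an implicit aesthetic prior inside a CLIP-style reward can outweigh prompt compliance.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def reward(img_emb, prompt_emb, beauty_emb, alpha=0.5):
    """Toy CLIP-style reward: prompt alignment blended with an implicit
    'beauty' prior. alpha and beauty_emb are illustrative assumptions,
    not values taken from the paper."""
    prompt_alignment = cosine(img_emb, prompt_emb)
    aesthetic_prior = cosine(img_emb, beauty_emb)
    return alpha * prompt_alignment + (1 - alpha) * aesthetic_prior

rng = np.random.default_rng(0)
dim = 512
beauty_anchor = rng.normal(size=dim)  # stands in for "conventionally beautiful"
anti_prompt = rng.normal(size=dim)    # e.g. "a deliberately ugly, low-fidelity image"

# An image that perfectly follows the anti-aesthetic instruction:
compliant_img = anti_prompt.copy()
# An image that partly ignores the instruction and drifts toward beauty:
beautified_img = 0.3 * anti_prompt + 0.7 * beauty_anchor

r_compliant = reward(compliant_img, anti_prompt, beauty_anchor)
r_beautified = reward(beautified_img, anti_prompt, beauty_anchor)
print(f"compliant: {r_compliant:.3f}  beautified: {r_beautified:.3f}")
```

Because the aesthetic prior enters the score regardless of what the user asked for, the instruction-violating "beautified" image outranks the perfectly compliant one, which is the gatekeeping effect the summary quantifies as an average implicit penalty of −37.2%.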
Problem

Research questions and friction points this paper is trying to address.

Aesthetic alignment in image generation models conflicts with user intent for anti-aesthetic outputs
Reward models penalize anti-aesthetic images despite matching user prompts
Systemic bias reinforces conventional beauty standards and compromises aesthetic pluralism
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructing wide-spectrum aesthetics dataset for bias testing
Evaluating generation and reward models on anti-aesthetic outputs
Confirming bias via image editing and abstract artwork comparison
Wenqi Marshall Guo
Department of CMPS, University of British Columbia, Canada
Qingyun Qian
Department of CMPS, University of British Columbia, Canada
Khalad Hasan
University of British Columbia
Human-Computer Interaction
Shan Du
The University of British Columbia
Image processing, video processing, video surveillance, computer vision, machine learning