🤖 AI Summary
Image generation models exhibit excessive alignment with dominant aesthetic norms, suppressing user-requested anti-aesthetic, critical, or low-fidelity visual expression and thereby compromising instruction fidelity and aesthetic pluralism. Method: We conduct the first systematic investigation showing that multimodal reward models (e.g., BLIP-2- and CLIP-based) impose substantial implicit penalties (average −37.2%) on instruction-compliant anti-aesthetic images, exposing ideological gatekeeping and developer-centric bias. Using a wide-spectrum aesthetics dataset, we evaluate SDXL and other generative models within an image-editing assessment framework, empirically demonstrating severe instruction deviation under abstract-art editing and negative prompting. Contribution/Results: We identify and quantify the "aesthetic hegemony" mechanism embedded in reward modeling, characterized by normative aesthetic enforcement, and provide both theoretical grounding and empirical evidence for decoupling value alignment from user autonomy in generative AI.
📝 Abstract
Over-aligning image generation models to a generalized aesthetic preference conflicts with user intent, particularly when "anti-aesthetic" outputs are requested for artistic or critical purposes. Such alignment prioritizes developer-centered values, compromising user autonomy and aesthetic pluralism. We test this bias by constructing a wide-spectrum aesthetics dataset and evaluating state-of-the-art generation and reward models. We find that aesthetics-aligned generation models frequently default to conventionally beautiful outputs, failing to respect instructions for low-quality or negative imagery. Crucially, reward models penalize anti-aesthetic images even when they perfectly match the explicit user prompt. We confirm this systemic bias through image-to-image editing and evaluation against real abstract artworks.
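The penalty mechanism described above can be sketched as a toy model. This is an illustrative sketch only, not the paper's actual reward models: the `reward` function, its weight `w_aesthetic`, and all score values are hypothetical assumptions. It shows how blending prompt-image alignment with an aesthetic prior can rank a conventionally beautiful but off-prompt image above a fully instruction-compliant anti-aesthetic one.

```python
# Toy illustration (all numbers hypothetical): a reward model that mixes
# prompt-image alignment with a learned aesthetic prior can penalize
# instruction-compliant anti-aesthetic images.

def reward(alignment: float, aesthetic: float, w_aesthetic: float = 0.5) -> float:
    """Blend instruction compliance with an aesthetic prior (both in [0, 1])."""
    return (1 - w_aesthetic) * alignment + w_aesthetic * aesthetic

# Prompt: "a deliberately ugly, low-fidelity smear of muddy colors"
compliant_ugly  = reward(alignment=0.95, aesthetic=0.10)  # follows the prompt -> 0.525
pretty_offtopic = reward(alignment=0.55, aesthetic=0.90)  # ignores the prompt -> 0.725

# The off-prompt but "beautiful" image wins, despite lower instruction fidelity.
print(compliant_ugly < pretty_offtopic)  # True
```

Under a pure-alignment reward (`w_aesthetic=0`) the ranking reverses, which is the decoupling of value alignment from user autonomy that the paper argues for.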