🤖 AI Summary
This work identifies systematic cultural biases in text-to-image diffusion models, particularly in how they represent culturally specific attributes of underrepresented countries, such as architecture, attire, and cuisine. To address this, we introduce CultDiff, the first cross-cultural benchmark encompassing ten diverse national cultural contexts. We also propose CultDiff-S, a culture-aware neural similarity metric trained on human evaluations, which achieves a Pearson correlation of 0.89 with human judgments, and we design a fine-grained, multidimensional similarity analysis framework together with a methodology for diagnosing visual representation bias. Empirical evaluation reveals pronounced regional disparities across mainstream models in cultural relevance, caption fidelity, and photorealism, underscoring critical gaps in cultural expressivity and equity in generative AI. This work advances culturally inclusive generative modeling and establishes a culture-aware evaluation paradigm, promoting data fairness and equitable representation in multimodal foundation models.
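For intuition on the reported agreement: a Pearson correlation of 0.89 means CultDiff-S scores track mean human ratings almost linearly across the evaluated image pairs. The sketch below shows how such agreement is typically computed; the paired scores are invented placeholders, not CultDiff data.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder scores for the same set of (real, generated) image pairs:
# one score per pair from the learned metric, one mean human rating per pair.
metric_scores = np.array([0.82, 0.45, 0.67, 0.91, 0.30, 0.58])
human_ratings = np.array([0.80, 0.50, 0.60, 0.95, 0.25, 0.55])

# Pearson r measures linear agreement between the two score lists.
r, p_value = pearsonr(metric_scores, human_ratings)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```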
📝 Abstract
Text-to-image diffusion models have recently enabled the creation of visually compelling, detailed images from textual prompts. However, their ability to accurately represent various cultural nuances remains an open question. In our work, we introduce the CultDiff benchmark to evaluate whether state-of-the-art diffusion models can generate culturally specific images spanning ten countries. Through a fine-grained analysis of different similarity aspects, we show that these models often fail to generate cultural artifacts in architecture, clothing, and food, especially for underrepresented regions, revealing significant disparities in cultural relevance, description fidelity, and realism compared to real-world reference images. Using the collected human evaluations, we develop a neural image-image similarity metric, CultDiff-S, that predicts human judgments of similarity between real and generated images depicting cultural artifacts. Our work highlights the need for more inclusive generative AI systems and equitable dataset representation across a wide range of cultures.
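The abstract does not specify how CultDiff-S is built, so the following is only an illustrative sketch of one common recipe for a learned image-image similarity metric: embed both images with a frozen encoder and train a small regression head on normalized human ratings. The head design, the 512-dimensional embeddings, and the random tensors standing in for encoder outputs are all assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

class SimilarityHead(nn.Module):
    """Maps a pair of image embeddings to a similarity score in [0, 1]."""
    def __init__(self, dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim * 2, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # match human ratings normalized to [0, 1]
        )

    def forward(self, emb_real: torch.Tensor, emb_gen: torch.Tensor) -> torch.Tensor:
        # Concatenate the two embeddings and regress a scalar similarity.
        return self.mlp(torch.cat([emb_real, emb_gen], dim=-1)).squeeze(-1)

head = SimilarityHead()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# In practice, emb_real / emb_gen would come from a frozen image encoder
# (e.g., CLIP); random tensors stand in for them here.
emb_real = torch.randn(8, 512)
emb_gen = torch.randn(8, 512)
human_ratings = torch.rand(8)  # placeholder annotations in [0, 1]

pred = head(emb_real, emb_gen)       # predicted similarity per pair
loss = loss_fn(pred, human_ratings)  # regress onto human judgments
loss.backward()
optimizer.step()
```

A head like this, once trained, would be scored exactly as in the earlier snippet: compute its predictions on held-out pairs and report the Pearson correlation against human ratings.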