AI Summary
Existing datasets for evaluating bias in LMMs rely predominantly on synthetic images and cover only a few demographic categories, so they fail to reflect stereotypical biases in authentic visual contexts. To address this, we propose SB-Bench, the first benchmark for stereotype bias evaluation grounded in real-world visual scenes. It spans nine sociodemographic dimensions and employs non-synthetic images, diverse visual variants, and visually anchored multiple-choice questions. Our method decouples visual from textual bias assessment, introduces difficulty-stratified visual reasoning tasks, and incorporates adversarial image-variant generation. We systematically evaluate 12 state-of-the-art LMMs, uncovering pronounced bias patterns along gender, occupation, and race dimensions. Both the code and the dataset are fully open-sourced, establishing a reproducible and extensible paradigm for multimodal fairness research.
Abstract
Stereotype biases in Large Multimodal Models (LMMs) perpetuate harmful societal prejudices, undermining the fairness and equity of AI applications. As LMMs grow increasingly influential, addressing and mitigating inherent biases related to stereotypes, harmful generations, and ambiguous assumptions in real-world scenarios has become essential. However, existing datasets for evaluating stereotype biases in LMMs often lack diversity and rely on synthetic images, leaving a gap in bias evaluation for real-world visual contexts. To address this, we introduce the Stereotype Bias Benchmark (SB-Bench), the most comprehensive framework to date for assessing stereotype biases across nine diverse categories with non-synthetic images. SB-Bench evaluates LMMs through carefully curated, visually grounded scenarios that challenge them to reason accurately about visual stereotypes, offering real-world visual samples, image variations, and a multiple-choice question format. By posing visually grounded queries that isolate visual biases from textual ones, SB-Bench enables a precise and nuanced assessment of a model's reasoning capabilities across varying levels of difficulty. Through rigorous testing of state-of-the-art open-source and closed-source LMMs, SB-Bench provides a systematic approach to assessing stereotype biases along key social dimensions. This benchmark represents a significant step toward fostering fairness in AI systems and reducing harmful biases, laying the groundwork for more equitable and socially responsible LMMs. Our code and dataset are publicly available.
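To make the multiple-choice evaluation concrete, here is a minimal sketch of how a bias score can be computed from MCQ responses on ambiguous visual scenes. The function name, the option labels, and the scoring rule (counting a response as biased when the model picks the stereotyped option rather than abstaining with an "unknown" option) are illustrative assumptions, not SB-Bench's actual implementation or metric.

```python
def bias_score(answers):
    """Fraction of ambiguous MCQ items where the model chose the
    stereotyped option instead of the safe 'unknown' option.

    answers: list of (chosen, stereotyped_option, unknown_option) labels.
    Hypothetical scoring rule for illustration only.
    """
    if not answers:
        return 0.0
    biased = sum(
        1 for chosen, stereotyped, _unknown in answers
        if chosen == stereotyped
    )
    return biased / len(answers)

# Three ambiguous items: (model's choice, stereotyped option, 'unknown' option)
responses = [
    ("A", "A", "C"),  # picked the stereotyped answer -> counted as biased
    ("C", "A", "C"),  # abstained with 'unknown'      -> not biased
    ("B", "A", "C"),  # picked the anti-stereotype    -> not biased
]
print(bias_score(responses))  # -> 0.3333...
```

A real harness would additionally pair each question with its image variants and compare scores across the nine demographic dimensions, which is how per-category bias patterns (e.g. along gender or race) are surfaced.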