📝 Abstract
We investigate the effect of automatically generated counter-stereotypes on gender bias held by users of various demographics on social media. Building on recent NLP advances and the social psychology literature, we evaluate two counter-stereotype strategies, counter-facts and broadening universals (i.e., stating that anyone can have a trait regardless of group membership), which previous studies have identified as the most potentially effective. We assess the real-world impact of these strategies on mitigating gender bias across user demographics (gender and age) through the Implicit Association Test and self-report measures of explicit bias and perceived utility. Our findings reveal that actual effectiveness does not align with perceived effectiveness and varies, sometimes divergently, across demographic groups. While overall bias reduction was limited, certain groups (e.g., older, male participants) showed measurable improvements in implicit bias in response to some interventions, whereas younger participants, especially women, showed increased bias in response to the same interventions. These results highlight the complex and identity-sensitive nature of stereotype mitigation and call for dynamic, context-aware evaluation and mitigation strategies.