Talent or Luck? Evaluating Attribution Bias in Large Language Models

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses a previously unexplored dimension of LLM fairness—implicit attributional bias across demographic groups, i.e., whether models systematically attribute outcomes to internal factors (e.g., ability, effort) versus external factors (e.g., luck, environment), reflecting deeper cognitive biases beyond surface-level stereotypes. Method: We introduce attribution theory from social psychology into LLM evaluation, designing structured, theory-grounded prompting templates for controlled causal reasoning. We then conduct cross-group comparative analysis of attribution distributions. Contribution/Results: Empirical evaluation across multiple state-of-the-art LLMs reveals significant inter-group attribution imbalance—for instance, heightened internal attributions for minority ethnic groups. These findings expose structural, cognition-level fairness violations not captured by conventional fairness metrics. Our work establishes the first theoretical framework for attributional fairness in LLMs and provides an interpretable, theory-driven paradigm for detecting and diagnosing attribution bias.
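The structured, theory-grounded prompting described above can be pictured as follows. This is a minimal illustrative sketch, not the authors' actual materials: the template wording, demographic identity list, and factor names are all assumptions made for the example.

```python
# Hypothetical sketch of a theory-grounded attribution prompt setup.
# The scenario, identities, and factor labels are illustrative assumptions,
# not the paper's actual templates or demographic groups.

# Attribution theory's two cause families: internal vs. external factors.
CAUSES = {
    "internal": ["ability", "effort"],
    "external": ["task difficulty", "luck"],
}

# One controlled scenario; only the demographic identity varies across prompts.
TEMPLATE = (
    "A {identity} student failed an exam. "
    "Which single factor best explains this outcome: "
    "ability, effort, task difficulty, or luck?"
)

def build_prompts(identities):
    """Instantiate one prompt per demographic identity, holding the
    scenario fixed so attribution differences can be compared."""
    return {ident: TEMPLATE.format(identity=ident) for ident in identities}

def attribution_type(answer):
    """Map a model's free-text answer onto internal vs. external."""
    answer = answer.lower()
    for kind, factors in CAUSES.items():
        if any(f in answer for f in factors):
            return kind
    return "unknown"

prompts = build_prompts(["first-generation", "international"])
print(prompts["international"])
print(attribution_type("Luck"))  # -> external
```

Keeping the scenario text identical across identities is what makes the comparison controlled: any shift in the internal/external answer distribution can then be tied to the demographic term alone.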

📝 Abstract
When a student fails an exam, do we tend to blame their effort or the test's difficulty? Attribution, defined as how reasons are assigned to event outcomes, shapes perceptions, reinforces stereotypes, and influences decisions. Attribution Theory in social psychology explains how humans assign responsibility for events using implicit cognition, attributing causes to internal (e.g., effort, ability) or external (e.g., task difficulty, luck) factors. LLMs' attribution of event outcomes based on demographics carries important fairness implications. Most works exploring social biases in LLMs focus on surface-level associations or isolated stereotypes. This work proposes a cognitively grounded bias evaluation framework to identify how models' reasoning disparities channel biases toward demographic groups.
Problem

Research questions and friction points this paper is trying to address.

Evaluating attribution bias in LLMs' reasoning
Assessing fairness implications of demographic-based attributions
Proposing cognitive framework to identify bias disparities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cognitively grounded bias evaluation framework
Analyzes LLMs' attribution disparities
Links reasoning biases to demographics
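The cross-group comparison behind these contributions can be sketched as a simple test for attribution imbalance. The counts below are made-up illustrative numbers, not the paper's data, and the Pearson chi-square test is a standard choice assumed here, not necessarily the statistic the authors used.

```python
# Minimal sketch of a cross-group attribution comparison, assuming
# internal/external attributions have already been tallied per group.
# All counts are fabricated for illustration only.

def internal_rate(counts):
    """Fraction of a group's outcomes attributed to internal factors."""
    internal, external = counts
    return internal / (internal + external)

def chi_square_2x2(a, b):
    """Pearson chi-square statistic for a 2x2 table of
    (internal, external) counts from two groups; larger values
    indicate a larger attribution gap between the groups."""
    table = [a, b]
    row_totals = [sum(row) for row in table]
    col_totals = [a[0] + b[0], a[1] + b[1]]
    total = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

group_a = (70, 30)  # e.g., majority group: 70 internal, 30 external
group_b = (85, 15)  # e.g., minority group: heightened internal attribution
print(internal_rate(group_a), internal_rate(group_b))  # 0.7 0.85
print(round(chi_square_2x2(group_a, group_b), 2))      # 6.45
```

A statistic this large (6.45 exceeds the 3.84 critical value at p = 0.05 for one degree of freedom) would flag the kind of significant inter-group attribution imbalance the summary describes, such as heightened internal attributions for minority groups.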
Chahat Raj
George Mason University
NLP, Fairness, Ethics, Society & Culture

Mahika Banerjee
Thomas Jefferson High School for Science and Technology

Aylin Caliskan
Assistant Professor, University of Washington
AI bias, AI ethics, machine learning, natural language processing, tech policy

Antonios Anastasopoulos
George Mason University; Archimedes, Athena Research Center, Greece

Ziwei Zhu
George Mason University