🤖 AI Summary
This study identifies significant sociolinguistic biases in large language models (LLMs) with respect to factual accuracy, truthfulness, and refusal behavior, specifically disadvantaging users with lower English proficiency, lower educational attainment, and non-U.S. nationality. Method: We develop a controlled prompt–response evaluation framework built on two benchmark datasets targeting factuality and truthfulness, evaluating GPT-4, Claude, and Llama across diverse user profiles. Contribution/Results: Our empirical analysis shows that error rates for these three marginalized groups increase by 37–62%, refusal rates rise 2.1-fold, and overall response credibility degrades significantly. Moving beyond conventional fairness metrics, we propose the first fairness evaluation paradigm for LLMs explicitly centered on users’ sociolinguistic attributes. This work establishes both theoretical foundations and methodological tools for mitigating digital inequity in generative AI systems.
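The headline numbers above (error-rate increases and refusal-rate fold changes) reduce to comparing per-group rates against a baseline user profile. The snippet below is a minimal, hypothetical sketch of that bookkeeping, not the authors' actual pipeline; the profile labels, field names, and toy judgements are all assumptions made for illustration.

```python
# Hypothetical sketch: per-group error and refusal rates for
# persona-conditioned prompts, compared against a baseline profile.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Record:
    profile: str      # assumed group label, e.g. "us_baseline", "low_english_proficiency"
    is_error: bool    # response judged factually incorrect
    is_refusal: bool  # model declined to answer


def group_rates(records):
    """Aggregate error and refusal rates per user-profile group."""
    counts = defaultdict(lambda: {"n": 0, "errors": 0, "refusals": 0})
    for r in records:
        g = counts[r.profile]
        g["n"] += 1
        g["errors"] += r.is_error
        g["refusals"] += r.is_refusal
    return {
        profile: {
            "error_rate": g["errors"] / g["n"],
            "refusal_rate": g["refusals"] / g["n"],
        }
        for profile, g in counts.items()
    }


if __name__ == "__main__":
    # Toy judgements for illustration only; not real experimental results.
    data = [
        Record("us_baseline", False, False),
        Record("us_baseline", True, False),
        Record("low_english_proficiency", True, False),
        Record("low_english_proficiency", True, True),
    ]
    rates = group_rates(data)
    base = rates["us_baseline"]
    for profile, r in rates.items():
        rel_err = (r["error_rate"] - base["error_rate"]) / base["error_rate"]
        print(profile, r, f"relative error increase vs. baseline: {rel_err:+.0%}")
```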
📝 Abstract
While state-of-the-art large language models (LLMs) have shown impressive performance on many tasks, there has been extensive research on undesirable model behaviors such as hallucinations and bias. In this work, we investigate how the quality of LLM responses changes in terms of information accuracy, truthfulness, and refusals depending on three user traits: English proficiency, education level, and country of origin. We present extensive experiments on three state-of-the-art LLMs and two datasets targeting truthfulness and factuality. Our findings suggest that undesirable behaviors in state-of-the-art LLMs occur disproportionately more often for users with lower English proficiency, lower education levels, and origins outside the US, rendering these models unreliable sources of information for their most vulnerable users.