Cultural Bias in Large Language Models: Evaluating AI Agents through Moral Questionnaires

📅 2025-07-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit systematic cultural bias in moral representation, producing homogenized outputs that fail to faithfully capture cross-cultural ethical variation—rendering them unsuitable as “synthetic populations” for social science research. Method: Grounded in Moral Foundations Theory (MFT), this study conducts a large-scale cross-cultural empirical comparison across 19 cultural contexts, evaluating multiple state-of-the-art LLMs against human responses on the standardized Moral Foundations Questionnaire (MFQ). Contribution/Results: We find that scaling model parameters does not improve cultural representational fidelity, and current prompt-engineering–based alignment methods inadequately capture culture-specific moral intuitions. This work provides the first quantitative, empirically grounded assessment of moral cultural bias in LLMs at scale. It demonstrates that prevailing alignment paradigms are insufficient for culturally nuanced moral reasoning and advocates a paradigm shift toward data-driven, culturally embedded AI alignment frameworks.

📝 Abstract
Are AI systems truly representing human values, or merely averaging across them? Our study suggests a concerning reality: Large Language Models (LLMs) fail to represent diverse cultural moral frameworks despite their linguistic capabilities. We expose significant gaps between AI-generated and human moral intuitions by applying the Moral Foundations Questionnaire across 19 cultural contexts. Comparing outputs from multiple state-of-the-art LLMs against human baseline data, we find these models systematically homogenize moral diversity. Surprisingly, increased model size does not consistently improve cultural representation fidelity. Our findings challenge the growing use of LLMs as synthetic populations in social science research and highlight a fundamental limitation in current AI alignment approaches. Without data-driven alignment beyond prompting, these systems cannot capture nuanced, culturally specific moral intuitions. Our results call for more grounded alignment objectives and evaluation metrics to ensure AI systems represent diverse human values rather than flattening the moral landscape.
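
For concreteness, below is a minimal sketch of how an MFQ-style relevance item might be posed to an LLM under a culture-specific persona prompt. The `query_model` stub, the prompt template, and the response parsing are hypothetical placeholders, not the paper's actual protocol; only the 0–5 relevance scale follows the public MFQ format.

```python
# Hypothetical sketch: administer one MFQ-style relevance item to an LLM under a
# culture-specific persona prompt. query_model() is a placeholder for any
# chat-completion call; the template and parsing are illustrative assumptions.

MFQ_SCALE = "0 = not at all relevant, 5 = extremely relevant"

def build_prompt(country: str, item: str) -> str:
    """Compose a persona-conditioned MFQ prompt (assumed template, not the paper's)."""
    return (
        f"You are an average adult respondent from {country}. "
        f"When you decide whether something is right or wrong, to what extent "
        f"is the following consideration relevant to your thinking?\n"
        f"Consideration: {item}\n"
        f"Answer with a single integer ({MFQ_SCALE})."
    )

def query_model(prompt: str) -> str:
    """Placeholder for an actual LLM call (e.g., a chat-completion request)."""
    raise NotImplementedError

def ask_item(country: str, item: str) -> int:
    """Send one item and parse the first digit of the reply, clamped to the 0-5 scale."""
    reply = query_model(build_prompt(country, item))
    digits = [ch for ch in reply if ch.isdigit()]
    if not digits:
        raise ValueError(f"Unparseable response: {reply!r}")
    return max(0, min(5, int(digits[0])))
```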
Problem

Research questions and friction points this paper is trying to address.

LLMs fail to represent diverse cultural moral frameworks
Gap between AI-generated moral intuitions and human values
Current AI alignment methods cannot capture culturally specific moral intuitions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluating LLMs with the Moral Foundations Questionnaire
Comparing AI moral outputs across 19 cultures (see the sketch after this list)
Proposing data-driven alignment beyond prompting
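
A minimal sketch of that comparison step, assuming 0–5 item responses keyed by MFQ item name: responses are averaged into the five MFT foundation scores, and a model's per-culture profile is compared to the human baseline with a mean absolute difference. The item-to-foundation mapping is truncated and the distance metric is an illustrative choice, not necessarily the one used in the paper.

```python
# Hypothetical sketch of the comparison step: aggregate item responses into the
# five MFT foundation scores, then measure how far a model's profile sits from
# the human baseline for the same culture. Mapping and metric are illustrative.
from statistics import mean

FOUNDATIONS = ("care", "fairness", "loyalty", "authority", "sanctity")

# Item-to-foundation mapping (subset shown; the full MFQ has several items per foundation).
ITEM_TO_FOUNDATION = {
    "emotionally": "care",
    "treated": "fairness",
    "lovecountry": "loyalty",
    "respect": "authority",
    "decency": "sanctity",
}

def foundation_scores(responses: dict[str, float]) -> dict[str, float]:
    """Average 0-5 item responses within each foundation; unknown items are skipped."""
    buckets: dict[str, list[float]] = {f: [] for f in FOUNDATIONS}
    for item, value in responses.items():
        foundation = ITEM_TO_FOUNDATION.get(item)
        if foundation is not None:
            buckets[foundation].append(value)
    return {f: mean(vals) for f, vals in buckets.items() if vals}

def profile_distance(model: dict[str, float], human: dict[str, float]) -> float:
    """Mean absolute difference across shared foundations (illustrative fidelity metric)."""
    shared = [f for f in FOUNDATIONS if f in model and f in human]
    return sum(abs(model[f] - human[f]) for f in shared) / len(shared)
```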