Risks of Cultural Erasure in Large Language Models

📅 2025-01-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study identifies cultural erasure risks in large language models (LLMs) for travel recommendation and geographic description, focusing on two imbalances: cultural absence (underrepresentation) and cultural simplification (stereotyped representation). Methodologically, the authors propose the first sociologically grounded, measurable framework for assessing cultural impact in LLM outputs, distinguishing *whether* cultures are represented from *how* they are represented and thereby advancing fairness evaluation from coverage breadth to representational quality. Using prompt engineering, cross-cultural content analysis, and a custom-built benchmark, they conduct systematic qualitative and semi-quantitative evaluations across multiple models. Results reveal significant underrepresentation and stereotyping of non-Western cultures in mainstream LLMs. The work delivers an operationalizable, quality-oriented assessment pathway for cultural representation in NLP, addressing a critical gap in the quantitative study of cultural fairness.

๐Ÿ“ Abstract
Large language models are increasingly being integrated into applications that shape the production and discovery of societal knowledge, such as search, online education, and travel planning. As a result, language models will shape how people learn about, perceive, and interact with global cultures, making it important to consider whose knowledge systems and perspectives are represented in models. Recognizing this importance, a growing body of work in Machine Learning and NLP has focused on evaluating gaps in the global cultural representational distribution of model outputs. However, more work is needed on benchmarks for the cross-cultural impacts of language models that stem from a nuanced, sociologically aware conceptualization of cultural impact or harm. We join this line of work, arguing for the need for metricizable evaluations of language technologies that interrogate and account for historical power inequities and the differential impacts of representation on global cultures, particularly for cultures already under-represented in digital corpora. We look at two concepts of erasure: omission, where cultures are not represented at all, and simplification, where cultural complexity is erased by presenting one-dimensional views of a rich culture. The former concerns whether something is represented; the latter, how it is represented. We focus our analysis on two task contexts with the potential to influence global cultural production. First, we probe the representations a language model produces about different places around the world when asked to describe them. Second, we analyze the cultures represented in the travel recommendations produced by a set of language model applications. Our study shows ways in which the NLP community and application developers can begin to operationalize complex socio-cultural considerations into standard evaluations and benchmarks.
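As a rough illustration of how omission and simplification might be operationalized, here is a minimal Python sketch. It is not the paper's benchmark: the country list, the sample texts, mention rate as an omission proxy, and type-token ratio as a simplification proxy are all illustrative assumptions.

```python
# Minimal sketch of the two erasure measures described above.
# Assumptions (not from the paper): the country list, the sample texts,
# mention rate as an omission proxy, and type-token ratio as a crude
# simplification proxy. In practice the texts would be model outputs
# collected by repeatedly prompting an LLM.
from collections import Counter


COUNTRIES = ["France", "Japan", "Nigeria", "Peru", "Indonesia", "Morocco"]


def omission_rate(texts: list[str], countries: list[str]) -> dict[str, float]:
    """Fraction of sampled recommendation texts mentioning each country.

    Countries with a near-zero rate are candidates for omission-style erasure.
    """
    counts: Counter[str] = Counter()
    for text in texts:
        lowered = text.lower()
        for country in countries:
            if country.lower() in lowered:
                counts[country] += 1
    return {c: counts[c] / len(texts) for c in countries}


def simplification_proxy(descriptions: list[str]) -> float:
    """Type-token ratio across repeated descriptions of the same place.

    A low ratio means the model recycles the same small set of words and
    themes, one crude signal of a one-dimensional (simplified) portrayal.
    """
    tokens = [tok for text in descriptions for tok in text.lower().split()]
    return len(set(tokens)) / max(len(tokens), 1)


if __name__ == "__main__":
    # Stand-in outputs; real use would sample these from a language model.
    recommendations = [
        "Visit Paris, France, for its museums and cafes.",
        "Kyoto, Japan, offers temples, gardens, and tea houses.",
        "Spend a week exploring the markets and coastline of Morocco.",
    ]
    print(omission_rate(recommendations, COUNTRIES))

    descriptions = [
        "Lagos is a bustling city known for its markets.",
        "Lagos is a bustling city famous for its markets.",
    ]
    print(simplification_proxy(descriptions))
```

A real evaluation would need entity matching rather than substring checks, and a content analysis of themes rather than a lexical ratio; closing that gap is exactly what the paper's sociologically grounded framework is for.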
Problem

Research questions and friction points this paper is trying to address.

cultural representation
language models
bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cultural Bias
Language Models
Socio-cultural Evaluation