Language Models Entangle Language and Culture

📅 2026-01-20
🤖 AI Summary
This study addresses the uneven response quality of large language models (LLMs) across languages and the systematic disadvantage faced by speakers of low-resource languages due to language-dependent cultural contextualization. By constructing a multilingual open-ended question set based on WildChat and a translated subset of the CulturalBench benchmark, the work provides the first systematic analysis of the entanglement between language and culture in LLMs. Using an LLM-as-a-Judge approach to identify cultural context, combined with multilingual benchmarking and human evaluation, the study reveals a significant decline in response quality for low-resource languages. It further shows that switching the input language substantially alters the cultural context the model invokes, affecting the relevance and appropriateness of its responses.

📝 Abstract
Users should not be systemically disadvantaged by the language they use for interacting with LLMs; i.e., users across languages should receive responses of similar quality irrespective of the language used. In this work, we create a set of real-world open-ended questions based on our analysis of the WildChat dataset and use it to evaluate whether responses vary by language, specifically, whether answer quality depends on the language used to query the model. We also investigate how language and culture are entangled in LLMs, such that the choice of language changes the cultural information and context used in the response, by using LLM-as-a-Judge to identify the cultural context present in responses. To further investigate this, we evaluate LLMs on a translated subset of the CulturalBench benchmark across multiple languages. Our evaluations reveal that LLMs consistently provide lower-quality answers to open-ended questions in low-resource languages. We find that language significantly impacts the cultural context used by the model, and this difference in context impacts the quality of the downstream answer.
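The LLM-as-a-Judge step described above — asking a judge model which cultural context a response draws on — can be sketched roughly as follows. The prompt wording, the `CULTURE:` label format, and the parsing logic are illustrative assumptions, not the authors' exact setup; the call to the actual judge model is left as a placeholder.

```python
# Hedged sketch of an LLM-as-a-Judge cultural-context check.
# Assumptions: the judge is instructed to reply with a single
# "CULTURE: <name>" line; "none" means no specific cultural context.

JUDGE_TEMPLATE = (
    "You are an impartial judge. Read the question and the model's response, "
    "and identify which cultural context, if any, the response draws on.\n\n"
    "Question: {question}\n"
    "Response: {response}\n\n"
    "Answer on a single line in the form: CULTURE: <culture name or 'none'>"
)

def build_judge_prompt(question: str, response: str) -> str:
    """Fill the judge prompt template with a question/response pair."""
    return JUDGE_TEMPLATE.format(question=question, response=response)

def parse_judgement(judge_output: str):
    """Extract the culture label from the judge's reply, or None."""
    for line in judge_output.splitlines():
        line = line.strip()
        if line.upper().startswith("CULTURE:"):
            label = line.split(":", 1)[1].strip().lower()
            return None if label in ("", "none") else label
    return None
```

In the paper's pipeline the same question would be posed in multiple languages, each response judged this way, and the resulting culture labels compared across languages to quantify how input language shifts the invoked cultural context.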
Problem

Research questions and friction points this paper is trying to address.

language bias
cultural entanglement
low-resource languages
response quality
multilingual LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

language bias
cultural entanglement
multilingual evaluation
LLM-as-a-Judge
low-resource languages
Shourya Jain (Lossfunk)
Paras Chopra (Independent Researcher)