Language Specific Knowledge: Do Models Know Better in X than in English?

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates the existence of “language-specific knowledge” (LSK) in knowledge-intensive language models, i.e., an asymmetric distribution of culturally grounded knowledge across languages. The authors formally define and empirically validate LSK, revealing that models can perform better on cultural reasoning tasks in languages other than English, sometimes even in low-resource languages. Building on this finding, they propose LSKExtractor: a language-adaptive inference framework that first benchmarks a model's knowledge per language and then selects the best input language at inference time, combining chain-of-thought (CoT) prompting with cross-lingual knowledge assessment. Evaluated across multiple large language models and diverse cultural datasets, LSKExtractor achieves an average relative improvement of 10% in accuracy over baselines. The method enhances cultural adaptivity and linguistic inclusivity, supporting the development of more language-fair, knowledge-enhanced models.

📝 Abstract
Code-switching is a common phenomenon of alternating between different languages in the same utterance, thought, or conversation. We posit that humans code-switch because they feel more comfortable talking about certain topics and domains in one language than another. With the rise of knowledge-intensive language models, we ask ourselves the next, natural question: Could models hold more knowledge on some topics in some language X? More importantly, could we improve reasoning by changing the language that reasoning is performed in? We coin the term Language Specific Knowledge (LSK) to represent this phenomenon. As ethnic cultures tend to develop alongside different languages, we employ culture-specific datasets (that contain knowledge about cultural and social behavioral norms). We find that language models can perform better when using chain-of-thought reasoning in some languages other than English, sometimes even better in low-resource languages. Paired with previous works showing that semantic similarity does not equate to representational similarity, we hypothesize that culturally specific texts occur more abundantly in corresponding languages, enabling specific knowledge to occur only in specific "expert" languages. Motivated by our initial results, we design a simple methodology called LSKExtractor to benchmark the language-specific knowledge present in a language model and, then, exploit it during inference. We show our results on various models and datasets, showing an average relative improvement of 10% in accuracy. Our research contributes to the open-source development of language models that are inclusive and more aligned with the cultural and linguistic contexts in which they are deployed.
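The abstract describes a two-phase procedure: benchmark a model's language-specific knowledge on held-out data, then exploit the best-scoring language at inference time. A minimal sketch of that idea is below; note this is a hypothetical illustration, not the paper's implementation — `query_model` is a stand-in stub for a real CoT-prompted LLM call, and the toy data is invented for demonstration.

```python
# Hypothetical sketch of an LSKExtractor-style two-phase procedure.
# Phase 1 benchmarks per-language accuracy; Phase 2 routes each query
# to the language that scored best. `query_model` is a stub standing
# in for an actual LLM prompted with chain-of-thought in `lang`.

from collections import defaultdict

def query_model(question: str, lang: str) -> str:
    # Stub: a real implementation would call an LLM here.
    # Toy table mimics knowledge present only in the "expert" language.
    toy_knowledge = {
        ("wedding-custom", "tr"): "henna night",
        ("wedding-custom", "en"): "unknown",
    }
    return toy_knowledge.get((question, lang), "unknown")

def benchmark_languages(dev_set, languages):
    """Phase 1: estimate per-language accuracy on a held-out dev set."""
    correct = defaultdict(float)
    for question, gold in dev_set:
        for lang in languages:
            if query_model(question, lang) == gold:
                correct[lang] += 1.0
    return {lang: correct[lang] / len(dev_set) for lang in languages}

def answer(question, scores):
    """Phase 2: answer using the language with the best benchmarked score."""
    best_lang = max(scores, key=scores.get)
    return query_model(question, best_lang), best_lang

dev = [("wedding-custom", "henna night")]
scores = benchmark_languages(dev, ["en", "tr"])
reply, chosen_lang = answer("wedding-custom", scores)
```

In this toy run the Turkish "expert" language scores higher on the dev set, so the query is routed to Turkish at inference. A faithful implementation would also need per-topic (rather than global) language selection and real model calls.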
Problem

Research questions and friction points this paper is trying to address.

Do models hold more knowledge in language X than English?
Can reasoning improve by switching the language used?
Does cultural context affect model performance in specific languages?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Employ culture-specific datasets for language models
Introduce LSKExtractor to benchmark language-specific knowledge
Improve reasoning by switching languages in models
Ishika Agarwal
Department of Computer Science, University of Illinois, Urbana-Champaign
Nimet Beyza Bozdag
University of Illinois Urbana-Champaign
NLP · Conversational AI
Dilek Hakkani-Tur
Department of Computer Science, University of Illinois, Urbana-Champaign