Benchmarking Concept-Spilling Across Languages in LLMs

📅 2026-01-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of "language spilling" in multilingual large language models, where semantic interference from dominant languages, particularly English, compromises the semantic fidelity of non-English generations. To quantify this phenomenon, the authors propose a concept-spilling evaluation framework that leverages a multilingual generation task centered on 100 highly polysemous English words, systematically assessing semantic robustness across nine languages. By measuring how late in the generation sequence dominant-language meanings first displace target-language ones, the method enables relative model ranking without requiring explicit error attribution. Experiments reveal substantial variation in semantic robustness across both models and languages, and the authors release an extensible benchmark and automated validation toolkit to support the development of linguistically balanced multilingual AI systems.
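
The generation task the summary describes can be pictured as a simple prompting loop. The sketch below is only an illustration: the prompt wording, the example word and language lists, and the `query_model` helper are assumptions, not details taken from the paper.

```python
# Minimal sketch of the meaning-generation task, assuming a caller-supplied
# query_model(prompt) function that returns the model's raw text output.

POLYSEMOUS_WORDS = ["bank", "spring", "light"]          # the benchmark uses 100 such words
TARGET_LANGUAGES = ["German", "Japanese", "Swahili"]     # the paper covers nine languages

def build_prompt(word: str, language: str) -> str:
    """Ask for exactly five distinct meanings, written only in the target language."""
    return (
        f"List exactly five distinct meanings of the English word '{word}'. "
        f"Write each meaning as one sentence in {language} only."
    )

def collect_outputs(query_model, words=POLYSEMOUS_WORDS, languages=TARGET_LANGUAGES):
    """Run every (word, language) pair through the model and keep the raw generations."""
    outputs = {}
    for lang in languages:
        for word in words:
            outputs[(word, lang)] = query_model(build_prompt(word, lang))
    return outputs
```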

📝 Abstract
Multilingual Large Language Models (LLMs) demonstrate remarkable cross-lingual abilities, yet often exhibit a systematic bias toward representations from dominant languages, resulting in semantic interference when generating content in non-English languages, a phenomenon we define as language spilling. This paper presents a novel comparative framework for evaluating multilingual semantic robustness by systematically measuring how models handle polysemous words across languages. Our methodology provides a relative measure of model performance: when required to generate exactly five meanings, both strong and weak models may resort to meanings from dominant languages, but semantically stronger models do so later in the generation sequence, producing more true target-language meanings before failing, while weaker models fall back on dominant-language meanings earlier in the sequence. We evaluate a diverse set of open and closed multilingual LLMs using a structured meaning-generation task across nine languages, employing a carefully curated benchmark of 100 high-polysemy English words. Our findings reveal significant variation in semantic robustness across both models and languages, providing a principled ranking system for model comparison without requiring definitive causal attribution of error sources. We contribute both a scalable comparative benchmark for multilingual semantic evaluation and a rigorous validation pipeline, critical tools for developing more linguistically balanced AI systems.
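
One minimal way to operationalize the abstract's "later is better" ranking idea is sketched below. The paper does not publish this exact formula; the boolean labels are assumed to come from its validation pipeline, which marks each generated meaning as a genuine target-language meaning (True) or a dominant-language spill (False).

```python
# Hedged sketch of a position-based spill score: the fraction of the
# five-meaning sequence produced before the first dominant-language spill.

from statistics import mean

def spill_position_score(labels: list[bool]) -> float:
    """Return the share of meanings produced before the first spill; 1.0 means no spill."""
    for position, is_target_language in enumerate(labels):
        if not is_target_language:
            return position / len(labels)
    return 1.0

def rank_models(model_labels: dict[str, list[list[bool]]]) -> list[tuple[str, float]]:
    """Average per-word scores for each model and sort best-first."""
    scores = {name: mean(spill_position_score(seq) for seq in sequences)
              for name, sequences in model_labels.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: a model that spills at position 4 outranks one that spills at position 2.
ranking = rank_models({"model_A": [[True, True, True, True, False]],
                       "model_B": [[True, True, False, False, False]]})
print(ranking)  # [('model_A', 0.8), ('model_B', 0.4)]
```

Under this reading, a model's score improves only by producing more valid target-language meanings before its first lapse, which matches the relative-ranking framing in the abstract without attributing the error to any specific cause.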
Problem

Research questions and friction points this paper is trying to address.

language spilling
multilingual LLMs
semantic robustness
polysemous words
cross-lingual generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

language spilling
multilingual LLMs
semantic robustness
polysemous words
comparative benchmark
🔎 Similar Papers
No similar papers found.