🤖 AI Summary
This study addresses the persistent performance disparities of multilingual language models across languages, asking whether these gaps stem from inherent linguistic complexity or from modeling design choices. For the first time, it systematically disentangles these factors by jointly analyzing modeling mechanisms (tokenization, encoding strategies, data sampling, and parameter sharing) alongside linguistic properties such as morphology, syntax, and information density. The findings indicate that most cross-lingual performance gaps are attributable to modeling decisions rather than to intrinsic language characteristics. Building on this insight, the work proposes actionable principles for fairer model design and shows that standardized tokenization, unified encoding, and balanced data exposure substantially improve linguistic equity in multilingual systems.
📝 Abstract
Multilingual language models (LMs) promise broader NLP access, yet current systems deliver uneven performance across the world's languages. This survey examines why these gaps persist and whether they reflect intrinsic linguistic difficulty or modeling artifacts. We organize the literature around two questions: (1) do linguistic disparities arise from representation and allocation choices (e.g., tokenization, encoding, data exposure, parameter sharing) rather than from inherent complexity, and (2) which design choices mitigate inequities across typologically diverse languages? We review linguistic features such as orthography, morphology, lexical diversity, syntax, information density, and typological distance, linking each to concrete modeling mechanisms. Gaps often shrink when segmentation, encoding, and data exposure are normalized, suggesting that much of the apparent difficulty stems from current modeling choices. We synthesize these insights into design recommendations for tokenization, sampling, architectures, and evaluation to support more balanced multilingual LMs.
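One of the allocation mechanisms the abstract names, balanced data exposure, is commonly implemented with temperature-based language sampling, where each language's raw corpus share is exponentiated and renormalized so low-resource languages are up-weighted. The sketch below is illustrative only: the corpus sizes and the temperature value `alpha=0.3` are hypothetical, not figures from this survey.

```python
def temperature_sample_probs(corpus_sizes, alpha=0.3):
    """Compute sampling probabilities p_i proportional to q_i ** alpha,
    where q_i is language i's empirical share of the training corpus.
    alpha < 1 flattens the distribution, boosting low-resource languages."""
    total = sum(corpus_sizes.values())
    q = {lang: n / total for lang, n in corpus_sizes.items()}
    weights = {lang: p ** alpha for lang, p in q.items()}
    z = sum(weights.values())
    return {lang: w / z for lang, w in weights.items()}

# Hypothetical corpus sizes in sentences (not data from the survey).
sizes = {"en": 1_000_000, "fi": 50_000, "sw": 5_000}
probs = temperature_sample_probs(sizes, alpha=0.3)
```

With `alpha = 1` the raw corpus proportions are recovered; pushing `alpha` toward 0 approaches uniform sampling across languages, trading high-resource coverage for more balanced exposure.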