Multilingual Large Language Models and Curse of Multilinguality

📅 2024-06-15
🏛️ arXiv.org
📈 Citations: 4
Influential: 1
📄 PDF
🤖 AI Summary
This paper addresses the "curse of multilinguality" in multilingual large language models (LLMs): a pervasive trade-off wherein improved multilingual capability consistently degrades per-language performance. Method: Through systematic architecture analysis, derivation of pretraining objectives, data provenance tracing, and comparative evaluation of tokenization strategies, empirically grounded in mBERT, XLM-R, XGLM, BLOOM, and mT5, the paper identifies the root causes as competing cross-lingual representations and imbalances in data distribution and optimization dynamics. Contribution/Results: It formally characterizes the phenomenon, proposes a mitigation paradigm comprising language-balanced sampling, hierarchical adaptation, and task-aware fine-tuning, and introduces an evaluation framework designed specifically for multilingual LLMs. The work establishes both theoretical foundations and reproducible technical pathways toward models that combine broad multilingual coverage with strong per-language accuracy.
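The language-balanced sampling mentioned above is commonly realized as temperature-based (exponentially smoothed) sampling over per-language corpus sizes, as popularized in mBERT/XLM-R pretraining. The sketch below is a minimal illustration of that general technique, not the paper's exact procedure; the corpus sizes and the `alpha=0.3` setting are illustrative assumptions.

```python
def balanced_sampling_probs(corpus_sizes, alpha=0.3):
    """Exponentially smooth raw per-language frequencies.

    alpha=1.0 reproduces the raw data distribution; smaller
    alpha upweights low-resource languages at the expense of
    high-resource ones.
    """
    total = sum(corpus_sizes.values())
    freqs = {lang: n / total for lang, n in corpus_sizes.items()}
    # Raise each frequency to the power alpha, then renormalize.
    smoothed = {lang: f ** alpha for lang, f in freqs.items()}
    norm = sum(smoothed.values())
    return {lang: s / norm for lang, s in smoothed.items()}


# Illustrative token counts, not the paper's data.
sizes = {"en": 1_000_000, "de": 100_000, "sw": 1_000}
probs = balanced_sampling_probs(sizes, alpha=0.3)
# Swahili's sampling share rises from ~0.09% of raw tokens
# to several percent, while English's share shrinks.
```

The resulting distribution can then drive batch construction (e.g. weighted choice of which language's shard to draw the next example from), which is one lever against the data-imbalance cause the summary identifies.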

📝 Abstract
Multilingual Large Language Models (LLMs) have gained wide popularity among Natural Language Processing (NLP) researchers and practitioners. These models, trained on huge datasets, show proficiency across various languages and demonstrate effectiveness in numerous downstream tasks. This paper navigates the landscape of multilingual LLMs, providing an introductory overview of their technical aspects. It explains underlying architectures, objective functions, pre-training data sources, and tokenization methods. This work explores the unique features of different model types: encoder-only (mBERT, XLM-R), decoder-only (XGLM, PaLM, BLOOM, GPT-3), and encoder-decoder models (mT5, mBART). Additionally, it addresses one of the significant limitations of multilingual LLMs, the curse of multilinguality, and discusses current attempts to overcome it.
Problem

Research questions and friction points this paper is trying to address.

Exploring multilingual LLMs' architectures and training methods
Analyzing performance of different multilingual LLM types
Addressing the curse of multilinguality limitation in LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surveys multilingual LLM architectures, objectives, and tokenization methods
Compares encoder-only, decoder-only, and encoder-decoder model families
Reviews current attempts to overcome the curse of multilinguality