🤖 AI Summary
This thesis addresses the limited generalization of multilingual neural machine translation (NMT) for low-resource languages, which often stems from the scarcity of parallel data. It systematically investigates how linguistic similarity, data composition, and training strategies shape cross-lingual knowledge transfer, combining retrieval-augmented mechanisms with auxiliary supervision signals to strengthen low-resource translation and analyzing the unintended trade-offs that fine-tuning large language models on parallel data can introduce. Experimental results demonstrate that the proposed methods substantially improve translation quality for low-resource languages, enhance generalization, and reduce off-target generation. These findings offer an effective pathway toward building more robust and inclusive multilingual natural language processing systems.
📝 Abstract
Multilingual machine translation systems aim to make knowledge accessible across languages, yet learning effective cross-lingual representations remains challenging. These challenges are especially pronounced for low-resource languages, where limited parallel data constrains generalization and transfer. Understanding how multilingual models share knowledge across languages requires examining the interaction between representations, data availability, and training strategies. In this thesis, we study cross-lingual knowledge transfer in neural models and develop methods to improve robustness and generalization in multilingual settings, using machine translation as a central testbed. We analyze how similarity between languages influences transfer, how retrieval and auxiliary supervision can strengthen low-resource translation, and how fine-tuning on parallel data can introduce unintended trade-offs in large language models. We further examine the role of language diversity during training and show that increasing translation coverage improves generalization and reduces off-target behavior. Taken together, this work highlights how modeling choices and data composition shape multilingual learning, offering insights toward more inclusive and resilient multilingual NLP systems.