Beyond the Rosetta Stone: Unification Forces in Generalization Dynamics

📅 2025-08-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer from hallucination in cross-lingual knowledge transfer, primarily due to misaligned factual representations across languages. To address this, we train small Transformer models from scratch on controlled synthetic multilingual data, systematically characterizing how representational alignment evolves dynamically during training. We propose two targeted interventions—data distribution modulation and tokenizer-aware tokenization strategies—to enhance cross-lingual transfer. Furthermore, we design a mutual information–driven metric and visualization toolkit to quantify inter-language extractability and representation coupling. Experiments demonstrate that explicitly improving cross-lingual representational consistency significantly suppresses hallucination and strengthens transfer robustness. This work provides an interpretable theoretical framework and reproducible technical methodology for understanding and optimizing the multilingual generalization capabilities of large models.

Technology Category

Application Category

📝 Abstract
Large language models (LLMs) struggle with cross-lingual knowledge transfer: they hallucinate when asked in one language about facts expressed in a different language during training. This work introduces a controlled setting to study the causes and dynamics of this phenomenon by training small Transformer models from scratch on synthetic multilingual datasets. We identify a learning phase wherein a model develops either separate or unified representations of the same facts across languages, and show that unification is essential for cross-lingual transfer. We also show that the degree of unification depends on mutual information between facts and training data language, and on how easy it is to extract that language. Based on these insights, we develop methods to modulate the level of cross-lingual transfer by manipulating data distribution and tokenization, and we introduce metrics and visualizations to formally characterize their effects on unification. Our work shows how controlled settings can shed light on pre-training dynamics and suggests new directions for improving cross-lingual transfer in LLMs.
Problem

Research questions and friction points this paper is trying to address.

Study cross-lingual knowledge transfer challenges in LLMs
Identify factors affecting unified multilingual fact representations
Develop methods to improve cross-lingual transfer dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controlled study with synthetic multilingual datasets
Modulate transfer via data and tokenization manipulation
Metrics to visualize unification effects
🔎 Similar Papers
No similar papers found.