🤖 AI Summary
In multilingual domain adaptation (ML-DA), how domain knowledge is acquired within a language and transferred across languages remains poorly understood, hindering performance in low-resource languages.
Method: Focusing on the English–Japanese bilingual biomedical domain, this work systematically investigates knowledge acquisition dynamics in a 13B-parameter large language model. The authors propose AdaXEval, an adaptive evaluation method that builds bilingual multiple-choice QA datasets from the same domain-specific parallel corpus used for training, enabling fine-grained, continuous tracking of knowledge learning. Experiments use continual training with diverse data recipes.
Contribution/Results: Despite high-quality bilingual data, cross-lingual knowledge transfer exhibits pronounced asymmetry. AdaXEval effectively uncovers transfer bottlenecks and intra-lingual knowledge consolidation patterns. The code has been publicly released.
📝 Abstract
Multilingual domain adaptation (ML-DA) is widely used to teach large language models (LLMs) new domain knowledge across languages. Although many methods have been proposed to improve domain adaptation, the mechanisms of multilingual knowledge acquisition (how domain knowledge is learned within a language and transferred across languages) remain underexplored. This gap leads to suboptimal performance, particularly in low-resource settings. This work examines the learning dynamics of LLMs during ML-DA. Because prior ML-DA studies often train and evaluate on datasets with mismatched knowledge coverage, we propose AdaXEval, an adaptive evaluation method that builds multiple-choice QA datasets from the same bilingual domain corpus used for training, allowing multilingual knowledge acquisition to be studied directly. Through continual training of LLMs with diverse data recipes, we track how LLMs acquire domain facts and pinpoint the mechanisms by which domain training data is transformed into knowledge. Our experiments on a 13B English-Japanese bilingual LLM reveal that cross-lingual transfer remains challenging despite a high-quality bilingual corpus. The code has been released.
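To make the evaluation idea concrete, here is a minimal sketch of how a multiple-choice QA item could be constructed directly from a bilingual domain corpus, so that the evaluation covers exactly the facts the model was trained on. This is an illustrative assumption, not the paper's actual AdaXEval pipeline: the function name `build_mcqa_item`, the question template, and the distractor strategy (sampling other sentences from the same corpus) are hypothetical.

```python
import random

def build_mcqa_item(fact_sent, distractor_sents, lang, n_choices=4, seed=0):
    """Build one multiple-choice QA item from a domain sentence (the correct
    answer) plus distractors drawn from the same corpus.

    Hypothetical AdaXEval-style construction: because the item is derived from
    the training corpus itself, accuracy on it tracks whether that specific
    fact has been acquired.
    """
    rng = random.Random(seed)
    # Pick n_choices - 1 distractors and shuffle the answer in among them.
    choices = [fact_sent] + rng.sample(distractor_sents, n_choices - 1)
    rng.shuffle(choices)
    return {
        "lang": lang,
        "question": "Which statement is supported by the domain corpus?",
        "choices": choices,
        "answer": choices.index(fact_sent),  # index of the correct choice
    }

# Usage: build a parallel EN/JA item pair from an aligned sentence pair, to
# probe whether a fact trained in one language transfers to the other.
en_distractors = ["Insulin raises blood glucose.",
                  "Penicillin is an antiviral drug.",
                  "Hemoglobin stores calcium."]
ja_distractors = ["インスリンは血糖値を上げる。",
                  "ペニシリンは抗ウイルス薬である。",
                  "ヘモグロビンはカルシウムを貯蔵する。"]
en_item = build_mcqa_item("Aspirin inhibits COX enzymes.", en_distractors, "en")
ja_item = build_mcqa_item("アスピリンはCOX酵素を阻害する。", ja_distractors, "ja")
```

Evaluating the same underlying fact in both languages is what lets acquisition (accuracy in the training language) be separated from transfer (accuracy in the other language).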