Tracing Multilingual Knowledge Acquisition Dynamics in Domain Adaptation: A Case Study of English-Japanese Biomedical Adaptation

📅 2025-10-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
In multilingual domain adaptation (ML-DA), the intra-lingual acquisition mechanisms of domain knowledge and cross-lingual transfer pathways remain poorly understood, hindering performance on low-resource languages. Method: Focusing on the English–Japanese bilingual biomedical domain, this work systematically investigates knowledge acquisition dynamics in a 13B-parameter large language model. We propose AdaXEval—a structured, bilingual multiple-choice QA evaluation framework built on domain-specific parallel corpora—to enable fine-grained, continuous tracking of knowledge learning. Experiments employ continual training with multi-formulation data strategies. Contribution/Results: Despite high-quality bilingual data, cross-lingual knowledge transfer exhibits pronounced asymmetry. AdaXEval effectively uncovers transfer bottlenecks and intra-lingual knowledge consolidation patterns. All code and datasets are publicly released.

📝 Abstract
Multilingual domain adaptation (ML-DA) is widely used to inject new domain knowledge into large language models (LLMs) across languages. Although many methods have been proposed to improve domain adaptation, the mechanisms of multilingual knowledge acquisition (how domain knowledge is learned within a language and transferred across languages) remain underexplored. This gap leads to suboptimal performance, particularly in low-resource settings. This work examines the learning dynamics of LLMs during ML-DA. Because prior ML-DA studies often train and evaluate on datasets with mismatched knowledge coverage, we propose AdaXEval, an adaptive evaluation method that builds multiple-choice QA datasets from the same bilingual domain corpus used for training, thereby directly studying multilingual knowledge acquisition. Through continual training of LLMs with diverse data recipes, we track how LLMs acquire domain facts and pinpoint the mechanism behind the transformation from domain training data to knowledge. Our experiments on a 13B English-Japanese bilingual LLM reveal that cross-lingual transfer remains challenging despite a high-quality bilingual corpus. The code has been released.
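The abstract describes AdaXEval as building multiple-choice QA items from the same bilingual corpus used for training, then tracking accuracy per language over continual-training checkpoints to expose transfer asymmetry. A minimal sketch of that bookkeeping in Python; the item structure, field names, and prediction format are illustrative assumptions, not the paper's actual data format:

```python
from dataclasses import dataclass

@dataclass
class BilingualMCQAItem:
    """One domain fact rendered as multiple-choice QA in both languages.

    Hypothetical structure: each fact carries parallel English ("en") and
    Japanese ("ja") questions and choice lists, sharing one gold index.
    """
    fact_id: str
    question: dict       # {"en": "...", "ja": "..."}
    choices: dict        # {"en": [...], "ja": [...]}
    answer_index: int    # gold choice index, identical across languages

def per_language_accuracy(items, predictions):
    """Compute MCQA accuracy separately for each language.

    `predictions` maps (fact_id, lang) -> chosen choice index for one
    model checkpoint. Comparing the per-language accuracies across
    checkpoints is one way to quantify cross-lingual transfer asymmetry:
    a fact counts as transferred when it is answered correctly in the
    language it was NOT trained in.
    """
    totals, correct = {}, {}
    for item in items:
        for lang in item.question:
            totals[lang] = totals.get(lang, 0) + 1
            if predictions.get((item.fact_id, lang)) == item.answer_index:
                correct[lang] = correct.get(lang, 0) + 1
    return {lang: correct.get(lang, 0) / totals[lang] for lang in totals}
```

Running this per checkpoint yields two accuracy curves (English and Japanese) over the same fact set, which is the kind of fine-grained, continuous tracking the summary attributes to AdaXEval.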
Problem

Research questions and friction points this paper is trying to address.

Examining multilingual knowledge acquisition dynamics in domain adaptation
Investigating cross-lingual knowledge transfer mechanisms in LLMs
Addressing suboptimal performance in low-resource multilingual domain adaptation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposed adaptive evaluation method AdaXEval
Tracked knowledge acquisition via continual training
Analyzed cross-lingual transfer mechanisms in LLMs
Xin Zhao
The University of Tokyo
Naoki Yoshinaga
Institute of Industrial Science, The University of Tokyo
natural language processing, computational linguistics, machine learning
Yuma Tsuta
National Institute of Informatics
Akiko Aizawa
National Institute of Informatics