🤖 AI Summary
This work investigates how bilingual language models come to share syntactic representations across languages, focusing on the mechanisms and limits of cross-lingual syntactic transfer. Methodologically, it introduces a controlled bilingual pretraining setup—systematically varying the amount of training data per language and the order of language exposure—combined with structural priming, a classic psycholinguistic paradigm adapted here for language model analysis. This setup enables quantitative measurement of cross-lingual syntactic similarity and direct tests of priming effects. The results reveal asymmetric syntactic transfer across language pairs and directions, modulated by language exposure order, and show that priming effects are less robust for typologically dissimilar language pairs. The study thus characterizes the dynamic, structurally constrained formation of multilingual syntactic representations, offering empirical evidence and a methodological tool for probing the mechanisms underlying multilingual competence in foundation models.
📝 Abstract
While crosslingual transfer is crucial to contemporary language models' multilingual capabilities, how it occurs is not well understood. In this paper, we ask what happens to a monolingual language model when it begins to be trained on a second language. Specifically, we train small bilingual models for which we control the amount of data for each language and the order of language exposure. To find evidence of shared multilingual representations, we turn to structural priming, a method used to study grammatical representations in humans. We first replicate previous crosslingual structural priming results and find that after controlling for training data quantity and language exposure, there are asymmetrical effects across language pairs and directions. We argue that this asymmetry may shape hypotheses about human structural priming effects. We also find that structural priming effects are less robust for less similar language pairs, highlighting potential limitations of crosslingual transfer learning and shared representations for typologically diverse languages.
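The structural priming logic used in the abstract can be sketched as a simple measurement: score a target sentence after a prime that shares its syntactic structure and after one that does not, and take the difference in log-probability. The sketch below is a hedged illustration, not the authors' code: `sentence_logprob` is a hypothetical stand-in for a real language model scorer, and its bigram-overlap heuristic merely mimics a priming-like boost.

```python
# Hedged sketch of a structural priming measurement (not the authors'
# implementation). A real experiment would score the target with a trained
# language model conditioned on the prime; here `sentence_logprob` is a
# hypothetical stand-in that rewards bigram overlap with the prime.

def sentence_logprob(sentence: str, context: str = "") -> float:
    """Toy scorer: a real version would sum token log-probabilities
    from a language model given the context (the prime sentence)."""
    def bigrams(text):
        words = text.lower().split()
        return list(zip(words, words[1:]))
    ctx_bigrams = set(bigrams(context))
    overlap = sum(1 for b in bigrams(sentence) if b in ctx_bigrams)
    # Longer sentences get lower log-probability; overlap with the prime
    # raises it, standing in for a structural repetition effect.
    return -len(sentence.split()) + 0.5 * overlap

def priming_effect(target: str, congruent: str, incongruent: str) -> float:
    """Positive value: the target is more probable after a prime sharing
    its syntactic structure than after a structurally different prime."""
    return (sentence_logprob(target, context=congruent)
            - sentence_logprob(target, context=incongruent))

# Example: double-object target, primed by a double-object (congruent)
# vs. prepositional-dative (incongruent) sentence.
effect = priming_effect(
    "the girl gave the boy a book",          # double-object target
    "the teacher gave the class a quiz",     # congruent prime
    "the teacher gave a quiz to the class",  # incongruent prime
)
# → 0.5 (positive: structural repetition boosts the target under this toy scorer)
```

In the paper's actual setting, the same comparison is run with the bilingual model's token log-probabilities, with primes and targets drawn from different languages to test whether syntactic structure is shared across them.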