🤖 AI Summary
This study investigates how source-language properties shape the modeling and grammaticality-judgment abilities of English language models trained on machine-translated data. The authors train small language models on English translations from 24 typologically diverse languages of varying resource availability, and systematically relate linguistic typology metrics to corpus-level statistical features. The findings show that model perplexity is driven primarily by the lexical diversity of the translated corpora, whereas grammaticality-judgment performance depends strongly on the typological similarity between the source language and English, particularly when ample training data is available. Together, these results clarify the distinct mechanisms through which source-language attributes shape model behavior in cross-lingual transfer.
📝 Abstract
Machine-translated data is widely used in multilingual NLP, particularly when native text is scarce. However, translated text differs systematically from native text. This phenomenon is known as translationese, and it reflects both traces of the source language and characteristic properties of translation itself. In this paper, we study how training on machine-translated data affects small English language models, focusing on how translationese from different source languages shapes linguistic acceptability judgments and language modelling across domains. We train models on English text translated from 24 typologically and resource-diverse source languages, enabling a systematic analysis of how source language and corpus properties influence what models learn. Our results show that the source language has a clear impact on model behavior: general perplexity is driven more by the lexical diversity of the translated corpus, while grammatical performance correlates strongly with typological similarity to English, given enough data.
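The abstract does not specify how lexical diversity is measured; a common, minimal proxy is the type-token ratio (unique word types divided by total tokens). The sketch below is a hypothetical illustration of that idea, not the paper's actual metric, using a toy tokenizer (whitespace split) and invented example sentences:

```python
def type_token_ratio(tokens):
    """Type-token ratio: number of unique word types / total tokens.

    A simple proxy for lexical diversity; lower values indicate the
    more repetitive vocabulary often attributed to translated text.
    """
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# Toy examples (invented for illustration): the "translated" sentence
# reuses the same few words, so its ratio is lower.
native = "the cat sat on the mat while the dog slept near the door".split()
translated = "the cat sat on the mat and the cat sat on the mat".split()

print(f"native TTR:     {type_token_ratio(native):.2f}")
print(f"translated TTR: {type_token_ratio(translated):.2f}")
```

Note that the raw type-token ratio is length-sensitive, so corpus-level comparisons typically use length-normalized variants computed over fixed-size windows.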