Preferences for Idiomatic Language are Acquired Slowly -- and Forgotten Quickly: A Case Study on Swedish

📅 2026-02-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how language models acquire preferences for Swedish idiomatic expressions during pretraining and cross-lingual transfer, and why this capability is vulnerable to degradation during instruction tuning. Using both training-from-scratch and English-to-Swedish fine-tuning approaches, the authors evaluate models up to 8B scale across multiple training stages with minimal-pair probing, assessing their ability to distinguish linguistic acceptability from idiomaticity. The work presents systematic evidence that idiomatic competence emerges significantly more slowly than syntactic or lexical knowledge and is highly susceptible to interference from data machine-translated from English into Swedish. To support this analysis, the authors introduce two novel Swedish idiom evaluation benchmarks and demonstrate that continued pretraining strengthens idiomatic preference, whereas instruction tuning on translated data rapidly erodes it.

📝 Abstract
In this study, we investigate how language models develop preferences for idiomatic as compared to linguistically acceptable Swedish, both during pretraining and when adapting a model from English to Swedish. To do so, we train models on Swedish from scratch and by fine-tuning English-pretrained models, probing their preferences at various checkpoints using minimal pairs that differ in linguistic acceptability or idiomaticity. For linguistic acceptability, we adapt existing benchmarks into a minimal-pair format. To assess idiomaticity, we introduce two novel datasets: one contrasting conventionalized idioms with plausible variants, and another contrasting idiomatic Swedish with Translationese. Our findings suggest that idiomatic competence emerges more slowly than other linguistic abilities, including grammatical and lexical correctness. While longer training yields diminishing returns for most tasks, idiom-related performance continues to improve, particularly in the largest model tested (8B). However, instruction tuning on data machine-translated from English -- the common approach for languages with little or no native instruction data -- causes models to rapidly lose their preference for idiomatic language.
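The minimal-pair probing described above boils down to scoring both members of a pair under the model and checking which one receives higher probability. The sketch below illustrates that comparison logic with a toy add-alpha-smoothed bigram language model standing in for the paper's large pretrained models; the corpus, the Swedish example pair, and all function names here are illustrative assumptions, not the authors' actual setup.

```python
import math
from collections import Counter

def train_bigram(corpus):
    # Count unigrams and bigrams over a whitespace-tokenized toy corpus.
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent.split()
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    return unigrams, bigrams

def logprob(sent, unigrams, bigrams, alpha=1.0):
    # Add-alpha smoothed bigram log-probability of a sentence.
    vocab = len(unigrams)
    toks = ["<s>"] + sent.split()
    return sum(
        math.log((bigrams[(a, b)] + alpha) / (unigrams[a] + alpha * vocab))
        for a, b in zip(toks, toks[1:])
    )

def prefers_first(pair, unigrams, bigrams):
    # Minimal-pair probe: does the model assign the first (e.g. idiomatic)
    # variant a higher log-probability than the second?
    return logprob(pair[0], unigrams, bigrams) > logprob(pair[1], unigrams, bigrams)

# Hypothetical example: the idiom "kasta in handduken" ("throw in the towel")
# seen in training vs. a plausible but unconventional variant.
corpus = ["han ville kasta in handduken"] * 3
uni, bi = train_bigram(corpus)
print(prefers_first(("kasta in handduken", "kasta in trasan"), uni, bi))
```

With a real model, `logprob` would instead sum token log-likelihoods from the LM; the probe itself (accuracy = fraction of pairs where the conventional variant wins) is unchanged.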
Problem

Research questions and friction points this paper addresses.

idiomaticity
language models
Swedish
Translationese
linguistic acceptability
Innovation

Methods, ideas, or system contributions that make the work stand out.

idiomaticity
minimal pairs
Translationese
cross-lingual transfer
instruction tuning