🤖 AI Summary
Standard tokenization methods, optimized for information-theoretic objectives (e.g., compression), neglect linguistically grounded constraints such as morphological alignment, leading to suboptimal performance on downstream tasks for morphologically rich languages like Latin. To address this, we propose a context-aware tokenization framework that explicitly integrates morphological knowledge. Our approach is the first to incorporate fine-grained Latin lexicons and morphological analyzers into medium-scale pretraining, jointly optimizing input representations via morphology-guided segmentation and contextual encoding. Crucially, it operates without reliance on large-scale annotated data, offering a linguistically motivated alternative for low-resource language modeling. Evaluated across four diverse downstream tasks, our method achieves consistent and significant performance gains, with particularly strong out-of-domain generalization, demonstrating the benefit of morphological priors for encoder-based representation learning.
📝 Abstract
Tokenization is a critical component of language model pretraining, yet standard tokenization methods often prioritize information-theoretic goals like high compression and low fertility over linguistic goals like morphological alignment. In fact, they have been shown to be suboptimal for morphologically rich languages, where tokenization quality directly impacts downstream performance. In this work, we investigate morphologically-aware tokenization for Latin, a morphologically rich language that is medium-resource in terms of pretraining data, but high-resource in terms of curated lexical resources -- a distinction that is often overlooked but critical in discussions of low-resource language modeling. We find that morphologically-guided tokenization improves overall performance on four downstream tasks. Performance gains are most pronounced for out-of-domain texts, highlighting our models' improved generalization ability. Our findings demonstrate the utility of linguistic resources for improving language modeling for morphologically complex languages. For low-resource languages that lack large-scale pretraining data, the development and incorporation of linguistic resources can serve as a feasible alternative for improving LM performance.
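The contrast the abstract draws between information-theoretic metrics (like fertility, the average number of tokens produced per word) and morphologically aligned segmentation can be sketched as follows. This is a minimal illustration, not the paper's implementation: the lexicon entries, segmentations, and function names are hypothetical, standing in for the curated Latin lexicons and morphological analyzers the paper describes.

```python
# Hypothetical morpheme lexicon standing in for a curated Latin resource:
# each word maps to a stem plus inflectional ending (hand-written examples).
MORPH_LEXICON = {
    "amabamus": ["ama", "bamus"],    # "we were loving": stem ama- + -bamus
    "dominorum": ["domin", "orum"],  # "of the masters": stem domin- + -orum
}

def morph_segment(word):
    """Segment a word using the lexicon; fall back to the whole word."""
    return MORPH_LEXICON.get(word, [word])

def fertility(words, segment):
    """Average number of tokens produced per word by a segmenter."""
    return sum(len(segment(w)) for w in words) / len(words)

words = ["amabamus", "dominorum"]
print([morph_segment(w) for w in words])  # [['ama', 'bamus'], ['domin', 'orum']]
print(fertility(words, morph_segment))    # 2.0
```

A purely compression-driven tokenizer might instead produce splits like `amab + amus` that cross morpheme boundaries; the point of morphology-guided segmentation is that token boundaries coincide with stems and endings, even if fertility is comparable.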