Prune or Retrain: Optimizing the Vocabulary of Multilingual Models for Estonian

📅 2025-01-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses adapting multilingual pretrained models (mBERT and XLM-R) to the low-resource language Estonian for named entity recognition (NER), systematically investigating how vocabulary optimization affects performance and efficiency. Two strategies are compared: (1) pruning the vocabulary based on token frequency and language specificity, and (2) retraining a BPE vocabulary tailored to Estonian, each followed by continued training. The key finding is that pruning substantially reduces model size and input sequence length without hurting the NER F1 score, whereas vocabulary retraining causes a significant F1 drop, suggesting that embedding-layer adaptation requires longer tuning. The paper provides a direct comparative evaluation of these two vocabulary adaptation approaches for a low-resource language, challenging the assumption that retraining inherently outperforms pruning and supporting lightweight, efficient multilingual model localization.
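The paper itself does not publish code, but the pruning idea can be illustrated with a minimal sketch: keep only the vocabulary entries that actually occur in a target-language corpus (plus special tokens), then slice the pretrained embedding matrix to the surviving rows. The function name `prune_vocabulary`, the `protected` token list, and the token-presence criterion are illustrative assumptions; the paper's actual procedure also weighs token frequency and language specificity.

```python
import numpy as np

def prune_vocabulary(vocab, embeddings, corpus_tokens,
                     protected=("[PAD]", "[UNK]", "[CLS]", "[SEP]")):
    """Toy vocabulary pruning: drop tokens unseen in the target-language
    corpus and slice the embedding matrix to match the smaller vocab.

    vocab          : dict mapping token -> row index in `embeddings`
    embeddings     : (|vocab|, dim) array of pretrained embeddings
    corpus_tokens  : set of tokens observed in the target-language corpus
    protected      : special tokens that are always retained (assumption)
    """
    # Retain corpus tokens and special tokens, preserving original order.
    keep = [tok for tok in vocab if tok in corpus_tokens or tok in protected]
    new_vocab = {tok: i for i, tok in enumerate(keep)}
    # Reuse the surviving pretrained rows as-is; no retraining needed.
    new_embeddings = embeddings[[vocab[tok] for tok in keep]]
    return new_vocab, new_embeddings

# Illustrative 5-token vocab with 2-dim embeddings; "tere" and "maailm"
# stand in for Estonian tokens seen in the corpus.
vocab = {"[PAD]": 0, "[UNK]": 1, "tere": 2, "बहुत": 3, "maailm": 4}
embeddings = np.arange(10.0).reshape(5, 2)
new_vocab, new_emb = prune_vocabulary(vocab, embeddings, {"tere", "maailm"})
```

Because the kept rows are copied unchanged, the pruned model's embeddings for surviving tokens are identical to the original ones, which is consistent with the paper's observation that pruning does not degrade downstream NER performance.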

📝 Abstract
Adapting multilingual language models to specific languages can enhance both their efficiency and performance. In this study, we explore how modifying the vocabulary of a multilingual encoder model to better suit the Estonian language affects its downstream performance on the Named Entity Recognition (NER) task. The motivations for adjusting the vocabulary are twofold: practical benefits affecting the computational cost, such as reducing the input sequence length and the model size, and performance enhancements by tailoring the vocabulary to the particular language. We evaluate the effectiveness of two vocabulary adaptation approaches -- retraining the tokenizer and pruning unused tokens -- and assess their impact on the model's performance, particularly after continual training. While retraining the tokenizer degraded the performance of the NER task, suggesting that longer embedding tuning might be needed, we observed no negative effects from pruning.
Problem

Research questions and friction points this paper is trying to address.

Multilingual Intelligent Tools
Estonian Language Processing
Named Entity Recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual Model Optimization
Estonian Language Processing
Named Entity Recognition Efficiency