Teaching Old Tokenizers New Words: Efficient Tokenizer Adaptation for Pre-trained Models

📅 2025-12-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Pre-trained tokenizers expand their vocabularies inefficiently and prune redundant tokens imprecisely during cross-domain or cross-lingual transfer. Method: This paper proposes a dynamic vocabulary optimization framework based on continued Byte-Pair Encoding (BPE) training. It integrates new vocabulary incrementally by extending the original BPE merge process, improving token utilization; it also introduces, for the first time, a leaf-node pruning strategy grounded in the BPE merge-tree structure, enabling controllable and interpretable vocabulary reduction without compromising model performance. Contribution/Results: Experiments across multilingual settings and model families (e.g., BERT, XLM-R) demonstrate an average 15% vocabulary compression, substantial improvements in tokenization efficiency, and a 32% increase in the usage rate of newly added tokens, establishing a robust, efficient paradigm for tokenizer customization and adaptation.
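The core idea of continued BPE training can be sketched in a few lines: replay the pre-trained tokenizer's existing merge rules on the new-domain corpus, then resume the greedy merge-learning loop from that state. The sketch below is a minimal toy illustration, not the authors' implementation; the function names, the whitespace pre-tokenization, and the first-seen tie-breaking for equally frequent pairs are all assumptions.

```python
from collections import Counter

def pair_counts(corpus):
    """Count adjacent symbol pairs, weighted by word frequency."""
    counts = Counter()
    for word, freq in corpus.items():
        for a, b in zip(word, word[1:]):
            counts[(a, b)] += freq
    return counts

def apply_merge(corpus, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    merged = pair[0] + pair[1]
    new_corpus = {}
    for word, freq in corpus.items():
        out, i = [], 0
        while i < len(word):
            if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
                out.append(merged)
                i += 2
            else:
                out.append(word[i])
                i += 1
        key = tuple(out)
        new_corpus[key] = new_corpus.get(key, 0) + freq
    return new_corpus

def continue_bpe(existing_merges, new_text, num_new_merges):
    """Continue BPE training on new data: first replay the pre-trained
    merges so the corpus is segmented exactly as the old tokenizer would
    segment it, then learn additional merges from that state."""
    words = Counter(new_text.split())           # toy pre-tokenization
    corpus = {tuple(w): f for w, f in words.items()}
    for pair in existing_merges:                # replay old merge rules
        corpus = apply_merge(corpus, pair)
    learned = []
    for _ in range(num_new_merges):             # standard greedy BPE loop
        counts = pair_counts(corpus)
        if not counts:
            break
        best = max(counts, key=counts.get)
        corpus = apply_merge(corpus, best)
        learned.append(best)
    return learned
```

Because the old merges are replayed before any new pair statistics are gathered, every newly learned merge composes tokens the existing tokenizer can actually produce, which is why the added vocabulary is reachable rather than dead weight.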

📝 Abstract
Tokenizer adaptation plays an important role in transferring pre-trained language models to new domains or languages. In this work, we address two complementary aspects of this process: vocabulary extension and pruning. The common approach to extension trains a new tokenizer on domain-specific text and appends the tokens that do not overlap with the existing vocabulary, which often results in many tokens that are unreachable or never used. We propose continued BPE training, which adapts a pre-trained tokenizer by continuing the BPE merge learning process on new data. Experiments across multiple languages and model families show that this approach improves tokenization efficiency and leads to better utilization of added vocabulary. We also introduce leaf-based vocabulary pruning, which removes redundant tokens while preserving model quality. Together, these methods provide practical tools for controlled vocabulary modification, which we release as an open-source package.
Problem

Research questions and friction points this paper is trying to address.

Adapt tokenizers for new domains or languages efficiently
Extend vocabulary without creating unused tokens
Prune redundant tokens while maintaining model quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Continued BPE training adapts tokenizer via merge learning on new data
Leaf-based pruning removes redundant tokens while preserving model quality
Methods provide controlled vocabulary modification tools for adaptation
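The leaf-based pruning idea can be illustrated with the structure of the BPE merge list itself: every merge produces a token from two components, so a token that is produced by some merge but never used as a component of a later merge is a leaf of the merge tree, and its rule can be dropped without invalidating any remaining rule. This is a minimal sketch under that reading of the method; the function names and the filtering interface are assumptions, not the released package's API.

```python
def find_leaf_tokens(merges):
    """Tokens produced by a merge but never used as a component of any
    later merge are leaves of the BPE merge tree: removing their rules
    cannot break any other merge."""
    produced = {a + b for a, b in merges}
    used_as_component = {a for a, _ in merges} | {b for _, b in merges}
    return produced - used_as_component

def prune_leaves(merges, tokens_to_drop):
    """Drop merge rules whose output token is an unwanted leaf.
    Non-leaf tokens are kept even if requested, since other merges
    still depend on them."""
    safe_to_drop = set(tokens_to_drop) & find_leaf_tokens(merges)
    return [(a, b) for a, b in merges if a + b not in safe_to_drop]
```

Restricting removal to leaves is what makes the reduction controllable: pruned words simply fall back to their two-component segmentation, while the rest of the merge table stays internally consistent.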