BBPE16: UTF-16-based byte-level byte-pair encoding for improved multilingual speech recognition

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the inefficiency of UTF-8-based byte-level Byte Pair Encoding (BPE) in multilingual speech recognition, particularly for Chinese, Japanese, and Korean (CJK) languages, where it generates excessively long token sequences that increase computational and memory overhead. To mitigate this, the authors propose BBPE16, the first UTF-16-based byte-level BPE tokenizer, which leverages UTF-16's consistent two-byte encoding units for modern scripts to enhance cross-lingual token sharing while preserving language agnosticism. Experimental results demonstrate that BBPE16 achieves comparable or superior accuracy across monolingual, bilingual, and trilingual ASR tasks as well as a multilingual continual-learning setup. Notably, on Chinese tasks, it reduces token sequence length by up to 10.4% and decoding iterations by 10.3%, significantly accelerating both training and inference while lowering memory consumption.

📝 Abstract
Multilingual automatic speech recognition (ASR) requires tokenization that efficiently covers many writing systems. Byte-level BPE (BBPE) using UTF-8 is widely adopted for its language-agnostic design and full Unicode coverage, but its variable-length encoding inflates token sequences for non-Latin scripts, such as Chinese, Japanese, and Korean (CJK). Longer sequences increase computational load and memory use. We propose BBPE16, a UTF-16-based BBPE tokenizer that represents most modern scripts with a uniform 2-byte code unit. BBPE16 preserves BBPE's language-agnostic properties while substantially improving cross-lingual token sharing. Across monolingual, bilingual, and trilingual ASR, and in a multilingual continual-learning setup, BBPE16 attains comparable or better accuracy; for Chinese, it reduces token counts by up to 10.4% and lowers decoding iterations by up to 10.3%. These reductions speed up fine-tuning and inference and decrease memory usage, making BBPE16 a practical tokenization choice for multilingual ASR.
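The core efficiency argument can be checked directly with Python's built-in codecs. The snippet below is an illustrative sketch, not the paper's tokenizer: it only compares raw byte-sequence lengths, which correspond to the initial (pre-merge) token counts of a byte-level BPE under each encoding.

```python
# Byte-level BPE starts from one token per byte, so the encoded byte
# length bounds the initial token-sequence length before any merges.
text = "音声認識"  # "speech recognition" in Japanese, 4 BMP characters

utf8 = text.encode("utf-8")       # CJK characters take 3 bytes each in UTF-8
utf16 = text.encode("utf-16-le")  # uniform 2-byte code units for BMP scripts

print(len(utf8))   # 12 initial byte tokens under UTF-8
print(len(utf16))  # 8 initial byte tokens under UTF-16
```

For this string, UTF-16 yields a 33% shorter initial byte sequence, which is the source of the token-count and decoding-iteration reductions the paper reports for Chinese.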
Problem

Research questions and friction points this paper is trying to address.

multilingual ASR · tokenization · UTF-8 · CJK · byte-level BPE
Innovation

Methods, ideas, or system contributions that make the work stand out.

BBPE16 · UTF-16 · byte-level BPE · multilingual ASR · tokenization
Hyunsik Kim (Samsung Research)
Haeri Kim (Samsung Research)
Munhak Lee (Samsung Research)
Kyungmin Lee (Samsung Electronics)
speech recognition · deep learning