GPUTOK: GPU-Accelerated Byte-Level BPE Tokenization

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance bottleneck that CPU-based tokenizers create for large language models with million-token context windows, which leaves GPU resources severely underutilized. We present the first efficient GPU implementation of a byte-level BPE tokenizer adhering to GPT-2's merge rules, producing bit-identical output to widely used CPU tokenizers such as tiktoken and HuggingFace. The design combines a BlockBPE-style kernel, a cuCollections static map, CUB reductions, and a pybind11 interface. Evaluated on the WikiText-103 dataset, our tokenizer achieves a 1.7× speedup over tiktoken and a 7.6× speedup over the HuggingFace GPT-2 tokenizer on the longest input sequences, substantially alleviating the tokenization bottleneck in long-context processing.

📝 Abstract
As large language models move toward million-token context windows, CPU tokenizers become a major slowdown because they process text one step at a time while powerful GPUs sit unused. We built a GPU-based byte-level BPE tokenizer that follows GPT-2's merge rules. It includes a basic BlockBPE-style kernel and a faster, optimized version that uses a cuCollections static map, CUB reductions, and a pybind11 interface for Python. On WikiText-103 sequences up to 131k tokens, the optimized GPU tokenizer produces the same tokens as a CPU version and, for the longest inputs, is about 1.7× faster than tiktoken and about 7.6× faster than the HuggingFace GPT-2 tokenizer. Nsight profiling shows that 70-80% of CUDA API time goes to memory allocation, so adding memory pooling should give the biggest speed boost next. Tests on generation tasks using WikiText-103 prompts show that our GPU tokenizer's outputs stay within about one percentage point of tiktoken and HuggingFace GPT-2 on similarity and overlap metrics, meaning it keeps output quality while making long-context inference more practical.
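For context, the GPT-2-style merge procedure the paper's GPU kernels must reproduce bit-identically is a greedy loop: among all adjacent token pairs, merge the one with the lowest learned merge rank, and repeat until no ranked pair remains. The sketch below is a minimal CPU reference in Python, not the paper's CUDA implementation; the function name and the toy merge ranks are illustrative (the real GPT-2 table holds roughly 50k learned merges over byte-level tokens).

```python
# Reference sketch of greedy rank-based BPE merging (GPT-2 style).
# `ranks` maps an adjacent token pair to its merge rank; lower rank
# means the pair was learned earlier and is merged first.

def bpe_merge(tokens, ranks):
    """Repeatedly merge the adjacent pair with the lowest merge rank."""
    tokens = list(tokens)
    while len(tokens) > 1:
        # Find the adjacent pair with the best (lowest) rank, if any.
        best_pair, best_rank = None, None
        for pair in zip(tokens, tokens[1:]):
            rank = ranks.get(pair)
            if rank is not None and (best_rank is None or rank < best_rank):
                best_pair, best_rank = pair, rank
        if best_pair is None:
            break  # no mergeable pair left
        # Merge every occurrence of the best pair, left to right.
        merged, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best_pair:
                merged.append(tokens[i] + tokens[i + 1])
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens

# Toy merge table: lower rank = merged earlier during training.
ranks = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r"): 2}
print(bpe_merge("lower", ranks))  # ['low', 'er']
```

Because the merge order is fully determined by the rank table, a GPU implementation that applies merges in the same rank order on the same byte-level pre-tokens can match CPU tokenizers such as tiktoken exactly, which is the bit-identical-output property the paper emphasizes.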
Problem

Research questions and friction points this paper is trying to address.

GPU acceleration
tokenization
byte-level BPE
large language models
long-context inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPU tokenization
byte-level BPE
CUDA optimization
long-context LLM
memory-efficient tokenizer