🤖 AI Summary
Mainstream tokenizers such as byte-pair encoding (BPE) rely on deterministic, frequency-based rules, while neural tokenizers typically require architectural changes to the language model itself, hindering deployment at scale. To address this, we propose GQ-VAE, a gated quantized variational autoencoder that can be pre-trained independently and dropped in as a plug-and-play tokenizer, marking the first end-to-end differentiable framework for learning variable-length discrete tokens. GQ-VAE combines vector quantization with variational inference and uses a gating mechanism to dynamically control the granularity of discretization in the latent space. Experiments show that GQ-VAE approaches BPE in compression ratio and language modeling performance; at matched compression, it significantly outperforms small-vocabulary BPE, yielding consistent gains on downstream tasks; and, critically, it requires no modification to the backbone language model, enabling seamless large-scale integration.
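The gating idea above can be illustrated with a minimal sketch: a gate score decides where a segment of latent vectors ends, the segment is pooled, and the pooled vector is snapped to its nearest codebook entry, producing a variable-length sequence of discrete tokens. The segmentation rule, mean pooling, and threshold here are illustrative assumptions for exposition, not the paper's exact formulation (which is end-to-end differentiable).

```python
def nearest_code(vec, codebook):
    """Index of the codebook entry closest to vec (squared Euclidean distance)."""
    best, best_d = 0, float("inf")
    for i, code in enumerate(codebook):
        d = sum((v - c) ** 2 for v, c in zip(vec, code))
        if d < best_d:
            best, best_d = i, d
    return best

def gated_quantize(latents, gates, codebook, threshold=0.5):
    """Pool consecutive latent vectors until the gate fires, then quantize the
    pooled vector to its nearest codebook entry. Returns a variable-length list
    of discrete token ids (a hypothetical inference-time view of gated VQ)."""
    tokens, buf = [], []
    for vec, gate in zip(latents, gates):
        buf.append(vec)
        if gate >= threshold:  # gate fires: close the current segment
            pooled = [sum(col) / len(buf) for col in zip(*buf)]
            tokens.append(nearest_code(pooled, codebook))
            buf = []
    if buf:  # flush any trailing, unclosed segment
        pooled = [sum(col) / len(buf) for col in zip(*buf)]
        tokens.append(nearest_code(pooled, codebook))
    return tokens
```

With low gate scores the segments grow and fewer tokens are emitted (higher compression); with high gate scores every position becomes its own token, recovering fixed-rate VQ-VAE behavior.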
📝 Abstract
While most frontier models still use deterministic, frequency-based tokenization algorithms such as byte-pair encoding (BPE), there has been significant recent work on learned neural tokenizers. However, these schemes generally add complexity to the underlying language model and force large architectural changes, making them hard to implement at scale. To overcome these challenges, we propose the gated quantized variational autoencoder (GQ-VAE), a novel architecture that can be independently pre-trained to serve as a drop-in replacement for existing tokenizers. The key innovation of the architecture is that it learns to encode variable-length discrete tokens. GQ-VAE improves compression and language modeling performance over a standard VQ-VAE tokenizer, and approaches the compression rate and language modeling performance of BPE. Interestingly, if we use BPE with a smaller vocabulary, such that compression is equivalent between GQ-VAE and BPE, we find that GQ-VAE improves downstream language model learning. We conclude with a discussion of several exciting avenues for future work. Code can be found at https://github.com/Theo-Datta-115/gq-vae.