GQ-VAE: A gated quantized VAE for learning variable length tokens

📅 2025-12-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current mainstream tokenizers (e.g., BPE) rely on deterministic, frequency-based rules, while neural tokenizers often require architectural modifications to language models, hindering scalable deployment. To address this, we propose GQ-VAE, a gated quantized variational autoencoder that is independently pretrainable and plug-and-play, marking the first end-to-end differentiable framework for learning variable-length discrete tokens. GQ-VAE integrates vector quantization with variational inference and employs a gating mechanism to dynamically control the granularity of discretization in the latent space. Experiments demonstrate that GQ-VAE matches BPE in compression ratio and language modeling performance; under equivalent compression, it significantly outperforms small-vocabulary BPE, yielding consistent gains on downstream tasks; and, critically, it requires no modification to the backbone language model, enabling seamless large-scale integration.
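The gate-controlled quantization described above can be illustrated with a minimal sketch: a gate score at each encoder position decides whether that position emits a discrete token (via nearest-neighbor vector quantization against a codebook), so the output token sequence is variable-length. This is a NumPy illustration under assumed names and shapes, not the paper's implementation — in particular, the hard threshold `g > 0` stands in for the paper's learned, differentiable gating mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy shapes: K codebook entries, latent dim D, T input positions.
K, D, T = 16, 8, 10
codebook = rng.normal(size=(K, D))
latents = rng.normal(size=(T, D))   # stand-in for encoder outputs, one per position
gate_logits = rng.normal(size=T)    # stand-in for learned per-position gate scores

def quantize(z, codebook):
    """Nearest-neighbor vector quantization: index of the closest code."""
    dists = ((codebook - z) ** 2).sum(axis=1)
    return int(dists.argmin())

# Positions with gate > 0 emit a token; gated-off positions are skipped,
# so len(tokens) varies with the input rather than being fixed at T.
tokens = [quantize(z, codebook) for z, g in zip(latents, gate_logits) if g > 0]
print(f"{len(tokens)} tokens from {T} positions: {tokens}")
```

In the actual model the hard gate and the argmin would need differentiable surrogates (e.g., straight-through-style estimators, as is standard for VQ-VAEs) so the whole pipeline trains end to end.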

📝 Abstract
While most frontier models still use deterministic frequency-based tokenization algorithms such as byte-pair encoding (BPE), there has been significant recent work to design learned neural tokenizers. However, these schemes generally add to underlying language model complexity and force large changes to architecture, making them hard to implement at large scales. To overcome these challenges, we propose the gated quantized variational autoencoder (GQ-VAE), a novel architecture that can be independently pre-trained to serve as a drop-in replacement for existing tokenizers. The key innovation of the architecture is to learn to encode variable-length discrete tokens. GQ-VAE improves compression and language modeling performance over a standard VQ-VAE tokenizer, and approaches the compression rate and language modeling performance of BPE. Interestingly, if we use BPE with a smaller vocabulary, such that the compression is equivalent between GQ-VAE and BPE, we find that GQ-VAE improves downstream language model learning. We conclude with a discussion of several exciting avenues for future work. Code can be found at https://github.com/Theo-Datta-115/gq-vae.
Problem

Research questions and friction points this paper is trying to address.

Proposes GQ-VAE as a drop-in replacement for existing tokenizers
Learns variable-length discrete tokens to improve compression and modeling
Enables neural tokenization without major architectural changes or complexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

GQ-VAE is a gated quantized variational autoencoder for tokenization
It learns variable-length discrete tokens as a drop-in replacement
It improves compression and language modeling over standard VQ-VAE
Theo Datta
Kempner Institute, Harvard University
Kayla Huang
Kempner Institute, Harvard University
Sham Kakade
Kempner Institute, Harvard University
David Brandfonbrener
Meta
machine learning · reinforcement learning · language models