Out-of-Vocabulary Sampling Boosts Speculative Decoding

📅 2025-06-02
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In speculative decoding, small-vocabulary draft models suffer from sharply reduced acceptance rates due to their inability to generate out-of-vocabulary (OOV) tokens, exposing an inherent trade-off between vocabulary size and inference efficiency. To address this, we propose Redistributing Drafter Kernels (RDK), the first framework that redistributes probability mass over tokens via a token-affinity prior, thereby relaxing the hard vocabulary constraint of conventional sampling and enabling OOV token generation. Our method models token similarity to guide redistribution and employs a first-order approximation algorithm for efficient optimization, achieving *O*(*N*) time complexity. We provide theoretical guarantees showing strictly higher acceptance rates than baseline methods. Experiments demonstrate that RDK substantially improves acceptance rates even when vocabulary size is reduced by over 75%, rendering minimalist drafters practically viable for the first time and significantly accelerating speculative decoding.


๐Ÿ“ Abstract
Speculative decoding relies on fast and accurate drafters. Recent state-of-the-art language models employ larger and larger vocabularies, which significantly slows down drafters. One promising approach to boost the efficiency of speculative decoding is to use drafters with smaller vocabularies. However, existing sampling methods cannot draw out-of-vocabulary tokens, creating a tradeoff between drafters' vocabulary size and acceptance rates. This paper introduces Redistributing Drafter Kernels (RDK), the first out-of-vocabulary sampler that effectively recovers acceptance rates by virtually restoring pruned target tokens. RDK leverages token-affinity priors to reallocate drafter mass towards high-overlap regions. We prove mathematically that RDK can achieve higher acceptance rates than vanilla and state-of-the-art samplers. We provide an efficient first-order approximation of RDK and prove that it reduces redistribution times from $O(N^2)$ to $O(N)$, enabling lightweight implementations for large vocabularies. Our experiments demonstrate that this linear-time RDK significantly boosts acceptance rates even after extreme pruning (removing more than 75% of the drafter's vocabulary), where existing samplers fail. RDK opens the door to extremely pruned drafters, which were previously impractical.
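The vocabulary/acceptance trade-off the abstract describes can be made concrete with the standard acceptance-rate identity for speculative sampling. The sketch below (my own illustration; `expected_acceptance` and the toy distributions are not from the paper) shows how pruning a token from the drafter's vocabulary permanently forfeits that token's share of the target mass:

```python
import numpy as np

def expected_acceptance(p_target, q_draft):
    # Standard speculative-sampling result (not specific to RDK):
    # expected acceptance rate alpha = sum_x min(p(x), q(x)).
    return float(np.minimum(p_target, q_draft).sum())

# Toy 4-token vocabulary: pruning token 3 zeroes q there, so its 0.1 of
# target mass can never be matched and alpha drops from 1.0 to 0.9.
p        = np.array([0.4, 0.3, 0.2, 0.1])  # target distribution
q_full   = np.array([0.4, 0.3, 0.2, 0.1])  # drafter matches the target
q_pruned = np.array([0.5, 0.3, 0.2, 0.0])  # token 3 pruned from the drafter
```

Because the drafter can never propose a token with `q(x) = 0`, no amount of renormalization over the surviving vocabulary recovers that lost mass, which is the gap RDK's redistribution targets.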
Problem

Research questions and friction points this paper is trying to address.

Improves speculative decoding with out-of-vocabulary sampling
Addresses tradeoff between drafter vocabulary size and acceptance rates
Enables practical use of heavily pruned drafters with large-vocabulary target models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces out-of-vocabulary sampler RDK
Leverages token-affinity priors for redistribution
Enables linear-time approximation for large vocabularies
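The innovations above can be sketched in miniature. The function below is a hypothetical single-pass illustration of affinity-based redistribution (the name `redistribute_linear`, the `nearest_pruned` mapping, and the `leak` parameter are my assumptions, not the paper's exact first-order scheme): each in-vocabulary token leaks a fraction of its mass to one affine pruned token, so OOV tokens regain nonzero probability in O(N) rather than the O(N²) of a full kernel product.

```python
import numpy as np

def redistribute_linear(q_draft, nearest_pruned, leak=0.1):
    """Single-pass O(N) redistribution sketch (illustrative, not RDK's
    published algorithm): move a `leak` fraction of each token's drafter
    mass to its most affine pruned token."""
    q_new = (1.0 - leak) * q_draft
    for i, j in enumerate(nearest_pruned):
        q_new[j] += leak * q_draft[i]   # mass flows to the affine OOV token
    return q_new

# Toy 4-token drafter where token 3 was pruned (q = 0 there):
q = np.array([0.5, 0.3, 0.2, 0.0])
q_rdk = redistribute_linear(q, nearest_pruned=[3, 3, 3, 3])
```

Since the leaked mass is only moved, never created, the result stays a valid probability distribution, and the previously un-draftable token can now be proposed and accepted.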