OPT-Tree: Speculative Decoding with Adaptive Draft Tree Structure

📅 2024-02-28
🏛️ Transactions of the Association for Computational Linguistics
📈 Citations: 10
Influential: 0
🤖 AI Summary
To address the slow autoregressive decoding of large language models (LLMs) and the low acceptance rates of existing speculative decoding methods, which stem from rigid, fixed-structure draft sequences, this paper proposes OPT-Tree, a speculative decoding algorithm with an adaptive draft tree structure. It formulates the expected number of accepted tokens per decoding step as the optimization objective and adaptively constructs an extensible draft tree, overcoming the limitations of static tree designs. The method combines a probability-driven greedy tree search, a lightweight autoregressive draft model, and an efficient verification mechanism to generate multiple tokens per step with lossless acceleration. Experiments across diverse LLMs and tasks show up to a 3.2× speedup over standard autoregressive decoding, and with a sufficiently strong draft model and node budget, more than ten tokens can be accepted in a single step, significantly outperforming state-of-the-art draft strategies.
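As a rough illustration of the "draft and then verify" mechanism the summary describes (not the paper's implementation), the sketch below uses a toy greedy acceptance rule; `target_next` and `draft_next` are hypothetical stand-ins for the target and draft models:

```python
def speculative_decode(target_next, draft_next, prompt, max_new=8, k=4):
    """Draft-and-verify speculative decoding with greedy acceptance.

    target_next / draft_next map a token list to the model's next token
    (stand-ins for argmax over real model logits)."""
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        # 1) Draft: the cheap model proposes k tokens autoregressively.
        ctx = list(tokens)
        for _ in range(k):
            ctx.append(draft_next(ctx))
        draft = ctx[len(tokens):]
        # 2) Verify: accept the longest prefix the target itself would
        #    have produced (greedy match keeps decoding lossless).
        for t in draft:
            if target_next(tokens) != t:
                break
            tokens.append(t)
        # 3) Bonus token: the target's own next token is always correct,
        #    so every step advances by at least one token.
        tokens.append(target_next(tokens))
    return tokens[:len(prompt) + max_new]
```

With a perfect draft model, each step accepts all k drafted tokens plus the bonus token; with a useless draft model, decoding degrades gracefully to one token per step and the output is unchanged, which is the "lossless" property.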

📝 Abstract
Autoregressive language models demonstrate excellent performance in various scenarios. However, their inference efficiency is limited by the one-step-one-word generation mode, which has become a pressing problem as models grow increasingly large. Speculative decoding employs a “draft and then verify” mechanism to allow multiple tokens to be generated in one step, realizing lossless acceleration. Existing methods mainly adopt fixed heuristic draft structures, which do not adapt to different situations to maximize the acceptance length during verification. To alleviate this dilemma, we propose OPT-Tree, an algorithm to construct adaptive and scalable draft trees, which can be applied to any autoregressive draft model. It searches for the optimal tree structure that maximizes the mathematical expectation of the acceptance length in each decoding step. Experimental results reveal that OPT-Tree outperforms existing draft structures and achieves a speed-up ratio of up to 3.2 compared with autoregressive decoding. If the draft model is powerful enough and the node budget is sufficient, it can generate more than ten tokens in a single step. Our code is available at https://github.com/Jikai0Wang/OPT-Tree.
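A minimal sketch of the tree-search idea in the abstract: if the draft model's conditional probabilities approximate the chance each token is accepted, the expected acceptance length under a node budget is roughly the sum of the path probabilities of the selected tree nodes, so greedily expanding the highest-probability frontier node is a natural construction. Here `next_probs` is a hypothetical stand-in for the draft model's distribution; this is an illustrative sketch, not the paper's algorithm verbatim.

```python
import heapq

def opt_tree(next_probs, root, budget):
    """Greedy draft-tree construction under a node budget: repeatedly
    select the frontier node with the largest path probability (product
    of conditional draft probabilities from the root)."""
    heap = []  # max-heap via negated probabilities
    for tok, q in next_probs((root,)).items():
        heapq.heappush(heap, (-q, (root, tok)))
    selected = []
    while heap and len(selected) < budget:
        neg_p, path = heapq.heappop(heap)
        selected.append((path, -neg_p))
        # Expanding a node adds its children to the candidate frontier.
        for tok, q in next_probs(path).items():
            heapq.heappush(heap, (neg_p * q, path + (tok,)))
    # E[accepted length] ≈ sum of selected path probabilities.
    return selected, sum(p for _, p in selected)
```

For example, with a draft distribution that always puts 0.6 and 0.4 on two continuations and a budget of 3 nodes, the search selects both depth-1 children plus the 0.36-probability grandchild, for an expected acceptance length of about 1.36 tokens per step; a larger budget or a sharper draft distribution pushes this expectation higher.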
Problem

Research questions and friction points this paper is trying to address.

Improving inference efficiency in autoregressive language models
Adapting draft tree structures for speculative decoding
Maximizing token acceptance length per decoding step
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive draft tree structure for speculative decoding
Optimal tree search maximizes acceptance length
Achieves up to 3.2x speed-up ratio