AdaEAGLE: Optimizing Speculative Decoding via Explicit Modeling of Adaptive Draft Structures

📅 2024-12-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing speculative decoding (SD) methods employ fixed draft structures, limiting decoding efficiency and hindering adaptability across diverse inference scenarios. To address this, we propose a dynamically adaptive speculative decoding framework. Our method introduces two key innovations: (1) a lightweight draft length predictor (LDLP), the first of its kind, which enables context-aware, adaptive draft length selection without manual threshold tuning; and (2) explicit modeling of variable-length draft structures to support fine-grained, application-specific optimization. Extensive experiments demonstrate that our approach achieves a 1.62× speedup over standard autoregressive decoding, significantly outperforming state-of-the-art fixed-length baselines, while rigorously preserving output quality—i.e., generating identical token sequences under equivalent conditions.

📝 Abstract
Speculative Decoding (SD) is a popular lossless technique for accelerating the inference of Large Language Models (LLMs). We show that the decoding speed of SD frameworks with static draft structures can be significantly improved by incorporating context-aware adaptive draft structures. However, current studies on adaptive draft structures are limited by their performance, modeling approaches, and applicability. In this paper, we introduce AdaEAGLE, the first SD framework that explicitly models adaptive draft structures. AdaEAGLE leverages the Lightweight Draft Length Predictor (LDLP) module to explicitly predict the optimal number of draft tokens during inference to guide the draft model. It achieves comparable speedup results without manual thresholds and allows for deeper, more specialized optimizations. Moreover, together with threshold-based strategies, AdaEAGLE achieves a 1.62× speedup over vanilla AR decoding and outperforms the fixed-length SotA baseline while maintaining output quality.
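The abstract describes the core loop: a lightweight predictor (LDLP) chooses how many tokens the draft model should propose, and the target model then verifies the draft, accepting only tokens it would have generated itself, which is what makes SD lossless. The sketch below illustrates that loop with toy stand-in models; `target_model`, `draft_model`, and the heuristic `predict_draft_length` are all hypothetical placeholders (in AdaEAGLE the length predictor is learned, not a heuristic), so this is a minimal illustration of the mechanism, not the paper's implementation.

```python
import random

random.seed(0)
VOCAB = list(range(10))

def target_model(prefix):
    # Toy stand-in for the large target model: deterministic next token.
    return (sum(prefix) + len(prefix)) % len(VOCAB)

def draft_model(prefix):
    # Toy draft model that agrees with the target most of the time.
    return target_model(prefix) if random.random() < 0.8 else random.choice(VOCAB)

def predict_draft_length(prefix, min_len=1, max_len=8):
    # Hypothetical stand-in for LDLP: AdaEAGLE uses a learned lightweight
    # predictor here; we substitute a fixed context-dependent heuristic.
    return min(max_len, max(min_len, len(prefix) % max_len + 1))

def adaptive_speculative_step(prefix):
    # 1) Predict an adaptive draft length, 2) draft that many tokens
    #    autoregressively with the small model.
    k = predict_draft_length(prefix)
    ctx = list(prefix)
    draft = []
    for _ in range(k):
        t = draft_model(ctx)
        draft.append(t)
        ctx.append(t)
    # 3) Verify: accept the longest draft prefix the target agrees with,
    #    then append the target's own next token (losslessness guarantee).
    accepted = list(prefix)
    for t in draft:
        if target_model(accepted) == t:
            accepted.append(t)
        else:
            break
    accepted.append(target_model(accepted))
    return accepted

seq = [0]
for _ in range(5):
    seq = adaptive_speculative_step(seq)
```

Because verification only keeps tokens the target model itself would emit, `seq` is always a prefix of plain autoregressive decoding, regardless of how well the draft length is predicted; a better length prediction only changes how many target calls are saved per accepted token.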
Problem

Research questions and friction points this paper is trying to address.

Speculative Decoding
Large Language Models
Decoding Efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

AdaEAGLE
LDLP
Speculative Decoding
Situo Zhang
Shanghai Jiao Tong University
Large Language Models, Reinforcement Learning
Hankun Wang
Shanghai Jiao Tong University
Speech Synthesis
Da Ma
Assistant Professor, School of Medicine, Wake Forest University
Medical Image Computing, Computational Neuroanatomy, Radiogenomics, Neurodegenerative Disease
Zichen Zhu
Shanghai Jiao Tong University
GUI Agents, Multimodal Large Language Models, Human-Computer Interaction
Lu Chen
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China
Kunyao Lan
Shanghai Jiao Tong University
Natural Language Processing
Kai Yu
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China