🤖 AI Summary
Existing speculative decoding (SD) methods can add latency rather than reduce it: under high request rates or low speculation accuracy, the extra draft-and-verify work outweighs its benefit, and a fixed speculation length cannot adapt as system load shifts. To address this, the paper proposes SmartSpec, a dynamically adaptive speculative decoding framework. Its core contribution is a system-level, goodput-driven runtime mechanism that continuously selects the speculation length for each request, from zero (no speculation) to multiple tokens, adapting in real time to the observed load and speculation accuracy. SmartSpec integrates dynamic scheduling, goodput-aware load modeling, and continuous batching, and supports multiple SD paradigms, including traditional draft-model approaches, prompt lookup, and tree-style decoding. Experiments across diverse model sizes, request rates, and datasets show that SmartSpec reduces average request latency by up to 3.2×, improving end-to-end throughput and responsiveness in realistic LLM serving scenarios.
📝 Abstract
Reducing the inference latency of large language models (LLMs) is crucial, and speculative decoding (SD) stands out as one of the most effective techniques. Rather than letting the LLM generate all tokens directly, speculative decoding employs effective proxies to predict potential outputs, which are then verified by the LLM without compromising the generation quality. Yet, deploying SD in real online LLM serving systems (with continuous batching) does not always yield improvement -- under higher request rates or low speculation accuracy, it paradoxically increases latency. Furthermore, no single speculation length works best for all workloads under different system loads. Based on these observations, we develop SmartSpec, a dynamic framework. SmartSpec dynamically determines the best speculation length for each request (from 0, i.e., no speculation, to many tokens) -- and hence the associated speculative execution costs -- based on a new metric called goodput, which characterizes the current observed load of the entire system and the speculation accuracy. We show that SmartSpec consistently reduces average request latency by up to 3.2x compared to non-speculative decoding baselines across different sizes of target models, draft models, request rates, and datasets. Moreover, SmartSpec can be applied to different styles of speculative decoding, including traditional, model-based approaches as well as model-free methods like prompt lookup and tree-style decoding.
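To make the goodput idea concrete, here is a minimal sketch of how a speculation length could be chosen by maximizing expected accepted tokens per unit of batch execution time. This is a hypothetical illustration, not SmartSpec's actual implementation: the acceptance model, the linear cost model, and all names (`expected_accepted`, `step_time`, `best_speculation_length`) and coefficients are assumptions made for this example.

```python
def expected_accepted(k: int, accept_rate: float) -> float:
    # Expected tokens emitted per request per verification step when
    # proposing k speculative tokens, assuming each token is accepted
    # independently with probability accept_rate (illustrative model):
    # sum of a geometric series, (1 - a**(k+1)) / (1 - a).
    if accept_rate >= 1.0:
        return float(k + 1)
    return (1 - accept_rate ** (k + 1)) / (1 - accept_rate)

def step_time(batch_size: int, k: int) -> float:
    # Toy linear cost model: verifying k extra tokens per request grows
    # the batch's effective token count. Coefficients are made up.
    base, per_token = 1.0, 0.05
    return base + per_token * batch_size * (k + 1)

def best_speculation_length(batch_size: int, accept_rate: float,
                            max_k: int = 8) -> int:
    # Pick k (0 = no speculation) that maximizes goodput:
    # expected accepted tokens across the batch per unit of step time.
    def goodput(k: int) -> float:
        return batch_size * expected_accepted(k, accept_rate) / step_time(batch_size, k)
    return max(range(max_k + 1), key=goodput)

# High acceptance and a small batch favor long speculation; a large
# batch (high load) pushes the best k toward 0, i.e. no speculation.
print(best_speculation_length(batch_size=1, accept_rate=0.8))    # → 8
print(best_speculation_length(batch_size=256, accept_rate=0.5))  # → 0
```

Under this toy model the optimizer reproduces the paper's qualitative finding: speculation pays off at low load, while at high load or low acceptance rates the best choice degrades gracefully to ordinary decoding.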