🤖 AI Summary
To address the slow autoregressive decoding of vision-language models (VLMs), which hinders real-time applications, this paper proposes Spec-LLaVA, a dynamic tree-structured speculative decoding framework. Spec-LLaVA pairs a lightweight draft VLM with a large target model: the draft speculates future tokens, and the target verifies them in parallel within a tree-structured speculative framework. It introduces two key components: (1) confidence-driven parallel verification, enabling simultaneous validation of multiple candidate tokens; and (2) adaptive tree expansion and pruning, which dynamically adjusts the speculation tree based on token-level uncertainty to balance efficiency and accuracy. Evaluated on LLaVA-1.5, Spec-LLaVA achieves up to 3.28× decoding speedup with no loss in generation quality on standard multimodal benchmarks, including no degradation in visual grounding, instruction following, or factual consistency. The lightweight draft model significantly reduces computational overhead, making the framework suitable for resource-constrained and edge-device deployment.
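The adaptive tree expansion and pruning described above can be sketched as follows. This is a minimal illustration under assumed mechanics, not the paper's exact policy: `draft_topk` is a hypothetical stand-in for the draft VLM's top-k next-token head, and branches are pruned when their cumulative draft probability falls below a confidence threshold.

```python
def build_spec_tree(prefix, draft_topk, k=2, conf=0.2, max_depth=3):
    """Grow a speculation tree: each node branches into the draft's
    top-k next tokens; a branch is pruned when its cumulative draft
    probability drops below `conf` (a token-level uncertainty gate).

    draft_topk: callable mapping a token sequence to a list of
    (token, prob) pairs -- a stand-in for the draft model's top-k head.
    Returns the root-to-leaf token paths kept for target verification.
    """
    paths = []

    def grow(ctx, path, p, depth):
        if depth == max_depth:
            paths.append(path)
            return
        kept_child = False
        for tok, q in draft_topk(ctx)[:k]:
            if p * q >= conf:              # expand confident branches only
                kept_child = True
                grow(ctx + [tok], path + [tok], p * q, depth + 1)
        if not kept_child and path:
            paths.append(path)             # all children pruned: leaf here

    grow(list(prefix), [], 1.0, 0)
    return paths
```

Lowering `conf` trades more verification work for a higher chance that some branch matches the target; raising it keeps the tree narrow and cheap.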
📝 Abstract
Vision-Language Models (VLMs) enable powerful multimodal reasoning but suffer from slow autoregressive inference, limiting their deployment in real-time applications. We introduce Spec-LLaVA, a system that applies speculative decoding to accelerate VLMs without sacrificing output quality. Spec-LLaVA pairs a lightweight draft VLM with a large target model: the draft speculates future tokens, which the target verifies in parallel, allowing multiple tokens to be generated per step. To maximize efficiency, we design a dynamic tree-based verification algorithm that adaptively expands and prunes speculative branches using draft model confidence. On MS COCO out-of-domain images, Spec-LLaVA achieves up to 3.28$\times$ faster decoding on LLaVA-1.5 (7B, 13B) with no loss in generation quality. This work presents a lossless acceleration framework for VLMs using dynamic tree-structured speculative decoding, opening a path toward practical real-time multimodal assistants. Importantly, the lightweight draft model design makes the framework amenable to resource-constrained or on-device deployment settings.
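The draft-then-verify loop underlying speculative decoding can be sketched as below. This is a minimal greedy-decoding sketch, not the paper's implementation: `draft_next` and `target_next` are hypothetical stand-ins for the draft and target VLMs, and a real system verifies all drafted positions in a single batched target forward pass rather than one at a time.

```python
def speculative_step(prefix, draft_next, target_next, k=4):
    """One speculative decoding step under greedy decoding.

    draft_next / target_next: callables mapping a token sequence to the
    next (greedy) token -- placeholders for the draft and target models.
    Returns the tokens accepted this step; at least one token is always
    produced, since the target's own prediction is taken at a mismatch.
    """
    # 1. Draft speculates k future tokens autoregressively (cheap).
    spec, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        spec.append(t)
        ctx.append(t)

    # 2. Target verifies the drafted positions (one batched forward
    #    pass in practice; emulated sequentially here).
    accepted, ctx = [], list(prefix)
    for t in spec:
        expected = target_next(ctx)
        if t == expected:
            accepted.append(t)          # draft agreed with target: keep it
            ctx.append(t)
        else:
            accepted.append(expected)   # take the target's token and stop
            break
    else:
        # Every draft token was accepted: the target's verification pass
        # also yields one extra token for free.
        accepted.append(target_next(ctx))
    return accepted
```

Because accepted tokens exactly match what the target would have produced on its own, the acceleration is lossless: output quality is identical to plain target decoding, only faster when the draft agrees often.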