Spec-LLaVA: Accelerating Vision-Language Models with Dynamic Tree-Based Speculative Decoding

📅 2025-09-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the slow autoregressive decoding speed of vision-language models (VLMs), which hinders real-time applications, this paper proposes Dynamic Tree Speculative Decoding (DTSD). DTSD constructs a lightweight draft model that collaborates with a large target VLM in a tree-structured speculative framework. It introduces two key innovations: (1) confidence-driven parallel verification, enabling simultaneous validation of multiple candidate tokens; and (2) adaptive tree expansion and pruning, dynamically adjusting the speculation tree based on token-level uncertainty to balance efficiency and accuracy. Evaluated on LLaVA-1.5, DTSD achieves up to 3.28× decoding speedup without compromising generation quality—measured by standard multimodal benchmarks—including no degradation in visual grounding, instruction following, or factual consistency. The method significantly reduces computational overhead, making it suitable for resource-constrained and edge-device deployment scenarios.
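The draft-and-verify loop the summary describes can be sketched in a few lines. This is a hedged toy illustration of speculative decoding under greedy decoding, not the paper's implementation: the model functions and the names `greedy_next` and `speculative_step` are illustrative stand-ins, and models are simplified to functions that map a token sequence to next-token scores.

```python
# Minimal sketch of draft-and-verify speculative decoding under greedy
# decoding. Model functions are toy stand-ins, not real VLMs.

def greedy_next(model, seq):
    """Pick the model's highest-scoring next token."""
    scores = model(seq)
    return max(scores, key=scores.get)

def speculative_step(draft, target, seq, k=4):
    """Draft k tokens, then verify them with the target model.

    Returns the tokens committed this step: the accepted prefix of the
    draft plus one corrective token from the target, so at least one
    token is generated per step and the output matches what pure target
    greedy decoding would produce (lossless).
    """
    # 1) The cheap draft model speculates k future tokens autoregressively.
    drafted, cur = [], list(seq)
    for _ in range(k):
        t = greedy_next(draft, cur)
        drafted.append(t)
        cur.append(t)

    # 2) The target verifies every drafted position (in a real system this
    #    is a single parallel forward pass over all prefixes).
    committed, cur = [], list(seq)
    for t in drafted:
        if greedy_next(target, cur) == t:
            committed.append(t)
            cur.append(t)
        else:
            break

    # 3) The same verification pass yields one target token for free,
    #    either correcting the first mismatch or extending a full accept.
    committed.append(greedy_next(target, cur))
    return committed
```

Under greedy decoding this loop is lossless: the committed sequence is identical to what the target alone would generate, but several tokens can be emitted per verification pass when the draft agrees with the target.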

📝 Abstract
Vision-Language Models (VLMs) enable powerful multimodal reasoning but suffer from slow autoregressive inference, limiting their deployment in real-time applications. We introduce Spec-LLaVA, a system that applies speculative decoding to accelerate VLMs without sacrificing output quality. Spec-LLaVA pairs a lightweight draft VLM with a large target model: the draft speculates future tokens, which the target verifies in parallel, allowing multiple tokens to be generated per step. To maximize efficiency, we design a dynamic tree-based verification algorithm that adaptively expands and prunes speculative branches using draft model confidence. On MS COCO out-of-domain images, Spec-LLaVA achieves up to 3.28× faster decoding on LLaVA-1.5 (7B, 13B) with no loss in generation quality. This work presents a lossless acceleration framework for VLMs using dynamic tree-structured speculative decoding, opening a path toward practical real-time multimodal assistants. Importantly, the lightweight draft model design makes the framework amenable to resource-constrained or on-device deployment settings.
Problem

Research questions and friction points this paper is trying to address.

Accelerate vision-language models for real-time applications
Maintain output quality while speeding up inference
Enable efficient deployment on resource-constrained devices
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic tree-based speculative decoding algorithm
Lightweight draft model for token speculation
Parallel verification of multiple tokens
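The adaptive tree expansion and pruning idea from the list above can be illustrated with a small sketch. The function name `build_spec_tree`, the `draft_topk` interface, and the thresholds are assumptions chosen for illustration, not the paper's algorithm: it grows a speculation tree best-first by cumulative draft probability, prunes branches whose confidence falls below a floor, and returns candidate branches for parallel target verification.

```python
# Hedged sketch of confidence-driven speculation-tree construction.
# Interface and thresholds are illustrative, not from the paper.
import heapq

def build_spec_tree(draft_topk, prefix, max_nodes=8, min_prob=0.1):
    """Grow a speculation tree best-first, pruning low-confidence branches.

    draft_topk(seq) -> list of (token, prob) candidate continuations.
    Returns candidate branches (token tuples) with their cumulative
    draft probability, sorted most-confident first, ready for a single
    parallel verification pass by the target model.
    """
    # Max-heap over cumulative probability (negated for heapq's min-heap).
    frontier = [(-1.0, ())]
    branches, nodes = [], 0
    while frontier and nodes < max_nodes:
        neg_p, branch = heapq.heappop(frontier)
        p = -neg_p
        expanded = False
        for tok, q in draft_topk(list(prefix) + list(branch)):
            if p * q >= min_prob:  # prune branches the draft doubts
                heapq.heappush(frontier, (-(p * q), branch + (tok,)))
                nodes += 1
                expanded = True
        if branch and not expanded:
            branches.append((branch, p))  # leaf: a candidate to verify
    # Unexpanded frontier entries are candidate branches too.
    branches.extend((b, -np) for np, b in frontier if b)
    return sorted(branches, key=lambda bp: -bp[1])
```

The node budget caps the cost of the single parallel verification pass, while the probability floor steers that budget toward branches the draft is confident about, mirroring the efficiency/accuracy balance the summary describes.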
👥 Authors
Mingxiao Huo, Carnegie Mellon University
Jiayi Zhang, University of Nottingham
Hewei Wang, Carnegie Mellon University / Apple
Jinfeng Xu, The University of Hong Kong
Zheyu Chen, Beijing Institute of Technology
Huilin Tai, Columbia University
Yijun Chen, University of California, Berkeley