🤖 AI Summary
This work addresses the high latency of autoregressive decoding in large language models by proposing a training-free, plug-and-play speculative decoding framework. Unlike existing approaches, which require additional training or complex optimization and thus incur high deployment costs, the proposed method constructs a lightweight draft model through layer pruning of the target model, guided by a sensitivity metric: the Fisher information trace (FIT). This ensures full compatibility with the original model and preserves its output distribution without any hyperparameter tuning. Evaluated across multiple benchmarks, the approach achieves decoding speedups of 1.32× to 1.5×, significantly enhancing inference efficiency and making it well-suited for low-latency applications such as multimedia processing.
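The draft-then-verify loop at the heart of speculative decoding can be sketched as follows. This is an illustrative toy, not the paper's implementation: `draft_next` and `target_next` are hypothetical stand-ins for the pruned draft model and the full target model, reduced here to deterministic functions over a token context so the lossless-verification property is easy to see.

```python
# Toy sketch of greedy speculative decoding (illustrative only).
# The draft proposes k tokens cheaply; the target keeps the longest
# matching prefix and then emits one corrected token, so the output
# is identical to decoding with the target alone.

def draft_next(ctx):
    # Hypothetical cheap draft model: usually, but not always, right.
    return (sum(ctx) * 3 + 1) % 7

def target_next(ctx):
    # Hypothetical full target model: defines the reference output.
    return (sum(ctx) * 3 + 1) % 7 if sum(ctx) % 4 else (sum(ctx) + 2) % 7

def speculative_decode(ctx, n_tokens, k=4):
    """Generate n_tokens greedily, verifying k draft tokens per round."""
    out = list(ctx)
    while len(out) - len(ctx) < n_tokens:
        # 1) Draft proposes k tokens autoregressively.
        proposal, tmp = [], list(out)
        for _ in range(k):
            t = draft_next(tmp)
            proposal.append(t)
            tmp.append(t)
        # 2) Target verifies: accept the longest matching prefix.
        for t in proposal:
            if target_next(out) == t and len(out) - len(ctx) < n_tokens:
                out.append(t)
            else:
                break
        # 3) One corrected token per round guarantees progress.
        if len(out) - len(ctx) < n_tokens:
            out.append(target_next(out))
    return out[len(ctx):]
```

Because every accepted or corrected token matches `target_next` on the current context, the output distribution of the target model is preserved exactly; the draft only changes how many target evaluations are needed per emitted token.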
📝 Abstract
Large language models (LLMs) underpin interactive multimedia applications such as captioning, retrieval, recommendation, and creative content generation, yet their autoregressive decoding incurs substantial latency. Speculative decoding reduces latency using a lightweight draft model, but deployment is often limited by the cost and complexity of acquiring, tuning, and maintaining an effective draft model. Recent approaches usually require auxiliary training or specialization, and even training-free methods incur costly search or optimization. We propose SDFP, a fully training-free and plug-and-play framework that builds the draft model via Fisher Information Trace (FIT)-based layer pruning of a given LLM. Using layer sensitivity as a proxy for output perturbation, SDFP removes low-impact layers to obtain a compact draft while preserving compatibility with the original model for standard speculative verification. SDFP needs no additional training, hyperparameter tuning, or separately maintained drafts, enabling rapid, deployment-friendly draft construction. Across benchmarks, SDFP delivers a 1.32×-1.5× decoding speedup without altering the target model's output distribution, supporting low-latency multimedia applications.
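The FIT-based sensitivity score described above can be sketched as a minimal example. This is a hedged illustration under a common empirical-Fisher assumption (the paper's exact estimator may differ): the trace of a layer's Fisher information is approximated by the mean squared gradient of the log-likelihood with respect to that layer's parameters, and the lowest-scoring layers are pruned to form the draft.

```python
# Hedged sketch of FIT-based layer scoring and pruning (not SDFP's code).
# Assumption: FIT(layer l) ≈ (1/N) * sum_i ||grad_{theta_l} log p(x_i)||^2,
# i.e. the empirical Fisher trace from per-example gradients.
# `layer_grads` is a hypothetical structure: one list of per-example
# gradient vectors per layer, as would come from backprop over a
# small calibration set.

def fit_scores(layer_grads):
    """Return one empirical Fisher-trace score per layer."""
    scores = []
    for grads in layer_grads:
        n = len(grads)
        # Sum of squared gradient entries, averaged over examples.
        trace = sum(g * g for vec in grads for g in vec) / n
        scores.append(trace)
    return scores

def prune_layers(layer_grads, n_drop):
    """Indices of layers kept after dropping the n_drop lowest-FIT
    (least sensitive) layers, preserving the original layer order."""
    scores = fit_scores(layer_grads)
    drop = set(sorted(range(len(scores)), key=scores.__getitem__)[:n_drop])
    return [l for l in range(len(scores)) if l not in drop]
```

Because the kept layers are an ordered subset of the target model's own layers, the resulting draft shares the target's tokenizer and weight format, which is what makes the standard speculative verification step apply without any retraining.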