SDFP: Speculative Decoding with FIT-Pruned Models for Training-Free and Plug-and-Play LLM Acceleration

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high latency of autoregressive decoding in large language models by proposing a training-free, plug-and-play speculative decoding framework. Unlike existing approaches that require additional training or complex optimization, which raises deployment cost, the proposed method constructs a lightweight draft model through layer pruning of the target model, guided by a sensitivity metric based on the Fisher Information Trace (FIT). This keeps the draft fully compatible with the original model and preserves its output distribution without any hyperparameter tuning. Evaluated across multiple benchmarks, the approach achieves decoding speedups of 1.32× to 1.5×, significantly enhancing inference efficiency and making it well suited to low-latency applications such as multimedia processing.
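The pruning idea in the summary can be illustrated with a minimal sketch. This is not the paper's implementation: the gradient inputs, the sum-of-squared-gradients FIT approximation, and the `keep_ratio` parameter are all illustrative assumptions; a real system would estimate per-layer Fisher traces from the LLM's gradients on calibration data.

```python
# Hypothetical sketch of FIT-based layer pruning: score each layer by an
# approximation of its Fisher information trace (here, the sum of squared
# per-parameter gradients of the log-likelihood), then drop the
# lowest-scoring layers to form a compact draft model.

def fit_score(layer_grads):
    """Approximate Fisher Information Trace for one layer:
    sum of squared gradient values over its parameters."""
    return sum(g * g for g in layer_grads)

def prune_layers(per_layer_grads, keep_ratio=0.75):
    """Return indices of layers to keep, dropping the lowest-FIT layers.
    `per_layer_grads` maps layer index -> flat list of gradient values."""
    scores = {i: fit_score(g) for i, g in per_layer_grads.items()}
    n_keep = max(1, round(keep_ratio * len(scores)))
    # Keep the most sensitive (highest-FIT) layers, in original order.
    return sorted(sorted(scores, key=scores.get, reverse=True)[:n_keep])

# Toy example: 4 layers; layer 2 has near-zero gradients (low sensitivity).
grads = {
    0: [0.5, -0.3, 0.2],
    1: [0.9, 0.7, -0.4],
    2: [0.01, -0.02, 0.01],
    3: [0.6, -0.8, 0.5],
}
print(prune_layers(grads, keep_ratio=0.75))  # → [0, 1, 3]: layer 2 pruned
```

Because the draft is a strict sub-network of the target, its layers, tokenizer, and vocabulary match the target exactly, which is what makes standard speculative verification applicable without any retraining.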

📝 Abstract
Large language models (LLMs) underpin interactive multimedia applications such as captioning, retrieval, recommendation, and creative content generation, yet their autoregressive decoding incurs substantial latency. Speculative decoding reduces latency using a lightweight draft model, but deployment is often limited by the cost and complexity of acquiring, tuning, and maintaining an effective draft model. Recent approaches usually require auxiliary training or specialization, and even training-free methods incur costly search or optimization. We propose SDFP, a fully training-free and plug-and-play framework that builds the draft model via Fisher Information Trace (FIT)-based layer pruning of a given LLM. Using layer sensitivity as a proxy for output perturbation, SDFP removes low-impact layers to obtain a compact draft while preserving compatibility with the original model for standard speculative verification. SDFP needs no additional training, hyperparameter tuning, or separately maintained drafts, enabling rapid, deployment-friendly draft construction. Across benchmarks, SDFP delivers 1.32x-1.5x decoding speedup without altering the target model's output distribution, supporting low-latency multimedia applications.
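The "standard speculative verification" the abstract relies on can be sketched as a draft-then-verify loop. This is a minimal greedy-verification toy, not the paper's method or a real LLM pipeline: the two "models" are deterministic next-token functions, and in a real system the verification pass over all drafted positions happens in a single parallel forward call.

```python
# Minimal sketch of one speculative decoding step (greedy verification):
# a cheap draft model proposes k tokens, the target model checks them, and
# the longest agreeing prefix is accepted plus one token from the target,
# so the final output matches what pure target decoding would produce.

def speculative_step(target, draft, prefix, k=4):
    """One draft-then-verify step; returns the extended token sequence."""
    # 1. Draft k tokens autoregressively with the cheap model.
    proposed = []
    ctx = list(prefix)
    for _ in range(k):
        t = draft(ctx)
        proposed.append(t)
        ctx.append(t)
    # 2. Verify: accept the longest prefix on which the target agrees.
    #    (A real system scores all k positions in one parallel pass.)
    ctx = list(prefix)
    for t in proposed:
        if target(ctx) == t:
            ctx.append(t)
        else:
            break
    # 3. Always emit one token from the target itself, so progress is
    #    guaranteed even when every drafted token is rejected.
    ctx.append(target(ctx))
    return ctx

# Toy models: the target counts upward; the draft agrees for short
# contexts but goes wrong once the context reaches length 3.
target = lambda ctx: len(ctx)
draft = lambda ctx: len(ctx) if len(ctx) < 3 else 99
print(speculative_step(target, draft, [0], k=4))  # → [0, 1, 2, 3]
```

Here two drafted tokens are accepted and a third comes from the target, so one step yields three tokens for a single (parallel) target pass; the speedup reported in the abstract comes from this amortization when the pruned draft agrees with the target often enough.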
Problem

Research questions and friction points this paper is trying to address.

speculative decoding
large language models
latency
draft model
training-free
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculative Decoding
Training-Free
Fisher Information Trace
Layer Pruning
Plug-and-Play
Hanyu Wei
Tsinghua University
Zunhai Su
Tsinghua University
Peng Lu
Advanced Micro Devices, Inc.
Chao Li
Advanced Micro Devices, Inc.
Spandan Tiwari
Advanced Micro Devices, Inc.
Ashish Sirasao
AI@AMD (Compilers, Numerics, Circuits, Systems, AI)
Yuhan Dong
Associate Professor, Tsinghua Shenzhen International Graduate School (Optical wireless communications; Machine learning and optimization)