SpecAttn: Co-Designing Sparse Attention with Self-Speculative Decoding

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high memory overhead of KV caching in long-context large language model inference, which severely limits efficiency. The authors propose a verification-guided sparse attention mechanism that co-designs KV cache selection with the verification step of speculative decoding. By reusing the per-entry criticality information naturally produced during verification to guide draft generation, the method avoids the redundant overhead of a standalone cache-selection pass. This improves draft acceptance rates and system throughput while preserving generation quality: it achieves a 2.81× speedup over standard autoregressive decoding and outperforms existing sparsity-based self-speculative decoding methods by 1.29×.

📝 Abstract
Long-context large language model (LLM) inference has become the norm for today's AI applications. However, it is severely bottlenecked by the increasing memory demands of its KV cache. Previous works have shown that self-speculative decoding with sparse attention, where tokens are drafted using a subset of the KV cache and verified in parallel with full KV cache, speeds up inference in a lossless way. However, this approach relies on standalone KV selection algorithms to select the KV entries used for drafting and overlooks that the criticality of each KV entry is inherently computed during verification. In this paper, we propose SpecAttn, a self-speculative decoding method with verification-guided sparse attention. SpecAttn identifies critical KV entries as a byproduct of verification and only loads these entries when drafting subsequent tokens. This not only improves draft token acceptance rate but also incurs low KV selection overhead, thereby improving decoding throughput. SpecAttn achieves 2.81$\times$ higher throughput over vanilla auto-regressive decoding and 1.29$\times$ improvement over state-of-the-art sparsity-based self-speculative decoding methods.
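The core idea in the abstract — that the criticality of each KV entry is computed for free during verification — can be sketched as follows. During verification, the model attends over the full KV cache anyway; summing the attention mass each cached entry receives across the verified draft tokens yields an importance score, and only the top-scoring entries are loaded when drafting the next tokens. This is a minimal numpy sketch under our own assumptions: the function names, the top-k selection, and the sum-over-drafts aggregation are illustrative simplifications, not the paper's exact algorithm.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def verify_and_select(q_verify, K, V, top_k):
    """Verification over the FULL KV cache (hypothetical helper).

    Returns the attention outputs for the draft tokens being verified,
    plus the indices of the most critical KV entries, obtained as a
    byproduct of the attention weights already computed here.
    """
    d = K.shape[-1]
    scores = q_verify @ K.T / np.sqrt(d)      # (n_draft, n_kv)
    weights = softmax(scores, axis=-1)
    out = weights @ V                          # (n_draft, d)
    # Attention mass each cached entry received across the verified
    # tokens; entries with high mass are treated as critical.
    importance = weights.sum(axis=0)           # (n_kv,)
    keep = np.sort(np.argsort(importance)[-top_k:])
    return out, keep

def draft_step(q_draft, K, V, keep):
    """Draft with sparse attention: load only the selected KV entries."""
    Ks, Vs = K[keep], V[keep]
    w = softmax(q_draft @ Ks.T / np.sqrt(K.shape[-1]), axis=-1)
    return w @ Vs
```

Because `keep` is derived from weights the verifier computes anyway, the selection costs only a reduction and a top-k, rather than a separate KV-selection pass over the cache.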
Problem

Research questions and friction points this paper is trying to address.

KV cache
sparse attention
self-speculative decoding
long-context LLM inference
memory bottleneck
Innovation

Methods, ideas, or system contributions that make the work stand out.

SpecAttn
self-speculative decoding
sparse attention
KV cache optimization
verification-guided