SparK: Query-Aware Unstructured Sparsity with Recoverable KV Cache Channel Pruning

📅 2025-08-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the bottlenecks of linear memory growth in KV caches and quadratic computational complexity in attention during long-context reasoning with large language models (LLMs), this paper proposes a training-free, plug-and-play channel-level dynamic pruning method. Unlike mainstream approaches that compress along the temporal dimension, the method introduces, for the first time, a query-aware unstructured sparsity mechanism that dynamically identifies and retains salient KV channels along the feature (channel) dimension. Pruning-induced information loss is mitigated via attention score reconstruction. The method is fully compatible with existing quantization and compression techniques and requires no fine-tuning. Experiments demonstrate that, under identical memory budgets, it supports longer sequences, reduces KV cache volume by over 30%, and limits performance degradation to less than 5% even at an 80% channel pruning ratio, significantly outperforming conventional token-dropping baselines.

📝 Abstract
Long-context inference in large language models (LLMs) is increasingly constrained by the KV cache bottleneck: memory usage grows linearly with sequence length, while attention computation scales quadratically. Existing approaches address this issue by compressing the KV cache along the temporal axis through strategies such as token eviction or merging to reduce memory and computational overhead. However, these methods often neglect fine-grained importance variations across feature dimensions (i.e., the channel axis), thereby limiting their ability to effectively balance efficiency and model accuracy. In reality, we observe that channel saliency varies dramatically across both queries and positions: certain feature channels carry near-zero information for a given query, while others spike in relevance. To address this oversight, we propose SPARK, a training-free, plug-and-play method that applies unstructured sparsity by pruning KV at the channel level while dynamically restoring the pruned entries during attention score computation. Notably, our approach is orthogonal to existing KV compression and quantization techniques, making it easy to integrate with them for further acceleration. By reducing channel-level redundancy, SPARK enables processing of longer sequences within the same memory budget. For sequences of equal length, SPARK not only preserves or improves model accuracy but also reduces KV cache storage by over 30% compared to eviction-based methods. Furthermore, even with an aggressive pruning ratio of 80%, SPARK limits performance degradation to less than 5% relative to the baseline eviction method, demonstrating its robustness and effectiveness. Our code will be available at https://github.com/Xnhyacinth/SparK.
Problem

Research questions and friction points this paper is trying to address.

KV cache memory grows linearly with sequence length in LLMs
Existing methods ignore fine-grained channel importance variations
Channel saliency varies dramatically across queries and positions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unstructured sparsity pruning at channel level
Dynamically restores pruned entries during computation
Orthogonal to existing compression and quantization techniques
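The core idea behind the bullets above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the channel-saliency score and the `d / n_keep` rescaling (used here as a crude stand-in for SPARK's attention-score reconstruction) are assumptions for demonstration only.

```python
import numpy as np

def channel_pruned_attention_logits(q, k_cache, keep_ratio=0.2):
    """Query-aware channel-level KV pruning (illustrative sketch).

    q:          (d,)   current query vector
    k_cache:    (T, d) cached key vectors
    keep_ratio: fraction of feature channels retained for this query
    """
    d = q.shape[-1]
    n_keep = max(1, int(d * keep_ratio))

    # Query-aware saliency: channels where the query interacts most
    # strongly with the cached keys contribute most to q . k.
    # (Hypothetical scoring rule, not the paper's exact criterion.)
    saliency = np.abs(q) * np.abs(k_cache).mean(axis=0)
    keep = np.argsort(saliency)[-n_keep:]  # indices of top-n_keep channels

    # Approximate attention logits using only the retained channels.
    logits = k_cache[:, keep] @ q[keep]

    # Crude restoration of the dropped mass; the paper's attention-score
    # reconstruction is more principled than this uniform rescaling.
    logits *= d / n_keep

    return logits / np.sqrt(d)
```

Because the pruning mask is recomputed per query, different queries retain different channels, which is what makes the sparsity "query-aware" rather than a fixed structural cut.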
Huanxuan Liao
Institute of Automation, Chinese Academy of Sciences
Natural Language Processing · Large Language Model · Long Context Modeling
Yixing Xu
AMD
machine learning · deep learning
Shizhu He
Institute of Automation, Chinese Academy of Sciences, University of Chinese Academy of Sciences
Guanchen Li
Advanced Micro Devices (China) Co., Ltd.
Xuanwu Yin
Advanced Micro Devices (China) Co., Ltd.
Dong Li
Advanced Micro Devices (China) Co., Ltd.
Emad Barsoum
AMD, Columbia University
Generative AI · Foundation Models · Agentic AI · Computer Vision · ML Frameworks
Jun Zhao
Institute of Automation, Chinese Academy of Sciences, University of Chinese Academy of Sciences
Kang Liu
Institute of Automation, Chinese Academy of Sciences, University of Chinese Academy of Sciences