🤖 AI Summary
To address the bottlenecks of linear memory growth in KV caches and quadratic computational complexity in attention during long-context reasoning with large language models (LLMs), this paper proposes a training-free, plug-and-play channel-level dynamic pruning method. Unlike mainstream approaches that compress along the temporal dimension, the method introduces, for the first time, a query-aware unstructured sparsity mechanism that dynamically identifies and retains salient KV channels along the feature (channel) dimension. Pruning-induced information loss is mitigated via attention score reconstruction. The method is fully compatible with existing quantization and compression techniques and requires no fine-tuning. Experiments demonstrate that, under identical memory budgets, it supports longer sequences; reduces KV cache storage by over 30%; and keeps performance degradation below 5% even at an 80% channel pruning ratio, significantly outperforming conventional token-dropping baselines.
📝 Abstract
Long-context inference in large language models (LLMs) is increasingly constrained by the KV cache bottleneck: memory usage grows linearly with sequence length, while attention computation scales quadratically. Existing approaches address this issue by compressing the KV cache along the temporal axis through strategies such as token eviction or merging to reduce memory and computational overhead. However, these methods often neglect fine-grained importance variations across feature dimensions (i.e., the channel axis), thereby limiting their ability to effectively balance efficiency and model accuracy. In practice, we observe that channel saliency varies dramatically across both queries and positions: certain feature channels carry near-zero information for a given query, while others spike in relevance. To address this oversight, we propose SPARK, a training-free, plug-and-play method that applies unstructured sparsity by pruning the KV cache at the channel level, while dynamically restoring the pruned entries during attention score computation. Notably, our approach is orthogonal to existing KV compression and quantization techniques, so it can be combined with them for further acceleration. By reducing channel-level redundancy, SPARK enables processing of longer sequences within the same memory budget. For sequences of equal length, SPARK not only preserves or improves model accuracy but also reduces KV cache storage by over 30% compared to eviction-based methods. Furthermore, even at an aggressive pruning ratio of 80%, SPARK maintains performance with less than 5% degradation relative to the baseline eviction method, demonstrating its robustness and effectiveness. Our code will be available at https://github.com/Xnhyacinth/SparK.
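To make the core idea concrete, here is a minimal NumPy sketch of query-aware channel-level pruning for a single attention head. The saliency metric (|q| weighted by mean key magnitude) and the mean-based reconstruction of the pruned channels' score contribution are illustrative assumptions, not SPARK's actual procedure; they only show how attention scores can be computed from a channel-sparse KV cache while approximately restoring the pruned part.

```python
import numpy as np

def _softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def channel_pruned_attention(q, K, keep_ratio=0.2):
    """Toy query-aware channel pruning for one attention head.

    q: (d,) query vector; K: (n, d) cached keys.
    Keeps only the top-`keep_ratio` fraction of channels ranked by a
    per-query saliency score, then approximates the pruned channels'
    contribution to the attention logits with a per-channel mean
    (a stand-in for the paper's score reconstruction).
    """
    d = q.shape[-1]
    k_keep = max(1, int(round(d * keep_ratio)))
    # Per-query channel saliency: how much channel c contributes to q @ K.T.
    saliency = np.abs(q) * np.abs(K).mean(axis=0)
    keep = np.argsort(saliency)[-k_keep:]      # salient channels, kept exactly
    drop = np.setdiff1d(np.arange(d), keep)    # pruned channels

    # Exact contribution from the retained (stored) channels.
    scores = K[:, keep] @ q[keep]
    if drop.size:
        # Cheap reconstruction of the pruned contribution: replace each
        # pruned key channel by its mean over positions (illustrative only).
        scores = scores + K[:, drop].mean(axis=0) @ q[drop]
    return _softmax(scores / np.sqrt(d))
```

With `keep_ratio=1.0` this reduces to ordinary scaled dot-product attention; at `keep_ratio=0.2` only 20% of key channels are stored exactly, mirroring the abstract's point that memory shrinks along the feature axis rather than by dropping whole tokens.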