🤖 AI Summary
To address the memory and bandwidth bottlenecks caused by KV caching in long-context generation, this paper proposes the first compression paradigm that separates the two stages, optimizing the prefill and decoding phases distinctly. During prefill, excessive compression is avoided to preserve contextual understanding; during decoding, a sliding-window heavy-hitter selection mechanism, combined with adaptive and discontinuous memory-transfer strategies, dynamically retains critical key-value pairs. The method is plug-and-play compatible with mainstream KV compression techniques. Evaluated on LongGenBench, it achieves significant reductions in memory footprint and memory bandwidth consumption compared to prefill-only compression baselines, while maintaining generation quality and generalizing well across diverse long-context tasks.
📝 Abstract
The Key-Value (KV) cache has become a bottleneck for LLMs in long-context generation. Despite numerous efforts in this area, optimization of the decoding phase is generally ignored. However, we believe such optimization is crucial, especially for long-output generation tasks, based on the following two observations: (i) excessive compression during the prefill phase, which requires the full context for the specific task, impairs comprehension of the reasoning task; (ii) the set of heavy hitters deviates over time in reasoning tasks with long outputs. Therefore, SCOPE, a simple yet efficient framework that separately performs KV cache optimization during the prefill and decoding phases, is introduced. Specifically, the KV cache from the prefill phase is preserved to maintain the essential information, while a novel sliding-based strategy is proposed to select essential heavy hitters for the decoding phase. Memory usage and memory transfer are further optimized using adaptive and discontinuous strategies. Extensive experiments on LongGenBench show the effectiveness and generalization of SCOPE and its compatibility as a plug-in with other prefill-only KV compression methods.
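To make the decoding-phase idea concrete, below is a minimal sketch of sliding-based heavy-hitter selection over decoded tokens: the most recent tokens are always kept, and older KV pairs are retained only if their accumulated attention scores mark them as heavy hitters. The function name, signature, and the use of accumulated attention scores are illustrative assumptions for this summary, not SCOPE's released implementation.

```python
import torch

def select_decoding_kv(attn_scores: torch.Tensor, window_size: int, heavy_budget: int) -> torch.Tensor:
    """Sketch of sliding-based heavy-hitter selection for the decoding-phase KV cache.

    attn_scores: (num_decoded_tokens,) accumulated attention each decoded
                 token's KV pair has received so far (hypothetical bookkeeping).
    window_size: number of most recent decoded tokens always kept (sliding window).
    heavy_budget: number of additional heavy hitters kept from older tokens.

    Returns sorted indices (into the decoded-token axis) of KV pairs to retain.
    """
    num_tokens = attn_scores.shape[0]
    boundary = max(0, num_tokens - window_size)

    # Always keep the most recent tokens (the sliding window).
    recent = torch.arange(boundary, num_tokens)

    # Among older decoded tokens, keep those with the highest accumulated
    # attention, i.e. the current heavy hitters; evict the rest.
    older_scores = attn_scores[:boundary]
    k = min(heavy_budget, older_scores.shape[0])
    heavy = torch.topk(older_scores, k).indices if k > 0 else torch.empty(0, dtype=torch.long)

    return torch.cat([heavy, recent]).sort().values
```

In practice, such a selection could be invoked only every few decoding steps rather than at every token, which loosely corresponds to the discontinuous memory-transfer strategy described above; the budget split between the window and heavy hitters would be where an adaptive policy comes in.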