🤖 AI Summary
In long-context reasoning, KV cache memory consumption grows linearly with sequence length, and existing compression methods degrade performance by discarding tokens. Method: This paper first identifies a universal decay pattern of attention importance with respect to relative token distance, and proposes a lossless KV cache compression mechanism that discards no tokens, jointly leveraging attention distribution modeling, cross-layer sharing of attention scores, dynamic quantization, and sparsification. Contribution/Results: The core innovation lies in exploiting cross-layer attention similarity to guide compression, thereby preserving critical information. Experiments demonstrate that the method reduces the KV cache memory footprint by 35% while maintaining generation quality, significantly improving throughput and memory efficiency for long-context inference.
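The sketch below illustrates one way the cross-layer score-sharing idea could look in practice: attention scores for distant tokens are reused from an earlier layer, and exact scores are recomputed only for a recent window. The tensor shapes, the `recent_window` split, and the function name are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only (assumed shapes and names, not the paper's code):
# reuse attention scores from an earlier layer for distant tokens and
# recompute exact scores only for the most recent window of the KV cache.
import torch
import torch.nn.functional as F

def attention_with_shared_distant_scores(q, k, v, prev_layer_scores, recent_window):
    """q: (heads, 1, d); k, v: (heads, seq, d);
    prev_layer_scores: (heads, 1, seq - recent_window), cached from an earlier layer."""
    d = q.size(-1)
    split = k.size(1) - recent_window
    # Exact scores only for the recent tokens, which matter most for the next token.
    recent_scores = q @ k[:, split:].transpose(-2, -1) / d ** 0.5
    # Distant tokens reuse the earlier layer's scores instead of re-reading their keys.
    scores = torch.cat([prev_layer_scores, recent_scores], dim=-1)  # (heads, 1, seq)
    weights = F.softmax(scores, dim=-1)
    return weights @ v  # (heads, 1, d)

# Toy usage with random tensors.
heads, seq, d, recent_window = 8, 128, 64, 32
q = torch.randn(heads, 1, d)
k, v = torch.randn(heads, seq, d), torch.randn(heads, seq, d)
prev_scores = torch.randn(heads, 1, seq - recent_window)  # stand-in for a previous layer's scores
print(attention_with_shared_distant_scores(q, k, v, prev_scores, recent_window).shape)
```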
📝 Abstract
The increasing context window size of Large Language Models (LLMs), such as the GPT and LLaMA series, has improved their ability to tackle complex, long-text tasks, but at the cost of inference efficiency, particularly in memory footprint and computational complexity. Existing methods, including selective token retention and window-based attention, improve efficiency but risk discarding important tokens needed for future text generation. In this paper, we propose an approach that enhances LLM efficiency without token loss by reducing the memory and computational load of less important tokens rather than discarding them. We address two challenges: 1) investigating the distribution of important tokens in the context, discovering that recent tokens are more important than distant tokens; and 2) optimizing resources for distant tokens by sharing attention scores across layers. Experiments show that our method saves 35% of the KV cache without compromising performance.
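As a concrete illustration of reducing the memory load of distant tokens rather than dropping them, the minimal sketch below stores older KV-cache entries in int8 with per-token scales while keeping the recent window in full precision. The split point, scaling scheme, and function names are assumptions for illustration, not the paper's actual compression scheme.

```python
# Minimal sketch under assumed settings (per-token int8 scales, full-precision
# recent window); not the paper's implementation. No token is discarded:
# distant entries are only stored more cheaply and restored on demand.
import torch

def compress_kv(kv, recent_window):
    """kv: (seq, d) cached keys or values for one head."""
    distant, recent = kv[:-recent_window], kv[-recent_window:]
    scale = distant.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    distant_q = torch.round(distant / scale).to(torch.int8)  # 1 byte per element
    return distant_q, scale, recent  # recent tokens stay in full precision

def decompress_kv(distant_q, scale, recent):
    # Dequantize distant tokens and reattach the exact recent window.
    return torch.cat([distant_q.float() * scale, recent], dim=0)

kv = torch.randn(1024, 64)
distant_q, scale, recent = compress_kv(kv, recent_window=128)
restored = decompress_kv(distant_q, scale, recent)
print((restored - kv).abs().max())  # small quantization error, confined to distant tokens
```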