Compressing KV Cache for Long-Context LLM Inference with Inter-Layer Attention Similarity

📅 2024-12-03
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
In long-context reasoning, KV cache memory consumption grows linearly with sequence length, and existing compression methods degrade performance by discarding tokens. Method: the paper first identifies a universal decay pattern of attention importance with respect to relative token distance, then proposes a KV cache compression mechanism that discards no tokens, jointly leveraging attention distribution modeling, cross-layer sharing of attention scores, dynamic quantization, and sparsification. Contribution/Results: the core innovation lies in exploiting cross-layer attention similarity to guide compression, thereby preserving critical information. Experiments show that the method reduces the KV cache memory footprint by 35% without degrading generation quality, significantly improving throughput and memory efficiency for long-context inference.
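
The cross-layer sharing idea at the heart of the summary can be sketched concretely: for distant tokens, a layer reuses the key states of an earlier, attention-similar layer instead of storing its own. Below is a minimal PyTorch-style sketch; the function name, the fixed recent_window, and the share_from map are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def attend_with_shared_distant_keys(q, kv_cache, layer, share_from, recent_window=512):
    """Attention for one layer where distant tokens reuse an earlier layer's keys.

    q:          (heads, q_len, d) queries of the current layer
    kv_cache:   dict mapping layer index -> (K, V), each (heads, seq_len, d)
    share_from: hypothetical map layer -> earlier layer whose distant-token
                keys this layer reuses (so its own distant keys need not be kept)
    """
    K, V = kv_cache[layer]
    seq_len, d = K.shape[1], K.shape[2]
    split = max(seq_len - recent_window, 0)

    # Recent tokens keep their own per-layer keys (highest attention importance).
    K_recent, V_recent = K[:, split:], V[:, split:]

    # Distant tokens borrow keys from an attention-similar earlier layer;
    # the current layer's distant keys could therefore be dropped from memory.
    src = share_from.get(layer, layer)
    K_distant = kv_cache[src][0][:, :split]
    V_distant = V[:, :split]  # values stay per-layer in this sketch

    K_all = torch.cat([K_distant, K_recent], dim=1)
    V_all = torch.cat([V_distant, V_recent], dim=1)
    scores = torch.softmax(q @ K_all.transpose(-1, -2) / d**0.5, dim=-1)
    return scores @ V_all
```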

📝 Abstract
The increasing context window size in Large Language Models (LLMs), such as the GPT and LLaMA series, has improved their ability to tackle complex, long-text tasks, but at the cost of inference efficiency, particularly regarding memory and computational complexity. Existing methods, including selective token retention and window-based attention, improve efficiency but risk discarding important tokens needed for future text generation. In this paper, we propose an approach that enhances LLM efficiency without token loss by reducing the memory and computational load of less important tokens, rather than discarding them. We address two challenges: 1) investigating the distribution of important tokens in the context, discovering that recent tokens are more important than distant tokens in context, and 2) optimizing resources for distant tokens by sharing attention scores across layers. The experiments show that our method saves 35% of the KV cache without compromising the performance.
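
To put the reported 35% saving in perspective, here is a back-of-the-envelope calculation using assumed LLaMA-2-7B-style dimensions; these numbers are illustrative, not figures from the paper:

```python
# Assumed LLaMA-2-7B-like shape; all numbers here are illustrative.
layers, heads, head_dim = 32, 32, 128
bytes_per_value = 2          # fp16
context_len = 32_768

per_token = 2 * layers * heads * head_dim * bytes_per_value  # K and V states
total_gib = per_token * context_len / 2**30
print(f"full KV cache:    {total_gib:.1f} GiB")          # 16.0 GiB
print(f"with 35% savings: {0.65 * total_gib:.1f} GiB")   # 10.4 GiB
```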
Problem

Research questions and friction points this paper is trying to address.

Reduce KV cache memory usage in LLMs
Preserve important tokens without discarding
Share key states for distant tokens
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compress KV cache via inter-layer attention similarity
Retain less important tokens in compact shared form
Share key states across layers for distant tokens (see the sketch below)
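
The third innovation point, deciding which layers can share key states, hinges on measuring inter-layer attention similarity. A minimal sketch follows; the calibration setup, cosine-similarity criterion, and threshold are assumptions for illustration, not the paper's stated procedure.

```python
import torch
import torch.nn.functional as F

def build_share_map(attn_maps, threshold=0.9):
    """Choose, per layer, an earlier 'anchor' layer whose keys it may reuse.

    attn_maps: per-layer attention tensors of identical shape, e.g. averaged
    over a small calibration batch. A layer whose attention distribution is
    close enough to its anchor's (cosine similarity above `threshold`, an
    assumed criterion) shares that anchor's distant-token key states.
    """
    share_from = {}
    flat = [a.flatten() for a in attn_maps]
    for layer in range(1, len(flat)):
        anchor = share_from.get(layer - 1, layer - 1)  # chain to the last key "owner"
        sim = F.cosine_similarity(flat[layer], flat[anchor], dim=0)
        if sim > threshold:
            share_from[layer] = anchor
    return share_from
```

A share_from map built this way could feed directly into the attention sketch shown after the AI summary above.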
👥 Authors

Da Ma
Assistant Professor, School of Medicine, Wake Forest University
Medical Image Computing · Computational Neuroanatomy · Radiogenomics · Neurodegenerative Disease
Lu Chen
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China
Situo Zhang
Shanghai Jiao Tong University
Large Language Models · Reinforcement Learning
Yuxun Miao
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China
Su Zhu
AISpeech Co., Ltd., Suzhou, China
Zhi Chen
ByteDance
Hongshen Xu
Shanghai Jiao Tong University
Natural Language Processing · Large Language Model · LLM Alignment
Hanqi Li
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China
Shuai Fan
AISpeech Co., Ltd., Suzhou, China
Lei Pan
Michigan Technological University
Wetting Film · Froth Flotation · thin liquid film · surface force · hydrophobic force
Kai Yu
X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, SJTU AI Institute, Shanghai Jiao Tong University, Shanghai, China; AISpeech Co., Ltd., Suzhou, China