A Systematic Study of Cross-Layer KV Sharing for Efficient LLM Inference

📅 2024-10-18
🏛️ arXiv.org
📈 Citations: 2
Influential: 1
🤖 AI Summary
This paper addresses the memory and computational overhead induced by KV cache redundancy in large language model (LLM) inference. We propose the first systematic, cross-layer KV cache sharing framework, unifying mainstream strategies and novel variants under a single analytical model. Methodologically, we design a structured KV reuse mechanism enabling flexible inter-layer sharing, jointly optimizing for throughput gains and preservation of language modeling and downstream task performance. Key findings: at 2× (50%) cache reduction, most sharing configurations beat standard Transformers on throughput while staying competitive on task performance; under more aggressive compression, pairing all layers with upper-layer KV states achieves the best trade-offs, exposing a Pareto frontier between training overhead and prefill latency.

📝 Abstract
Recently, sharing key-value (KV) cache across layers has been found effective in efficient inference of large language models (LLMs). To systematically investigate different techniques of cross-layer KV sharing, we propose a unified framework that covers several recent methods and their novel variants. We conduct comprehensive experiments on all the configurations of the framework, evaluating their generation throughput and performance in language modeling and downstream tasks. We find that when reducing the size of the KV cache by 2×, most configurations can achieve higher throughput than standard transformers while maintaining competitive performance. When further reducing the size of the KV cache, however, pairing queries of all layers with KVs of upper layers performs better, at the expense of additional training cost and prefilling latency. We hope that this work will help users make more informed choices of cross-layer KV sharing approaches and facilitate future research on efficient LLM inference.
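The core mechanism described above — several layers reading KV states computed by a single donor layer, so only donor layers write to the cache — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the layer-to-donor map `kv_donor`, the single-head attention, and the 4-layer setup are all assumptions chosen to show how a 2× cache reduction arises when half the layers share KVs.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Single-head scaled dot-product attention (no masking, toy version).
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

rng = np.random.default_rng(0)
n_layers, seq, d = 4, 5, 8

# Hypothetical sharing map: layer -> layer whose KV it reuses.
# Layers 2 and 3 reuse layer 1's KV, so only layers 0 and 1 ever
# write to the cache -> cache size is halved (2x reduction).
kv_donor = {0: 0, 1: 1, 2: 1, 3: 1}

Wq = [rng.standard_normal((d, d)) for _ in range(n_layers)]
Wk = [rng.standard_normal((d, d)) for _ in range(n_layers)]
Wv = [rng.standard_normal((d, d)) for _ in range(n_layers)]

kv_cache = {}  # only donor layers store (K, V) here
h = rng.standard_normal((seq, d))
for layer in range(n_layers):
    donor = kv_donor[layer]
    if donor == layer:
        # This layer computes and caches its own KV projections.
        kv_cache[layer] = (h @ Wk[layer], h @ Wv[layer])
    # Every layer keeps its own query projection but reads the
    # donor layer's cached KV.
    k, v = kv_cache[donor]
    h = h + attention(h @ Wq[layer], k, v)

print(f"{len(kv_cache)} KV cache entries for {n_layers} layers")
```

Note the constraint this toy sidesteps: with `kv_donor[layer] <= layer`, the donor's KV is already available when a lower layer runs. Pairing layers with *upper*-layer KVs (the configuration the abstract finds best under aggressive compression) requires extra forward passes during prefilling, which is the source of the added latency and training cost mentioned above.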
Problem

Research questions and friction points this paper is trying to address.

Optimize KV cache sharing
Enhance LLM inference efficiency
Evaluate cross-layer KV techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework for KV sharing
Reduced KV cache size by 2x
Layer pairing with upper KVs
You Wu
School of Information Science and Technology, ShanghaiTech University, Shanghai Engineering Research Center of Intelligent Vision and Imaging
Haoyi Wu
ShanghaiTech University
Kewei Tu
School of Information Science and Technology, ShanghaiTech University, China
Natural Language Processing · Machine Learning