KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference

📅 2025-02-06
🤖 AI Summary
KV cache quantization for long-context inference in large language models (LLMs) faces three key bottlenecks: neglect of layer-wise sensitivity, high overhead of online fine-grained precision decisions, and poor generalizability across models and hardware constraints. Method: This paper proposes KVTuner, a sensitivity-aware layer-wise mixed-precision KV cache quantization framework. It empirically shows that transformer layers differ in their sensitivity to KV cache quantization and that the key cache contributes more to quantization error than the value cache. Exploiting the correlation between layer-wise attention patterns and quantization error, it introduces intra-layer precision-pair pruning and inter-layer clustering to shrink the search space, yielding a hardware-friendly paradigm that decouples offline multi-objective search from lightweight online deployment. Results: The framework achieves nearly lossless quantization at 3.25 bits on Llama-3.1-8B-Instruct and 4.0 bits on the more sensitive Qwen2.5-7B-Instruct, improving maximum inference throughput by up to 38.3% over the KV8 baseline while adapting well across models and hardware constraints.

📝 Abstract
KV cache quantization can improve Large Language Model (LLM) inference throughput and latency in long-context and large batch-size scenarios while preserving LLM effectiveness. However, current methods have three unsolved issues: overlooking layer-wise sensitivity to KV cache quantization, high overhead of online fine-grained decision-making, and low flexibility to different LLMs and constraints. Therefore, we thoroughly analyze the inherent correlation of layer-wise transformer attention patterns to KV cache quantization errors and study why the key cache is more important than the value cache for quantization error reduction. We further propose a simple yet effective framework, KVTuner, to adaptively search for the optimal hardware-friendly layer-wise KV quantization precision pairs for coarse-grained KV cache with multi-objective optimization, and directly utilize the offline-searched configurations during online inference. To reduce the computational cost of offline calibration, we utilize intra-layer KV precision-pair pruning and inter-layer clustering to reduce the search space. Experimental results show that we can achieve nearly lossless 3.25-bit mixed-precision KV cache quantization for LLMs like Llama-3.1-8B-Instruct and 4.0-bit for sensitive models like Qwen2.5-7B-Instruct on mathematical reasoning tasks. The maximum inference throughput can be improved by 38.3% compared with KV8 quantization over various context lengths.
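The core idea of layer-wise mixed-precision KV quantization can be illustrated with a minimal sketch. This is not KVTuner's implementation: the uniform asymmetric quantizer, the tensor shapes, and the example `(key_bits, value_bits)` configuration below are all simplifying assumptions; the paper's actual grouping and scale handling may differ. The sketch only shows how an offline-searched precision-pair list would be applied per layer, with keys typically kept at higher precision than values.

```python
import numpy as np

def fake_quantize(x, n_bits):
    """Uniform asymmetric quantize-dequantize to n_bits.
    Generic sketch; KVTuner's exact quantizer/grouping may differ."""
    qmax = 2 ** n_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = np.clip(np.round((x - lo) / scale), 0, qmax)
    return q * scale + lo

def quantize_kv_cache(k_layers, v_layers, precision_pairs):
    """Apply one (key_bits, value_bits) pair per layer.
    precision_pairs plays the role of the offline-searched config."""
    return [
        (fake_quantize(k, kb), fake_quantize(v, vb))
        for k, v, (kb, vb) in zip(k_layers, v_layers, precision_pairs)
    ]

# Toy example: 2 layers; keys get more bits than values, reflecting the
# paper's finding that the key cache is more quantization-sensitive.
rng = np.random.default_rng(0)
k_layers = [rng.standard_normal((4, 8)) for _ in range(2)]
v_layers = [rng.standard_normal((4, 8)) for _ in range(2)]
cfg = [(4, 2), (8, 4)]  # hypothetical layer-wise precision pairs
cache = quantize_kv_cache(k_layers, v_layers, cfg)
```

At inference time, only the table lookup into `cfg` happens online; the expensive multi-objective search that produces it runs once offline.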
Problem

Research questions and friction points this paper is trying to address.

Optimize KV cache quantization for LLM efficiency.
Address layer-wise sensitivity in quantization processes.
Reduce computational overhead in offline calibration.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Layer-wise sensitivity analysis
Multi-objective optimization framework
Intra-layer precision pruning
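The two search-space reductions above can be sketched as follows. Both rules here are stand-ins: the pruning rule (keep only pairs with key bits ≥ value bits, since the key cache is more sensitive) is a simplification, and equal-width 1-D binning of per-layer sensitivity scores substitutes for the paper's actual clustering method. The point is the combinatorics: a full search over L layers costs |pairs|^L configurations, while searching one pair per cluster costs |pairs|^C with C ≪ L.

```python
import itertools

def prune_pairs(bit_options):
    """Intra-layer pruning: keep (key_bits, value_bits) pairs with
    key_bits >= value_bits -- a simplified stand-in for KVTuner's rule,
    motivated by the key cache being more quantization-sensitive."""
    return [(kb, vb)
            for kb, vb in itertools.product(bit_options, repeat=2)
            if kb >= vb]

def cluster_layers(sensitivities, n_clusters):
    """Inter-layer clustering: group layers by a scalar sensitivity
    score so one precision pair is searched per cluster. Equal-width
    binning here is an assumption, not the paper's method."""
    lo, hi = min(sensitivities), max(sensitivities)
    width = (hi - lo) / n_clusters or 1.0
    return [min(int((s - lo) / width), n_clusters - 1)
            for s in sensitivities]

pairs = prune_pairs([2, 4, 8])          # 9 candidate pairs -> 6
groups = cluster_layers([0.1, 0.2, 0.9, 0.8], n_clusters=2)
```

With 6 surviving pairs and 2 clusters instead of 4 layers, the offline search enumerates 6^2 = 36 configurations rather than 9^4 = 6561.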