DeltaKV: Residual-Based KV Cache Compression via Long-Range Similarity

📅 2026-02-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the memory bottleneck in long-context large language model inference, where KV cache memory grows linearly with context length and hinders deployment efficiency. The authors propose DeltaKV, a framework that, for the first time, applies residual encoding to KV cache compression. By modeling semantic similarity among distant tokens, DeltaKV stores only the residuals of KV states relative to historical reference points. Coupled with Sparse-vLLM, a hardware-aware sparse inference engine, the framework enables efficient memory management and computation. Evaluated on LongBench, SCBench, and AIME, DeltaKV achieves near-lossless accuracy while compressing the KV cache to 29% of its original size and delivering up to a 2× throughput improvement.

๐Ÿ“ Abstract
The deployment of efficient long-context LLMs in applications like autonomous agents, long-chain reasoning, and creative writing is fundamentally bottlenecked by the linear growth of KV cache memory. Existing compression and eviction methods often struggle to balance accuracy, compression ratio, and hardware efficiency. We propose DeltaKV, a residual-based KV cache compression framework motivated by two empirical findings: long-range inter-token similarity and highly shared latent components in KV representations. Instead of discarding tokens, DeltaKV encodes semantic residuals relative to retrieved historical references, preserving fidelity while substantially reducing storage. To translate compression gains into real system speedups, we further introduce Sparse-vLLM, a high-performance inference engine with decoupled memory management and kernels optimized for sparse and irregular KV layouts. Experiments show that DeltaKV reduces KV cache memory to 29% of the original while maintaining near-lossless accuracy on LongBench, SCBench, and AIME. When integrated with Sparse-vLLM, it achieves up to 2× throughput improvement over vLLM in long-context scenarios, demonstrating a practical path toward scalable long-context LLM deployment. Code, model checkpoints, and datasets are available at https://github.com/CURRENTF/Sparse-vLLM.
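The core idea of residual encoding against retrieved references can be illustrated with a minimal sketch. This is a hypothetical illustration, not the paper's implementation: the similarity metric (cosine), the threshold value, and the function names are assumptions, and a real system would additionally quantize the small-magnitude residuals to realize the storage savings.

```python
import numpy as np

def compress_residual(kv, refs, threshold=0.9):
    """Encode each KV vector as a residual against its most similar
    historical reference when cosine similarity exceeds `threshold`;
    otherwise store it in full. Hypothetical sketch only."""
    entries = []
    ref_norms = np.linalg.norm(refs, axis=1)
    for v in kv:
        sims = refs @ v / (ref_norms * np.linalg.norm(v) + 1e-8)
        j = int(np.argmax(sims))
        if sims[j] >= threshold:
            # Residual has small magnitude, so it is cheap to quantize/store.
            entries.append(("residual", j, v - refs[j]))
        else:
            entries.append(("full", None, v))
    return entries

def decompress_residual(entries, refs):
    """Reconstruct KV vectors by adding residuals back to their references."""
    return np.stack([refs[j] + d if tag == "residual" else d
                     for tag, j, d in entries])
```

Vectors similar to a historical reference are stored as deltas; dissimilar ones fall back to full storage, which is how fidelity is preserved without discarding tokens.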
Problem

Research questions and friction points this paper is trying to address.

KV cache compression
long-context LLMs
memory bottleneck
inference efficiency
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

KV cache compression
residual-based encoding
long-range similarity
sparse inference engine
long-context LLM