🤖 AI Summary
To address attention error propagation and degraded generation quality caused by 2-bit KV cache quantization in large language model (LLM) inference, this paper proposes KVLinC. The method introduces three key components: (i) Hadamard rotation preprocessing of the value (V) tensor to reduce quantization sensitivity; (ii) a lightweight low-rank linear correction adapter that explicitly compensates for quantization errors in the key (K) tensor; and (iii) a customized attention kernel enabling efficient decompression and computation. Evaluated on LLaMA, Qwen2.5, and Qwen3 models, KVLinC preserves near-full-precision generation quality under 2-bit KV quantization while achieving up to 2.55× inference speedup over FlashAttention. It significantly outperforms existing quantization baselines in both accuracy and efficiency, offering a practical solution for memory-constrained, high-throughput LLM deployment.
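The intuition behind the Hadamard rotation in component (i) can be illustrated numerically: an orthonormal rotation spreads outlier energy across all channels, so a per-tensor 2-bit quantizer wastes less of its range on a single spike. The sketch below is an assumption-laden toy (Sylvester-constructed Hadamard matrix, symmetric round-to-nearest INT2 with levels {-2,-1,0,1}), not the paper's actual quantizer:

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)  # orthonormal: H @ H.T == I

def quantize_2bit(x):
    # toy symmetric 2-bit quantizer with a single per-vector scale;
    # representable levels are {-2, -1, 0, 1} * scale
    scale = np.abs(x).max() / 1.5
    return np.clip(np.round(x / scale), -2, 1) * scale

rng = np.random.default_rng(0)
d = 128
v = rng.normal(size=d)
v[7] = 12.0  # inject an outlier, which inflates the quantization scale

H = hadamard(d)
err_direct = np.linalg.norm(quantize_2bit(v) - v)
v_hat = H.T @ quantize_2bit(H @ v)  # quantize in rotated space, rotate back
err_rot = np.linalg.norm(v_hat - v)
print(err_direct, err_rot)  # rotated quantization error is smaller here
```

Because the rotation is orthonormal, it can be undone exactly after dequantization; only the quantization error itself changes, which is why rotating values before caching them reduces the error that later flows through attention.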
📝 Abstract
Quantizing the key-value (KV) cache is a promising strategy for improving the inference efficiency of large language models (LLMs). However, aggressive quantization to very low precision (e.g., 2 bits) introduces significant errors in the stored key and value tensors, which propagate through the dot-product attention mechanism and ultimately degrade generation quality. To address this, we propose KVLinC, a framework to mitigate attention errors introduced by KV cache quantization in the extreme low-precision regime. KVLinC combines a Hadamard rotation, which reduces quantization error in values, with lightweight linear correction adapters that explicitly compensate for errors introduced by quantized keys. Across extensive evaluations on the LLaMA, Qwen2.5, and Qwen3 model families, KVLinC consistently matches or surpasses strong baselines while achieving higher KV-cache compression. Furthermore, we implement a custom attention kernel that delivers up to 2.55x faster inference than the FlashAttention baseline, enabling efficient long-context LLM inference.
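The linear correction adapters can be pictured as a cheap low-rank term added on top of the quantized keys. The abstract does not specify their exact parameterization, so the sketch below substitutes a truncated-SVD rank-r approximation of the key quantization residual purely as an illustration of the mechanism (the shapes T, d and rank r are arbitrary, and the 2-bit quantizer is a toy):

```python
import numpy as np

rng = np.random.default_rng(1)
T, d, r = 256, 64, 8  # sequence length, head dim, adapter rank (assumed)

K = rng.normal(size=(T, d))
scale = np.abs(K).max() / 1.5
K_q = np.clip(np.round(K / scale), -2, 1) * scale  # toy 2-bit quantized keys
E = K - K_q                                        # quantization residual

# Best rank-r approximation of the residual (Eckart-Young) stands in for
# a learned lightweight correction: attention would use K_q + E_lr
U, s, Vt = np.linalg.svd(E, full_matrices=False)
E_lr = (U[:, :r] * s[:r]) @ Vt[:r]

err_before = np.linalg.norm(E)
err_after = np.linalg.norm(E - E_lr)
print(err_before, err_after)  # corrected keys lie closer to full precision
```

Storing only the rank-r factors keeps the memory overhead small relative to the full-precision keys, which is the trade-off that lets a 2-bit cache plus a lightweight adapter approach full-precision attention quality.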