PrefixQuant: Eliminating Outliers by Prefixed Tokens for Large Language Models Quantization

📅 2024-10-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address accuracy degradation in large language model (LLM) quantization caused by token-wise outliers, which prior methods overlook, this paper proposes PrefixQuant: isolating outlier tokens by prefixing them in the KV cache, a training-free and highly efficient step. Coupled with new trainable parameters optimized via block-wise training, the method compensates for quantization error and is compatible with both dynamic and static quantization schemes. On Llama-3-8B under W4A4KV4 quantization, it achieves +3.08 (dynamic) and +2.85 (static) average zero-shot accuracy gains over SpinQuant, while delivering up to 2.74x prefilling and 2.16x decoding speedups. The core contribution is the first token-level outlier-isolation technique via prefixed tokens, balancing ultra-low-bit accuracy preservation and inference efficiency.
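The isolation idea described above can be sketched in a toy form: detect tokens with extreme activation magnitudes, then exclude them from the static quantization range so the scale for the remaining tokens shrinks. This is an illustrative sketch only, not the paper's implementation; the threshold heuristic, function names, and INT4 symmetric range are assumptions.

```python
# Toy sketch of token-wise outlier isolation (assumed heuristic, not
# the PrefixQuant code). A token counts as an outlier if its peak
# |activation| exceeds `threshold` times the median token peak.

def find_outlier_tokens(activations, threshold=10.0):
    """Return indices of tokens with extreme peak activation magnitude."""
    peaks = [max(abs(v) for v in tok) for tok in activations]
    median_peak = sorted(peaks)[len(peaks) // 2]
    return [i for i, p in enumerate(peaks) if p > threshold * median_peak]

def static_quant_scale(activations, skip=()):
    """Per-tensor symmetric INT4 scale, computed over non-outlier tokens."""
    vals = [abs(v)
            for i, tok in enumerate(activations) if i not in skip
            for v in tok]
    return max(vals) / 7.0  # symmetric INT4 positive range is [0, 7]

# Four tokens of 2-dim activations; token 1 carries a massive outlier.
acts = [[0.1, -0.2], [120.0, 3.0], [0.3, 0.15], [0.05, -0.4]]
outliers = find_outlier_tokens(acts)                    # -> [1]
scale = static_quant_scale(acts, skip=set(outliers))    # 0.4 / 7
naive = static_quant_scale(acts)                        # 120.0 / 7
```

With the outlier token isolated (in the paper, moved into a fixed KV-cache prefix), the static scale drops by roughly 300x here, so the remaining tokens retain far more precision.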

📝 Abstract
Existing weight-activation quantization methods for Large Language Models (LLMs) primarily address channel-wise outliers but often neglect token-wise outliers, which limits the accuracy of quantized models. In this work, we propose PrefixQuant, a novel quantization method that achieves state-of-the-art performance across various precision levels (W4A4KV4 and W4A8KV4) and granularities (dynamic and static quantization) by effectively isolating token-wise outliers. First, PrefixQuant eliminates token-wise outliers by prefixing outlier tokens in the KV cache, a process that is training-free and highly efficient (e.g., 1 minute for Llama-3-70B). Second, PrefixQuant introduces new trainable parameters for block-wise training to compensate for quantization error. Our experiments show that PrefixQuant significantly outperforms existing dynamic quantization methods, even under coarser static quantization settings. For instance, PrefixQuant achieves an average accuracy improvement of +3.08 and +2.85 points over SpinQuant (dynamic quantization) on five zero-shot reasoning tasks under dynamic and static quantization settings, respectively, on W4A4KV4 Llama-3-8B. Additionally, we demonstrate up to 2.74x prefilling speedup and 2.16x decoding speedup for LLMs using W4A4 PrefixQuant. Our code is available at https://github.com/ChenMnZ/PrefixQuant.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Quantization
Accuracy Degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

PrefixQuant
Quantization Methodology
Performance Optimization