🤖 AI Summary
To address the significant accuracy degradation of large language models (LLMs) under ultra-low-bit (3/4-bit) quantization, this paper proposes a dynamic error-compensation mechanism. It identifies outlier-prone activation channels to trigger on-demand residual loading from CPU memory and performs real-time, layer-wise activation correction on the GPU. The method integrates dynamic channel selection, CPU-GPU collaborative residual storage, and optimized low-bit quantized-weight inference. While preserving the memory savings and low latency of quantization, it substantially restores model quality: the perplexity of a 3-bit Llama-3-8B-Instruct model drops from 10.15 to 9.12, surpassing a 3.5-bit baseline, with under 0.0003% additional GPU memory overhead and only a 1.7% increase in inference latency. The core innovation is an activation-driven, lightweight, layer-wise, and dynamic residual-compensation paradigm.
📝 Abstract
Quantization of Large Language Models (LLMs) has recently gained popularity, particularly for on-device settings with limited hardware resources. While efficient, quantization inevitably degrades model quality, especially at aggressive low-bit settings such as 3-bit and 4-bit precision. In this paper, we propose QDEC, an inference scheme that improves the quality of low-bit LLMs while preserving the key benefits of quantization: GPU memory savings and reduced inference latency. QDEC stores the residual matrix -- the difference between the full-precision and quantized weights -- in CPU memory, and dynamically fetches the residuals for only a small portion of the weights. This portion corresponds to the salient channels, marked by activation outliers; the fetched residuals help correct quantization errors in these channels. Salient channels are identified dynamically at each decoding step by analyzing the input activations -- this adapts to the dynamic nature of the activation distribution and thus maximizes the effectiveness of error compensation. We demonstrate the effectiveness of QDEC by augmenting state-of-the-art quantization methods. For example, QDEC reduces the perplexity of a 3-bit Llama-3-8B-Instruct model from 10.15 to 9.12 -- outperforming its 3.5-bit counterpart -- while adding less than 0.0003% to GPU memory usage and incurring only a 1.7% inference slowdown on an NVIDIA RTX 4050 Mobile GPU. The code will be made publicly available soon.
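The abstract's core idea -- keep the weight residual off-GPU and apply it only to the input channels whose activations are outliers at the current decoding step -- can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the uniform quantizer, the toy shapes, the top-k selection by activation magnitude, and the `qdec_forward` helper are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer y = x @ W.T (in_features=64, out_features=32).
in_f, out_f = 64, 32
W = rng.normal(size=(out_f, in_f)).astype(np.float32)

def quantize(w, bits=3):
    """Crude symmetric uniform quantizer, a stand-in for a real 3-bit method."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return (np.round(w / scale) * scale).astype(np.float32)

Wq = quantize(W)          # low-bit weights, resident on GPU in QDEC
R = W - Wq                # residual matrix; in QDEC this stays in CPU memory

def qdec_forward(x, k=8):
    """Quantized matmul plus residual correction on the k most salient
    input channels, chosen per decoding step from activation magnitudes."""
    # Dynamic channel selection: largest-magnitude activations mark outliers.
    salient = np.argsort(-np.abs(x).max(axis=0))[:k]
    y = x @ Wq.T
    # Fetch only the residual columns for the salient channels
    # (the CPU -> GPU transfer in the real system) and correct the output.
    y += x[:, salient] @ R[:, salient].T
    return y

# Activations with a few outlier channels, as the abstract describes.
x = rng.normal(size=(1, in_f)).astype(np.float32)
x[:, :8] *= 10.0

err_quant = np.abs(x @ W.T - x @ Wq.T).mean()       # plain quantized error
err_comp = np.abs(x @ W.T - qdec_forward(x)).mean()  # with compensation
assert err_comp < err_quant
```

Because the outlier channels dominate the product, correcting just those few columns removes most of the quantization error while touching only a tiny slice of the residual matrix, which is what lets QDEC keep both the memory savings and the latency advantage.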