🤖 AI Summary
Existing post-training quantization methods for large language models (LLMs) suffer from two key limitations: (1) they ignore the heterogeneous contributions of hidden-layer features to the final loss, and (2) they fail to model dependencies among weights within each output channel. To address these issues, we propose GuidedQuant, a gradient-aware non-uniform quantization framework. First, we explicitly incorporate gradients of the final loss with respect to the weights into the quantization objective, while preserving cross-weight dependencies within each output channel. Second, we design a non-uniform scalar quantization algorithm that is guaranteed to monotonically decrease the quantization objective. Third, we extend the framework to joint weight–activation quantization. Evaluated on multiple LLMs (including Llama-2/3 and Qwen) and standard benchmarks, our method consistently outperforms state-of-the-art approaches in weight-only scalar, weight-only vector, and joint weight–activation quantization, achieving significant improvements in accuracy recovery.
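The idea of weighting quantization error by end-loss gradients can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: `guided_objective` is a hypothetical name, and the purely diagonal (per-weight) weighting shown here omits the cross-weight dependencies within each output channel that the full method additionally models.

```python
import numpy as np

def guided_objective(W, W_q, G):
    """Hypothetical gradient-weighted layer-wise quantization objective.

    W   : original weights, shape (out_channels, in_features)
    W_q : quantized weights, same shape
    G   : gradients of the end loss w.r.t. W, same shape

    A plain layer-wise objective would be ||W - W_q||^2; weighting each
    squared error by the squared gradient emphasizes weights whose
    perturbation most affects the final loss (diagonal approximation).
    """
    return float(np.sum((G ** 2) * (W - W_q) ** 2))
```

Under this weighting, rounding error on a weight with a large gradient is penalized far more than the same error on a weight the end loss barely depends on.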
📝 Abstract
Post-training quantization is a key technique for reducing the memory footprint and inference latency of large language models by quantizing weights and activations without retraining. However, existing methods either (1) fail to account for the varying importance of hidden features to the end loss or, when they do incorporate the end loss, (2) neglect the critical interactions between model weights. To address these limitations, we propose GuidedQuant, a novel quantization approach that integrates gradient information from the end loss into the quantization objective while preserving cross-weight dependencies within output channels. GuidedQuant consistently boosts the performance of state-of-the-art quantization methods across weight-only scalar, weight-only vector, and weight-and-activation quantization. Additionally, we introduce a novel non-uniform scalar quantization algorithm, which is guaranteed to monotonically decrease the quantization objective value and outperforms existing methods in this category. We release the code at https://github.com/snu-mllab/GuidedQuant.
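A non-uniform scalar quantizer with a monotonically decreasing objective can be sketched with a weighted Lloyd-style alternation; `weighted_lloyd`, its random initialization, and the per-weight importance weights `h` are illustrative assumptions, not the released algorithm. Each iteration first reassigns weights to their nearest codeword and then moves each codeword to the importance-weighted mean of its cluster; neither step can increase the weighted squared error, so the objective is non-increasing.

```python
import numpy as np

def weighted_lloyd(w, h, K=4, iters=20, seed=0):
    """Illustrative weighted Lloyd iteration for non-uniform scalar quantization.

    w : 1-D array of weights to quantize
    h : positive per-weight importance (e.g. squared end-loss gradients)
    Returns the codebook c, assignments a, and the objective value
    sum(h * (w - c[a])**2) after each iteration.
    """
    rng = np.random.default_rng(seed)
    c = np.sort(rng.choice(w, size=K, replace=False))  # initial codebook
    objs = []
    for _ in range(iters):
        # Step 1: assign each weight to its nearest codeword
        a = np.argmin(np.abs(w[:, None] - c[None, :]), axis=1)
        # Step 2: move each codeword to the weighted mean of its cluster
        for k in range(K):
            m = a == k
            if m.any():
                c[k] = np.sum(h[m] * w[m]) / np.sum(h[m])
        objs.append(float(np.sum(h * (w - c[a]) ** 2)))
    return c, a, objs
```

Because both steps are coordinate-wise minimizations of the same objective, the recorded `objs` sequence never increases, mirroring the monotonic-convergence guarantee described in the abstract.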