GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance

📅 2025-05-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing post-training quantization methods for large language models (LLMs) suffer from two key limitations: (1) they ignore the heterogeneous contributions of hidden-layer features to the end loss, and (2) when they do use the end loss, they neglect dependencies among weights within each output channel. To address these issues, we propose GuidedQuant, a gradient-guided quantization framework. First, we incorporate gradients of the end loss into the quantization objective while preserving cross-weight dependencies within output channels. Second, we design a non-uniform scalar quantization algorithm that is guaranteed to monotonically decrease the objective. Third, we introduce a joint weight-activation optimization mechanism. Evaluated across multiple LLMs (including Llama-2/3 and Qwen) and standard benchmarks, our method consistently outperforms state-of-the-art approaches in weight-only scalar, weight-only vector, and joint weight-activation quantization, achieving significant improvements in accuracy recovery.

📝 Abstract
Post-training quantization is a key technique for reducing the memory and inference latency of large language models by quantizing weights and activations without requiring retraining. However, existing methods either (1) fail to account for the varying importance of hidden features to the end loss or, when incorporating end loss, (2) neglect the critical interactions between model weights. To address these limitations, we propose GuidedQuant, a novel quantization approach that integrates gradient information from the end loss into the quantization objective while preserving cross-weight dependencies within output channels. GuidedQuant consistently boosts the performance of state-of-the-art quantization methods across weight-only scalar, weight-only vector, and weight-and-activation quantization. Additionally, we introduce a novel non-uniform scalar quantization algorithm, which is guaranteed to monotonically decrease the quantization objective value, and outperforms existing methods in this category. We release the code at https://github.com/snu-mllab/GuidedQuant.
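The guided objective described in the abstract can be illustrated with a small numerical sketch. This is an illustrative assumption, not the paper's exact formulation: synthetic data, toy shapes, and a per-output-channel quadratic form in which the quantization error of channel i is scored against a gradient-weighted input covariance H_i = Σ_n g_{n,i}² x_n x_nᵀ, so the objective both reflects the end loss and keeps cross-weight dependencies within each output channel.

```python
import numpy as np

rng = np.random.default_rng(0)
out_dim, in_dim, n_samples = 4, 8, 16
W = rng.normal(size=(out_dim, in_dim))     # original layer weights
X = rng.normal(size=(n_samples, in_dim))   # calibration inputs
G = rng.normal(size=(n_samples, out_dim))  # d(end loss)/d(layer output), per sample

def guided_error(W_q):
    """Gradient-weighted quadratic error, one quadratic form per output channel."""
    err = 0.0
    for i in range(out_dim):
        dw = W[i] - W_q[i]
        # H_i = sum_n g_{n,i}^2 x_n x_n^T couples the weights within channel i
        # and scales the error by how much channel i's output moves the end loss.
        H_i = (X * (G[:, i:i + 1] ** 2)).T @ X
        err += float(dw @ H_i @ dw)
    return err

def plain_error(W_q):
    """Layer-output MSE: treats every output feature as equally important."""
    return float(np.sum(((W - W_q) @ X.T) ** 2))

W_round = np.round(W * 2) / 2  # naive 0.5-step rounding as a toy quantizer
print(guided_error(W_round), plain_error(W_round))
```

Two rounding choices with the same layer-output MSE can differ sharply under the guided score, which is why rounding guided by the end loss can recover more accuracy.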
Problem

Research questions and friction points this paper is trying to address.

Quantizing LLMs without retraining while reducing memory and inference latency
Existing methods ignore the varying importance of hidden features to the end loss
End-loss-aware methods neglect critical interactions between model weights
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates end-loss gradients into the quantization objective
Preserves cross-weight dependencies within output channels
Introduces a non-uniform scalar quantization algorithm with a monotone-decrease guarantee
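The monotone-decrease guarantee can be sketched as an importance-weighted, Lloyd-style alternating minimization. This is a simplified illustration under assumed per-weight diagonal importances h, not the paper's algorithm: the codebook update is the exact minimizer given the assignments, and the assignment update is the exact minimizer given the codebook, so the weighted squared-error objective can never increase.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=256)             # weights of one channel, flattened
h = rng.uniform(0.5, 2.0, size=256)  # per-weight importance (e.g. a Hessian-diagonal proxy)
levels = np.linspace(w.min(), w.max(), 8)  # 3-bit non-uniform codebook, uniform init

def objective(levels, assign):
    # Importance-weighted squared quantization error.
    return float(np.sum(h * (w - levels[assign]) ** 2))

assign = np.argmin((w[:, None] - levels[None, :]) ** 2, axis=1)
history = [objective(levels, assign)]
for _ in range(20):
    # Codebook step: each level moves to the importance-weighted mean of its
    # cluster, the exact minimizer of the objective for fixed assignments.
    for k in range(len(levels)):
        mask = assign == k
        if mask.any():
            levels[k] = np.sum(h[mask] * w[mask]) / np.sum(h[mask])
    # Assignment step: the nearest level is optimal for fixed levels (h > 0 only
    # rescales each weight's distances, so the plain argmin is unchanged).
    assign = np.argmin((w[:, None] - levels[None, :]) ** 2, axis=1)
    history.append(objective(levels, assign))

# Monotone decrease: each alternating half-step can only lower the objective.
assert all(b <= a + 1e-9 for a, b in zip(history, history[1:]))
print(history[0], history[-1])
```

Because the objective is bounded below by zero and never increases, the iteration converges; the non-uniform levels concentrate where the importance-weighted mass of the weights lies.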