🤖 AI Summary
To address the substantial accuracy degradation and high retraining overhead of ultra-low-bit (below 4-bit) quantization of large language models (LLMs), this paper proposes SigniQ, a significance-aware partial-retraining quantization method. Building on the ApiQ framework, SigniQ is the first to integrate parameter-significance modeling into partial retraining: it identifies critical parameters via gradient-sensitivity analysis and applies a significance-driven L2 regularization that prioritizes preserving the weights most influential to model outputs. It further combines post-training quantization with lightweight adapter fusion, eliminating the need for full-model fine-tuning and large-scale labeled data. Evaluated on the LLaMA family, SigniQ improves post-quantization accuracy by 3.2% on average while adding less than 5% memory and inference-latency overhead, striking a balance between accuracy and efficiency.
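To make the regularization idea concrete, the sketch below shows what a significance-driven L2 penalty could look like in a PyTorch setting: parameters are scored by a first-order gradient sensitivity, and quantization error on high-scoring parameters is penalized more heavily. The function names (`estimate_significance`, `significance_l2_penalty`), the toy quantizer, and the hyperparameter `lam` are illustrative assumptions, not the paper's released API.

```python
import torch

def estimate_significance(weight: torch.Tensor, grad: torch.Tensor) -> torch.Tensor:
    """First-order gradient sensitivity: |w * dL/dw| per parameter."""
    return (weight * grad).abs()

def significance_l2_penalty(weight: torch.Tensor,
                            quantized: torch.Tensor,
                            significance: torch.Tensor,
                            lam: float = 1e-4) -> torch.Tensor:
    """Significance-weighted L2 penalty on the quantization error."""
    return lam * (significance * (weight - quantized) ** 2).sum()

# Toy usage: score parameters on one calibration batch, then penalize
# quantization error in proportion to each parameter's score.
w = torch.randn(64, 16, requires_grad=True)
x = torch.randn(8, 16)
loss = (x @ w.t()).pow(2).mean()        # stand-in for the task loss
loss.backward()
sig = estimate_significance(w.detach(), w.grad)
w_q = torch.round(w.detach() * 4) / 4   # toy uniform quantizer (illustrative)
print(significance_l2_penalty(w.detach(), w_q, sig))
```

The key design point is that the penalty is weighted per parameter: weights whose perturbation moves the loss most are pulled hardest toward their full-precision values, while unimportant weights are allowed to absorb more quantization error.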
📝 Abstract
Large language models offer remarkable capabilities, but their size and computational demands pose practical challenges. Quantization methods compress models by replacing high-precision parameters with lower-precision quantized values. Post-training quantization reduces model size efficiently at the cost of decreased accuracy, while quantization-aware training better preserves accuracy but is resource-intensive. Among existing post-training quantization algorithms, the ApiQ method achieves superior accuracy preservation at minimal memory and time overhead. We investigate two ideas for extending performance in ultra-low-bit quantization beyond ApiQ's level. First, we combine existing quantization-aware training techniques with ApiQ's partial training. We show that this does not outperform the baseline ApiQ method when training data are limited and weights are frozen. This leads to two key insights: (1) the substantial representational capacity gained through full retraining may not be attainable through partial training, and (2) this gain appears to depend on using a large and diverse dataset during quantization-aware training. Second, guided by these two insights, we propose a novel ultra-low-bit quantization method that builds upon ApiQ and extends its performance without the need for full retraining. It relies on a saliency-aware regularization term that prioritizes preserving the most impactful parameters during quantization. Our experiments on benchmark language models from the LLaMA family show that the proposed approach boosts accuracy and narrows the gap between the quantized model and the full-precision model, with minimal overhead. Our method will be made publicly available to facilitate future developments in ultra-low-bit quantization of large language models.
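For readers who want a picture of the overall setup the abstract describes, the following is a minimal, self-contained PyTorch sketch of ApiQ-style partial training (frozen quantized weights plus a trainable low-rank adapter) combined with a saliency-weighted regularization term. `QuantizedLinearWithAdapter`, the toy uniform quantizer, the sensitivity scores, and all hyperparameters are our own illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class QuantizedLinearWithAdapter(nn.Module):
    """Frozen quantized base weight plus a trainable low-rank adapter."""
    def __init__(self, w_q: torch.Tensor, rank: int = 4):
        super().__init__()
        out_f, in_f = w_q.shape
        self.register_buffer("w_q", w_q)                        # frozen quantized weights
        self.A = nn.Parameter(torch.zeros(out_f, rank))         # zero-init so the effective
        self.B = nn.Parameter(torch.randn(rank, in_f) * 0.01)   # weight starts at w_q

    def effective_weight(self) -> torch.Tensor:
        return self.w_q + self.A @ self.B

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.effective_weight().t()

# Significance scores from one gradient pass (first-order sensitivity).
w_fp = torch.randn(32, 16, requires_grad=True)
x = torch.randn(128, 16)                         # calibration batch
(x @ w_fp.t()).pow(2).mean().backward()          # stand-in task loss
sig = (w_fp * w_fp.grad).abs().detach()
w_fp = w_fp.detach()

# Partial training: only the adapter parameters are updated.
w_q = torch.round(w_fp * 4) / 4                  # toy uniform quantizer
layer = QuantizedLinearWithAdapter(w_q)
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
target = x @ w_fp.t()                            # full-precision layer output
lam = 1e-4

for _ in range(200):
    opt.zero_grad()
    recon = (layer(x) - target).pow(2).mean()    # layer-wise reconstruction loss
    err = w_fp - layer.effective_weight()        # remaining quantization error
    loss = recon + lam * (sig * err ** 2).sum()  # saliency-weighted L2 term
    loss.backward()
    opt.step()
```

The adapter keeps the number of trainable parameters small and the quantized base untouched, which is why this style of partial training avoids the memory and data demands of full quantization-aware retraining.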