BAQ: Efficient Bit Allocation Quantization for Large Language Models

📅 2025-06-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing post-training quantization methods for large language models (LLMs) commonly adopt uniform or heuristic bit-width allocation, ignoring the non-uniform sensitivity of model weights to quantization noise. Method: This paper proposes a Hessian-proxy-driven sensitivity model and a convex optimization framework for layer- and component-level bit-width allocation. It formulates bit-width assignment as an analytically solvable convex optimization problem (the first such formulation) and uncovers an equal-loss structural principle. The method comprises three stages: Hessian-based sensitivity estimation, closed-form bit-width derivation, and a lightweight integrated quantization pipeline. Contribution/Results: The approach achieves theoretically interpretable, accuracy-adaptive dynamic bit-width allocation. Evaluated on models ranging from 125M to 30B parameters, it significantly outperforms GPTQ: perplexity improves by up to 56× at the same average bit-width, with negligible deployment overhead.

📝 Abstract
Post-training model quantization is a widely adopted technique for reducing the memory and computational costs of large language models (LLMs). However, most existing methods rely on uniform or heuristic bitwidth assignments, failing to account for the nonuniform sensitivity of weights to quantization noise. In this paper, we propose a novel framework for allocating quantization bitwidths based on sensitivity metrics derived from a Hessian proxy. We make key assumptions, which allow the layer/component-wise loss function to be expressed as an explicit function of the bitwidths. This enables a neat formulation of the bit allocation problem as a convex optimization task, whose closed-form solution adapts precision across weights to minimize the layer-wise quantization loss. Inspecting the solution provides several insights (such as the equal-loss structure), which are then exploited to design the proposed BAQ (Bit Allocation Quantization) algorithm. The proposed algorithm achieves a good trade-off between loss minimization and complexity and allows BAQ to be integrated into standard quantization pipelines with minimal overhead. Experimental results show that BAQ consistently outperforms GPTQ, achieving up to 56× lower perplexity at the same bitwidth on large language models ranging from 125M to 30B parameters. Leveraging our analytical results derived from solving the optimal bit allocation problem, we also provide a theoretical explanation for the observed gains. All code for this paper is available at https://github.com/CSU-ModelCompression/BAQ.
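The abstract's closed-form allocation can be illustrated with the classical sensitivity-weighted bit allocation problem: minimize a loss of the form Σᵢ sᵢ·2^(−2bᵢ) under an average-bit budget, whose solution assigns each component its share of the budget plus a log-sensitivity correction and makes all per-component losses equal (the "equal-loss structure" mentioned above). The sketch below is a hypothetical minimal version of that idea, not the paper's actual BAQ implementation; the sensitivities `s` and the `allocate_bits` helper are illustrative placeholders.

```python
# Minimal sketch of closed-form, sensitivity-driven bit allocation,
# assuming a loss of the form sum_i s_i * 2**(-2*b_i) with an average
# bit budget. This is a classical water-filling-style result used here
# for illustration; BAQ's exact formulation may differ.
import numpy as np

def allocate_bits(sensitivities, avg_bits):
    """Real-valued bit allocation minimizing sum_i s_i * 2**(-2*b_i)
    subject to mean(b) = avg_bits.

    Closed form: b_i = avg_bits + 0.5 * log2(s_i / geometric_mean(s)).
    """
    s = np.asarray(sensitivities, dtype=float)
    log_gm = np.mean(np.log2(s))  # log2 of the geometric mean of s
    return avg_bits + 0.5 * (np.log2(s) - log_gm)

# Components with higher sensitivity receive more bits, yet the
# resulting per-component losses s_i * 2**(-2*b_i) are all equal.
s = [1.0, 4.0, 16.0]
b = allocate_bits(s, avg_bits=4.0)      # -> [3.0, 4.0, 5.0]
losses = np.asarray(s) * 2.0 ** (-2 * b)  # all equal to 1/64
```

In practice the real-valued solution would still need rounding to integer bitwidths; the equal-loss property is what makes the allocation interpretable, since no single component dominates the layer-wise quantization error.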
Problem

Research questions and friction points this paper is trying to address.

Optimize bit allocation for LLM quantization using sensitivity metrics
Formulate bit allocation as convex optimization to minimize loss
Outperform GPTQ, achieving lower perplexity at the same bitwidth
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hessian proxy sensitivity metrics for bit allocation
Convex optimization for adaptive bitwidth assignment
Equal-loss structure exploited in BAQ algorithm