QUAD: Quantization and Parameter-Efficient Tuning of LLM with Activation Decomposition

📅 2025-03-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the severe accuracy degradation caused by activation outliers in 4-bit quantization of large language models (LLMs), this paper proposes QUAD, an SVD-based activation decomposition framework: outlier components of the activation tensor are orthogonally projected onto a low-dimensional subspace and kept in full precision, while the remaining components are quantized to 4 bits. The method further combines W4A4/A8 hybrid quantization with LoRA-style parameter-efficient fine-tuning. Evaluated on Llama-3-8B and Qwen-2.5 models, QUAD retains 94–96% of full-precision accuracy under W4A4 quantization and reaches 98% with W4A4/A8 plus parameter-efficient fine-tuning, outperforming existing 4-bit quantization approaches. The core contribution is the integration of SVD-driven orthogonal activation decomposition into the quantization pipeline, balancing outlier modeling fidelity against compression efficiency and enabling high-fidelity, low-overhead edge deployment of LLMs.

📝 Abstract
Large Language Models (LLMs) excel in diverse applications but suffer inefficiency due to massive scale. While quantization reduces computational costs, existing methods degrade accuracy in medium-sized LLMs (e.g., Llama-3-8B) due to activation outliers. To address this, we propose QUAD (Quantization with Activation Decomposition), a framework leveraging Singular Value Decomposition (SVD) to suppress activation outliers for effective 4-bit quantization. QUAD estimates activation singular vectors offline using calibration data to construct an orthogonal transformation matrix P, shifting outliers to additional dimensions kept in full precision while quantizing the remaining components to 4-bit. Additionally, QUAD enables parameter-efficient fine-tuning via adaptable full-precision outlier weights, narrowing the accuracy gap between quantized and full-precision models. Experiments demonstrate that QUAD achieves 94%–96% accuracy under W4A4 quantization and 98% accuracy with W4A4/A8 and parameter-efficient fine-tuning for Llama-3 and Qwen-2.5 models. Our code is available at https://github.com/hyx1999/Quad.
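The mechanism the abstract describes can be illustrated with a rough sketch (all names, shapes, and the toy quantizer here are assumptions for illustration, not the authors' code): singular vectors of calibration activations define an orthogonal outlier subspace that is kept in full precision, while the residual, orthogonal to that subspace, is quantized to 4 bits.

```python
import numpy as np

def build_projection(X_calib, k):
    # SVD of calibration activations; the top-k right singular vectors
    # span the directions carrying the outlier energy.
    _, _, Vt = np.linalg.svd(X_calib, full_matrices=False)
    return Vt[:k].T                  # outlier basis, shape (d, k)

def decompose(x, V_out):
    # Orthogonal split: low-dimensional full-precision outlier part
    # plus a residual orthogonal to the outlier subspace.
    x_out = x @ V_out                # (k,) full-precision component
    x_res = x - x_out @ V_out.T      # residual sent to 4-bit quantization
    return x_out, x_res

def quant4(x):
    # Toy symmetric 4-bit quantize/dequantize (range [-8, 7]).
    s = np.abs(x).max() / 7 + 1e-12
    return np.clip(np.round(x / s), -8, 7) * s

# Demo: inject one outlier channel into synthetic activations.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 64))
X[:, 3] *= 50                        # outlier channel
V = build_projection(X, k=4)
x = X[0]
x_out, x_res = decompose(x, V)
x_hat = quant4(x_res) + x_out @ V.T  # reconstruction after quantization
```

With the outlier channel absorbed into the full-precision subspace, the residual has a small dynamic range, so the 4-bit grid covers it much more finely than it could cover the raw activation.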
Problem

Research questions and friction points this paper is trying to address.

Addresses accuracy degradation in medium-sized LLMs due to activation outliers during quantization.
Proposes QUAD framework using SVD to suppress outliers for effective 4-bit quantization.
Enables parameter-efficient fine-tuning to narrow accuracy gap between quantized and full-precision models.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses SVD to suppress activation outliers
Enables 4-bit quantization via orthogonal transformation
Integrates parameter-efficient fine-tuning for accuracy
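The fine-tuning idea in the bullets above can be sketched as follows (a minimal NumPy sketch with assumed names and shapes, not the authors' implementation): the quantized main weight stays frozen, and only the small full-precision outlier weight matrix is trainable, playing a role analogous to a LoRA adapter. At initialization the decomposed layer reproduces the original matmul exactly.

```python
import numpy as np

def quad_forward(x, W_q, V_out, W_out):
    # W_q: frozen (de)quantized weight, (d, m); V_out: orthonormal
    # outlier basis, (d, k); W_out: trainable full-precision weights, (k, m).
    x_out = x @ V_out                # outlier component (k dims)
    x_res = x - x_out @ V_out.T      # residual, orthogonal to V_out
    return x_res @ W_q + x_out @ W_out

rng = np.random.default_rng(1)
d, k, m = 16, 4, 8
W_q = rng.normal(size=(d, m))        # stands in for the 4-bit weight
V_out, _ = np.linalg.qr(rng.normal(size=(d, k)))  # orthonormal basis
W_out = V_out.T @ W_q                # init so the layer matches x @ W_q

x = rng.normal(size=(3, d))
y = quad_forward(x, W_q, V_out, W_out)
```

Because only `W_out` (k x m values) would receive gradient updates while `W_q` stays frozen, the trainable parameter count scales with the small outlier rank k, which is what makes the fine-tuning parameter-efficient.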
Yuxuan Hu
Renmin University of China
Xiaodong Chen
Renmin University of China
Cuiping Li
Renmin University of China
Database, big data analysis and mining
Hong Chen
Renmin University of China
Jing Zhang
Renmin University of China