Grouped Sequency-arranged Rotation: Optimizing Rotation Transformation for Quantization for Free

📅 2025-05-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) suffer severe accuracy degradation under post-training quantization (PTQ) at ultra-low bit-widths (e.g., 2-bit), and existing rotation-based methods lack robustness. Method: This paper proposes a fully training-free framework for optimizing the rotational transformation. Its core innovation is the first application of the sequency-ordered Walsh-Hadamard transform (WHT) to model quantization error in the frequency domain, coupled with a Grouped Sequency-arranged Rotation (GSR): a block-diagonal structure that clusters similar frequency components and isolates outliers. Contribution/Results: GSR requires no gradient updates or data-driven fine-tuning. On WikiText-2, it significantly reduces perplexity (PPL) under 2-bit PTQ, matching or exceeding state-of-the-art learnable rotation methods. Moreover, it serves as a plug-and-play module that consistently boosts the accuracy of downstream PTQ techniques.

📝 Abstract
Large Language Models (LLMs) face deployment challenges due to high computational costs, and while Post-Training Quantization (PTQ) offers a solution, existing rotation-based methods struggle at very low bit-widths like 2-bit. We introduce a novel, training-free approach to constructing an improved rotation matrix, addressing the limitations of current methods. The key contributions include leveraging the Walsh-Hadamard transform with sequency ordering, which clusters similar frequency components to reduce quantization error compared to standard Hadamard matrices, significantly improving performance. Furthermore, we propose a Grouped Sequency-arranged Rotation (GSR) using block-diagonal matrices with smaller Walsh blocks, effectively isolating outlier impacts and achieving performance comparable to optimization-based methods without requiring any training. Our method demonstrates robust performance on reasoning tasks and in perplexity (PPL) on WikiText-2, and further improves results even when applied on top of existing learned rotation techniques.
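The sequency-ordered Walsh-Hadamard construction mentioned above can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the standard transform, not the authors' code: build the Sylvester Hadamard matrix, then reorder its rows by their number of sign changes (the "sequency"), so that rows carrying similar frequency content sit next to each other.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction; n must be a power of two
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def sequency_ordered_walsh(n):
    # Reorder Hadamard rows by sequency (number of sign changes),
    # so adjacent rows represent similar "frequencies"
    H = hadamard(n)
    sequency = (np.diff(H, axis=1) != 0).sum(axis=1)
    return H[np.argsort(sequency, kind="stable")]

W = sequency_ordered_walsh(8)
print((np.diff(W, axis=1) != 0).sum(axis=1))  # sequencies 0..7, in order
```

In natural (Sylvester) order the sequencies appear scrambled (0, 7, 3, 4, 1, 6, 2, 5 for n = 8); the reordering is what lets nearby rows be grouped by frequency.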
Problem

Research questions and friction points this paper is trying to address.

Optimizing rotation matrices for low-bit quantization in LLMs
Reducing quantization error via sequency-ordered Walsh-Hadamard transform
Improving performance without training using block-diagonal outlier isolation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free improved rotation matrix construction
Walsh-Hadamard transform with sequency ordering
Grouped Sequency-arranged Rotation using block-diagonal matrices
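The block-diagonal idea in the last bullet can be illustrated as follows. This is a hypothetical sketch of the structure, with an assumed `block_size` parameter, not the paper's implementation: each diagonal block is a small, normalized sequency-ordered Walsh matrix, so the overall rotation stays orthogonal while outlier effects remain confined to their own block.

```python
import numpy as np

def sequency_walsh(n):
    # Sylvester Hadamard with rows reordered by sign-change count (sequency)
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    order = np.argsort((np.diff(H, axis=1) != 0).sum(axis=1), kind="stable")
    return H[order]

def grouped_sequency_rotation(dim, block_size):
    # Block-diagonal orthogonal matrix built from small, normalized
    # sequency-ordered Walsh blocks; block_size is an assumed tunable
    assert dim % block_size == 0
    W = sequency_walsh(block_size) / np.sqrt(block_size)
    R = np.zeros((dim, dim))
    for i in range(0, dim, block_size):
        R[i:i + block_size, i:i + block_size] = W
    return R

R = grouped_sequency_rotation(16, 4)
print(np.allclose(R @ R.T, np.eye(16)))  # True: the rotation is orthogonal
```

Because each block is orthogonal on its own, the whole block-diagonal matrix is orthogonal, and an outlier in one channel group only perturbs the quantization of that group.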
Euntae Choi
Seoul National University
Sumin Song
Seoul National University
Woosang Lim
Seoul National University
Sungjoo Yoo
Seoul National University