🤖 AI Summary
Large language models (LLMs) suffer severe accuracy degradation under post-training quantization (PTQ) at ultra-low bit-widths (e.g., 2-bit), and existing rotation-based methods lack robustness. Method: This paper proposes a fully training-free rotational transformation optimization framework. Its core innovation is the first application of the sequency-ordered Walsh–Hadamard transform (WHT) to model frequency-domain quantization error, coupled with a grouped block-diagonal rotation (GSR) structure that clusters frequency components and isolates outliers. Contribution/Results: GSR requires no gradient updates or data-driven fine-tuning. On WikiText-2, it significantly reduces perplexity (PPL) under 2-bit PTQ, matching or exceeding state-of-the-art learnable rotation methods. Moreover, it serves as a plug-and-play module that consistently boosts the accuracy of downstream PTQ techniques.
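To make the sequency idea concrete, here is a minimal sketch in plain Python (illustrative only, not the paper's implementation): it builds a Sylvester-ordered Hadamard matrix and reorders its rows by sequency, i.e. the number of sign changes per row, which plays the role of frequency for these ±1 bases.

```python
def hadamard(n):
    # Sylvester construction; n must be a power of two.
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def sequency_order(H):
    # Sequency = number of sign changes along a row. Sorting rows by
    # sequency turns the natural-ordered Hadamard matrix into the
    # Walsh (sequency-ordered) variant.
    return sorted(H, key=lambda row: sum(a != b for a, b in zip(row, row[1:])))

W = sequency_order(hadamard(8))
# Row k of W now has exactly k sign changes, so adjacent rows represent
# neighboring "frequencies" -- the clustering property the summary credits
# with reducing frequency-domain quantization error.
```

Note that the reordering is a pure row permutation, so orthogonality (and hence the rotation property after normalization) is preserved.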
📝 Abstract
Large Language Models (LLMs) face deployment challenges due to high computational costs. While Post-Training Quantization (PTQ) offers a solution, existing rotation-based methods struggle at very low bit-widths such as 2-bit. We introduce a novel, training-free approach to constructing an improved rotation matrix that addresses the limitations of current methods. First, we leverage the Walsh-Hadamard transform with sequency ordering, which clusters similar frequency components and reduces quantization error compared to standard Hadamard matrices, significantly improving performance. Second, we propose Grouped Sequency-arranged Rotation (GSR), which uses block-diagonal matrices built from smaller Walsh blocks to isolate the impact of outliers, achieving performance comparable to optimization-based methods without requiring any training. Our method demonstrates robust performance on reasoning tasks and in Perplexity (PPL) on WikiText-2, and further improves results even when applied on top of existing learned rotation techniques.
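The block-diagonal structure behind GSR can be sketched as follows. This is a hypothetical construction under our own naming (`walsh`, `grouped_rotation`); it shows only how independent Walsh blocks are placed along the diagonal, and omits the paper's channel grouping that decides which components share a block.

```python
def walsh(n):
    # Sylvester Hadamard rows sorted by sign changes (sequency order),
    # scaled by 1/sqrt(n) so each block is an orthonormal rotation.
    H = [[1.0]]
    while len(H) < n:
        H = [r + r for r in H] + [r + [-x for x in r] for r in H]
    H.sort(key=lambda r: sum(a != b for a, b in zip(r, r[1:])))
    s = n ** -0.5
    return [[s * x for x in r] for r in H]

def grouped_rotation(dim, block_size):
    # Block-diagonal rotation: one small Walsh block per group of
    # channels. Assumes block_size divides dim; both powers of two.
    B = walsh(block_size)
    R = [[0.0] * dim for _ in range(dim)]
    for g in range(dim // block_size):
        off = g * block_size
        for i in range(block_size):
            for j in range(block_size):
                R[off + i][off + j] = B[i][j]
    return R

R = grouped_rotation(8, 4)
```

Because each block only mixes channels within its own group, an outlier channel perturbs at most `block_size` outputs rather than the full hidden dimension, which is the isolation effect the abstract describes.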