WUSH: Near-Optimal Adaptive Transforms for LLM Quantization

📅 2025-11-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
In low-bit quantization, extreme values in weights and activations inflate the dynamic range and degrade accuracy. Existing orthogonal transformations, such as the Hadamard transform, ignore data statistics and lack theoretical guarantees of optimality. To address this, the authors propose WUSH: a data-aware, near-optimal adaptive linear block-wise transformation. WUSH is the first method to derive a closed-form optimal transform that jointly incorporates second-order moment statistics and a Hadamard structure, yielding a non-orthogonal yet provably optimal structured transform. The derivation covers AbsMax-scaled, round-to-nearest (RTN) block quantizers for both integer and floating-point formats, and the resulting transform remains structured for efficient matrix operations. Preliminary experiments across mainstream numerical formats show that WUSH compresses the dynamic range and consistently improves quantization accuracy over the Hadamard transform.

📝 Abstract
Quantization to low bitwidth is a standard approach for deploying large language models; however, a few extreme weights and activations stretch the dynamic range and reduce the effective resolution of the quantizer. A common mitigation is to apply a fixed orthogonal transform, such as a Hadamard matrix, before quantization, which typically reduces the dynamic range. Yet these transforms ignore the statistics of the data, and their optimality is currently not understood. In this work, we derive, for the first time, closed-form optimal linear blockwise transforms for joint weight-activation quantization using standard data-free quantizers for common numerical formats. Specifically, we provide derivations of the optimal adaptive (data-aware) transforms for round-to-nearest (RTN), AbsMax-scaled block quantizers for both integer and floating-point formats. The resulting construction, which we call WUSH, combines a Hadamard backbone with a data-dependent component based on second-order moments, yielding a non-orthogonal transform that is provably optimal under mild assumptions and remains structured for efficient implementation. Preliminary experimental results show that our approach consistently improves upon the Hadamard transform for common formats.
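The baseline mitigation the abstract describes (a fixed orthogonal transform applied before AbsMax-scaled RTN quantization) can be illustrated with a minimal sketch. The Sylvester Hadamard construction, 4-bit grid, and outlier-heavy test vector below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def hadamard(n):
    """Orthonormal Hadamard matrix via Sylvester construction (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def absmax_rtn(x, bits=4):
    """AbsMax-scaled round-to-nearest quantization onto a signed integer grid."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
n = 64
x = rng.normal(size=n)
x[0] = 20.0  # a single extreme value stretches the dynamic range

H = hadamard(n)
err_plain = np.mean((absmax_rtn(x) - x) ** 2)
x_t = H @ x                                           # rotate before quantizing
err_had = np.mean((H.T @ absmax_rtn(x_t) - x) ** 2)   # rotate back afterwards
# spreading the outlier across coordinates shrinks the AbsMax scale,
# so the rotated quantization error is typically much smaller
print(err_had < err_plain)
```

Because the Hadamard matrix is orthonormal, the rotation preserves the mean-squared error measured after the inverse transform; the gain comes purely from the reduced dynamic range seen by the quantizer.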
Problem

Research questions and friction points this paper is trying to address.

Extreme weights and activations stretch the dynamic range and reduce effective quantizer resolution
Fixed orthogonal transforms (e.g., Hadamard) ignore the statistics of the data
The optimality of pre-quantization linear transforms was previously not understood
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive transforms optimize quantization using data statistics
WUSH combines Hadamard backbone with second-order moments
Non-orthogonal structured transforms improve dynamic range reduction
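One plausible shape for a "Hadamard backbone plus second-order moments" transform is a Hadamard matrix composed with a data-dependent diagonal, which stays non-orthogonal yet exactly invertible and cheap to apply. This is only an illustration of the idea, not the paper's closed-form construction; the exponent used below is an arbitrary choice:

```python
import numpy as np

def hadamard(n):
    """Orthonormal Hadamard matrix via Sylvester construction (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def moment_aware_transform(X):
    """Hypothetical sketch: Hadamard backbone times a diagonal built from
    per-coordinate second moments. NOT WUSH's actual closed-form solution;
    the -0.25 exponent is an illustrative assumption."""
    n = X.shape[1]
    m2 = np.mean(X ** 2, axis=0)               # second-order moment statistics
    T = hadamard(n) @ np.diag(m2 ** -0.25)     # structured: Hadamard x diagonal
    T_inv = np.diag(m2 ** 0.25) @ hadamard(n).T
    return T, T_inv

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 16))  # calibration samples, one per row
X[:, 0] *= 10.0                 # one high-variance coordinate
T, T_inv = moment_aware_transform(X)
print(np.allclose(T_inv @ T, np.eye(16)))  # invertible despite non-orthogonality
```

The diagonal factor keeps the transform structured (one elementwise scale plus a fast Hadamard product per block), which is the kind of efficiency the abstract attributes to the actual construction.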