SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights

📅 2025-09-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing post-training quantization methods suffer severe perplexity degradation at ≤4-bit precision because outliers force coarse scales onto the parameters that share them, an effect most pronounced in calibration-free, uniform quantization. To address this, the authors propose SINQ, a calibration-free quantization framework that normalizes the row- and column-wise variances of weight matrices. It introduces a second, axis-aligned scale factor and efficiently minimizes a per-matrix imbalance proxy objective via a Sinkhorn-Knopp-style algorithm; because the method involves no interaction between layers, it can be applied directly to any linear layer in new architectures. Retaining the simplicity of uniform quantization, SINQ effectively mitigates weight-distribution skew at low bit-widths. On the Qwen3 model family and DeepSeek-V2.5, it achieves significantly lower WikiText2 and C4 perplexity than calibration-free uniform baselines, and it is compatible with both calibration-based and non-uniform quantization schemes for further gains.

📝 Abstract
Post-training quantization has emerged as the most widely used strategy for deploying large language models at low precision. Still, current methods show perplexity degradation at bit-widths less than or equal to 4, partly because representing outliers causes precision issues in parameters that share the same scales as these outliers. This problem is especially pronounced for calibration-free, uniform quantization methods. We introduce SINQ to augment existing post-training quantizers with an additional second-axis scale factor and a fast Sinkhorn-Knopp-style algorithm that finds scales to normalize per-row and per-column variances, thereby minimizing a novel per-matrix proxy target for quantization: the matrix imbalance. Our method has no interactions between layers and can be trivially applied to new architectures to quantize any linear layers. We evaluate our method on the Qwen3 model family and DeepSeek-V2.5. SINQ improves WikiText2 and C4 perplexity significantly against uncalibrated uniform quantization baselines and can be further enhanced by combining it with calibration and non-uniform quantization levels. Code to reproduce the results of this work and to easily quantize models using SINQ is available at https://github.com/huawei-csl/SINQ.
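The core idea described above can be sketched in a few lines: alternately rescale rows and columns of the weight matrix until both axes have roughly unit standard deviation (a Sinkhorn-Knopp-style iteration), quantize the balanced matrix uniformly, and fold the two scale vectors back in at dequantization. This is a minimal illustrative sketch, not the paper's implementation; the function names, iteration count, and the symmetric uniform quantizer are assumptions for illustration.

```python
import numpy as np

def sinkhorn_normalize(W, n_iter=20, eps=1e-8):
    """Alternately rescale rows and columns so their standard deviations
    approach 1 (a Sinkhorn-Knopp-style iteration on per-axis variances).
    Returns the balanced matrix and the accumulated scale vectors."""
    W = W.astype(np.float64).copy()
    row_scale = np.ones(W.shape[0])
    col_scale = np.ones(W.shape[1])
    for _ in range(n_iter):
        r = W.std(axis=1) + eps          # per-row std
        W /= r[:, None]
        row_scale *= r
        c = W.std(axis=0) + eps          # per-column std
        W /= c[None, :]
        col_scale *= c
    return W, row_scale, col_scale

def quantize_uniform(W, bits=4):
    """Plain symmetric uniform quantization at the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(W).max() / qmax
    Q = np.clip(np.round(W / scale), -qmax - 1, qmax)
    return Q, scale

# Toy example: a matrix with one outlier row, which would otherwise
# inflate the shared quantization scale for every other row.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))
W[3, :] *= 50.0                          # inject an outlier row

# Balance, quantize, then fold the per-row/per-column scales back in.
Wn, rs, cs = sinkhorn_normalize(W)
Q, s = quantize_uniform(Wn, bits=4)
W_hat = rs[:, None] * (Q * s) * cs[None, :]
err_sinq = np.abs(W - W_hat).mean()

# Baseline: quantize the raw matrix directly with a single scale.
Qd, sd = quantize_uniform(W, bits=4)
err_direct = np.abs(W - Qd * sd).mean()
```

On this toy matrix the balanced variant reconstructs the non-outlier rows far more accurately than direct uniform quantization, because the outlier row no longer dominates the shared scale.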
Problem

Research questions and friction points this paper is trying to address.

Addresses precision degradation in low-bit LLM quantization
Solves outlier-induced scaling issues in uniform quantization
Enables calibration-free quantization for diverse model architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sinkhorn-normalized quantization for low-precision LLM weights
Adds second-axis scale factor to existing quantizers
Uses Sinkhorn-Knopp algorithm to normalize matrix variances
Lorenz K. Müller
Computing Systems Lab, Huawei Zurich Research Center
Philippe Bich
Politecnico di Torino
LLMs/VLMs optimization · Quantization · Pruning · Visual navigation
Jiawei Zhuang
Computing Systems Lab, Huawei Zurich Research Center
Ahmet Çelik
Computing Systems Lab, Huawei Zurich Research Center
Luca Benfenati
Computing Systems Lab, Huawei Zurich Research Center
Lukas Cavigelli
Researcher (Expert/Architect), Huawei Technologies
Deep Learning · Computer Architecture · Circuits and Systems · VLSI · Signal Processing