ARCQuant: Boosting NVFP4 Quantization with Augmented Residual Channels for LLMs

📅 2026-01-12
📈 Citations: 0 · Influential: 0
🤖 AI Summary
This work addresses the difficulty of adapting existing post-training quantization methods to fine-grained formats such as NVFP4: when adapted, these methods often violate block isolation, incur significant quantization error, or breach hardware-imposed uniform-precision constraints. To overcome these limitations, we propose an NVFP4 quantization framework, compatible with standard GEMM kernels, that embeds augmented residual channels along the reduction dimension to compensate for quantization error while strictly preserving block isolation and hardware uniformity. The approach employs a two-stage quantization strategy and achieves near full-precision perplexity and downstream-task performance on LLaMA and Qwen models. Furthermore, it delivers up to 3× inference speedup over FP16 on RTX 5090 and RTX PRO 6000 GPUs.
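To make the mechanism concrete, here is a minimal PyTorch sketch of the residual-channel idea under stated assumptions: the NVFP4 quantizer is simulated in floating point with per-16-element block scales (real NVFP4 stores FP8 scales), the full residual is appended rather than a selected subset of channels, and the function names (`quantize_nvfp4`, `arc_style_matmul`) are placeholders, not the paper's API.

```python
import torch

# Representable magnitudes of the FP4 (E2M1) element format used by NVFP4.
FP4_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_nvfp4(x: torch.Tensor, block: int = 16) -> torch.Tensor:
    """Simulated NVFP4: per-16-element blocks along the last dim, each
    scaled so its max magnitude maps to the FP4 maximum (6.0)."""
    *lead, d = x.shape
    xb = x.reshape(*lead, d // block, block)
    scale = xb.abs().amax(dim=-1, keepdim=True) / 6.0
    scale = torch.where(scale == 0, torch.ones_like(scale), scale)
    v = xb / scale
    # Round each element to the nearest representable FP4 magnitude.
    idx = (v.abs().unsqueeze(-1) - FP4_GRID).abs().argmin(dim=-1)
    q = FP4_GRID[idx] * v.sign() * scale
    return q.reshape(*lead, d)

def arc_style_matmul(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Residual-channel compensation in the spirit of ARCQuant (a sketch,
    not the paper's algorithm): quantize x, quantize the leftover residual
    in a second stage, then stack both along the reduction dimension so a
    single uniform-precision GEMM computes q1 @ w + q2 @ w."""
    q1 = quantize_nvfp4(x)
    q2 = quantize_nvfp4(x - q1)          # second stage: residual channels
    x_aug = torch.cat([q1, q2], dim=-1)  # [tokens, 2 * hidden]
    w_aug = torch.cat([w, w], dim=0)     # weight rows duplicated to match
    return x_aug @ w_aug                 # one standard GEMM

torch.manual_seed(0)
x, w = torch.randn(4, 64), torch.randn(64, 32)
exact = x @ w
print("plain NVFP4 error:", (quantize_nvfp4(x) @ w - exact).abs().mean().item())
print("with residuals   :", (arc_style_matmul(x, w) - exact).abs().mean().item())
```

Because each stage emits valid NVFP4 blocks, the concatenated matrix stays in one uniform format and no scale crosses a block boundary, which is the block-isolation property the summary highlights. The paper presumably keeps overhead minimal by augmenting only selected channels; this sketch doubles the reduction dimension for simplicity.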

📝 Abstract
The emergence of fine-grained numerical formats like NVFP4 presents new opportunities for efficient Large Language Model (LLM) inference. However, it is difficult to adapt existing Post-Training Quantization (PTQ) strategies to these formats: rotation-based methods compromise fine-grained block isolation; smoothing techniques struggle with significant 4-bit quantization errors; and mixed-precision approaches often conflict with hardware constraints on unified-precision computation. To address these challenges, we propose ARCQuant, a framework that boosts NVFP4 performance via Augmented Residual Channels. Distinct from methods that compromise block isolation or hardware uniformity, ARCQuant maintains a strictly unified NVFP4 format by augmenting the activation matrix with quantized residual channels. This design integrates the error compensation process directly into the matrix reduction dimension, enabling the use of standard, highly optimized GEMM kernels with minimal overhead. Theoretical analysis confirms that the worst-case error bound of our dual-stage NVFP4 quantization is comparable to that of standard 8-bit formats such as MXFP8. Extensive experiments on LLaMA and Qwen models demonstrate that ARCQuant achieves state-of-the-art accuracy, comparable to full-precision baselines in perplexity and downstream tasks. Furthermore, deployment on RTX 5090 and RTX PRO 6000 GPUs confirms practical benefits, achieving up to 3× speedup over FP16. Our code is available at https://github.com/actypedef/ARCQuant.
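The claimed error bound admits a back-of-the-envelope reading (a sketch of the intuition, not the paper's derivation): if one stage of NVFP4 block quantization has worst-case relative error ε₄, compensating with a second quantized stage leaves only the residual's own quantization error, which scales like ε₄².

```latex
% Sketch of the intuition behind the dual-stage bound (not the paper's
% exact analysis). Q(.) is one stage of NVFP4 block quantization with
% worst-case relative error \epsilon_4.
\[
  r = x - Q(x), \qquad |r| \le \epsilon_4\,|x|
\]
\[
  \bigl|\,x - \bigl(Q(x) + Q(r)\bigr)\bigr|
  = \bigl|\,r - Q(r)\bigr|
  \le \epsilon_4\,|r|
  \le \epsilon_4^{2}\,|x|
\]
% Two 4-bit stages therefore compound multiplicatively, landing at the
% same order as a single 8-bit pass (e.g., MXFP8) in this simple model.
```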
Problem

Research questions and friction points this paper is trying to address.

NVFP4 · Post-Training Quantization · LLM inference · quantization error · hardware constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

NVFP4 · Post-Training Quantization · Augmented Residual Channels · Unified-Precision Inference · LLM Quantization
Authors

Haoqian Meng (School of Computer Science and Technology, Tianjin University)
Yilun Luo (General Motors)
Yafei Zhao (School of Computer Science and Technology, Tianjin University)
Wenyuan Liu (School of Computer Science and Technology, Tianjin University)
Peng Zhang (Professor, Tianjin University)
Xindian Ma (School of Computer Science and Technology, Tianjin University)