Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically investigates the impact of quantization on reasoning-oriented large language models (LLMs) ranging from 1.5B to 70B parameters, covering the DeepSeek-R1-Distilled Qwen and LLaMA families and QwQ-32B. It jointly evaluates weight quantization (AWQ/GPTQ), KV cache quantization (FP8/INT4), and activation quantization (with dynamic calibration) across mathematical (MATH-500, AIME), scientific (GPQA), and programming (LiveCodeBench) benchmarks. The study establishes the first empirical robustness boundaries for reasoning LLMs under quantization: W8A8 and W4A16 preserve full-precision performance with negligible degradation; sub-4-bit quantization incurs significant accuracy loss; and output length remains invariant to quantization level. On MATH-500, for example, W8A8 increases error by less than 1.2%, and performance on several benchmarks approaches that of full precision. The authors identify model scale, architecture, and task difficulty as key moderating factors, and propose scaling up model size or extending reasoning steps as effective mitigation strategies. All models and code are open-sourced.

📝 Abstract
Recent advancements in reasoning language models have demonstrated remarkable performance in complex tasks, but their extended chain-of-thought reasoning process increases inference overhead. While quantization has been widely adopted to reduce the inference cost of large language models, its impact on reasoning models remains understudied. In this study, we conduct the first systematic study on quantized reasoning models, evaluating the open-sourced DeepSeek-R1-Distilled Qwen and LLaMA families ranging from 1.5B to 70B parameters, and QwQ-32B. Our investigation covers weight, KV cache, and activation quantization using state-of-the-art algorithms at varying bit-widths, with extensive evaluation across mathematical (AIME, MATH-500), scientific (GPQA), and programming (LiveCodeBench) reasoning benchmarks. Our findings reveal that while lossless quantization can be achieved with W8A8 or W4A16 quantization, lower bit-widths introduce significant accuracy risks. We further identify model size, model origin, and task difficulty as critical determinants of performance. Contrary to expectations, quantized models do not exhibit increased output lengths. In addition, strategically scaling the model size or the number of reasoning steps can effectively enhance performance. All quantized models and code will be open-sourced at https://github.com/ruikangliu/Quantized-Reasoning-Models.
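The W8A8 and W4A16 settings discussed in the abstract quantize weights to 8 or 4 bits with per-channel scales. As a rough illustration only, not the paper's actual AWQ/GPTQ pipeline, a minimal symmetric per-channel INT8 weight quantizer might look like this (all function names here are hypothetical):

```python
import numpy as np

def quantize_int8_per_channel(w: np.ndarray):
    """Symmetric per-output-channel INT8 quantization of a weight matrix.

    Each row (output channel) gets its own scale, so an outlier in one
    channel does not inflate the quantization error of the others.
    """
    # Per-row max magnitude -> scale mapping [-max, max] onto [-127, 127].
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # guard all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 64)).astype(np.float32)
q, s = quantize_int8_per_channel(w)
w_hat = dequantize(q, s)
# Round-trip error is bounded by half a quantization step per channel.
rel_err = np.abs(w - w_hat).max() / np.abs(w).max()
```

At 8 bits the worst-case round-trip error per channel is half a step (scale/2), which is consistent with the near-lossless W8A8 results the paper reports; the accuracy cliff the authors observe below 4 bits corresponds to these steps becoming too coarse.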
Problem

Research questions and friction points this paper is trying to address.

Impact of quantization on reasoning model performance
Evaluation of quantized models across various benchmarks
Determinants of performance in quantized reasoning models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic study of quantized reasoning models
Evaluation of W8A8 and W4A16 quantization impacts
Open-sourced quantized models and code
Ruikang Liu
Shenzhen International Graduate School, Tsinghua University
Yuxuan Sun
Huawei Noah’s Ark Lab
Manyi Zhang
Huawei Noah’s Ark Lab
Haoli Bai
Huawei Technologies
natural language processing · model compression
Xianzhi Yu
Unknown affiliation
AI · HPC
Tiezheng Yu
Huawei Noah’s Ark Lab
Natural Language Processing · Text Summarization · Multi-modal Learning · Domain Adaptation
Chun Yuan
Shenzhen International Graduate School, Tsinghua University
Lu Hou
Huawei Noah’s Ark Lab