StableQuant: Layer Adaptive Post-Training Quantization for Speech Foundation Models

📅 2025-04-06
🏛️ IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
📈 Citations: 0 · Influential: 0
🤖 AI Summary
To address the accuracy degradation and poor architectural adaptability that arise when post-training quantization (PTQ) is applied to speech foundation models (SFMs), this paper proposes StableQuant, a layer-adaptive PTQ algorithm. StableQuant jointly examines the scale distributions of weights and activations in each layer and adaptively selects per-layer quantization ranges by analyzing both those distributions and overall ASR performance. It is presented as the first fine-tuning-free, architecture-agnostic 8-bit PTQ method to work across diverse SFMs (e.g., HuBERT and wav2vec 2.0). Its core contribution lies in balancing quantization robustness with automatic speech recognition (ASR) accuracy. Experiments show that, with 8-bit quantization, StableQuant reduces model size by 75%, doubles inference speed, and keeps the word error rate (WER) increase below 0.3%, substantially outperforming existing PTQ approaches.
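The paper does not ship reference code, so the following is only a rough sketch of the layer-adaptive idea: for each layer, try several clipping percentiles on calibration statistics and keep the one whose 8-bit reconstruction error is smallest. The function names, the candidate grid, and the MSE proxy are assumptions for illustration; the actual method selects ranges from the joint weight/activation scale distributions with ASR performance in mind.

```python
import numpy as np

def quantize_dequantize(x, lo, hi, n_bits=8):
    """Uniform affine quantization of x to n_bits within [lo, hi], then dequantization."""
    levels = 2 ** n_bits - 1
    scale = max((hi - lo) / levels, 1e-12)
    q = np.clip(np.round((x - lo) / scale), 0, levels)
    return q * scale + lo

def search_clip_percentiles(calib_acts, candidates=(99.0, 99.9, 99.99, 100.0), n_bits=8):
    """For each layer, pick the clipping percentile whose quantized values best
    reconstruct the float ones (MSE stands in here for the paper's
    performance-aware criterion)."""
    chosen = {}
    for layer, acts in calib_acts.items():
        errors = {}
        for p in candidates:
            lo, hi = np.percentile(acts, 100.0 - p), np.percentile(acts, p)
            x_hat = quantize_dequantize(acts, lo, hi, n_bits)
            errors[p] = float(np.mean((acts - x_hat) ** 2))
        chosen[layer] = min(errors, key=errors.get)
    return chosen
```

In practice `calib_acts` would be gathered from a small set of calibration utterances (weights can be treated the same way), and the chosen per-layer ranges are then frozen for inference.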

📝 Abstract
In this paper, we propose StableQuant, a novel adaptive post-training quantization (PTQ) algorithm for widely used speech foundation models (SFMs). While PTQ has been successfully employed for compressing large language models (LLMs) due to its ability to bypass additional fine-tuning, directly applying these techniques to SFMs may not yield optimal results, as SFMs utilize distinct network architectures for feature extraction. StableQuant demonstrates optimal quantization performance regardless of the network architecture type, as it adaptively determines the quantization range for each layer by analyzing both the scale distributions and overall performance. We evaluate our algorithm on two SFMs, HuBERT and wav2vec 2.0, for an automatic speech recognition (ASR) task, and achieve superior performance compared to traditional PTQ methods. StableQuant successfully reduces SFM model sizes to a quarter and doubles the inference speed while limiting the word error rate (WER) performance drop to less than 0.3% with 8-bit quantization.
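The "quarter of the size" figure follows from replacing 32-bit floats with 8-bit integers plus a per-tensor scale and zero point. A minimal sketch of that arithmetic, with helper names that are my own rather than the paper's:

```python
import numpy as np

def quantize_uint8(w):
    """Asymmetric per-tensor quantization of an fp32 tensor to uint8.
    Returns the integer tensor plus the (scale, zero_point) pair needed
    to dequantize at inference time."""
    lo, hi = float(w.min()), float(w.max())
    scale = max((hi - lo) / 255.0, 1e-12)
    zero_point = int(round(-lo / scale))
    q = np.clip(np.round(w / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize_uint8(q, scale, zero_point):
    return (q.astype(np.float32) - float(zero_point)) * scale

w = np.random.randn(1024, 1024).astype(np.float32)  # stand-in fp32 weight matrix
q, s, z = quantize_uint8(w)
print(w.nbytes / q.nbytes)  # 4.0 -> int8 storage is a quarter of fp32
```

The 2x inference speed-up reported in the abstract additionally relies on integer compute kernels, which this storage sketch does not model.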
Problem

Research questions and friction points this paper is trying to address.

Adaptive quantization for speech foundation models
Optimize layer-specific quantization ranges dynamically
Maintain accuracy while compressing model size
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive layer-wise quantization for speech models
Analyzes scale distributions for optimal quantization (see the sketch after this list)
Maintains performance with 8-bit compression
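As a concrete picture of the scale-distribution analysis mentioned above, the sketch below records per-layer activation magnitudes over a few calibration batches using PyTorch forward hooks. `collect_activation_scales`, the Linear-only filter, and the calling convention `model(batch)` are assumptions; how a batch is fed in depends on the specific SFM (HuBERT and wav2vec 2.0 take raw waveforms).

```python
import torch

def collect_activation_scales(model, calib_loader, max_batches=16):
    """Record per-layer activation magnitudes on a small calibration set;
    these distributions are what a layer-adaptive range search works from."""
    stats = {name: [] for name, m in model.named_modules()
             if isinstance(m, torch.nn.Linear)}

    def make_hook(name):
        def hook(module, inputs, output):
            stats[name].append(output.detach().abs().flatten().cpu())
        return hook

    handles = [m.register_forward_hook(make_hook(name))
               for name, m in model.named_modules() if name in stats]

    model.eval()
    with torch.no_grad():
        for i, batch in enumerate(calib_loader):
            if i >= max_batches:
                break
            model(batch)  # assumed: batch is a raw-waveform tensor

    for h in handles:
        h.remove()
    return {name: torch.cat(vals) for name, vals in stats.items() if vals}
```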
👥 Authors

Yeona Hong
Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea

Hyewon Han
42dot

Woo-Jin Chung
Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea

Hong-Goo Kang
Yonsei University