🤖 AI Summary
To address significant accuracy degradation and poor architectural adaptability in post-training quantization (PTQ) of speech foundation models (SFMs), this paper proposes StableQuant, a layer-adaptive PTQ algorithm. StableQuant jointly analyzes the scale distributions of weights and activations across layers and searches for the quantization range of each layer that preserves downstream performance. The method is fine-tuning-free and architecture-agnostic, applying to diverse SFMs (e.g., HuBERT and wav2vec 2.0) with 8-bit PTQ. Its core contribution lies in balancing quantization robustness with automatic speech recognition (ASR) accuracy. Experiments show that StableQuant reduces model size by 75%, doubles inference speed, and limits the word error rate (WER) increase to less than 0.3% under 8-bit quantization, outperforming existing PTQ approaches.
📝 Abstract
In this paper, we propose StableQuant, a novel adaptive post-training quantization (PTQ) algorithm for widely used speech foundation models (SFMs). While PTQ has been successfully employed for compressing large language models (LLMs) due to its ability to bypass additional fine-tuning, directly applying these techniques to SFMs may not yield optimal results, as SFMs utilize distinct network architectures for feature extraction. StableQuant achieves strong quantization performance regardless of the network architecture, as it adaptively determines the quantization range for each layer by analyzing both the scale distributions and the overall performance. We evaluate our algorithm on two SFMs, HuBERT and wav2vec 2.0, on an automatic speech recognition (ASR) task, and achieve superior performance compared to traditional PTQ methods. StableQuant reduces SFM model size to a quarter and doubles the inference speed while limiting the word error rate (WER) degradation to less than 0.3% with 8-bit quantization.
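To make the idea of adaptively choosing a per-layer quantization range concrete, here is a minimal, hypothetical sketch. It is not the paper's algorithm: StableQuant selects ranges using both scale distributions and overall ASR performance, whereas this toy version only grid-searches a symmetric clipping threshold per tensor to minimize quantization mean-squared error. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def quantize_dequantize(x, scale, bits=8):
    """Symmetric uniform quantization followed by dequantization."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

def search_clip_range(x, bits=8, num_candidates=50):
    """Toy per-layer range search: try candidate clipping thresholds
    (fractions of the tensor's max magnitude) and keep the one with
    the lowest reconstruction MSE. A stand-in for a performance-aware
    criterion like the one StableQuant uses."""
    max_abs = float(np.abs(x).max())
    qmax = 2 ** (bits - 1) - 1
    best_scale, best_err = max_abs / qmax, np.inf
    for frac in np.linspace(0.2, 1.0, num_candidates):
        scale = frac * max_abs / qmax
        err = float(np.mean((x - quantize_dequantize(x, scale, bits)) ** 2))
        if err < best_err:
            best_err, best_scale = err, scale
    return best_scale, best_err
```

For a tensor with a few large outliers (common in transformer activations), clipping the range below the raw maximum typically reduces quantization error, which is the intuition behind searching the range per layer rather than using the full min-max span everywhere.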