AI Summary
To address two key challenges in quantization-aware training (QAT), namely non-uniform activation distributions and static weight-quantization codebooks ill-suited to dynamic parameter shifts, this paper proposes the Adaptive Distribution-aware Quantization (ADQ) framework. ADQ jointly enables dynamic distribution alignment and hardware-efficient low-bit quantization via three core components: quantile-based codebook initialization, an online codebook adaptation mechanism based on an exponential moving average (EMA), and a sensitivity-driven mixed-precision allocation strategy. The method integrates non-uniform-to-uniform mapping, online distribution modeling, and co-optimization of bit-width allocation with layer-wise sensitivity. On ImageNet, ResNet-18 quantized by ADQ achieves 71.512% Top-1 accuracy at an average bit-width of 2.81, substantially outperforming prior state-of-the-art methods. Ablation studies confirm the effectiveness of each component. The primary contribution is a unified, lightweight QAT framework that is the first to simultaneously incorporate distribution adaptivity, online codebook updating, and sensitivity-aware precision assignment.
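The quantile-based codebook initialization can be illustrated with a minimal sketch: placing the 2^b code levels at evenly spaced quantiles of the empirical weight distribution so that the codebook matches where the weight mass actually lies. The function name and midpoint-quantile placement below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def init_codebook(weights: np.ndarray, bits: int) -> np.ndarray:
    """Hypothetical sketch of quantile-based codebook initialization.

    Each of the 2**bits code levels is placed at the midpoint quantile of
    an equal-probability bin, so dense regions of the weight distribution
    receive closely spaced levels and sparse tails receive few.
    """
    n_levels = 2 ** bits
    # midpoint quantiles of n_levels equal-probability bins, e.g.
    # for 8 levels: 1/16, 3/16, ..., 15/16
    qs = (np.arange(n_levels) + 0.5) / n_levels
    return np.quantile(weights.ravel(), qs)

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)     # stand-in for a layer's weights
cb = init_codebook(w, bits=3)       # 8 levels tracking the Gaussian mass
```

For a roughly Gaussian layer, this yields levels clustered near zero and spread out in the tails, which is exactly the non-uniform spacing a fixed uniform grid cannot provide.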
Abstract
Quantization-Aware Training (QAT) is a critical technique for deploying deep neural networks on resource-constrained devices. However, existing methods often face two major challenges: the highly non-uniform distribution of activations and the static, mismatched codebooks used in weight quantization. To address these challenges, we propose Adaptive Distribution-aware Quantization (ADQ), a mixed-precision quantization framework that employs a differentiated strategy. The core of ADQ is a novel adaptive weight quantization scheme comprising three key innovations: (1) a quantile-based initialization method that constructs a codebook closely aligned with the initial weight distribution; (2) an online codebook adaptation mechanism based on an Exponential Moving Average (EMA) to dynamically track distributional shifts; and (3) a sensitivity-informed strategy for mixed-precision allocation. For activations, we integrate a hardware-friendly non-uniform-to-uniform mapping scheme. On ImageNet, ADQ enables a ResNet-18 to achieve 71.512% Top-1 accuracy with an average bit-width of only 2.81 bits, outperforming state-of-the-art methods under comparable conditions. Furthermore, detailed ablation studies on CIFAR-10 systematically demonstrate the individual contribution of each component, validating the design.
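The EMA-based online codebook adaptation described above can be sketched as a cluster-style update: each weight is assigned to its nearest code level, and each level is nudged toward the mean of its assigned weights with EMA momentum. This is a minimal illustration under assumed details (nearest-neighbor assignment, per-level mean targets); the paper's exact update rule may differ.

```python
import numpy as np

def ema_update(codebook: np.ndarray, weights: np.ndarray,
               momentum: float = 0.99) -> np.ndarray:
    """Hypothetical EMA adaptation step for a weight-quantization codebook.

    As training shifts the weight distribution, each code level drifts
    toward the mean of the weights currently nearest to it, at a rate
    controlled by the EMA momentum. Levels with no assigned weights are
    left unchanged.
    """
    # assign each weight to its nearest code level
    idx = np.abs(weights[:, None] - codebook[None, :]).argmin(axis=1)
    new_cb = codebook.copy()
    for k in range(len(codebook)):
        assigned = weights[idx == k]
        if assigned.size:
            new_cb[k] = momentum * codebook[k] + (1 - momentum) * assigned.mean()
    return new_cb

# one adaptation step on a toy 3-level codebook
cb0 = np.array([-1.0, 0.0, 1.0])
w = np.array([-1.2, -0.9, 0.1, 1.1])
cb1 = ema_update(cb0, w, momentum=0.9)
```

Because the update is a fixed-cost pass over the current weights, it keeps the codebook tracking distributional shifts without re-running a full clustering, which is what makes the adaptation lightweight enough for use during training.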