🤖 AI Summary
This work addresses a critical limitation of existing post-training quantization methods for vision-language models: they neglect the distributional disparities in channel importance across input tokens and modalities, which leads to insufficient error compensation. To overcome this, we propose a token-aware adaptive quantization error compensation method that, for the first time, integrates a Mixture-of-Experts (MoE) mechanism into vision-language model quantization. By analyzing cross-modal channel sensitivity, our approach distinguishes between token-independent and token-dependent important channels, applying a shared expert for global compensation and routed experts for local refinement. Combined with low-rank adapters and token-aware routing, the method significantly improves post-quantization accuracy across models ranging from 2B to 70B parameters, closely matching the performance of full-precision counterparts.
📝 Abstract
Post-Training Quantization (PTQ) has emerged as an effective technique for alleviating the substantial computational and memory overheads of Vision-Language Models (VLMs) by compressing both weights and activations without retraining the full model. Existing PTQ methods primarily rely on static identification and global compensation of sensitive or outlier channels, yet they often overlook the distributional differences of these important channels across inputs, leading to unsatisfactory quantization results. In this work, we observe that the distributions and occurrence frequencies of important channels vary significantly both across modalities and among tokens, even within the same modality. Accordingly, we propose \textbf{Quant Experts (QE)}, a token-aware adaptive error compensation method with mixture-of-experts for VLM quantization. QE divides the important channels into token-independent and token-dependent groups. For the former, a shared expert compensates for the global quantization error of most tokens using a low-rank adapter. For the latter, routed experts, each comprising a routed low-rank adapter, compensate for the local quantization error of specific tokens. Extensive experiments demonstrate that QE consistently enhances task accuracy across various quantization settings and model scales, ranging from 2B to 70B parameters, while maintaining performance comparable to full-precision models.
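The shared-plus-routed compensation described above can be sketched as follows. This is a minimal illustrative NumPy mock-up, not the paper's implementation: the tensor shapes, the crude symmetric 4-bit weight quantizer, the top-1 linear router, and all adapter initializations are assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, n_experts, n_tokens = 16, 16, 2, 4, 8

# Full-precision weight and a crude symmetric 4-bit quantization (illustrative only).
W = rng.standard_normal((d_out, d_in)).astype(np.float32)
scale = np.abs(W).max() / 7.0
W_q = np.clip(np.round(W / scale), -8, 7) * scale

# Shared expert: one low-rank adapter (B_s @ A_s) applied to every token,
# compensating token-independent (global) quantization error.
A_s = 0.01 * rng.standard_normal((rank, d_in)).astype(np.float32)
B_s = 0.01 * rng.standard_normal((d_out, rank)).astype(np.float32)

# Routed experts: a bank of low-rank adapters; a linear gate picks one per token,
# compensating token-dependent (local) quantization error.
A_r = 0.01 * rng.standard_normal((n_experts, rank, d_in)).astype(np.float32)
B_r = 0.01 * rng.standard_normal((n_experts, d_out, rank)).astype(np.float32)
W_gate = rng.standard_normal((n_experts, d_in)).astype(np.float32)

def quant_experts_forward(x):
    """Quantized matmul plus shared (global) and routed (token-local) compensation."""
    y = x @ W_q.T                               # quantized backbone output
    y = y + (x @ A_s.T) @ B_s.T                 # shared expert, all tokens
    expert = np.argmax(x @ W_gate.T, axis=-1)   # top-1 token-aware routing
    for t, e in enumerate(expert):              # routed expert, per token
        y[t] += B_r[e] @ (A_r[e] @ x[t])
    return y

x = rng.standard_normal((n_tokens, d_in)).astype(np.float32)
y = quant_experts_forward(x)
```

In practice the adapters would be fitted (e.g., against the full-precision outputs on calibration data) rather than randomly initialized; the sketch only shows where the shared and routed compensation terms enter the forward pass.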