Quant Experts: Token-aware Adaptive Error Reconstruction with Mixture of Experts for Large Vision-Language Models Quantization

📅 2026-02-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical limitation in existing post-training quantization methods for vision-language models: the neglect of distributional disparities in channel importance across input tokens and modalities, which leads to insufficient error compensation. To overcome this, we propose a token-aware adaptive quantization error compensation method that, for the first time, integrates a Mixture-of-Experts (MoE) mechanism into vision-language model quantization. By analyzing cross-modal channel sensitivity, our approach distinguishes between token-agnostic and token-dependent important channels, applying shared experts for global compensation and routed experts for local refinement. Combined with low-rank adapters and token-aware routing, the method significantly improves post-quantization performance across models ranging from 2B to 70B parameters, achieving accuracy closely matching that of full-precision counterparts.

📝 Abstract
Post-Training Quantization (PTQ) has emerged as an effective technique for alleviating the substantial computational and memory overheads of Vision-Language Models (VLMs) by compressing both weights and activations without retraining the full model. Existing PTQ methods primarily rely on static identification and global compensation of sensitive or outlier channels, yet they often overlook the distributional differences of these important channels across inputs, leading to unsatisfactory quantization. In this work, we observe that the distributions and occurrence frequencies of important channels vary significantly both across modalities and among tokens, even within the same modality. Accordingly, we propose Quant Experts (QE), a token-aware adaptive error compensation with mixture-of-experts for VLM quantization. QE divides the important channels into token-independent and token-dependent groups. For the former, a shared expert is designed for most tokens to compensate for global quantization error using a low-rank adapter. For the latter, routed experts including multiple routed low-rank adapters are elaborated to compensate for local quantization error related to specific tokens. Extensive experiments demonstrate that QE consistently enhances task accuracy across various quantization settings and model scales, ranging from 2B to 70B parameters, while maintaining performance comparable to full-precision models.
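The abstract's compensation scheme can be sketched numerically: a fake-quantized linear layer whose output is corrected by one shared low-rank adapter (global, token-independent error) plus a token-aware router that picks one routed low-rank adapter per token (local, token-dependent error). This is a minimal NumPy illustration of that structure, not the paper's implementation; all dimensions, the top-1 routing, and the 4-bit symmetric quantizer are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_quant_int4(w):
    # Symmetric per-output-channel 4-bit quantize-dequantize (illustrative).
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    return np.clip(np.round(w / scale), -8, 7) * scale

d, rank, n_experts = 16, 2, 4          # hypothetical sizes for the sketch

W = rng.standard_normal((d, d))        # full-precision weight
W_q = fake_quant_int4(W)               # quantized weight (carries error W - W_q)

# Shared expert: one low-rank adapter applied to every token.
A_s = rng.standard_normal((d, rank)) * 0.01
B_s = rng.standard_normal((rank, d)) * 0.01

# Routed experts: a bank of low-rank adapters, one selected per token.
A_e = rng.standard_normal((n_experts, d, rank)) * 0.01
B_e = rng.standard_normal((n_experts, rank, d)) * 0.01
W_gate = rng.standard_normal((d, n_experts)) * 0.01  # token-aware router

def forward(x):
    """x: (n_tokens, d) -> (n_tokens, d) with MoE error compensation."""
    y = x @ W_q.T                       # quantized linear layer
    y = y + (x @ A_s) @ B_s             # shared expert: global compensation
    logits = x @ W_gate
    gate = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    top = np.argmax(logits, axis=-1)    # top-1 routing per token
    for t in range(x.shape[0]):         # routed expert: local compensation
        e = top[t]
        y[t] += gate[t, e] * (x[t] @ A_e[e]) @ B_e[e]
    return y

x = rng.standard_normal((8, d))
out = forward(x)
```

In the paper the adapters would be calibrated so that the shared and routed terms approximate `x @ (W - W_q).T` for their respective channel groups; here the adapter weights are random placeholders, so the sketch only demonstrates the dataflow, not the error reduction.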
Problem

Research questions and friction points this paper is trying to address.

Post-Training Quantization
Vision-Language Models
Token-aware
Quantization Error
Outlier Channels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-Training Quantization
Vision-Language Models
Mixture of Experts
Token-aware Adaptation
Low-rank Adapter
Chenwei Jia
State Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University
Baoting Li
State Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University
Xuchong Zhang
State Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University
Mingzhuo Wei
State Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University
Bochen Lin
State Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University
Hongbin Sun
Xi'an Jiaotong University
Computer Architecture · VLSI Circuit