LLM-empowered Dynamic Prompt Routing for Vision-Language Models Tuning under Long-Tailed Distributions

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the class bias that arises when fine-tuning vision-language models (VLMs) under long-tailed distributions, compounded by inherent class imbalance in pre-training, this paper proposes the Multi-dimensional Dynamic Prompt Routing (MDPR) framework. MDPR leverages a large language model to construct a knowledge base spanning five visual-semantic dimensions and employs a dynamic routing mechanism to achieve global class alignment, optimal prompt retrieval, and fine-grained semantic balancing. The method integrates dynamic prompt generation, multi-dimensional semantic alignment, and logits-weighted fusion, mitigating bias accumulation with minimal computational overhead. MDPR achieves performance comparable to state-of-the-art methods on CIFAR-LT, ImageNet-LT, and Places-LT. Ablation studies confirm the contributions of multi-dimensional semantic modeling and dynamic routing to robust long-tail generalization.
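As a rough illustration of the routing-and-fusion idea, the following minimal PyTorch sketch matches an image embedding against a small bank of per-class prompt embeddings in each of several semantic dimensions, routes to the best prompt per class, and fuses the dimension-wise logits with learnable weights. Everything here is an assumption for illustration: the dimension labels, the `route_and_fuse` function, and the random embeddings (which stand in for frozen CLIP encoders and LLM-generated prompts) are not the authors' implementation.

```python
# Minimal sketch of dimension-wise prompt routing with logits-weighted fusion,
# loosely following the MDPR description. All names are illustrative assumptions.
import torch
import torch.nn.functional as F

NUM_CLASSES, EMB_DIM = 10, 512
DIMENSIONS = ["appearance", "shape", "context", "function", "parts"]  # assumed dimension names
PROMPTS_PER_DIM = 4

def encode_image(images: torch.Tensor) -> torch.Tensor:
    # Stand-in for a frozen CLIP image encoder producing L2-normalised embeddings.
    return F.normalize(torch.randn(images.size(0), EMB_DIM), dim=-1)

# Knowledge base: for each semantic dimension, several prompt embeddings per class
# (random placeholders here; in practice, LLM-generated prompts encoded by CLIP's text encoder).
knowledge_base = {
    dim: F.normalize(torch.randn(NUM_CLASSES, PROMPTS_PER_DIM, EMB_DIM), dim=-1)
    for dim in DIMENSIONS
}

def route_and_fuse(image_emb: torch.Tensor,
                   fusion_weights: torch.Tensor,
                   temperature: float = 0.01) -> torch.Tensor:
    """For each dimension, route to the best-matching prompt per class,
    compute class logits, then fuse dimension-wise logits with learned weights."""
    per_dim_logits = []
    for dim in DIMENSIONS:
        prompts = knowledge_base[dim]                          # (C, P, D)
        sims = torch.einsum("bd,cpd->bcp", image_emb, prompts) # (B, C, P) similarities
        best = sims.max(dim=-1).values                         # routing: best prompt per class
        per_dim_logits.append(best / temperature)
    logits = torch.stack(per_dim_logits, dim=0)                # (K, B, C)
    weights = torch.softmax(fusion_weights, dim=0).view(-1, 1, 1)
    return (weights * logits).sum(dim=0)                       # weighted fusion -> (B, C)

images = torch.zeros(2, 3, 224, 224)
image_emb = encode_image(images)
fusion_weights = torch.zeros(len(DIMENSIONS), requires_grad=True)  # learnable during fine-tuning
print(route_and_fuse(image_emb, fusion_weights).shape)             # torch.Size([2, 10])
```

In this sketch the fusion weights are the only trainable parameters, which reflects the paper's claim that the routing adds minimal computational overhead on top of a frozen VLM.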

📝 Abstract
Pre-trained vision-language models (VLMs), such as CLIP, have demonstrated impressive capability in visual tasks, but their fine-tuning often suffers from bias in class-imbalanced scenes. Recent works have introduced large language models (LLMs) to enhance VLM fine-tuning by supplementing semantic information. However, they often overlook inherent class imbalance in VLMs' pre-training, which may lead to bias accumulation in downstream tasks. To address this problem, this paper proposes a Multi-dimensional Dynamic Prompt Routing (MDPR) framework. MDPR constructs a comprehensive knowledge base for classes, spanning five visual-semantic dimensions. During fine-tuning, the dynamic routing mechanism aligns global visual classes, retrieves optimal prompts, and balances fine-grained semantics, yielding stable predictions through logits fusion. Extensive experiments on long-tailed benchmarks, including CIFAR-LT, ImageNet-LT, and Places-LT, demonstrate that MDPR achieves results comparable to current SOTA methods. Ablation studies further confirm the effectiveness of our semantic library for tail classes and show that our dynamic routing incurs minimal computational overhead, making MDPR a flexible and efficient enhancement for VLM fine-tuning under data imbalance.
Problem

Research questions and friction points this paper is trying to address.

Addresses bias in class-imbalanced VLM fine-tuning
Mitigates bias accumulation from pre-training data imbalance
Enhances prompt selection for long-tailed visual recognition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-dimensional knowledge base spanning five visual-semantic dimensions
Dynamic routing mechanism aligning classes and retrieving optimal prompts
Logits fusion for stable predictions with minimal computational overhead
Yongju Jia
Shandong University, Weihai, China
Jiarui Ma
Shandong University, Weihai, China
Xiangxian Li
Shandong University, Weihai, China
Baiqiao Zhang
Shandong University, Weihai, China; The Hong Kong University of Science and Technology, Hong Kong, China
Xianhui Cao
AiLF Instruments, Weihai, China
Juan Liu
Wuhan University
Data Mining; Artificial Intelligence in Bioinformatics; Biomedicine
Yulong Bian
Shandong University, Weihai, China; Shandong Key Laboratory of Intelligent Electronic Packaging Testing and Application, Weihai, China