HyDRA: Hierarchical and Dynamic Rank Adaptation for Mobile Vision Language Model

📅 2025-12-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fixed-rank Low-Rank Adaptation (LoRA) offers limited expressivity and adapts poorly to the heterogeneous multimodal features involved in mobile Vision Language Model (VLM) fine-tuning. Method: We propose a hierarchical dynamic rank scheduling mechanism with three components: (i) inter-layer rank differentiation, (ii) intra-layer fine-grained rank adaptation, and (iii) a lightweight end-to-end performance predictor for automatic rank configuration. Our approach integrates LoRA, hierarchical parameter optimization, and multimodal joint fine-tuning without increasing the number of trainable parameters. Contribution/Results: Evaluated across multiple benchmarks, our method achieves an average accuracy gain of 4.7% and outperforms full-parameter fine-tuning on several tasks. It significantly improves both the efficiency and the accuracy of mobile VLM fine-tuning, enabling adaptive, parameter-efficient adaptation to multimodal heterogeneity.
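To make component (i) concrete, the sketch below shows how a standard LoRA linear layer can be parameterized with a per-layer rank. This is a minimal PyTorch illustration, not the paper's implementation; the `rank_map` values and the `attn_proj` attribute are hypothetical placeholders for whatever configuration HyDRA's predictor would actually select.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update of configurable rank."""

    def __init__(self, base: nn.Linear, rank: int, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        self.scaling = alpha / rank
        # Standard LoRA parameterization: W x + (alpha / r) * B A x
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Coarse-grained, inter-layer rank differentiation: every block gets its own
# rank. The values below are arbitrary placeholders, not ranks from the paper.
rank_map = {0: 4, 1: 8, 2: 16, 3: 8}

def wrap_layers(blocks: nn.ModuleList) -> None:
    for i, block in enumerate(blocks):
        # `attn_proj` is a hypothetical attribute name for a projection to adapt.
        block.attn_proj = LoRALinear(block.attn_proj, rank=rank_map[i])
```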

📝 Abstract
Vision Language Models (VLMs) have advanced significantly, particularly with the emergence of mobile-oriented VLMs that open up a wide range of application scenarios. However, the substantial computational requirements of training these models remain a major obstacle to their practical deployment. Low-Rank Adaptation (LoRA) has been proposed to address this issue, but standard LoRA with a fixed rank lacks the capacity to fine-tune mobile VLMs that process both text and image modalities. In this work, we introduce HyDRA, a parameter-efficient fine-tuning framework that implements hierarchical and dynamic rank scheduling for mobile VLMs. The framework incorporates two essential optimization strategies: (1) hierarchical optimization, a coarse-grained approach that assigns different ranks to different layers combined with a fine-grained method that adjusts ranks within individual layers, and (2) dynamic adjustment, an end-to-end automatic optimization that uses a lightweight performance model to determine and adjust ranks during fine-tuning. Comprehensive experiments on popular benchmarks demonstrate that HyDRA consistently outperforms the baseline, achieving a 4.7% improvement across various model sizes without increasing the number of trainable parameters. On some tasks it even surpasses full-parameter fine-tuning.
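The abstract does not detail the lightweight performance model, so the following is only a plausible sketch of strategy (2): a small MLP (here called `RankPerformancePredictor`, a hypothetical name) scores candidate per-layer rank vectors, and the best-scoring configuration is adopted for the next round of fine-tuning.

```python
import torch
import torch.nn as nn

class RankPerformancePredictor(nn.Module):
    """Tiny MLP mapping a per-layer rank vector to a predicted task score.
    A hypothetical stand-in for HyDRA's lightweight performance model."""

    def __init__(self, num_layers: int, max_rank: int = 64):
        super().__init__()
        self.max_rank = max_rank
        self.net = nn.Sequential(
            nn.Linear(num_layers, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, ranks: torch.Tensor) -> torch.Tensor:
        return self.net(ranks / self.max_rank)  # normalize ranks to [0, 1]

def pick_rank_config(predictor: RankPerformancePredictor,
                     candidates: list[torch.Tensor]) -> torch.Tensor:
    """Score every candidate per-layer rank vector and return the best one."""
    with torch.no_grad():
        scores = [predictor(c.float()).item() for c in candidates]
    return candidates[max(range(len(scores)), key=scores.__getitem__)]

# Usage: propose a few per-layer rank vectors and let the predictor choose.
candidates = [torch.tensor([4, 8, 8, 16]), torch.tensor([8, 8, 8, 8])]
best = pick_rank_config(RankPerformancePredictor(num_layers=4), candidates)
```

In practice such a predictor would be fit on (rank configuration, validation score) pairs collected during fine-tuning; the untrained instance above only illustrates the interface.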
Problem

Research questions and friction points this paper is trying to address.

Addresses high computational demands in mobile vision language model training
Enhances Low-Rank Adaptation with hierarchical and dynamic rank scheduling
Improves parameter-efficient fine-tuning for multimodal mobile applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical rank assignment across layers, combined with fine-grained rank adaptation within each layer (see the gating sketch after this list)
Dynamic rank adjustment using lightweight performance model
Parameter-efficient fine-tuning for mobile vision language models
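One way to realize the fine-grained, intra-layer rank adaptation mentioned above is to give each rank-1 component of the LoRA update its own gate, so the effective rank of a single layer can shrink or grow during fine-tuning. The sketch below is an illustrative mechanism under that assumption, not HyDRA's documented design.

```python
import torch
import torch.nn as nn

class GatedRankLoRA(nn.Module):
    """LoRA update B @ diag(g) @ A in which each rank-1 term has its own gate,
    so the effective rank within one layer can be adjusted during training.
    An illustrative mechanism, not the paper's exact intra-layer scheme."""

    def __init__(self, in_features: int, out_features: int, max_rank: int = 16):
        super().__init__()
        self.A = nn.Parameter(torch.randn(max_rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, max_rank))
        self.gates = nn.Parameter(torch.ones(max_rank))  # one gate per rank-1 term

    def effective_rank(self, threshold: float = 1e-2) -> int:
        # Components whose gate has collapsed below the threshold no longer count.
        return int((self.gates.abs() > threshold).sum().item())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, in) @ A.T -> (batch, r); gate per component; @ B.T -> (batch, out)
        return (x @ self.A.T) * self.gates @ self.B.T
```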
Yuanhao Xi
University of Göttingen
Large Language Model
Xiaohuan Bing
Liaoning Technical University, Huludao, China; University of Göttingen, Göttingen, Germany; Gesellschaft für Wissenschaftliche Datenverarbeitung mbH Göttingen, Göttingen, Germany
Ramin Yahyapour
GWDG, University of Göttingen
HPC, Distributed Computing, Clouds, Grids, Data Management