🤖 AI Summary
Fixed-rank Low-Rank Adaptation (LoRA) offers limited expressivity and adapts poorly to the heterogeneous multimodal features encountered when fine-tuning mobile Vision Language Models (VLMs).
Method: We propose a hierarchical dynamic rank scheduling mechanism with three components: (i) inter-layer rank differentiation, (ii) intra-layer fine-grained rank adaptation, and (iii) a lightweight end-to-end performance predictor for automatic rank configuration. The approach combines LoRA, hierarchical parameter optimization, and multimodal joint fine-tuning without increasing the number of trainable parameters.
Contribution/Results: Evaluated across multiple benchmarks, our method achieves an average accuracy gain of 4.7% and outperforms full-parameter fine-tuning on several tasks. It improves both the efficiency and accuracy of mobile VLM fine-tuning, enabling parameter-efficient handling of multimodal heterogeneity.
📝 Abstract
Vision Language Models (VLMs) have advanced rapidly, particularly with the emergence of mobile-oriented VLMs, which open up a wide range of application scenarios. However, the substantial computational requirements of training these models remain a major obstacle to their practical deployment. Low-Rank Adaptation (LoRA) was proposed to address this issue, but standard LoRA with a fixed rank lacks sufficient capacity for training mobile VLMs that process both text and image modalities. In this work, we introduce HyDRA, a parameter-efficient fine-tuning framework that implements hierarchical and dynamic rank scheduling for mobile VLMs. The framework incorporates two key optimization strategies: (1) hierarchical optimization, combining a coarse-grained approach that assigns different ranks to different layers with a fine-grained method that adjusts ranks within individual layers, and (2) dynamic adjustment, which uses a lightweight performance model to determine and adjust ranks end to end during the fine-tuning process. Comprehensive experiments on popular benchmarks demonstrate that HyDRA consistently outperforms the baseline, achieving a 4.7% average improvement across various model sizes without increasing the number of trainable parameters, and on some tasks even surpassing full-parameter fine-tuning.
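The abstract does not include an implementation, but the coarse-grained, inter-layer part of the idea can be sketched as LoRA adapters whose ranks differ per layer. The sketch below is a minimal NumPy illustration: all names, dimensions, and the concrete rank schedule are hypothetical, and HyDRA would choose the ranks automatically via its lightweight performance model rather than hard-coding them.

```python
import numpy as np

class LoRALinear:
    """Frozen weight W plus a trainable low-rank update B @ A of a chosen rank."""
    def __init__(self, in_dim, out_dim, rank, alpha=16.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((out_dim, in_dim)) * 0.02  # frozen base weight
        self.A = rng.standard_normal((rank, in_dim)) * 0.01     # trainable down-projection
        self.B = np.zeros((out_dim, rank))                      # trainable up-projection, zero-init
        self.scale = alpha / rank

    def forward(self, x):
        # Base path plus scaled low-rank correction (zero at initialization).
        return x @ self.W.T + (x @ self.A.T @ self.B.T) * self.scale

    def trainable_params(self):
        return self.A.size + self.B.size

# Inter-layer rank differentiation: a hypothetical schedule giving deeper
# layers larger ranks while keeping a fixed overall parameter budget.
ranks = [4, 8, 8, 16]
layers = [LoRALinear(64, 64, r) for r in ranks]

x = np.ones((2, 64))
for layer in layers:
    x = layer.forward(x)

total = sum(layer.trainable_params() for layer in layers)
print(x.shape, total)  # (2, 64) 4608
```

Because `B` is zero-initialized, each adapter starts as a no-op on top of the frozen weight; only `A` and `B` are trained, so the rank schedule directly sets the trainable-parameter budget (128·r per 64×64 layer here), which is the knob HyDRA's scheduler redistributes across layers.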