🤖 AI Summary
This work addresses a limitation of existing parameter-efficient fine-tuning methods, such as LoRA, which process modalities independently in dual-stream multimodal architectures and thus struggle to model cross-modal interactions. To overcome this, the authors propose CoLA, a novel framework that extends LoRA with parallel cross-modal low-rank adaptation pathways decoupled from the unimodal paths, enabling efficient multimodal adaptation without interference between modality-specific and cross-modal learning. CoLA is the first multi-task parameter-efficient fine-tuning framework for visual grounding, and its cross-modal low-rank matrix injection mechanism integrates seamlessly with prevalent foundation models like DINO and BERT. Experimental results show consistent gains over LoRA, with relative improvements of approximately 3% on vision-language benchmarks (RefCOCO) and 2% on audio-visual benchmarks (AVE, AVS), while maintaining high parameter efficiency.
📝 Abstract
Foundation models have revolutionized AI, but adapting them efficiently for multimodal tasks, particularly in dual-stream architectures composed of unimodal encoders such as DINO and BERT, remains a significant challenge. Parameter-Efficient Fine-Tuning (PEFT) methods like Low-Rank Adaptation (LoRA) enable lightweight adaptation, yet they operate in isolation within each modality, limiting their ability to capture cross-modal interactions. In this paper, we take a step toward bridging this gap with Cross-Modal Low-Rank Adaptation (CoLA), a novel PEFT framework that extends LoRA by introducing a dedicated inter-modal adaptation pathway alongside the standard intra-modal one. This dual-path design enables CoLA to adapt unimodal foundation models to multimodal tasks effectively, without interference between modality-specific and cross-modal learning. We evaluate CoLA across a range of vision-language (RefCOCO, RefCOCO+, RefCOCOg) and audio-visual (AVE, AVS) benchmarks, where it consistently outperforms LoRA, achieving relative gains of around 3% and 2%, respectively, while maintaining parameter efficiency. Notably, CoLA enables the first multi-task PEFT framework for visual grounding, bridging a key gap in efficient multimodal adaptation.
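To make the dual-path idea concrete, here is a minimal NumPy sketch of a CoLA-style update as described in the abstract: a frozen pretrained weight, a standard intra-modal LoRA path on the modality's own hidden state, and a parallel inter-modal low-rank path that reads the other modality's hidden state. All variable names, dimensions, and initializations here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden size and low rank (illustrative values)

# Frozen pretrained projection weight (e.g. inside a DINO or BERT layer).
W = rng.standard_normal((d, d))

# Intra-modal LoRA factors: standard low-rank update on this modality's input.
A_intra = 0.01 * rng.standard_normal((r, d))
B_intra = np.zeros((d, r))  # zero-init so the adapter starts as a no-op

# Inter-modal (cross-modal) factors: a parallel low-rank path fed by the
# other modality's stream, decoupled from the intra-modal path.
A_cross = 0.01 * rng.standard_normal((r, d))
B_cross = np.zeros((d, r))

def cola_forward(x, z):
    """x: this modality's hidden state; z: the other modality's hidden state."""
    return W @ x + B_intra @ (A_intra @ x) + B_cross @ (A_cross @ z)

x = rng.standard_normal(d)  # e.g. a visual token
z = rng.standard_normal(d)  # e.g. a text token
y = cola_forward(x, z)

# With both B matrices zero-initialized, adaptation starts from the frozen
# model's behavior; only the small A/B factors would be trained.
assert np.allclose(y, W @ x)
```

As in LoRA, only the low-rank factors would be trained while `W` stays frozen, which is what keeps the adaptation parameter-efficient even with the extra cross-modal path.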