🤖 AI Summary
To address the training instability and suboptimal sample efficiency of DoRA in parameter-efficient fine-tuning, this paper proposes DoRAN. DoRAN injects noise into the denominator of DoRA's weight decomposition, which acts as an adaptive regularizer, and replaces static low-rank adaptation matrices with a learnable auxiliary network that generates them dynamically, coupling parameters across layers. This work is the first to jointly model dynamic matrix generation and noise-based stabilization, improving training robustness and generalization both theoretically and empirically. Experiments on vision and language benchmarks show that DoRAN consistently outperforms baselines, including LoRA and DoRA, in both accuracy and stability. Notably, under few-shot settings, DoRAN converges faster and trains more stably, demonstrating its effectiveness across diverse downstream tasks.
📝 Abstract
Parameter-efficient fine-tuning (PEFT) methods have become the standard paradigm for adapting large-scale models. Among these techniques, Weight-Decomposed Low-Rank Adaptation (DoRA) has been shown to improve both the learning capacity and training stability of the vanilla Low-Rank Adaptation (LoRA) method by explicitly decomposing pre-trained weights into magnitude and directional components. In this work, we propose DoRAN, a new variant of DoRA designed to further stabilize training and boost the sample efficiency of DoRA. Our approach includes two key stages: (i) injecting noise into the denominator of DoRA's weight decomposition, which serves as an adaptive regularizer to mitigate instabilities; and (ii) replacing static low-rank matrices with auxiliary networks that generate them dynamically, enabling parameter coupling across layers and yielding better sample efficiency in both theory and practice. Comprehensive experiments on vision and language benchmarks show that DoRAN consistently outperforms LoRA, DoRA, and other PEFT baselines. These results underscore the effectiveness of combining stabilization through noise-based regularization with network-based parameter generation, offering a promising direction for robust and efficient fine-tuning of foundation models.
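The two mechanisms described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it assumes DoRA's standard reparameterization W = m · (W0 + BA) / ||W0 + BA||_c with column-wise norms, adds a small noise term to that denominator (stage i), and stands in for the auxiliary network with a simple linear generator mapping a shared latent code to A and B (stage ii). All shapes, scales, and generator details here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 6, 4, 2
W0 = rng.standard_normal((d_out, d_in))   # frozen pre-trained weight
m = np.linalg.norm(W0, axis=0)            # trainable magnitude vector

# (ii) Auxiliary-network stand-in: a shared latent code z is linearly
# mapped to the low-rank factors A and B instead of storing them as
# static parameters (U_A, U_B play the role of the learnable generator;
# hypothetical, for illustration only).
z = rng.standard_normal(8)
U_A = 0.1 * rng.standard_normal((r * d_in, 8))
U_B = 0.1 * rng.standard_normal((d_out * r, 8))
A = (U_A @ z).reshape(r, d_in)
B = (U_B @ z).reshape(d_out, r)

# (i) Noise injected into the denominator of the weight decomposition,
# acting as a stochastic perturbation of the column-wise normalization.
V = W0 + B @ A
eps = rng.normal(scale=1e-3, size=d_in)   # small denominator noise
W = m * V / (np.linalg.norm(V, axis=0) + eps)

# Forward pass with the adapted weight.
x = rng.standard_normal(d_in)
y = W @ x
```

Because `eps` is small, each column of `W` keeps a norm close to the learned magnitude `m`, so the noise perturbs, but does not destroy, the magnitude/direction decomposition; generating `A` and `B` from one shared code is one way parameters could be coupled across layers.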