MAP: Revisiting Weight Decomposition for Low-Rank Adaptation

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing parameter-efficient fine-tuning (PEFT) methods such as LoRA and DoRA suffer from a fundamental geometric inconsistency in how they model weight-update directions. To address this, the paper proposes an adapter paradigm grounded in a rigorous geometric decomposition: the weight matrix is mapped to a high-dimensional vector, the base weight is normalized, update directions are learned on the unit sphere, and two independent scalars separately modulate the magnitudes of the base and the update. This framework achieves orthogonal, interpretable, and decoupled control over direction and magnitude, establishing a general theoretical foundation for PEFT. The method is plug-and-play with mainstream PEFT approaches. Extensive experiments across multiple tasks and architectures demonstrate consistent performance gains, faster convergence, improved generalization, and zero inference overhead.

📝 Abstract
The rapid development of large language models has revolutionized natural language processing, but their fine-tuning remains computationally expensive, hindering broad deployment. Parameter-efficient fine-tuning (PEFT) methods, such as LoRA, have emerged as solutions. Recent work like DoRA attempts to further decompose weight adaptation into direction and magnitude components. However, existing formulations often define direction heuristically at the column level, lacking a principled geometric foundation. In this paper, we propose MAP, a novel framework that reformulates weight matrices as high-dimensional vectors and decouples their adaptation into direction and magnitude in a rigorous manner. MAP normalizes the pre-trained weights, learns a directional update, and introduces two scalar coefficients to independently scale the magnitudes of the base and update vectors. This design enables more interpretable and flexible adaptation, and can be seamlessly integrated into existing PEFT methods. Extensive experiments show that MAP significantly improves performance when coupled with existing methods, offering a simple yet powerful enhancement. Given the universality and simplicity of MAP, we hope it can serve as a default setting for designing future PEFT methods.
Problem

Research questions and friction points this paper is trying to address.

High computational cost of fine-tuning large language models
Heuristic and unprincipled weight adaptation in existing methods
Lack of interpretable and flexible parameter-efficient fine-tuning approaches
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reformulates weight matrices as high-dimensional vectors
Decouples adaptation into direction and magnitude
Introduces scalar coefficients for flexible scaling
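The decomposition described above can be sketched in a few lines. This is a minimal illustrative implementation based only on the summary, not the authors' released code: the weight matrix is flattened to a vector, the base and update directions are normalized onto the unit sphere, and two hypothetical scalars (`alpha`, `beta`) independently control the two magnitudes.

```python
import numpy as np

def map_adapt(W0, delta, alpha, beta):
    """Sketch of a MAP-style update (illustrative, not the paper's code).

    W0    : pre-trained weight matrix
    delta : learned directional update, same shape as W0
    alpha : scalar magnitude for the normalized base weight
    beta  : scalar magnitude for the normalized update direction
    """
    shape = W0.shape
    w = W0.reshape(-1)                      # view the matrix as one vector
    d = delta.reshape(-1)                   # directional update as a vector
    w_unit = w / np.linalg.norm(w)          # base direction on the unit sphere
    d_unit = d / np.linalg.norm(d)          # update direction on the unit sphere
    w_new = alpha * w_unit + beta * d_unit  # decoupled magnitude control
    return w_new.reshape(shape)

# Toy usage: start from the base weight's own norm and add a small update.
rng = np.random.default_rng(0)
W0 = rng.standard_normal((4, 3))
delta = rng.standard_normal((4, 3))
W = map_adapt(W0, delta, alpha=np.linalg.norm(W0), beta=0.1)
```

With `beta = 0` and `alpha = ||W0||`, the update reduces to the original weights, which is the decoupling property the summary emphasizes: magnitude and direction can be tuned independently.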
👥 Authors
Chongjie Si (Shanghai Jiao Tong University)
Zhiyi Shi (University of Illinois at Urbana-Champaign)
Yadao Wang (Alibaba Group)
Xiaokang Yang (Shanghai Jiao Tong University)
Susanto Rahardja (Singapore Institute of Technology)
Wei Shen (Shanghai Jiao Tong University)