🤖 AI Summary
While Low-Rank Adaptation (LoRA) is parameter-efficient, it often introduces redundant parameters that increase training overhead and degrade fine-tuning performance; identifying such redundancy is inherently challenging. Method: We propose a task-aligned prior sparsification framework that leverages the importance distribution of pre-trained model weights to identify task-relevant core parameter regions *before* LoRA fine-tuning, enabling structured sparsity without relying on fine-tuning gradients. This approach combines importance estimation with task-specific distribution modeling to design sparse LoRA modules *a priori*. Contribution/Results: Extensive experiments demonstrate that our method consistently outperforms standard LoRA across multiple downstream tasks under equal or lower parameter budgets. It achieves up to 60% reduction in trainable parameters while improving generalization, establishing a new paradigm for efficient and effective adapter-based fine-tuning.
📝 Abstract
LoRA has become one of the most widely used parameter-efficient fine-tuning methods due to its simplicity and effectiveness. However, numerous studies have shown that LoRA often introduces substantial parameter redundancy, which not only increases the number of trainable parameters but also hinders the effectiveness of fine-tuning. Since identifying redundant parameters in LoRA is inherently difficult, how to eliminate them efficiently and accurately remains a challenging problem. In this paper, we propose TASO, a redundancy reduction method that leverages importance information from the pretrained model's weights to mitigate LoRA redundancy. Specifically, we estimate parameter importance on downstream tasks and identify task-specific core regions based on the distribution of importance scores. The location information of these core regions is then used to determine the sparse structure of LoRA modules, enabling redundancy removal before fine-tuning. Our approach significantly reduces the number of trainable parameters required for task adaptation, while providing a novel task-aligned perspective for LoRA redundancy reduction. Experimental results demonstrate that, with a parameter budget comparable to LoRA with rank $r = 1$, TASO consistently outperforms standard LoRA across multiple tasks, achieving strong fine-tuning performance while effectively eliminating redundant parameters.
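The pipeline described above (score parameter importance, keep a task-specific core region, and confine the low-rank update to that region) can be illustrated with a minimal numpy sketch. The importance proxy used here (pretrained weight magnitude), the 40% keep ratio, and the function names are all illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def core_region_mask(importance, keep_ratio=0.4):
    """Keep the top-`keep_ratio` fraction of entries by importance score."""
    k = max(1, int(keep_ratio * importance.size))
    # k-th largest score serves as the inclusion threshold.
    threshold = np.partition(importance.ravel(), -k)[-k]
    return importance >= threshold

def sparse_lora_update(W, A, B, mask):
    """Apply the rank-r update B @ A only inside the core region;
    entries outside the mask are left at their pretrained values."""
    return W + mask * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 1          # tiny layer, rank-1 budget
W = rng.normal(size=(d_out, d_in))

# Assumed importance proxy: pretrained weight magnitude.
# (The paper estimates importance on the downstream task.)
importance = np.abs(W)
mask = core_region_mask(importance, keep_ratio=0.4)

A = rng.normal(size=(r, d_in))    # trainable LoRA factors
B = rng.normal(size=(d_out, r))
W_adapted = sparse_lora_update(W, A, B, mask)
```

Because the mask is fixed before fine-tuning, only the masked entries of the low-rank product ever change the pretrained weights, which is what removes redundancy ahead of training rather than pruning it afterward.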