TASO: Task-Aligned Sparse Optimization for Parameter-Efficient Model Adaptation

๐Ÿ“… 2025-09-22
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
While Low-Rank Adaptation (LoRA) is parameter-efficient, it often introduces redundant parameters that increase training overhead and degrade fine-tuning performance; identifying such redundancy is inherently challenging. Method: We propose a task-aligned prior sparsification framework that leverages the importance distribution of pre-trained model weights to identify task-relevant core parameter regions *before* LoRA fine-tuningโ€”enabling structured sparsity without relying on fine-tuning gradients. This approach combines importance estimation with task-specific distribution modeling to design sparse LoRA modules *a priori*. Contribution/Results: Extensive experiments demonstrate that our method consistently outperforms standard LoRA across multiple downstream tasks under equal or lower parameter budgets. It achieves up to 60% reduction in trainable parameters while improving generalization, establishing a new paradigm for efficient and effective adapter-based fine-tuning.

๐Ÿ“ Abstract
LoRA has become one of the most widely used parameter-efficient fine-tuning methods due to its simplicity and effectiveness. However, numerous studies have shown that LoRA often introduces substantial parameter redundancy, which not only increases the number of trainable parameters but also hinders the effectiveness of fine-tuning. Since identifying redundant parameters in LoRA is inherently difficult, how to eliminate them efficiently and accurately remains a challenging problem. In this paper, we propose TASO, a redundancy reduction method that leverages importance information from the pretrained model's weights to mitigate LoRA redundancy. Specifically, we estimate parameter importance on downstream tasks and identify task-specific core regions based on the distribution of importance scores. The location information of these core regions is then used to determine the sparse structure of LoRA modules, enabling redundancy removal before fine-tuning. Our approach significantly reduces the number of trainable parameters required for task adaptation, while providing a novel task-aligned perspective for LoRA redundancy reduction. Experimental results demonstrate that, with a parameter budget comparable to LoRA with rank $r = 1$, TASO consistently outperforms standard LoRA across multiple tasks, achieving strong fine-tuning performance while effectively eliminating redundant parameters.
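As a rough illustration of the abstract's core-region idea, here is a minimal NumPy sketch of selecting task-relevant rows of a pretrained weight matrix from an importance-score distribution. The magnitude-based importance score, the per-row granularity, and the `keep_ratio` parameter are all assumptions for illustration; the abstract does not specify TASO's actual estimator.

```python
import numpy as np

def core_region_mask(weight, keep_ratio=0.4):
    """Select a 'core region' of a pretrained weight matrix.

    Hypothetical sketch: scores each row by mean absolute weight
    magnitude and keeps the top keep_ratio fraction of rows.
    """
    # Per-row importance: mean absolute magnitude of pretrained weights.
    scores = np.abs(weight).mean(axis=1)
    k = max(1, int(keep_ratio * weight.shape[0]))
    # Rows with the highest scores form the core region.
    core_rows = np.argsort(scores)[-k:]
    mask = np.zeros(weight.shape[0], dtype=bool)
    mask[core_rows] = True
    return mask

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
mask = core_region_mask(W, keep_ratio=0.25)
print(mask.sum())  # 2 of 8 rows kept
```

Because the mask depends only on the pretrained weights, it can be computed once before fine-tuning, consistent with the paper's "redundancy removal before fine-tuning" framing.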
Problem

Research questions and friction points this paper is trying to address.

Reducing parameter redundancy in LoRA fine-tuning methods
Identifying and eliminating redundant parameters efficiently
Maintaining performance while minimizing trainable parameters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages pretrained weight importance for redundancy reduction
Identifies task-specific core regions using importance scores
Determines sparse LoRA structure before fine-tuning begins
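The pipeline above can be sketched as a LoRA forward pass whose low-rank update is confined to pre-selected core rows. The function name, shapes, and row-level masking granularity below are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def sparse_lora_forward(x, W, A, B, row_mask):
    """Forward pass with a LoRA update restricted to core rows.

    Illustrative only: row_mask is assumed to come from an
    importance-based core-region selection done before fine-tuning.
    """
    delta = B @ A                      # low-rank update, same shape as W
    delta = delta * row_mask[:, None]  # zero the update outside core rows
    return x @ (W + delta).T

d_out, d_in, r = 6, 4, 1  # rank r = 1, matching the paper's budget
rng = np.random.default_rng(1)
W = rng.normal(size=(d_out, d_in))   # frozen pretrained weights
A = rng.normal(size=(r, d_in))       # trainable LoRA factors
B = rng.normal(size=(d_out, r))
row_mask = np.array([1, 1, 0, 0, 0, 0], dtype=float)  # core region
x = rng.normal(size=(2, d_in))
y = sparse_lora_forward(x, W, A, B, row_mask)
# Outputs on masked-out rows equal the frozen model's outputs.
assert np.allclose(y[:, 2:], x @ W[2:].T)
```

Only the entries of `A` and `B` that contribute to the masked rows carry useful gradient, which is where the reduction in effective trainable parameters comes from.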
๐Ÿ”Ž Similar Papers
No similar papers found.
Authors
Daiye Miao (East China Normal University)
Yufang Liu (East China Normal University)
Jie Wang (East China Normal University)
Changzhi Sun (Institute of Artificial Intelligence (TeleAI), China Telecom)
Yunke Zhang (Honor Device Co., Ltd.)
Demei Yan (Honor Device Co., Ltd.)
Shaokang Dong (Honor Device Co., Ltd.)
Qi Zhang (Fudan University)
Yuanbin Wu (East China Normal University)