ShapLoRA: Allocation of Low-rank Adaption on Large Language Models via Shapley Value Inspired Importance Estimation

📅 2026-01-25
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing low-rank adaptation (LoRA) methods, which rely on opaque importance metrics for rank allocation, thereby hindering performance gains. To overcome this, we propose ShapLoRA, a novel framework that introduces Shapley values—originating from cooperative game theory—into LoRA rank assignment for the first time. By integrating sensitivity analysis, ShapLoRA constructs an interpretable measure of module importance and further incorporates a validation-set-driven evaluation and retraining pipeline to ensure fair comparison and efficient optimization. Extensive experiments across diverse challenging tasks demonstrate that ShapLoRA consistently outperforms current baselines with a comparable number of trainable parameters, achieving both superior effectiveness and enhanced interpretability.

📝 Abstract
Low-rank adaptation (LoRA) is a representative method in the field of parameter-efficient fine-tuning (PEFT), and is key to democratizing modern large language models (LLMs). The vanilla LoRA is implemented with uniform ranks, and the recent literature has found that properly allocating ranks across the LLM backbone yields performance boosts. However, previous rank allocation methods are limited in that they rely on unexplainable and unreliable importance measures for the LoRA ranks. To address these issues, we propose the ShapLoRA framework. Inspired by the explainable attribution measure, the Shapley value, we combine sensitivity-based measures with the idea of coalitions in cooperative games among LoRA ranks, and propose a more explainable importance measure called Shapley sensitivity. In addition, we improve the workflow of existing works by: (a) calculating Shapley sensitivity on a separate validation set; (b) setting up allocating-retraining procedures for fair comparisons. We have conducted experiments on various challenging tasks, and the results demonstrate that ShapLoRA outperforms recent baselines with a comparable number of tunable parameters.\footnote{Code and fine-tuned models will be open-sourced to facilitate future research.}
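The core idea of the abstract can be illustrated with a small sketch: the Shapley value of a module is its marginal contribution to a coalition's utility, averaged over all orderings of modules. The module names and the coalition value function below are hypothetical stand-ins; in ShapLoRA the utility would be a sensitivity-based score measured on a validation set, and exhaustive permutation enumeration would be replaced by sampling for realistic module counts.

```python
import itertools

# Hypothetical per-module utilities for a toy set of LoRA target modules.
BASE_SCORES = {"q_proj": 0.30, "k_proj": 0.10, "v_proj": 0.25, "o_proj": 0.15}

def coalition_value(coalition):
    """Toy utility of a coalition of modules: additive scores plus one
    illustrative interaction term (not the paper's actual measure)."""
    value = sum(BASE_SCORES[m] for m in coalition)
    if "q_proj" in coalition and "v_proj" in coalition:
        value += 0.05  # hypothetical synergy between q_proj and v_proj
    return value

def shapley_values(modules, value_fn):
    """Exact Shapley values by averaging each module's marginal
    contribution over all permutations (feasible only for small sets)."""
    shapley = {m: 0.0 for m in modules}
    perms = list(itertools.permutations(modules))
    for perm in perms:
        coalition = set()
        for m in perm:
            before = value_fn(coalition)
            coalition.add(m)
            shapley[m] += value_fn(coalition) - before
    return {m: s / len(perms) for m, s in shapley.items()}

modules = list(BASE_SCORES)
phi = shapley_values(modules, coalition_value)

# Efficiency property: Shapley values sum to the grand-coalition value.
assert abs(sum(phi.values()) - coalition_value(set(modules))) < 1e-9
```

Ranks could then be allocated in proportion to these importance scores; the symmetric interaction term is split equally between the two participating modules, which is exactly the fairness property that motivates using Shapley values over raw sensitivity.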
Problem

Research questions and friction points this paper is trying to address.

Low-rank Adaptation
Rank Allocation
Shapley Value
Parameter-Efficient Fine-Tuning
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Shapley Value
Low-rank Adaptation
Parameter-Efficient Fine-Tuning
Explainable Importance
Large Language Models
Yi Zhao
Singapore Management University
Qinghua Yao
University of Pennsylvania
Xinyuan Song
Emory University
Wei Zhu
University of Hong Kong