Revisiting LoRA through the Lens of Parameter Redundancy: Spectral Encoding Helps

📅 2025-06-20
🤖 AI Summary
LoRA fine-tuning suffers from significant parameter redundancy, limiting its capacity and efficiency. Method: This paper identifies spectral density redundancy in low-rank adapters—redundancy that can be safely pruned without compromising representational capacity—and proposes Spectral-encoded LoRA (SeLoRA). SeLoRA reparameterizes adapters via spectral bases, reconstructing them within a sparse spectral subspace to jointly achieve high expressivity and low redundancy. It adopts a plug-and-play architecture, fully compatible with mainstream LoRA variants without modifying the base model structure. Contribution/Results: On commonsense reasoning, mathematical reasoning, and code generation benchmarks, SeLoRA surpasses strong baselines—including LoRA and QLoRA—with fewer parameters and up to 2.1× faster training. These results validate spectral sparsity modeling as a novel paradigm for lightweight, efficient fine-tuning.
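The summary describes reconstructing adapters within a sparse spectral subspace. A minimal NumPy sketch of that idea, assuming a fixed DCT basis and keeping only s of k frequencies for the A factor (the class name, basis choice, and shapes here are illustrative guesses, not the paper's actual formulation):

```python
import numpy as np

def dct_basis(s, k):
    """First s rows of an orthonormal DCT-II basis over length-k signals."""
    n = np.arange(k)
    F = np.cos(np.pi * (n + 0.5)[None, :] * np.arange(s)[:, None] / k)
    F[0] *= 1.0 / np.sqrt(2.0)
    return F * np.sqrt(2.0 / k)

class SpectralLoRALinear:
    """Hypothetical sketch: frozen base weight W plus a low-rank update
    whose A factor is reconstructed from s spectral coefficients (s <= k),
    so the trainable A-side parameters shrink from r*k to r*s."""
    def __init__(self, W, r=4, s=8, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                            # frozen base weight, shape (d, k)
        d, k = W.shape
        self.F = dct_basis(s, k)              # fixed spectral basis, shape (s, k)
        self.C = rng.normal(0, 0.02, (r, s))  # trainable spectral coefficients
        self.B = np.zeros((d, r))             # trainable, zero-init as in LoRA
        self.scale = alpha / r

    def __call__(self, x):
        A = self.C @ self.F                   # reconstruct A in the spectral subspace
        return x @ (self.W + self.scale * self.B @ A).T
```

Because B is zero-initialized, the adapted layer starts out identical to the frozen base layer, matching the usual LoRA initialization convention.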

📝 Abstract
Low-Rank Adaptation (LoRA) has emerged as a prominent technique for fine-tuning large foundation models. Despite its success, LoRA's substantial parameter redundancy, which limits its capacity and efficiency, has been recognized as a bottleneck. In this work, we systematically investigate the impact of redundancy in fine-tuning with LoRA and reveal that reducing density redundancy does not degrade expressiveness. Based on this insight, we introduce Spectral-encoding Low-Rank Adaptation (SeLoRA), which harnesses the robust expressiveness of spectral bases to re-parameterize LoRA from a sparse spectral subspace. Designed for simplicity, SeLoRA enables seamless integration with various LoRA variants as a scalable plug-and-play framework for boosting performance. Extensive experiments substantiate that SeLoRA achieves greater efficiency with fewer parameters, delivering superior performance over strong baselines on various downstream tasks, including commonsense reasoning, math reasoning, and code generation.
Problem

Research questions and friction points this paper is trying to address.

Reducing parameter redundancy in LoRA fine-tuning
Enhancing LoRA efficiency with spectral encoding
Improving performance on reasoning and generation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spectral encoding re-parameterizes LoRA
Reduces parameter redundancy without performance loss
Plug-and-play framework for LoRA variants
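As a back-of-envelope check on the "fewer parameters" claim: if the B factor stays dense and the A factor is encoded with s spectral coefficients per rank (an assumption for illustration; the paper's exact accounting may differ), the per-adapter budget drops from r(d+k) to r(d+s):

```python
def lora_adapter_params(d, k, r):
    """Trainable weights in a vanilla LoRA adapter: B (d x r) plus A (r x k)."""
    return r * (d + k)

def spectral_adapter_params(d, k, r, s):
    """Assumed spectral variant: B (d x r) plus coefficients C (r x s);
    the fixed spectral basis (s x k) holds no trainable weights."""
    return r * (d + s)

# e.g. a 4096 x 4096 projection with rank 16, keeping 64 frequencies
print(lora_adapter_params(4096, 4096, 16))          # 131072
print(spectral_adapter_params(4096, 4096, 16, 64))  # 66560
```

Under these assumed shapes the spectral encoding roughly halves the adapter parameters, consistent in spirit with the efficiency gains the summary reports.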
👥 Authors
Jiashun Cheng, Hong Kong University of Science and Technology (Machine Learning)
Aochuan Chen, The Hong Kong University of Science and Technology (Guangzhou) (Machine Learning, Applied Data Science)
Nuo Chen, The Hong Kong University of Science and Technology (Guangzhou)
Ziqi Gao, HKUST (AI for Protein, Graph Machine Learning)
Yuhan Li, The Hong Kong University of Science and Technology (Guangzhou)
Jia Li, The Hong Kong University of Science and Technology
Fugee Tsung, The Hong Kong University of Science and Technology