🤖 AI Summary
LoRA fine-tuning suffers from significant parameter redundancy, which limits its capacity and efficiency. Method: This paper identifies spectral density redundancy in low-rank adapters (redundancy that can be safely pruned without compromising representational capacity) and proposes Spectral-encoded LoRA (SeLoRA). SeLoRA re-parameterizes adapters via spectral bases, reconstructing them within a sparse spectral subspace to jointly achieve high expressivity and low redundancy. It adopts a plug-and-play design, fully compatible with mainstream LoRA variants and requiring no changes to the base model architecture. Contribution/Results: On commonsense reasoning, mathematical reasoning, and code generation benchmarks, SeLoRA surpasses strong baselines, including LoRA and QLoRA, with fewer parameters and up to 2.1× faster training. These results validate spectral sparsity modeling as a novel paradigm for lightweight, efficient fine-tuning.
📝 Abstract
Low-Rank Adaptation (LoRA) has emerged as a prominent technique for fine-tuning large foundation models. Despite its success, substantial parameter redundancy, which limits the capacity and efficiency of LoRA, has been recognized as a bottleneck. In this work, we systematically investigate the impact of redundancy in fine-tuning LoRA and reveal that reducing density redundancy does not degrade expressiveness. Based on this insight, we introduce Spectral-encoding Low-Rank Adaptation (SeLoRA), which harnesses the robust expressiveness of spectral bases to re-parameterize LoRA from a sparse spectral subspace. Designed for simplicity, SeLoRA integrates seamlessly with various LoRA variants to boost their performance, serving as a scalable plug-and-play framework. Extensive experiments substantiate that SeLoRA achieves greater efficiency with fewer parameters, delivering superior performance over strong baselines on various downstream tasks, including commonsense reasoning, math reasoning, and code generation.
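To make the core idea concrete, the following is a minimal sketch of re-parameterizing LoRA factors from a sparse spectral subspace. The abstract does not specify the exact spectral basis or sparsity scheme SeLoRA uses, so this example assumes a fixed orthonormal DCT-II basis and keeps only the first `k` spectral coefficients per factor; all names (`dct_basis`, `C_A`, `C_B`, `k`) are illustrative, not the paper's API.

```python
import numpy as np

def dct_basis(n, k):
    """First k columns of an orthonormal DCT-II basis (n x k).
    Illustrative choice: the paper's actual spectral basis may differ."""
    i = np.arange(n)[:, None]
    j = np.arange(k)[None, :]
    B = np.cos(np.pi * (i + 0.5) * j / n)
    B[:, 0] *= 1.0 / np.sqrt(2.0)      # orthonormalize the DC column
    return B * np.sqrt(2.0 / n)

d, r, k = 64, 8, 16                    # hidden size, LoRA rank, kept spectral modes
rng = np.random.default_rng(0)

U = dct_basis(d, k)                    # fixed basis: NOT trained
C_A = rng.standard_normal((k, r)) * 0.01   # trainable spectral coefficients for A
C_B = np.zeros((k, r))                     # B starts at zero, as in vanilla LoRA

# Dense low-rank factors are reconstructed from sparse spectral coefficients,
# so only C_A and C_B carry trainable parameters.
A = U @ C_A                            # (d x r)
B = U @ C_B                            # (d x r)
delta_W = B @ A.T                      # (d x d) low-rank weight update

vanilla_params = 2 * d * r             # LoRA trains A and B directly
spectral_params = 2 * k * r            # spectral encoding trains C_A and C_B
print(vanilla_params, spectral_params)  # → 1024 256
```

With `k < d`, the trainable parameter count shrinks by a factor of `d / k` (here 4×) while the update `delta_W` still spans a rank-`r` subspace, which is the sense in which density redundancy can be reduced without sacrificing expressiveness.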