Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations

📅 2025-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pre-trained large models face prohibitive computational costs when fully fine-tuned across multi-task scenarios, while existing parameter-efficient fine-tuning (PEFT) methods are largely restricted to low-rank updates of two-dimensional weight matrices. To address this, we propose the first PEFT framework that models parameter updates on high-dimensional tensors, such as convolutional kernels, as structure-preserving transformations on Lie groups: updates are expressed as perturbations in the corresponding Lie algebra and mapped back to the group via the exponential map, which guarantees geometric consistency. The approach generalizes matrix-based PEFT methods (e.g., LoRA) to tensors of arbitrary order without task-specific architectural modifications. Extensive evaluation on diverse computer vision and natural language processing multi-task benchmarks shows that the method reduces the trainable parameter count by 30–50% relative to LoRA and AdaLoRA while improving average accuracy by 1.2–2.8 percentage points, validating the efficacy of geometry-aware, high-dimensional fine-tuning.

📝 Abstract
Adapting pre-trained foundation models for diverse downstream tasks is a core practice in artificial intelligence. However, the wide range of tasks and high computational costs make full fine-tuning impractical. To overcome this, parameter-efficient fine-tuning (PEFT) methods like LoRA have emerged and are becoming a growing research focus. Despite the success of these methods, they are primarily designed for linear layers, focusing on two-dimensional matrices while largely ignoring higher-dimensional parameter spaces like convolutional kernels. Moreover, directly applying these methods to higher-dimensional parameter spaces often disrupts their structural relationships. Given the rapid advancements in matrix-based PEFT methods, rather than designing a specialized strategy, we propose a generalization that extends matrix-based PEFT methods to higher-dimensional parameter spaces without compromising their structural properties. Specifically, we treat parameters as elements of a Lie group, with updates modeled as perturbations in the corresponding Lie algebra. These perturbations are mapped back to the Lie group through the exponential map, ensuring smooth, consistent updates that preserve the inherent structure of the parameter space. Extensive experiments on computer vision and natural language processing validate the effectiveness and versatility of our approach, demonstrating clear improvements over existing methods.
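The core mechanism described above (updates parameterized in a Lie algebra and mapped to the group via the exponential map) can be illustrated for the simplest case of a square weight matrix. The sketch below is a minimal illustration, not the authors' implementation: the function names, the multiplicative update form `W_new = expm(A @ B) @ W`, and the Taylor-series matrix exponential are all assumptions chosen for clarity, with the LoRA-style zero initialization of one factor so that training starts exactly at the pretrained weights.

```python
import numpy as np

def expm(M, terms=20):
    """Matrix exponential via scaling-and-squaring with a truncated Taylor series.
    (Illustrative stand-in for scipy.linalg.expm so the sketch is self-contained.)"""
    # Scale M down so the series converges quickly, then square back up.
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1e-12)))) + 1)
    A = M / (2 ** s)
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

def lie_peft_update(W, A, B):
    """Apply a low-rank Lie-algebra perturbation delta = A @ B to a frozen
    square weight W by mapping it to the group with the exponential map:
        W_new = expm(A @ B) @ W
    Since expm(0) = I, a zero perturbation leaves W exactly unchanged."""
    return expm(A @ B) @ W

rng = np.random.default_rng(0)
n, r = 8, 2                      # weight size and LoRA-style rank
W = rng.standard_normal((n, n))  # frozen pretrained weight
A = np.zeros((n, r))             # trainable factors; A zero-initialized as in LoRA
B = rng.standard_normal((r, n))

W_new = lie_peft_update(W, A, B)
print(np.allclose(W_new, W))     # zero-init perturbation: identity update
```

Only the small factors `A` and `B` (2nr parameters instead of n²) would be trained; because the perturbation passes through the exponential map, the update stays on the group and preserves the structure of the parameter space, which is the property the paper extends from matrices to higher-order tensors.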
Problem

Research questions and friction points this paper is trying to address.

Extending matrix-based PEFT to higher-dimensional parameter spaces
Preserving structural relationships in higher-dimensional parameter updates
Generalizing Lie group transformations for efficient fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends matrix-based PEFT to higher dimensions
Uses Lie group transformations for parameter updates
Preserves structural properties in parameter spaces
Authors
Chongjie Si, Shanghai Jiao Tong University
Zhiyi Shi, University of Illinois at Urbana-Champaign
Xuehui Wang, Shanghai Jiao Tong University
Yichen Xiao, Southeast University
Xiaokang Yang, Shanghai Jiao Tong University
Wei Shen, Shanghai Jiao Tong University