Quantum-PEFT: Ultra parameter-efficient fine-tuning

📅 2025-03-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Parameter-efficient fine-tuning (PEFT) methods suffer from linear growth in trainable parameters with model scale, limiting scalability. Method: We propose Quantum Unitary Adapters (Q-UA), a full-rank, low-parameter PEFT module parameterized via Pauli matrices. Q-UA constructs learnable, unitary transformations in a low-dimensional quantum state space, enabling logarithmic parameter scaling with respect to dimensionality—bypassing the linear parameter expansion inherent in LoRA and other PEFT approaches. Contribution/Results: Theoretical analysis shows Q-UA’s parameter efficiency improves continuously as model size increases. Empirically, on language and vision transfer learning benchmarks, Q-UA achieves comparable or superior performance to the lowest-rank LoRA variants while using less than 1% of their trainable parameters. This work establishes the first high-performance, ultra-low-parameter fine-tuning paradigm driven by quantum-structured representations.
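The scaling claim above (logarithmic vs. linear growth in trainable parameters) can be illustrated with a back-of-envelope sketch. This is not the paper's code: the function names are my own, and the assumption that the quantum ansatz uses a fixed number of rotation angles per qubit per layer is an illustrative simplification of the Pauli parameterization.

```python
import math

def lora_params(d, r=1):
    # LoRA adds two factors, B (d x r) and A (r x d),
    # so trainable parameters grow linearly in the dimension d.
    return 2 * d * r

def quantum_unitary_params(d, layers=3):
    # Illustrative assumption: a Pauli-rotation ansatz with one angle
    # per qubit per layer. A d-dimensional space needs n = ceil(log2(d))
    # qubits, so parameters grow with log2(d), not d.
    n = math.ceil(math.log2(d))
    return layers * n

for d in (768, 4096, 16384):
    print(f"d={d}: LoRA(r=1)={lora_params(d)}, "
          f"quantum ansatz={quantum_unitary_params(d)}")
```

Even at rank 1, LoRA's cost doubles when the dimension doubles, while the logarithmic count barely moves, which is the regime where the "less than 1% of trainable parameters" result becomes plausible.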

📝 Abstract
This paper introduces Quantum-PEFT, which leverages quantum computations for parameter-efficient fine-tuning (PEFT). Unlike other additive PEFT methods, such as low-rank adaptation (LoRA), Quantum-PEFT exploits an underlying full-rank yet surprisingly parameter-efficient quantum unitary parameterization. With the use of Pauli parameterization, the number of trainable parameters grows only logarithmically with the ambient dimension, as opposed to linearly as in LoRA-based PEFT methods. As dimensions grow, Quantum-PEFT achieves a vanishingly small number of trainable parameters compared to the lowest-rank LoRA, enhancing parameter efficiency while maintaining competitive performance. We apply Quantum-PEFT to several transfer learning benchmarks in language and vision, demonstrating significant advantages in parameter efficiency.
Problem

Research questions and friction points this paper is trying to address.

Trainable parameters in LoRA-style additive PEFT grow linearly with the ambient dimension, limiting scalability.
Low-rank adapters trade expressiveness (rank) against parameter count.
Can fine-tuning use far fewer parameters while retaining full-rank updates and competitive accuracy?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantum Unitary Adapters (Q-UA): full-rank unitary adapters parameterized via Pauli matrices
Logarithmic parameter growth with dimension through Pauli parameterization
Matches or exceeds the lowest-rank LoRA with under 1% of its trainable parameters
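The innovations above hinge on one structural fact: a Kronecker product of single-qubit Pauli rotations yields a full-rank d x d unitary from only log2(d) angles. A minimal NumPy sketch, assuming the simplest possible ansatz (one Y-rotation per qubit, closed-form matrix exponential); the paper's actual Q-UA construction is richer than this.

```python
import numpy as np

# Pauli matrices: the generators behind the Pauli parameterization.
I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rotation(pauli, theta):
    # exp(-i * theta/2 * pauli) in closed form, valid since pauli^2 = I.
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * pauli

def layered_unitary(thetas):
    # Kronecker product of single-qubit rotations: a (2^n x 2^n)
    # unitary parameterized by only n angles.
    U = np.array([[1.0 + 0j]])
    for t in thetas:
        U = np.kron(U, rotation(Y, t))
    return U

n = 4  # 4 qubits -> a 16 x 16 transformation from just 4 parameters
U = layered_unitary(np.random.default_rng(0).uniform(0, np.pi, n))
assert np.allclose(U @ U.conj().T, np.eye(2 ** n))  # unitary, hence full-rank
```

Unitarity guarantees the adapter is full-rank by construction, in contrast to LoRA, where the rank is capped by the parameter budget.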
T. Koike-Akino
Mitsubishi Electric Research Laboratories (MERL), 201 Broadway, Cambridge, MA, USA
Francesco Tonin
LIONS, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
Yongtao Wu
EPFL (Trustworthy machine learning, Optimization)
Frank Zhengqing Wu
LIONS, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
Leyla Naz Candogan
LIONS, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
V. Cevher
LIONS, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland