Exploring Sparsity for Parameter Efficient Fine Tuning Using Wavelets

📅 2025-05-18
🤖 AI Summary
In low-resource settings, parameter-efficient fine-tuning (PEFT) suffers from coarse-grained updates and performance degradation when tuning extremely sparse parameter subsets (<0.1%). To address this, we propose Wavelet Fine-Tuning (WaveFT), the first PEFT method incorporating wavelet transforms: it learns structured sparse updates in the wavelet domain of residual matrices. By decomposing residuals via orthogonal wavelet bases, WaveFT enables finer-grained, multi-scale parameter modulation—circumventing optimization instability and representation degradation inherent in direct weight-domain sparsification. Evaluated on personalized text-to-image generation with Stable Diffusion XL, WaveFT substantially outperforms mainstream methods (e.g., LoRA) under ultra-low tunable parameter budgets, preserving high subject fidelity, prompt alignment, and image diversity. This work establishes a novel paradigm for ultra-low-budget adaptation of large foundation models.

📝 Abstract
Efficiently adapting large foundation models is critical, especially with tight compute and memory budgets. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA offer limited granularity and effectiveness in few-parameter regimes. We propose Wavelet Fine-Tuning (WaveFT), a novel PEFT method that learns highly sparse updates in the wavelet domain of residual matrices. WaveFT allows precise control of trainable parameters, offering fine-grained capacity adjustment and excelling at remarkably low parameter counts, potentially far fewer than LoRA's minimum -- ideal for extreme parameter-efficient scenarios. To demonstrate the effect of the wavelet transform, we compare WaveFT with a special case, called SHiRA, that applies sparse updates directly in the weight domain. Evaluated on personalized text-to-image generation with Stable Diffusion XL as the baseline, WaveFT significantly outperforms LoRA and other PEFT methods, especially at low parameter counts, achieving superior subject fidelity, prompt alignment, and image diversity.
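The core idea, learning a small number of nonzero coefficients in the wavelet domain and mapping them back to a weight-space residual, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes an orthonormal Haar basis, square weight matrices whose size is a power of two, and randomly chosen sparse coefficient locations.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet matrix of size n (n must be a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                # averaging (low-pass) rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])  # differencing (high-pass) rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 8
H = haar_matrix(n)
# Orthonormality lets us move between weight and wavelet domains losslessly.
assert np.allclose(H @ H.T, np.eye(n))

# Only k coefficients in the wavelet domain are trainable; all others stay
# zero. Here their positions and values are random stand-ins for learned ones.
rng = np.random.default_rng(0)
k = 5
S = np.zeros((n, n))
idx = rng.choice(n * n, size=k, replace=False)
S.flat[idx] = rng.standard_normal(k)

# Residual update mapped back to the weight domain: delta_W = H^T S H.
# At inference, the frozen weight W would be used as W + delta_W.
delta_W = H.T @ S @ H
print(np.count_nonzero(S))  # trainable parameter count, here k = 5
```

Because the basis is orthonormal, a single sparse wavelet coefficient spreads into a structured multi-scale pattern across the whole weight matrix, which is the granularity advantage over sparsifying the weights directly (the SHiRA special case corresponds to replacing `H` with the identity).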
Problem

Research questions and friction points this paper is trying to address.

- Efficiently adapt large foundation models under tight compute and memory budgets
- Improve parameter efficiency of fine-tuning in few-parameter regimes
- Improve performance on personalized text-to-image generation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

- WaveFT learns sparse updates in the wavelet domain of residual matrices
- Precise control over the number of trainable parameters
- Excels in extremely low-parameter regimes
Ahmet Bilican
Koç University
Image and Video Processing, Deep Learning
M. Akin Yilmaz
Codeway AI Research
A. Murat Tekalp
Dept. of Electrical and Electronics Engineering, Koç University
R. Gokberk Cinbis
Dept. of Computer Engineering, Middle East Technical University