Edit2Restore: Few-Shot Image Restoration via Parameter-Efficient Adaptation of Pre-trained Editing Models

📅 2026-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high cost of traditional image restoration methods, which require large-scale paired datasets to train a specialized model for each degradation type. For the first time, it introduces a large-scale pre-trained text-guided image editing model, FLUX.1 Kontext, into few-shot image restoration. Using parameter-efficient fine-tuning via Low-Rank Adaptation (LoRA), the authors construct a single unified adapter that restores images across diverse degradations (denoising, deraining, and dehazing) from only 16-128 training pairs per task, guided by concise textual prompts. The approach substantially reduces both data and compute requirements while achieving strong perceptual quality; although it does not attain the highest PSNR or SSIM scores, it offers a compelling, data-efficient route to multi-task few-shot image restoration.
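To make the prompt-guided restoration setup concrete, the sketch below loads FLUX.1 Kontext through the diffusers FluxKontextPipeline, attaches a restoration LoRA, and runs a deraining prompt. This is a minimal sketch, not the paper's released code: the adapter path, input filename, prompt wording, and sampler settings are illustrative assumptions (see the linked repository for the authors' actual artifacts).

```python
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

# Load the pre-trained 12B flow-matching editing model.
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Attach a few-shot restoration LoRA adapter.
# NOTE: "edit2restore_lora" is a hypothetical path, not the paper's release.
pipe.load_lora_weights("edit2restore_lora")

degraded = load_image("rainy_street.png")  # hypothetical input image

# A concise task prompt selects the restoration operation.
restored = pipe(
    image=degraded,
    prompt="Remove the rain from this image.",  # assumed prompt phrasing
    guidance_scale=2.5,
    num_inference_steps=28,
).images[0]
restored.save("derained.png")
```

Switching tasks with the same unified adapter would, under this reading, amount to changing only the prompt (e.g., to a denoising or dehazing instruction).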

📝 Abstract
Image restoration has traditionally required training specialized models on thousands of paired examples per degradation type. We challenge this paradigm by demonstrating that powerful pre-trained text-conditioned image editing models can be efficiently adapted for multiple restoration tasks through parameter-efficient fine-tuning with remarkably few examples. Our approach fine-tunes LoRA adapters on FLUX.1 Kontext, a state-of-the-art 12B parameter flow matching model for image-to-image translation, using only 16-128 paired images per task, guided by simple text prompts that specify the restoration operation. Unlike existing methods that train specialized restoration networks from scratch with thousands of samples, we leverage the rich visual priors already encoded in large-scale pre-trained editing models, dramatically reducing data requirements while maintaining high perceptual quality. A single unified LoRA adapter, conditioned on task-specific text prompts, effectively handles multiple degradations including denoising, deraining, and dehazing. Through comprehensive ablation studies, we analyze: (i) the impact of training set size on restoration quality, (ii) trade-offs between task-specific versus unified multi-task adapters, (iii) the role of text encoder fine-tuning, and (iv) zero-shot baseline performance. While our method prioritizes perceptual quality over pixel-perfect reconstruction metrics like PSNR/SSIM, our results demonstrate that pre-trained image editing models, when properly adapted, offer a compelling and data-efficient alternative to traditional image restoration approaches, opening new avenues for few-shot, prompt-guided image enhancement. The code to reproduce our results is available at: https://github.com/makinyilmaz/Edit2Restore
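The abstract's core recipe, freeze the pre-trained model and train only low-rank adapters, can be sketched with standard diffusers/peft tooling as below. This is a hedged sketch of the training-side setup: the LoRA rank, alpha, target modules, and learning rate are illustrative guesses, not hyperparameters reported by the paper.

```python
import torch
from diffusers import FluxKontextPipeline
from peft import LoraConfig

# Parameter-efficient adaptation: keep the 12B backbone frozen and
# inject trainable low-rank adapters into its attention projections.
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
transformer = pipe.transformer
transformer.requires_grad_(False)  # pre-trained weights stay frozen

lora_config = LoraConfig(
    r=16,                # assumed LoRA rank
    lora_alpha=16,       # assumed scaling
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # assumed targets
)
transformer.add_adapter(lora_config)

# Only the LoRA parameters are optimized over the 16-128 paired
# examples per task; each training pair couples a degraded image
# with a text prompt naming the restoration operation.
trainable = [p for p in transformer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)  # assumed learning rate
```

Because the backbone is frozen, the unified multi-task adapter discussed in the ablations is just this LoRA trained jointly on the mixed-task prompt/image pairs rather than per task.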
Problem

Research questions and friction points this paper is trying to address.

few-shot image restoration
parameter-efficient adaptation
pre-trained editing models
low-data restoration
multi-degradation restoration
Innovation

Methods, ideas, or system contributions that make the work stand out.

few-shot image restoration
parameter-efficient fine-tuning
LoRA
pre-trained editing models
text-conditioned adaptation
🔎 Similar Papers
No similar papers found.
M. Yilmaz
Codeway AI, Istanbul, Turkey
Ahmet Bilican
Koç University
Image and Video Processing · Deep Learning
Burak Can Biner
Codeway AI, Istanbul, Turkey
A. Tekalp
Koç University, Istanbul, Turkey