Improving Diffusion-based Inverse Algorithms under Few-Step Constraint via Learnable Linear Extrapolation

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models for inverse problems suffer from slow convergence, typically requiring hundreds of iterative steps; performance degrades significantly under few-step regimes (16–64 steps). To address this, we propose a Learnable Linear Extrapolation (LLE) module—the first to incorporate high-order ODE extrapolation principles into observation-driven inverse problem solving—enabling plug-and-play enhancement of any diffusion-based inverse algorithm. We introduce a unified standard decomposition framework for inverse algorithms, allowing LLE to adaptively model the linear subspace evolution of iterative trajectories with minimal learnable parameters. Extensive experiments on super-resolution, denoising, and MRI reconstruction demonstrate that LLE consistently improves PSNR and SSIM under few-step settings, substantially narrowing the performance gap with full-step baselines. The implementation is publicly available.

📝 Abstract
Diffusion models have demonstrated remarkable performance in modeling complex data priors, catalyzing their widespread adoption in solving various inverse problems. However, the inherently iterative nature of diffusion-based inverse algorithms often requires hundreds to thousands of steps, with performance degrading under fewer steps, which limits their practical applicability. While high-order diffusion ODE solvers have been extensively explored for efficient diffusion sampling without observations, their application to inverse problems remains underexplored due to the diverse forms of inverse algorithms and their need for repeated trajectory correction based on observations. To address this gap, we first introduce a canonical form that decomposes existing diffusion-based inverse algorithms into three modules to unify their analysis. Inspired by the linear subspace search strategy in the design of high-order diffusion ODE solvers, we propose the Learnable Linear Extrapolation (LLE) method, a lightweight approach that universally enhances the performance of any diffusion-based inverse algorithm that fits the proposed canonical form. Extensive experiments demonstrate consistent improvements of the proposed LLE method across multiple algorithms and tasks, indicating its potential for more efficient solutions and boosted performance of diffusion-based inverse algorithms with limited steps. Codes for reproducing our experiments are available at https://github.com/weigerzan/LLE_inverse_problem.
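The core idea described above — forming the next state as a learnable linear combination of recent trajectory iterates, analogous to the linear subspace search in high-order ODE solvers — can be illustrated with a minimal sketch. This is not the paper's implementation; the class name, `order` parameter, and initialization are assumptions for illustration, and the weights would in practice be fitted on a small calibration set rather than set by hand.

```python
import numpy as np

class LearnableLinearExtrapolation:
    """Illustrative sketch of linear extrapolation over a diffusion
    trajectory: the output is a learnable linear combination of the
    `order` most recent iterates (names and defaults are assumed)."""

    def __init__(self, order=2):
        self.order = order
        # Initialize so the module reproduces the plain update
        # (weight 1 on the newest iterate, 0 on older ones);
        # training would then adjust these few parameters.
        self.weights = np.zeros(order)
        self.weights[-1] = 1.0
        self.history = []

    def step(self, x_new):
        """Record the newest iterate and return the extrapolated state."""
        self.history.append(np.asarray(x_new, dtype=float))
        self.history = self.history[-self.order :]  # keep last `order` iterates
        w = self.weights[-len(self.history) :]
        # Learnable linear combination of the retained iterates.
        return sum(wi * xi for wi, xi in zip(w, self.history))
```

With the default weights the module is a no-op (it returns the latest iterate), which matches the plug-and-play framing: enabling LLE cannot hurt the base algorithm at initialization, and training only a handful of coefficients per step keeps the parameter count minimal.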
Problem

Research questions and friction points this paper is trying to address.

Diffusion-based inverse algorithms require hundreds to thousands of iterative steps, limiting practicality.
Performance degrades significantly under few-step regimes (16–64 steps).
High-order ODE solver techniques remain underexplored for observation-driven inverse problems.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the Learnable Linear Extrapolation (LLE) method.
Unifies the analysis of diffusion-based inverse algorithms via a canonical decomposition.
Enhances performance under limited computational steps.