Morse: Dual-Sampling for Lossless Acceleration of Diffusion Models

📅 2025-06-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the slow sampling speed of diffusion models, this paper proposes Morse, a dual-sampling framework in which two models alternate and collaborate without fine-tuning the original diffusion model: Dash, the pre-trained diffusion model run in a fast leapfrog (jump) sampling regime, and Dot, a much lighter model that provides residual feedback correction, achieving lossless acceleration. Its core innovations include the first dual-model leapfrog generation architecture, integrating conditional noise estimation enhancement, adaptive residual feedback modeling, and cross-model weight sharing. Morse is compatible with diverse diffusion models and achieves average speedups of 1.78×–3.31× over nine baselines across six image generation tasks, while preserving fidelity, as evidenced by unchanged FID and CLIP-Score, and it successfully extends to already-accelerated paradigms such as LCM-SDXL.

📝 Abstract
In this paper, we present Morse, a simple dual-sampling framework for accelerating diffusion models losslessly. The key insight of Morse is to reformulate the iterative generation (from noise to data) process by taking advantage of fast jump sampling and adaptive residual feedback strategies. Specifically, Morse involves two models called Dash and Dot that interact with each other. The Dash model is just the pre-trained diffusion model of any type, but operates in a jump sampling regime, creating sufficient space for sampling efficiency improvement. The Dot model, which is significantly faster than the Dash model, is learnt to generate residual feedback conditioned on the observations at the current jump sampling point on the trajectory of the Dash model, lifting the noise estimate to easily match the next-step estimate of the Dash model without jump sampling. By chaining the outputs of the Dash and Dot models run in a time-interleaved fashion, Morse can flexibly attain the desired image generation performance while improving overall runtime efficiency. With our proposed weight sharing strategy between the Dash and Dot models, Morse is efficient for training and inference. Our method shows a lossless speedup of 1.78X to 3.31X on average over a wide range of sampling step budgets relative to 9 baseline diffusion models on 6 image generation tasks. Furthermore, we show that our method can also be generalized to improve the Latent Consistency Model (LCM-SDXL, which is already accelerated with the consistency distillation technique) tailored for few-step text-to-image synthesis. The code and models are available at https://github.com/deep-optimization/Morse.
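The time-interleaved Dash/Dot loop described in the abstract can be sketched as below. This is a minimal numeric sketch under stated assumptions, not the paper's implementation: `dash`, `dot`, the `0.1 * eps` update rule, and the fixed `jump` interval are all hypothetical stand-ins for the real diffusion sampler and networks.

```python
def morse_sample(x, timesteps, dash, dot, jump=2):
    """Time-interleaved dual sampling in the spirit of Morse (sketch).

    The expensive Dash model is queried only at every `jump`-th step;
    the cheap Dot model lifts the most recent Dash noise estimate at
    the intermediate steps via a residual correction.
    """
    eps, t_dash = None, None
    for i, t in enumerate(timesteps):
        if i % jump == 0:
            eps, t_dash = dash(x, t), t         # anchor: full Dash evaluation
        else:
            eps = eps + dot(x, t, eps, t_dash)  # cheap residual lift by Dot
        x = x - 0.1 * eps                       # toy update rule, not a real sampler
    return x

# Hypothetical stand-in models that just count their invocations.
calls = {"dash": 0, "dot": 0}

def toy_dash(x, t):
    calls["dash"] += 1
    return 0.5 * x

def toy_dot(x, t, eps, t_dash):
    calls["dot"] += 1
    return 0.0  # a trained Dot would output a nonzero residual

x_final = morse_sample(1.0, range(8), toy_dash, toy_dot, jump=2)
# With jump=2, Dash runs for only 4 of the 8 steps; Dot fills the rest.
```

The speedup comes from the call-count asymmetry: the Dash model (the full pre-trained network) is evaluated once per jump interval, while the intermediate steps cost only a Dot evaluation, which the paper makes cheap via weight sharing with Dash.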
Problem

Research questions and friction points this paper is trying to address.

Accelerate diffusion models losslessly with dual-sampling
Improve sampling efficiency via jump and residual feedback
Enhance runtime without sacrificing image generation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-sampling framework for lossless acceleration
Fast jump sampling and adaptive residual feedback
Weight sharing between Dash and Dot models