SCott: Accelerating Diffusion Models with Stochastic Consistency Distillation

πŸ“… 2024-03-03
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 5
✨ Influential: 1
πŸ€– AI Summary
Diffusion models suffer from high inference latency, hindering real-time text-to-image generation. To address this, we propose Stochastic Consistency Distillation (SCott), which embeds a stochastic differential equation (SDE) solver into the consistency distillation framework. SCott controls the noise strength and sampling process of the SDE solver while introducing an adversarial loss to enhance sample fidelity under extreme acceleration (1–2 steps). Distilled from Stable Diffusion-V1.5, SCott generates high-fidelity images in just 1–2 steps, achieving an FID of 22.1 on MSCOCO-2017, surpassing 1-step InstaFlow (23.4) and matching 4-step UFOGen. It also improves sample diversity in high-resolution generation by up to 16%. Key innovations include: (i) SDE-driven consistency modeling, (ii) joint control of noise strength and sampling, and (iii) adversarial regularization for ultra-low-step synthesis.

πŸ“ Abstract
The iterative sampling procedure employed by diffusion models (DMs) often leads to significant inference latency. To address this, we propose Stochastic Consistency Distillation (SCott) to enable accelerated text-to-image generation, where high-quality generations can be achieved with just 1-2 sampling steps, and further improvements can be obtained by adding additional steps. In contrast to vanilla consistency distillation (CD), which distills the ordinary differential equation (ODE) solver-based sampling process of a pretrained teacher model into a student, SCott explores the possibility and validates the efficacy of integrating stochastic differential equation (SDE) solvers into CD to fully unleash the potential of the teacher. SCott is augmented with elaborate strategies to control the noise strength and sampling process of the SDE solver. An adversarial loss is further incorporated to strengthen the sample quality with very few sampling steps. Empirically, on the MSCOCO-2017 5K dataset with a Stable Diffusion-V1.5 teacher, SCott achieves an FID (Fréchet Inception Distance) of 22.1, surpassing that (23.4) of the 1-step InstaFlow (Liu et al., 2023) and matching that of 4-step UFOGen (Xue et al., 2023b). Moreover, SCott can yield more diverse samples than other consistency models for high-resolution image generation (Luo et al., 2023a), with up to 16% improvement in a qualified metric. The code and checkpoints are coming soon.
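To make the core idea concrete, here is a minimal toy sketch of the stochastic teacher step the abstract describes: a reverse-SDE (Euler–Maruyama) step driven by the teacher's score, in place of the ODE step that vanilla consistency distillation distills. Everything here is an illustrative assumption, not the paper's implementation: a 1-D Gaussian stands in for the data, and an analytic score stands in for the pretrained teacher.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setup (illustrative assumption, not from the paper): clean data is
# N(MU, DATA_VAR); the forward process adds noise at levels SIGMAS (VE-style).
MU, DATA_VAR = 2.0, 0.1
SIGMAS = np.linspace(0.01, 1.0, 50)

def teacher_score(x, sigma):
    # Analytic score of the noised marginal N(MU, DATA_VAR + sigma^2),
    # standing in for the pretrained teacher (e.g. Stable Diffusion-V1.5).
    return (MU - x) / (DATA_VAR + sigma**2)

def sde_solver_step(x, i):
    """One Euler-Maruyama step of the reverse SDE from SIGMAS[i] down to
    SIGMAS[i-1]. Vanilla CD would take the deterministic ODE step here,
    i.e. the same drift with no injected noise."""
    dt = SIGMAS[i] ** 2 - SIGMAS[i - 1] ** 2       # variance shed this step
    drift = teacher_score(x, SIGMAS[i]) * dt       # score-driven denoising
    noise = np.sqrt(dt) * rng.standard_normal(np.shape(x))
    return x + drift + noise                       # stochastic teacher step

# Run the full stochastic teacher trajectory for a batch of samples; adjacent
# points on such trajectories are what a consistency student is trained to
# map to a common output.
x = MU + np.sqrt(DATA_VAR + SIGMAS[-1] ** 2) * rng.standard_normal(4000)
for i in range(len(SIGMAS) - 1, 0, -1):
    x = sde_solver_step(x, i)

print(f"mean={x.mean():.3f}  std={x.std():.3f}")  # should approach N(2.0, sqrt(0.1))
```

Because noise is re-injected at every step, distinct runs from the same start point land on distinct endpoints, which is the mechanism behind the diversity gains the paper reports for SDE-based distillation over ODE-based CD.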
Problem

Research questions and friction points this paper is trying to address.

Reduce diffusion models' inference latency
Enhance text-to-image generation speed
Improve sample quality with fewer steps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates SDE solvers into distillation
Controls noise and sampling process
Uses adversarial loss for quality improvement
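Schematically, the three ingredients above can be combined in the standard consistency-distillation notation (Song et al., 2023); the exact distance, weighting, and discriminator used by SCott may differ:

```latex
\mathcal{L}(\theta)
  = \mathbb{E}\Big[\, d\big(f_\theta(x_{t_{n+1}}, t_{n+1}),\,
      f_{\theta^-}(\hat{x}^{\mathrm{SDE}}_{t_n}, t_n)\big) \Big]
  + \lambda\, \mathcal{L}_{\mathrm{adv}}(\theta)
```

where $\hat{x}^{\mathrm{SDE}}_{t_n}$ is produced from $x_{t_{n+1}}$ by one teacher-driven SDE-solver step (vanilla CD uses an ODE solver here), $\theta^-$ is the target (EMA) copy of the student, $d$ is a distance metric, and $\mathcal{L}_{\mathrm{adv}}$ is the adversarial term weighted by $\lambda$.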
Authors
Hongjian Liu (Anhui Polytechnic University)
Qingsong Xie (OPPO AI Center)
Zhijie Deng (Shanghai Jiao Tong University)
Chen Chen (OPPO AI Center)
Shixiang Tang (The Chinese University of Hong Kong)
Xueyang Fu (University of Science and Technology of China)
Zheng-jun Zha (University of Science and Technology of China)
Haonan Lu (OPPO AI Center)