One-Way Ticket: Time-Independent Unified Encoder for Distilling Text-to-Image Diffusion Models

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Text-to-image diffusion models face a three-way trade-off among inference speed, generation quality, and sample diversity. This paper identifies redundant computations across timesteps in the UNet encoder, while observing that the decoder excels at modeling explicit semantics. To address this, we propose the first timestep-agnostic unified encoder (TiUE): it decouples encoding into a single forward pass, with features shared across all timesteps in the decoder, enabling truly parallel sampling. Additionally, we introduce KL-divergence regularization on noise prediction to jointly enhance fidelity and diversity. TiUE operates within a UNet distillation framework and requires no architectural modification to the decoder. Experiments demonstrate that TiUE achieves state-of-the-art performance—surpassing LCM, SD-Turbo, and SwiftBrushv2—in image quality, distributional diversity, and photorealism, even under extreme acceleration (e.g., one- or two-step sampling).

📝 Abstract
Text-to-Image (T2I) diffusion models have made remarkable advancements in generative modeling; however, they face a trade-off between inference speed and image quality, posing challenges for efficient deployment. Existing distilled T2I models can generate high-fidelity images with fewer sampling steps, but often struggle with diversity and quality, especially in one-step models. From our analysis, we observe redundant computations in the UNet encoders. Our findings suggest that, for T2I diffusion models, decoders are more adept at capturing richer and more explicit semantic information, while encoders can be effectively shared across decoders from diverse time steps. Based on these observations, we introduce the first Time-independent Unified Encoder (TiUE) for the student model's UNet architecture, a loop-free image generation approach for distilling T2I diffusion models. Using a one-pass scheme, TiUE shares encoder features across multiple decoder time steps, enabling parallel sampling and significantly reducing inference time complexity. In addition, we incorporate a KL divergence term to regularize noise prediction, which enhances the perceptual realism and diversity of the generated images. Experimental results demonstrate that TiUE outperforms state-of-the-art methods, including LCM, SD-Turbo, and SwiftBrushv2, producing more diverse and realistic results while maintaining computational efficiency.
Problem

Research questions and friction points this paper is trying to address.

Reduces redundant UNet encoder computations in T2I diffusion models
Improves image diversity and quality in one-step distilled models
Enables parallel sampling to significantly cut inference time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Time-independent Unified Encoder (TiUE) for UNet
Shared encoder features across decoder steps
KL divergence term for noise prediction regularization
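The core idea behind the innovation bullets above can be sketched in a few lines of toy Python. This is an illustrative assumption-laden sketch, not the paper's actual implementation: `encoder`, `decoder`, and `sample_tiue` are hypothetical stand-ins showing only the control flow of a one-pass, timestep-independent encoder whose features are reused by the decoder at every sampling step.

```python
# Toy sketch (not the paper's code) of TiUE's one-pass shared-encoder sampling:
# the UNet encoder runs once, and its features are shared by the
# timestep-conditioned decoder across all sampling steps.

encoder_calls = 0

def encoder(latent):
    """Stand-in for the UNet encoder; timestep-independent in TiUE."""
    global encoder_calls
    encoder_calls += 1
    return [x * 0.5 for x in latent]  # pretend feature map

def decoder(features, t):
    """Stand-in for the timestep-conditioned UNet decoder."""
    return [f + 0.1 * t for f in features]

def sample_tiue(latent, timesteps):
    feats = encoder(latent)  # single encoder forward pass
    # Decoder steps share `feats`, so they could run in parallel.
    return [decoder(feats, t) for t in timesteps]

outs = sample_tiue([1.0, 2.0], timesteps=[0, 1, 2, 3])
print(encoder_calls)  # encoder ran once, regardless of step count
```

A conventional UNet sampler would invoke the encoder once per timestep; here the encoder-call counter stays at 1 no matter how many decoder steps run, which is the source of the claimed inference savings.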
Senmao Li — Ph.D. Student, Nankai University (GANs, Image-to-image translation, Diffusion Models)
Lei Wang — VCIP, CS, Nankai University
Kai Wang — Computer Vision Center, Universitat Autònoma de Barcelona
Tao Liu — VCIP, CS, Nankai University
Jiehang Xie — School of Big Data and Computer Science, Guizhou Normal University
J. Weijer — Computer Vision Center, Universitat Autònoma de Barcelona
Fahad Shahbaz Khan — MBZUAI; Linköping University, Sweden (Computer Vision, Object Recognition, Generative AI, AI for Science)
Shiqi Yang
Yaxing Wang — Associate Professor, Nankai University (Deep learning, GANs, Image-to-image translation, Transfer learning)
Jian Yang — VCIP, CS, Nankai University