DDiT: Dynamic Patch Scheduling for Efficient Diffusion Transformers

📅 2026-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion Transformers for image and video generation suffer from high computational costs due to their reliance on fixed-size image patches throughout the entire denoising process. This work proposes the first content- and temporal-aware dynamic patching mechanism, which adaptively adjusts patch sizes during inference based on the denoising timestep and local content complexity: coarse-grained patches are used in early steps to model global structure, while fine-grained patches are employed in later steps to refine details. By integrating dynamic tokenization with adaptive patch scheduling, the method achieves up to 3.52× and 3.2× inference speedup on FLUX.1-dev and Wan 2.1, respectively, while preserving generation quality and prompt fidelity.

📝 Abstract
Diffusion Transformers (DiTs) have achieved state-of-the-art performance in image and video generation, but their success comes at the cost of heavy computation. This inefficiency is largely due to the fixed tokenization process, which uses constant-sized patches throughout the entire denoising phase, regardless of the content's complexity. We propose dynamic tokenization, an efficient test-time strategy that varies patch sizes based on content complexity and the denoising timestep. Our key insight is that early timesteps require only coarser patches to model global structure, while later iterations demand finer (smaller) patches to refine local details. During inference, our method dynamically reallocates patch sizes across denoising steps for image and video generation, substantially reducing cost while preserving perceptual generation quality. Extensive experiments demonstrate the effectiveness of our approach: it achieves up to $3.52\times$ and $3.2\times$ speedup on FLUX.1-dev and Wan 2.1, respectively, without compromising generation quality or prompt adherence.
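The coarse-to-fine scheduling idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's actual method: `patch_size_schedule` maps the denoising step index to a patch size (the paper's policy also weighs local content complexity, which is omitted here), and `patchify` shows why coarser patches are cheaper, since fewer tokens mean less attention compute (which scales roughly quadratically in token count).

```python
import numpy as np

def patch_size_schedule(t, num_steps, sizes=(4, 2, 1)):
    """Hypothetical schedule: coarse patches at early denoising steps,
    fine patches at late steps. `sizes` is ordered coarse -> fine."""
    frac = t / max(num_steps - 1, 1)          # progress through denoising, 0..1
    idx = min(int(frac * len(sizes)), len(sizes) - 1)
    return sizes[idx]

def patchify(x, p):
    """Split an (H, W, C) array into non-overlapping p x p patch tokens."""
    H, W, C = x.shape
    assert H % p == 0 and W % p == 0
    x = x.reshape(H // p, p, W // p, p, C)
    # -> (num_patches, p*p*C): each row is one flattened patch token
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, p * p * C)

x = np.zeros((16, 16, 3))                     # toy latent
for t in range(0, 50, 10):
    p = patch_size_schedule(t, num_steps=50)
    tokens = patchify(x, p)
    # Early steps: p=4 -> 16 tokens; late steps: p=1 -> 256 tokens.
```

In a real DiT the patch embedding weights are tied to a fixed patch size, so varying the size at test time additionally requires adapting the tokenizer (e.g. resampling the embedding), which is the part the paper's dynamic tokenization addresses.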
Problem

Research questions and friction points this paper is trying to address.

Diffusion Transformers
computational efficiency
fixed tokenization
patch size
denoising process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Tokenization
Diffusion Transformers
Efficient Inference
Adaptive Patch Sizing
Denoising Schedule