DiMSUM: Diffusion Mamba - A Scalable and Unified Spatial-Frequency Method for Image Generation

📅 2024-11-06
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Diffusion models struggle to model local image structures while also capturing long-range dependencies. To address this, we propose DiMSUM, a generative framework that integrates wavelet transforms with the Mamba state-space model. Our key contributions are threefold: (i) the first incorporation of wavelet subband decomposition into the Mamba architecture, enabling joint modeling of spatial locality and frequency-domain long-range relationships; (ii) a cross-attention fusion layer that jointly optimizes spatial- and frequency-domain representations; and (iii) a globally shared Transformer module that enhances global consistency. On standard benchmarks, DiMSUM significantly outperforms DiT and DiffuSSM, achieving faster training convergence, lower FID scores, and images with richer fine detail and more realistic structure. Code and pretrained models are publicly available.
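The cross-attention fusion in (ii) can be sketched as follows. This is a minimal, hypothetical illustration of the mechanism, not the paper's implementation: it assumes queries come from the spatial (Mamba) branch and keys/values from the wavelet branch, with plain scaled dot-product attention.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attention(queries, keys, values):
    """Each spatial-branch query attends over wavelet-branch keys/values.

    queries: list of d-dim vectors (spatial/Mamba tokens, hypothetical)
    keys, values: lists of d-dim vectors (wavelet/frequency tokens)
    Returns one fused vector per query.
    """
    d = len(queries[0])
    fused = []
    for q in queries:
        # Scaled dot-product score against every frequency token.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Fused output = attention-weighted sum of value vectors.
        fused.append([sum(w * v[j] for w, v in zip(weights, values))
                      for j in range(len(values[0]))])
    return fused
```

A quick sanity check: if all value vectors are identical, every fused output equals that vector regardless of how the attention weights fall.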

📝 Abstract
We introduce a novel state-space architecture for diffusion models that harnesses spatial and frequency information to strengthen the inductive bias toward local features in input images for image generation tasks. State-space networks, including Mamba, a recent advance in recurrent neural networks, typically scan input sequences from left to right, and designing effective scanning strategies for them is difficult, especially for image data. Our method shows that integrating wavelet transformation into Mamba enhances the local structure awareness of visual inputs and better captures long-range relations among frequencies by disentangling them into wavelet subbands that represent both low- and high-frequency components. These wavelet-based outputs are then processed and fused with the original Mamba outputs through a cross-attention fusion layer, combining spatial and frequency information to improve the order awareness of state-space models, which is essential for the detail and overall quality of generated images. In addition, we introduce a globally shared transformer that boosts Mamba's performance by capturing global relationships. Through extensive experiments on standard benchmarks, our method achieves superior results compared to DiT and DiffuSSM, with faster training convergence and high-quality outputs. The code and pretrained models are released at https://github.com/VinAIResearch/DiMSUM.git.
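The wavelet subband decomposition the abstract describes can be illustrated with a single-level 2D Haar transform. This is a minimal sketch of the idea only, not the multi-level transform or architecture used in the paper:

```python
def haar2d(image):
    """Single-level 2D Haar wavelet transform.

    Splits an H x W image (even dimensions, nested lists) into four
    half-resolution subbands: LL holds the low-frequency approximation;
    LH, HL, HH hold high-frequency detail in different orientations.
    """
    h, w = len(image), len(image[0])
    LL, LH, HL, HH = [], [], [], []
    for i in range(0, h, 2):
        ll_row, lh_row, hl_row, hh_row = [], [], [], []
        for j in range(0, w, 2):
            # Each non-overlapping 2x2 block yields one coefficient per subband.
            a, b = image[i][j], image[i][j + 1]
            c, d = image[i + 1][j], image[i + 1][j + 1]
            ll_row.append((a + b + c + d) / 2)  # block average: low frequency
            lh_row.append((a + b - c - d) / 2)  # difference across rows
            hl_row.append((a - b + c - d) / 2)  # difference across columns
            hh_row.append((a - b - c + d) / 2)  # diagonal difference
        LL.append(ll_row)
        LH.append(lh_row)
        HL.append(hl_row)
        HH.append(hh_row)
    return LL, LH, HL, HH
```

A flat patch lands entirely in LL, while an edge shows up in exactly one detail subband depending on its orientation; separating structure by frequency like this is what the wavelet branch feeds to Mamba.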
Problem

Research questions and friction points this paper is trying to address.

Image Generation
Realism and Detail
Local Structure and Long-range Dependency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Wavelet Transform
Global Shared Transformer
Enhanced Image Generation
Hao Phung
CS PhD Student, Cornell University
Generative Models
Quan Dao
Rutgers University
T. Dao
VinAI Research
Hoang Phan
New York University
Dimitris Metaxas
Rutgers University
Anh Tran
VinAI Research