Marrying Autoregressive Transformer and Diffusion with Multi-Reference Autoregression

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two key limitations in image generation: the low inference efficiency of autoregressive (AR) models and the weak semantic modeling capability of diffusion models. To this end, the authors propose TransDiff, the first unified framework that integrates an autoregressive Transformer with diffusion modeling. Its core contributions are: (1) joint modeling in which the Transformer encodes labels and images into high-level semantic features while a diffusion model estimates the image distribution for high-fidelity synthesis; and (2) a Multi-Reference Autoregression (MRAR) paradigm that conditions generation on multiple previously generated images, improving diversity and fidelity. On ImageNet 256×256, TransDiff achieves a state-of-the-art FID of 1.61 (reduced to 1.42 with MRAR) and an Inception Score (IS) of 293.4. Moreover, it attains 2× faster inference than AR-Transformer-based methods and 112× faster inference than diffusion-only models, striking a strong balance among generation quality, semantic coherence, and inference efficiency.

📝 Abstract
We introduce TransDiff, the first image generation model that marries an Autoregressive (AR) Transformer with diffusion models. In this joint modeling framework, TransDiff encodes labels and images into high-level semantic features and employs a diffusion model to estimate the distribution of image samples. On the ImageNet 256×256 benchmark, TransDiff significantly outperforms other image generation models based on standalone AR Transformers or diffusion models. Specifically, TransDiff achieves a Fréchet Inception Distance (FID) of 1.61 and an Inception Score (IS) of 293.4, and further provides 2× faster inference latency compared to state-of-the-art methods based on AR Transformers and 112× faster inference compared to diffusion-only models. Furthermore, building on the TransDiff model, we introduce a novel image generation paradigm called Multi-Reference Autoregression (MRAR), which performs autoregressive generation by predicting the next image. MRAR enables the model to reference multiple previously generated images, thereby facilitating the learning of more diverse representations and improving the quality of generated images in subsequent iterations. By applying MRAR, the performance of TransDiff is improved, with the FID reduced from 1.61 to 1.42. We expect TransDiff to open up a new frontier in the field of image generation.
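The MRAR loop described in the abstract might be sketched as follows. All function names, signatures, and return values here are hypothetical placeholders for illustration, not the authors' code: in the real model, `ar_transformer` would be a Transformer producing semantic features and `diffusion_decode` would run (few-step) denoising.

```python
# Hypothetical sketch of Multi-Reference Autoregression (MRAR): each round
# conditions the AR Transformer on ALL previously generated images, and a
# diffusion decoder samples the next image from the resulting semantic feature.

def ar_transformer(label, reference_images):
    # Placeholder: fuse the class label with features of earlier generations.
    return {"label": label, "n_refs": len(reference_images)}

def diffusion_decode(semantic_feature):
    # Placeholder: a real diffusion model would denoise from this feature.
    return f"image(label={semantic_feature['label']}, refs={semantic_feature['n_refs']})"

def mrar_generate(label, num_rounds=3):
    """Autoregress over whole images: round t sees images from rounds 0..t-1."""
    references = []
    for _ in range(num_rounds):
        feature = ar_transformer(label, references)
        image = diffusion_decode(feature)
        references.append(image)
    return references  # the last image benefits from the most references
```

The key design point is that, unlike token-level AR models, the unit of autoregression here is an entire image, which is why the reference list grows by one image per round.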
Problem

Research questions and friction points this paper is trying to address.

Combining AR Transformer and diffusion for image generation
Improving image quality with Multi-Reference Autoregression (MRAR)
Achieving faster inference than AR or diffusion-only models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines AR Transformer and diffusion models
Uses Multi-Reference Autoregression for diverse generation
Achieves faster inference and better image quality
Dingcheng Zhen (SoulApp.com): LLM, Computer vision, Multi-modal, AIGC
Qian Qiao (Soul AI)
Tan Yu (NVIDIA): LLM, RAG, Cross-modal search, Advertising, Vision backbone
Kangxi Wu (Soul AI)
Ziwei Zhang (Soul AI)
Siyuan Liu (Soul AI)
Shunshun Yin (Soul AI)
Ming Tao (Soul AI)