RandAR: Decoder-only Autoregressive Visual Generation in Random Orders

📅 2024-12-02
🏛️ arXiv.org
📈 Citations: 20
✨ Influential: 3
📄 PDF
🤖 AI Summary
Traditional decoder-only autoregressive vision generation models are constrained by a fixed raster-scan token ordering, which limits their flexibility and generalization. This paper proposes RandAR, the first decoder-only autoregressive model to support image token generation in arbitrary spatial orders. RandAR explicitly models 2D coordinates via position instruction tokens and employs a randomized-sequence training paradigm to eliminate raster-order dependency. It achieves, for the first time, zero-shot inpainting, outpainting, and resolution extrapolation in autoregressive vision generation. Additionally, a parallel KV-cache decoding mechanism accelerates inference by 2.5× without compromising generation quality. Extensive experiments demonstrate that RandAR significantly outperforms baseline models in multi-task zero-shot generalization while matching the performance of raster-order models. RandAR establishes a new paradigm for autoregressive visual generation, enabling order-agnostic, spatially flexible, and scalable image synthesis.

πŸ“ Abstract
We introduce RandAR, a decoder-only visual autoregressive (AR) model capable of generating images in arbitrary token orders. Unlike previous decoder-only AR models that rely on a predefined generation order, RandAR removes this inductive bias, unlocking new capabilities in decoder-only generation. Our essential design enables random order by inserting a "position instruction token" before each image token to be predicted, representing the spatial location of the next image token. Trained on randomly permuted token sequences, a more challenging task than fixed-order generation, RandAR achieves performance comparable to its conventional raster-order counterpart. More importantly, decoder-only transformers trained on random orders acquire new capabilities. To address the efficiency bottleneck of AR models, RandAR adopts parallel decoding with KV-Cache at inference time, enjoying 2.5x acceleration without sacrificing generation quality. Additionally, RandAR supports inpainting, outpainting and resolution extrapolation in a zero-shot manner. We hope RandAR inspires new directions for decoder-only visual generation models and broadens their applications across diverse scenarios. Our project page is at https://rand-ar.github.io/.
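The position-instruction-token design in the abstract can be sketched as follows. This is a minimal illustration of how a randomly permuted training sequence might interleave position tokens with image tokens; the token encodings and helper names are assumptions for illustration, not the paper's actual implementation.

```python
import random

def build_random_order_sequence(image_tokens, width, seed=None):
    """Interleave a position instruction token before each image token,
    visiting the flattened 2D grid in a random order rather than a
    raster scan (a sketch of a RandAR-style training sequence)."""
    rng = random.Random(seed)
    order = list(range(len(image_tokens)))
    rng.shuffle(order)  # random generation order instead of raster order
    sequence = []
    for idx in order:
        row, col = divmod(idx, width)  # recover 2D coordinates
        sequence.append(("POS", row, col))           # position instruction token
        sequence.append(("IMG", image_tokens[idx]))  # token to be predicted next
    return sequence

# Example: a 2x2 grid of four image tokens
seq = build_random_order_sequence([10, 11, 12, 13], width=2, seed=0)
# The sequence alternates POS and IMG tokens and covers every grid cell once.
```

Because the position token always precedes the image token it announces, the model knows *where* the next token belongs before predicting *what* it is, which is what makes arbitrary orders learnable.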
Problem

Research questions and friction points this paper is trying to address.

Enables image generation in arbitrary token orders
Removes predefined generation order bias in AR models
Supports zero-shot inpainting and outpainting tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Random order image generation via position tokens
Parallel decoding with KV-Cache for speedup
Zero-shot inpainting and outpainting support
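The parallel-decoding speedup listed above can be illustrated with a step-count sketch: because any generation order is valid, the model can query several positions in one forward pass while reusing the KV-cache of tokens already decoded. The chunk size and numbers below are illustrative assumptions, not the paper's exact decoding schedule.

```python
def sequential_passes(num_tokens, tokens_per_pass):
    """Number of sequential forward passes needed when `tokens_per_pass`
    queried positions are decoded jointly per pass, each pass reusing
    the KV-cache of all previously decoded tokens."""
    return -(-num_tokens // tokens_per_pass)  # ceiling division

# 256 tokens (e.g. a 16x16 latent grid): one token per pass vs. four per pass
baseline = sequential_passes(256, 1)  # fully sequential decoding
parallel = sequential_passes(256, 4)  # four positions per forward pass
speedup_bound = baseline / parallel   # upper bound on wall-clock speedup
```

Per-pass cost is not constant in practice, so measured wall-clock gains (2.5× in the paper) sit below this step-count bound.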