🤖 AI Summary
Traditional decoder-only autoregressive vision generation models are constrained by a fixed raster-scan token ordering, limiting flexibility and generalization. This paper proposes RandAR, the first pure decoder-only autoregressive model supporting arbitrary spatial-order image token generation. RandAR explicitly models 2D coordinates via position instruction tokens and employs a randomized-sequence training paradigm to eliminate the raster-order dependency. It achieves, for the first time, zero-shot inpainting, outpainting, and resolution extrapolation in autoregressive vision generation. It also introduces a parallel KV-cache decoding mechanism that accelerates inference by 2.5× without compromising generation quality. Extensive experiments demonstrate that RandAR significantly outperforms baseline models in multi-task zero-shot generalization while matching the performance of raster-order models. RandAR establishes a new paradigm for autoregressive visual generation, enabling order-agnostic, spatially flexible, and scalable image synthesis.
📝 Abstract
We introduce RandAR, a decoder-only visual autoregressive (AR) model capable of generating images in arbitrary token orders. Unlike previous decoder-only AR models that rely on a predefined generation order, RandAR removes this inductive bias, unlocking new capabilities in decoder-only generation. Our essential design enables random order by inserting a "position instruction token" before each image token to be predicted, representing the spatial location of the next image token. Trained on randomly permuted token sequences, a more challenging task than fixed-order generation, RandAR achieves performance comparable to its conventional raster-order counterpart. More importantly, decoder-only transformers trained on random orders acquire new capabilities. To address the efficiency bottleneck of AR models, RandAR adopts parallel decoding with KV-Cache at inference time, enjoying 2.5x acceleration without sacrificing generation quality. Additionally, RandAR supports inpainting, outpainting and resolution extrapolation in a zero-shot manner. We hope RandAR inspires new directions for decoder-only visual generation models and broadens their applications across diverse scenarios. Our project page is at https://rand-ar.github.io/.
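The sequence construction described above, where each image token is preceded by a position instruction token and the overall order is a random permutation rather than a raster scan, can be sketched as follows. This is a minimal illustration, not the authors' implementation; `build_random_order_sequence` and the tuple-based token encoding are hypothetical, and a real model would use learned embeddings for both token types.

```python
import random

def build_random_order_sequence(image_tokens, grid_size, seed=0):
    """Sketch of RandAR-style sequence construction (hypothetical helper).

    Each image token is preceded by a "position instruction token" encoding
    its 2D grid location, and the tokens appear in a random permutation
    instead of raster order.
    """
    n = grid_size * grid_size
    assert len(image_tokens) == n, "expect one token per grid cell"

    # Randomly permute the spatial positions (the randomized training order).
    order = list(range(n))
    random.Random(seed).shuffle(order)

    seq = []
    for idx in order:
        row, col = divmod(idx, grid_size)
        seq.append(("POS", row, col))           # position instruction token
        seq.append(("IMG", image_tokens[idx]))  # image token to be predicted
    return seq

# Example: a 4x4 token grid yields an interleaved sequence of 32 tokens.
seq = build_random_order_sequence(list(range(16)), grid_size=4, seed=42)
```

At inference, the same interleaving lets the model be prompted with an arbitrary subset of position tokens, which is what makes zero-shot inpainting and outpainting possible: known regions are supplied as conditioning, and only the remaining positions are predicted.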