Spectral Image Tokenizer

📅 2024-12-12
🏛️ arXiv.org
📈 Citations: 4
Influential: 0
🤖 AI Summary
Existing image tokenizers use a raster-scan spatial token ordering, which is not well suited to autoregressive modeling. This work proposes a spectral-domain image tokenizer based on the discrete wavelet transform (DWT), mapping images into a coarse-to-fine, multi-scale sequence of spectral tokens. The method naturally enables zero-shot cross-resolution reconstruction, partial decoding for fast coarse previews, and text-guided super-resolution and editing, and it handles arbitrary resolutions without retraining. Experiments show substantial improvements in token reconstruction fidelity and state-of-the-art performance on multi-scale image generation, text-guided super-resolution, and text-guided image editing.

📝 Abstract
Image tokenizers map images to sequences of discrete tokens, and are a crucial component of autoregressive transformer-based image generation. The tokens are typically associated with spatial locations in the input image, arranged in raster scan order, which is not ideal for autoregressive modeling. In this paper, we propose to tokenize the image spectrum instead, obtained from a discrete wavelet transform (DWT), such that the sequence of tokens represents the image in a coarse-to-fine fashion. Our tokenizer brings several advantages: 1) it leverages that natural images are more compressible at high frequencies, 2) it can take and reconstruct images of different resolutions without retraining, 3) it improves the conditioning for next-token prediction -- instead of conditioning on a partial line-by-line reconstruction of the image, it takes a coarse reconstruction of the full image, 4) it enables partial decoding where the first few generated tokens can reconstruct a coarse version of the image, 5) it enables autoregressive models to be used for image upsampling. We evaluate the tokenizer reconstruction metrics as well as multiscale image generation, text-guided image upsampling and editing.
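The coarse-to-fine ordering the abstract describes can be illustrated with a plain NumPy sketch: apply a multi-level 2D wavelet transform and emit the coefficients coarsest-band-first, so the leading entries of the sequence already summarize the whole image. The Haar filter, the averaging normalization, and the function names here are illustrative assumptions; the paper's tokenizer quantizes learned embeddings of such bands rather than raw coefficients.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2D Haar DWT: returns (LL, (LH, HL, HH))."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # approximation band
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def coarse_to_fine_sequence(img, levels):
    """Flatten a multi-level DWT into a coarsest-first coefficient sequence."""
    bands = []
    ll = img
    for _ in range(levels):
        ll, details = haar_dwt2(ll)
        bands.append(details)
    seq = [ll.ravel()]               # coarsest approximation comes first
    for details in reversed(bands):  # then detail bands, coarse to fine
        seq.extend(b.ravel() for b in details)
    return np.concatenate(seq)

img = np.arange(64, dtype=float).reshape(8, 8)
seq = coarse_to_fine_sequence(img, levels=3)
# seq has 64 entries; seq[0] is the global mean (the coarsest "token")
```

With this ordering, an autoregressive model predicting the sequence left to right always conditions on a coarse view of the *entire* image, rather than on a partial raster-scan reconstruction.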
Problem

Research questions and friction points this paper is trying to address.

Improves image tokenization via spectral methods
Enables multi-resolution image handling without retraining
Enhances autoregressive modeling for image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tokenizes image spectrum via wavelet transform
Enables multi-resolution without retraining
Improves autoregressive conditioning with coarse reconstruction
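The partial-decoding benefit can be sketched in the same toy setting: if only the leading (coarsest) coefficients have been generated, inverting the transform with all detail bands set to zero still yields a full-size, blurry preview. Under the averaging Haar convention assumed above, that zero-detail inverse reduces to simple pixel replication; this is a sketch of the idea, not the paper's decoder.

```python
import numpy as np

def coarse_preview(ll, levels):
    """Invert an averaging Haar DWT with all detail bands zeroed.

    With zero details, each inverse step just replicates every pixel
    into a 2x2 block, producing a blurry full-resolution preview from
    only the coarsest tokens.
    """
    x = ll
    for _ in range(levels):
        x = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
    return x

# A single coarsest coefficient expands to a flat 8x8 preview.
preview = coarse_preview(np.array([[31.5]]), levels=3)
```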
Carlos Esteves — Research Scientist, Google Research (Machine Learning, Computer Vision)
Mohammed Suhail — Google Research
A. Makadia — Google Research