🤖 AI Summary
To address the high computational cost and memory footprint of frame-wise encoding in long-video tokenization, this paper proposes CoordTok, a framework that brings coordinate-based implicit neural representations into video tokenization. Its core innovation lies in factorizing the video representation into three orthogonal planes and enabling sparse reconstruction via random spatiotemporal coordinate sampling $(x, y, t)$, thereby avoiding conventional frame-wise tokenization. The method integrates coordinate-implicit modeling, triplane decomposition, and a lightweight decoder, all trained end-to-end. On 128-frame videos at 128×128 resolution, CoordTok reduces the token count to just 1,280, compared with the 6,144–8,192 tokens required by baseline methods, yielding substantial reductions in GPU memory consumption and FLOPs. This efficiency enables scalable training of diffusion transformers for long videos and supports single-pass generation of 128-frame sequences.
📝 Abstract
Efficient tokenization of videos remains a challenge in training vision models that can process long videos. One promising direction is to develop a tokenizer that can encode long video clips, as this would enable the tokenizer to better leverage the temporal coherence of videos for tokenization. However, training existing tokenizers on long videos often incurs a huge training cost, as they are trained to reconstruct all the frames at once. In this paper, we introduce CoordTok, a video tokenizer that learns a mapping from coordinate-based representations to the corresponding patches of input videos, inspired by recent advances in 3D generative models. In particular, CoordTok encodes a video into factorized triplane representations and reconstructs patches that correspond to randomly sampled $(x,y,t)$ coordinates. This allows for training large tokenizer models directly on long videos without requiring excessive training resources. Our experiments show that CoordTok can drastically reduce the number of tokens for encoding long video clips. For instance, CoordTok can encode a 128-frame video with 128$\times$128 resolution into 1280 tokens, while baselines need 6144 or 8192 tokens to achieve similar reconstruction quality. We further show that this efficient video tokenization enables memory-efficient training of a diffusion transformer that can generate 128 frames at once.
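The triplane-plus-coordinate-sampling idea above can be illustrated with a minimal sketch. This is not the paper's implementation: the plane names, shapes, and nearest-neighbor lookup (a stand-in for the bilinear or learned interpolation a real decoder would use) are all illustrative assumptions. The point it shows is that each randomly sampled $(x,y,t)$ coordinate queries three factorized feature planes, so training can reconstruct a sparse set of patches instead of decoding all 128 frames at once.

```python
import numpy as np

def sample_plane(plane, u, v):
    """Look up a feature vector on one plane of shape (H, W, C).

    u, v are normalized coordinates in [0, 1). Nearest-neighbor lookup
    keeps the sketch short; a real decoder would interpolate.
    """
    H, W, _ = plane.shape
    i = min(int(u * H), H - 1)
    j = min(int(v * W), W - 1)
    return plane[i, j]

def query_triplane(planes, x, y, t):
    """Concatenate features from the three factorized planes at (x, y, t).

    `planes` maps the keys 'xy', 'xt', 'yt' (hypothetical layout) to
    arrays; each coordinate pair indexes the matching plane.
    """
    f_xy = sample_plane(planes["xy"], x, y)
    f_xt = sample_plane(planes["xt"], x, t)
    f_yt = sample_plane(planes["yt"], y, t)
    return np.concatenate([f_xy, f_xt, f_yt])

# Training-step sketch: sample a few random spatiotemporal coordinates
# and gather features only for those positions, so the full clip is
# never reconstructed in one pass.
rng = np.random.default_rng(0)
C = 4  # per-plane feature channels (illustrative)
planes = {k: rng.standard_normal((16, 16, C)) for k in ("xy", "xt", "yt")}
coords = rng.random((8, 3))  # eight random (x, y, t) samples in [0, 1)
features = np.stack([query_triplane(planes, *c) for c in coords])
print(features.shape)  # one 3C-dim feature per sampled coordinate
```

In the actual method these queried features would feed a lightweight decoder that predicts the pixel patch at each coordinate; the sketch stops at the feature-gathering step, which is where the memory savings come from.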