🤖 AI Summary
In video generation, the high computational cost of Transformer self-attention severely hinders long-sequence modeling. Existing sparsification methods, such as factorized or fixed-window attention, fail to effectively exploit the inherent spatiotemporal redundancy in videos. To address this, we propose a hardware-aware structured sparse attention framework. First, we analyze attention distributions in video diffusion Transformers and reveal head-wise heterogeneous sparsity patterns. Second, we design an adaptive block partitioning strategy coupled with a time-varying sliding window mechanism to dynamically capture critical spatiotemporal dependencies. Third, we employ automated configuration search and hardware-friendly scheduling to optimize the sparse computation. Our method achieves a 1.6–2.5× attention speedup on a single GPU while matching the generation quality of full-attention baselines, significantly improving the efficiency of long-video synthesis without compromising fidelity.
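To make the masking idea concrete, below is a minimal PyTorch sketch of head-wise sparse attention with a time-varying temporal window, expressed as a boolean mask for readability. All function and parameter names (`make_frame_mask`, `window_per_head`, and so on) are illustrative assumptions, not the paper's API, and the hardware-aware implementation presumably uses block-sparse computation rather than materializing a dense mask as done here.

```python
import torch
import torch.nn.functional as F

def make_frame_mask(num_frames: int, window: int) -> torch.Tensor:
    """Frame i may attend to frame j iff |i - j| <= window."""
    idx = torch.arange(num_frames)
    return (idx[:, None] - idx[None, :]).abs() <= window

def sparse_video_attention(q, k, v, num_frames, window_per_head):
    """q, k, v: (batch, heads, num_frames * tokens_per_frame, head_dim).
    window_per_head: one temporal radius per head; a small radius mimics a
    'local' head, a radius spanning all frames a 'global' head."""
    b, h, n, d = q.shape
    tpf = n // num_frames                                  # tokens per frame
    masks = []
    for w in window_per_head:
        fm = make_frame_mask(num_frames, w)                # (F, F) frame-level mask
        tm = fm.repeat_interleave(tpf, 0).repeat_interleave(tpf, 1)  # (N, N) token-level
        masks.append(tm)
    mask = torch.stack(masks)[None].to(q.device)           # (1, H, N, N) bool mask
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

# Four heads with different temporal reach, mimicking the head-wise
# heterogeneity the analysis reports (hypothetical window choices).
q = k = v = torch.randn(1, 4, 8 * 16, 64)                  # 8 frames x 16 tokens each
out = sparse_video_attention(q, k, v, num_frames=8, window_per_head=[1, 2, 4, 8])
```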
📝 Abstract
The computational demands of self-attention pose a critical challenge for transformer-based video generation, particularly when synthesizing ultra-long sequences. Current approaches, such as factorized attention and fixed sparse patterns, fail to fully exploit the inherent spatiotemporal redundancy in video data. Through a systematic analysis of video diffusion transformers (DiTs), we uncover a key insight: attention matrices exhibit structured yet heterogeneous sparsity, with specialized heads dynamically attending to distinct spatiotemporal regions (e.g., local, cross-shaped, or global patterns). Existing sparse attention methods either impose rigid constraints or introduce significant overhead, limiting their effectiveness. To address this, we propose Compact Attention, a hardware-aware acceleration framework featuring three innovations: 1) adaptive tiling strategies that approximate diverse spatial interaction patterns via dynamic tile grouping, 2) temporally varying windows that adjust sparsity levels based on frame proximity, and 3) an automated configuration search algorithm that optimizes sparse patterns while preserving critical attention pathways. Our method achieves a 1.6–2.5× speedup in attention computation on single-GPU setups while maintaining visual quality comparable to full-attention baselines. This work provides a principled approach to unlocking efficient long-form video generation through structured sparsity exploitation. Project Page: https://yo-ava.github.io/Compact-Attention.github.io/
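As a rough illustration of the configuration-search idea, picking the cheapest sparse pattern per head that still preserves most of its attention mass, here is a hypothetical sketch. The band-shaped candidate set, the 95% mass budget, and all names (`banded`, `pick_pattern`) are assumptions for illustration, not details of the paper's algorithm.

```python
import torch

def banded(n: int, w: int) -> torch.Tensor:
    """Band mask: position i attends to j iff |i - j| <= w."""
    idx = torch.arange(n)
    return (idx[:, None] - idx[None, :]).abs() <= w

def attention_mass(probs: torch.Tensor, mask: torch.Tensor) -> float:
    """Fraction of a head's attention probability covered by the mask."""
    return ((probs * mask).sum() / probs.sum()).item()

def pick_pattern(probs: torch.Tensor, budget: float = 0.95):
    """Return the sparsest candidate whose retained mass meets the budget."""
    n = probs.shape[-1]
    candidates = [(w, banded(n, w)) for w in (1, 4, 16, n)]  # sparsest -> densest
    for w, mask in candidates:
        if attention_mass(probs, mask) >= budget:
            return w, mask
    return candidates[-1]                                    # dense fallback

# Usage on one head's calibration attention map:
probs = torch.softmax(torch.randn(64, 64), dim=-1)
w, mask = pick_pattern(probs)
```

Running such a search offline per head would yield a per-head sparsity configuration that a block-sparse kernel can then execute, which matches the framework's separation between pattern search and hardware-friendly execution.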