🤖 AI Summary
Existing sparse attention methods in video generation suffer from systematic bias—over-amplifying the weights of salient tokens while completely neglecting non-salient ones—leading to degraded attention fidelity and generation performance. This work is the first to identify and characterize this bias, proposing an implicit full-attention reference mechanism that jointly rectifies salient and non-salient token contributions via isolated-pooling reallocation and gain-aware correction, significantly improving the alignment between sparse and full attention maps. The method further incorporates error-aware reweighting, multimodal pooling, and Triton-optimized kernels to accelerate inference, achieving 3.33× and 2.08× speedups on HunyuanVideo and Wan 2.1, respectively, without compromising generation quality. Code is publicly available.
📝 Abstract
Diffusion Transformers dominate video generation, but the quadratic complexity of attention introduces substantial latency. Attention sparsity reduces computational cost by focusing on critical tokens while ignoring non-critical ones. However, existing methods suffer from severe performance degradation. In this paper, we revisit attention sparsity and reveal that existing methods induce systematic biases in attention allocation: (1) excessive focus on critical tokens amplifies their attention weights; (2) complete neglect of non-critical tokens causes the loss of their relevant attention weights. To address these issues, we propose Rectified SpaAttn, which rectifies attention allocation with an implicit full-attention reference, thereby enhancing the alignment between sparse and full attention maps. Specifically: (1) For critical tokens, we show that their bias is proportional to the sparse attention weights, with the ratio governed by the amplified weights. Accordingly, we propose Isolated-Pooling Attention Reallocation, which computes accurate rectification factors by reallocating multimodal pooled weights. (2) For non-critical tokens, recovering attention weights from the pooled query-key product yields attention gains but also introduces pooling errors. We therefore propose Gain-Aware Pooling Rectification, which ensures that the rectified gain consistently surpasses the induced error. Moreover, we implement a customized Rectified SpaAttn kernel in Triton, achieving speedups of up to 3.33× on HunyuanVideo and 2.08× on Wan 2.1 while maintaining high generation quality. We release Rectified SpaAttn as open-source at https://github.com/BienLuky/Rectified-SpaAttn.
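The amplification bias described above can be made concrete with a toy sketch: restricting softmax to the top-k keys rescales every kept weight by the reciprocal of the kept probability mass, so multiplying the sparse weights back by an estimate of that mass shrinks the bias. The sketch below is a minimal numpy illustration under assumed simplifications, not the paper's algorithm; in particular, the mean-pooled estimate of the dropped mass is a hypothetical stand-in for the paper's isolated-pooling reallocation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d, n, k = 8, 64, 16          # head dim, sequence length, number of critical keys

q = rng.normal(size=(d,))
K = rng.normal(size=(n, d))
scores = K @ q / np.sqrt(d)

full = softmax(scores)        # full-attention weights (the implicit reference)

keep = np.argsort(scores)[-k:]      # critical tokens: top-k by score
sparse = softmax(scores[keep])      # sparse attention renormalizes over kept keys

# Bias: every kept weight is amplified by 1 / (kept probability mass).
kept_mass = full[keep].sum()
assert np.allclose(sparse, full[keep] / kept_mass)

# Rectification (illustrative): scale the sparse weights by an *estimate* of the
# kept mass. The dropped mass is approximated from the mean-pooled non-critical
# scores — a hypothetical stand-in for the paper's isolated-pooling estimate.
drop = np.setdiff1d(np.arange(n), keep)
pooled_drop_mass = np.exp(scores[drop].mean()) * len(drop)
est_kept_mass = np.exp(scores[keep]).sum() / (np.exp(scores[keep]).sum() + pooled_drop_mass)
rectified = sparse * est_kept_mass

# The rectified weights land between the sparse and full weights, so the
# L1 gap to the full-attention reference shrinks.
err_sparse = np.abs(sparse - full[keep]).sum()
err_rect = np.abs(rectified - full[keep]).sum()
```

By Jensen's inequality the mean-pooled estimate understates the dropped mass, so `est_kept_mass` sits between the true kept mass and 1; the rectified weights therefore always lie between the full and sparse weights, guaranteeing `err_rect < err_sparse` in this toy setting.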