🤖 AI Summary
To address load imbalance in sparse attention for vision generative models under sequence parallelism—caused by inter-head sparsity variation and irregular dense-block distribution—this paper proposes Dual-Balanced Sequence Parallelism (db-SP). db-SP introduces a two-level dynamic partitioning scheme across the head and block dimensions, enabling runtime-adaptive adjustment of parallelism to accommodate evolving sparsity patterns across denoising steps and transformer layers. The authors introduce the *sparse imbalance ratio*—the first quantitative metric for characterizing sparse load imbalance—and design a sparsity-aware load balancing algorithm integrated into a unified parallel architecture. Experiments demonstrate that db-SP achieves a 1.25× end-to-end speedup and a 1.40× speedup in the attention module over state-of-the-art methods, significantly improving inference efficiency for diffusion Transformers.
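To make the *sparse imbalance ratio* concrete, the following is a minimal sketch of one plausible formulation: the maximum per-device workload divided by the mean per-device workload, where workload is the number of dense blocks in a block-sparse attention mask assigned to each device under head-wise partitioning. The function name, the mask representation, and the exact formula are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def sparse_imbalance_ratio(block_mask: np.ndarray, head_groups: list) -> float:
    """Illustrative sparse imbalance ratio (assumption; the paper's exact
    formula may differ).

    block_mask: boolean array of shape (num_heads, q_blocks, kv_blocks),
        True where a dense attention block must actually be computed.
    head_groups: list of head-index lists, one per device, describing a
        head-dimension (Ulysses-style) partition.
    Returns max per-device dense-block count over the mean; 1.0 means
    perfectly balanced.
    """
    per_head = block_mask.reshape(block_mask.shape[0], -1).sum(axis=1)
    per_device = np.array([per_head[list(g)].sum() for g in head_groups])
    return float(per_device.max() / per_device.mean())

# Example: head 0 is fully dense, heads 1-3 are highly sparse.
mask = np.zeros((4, 2, 2), dtype=bool)
mask[0] = True          # 4 dense blocks
mask[1:, 0, 0] = True   # 1 dense block each

naive = sparse_imbalance_ratio(mask, [[0], [1], [2], [3]])   # one head per device
balanced = sparse_imbalance_ratio(mask, [[0], [1, 2, 3]])    # cost-aware grouping
```

With the naive one-head-per-device split the ratio is 4 / 1.75 ≈ 2.29, while grouping the three sparse heads together brings it down to 4 / 3.5 ≈ 1.14, illustrating why head-level sparsity variation alone can halve effective parallel efficiency.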
📝 Abstract
Scaling Diffusion Transformer (DiT) inference via sequence parallelism is critical for reducing latency in visual generation, but it is severely hampered by workload imbalance when applied to models employing block-wise sparse attention. The imbalance stems from the inherent variation in sparsity across attention heads and the irregular distribution of dense blocks within the sparse mask, whether sequence parallelism is applied along the head dimension (as in Ulysses) or the block dimension (as in Ring Attention). In this paper, we formalize a sparse imbalance ratio to quantify the imbalance, and propose db-SP, a sparsity-aware sequence parallelism technique that tackles the challenge. db-SP employs a dual-level partitioning approach that achieves near-perfect workload balance at both the head and block levels with negligible overhead. Furthermore, to handle the evolving sparsity patterns across denoising steps and layers, db-SP dynamically determines the parallel degrees for the head and block dimensions at runtime. Experimental results demonstrate that db-SP delivers an end-to-end speedup of 1.25x and an attention-specific speedup of 1.40x over state-of-the-art sequence parallel methods on average. Code is available at https://github.com/thu-nics/db-SP.
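The sparsity-aware balancing described above can be sketched at the head level with a classic greedy longest-processing-time (LPT) assignment: sort heads by their dense-block cost and repeatedly give the next head to the least-loaded device. This is a simple stand-in under stated assumptions, not db-SP's actual algorithm, which additionally rebalances at the block level and picks parallel degrees at runtime.

```python
import heapq

def balance_heads(head_costs: list, num_devices: int) -> list:
    """Greedy LPT partition of attention heads across devices.

    head_costs: dense-block count (or any workload proxy) per head.
    Returns one list of head indices per device. This is an illustrative
    baseline for head-level balancing, not the paper's exact method.
    """
    # Min-heap of (current load, device id); ties broken by device id.
    heap = [(0, d) for d in range(num_devices)]
    heapq.heapify(heap)
    groups = [[] for _ in range(num_devices)]
    # Heaviest heads first, so late assignments only fill small gaps.
    for h in sorted(range(len(head_costs)), key=lambda i: -head_costs[i]):
        load, d = heapq.heappop(heap)
        groups[d].append(h)
        heapq.heappush(heap, (load + head_costs[h], d))
    return groups

# Example: one dense head and three sparse heads on two devices.
groups = balance_heads([4, 1, 1, 1], num_devices=2)
# → [[0], [1, 2, 3]]: per-device loads 4 and 3 instead of 4 and 1.
```

Because the real sparsity pattern changes across denoising steps and layers, such a partition would have to be recomputed (cheaply, on the mask metadata alone) whenever the block mask changes, which is why the paper emphasizes negligible-overhead dynamic repartitioning.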