db-SP: Accelerating Sparse Attention for Visual Generative Models with Dual-Balanced Sequence Parallelism

📅 2025-11-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address load imbalance in sparse attention for visual generative models under sequence parallelism, caused by inter-head sparsity variation and the irregular distribution of dense blocks, this paper proposes Dual-Balanced Sequence Parallelism (db-SP). db-SP introduces a two-level dynamic partitioning scheme across the head and block dimensions, enabling runtime-adaptive adjustment of parallel degrees to accommodate sparsity patterns that evolve across denoising steps and transformer layers. The authors introduce the *sparse imbalance ratio*, the first quantitative metric for characterizing sparse load imbalance, and design a sparsity-aware load-balancing algorithm integrated into a unified parallel architecture. Experiments demonstrate that db-SP achieves a 1.25x end-to-end speedup and a 1.40x speedup in the attention module over state-of-the-art methods, significantly improving inference efficiency for Diffusion Transformers.
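The summary does not give the exact formula for the sparse imbalance ratio. A plausible reading, sketched below under that assumption, is the ratio of the most-loaded device's workload to the mean workload when heads are split evenly across devices (Ulysses-style), with per-head workload counted as the number of dense blocks in that head's block-sparse mask. The function name and shapes are illustrative, not from the paper.

```python
import numpy as np

def sparse_imbalance_ratio(block_mask: np.ndarray, num_devices: int) -> float:
    """Hypothetical sketch: max per-device workload over mean workload,
    under a naive even split of heads across devices.

    block_mask: bool array of shape (heads, q_blocks, kv_blocks); True marks
    a dense block that must actually be computed.
    """
    # Workload of a head = number of dense blocks in its sparse mask.
    per_head = block_mask.reshape(block_mask.shape[0], -1).sum(axis=1)
    # Ulysses-style head parallelism: contiguous chunks of heads per device.
    chunks = np.array_split(per_head, num_devices)
    per_device = np.array([chunk.sum() for chunk in chunks])
    # 1.0 means perfect balance; larger values mean more idle time.
    return float(per_device.max() / per_device.mean())
```

A ratio of 1.0 indicates perfect balance; anything above it measures how long the slowest device stalls the rest.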

📝 Abstract
Scaling Diffusion Transformer (DiT) inference via sequence parallelism is critical for reducing latency in visual generation, but is severely hampered by workload imbalance when applied to models employing block-wise sparse attention. The imbalance stems from the inherent variation in sparsity across attention heads and the irregular distribution of dense blocks within the sparse mask, when sequence parallelism is applied along the head dimension (as in Ulysses) or the block dimension (as in Ring Attention). In this paper, we formalize a sparse imbalance ratio to quantify the imbalance, and propose db-SP, a sparsity-aware sequence parallelism technique that tackles the challenge. db-SP contains a dual-level partitioning approach that achieves near-perfect workload balance at both the head and block levels with negligible overhead. Furthermore, to handle the evolving sparsity patterns across denoising steps and layers, db-SP dynamically determines the parallel degrees for the head and block dimensions at runtime. Experimental results demonstrate that db-SP delivers an end-to-end speedup of 1.25x and an attention-specific speedup of 1.40x over state-of-the-art sequence parallel methods on average. Code is available at https://github.com/thu-nics/db-SP.
Problem

Research questions and friction points this paper is trying to address.

Addresses workload imbalance in sparse attention for visual generative models
Improves sequence parallelism efficiency for Diffusion Transformer inference
Dynamically balances computation across attention heads and blocks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-level partitioning balances head and block workloads
Dynamic parallel degrees adapt to evolving sparsity patterns
Sparsity-aware sequence parallelism accelerates sparse attention
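The page does not detail the paper's partitioning algorithm. As a hedged illustration of the head-level half of the idea, the sketch below uses the classic longest-processing-time (LPT) greedy heuristic to assign heads with unequal sparsity to devices; the paper's actual dual-level scheme additionally splits along the block dimension and adapts parallel degrees at runtime. All names here are hypothetical.

```python
import heapq

def balance_heads(dense_blocks_per_head, num_devices):
    """Hypothetical sketch of sparsity-aware head partitioning: assign each
    attention head to the currently least-loaded device, heaviest heads
    first (LPT greedy). Returns {head index: device id}.
    """
    # Min-heap of (current load, device id) so the lightest device pops first.
    heap = [(0, d) for d in range(num_devices)]
    heapq.heapify(heap)
    assignment = {}
    # Visit heads in decreasing order of dense-block count.
    for head in sorted(range(len(dense_blocks_per_head)),
                       key=lambda h: -dense_blocks_per_head[h]):
        load, dev = heapq.heappop(heap)
        assignment[head] = dev
        heapq.heappush(heap, (load + dense_blocks_per_head[head], dev))
    return assignment
```

For per-head workloads [7, 5, 4, 3, 1] on two devices, this greedy assignment yields two devices with equal total load, whereas a naive contiguous split would not.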
Siqi Chen
Tsinghua University, Beijing, China
Ke Hong
Tsinghua University
efficient computing · GPU acceleration · sparse computing · ML system
Tianchen Zhao
Tsinghua University, Beijing, China
Ruiqi Xie
Tsinghua University, Beijing, China
Zhenhua Zhu
Tsinghua University, Beijing, China
Xudong Zhang
Tsinghua University, Beijing, China
Yu Wang
Tsinghua University, Beijing, China