BurstEngine: an Efficient Distributed Framework for Training Transformers on Extremely Long Sequences of over 1M Tokens

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address low FLOPs utilization, high communication overhead, and poor scalability when training large language models (LLMs) on ultra-long sequences (>1M tokens), this paper proposes BurstEngine, an efficient distributed training framework. The framework introduces three key innovations: (1) BurstAttention, a topology-aware ring-based communication scheme with fine-grained computation-communication overlap for optimized attention; (2) sequence-level selective activation checkpointing, which drastically reduces GPU memory footprint; and (3) fusion of the language modeling head with the loss function to improve load balancing and computational efficiency. Evaluated on million-token sequences, BurstEngine achieves a 1.2× speedup over state-of-the-art approaches, significantly lowers memory consumption, and substantially improves FLOPs utilization, demonstrating superior scalability and resource efficiency for ultra-long-context LLM training.
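As a rough illustration of the ring-based attention idea, the sketch below simulates the blockwise pass in a single process: queries, keys, and values are split into blocks as if sharded across ranks, key/value blocks rotate around a virtual ring, and partial outputs are merged with an online (log-sum-exp) softmax. All names are illustrative and the communication is only simulated; the actual BurstAttention additionally overlaps asynchronous ring transfers with computation.

```python
# Minimal single-process sketch of a ring-style blockwise attention pass with
# online-softmax accumulation, in the spirit of BurstAttention/RingAttention.
# Names and structure are illustrative only, not the BurstEngine API.
import torch

def ring_attention_sim(q, k, v, world_size):
    """Simulate `world_size` ranks: each holds one query block and rotates
    key/value blocks around a ring, rescaling partial outputs with a running
    log-sum-exp so the final result matches full attention."""
    d = q.shape[-1]
    q_blocks = q.chunk(world_size, dim=0)            # each rank's local queries
    k_blocks = list(k.chunk(world_size, dim=0))
    v_blocks = list(v.chunk(world_size, dim=0))

    outputs = []
    for rank in range(world_size):
        q_i = q_blocks[rank]
        acc = torch.zeros_like(q_i)                              # partial output
        lse = torch.full((q_i.shape[0], 1), float("-inf"))       # running log-sum-exp
        for step in range(world_size):
            # In a real ring, this block would arrive via asynchronous P2P while
            # the previous block is still being processed (comm/compute overlap).
            j = (rank + step) % world_size
            scores = q_i @ k_blocks[j].T / d ** 0.5
            block_lse = torch.logsumexp(scores, dim=-1, keepdim=True)
            new_lse = torch.logaddexp(lse, block_lse)
            # Rescale the previous accumulator, then add this block's contribution.
            acc = acc * torch.exp(lse - new_lse) + \
                  torch.exp(scores - new_lse) @ v_blocks[j]
            lse = new_lse
        outputs.append(acc)
    return torch.cat(outputs, dim=0)

q = torch.randn(16, 8); k = torch.randn(16, 8); v = torch.randn(16, 8)
ref = torch.softmax(q @ k.T / 8 ** 0.5, dim=-1) @ v
assert torch.allclose(ring_attention_sim(q, k, v, 4), ref, atol=1e-5)
```

The final assertion checks that the blockwise result matches full attention; this equivalence is what allows key/value blocks to be streamed around the ring without ever gathering the whole sequence on one device.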

📝 Abstract
Existing methods for training LLMs on long-sequence data, such as Tensor Parallelism and Context Parallelism, exhibit low Model FLOPs Utilization as sequence lengths and number of GPUs increase, especially when sequence lengths exceed 1M tokens. To address these challenges, we propose BurstEngine, an efficient framework designed to train LLMs on long-sequence data. BurstEngine introduces BurstAttention, an optimized distributed attention with lower communication cost than RingAttention. BurstAttention leverages topology-aware ring communication to fully utilize network bandwidth and incorporates fine-grained communication-computation overlap. Furthermore, BurstEngine introduces sequence-level selective checkpointing and fuses the language modeling head with the loss function to reduce memory cost. Additionally, BurstEngine introduces workload balance optimization for various types of attention masking. By integrating these optimizations, BurstEngine achieves a $1.2\times$ speedup with much lower memory overhead than the state-of-the-art baselines when training LLMs on extremely long sequences of over 1M tokens. We have made our code publicly available on GitHub: https://github.com/thunlp/BurstEngine.
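The fusion of the language modeling head with the loss can be pictured as projecting hidden states to vocabulary logits one sequence chunk at a time and reducing each chunk to its loss immediately, so the full [sequence, vocabulary] logits tensor is never materialized. The PyTorch sketch below illustrates that idea under stated assumptions; checkpointing stands in for a fused kernel, and the function and variable names are placeholders rather than anything in the BurstEngine code.

```python
# Illustrative sketch of fusing the LM head with the cross-entropy loss by
# chunking over the sequence. Placeholder names, not the BurstEngine API.
import torch
import torch.nn.functional as F
from torch.utils.checkpoint import checkpoint

def _chunk_loss(hidden_chunk, lm_head_weight, label_chunk):
    # Project one sequence chunk to vocabulary logits and reduce to its summed
    # cross-entropy immediately; only a [chunk, vocab] logits buffer exists.
    logits = hidden_chunk @ lm_head_weight.T
    return F.cross_entropy(logits, label_chunk, reduction="sum")

def fused_lm_head_loss(hidden, lm_head_weight, labels, chunk_size=1024):
    """hidden: [seq_len, d_model], lm_head_weight: [vocab, d_model],
    labels: [seq_len]. Returns the mean cross-entropy over the sequence."""
    seq_len = hidden.shape[0]
    total = hidden.new_zeros(())
    for start in range(0, seq_len, chunk_size):
        end = min(start + chunk_size, seq_len)
        # Checkpointing recomputes the chunk's logits during backward, standing
        # in for a fused kernel, so autograd never retains every chunk at once.
        total = total + checkpoint(_chunk_loss, hidden[start:end],
                                   lm_head_weight, labels[start:end],
                                   use_reentrant=False)
    return total / seq_len

hidden = torch.randn(4096, 256, requires_grad=True)
weight = torch.randn(32000, 256, requires_grad=True)
labels = torch.randint(0, 32000, (4096,))
fused_lm_head_loss(hidden, weight, labels).backward()
```

The scale explains why this matters: with a 32K vocabulary and a million-token sequence, the full logits tensor alone would occupy well over 100 GB in fp32, so collapsing it chunk by chunk is what keeps the loss computation feasible.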
Problem

Research questions and friction points this paper is trying to address.

Existing methods show low efficiency with extremely long sequences over 1M tokens
Current distributed attention approaches have high communication costs
Training LLMs on long sequences faces memory and workload imbalance challenges
Innovation

Methods, ideas, or system contributions that make the work stand out.

BurstAttention uses topology-aware ring communication for efficiency
Sequence-level selective checkpointing reduces memory overhead significantly
Workload balance optimization handles various attention masking types (see the sketch after this list)
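On the last point: with a causal mask, a query at position q attends to only q + 1 keys, so a contiguous split of the sequence gives the first rank far less attention work than the last. A common remedy, sketched below as an assumption about how such balancing can be done rather than as the BurstEngine implementation, pairs an early chunk with a late chunk on every rank (a zigzag split); the code merely counts (query, key) pairs to show the effect.

```python
# Illustrative sketch of balancing causal-attention work across ranks by pairing
# an early sequence chunk with a late one (a "zigzag" split). Not BurstEngine code.

def zigzag_assignment(world_size):
    """Split the sequence into 2*world_size chunks; rank r owns chunk r and
    chunk (2*world_size - 1 - r)."""
    return [(r, 2 * world_size - 1 - r) for r in range(world_size)]

def causal_work(chunk_ids, num_chunks, seq_len):
    """Count the causal (query, key) pairs handled by a rank that owns the
    query chunks in `chunk_ids`."""
    chunk_len = seq_len // num_chunks
    work = 0
    for c in chunk_ids:
        for q in range(c * chunk_len, (c + 1) * chunk_len):
            work += q + 1  # a causal query at position q attends to q + 1 keys
    return work

world_size, seq_len = 4, 1024
num_chunks = 2 * world_size
naive = [causal_work((2 * r, 2 * r + 1), num_chunks, seq_len)
         for r in range(world_size)]
zigzag = [causal_work(pair, num_chunks, seq_len)
          for pair in zigzag_assignment(world_size)]
print("naive  :", naive)   # heavily skewed toward the last rank
print("zigzag :", zigzag)  # near-identical work on every rank
```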
Authors

Ao Sun
Beijing University of Posts and Telecommunications, Beijing, China

Weilin Zhao
Tsinghua University
Natural Language Processing, Artificial Intelligence, Efficient LLM

Xu Han
Department of Computer Science and Technology, Tsinghua University, Beijing, China

Cheng Yang
Beijing University of Posts and Telecommunications, Beijing, China

Zhiyuan Liu
Department of Computer Science and Technology, Tsinghua University, Beijing, China

Chuan Shi
Beijing University of Posts and Telecommunications
data mining, machine learning, social network analysis

Maosong Sun
Professor of Computer Science and Technology, Tsinghua University
Natural Language Processing, Artificial Intelligence, Social Computing