🤖 AI Summary
To address low FLOPs utilization, high communication overhead, and poor scalability when training large language models (LLMs) on ultra-long sequences (>1M tokens), this paper proposes an efficient distributed training framework. The framework introduces three key innovations: (1) BurstAttention, a topology-aware ring-based communication scheme with fine-grained computation-communication overlap for optimized attention; (2) sequence-level selective activation checkpointing, which drastically reduces GPU memory footprint; and (3) fusion of the language modeling head with the loss function to improve load balancing and computational efficiency. Evaluated on million-token sequences, the framework achieves a 1.2× speedup over state-of-the-art approaches, significantly lowers memory consumption, and substantially improves FLOPs utilization, demonstrating superior scalability and resource efficiency for ultra-long-context LLM training.
📝 Abstract
Existing methods for training LLMs on long-sequence data, such as Tensor Parallelism and Context Parallelism, exhibit low Model FLOPs Utilization as sequence lengths and the number of GPUs increase, especially when sequence lengths exceed 1M tokens. To address these challenges, we propose BurstEngine, an efficient framework designed to train LLMs on long-sequence data. BurstEngine introduces BurstAttention, an optimized distributed attention with lower communication cost than RingAttention. BurstAttention leverages topology-aware ring communication to fully utilize network bandwidth and incorporates fine-grained communication-computation overlap. Furthermore, BurstEngine introduces sequence-level selective checkpointing and fuses the language modeling head with the loss function to reduce memory cost. Additionally, BurstEngine introduces workload balance optimization for various types of attention masking. By integrating these optimizations, BurstEngine achieves a $1.2\times$ speedup with much lower memory overhead than the state-of-the-art baselines when training LLMs on extremely long sequences of over 1M tokens. We have made our code publicly available on GitHub: https://github.com/thunlp/BurstEngine.
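The core idea behind ring-based distributed attention (which BurstAttention builds on) is that each rank keeps its query chunk fixed while key/value chunks circulate around the ring, and partial results are merged with an online log-sum-exp softmax so the final output matches full attention. The following is a minimal single-process NumPy sketch of that merge, not the paper's implementation; it omits the actual ring communication, topology awareness, overlap, and masking optimizations, and all names here are illustrative:

```python
import numpy as np

def ring_attention(q_chunks, k_chunks, v_chunks):
    """Single-process simulation of ring-style distributed attention.

    Each simulated rank i owns one query chunk and one key/value chunk.
    K/V chunks logically circulate around the ring; every rank folds each
    arriving chunk into its output via an online (log-sum-exp) softmax
    merge, so the result equals full attention even though no rank ever
    holds the complete K/V sequence at once.
    """
    n = len(q_chunks)
    d = q_chunks[0].shape[-1]
    outs = []
    for i in range(n):
        q = q_chunks[i]
        acc = np.zeros_like(q)              # running rescaled partial output
        lse = np.full(q.shape[0], -np.inf)  # running log-sum-exp per query row
        for step in range(n):
            j = (i - step) % n              # K/V chunk arriving at rank i this step
            s = q @ k_chunks[j].T / np.sqrt(d)   # block of attention scores
            m = s.max(axis=-1)
            p = np.exp(s - m[:, None])
            blk_lse = m + np.log(p.sum(axis=-1))
            new_lse = np.logaddexp(lse, blk_lse)
            # rescale the old accumulator, then add this block's contribution
            acc = (acc * np.exp(lse - new_lse)[:, None]
                   + (p @ v_chunks[j]) * np.exp(m - new_lse)[:, None])
            lse = new_lse
        outs.append(acc)
    return np.concatenate(outs, axis=0)

# Sanity check against ordinary full attention on random data.
rng = np.random.default_rng(0)
n_ranks, chunk, d = 4, 3, 8
q = rng.standard_normal((n_ranks * chunk, d))
k = rng.standard_normal((n_ranks * chunk, d))
v = rng.standard_normal((n_ranks * chunk, d))
out = ring_attention(np.split(q, n_ranks), np.split(k, n_ranks), np.split(v, n_ranks))
s = q @ k.T / np.sqrt(d)
w = np.exp(s - s.max(axis=-1, keepdims=True))
ref = (w / w.sum(axis=-1, keepdims=True)) @ v
```

Because each rank only ever touches one K/V chunk at a time, peak activation memory per rank scales with the chunk size rather than the full sequence length, which is what makes the >1M-token regime feasible.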