SlimPipe: Memory-Thrifty and Efficient Pipeline Parallelism for Long-Context LLM Training

📅 2025-04-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address two critical bottlenecks in training long-context large language models, namely high activation-memory peaks and large pipeline bubbles induced by pipeline parallelism, this paper proposes SlimPipe, a fine-grained pipeline-parallelism approach. SlimPipe couples uniform sequence slicing with a one-forward-one-backward (1F1B) schedule, so that the activations of only a single sliced microbatch are retained at a time, substantially reducing peak GPU memory consumption. It also introduces a causality-aware dynamic workload-redistribution mechanism to correct the computational imbalance across slices caused by causal attention. On Llama 70B, SlimPipe incurs near-zero memory overhead with minimal pipeline bubbles, delivering up to a 1.57× improvement in Model FLOPs Utilization (MFU) at a 512K context length. At extreme scale (a 2,048K context on 256 NVIDIA Hopper GPUs), it sustains over 45% MFU, whereas state-of-the-art methods either suffer severe performance collapse or fail outright due to GPU memory exhaustion.

📝 Abstract
Pipeline Parallelism (PP) serves as a crucial technique for training Large Language Models (LLMs), owing to its capability to alleviate memory pressure from model states with relatively low communication overhead. However, in long-context scenarios, existing pipeline parallelism methods fail to address the substantial activation memory pressure, primarily due to the peak memory consumption resulting from the accumulation of activations across multiple microbatches. Moreover, these approaches inevitably introduce considerable pipeline bubbles, further hindering efficiency. To tackle these challenges, we propose SlimPipe, a novel approach to fine-grained pipeline parallelism that employs uniform sequence slicing coupled with a one-forward-one-backward (1F1B) schedule. It reduces the accumulated activations from several microbatches to just one, which is split into several slices. Although the slices are evenly partitioned, the computation cost is not equal across slices due to causal attention. We develop a sophisticated workload redistribution technique to address this load imbalance. SlimPipe achieves (1) near-zero memory overhead and (2) minimal pipeline bubbles simultaneously. The effectiveness of SlimPipe has been proven by thorough testing with diverse model architectures, context window sizes, and SlimPipe-specific configurations. For example, on the Llama 70B model, compared to state-of-the-art methods, SlimPipe significantly boosts the Model FLOPs Utilization (MFU) to up to $1.57\times$ for a context length of 512K. More notably, for a context length of 2048K, it maintains over 45% utilization on 256 NVIDIA Hopper 80GB GPUs, while other approaches either suffer significant performance drops or fail entirely due to memory constraints.
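The slice imbalance the abstract mentions can be illustrated with a back-of-the-envelope cost model (a hedged sketch, not the paper's implementation): with slice length c, the tokens in slice i attend to i·c earlier tokens in full plus roughly c²/2 within the slice under the causal mask, so the relative attention cost of slice i grows linearly as 2i + 1. A simple rebalancing idea, shown here purely for illustration, is to pair the i-th slice with the (P−1−i)-th, which equalizes per-pair cost:

```python
# Illustrative cost model for causal attention over P uniform sequence
# slices. This is a sketch of the imbalance SlimPipe addresses, not the
# paper's actual redistribution algorithm.

def causal_attention_cost(num_slices: int) -> list[int]:
    """Relative attention FLOPs per slice under a causal mask.

    Slice i (slice length c) attends to i*c earlier tokens plus ~c^2/2
    within itself, so cost_i is proportional to 2*i + 1.
    """
    return [2 * i + 1 for i in range(num_slices)]

def paired_costs(num_slices: int) -> list[int]:
    """Cost after pairing slice i with slice (P-1-i): every pair sums
    to 2*P, i.e. the load becomes uniform across pairs (illustrative)."""
    c = causal_attention_cost(num_slices)
    return [c[i] + c[num_slices - 1 - i] for i in range(num_slices // 2)]

costs = causal_attention_cost(4)
print(costs)                    # [1, 3, 5, 7]: last slice costs 7x the first
print(max(costs) / min(costs))  # 7.0
print(paired_costs(4))          # [8, 8]: balanced per pair
```

The 2P−1 worst-case ratio shows why uniform slicing alone would leave later pipeline slices compute-bound, motivating a causality-aware redistribution step.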
Problem

Research questions and friction points this paper is trying to address.

Reduces activation memory pressure in long-context LLM training
Minimizes pipeline bubbles to enhance training efficiency
Balances workload across slices despite causal attention costs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uniform sequence slicing for activation reduction
One-forward-one-backward schedule optimization
Workload redistribution for load balance
Zhouyang Li
Kuaishou Technology, Beijing, China
Yuliang Liu
Kuaishou Technology, Beijing, China
Wei Zhang
Kuaishou Technology, Beijing, China
Tailing Yuan
Department of Computer Science & Technology, Tsinghua University, Beijing, China
Bin Chen
Kuaishou Technology, Beijing, China
Chengru Song
Unknown affiliation
Di Zhang
Kuaishou Technology, Beijing, China