Efficiently Training 7B LLM with 1 Million Sequence Length on 8 GPUs

📅 2024-07-16
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Training 7B-parameter language models on ultra-long contexts (1M tokens) across 8 A800 GPUs suffers from severe activation memory explosion and GPU memory fragmentation, resulting in low Model FLOPS Utilization (MFU). Method: We propose a fine-grained activation memory management framework featuring (i) token-wise activation offloading to CPU memory after each layer's forward pass and reloading during the backward pass, (ii) a bi-level Mixed Integer Programming (MIP) optimizer for memory reuse across transformer layers under CPU memory constraints, and (iii) tight integration with FlashAttention, activation recomputation, and distributed training primitives. Contribution/Results: Our approach achieves 52.30% MFU (1.97x and 1.80x that of Megatron-LM and DeepSpeed, respectively), enabling stable, efficient training of million-token sequences on 8 A800 GPUs for the first time. The framework preserves computational continuity and respects system memory budgets without compromising throughput or scalability.
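The token-wise swap-vs-recompute decision in (i) and (ii) can be illustrated with a toy planner. This is emphatically not the paper's bi-level MIP: it is a greedy stand-in that offloads the largest activation chunks to CPU until a budget is exhausted and marks the rest for recomputation. The function name, chunk sizes, and budget are all invented for the example.

```python
# Hedged illustration of a token-wise swap-vs-recompute decision.
# The paper solves this with a bi-level MIP; this greedy sketch only
# conveys the trade-off: every chunk offloaded to CPU avoids recompute
# FLOPs, but total offloaded size must fit the CPU memory budget.

def plan_tokens(chunk_sizes, cpu_budget):
    """Return (swap, recompute): chunk indices offloaded vs recomputed."""
    swap, recompute, used = [], [], 0
    # Prefer swapping big chunks first: more GPU memory freed per decision.
    for idx in sorted(range(len(chunk_sizes)),
                      key=lambda i: chunk_sizes[i], reverse=True):
        if used + chunk_sizes[idx] <= cpu_budget:
            swap.append(idx)          # offload this chunk to CPU memory
            used += chunk_sizes[idx]
        else:
            recompute.append(idx)     # no room: recompute it in backward
    return sorted(swap), sorted(recompute)

swap, recompute = plan_tokens([4, 1, 3, 2], cpu_budget=7)
# chunks 0 and 2 (sizes 4 + 3 = 7) are offloaded; chunks 1 and 3 recomputed
```

A real planner must also account for PCIe transfer time overlapping with compute, which is what makes the paper's MIP formulation non-trivial.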

📝 Abstract
Nowadays, Large Language Models (LLMs) have been trained with extended context lengths to foster more creative applications. However, long context training poses great challenges given the constraint of GPU memory: it not only leads to substantial activation memory consumption during training, but also incurs considerable memory fragmentation. To facilitate long context training, existing frameworks have adopted strategies such as recomputation and various forms of parallelism. Nevertheless, these techniques rely on redundant computation or extensive communication, resulting in low Model FLOPS Utilization (MFU). In this paper, we propose MEMO, a novel LLM training framework designed for fine-grained activation memory management. Given the quadratic scaling of computation and linear scaling of memory with sequence length when using FlashAttention, we offload memory-consuming activations to CPU memory after each layer's forward pass and fetch them back during the backward pass. To maximize the swapping of activations without hindering computation, and to avoid exhausting limited CPU memory, we implement a token-wise activation recomputation and swapping mechanism. Furthermore, we tackle the memory fragmentation issue by employing a bi-level Mixed Integer Programming (MIP) approach, optimizing memory reuse across transformer layers. Empirical results demonstrate that MEMO achieves an average of 1.97x and 1.80x MFU compared to Megatron-LM and DeepSpeed, respectively. This improvement is attributed to MEMO's ability to minimize memory fragmentation, reduce recomputation and intensive communication, and circumvent the delays associated with reorganizing memory under fragmentation. By leveraging fine-grained activation memory management, MEMO facilitates efficient training of a 7B LLM with a 1-million-token sequence length on just 8 A800 GPUs, achieving an MFU of 52.30%.
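The offload-after-forward, reload-before-backward lifecycle the abstract describes can be sketched as a small bookkeeping simulation. This is a hedged illustration, not MEMO's implementation: plain dicts stand in for GPU and CPU memories, sizes are arbitrary units, and the class name and budgets are invented.

```python
# Illustrative sketch (not the paper's code): after each layer's forward
# pass, its activation is moved to CPU memory to free GPU headroom; before
# that layer's backward pass, the activation is reloaded. MEMO overlaps
# these copies with compute; here they are plain bookkeeping moves.

class ActivationSwapper:
    def __init__(self, cpu_budget):
        self.cpu_budget = cpu_budget
        self.gpu = {}   # layer -> activation size resident on GPU
        self.cpu = {}   # layer -> activation size offloaded to CPU

    def forward(self, layer, act_size):
        # Offload this layer's activation if the CPU budget allows;
        # otherwise it must stay on the GPU (or be recomputed later).
        if sum(self.cpu.values()) + act_size <= self.cpu_budget:
            self.cpu[layer] = act_size
        else:
            self.gpu[layer] = act_size

    def backward(self, layer):
        # Ensure the activation is back on the GPU, then consume it.
        if layer in self.cpu:
            self.gpu[layer] = self.cpu.pop(layer)
        return self.gpu.pop(layer)

swapper = ActivationSwapper(cpu_budget=8)
for layer in range(4):
    swapper.forward(layer, act_size=3)   # layers 0-1 offload; 2-3 stay on GPU
for layer in reversed(range(4)):
    swapper.backward(layer)              # backward walks layers in reverse
```

In a real system the `forward` offload would be an asynchronous host copy from pinned memory, issued so the transfer hides behind the next layer's compute.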
Problem

Research questions and friction points this paper is trying to address.

Multi-GPU Training
Large Language Models
Memory Management
Innovation

Methods, ideas, or system contributions that make the work stand out.

Memory Optimization
GPU Training Efficiency
Large Language Models
Pinxue Zhao
School of Computer Science & Key Lab of High Confidence Software Technologies (MOE), Peking University, China
Hailin Zhang
School of Computer Science & Key Lab of High Confidence Software Technologies (MOE), Peking University, China
Fangcheng Fu
Shanghai Jiao Tong University
machine learning, deep learning, MLSys, distributed computation
Xiaonan Nie
ByteDance Seed, Peking University
MLSys, LLM, DiT, Unified Model
Qibin Liu
Tencent Inc., China
Fang Yang
Tencent Inc., China
Yuanbo Peng
Tencent Inc., China
Dian Jiao
Tencent Inc., China
Shuaipeng Li
Tencent
Jinbao Xue
Tencent Inc., China
Yangyu Tao
Tencent Inc., China
Bin Cui
School of Computer Science & Key Lab of High Confidence Software Technologies (MOE), Peking University, China; Institute of Computational Social Science, Peking University (Qingdao), China