TorchTitan: One-stop PyTorch native solution for production ready LLM pre-training

📅 2024-10-09
🏛️ International Conference on Learning Representations
📈 Citations: 14
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM pretraining frameworks suffer from fragmentation, poor interoperability, and high maintenance overhead, which hinders systematic evaluation and production deployment of training techniques. This paper introduces TorchTitan, an open-source, PyTorch-native distributed training system for models ranging from 8B to 405B parameters. It composes data, tensor, and pipeline parallelism in a modular 3D design and incorporates hardware-software co-designed features such as Float8 training and SymmetricMemory. The system further provides elastic scaling, distributed checkpointing and logging, and a flexible test bed for curating and comparing training recipes. Evaluated on the Llama 3.1 family on NVIDIA H100 GPUs, it achieves a 65.08% speedup with 1D parallelism on 128 GPUs (8B model), an additional 12.59% with 2D parallelism on 256 GPUs (70B), and a further 30% with 3D parallelism on 512 GPUs (405B), significantly outperforming optimized baselines. The framework delivers high performance, strong scalability, and production readiness.
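To make the Float8 idea concrete, here is a minimal pure-Python sketch of per-tensor scaled quantization, the core math behind Float8 training. This is not TorchTitan's or torchao's implementation; the constant, rounding scheme, and function names are illustrative assumptions.

```python
# Illustrative sketch of per-tensor Float8 (e4m3) scaled quantization.
# NOT TorchTitan's implementation -- just the scale/cast/rescale idea:
# map the tensor's max magnitude onto the fp8 range, then recover values
# by dividing the scale back out.

E4M3_MAX = 448.0  # largest finite value representable in float8 e4m3

def quantize_fp8(values):
    """Scale values so the max magnitude maps to the fp8 range, then
    round to a coarse grid to mimic low-precision storage."""
    amax = max(abs(v) for v in values)
    scale = E4M3_MAX / amax if amax > 0 else 1.0
    # Real fp8 rounds the mantissa; rounding to 2 decimals here is a
    # purely illustrative stand-in for that precision loss.
    return [round(v * scale, 2) for v in values], scale

def dequantize_fp8(quant, scale):
    """Undo the scaling; precision lost in rounding stays lost."""
    return [q / scale for q in quant]

weights = [0.001, -3.2, 0.75, 120.0]
q, s = quantize_fp8(weights)
recovered = dequantize_fp8(q, s)
# Large values round-trip almost exactly; tiny ones lose precision.
```

The key property is that the dynamic range of each tensor is captured in the per-tensor scale, so matrix multiplies can run in the narrow 8-bit format while gradients and activations stay numerically usable.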

📝 Abstract
The development of large language models (LLMs) has been instrumental in advancing state-of-the-art natural language processing applications. Training LLMs with billions of parameters and trillions of tokens requires sophisticated distributed systems that enable composing and comparing several state-of-the-art techniques in order to efficiently scale across thousands of accelerators. However, existing solutions are complex, scattered across multiple libraries/repositories, lack interoperability, and are cumbersome to maintain. Thus, curating and empirically comparing training recipes requires non-trivial engineering effort. This paper introduces TorchTitan, an open-source, PyTorch-native distributed training system that unifies state-of-the-art techniques, streamlining integration and reducing overhead. TorchTitan enables 3D parallelism in a modular manner with elastic scaling, providing comprehensive logging, checkpointing, and debugging tools for production-ready training. It also incorporates hardware-software co-designed solutions, leveraging features like Float8 training and SymmetricMemory. As a flexible test bed, TorchTitan facilitates custom recipe curation and comparison, allowing us to develop optimized training recipes for Llama 3.1 and provide guidance on selecting techniques for maximum efficiency based on our experiences. We thoroughly assess TorchTitan on the Llama 3.1 family of LLMs, spanning 8 billion to 405 billion parameters, and showcase its exceptional performance, modular composability, and elastic scalability. By stacking training optimizations, we demonstrate accelerations of 65.08% with 1D parallelism at the 128-GPU scale (Llama 3.1 8B), an additional 12.59% with 2D parallelism at the 256-GPU scale (Llama 3.1 70B), and an additional 30% with 3D parallelism at the 512-GPU scale (Llama 3.1 405B) on NVIDIA H100 GPUs over optimized baselines.
Problem

Research questions and friction points this paper is trying to address.

Simplifying complex distributed LLM training systems
Unifying scattered techniques into a PyTorch-native solution
Enabling efficient scaling and optimization for large models
Innovation

Methods, ideas, or system contributions that make the work stand out.

PyTorch-native unified distributed training system
Modular 3D parallelism with elastic scaling
Hardware-software co-designed Float8 training
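The modular 3D parallelism above can be pictured as factoring the GPU fleet into a three-axis device mesh. The sketch below shows that factorization in plain Python; the mesh layout, degree names, and example degrees are illustrative assumptions, not TorchTitan's actual API.

```python
# Minimal sketch of how 3D parallelism factors a GPU fleet into a
# (pipeline, data, tensor) grid of ranks. Illustrative only -- not
# TorchTitan's DeviceMesh API.

def build_mesh(world_size, dp, tp, pp):
    """Arrange GPU ranks into a pp x dp x tp grid; the product of the
    three parallelism degrees must equal the total GPU count."""
    if dp * tp * pp != world_size:
        raise ValueError("dp * tp * pp must equal world_size")
    ranks = iter(range(world_size))
    return [[[next(ranks) for _ in range(tp)] for _ in range(dp)]
            for _ in range(pp)]

# A hypothetical split of the 512-GPU Llama 3.1 405B run: pp=8, dp=8, tp=8.
mesh = build_mesh(512, dp=8, tp=8, pp=8)
# mesh[p][d] is the tensor-parallel group of pipeline stage p, data shard d.
```

Each axis then hosts one technique: tensor parallelism inside a node (fast links), data parallelism across replicas, and pipeline parallelism across stage groups, which is the composition the paper evaluates at the 512-GPU scale.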
Authors

Wanchao Liang
Meta
Tianyu Liu
Meta
Less Wright
Meta
Will Constable
Meta
Andrew Gu
Meta
Chien-Chin Huang
Meta
Iris Zhang
Meta
Wei Feng
Meta
Howard Huang
Meta
Junjie Wang
Meta
S. Purandare
Harvard University
Gokul Nadathur
Meta
Stratos Idreos
Harvard University