🤖 AI Summary
Existing LLM pretraining frameworks suffer from fragmentation, poor interoperability, and high maintenance overhead, severely hindering systematic evaluation and production deployment of training methodologies. This paper introduces TorchTitan, an open-source, PyTorch-native distributed training system evaluated on models ranging from 8B to 405B parameters. It provides modular 3D parallelism—composing data, tensor, and pipeline parallelism—and incorporates hardware-software co-designed features such as Float8 training and SymmetricMemory. The system further offers elastic scaling, unified checkpointing, and a reproducible platform for curating and comparing training recipes. Evaluated on the Llama 3.1 series on NVIDIA H100 GPUs, it achieves a 65.08% speedup with 1D parallelism on 128 GPUs (8B model), an additional 12.59% with 2D parallelism on 256 GPUs (70B), and a further 30% with 3D parallelism on 512 GPUs (405B) over optimized baselines. The framework delivers high performance, strong scalability, and production readiness.
📝 Abstract
The development of large language models (LLMs) has been instrumental in advancing state-of-the-art natural language processing applications. Training LLMs with billions of parameters and trillions of tokens requires sophisticated distributed systems that enable composing and comparing several state-of-the-art techniques in order to efficiently scale across thousands of accelerators. However, existing solutions are complex, scattered across multiple libraries/repositories, lack interoperability, and are cumbersome to maintain. Thus, curating and empirically comparing training recipes require non-trivial engineering effort. This paper introduces TorchTitan, an open-source, PyTorch-native distributed training system that unifies state-of-the-art techniques, streamlining integration and reducing overhead. TorchTitan enables 3D parallelism in a modular manner with elastic scaling, providing comprehensive logging, checkpointing, and debugging tools for production-ready training. It also incorporates hardware-software co-designed solutions, leveraging features like Float8 training and SymmetricMemory. As a flexible test bed, TorchTitan facilitates custom recipe curation and comparison, allowing us to develop optimized training recipes for Llama 3.1 and provide guidance on selecting techniques for maximum efficiency based on our experiences. We thoroughly assess TorchTitan on the Llama 3.1 family of LLMs, spanning 8 billion to 405 billion parameters, and showcase its exceptional performance, modular composability, and elastic scalability. By stacking training optimizations, we demonstrate accelerations of 65.08% with 1D parallelism at the 128-GPU scale (Llama 3.1 8B), an additional 12.59% with 2D parallelism at the 256-GPU scale (Llama 3.1 70B), and an additional 30% with 3D parallelism at the 512-GPU scale (Llama 3.1 405B) on NVIDIA H100 GPUs over optimized baselines.