🤖 AI Summary
We systematically investigate efficiency bottlenecks in large-scale LLM training across multi-GPU clusters (NVIDIA H100/H200, AMD MI250), focusing on the coupled effects of hardware utilization, power consumption, thermal throttling, and communication overhead. We conduct a multidimensional performance analysis of dense and sparse models, jointly evaluating tensor, pipeline, data, and expert parallelism, augmented with activation recomputation and compute-communication overlap. Key findings include: (i) scaling alone does not guarantee superior performance; smaller high-memory clusters outperform larger configurations in specific scenarios; (ii) tensor + pipeline parallelism often underutilizes interconnect bandwidth; and (iii) excessively large microbatches trigger power spikes and thermal throttling. Based on these insights, we propose parallelism strategy optimizations that jointly improve scalability and thermal stability. All experimental code is publicly released.
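The microbatch trade-off in pipeline parallelism can be illustrated with a standard back-of-envelope model (an illustrative sketch, not the paper's methodology): in a synchronous GPipe/1F1B-style schedule with `p` stages and `m` microbatches per step, the idle "bubble" fraction is roughly `(p - 1) / (m + p - 1)`, so more (smaller) microbatches improve utilization, while each microbatch's size still bounds peak activation memory and instantaneous power draw.

```python
def pipeline_bubble_fraction(stages: int, microbatches: int) -> float:
    """Idle ("bubble") fraction of a synchronous pipeline schedule
    (GPipe/1F1B-style): (p - 1) / (m + p - 1) for p stages and m
    microbatches per step. Back-of-envelope model only; real schedules
    and interleaved variants differ in detail."""
    return (stages - 1) / (microbatches + stages - 1)

# More microbatches shrink the bubble, but larger individual
# microbatches raise peak activation memory and power excursions.
for m in (4, 8, 32):
    print(f"p=8, m={m}: bubble = {pipeline_bubble_fraction(8, m):.1%}")
```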
📝 Abstract
The rapid scaling of Large Language Models (LLMs) has pushed training workloads far beyond the limits of single-node analysis, demanding a deeper understanding of how these models behave across large-scale, multi-GPU systems. In this paper, we present a comprehensive characterization of LLM training across diverse real-world workloads and hardware platforms, including NVIDIA H100/H200 and AMD MI250 GPUs. We analyze dense and sparse models under various parallelism strategies -- tensor, pipeline, data, and expert -- and evaluate their effects on hardware utilization, power consumption, and thermal behavior. We further evaluate the effectiveness of optimizations such as activation recomputation and compute-communication overlap. Our findings show that performance is not determined solely by scaling hardware capacity. Scale-up systems with fewer, higher-memory GPUs can outperform scale-out systems in communication-bound regimes, but only under carefully tuned configurations; in other cases, scale-out deployments achieve superior throughput. We also show that certain parallelism combinations, such as tensor with pipeline, lead to bandwidth underutilization due to inefficient data chunking, while increasing microbatch sizes beyond a certain point induces bursty execution and peak power excursions that worsen thermal throttling. These insights reveal how training performance is shaped by complex interactions between hardware, system topology, and model execution. We conclude by offering recommendations for system and hardware design to improve the scalability and reliability of future LLM systems and workloads. The source code of this project is available at https://github.com/sitar-lab/CharLLM-PPT.
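The memory/compute trade-off behind activation recomputation, evaluated above, can be sketched with a simple first-order model (illustrative assumptions, not measurements from the paper: uniform layers, full per-layer recomputation that keeps only each layer's input and rematerializes one layer's activations at a time during backward):

```python
def activation_memory_gb(layers: int, per_layer_gb: float,
                         boundary_gb: float, recompute: bool = False) -> float:
    """Rough peak activation memory for a uniform layer stack.
    Without recomputation, every layer's activations are kept for the
    backward pass; with full recomputation, only each layer's input
    ("boundary") is saved and a single layer's activations are
    rematerialized at a time, at the cost of roughly one extra
    forward pass of compute. First-order sketch only.
    """
    if recompute:
        return layers * boundary_gb + per_layer_gb
    return layers * per_layer_gb

# Hypothetical 32-layer stack: 2 GB of activations per layer,
# 0.1 GB per layer boundary.
baseline = activation_memory_gb(32, 2.0, 0.1)                  # keep everything
checkpointed = activation_memory_gb(32, 2.0, 0.1, recompute=True)
print(f"baseline: {baseline:.1f} GB, recompute: {checkpointed:.1f} GB")
```

Under these assumptions, recomputation trades roughly one extra forward pass for an order-of-magnitude reduction in peak activation memory, which is what makes larger microbatches (and their power implications) reachable in the first place.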