🤖 AI Summary
This study addresses silent data corruption (SDC) caused by permanent GPU hardware defects during large language model pretraining. Such corruption can induce anomalous gradients and parameters, compromising training stability and model quality. The work is the first to systematically evaluate the robustness of large-scale pretraining to hardware-level SDC, integrating RTL-level GPU fault simulation with a stochastic fault injection engine embedded in Megatron-LM. Across 7,664 experiments spanning the FP16, BF16, and FP8 numerical formats, the study shows how fault type, injection rate, and numerical precision jointly influence training stability. It finds that while low-frequency faults are generally tolerable, certain datapaths remain vulnerable to catastrophic divergence even under moderate fault rates.
📝 Abstract
Large-scale LLM training is increasingly susceptible to hardware defects stemming from manufacturing escapes and silicon aging. These defects manifest as Silent Data Corruption (SDC) that perturbs gradients and parameters throughout the training process. We present LLM-PRISM, a methodology for characterizing the resilience of LLM pre-training to hardware faults. LLM-PRISM couples RTL-level GPU fault simulation with a stochastic injection engine embedded in Megatron-LM. Through 7,664 training runs across FP16, BF16, and FP8 regimes, we analyze how fault type, fault rate, and numeric format govern resilience. We find that while LLMs resist low-frequency faults, the impact is highly non-uniform: critical datapaths and specific precision formats can induce catastrophic divergence even at moderate fault rates. This study provides the first hardware-grounded characterization of SDC resilience during pre-training.
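To make the stochastic injection idea concrete, the sketch below corrupts FP16 values by flipping a random bit of their IEEE 754 half-precision encoding with a configurable probability. This is an illustrative simplification, not the paper's actual engine: the function names (`flip_random_bit_fp16`, `inject_faults`) and the single-bit-flip fault model are assumptions, and the real methodology injects faults derived from RTL-level GPU simulation inside Megatron-LM kernels rather than post-hoc on Python floats.

```python
import random
import struct

def flip_random_bit_fp16(x: float, rng: random.Random) -> float:
    """Flip one random bit in the IEEE 754 half-precision (FP16) encoding of x.

    Depending on which bit is hit (sign, exponent, or mantissa), the corrupted
    value may be a small perturbation or a NaN/Inf -- mirroring why fault
    impact on training is highly non-uniform across datapaths.
    """
    (bits,) = struct.unpack("<H", struct.pack("<e", x))  # float -> 16-bit pattern
    bits ^= 1 << rng.randrange(16)                        # flip one of 16 bits
    return struct.unpack("<e", struct.pack("<H", bits))[0]

def inject_faults(values, fault_rate, seed=0):
    """Corrupt each value independently with probability `fault_rate` (hypothetical API)."""
    rng = random.Random(seed)
    return [flip_random_bit_fp16(v, rng) if rng.random() < fault_rate else v
            for v in values]
```

In a real harness, the analogous corruption would be applied to activation, gradient, or weight tensors at chosen injection sites; sweeping `fault_rate` and the numeric format (FP16/BF16/FP8) is what lets a study like this separate tolerable low-frequency faults from catastrophic ones.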