LLM-PRISM: Characterizing Silent Data Corruption from Permanent GPU Faults in LLM Training

📅 2026-04-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the threat of silent data corruption (SDC) caused by permanent GPU hardware defects during large language model pretraining, which can induce anomalous gradients and parameters, thereby compromising training stability and model quality. For the first time, the work systematically evaluates the robustness of large-scale pretraining to SDC at the hardware level by integrating RTL-level GPU fault simulation with a stochastic fault injection engine embedded in Megatron-LM. Conducting 7,664 experiments across FP16, BF16, and FP8 numerical formats, the study elucidates how fault type, injection rate, and numerical precision jointly influence training stability. It reveals that while low-frequency faults are generally tolerable, certain data paths remain vulnerable to catastrophic divergence even under moderate fault rates.

📝 Abstract
Large-scale LLM training is increasingly susceptible to hardware defects stemming from manufacturing escapes and silicon aging. These defects manifest as Silent Data Corruption (SDC) that perturbs gradients and parameters throughout the training process. We present LLM-PRISM, a methodology for characterizing the resilience of LLM pre-training to hardware faults. LLM-PRISM couples RTL-level GPU fault simulation with a stochastic injection engine embedded in Megatron-LM. Through 7,664 training runs across FP16, BF16, and FP8 regimes, we analyze how fault type, fault rate, and numeric format govern resilience. We find that while LLMs resist low-frequency faults, the impact is highly non-uniform: critical datapaths and specific precision formats can induce catastrophic divergence even at moderate fault rates. This study provides the first hardware-grounded characterization of SDC resilience during pre-training.
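To make the idea of stochastic fault injection concrete, here is a minimal sketch of how a permanent stuck-at bit fault can be emulated at the tensor level. This is an illustration only, not the paper's actual injection engine: the function name, the NumPy-based bit manipulation, and the per-element fault-rate model are all assumptions for this sketch. A stuck-at-1 fault on a chosen bit of the FP32 encoding is applied to a random subset of elements at a given rate, mimicking corrupted values flowing out of a faulty datapath.

```python
import numpy as np

def inject_stuck_at_fault(x, bit, stuck_value, rate, rng):
    """Emulate a permanent stuck-at fault on one bit of the FP32 encoding.

    Hypothetical helper (not from the paper): each element of `x` is
    corrupted independently with probability `rate`, by forcing `bit`
    of its IEEE-754 representation to `stuck_value` (0 or 1).
    """
    flat = x.astype(np.float32).ravel().copy()
    bits = flat.view(np.uint32)  # reinterpret the floats as raw 32-bit words
    # Choose which elements pass through the "faulty" datapath this step.
    mask = rng.random(bits.shape) < rate
    if stuck_value:
        bits[mask] |= np.uint32(1 << bit)                      # stuck-at-1
    else:
        bits[mask] &= np.uint32(~(1 << bit) & 0xFFFFFFFF)      # stuck-at-0
    return bits.view(np.float32).reshape(x.shape)

# Example: a stuck-at-1 sign bit flips affected positive values negative.
rng = np.random.default_rng(0)
grads = np.ones((4, 4), dtype=np.float32)
corrupted = inject_stuck_at_fault(grads, bit=31, stuck_value=1, rate=1.0, rng=rng)
```

In an actual training harness such a hook would wrap selected GEMM outputs or gradient buffers each step, with the bit position and fault rate drawn from the hardware-level fault model rather than fixed as here.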
Problem

Research questions and friction points this paper is trying to address.

Silent Data Corruption
GPU faults
LLM training
hardware defects
pre-training resilience
Innovation

Methods, ideas, or system contributions that make the work stand out.

Silent Data Corruption
GPU fault simulation
LLM training resilience
stochastic fault injection
numerical precision
Authors

Abhishek Tyagi · University of Rochester
Saurabh Hukerikar · Nvidia · High Performance Computing, Resilience, Fault Tolerance, Programming Models, Computer Architecture
Nirmal Saxena · NVIDIA Corporation
Yanxiang Huang · NVIDIA Corporation
Philip Shirvani · NVIDIA Corporation
Chung-Hsuan Tung · Duke University
Yuhao Zhu · University of Rochester · Visual Computing, Human Vision, Computer Architecture