🤖 AI Summary
Silent data corruption (SDC), hardware-induced errors that yield incorrect computations without explicit failure signals, remains largely uncharacterized in large language model (LLM) training. Method: We conduct the first empirical study of SDC's impact on LLM training, running controlled experiments on real faulty cloud nodes that automated fleet management had swept out of production. We systematically compare healthy and corrupted nodes at three granularities: individual submodule computations, single optimizer steps, and full training periods. Contribution/Results: Although SDC perturbations to submodule computations and gradients are often small, they can deflect weight evolution trajectories, causing abrupt loss spikes or convergence to distinct optima with different weights. To isolate and analyze SDC effects, we propose a framework that leverages XLA's deterministic execution and custom gradient synchronization mechanisms. This work provides the first empirical evidence and a reproducible methodology for understanding and mitigating SDC in LLM training, establishing foundational insights for fault-tolerant LLM training systems.
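The summary's core detection idea rests on deterministic execution: with identical inputs, weights, and a fixed compiled program, a healthy device must reproduce bit-identical outputs, so any mismatch flags corruption. The following is a minimal sketch of that idea in JAX (not the authors' code; `submodule_forward` and `check_submodule` are illustrative stand-ins, and we assume a float32 toy submodule and a deterministic XLA backend):

```python
import jax
import jax.numpy as jnp

@jax.jit  # XLA-compiled, so the operation schedule is fixed across runs
def submodule_forward(w, x):
    # Toy stand-in for a transformer submodule (e.g., one MLP block).
    return jnp.tanh(x @ w)

def check_submodule(w, x, reference_out):
    """Run the compiled submodule on this node and compare bitwise against
    a reference output captured on a known-healthy node."""
    out = submodule_forward(w, x)
    # Reinterpret floats as int32 so even sub-ULP discrepancies
    # (a possible SDC symptom) are detected, not masked by tolerances.
    bits = jax.lax.bitcast_convert_type(out, jnp.int32)
    ref_bits = jax.lax.bitcast_convert_type(reference_out, jnp.int32)
    return int(jnp.sum(bits != ref_bits))  # 0 expected on a healthy node

# Usage: on a single healthy node, the check trivially passes.
key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (64, 64), dtype=jnp.float32)
x = jax.random.normal(key, (8, 64), dtype=jnp.float32)
ref = submodule_forward(w, x)           # captured on the healthy node
assert check_submodule(w, x, ref) == 0  # nonzero would indicate SDC
```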
📝 Abstract
As the scale of training large language models (LLMs) increases, one emergent failure mode is silent data corruption (SDC), where hardware produces incorrect computations without explicit failure signals. In this work, we are the first to investigate the impact of real-world SDCs on LLM training by comparing model training between healthy production nodes and unhealthy nodes exhibiting SDCs. With the help of a cloud computing platform, we accessed unhealthy nodes that had been swept out of production by automated fleet management. Using deterministic execution via the XLA compiler and our proposed synchronization mechanisms, we isolate and analyze the impact of SDC errors on these nodes at three levels: each submodule computation, a single optimizer step, and a full training period. Our results reveal that the impact of SDCs on computation varies across unhealthy nodes. Although in most cases the perturbations that SDCs introduce into submodule computations and gradients are relatively small, SDCs can lead models to converge to different optima with different weights and even cause spikes in the training loss. Our analysis sheds light on further understanding and mitigating the impact of SDCs.
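At the optimizer-step level, the comparison can be framed as validating a node's gradients against a healthy reference before the update is applied, so corruption is localized to a step rather than silently propagating into the weights. Below is a hedged sketch of one way such a check might be structured; it is an assumed design, not the paper's implementation, and the toy model, `optax` optimizer choice, and `reference_grads` input are all illustrative:

```python
import jax
import jax.numpy as jnp
import optax  # assumed optimizer library, not specified by the paper

def loss_fn(params, batch):
    preds = jnp.tanh(batch["x"] @ params)  # toy model standing in for an LLM
    return jnp.mean((preds - batch["y"]) ** 2)

grad_fn = jax.jit(jax.grad(loss_fn))  # compiled once -> fixed op schedule

def step_with_check(params, opt_state, batch, optimizer, reference_grads):
    """One optimizer step that first compares this node's gradients against
    gradients computed from the same weights and batch on a healthy node."""
    grads = grad_fn(params, batch)
    # Largest absolute per-leaf deviation from the healthy reference.
    max_dev = jax.tree_util.tree_reduce(
        jnp.maximum,
        jax.tree_util.tree_map(lambda g, r: jnp.max(jnp.abs(g - r)),
                               grads, reference_grads),
    )
    # Under deterministic execution the deviation should be exactly zero;
    # a nonzero value flags SDC at this step, before the weights diverge.
    updates, opt_state = optimizer.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
    return params, opt_state, float(max_dev)
```

Comparing at this granularity complements the submodule-level check: even when per-step gradient deviations are small, repeating the comparison over a full training period can reveal the trajectory-level effects the abstract describes, such as loss spikes or convergence to different optima.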