🤖 AI Summary
Deep neural networks (DNNs) deployed in high-radiation environments suffer inference failures due to multiple single-bit single-event upsets (SEUs), which together corrupt several stored values. Existing fault-tolerance techniques often require hardware modifications or fail to model how SEUs propagate across DNN layers.
Method: This paper proposes Fault-Aware Training (FAT), a hardware-agnostic training framework that systematically models SEU propagation across layers and integrates end-to-end differentiable fault injection. FAT explicitly injects multi-point faults during training and introduces gradient-based fault-masking regularization. It further combines weight sensitivity analysis with adversarial robustness training to enhance resilience.
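The training-time injection idea can be sketched in plain Python. This is a minimal illustration under our own assumptions (a one-parameter linear model, random bit flips in the float32 encoding of a weight copy, and clipping of the faulty value); the function and variable names are illustrative, not the paper's API.

```python
import random
import struct

def flip_bits(weights, n_flips, rng):
    """Return a copy of `weights` with n_flips random bit flips in their
    float32 encodings, emulating single-event upsets. Only bits 0-29
    (mantissa plus seven exponent bits) are hit: skipping the top
    exponent bit keeps this sketch free of overflow to infinity."""
    w = list(weights)
    for _ in range(n_flips):
        i = rng.randrange(len(w))
        raw = struct.unpack("<I", struct.pack("<f", w[i]))[0]
        raw ^= 1 << rng.randrange(30)
        flipped = struct.unpack("<f", struct.pack("<I", raw))[0]
        # Clip the faulty value to a fixed range -- a simplification that
        # bounds the damage a single upset can do in this toy example.
        w[i] = max(-4.0, min(4.0, flipped))
    return w

# Fault-aware training sketch: every step evaluates the loss gradient on a
# *faulty* copy of the weight, so the learned solution must tolerate upsets.
rng = random.Random(0)
xs = [rng.gauss(0.0, 1.0) for _ in range(32)]
ys = [3.0 * x for x in xs]  # target model: y = 3 * x

def loss(wv):
    return sum((wv * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

w = 0.0
initial_loss = loss(w)
for _ in range(300):
    (wf,) = flip_bits([w], n_flips=1, rng=rng)  # inject a fault each step
    grad = sum(2.0 * (wf * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.05 * grad  # update the clean weight using the faulty gradient
final_loss = loss(w)
```

Despite a bit flip being injected on every step, the weight still converges close to the fault-free solution, which is the qualitative behaviour FAT aims for.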
Contribution/Results: Evaluated on CIFAR-10 and ImageNet, FAT improves multi-SEU tolerance by up to 3× over baseline methods, significantly mitigates accuracy degradation under radiation-induced faults, and incurs no additional inference latency or hardware overhead.
📝 Abstract
Deep neural networks (DNNs) are increasingly used in safety-critical applications. Reliable fault analysis and mitigation are essential to ensure their functionality in harsh environments with high radiation levels. This study analyses the impact of multiple single-bit single-event upsets in DNNs by performing fault injection at the level of the DNN model. Additionally, a fault-aware training (FAT) methodology is proposed that improves the DNNs' robustness to faults without any modification to the hardware. Experimental results show that the FAT methodology improves fault tolerance by up to a factor of 3.
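A single-bit SEU at the model level is commonly emulated by flipping one bit in the IEEE-754 float32 encoding of a stored parameter. A minimal sketch (the helper name is ours, not the paper's):

```python
import struct

def flip_bit(value, bit):
    """Flip one bit (0-31) in the float32 encoding of `value`,
    emulating a single-event upset in a stored DNN weight."""
    raw = struct.unpack("<I", struct.pack("<f", value))[0]
    return struct.unpack("<f", struct.pack("<I", raw ^ (1 << bit)))[0]
```

Which bit is hit matters greatly: flipping the lowest mantissa bit of 0.5 changes it by about 6e-8, while flipping the top exponent bit (bit 30) turns it into roughly 1.7e38. Injecting such flips into a trained model's weights and re-evaluating accuracy is the kind of model-level fault injection the abstract describes.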