🤖 AI Summary
This work addresses the dual vulnerability of deep neural networks under quantized deployment, namely adversarial attacks and hardware-induced bit-flip faults, and reveals, for the first time, an asymmetric relationship between adversarial robustness and fault tolerance: improving fault tolerance generally improves adversarial robustness, but the reverse does not necessarily hold. To jointly enhance both forms of robustness, the authors propose a unified three-stage optimization framework: first, adversarial fine-tuning to improve resilience against input perturbations; second, fault-aware fine-tuning guided by bit-flip fault simulation; and third, a lightweight post-training quantization fusion strategy. Evaluated across multiple models and datasets, the approach yields gains of up to 10.35% in adversarial robustness and 12.47% in fault robustness while maintaining high accuracy.
📝 Abstract
This work proposes a unified three-stage framework that produces a quantized DNN with balanced fault and attack robustness. The first stage improves attack resilience via fine-tuning that desensitizes feature representations to small input perturbations. The second stage reinforces fault resilience through fault-aware fine-tuning under simulated bit-flip faults. Finally, a lightweight post-training adjustment integrates quantization to enhance efficiency and further mitigate fault sensitivity without degrading attack resilience. Experiments on ResNet18, VGG16, EfficientNet, and Swin-Tiny across CIFAR-10, CIFAR-100, and GTSRB show consistent gains of up to 10.35% in attack resilience and 12.47% in fault resilience, while maintaining competitive accuracy in quantized networks. The results also highlight an asymmetric interaction in which improvements in fault resilience generally increase resilience to adversarial attacks, whereas enhanced adversarial resilience does not necessarily lead to higher fault resilience.
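The paper's fault-simulation procedure is not reproduced here; as a rough illustration of the kind of bit-flip injection used to stress-test quantized weights, a minimal sketch might look like the following (the function name, parameters, and flip model are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def inject_bit_flips(q_weights, flip_rate, bits=8, seed=0):
    """Flip one random bit in a random subset of int8 quantized weights.

    q_weights : np.ndarray of dtype int8 (quantized weight tensor)
    flip_rate : fraction of weight elements that receive one bit flip
    bits      : word width of the quantized representation (8 for int8)
    """
    rng = np.random.default_rng(seed)
    # Reinterpret the signed weights as raw unsigned bytes so XOR acts
    # directly on the stored bit pattern.
    flat = q_weights.astype(np.uint8).ravel().copy()
    n_flips = max(1, int(flip_rate * flat.size))
    idx = rng.choice(flat.size, size=n_flips, replace=False)
    bit_pos = rng.integers(0, bits, size=n_flips)
    flat[idx] ^= (np.uint8(1) << bit_pos).astype(np.uint8)
    return flat.astype(np.int8).reshape(q_weights.shape)

# Example: corrupt 25% of a small int8 weight tensor.
w = np.arange(-8, 8, dtype=np.int8).reshape(4, 4)
w_faulty = inject_bit_flips(w, flip_rate=0.25, seed=1)
```

During fault-aware fine-tuning, a sketch like this would be applied to the quantized weights each iteration so the loss is computed under corrupted parameters; details such as which bit positions are targeted (e.g. sign or MSB only) depend on the fault model assumed.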