RESQ: A Unified Framework for REliability and Security Enhancement of Quantized Deep Neural Networks

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the dual vulnerability of deep neural networks under quantized deployment, namely adversarial attacks and hardware-induced bit-flip faults, and reveals, for the first time, an asymmetric relationship between adversarial robustness and fault tolerance. To jointly enhance both forms of robustness, the authors propose a unified three-stage optimization framework: first, adversarial fine-tuning to improve resilience against input perturbations; second, fault-aware fine-tuning guided by bit-flip fault simulation; and third, a lightweight post-training quantization fusion strategy. Evaluated across multiple models and datasets, the approach yields up to a 10.35% gain in adversarial robustness and a 12.47% gain in fault robustness while maintaining high accuracy.

📝 Abstract
This work proposes a unified three-stage framework that produces a quantized DNN with balanced fault and attack robustness. The first stage improves attack resilience via fine-tuning that desensitizes feature representations to small input perturbations. The second stage reinforces fault resilience through fault-aware fine-tuning under simulated bit-flip faults. Finally, a lightweight post-training adjustment integrates quantization to enhance efficiency and further mitigate fault sensitivity without degrading attack resilience. Experiments on ResNet18, VGG16, EfficientNet, and Swin-Tiny, evaluated on CIFAR-10, CIFAR-100, and GTSRB, show consistent gains of up to 10.35% in attack resilience and 12.47% in fault resilience, while maintaining competitive accuracy in quantized networks. The results also highlight an asymmetric interaction: improvements in fault resilience generally increase resilience to adversarial attacks, whereas enhanced adversarial resilience does not necessarily lead to higher fault resilience.
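The second stage relies on simulated bit-flip faults in quantized weights. A minimal sketch of such a fault model is shown below; the paper's own simulator is not described here, so the uniform sampling over weight elements and bit positions is an assumption for illustration only.

```python
import numpy as np

def flip_random_bit(weights_q, rng=None):
    """Inject a single bit-flip fault into an int8 weight tensor.

    Minimal illustrative fault model (assumed, not the authors' simulator):
    pick one element and one of its 8 bits uniformly at random, flip it,
    and return the faulty copy without modifying the original tensor.
    """
    rng = np.random.default_rng(rng)
    faulty = weights_q.copy()
    # Reinterpret the bytes as uint8 so the XOR cannot overflow.
    flat = faulty.view(np.uint8).ravel()
    pos = rng.integers(0, flat.size)   # which weight element
    bit = rng.integers(0, 8)           # which bit within the int8 value
    flat[pos] ^= np.uint8(1 << bit)
    return faulty
```

During fault-aware fine-tuning, such a perturbed copy of the weights would be used for the forward pass so the loss reflects the model's behavior under hardware faults.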
Problem

Research questions and friction points this paper is trying to address.

quantized deep neural networks
fault resilience
attack resilience
adversarial robustness
bit-flip faults
Innovation

Methods, ideas, or system contributions that make the work stand out.

quantized DNN
fault resilience
adversarial robustness
unified framework
post-training adjustment
Ali Soltan Mohammadi
University of Zanjan, Zanjan, Iran
Samira Nazari
University of Zanjan, Zanjan, Iran
Ali Azarpeyvand
University of Zanjan, Zanjan, Iran
Mahdi Taheri
Postdoc - BTU Cottbus-Senftenberg
Reliability, Fault Tolerant, Neural Networks, Hardware Acceleration, Approximate Computing
Milos Krstic
Professor, University of Potsdam; Department Head, IHP, Frankfurt (Oder) Germany
GALS, asynchronous circuit design, fault tolerance, radhard design, reliability
Michael Huebner
Brandenburg Technical University, Cottbus, Germany
Christian Herglotz
Brandenburg Technical University, Cottbus, Germany
Tara Ghasempouri
Professor, Tallinn University of Technology
Security of digital systems, Hardware verification, Data-mining