On Hardening DNNs against Noisy Computations

📅 2025-01-24
🤖 AI Summary
This study addresses the significant accuracy degradation of DNN inference on analog hardware due to inherent device-level noise. To enhance robustness, we systematically compare quantization-aware training (QAT) and noise-injection training (NIT), explicitly modeling analog computational noise distributions during training and conducting cross-architecture empirical evaluation (ResNet, VGG, MobileNet, etc.). Our key findings are: (1) NIT substantially outperforms fixed-scale QAT in deep networks—reducing inference error by over 40% on ResNet-18; and (2) NIT exhibits strong generalization across diverse analog hardware platforms without requiring precise knowledge of the underlying noise model. These results establish NIT as a hardware-agnostic, noise-resilient training paradigm. This work provides both a novel methodological framework and an empirical benchmark for developing robust DNNs targeting analog AI accelerators.
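The noise-injection training (NIT) idea summarized above can be sketched minimally: during training, each matrix multiplication is perturbed with additive Gaussian noise that stands in for device-level analog noise, so the network learns weights that tolerate it. The relative noise scale and the two-layer toy network below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_matmul(x, w, noise_std=0.1, training=True):
    """Matrix multiply with additive Gaussian output noise, a stand-in
    for the analog computational noise modeled during training."""
    y = x @ w
    if training and noise_std > 0:
        # Noise scale is taken relative to the mean activation magnitude,
        # loosely mimicking device-level noise on an analog accumulator.
        y = y + rng.normal(0.0, noise_std * np.abs(y).mean(), size=y.shape)
    return y

def forward(x, w1, w2, training=True):
    """Tiny two-layer MLP forward pass with noise injected after each matmul.
    At inference (training=False) the computation is exact."""
    h = np.maximum(noisy_matmul(x, w1, training=training), 0.0)  # ReLU
    return noisy_matmul(h, w2, training=training)
```

Training against such perturbed forward passes is what lets the learned weights generalize across noise models without knowing the exact hardware distribution.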

📝 Abstract
The success of deep learning has sparked significant interest in designing computer hardware optimized for the high computational demands of neural network inference. As further miniaturization of digital CMOS processors becomes increasingly challenging, alternative computing paradigms, such as analog computing, are gaining consideration. Particularly for compute-intensive tasks such as matrix multiplication, analog computing presents a promising alternative due to its potential for significantly higher energy efficiency compared to conventional digital technology. However, analog computations are inherently noisy, which makes it challenging to maintain high accuracy on deep neural networks. This work investigates the effectiveness of training neural networks with quantization to increase robustness against noise. Experimental results across various network architectures show that quantization-aware training with constant scaling factors enhances robustness. We compare these methods with noisy training, which incorporates noise injection during training that mimics the noise encountered during inference. While both methods increase tolerance against noise, noisy training emerges as the superior approach for achieving robust neural network performance, especially in complex neural architectures.
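The quantization-aware training baseline the abstract mentions relies on fake quantization with a constant scaling factor: values are rounded to an integer grid and mapped back to floats, so the forward pass sees quantized values while the optimizer works in floating point. The function below is a minimal sketch under assumed symmetric 8-bit quantization, not the paper's exact scheme.

```python
import numpy as np

def fake_quantize(x, scale, bits=8):
    """Fake quantization with a constant (fixed) scaling factor.

    Values are divided by the scale, rounded to the signed integer grid,
    clipped to the representable range, and mapped back to floats.
    """
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale
```

In QAT, gradients typically flow through the non-differentiable rounding via a straight-through estimator; the "constant scaling factor" variant fixes `scale` up front, in contrast to schemes that recalibrate it dynamically per batch or per tensor.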
Problem

Research questions and friction points this paper is trying to address.

Deep Neural Networks
Simulated Computational Noise
Accuracy Maintenance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantization Techniques
Robustness Enhancement
Noisy Training Comparison
Xiao Wang
HAWAII Lab, Heidelberg University, Germany
Hendrik Borras
HAWAII Lab, Heidelberg University, Germany
Bernhard Klein
Researcher at University of Deusto
Pervasive Systems, Ambient Intelligence, Social Software, Social Data Mining, Data Stream Processing
Holger Fröning
HAWAII Lab, Heidelberg University, Germany