Variance-Aware Noisy Training: Hardening DNNs against Unstable Analog Computations

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Dynamic noise in analog computing hardware, such as thermal fluctuations and temporal drift, severely degrades DNN inference accuracy. To address this, we propose Variance-Aware Noisy Training. Its core innovation is a noise-evolution scheduling mechanism that lets the training-time noise distribution track the time-varying noise variance observed during inference, overcoming the robustness bottleneck of conventional static-noise training in non-stationary environments. The method comprises three components: noise-scheduled training, variance-driven adaptive noise sampling, and a lightweight ensemble strategy that adds no parameters or computational overhead. Evaluated on CIFAR-10 and Tiny ImageNet, the approach reaches robust accuracies of 97.3% (+25.0 points) and 89.9% (+51.4 points), respectively, significantly outperforming baselines without increasing training cost.
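The core idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the schedule shape (linear ramp), the noise range, and the multiplicative weight-noise model are all assumptions made for the example.

```python
import numpy as np

def noise_std_schedule(step, total_steps, sigma_min=0.05, sigma_max=0.3):
    """Hypothetical noise-evolution schedule: the injected-noise standard
    deviation ramps up linearly over training so the model sees the range
    of noise variances expected from drifting analog hardware."""
    frac = step / max(total_steps - 1, 1)
    return sigma_min + frac * (sigma_max - sigma_min)

def noisy_forward(weights, x, sigma, rng):
    """Inject multiplicative Gaussian noise into the weights before the
    matrix multiply, emulating unstable analog compute."""
    noisy_w = weights * (1.0 + rng.normal(0.0, sigma, size=weights.shape))
    return x @ noisy_w

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 4))
x = rng.normal(size=(2, 8))

total_steps = 100
for step in range(total_steps):
    sigma = noise_std_schedule(step, total_steps)
    y = noisy_forward(weights, x, sigma, rng)
    # ... compute the loss on y and update weights as usual ...
```

Conventional Noisy Training would use a fixed `sigma` throughout; the schedule is what exposes the model to an evolving noise distribution.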

📝 Abstract
The disparity between the computational demands of deep learning and the capabilities of compute hardware is expanding drastically. Although deep learning achieves remarkable performance in countless tasks, its escalating requirements for computational power and energy consumption surpass the sustainable limits of even specialized neural processing units, including the Apple Neural Engine and NVIDIA TensorCores. This challenge is intensified by the slowdown in CMOS scaling. Analog computing presents a promising alternative, offering substantial improvements in energy efficiency by directly manipulating physical quantities such as current, voltage, charge, or photons. However, it is inherently vulnerable to manufacturing variations, nonlinearities, and noise, leading to degraded prediction accuracy. One of the most effective techniques for enhancing robustness, Noisy Training, introduces noise during the training phase to reinforce the model against disturbances encountered during inference. Although highly effective, its performance degrades in real-world environments where noise characteristics fluctuate due to external factors such as temperature variations and temporal drift. This study underscores the necessity of Noisy Training while revealing its fundamental limitations in the presence of dynamic noise. To address these challenges, we propose Variance-Aware Noisy Training, a novel approach that mitigates performance degradation by incorporating noise schedules which emulate the evolving noise conditions encountered during inference. Our method substantially improves model robustness, without training overhead. We demonstrate a significant increase in robustness, from 72.3% with conventional Noisy Training to 97.3% with Variance-Aware Noisy Training on CIFAR-10 and from 38.5% to 89.9% on Tiny ImageNet.
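The abstract also mentions a lightweight ensemble that adds no parameters. One way such a scheme can work, sketched here under assumptions rather than as the paper's exact recipe, is to average the outputs of several forward passes, each with an independent draw of analog-style weight noise:

```python
import numpy as np

def ensemble_predict(weights, x, sigma, passes, rng):
    """Average `passes` forward passes, each with an independent draw of
    multiplicative weight noise emulating analog-hardware perturbations.
    No extra parameters are introduced; only repeated inference."""
    outs = [x @ (weights * (1.0 + rng.normal(0.0, sigma, weights.shape)))
            for _ in range(passes)]
    return np.mean(outs, axis=0)

rng = np.random.default_rng(1)
w = rng.normal(size=(8, 4))
x = rng.normal(size=(2, 8))
y = ensemble_predict(w, x, sigma=0.1, passes=8, rng=rng)
```

Averaging reduces the variance of the noisy predictions at the cost of extra inference passes, which matches the "zero additional parameters" framing.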
Problem

Research questions and friction points this paper is trying to address.

Maintaining DNN inference accuracy on analog hardware with unstable, noisy computations.
Mitigating performance degradation caused by dynamic, time-varying noise.
Overcoming the limitations of conventional static Noisy Training under fluctuating noise conditions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Variance-Aware Noisy Training, which hardens DNNs against dynamic analog noise.
Noise schedules that emulate the evolving noise conditions encountered during inference.
Substantially improved robustness with no additional training overhead.
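The variance-driven sampling component named in the summary can be illustrated as follows. This is a hedged sketch: the uniform sampling window and the `sigma_spread` parameter are illustrative assumptions, not details taken from the paper.

```python
import random

def sample_batch_sigma(mean_sigma, sigma_spread, rng=random):
    """Hypothetical variance-driven noise sampling: rather than a fixed
    noise level, draw each batch's noise standard deviation from a window
    around the current schedule value, so training covers the spread of
    noise variances expected at inference time."""
    lo = max(mean_sigma - sigma_spread, 0.0)  # noise std cannot be negative
    hi = mean_sigma + sigma_spread
    return rng.uniform(lo, hi)

random.seed(42)
sigmas = [sample_batch_sigma(0.2, 0.05) for _ in range(1000)]
```

Each training batch then injects noise with its own sampled `sigma`, combining naturally with a noise-evolution schedule that moves `mean_sigma` over time.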
Xiao Wang
Hardware and Artificial Intelligence Lab, Institute of Computer Engineering, Heidelberg University, Germany
Hendrik Borras
Hardware and Artificial Intelligence Lab, Institute of Computer Engineering, Heidelberg University, Germany
Bernhard Klein
Hardware and Artificial Intelligence Lab, Institute of Computer Engineering, Heidelberg University, Germany