🤖 AI Summary
Analog compute-in-memory (CIM) hardware offers substantial energy efficiency gains for neural network inference, yet its deployment robustness is severely hindered by complex, non-ideal hardware noise. Existing noise-aware training relies on differentiable, oversimplified noise models that fail to capture realistic hardware distortions. To address this, we propose a decoupled training framework that separately models forward-pass hardware noise and backward-pass gradient computation. Crucially, we extend the straight-through estimator (STE) to support high-fidelity, non-differentiable noise modeling, with theoretical analysis guaranteeing gradient direction consistency. Our method significantly improves model resilience to hardware non-idealities: it achieves up to 5.3% higher accuracy on image classification, reduces perplexity by 0.72 on text generation, accelerates training by 2.2×, and cuts peak memory usage by 37.9%.
📝 Abstract
Analog Compute-In-Memory (CIM) architectures promise significant energy efficiency gains for neural network inference, but suffer from complex hardware-induced noise that poses major challenges for deployment. While noise-aware training methods have been proposed to address this issue, they typically rely on idealized and differentiable noise models that fail to capture the full complexity of analog CIM hardware variations. Motivated by the Straight-Through Estimator (STE) framework in quantization, we decouple forward noise simulation from backward gradient computation, enabling noise-aware training with more accurate but computationally intractable noise modeling in analog CIM systems. We provide theoretical analysis demonstrating that our approach preserves essential gradient directional information while maintaining computational tractability and optimization stability. Extensive experiments show that our extended STE framework achieves up to 5.3% accuracy improvement on image classification, 0.72 perplexity reduction on text generation, 2.2× speedup in training time, and 37.9% lower peak memory usage compared to standard noise-aware training methods.
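The forward/backward decoupling described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the noise model (multiplicative variation plus coarse quantization) and all shapes are hypothetical stand-ins for a high-fidelity, non-differentiable CIM noise simulator. The key point is that the forward pass uses noisy weights while the backward pass differentiates as if the noise injection were the identity, in the spirit of the straight-through estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def hardware_noise(w):
    """Stand-in for a high-fidelity, non-differentiable CIM noise model
    (hypothetical: multiplicative lognormal device variation followed by
    an ADC-style coarse quantization step)."""
    noisy = w * rng.lognormal(mean=0.0, sigma=0.05, size=w.shape)
    return np.round(noisy * 16) / 16  # non-differentiable rounding

def ste_linear(x, w):
    """Forward pass with simulated hardware noise; the returned closure
    computes gradients with the *clean* weights, i.e. the straight-through
    estimator treats the noise injection as identity when differentiating."""
    w_noisy = hardware_noise(w)
    y = x @ w_noisy  # forward uses the noisy (hardware-realistic) weights
    def backward(grad_y):
        # Backward flows as if y = x @ w: the noise is skipped entirely,
        # so the noise model never needs to be differentiable.
        grad_x = grad_y @ w.T
        grad_w = x.T @ grad_y
        return grad_x, grad_w
    return y, backward

x = rng.normal(size=(4, 8))
w = rng.normal(size=(8, 3))
y, backward = ste_linear(x, w)
grad_x, grad_w = backward(np.ones_like(y))
```

Because the backward closure only references the clean weights, an arbitrarily complex (even table-driven or sampled) noise model can be dropped into `hardware_noise` without touching the gradient path.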