BRUNO: Backpropagation Running Undersampled for Novel device Optimization

📅 2025-05-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address training difficulties in FeCap/RRAM-based compute-in-memory hardware—arising from device stochasticity, parameter variability, and low-bit synaptic precision—this work proposes a hardware-aware, end-to-end differentiable training framework. Methodologically, it introduces the first compact physics-informed differentiable FeLIF neuron model and quantized RRAM synapse model, coupled with a robust undersampling-based backpropagation algorithm that ensures stable gradient propagation under noise and variability. Crucially, the framework abandons the conventional “model-then-adapt” paradigm by directly embedding device physics into network optimization. Evaluated on spatiotemporal pattern detection, it achieves a 37% reduction in inference latency and a 42% decrease in memory footprint compared to standard LIF networks, significantly improving the energy–accuracy trade-off.
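As background to the summary above, the snippet below sketches a generic leaky integrate-and-fire (LIF) step with a surrogate spike gradient, the standard differentiable spiking primitive that a physics-based FeLIF model would stand in for. All names, constants, and the fast-sigmoid surrogate here are illustrative assumptions, not details taken from the paper.

```python
import torch


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thresh,) = ctx.saved_tensors
        # d(spike)/d(v) is approximated by 1 / (1 + k|v - thresh|)^2 so gradients can flow.
        return grad_output / (1.0 + 10.0 * v_minus_thresh.abs()) ** 2


def lif_step(v, i_in, beta=0.9, v_thresh=1.0):
    """One discrete-time leaky integrate-and-fire update.

    v: membrane potential from the previous time step
    i_in: synaptic input current at this time step
    beta: leak factor (0 < beta < 1)
    """
    v = beta * v + i_in                        # leaky integration
    spike = SurrogateSpike.apply(v - v_thresh)
    v = v - spike * v_thresh                   # soft reset after a spike
    return spike, v
```

Frameworks of this kind typically unroll such a step over the input sequence and backpropagate through the unrolled graph; per the summary, BRUNO's contribution is keeping that backward pass stable when the abstract neuron and synapse models are replaced by noisy, variable, low-precision device models.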

📝 Abstract
Recent efforts to improve the efficiency of neuromorphic and machine learning systems have focused on the development of application-specific integrated circuits (ASICs), which provide hardware specialized for the deployment of neural networks, leading to potential gains in efficiency and performance. These systems typically feature an architecture that goes beyond the von Neumann architecture employed in general-purpose hardware such as GPUs. Neural networks developed for this specialized hardware then need to take into account the specifics of the hardware platform, which requires novel training algorithms and accurate models of the hardware, since it cannot be abstracted as a general-purpose computing platform. In this work, we present a bottom-up approach to train neural networks for hardware based on spiking neurons and synapses built on ferroelectric capacitors (FeCap) and resistive switching non-volatile memory devices (RRAM), respectively. In contrast to the more common approach of designing hardware to fit existing abstract neuron or synapse models, this approach starts with compact models of the physical devices to model the computational primitives of the neurons. Based on these models, a training algorithm is developed that can reliably backpropagate through these physical models, even when applying common hardware limitations such as stochasticity, variability, and low bit precision. The training algorithm is then tested on a spatio-temporal dataset with a network composed of quantized synapses based on RRAM and ferroelectric leaky integrate-and-fire (FeLIF) neurons. The performance of the network is compared with different networks composed of LIF neurons. The results of the experiments show the potential advantage of using BRUNO to train networks with FeLIF neurons, achieving a reduction in both time and memory for detecting spatio-temporal patterns with quantized synapses.
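The hardware limitations the abstract lists (stochasticity, variability, low bit precision) can be made concrete with a small sketch of a quantized, noisy synapse trained with a straight-through estimator. This is a common generic technique for low-bit synapses and is only an assumption about how such constraints might be modelled, not the paper's actual RRAM compact model; all function names and parameters are hypothetical.

```python
import torch


def quantize_weights(w, n_bits=4, w_max=1.0):
    """Uniform low-bit weight quantization with a straight-through estimator (STE)."""
    n_levels = 2 ** n_bits - 1                       # e.g. 15 steps for 4-bit weights
    w_clipped = w.clamp(-w_max, w_max)
    w_unit = (w_clipped + w_max) / (2 * w_max)       # map to [0, 1]
    w_q = torch.round(w_unit * n_levels) / n_levels  # snap to the quantized grid
    w_q = w_q * 2 * w_max - w_max                    # map back to [-w_max, w_max]
    # STE: the forward pass sees the quantized weights, the backward pass the identity.
    return w_clipped + (w_q - w_clipped).detach()


def noisy_synapse(x, w, read_noise_std=0.05):
    """Linear synaptic layer with quantized weights and multiplicative read noise."""
    w_q = quantize_weights(w)
    if read_noise_std > 0:
        # Crude stand-in for cycle-to-cycle conductance variability.
        w_q = w_q * (1.0 + read_noise_std * torch.randn_like(w_q))
    return x @ w_q.t()                               # x: (batch, in), w: (out, in)
```

With this pattern, gradients pass through the quantizer unchanged while the forward pass always sees discrete, noisy conductances, which is one common way to expose a network during training to the kind of device behaviour the abstract describes.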
Problem

Research questions and friction points this paper is trying to address.

Develops training algorithms for specialized neuromorphic hardware.
Models physical devices to optimize neural network performance.
Reduces time and memory for spatio-temporal pattern detection.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses FeCap and RRAM for spiking neurons
Develops backpropagation for physical device models
Trains quantized synapses with FeLIF neurons
Luca Fehlings
University of Groningen
memory devices, electron devices, DTCO
Bojian Zhang
Zernike Institute for Advanced Materials & Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen, 9747 AG Groningen, Netherlands
P. Gibertini
Zernike Institute for Advanced Materials & Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen, 9747 AG Groningen, Netherlands
Martin A. Nicholson
Zernike Institute for Advanced Materials & Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen, 9747 AG Groningen, Netherlands
Erika Covi
Zernike Institute for Advanced Materials & CogniGron Center, University of Groningen
Memristive devices, Neuromorphic computing, Spiking Neural Networks, Electronic engineering
Fernando M. Quintana
Zernike Institute for Advanced Materials & Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen, 9747 AG Groningen, Netherlands