🤖 AI Summary
This work proposes a modular p-bit architecture that overcomes a key limitation of conventional p-bit neurons: their stochastic activation mechanism is fixed, so they cannot support diverse probabilistic behaviors. By decoupling the stochastic signal path from the input data path, the design enables configurable probabilistic neurons that implement stochastic variants of canonical activation functions, including Sigmoid, Tanh, and ReLU, for the first time in hardware. The authors present spintronic (CMOS + sMTJ) designs with wide, tunable probabilistic operating ranges, alongside a digital-CMOS FPGA prototype whose shared stochastic unit substantially reduces hardware overhead. Experimental results demonstrate an order-of-magnitude (10×) reduction in resource consumption compared to conventional digital p-bit implementations, confirming the approach's efficiency and feasibility.
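As a rough illustration of the decoupled design, here is a minimal Python sketch of a configurable p-neuron: the data path maps the input to a firing probability, while a separate stochastic path thresholds that probability into a binary output. The specific probability mappings (a sigmoid, a rescaled tanh, and a clipped ReLU) and all names below are illustrative assumptions, not the paper's exact hardware formulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probability mappings, each sending an input to a firing
# probability in [0, 1]; the paper's exact formulations may differ.
ACTIVATIONS = {
    "sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
    "tanh":    lambda x: 0.5 * (1.0 + np.tanh(x)),     # tanh rescaled to [0, 1]
    "relu":    lambda x: float(np.clip(x, 0.0, 1.0)),  # clipped ReLU
}

def p_neuron(x, activation="sigmoid"):
    """Modular p-neuron: the input data path computes a firing
    probability, and a decoupled stochastic path (a uniform draw here,
    an sMTJ in hardware) thresholds it into a binary output."""
    p = ACTIVATIONS[activation](x)
    return 1 if rng.random() < p else 0

# Estimate each variant's mean output at input x = 0.3.
for name in ACTIVATIONS:
    mean = sum(p_neuron(0.3, name) for _ in range(10_000)) / 10_000
    print(f"{name:7s} -> {mean:.3f}")
```

Because the random draw is isolated from the activation computation, swapping the entry in `ACTIVATIONS` changes the neuron's probabilistic behavior without touching the stochastic path, which is the modularity the architecture is built around.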
📝 Abstract
Probabilistic bits (p-bits) have recently been employed in neural networks (NNs) as stochastic neurons with sigmoidal probabilistic activation functions. Nonetheless, there remains a wealth of other probabilistic activation functions yet to be explored. Here we re-engineer the p-bit by decoupling its stochastic signal path from its input data path, giving rise to a modular p-bit that enables the realization of probabilistic neurons (p-neurons) with a range of configurable probabilistic activation functions, including probabilistic versions of the widely used Logistic Sigmoid, Tanh, and Rectified Linear Unit (ReLU) activation functions. We present spintronic (CMOS + sMTJ) designs that show wide and tunable probabilistic ranges of operation. Finally, we experimentally implement digital-CMOS versions on an FPGA, with stochastic unit sharing, and demonstrate an order-of-magnitude (10×) saving in required hardware resources compared to conventional digital p-bit implementations.
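The stochastic-unit-sharing idea behind the FPGA result can be sketched in software as a single random source time-multiplexed across a whole layer of p-neurons, instead of one generator per neuron; eliminating the per-p-bit RNG is what drives the resource saving. The class and function names below (`SharedStochasticUnit`, `update_layer`) are hypothetical, and the sigmoidal probability path is just one of the configurable options.

```python
import numpy as np

class SharedStochasticUnit:
    """Single pseudo-random source standing in for the shared stochastic
    unit that replaces the per-p-bit RNG of conventional designs."""
    def __init__(self, seed=0):
        self._rng = np.random.default_rng(seed)

    def draw(self):
        # Each call hands a fresh uniform sample to whichever p-neuron
        # is being updated on the current cycle (time-multiplexing).
        return self._rng.random()

def update_layer(inputs, unit):
    """Update a layer of sigmoidal p-neurons that all share one unit."""
    probs = 1.0 / (1.0 + np.exp(-np.asarray(inputs, dtype=float)))
    return [1 if unit.draw() < p else 0 for p in probs]

unit = SharedStochasticUnit(seed=42)
# Over many trials the three outputs average near 0.12, 0.5, and 0.88.
print(update_layer([-2.0, 0.0, 2.0], unit))
```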