🤖 AI Summary
To address the high energy consumption and computational demands of large-scale neural network training, this work proposes a hybrid photonic–electronic neural network architecture. It exploits the massive intrinsic nonlinear optical response of femtosecond pulse propagation in multimode fiber as a low-power physical computing layer. Crucially, the authors introduce the first differentiable digital-twin model of multimode-fiber nonlinearity, enabling end-to-end joint training of the photonic and electronic layers with strong robustness to experimental drift. In image-classification experiments, the architecture reaches state-of-the-art accuracy at the time of publication, with excellent agreement between simulation and hardware measurements, while significantly reducing computational energy consumption and hardware requirements. By tightly coupling physical dynamics with learnable models, this work establishes a scalable, physics-informed paradigm for green AI.
📝 Abstract
The ability to train ever-larger neural networks brings artificial intelligence to the forefront of scientific and technical discoveries. However, their exponentially increasing size creates a proportionally greater demand for energy and computational hardware. Incorporating complex physical events into networks as fixed, efficient computation modules can address this demand by decreasing the complexity of trainable layers. Here, we utilize ultrashort pulse propagation in multimode fibers, which performs large-scale nonlinear transformations, for this purpose. Training the hybrid architecture is achieved through a neural model that differentiably approximates the optical system. The training algorithm updates the neural simulator and backpropagates the error signal over this proxy to optimize layers preceding the optical one. Our experimental results achieve state-of-the-art image-classification accuracy and high simulation fidelity. Moreover, the framework demonstrates exceptional resilience to experimental drifts. By integrating low-energy physical systems into neural networks, this approach enables scalable, energy-efficient AI models with significantly reduced computational demands.
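The proxy-based training loop described in the abstract (fit a differentiable simulator to the physical system, then backpropagate the error through that proxy to update the preceding electronic layers) can be sketched numerically. The following is a minimal illustration, not the paper's actual model: the "fiber" here is a toy nonlinear black box, the twin shares its functional form by construction, and all names, sizes, and the `tanh` nonlinearity are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Hypothetical stand-in for the multimode fiber: a fixed nonlinear
# transform we can only query (measure), not differentiate directly.
W_fiber = rng.normal(size=(dim, dim)) / np.sqrt(dim)
def fiber(x):
    return np.tanh(x @ W_fiber.T)

# Differentiable digital twin with a matching form: y ≈ tanh(x @ W_twin.T).
W_twin = 0.1 * rng.normal(size=(dim, dim))

# Step 1: update the simulator -- fit the twin to measured
# input/output pairs of the physical system by gradient descent.
X = rng.normal(size=(256, dim))
Y = fiber(X)                       # "measurements" from the hardware
lr = 0.5
for _ in range(500):
    Y_hat = np.tanh(X @ W_twin.T)
    E = Y_hat - Y
    # Gradient of mean 0.5*||Y_hat - Y||^2 w.r.t. W_twin
    grad = ((E * (1.0 - Y_hat**2)).T @ X) / len(X)
    W_twin -= lr * grad

# Step 2: hybrid training step. An electronic layer A precedes the fiber;
# the forward pass runs on the "hardware", but the error signal is
# backpropagated through the twin's Jacobian to update A.
A = 0.1 * rng.normal(size=(dim, dim))
u = rng.normal(size=dim)
target = rng.normal(size=dim)

x = A @ u
y_phys = fiber(x)                             # forward through the physical system
dL_dy = y_phys - target                       # gradient of 0.5*||y - target||^2
z = W_twin @ x
J = (1.0 - np.tanh(z)**2)[:, None] * W_twin   # twin Jacobian dy/dx
dL_dx = J.T @ dL_dy
dL_dA = np.outer(dL_dx, u)
A -= 0.1 * dL_dA                              # proxy-gradient update of the electronic layer
```

In a real setup the two steps would alternate: periodically refitting the twin on fresh measurements is what lets the framework absorb experimental drift, since the proxy tracks the current state of the optical system.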