🤖 AI Summary
This work addresses the lack of native spiking neural network (SNN) research tools on Apple Silicon platforms, where existing libraries predominantly rely on PyTorch or custom backends. The authors present the first native SNN library built on Apple’s MLX framework, integrating six neuron models—including LIF and Izhikevich—four surrogate gradients, four encoding schemes, and an EEG-specific encoder. By leveraging MLX features such as unified memory, lazy evaluation, and functional transformations (e.g., mx.grad and mx.compile), the library enables efficient temporal backpropagation. Evaluated on an M3 Max chip, it achieves 97.28% accuracy on MNIST while offering 2.0–2.5× faster training and 3–10× lower GPU memory consumption compared to snnTorch, demonstrating significant performance advantages.
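To make the "encoding schemes" concrete: rate encoding, the most common scheme, turns a static intensity (e.g., a pixel value) into a binary spike train whose firing rate matches the intensity. The sketch below is a generic NumPy illustration; the function name `rate_encode` and its signature are ours, not mlx-snn's actual API.

```python
import numpy as np

def rate_encode(x, num_steps, rng):
    # Rate (Bernoulli-style) encoding: each intensity in [0, 1] becomes the
    # per-step firing probability of a binary spike train.
    # Returns an array of shape (num_steps, *x.shape).
    return (rng.random((num_steps,) + x.shape) < x).astype(np.float32)

rng = np.random.default_rng(0)
pixels = np.array([0.0, 0.5, 1.0])        # three example intensities
spikes = rate_encode(pixels, num_steps=100, rng=rng)
print(spikes.mean(axis=0))                # firing rates approximate the intensities
```

Averaging over time recovers the original values, which is why rate-coded SNNs can be trained on standard datasets such as MNIST.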
📝 Abstract
We introduce mlx-snn, the first spiking neural network (SNN) library built natively on Apple's MLX framework. As SNN research grows rapidly, all major libraries -- snnTorch, Norse, SpikingJelly, Lava -- target PyTorch or custom backends, leaving Apple Silicon users without a native option. mlx-snn provides six neuron models (LIF, IF, Izhikevich, Adaptive LIF, Synaptic, Alpha), four surrogate gradient functions, four spike encoding methods (including an EEG-specific encoder), and a complete backpropagation-through-time training pipeline. The library leverages MLX's unified memory architecture, lazy evaluation, and composable function transforms (mx.grad, mx.compile) to enable efficient SNN research on Apple Silicon hardware. We validate mlx-snn on MNIST digit classification across five hyperparameter configurations and three backends, achieving up to 97.28% accuracy with 2.0--2.5 times faster training and 3--10 times lower GPU memory usage than snnTorch on the same M3 Max hardware. mlx-snn is open-source under the MIT license and available on PyPI. https://github.com/D-ST-Sword/mlx-snn
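The core mechanics behind the abstract's "neuron models" and "surrogate gradient functions" can be sketched without any framework: a leaky integrate-and-fire (LIF) neuron decays its membrane potential, integrates input, and emits a spike via a non-differentiable threshold; training replaces the threshold's zero-almost-everywhere derivative with a smooth surrogate. The names `lif_step` and `surrogate_grad`, the fast-sigmoid surrogate, and all constants below are illustrative assumptions, not mlx-snn's actual API.

```python
import numpy as np

def lif_step(v, i_in, beta=0.9, threshold=1.0):
    # One LIF time step: leak the membrane potential, integrate input current,
    # fire where the threshold is crossed, then soft-reset by subtraction.
    v = beta * v + i_in
    spike = (v >= threshold).astype(v.dtype)  # Heaviside step (non-differentiable)
    v = v - spike * threshold
    return v, spike

def surrogate_grad(v, threshold=1.0, slope=25.0):
    # Fast-sigmoid surrogate derivative, used in place of the Heaviside
    # gradient during backpropagation through time.
    return 1.0 / (slope * np.abs(v - threshold) + 1.0) ** 2

# Drive one neuron with a constant current over 10 time steps.
v = np.zeros(1)
spikes = []
for _ in range(10):
    v, s = lif_step(v, np.array([0.4]))
    spikes.append(int(s[0]))
print(spikes)  # a periodic spike train emerges from the leak/integrate cycle
```

In an MLX implementation, a loss over such a spike train would be differentiated with `mx.grad`, with the surrogate standing in for the threshold's derivative at each step of the unrolled loop.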