mlx-snn: Spiking Neural Networks on Apple Silicon via MLX
arXiv:2603.03529v1 Announce Type: new Abstract: We introduce mlx-snn, the first spiking neural network (SNN) library built natively on Apple's MLX framework. As SNN research grows rapidly, all major libraries -- snnTorch, Norse, SpikingJelly, Lava -- target PyTorch or custom backends, leaving Apple Silicon users without a native option. mlx-snn provides six neuron models (LIF, IF, Izhikevich, Adaptive LIF, Synaptic, Alpha), four surrogate gradient functions, four spike encoding methods (including an EEG-specific encoder), and a complete backpropagation-through-time training pipeline. The library leverages MLX's unified memory architecture, lazy evaluation, and composable function transforms (mx.grad, mx.compile) to enable efficient SNN research on Apple Silicon hardware. We validate mlx-snn on MNIST digit classification across five hyperparameter configurations and three backends, achieving up to 97.28% accuracy with 2.0--2.5 times faster training and 3--10 times lower GPU memory than snnTorch on the same M3 Max hardware. mlx-snn is open-source under the MIT license and available on PyPI. https://github.com/D-ST-Sword/mlx-snn
Executive Summary
The article introduces mlx-snn, a spiking neural network (SNN) library built natively on Apple's MLX framework, filling a gap for Apple Silicon users: existing libraries such as snnTorch, Norse, SpikingJelly, and Lava target PyTorch or custom backends. The library offers six neuron models, four surrogate gradient functions, four spike encoding methods, and a full backpropagation-through-time training pipeline. On MNIST digit classification it reaches up to 97.28% accuracy while training 2.0--2.5 times faster and using 3--10 times less GPU memory than snnTorch on the same M3 Max hardware.
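The leaky integrate-and-fire (LIF) neuron, one of the six models the summary mentions, can be sketched in a few lines. The following is a plain-NumPy illustration of the standard discrete-time LIF update (leak, integrate, fire, soft reset); it is not mlx-snn's actual API, and the function name and constants are illustrative only.

```python
import numpy as np

def lif_step(v, input_current, beta=0.9, threshold=1.0):
    """One discrete-time LIF update: leak, integrate, fire, soft reset."""
    v = beta * v + input_current               # leaky integration
    spikes = (v >= threshold).astype(v.dtype)  # Heaviside firing
    v = v - spikes * threshold                 # soft reset by subtraction
    return spikes, v

# Drive a single neuron with a constant current and record its spike train.
v = np.zeros(1)
spike_train = []
for _ in range(10):
    s, v = lif_step(v, np.full(1, 0.3))
    spike_train.append(int(s[0]))
```

With these constants the membrane potential charges toward threshold over a few steps, fires, resets, and repeats, producing a regular spike train whose rate depends on the input current.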
Key Points
- mlx-snn is the first SNN library built natively on Apple's MLX framework
- The library provides six neuron models, four surrogate gradient functions, and four spike encoding methods (including an EEG-specific encoder)
- mlx-snn reaches up to 97.28% accuracy on MNIST digit classification while training 2.0--2.5 times faster and using 3--10 times less GPU memory than snnTorch
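Surrogate gradient functions, of which the library ships four, work around the non-differentiable spike: the forward pass uses a hard threshold, while the backward pass substitutes a smooth stand-in derivative. A common choice is the fast-sigmoid surrogate, sketched here in plain NumPy; the function names and slope constant are illustrative assumptions, not mlx-snn's API.

```python
import numpy as np

def spike_forward(v, threshold=1.0):
    """Forward pass: the non-differentiable Heaviside step."""
    return (v >= threshold).astype(float)

def fast_sigmoid_surrogate(v, threshold=1.0, slope=25.0):
    """Backward pass: a smooth stand-in for the Heaviside derivative,
    the fast-sigmoid form 1 / (1 + slope * |v - threshold|)**2,
    which peaks at the threshold and decays away from it."""
    return 1.0 / (1.0 + slope * np.abs(v - threshold)) ** 2

v = np.array([0.5, 1.0, 1.5])
out = spike_forward(v)            # hard step used in the forward pass
grad = fast_sigmoid_surrogate(v)  # smooth derivative used in the backward pass
```

In a full BPTT pipeline the surrogate derivative replaces the step function's zero-almost-everywhere gradient, letting error signals flow through spike times.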
Merits
Native Integration
Built directly on MLX, mlx-snn exploits the framework's unified memory architecture, lazy evaluation, and composable function transforms (mx.grad, mx.compile) for efficient SNN research on Apple Silicon hardware
Comprehensive Features
The library offers six neuron models (LIF, IF, Izhikevich, Adaptive LIF, Synaptic, Alpha), four surrogate gradient functions, and four spike encoding methods (including an EEG-specific encoder), making it a versatile tool for SNN research
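Spike encoding converts static inputs such as MNIST pixel intensities into spike trains that SNNs can consume. Below is a minimal plain-NumPy sketch of Bernoulli rate coding, one common encoding scheme; it illustrates the idea only and is an assumed interface, not mlx-snn's encoder API.

```python
import numpy as np

def rate_encode(intensities, num_steps, rng):
    """Bernoulli rate coding: at each time step a pixel emits a spike
    with probability equal to its normalized intensity in [0, 1]."""
    p = np.clip(intensities, 0.0, 1.0)
    return (rng.random((num_steps, *p.shape)) < p).astype(np.uint8)

rng = np.random.default_rng(0)
pixels = np.array([0.0, 0.5, 1.0])               # normalized intensities
spikes = rate_encode(pixels, num_steps=1000, rng=rng)
rates = spikes.mean(axis=0)                       # empirical rates track intensities
```

Over many time steps the empirical firing rate of each input channel converges to its intensity, which is what makes rate coding a simple default for static image benchmarks.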
Demerits
Limited Compatibility
mlx-snn runs only on Apple Silicon hardware, so researchers on other platforms must continue to rely on PyTorch-based or custom-backend libraries such as snnTorch, Norse, SpikingJelly, or Lava
Expert Commentary
The introduction of mlx-snn marks a notable milestone for spiking neural network tooling: Apple Silicon users finally have a native option rather than routing PyTorch-based libraries through compatibility layers. Its comprehensive feature set and reported 2.0--2.5 times training speedups over snnTorch make it an attractive choice for researchers on M-series hardware. However, its restriction to Apple Silicon means workflows are not portable to other platforms, which may slow adoption in mixed-hardware research groups.
Recommendations
- Researchers with Apple Silicon hardware should evaluate mlx-snn's capabilities for their SNN experiments
- Developers of machine learning libraries should consider hardware-native backends, as mlx-snn does with MLX, to improve training speed and memory efficiency on specific platforms