Chinami Marushima, Toyohiro Aoki, et al.
ECTC 2022
Deep spiking neural networks (SNNs) attempt to mimic brain function and therefore offer the promise of low-power artificial intelligence. However, training deep and sparse SNNs, or converting deep ANNs to sparse SNNs without loss of performance, has been a challenge. We assume that a multi-layer network of Rectified Linear Units (ReLUs) has been trained to high performance on some training set. Furthermore, we assume that we have access to the training data (at least its input vectors, not necessarily the target outputs) as well as to the exact parameters (weights and thresholds) of the ReLU network. Then there is an exact mapping from such a network to a spiking neural network (SNN) with exactly one spike per hidden-layer neuron. ReLU networks with fully connected, convolutional, max-pooling, batch-normalization and dropout layers are considered. We also account for input on an arbitrary range as well as zero padding in the convolutional layers. The algorithm is tested on data classification tasks and multiple datasets of different sizes and complexity, from MNIST, Fashion-MNIST, CIFAR10 and CIFAR100, to Places365 and PASS. As a corollary of the exact mapping, the SNN achieves the same accuracy on the test set as the original ReLU network. This implies that deep ReLU networks can be replaced with energy-efficient single-spike neural networks without any loss of performance.
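A minimal sketch of the underlying idea, not the authors' exact construction: with time-to-first-spike (TTFS) coding, a bounded ReLU activation can be encoded as the time of a single spike and decoded back exactly, which is why a lossless one-spike-per-neuron mapping is possible. The coding window `T`, activation bound `A_MAX`, and the `encode_ttfs`/`decode_ttfs` helpers below are illustrative assumptions; the paper's algorithm additionally rescales weights and thresholds layer by layer.

```python
import numpy as np

# Assumed TTFS convention: activation a in [0, A_MAX] spikes at
# t = T * (1 - a / A_MAX), so larger activations spike earlier and
# a zero activation spikes at the end of the window.
T = 1.0        # assumed coding window per layer
A_MAX = 4.0    # assumed upper bound on activations (e.g. estimated from training inputs)

def relu(x):
    return np.maximum(x, 0.0)

def encode_ttfs(a, a_max=A_MAX, t_window=T):
    """Map a ReLU activation to the time of its single spike."""
    a = np.clip(a, 0.0, a_max)
    return t_window * (1.0 - a / a_max)

def decode_ttfs(t_spike, a_max=A_MAX, t_window=T):
    """Recover the activation from the spike time (exact inverse of encode_ttfs)."""
    return a_max * (1.0 - t_spike / t_window)

# One hidden layer of a trained ReLU network (weights and bias assumed given).
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 5)), rng.normal(size=3)
x = rng.normal(size=5)

a = relu(W @ x + b)                  # analog ReLU activations
t = encode_ttfs(a)                   # one spike time per hidden neuron
a_reconstructed = decode_ttfs(t)     # exact recovery within the clipped range

assert np.allclose(np.clip(a, 0.0, A_MAX), a_reconstructed)
```

Because the decoding is the exact inverse of the encoding within the assumed activation range, no information is lost at any layer, which is consistent with the abstract's claim that the converted SNN matches the ReLU network's test accuracy.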
Matteo Baldo, Mario Allegra, et al.
IEDM 2025
Karthik Swaminathan, Martin Cochet, et al.
ISCA 2025
Olivier Maher, N. Harnack, et al.
DRC 2023