
Towards Scalable GPU-Accelerated SNN Training via Temporal Fusion

  • Conference paper
  • In: Artificial Neural Networks and Machine Learning – ICANN 2024 (ICANN 2024)

Abstract

Drawing on the intricate structures of the brain, Spiking Neural Networks (SNNs) have emerged as a transformative development in artificial intelligence, closely emulating the complex dynamics of biological neural networks. While SNNs promise high efficiency on specialized sparse-computation hardware, their practical training still relies on conventional GPUs, where it often takes far longer than training traditional Artificial Neural Networks (ANNs), a significant hurdle for SNN research. To address this challenge, we present a novel temporal fusion method that accelerates the propagation dynamics of SNNs on GPU platforms and complements existing approaches for handling deep learning tasks with SNNs. The method was validated through extensive experiments in both authentic training scenarios and idealized conditions, confirming its efficacy and adaptability on single- and multi-GPU systems. Benchmarked against a range of existing SNN libraries and implementations, our method achieved speedups from \(5\times \) to \(40\times \) on NVIDIA A100 GPUs. Experimental code is publicly available at https://github.com/EMI-Group/snn-temporal-fusion.

This work was supported in part by Guangdong Natural Science Funds for Distinguished Young Scholar under Grant 2024B1515020019.

Y. Li and J. Li contributed equally to this work.
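
The abstract does not spell out the mechanism, but the core idea of temporal fusion, keeping the whole simulated time loop inside a single compiled GPU routine rather than launching work step by step from Python, can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration, not the authors' CUDA implementation (see the linked repository for that); the names lif_fused, tau, and v_th are assumptions for this example, and torch.jit.script merely stands in for kernel-level fusion:

```python
# Hypothetical sketch of "temporal fusion" for a leaky integrate-and-fire
# (LIF) layer: the entire T-step time loop runs inside one compiled
# function, amortizing per-timestep launch overhead. Not the paper's code.
import torch

@torch.jit.script
def lif_fused(x_seq: torch.Tensor, tau: float = 2.0, v_th: float = 1.0) -> torch.Tensor:
    # x_seq: [T, batch, features] input currents over all T timesteps.
    T = x_seq.shape[0]
    v = torch.zeros_like(x_seq[0])       # membrane potential, one state per neuron
    spikes = torch.empty_like(x_seq)     # output spike trains for all timesteps
    for t in range(T):
        v = v + (x_seq[t] - v) / tau     # leaky integration of the input current
        s = (v >= v_th).to(x_seq.dtype)  # emit a spike where the threshold is crossed
        v = v * (1.0 - s)                # hard reset of neurons that fired
        spikes[t] = s
    return spikes

# Usage: 16 timesteps, batch of 32, 128 neurons.
x = torch.rand(16, 32, 128, device="cuda" if torch.cuda.is_available() else "cpu")
print(lif_fused(x).shape)  # torch.Size([16, 32, 128])
```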

Author information

Corresponding author

Correspondence to Ran Cheng.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Li, Y., Li, J., Sun, K., Leng, L., Cheng, R. (2024). Towards Scalable GPU-Accelerated SNN Training via Temporal Fusion. In: Wand, M., Malinovská, K., Schmidhuber, J., Tetko, I.V. (eds) Artificial Neural Networks and Machine Learning – ICANN 2024. ICANN 2024. Lecture Notes in Computer Science, vol 15019. Springer, Cham. https://doi.org/10.1007/978-3-031-72341-4_5

  • DOI: https://doi.org/10.1007/978-3-031-72341-4_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72340-7

  • Online ISBN: 978-3-031-72341-4

  • eBook Packages: Computer Science, Computer Science (R0)
