Abstract
Neural coding, the conversion of visual information into spike patterns, is one of the central questions in neuroscience. However, existing encoding techniques require a preset time window and lack an effective learning mechanism. In this paper, we design an adaptive convolutional auto-encoder based on spiking neurons to overcome these two problems. We first exploit a spike-pixel mapping decoding approach to find the optimal value of the time window automatically. We then design a deep convolutional neural network that adapts its learning parameters through reconstruction errors to realize the spike encoding process. The resulting pre-trained coding parameters allow the convolutional spike coding layer to be unified with back-end deep spiking neural networks (SNNs) for recognition tasks. Simulation results demonstrate that the proposed method achieves better performance than other encoding methods.
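As a rough illustration of the pipeline described above, the sketch below implements a convolutional auto-encoder whose bottleneck is a layer of leaky integrate-and-fire (LIF) spiking neurons: a convolutional encoder turns pixels into spike trains over a time window T, a transposed convolution maps the accumulated spikes back to pixels (the spike-pixel decoding step), and the reconstruction error adapts the encoding parameters through a surrogate gradient. This is a minimal sketch assuming a PyTorch-style implementation; the class names, layer sizes, fixed time window, and rectangular surrogate gradient are illustrative assumptions, not the authors' exact design (which additionally searches for the time window adaptively).

```python
# Hypothetical sketch of a convolutional spiking auto-encoder trained on
# reconstruction error. Layer sizes, time window, and surrogate gradient
# are illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v, threshold=1.0):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the firing threshold.
        surrogate = (torch.abs(v - ctx.threshold) < 0.5).float()
        return grad_out * surrogate, None

class ConvSpikingAutoEncoder(nn.Module):
    def __init__(self, time_window=8, tau=2.0, threshold=1.0):
        super().__init__()
        self.T, self.tau, self.th = time_window, tau, threshold
        self.encoder = nn.Conv2d(1, 32, kernel_size=3, padding=1)          # pixels -> input current
        self.decoder = nn.ConvTranspose2d(32, 1, kernel_size=3, padding=1)  # spike rates -> pixels

    def forward(self, x):
        current = self.encoder(x)                 # analog input current per channel
        v = torch.zeros_like(current)             # LIF membrane potential
        spike_count = torch.zeros_like(current)
        for _ in range(self.T):                   # integrate over the time window
            v = v + (current - v) / self.tau      # leaky integration
            s = SpikeFn.apply(v, self.th)         # fire where v crosses the threshold
            v = v * (1.0 - s)                     # hard reset after a spike
            spike_count = spike_count + s
        rate = spike_count / self.T               # spike-rate code of the image
        recon = torch.sigmoid(self.decoder(rate)) # spike-pixel decoding (reconstruction)
        return recon, rate

# Training loop: the reconstruction error adapts the encoding parameters.
if __name__ == "__main__":
    model = ConvSpikingAutoEncoder(time_window=8)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    images = torch.rand(16, 1, 28, 28)            # stand-in for MNIST batches
    for _ in range(10):
        recon, _ = model(images)
        loss = nn.functional.mse_loss(recon, images)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In the paper's setting, such a pre-trained coding layer would then be attached to a back-end deep SNN for recognition; here the time window is fixed for brevity, whereas the paper determines it adaptively.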
Acknowledgments
This work was supported by the NSAF joint fund of the National Natural Science Foundation of China under Grant No. U2030204.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Zhu, C., Jiang, J., Jiang, R., Yan, R. (2023). An Adaptive Convolution Auto-encoder Based on Spiking Neurons. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Lecture Notes in Computer Science, vol 13624. Springer, Cham. https://doi.org/10.1007/978-3-031-30108-7_5
DOI: https://doi.org/10.1007/978-3-031-30108-7_5
Print ISBN: 978-3-031-30107-0
Online ISBN: 978-3-031-30108-7