Abstract
Continuous attractor neural networks (CANNs) have been widely used as a canonical model for neural information representation. It remains unclear, however, how the neural system acquires such a network structure in practice. In the present study, we propose a biologically plausible scheme by which the neural system can learn a CANN from real images. The scheme addresses two key issues. One is to generate high-level representations of objects, such that the correlation between neural representations reflects the semantic relationship between the objects; we achieve this with a deep neural network trained on a large number of natural images. The other is to learn correlated memory patterns in a recurrent neural network; for this we adopt a modified Hebb rule, which encodes the correlation between neural representations into the connection pattern of the network. We carry out a number of experiments demonstrating that when the presented images are linked by a continuous feature, the neural system successfully learns a CANN, in the sense that these images are stored as a continuous family of stationary states of the network, forming a sub-manifold of low energy in the network state space.
X. Zou and Z. Ji contributed equally to this work.
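To make the two-stage scheme concrete, the sketch below stores image features as memories of a Hopfield-style recurrent network. Everything here is illustrative: the random feature extractor stands in for the deep network, the plain Hebb rule stands in for the paper's modified rule (whose exact form, designed to cope with correlated patterns, is not reproduced here), and names such as extract_features and view_10_deg are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images, dim=512):
    """Placeholder for the deep feature extractor (the paper uses a deep
    network trained on natural images); returns one vector per image."""
    return rng.standard_normal((len(images), dim))

def binarize(feats):
    """Map real-valued features to +/-1 patterns for Hopfield-style storage."""
    return np.where(feats >= 0, 1.0, -1.0)

def hebbian_weights(patterns):
    """Plain Hebb rule W = (1/N) sum_mu xi^mu (xi^mu)^T, no self-coupling.
    The paper adopts a *modified* Hebb rule suited to correlated patterns;
    this is only the classical baseline."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=20):
    """Iterate the network dynamics until it settles into a stationary state."""
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# Hypothetical sequence of images linked by a continuous feature,
# e.g. views of an object rotating in 10-degree steps.
images = [f"view_{10 * k}_deg" for k in range(36)]
patterns = binarize(extract_features(images))
W = hebbian_weights(patterns)

# A noisy cue relaxes back to the corresponding stored pattern.
cue = patterns[0].copy()
flip = rng.choice(len(cue), size=50, replace=False)
cue[flip] *= -1
recovered = recall(W, cue)
print("overlap with stored pattern:", recovered @ patterns[0] / len(cue))
```

With genuinely correlated features of a smoothly varying object, the plain rule above degrades, which is precisely the regime the paper's modified Hebb rule is designed to handle by turning nearby memories into a continuous family of stationary states.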
Acknowledgments
This work was supported by the Beijing Municipal Science and Technology Commission (BMSTC) under grants Z161100000216143 (SW) and Z171100000117007 (DHW and YYM), by the National Natural Science Foundation of China (31371109), and by the National Key Basic Research Program of China (2014CB846101).
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Zou, X., Ji, Z., Liu, X., Mi, Y., Wong, K.Y.M., Wu, S.: Learning a continuous attractor neural network from real images. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds.) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol. 10637. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-70093-9_66
Print ISBN: 978-3-319-70092-2
Online ISBN: 978-3-319-70093-9