
Learning a Continuous Attractor Neural Network from Real Images

  • Conference paper
Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10637)

Abstract

Continuous attractor neural networks (CANNs) have been widely used as a canonical model for neural information representation. It remains unclear, however, how the neural system acquires such a network structure in practice. In the present study, we propose a biologically plausible scheme for the neural system to learn a CANN from real images. The scheme addresses two key issues. The first is to generate high-level representations of objects, such that the correlation between neural representations reflects the semantic relationship between objects. We adopt a deep neural network trained on a large number of natural images to achieve this goal. The second is to learn correlated memory patterns in a recurrent neural network. We adopt a modified Hebb rule, which encodes the correlation between neural representations into the connection structure of the network. We carry out a number of experiments to demonstrate that when the presented images are linked by a continuous feature, the neural system learns a CANN successfully, in the sense that these images are stored as a continuous family of stationary states of the network, forming a sub-manifold of low energy in the network state space.

X. Zou and Z. Ji contributed equally to this work.
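
The abstract describes the pipeline only at a high level. The following minimal sketch illustrates the two ingredients under stated assumptions: feature vectors are taken to be pre-extracted from a deep network pretrained on natural images (e.g., a VGG-style CNN), and the covariance-style correction in hebbian_weights stands in for the paper's modified Hebb rule, whose exact form is not reproduced here. All function names, parameters, and the toy data are illustrative, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch only; not the authors' implementation.
# `patterns` is assumed to be a (P, N) array of high-level CNN feature
# vectors, one row per image along the continuous feature
# (e.g. views of an object under gradual rotation).

def hebbian_weights(patterns, c=1.0):
    """Covariance-style Hebb rule (assumed form): subtracting the mean
    pattern lets correlated memories be stored, which the plain
    Hopfield outer-product rule cannot handle."""
    _, N = patterns.shape
    X = patterns - c * patterns.mean(axis=0, keepdims=True)
    W = X.T @ X / N
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def relax(W, s0, steps=500, dt=0.05):
    """Run the recurrent dynamics until the state settles; each stored
    image should end up near a stationary state of the network."""
    s = s0.copy()
    for _ in range(steps):
        s = s + dt * (-s + np.tanh(W @ s))
    return s

def energy(W, s):
    """Hopfield-style quadratic energy; if a CANN has been learned, the
    stored images trace out a continuous sub-manifold of low energy."""
    return -0.5 * s @ W @ s

# Toy usage with random stand-in features:
rng = np.random.default_rng(0)
patterns = rng.standard_normal((20, 128))  # 20 "images", 128-dim features
W = hebbian_weights(patterns)
s_star = relax(W, patterns[3])
print("energy at stored pattern:", energy(W, patterns[3]))
print("energy after relaxation:", energy(W, s_star))
```

In the paper's setting, the random stand-in features would be replaced by CNN representations of images varying smoothly along one dimension, and one would check that the resulting stationary states form a continuous low-energy family rather than isolated minima.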


Acknowledgments

This work was supported by the Beijing Municipal Science and Technology Commission (BMSTC) under grants Z161100000216143 (SW) and Z171100000117007 (DHW & YYM), by the National Natural Science Foundation of China (31371109), and by the National Key Basic Research Program of China (2014CB846101).

Author information

Corresponding author

Correspondence to Si Wu.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Zou, X., Ji, Z., Liu, X., Mi, Y., Wong, K.Y.M., Wu, S. (2017). Learning a Continuous Attractor Neural Network from Real Images. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol. 10637. Springer, Cham. https://doi.org/10.1007/978-3-319-70093-9_66

  • DOI: https://doi.org/10.1007/978-3-319-70093-9_66

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70092-2

  • Online ISBN: 978-3-319-70093-9

  • eBook Packages: Computer Science, Computer Science (R0)
