Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data | SpringerLink

Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2020 (ICANN 2020)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12396)


Abstract

Recurrent Neural Networks (RNNs) are popular models of brain function. The typical training strategy adjusts their input-output behavior to match that of the biological circuit of interest. Although this strategy ensures that the biological and artificial networks perform the same computational task, it does not guarantee that their internal activity dynamics match; the trained RNNs might therefore end up performing the task through a different internal computational mechanism. In this work, we introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics. We test the proposed method by training an RNN to simultaneously reproduce the internal dynamics and output signals of a physiologically inspired neural model of motor cortical and muscle activity dynamics. Remarkably, we show that the reproduction of the internal dynamics succeeds even when the training algorithm relies on the activities of only a small subset of neurons sampled from the biological network. Furthermore, we show that training RNNs with this method significantly improves their generalization performance. Overall, our results suggest that the proposed method is suitable for building powerful functional RNN models, which automatically capture important computational properties of the biological circuit of interest from sparse neural recordings.
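The abstract describes the approach without formulas. Purely as an illustration of the kind of combined objective such a training strategy implies — penalizing output error plus the mismatch between a subset of RNN units and sparsely recorded neurons — here is a minimal sketch. All names, shapes, and the weight `lam` are our assumptions, and the activities are random placeholders rather than trained values:

```python
import numpy as np

# Hypothetical sketch (not the authors' code): a combined loss penalizing both
# output error and the mismatch between a small subset of RNN units and the
# "recorded" activities of neurons sampled from the target network.
rng = np.random.default_rng(0)

T, N, n_rec, n_out = 200, 100, 10, 2  # time steps, RNN units, recorded units, outputs

r = np.tanh(rng.standard_normal((T, N)))             # RNN unit activities (placeholder)
z = r @ rng.standard_normal((N, n_out))              # RNN readout
r_target = np.tanh(rng.standard_normal((T, n_rec)))  # sparse recorded activities
z_target = rng.standard_normal((T, n_out))           # target output signals

subset = rng.choice(N, size=n_rec, replace=False)    # RNN units paired with recordings

lam = 1.0  # relative weight of the internal-dynamics term (assumed hyperparameter)
loss_output = np.mean((z - z_target) ** 2)           # input-output behavior term
loss_dynamics = np.mean((r[:, subset] - r_target) ** 2)  # internal-dynamics term
loss = loss_output + lam * loss_dynamics
```

With `lam = 0` this reduces to the standard output-only training objective discussed in the abstract; the internal-dynamics term is what constrains the network's intrinsic activity.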

Supported by: BMBF FKZ 01GQ1704, HFSP RGP0036/2016, KONSENS-NHE BW Stiftung NEU007/1, DFG GZ: KA 1258/15-1, and ERC 2019-SyG-RELEVANCE-856495.



Notes

  1. These input terms are strong enough to suppress chaotic activity in the network [5].

  2. Following [5], we included an L2 regularization term for \(\mathbf{J}\).

  3. To ensure a sufficiently strong effect on the embedder network, the hint signals were scaled by a factor of 5. In addition, the final 110 samples of such signals were replaced by a smooth decay to zero, modeled by spline interpolation. This ensured that the activities go back to zero at the end of the movement phase.

  4. We restricted our analysis to the first five singular vector canonical variables, which, on average, captured \({>}92\%\) of the original data variance.
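Footnote 4 refers to the SVCCA analysis of [14]: each activity matrix is reduced to its top singular directions, and canonical correlations are then computed between the reduced representations. The following is a hedged sketch of that procedure, not the authors' code; the function names and the toy data (two matrices driven by shared low-dimensional latents) are ours:

```python
import numpy as np

def svcca(X, Y, k=5):
    """Top-k canonical correlations between (time, units) activity matrices X and Y."""
    def top_components(M, k):
        M = M - M.mean(axis=0)                     # center over time
        U, S, _ = np.linalg.svd(M, full_matrices=False)
        return U[:, :k] * S[:k]                    # projections onto top-k directions
    A, B = top_components(X, k), top_components(Y, k)
    # Whiten each reduced representation; singular values of the cross-product
    # of the whitened bases are the canonical correlations.
    Ua, _, _ = np.linalg.svd(A, full_matrices=False)
    Ub, _, _ = np.linalg.svd(B, full_matrices=False)
    return np.linalg.svd(Ua.T @ Ub, compute_uv=False)  # sorted descending

def variance_captured(M, k=5):
    """Fraction of variance explained by the top-k singular directions (cf. footnote 4)."""
    s = np.linalg.svd(M - M.mean(axis=0), compute_uv=False)
    return (s[:k] ** 2).sum() / (s ** 2).sum()

# Toy data: two populations driven by the same 5-dimensional latents,
# so their top-5 subspaces should be strongly correlated.
rng = np.random.default_rng(0)
L = rng.standard_normal((200, 5))                              # shared latents
X = L @ rng.standard_normal((5, 50)) + 0.1 * rng.standard_normal((200, 50))
Y = L @ rng.standard_normal((5, 30)) + 0.1 * rng.standard_normal((200, 30))
print(np.round(svcca(X, Y), 3))   # high correlations, since X and Y share latents
```

The `variance_captured` helper mirrors the footnote's check that the first five singular vector canonical variables account for most of the data variance before correlations are compared.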

References

  1. Chaudhuri, R., Gercek, B., Pandey, B., Peyrache, A., Fiete, I.: The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nat. Neurosci. 22(9), 1512–1520 (2019)

  2. Churchland, M.M.: Variance toolbox (2010). https://churchland.zuckermaninstitute.columbia.edu/content/code

  3. Churchland, M.M., et al.: Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nat. Neurosci. 13(3), 369 (2010)

  4. Dayan, P., Abbott, L.F.: Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. The MIT Press, Cambridge (2001)

  5. DePasquale, B., Cueva, C.J., Rajan, K., Escola, G.S., Abbott, L.F.: full-FORCE: a target-based method for training recurrent networks. PLoS ONE 13(2) (2018)

  6. Doya, K.: Universality of fully-connected recurrent neural networks. In: Proceedings of 1992 IEEE International Symposium on Circuits and Systems, pp. 2777–2780 (1992)

  7. Flash, T., Hochner, B.: Motor primitives in vertebrates and invertebrates. Curr. Opin. Neurobiol. 15(6), 660–666 (2005)

  8. Georgopoulos, A.P., Kalaska, J.F., Massey, J.T.: Spatial trajectories and reaction times of aimed movements: effects of practice, uncertainty, and change in target location. J. Neurophysiol. 46(4), 725–743 (1981)

  9. Hennequin, G., Vogels, T.P., Gerstner, W.: Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron 82(6), 1394–1406 (2014)

  10. Kim, C.M., Chow, C.C.: Learning recurrent dynamics in spiking networks. eLife 7, e37124 (2018)

  11. Machens, C.K., Romo, R., Brody, C.D.: Flexible control of mutual inhibition: a neural model of two-interval discrimination. Science 307(5712), 1121–1124 (2005)

  12. Mante, V., Sussillo, D., Shenoy, K.V., Newsome, W.T.: Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503(7474), 78–84 (2013)

  13. Matsuoka, K.: Mechanisms of frequency and pattern control in the neural rhythm generators. Biol. Cybern. 56(5–6), 345–353 (1987)

  14. Raghu, M., Gilmer, J., Yosinski, J., Sohl-Dickstein, J.: SVCCA: singular vector canonical correlation analysis for deep learning dynamics and interpretability. In: Advances in Neural Information Processing Systems, pp. 6076–6085 (2017)

  15. Saxena, S., Cunningham, J.P.: Towards the neural population doctrine. Curr. Opin. Neurobiol. 55, 103–111 (2019)

  16. Schöner, G., Kelso, J.: Dynamic pattern generation in behavioral and neural systems. Science 239(4847), 1513–1520 (1988)

  17. Shenoy, K.V., Sahani, M., Churchland, M.M.: Cortical control of arm movements: a dynamical systems perspective. Annu. Rev. Neurosci. 36, 337–359 (2013)

  18. Stroud, J.P., Porter, M.A., Hennequin, G., Vogels, T.P.: Motor primitives in space and time via targeted gain modulation in cortical networks. Nat. Neurosci. 21(12), 1774–1783 (2018)

  19. Sussillo, D.: Neural circuits as computational dynamical systems. Curr. Opin. Neurobiol. 25, 156–163 (2014)

  20. Sussillo, D., Barak, O.: Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Comput. 25(3), 626–649 (2013)

  21. Wang, J., Narain, D., Hosseini, E.A., Jazayeri, M.: Flexible timing by temporal scaling of cortical responses. Nat. Neurosci. 21(1), 102–110 (2018)

  22. Williamson, R.C., et al.: Scaling properties of dimensionality reduction for neural populations and network models. PLoS Comput. Biol. 12, e1005141 (2016)

  23. Williamson, R.C., Doiron, B., Smith, M.A., Yu, B.M.: Bridging large-scale neuronal recordings and large-scale network models using dimensionality reduction. Curr. Opin. Neurobiol. 55, 40–47 (2019)


Author information

Correspondence to Alessandro Salatiello.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Salatiello, A., Giese, M.A. (2020). Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data. In: Farkaš, I., Masulli, P., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2020. ICANN 2020. Lecture Notes in Computer Science, vol 12396. Springer, Cham. https://doi.org/10.1007/978-3-030-61609-0_69


  • DOI: https://doi.org/10.1007/978-3-030-61609-0_69

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61608-3

  • Online ISBN: 978-3-030-61609-0

  • eBook Packages: Computer Science, Computer Science (R0)
