Abstract
Recurrent Neural Networks (RNNs) are popular models of brain function. The typical training strategy is to adjust their input-output behavior so that it matches that of the biological circuit of interest. Even though this strategy ensures that the biological and artificial networks perform the same computational task, it does not guarantee that their internal activity dynamics match. This suggests that the trained RNNs might end up performing the task through a different internal computational mechanism. In this work, we introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics. We test the proposed method by training an RNN to simultaneously reproduce the internal dynamics and output signals of a physiologically inspired neural model of motor cortical and muscle activity dynamics. Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons sampled from the biological network. Furthermore, we show that training RNNs with this method significantly improves their generalization performance. Overall, our results suggest that the proposed method is suitable for building powerful functional RNN models, which automatically capture important computational properties of the biological circuit of interest from sparse neural recordings.
Supported by: BMBF FKZ 01GQ1704, HFSP RGP0036/2016, KONSENS-NHE BW Stiftung NEU007/1, DFG GZ: KA 1258/15-1, and ERC 2019-SyG-RELEVANCE-856495.
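For a concrete picture of the objective the abstract describes, the following is a minimal Python sketch of a combined cost that penalizes both output mismatch and the mismatch of internal activities for a sampled subset of units, plus the L2 penalty on the recurrent weights \(\mathbf{J}\) mentioned in note 2 below. All names and shapes are illustrative assumptions; the paper's actual procedure builds on target-based (full-FORCE-style [5]) training rather than direct gradient descent on such a loss.

```python
import numpy as np

def combined_loss(y_pred, y_true, r_pred, r_true, J, sampled_idx,
                  lam_dyn=1.0, lam_reg=1e-4):
    """Toy combined objective: penalize output error AND the mismatch of
    internal activities for a sampled subset of RNN units.

    y_pred, y_true : (T, n_out) produced and target output signals
    r_pred         : (T, N) activities of all RNN units
    r_true         : (T, n_sub) recorded activities of the sampled neurons
    J              : (N, N) recurrent weight matrix
    sampled_idx    : (n_sub,) indices of RNN units paired with recordings
    """
    output_err = np.mean((y_pred - y_true) ** 2)                    # task performance
    dynamics_err = np.mean((r_pred[:, sampled_idx] - r_true) ** 2)  # internal dynamics
    reg = lam_reg * np.sum(J ** 2)                                  # L2 penalty on J (note 2)
    return output_err + lam_dyn * dynamics_err + reg
```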
Notes
1. These input terms are strong enough to suppress chaotic activity in the network [5].
2. Following [5], we included an L2 regularization term for \(\mathbf{J}\).
3. To ensure a sufficiently strong effect on the embedder network, the hint signals were scaled by a factor of 5. In addition, the final 110 samples of these signals were replaced by a smooth decay to zero, modeled by spline interpolation; this ensured that the activities return to zero at the end of the movement phase (see the tapering sketch after this list).
4. We restricted our analysis to the first five singular vector canonical variables, which, on average, captured \({>}92\%\) of the original data variance (see the SVCCA sketch after this list).
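Note 3 describes tapering the scaled hint signals to zero with a spline. A minimal sketch of one way to implement this, assuming a 1-D hint signal and using SciPy's CubicSpline with clamped (zero-slope) boundary conditions; the exact spline construction used in the paper is not specified:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def taper_hint(hint, n_taper=110, scale=5.0):
    """Scale a 1-D hint signal and replace its final n_taper samples with a
    smooth cubic-spline decay to zero (cf. note 3)."""
    h = scale * np.asarray(hint, dtype=float)
    T = len(h)
    t0 = T - n_taper
    # Clamped cubic between the last untouched sample and zero, with zero
    # slope at both ends so the signal lands smoothly.
    cs = CubicSpline([t0 - 1, T - 1], [h[t0 - 1], 0.0],
                     bc_type=((1, 0.0), (1, 0.0)))
    h[t0:] = cs(np.arange(t0, T))
    return h
```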
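Note 4 refers to singular vector canonical correlation analysis (SVCCA, [14]). A minimal sketch of the comparison it implies: reduce each activity matrix to its top five singular-vector variables, then align them with CCA. The function names and the use of scikit-learn's CCA are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def svcca_score(X, Y, k=5):
    """Reduce each (T, n_units) activity matrix to its top-k singular-vector
    variables, align them with CCA, and return the mean canonical correlation."""
    def top_k(A):
        A = A - A.mean(axis=0)                       # center over time
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        return U[:, :k] * s[:k]                      # top-k SVD components
    Xc, Yc = CCA(n_components=k, max_iter=2000).fit_transform(top_k(X), top_k(Y))
    corrs = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(k)]
    return float(np.mean(corrs))
```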
References
1. Chaudhuri, R., Gercek, B., Pandey, B., Peyrache, A., Fiete, I.: The intrinsic attractor manifold and population dynamics of a canonical cognitive circuit across waking and sleep. Nat. Neurosci. 22(9), 1512–1520 (2019)
2. Churchland, M.M.: Variance toolbox (2010). https://churchland.zuckermaninstitute.columbia.edu/content/code
3. Churchland, M.M., et al.: Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nat. Neurosci. 13(3), 369–378 (2010)
4. Dayan, P., Abbott, L.F.: Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. The MIT Press, Cambridge (2001)
5. DePasquale, B., Cueva, C.J., Rajan, K., Escola, G.S., Abbott, L.F.: full-FORCE: a target-based method for training recurrent networks. PLoS ONE 13(2), e0191527 (2018)
6. Doya, K.: Universality of fully-connected recurrent neural networks. In: Proceedings of the 1992 IEEE International Symposium on Circuits and Systems, pp. 2777–2780 (1992)
7. Flash, T., Hochner, B.: Motor primitives in vertebrates and invertebrates. Curr. Opin. Neurobiol. 15(6), 660–666 (2005)
8. Georgopoulos, A.P., Kalaska, J.F., Massey, J.T.: Spatial trajectories and reaction times of aimed movements: effects of practice, uncertainty, and change in target location. J. Neurophysiol. 46(4), 725–743 (1981)
9. Hennequin, G., Vogels, T.P., Gerstner, W.: Optimal control of transient dynamics in balanced networks supports generation of complex movements. Neuron 82(6), 1394–1406 (2014)
10. Kim, C.M., Chow, C.C.: Learning recurrent dynamics in spiking networks. eLife 7, e37124 (2018)
11. Machens, C.K., Romo, R., Brody, C.D.: Flexible control of mutual inhibition: a neural model of two-interval discrimination. Science 307(5712), 1121–1124 (2005)
12. Mante, V., Sussillo, D., Shenoy, K.V., Newsome, W.T.: Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 503(7474), 78–84 (2013)
13. Matsuoka, K.: Mechanisms of frequency and pattern control in the neural rhythm generators. Biol. Cybern. 56(5–6), 345–353 (1987)
14. Raghu, M., Gilmer, J., Yosinski, J., Sohl-Dickstein, J.: SVCCA: singular vector canonical correlation analysis for deep learning dynamics and interpretability. In: Advances in Neural Information Processing Systems, pp. 6076–6085 (2017)
15. Saxena, S., Cunningham, J.P.: Towards the neural population doctrine. Curr. Opin. Neurobiol. 55, 103–111 (2019)
16. Schöner, G., Kelso, J.A.S.: Dynamic pattern generation in behavioral and neural systems. Science 239(4847), 1513–1520 (1988)
17. Shenoy, K.V., Sahani, M., Churchland, M.M.: Cortical control of arm movements: a dynamical systems perspective. Annu. Rev. Neurosci. 36, 337–359 (2013)
18. Stroud, J.P., Porter, M.A., Hennequin, G., Vogels, T.P.: Motor primitives in space and time via targeted gain modulation in cortical networks. Nat. Neurosci. 21(12), 1774–1783 (2018)
19. Sussillo, D.: Neural circuits as computational dynamical systems. Curr. Opin. Neurobiol. 25, 156–163 (2014)
20. Sussillo, D., Barak, O.: Opening the black box: low-dimensional dynamics in high-dimensional recurrent neural networks. Neural Comput. 25(3), 626–649 (2013)
21. Wang, J., Narain, D., Hosseini, E.A., Jazayeri, M.: Flexible timing by temporal scaling of cortical responses. Nat. Neurosci. 21(1), 102–110 (2018)
22. Williamson, R.C., et al.: Scaling properties of dimensionality reduction for neural populations and network models. PLoS Comput. Biol. 12(12), e1005141 (2016)
23. Williamson, R.C., Doiron, B., Smith, M.A., Yu, B.M.: Bridging large-scale neuronal recordings and large-scale network models using dimensionality reduction. Curr. Opin. Neurobiol. 55, 40–47 (2019)
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Salatiello, A., Giese, M.A. (2020). Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data. In: Farkaš, I., Masulli, P., Wermter, S. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2020. Lecture Notes in Computer Science, vol. 12396. Springer, Cham. https://doi.org/10.1007/978-3-030-61609-0_69
DOI: https://doi.org/10.1007/978-3-030-61609-0_69
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-61608-3
Online ISBN: 978-3-030-61609-0
eBook Packages: Computer Science, Computer Science (R0)