{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,3,7]],"date-time":"2024-03-07T11:26:34Z","timestamp":1709810794328},"reference-count":60,"publisher":"Frontiers Media SA","license":[{"start":{"date-parts":[[2022,5,27]],"date-time":"2022-05-27T00:00:00Z","timestamp":1653609600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["frontiersin.org"],"crossmark-restriction":true},"short-container-title":["Front. Comput. Neurosci."],"abstract":"Neuroscience models commonly have a high number of degrees of freedom and only specific regions within the parameter space are able to produce dynamics of interest. This makes the development of tools and strategies to efficiently find these regions of high importance to advance brain research. Exploring the high dimensional parameter space using numerical simulations has been a frequently used technique in the last years in many areas of computational neuroscience. Today, high performance computing (HPC) can provide a powerful infrastructure to speed up explorations and increase our general understanding of the behavior of the model in reasonable times. Learning to learn (L2L) is a well-known concept in machine learning (ML) and a specific method for acquiring constraints to improve learning performance. This concept can be decomposed into a two loop optimization process where the target of optimization can consist of any program such as an artificial neural network, a spiking network, a single cell model, or a whole brain simulation. In this work, we present L2L as an easy to use and flexible framework to perform parameter and hyper-parameter space exploration of neuroscience models on HPC infrastructure. Learning to learn is an implementation of the L2L concept written in Python. This open-source software allows several instances of an optimization target to be executed with different parameters in an embarrassingly parallel fashion on HPC. L2L provides a set of built-in optimizer algorithms, which make adaptive and efficient exploration of parameter spaces possible. Different from other optimization toolboxes, L2L provides maximum flexibility for the way the optimization target can be executed. In this paper, we show a variety of examples of neuroscience models being optimized within the L2L framework to execute different types of tasks. The tasks used to illustrate the concept go from reproducing empirical data to learning how to solve a problem in a dynamic environment. We particularly focus on simulations with models ranging from the single cell to the whole brain and using a variety of simulation engines like NEST, Arbor, TVB, OpenAIGym, and NetLogo.<\/jats:p>","DOI":"10.3389\/fncom.2022.885207","type":"journal-article","created":{"date-parts":[[2022,5,27]],"date-time":"2022-05-27T14:50:53Z","timestamp":1653663053000},"update-policy":"http:\/\/dx.doi.org\/10.3389\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["Exploring Parameter and Hyper-Parameter Spaces of Neuroscience Models on High Performance Computers With Learning to Learn"],"prefix":"10.3389","volume":"16","author":[{"given":"Alper","family":"Yegenoglu","sequence":"first","affiliation":[]},{"given":"Anand","family":"Subramoney","sequence":"additional","affiliation":[]},{"given":"Thorsten","family":"Hater","sequence":"additional","affiliation":[]},{"given":"Cristian","family":"Jimenez-Romero","sequence":"additional","affiliation":[]},{"given":"Wouter","family":"Klijn","sequence":"additional","affiliation":[]},{"given":"Aar\u00f3n","family":"P\u00e9rez Mart\u00edn","sequence":"additional","affiliation":[]},{"given":"Michiel","family":"van der Vlag","sequence":"additional","affiliation":[]},{"given":"Michael","family":"Herty","sequence":"additional","affiliation":[]},{"given":"Abigail","family":"Morrison","sequence":"additional","affiliation":[]},{"given":"Sandra","family":"Diaz-Pier","sequence":"additional","affiliation":[]}],"member":"1965","published-online":{"date-parts":[[2022,5,27]]},"reference":[{"key":"B1","doi-asserted-by":"crossref","first-page":"274","DOI":"10.1109\/EMPDP.2019.8671560","article-title":"\u201cArbor - a morphologically-detailed neural network simulation library for contemporary high-performance computing architectures,\u201d","volume-title":"2019 27th Euromicro International Conference on Parallel, Distributed and Network-Based Processing (PDP)","author":"Akar","year":"2019"},{"key":"B2","first-page":"3981","article-title":"\u201cLearning to learn by gradient descent by gradient descent,\u201d","volume-title":"Advances in Neural Information Processing Systems","author":"Andrychowicz","year":"2016"},{"key":"B3","article-title":"How to train your MAML","volume-title":"arXiv preprint arXiv:1810.09502","author":"Antoniou","year":"2018"},{"key":"B4","doi-asserted-by":"publisher","first-page":"42","DOI":"10.1016\/j.conb.2018.04.014","article-title":"Personalized brain network models for assessing structure-function relationships","volume":"52","author":"Bansal","year":"2018","journal-title":"Curr. Opin Neurobiol"},{"key":"B5","first-page":"281","article-title":"Random search for hyper-parameter optimization","volume":"13","author":"Bergstra","year":"2012","journal-title":"J. Mach. Learn. Res"},{"key":"B6","article-title":"Openai gym","volume-title":"arXiv preprint arXiv:1606.01540","author":"Brockman","year":"2016"},{"key":"B7","article-title":"\u201cLearning to optimize in swarms,\u201d","volume-title":"Advances in Neural Information Processing Systems, Vol. 32","author":"Cao","year":"2019"},{"key":"B8","doi-asserted-by":"publisher","first-page":"7910","DOI":"10.1523\/JNEUROSCI.4423-13.2014","article-title":"Identification of optimal structural connectivity using functional connectivity and neural modeling","volume":"34","author":"Deco","year":"2014","journal-title":"J. Neurosci"},{"key":"B9","volume-title":"NEST 3.1.","author":"Deepu","year":"2021"},{"key":"B10","doi-asserted-by":"publisher","first-page":"2007","DOI":"10.3389\/neuro.01.1.1.001.2007","article-title":"A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data","volume":"1","author":"Druckmann","year":"2007","journal-title":"Front. Neurosci"},{"key":"B11","first-page":"1126","article-title":"\u201cModel-agnostic meta-learning for fast adaptation of deep networks,\u201d","volume-title":"International Conference on Machine Learning","author":"Finn","year":"2017"},{"key":"B12","article-title":"Meta-learning and universality: deep representations and gradient descent can approximate any learning algorithm","volume-title":"arXiv:1710.11622 [cs]","author":"Finn","year":"2017"},{"key":"B13","first-page":"1920","article-title":"\u201c Online meta-learning,\u201d","volume-title":"International Conference on Machine Learning","author":"Finn","year":"2019"},{"key":"B14","unstructured":"\u201cProbabilistic model-agnostic meta-learning,\u201d114\n FinnC.\n XuK.\n LevineS.\n 35168359Advances in Neural Information Processing System, vol. 312018"},{"key":"B15","doi-asserted-by":"publisher","first-page":"2171","DOI":"10.1145\/2330784.2330799","article-title":"DEAP: evolutionary algorithms made easy","volume":"13","author":"Fortin","year":"2012","journal-title":"J. Mach. Learn. Res"},{"key":"B16","doi-asserted-by":"crossref","DOI":"10.4249\/scholarpedia.1430","article-title":"Nest (neural simulation tool)","volume-title":"Scholarpedia","author":"Gewaltig","year":"2007"},{"key":"B17","article-title":"Meta-learning probabilistic inference for prediction","volume-title":"arXiv preprint arXiv:1805.09921","author":"Gordon","year":"2018"},{"key":"B18","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41467-017-02718-3","article-title":"Systematic generation of biophysically detailed models for diverse cortical neuron types","volume":"9","author":"Gouwens","year":"2018","journal-title":"Nat. Commun"},{"key":"B19","doi-asserted-by":"crossref","DOI":"10.1016\/j.knosys.2020.106622","article-title":"AutoML: a survey of the state-of-the-art","volume-title":"Knowl. Based Syst","author":"He","year":"2021"},{"key":"B20","first-page":"136","article-title":"\u201cVariable metric reinforcement learning methods applied to the noisy mountain car problem,\u201d","volume-title":"Recent Advances in Reinforcement Learning. EWRL 2008. Lecture Notes in Computer Science, vol. 5323","author":"Heidrich-Meisner","year":"2008"},{"key":"B21","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1145\/2616498.2616565","article-title":"\u201cOnce you SCOOP, no need to fork,\u201d","volume-title":"Proceedings of the 2014 Annual Conference on Extreme Science and Engineering Discovery Environment","author":"Hold-Geoffroy","year":"2014"},{"key":"B22","doi-asserted-by":"publisher","first-page":"2035","DOI":"10.1073\/pnas.0811168106","article-title":"Predicting human resting-state functional connectivity from structural connectivity","volume":"106","author":"Honey","year":"2009","journal-title":"Proc. Natl. Acad. Sci. U.S.A"},{"key":"B23","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-05318-5","volume-title":"Automated Machine Learning-Methods, Systems, Challenges","author":"Hutter","year":"2019"},{"key":"B24","doi-asserted-by":"crossref","DOI":"10.1088\/0266-5611\/29\/4\/045001","article-title":"Ensemble kalman methods for inverse problems","volume-title":"Inverse Probl","author":"Iglesias","year":"2013"},{"key":"B25","article-title":"Population based training of neural networks","volume-title":"arXiv preprint arXiv:1711.09846","author":"Jaderberg","year":"2017"},{"key":"B26","doi-asserted-by":"publisher","first-page":"755","DOI":"10.1007\/s00521-016-2398-1","article-title":"SpikingLab: modelling agents controlled by spiking neural networks in netlogo","volume":"28","author":"Jimenez-Romero","year":"2017","journal-title":"Neural Comput. Appl"},{"key":"B27","doi-asserted-by":"publisher","first-page":"2","DOI":"10.3389\/fninf.2018.00002","article-title":"Extremely scalable spiking neuronal network simulation code: from laptops to exascale computers","volume":"12","author":"Jordan","year":"2018","journal-title":"Front. Neuroinform"},{"key":"B28","doi-asserted-by":"crossref","first-page":"1942","DOI":"10.1109\/ICNN.1995.488968","article-title":"\u201cParticle swarm optimization,\u201d","volume-title":"Proceedings of ICNN'95-International Conference on Neural Networks, vol. 4","author":"Kennedy","year":"1995"},{"key":"B29","first-page":"18","article-title":"MNIST Handwritten Digit Database. ATandT Labs [Online]. Available: http:\/\/yann. lecun","volume":"2","author":"LeCun","year":"2010","journal-title":"com\/exdb\/mnist"},{"key":"B30","doi-asserted-by":"publisher","first-page":"168","DOI":"10.1038\/nature05453","article-title":"Genome-wide atlas of gene expression in the adult mouse brain","volume":"445","author":"Lein","year":"2007","journal-title":"Nature"},{"key":"B31","article-title":"Meta-sgd: Learning to learn quickly for few-shot learning","volume-title":"arXiv preprint arXiv:1707.09835","author":"Li","year":"2017"},{"key":"B32","doi-asserted-by":"publisher","first-page":"2531","DOI":"10.1162\/089976602760407955","article-title":"Real-time computing without stable states: a new framework for neural computation based on perturbations","volume":"14","author":"Maass","year":"2002","journal-title":"Neural Comput"},{"key":"B33","volume-title":"Metaheuristic and Evolutionary Computation: Algorithms and Applications","author":"Malik","year":"2021"},{"key":"B34","volume-title":"Metaheuristic Optimization: Nature-Inspired Algorithms Swarm and Computational Intelligence, Theory and Applications, volume 927","author":"Okwu","year":"2020"},{"key":"B35","doi-asserted-by":"crossref","DOI":"10.1007\/978-3-030-70542-8","volume-title":"Metaheuristics in Machine Learning: Theory and Applications","author":"Oliva","year":"2021"},{"key":"B36","doi-asserted-by":"publisher","first-page":"37113","DOI":"10.1063\/1.2930766","article-title":"Low dimensional behavior of large systems of globally coupled oscillators","volume":"18","author":"Ott","year":"2008","journal-title":"Chaos"},{"key":"B37","volume-title":"Norse - A Deep Learning Library for Spiking Neural Networks","author":"Pehle","year":"2021"},{"key":"B38","article-title":"NengoDL: combining deep learning and neuromorphic modelling methods","volume-title":"arXiv 1805.11144:1\u201322","author":"Rasmussen","year":"2018"},{"key":"B39","article-title":"\u201cOptimization as a model for few-shot learning,\u201d","volume-title":"International Conference on Learning Representations (ICLR)","author":"Ravi","year":"2017"},{"key":"B40","article-title":"Evolution strategies as a scalable alternative to reinforcement learning","volume-title":"arXiv preprint arXiv:1703.03864","author":"Salimans","year":"2017"},{"key":"B41","doi-asserted-by":"publisher","first-page":"10","DOI":"10.3389\/fninf.2013.00010","article-title":"The Virtual Brain: a simulator of primate brain network dynamics","volume":"7","author":"Sanz Leon","year":"2013","journal-title":"Front. Neuroinform"},{"key":"B42","article-title":"Es-maml: simple hessian-free meta learning","volume-title":"arXiv preprint arXiv:1910.01215","author":"Song","year":"2019"},{"key":"B43","first-page":"51","article-title":"\u201cUsing performance analysis tools for a parallel-in-time integrator,\u201d","volume-title":"Parallel-in-Time Integration Methods, volume 356 of Springer Proceedings in Mathematics and Statistics, Cham 9th Workshop on Parallel-in-Time Integration, online (online), 8 Jun 2020 - 12 Jun 2020","author":"Speck","year":"2021"},{"key":"B44","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1038\/s42256-018-0006-z","article-title":"Designing neural networks through neuroevolution","volume":"1","author":"Stanley","year":"2019","journal-title":"Nat. Mach. Intell"},{"key":"B45","doi-asserted-by":"publisher","first-page":"357","DOI":"10.1016\/S0927-5452(05)80018-8","article-title":"UNICOREfrom project results to production grids","volume":"14","author":"Streit","year":"2005","journal-title":"Adv. Parallel Comput"},{"key":"B46","article-title":"Deep neuroevolution: genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning","volume-title":"ArXiv","author":"Such","year":"2017"},{"key":"B47","volume-title":"Learning to Learn","author":"Thrun","year":"2012"},{"key":"B48","first-page":"16","article-title":"\u201cNetlogo: a simple environment for modeling complexity,\u201d","volume-title":"International Conference on Complex Systems, vol. 21","author":"Tisue","year":"2004"},{"key":"B49","doi-asserted-by":"publisher","first-page":"826345","DOI":"10.3389\/fnetp.2022.826345","article-title":"RateML: a code generation tool for brain network models (accepted)","volume":"2","author":"van der Vlag","year":"2022","journal-title":"Front. Netw. Physiol"},{"key":"B50","doi-asserted-by":"publisher","first-page":"17","DOI":"10.3389\/fninf.2016.00017","article-title":"BluePyOpt: leveraging open source software and cloud infrastructure to optimise model parameters in neuroscience","volume":"10","author":"Van Geit","year":"2016","journal-title":"Front. Neuroinform"},{"key":"B51","doi-asserted-by":"publisher","first-page":"543872","DOI":"10.3389\/fncom.2021.543872","article-title":"Unsupervised learning and clustered connectivity enhance reinforcement learning in spiking neural networks","volume":"15","author":"Weidel","year":"2021","journal-title":"Front. Comput. Neurosci"},{"key":"B52","doi-asserted-by":"publisher","first-page":"949","DOI":"10.48550\/arXiv.1106.4487","article-title":"Natural evolution strategies","volume":"15","author":"Wierstra","year":"2014","journal-title":"J. Mach. Learn. Res"},{"key":"B53","doi-asserted-by":"publisher","first-page":"504","DOI":"10.3389\/fnins.2019.00504","article-title":"Analysis of liquid ensembles for enhancing the performance and accuracy of liquid state machines","volume":"13","author":"Wijesinghe","year":"2019","journal-title":"Front. Neurosci"},{"key":"B54","volume-title":"Netlogo Ants Model","author":"Wilensky","year":"1997"},{"key":"B55","doi-asserted-by":"crossref","first-page":"78","DOI":"10.1007\/978-3-030-64580-9_7","article-title":"\u201cEnsemble kalman filter optimizing deep neural networks: an alternative approach to non-performing gradient descent,\u201d","volume-title":"International Conference on Machine Learning, Optimization, and Data Science","author":"Yegenoglu","year":"2020"},{"key":"B56","doi-asserted-by":"crossref","first-page":"44","DOI":"10.1007\/10968987_3","article-title":"\u201cSlurm: simple linux utility for resource management,\u201d","volume-title":"Workshop on Job Scheduling Strategies for Parallel Processing","author":"Yoo","year":"2003"},{"key":"B57","first-page":"31","article-title":"Bayesian model-agnostic meta-learning","volume-title":"Adv. Neural Inf. Process. Syst","author":"Yoon","year":"2018"},{"key":"B58","first-page":"4185","article-title":"\u201cMetatrace actor-critic: online step-size tuning by meta-gradient descent for reinforcement learning control,\u201d","volume-title":"Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence Main Track","author":"Young","year":"2019"},{"key":"B59","doi-asserted-by":"publisher","first-page":"12","DOI":"10.1016\/j.neucom.2020.04.079","article-title":"Surrogate-assisted evolutionary search of spiking neural architectures in liquid state machines","volume":"406","author":"Zhou","year":"2020","journal-title":"Neurocomputing"},{"key":"B60","article-title":"Neural architecture search with reinforcement learning","volume-title":"arXiv:1611.01578 [cs]","author":"Zoph","year":"2016"}],"container-title":["Frontiers in Computational Neuroscience"],"original-title":[],"link":[{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/fncom.2022.885207\/full","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,6,1]],"date-time":"2022-06-01T16:23:05Z","timestamp":1654100585000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.frontiersin.org\/articles\/10.3389\/fncom.2022.885207\/full"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,5,27]]},"references-count":60,"alternative-id":["10.3389\/fncom.2022.885207"],"URL":"https:\/\/doi.org\/10.3389\/fncom.2022.885207","relation":{},"ISSN":["1662-5188"],"issn-type":[{"value":"1662-5188","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,5,27]]}}}