Rigorous Neural Network Simulations: A Model Substantiation Methodology for Increasing the Correctness of Simulation Results in the Absence of Experimental Validation Data

Guido Trensch et al. Front. Neuroinform. 2018 Nov 26;12:81. doi: 10.3389/fninf.2018.00081. eCollection 2018.

Abstract

The reproduction and replication of scientific results is an indispensable aspect of good scientific practice, enabling previous studies to be built upon and increasing our level of confidence in them. However, reproducibility and replicability are not sufficient: an incorrect result will be accurately reproduced if the same incorrect methods are used. For the field of simulations of complex neural networks, the causes of incorrect results range from flawed model implementations and data analysis methods, through deficiencies in workmanship (e.g., simulation planning, setup, and execution), to errors induced by hardware constraints (e.g., limitations in numerical precision). In order to build credibility, methods such as verification and validation have been developed, but they are not yet well established in the field of neural network modeling and simulation, partly due to ambiguity concerning the terminology. In this manuscript, we propose a terminology for model verification and validation in the field of neural network modeling and simulation. We outline a rigorous workflow, derived from model verification and validation methodologies, for increasing model credibility when it is not possible to validate against experimental data. We compare a published minimal spiking network model capable of exhibiting the development of polychronous groups to its reproduction on the SpiNNaker neuromorphic system, considering the dynamics of several selected network states. By following this formalized process, we show that numerical accuracy is critically important, and that even small deviations in the dynamics of individual neurons are expressed in the dynamics at the network level.

Keywords: SpiNNaker; fixed-point numeric; model validation; reproducibility; spiking network models; verification and validation.


Figures

Figure 1. Interrelationship of the basic elements of modeling and simulation. In order to apply the terminology introduced by Schlesinger et al. (1979) for modeling and simulation processes (A) to numerical models for neural network simulations, a less generic terminology is more expedient. We propose the terminology shown in (B), which we have adapted slightly from Thacker et al. (2004). While Thacker et al. (2004) use the terms reality of interest, conceptual model, and computerized model, we prefer the terms system of interest, mathematical model, and executable model, as they better express the underlying intent. The model distinguishes between modeling and simulation activities (black solid arrows) and assessment activities (red dashed arrows).
Figure 2. Model verification and substantiation workflow. The workflow shown can be thought of as the combination of two separate model verification and validation processes (Figure 1) without the backward reference to the system of interest, i.e., without the validation of the model. In this concept, the consistency of the simulation outcomes of two executable models that share the same system of interest and mathematical model is evaluated in an assessment activity we term "substantiation." Modeling and simulation activities are indicated by black solid arrows, whereas assessment activities are indicated by red dashed arrows.
Figure 3. Network architecture. The minimal spiking network exhibiting polychronization as described in Izhikevich (2006). The input to the network is a constant current of I_ext = 20 pA into a single neuron, which is randomly selected in each simulation time-step. See section 3.1.1 for a detailed description of the mathematical model.
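
As a rough illustration of this stimulus scheme, one neuron out of the whole population receives the 20 pA input in every 1 ms time-step. The following is a minimal C sketch; the population size N = 1000, the array name I, and the use of rand() are illustrative assumptions based on the publicly available Izhikevich (2006) C code, not the authors' verbatim implementation:

    #include <stdlib.h>

    #define N 1000  /* total population size: 800 excitatory + 200 inhibitory */

    /* Clear the previous stimulus and inject 20 pA into one randomly
     * chosen neuron; called once per 1 ms simulation time-step. */
    void apply_external_input(double I[N])
    {
        for (int i = 0; i < N; i++)
            I[i] = 0.0;
        I[rand() % N] = 20.0;
    }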
Figure 4. The experimental set-up for the simulations. (A) To create the reference data, the C model is executed (with STDP) and the connectivity matrix A and delay matrix D are saved. Then five points in time t_i are selected, at which the weight matrix W(t_i) is recorded. Along with the input stimulus to the network I(t), these matrices determine five network states for later comparison. These initial conditions are then set for an implementation of the C model (B) and for the SpiNNaker model (C), both without STDP. This results in the network spiking activity recordings S_i^C(W(t_i), t) and S_i^NM(W(t_i), t) for five simulation runs of the C model and the SpiNNaker model, respectively.
Figure 5. Model verification and substantiation workflow as it was conducted. The figure depicts, in a condensed form, the instantiation of the model verification and substantiation workflow (Figure 2) introduced in section 2.3 and carried out in this study.
Figure 6. Model verification and substantiation iterations and activities conducted. The activities carried out as part of the model verification and substantiation process, briefly outlined in Figure 5, can be broken down into a more detailed view. The diagram represents this iterative process in a linear fashion; three iterations were conducted. The model substantiation activity performed at the end of each iteration is marked with I, II, and III, corresponding to the results summary shown in Figure 7.
Figure 7. Model substantiation assessment based on spike data analysis. Histograms (70 bins each) of the three characteristic measures computed from 60 s of network activity after the fifth hour of simulation: left, firing rates (FR); middle, local coefficients of variation (LV); right, pairwise correlation coefficients (CC). For FR and LV, each neuron enters the histogram; for CC, each neuron pair. Results are shown for three iterations (rows) of the substantiation process of the C model (dark colors) and the SpiNNaker model (light colors), cf. Figure 6. On the far right, the difference between the respective distributions is quantified by the effect size: the graph shows the mean and standard deviation of the effect size calculated for each of the five network states (after 1, 2, 3, 4, and 5 h of simulation).
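
Of the three measures, the firing rate is simply the spike count divided by the observation time, and CC is the Pearson correlation of binned spike trains; the LV is less standard. Below is a minimal C sketch of the LV, under the assumption that the paper uses the local coefficient of variation in the form popularized by Shinomoto and colleagues (function and variable names are illustrative):

    /* Local coefficient of variation of a single spike train:
     * LV = 3/(n-1) * sum_i ((T_i - T_{i+1}) / (T_i + T_{i+1}))^2,
     * where isi[] holds the n inter-spike intervals T_i. */
    double local_variation(const double *isi, int n)
    {
        if (n < 2)
            return 0.0;  /* undefined for fewer than two intervals */
        double sum = 0.0;
        for (int i = 0; i < n - 1; i++) {
            double d = (isi[i] - isi[i + 1]) / (isi[i] + isi[i + 1]);
            sum += d * d;
        }
        return 3.0 * sum / (double)(n - 1);
    }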
Listing 1. C model: the algorithm for updating the neuronal dynamics (given as pseudocode) as implemented in the original C model. The algorithm implements a fixed-step size semi-implicit symplectic Forward Euler method.
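
For orientation, the scheme named here can be sketched in C along the lines of the publicly available Izhikevich (2006) code: the membrane potential v is advanced in two explicit half-steps of h/2 = 0.5 ms, and the recovery variable u is then updated using the already-updated v, which is what makes the scheme semi-implicit and symplectic. Treat this as an illustration of the method, not the authors' verbatim listing:

    /* One 1 ms update of a single Izhikevich neuron:
     * dv/dt = 0.04 v^2 + 5 v + 140 - u + I,  du/dt = a (b v - u). */
    void update_neuron(double *v, double *u,
                       double a, double b, double c, double d, double I)
    {
        if (*v >= 30.0) {  /* threshold reached in the previous step */
            *v = c;        /* reset membrane potential */
            *u += d;       /* reset recovery variable */
        }
        /* two explicit half-steps of h/2 = 0.5 ms for v */
        *v += 0.5 * (0.04 * (*v) * (*v) + 5.0 * (*v) + 140.0 - *u + I);
        *v += 0.5 * (0.04 * (*v) * (*v) + 5.0 * (*v) + 140.0 - *u + I);
        /* u is updated with the new v: the semi-implicit, symplectic part */
        *u += a * (b * (*v) - *u);
    }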
Figure 8. Above-threshold evolution of the state variable v(t). The approximation in the evolution of v(t) in Equation (1), when using the semi-implicit symplectic Forward Euler method with a fixed step size of h/2 = 0.5 ms (red dotted line), where h refers to the 1 ms simulation time-step, causes v(t) to rise well above the threshold and thus produces an error that propagates over time. This is expressed in delayed spike times. The black solid line shows the evolution of v(t) around threshold for a regular-spiking type Izhikevich neuron stimulated with a constant current of I_ext = 5 pA. For integration, the same Forward Euler method was used but with an integration step size of h/100 = 0.01 ms. The steep slope at threshold requires precise threshold detection to prevent a numeric overflow.
Figure 9. Spike artifacts caused by fixed-point overflow. Large values of v(t) can cause an overflow of the fixed-point data type, which may result in short spike-trains with higher rates (marked by blue boxes). Simulations were run on SpiNNaker using the fixed-step size symplectic Forward Euler method with an integration step size of h/3 = 0.333 ms and without precise threshold detection (h refers to the simulation time-step of 1 ms).
Listing 2. SpiNNaker model: the algorithm for updating the neuronal dynamics (given as pseudocode). The algorithm is similar to the implementation shown in Listing 1 but uses three fixed-size integration steps. The additional step increases the likelihood that large values of v(t) are squared. This implementation may cause a numeric overflow.
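
A sketch of how such a three-substep update looks with SpiNNaker's s16.15 accum fixed-point type (ISO/IEC TR 18037, stdfix.h); the structure and names are illustrative, not the original listing. Without a threshold check inside the loop, the term 0.04*v*v can exceed the representable range once v has overshot the threshold, wrapping around and producing the artifacts of Figure 9:

    #include <stdfix.h>

    /* One 1 ms update in three substeps of h/3, in s16.15 fixed point. */
    void update_neuron_3step(accum *v, accum *u, accum a, accum b, accum I)
    {
        const accum h3 = 0.333333k;  /* h/3, with h = 1 ms */
        for (int k = 0; k < 3; k++)
            /* 0.04*v*v may overflow the accum range for large v */
            *v += h3 * (0.04k * (*v) * (*v) + 5.0k * (*v) + 140.0k - *u + I);
        *u += a * (b * (*v) - *u);
    }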
Listing 3. SpiNNaker model: an improved algorithm for updating the neuronal dynamics (given as pseudocode) that uses a fixed-step size symplectic Forward Euler method and precise threshold detection.
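
The essential change over Listing 2 can be sketched as follows: the threshold is tested after every substep, so v is reset before it can grow large enough to overflow. The substep count of 16 matches the h/16 = 0.0625 ms solver evaluated in Figure 10; all names are illustrative:

    #include <stdbool.h>
    #include <stdfix.h>

    /* One 1 ms update with precise threshold detection; returns true
     * if the neuron fired during this time-step. */
    bool update_neuron_precise(accum *v, accum *u,
                               accum a, accum b, accum c, accum d, accum I)
    {
        const accum h16 = 0.0625k;  /* h/16, with h = 1 ms */
        for (int k = 0; k < 16; k++) {
            *v += h16 * (0.04k * (*v) * (*v) + 5.0k * (*v) + 140.0k - *u + I);
            if (*v >= 30.0k) {  /* check after every substep ...        */
                *v = c;         /* ... so v never grows far above 30 mV */
                *u += d;
                return true;
            }
        }
        *u += a * (b * (*v) - *u);
        return false;
    }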
Figure 10. Spike timing: comparison of different ODE solver implementations. Membrane potential v(t) recorded for a regular-spiking (A) and a fast-spiking (B) Izhikevich neuron, stimulated with a constant current of I_ext = 5 pA. The dynamics are solved by the original SpiNNaker ESR ODE solver implementation (blue dashed curves); a fixed-step size symplectic Forward Euler approach with precise threshold detection (h/16 = 0.0625 ms) (green solid curves); and, for comparison, a reference implementation of the GSL rkf45 ODE solver with an absolute integration error of 10^-6 (black dotted curves). Both the SpiNNaker ESR and the fixed-step size Forward Euler implementations show considerable lags in the spike timing compared to the rkf45 reference implementation. While for the regular-spiking neuron (A) the SpiNNaker implementations have much the same accuracy, the fixed-step size Forward Euler approach with precise spike timing shows a substantial improvement over the ESR implementation for the fast-spiking neuron (B).
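
Such a reference implementation could look roughly as follows, using GSL's gsl_odeiv2 driver with the rkf45 stepper and an absolute error bound of 10^-6. The initial step size, error weighting, neuron parameters, and the crude once-per-millisecond threshold handling are assumptions for illustration; the authors' reference code may differ:

    #include <gsl/gsl_errno.h>
    #include <gsl/gsl_odeiv2.h>

    /* Right-hand side of the Izhikevich equations:
     * y[0] = v, y[1] = u; params = {a, b, I_ext}. */
    static int izhikevich_rhs(double t, const double y[], double dydt[],
                              void *params)
    {
        const double *p = (const double *)params;
        dydt[0] = 0.04 * y[0] * y[0] + 5.0 * y[0] + 140.0 - y[1] + p[2];
        dydt[1] = p[0] * (p[1] * y[0] - y[1]);
        return GSL_SUCCESS;
    }

    int main(void)
    {
        double params[3] = {0.02, 0.2, 5.0};   /* regular-spiking a, b; I_ext = 5 pA */
        gsl_odeiv2_system sys = {izhikevich_rhs, NULL, 2, params};
        gsl_odeiv2_driver *drv = gsl_odeiv2_driver_alloc_y_new(
            &sys, gsl_odeiv2_step_rkf45, 1e-3 /* initial step */,
            1e-6 /* abs. error */, 0.0 /* rel. error */);

        double t = 0.0, y[2] = {-65.0, -13.0}; /* v(0), u(0) = b*v(0) */
        for (int ms = 1; ms <= 1000; ms++) {   /* integrate 1 s, 1 ms at a time */
            gsl_odeiv2_driver_apply(drv, &t, (double)ms, y);
            if (y[0] >= 30.0) {                /* coarse threshold handling; a */
                y[0] = -65.0;                  /* true reference would locate  */
                y[1] += 8.0;                   /* the crossing more precisely  */
            }
        }
        gsl_odeiv2_driver_free(drv);
        return 0;
    }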
Listing 4. SpiNNaker model: the same algorithm (given as pseudocode) as shown in Listing 3, but with an added fixed-point conversion of the constant 0.04.
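
One way such a conversion can be expressed is shown below: the coefficient 0.04 is stated as an accum constant up front, so the evaluation stays entirely in the s16.15 fixed-point domain rather than relying on a floating-point literal converted at each use. This is a sketch of the idea only; how the original listing performs the conversion may differ:

    #include <stdfix.h>

    /* dv/dt of the Izhikevich model, evaluated purely in fixed point;
     * 0.04 is pre-converted to the nearest representable accum value. */
    static const accum K_004 = 0.04k;

    accum izhikevich_dv(accum v, accum u, accum I)
    {
        return K_004 * v * v + 5.0k * v + 140.0k - u + I;
    }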
Figure 11. Spike timing: with and without fixed-point data type conversion. The graphs show the development of the membrane voltage v(t) with (green solid line) and without (red dashed line) fixed-point data type conversion for a regular-spiking type (A) and a fast-spiking type (B) Izhikevich neuron stimulated with a constant current of I_ext = 5 pA. For the ODE solver, the fixed-step size symplectic Forward Euler implementation with precise threshold detection was used (h/16 = 0.0625 ms). This is shown in comparison to a reference implementation of the GSL rkf45 ODE solver with an absolute integration error of 10^-6 (black dotted line). For both neuron types, a substantial improvement in the spike timing can be seen.
