Training dynamically balanced excitatory-inhibitory networks - PubMed
PLoS One. 2019 Aug 8;14(8):e0220547. doi: 10.1371/journal.pone.0220547. eCollection 2019.

Training dynamically balanced excitatory-inhibitory networks


Alessandro Ingrosso et al. PLoS One. 2019.

Abstract

The construction of biologically plausible models of neural circuits is crucial for understanding the computational properties of the nervous system. Constructing functional networks composed of separate excitatory and inhibitory neurons obeying Dale's law presents a number of challenges. We show how a target-based approach, when combined with a fast online constrained optimization technique, is capable of building functional models of rate and spiking recurrent neural networks in which excitation and inhibition are balanced. Balanced networks can be trained to produce complicated temporal patterns and to solve input-output tasks while retaining biologically desirable features such as Dale's law and response variability.


Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. Schematic of the target-based method.
Target currents h_i^T(t) are produced by a balanced teacher network (T, left) driven by the desired output. The student network (right) is trained to reproduce the target currents autonomously. We train the recurrent weights of both the excitatory (E) and inhibitory (I) populations, together with the connections between them. A linear decoder w_out is trained with a standard online method (RLS) to reproduce the prescribed output target from a readout of the neurons in the network.
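The recursive least squares (RLS) update used for the readout decoder can be sketched as follows. This is a generic illustration of the standard RLS rank-1 update, not the authors' code; the initialization P = I (i.e. a unit regularizer) and the toy regression target are assumptions.

```python
import numpy as np

def rls_step(w, P, r, z_target):
    """One recursive least-squares update of readout weights w.

    w: (N,) readout weights; P: (N, N) running inverse-correlation estimate;
    r: (N,) instantaneous firing rates; z_target: desired output value.
    """
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)      # gain vector
    err = w @ r - z_target       # readout error before the update
    w = w - err * k              # error-driven weight correction
    P = P - np.outer(k, Pr)      # rank-1 update of the inverse correlation
    return w, P

# Toy usage: recover a fixed linear readout from random rate vectors.
rng = np.random.default_rng(0)
N = 50
w = np.zeros(N)
P = np.eye(N)                    # corresponds to a unit ridge regularizer
w_true = rng.standard_normal(N)
for _ in range(200):
    r = rng.standard_normal(N)
    w, P = rls_step(w, P, r, w_true @ r)
```

After a few hundred samples the weights converge to the (ridge-regularized) least-squares solution, which is what makes RLS fast enough to run online during training.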
Fig 2
Fig 2. Trained balanced networks.
A: Target output F_out (black) for all the networks in this figure. The red curve is an example readout z(t) from a trained spiking network of N = 200 units. B: Histogram of recurrent weights in three prototypical trained rate networks (N = 300, ϕ = halftanh): i, zero external current (I_E = I_I = 0) and L2 regularization; ii, [I_E, I_I] = (0.3 N_E, 0.4 N_E) and L2 regularization; iii, balanced initialization and J_0 regularization, with external currents as in ii. Regularization parameter α = 1.0 in all three cases. C: Time course of the determinant of the effective matrix J_eff during training of spiking networks of size N = 200 for I of order N (grey dashed lines) and I of order 1 (black line on the horizontal axis); both cases use L2 regularization. D: The full excitatory current and its two defined components (Eqs 4 & 5) as a function of N for a parametrically balanced network performing the task in panel A. E: Time course of the determinant of J_eff during training of spiking networks of size N = 200 for I of order N and J_0 regularization. F: The full excitatory current and its two defined components (Eqs 4 & 5) as a function of N for a dynamically balanced network performing the task in panel A. Results in C–F are from ten different initializations of J_0 or J^T. G: The total average current onto E neurons (h_E) and its excitatory (h_EE) and inhibitory (h_EI) components as a function of network size N for balanced networks (balanced initialization and J_0 regularization, full lines) and networks trained with zero external currents (I = 0 and L2 regularization, dashed lines). H: Eigenvalue spectrum of the weight matrices J of two networks trained to perform the task in panel A (N = 200, ϕ = halftanh). Blue: zero external current (I_E = I_I = 0) and L2 regularization; red: balanced initialization and J_0 regularization.
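The sign structure behind the weight histograms and spectra in this figure follows from Dale's law: every outgoing weight of a neuron shares one sign. A minimal sketch of building such a matrix and computing its spectrum (as in panel H) is below; the sizes, the 80/20 E/I split, and the inhibitory gain of 4 are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
N, f_exc = 200, 0.8                 # illustrative size and excitatory fraction
NE = int(f_exc * N)                 # number of excitatory neurons

# Dale's law: each column of J (one presynaptic neuron's outgoing weights)
# is either all non-negative (E) or all non-positive (I).
J = np.abs(rng.standard_normal((N, N))) / np.sqrt(N)
J[:, NE:] *= -4.0                   # inhibitory columns: negative and stronger

eigvals = np.linalg.eigvals(J)      # complex spectrum, as plotted in panel H
```

Training under Dale's law then amounts to constrained optimization: weight updates must keep each column inside its sign orthant.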
Fig 3
Fig 3. Dynamics in dynamically balanced trained spiking networks.
A: Input currents onto a neuron in a spiking network trained to produce a superposition of 4 sine waves, as in Fig 2A. Red curve: total excitatory current h_E + I_E; blue curve: inhibitory synaptic current h_I; black curve: total current h. B: Voltage traces of 5 sample units of the network with random fast synaptic currents (time constant 2 ms). C: Spike raster of 200 neurons for the network in B. D: Histogram of the coefficient of variation of interspike intervals across neurons for the network in B.
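The coefficient of variation (CV) histogrammed in panel D is the standard irregularity measure: the ratio of the standard deviation to the mean of a neuron's interspike intervals (ISIs). A small helper, with two reference cases:

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of interspike intervals: std(ISI) / mean(ISI)."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    if len(isis) < 2:
        return np.nan               # CV is undefined for fewer than two intervals
    return isis.std() / isis.mean()

# A perfectly regular train has CV = 0; a Poisson train has CV ≈ 1.
regular = np.arange(0.0, 10.0, 0.1)
rng = np.random.default_rng(2)
poisson = np.cumsum(rng.exponential(0.1, size=1000))
```

A histogram of CVs concentrated near 1, as in panel D, is the signature of the Poisson-like irregular firing expected in the balanced state.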
Fig 4
Fig 4. Response to perturbations in trained balanced networks.
A: An E/I spiking network of size N = 200 trained on an oscillation task receives strong input pulses at random times (dark blue vertical lines), either in the E+I direction (top) or in the E−I direction (bottom). B: Median test error of two types of rate networks of size N = 200 trained to produce the same output signal as in A. Error bars indicate the 25th and 75th percentiles over 100 networks and 50 realizations of input white noise with intensity σ. The networks are driven either by N independent white-noise inputs (black curve, legend: het) or by a single common white-noise input in the E+I (red curve, legend: E+I) or E−I (yellow curve, legend: E−I) direction. Top: dynamically balanced network; bottom: parametrically balanced network with zero external input. Halftanh activation function; see Eq (9) for the definition of Δx.
Fig 5
Fig 5. Nonlinear oscillations.
A: Top: balanced E/I spiking network of size N = 300 producing a sawtooth wave of frequency 1 Hz. Bottom: E/I rate network producing a frequency-modulated oscillation obtained from F_out(t) = sin(ω(t)t), with ω(t) linearly increasing from 2π to 6π Hz for the first half of the oscillation period and then reflected in time around the midpoint of the period. Parameters: N = 500, ϕ = halftanh, trained using feedback (Methods, Δt_L = 1 s). B: Top: eigenvalue spectrum of J_test ϕ′|_x0 for a dynamically balanced rate network with sigmoid activation function trained to produce a square wave (N = 200, output frequency f = 0.04, τ = 1), for g_test = 0.8. The two red dots indicate the two conjugate eigenvalues λ_1,2 with the largest real part. Bottom: oscillation frequency as a function of g_test, comparing simulation results (solid curve) with the approximate prediction (dashed lines). C: Readout signal with g_test = 0.8 (top) and g_test = 1.0 (bottom).
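The frequency-modulated target in panel A (bottom) can be generated directly from its description; a sketch assuming a one-second period and a 1 ms time step (both assumptions, not stated in the legend):

```python
import numpy as np

T, dt = 1.0, 1e-3                  # assumed oscillation period and time step
t = np.arange(0.0, T, dt)

# ω(t) ramps linearly from 2π to 6π over the first half-period,
# then is reflected in time around the midpoint t = T/2.
ramp_up = 2 * np.pi + 4 * np.pi * t / (T / 2)
omega = np.where(t < T / 2, ramp_up, 2 * np.pi + 4 * np.pi * (T - t) / (T / 2))
F_out = np.sin(omega * t)          # frequency-modulated target signal
```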
Fig 6
Fig 6. Learning chaotic trajectories and complex transient activity.
A: Output of a rate network (N = 1000, halftanh activation function) trained to produce the time course of the first coordinate of a Lorenz attractor (σ = 10, ρ = 28, β = 2.67). B: Input currents onto three representative neurons in a balanced spiking network trained to reproduce innate current trajectories of duration 2 s after a brief stimulus (50 ms) at time 0.5 s. Network size N = 500, synaptic time constant τ_s = 50 ms. C: Balanced E/I spiking network producing walking behavior in response to a strong input pulse of duration 100 ms. Top: a pictorial representation of the network with 56 distinct readouts (network size N = 300; synaptic time constant τ_s = 50 ms). Middle: activity of three random readout units over the course of ∼6 s. Bottom: spike raster plot of 200 neurons in the network.
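The Lorenz target in panel A uses the stated parameters (σ = 10, ρ = 28, β = 2.67). A minimal forward-Euler integration of the first coordinate, with an assumed initial condition and step size:

```python
import numpy as np

def lorenz_x(T=10.0, dt=1e-3, sigma=10.0, rho=28.0, beta=2.67):
    """Euler-integrate the Lorenz system; return the x(t) trajectory."""
    x, y, z = 1.0, 1.0, 1.0        # assumed initial condition
    xs = np.empty(int(T / dt))
    for i in range(len(xs)):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

traj = lorenz_x()
```

Because the attractor is chaotic, reproducing x(t) is a stringent autonomous-generation test: small errors in the learned dynamics are exponentially amplified.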
Fig 7
Fig 7. Input-output tasks.
A: Example output responses (red curves) of a balanced E/I spiking network trained on the temporal XOR task, for two sets of input pulses (blue curves) coding for False–False (left) and False–True (right). Parameters: N = 1000, τ_s = 50 ms. B: Interval matching task. Left: sample output (red curves) vs. desired output (dashed black curves) from a spiking E/I network trained on the interval matching task, for two pairs of input pulses. Right: output delay vs. target delay ΔT for randomly interleaved test input pulses.
Fig 8
Fig 8. Some comparisons with RLS.
A: Test error during training as a function of testing epoch for balanced networks of N = 200 LIF units trained on the oscillatory task in Fig 2A with BCD (α = 0.05). Each curve shows the results obtained when only a random subset of the incoming synapses onto each neuron is updated. Networks were trained with feedback stabilization. Recurrent synaptic weights were updated every 20 time steps. The network was tested every 5 periods of the oscillation (1 s). Each point is the median over 20 random initializations of both student and teacher networks; bars represent the 90th and 10th percentiles. B: Test error as a function of training epoch for networks of N = 1000 quadratic integrate-and-fire (QIF) neurons (for details see [27]) trained to reproduce their innate currents (task in Fig 6B) using recursive least squares (RLS) or BCD. For both algorithms, recurrent synaptic weights were trained once every 25 time steps (50 ms). Output weights were trained at each time step via RLS. No feedback stabilization was employed in conjunction with BCD. For comparison with RLS, we employed a large value of α in BCD and did not normalize the first term of Eq (8) by t_run, in both panels B and C. Parameters for QIF neurons: τ_m = 10 ms, τ_s = 100 ms (similar to [27]), τ_ref = 2 ms, dt = 2 ms. C: Test error during training as a function of testing epoch for balanced networks of N = 200 LIF units trained on the oscillatory task in panel A, employing RLS with different values of the regularization parameter α. Results for BCD are shown for reference. The network was tested every 5 periods of the oscillation (1 s). Each point is the median over 20 random initializations of both student and teacher networks; bars represent the 75th and 25th percentiles.
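The comparison above contrasts RLS with a sign-constrained coordinate-descent scheme (BCD). As a stand-in illustration (not the paper's algorithm), the sketch below does one sweep of coordinate descent on a ridge-regularized least-squares objective while projecting each weight onto a Dale's-law sign constraint; the objective, the projection, and all parameters are assumptions for this toy example.

```python
import numpy as np

def bcd_sweep(w, X, y, signs, alpha=0.05):
    """One sweep of coordinate descent on ||X w - y||^2 + alpha ||w||^2,
    projecting each coordinate onto its allowed sign (Dale's law)."""
    for j in range(len(w)):
        r = y - X @ w + X[:, j] * w[j]              # residual excluding w_j
        w_j = (X[:, j] @ r) / (X[:, j] @ X[:, j] + alpha)
        # E weights stay >= 0, I weights stay <= 0
        w[j] = max(w_j, 0.0) if signs[j] > 0 else min(w_j, 0.0)
    return w

# Toy usage: recover a non-negative weight vector under an all-E constraint.
rng = np.random.default_rng(3)
X = rng.standard_normal((100, 20))
w_true = np.abs(rng.standard_normal(20))
y = X @ w_true
signs = np.ones(20)
w = np.zeros(20)
for _ in range(50):
    w = bcd_sweep(w, X, y, signs)
```

The appeal of a coordinate-wise scheme in this setting is that the sign constraint becomes a trivial per-weight clipping step, whereas RLS has no built-in way to respect it.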

References

    1. van Vreeswijk C, Sompolinsky H. Chaos in Neuronal Networks with Balanced Excitatory and Inhibitory Activity. Science. 1996;274(5293):1724–1726. doi:10.1126/science.274.5293.1724
    2. van Vreeswijk C, Sompolinsky H. Chaotic Balanced State in a Model of Cortical Circuits. Neural Comput. 1998;10(6):1321–1371. doi:10.1162/089976698300017214
    3. Renart A, de la Rocha J, Bartho P, Hollender L, Parga N, Reyes A, et al. The Asynchronous State in Cortical Circuits. Science. 2010;327(5965):587–590. doi:10.1126/science.1179850
    4. Kadmon J, Sompolinsky H. Transition to Chaos in Random Neuronal Networks. Phys Rev X. 2015;5:041030.
    5. Harish O, Hansel D. Asynchronous Rate Chaos in Spiking Neuronal Circuits. PLoS Comput Biol. 2015;11(7):1–38. doi:10.1371/journal.pcbi.1004266


Grants and funding

Funding is provided by NSF NeuroNex Award (LFA, DBI-1707398, https://nsf.gov/), the Gatsby Charitable Foundation (LFA, GAT3419, http://www.gatsby.org.uk/), the Simons Collaboration for the Global Brain (LFA, 542939SPI, https://www.simonsfoundation.org/collaborations/global-brain/) and the Swartz Foundation (LFA, AI, http://www.theswartzfoundation.org/). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.