PLoS Comput Biol. 2015 Jul 15;11(7):e1004299. doi: 10.1371/journal.pcbi.1004299. eCollection 2015 Jul.

Adaptive Synaptogenesis Constructs Neural Codes That Benefit Discrimination

Blake T Thomas et al. PLoS Comput Biol.

Abstract

Intelligent organisms face a variety of tasks requiring the acquisition of expertise within a specific domain, including the ability to discriminate between a large number of similar patterns. From an energy-efficiency perspective, effective discrimination requires a prudent allocation of neural resources, with more frequent patterns and their variants being represented with greater precision. In this work, we demonstrate a biologically plausible means of constructing a single-layer neural network that adaptively (i.e., without supervision) meets this criterion. Specifically, the adaptive algorithm includes synaptogenesis, synaptic shedding, and bi-directional synaptic weight modification to produce a network with outputs (i.e., neural codes) that represent input patterns in proportion to the frequency of related patterns. In addition to pattern frequency, the correlational structure of the input environment also affects the allocation of neural resources. The combined synaptic modification mechanisms provide an explanation of neuron allocation in the case of self-taught experts.

Conflict of interest statement

The authors have declared that no competing interests exist.

Figures

Fig 1
Fig 1. The three processes of adaptive synaptogenesis: Random synaptogenesis, bi-directional associative modification of existing synapses, and synaptic shedding.
(Top) Synaptogenesis and positive associative modification (LTP). (Left) The neuron's average firing rate is below the cutoff, so it is receptive to new innervation; three candidate sites for a new synapse are indicated by the broken circles. (Center) Of the three locations, one new synapse, the uppermost available site, is randomly generated (small solid circle). (Right) The new synapse increases in strength through the associative modification equation because it is positively correlated with enough of the other synapses on this neuron. The added excitation from the new synapse raises the post-synaptic neuron's average firing rate above the prescribed design value; once this value is exceeded, receptivity for new innervation goes to zero (the broken circles disappear). (Bottom) Negative associative modification (LTD) and shedding. (Left) Having randomly acquired a certain set of inputs, one positively correlated subset (shown in shades of blue) dominates the excitation of the post-synaptic neuron, while another input (shown in red) is negatively correlated with this subset. (Center) Associative synaptic modification decreases the weight of the uncorrelated input, as indicated by the smaller circle. (Right) Associative synaptic modification further decreases the weight of the uncorrelated input, and because the weight falls below the shedding threshold, the synapse is shed.
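To make the three processes above concrete, the following is a minimal single-neuron sketch in Python (not the authors' code). It assumes a postsynaptically gated associative rule of the form Δw = ε·y·(x − w), a fixed design firing rate that gates receptivity, and a fixed shedding threshold on the weight value; the paper's exact equations, gating variables, and constants may differ.

```python
# A minimal single-neuron sketch of the three processes (not the authors' code).
# Assumed here: a postsynaptically gated associative rule dw = eps * y * (x - w),
# a fixed design firing rate that gates receptivity, and a fixed shedding
# threshold on the weight value.
import numpy as np

rng = np.random.default_rng(0)
n_in, eps, target_rate, shed_thresh = 80, 0.01, 0.5, 0.05

w = np.zeros(n_in)                   # synaptic weights (0 where no synapse)
connected = np.zeros(n_in, dtype=bool)
avg_rate = 0.0                       # running estimate of postsynaptic rate

def present(x):
    """One presentation of a binary input pattern x of length n_in."""
    global avg_rate
    x = np.asarray(x)
    y = float(w @ x)                          # postsynaptic activation
    avg_rate = 0.99 * avg_rate + 0.01 * y     # slow average firing rate

    # 1) Random synaptogenesis, gated by receptivity: new synapses form only
    #    while the average rate is below the prescribed design value.
    if avg_rate < target_rate:
        free = np.flatnonzero(~connected)
        if free.size:
            i = rng.choice(free)
            connected[i], w[i] = True, 0.1    # nascent synapse, small weight

    # 2) Bi-directional associative modification: inputs correlated with the
    #    neuron's firing are potentiated (LTP); uncorrelated inputs decay (LTD).
    w[connected] += eps * y * (x[connected] - w[connected])

    # 3) Synaptic shedding: weights that fall below the threshold are removed.
    weak = connected & (w < shed_thresh)
    connected[weak], w[weak] = False, 0.0
```

With this assumed form, inputs that are reliably co-active with the neuron's firing are driven toward strong weights, while inputs uncorrelated with the dominant subset decay and are eventually shed, mirroring the top and bottom rows of the figure.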
Fig 2
Fig 2. Randomly generated input vectors.
(A) shows the 100 patterns of dataset A. An input pattern is built by perturbing one of five binary, 80-dimensional orthogonal vectors (see Methods). Each of these five orthogonal vectors can be considered the central, unseen prototype that defines one of the five distinct categories. The perturbation rule randomly complements two 1’s and two 0’s of such a prototype. Black pixels represent 1’s, and white pixels represent 0’s. Note the different relative frequencies of the categories: 10, 15, 20, 25, and 30%. (B) shows the 225 patterns of dataset B1. A pattern is built by perturbing one of nine binary, 390-dimensional vectors (see Methods). Each of these nine vectors can be considered the central, unseen prototype that defines a distinct category. The perturbation rule randomly selects 20 of the 60 input lines of the central pattern to be active. Black pixels represent 1’s, and white pixels represent 0’s. Note the three orthogonal super-categories, and note the differing overlaps of the categories across super-categories. The equal frequencies of the categories apply only to dataset B1.
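As an illustration of the dataset-A construction, here is a hedged sketch of the perturbation rule described above. The layout of the five orthogonal prototypes as disjoint 16-line blocks is an assumption; only the dimensionality (80), the two-bit complementation rule, and the category frequencies come from the caption.

```python
# Hypothetical sketch of the dataset-A generator (Fig 2A). The prototype layout
# (disjoint 16-line blocks) is assumed; the dimensions, perturbation rule, and
# category frequencies are taken from the caption.
import numpy as np

rng = np.random.default_rng(1)
n_dim, n_categories = 80, 5

# Five mutually orthogonal binary prototypes: disjoint blocks of active lines.
prototypes = np.zeros((n_categories, n_dim), dtype=int)
for c in range(n_categories):
    prototypes[c, c * 16:(c + 1) * 16] = 1

def sample_A(c):
    """Perturb prototype c by complementing two 1's and two 0's."""
    x = prototypes[c].copy()
    ones, zeros = np.flatnonzero(x == 1), np.flatnonzero(x == 0)
    x[rng.choice(ones, 2, replace=False)] = 0   # two 1's -> 0
    x[rng.choice(zeros, 2, replace=False)] = 1  # two 0's -> 1
    return x

# Category frequencies from the caption: 10, 15, 20, 25, and 30%.
freqs = [0.10, 0.15, 0.20, 0.25, 0.30]
dataset_A = [sample_A(c) for c in rng.choice(n_categories, size=100, p=freqs)]
```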
Fig 3
Fig 3. A visualization of super-category II of the B datasets.
All three super-categories have seven constructed regions of overlap; this visualization shows the proportional overlap for categories D, E, and F, which make up super-category II. Category D is the union of the possibly active input lines of sub-regions 1, 4, 5, and 7. Similarly, Category E is the union of the possibly active input lines of sub-regions 2, 5, 6, and 7. Lastly, Category F is the union of sub-regions 3, 4, 6, and 7. Note that 30 input lines define sub-region 1; 10 define sub-region 4, which is shared between D and F; 10 define sub-region 5, which is shared between D and E; and 10 define sub-region 7, which is shared by D, E, and F.
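The sub-region layout of super-category II can be reconstructed from the caption as in the sketch below. The sub-region sizes (30, 30, 30, 10, 10, 10, 10) and the unions defining categories D, E, and F follow the text; the concrete input-line indices are illustrative assumptions. The last line also illustrates the dataset-B perturbation rule of Fig 2B (20 of 60 possibly active lines).

```python
# Illustrative reconstruction of super-category II (Fig 3). Sub-region sizes and
# the unions defining D, E, F follow the caption; the concrete input-line
# indices are assumptions.
import numpy as np

rng = np.random.default_rng(2)
sizes = {1: 30, 2: 30, 3: 30, 4: 10, 5: 10, 6: 10, 7: 10}
edges = np.cumsum([0] + [sizes[k] for k in sorted(sizes)])
sub_region = {k: set(range(edges[i], edges[i + 1]))
              for i, k in enumerate(sorted(sizes))}

# Each category is the union of four sub-regions of possibly active lines.
category = {
    "D": sub_region[1] | sub_region[4] | sub_region[5] | sub_region[7],
    "E": sub_region[2] | sub_region[5] | sub_region[6] | sub_region[7],
    "F": sub_region[3] | sub_region[4] | sub_region[6] | sub_region[7],
}
assert all(len(lines) == 60 for lines in category.values())

# Dataset-B perturbation rule (Fig 2B): a pattern from category D activates
# 20 of its 60 possibly active input lines.
pattern_D = sorted(rng.choice(sorted(category["D"]), size=20, replace=False))
```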
Fig 4
Fig 4. Driven by the adaptive algorithm, synapses are acquired and discarded but eventually a stable connectivity is achieved.
This illustration follows the total number of connections for each of 10 representative neurons in one simulation as a function of the number of blocks of input presentations. Note the wide distribution, across neurons, of the time to stable connectivity. Nevertheless, all neurons illustrated here achieve stable connectivity by block 310 (the upper orange-red line), and one neuron achieves stable connectivity as early as block 82 (purple).
Fig 5
Fig 5. Error rate and statistical dependence depend on network size (Dataset B1).
(A) As the number of postsynaptic neurons in a network increases, the decoding error rate monotonically decreases. A 10% error rate is reached with 34 neurons, and the error continues to decline, reaching 5.2% at 50 neurons. (B) As the number of neurons increases, statistical dependence monotonically increases. When 34 neurons are sampled, the statistical dependence is 12.84 bits. Note that this is much less than the input statistical dependence of 102 bits.
Fig 6
Fig 6. Neuronal allocation is linear in category probability for dataset A.
The fraction of postsynaptic neurons firing to a category is proportional to category probability. For this low overlap dataset (see Fig 2A), each postsynaptic neuron fires exclusively to a single category. The linear regression slope is 1.5 (the intercept is -0.1). Each plotted data point is the fraction of 2000 neurons allocated.
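A small sketch of the analysis behind this figure: count, for each category, the fraction of postsynaptic neurons that fire exclusively to it, then regress those fractions on category probability. The exclusive-firing bookkeeping here is an assumption about how the allocation was tallied; the reported slope (1.5) and intercept (-0.1) come from the caption.

```python
# Hypothetical tally of neuron allocation versus category probability (Fig 6).
import numpy as np

def allocation_vs_probability(fires_to, probs):
    """fires_to[n]: set of category indices that neuron n fires to;
    probs[c]: probability of category c (dataset A: 0.10 ... 0.30)."""
    n_neurons = len(fires_to)
    fractions = [sum(1 for s in fires_to if s == {c}) / n_neurons
                 for c in range(len(probs))]
    slope, intercept = np.polyfit(probs, fractions, 1)  # caption: ~1.5, ~-0.1
    return fractions, slope, intercept
```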
Fig 7
Fig 7. Category frequency can overcome the advantage of highly overlapping super-categories in capturing post-synaptic neurons.
(A) shows neuron allocation for dataset B1. Even though the nine categories are equiprobable, the categories capture post-synaptic neurons in a non-equiprobable fashion. The greater the overlap within a super-category, the more neurons are captured by that super-category's base categories. Within a super-category, each category receives a similar neuron allocation. The x-axis labels both category and category frequency (e.g. 11.1%, 11.1%, 11.1%…). (B) shows neuron allocation for dataset B3. Changing the category frequencies changes the neuron allocations and overcomes the effect of overlap: more neurons are allocated to the categories with the highest frequencies, even inside a super-category. The x-axis labels both category and category frequency (e.g. 18.4%, 17.4%, 15.5%…). Numbers 1 through 9 on each x-axis correspond, in sequence, to the three base patterns of each of the three sequential super-categories.
Fig 8
Fig 8. Neurons fired exclusively and non-exclusively by each category’s centroid.
All post-synaptic neurons fire exclusively to only one super-category, but some neurons are exclusive to a single category within a super-category (filled bars). Unfilled bars count neurons that fire in response to two or three categories within a super-category. Numbers 1 through 9 on the x-axis correspond, in sequence, to the three base patterns of each of the three sequential super-categories. In all cases, the testing threshold is 2.4 because the prototypes are noiseless. Training used dataset B1; testing used each category’s full prototype.
Fig 9
Fig 9. Neurons fired exclusively and non-exclusively by each sub-region.
The sub-regions, as in Fig 3, can be learned in an exclusive fashion. Such novel learning is poorest for super-category I: no neurons fire exclusively to sub-regions I-4, I-5, and I-6, and nearly all of the exclusive firing in super-category I occurs in sub-regions I-1, I-2, and I-3. Super-category II has an even distribution of neurons firing exclusively to II-1 through II-6, while II-7 garners approximately twice the number of neurons as any of the other sub-regions. A majority of the neurons that learn super-category III are fired by sub-region III-7 (the triple overlap), and the majority of these neurons fire exclusively. A sizeable number of neurons learn to fire exclusively to III-4, III-5, and III-6. Such results depend on the number of synapses per neuron. The 21 sub-regions arising from the totality of all super-categories are numbered here as in Fig 3 but expanded by the sequence of the three super-categories.

Grants and funding

This work was supported by National Science Foundation grant 1162449 (Toby Berger and William Levy) and by the Department of Neurosurgery, University of Virginia (William B. Levy). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
