Review. Curr Opin Neurobiol. 2010 Apr;20(2):251-6. doi: 10.1016/j.conb.2010.02.008. Epub 2010 Mar 11.

Learning latent structure: carving nature at its joints

Samuel J Gershman et al. Curr Opin Neurobiol. 2010 Apr.

Abstract

Reinforcement learning (RL) algorithms provide powerful explanations for simple learning and decision-making behaviors and the functions of their underlying neural substrates. Unfortunately, in real-world situations that involve many stimuli and actions, these algorithms learn far more slowly than animals and humans do. Here we suggest that one reason for this discrepancy is that humans and animals take advantage of structure that is inherent in real-world tasks to simplify the learning problem. We survey an emerging literature on 'structure learning' (using experience to infer the structure of a task) and how this can be of service to RL, with an emphasis on structure in perception and action.
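The scaling problem the abstract alludes to can be made concrete with a toy calculation (an illustrative sketch, not from the paper): a tabular RL agent facing n binary stimuli must learn a separate value for each of the 2**n stimulus configurations crossed with each action, so experience is spread ever more thinly as n grows.

```python
# Illustrative sketch (assumption, not from the paper): how the size of
# a naive state-action value table explodes with the number of binary
# stimuli, motivating the use of task structure to compress learning.

def table_size(n_binary_stimuli: int, n_actions: int) -> int:
    """Number of entries in a tabular value function over all
    stimulus configurations and actions."""
    return (2 ** n_binary_stimuli) * n_actions

# With 4 actions, the table grows from 16 entries at 2 stimuli
# to over 4 million entries at 20 stimuli.
sizes = [table_size(n, 4) for n in (2, 10, 20)]
```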


Figures

Figure 1.
Graphical models of three possible causal relationships between variables in a classical conditioning experiment. By convention, observed variables are represented by shaded nodes and unobserved (latent) variables by unshaded nodes. Arrows represent probabilistic dependencies. The parameters of a model (for instance, structure II) define the probability of each node taking on a value (e.g., absence or presence of shock) given each setting of its parent nodes (e.g., when y = acquisition or when y = extinction).

References

    1. Sutton RS, Barto AG. Reinforcement Learning: An Introduction. MIT Press; 1998.
    2. Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275(5306):1593.
    3. Houk JC, Adams JL, Barto AG. A model of how the basal ganglia generate and use neural signals that predict reinforcement. In: Models of Information Processing in the Basal Ganglia. 1995:249–270.
    4. Kemp C, Tenenbaum JB. Structured statistical models of inductive reasoning. Psychological Review. 2009;116(1):20–58. An elegant demonstration of how structured knowledge influences human reasoning in a large variety of domains.
    5. Mach E. The Analysis of Sensations. 1897.
