Review. Front Hum Neurosci. 2013 Oct 24;7:668. doi: 10.3389/fnhum.2013.00668.

Learning what to expect (in visual perception)

Peggy Seriès et al.

Abstract

Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is "Bayes-optimal" under some constraints. In this context, expectations are particularly interesting because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain open, however; for example: How fast do priors change over time? Are there limits to the complexity of the priors that can be learned? How do an individual's priors compare to the true scene statistics? Can we unlearn priors that are thought to correspond to natural scene statistics? Where and what are the neural substrates of priors? Focusing on the perception of visual motion, here we review recent studies from our laboratories and others addressing these issues. We discuss how these data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning, and we review the possible neural basis of priors.
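The central computational idea of this framework, that a prior belief is combined with noisy sensory evidence via Bayes' rule, can be sketched for the simplest Gaussian case, where the posterior mean is a precision-weighted average of the prior mean and the measurement. All numbers below are invented for illustration and are not taken from any of the studies reviewed:

```python
def posterior_gaussian(prior_mean, prior_var, meas, meas_var):
    """Combine a Gaussian prior with a Gaussian likelihood (Bayes' rule).

    The posterior is Gaussian; its mean is a precision-weighted average of
    the prior mean and the measurement, and its precision is the sum of
    the two precisions.
    """
    w_prior = 1.0 / prior_var   # precision of the prior belief
    w_meas = 1.0 / meas_var     # precision of the sensory evidence
    post_mean = (w_prior * prior_mean + w_meas * meas) / (w_prior + w_meas)
    post_var = 1.0 / (w_prior + w_meas)
    return post_mean, post_var

# A prior expecting motion near 32 deg pulls a noisy measurement of 40 deg
# toward 32 deg; the pull grows as the evidence gets less reliable.
sharp, _ = posterior_gaussian(32.0, 100.0, 40.0, 4.0)    # reliable evidence
vague, _ = posterior_gaussian(32.0, 100.0, 40.0, 100.0)  # vague evidence
```

With equally reliable prior and evidence the estimate lands halfway (36.0 here); with sharp evidence it stays close to the measurement. This reliability-weighting is what makes priors most visible under weak or ambiguous stimulation.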

Keywords: Bayesian priors; expectations; perceptual learning; probabilistic inference; statistical learning.


Figures

FIGURE 1
Structural vs. contextual expectations. (A) Example of a structural expectation: the “light-from-above” prior. Are these shapes bumps or dimples? Perceiving one dimple in the middle of bumps is consistent with assuming that light comes from the top of the image. Turning the page upside down leads to the opposite percept (a bump in the middle of dimples). (B) Example of a contextual expectation. What do you see in the drawing on the right: a rabbit or a duck? This ambiguous, bistable percept can be influenced by the context in which it appears; e.g., having just seen a flock of ducks would make one more likely to perceive a duck. (C) Structural expectations act as “default” expectations but can be superseded by contextual expectations.
FIGURE 2
Experiment and main results of Chalk et al. (2010). (A) Stimulus and task used in the experiment. On each trial, participants estimated the direction of motion of a cloud of coherently moving dots by moving the central bar (estimation task), then indicated whether they had perceived a stimulus by clicking on “dots” or “no dots” (detection task). Some trials had very-low-contrast stimuli or no stimulus at all. Feedback was given only for the detection task. Inset: two directions of motion, -32° and 32°, were presented in more trials than the other directions. The question was whether participants would implicitly learn this underlying stimulus distribution and how it would influence their performance. (B) Participants quickly developed attractive estimation biases: they tended to perceive motion directions as more similar to the most frequent directions, -32° and 32° (vertical dashed lines), than they really were. (C) On trials with no stimulus on which participants nevertheless reported seeing one (blue line), they tended to report directions close to -32° and 32° (vertical dashed lines). When they correctly reported that there was no stimulus (red line), their estimates were distributed more uniformly.
FIGURE 3
Experiment and main results of Sotiropoulos et al. (2011a). (A) The stimulus is a field of lines translating rigidly along either of the two directions shown by the white arrows (the latter are not part of the stimulus). The task of the participants is to report the direction of motion (“up” or “down”), without feedback. (B) Cartoon of experimental hypothesis. Left: initially participants have a prior favoring slow speeds. Middle: the low-speed group was exposed to low speeds (blue), while the high-speed group viewed faster speeds (red). Right: training will lead the high-speed group to shift their prior expectations toward higher speeds (red) compared to the low-speed group (blue). (C) Results: Proportion of oblique perceptions (po) in low-contrast condition, for three trial durations. Each point is the po for the first (empty symbols) or last (filled symbols) test block of the session, for the high-speed (red) or the low-speed (blue) group. Lines correspond to linear fits to each block/group combination. Error bars denote between-subjects SEM. Initially participants are biased toward perceiving motion as being more often perpendicular to the orientation of the lines than it really is (consistent with estimating that the test stimulus is slower than it really is). However, this bias slowly decreases with training in the experimental group, and reverses after 3 days (consistent with estimating that the test stimulus is faster than it really is). (D) Fits from Bayesian model of motion perception (points) can account for the behavior of the two groups (lines, corresponding to the linear fits in C) when the speed prior is allowed to shift with training. Reproduced from Sotiropoulos et al., 2011a with permission.
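The Bayesian account fitted in (D) can be illustrated with a minimal Gaussian sketch in the spirit of slow-speed-prior models of motion perception. Here a broader likelihood stands in for lower contrast; all speeds, variances, and the shifted prior mean are hypothetical values chosen for illustration, not the paper's fitted parameters:

```python
def speed_estimate(true_speed, likelihood_var, prior_mean, prior_var):
    """Posterior-mean speed estimate under a Gaussian speed prior.

    Lower contrast is modeled as a wider likelihood (larger
    likelihood_var), so the estimate is pulled more strongly
    toward the prior mean.
    """
    w_l = 1.0 / likelihood_var
    w_p = 1.0 / prior_var
    return (w_l * true_speed + w_p * prior_mean) / (w_l + w_p)

TRUE_SPEED = 8.0  # hypothetical test speed, arbitrary units

# Before training: a prior centred on slow speeds makes the stimulus look
# slower than it is, and more so at low contrast (broad likelihood).
before_high_contrast = speed_estimate(TRUE_SPEED, 1.0, 1.0, 4.0)
before_low_contrast = speed_estimate(TRUE_SPEED, 8.0, 1.0, 4.0)

# After training on fast stimuli the prior mean shifts upward, so the same
# low-contrast stimulus can be overestimated (the reversed bias in C).
after_low_contrast = speed_estimate(TRUE_SPEED, 8.0, 12.0, 4.0)
```

In this toy setting the pre-training estimates fall below the true speed (more so at low contrast), while shifting the prior mean above the true speed flips the sign of the bias, mirroring the qualitative pattern in panels (C) and (D).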
