The automaticity of visual statistical learning

J Exp Psychol Gen. 2005 Nov;134(4):552-64. doi: 10.1037/0096-3445.134.4.552.

Abstract

The visual environment contains massive amounts of information involving the relations between objects in space and time, and recent studies of visual statistical learning (VSL) have suggested that this information can be automatically extracted by the visual system. The experiments reported in this article explore the automaticity of VSL in several ways, using both explicit familiarity and implicit response-time measures. The results demonstrate that (a) the input to VSL is gated by selective attention, (b) VSL is nevertheless an implicit process because it operates during a cover task and without awareness of the underlying statistical patterns, and (c) VSL constructs abstracted representations that are then invariant to changes in extraneous surface features. These results fuel the conclusion that VSL both is and is not automatic: It requires attention to select the relevant population of stimuli, but the resulting learning then occurs without intent or awareness.
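The record itself does not spell out what the "statistical" relations between objects consist of. In typical VSL paradigms (an assumption here, not a detail taken from this abstract), a familiarization stream is built from fixed triplets of shapes, so that transitional probabilities within a triplet are higher than those across triplet boundaries, and learning is inferred from sensitivity to that difference. The following minimal Python sketch illustrates how such transitional probabilities can be computed from a shape stream; the shape labels, triplet structure, and variable names are all hypothetical and are not drawn from the experiments reported in the article.

    # Hypothetical illustration of the kind of regularity VSL paradigms manipulate:
    # a stream built from fixed triplets, where within-triplet transitions are
    # fully predictable and cross-boundary transitions are not.
    from collections import Counter, defaultdict
    import random

    random.seed(0)  # reproducible example

    # Assumed structure: three triplets of arbitrary shape labels.
    triplets = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]

    # Concatenate 100 randomly chosen triplets into a continuous stream,
    # as in a typical familiarization phase.
    stream = [shape for _ in range(100) for shape in random.choice(triplets)]

    # Count adjacent pairs and compute P(next | current).
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])

    transition_probs = defaultdict(dict)
    for (a, b), n in pair_counts.items():
        transition_probs[a][b] = n / first_counts[a]

    print(transition_probs["A"]["B"])  # within-triplet transition: 1.0
    print(transition_probs["C"])       # cross-boundary transitions: roughly 1/3 each

In this toy stream, the contrast between the deterministic within-triplet transitions and the weaker cross-boundary ones is the statistical structure that observers in VSL studies are tested on, typically via familiarity judgments or response-time measures like those mentioned in the abstract.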

Publication types

  • Research Support, Non-U.S. Gov't
  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Attention
  • Automatism*
  • Humans
  • Learning*
  • Visual Perception*