Toward the neural implementation of structure learning

https://doi.org/10.1016/j.conb.2016.01.014

Highlights

  • Structure learning may be key to inference from sparse data.

  • The psychological and neural bases of structure learning are key challenges.

  • Normative accounts of structure learning have been put forth.

  • Hints that neural circuits implement some of the proposed computations.

Despite significant advances in neuroscience, the neural bases of intelligence remain poorly understood. Arguably the most elusive aspect of intelligence is the ability to make robust inferences that go far beyond one's experience. Animals categorize objects, learn to vocalize and may even estimate causal relationships — all in the face of data that are often ambiguous and sparse. Such inductive leaps are thought to result from the brain's ability to infer latent structure that governs the environment. However, we know little about the neural computations that underlie this ability. Recent advances in developing computational frameworks that can support efficient structure learning and inductive inference may provide insight into the underlying component processes and help pave the way toward uncovering their neural implementation.

Introduction

Animals perceive complex objects, learn abstract concepts and acquire sophisticated motor skills, often from limited experience. Efficiently making these inferences is paramount for survival, such as when determining whether a looming shadow indicates a predator, learning when and where to re-hide a food cache, or deciding to abandon a rich foraging niche in anticipation of a natural calamity. Inferences from sparse data depend upon background knowledge that restricts the potentially unlimited ways of parsing and interpreting the world. The brain likely makes these inferences by efficiently exploiting regularities in the environment to learn and use latent structured relations. In essence, these structures are possible generative models that capture, at an abstract level, the relationships and causal processes underlying observations. Learned structural constraints can then be applied to solve related but novel tasks, such as parsing ambiguous sensory input and generating novel actions. How hidden structure is learned and used to support inductive leaps that go beyond the available data is an important question in contemporary neuroscience.

For more than a century, the challenge of determining how structure learning and inductive inference can be performed efficiently has been tackled by statisticians [1], linguists [2], computer scientists [3] and cognitive scientists [4, 5, 6, 7]. However, insights into the neural implementation of structure learning have been rare despite some excellent attempts [8, 9]. Presumably this is because it is challenging to design experiments with the necessary task complexity, to ascertain that animals acquire and use specific structures, and then to probe the underlying neural computations of structure learning and use. Nevertheless, if the need to learn structures that can support inductive inference was a selective pressure in the evolution of neural circuit function, then probing neural dynamics in this computational regime may be important for our understanding of brain function. Here we provide an overview of a computational approach to structure learning and inductive inference from contemporary cognitive science, and discuss what this framework offers to studies of the neural implementation of structure learning.

Section snippets

Structure learning in animal cognition

To gain an intuition for the advantage of knowing the appropriate generative structure of the environment, consider an entertaining anecdote about Richard Feynman. While bored at Los Alamos when he worked on the Manhattan Project, Feynman passed his time by picking the locks on filing cabinets [10]. By tinkering with the sophisticated three-disk Mosler combination locks that in principle could support one million combinations, Feynman uncovered certain regularities in the locks’ design that
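The computational payoff of such regularities can be sketched numerically. The figures below follow the commonly recounted details of Feynman's account [10] — a mechanical tolerance of about ±2 on each disk, and the last two numbers being readable from an open safe — but they are used here purely as illustrative assumptions about how learned structure collapses a hypothesis space:

```python
# Illustrative numbers only (assumed from Feynman's anecdote, not from the
# article itself): a three-disk combination lock with 100 positions per disk.
positions = 100
disks = 3

naive = positions ** disks  # brute-force hypothesis space

# Regularity 1: a mechanical tolerance of +/-2 means trying every 5th
# position suffices, collapsing each disk to 20 effective positions.
tolerance = 2
effective = positions // (2 * tolerance + 1)
with_tolerance = effective ** disks

# Regularity 2: if the last two numbers can be read off an open lock,
# only the first disk remains unknown.
known_disks = 2
with_known = effective ** (disks - known_disks)

print(naive, with_tolerance, with_known)  # 1000000 8000 20
```

The point is not the lock itself: an agent that has inferred the generative structure of its environment faces a search problem orders of magnitude smaller than one that has not.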

Modeling structure learning to generate insight into component processes

Any effort to understand the neural implementation of cognition requires that the underlying cognitive processes be identified and exposed in specifically-tailored behavioral tasks. These component cognitive processes can sometimes be intuited, such as evidence integration in perceptual judgments [29]. However, the algorithmic steps necessary for the acquisition of and reasoning with hierarchically structured abstractions of the environment are not immediately apparent. A rigorous approach has

Toward the neural implementation of structure learning

NPHBMs (nonparametric hierarchical Bayesian models) have achieved human-level performance on a range of cognitive tasks, including the acquisition of novel concepts [38••], causal learning [39], parsing motion in the environment [40] and others (for a review see [5]). Although the extent to which this normative approach provides an adequate framework for cognition continues to be debated [41], the utility of such an abstract framework for systems neuroscience may come from the insights it offers into the component processes that might underlie
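One component computation of such models can be made concrete with the classic "bags of marbles" overhypothesis setting from the cognitive-science literature — a minimal sketch, not the article's own model: each bag's black-marble rate is drawn from a Beta(a, b) distribution, and the hyperparameters (a, b) are themselves inferred by a simple grid approximation. After seeing several bags that are each uniformly one colour, a single marble from a new bag licenses a confident prediction about the rest of that bag:

```python
import math
from itertools import product

def log_betabinom(k, n, a, b):
    """Log beta-binomial likelihood of k successes in n draws under Beta(a, b)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def posterior_predictive(bags, probe):
    """Grid posterior over (a, b) with a uniform prior on the grid;
    returns P(next marble is black | training bags, probe bag)."""
    grid = [0.1 * i for i in range(1, 101)]  # hyperparameters from 0.1 to 10.0
    num = den = 0.0
    for a, b in product(grid, grid):
        ll = sum(log_betabinom(k, n, a, b) for k, n in bags)
        ll += log_betabinom(probe[0], probe[1], a, b)
        w = math.exp(ll)
        # Given (a, b) and the probe bag's counts (k, n), the new bag's rate
        # has a Beta(a + k, b + n - k) posterior, mean (a + k) / (a + b + n).
        num += w * (a + probe[0]) / (a + b + probe[1])
        den += w
    return num / den

# Training bags are each uniformly one colour: (black_count, total_draws).
uniform_bags = [(10, 10), (0, 10), (10, 10), (0, 10)]
p = posterior_predictive(uniform_bags, probe=(1, 1))  # one black marble seen
print(round(p, 3))  # well above 0.5: one sample generalises to the whole bag
```

The inductive leap comes from the higher level of the hierarchy: the uniform training bags push the posterior toward low-variance priors on each bag's colour rate, so sparse data about a new bag go a long way. Had the training bags been mixed, the same single marble would support only a near-chance prediction.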

Conclusions

Since Edward Thorndike's tests of cats' escapology from puzzle boxes, there has been a fascination with, and debate about, whether and how animals internalize and use the structure of the world [61]. Contemporary cognitive scientists have recruited NPHBMs to generate normative accounts of how cognition infers this structure, but the project of describing how the mind and the brain learn internal models has remained challenging. Despite this, there is promising recent evidence that some of the

References and recommended reading

Papers of particular interest, published within the period of review, have been highlighted as:

  • of special interest

  •• of outstanding interest

Acknowledgements

We thank Brenden Lake for contributing Figure 2, Alla Karpova for significant contributions and Shaul Druckmann, Maksim Manakov, Mikhail Proskurin, Vivek Jayaraman and other colleagues at Janelia Research Campus for comments on and assistance with the manuscript. DGRT is funded by Howard Hughes Medical Institute.

References (62)

  • K.S. Lashley: The neuropsychology of Lashley; selected papers (1960)

  • D. Tsao: The macaque face patch system: a window into object representation. Cold Spring Harb Symp Quant Biol (2014)

  • J.D. Wallis et al.: Single neurons in prefrontal cortex encode abstract rules. Nature (2001)

  • S. Druckmann et al.: Neuronal circuits underlying persistent representations despite time varying activity. Curr Biol (2012)

  • E.A. Wasserman et al.: Same-different conceptualization by baboons (Papio papio): the role of entropy. J Comp Psychol (2001)

  • A. Gelman et al.: Data analysis using regression and multilevel/hierarchical models (2007)

  • N. Chomsky: Aspects of the theory of syntax (1965)

  • M.I. Jordan: Learning in graphical models (1998)

  • S.J. Gershman et al.: Context, learning, and extinction. Psychol Rev (2010)

  • J.B. Tenenbaum et al.: How to grow a mind: statistics, structure, and abstraction. Science (2011)

  • E.K. Miller et al.: An integrative theory of prefrontal cortex function. Annu Rev Neurosci (2001)

  • R.P. Feynman et al.: Surely you're joking, Mr. Feynman!: adventures of a curious character (1992)

  • D. Dennett: Darwin's strange inversion of reasoning. Proc Natl Acad Sci U S A (2009)

  • D. Buss: Evolutionary psychology: the new science of the mind (2015)

  • L. Grosenick et al.: Fish can infer social rank by observation alone. Nature (2007)

  • A. Nieder: Coding of abstract quantity by ‘number neurons’ of the primate brain. J Comp Physiol A: Neuroethol Sens Neural Behav Physiol (2013)

  • A. Whiten et al.: Conformity to cultural norms of tool use in chimpanzees. Nature (2005)

  • D.J. Povinelli et al.: World without weight: perspectives on an alien mind (2012)

  • A.M. Seed et al.: Chimpanzees solve the trap problem when the confound of tool-use is removed. J Exp Psychol: Anim Behav Process (2009)

  • H.M. Ditz et al.: Neurons selective to the number of visual items in the corvid songbird endbrain. Proc Natl Acad Sci U S A (2015)

  • L.G. Cheke et al.: Tool-use and instrumental learning in the Eurasian jay (Garrulus glandarius). Anim Cogn (2011)