Toward the neural implementation of structure learning
Introduction
Animals perceive complex objects, learn abstract concepts and acquire sophisticated motor skills, often from limited experience. Making these inferences efficiently is paramount for survival, such as when determining whether a looming shadow indicates a predator, learning when and where to re-hide a food cache, or deciding to abandon a rich foraging niche in anticipation of a natural calamity. Inferences from sparse data depend upon background knowledge that restricts the potentially unlimited ways of parsing and interpreting the world. The brain likely makes these inferences by efficiently exploiting regularities in the environment to learn and use latent structured relations. In essence, these structures are possible generative models that capture, at an abstract level, the relationships and causal processes underlying observations. Learned structural constraints can then be applied to solve related but novel tasks, such as parsing ambiguous sensory input and generating novel actions. How hidden structure is learned and used to support inductive leaps that go beyond the available data is an important question in contemporary neuroscience.
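To make this concrete, the sketch below (an illustrative example, not drawn from the paper) casts one such inference as Bayesian model comparison: a learner whose prior knowledge already includes two candidate generative models, here labeled benign and predator with assumed parameters, can commit confidently to one of them from only a handful of noisy observations.

```python
import numpy as np

# Illustrative sketch: inference over two hypothetical generative models
# of a sensory variable (e.g., the size of a looming shadow). The model
# names and parameters are assumptions made for this example.

rng = np.random.default_rng(0)

models = {
    "benign":   dict(mu=1.0, sigma=0.5, prior=0.9),
    "predator": dict(mu=3.0, sigma=1.0, prior=0.1),
}

def gauss_loglik(x, mu, sigma):
    """Log-likelihood of observations x under a Gaussian generative model."""
    return (-0.5 * np.sum(((x - mu) / sigma) ** 2)
            - x.size * np.log(sigma * np.sqrt(2 * np.pi)))

x = rng.normal(2.8, 1.0, size=3)  # only three noisy observations

# Posterior over models: prior knowledge of the generative structure
# turns sparse evidence into a confident inference.
log_post = {k: np.log(m["prior"]) + gauss_loglik(x, m["mu"], m["sigma"])
            for k, m in models.items()}
z = np.logaddexp(*log_post.values())
posterior = {k: np.exp(v - z) for k, v in log_post.items()}
print(posterior)
```

With an uninformative prior over all possible sources, three samples would say little; with the structured prior, they suffice.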
For more than a century, the challenge of determining how structure learning and inductive inference can be performed efficiently has been tackled by statisticians [1], linguists [2], computer scientists [3] and cognitive scientists [4, 5, 6, 7]. However, insights into the neural implementation of structure learning have been rare, despite some excellent attempts [8, 9]. Presumably this is because it is challenging to design experiments with the necessary task complexity, to ascertain that animals acquire and use specific structures, and then to probe the underlying neural computations of structure learning and use. Nevertheless, if the need to learn structures that can support inductive inference was a selective pressure in the evolution of neural circuit function, then probing neural dynamics in this computational regime may be important for our understanding of brain function. Here we provide an overview of a computational approach to structure learning and inductive inference from contemporary cognitive science, and discuss what this framework offers to studies of the neural implementation of structure learning.
Structure learning in animal cognition
To gain an intuition for the advantage of knowing the appropriate generative structure of the environment, consider an entertaining anecdote about Richard Feynman. While working on the Manhattan Project at Los Alamos, a bored Feynman passed his time by picking the locks on filing cabinets [10]. By tinkering with the sophisticated three-disk Mosler combination locks, which in principle could support one million combinations, Feynman uncovered certain regularities in the locks’ design that
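The passage is cut off before the regularities are spelled out; in Feynman's own account, a key one was mechanical tolerance, with each disk forgiving an error of roughly two numbers on either side. Under that assumption (ours, added for illustration), a few lines of arithmetic show how much the structural knowledge shrinks the search space:

```python
# Back-of-the-envelope sketch of why exploiting structure helps.
# Assumption (from Feynman's memoir, not stated in the truncated
# passage above): each of the three disks has 100 positions, but a
# mechanical tolerance of about +/-2 makes nearby settings equivalent.

positions_per_disk = 100
disks = 3
tolerance = 2                                           # assumed slop
effective = positions_per_disk // (2 * tolerance + 1)   # ~20 distinct settings

naive_space = positions_per_disk ** disks      # 1,000,000 combinations
structured_space = effective ** disks          # 8,000 combinations

print(f"brute force: {naive_space:,} vs exploiting tolerance: {structured_space:,}")
```

The structural insight reduces an intractable brute-force search by more than two orders of magnitude.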
Modeling structure learning to generate insight into component processes
Any effort to understand the neural implementation of cognition requires that the underlying cognitive processes be identified and exposed in specifically tailored behavioral tasks. These component cognitive processes can sometimes be intuited, such as evidence integration in perceptual judgments [29]. However, the algorithmic steps necessary for acquiring, and reasoning with, hierarchically structured abstractions of the environment are not immediately apparent. A rigorous approach has
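As an illustration of how such an intuited component process can be made explicit, the following sketch simulates evidence integration as a simple drift-diffusion process; the parameter values are arbitrary choices for the demonstration, not taken from [29].

```python
import numpy as np

# Illustrative sketch (not the authors' model): evidence integration in
# a perceptual judgment, modeled as a drift-diffusion process that
# accumulates noisy sensory samples until a decision bound is reached.

rng = np.random.default_rng(1)

def drift_diffusion(drift=0.5, noise=1.0, bound=1.0, dt=0.001, max_t=5.0):
    """Accumulate noisy evidence; return (correct_choice, decision_time)."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return int(x > 0), t  # the positive bound is the 'correct' answer here

choices, rts = zip(*(drift_diffusion() for _ in range(500)))
print(f"accuracy ~ {np.mean(choices):.2f}, mean decision time ~ {np.mean(rts):.2f}s")
```

The point is methodological: once a candidate component process is written down this explicitly, its behavioral and neural signatures become testable.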
Toward the neural implementation of structure learning
Non-parametric hierarchical Bayesian models (NPHBMs) have achieved human-level performance on a range of cognitive tasks, including the acquisition of novel concepts [38••], causal learning [39], parsing motion in the environment [40] and others (for a review see [5]). Although the extent to which this normative approach provides an adequate framework for cognition continues to be debated [41], the utility of such an abstract framework for systems neuroscience may come from the insights it offers into the component processes that might underlie
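For readers unfamiliar with the non-parametric ingredient of these models, the sketch below samples from a Chinese restaurant process (CRP) prior, one standard building block of NPHBMs, in which the number of latent clusters (for example, latent causes or concepts) is not fixed in advance but grows with the data. This is a generic illustration, not a re-implementation of any model cited above.

```python
import numpy as np

# Generic sketch of a CRP prior: each new observation joins an existing
# latent cluster with probability proportional to that cluster's size,
# or opens a new cluster with probability proportional to alpha.

rng = np.random.default_rng(2)

def crp(n_obs, alpha=1.0):
    """Sample cluster assignments for n_obs observations from CRP(alpha)."""
    assignments = [0]
    counts = [1]
    for _ in range(1, n_obs):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):   # a new table: posit a new latent cause
            counts.append(1)
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments, counts

assignments, counts = crp(50, alpha=1.5)
print(f"sampled {len(counts)} latent clusters with sizes {counts}")
```

Priors of this kind let a learner expand its hypothesis space only as the data demand, which is one reason NPHBMs scale from sparse experience to rich structured knowledge.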
Conclusions
Since Edward Thorndike's tests of cats' escapology from puzzle boxes, there has been a fascination with, and debate about, whether and how animals internalize and use the structure of the world [61]. Contemporary cognitive scientists have recruited NPHBMs to generate normative descriptions of cognition that infer this structure, but the project of describing how the mind and the brain learn internal models has remained challenging. Despite this, there is promising recent evidence that some of the
References and recommended reading
Papers of particular interest, published within the period of review, have been highlighted as:
• of special interest
•• of outstanding interest
Acknowledgements
We thank Brenden Lake for contributing Figure 2, Alla Karpova for significant contributions, and Shaul Druckmann, Maksim Manakov, Mikhail Proskurin, Vivek Jayaraman and other colleagues at Janelia Research Campus for comments on and assistance with the manuscript. DGRT is funded by the Howard Hughes Medical Institute.
References (62)
- et al. Structure learning in action. Behav Brain Res (2010)
- et al. The neural representation of sequences: from transition probabilities to algebraic patterns and linguistic trees. Neuron (2015)
- et al. Belief states as a framework to explain extra-retinal influences in visual cortex. Curr Opin Neurobiol (2015)
- Clever animals and killjoy explanations in comparative psychology. Trends Cogn Sci (2010)
- et al. The faculty of language: what's special about it? Cognition (2005)
- et al. Factual and counterfactual action-outcome mappings control choice between goal-directed actions in rats. Curr Biol (2015)
- et al. Hippocampal representation of related and opposing memories develop within distinct, hierarchically organized neural schemas. Neuron (2014)
- The adaptive character of thought (1990)
- et al. Discovering latent causes in reinforcement learning. Curr Opin Behav Sci (2015)
- et al. Human-level concept learning through probabilistic program induction. Science (2015)