Toward the neural implementation of structure learning
Introduction
Animals perceive complex objects, learn abstract concepts and acquire sophisticated motor skills, often from limited experience. Making these inferences efficiently is paramount for survival, such as when determining whether a looming shadow indicates a predator, learning when and where to re-hide a food cache, or deciding to abandon a rich foraging niche in anticipation of a natural calamity. Inferences from sparse data depend on background knowledge that restricts the potentially unlimited ways of parsing and interpreting the world. The brain likely makes these inferences by efficiently exploiting regularities in the environment to learn and use latent structured relations. In essence, these structures are possible generative models that capture, at an abstract level, the relationships and causal processes underlying observations. Learned structural constraints can then be applied to solve related but novel tasks, such as parsing ambiguous sensory input and generating novel actions. How hidden structure is learned and used to support inductive leaps that go beyond the available data is an important question in contemporary neuroscience.
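The role of background knowledge in licensing inferences from sparse data can be made concrete with a minimal Bayesian sketch in the style of Tenenbaum's "number game". The hypothesis sets and numbers below are illustrative assumptions, not taken from this article; each hypothesis plays the role of a candidate latent structure over observations.

```python
# Illustrative sketch: Bayesian comparison of candidate latent structures.
# Hypotheses are sets of numbers a hidden "concept" could generate.
HYPOTHESES = {
    "powers_of_two": {2, 4, 8, 16, 32, 64},
    "even_numbers": set(range(2, 101, 2)),
    "all_numbers": set(range(1, 101)),
}

def posterior(data, hypotheses):
    """P(h | data) under a uniform prior, sampling examples uniformly
    from the hypothesis's extension (the 'size principle')."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in data):
            # Each consistent example has likelihood 1 / |h|.
            scores[name] = (1.0 / len(extension)) ** len(data)
        else:
            scores[name] = 0.0
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

post = posterior([2, 4, 8], HYPOTHESES)
best = max(post, key=post.get)
```

Three examples suffice because the size principle penalizes needlessly broad hypotheses: the smallest structure consistent with the data dominates the posterior, which is the kind of inductive leap from sparse data described above.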
For more than a century, the challenge of determining how structure learning and inductive inference can be efficiently performed has been tackled by statisticians [1], linguists [2], computer scientists [3] and cognitive scientists [4, 5, 6, 7]. However, insights into the neural implementation of structure learning have remained rare despite some excellent attempts [8, 9]. Presumably this is because it is challenging to design experiments with the necessary task complexity, to ascertain that animals acquire and use specific structures, and then to probe the underlying neural computations of structure learning and use. Nevertheless, if the need to learn structures that can support inductive inference was a selective pressure in the evolution of neural circuit function, then probing neural dynamics in this computational regime may be important for our understanding of brain function. Here we provide an overview of a computational approach to structure learning and inductive inference from contemporary cognitive science, and discuss what this framework offers to studies of the neural implementation of structure learning.
Section snippets
Structure learning in animal cognition
To gain an intuition for the advantage of knowing the appropriate generative structure of the environment, consider an entertaining anecdote about Richard Feynman. While bored at Los Alamos when he worked on the Manhattan Project, Feynman passed his time by picking the locks on filing cabinets [10]. By tinkering with the sophisticated three-disk Mosler combination locks that in principle could support one million combinations, Feynman uncovered certain regularities in the locks’ design that
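The computational payoff of exploiting such regularities can be sketched with a back-of-the-envelope calculation. The specific dial size and tolerance below are assumptions for illustration; the anecdote in [10] has the details.

```python
# Hedged illustration: how knowledge of a lock's structural regularities
# collapses the search space. Numbers are assumptions, not from the article.
DIAL_POSITIONS = 100  # positions per disc
DISCS = 3

# Naive search: try every combination.
naive_search = DIAL_POSITIONS ** DISCS  # 1,000,000

# Suppose the mechanism tolerates being a couple of positions off,
# so only every 5th position is effectively distinct.
TOLERANCE_SPACING = 5
effective_positions = DIAL_POSITIONS // TOLERANCE_SPACING  # 20

structured_search = effective_positions ** DISCS
speedup = naive_search // structured_search
```

Under these assumed numbers, knowing one structural fact about the generative mechanism shrinks a million-combination search to a few thousand trials, which is the sense in which knowing the right generative structure of the environment pays off.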
Modeling structure learning to generate insight into component processes
Any effort to understand the neural implementation of cognition requires that the underlying cognitive processes be identified and exposed in specifically tailored behavioral tasks. These component cognitive processes can sometimes be intuited, such as evidence integration in perceptual judgments [29]. However, the algorithmic steps necessary for the acquisition of, and reasoning with, hierarchically structured abstractions of the environment are not immediately apparent. A rigorous approach has
Toward the neural implementation of structure learning
Nonparametric hierarchical Bayesian models (NPHBMs) have achieved human-level performance on a range of cognitive tasks including acquisition of novel concepts [38••], causal learning [39], parsing motion in the environment [40] and others (for a review see [5]). Although the extent to which this normative approach provides an adequate framework for cognition continues to be debated [41], the utility of such an abstract framework for systems neuroscience may come from the insights it offers into the component processes that might underlie
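As a toy illustration of the kind of computation this framework formalizes, the sketch below compares two candidate generative structures for a binary sequence by their marginal likelihoods. It is a minimal stand-in, not the NPHBM machinery of the cited work; the Beta-Bernoulli models and the example sequence are assumptions for illustration.

```python
# Toy Bayesian structure comparison: does a sequence come from independent
# draws, or from a model with latent transition structure?
from math import lgamma

def log_beta(x, y):
    return lgamma(x) + lgamma(y) - lgamma(x + y)

def log_marginal(ones, zeros, a=1.0, b=1.0):
    """Log marginal likelihood of Bernoulli counts with a Beta(a, b)
    prior on the parameter integrated out."""
    return log_beta(a + ones, b + zeros) - log_beta(a, b)

def score_independent(seq):
    # One shared Bernoulli parameter for every observation.
    return log_marginal(sum(seq), len(seq) - sum(seq))

def score_markov(seq):
    # A separate Bernoulli parameter per preceding symbol: a minimal
    # "structured" generative model of transitions.
    score = log_marginal(seq[0], 1 - seq[0])  # first symbol
    for prev in (0, 1):
        nxt = [y for x, y in zip(seq, seq[1:]) if x == prev]
        score += log_marginal(sum(nxt), len(nxt) - sum(nxt))
    return score

seq = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
markov_score = score_markov(seq)
indep_score = score_independent(seq)
# The alternating sequence is far more probable under the structured model.
```

Comparing structures by marginal likelihood automatically trades model fit against flexibility, which is one way of making explicit the component processes such a normative framework proposes.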
Conclusions
Since Edward Thorndike's tests of cats' escapology from puzzle boxes, there has been a fascination with, and debate about, whether and how animals internalize and use the structure of the world [61]. Contemporary cognitive scientists have recruited NPHBMs to generate normative descriptions of cognition that infer this structure, but the project of describing how the mind and the brain learn internal models has remained challenging. Despite this, there is promising recent evidence that some of the
References and recommended reading
Papers of particular interest, published within the period of review, have been highlighted as:
• of special interest
•• of outstanding interest
Acknowledgements
We thank Brenden Lake for contributing Figure 2, Alla Karpova for significant contributions and Shaul Druckmann, Maksim Manakov, Mikhail Proskurin, Vivek Jayaraman and other colleagues at Janelia Research Campus for comments on and assistance with the manuscript. DGRT is funded by Howard Hughes Medical Institute.
References (62)
- et al. Structure learning in action. Behav Brain Res (2010)
- et al. The neural representation of sequences: from transition probabilities to algebraic patterns and linguistic trees. Neuron (2015)
- et al. Belief states as a framework to explain extra-retinal influences in visual cortex. Curr Opin Neurobiol (2015)
- Clever animals and killjoy explanations in comparative psychology. Trends Cogn Sci (2010)
- et al. The faculty of language: what's special about it? Cognition (2005)
- et al. Factual and counterfactual action-outcome mappings control choice between goal-directed actions in rats. Curr Biol (2015)
- et al. Hippocampal representation of related and opposing memories develop within distinct, hierarchically organized neural schemas. Neuron (2014)
- The adaptive character of thought (1990)
- et al. Discovering latent causes in reinforcement learning. Curr Opin Behav Sci (2015)
- et al. Human-level concept learning through probabilistic program induction. Science (2015)
- The neuropsychology of Lashley; selected papers
- The Macaque Face Patch System: A Window into Object Representation. Cold Spring Harb Symp Quant Biol
- Single neurons in prefrontal cortex encode abstract rules. Nature
- Neuronal circuits underlying persistent representations despite time varying activity. Curr Biol
- Same-different conceptualization by baboons (Papio papio): the role of entropy. J Comp Psychol
- Data analysis using regression and multilevel/hierarchical models
- Aspects of the theory of syntax
- Learning in Graphical Models
- Context, learning, and extinction. Psychol Rev
- How to grow a mind: statistics, structure, and abstraction. Science
- An integrative theory of prefrontal cortex function. Annu Rev Neurosci
- Surely you’re joking, Mr. Feynman!: adventures of a curious character
- Darwin's strange inversion of reasoning. Proc Natl Acad Sci U S A
- Evolutionary psychology: the new science of the mind
- Fish can infer social rank by observation alone. Nature
- Coding of abstract quantity by ‘number neurons’ of the primate brain. J Comp Physiol A: Neuroethol Sens Neural Behav Physiol
- Conformity to cultural norms of tool use in chimpanzees. Nature
- World without weight: perspectives on an alien mind
- Chimpanzees solve the trap problem when the confound of tool-use is removed. J Exp Psychol: Anim Behav Process
- Neurons selective to the number of visual items in the corvid songbird endbrain. Proc Natl Acad Sci U S A
- Tool-use and instrumental learning in the Eurasian jay (Garrulus glandarius). Anim Cogn