Hierarchical models in the brain

PLoS Comput Biol. 2008 Nov;4(11):e1000211. doi: 10.1371/journal.pcbi.1000211. Epub 2008 Nov 7.

Abstract

This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely dynamic expectation maximisation. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among apparently diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.
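
To make the hierarchical composition concrete, the following is a minimal sketch (not taken from the paper) of a two-level hierarchical dynamic model. The equations of motion and observation functions are simple linear forms chosen purely for illustration; the key structural point is that the output of the upper level serves as the input (cause) to the level below, and each level carries its own system noise w and observation noise z. Inversion by dynamic expectation maximisation is not shown.

    import numpy as np

    rng = np.random.default_rng(0)

    dt, T = 0.01, 1000                                  # Euler step size and number of samples
    v_top = np.sin(np.linspace(0, 4 * np.pi, T))        # top-level cause driving the hierarchy

    def simulate_level(v_in, A, B, C, w_sd, z_sd):
        """One state-space level: dx/dt = A x + B v + w, output y = C x + z."""
        x = np.zeros(A.shape[0])
        y = np.empty((T, C.shape[0]))
        for t in range(T):
            # Euler-Maruyama update of the hidden states with system noise w
            x = x + dt * (A @ x + B * v_in[t]) \
                + np.sqrt(dt) * w_sd * rng.standard_normal(A.shape[0])
            # Observation with measurement noise z; this output becomes the
            # input (cause) of the level below
            y[t] = C @ x + z_sd * rng.standard_normal(C.shape[0])
        return y

    # Upper level: two hidden states driven by the top-level cause
    A2 = np.array([[-1.0, 0.5],
                   [-0.5, -1.0]])
    y2 = simulate_level(v_top, A2, np.array([1.0, 0.0]),
                        np.array([[1.0, 1.0]]), w_sd=0.05, z_sd=0.01)

    # Lower level: its input is the output of the level above (the hierarchical link)
    A1 = np.array([[-2.0]])
    y1 = simulate_level(y2[:, 0], A1, np.array([1.0]),
                        np.array([[1.0]]), w_sd=0.05, z_sd=0.01)

    print(y1.shape)   # (1000, 1): data generated by the two-level hierarchy

Loosely speaking, suppressing the dynamics at every level (no hidden-state evolution or system noise) collapses such a hierarchy towards the static general linear model mentioned above as one of the special cases.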

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Algorithms
  • Animals
  • Brain / anatomy & histology
  • Brain / physiology*
  • Humans
  • Linear Models
  • Mental Processes / physiology*
  • Models, Neurological*
  • Nerve Net / anatomy & histology
  • Nerve Net / physiology
  • Neural Networks, Computer*
  • Nonlinear Dynamics
  • Probability