Fisher and Shannon information in finite neural populations

Neural Comput. 2012 Jul;24(7):1740-80. doi: 10.1162/NECO_a_00292. Epub 2012 Mar 19.

Abstract

The precision of the neural code is commonly investigated using two families of statistical measures: Shannon mutual information and derived quantities, typically applied to very small populations of neurons, and Fisher information, typically applied to large populations. These statistical tools are no longer the preserve of theorists and are being applied by experimental research groups in the analysis of empirical data. Although the relationship between information-theoretic and Fisher-based measures in the limit of infinite populations is relatively well understood, how these measures compare in finite-size populations has not yet been systematically explored. We aim to close this gap. We are particularly interested in understanding which stimuli are best encoded by a given neuron within a population and how this depends on the chosen measure. We use a novel Monte Carlo approach to compute a stimulus-specific decomposition of the mutual information (the SSI) for populations of up to 256 neurons and show that Fisher information can be used to accurately estimate both mutual information and SSI for populations on the order of 100 neurons, even in the presence of biologically realistic variability, noise correlations, and experimentally relevant integration times. According to both measures, the stimuli that are best encoded are those falling at the flanks of the neuron's tuning curve. In populations of fewer than around 50 neurons, however, Fisher information can be misleading.
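To make the two quantities concrete, the following is a minimal sketch, not the paper's implementation, of how Fisher information and a Monte Carlo SSI estimate can be computed for a model population. It assumes independent Poisson neurons with von Mises tuning curves over a circular stimulus with a uniform prior; all parameter values, grid sizes, and function names are illustrative. Fisher information has the closed form J(s) = T Σᵢ fᵢ'(s)²/fᵢ(s), while the SSI at a stimulus s is estimated by sampling spike-count responses at s, computing the Bayesian posterior over stimuli for each response, and averaging the resulting specific information H(S) − H(S|r).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative population: N Poisson neurons with von Mises tuning
# curves over a circular stimulus s in [0, 2*pi). Parameter values
# are assumptions for this sketch, not taken from the paper.
N = 100                       # population size
T = 0.5                       # integration time (s)
r_max, r_base, kappa = 50.0, 5.0, 2.0
centers = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

def rates(s):
    """Mean firing rates f_i(s) for all neurons at stimulus s."""
    return r_base + (r_max - r_base) * np.exp(kappa * (np.cos(s - centers) - 1.0))

def rate_derivs(s):
    """Tuning-curve derivatives f_i'(s)."""
    return -(r_max - r_base) * kappa * np.sin(s - centers) \
        * np.exp(kappa * (np.cos(s - centers) - 1.0))

def fisher_info(s):
    """Fisher information of independent Poisson counts over time T:
       J(s) = T * sum_i f_i'(s)^2 / f_i(s)."""
    return T * np.sum(rate_derivs(s) ** 2 / rates(s))

# Monte Carlo SSI: discretise the stimulus, draw spike-count
# responses, and average the specific information of each response.
S = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)  # stimulus grid
log_prior = -np.log(len(S))                           # uniform prior
F = np.array([rates(s) for s in S]) * T               # mean counts, (|S|, N)

def specific_info(counts):
    """i_sp(r) = H(S) - H(S|r), in bits.
    log p(r|s) = sum_i [r_i log(lambda_i(s)) - lambda_i(s)] up to a
    counts-only constant, which cancels in the posterior."""
    log_post = counts @ np.log(F.T) - F.sum(axis=1) + log_prior
    log_post -= log_post.max()                        # for numerical stability
    post = np.exp(log_post)
    post /= post.sum()
    h_post = -np.sum(post * np.log2(post + 1e-300))
    return np.log2(len(S)) - h_post

def ssi(s_index, n_samples=2000):
    """SSI(s): specific information averaged over responses drawn at s."""
    return np.mean([specific_info(rng.poisson(F[s_index]))
                    for _ in range(n_samples)])

print(f"Fisher information at s = pi: {fisher_info(np.pi):.1f}")
print(f"SSI at s = pi: {ssi(len(S) // 2):.3f} bits")
```

Evaluating fisher_info and ssi across the stimulus grid gives, in spirit, the kind of measure-by-measure comparison the paper performs; the population size N and sample count n_samples would need to be varied to probe the finite-size regime the abstract describes.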

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Animals
  • Computer Simulation*
  • Finite Element Analysis
  • Humans
  • Models, Neurological*
  • Monte Carlo Method*
  • Neurons / physiology*