Bridging large-scale neuronal recordings and large-scale network models using dimensionality reduction

https://doi.org/10.1016/j.conb.2018.12.009

Highlights

  • The interplay of neuronal recordings and network models is becoming ever stronger.

  • Population activity structure provides common ground for incisive comparisons.

  • Dimensionality reduction can be used to identify population activity structure.

  • This approach is used to study working memory, decision making, motor control, and more.

A long-standing goal in neuroscience has been to bring together neuronal recordings and neural network modeling to understand brain function. Neuronal recordings can inform the development of network models, and network models can in turn provide predictions for subsequent experiments. Traditionally, neuronal recordings and network models have been related using single-neuron and pairwise spike train statistics. We review here recent studies that have begun to relate neuronal recordings and network models based on the multi-dimensional structure of neuronal population activity, as identified using dimensionality reduction. This approach has been used to study working memory, decision making, motor control, and more. Dimensionality reduction has provided common ground for incisive comparisons and tight interplay between neuronal recordings and network models.

Introduction

For decades, the fields of experimental neuroscience and neural network modeling proceeded largely in parallel. Whereas experimental neuroscience focused on understanding how the activities of individual neurons relate to sensory stimuli and behavior, the modeling community sought to understand theoretically how neural networks can give rise to brain function. In recent years, developments in neuronal recording technology have enabled the simultaneous recording of hundreds of neurons or more [1]. Concurrently, increases in computational power have enabled the simulation of large neural networks [2]. Together, these developments should enable experimental data to more stringently constrain network model design and network models to better predict neuronal activity for subsequent experiments [3,4].

A key question is how to relate large-scale neuronal recordings with large-scale network models. Network models typically do not attempt to replicate the precise anatomical connectivity of the biological network from which the neurons are recorded, since the underlying anatomical connectivity is usually unknown (although technological developments are making this possible [5]). In such settings, there is not a one-to-one correspondence of each recorded neuron with a model neuron. To date, comparisons between recordings and models have primarily relied on aggregate spike train statistics based on single neurons (e.g., distribution of firing rates [6], distribution of tuning preferences [7], and Fano factor [8]) and pairs of neurons (e.g., spike time [9] and spike count correlations [10,11]), as well as single-neuron activity time courses [12,13]. To go beyond single-neuron and pairwise statistics, recent studies have examined the multi-dimensional structure of neuronal population activity to uncover important insights into mechanisms underlying neuronal computation (e.g., [14,15,16,17,18,19,20••,21,22,23,24]). This has motivated the inquiry of whether network models reproduce such population activity structure, in addition to single-neuron and pairwise statistics, raising the bar on what constitutes an agreement between a network model and neuronal recordings [3].
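As a concrete illustration, the single-neuron and pairwise statistics mentioned above (Fano factor, spike count correlations) can be computed in a few lines. This is a minimal sketch; the Poisson spike counts below are a synthetic stand-in for recorded or simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: spike counts for 50 neurons over 200 repeated trials
# of the same condition (stand-in for recorded or simulated spike trains).
counts = rng.poisson(lam=5.0, size=(200, 50)).astype(float)

# Fano factor per neuron: across-trial variance of the spike count
# divided by its mean (approximately 1 for a Poisson process).
fano = counts.var(axis=0, ddof=1) / counts.mean(axis=0)

# Pairwise spike count correlations: off-diagonal entries of the
# neuron-by-neuron correlation matrix.
corr = np.corrcoef(counts.T)
pair_corr = corr[np.triu_indices(50, k=1)]

print(fano.shape, pair_corr.shape)
```

With independent Poisson counts, the Fano factors cluster near 1 and the pairwise correlations near 0; structured network models and real recordings deviate from these baselines in informative ways.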

Population activity structure can be characterized using dimensionality reduction [25, 26, 27], which provides a concise summary (i.e., a low-dimensional representation) of how a population of neurons covaries and how their activities unfold over time. Several dimensionality reduction methods have been applied to neuronal population activity, including principal component analysis (e.g., [14,15,20••,28,29]), demixed principal component analysis [30], factor analysis [16,19,31••], Gaussian-process factor analysis [32], latent factor analysis via dynamical systems [33], tensor component analysis [34], and more (see [25] for a review). The low-dimensional representation describes a neuronal process being carried out by the larger circuit from which the neurons were recorded [32,35]. The same dimensionality reduction method can be applied to the recorded activity and to the network model activity, resulting in population activity structures that can be directly compared (Figure 1). The same benefit holds for related methods of comparing neuronal recordings and network models, such as neuronal decoding, population response similarity, and predicting the activity of one neuron from a population of other neurons [3].
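As a schematic of this workflow, the same dimensionality reduction method (here, PCA via scikit-learn) can be applied to recorded and to model activity, and the resulting variance structure compared. Everything below is illustrative: the two data sets are synthetic stand-ins generated from a few latent variables plus noise.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

def low_d_activity(n_trials=100, n_neurons=60, n_latents=3):
    # Population activity driven by a few latent variables plus noise,
    # mimicking low-dimensional population activity structure.
    latents = rng.normal(size=(n_trials, n_latents))
    loading = rng.normal(size=(n_latents, n_neurons))
    return latents @ loading + 0.1 * rng.normal(size=(n_trials, n_neurons))

recorded = low_d_activity()   # stand-in for recorded activity
modeled = low_d_activity()    # stand-in for network-model activity

# Apply the *same* dimensionality reduction to both data sets.
pca = PCA(n_components=10)
var_recorded = pca.fit(recorded).explained_variance_ratio_
var_modeled = pca.fit(modeled).explained_variance_ratio_

# One of several possible comparisons: variance captured by the
# top components of each data set.
print(var_recorded[:3].sum(), var_modeled[:3].sum())
```

Because both data sets are summarized with the same method, quantities such as dimensionality or the variance profile across components can be compared directly, without requiring a one-to-one correspondence between recorded and model neurons.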

Dimensionality reduction has been adopted by recent studies to relate neuronal recordings and network models to study working memory, decision making, motor control, and more. Although many studies have separately employed large-scale neuronal recordings, large-scale network models, and dimensionality reduction, this review focuses on studies that incorporate all three components. Below we describe these studies, organized by the aspect of population activity structure used to relate neuronal recordings and network models: population activity time courses, functionally-defined neuronal subspaces, and population-wide neuronal variability. These were chosen first because they represent the key ways in which dimensionality reduction has been used in the literature to relate population recordings and network models. More importantly, these three categories represent fundamental aspects of population activity structure — how it unfolds over time, how different types of information can be encoded in different subspaces, and how it varies from trial to trial.

Section snippets

Population activity time courses

Dynamical structures, such as point attractors, line attractors, and limit cycles, arising from network models have long been hypothesized to underlie the computational ability of biological networks of neurons [36, 37, 38]. Such dynamical structures have been implicated in decision making [39,40], memory [41, 42, 43], oculomotor integration [44,45], motor control [46], olfaction [47], and more. A fundamental question in systems neuroscience is whether these dynamical structures are actually present in the population activity of biological networks of neurons.
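As a minimal illustration of one such dynamical structure, a linear rate network can be constructed with a line attractor: activity collapses onto a line, along which the network can hold a continuous value. All parameters below are illustrative, not drawn from any particular study.

```python
import numpy as np

# Linear rate network with a line attractor (illustrative sketch).
# Dynamics: dx/dt = -x + W x, with W = u u^T for a unit vector u, so the
# direction u is neutrally stable (eigenvalue 0) while every orthogonal
# direction decays (eigenvalue -1).
n = 20
rng = np.random.default_rng(2)
u = rng.normal(size=n)
u /= np.linalg.norm(u)
W = np.outer(u, u)

x = rng.normal(size=n)       # random initial condition
x0 = x.copy()
dt = 0.01
for _ in range(2000):        # forward-Euler integration to t = 20
    x = x + dt * (-x + W @ x)

# Activity has collapsed onto the line along u: the component along u
# persists (a stored value), while the orthogonal component decays away.
off_line = np.linalg.norm(x - (u @ x) * u)
on_line_change = abs(u @ x - u @ x0)
print(off_line, on_line_change)
```

Applying dimensionality reduction to trajectories like these (from many initial conditions) would recover the attractor as a dominant low-dimensional axis, which is the kind of signature that can be sought in recorded population activity.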

Functionally-defined neuronal subspaces

Recent studies have investigated how distinct types of information encoded by the same neuronal population can be parsed by downstream brain circuits [58, 59, 60]. An enticing proposal is that different types of information are encoded in different subspaces within the population activity space, where the subspaces are identified using dimensionality reduction. For example, Kaufman et al. [18] asked how it is possible for neurons in the motor cortex to be active during motor preparation, yet not drive movement during that period.

Population-wide neuronal variability

The previous sections focus largely on neuronal activity that is averaged across trials and on firing rate-based network models. This naturally obscures the trial-to-trial variability that is a fundamental feature of neuronal responses across the cortex [68], both at the level of single neuron responses [69] as well as variability shared by the population [11,70]. Theoretical and experimental studies have focused on how the structure of that variability places limits on information coding [71, …].
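One common way to separate shared from private trial-to-trial variability is factor analysis. Below is a minimal sketch on synthetic spike counts, generated as a one-dimensional shared fluctuation (e.g., a slow gain signal) plus independent per-neuron noise; all parameters are illustrative.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)

# Synthetic trial-to-trial variability: one shared fluctuation common
# to all neurons, plus independent private noise for each neuron.
n_trials, n_neurons = 500, 30
shared = rng.normal(size=(n_trials, 1))
loading = np.full((1, n_neurons), 2.0)
counts = shared @ loading + rng.normal(size=(n_trials, n_neurons))

# Factor analysis partitions each neuron's variance into a shared part
# (captured by latent factors) and a private part (per-neuron noise).
fa = FactorAnalysis(n_components=1).fit(counts)
shared_var = (fa.components_ ** 2).sum(axis=0)   # per-neuron shared variance
private_var = fa.noise_variance_                 # per-neuron private variance

# Fraction of each neuron's variability shared across the population.
frac_shared = shared_var / (shared_var + private_var)
print(frac_shared.mean())
```

The same decomposition can be applied to spiking network models, so the amount and structure of shared variability become directly comparable between model and data.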

Conclusion

Dimensionality reduction has enabled incisive comparisons between biological and model networks in terms of population activity time courses, functionally-defined neuronal subspaces, and population-wide neuronal variability. Such comparisons result in either (i) a correspondence between the neuronal recordings and the network model, in which case the model can be dissected to understand underlying network mechanisms, or (ii) discrepancies between the neuronal recordings and standard network models, which in turn guide the development of improved models.

Conflicts of interest statement

Nothing declared.

References and recommended reading

Papers of particular interest, published within the period of review, have been highlighted as:

  • of special interest

  •• of outstanding interest

Acknowledgements

This work was supported by a Richard King Mellon Foundation Presidential Fellowship in the Life Sciences (RCW), NIH R01 EB026953 (BD, MAS, BMY), the Hillman Foundation (BD, MAS), NSF NCS BCS 1734901 and 1734916 (MAS, BMY), NIH CRCNS R01 MH118929 (MAS, BMY), NIH CRCNS R01 DC015139 (BD), ONR N00014-18-1-2002 (BD), Simons Foundation 325293 and 542967 (BD), NIH R01 EY022928 (MAS), NIH P30 EY008098 (MAS), Research to Prevent Blindness (MAS), the Eye and Ear Foundation of Pittsburgh (MAS), NSF NCS BCS 1533672

References (91)

  • M. Pagan et al., Quantifying the signals contained in heterogeneous neural responses and determining their relationships with task performance, Journal of Neurophysiology (2014)

  • J.A. Hennig et al., Constraints on neural redundancy, eLife (2018)

  • S. Druckmann et al., Neuronal circuits underlying persistent representations despite time varying activity, Current Biology (2012)

  • M.N. Shadlen et al., The variable discharge of cortical neurons: implications for connectivity, computation, and information coding, Journal of Neuroscience (1998)

  • L.F. Abbott et al., The effect of correlated variability on the accuracy of a population code, Neural Computation (1999)

  • M.R. Cohen et al., Attention improves performance primarily by reducing interneuronal correlations, Nature Neuroscience (2009)

  • M.R. Cohen et al., A neuronal population measure of attention predicts behavioral performance on individual trials, Journal of Neuroscience (2010)

  • M.T. Kaufman et al., Vacillation, indecision and hesitation in moment-by-moment decoding of monkey motor cortex, eLife (2015)

  • R. Rosenbaum et al., The spatial structure of correlated neuronal variability, Nature Neuroscience (2017)

  • M. Jazayeri et al., Navigating the neural space in search of the neural code, Neuron (2017)

  • I.H. Stevenson et al., How advances in neural recording affect data analysis, Nature Neuroscience (2011)

  • R. Brette et al., Simulation of networks of spiking neurons: a review of tools and strategies, Journal of Computational Neuroscience (2007)

  • D.L.K. Yamins et al., Using goal-driven deep learning models to understand sensory cortex, Nature Neuroscience (2016)

  • W.C.A. Lee et al., Anatomy and function of an excitatory network in the visual cortex, Nature (2016)

  • A. Roxin et al., On the distribution of firing rates in networks of cortical neurons, Journal of Neuroscience (2011)

  • L. Chariker et al., Orientation selectivity from very sparse LGN inputs in a comprehensive model of macaque V1 cortex, Journal of Neuroscience (2016)

  • A. Litwin-Kumar et al., Slow dynamics and high variability in balanced cortical networks with clustered connections, Nature Neuroscience (2012)

  • J. Trousdale et al., Impact of network structure and cellular response on spike time correlations, PLoS Computational Biology (2012)

  • C. Stringer et al., Inhibitory control of correlated intrinsic variability in cortical networks, eLife (2016)

  • B. Doiron et al., The mechanics of state-dependent neural correlations, Nature Neuroscience (2016)

  • D. Sussillo et al., A neural network that finds a naturalistic solution for the production of muscle activity, Nature Neuroscience (2015)

  • K. Rajan et al., Recurrent network models of sequence generation and memory, Neuron (2016)

  • C.D. Harvey et al., Choice-specific sequences in parietal cortex during a virtual-navigation decision task, Nature (2012)

  • V. Mante et al., Context-dependent computation by recurrent dynamics in prefrontal cortex, Nature (2013)

  • M.T. Kaufman et al., Cortical activity in the null space: permitting preparation without movement, Nature Neuroscience (2014)

  • P.T. Sadtler et al., Neural constraints on learning, Nature (2014)

  • J.D. Murray et al., Stable population coding for working memory coexists with heterogeneous neural dynamics in prefrontal cortex, Proceedings of the National Academy of Sciences (2017)

  • E.D. Remington et al., A dynamical systems perspective on flexible motor timing, Trends in Cognitive Sciences (2018)

  • D.A. Ruff et al., Cognition as a window into neuronal population space, Annual Review of Neuroscience (2018)

  • M.G. Perich et al., A neural population mechanism for rapid learning, Neuron (2018)

  • J.P. Cunningham et al., Dimensionality reduction for large-scale neural recordings, Nature Neuroscience (2014)

  • P. Gao et al., On simplicity and complexity in the brave new world of large-scale neuroscience, Current Opinion in Neurobiology (2015)

  • J.A. Gallego et al., Neural manifolds for the control of movement, Neuron (2017)

  • B.R. Cowley et al., Stimulus-driven population activity patterns in macaque primary visual cortex, PLoS Computational Biology (2016)

  • R.C. Williamson et al., Scaling properties of dimensionality reduction for neural populations and network models, PLoS Computational Biology (2016)