Bridging large-scale neuronal recordings and large-scale network models using dimensionality reduction
Introduction
For decades, the fields of experimental neuroscience and neural network modeling proceeded largely in parallel. Whereas experimental neuroscience focused on understanding how the activities of individual neurons relate to sensory stimuli and behavior, the modeling community sought to understand theoretically how neural networks can give rise to brain function. In recent years, developments in neuronal recording technology have enabled the simultaneous recording of hundreds of neurons or more [1]. Concurrently, increases in computational power have enabled the simulation of large neural networks [2]. Together, these developments should enable experimental data to more stringently constrain network model design and network models to better predict neuronal activity for subsequent experiments [3,4].
A key question is how to relate large-scale neuronal recordings with large-scale network models. Network models typically do not attempt to replicate the precise anatomical connectivity of the biological network from which the neurons are recorded, since the underlying anatomical connectivity is usually unknown (although technological developments are making this possible [5]). In such settings, there is not a one-to-one correspondence of each recorded neuron with a model neuron. To date, comparisons between recordings and models have primarily relied on aggregate spike train statistics based on single neurons (e.g., distribution of firing rates [6], distribution of tuning preferences [7], and Fano factor [8]) and pairs of neurons (e.g., spike time [9] and spike count correlations [10,11]), as well as single-neuron activity time courses [12•,13]. To go beyond single-neuron and pairwise statistics, recent studies have examined the multi-dimensional structure of neuronal population activity to uncover important insights into mechanisms underlying neuronal computation (e.g., [14,15,16,17•,18,19,20••,21,22,23,24]). This has motivated the inquiry of whether network models reproduce such population activity structure, in addition to single-neuron and pairwise statistics, raising the bar on what constitutes an agreement between a network model and neuronal recordings [3].
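As a concrete illustration of such single-neuron and pairwise statistics, the following minimal sketch (not drawn from any of the cited studies; the arrays and variable names are stand-ins) computes per-neuron Fano factors and pairwise spike count correlations from trials-by-neurons spike count matrices, one for recorded activity and one for model activity.

```python
# Minimal sketch: comparing a recorded and a simulated population using
# single-neuron (Fano factor) and pairwise (spike count correlation) statistics.
# Spike counts are assumed to be arranged as (trials x neurons) arrays; the
# Poisson arrays below are illustrative stand-ins, not real data.
import numpy as np

def fano_factors(counts):
    """Per-neuron Fano factor: spike count variance divided by its mean."""
    return counts.var(axis=0, ddof=1) / counts.mean(axis=0)

def spike_count_correlations(counts):
    """Pairwise spike count correlations (upper triangle, excluding diagonal)."""
    corr = np.corrcoef(counts, rowvar=False)           # neurons x neurons
    iu = np.triu_indices_from(corr, k=1)
    return corr[iu]

rng = np.random.default_rng(0)
recorded_counts = rng.poisson(5.0, size=(200, 50))     # stand-in for recorded data
model_counts = rng.poisson(5.0, size=(200, 50))        # stand-in for model output

for label, counts in [("recorded", recorded_counts), ("model", model_counts)]:
    print(label,
          "mean Fano factor:", fano_factors(counts).mean().round(3),
          "mean spike count correlation:", spike_count_correlations(counts).mean().round(3))
```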
Population activity structure can be characterized using dimensionality reduction [25, 26, 27], which provides a concise summary (i.e., a low-dimensional representation) of how a population of neurons covaries and how their activities unfold over time. Several dimensionality reduction methods have been applied to neuronal population activity, including principal component analysis (e.g., [14,15,20••,28,29]), demixed principal component analysis [30], factor analysis [16,19,31••], Gaussian-process factor analysis [32], latent factor analysis via dynamical systems [33], tensor component analysis [34], and more (see [25] for a review). The low-dimensional representation describes a neuronal process being carried out by the larger circuit from which the neurons were recorded [32,35]. The same dimensionality reduction method can be applied to the recorded activity and to the network model activity, resulting in population activity structures that can be directly compared (Figure 1). This benefit is also true of related methods for comparing neuronal recordings and network models involving neuronal decoding, population response similarity, and predicting the activity of one neuron from a population of other neurons [3].
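To make this style of comparison concrete, here is a minimal sketch assuming activity arranged as samples x neurons, using factor analysis (one of the methods listed above) and principal angles as one possible way to compare the resulting low-dimensional structure; the data, shapes, and comparison metric are illustrative choices rather than those of any particular study.

```python
# Hedged sketch of the comparison illustrated in Figure 1: the same
# dimensionality reduction method (here scikit-learn's FactorAnalysis, as one
# concrete choice) is fit separately to recorded and simulated population
# activity, and the resulting low-dimensional structure is compared via
# principal angles. All arrays below are random stand-ins.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.linalg import subspace_angles

def loading_subspace(activity, n_latents=5):
    """Fit factor analysis to (samples x neurons) activity and return an
    orthonormal basis for the space spanned by the loading matrix."""
    fa = FactorAnalysis(n_components=n_latents).fit(activity)
    q, _ = np.linalg.qr(fa.components_.T)              # neurons x n_latents
    return q

rng = np.random.default_rng(1)
recorded = rng.normal(size=(1000, 60))                  # stand-in: samples x neurons
simulated = rng.normal(size=(1000, 60))                 # stand-in: model activity

basis_rec = loading_subspace(recorded)
basis_sim = loading_subspace(simulated)

# Principal angles near 0 degrees would indicate that the model captures the
# same dominant population activity patterns as the recordings.
angles_deg = np.degrees(subspace_angles(basis_rec, basis_sim))
print("principal angles (deg):", angles_deg.round(1))
```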
Recent studies have adopted dimensionality reduction to relate neuronal recordings and network models in the study of working memory, decision making, motor control, and more. Although many studies have separately employed large-scale neuronal recordings, large-scale network models, and dimensionality reduction, this review focuses on studies that incorporate all three components. Below we describe these studies, organized by the aspect of population activity structure used to relate neuronal recordings and network models: population activity time courses, functionally-defined neuronal subspaces, and population-wide neuronal variability. These categories were chosen first because they represent the key ways in which dimensionality reduction has been used in the literature to relate population recordings and network models. More importantly, they represent fundamental aspects of population activity structure: how it unfolds over time, how different types of information can be encoded in different subspaces, and how it varies from trial to trial.
Population activity time courses
Dynamical structures, such as point attractors, line attractors, and limit cycles, arising from network models have long been hypothesized to underlie the computational ability of biological networks of neurons [36, 37, 38]. Such dynamical structures have been implicated in decision making [39,40], memory [41, 42, 43], oculomotor integration [44,45], motor control [46], olfaction [47], and more. A fundamental question in systems neuroscience is whether these dynamical structures are actually present in recorded neuronal population activity.
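A minimal sketch of how such population activity time courses can be extracted, assuming a generic rate-based recurrent network and PCA; the network, its parameters, and the choice of PCA are illustrative assumptions rather than the models or methods of the studies cited above.

```python
# Minimal sketch, not taken from any cited study: simulate a small rate-based
# recurrent network, then use PCA to extract low-dimensional population
# activity time courses that could be compared against trajectories extracted
# the same way from trial-averaged recordings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_neurons, n_steps, dt, tau = 100, 500, 1e-3, 20e-3

# Random recurrent connectivity (gain chosen only for illustration).
W = 1.5 * rng.normal(size=(n_neurons, n_neurons)) / np.sqrt(n_neurons)

x = rng.normal(size=n_neurons)
rates = np.empty((n_steps, n_neurons))
for t in range(n_steps):
    r = np.tanh(x)                                  # firing rates
    x = x + dt / tau * (-x + W @ r)                 # rate-network dynamics
    rates[t] = r

# Project the simulated activity onto its top principal components to obtain
# population activity time courses; the same projection would be applied to
# recorded activity before comparing trajectories.
trajectories = PCA(n_components=3).fit_transform(rates)   # time x 3
print("trajectory shape:", trajectories.shape)
```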
Functionally-defined neuronal subspaces
Recent studies have investigated how distinct types of information encoded by the same neuronal population can be parsed by downstream brain circuits [58, 59, 60]. An enticing proposal is that different types of information are encoded in different subspaces within the population activity space, where the subspaces are identified using dimensionality reduction. For example, Kaufman et al. [18] asked how it is possible for neurons in the motor cortex to be active during motor preparation, yet not cause movement during that period.
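The following schematic sketch illustrates the subspace idea in the spirit of Kaufman et al. [18], assuming a hypothetical linear readout from population activity to downstream output; the matrices are randomly generated stand-ins rather than data or analysis code from that study.

```python
# Schematic sketch of output-null versus output-potent subspaces, assuming a
# hypothetical linear readout (output = C @ activity). Activity confined to the
# null space of C leaves the readout unchanged, so the population can be active
# without driving downstream output.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
n_neurons, n_outputs = 50, 3

C = rng.normal(size=(n_outputs, n_neurons))   # hypothetical readout matrix

# Output-potent subspace: row space of C. Output-null subspace: null space of C.
potent_basis = np.linalg.qr(C.T)[0]           # neurons x n_outputs
null_basis = null_space(C)                    # neurons x (n_neurons - n_outputs)

prep_activity = rng.normal(size=n_neurons)    # stand-in for preparatory activity

# Decompose the activity into its components within each subspace and check
# how strongly each component drives the readout.
potent_component = potent_basis @ (potent_basis.T @ prep_activity)
null_component = null_basis @ (null_basis.T @ prep_activity)
print("readout driven by potent component:", np.linalg.norm(C @ potent_component).round(3))
print("readout driven by null component:  ", np.linalg.norm(C @ null_component).round(12))
```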
Population-wide neuronal variability
The previous sections focus largely on neuronal activity averaged across trials and on firing rate-based network models. Such averaging obscures the trial-to-trial variability that is a fundamental feature of neuronal responses across the cortex [68], both at the level of single-neuron responses [69] and at the level of variability shared across the population [11,70]. Theoretical and experimental studies have focused on how the structure of that variability places limits on information coding [71].
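As one concrete (and simplified) way to summarize population-wide variability, the sketch below fits factor analysis to synthetic trial-by-trial spike counts and reports the percentage of each neuron's variance that is shared with the rest of the population; the generative assumptions and parameter choices are ours, for illustration only.

```python
# Hedged sketch of one common way to summarize shared (population-wide)
# variability: fit factor analysis to trial-to-trial spike counts and report
# the percent of each neuron's variance that is shared with the population.
# The synthetic data below stand in for repeated-trial recordings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(4)
n_trials, n_neurons, n_latents = 400, 40, 3

# Synthetic spike counts: a low-dimensional shared component plus independent
# (private) variability for each neuron.
latents = rng.normal(size=(n_trials, n_latents))
loadings = rng.normal(size=(n_latents, n_neurons))
counts = 10 + latents @ loadings + rng.normal(scale=1.0, size=(n_trials, n_neurons))

fa = FactorAnalysis(n_components=n_latents).fit(counts)
shared_var = np.sum(fa.components_ ** 2, axis=0)       # per-neuron shared variance
private_var = fa.noise_variance_                       # per-neuron private variance
percent_shared = 100 * shared_var / (shared_var + private_var)
print("mean percent shared variance: %.1f%%" % percent_shared.mean())
```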
Conclusion
Dimensionality reduction has enabled incisive comparisons between biological and model networks in terms of population activity time courses, functionally-defined neuronal subspaces, and population-wide neuronal variability. Such comparisons result in either (i) a correspondence between the neuronal recordings and the network model, in which case the model can be dissected to understand underlying network mechanisms, or (ii) discrepancies between the neuronal recordings and standard network models, in which case the discrepancies indicate how the models need to be refined to better capture the recorded population activity.
Conflicts of interest statement
Nothing declared.
References and recommended reading
Papers of particular interest, published within the period of review, have been highlighted as:
• of special interest
•• of outstanding interest
Acknowledgements
This work was supported by a Richard King Mellon Foundation Presidential Fellowship in the Life Sciences (RCW), NIH R01 EB026953 (BD, MAS, BMY), Hillman Foundation (BD, MAS), NSF NCS BCS 1734901 and 1734916 (MAS, BMY), NIH CRCNS R01 MH118929 (MAS, BMY), NIH CRCNS R01 DC015139 (BD), ONR N00014-18-1-2002 (BD), Simons Foundation 325293 and 542967 (BD), NIH R01 EY022928 (MAS), NIH P30 EY008098 (MAS), Research to Prevent Blindness (MAS), Eye and Ear Foundation of Pittsburgh (MAS), NSF NCS BCS 1533672
References (91)
Recurrent neural networks as versatile tools of neuroscience research. Current Opinion in Neurobiology (2017).
Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons. Neuron (2005).
Neural population dynamics during reaching. Nature (2012).
Cortical areas interact through a communication subspace. Neuron (2018).
Cortical population activity within a preserved neural manifold underlies multiple motor behaviors. Nature Communications (2018).
Demixed principal component analysis of neural population data. eLife (2016).
Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. Nature Neuroscience (2014).
How the brain keeps the eyes still. Proceedings of the National Academy of Sciences (1996).
Spatial gradients and multidimensional dynamics in a neural integrator circuit. Nature Neuroscience (2011).
Linking connectivity, dynamics, and computations in low-rank recurrent neural networks. Neuron (2018).