2007 Special Issue
Edge of chaos and prediction of computational performance for neural circuit models
Introduction
What makes a neural microcircuit computationally powerful? Or more precisely, which measurable quantities could explain why one microcircuit is better suited for a particular family of computational tasks than another microcircuit? Rather than constructing particular microcircuit models that carry out particular computations, we pursue in this article a different strategy, which is based on the assumption that the computational function of cortical microcircuits is not fully genetically encoded, but rather emerges through various forms of plasticity (“learning”) in response to the actual distribution of signals that the neural microcircuit receives from its environment. From this perspective the question about the computational function of cortical microcircuits turns into the questions:
- (a) What functions (i.e. maps from circuit inputs to circuit outputs) can particular neurons (“readout neurons”, see below) in conjunction with the circuit learn to compute?
- (b) How well can readout neurons in conjunction with the circuit generalize a specific learned computational function to new inputs?
We propose in this article a conceptual framework and quantitative measures for the investigation of these two questions. In order to make this approach feasible, in spite of numerous unknowns regarding synaptic plasticity and the distribution of electrical and biochemical signals impinging on a cortical microcircuit, we make in the present first step of this approach the following simplifying assumptions:
- 1. Particular neurons (“readout neurons”) learn via synaptic plasticity to extract specific information encoded in the spiking activity of neurons in the circuit.
- 2. We assume that the cortical microcircuit itself is highly recurrent, but that the impact of feedback that a readout neuron might send back into this circuit can be neglected.
- 3. We assume that synaptic plasticity of readout neurons enables them to learn arbitrary linear transformations. More precisely, we assume that the input to such a readout neuron can be approximated by a term $\sum_{i=1}^{n} w_i x_i(t)$, where $n$ is the number of presynaptic neurons, $x_i(t)$ results from the output spike train of the $i$th presynaptic neuron by filtering it according to the low-pass filtering property of the membrane of the readout neuron, and $w_i$ is the efficacy of the synaptic connection. Thus $x_i(t)$ models the time course of the contribution of previous spikes from the $i$th presynaptic neuron to the membrane potential at the soma of this readout neuron. We will refer to the vector $x(t) = \langle x_1(t), \ldots, x_n(t) \rangle$ as the circuit state at time $t$. Note that the readout neurons do not have access to the analog state of the circuit neurons, but only to the filtered versions of their output spike trains.
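As an illustration of assumption 3, the following minimal sketch low-pass filters binary spike trains into a circuit state $x(t)$ and applies a linear readout $\sum_i w_i x_i(t)$. All names and parameter values (`tau`, `dt`, the circuit size) are illustrative choices, not values taken from the article.

```python
import numpy as np

def circuit_state(spikes, tau=0.03, dt=0.001):
    """Exponentially filter binary spike trains.

    spikes: array of shape (n_neurons, n_steps), entries 0/1.
    Returns x of the same shape; x[i, t] models the contribution of
    previous spikes of neuron i to the readout's membrane potential.
    """
    n, T = spikes.shape
    x = np.zeros((n, T))
    decay = np.exp(-dt / tau)          # per-step decay of the membrane filter
    trace = np.zeros(n)
    for t in range(T):
        trace = trace * decay + spikes[:, t]
        x[:, t] = trace
    return x

def readout(x_t, w):
    """Linear readout: weighted sum of the circuit state at one time point."""
    return float(np.dot(w, x_t))

rng = np.random.default_rng(0)
spikes = (rng.random((5, 100)) < 0.05).astype(float)   # 5 toy neurons, 100 steps
x = circuit_state(spikes)
w = rng.normal(size=5)                                 # hypothetical learned weights
y = readout(x[:, -1], w)                               # readout value at the last step
```

Note that, in keeping with assumption 3, the readout sees only the filtered traces `x`, never the internal membrane potentials of the circuit neurons.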
Under these unpleasant but apparently unavoidable simplifying assumptions we propose in Section 4 (A measure for the kernel-quality) and Section 5 (A measure for the generalization capability) new quantitative criteria, based on rigorous mathematical principles, for evaluating a neural microcircuit with regard to questions (a) and (b). We will compare in Section 6 (Evaluating the influence of synaptic connectivity on computational performance) and Section 8 (Evaluating the computational performance of neural microcircuit models in UP- and DOWN-states) the predictions of these quantitative measures with the actual computational performance achieved by 102 different types of neural microcircuit models, for a fairly large number of different computational tasks. All microcircuit models that we consider are based on biological data for generic cortical microcircuits (as described in Section 2), but have different settings of their parameters. It should be noted that the models for neural circuits that are discussed in this article are subject to noise (in the form of randomly chosen initial values of membrane voltages, and in the form of biologically realistic models for background noise; see the precise definition in Section 2 and the exploration of several noise levels in Section 8). Hence the classical theory for computations in noise-free analog circuits (see, e.g., Siegelmann and Sontag (1994)) cannot be applied to these models. Rather, the more negative results for computations in analog circuits with noise (see, e.g., Maass and Orponen (1998), Maass and Sontag (1999)) apply to the circuit models that are investigated in this article.
For the sake of simplicity, we consider in this article only classification tasks, although other types of computations (e.g. online computations where the target output changes continuously) are at least of equal importance for neural systems. However, a theoretical analysis of the capability of neural circuits to approximate a given online computation with fading memory (i.e., one that maps continuous input streams onto continuous output streams), see Maass, Natschläger, and Markram (2002) and in more detail Maass and Markram (2004), has shown that the so-called separation property of circuit components is a necessary (and, in combination with a condition on the readout, also sufficient) condition for such an approximation. Hence one can view the computational tasks that are considered in this article also as tests of the separation property of small generic circuits of neurons, and hence of their capability to serve as a rich reservoir of “basis filters” in the context of that theory, i.e., as subcircuits for online computing with continuous output streams.
Several results of this article had previously been sketched in Maass, Legenstein, and Bertschinger (2005).
Models for generic cortical microcircuits
Our empirical studies were performed on a large variety of models for generic cortical microcircuits. All circuit models consisted of leaky integrate-and-fire neurons
The edge of chaos in neural microcircuit models
A recurrent neural circuit is a special case of a dynamical system. Related dynamical systems have been studied extensively in various contexts in physics, e.g. cellular automata (Langton, 1990, Packard, 1988), random Boolean networks (Kauffman, 1993), and Ising-spin models (networks of threshold elements) (Derrida, 1987). By changing some global parameters of the system, e.g. connectivity structure or the functional dependence of the output of an element on the output of other elements, one
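The kind of order/chaos transition discussed here can be probed numerically by damage spreading: perturb a few units of the initial state and track the normalized Hamming distance between the two trajectories. Below is a toy sketch for a random Boolean network in the style of Kauffman (1993); damage that dies out indicates the ordered regime, damage that spreads indicates the chaotic regime. Network size, in-degree `K`, number of flipped units, and run length are all illustrative choices, not values from this article.

```python
import numpy as np

def hamming_evolution(K, n=500, steps=20, n_flips=10, seed=0):
    """Normalized Hamming distance between a trajectory and a perturbed copy."""
    rng = np.random.default_rng(seed)
    inputs = rng.integers(0, n, size=(n, K))        # K random inputs per node
    tables = rng.integers(0, 2, size=(n, 2 ** K))   # a random Boolean function per node

    def step(s):
        idx = np.zeros(n, dtype=int)
        for k in range(K):                          # build the truth-table index
            idx = idx * 2 + s[inputs[:, k]]
        return tables[np.arange(n), idx]

    s1 = rng.integers(0, 2, size=n)
    s2 = s1.copy()
    s2[:n_flips] ^= 1                               # perturb a few units
    for _ in range(steps):
        s1, s2 = step(s1), step(s2)
    return np.mean(s1 != s2)

d_ordered = hamming_evolution(K=1)   # in-degree 1: damage typically dies out
d_chaotic = hamming_evolution(K=3)   # in-degree 3: damage typically spreads
```

Sweeping a global parameter such as the in-degree (or, in the circuit models of this article, connectivity parameters) and plotting this distance is one standard way to locate the transition region.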
A measure for the kernel-quality
One expects from a powerful computational system that significantly different input streams cause significantly different internal states, and hence may lead to different outputs. Most real-world computational tasks require that a readout gives a desired output not just for 2, but for a fairly large number of significantly different inputs. One could of course test whether a readout on a circuit can separate each of the pairs of such inputs. But even if the readout can do this, we do not
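The rank-based idea behind the kernel-quality measure (defined precisely in Section 4) can be sketched as follows: collect the circuit states $x_{u_i}(t_0)$ for $m$ different input streams $u_1, \ldots, u_m$ as the columns of an $n \times m$ matrix and compute its rank; a rank of $m$ means a linear readout can assign arbitrary target values to these $m$ inputs. In this toy version, random vectors stand in for states produced by an actual circuit simulation.

```python
import numpy as np

def kernel_quality(states):
    """Rank of the n x m matrix whose columns are circuit states for m inputs.

    A higher rank means more of the m inputs can be separated by a
    linear readout applied to these states.
    """
    return np.linalg.matrix_rank(states)

rng = np.random.default_rng(1)
n, m = 50, 20                          # n circuit neurons, m input streams
states = rng.normal(size=(n, m))       # stand-in for simulated circuit states
r = kernel_quality(states)             # random Gaussian columns: full rank m
```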
A measure for the generalization capability
Obviously the preceding measure addresses only one component of the computational performance of a neural circuit with linear readout. Another component is its capability to generalize a learned computational function to new inputs. Mathematical criteria for generalization capability are derived in Vapnik (1998) (see Ch. 4 of Cherkassky and Mulier (1998) for a compact account of results relevant for our arguments). According to this mathematical theory one can quantify the generalization
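As a rough illustration of the rank-based estimate (the precise VC-dimension-based definition is given in Section 5): one can evaluate the rank of the matrix of circuit states for many inputs drawn from the input distribution, including variants of the same underlying input. A low rank suggests the circuit maps such variants to similar states, which favors generalization. The toy "circuit" below, a fixed low-dimensional projection plus small noise, is purely hypothetical.

```python
import numpy as np

def generalization_rank(states, tol=1e-6):
    """Rank of the state matrix for s sampled inputs; lower suggests
    better generalization of a linear readout trained on these states."""
    return np.linalg.matrix_rank(states, tol=tol)

rng = np.random.default_rng(2)
n, s = 50, 200                       # n circuit neurons, s sampled inputs
base = rng.normal(size=(n, 4))       # toy circuit responds mainly along 4 directions
coeffs = rng.normal(size=(4, s))
states = base @ coeffs + 1e-9 * rng.normal(size=(n, s))   # tiny noise on each state
r = generalization_rank(states)      # low rank: variants land in a small subspace
```

Contrasting this rank with the kernel-quality rank of the preceding section makes the trade-off explicit: a powerful circuit should separate genuinely different inputs while collapsing variants of the same input.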
Evaluating the influence of synaptic connectivity on computational performance
We now test the predictive quality of the two proposed measures for the computational performance of a microcircuit with linear readout on spike patterns. One should keep in mind that the proposed measures do not attempt to test the computational capability of a circuit with linear readout for one particular computational task, but for any distribution on and for a very large (in general infinitely large) family of computational tasks that only have in common a particular bias regarding
Predicting computational performance on the basis of circuit states with limited precision
In the earlier simulations, the readout unit was assumed to have access to the actual analog circuit states (which are given by the low-pass filtered output spikes of the circuit neurons). In a biological neural system however, readout elements may have access only to circuit states of limited precision since signals are corrupted by noise. Therefore, we repeated our analysis for the case where each circuit state is only given with some fixed finite precision. More precisely, the range of
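The limited-precision condition can be sketched by quantizing each component of the circuit state to a fixed number of equally spaced levels before the readout sees it. The state range and the number of levels below are illustrative choices, not the values used in the article.

```python
import numpy as np

def quantize_state(x, levels=8, x_max=1.0):
    """Map each state component to one of `levels` equally spaced values
    in [0, x_max], modeling a readout with finite precision."""
    x = np.clip(x, 0.0, x_max)
    step = x_max / (levels - 1)
    return np.round(x / step) * step

x = np.array([0.03, 0.49, 0.51, 0.97, 1.4])
xq = quantize_state(x, levels=8)     # each entry snapped to a multiple of 1/7
```

Repeating the rank-based analyses on `xq` instead of `x` then shows how much separation capability survives at a given precision.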
Evaluating the computational performance of neural microcircuit models in UP- and DOWN-states
Data from numerous intracellular recordings suggest that neural circuits in vivo switch between two different dynamic regimes that are commonly referred to as UP- and DOWN-states. UP-states are characterized by a bombardment with synaptic inputs from recurrent activity in the circuit, resulting in a membrane potential whose average value is significantly closer to the firing threshold, but also has larger variance. Furthermore, synaptic bombardment in UP-states leads to an increase in membrane
Discussion
We have proposed a new method for understanding why one neural microcircuit is performing better than another neural microcircuit (for some large family of computational tasks that only have to agree with regard to those features of the circuit input, e.g. rates or spike patterns, on which the target outputs may depend). More precisely, we have introduced two measures (a kernel-measure for the nonlinear processing capability, and a measure for the generalization capability) whose sum
Acknowledgements
We would like to thank Nils Bertschinger, Alain Destexhe, Wulfram Gerstner, and Klaus Schuch for helpful discussions. Written under partial support by the Austrian Science Fund FWF, project # P15386; FACETS, project # FP6-015879, of the European Union; and the PASCAL Network of Excellence.
References (24)
- Computation at the edge of chaos. Physica D (1990).
- On the computational power of recurrent circuits of spiking neurons. Journal of Computer and System Sciences (2004).
- Fading memory and kernel properties of generic cortical microcircuit models. Journal of Physiology–Paris (2004).
- Analog computation via neural networks. Theoretical Computer Science (1994).
- Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex. Nature Neuroscience (2000).
- Vapnik–Chervonenkis dimension of neural nets.
- Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation (2004).
- Learning from data (1998).
- Dynamical phase transition in non-symmetric spin glasses. Journal of Physics A: Mathematical and General (1987).
- The high-conductance state of neocortical neurons in vivo. Nature Reviews Neuroscience (2003).
- Nonlinear time series analysis.