Elsevier

Neural Networks

Volume 20, Issue 3, April 2007, Pages 323-334
2007 Special Issue
Edge of chaos and prediction of computational performance for neural circuit models

https://doi.org/10.1016/j.neunet.2007.04.017

Abstract

We analyze in this article the significance of the edge of chaos for real-time computations in neural microcircuit models consisting of spiking neurons and dynamic synapses. We find that the edge of chaos predicts quite well those values of circuit parameters that yield maximal computational performance. But it obviously makes no prediction of computational performance for other parameter values. Therefore, we propose a new method for predicting the computational performance of neural microcircuit models. The new measure estimates directly the kernel property and the generalization capability of a neural microcircuit. We validate the proposed measure by comparing its predictions with direct evaluations of the computational performance of various neural microcircuit models. The proposed method also allows us to quantify differences in the computational performance and generalization capability of neural circuits in different dynamic regimes (UP- and DOWN-states) that have been demonstrated through intracellular recordings in vivo.

Introduction

What makes a neural microcircuit computationally powerful? Or more precisely, which measurable quantities could explain why one microcircuit C is better suited for a particular family of computational tasks than another microcircuit C′? Rather than constructing particular microcircuit models that carry out particular computations, we pursue in this article a different strategy, which is based on the assumption that the computational function of cortical microcircuits is not fully genetically encoded, but rather emerges through various forms of plasticity (“learning”) in response to the actual distribution of signals that the neural microcircuit receives from its environment. From this perspective the question about the computational function of cortical microcircuits C turns into the questions:

  • (a)

    What functions (i.e. maps from circuit inputs to circuit outputs) can particular neurons (“readout neurons”, see below) in conjunction with the circuit C learn to compute?

  • (b)

    How well can readout neurons in conjunction with the circuit C generalize a specific learned computational function to new inputs?

We propose in this article a conceptual framework and quantitative measures for the investigation of these two questions. In order to make this approach feasible, in spite of numerous unknowns regarding synaptic plasticity and the distribution of electrical and biochemical signals impinging on a cortical microcircuit, we make in the present first step of this approach the following simplifying assumptions:

  • 1.

    Particular neurons (“readout neurons”) learn via synaptic plasticity to extract specific information encoded in the spiking activity of neurons in the circuit.

  • 2.

    We assume that the cortical microcircuit itself is highly recurrent, but that the impact of feedback that a readout neuron might send back into this circuit can be neglected.1

  • 3.

    We assume that synaptic plasticity of readout neurons enables them to learn arbitrary linear transformations. More precisely, we assume that the input to such a readout neuron can be approximated by a term ∑_{i=1}^{n} w_i x_i(t), where n is the number of presynaptic neurons, x_i(t) results from the output spike train of the ith presynaptic neuron by filtering it according to the low-pass filtering property of the membrane of the readout neuron,2 and w_i is the efficacy of the synaptic connection. Thus w_i x_i(t) models the time course of the contribution of previous spikes from the ith presynaptic neuron to the membrane potential at the soma of this readout neuron. We will refer to the vector x(t) as the circuit state at time t. Note that the readout neurons do not have access to the analog state of the circuit neurons, but only to the filtered version of their output spike trains.
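The circuit state x(t) and the linear readout ∑ w_i x_i(t) described above can be sketched directly. The following is a minimal illustration, not the paper's exact model: the exponential low-pass kernel and the time constant `tau` are generic assumptions standing in for the membrane filtering of a readout neuron, and the function names are ours.

```python
import numpy as np

def filtered_state(spike_times, t, tau=0.03):
    """Circuit state x(t): each component is the output spike train of one
    presynaptic neuron, low-pass filtered with an exponential kernel of
    time constant tau (modelling the readout neuron's membrane).

    spike_times: list of arrays, one per presynaptic neuron, of spike times (s).
    """
    x = np.zeros(len(spike_times))
    for i, spikes in enumerate(spike_times):
        past = spikes[spikes <= t]          # only spikes up to time t contribute
        x[i] = np.sum(np.exp(-(t - past) / tau))
    return x

def readout(w, x):
    """Linear readout: weighted sum of the filtered spike trains."""
    return float(np.dot(w, x))
```

For example, a single spike at time 0 contributes exp(−t/tau) to its component of x(t), so the contribution decays smoothly between spikes, as described for w_i x_i(t) above.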

Under these unpleasant but apparently unavoidable simplifying assumptions we propose in Sections 4 (A measure for the kernel-quality) and 5 (A measure for the generalization capability) new quantitative criteria based on rigorous mathematical principles for evaluating a neural microcircuit C with regard to questions (a) and (b). We will compare in Sections 6 (Evaluating the influence of synaptic connectivity on computational performance) and 8 (Evaluating the computational performance of neural microcircuit models in UP- and DOWN-states) the predictions of these quantitative measures with the actual computational performance achieved by 102 different types of neural microcircuit models, for a fairly large number of different computational tasks. All microcircuit models that we consider are based on biological data for generic cortical microcircuits (as described in Section 2), but have different settings of their parameters. It should be noted that the models for neural circuits that are discussed in this article are subject to noise (in the form of randomly chosen initial values of membrane voltages, and in the form of biologically realistic models for background noise; see the precise definition in Section 2, and the exploration of several noise levels in Section 8). Hence the classical theory for computations in noise-free analog circuits (see, e.g., Siegelmann and Sontag (1994)) cannot be applied to these models. Rather, the more negative results for computations in analog circuits with noise (see, e.g., Maass and Orponen (1998), Maass and Sontag (1999)) apply to the circuit models that are investigated in this article.

For the sake of simplicity, we consider in this article only classification tasks, although other types of computations (e.g. online computations where the target output changes continuously) are at least of equal importance for neural systems. However, a theoretical analysis of the capability of neural circuits to approximate a given online computation that maps continuous input streams onto continuous output streams (see Maass, Natschläger, and Markram (2002) and, in more detail, Maass and Markram (2004)) has shown that the so-called separation property of circuit components is a necessary condition (and, in combination with a condition on the readout, also a sufficient one) for approximating any such computation with fading memory. Hence one can view the computational tasks considered in this article also as tests of the separation property of small generic circuits of neurons, and hence of their capability to serve as a rich reservoir of “basis filters” for online computing with continuous output streams in the context of that theory.

Several results of this article had previously been sketched in Maass, Legenstein, and Bertschinger (2005).

Section snippets

Models for generic cortical microcircuits

Our empirical studies were performed on a large variety of models for generic cortical microcircuits. All circuit models consisted of leaky integrate-and-fire neurons …
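A minimal sketch of a single leaky integrate-and-fire neuron of the kind such circuit models are built from (forward-Euler integration; all parameter values here are illustrative defaults, not the paper's fitted values):

```python
import numpy as np

def simulate_lif(I, dt=1e-4, tau_m=0.03, v_rest=0.0, v_thresh=0.015,
                 v_reset=0.0, R=1.0):
    """Leaky integrate-and-fire neuron driven by an input current trace I(t):
        tau_m * dv/dt = -(v - v_rest) + R * I(t),
    with a spike and reset to v_reset whenever v crosses v_thresh.
    Returns the spike times (s) and the membrane potential trace.
    """
    v = v_rest
    spikes, trace = [], []
    for k, i_k in enumerate(I):
        v += dt / tau_m * (-(v - v_rest) + R * i_k)  # Euler step
        if v >= v_thresh:
            spikes.append(k * dt)                    # record spike time
            v = v_reset                              # reset after spike
        trace.append(v)
    return np.array(spikes), np.array(trace)
```

With a constant suprathreshold current the neuron fires regularly; with a subthreshold current the membrane potential saturates below threshold and no spikes are emitted.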

The edge of chaos in neural microcircuit models

A recurrent neural circuit is a special case of a dynamical system. Related dynamical systems have been studied extensively in various contexts in physics, e.g. cellular automata (Langton, 1990, Packard, 1988), random Boolean networks (Kauffman, 1993), and Ising-spin models (networks of threshold elements) (Derrida, 1987). By changing some global parameters of the system, e.g. connectivity structure or the functional dependence of the output of an element on the output of other elements, one …
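A standard way to locate ordered and chaotic regimes in threshold networks of the kind cited above (Derrida, 1987) is to track how a small perturbation of the initial state spreads. The sketch below is our illustration, not the paper's circuit model: it flips one unit of the initial state and records the Hamming distance between the two trajectories.

```python
import numpy as np

def hamming_divergence(W, theta, x0, steps=20):
    """Evolve two trajectories of a binary threshold network,
        x(t+1) = (W @ x(t) > theta),
    whose initial states differ in a single unit, and record their Hamming
    distance at every step.  In the ordered regime the distance dies out;
    in the chaotic regime it grows; near the edge of chaos it lingers.
    """
    x = np.asarray(x0, dtype=int).copy()
    y = x.copy()
    y[0] ^= 1                                  # flip one unit of the initial state
    dists = []
    for _ in range(steps):
        x = (W @ x > theta).astype(int)
        y = (W @ y > theta).astype(int)
        dists.append(int(np.sum(x != y)))      # Hamming distance at this step
    return dists
```

Sweeping a global gain parameter of W (e.g. `W = g * rng.standard_normal((N, N)) / np.sqrt(N)`) moves such a network between the ordered and chaotic regimes.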

A measure for the kernel-quality

One expects from a powerful computational system that significantly different input streams cause significantly different internal states, and hence may lead to different outputs. Most real-world computational tasks require that a readout gives a desired output not just for 2, but for a fairly large number m of significantly different inputs. One could of course test whether a readout on a circuit C can separate each of the m(m−1)/2 pairs of such inputs. But even if the readout can do this, we do not …
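The linear-separation idea above admits a simple rank-based instantiation: collect the circuit states produced by the m inputs as columns of a matrix and compute its rank. The function name and this exact formulation are our sketch of the truncated section, under the assumption that the kernel measure is the rank of this state matrix.

```python
import numpy as np

def kernel_quality(states, tol=1e-10):
    """Rank of the n x m matrix whose columns are the circuit states
    x(t) produced by m different input streams.  A rank close to m means
    the circuit maps the m inputs onto linearly independent states, so a
    linear readout can assign them arbitrary target outputs."""
    M = np.column_stack(states)
    return int(np.linalg.matrix_rank(M, tol=tol))
```

Note that a single rank computation covers all m(m−1)/2 pairs at once: full rank implies every pair of states is linearly separable.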

A measure for the generalization capability

Obviously the preceding measure addresses only one component of the computational performance of a neural circuit C with linear readout. Another component is its capability to generalize a learned computational function to new inputs. Mathematical criteria for generalization capability are derived in Vapnik (1998) (see Ch. 4 of Cherkassky and Mulier (1998) for a compact account of results relevant for our arguments). According to this mathematical theory one can quantify the generalization …
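A rank-based estimate in the spirit of this VC-theory argument can be sketched as follows. The function name, and the use of jittered versions of the same underlying inputs as a stand-in for the input distribution, are our illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def generalization_rank(states, tol=1e-10):
    """Rank of the matrix whose columns are circuit states for many
    (e.g. noisy or jittered) inputs drawn from the relevant input
    distribution.  By the VC-theory argument, a LOW rank bounds the
    capacity of a linear readout on these states and thus indicates
    good generalization to new inputs from the same distribution."""
    return int(np.linalg.matrix_rank(np.column_stack(states), tol=tol))
```

Kernel-quality and this measure pull in opposite directions: a circuit that separates everything (high rank on all inputs) can also fit noise, while a circuit with low rank on inputs from the same distribution generalizes well but separates little.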

Evaluating the influence of synaptic connectivity on computational performance

We now test the predictive quality of the two proposed measures for the computational performance of a microcircuit with linear readout on spike patterns. One should keep in mind that the proposed measures do not attempt to test the computational capability of a circuit with linear readout for one particular computational task, but for any distribution on S_univ and for a very large (in general infinitely large) family of computational tasks that only have in common a particular bias regarding …

Predicting computational performance on the basis of circuit states with limited precision

In the earlier simulations, the readout unit was assumed to have access to the actual analog circuit states (which are given by the low-pass filtered output spikes of the circuit neurons). In a biological neural system, however, readout elements may have access only to circuit states of limited precision, since signals are corrupted by noise. Therefore, we repeated our analysis for the case where each circuit state is only given with some fixed finite precision. More precisely, the range of …

Evaluating the computational performance of neural microcircuit models in UP- and DOWN-states

Data from numerous intracellular recordings suggest that neural circuits in vivo switch between two different dynamic regimes that are commonly referred to as UP- and DOWN-states. UP-states are characterized by a bombardment with synaptic inputs from recurrent activity in the circuit, resulting in a membrane potential whose average value is significantly closer to the firing threshold, but which also has larger variance. Furthermore, synaptic bombardment in UP-states leads to an increase in membrane …

Discussion

We have proposed a new method for understanding why one neural microcircuit C is performing better than another neural microcircuit C′ (for some large family of computational tasks that only have to agree with regard to those features of the circuit input, e.g. rates or spike patterns, on which the target outputs may depend). More precisely, we have introduced two measures (a kernel-measure for the nonlinear processing capability, and a measure for the generalization capability) whose sum …

Acknowledgements

We would like to thank Nils Bertschinger, Alain Destexhe, Wulfram Gerstner, and Klaus Schuch for helpful discussions. Written under partial support by the Austrian Science Fund FWF, project # P15386; FACETS, project # FP6-015879, of the European Union; and the PASCAL Network of Excellence.

References (24)

  • Häusler, S., & Maass, W. (2007). A statistical analysis of information processing properties of lamina-specific...
  • Kantz, H., et al. (1997). Nonlinear time series analysis.