
Research Article | Theory/New Concepts, Novel Tools and Methods

Short-Term Synaptic Plasticity Makes Neurons Sensitive to the Distribution of Presynaptic Population Firing Rates

Luiz Tauffer and Arvind Kumar
eNeuro 12 February 2021, 8 (2) ENEURO.0297-20.2021; DOI: https://doi.org/10.1523/ENEURO.0297-20.2021
Luiz Tauffer
1Department of Computational Science and Technology, School of Computer Science and Communication, KTH Royal Institute of Technology, 11428 Stockholm, Sweden
2Bernstein Center Freiburg, University of Freiburg, 79104 Freiburg im Breisgau, Germany
Arvind Kumar
1Department of Computational Science and Technology, School of Computer Science and Communication, KTH Royal Institute of Technology, 11428 Stockholm, Sweden
2Bernstein Center Freiburg, University of Freiburg, 79104 Freiburg im Breisgau, Germany


Abstract

The ability to discriminate spikes that encode a particular stimulus from spikes produced by background activity is essential for reliable information processing in the brain. We describe how synaptic short-term plasticity (STP) modulates the output of presynaptic populations as a function of the distribution of the spiking activity and find a strong relationship between STP features and the sparseness of the population code, which could solve this problem. Furthermore, we show that feedforward excitation followed by inhibition (FF-EI), combined with target-dependent STP, promotes a substantial increase in the signal gain even for considerable deviations from the optimal conditions, granting robustness to this mechanism. A simulated neuron driven by a spiking FF-EI network is reliably modulated as predicted by a rate analysis and inherits the ability to differentiate sparse signals from dense background activity changes of the same magnitude, even at very low signal-to-noise conditions. We propose that STP-based distribution discrimination is likely a latent function in several regions such as the cerebellum and the hippocampus.

  • excitation/inhibition balance
  • neural code
  • short-term plasticity
  • sparse code
  • synaptic depression
  • synaptic facilitation

Significance Statement

What is the optimal way to distribute a fixed number of spikes over a set of neurons so that we get a maximal response in the downstream neuron? This question is at the core of neural coding. Here, we show that when synapses show short-term facilitation, a sparse code (in which a few neurons increase their firing rate in a task-dependent manner) is more effective than a dense code (in which many neurons increase their firing rate in a task-dependent manner). By contrast, when synapses show short-term depression, a dense code is more effective than a sparse code. Thus, for the first time, we show that the dynamics of synapses itself plays a role in determining the most effective neural code.

Introduction

The brain is a highly noisy system. At the cellular level, neurons are unreliable in eliciting spikes and synapses are unreliable in transmitting the spikes to the postsynaptic neurons. At the network level, the connectivity and the balance of excitation and inhibition give rise to fluctuations in the background activity (Brunel, 2000; Kumar et al., 2008), which can be as large as the mean stimulus response (Arieli et al., 1996; Kenet et al., 2003). In such a noisy environment, a neuron faces a crucial task: how to discriminate stimulus-induced firing rate changes from fluctuations of the same magnitude in the background activity?

If synapses were static, that is, if the postsynaptic conductances (PSCs) did not depend on the immediate spike history, this task could not be accomplished unless synapses were specifically tuned to do so. For instance, the identification of specific spiking patterns, filtering out presumed noise sequences, can be accomplished by precise tuning of synaptic weights (Gütig and Sompolinsky, 2006). This solution, however, relies on training synaptic weights with a supervised learning rule, and even then, it works only for a specific set of spike timing sequences. Active dendrites (with voltage-dependent ionic conductances) can also work as pattern detectors (Hawkins and Ahmad, 2016), but this mechanism works only for signals constrained to locally clustered synapses. Therefore, despite their relevance for the understanding of signal processing in the brain, the mechanisms by which neural ensembles solve the activity discrimination problem have remained elusive.

Here, we show that short-term plasticity (STP) of synapses provides an effective and general mechanism to solve the aforementioned task. STP refers to the observation that synaptic strength changes on a spike-by-spike basis, depending on the timing of previous spikes (Stevens and Wang, 1995; Zucker and Regehr, 2002); that is, STP arises because neurotransmitter release dynamics is history dependent and can manifest as either short-term facilitation (STF) or short-term depression (STD). Thus, STP becomes a crucial part of the neural hardware when information is encoded as firing rate. Indeed, STP has been suggested to play several important roles in neural information processing (Buonomano, 2000; Fuhrmann et al., 2002; Izhikevich et al., 2003; Abbott and Regehr, 2004; Middleton et al., 2011; Rotman et al., 2011; Scott et al., 2012; Rotman and Klyachko, 2013; Jackman and Regehr, 2017; Grangeray-Vilmint et al., 2018; Naud and Sprekeler, 2018).

An immediate consequence of STP is that the effective PSCs depend on the firing rates of individual presynaptic neurons (Fig. 1). This suggests that postsynaptic targets of populations with dynamic synapses could distinguish among different input firing rate distributions even without supervised learning. To demonstrate this feature of STP, we measured the response of postsynaptic neurons for a weak stimulus with amplitude one order of magnitude smaller than the background activity. By systematically changing the distribution of firing rates over the presynaptic neuron ensemble, we found that weak signals can be differentiated from the noisy fluctuations if the signal is appropriately distributed over the input ensemble. The optimal distribution that maximizes the discriminability depends on the nature of STP. We found that, for facilitatory synapses, sparse codes give better discrimination between a weak signal and dense background changes of the same intensity. By contrast, for depressing synapses, sparse codes result in highly negative gains in relation to dense background changes of the same magnitude. We also investigated feedforward networks with excitation and disynaptic inhibition, with target-dependent STP, and found that this arrangement allows for extra robustness for the output gain.

Figure 1.

Distribution of the spiking activity over presynaptic neurons and STP. A, top, A neuron receives input from a presynaptic population, with only one of the neurons eliciting seven spikes. Bottom, The postsynaptic conductance (PSC) generated by the consecutive spikes for three different types of synapses (static, black; facilitatory, blue; depressing, red). The PSCs are different for each of these three types of synapses. B, top, Similar to panel A, a neuron receives inputs from a presynaptic population, but in this scenario the spikes are distributed among all presynaptic neurons. Bottom, The PSC generated by a sequence of seven consecutive spikes arriving at the same times as in panel A for the three different types of synapses (static, black; facilitatory, blue; depressing, red). The PSCs are identical for the three types of synapses (lines overlapped). C, Feedforward excitation followed by feedforward inhibition (FF-EI) configuration and two distributions of an extra spike rate Rext in addition to the basal firing rate rbas. Top, The extra rate is distributed into a few presynaptic units Next (gray), with each chosen unit increasing its rate by rext = Rext/Next. Bottom, The extra rate is distributed homogeneously throughout the population of N units, with each unit increasing its rate by rδ = Rext/N.

Finally, we demonstrate how STP can endow a postsynaptic neuron with the ability to differentiate sparsely encoded activity from dense activity of the same magnitude, a function that would be especially important at very low signal-to-noise regimes. Thus, our results reveal that the nature of STP may also constrain the nature of firing rate-based population code.

Materials and Methods

Model of STP

One parsimonious and yet powerful mathematical description of short-term synaptic dynamics was proposed more than 20 years ago (Tsodyks and Markram, 1997). The Tsodyks–Markram (TM) model first accounted for the activity-dependent synaptic depression observed in pairs of neocortical pyramidal neurons and was soon extended to cover facilitation (increase in probability) of vesicle release (Tsodyks et al., 1998). With a small set of parameters, the TM model is able to capture the opposing effects of the depletion of available synaptic vesicles and of the increase in release probability caused by accumulation of residual calcium in the presynaptic terminal, making it a suitable framework to study the general impact of STP on neural information processing.

Here, we use the TM model (Eq. 1) to describe the short-term synaptic dynamics. The effect of depression is modeled by depletion of the proportion of available resources, represented by the variable x (0 ≤ x ≤ 1), which instantaneously decreases after each spike and returns to 1 with recovery time constant τrec. The gain effect of short-term facilitation is modeled by the facilitation factor U (0 ≤ U ≤ 1), which accounts for the accumulation of calcium at the presynaptic terminal after the arrival of an action potential. U transiently increases the release probability u (0 ≤ u ≤ 1), which returns to 0 with time constant τf:

du/dt = −u/τf + U·(1 − u)·δ(t − tsp)
dx/dt = (1 − x)/τrec − u⁺·x·δ(t − tsp)   (1)

where tsp is the last spike time and u⁺ = u + U·(1 − u) is the release probability in effect at the moment of the spike.
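The event-driven form of Equation 1 can be sketched as follows: between spikes, u and x relax exponentially toward 0 and 1, and at each spike the facilitated release probability u⁺ and the released fraction u⁺x are computed. This is an illustrative sketch (the function name and the example time constants in the usage note are ours, not the paper's reference values):

```python
import math

def tm_update(u, x, dt_since_last_spike, U, tau_f, tau_rec):
    """One spike of the Tsodyks-Markram model (Eq. 1), event-driven.

    u, x: state just after the previous spike.
    Returns (prr, u_plus, x_after): the proportion of resources released
    by this spike, the facilitated release probability (to carry over as
    the next u), and the post-spike resource level.
    """
    # Inter-spike relaxation: u decays to 0 with tau_f, x recovers to 1 with tau_rec
    u = u * math.exp(-dt_since_last_spike / tau_f)
    x = 1.0 - (1.0 - x) * math.exp(-dt_since_last_spike / tau_rec)
    # Spike arrival: facilitation increments u, then a fraction u_plus * x is released
    u_plus = u + U * (1.0 - u)
    prr = u_plus * x
    x_after = x - prr
    return prr, u_plus, x_after
```

Iterating this over a regular spike train with, say, U = 0.1 and a long τf reproduces the growing PSCs of the facilitatory trace in Figure 1A, while U = 0.7 with a long τrec reproduces the shrinking PSCs of the depressing trace.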

Proportion of released resources (PRR)

The change in the PSC gs after a presynaptic spike is proportional to the instantaneous PRR (u⁺x⁻) and to the absolute synaptic strength Bs. The average instantaneous PRR of a presynaptic unit can also be described as a function of a time-dependent Poissonian firing rate r(t) (Tsodyks et al., 1998) as:

d⟨u⟩/dt = −⟨u⟩/τf + U·(1 − ⟨u⟩)·r(t)
d⟨x⟩/dt = (1 − ⟨x⟩)/τrec − ⟨u⁺⟩·⟨x⟩·r(t)
⟨PRR(t)⟩ = ⟨u⁺⟩·⟨x⟩, with ⟨u⁺⟩ = ⟨u⟩ + U·(1 − ⟨u⟩)   (2)

where the brackets denote the average over many realizations. The total PRR contribution of a single synapse, for a time window of duration Ts, can then be obtained by integrating Equation 2 over this period:

Qs = ∫Ts ⟨PRR(t)⟩·r(t) dt   (3)
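A direct numerical evaluation of Equations 2 and 3 at a constant rate can be sketched as below; this is an illustrative Euler scheme starting from the rest state u = 0, x = 1 (the function name, the step size, and the parameter values used in the usage note are ours):

```python
def mean_prr_release(r, T, U, tau_f, tau_rec, dt=1e-4):
    """Euler-integrate the mean-field TM equations (Eq. 2) at a constant
    Poisson rate r (Hz) for T seconds, accumulating Q_s (Eq. 3)."""
    u, x, q = 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        u_plus = u + U * (1.0 - u)          # facilitated release probability
        q += u_plus * x * r * dt            # Eq. 3: integral of <PRR> * r
        u += (-u / tau_f + U * (1.0 - u) * r) * dt
        x += ((1.0 - x) / tau_rec - u_plus * x * r) * dt
    return q
```

Sweeping r with depressing parameters (e.g., U = 0.7, short τf, long τrec) shows Qs saturating at much lower rates than with facilitatory parameters (e.g., U = 0.1, long τf, short τrec), the nonlinearity shown in Figure 2B.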

Total effective input to a postsynaptic neuron

For a homogeneous presynaptic population with the same STP parameters and individual basal firing rate rbas, the population basal activity is Rbas = N·rbas, where N is the population size. We quantify Rext as a multiple of Rbas. Our analysis is restricted to the case of low signal-to-noise ratio, i.e., Rext ≪ Rbas. We consider a simplified scenario where Rext is distributed homogeneously over a number Next of selected presynaptic units, which increase their firing rate by rext = Rext/Next, while the remaining presynaptic units keep their activity unchanged.

The total PRR released to a target neuron by the entire population, during Ts, will then be

Qpop = (N − Next)·Qbas + Next·Qext   (4)

where Qbas and Qext are the total PRR (Eq. 3) delivered by a stationary unit (firing at rbas) and by a stimulus-encoding unit (firing at rbas + rext), respectively.

Gain in the effective input

We are interested in the effects of varying the presynaptic distribution (over Next inputs) of this total extra rate (Rext) on the effective input to postsynaptic targets. To estimate the change in the gain because of STP, we used the maximally dense distribution, when Next = N, as the reference point:

ΔQδ = N·[Qs(rbas + rδ) − Qs(rbas)], with rδ = Rext/N   (5)

where the δ subscript denotes the smallest possible increase in individual firing rates, rδ (maximally distributed Rext). We refer to this as the dense distribution case, and it ideally represents a situation of homogeneous increase in the basal activity of the system, against which a stimulus would need to be distinguished. Next = N also implies the smallest increase in individual operating rates (rext = rδ); therefore, in the dense distribution case STP nonlinearities will be minimal. In other words, Next = N is the point where dynamic synapses will operate as close to static as possible.

We then quantify the gain in the extra population release ΔQpop = Qpop − N·Qbas = Next·[Qext − Qbas] for a given Next, always relative to the ΔQδ caused by an input of the same intensity but with dense distribution, as

G = ΔQpop/ΔQδ − 1   (6)

We calculate the curves of G as a function of Next for different sets of STP parameters and basal rates and search for points where it is maximized (Next = Nopt), which we call the optimal distribution (see example in Fig. 2D).
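The gain curve G(Next) of Equations 4–6 can be reproduced numerically. The sketch below reuses an Euler integration of the mean-field TM model (Eq. 2) and splits a fixed Rext over Next units; all names and parameter values are illustrative, since the paper's reference time constants are not restated in this excerpt:

```python
def total_release(r, T, U, tau_f, tau_rec, dt=1e-4):
    """Q_s (Eq. 3) via Euler integration of the mean-field TM model (Eq. 2)."""
    u, x, q = 0.0, 1.0, 0.0
    for _ in range(int(T / dt)):
        u_plus = u + U * (1.0 - u)
        q += u_plus * x * r * dt
        u += (-u / tau_f + U * (1.0 - u) * r) * dt
        x += ((1.0 - x) / tau_rec - u_plus * x * r) * dt
    return q

def distribution_gain(n_total, r_ext_pop, n_ext, r_bas, t_s, stp):
    """G (Eqs. 5-6): extra release when R_ext is split over n_ext units,
    relative to the dense case where all n_total units share it."""
    q_bas = total_release(r_bas, t_s, *stp)
    dq_pop = n_ext * (total_release(r_bas + r_ext_pop / n_ext, t_s, *stp) - q_bas)
    dq_dense = n_total * (total_release(r_bas + r_ext_pop / n_total, t_s, *stp) - q_bas)
    return dq_pop / dq_dense - 1.0
```

With facilitatory parameters, a sparse allocation (small n_ext, hence high per-unit rate) yields G > 0, while the same sparse allocation with depressing parameters yields G < 0, qualitatively as in Figure 2D.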

Figure 2.

A, Temporal profile of the PRR for a facilitatory synapse (top, U = 0.1) and a depressing synapse (bottom, U = 0.7), with rates transiently increased during a period Ts (the reference τrec and τf values of the two synapse types are used). B, The amount of resources released by a single synapse, Qs, obtained by integrating ⟨PRR(t)⟩ over Ts (area under the curve in A, Eq. 3). Qs for depressing synapses saturates at lower firing rates than for facilitatory synapses. Inset, The derivative of Qs highlights the nonlinearities in Qs, with depressing synapses showing monotonically decreasing slopes (decreasing release rate) and facilitatory synapses showing an initial region of increasing slopes (increasing release rate) with respect to rext. C, The extra PRR (ΔQpop) as a function of the number of presynaptic neurons (Next) whose firing rate increases, for two different values of Rext. Dashed lines mark the value achieved when Next = N, i.e., the dense distribution case (rext = rδ). A population of facilitatory synapses (left) maximizes its release with low Next, while a population of depressing synapses (right) maximizes its release with Next = N. D, The gain (G; Eq. 5) as a function of Next for a fixed Rext. The Next that maximizes G, for this particular extra rate, is Nopt = 64 for facilitatory synapses and Nopt = N for depressing synapses. Notice that if the extra rate is allocated in even fewer input units, G can be negative. E, G surface for a facilitatory synapse as a function of Rext and Next. The black line marks the maximum values of G, i.e., Nopt for each Rext. The gain curve of panel D is marked with a gray line for reference. F, The relationship between Nopt and Rext is linear. At maximum gain (Gmax), the firing rate of the event-related neurons is ropt, a constant for this specific example. G, Gmax for the two STP regimes shown in panel B. For a low signal-to-basal ratio (Rext ≪ Rbas), the gain can be considered independent of the stimulus intensity Rext.

Optimal distribution

The optimal distribution of the activity (OD) can be framed as the fraction of the optimal number of encoding units Nopt in a given population of size N, that is, OD = Nopt/N. Because the optimal code (rext = ropt) is the distribution that maximizes the gain over the dense distribution with the same input magnitude (rδ = Rext/N), OD can be written as

OD = Nopt/N = rδ/ropt   (7)

We define Rext as a fraction of Rbas to keep the same signal-to-noise ratio (Rext/Rbas) for populations of different sizes N. We find that ropt is fixed given the STP parameters and rbas (see Results); therefore, by defining rδ (= Rext/N) as a fraction of rbas (= Rbas/N) we reach the interesting consequence of OD being independent of any particular choice of population size (Eq. 7). That is, given the same STP parameters and value of rbas, populations of different sizes will optimally encode the same stimulus intensity (relative to their basal activity) with the same OD. Because the optimal encoding rate is constrained by ropt ≥ rδ, the optimal distribution will also be constrained to 0 < OD ≤ 1 (see Fig. 3D), with values close to zero or one characterizing sparse or dense distributions, respectively.

Figure 3.

Effects of STP attributes on the maximum gain of the neural population. Here, the STP parameters U, τrec, and τf were jointly varied over ranges spanning from more facilitatory to more depressing synapses. A, The optimal frequency ropt as a function of STP properties that gradually and monotonically change the synapse from facilitatory to depressing. ropt is high for facilitatory synapses, monotonically decreases as synapses change from facilitatory to depressing, and decreases for all synapse types as the basal rate is increased. The circles mark the parameter positions for the facilitatory (s1, blue) and depressing (s2, red) synapses used as reference in this work. B, Gmax as a function of STP properties that gradually and monotonically change the synapse from facilitatory to depressing. Similar to ropt, Gmax also decays rapidly for more depressing types and for higher basal rates. C, Relationship between Gmax and ropt. Notice the approximately linear relationship for facilitatory synapses, with the slope steadily decreasing with increasing rbas. This summarizes our prediction that higher basal firing rates will constrain the amplitudes of activity distribution-dependent gains and optimal encoding rates. D, Optimal distribution of presynaptic activity that maximizes the gain G. The change from sparse (OD ≪ 1) to dense (OD = 1) optimal distribution is abrupt and occurs approximately at the same STP region for different stimulus durations (Ts). However, the transition point where OD changes from sparse to dense code is strongly modulated by rbas; higher basal rates allow for a sparse code only for much more facilitatory synapses. E, Optimal distribution of rate as a function of the three key model parameters (U, τrec, and τf). The variable U is the most influential in defining the optimal encoding distribution, largely setting the OD transition point across the tested values of τrec and τf. Marker sizes represent OD values, with large ones for OD = 1 and small ones for OD ≪ 1. Effects of STP attributes on maximum gain of the neural population can also be seen in Extended Data Figure 3-1.

Extended Data Figure 3-1

Effects of STP attributes on the maximum gain of the neural population. A, B, Similar to Figure 3 but with longer integration time windows Ts. Both panels reproduce the main findings of Figure 3, showing that the distribution-dependent gain is high for facilitatory synapses and is strongly affected by the basal rate even for longer integration time windows. The relative importance of the recovery and facilitation time constants in defining the optimal distribution OD grows with larger Ts (large circles for OD = 1 and small circles for OD ≪ 1), but the facilitation factor U remains the most relevant attribute in defining the OD. Download Figure 3-1, EPS file.

Optimal rate (ropt) and maximum gain (Gmax) estimation

Equation 6 describes the gain G obtained by encoding a stimulus Rext into Next units (with rates increased by rext = Rext/Next) as opposed to N units (with rates increased by rδ = Rext/N). The peak of this function (Gmax) is achieved by an optimal number of encoding units Next = Nopt with their rate increased by ropt = Rext/Nopt. This maximum point can be found by taking the derivative of the gain function with respect to rext and setting it equal to zero,

∂G/∂rext = (1/ΔQδ)·∂(Next·ΔQext)/∂rext = 0   (8)

Given that Next = Rext/rext, this can be further simplified into

dΔQext/drext |ropt = ΔQext(ropt)/ropt   (9)

where ΔQext(rext) = Qs(rbas + rext) − Qs(rbas), and the value of ropt is the solution of Equation 9. This solution is independent of the stimulus intensity Rext and population size N (see results in Fig. 2F).

For the optimal rate ropt, the gain (Eq. 6) can be written as

Gmax = [Rext·ΔQext(ropt)/ropt] / ΔQδ − 1   (10)

Assuming that ΔQext is linear with slope Ss for small rδ, that is, ΔQext(rδ) ≈ Ss·rδ (see below, Linear approximation of Qs), then Gmax can be further simplified into

Gmax = ΔQext(ropt)/(Ss·ropt) − 1   (11)

which makes Gmax independent of the stimulus intensity Rext and population size N.
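The tangent condition of Equation 9 can be solved numerically. The sketch below uses the stationary mean-field release rate of the TM model (cf. Eq. 16) as a long-Ts approximation of ΔQext and grid-searches the rate maximizing the extra release per unit rate; all names and parameter values are illustrative:

```python
def stationary_release_rate(r, U, tau_f, tau_rec):
    """Stationary PRR rate u+ * <x> * r of the mean-field TM model."""
    u = U * tau_f * r / (1.0 + U * tau_f * r)
    u_plus = u + U * (1.0 - u)
    x = 1.0 / (1.0 + u_plus * tau_rec * r)
    return u_plus * x * r

def optimal_rate(r_bas, U, tau_f, tau_rec, r_max=100.0, dr=0.05):
    """Grid-search the r_ext maximizing extra release per unit rate,
    i.e. the tangent condition of Eq. 9 (long-Ts approximation)."""
    base = stationary_release_rate(r_bas, U, tau_f, tau_rec)
    best_gain, best_r = float("-inf"), None
    r = dr
    while r <= r_max:
        gain = (stationary_release_rate(r_bas + r, U, tau_f, tau_rec) - base) / r
        if gain > best_gain:
            best_gain, best_r = gain, r
        r += dr
    return best_r
```

For facilitatory parameters the optimum sits at a high individual rate (favoring a sparse code), whereas for depressing parameters the maximum is at the smallest rate probed (favoring a dense code), consistent with Figure 3A.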

Combined optimal rate (ropt,com) and maximum gain (Gmax,com) estimation

When an axon branches to connect to different targets, STP properties might be target dependent. In the case of excitatory fibers driving feedforward excitation-inhibition (FF-EI) motifs, with synapses of type 1 (s1) directly exciting a readout neuron and synapses of type 2 (s2) driving the local inhibitory circuit (Fig. 1C), the combined gain is given by

Gcom = (ΔQpop,s1 − ΔQpop,s2) / (ΔQδ,s1 − ΔQδ,s2) − 1   (12)

To find the activity distribution that maximizes the combined gain, we take the derivative of Gcom with respect to rext, set it equal to zero and, assuming again that ΔQext is linear for small rδ, with slopes Ss1 and Ss2 for the two synapse types, find the equivalence

d/drext [(ΔQext,s1(rext) − ΔQext,s2(rext))/rext] = 0   (13)

for which the solution, ropt,com, is independent of the stimulus intensity Rext and population size N. The optimal combined gain is then

Gmax,com = [ΔQext,s1(ropt,com) − ΔQext,s2(ropt,com)] / [(Ss1 − Ss2)·ropt,com] − 1   (14)

which is also independent of the stimulus intensity Rext and population size N.

Numerical simulations

As a proof of concept of the potential relevance that the estimated presynaptic gains could have on postsynaptic targets, we performed numerical simulations of a conductance-based integrate-and-fire (I&F) neuron model acting as the readout device for a FF-EI circuit (see Results, Sparse code identification by a postsynaptic neuron). The I&F model's membrane voltage Vm is described by

Cm·dVm/dt = gL·(VL − Vm) + ge·(Ve − Vm) + gi·(Vi − Vm)   (15)

where Cm = 250 pF is the membrane capacitance, gL and VL are the leak conductance and resting potential, ge and gi are, respectively, the excitatory and inhibitory input conductances, and Ve = 0 mV and Vi = −75 mV are the excitatory and inhibitory synaptic reversal potentials. When a spike occurs, the membrane voltage is reset to Vreset = −60 mV and held at this value for a refractory period of 2 ms. The synapses were modeled by α-functions (Kuhn et al., 2004) with time constants τe = 0.5 ms for excitatory and τi = 2 ms for inhibitory synapses.
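One Euler step of Equation 15 can be sketched as follows; the leak and threshold parameters here are illustrative placeholders, as the excerpt only specifies Cm, Ve, Vi, and Vreset:

```python
def lif_step(v, g_e, g_i, dt, c_m=250e-12, g_l=16.7e-9, v_l=-70e-3,
             v_e=0.0, v_i=-75e-3, v_th=-50e-3, v_reset=-60e-3):
    """One Euler step of a conductance-based I&F neuron (cf. Eq. 15).

    SI units throughout. g_l, v_l, and v_th are assumed illustrative
    values, not taken from the paper. Returns (new_voltage, spiked).
    """
    i_total = g_l * (v_l - v) + g_e * (v_e - v) + g_i * (v_i - v)
    v_new = v + dt * i_total / c_m
    if v_new >= v_th:                 # threshold crossing: reset
        return v_reset, True
    return v_new, False
```

With zero synaptic conductances the voltage relaxes to the leak reversal, and a sufficiently large steady excitatory conductance drives the equilibrium above threshold and makes the model spike; the refractory clamp of the full model is omitted here for brevity.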

The presynaptic population consisted of N = 160,000 units that connected to the I&F neuron in a FF-EI arrangement. The population stationary basal rate was Rbas = 80 kHz, with an individual basal rate rbas = 0.5 Hz. At the stationary basal rate, the synaptic states are described by the fixed points of Equation 2:

⟨u⟩∞ = U·τf·rbas / (1 + U·τf·rbas), u⁺∞ = ⟨u⟩∞ + U·(1 − ⟨u⟩∞), ⟨x⟩∞ = 1 / (1 + u⁺∞·τrec·rbas)   (16)

where αs = u⁺∞·⟨x⟩∞·rbas is the expected rate of PRR of each synapse with STP parameters {U, τrec, τf}.

We simulate a neuron that, during stationary basal activity, is kept in the fluctuation-driven regime through excitation-inhibition input balance (Kuhn et al., 2004). While excitation is provided directly by s1, disynaptic inhibition is modulated by s2 in a linear fashion,

Ri(t) = c·As2(t)   (17)

where As2(t) is the population PRR rate delivered through the s2 synapses and c is a linear scale factor.

The inhibitory firing rate that keeps the target neuron membrane potential fluctuating around its mean value ⟨Vm⟩ during stationary basal activity can be approximated by a linear function of the excitation (adapted from Kuhn et al., 2004):

Ri = Re·[Be·τe·(Ve − ⟨Vm⟩)] / [Bi·τi·(⟨Vm⟩ − Vi)]   (18)

where Be and Bi are the maximum amplitudes of the excitatory and inhibitory synaptic conductances. Equation 18 allows us to find the linear scale of Equation 17 that fulfills this balance condition. The inhibitory synapses are kept static (no STP). The extra presynaptic activity happens in blocks of duration Ts and is defined as sparse (when Next = Nopt) or dense (when Next = N).
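The balance condition of Equation 18 amounts to canceling the mean excitatory and inhibitory drives at the target mean potential; a minimal sketch (the function name and the example values in the test are ours, not the paper's):

```python
def balanced_inhibitory_rate(r_exc, b_e, tau_e, b_i, tau_i, v_mean,
                             v_e=0.0, v_i=-75e-3):
    """Inhibitory population rate that cancels the mean excitatory drive
    at the target potential v_mean (mean-input balance; cf. Eq. 18).

    r_exc: excitatory population rate (Hz); b_e, b_i: peak conductances;
    tau_e, tau_i: synaptic time constants. SI units throughout.
    """
    drive_e = r_exc * b_e * tau_e * (v_e - v_mean)       # mean excitatory drive
    return drive_e / (b_i * tau_i * (v_mean - v_i))      # rate canceling it
```

Because the condition is linear in the rates, doubling the excitatory input rate simply doubles the balancing inhibitory rate, which is what makes a single scale factor in Equation 17 sufficient.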

Continuous rate distribution

Although some bursting networks [e.g., cerebellar parallel fibers (PFs)] seem to operate in a quasi-binary fashion (burst or no burst), it is important to extend the analysis to continuous distributions, under which most parts of the brain seem to operate. We do this by assuming that the distribution of event-related neural firing rates follows a γ distribution, which gives us parameterized control of the sparseness of the neural code (with the mean of the distribution) and of the distribution shape (with the skewness and kurtosis):

f(rext; k, θ) = rext^(k−1)·e^(−rext/θ) / (Γ(k)·θ^k)   (19)

where k is the shape parameter and θ is the scale parameter. When k = 1, this is equivalent to an exponential distribution and, for increasing values of k, this becomes a right-skewed distribution, with the skewness approaching zero for higher values of k (becoming approximately Gaussian). For each shape parameter, we controlled the mean of the distribution by varying the scale parameter, because for a γ-distributed rext the expected value is

E[rext] = k·θ   (20)
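Sampling event-related rates from this γ distribution while holding the mean fixed (Eqs. 19–20) can be sketched with the standard-library generator; the function name and seeding are ours:

```python
import random

def gamma_rates(n, k, mean_rate, seed=0):
    """Draw n event-related extra rates from a gamma distribution with
    shape k and scale theta = mean_rate / k, so E[r_ext] = k * theta
    is held at mean_rate (Eqs. 19-20)."""
    theta = mean_rate / k              # scale chosen to fix the mean
    rng = random.Random(seed)
    return [rng.gammavariate(k, theta) for _ in range(n)]
```

Since the variance of a γ distribution is k·θ² = mean²/k, increasing k at fixed mean concentrates the rates around the mean (the approximately Gaussian limit), while k = 1 gives the heavy-tailed exponential case.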

For the γ-specified distribution of extra rates and a given presynaptic set of STP parameters, the expected amount of resources released by the population is

⟨Qpop⟩ = Next·∫0→∞ f(rext; k, θ)·Qs(rbas + rext) drext   (21)

which we solved numerically for the two synapse types (s1, facilitatory; s2, depressing) and a range of rate distributions. The distribution gain G for ⟨Qpop⟩ was then calculated in relation to the dense case, where Next = N and rext = rδ.

A glossary of key symbols used throughout this work is given in Table 1. All the analyses and simulations were performed in MATLAB and Python. The model simulations were performed using Euler's method with a time step of 0.1 ms, implemented in the neural simulator Brian2 (Stimberg et al., 2014). The simulation and analysis code is available on GitHub at https://github.com/luiztauffer/stp-activity-distribution.

Results

Here, we are interested in a mechanism by which a neuronal network or a single postsynaptic neuron receiving multiple inputs may distinguish between different spiking distributions of the same intensity (e.g., the same number of spikes). This problem is schematically illustrated in Figure 1. Consider two scenarios. In the first scenario, seven spikes arrive from a single presynaptic neuron while the other six neurons remain silent (Fig. 1A, sparse distribution). In the second scenario, each of the seven presynaptic neurons spikes once (Fig. 1B, dense distribution). In both scenarios, the postsynaptic neuron receives seven spikes. Here, we test the hypothesis that when synapses exhibit STP (facilitation or depression) the two scenarios can be differentiated without any specific tuning of synaptic weights.

Static synapses evoke exactly the same PSC sequence for both sparse and dense distributions (black lines), making them indistinguishable for a readout neuron. When synapses are dynamic, however, short-term facilitation (blue line) enhances the PSC amplitudes of the concentrated spike train compared with static synapses, whereas short-term depression (red line) results in a weaker response (compare Fig. 1A,B, bottom traces). If the incoming spikes are distributed across different synapses, the sequence of PSCs is identical for all types of synaptic dynamics (compare Fig. 1A,B, bottom traces).

In vivo neural coding is certainly more complex than the above example. However, this simple example suggests that, for a neuron receiving synaptic inputs via thousands of noisy synapses, STP could be a mechanism to differentiate an evoked signal from background activity fluctuations of the same amplitude, provided the former is encoded as a specific pattern that can exploit the STP properties of the synapses. In the following, we describe how well dynamic synapses could endow feedforward circuits with such activity distribution discrimination properties in low signal-to-noise regimes (Fig. 1C).

Optimal activity distribution with dynamic synapses

We implemented dynamic synapses with the rate-based TM model (Tsodyks et al., 1998; Eq. 2). In this model, the instantaneous PRR depends on the resource release probability (u⁺) and the proportion of available resources (x⁻), whose dynamics are governed by the choice of the STP model parameters {U, τrec, τf}. For a transient increase in firing rate, a facilitatory synapse produces an average profile of sustained PRR, while a depressing synapse produces an average profile of rapidly decaying PRR (Fig. 2A). Throughout this work, the two reference STP parameter sets are a facilitatory synapse (U = 0.1, with a long τf and a short τrec) and a depressing synapse (U = 0.7, with a short τf and a long τrec).

To quantify the effects that these different profiles have on the presynaptic output, for varying transient increases in firing rate (rext), we calculated the total amount of extra resources (ΔQext) a synapse releases over a time period Ts (Eq. 3; Fig. 2B). We found that ΔQext varied in a nonlinear fashion as a function of rext, with depressing dynamics approaching saturation at much lower rates than facilitatory dynamics. The slope of ΔQext (Fig. 2B, inset) for depressing synapses is monotonically decreasing, indicating that any increase in the firing rate of those synapses will produce a sublinear increase in ΔQext, whereas for facilitatory synapses the slope initially grows, indicating that increases in the firing rate of those synapses, up to some point, will produce a supralinear increase in ΔQext.

In the brain, neurons typically receive inputs from a large ensemble of presynaptic neurons. In the ongoing activity state, these neurons spike at a low basal firing rate (rbas), with a total synaptic output of N·Qbas. In the event-related activity state, the firing rate of a subset of presynaptic neurons (Next) is transiently increased and the total synaptic output (Eq. 4) changes accordingly. We distribute a fixed event-related population rate increase Rext into varied numbers of chosen synapses Next, each of these chosen synapses increasing its firing rate by rext = Rext/Next, and report the changes in the extra population release ΔQpop.

We found that, for a population of facilitatory synapses, ΔQpop varied in a non-monotonic fashion as a function of Next, initially increasing up to a peak point, then decreasing (Fig. 2C, left). By contrast, for depressing synapses (Fig. 2C, right), ΔQpop varied in a monotonically increasing fashion. For both facilitatory and depressing synapses, ΔQpop converged to its respective ΔQδ when the total extra input rate Rext was distributed over all the neurons such that Next = N and rext = rδ.

These results suggest that, when synapses are facilitatory, the total amount of synaptic resources released during an event-related activity state is maximized when event-related spiking activity is confined to a small number of synapses. ΔQpop was even smaller than ΔQδ when Rext was distributed into too small a subset of presynaptic neurons, because those chosen neurons spiked at very high rates and the synapses ran out of vesicle resources rapidly. When the event-related input was distributed over all the presynaptic neurons, ΔQpop also decreased because in such a scenario rext was too small to fully exploit the benefits of synaptic facilitation. In contrast to the facilitatory synapses, for depressing synapses it was more beneficial to distribute the event-related spiking activity over the whole input ensemble to maximize the total amount of synaptic resources released. In this condition, rext was small enough to avoid any losses in vesicle release caused by depression.

Activity distribution-dependent gain

To further quantify the effect of the distribution of event-related activity over the input ensemble (that is, how neurons increase their rate in the event-related phase), we defined the distribution gain G as the proportional change in Formula relative to Formula (Eqs. 5, 6). We found that Formula is approximately a linear function of rδ for a wide range of scenarios (see Materials and Methods). Consequently, with a dense distribution of the activity (when all the presynaptic neurons change their firing rate by a small amount rδ in the event-related activity state), even dynamic synapses behave approximately as static synapses, so G can be understood either as a gain over a dense distribution or as a gain over static synapses. For facilitatory synapses, just as for Formula , G follows a non-monotonic curve as a function of Next, with a single peak at Nopt (Fig. 2D, blue line). By contrast, depressing synapses yielded negative gains for every distribution, except for Next = N where G = 0% (Fig. 2D, red line).
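Using the same mean-field sketch of the TM model (again with illustrative parameters, not the paper's), the distribution gain can be computed as the percent change of the event-related release relative to the dense case Next = N:

```python
import numpy as np

def release_rate(r, U, tau_f, tau_rec):
    # mean-field steady-state release rate of a TM synapse for Poisson input
    u = U * (1 + r * tau_f) / (1 + U * r * tau_f)
    x = 1.0 / (1 + u * r * tau_rec)
    return r * u * x

def gain_curve(N, r_bas, R_ext, params):
    """Distribution gain G(N_ext) in percent, relative to the dense case."""
    Q = np.array([n * release_rate(r_bas + R_ext / n, *params)
                  + (N - n) * release_rate(r_bas, *params)
                  for n in range(1, N + 1)])
    return 100.0 * (Q - Q[-1]) / Q[-1]

N, r_bas, R_ext = 1000, 0.5, 25.0
G_fac = gain_curve(N, r_bas, R_ext, (0.1, 0.5, 0.05))   # facilitatory
G_dep = gain_curve(N, r_bas, R_ext, (0.7, 0.02, 0.5))   # depressing
```

Here G_fac has a single positive peak at a small Next, whereas G_dep is nonpositive everywhere and reaches 0 only at Next = N, matching the described behavior.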

Next, we estimated Nopt and G for a range of extra activity intensities (Fig. 2E, facilitatory synapses). For these calculations, we parameterized the extra activity Rext as a fraction of the basal firing rate Rbas (correspondingly, rδ as % of rbas; see Materials and Methods). We found that, for facilitatory synapses, Nopt increased linearly with the extra activity intensity (Fig. 2G), resulting in an optimal encoding rate ropt that is independent of the input intensity. For depressing synapses, the optimal distribution Nopt = N did not change with the extra activity intensity, so the optimal encoding rate was always ropt = rδ.
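The near-linear growth of Nopt with intensity (and hence a roughly constant ropt = Rext/Nopt) can be checked by brute force in the mean-field TM sketch; parameters are illustrative:

```python
import numpy as np

def release_rate(r, U=0.1, tau_f=0.5, tau_rec=0.05):
    # mean-field steady-state release rate of a facilitatory TM synapse
    u = U * (1 + r * tau_f) / (1 + U * r * tau_f)
    x = 1.0 / (1 + u * r * tau_rec)
    return r * u * x

N, r_bas = 1000, 0.5
R_bas = N * r_bas
n_opts, r_opts = [], []
for frac in (0.1, 0.2, 0.4):            # extra intensity as a fraction of R_bas
    R_ext = frac * R_bas
    Q = [n * release_rate(r_bas + R_ext / n) + (N - n) * release_rate(r_bas)
         for n in range(1, N + 1)]
    n_opt = int(np.argmax(Q)) + 1
    n_opts.append(n_opt)
    r_opts.append(R_ext / n_opt)        # optimal per-neuron encoding rate
```

As the intensity doubles and quadruples, n_opts grows while r_opts stays nearly constant (up to the coarseness of integer Nopt at small values).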

Because the presynaptic neurons are assumed to be Poisson processes, an advantage of parametrizing Rext as a fraction of Rbas is that this fraction directly translates into a signal-to-noise ratio. For the example shown in Figure 2G, we found that STP could amplify the presynaptic output for weak signals (<10% of the basal activity) by up to 60% if the extra rate was distributed over Nopt synapses instead of N synapses. For low signal-to-noise ratios (Formula ), the gain at the optimal distribution (Gmax) was approximately constant and always positive for facilitatory synapses, whereas depressing synapses kept Gmax = 0 at Nopt = N (Fig. 2G). Finally, we show analytically that the independence of ropt and Gmax from the extra activity intensity is a good approximation for a wide range of basal rates and STP types (see Materials and Methods).

These results suggest that when synapses are facilitatory, the input should be distributed sparsely (or sparse code, that is, only a small set of neurons change their firing rate in the event-related state) to maximize the total amount of synaptic resources released at the downstream neuron. By contrast, when synapses are depressing, the input should be distributed densely (or dense code, that is, all the neurons change their firing rate in the event-related state) to maximize the synaptic resources released at the downstream neuron. Thus, for sparse population activity, while facilitatory synapses are optimally used, depressing synapses are underutilized.

Effects of STP parameters on optimal rate and gain

Next, we investigated how Nopt, ropt, and Gmax vary with STP parameters. To this end, we systematically changed synapses from facilitatory to depressing by jointly varying the set of parameters: Formula ms and Formula ms. We found that ropt decayed exponentially as the synapses became more depressing (Fig. 3A). This follows from the fact that facilitatory synapses profit from high firing rates and depressing synapses avoid negative gains at lower rates.

The maximum gain Gmax also decreased exponentially as synapses were systematically changed from facilitatory to depressing (Fig. 3B). We found that the relationship between gain and optimal rate was linear from mildly to strongly facilitatory synapses (Fig. 3C), with larger basal rates constraining the optimal conditions to lower rates with lower gains.

Interestingly, increasing the basal firing rate rbas substantially reduced ropt and Gmax. This is surprising because, at such low spiking rates, STP effects are hardly perceptible in traditional paired-pulse ratio analyses. The high value of Gmax when the system operates at low rbas arises because the synapses take advantage of the nonlinearities in their individual Formula (Fig. 2B). Increased basal activity attenuates these nonlinearities, thereby impairing the distribution-dependent gain.

Relationship between facilitatory synapses and sparse coding

We quantified the optimal distribution of an evoked neural signal by OD (see Materials and Methods). High OD (Formula ) indicates a dense distribution in which many neurons spike to encode the extra activity, whereas low OD (Formula ) indicates a sparse distribution. We found that OD changed abruptly from sparse to dense as synapses were changed from facilitatory to depressing (Fig. 3D). Facilitatory synapses yielded their maximum response for sparse distributions, whereas depressing synapses yielded their maximum response (avoiding negative gains) for dense distributions. The transition point from sparse to dense OD did not depend on the stimulus duration. However, the basal rate strongly modified the transition point, with higher rbas allowing only strongly facilitatory synapses to take advantage of sparse distributions. This configuration remained independent of the stimulus intensity as long as the circuit operated at low signal-to-noise conditions (Formula ; Fig. 2G).

In the above, we changed the synapses from facilitatory to depressing by linearly modifying the whole set of parameters together. Next, we systematically varied each of the STP parameters independently and measured the OD for maximum gain. We found that the transition region was primarily governed by the facilitation factor U (Formula ), with a weak dependence on τrec and τf (Fig. 3E). The relative contribution of τrec and τf became more relevant at higher Ts (Extended Data Fig. 3-1).

These results clearly highlight the importance of the stationary basal rate for how well the synaptic gain modulation operates, as only low rbas allows for significant gains. Importantly, the switch-like behavior of the optimal distribution indicates that, for a given population code, there is a robust range of STP attributes that could produce positive gains. This transition point seems to be relatively independent of the signal duration but is strongly affected by rbas. Finally, having a low initial release probability (defined in the model by a low U) seems to be the preeminent feature in defining the optimal OD.

Equation 7 suggests that OD is independent of the population size (N). However, there is a lower limit of N below which the sparsity argument does not hold. We have shown that, given the STP parameters, there is an optimum firing rate ropt at which signal-carrying neurons should operate to maximize the gain (Fig. 2F). For a given rate, it is optimal to distribute spikes over Nopt input channels (Fig. 2F). However, when Formula , the optimal distribution will clearly not be sparse. The argument for sparseness arises when Formula . When Formula and we increase N while keeping all other parameters constant, OD will decrease. However, changing N with all other parameters constant also changes the signal-to-noise ratio, defined as Formula , where Formula : if we change N, Rbas will also change. To keep the signal-to-noise ratios comparable for low and high N scenarios, we need to scale Rext accordingly. Therefore, here, we defined Rext in proportion to Rbas so that it accommodates changes in N. With this choice, OD is indeed independent of N (see Eq. 7).

Effects of different sources of enhancement on Gmax

The enhancement of the output at facilitatory synapses could, in principle, have many causes (Valera et al., 2012; Thanawala and Regehr, 2013; Jackman and Regehr, 2017). Using the TM model (Eq. 1), we phenomenologically accounted for two important sources: a low initial release probability which sequentially increases with each incoming spike (Jackman et al., 2016) and fast replenishment of readily available resources (Crowley et al., 2007). The first characteristic is mimicked by a low facilitation factor U, which determines the initial release probability after a long quiescent period and the proportional increase in it after each spike. The second mechanism is captured by a fast recovery time constant τrec.
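In the TM model these two mechanisms appear as the spike-triggered jump governed by U and the resource recovery with time constant τrec. A minimal event-driven sketch (illustrative parameters) shows how a low U produces paired-pulse facilitation while a high U with slow recovery produces depression:

```python
import math

def tm_release_train(spike_times, U, tau_f, tau_rec):
    """Per-spike released fraction of resources for a TM synapse (event-driven)."""
    u, x, t_prev = 0.0, 1.0, None
    releases = []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            u *= math.exp(-dt / tau_f)                     # facilitation decays
            x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)  # resources recover
        u += U * (1.0 - u)   # spike-triggered increase of release probability
        rel = u * x          # released fraction of available resources
        x -= rel             # released resources leave the pool
        releases.append(rel)
        t_prev = t
    return releases

burst = [0.01 * i for i in range(5)]                   # 100 Hz, 5 spikes
fac = tm_release_train(burst, U=0.1, tau_f=0.5, tau_rec=0.05)
dep = tm_release_train(burst, U=0.7, tau_f=0.02, tau_rec=0.5)
```

For the low-U synapse the second release exceeds the first (paired-pulse facilitation); for the high-U, slow-recovery synapse it is smaller (paired-pulse depression).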

We systematically varied U and τrec and measured Gmax and ropt. We found that several different combinations of U and τrec resulted in the same optimal distribution gain and rate. However, when we changed U and τrec while keeping ropt fixed, Gmax could no longer be kept constant, and vice versa. For instance, the two parameter sets Formula and Formula gave Formula (Fig. 4A), but the first parameter set gave Formula and the second parameter set gave Gmax = 92% (Fig. 4B). Holding U fixed and choosing τrec to match different values of ropt showed that Gmax consistently dropped for higher U (Fig. 4C,D).

Figure 4.

Effects of the resources recovery time constant τrec and the facilitation factor U on Gmax and ropt for Embedded Image , for facilitatory synapses. A, The ropt surface, as a function of U and τrec, shows that a given optimal encoding rate can be matched by different combinations of synaptic parameters. For example, the iso-frequency curve of 150 Hz (black line) can be achieved with either Embedded Image ( °) or Embedded Image ( □). B, The Gmax surface, as a function of U and τrec, shows that the same maximum gain can be observed for many different combinations of U and τrec. The black line shows the contour for Embedded Image . The two configurations with the same ropt marked in panel A have distinct gains (Embedded Image ). C, We fix U and vary τrec (circle sizes) to match ropt (x-axis), then observe the gain. Larger values of U systematically produce smaller gains. The recovery time has a lower boundary Embedded Image . D, Gmax as a function of U for three different values of ropt. Larger values of U require smaller values of τrec (circle sizes) to match the same ropt, but as a consequence the gain decreases as U increases. The effects of τrec and U on Gmax and ropt for facilitatory synapses can also be seen in Extended Data Figure 4-1.

Extended Data Figure 4-1

Effects of resources recovery time constant τrec and facilitation factor U on Gmax and ropt for facilitatory synapses. A, Similar to Figure 4 but with Ts = 100 ms. B, Similar to Figure 4 but with Ts = 300 ms. Both panels reproduce the main findings of Figure 4, showing that a given optimal encoding rate can be matched by different combinations of synaptic parameters, but resulting in different gains.

These results indicate that, in terms of maximum gain Gmax, the fine tuning of intracellular mechanisms that work to steadily increase a low initial release probability might be more important than fast vesicle replenishment mechanisms. This remains true for larger Ts (Extended Data Fig. 4-1).

In summary, our results show that a set of presynaptic STP parameters generates a gain surface G that, in principle, could be tuned to match presynaptic population activity characteristics. The optimal rate and the maximum gain are independent of the stimulus intensity for a low signal-to-noise ratio, with facilitatory synapses yielding high gains for sparse distributions while depressing synapses avoid negative gains only with dense distributions. For low basal activity (rbas = 0.5 Hz) and a short integration window (Ts = 40 ms), the parameter U is the principal determinant of the optimal distribution. Furthermore, a lower U yields higher gains than a lower τrec when the optimal encoding rate is kept constant.

Feedforward inhibition (FFI) and heterogeneous STP

In the above, we ignored the fact that presynaptic STP can be target dependent (Markram et al., 1998; Reyes et al., 1998; Rozov et al., 2001; Sun et al., 2005; Pelkey and McBain, 2007; Bao et al., 2010; Blackman et al., 2013; Larsen and Sjöström, 2015; Éltes et al., 2017), so the spike trains coming from the same axon can be modulated by different short-term dynamics at different synapses. In the following, we describe the effects of such heterogeneity in a FF-EI motif (Fig. 1C), a ubiquitous circuit motif across the brain (Klyachko and Stevens, 2006; Dean et al., 2009; Isaacson and Scanziani, 2011; Wilson et al., 2012; Jiang et al., 2015; Grangeray-Vilmint et al., 2018).

We extend our previous analysis to a scenario in which the presynaptic population makes synaptic contacts not only with a readout neuron but also with the local inhibitory population, which projects to the readout neuron, creating the FF-EI motif. Both the readout neuron and the inhibitory group receive the same spike trains via two different types of synapses, s1 and s2 (Fig. 1C). Because the presynaptic population activity is the same for both synapses (Formula ), the differences in gain (G) are governed by the STP properties of the two synapses. Figure 5A shows G for a facilitatory (s1, U = 0.1, Formula ) and a depressing (s2, U = 0.7, Formula ) synapse.

Figure 5.

Combined optimal distribution of activity in a FF-EI circuit with target-dependent STP. A, G as a function of Embedded Image and Next for a facilitatory synapse (s1, top) and for a depressing synapse (s2, bottom). This is similar to Figure 2E. B, The combined gain (Embedded Image ) of the FF-EI circuit as a function of Embedded Image and Next obtained by combining the gains of the FFE and FFI branches. The black line marks the Embedded Image for every stimulus intensity Embedded Image and is represented with dashed black lines in panel A. Inset, Schematic of the FF-EI circuit. C, Gain as a function of Next for Embedded Image of rbas (gray lines in panels A, B). Gcom inherits the non-monotonicity from Embedded Image (blue, compare with Fig. 2D). The gain for a depressing synapse is negative (Embedded Image , red) for every Embedded Image . D, Nopt as a function of Embedded Image produces iso-frequency lines (compare with Fig. 2F). Embedded Image is markedly larger than Embedded Image (top). Embedded Image is independent of Embedded Image (for Embedded Image , compare with Fig. 2G). Gain for both synapse types at Embedded Image (dashed lines). The small decrease in synaptic gain for s1 is compensated by putting s2 in a very negative gain region (bottom). E, Embedded Image surface for different combinations of STP characteristics of s1 and s2. Notice that Embedded Image steadily increases for Embedded Image or Embedded Image , and that Embedded Image whenever s1 is more depressing than s2. The circle marks the specific Embedded Image combination used in the other panels and throughout the work. F, Effects of ongoing basal activity rbas on optimal conditions for Embedded Image . Increasing basal activity decreases the combined optimal rate (top) and combined maximum gain (center). Results for three different U at the facilitatory synapse. Increasing basal activity consistently decreases Gmax.
Embedded Image decay happens mostly because of the decay of the positive gain at the facilitatory synapse s1 (blue), while the negative gain at the depressing synapse s2 remains negative and changes only slightly (red). The dashed vertical line marks the basal activity used for most of our analysis, Embedded Image , where both branches contribute significantly to increase the combined gain (bottom).

In the case of a FF-EI network, these two synapse types may be associated with the two branches, for example, s1 with the feedforward excitation (FFE) branch (targeting a principal neuron) and s2 with the FFI branch (targeting local interneurons that eventually project to principal neurons; Fig. 5B, inset). In this arrangement, the combined gain is determined by the two branches Formula . We found that the combined gain of the FF-EI circuit also varied non-monotonically as a function of Next and peaked at Formula , which corresponded to the combined optimal encoding rate Formula (Fig. 5B,C). Note that the combined maximum gain of the FF-EI circuit is larger than the gain obtained via the FFE branch with facilitatory synapses alone (Fig. 2C). This substantial increase is a consequence of the strictly negative profile of Formula . When the extra input was distributed over Formula units (sparse coding), the depressing branch of the FF-EI drove the local inhibitory group more weakly than in a scenario in which Formula (dense coding). Therefore, with a sparse distribution of the input, the readout neuron experienced stronger excitation from the FFE branch and weaker inhibition from the FFI branch.
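A back-of-the-envelope version of this combination, assuming for illustration (this is our simplification, not the paper's equation) that the combined gain is the excitatory branch gain minus the inhibitory branch gain, already reproduces the boost over the FFE branch alone in the mean-field TM sketch with illustrative parameters:

```python
import numpy as np

def release_rate(r, U, tau_f, tau_rec):
    # mean-field steady-state release rate of a TM synapse for Poisson input
    u = U * (1 + r * tau_f) / (1 + U * r * tau_f)
    x = 1.0 / (1 + u * r * tau_rec)
    return r * u * x

def gain_curve(N, r_bas, R_ext, params):
    # percent gain of the population release relative to the dense case
    Q = np.array([n * release_rate(r_bas + R_ext / n, *params)
                  + (N - n) * release_rate(r_bas, *params)
                  for n in range(1, N + 1)])
    return 100.0 * (Q - Q[-1]) / Q[-1]

N, r_bas, R_ext = 1000, 0.5, 25.0
G_e = gain_curve(N, r_bas, R_ext, (0.1, 0.5, 0.05))  # s1: facilitatory, FFE branch
G_i = gain_curve(N, r_bas, R_ext, (0.7, 0.02, 0.5))  # s2: depressing, FFI branch
G_com = G_e - G_i   # assumed combination: weaker inhibition adds to the gain
```

The combined peak exceeds the FFE-only peak precisely because a sparse input puts s2 in its negative-gain region, reducing the drive to the inhibitory group.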

Similar to the behavior of facilitatory synapses, in the FF-EI network Formula increased linearly as a function of Formula , maintaining a constant optimal encoding rate Formula (Fig. 5D, top). We also observed that Formula was larger than Formula (Formula ), making the isolated gain of s1 suboptimal. However, this can be compensated by putting s2 into a very negative gain region (Fig. 5D, bottom, red dashed line) with a sparse distribution of the inputs. We show analytically that Formula and Formula are independent of the extra rate for a wide range of conditions (see Materials and Methods).

We extended this analysis to a large range of Formula STP combinations by gradually changing the set of parameters Formula (Fig. 5E). We found that Formula increased monotonically when we made the synapse s1 more facilitatory or the synapse s2 more depressing. The anti-diagonal (where Formula ) marked the region of zero gain; any point above it (s2 more facilitatory than s1) resulted in Formula , whereas any point below it (s1 more facilitatory than s2) resulted in Formula . As expected, if s1 is highly facilitatory and s2 highly depressing, the combined effect will be very high gains, provided that the presynaptic activity is optimally distributed.

Effects of basal activity on the FF-EI network

Next, we investigated the effects of the stationary basal activity on the combined optimal conditions of a FF-EI network. We found that the optimal rate and optimal gain both decreased as rbas was increased (Fig. 5F). Separation of the individual contributions of the s1 and s2 branches revealed that this decrease was primarily because of a reduction in the gain of facilitatory synapses (s1), whereas the strong negative gain of depressing synapses (s2) remained approximately unaltered. This suggests that a population of facilitatory synapses will lose most of its activity distribution-dependent gain as the basal firing rate is increased, whereas a population of depressing synapses can preserve this capability even at larger basal rates.

Thus, these results show that a FF-EI network with target-dependent STP can make the discrimination of sparse activity more robust than what could be achieved by the FFE alone. This can be achieved when the excitatory branch is facilitatory while the activation of the inhibitory branch is depressing (by placing s1 and s2 at the region below the anti-diagonal in Fig. 5E).

Sparse code identification by a postsynaptic neuron model

The ability of STP to amplify the output of a presynaptic population would be functionally relevant only if this amplification is transferred to the postsynaptic side. We tested the postsynaptic effects of the STP-based modulation of the presynaptic activity distribution by simulating an I&F neuron model (Eq. 15) as a readout device for a FF-EI circuit (Fig. 6A). We simulated a presynaptic population with characteristics similar to the cerebellar molecular layer, a massively feedforward system with properties much like the ones we have described so far (Ito, 2006).

Figure 6.

Transfer of STP gain from presynaptic population to postsynaptic neuron. A, Schematic of a readout neuron receiving feedforward excitation and FFI input. The inhibitory group was driven by the PRR of synapses of type s2 (Eq. 17). Sparse and dense activity patterns are schematically shown. The population transiently (Embedded Image marked in gray) increases its firing rate by Embedded Image of Rbas in two different configurations: sparse (top, Next = Nopt) or dense distribution (bottom, Embedded Image ). For panels B–E, Embedded Image of Rbas. B, PSTH of the spiking rate (3000 realizations) of the readout neuron receiving a sparse (black) or dense (gray) distribution of input activity. C, Mean membrane potential for sparse input (black) and dense input (gray) when synapse s1 was facilitatory and synapse s2 was depressing. Red trace, Membrane potential when synapse s1 was facilitatory and synapse s2 was static. Blue trace, Membrane potential when synapse s1 was static and synapse s2 was depressing. The red and blue traces show the contributions from synaptic facilitation (FFE branch) and depression (FFI branch) to the neuron response. D, Changes in the total excitatory and inhibitory conductances for the four configurations of synapses (as shown in panel C). The dashed line marks the conductance changes for the static synapses condition. E, Effect of varying Next as a proportion of Embedded Image on the expected spike count (top, black circles) and the mean membrane potential (bottom, black circles) during the event-related activity period. Both profiles match the combined gain curve (gray line, compare with Fig. 5C), with a peak at Embedded Image . F, Probability distribution of output spike counts within Ts. G, Separation (1–BC) between spike count distributions as a function of Embedded Image .
The sparse distribution produced increasingly substantial separation when compared with basal (dark gray) and dense distribution (black), whereas the separation was always small when comparing dense distribution with basal activity (light gray).

Specifically, the readout neuron received input from 160,000 presynaptic neurons. The presynaptic background activity was modeled as independent and homogeneous Poisson spike trains with an average firing rate of Formula (Formula ). In addition, the population of presynaptic neurons increased its firing rate (Formula of Rbas) during a brief time window (Formula ) to mimic event-related activity. The extra presynaptic activity was either confined to a small set of presynaptic neurons (Next = Nopt, sparse) or distributed over a large number of neurons (Formula , dense). The excitatory synapses onto the readout neuron (s1) were facilitatory, with STP parameters for each synapse drawn from a Gaussian distribution (s1, Formula and Formula ). The FFI activity was modeled as a Poisson process whose firing rate (λi; Eq. 9) depended linearly on the excitatory input arriving through depressing synapses, with STP parameters for each synapse drawn from a Gaussian distribution (s2, Formula , and Formula ). The maximum weights of the excitatory and inhibitory synapses were drawn from Gaussian distributions (Formula ).

The distribution of the input had a noticeable effect on the output of the target neuron, as shown by the peristimulus time histogram (Fig. 6B). Whereas the dense distribution elicited transients at the beginning and end of the stimulus period because of the slow time constant of inhibition, the sparse code elicited a sustained, elevated firing rate response throughout the stimulus period. The stimulus-induced membrane potential responses for the two types of input patterns (dense and sparse) were also similar to the firing rate responses (Fig. 6C). By interchangeably setting s1 and s2 to static, we identified that both branches contributed significantly to keeping the mean membrane potential high in the presence of extra sparse input.

The contribution of each branch becomes clear from the average change in the total excitatory and inhibitory conductances of the readout neuron. When both synapses were dynamic and the stimulus was sparse (Fig. 6D, leftmost), the average excitation was larger (because of synaptic facilitation) and the average inhibition was lower (because of synaptic depression) than the average changes caused by a stimulus of the same intensity but with a dense distribution (Fig. 6D, rightmost). Note how, with dynamic synapses and a dense distribution of the stimulus, the conductance changes matched the expected change for static synapses (dashed line). When we kept the stimulus distribution sparse but interchangeably set s1 and s2 to static, the conductance trace of the static branch reached the same value as for the dense distribution, and the system was left with the gain produced at the dynamic branch. Dense distributions, therefore, do not exploit the STP nonlinearities, and the synapses behave approximately as static, as predicted.

Next, we systematically changed Next as percentages of Nopt (Next = 1%, 10%, 25%, 50%, 100%, 200%, 400%, 1000% of Nopt, black circles in Fig. 6E) and found that both the mean membrane potential and the average spike count during the stimulus period followed profiles that closely matched the predicted Gcom curve (Fig. 6E). This result confirms that the modulation of the PRR of the presynaptic population is faithfully translated into postsynaptic variables (gain estimated on the presynaptic side; membrane potential and spike rate measured on the postsynaptic side). Furthermore, this result also highlights the robustness of this mechanism: even with considerable deviations from the optimal encoding distribution (Next = 50% or Next = 200% of Nopt, marked as the first black points to the left and right of Next = Nopt), the evoked responses remained reasonably close to optimal.

To further assess how individual realizations of the sparse input could be distinguished from a dense input of the same intensity, we sampled the output spike count of the readout neuron for a period of 40 ms during the ongoing basal activity just before the stimulus and during the 40 ms stimulus period for both sparse and dense distributions (Fig. 6F). We used the Bhattacharyya coefficient (BC) as a measure of overlap between these sample distributions and 1–BC as a measure of difference (Fig. 6G). The dense input had almost complete overlap with the basal condition. On the other hand, the sparse input produced increasingly different response distributions from both the dense input and basal condition, with almost complete separation at Formula of rbas.
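The overlap measure can be computed directly from empirical spike-count histograms. The counts below are synthetic stand-ins (binomial approximations of Poisson counts), not the simulated data from the paper:

```python
import math
import random
from collections import Counter

def bhattacharyya(a, b):
    """BC = sum_k sqrt(p_k * q_k) over the union of observed spike counts."""
    pa, pb = Counter(a), Counter(b)
    na, nb = len(a), len(b)
    return sum(math.sqrt((pa[k] / na) * (pb[k] / nb)) for k in set(pa) | set(pb))

random.seed(0)
# synthetic spike counts in a 40 ms window: basal vs sparse-stimulus response
basal  = [sum(random.random() < 0.10 for _ in range(40)) for _ in range(3000)]
sparse = [sum(random.random() < 0.25 for _ in range(40)) for _ in range(3000)]
separation = 1.0 - bhattacharyya(basal, sparse)
```

BC equals 1 for identical distributions, so 1 – BC grows toward 1 as the response distributions separate.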

Taken together, these results illustrate the potential role of dynamic synapses in the amplification of sparse signals on the presynaptic side (Qp, G), even when the signal intensity is just a small fraction of the ongoing basal activity and, therefore, likely to be buried in proportionally large noise fluctuations. In addition, for a dense distribution of the input, the system can preserve short periods (Formula ) of increased (decreased) spike probability right after stimulus onset (offset) because of delayed inhibition, a known characteristic of FF-EI motifs that might serve as an indication of global background rate changes.

Continuous extra rate distribution

Thus far, we have considered a binary distribution of the extra rate: a fraction of presynaptic cells increased their rate by rext, the rest not at all. Although some neural networks might roughly operate in this binary fashion, it is important to ask how such STP-driven gains would operate under continuous distributions, perhaps a more comprehensive way of describing the activity distribution of many neural populations. We therefore estimated the optimal conditions for the case in which the extra presynaptic activity follows a γ distribution (Eq. 19).

Varying the shape parameter (Formula ) changes the distribution from an exponential to a quasi-Gaussian. For each fixed shape, we controlled the mean (and therefore the sparsity; Eq. 20) of the distribution with the scale parameter (Formula ). For each set Formula , we calculated the expected gain (Eq. 21) yielded by a population of facilitatory synapses (s1; Fig. 7A, left), by a population of depressing synapses (s2; Fig. 7A, center), and the combined gain (Fig. 7A, right). Nine particular parameter choices are shown in Figure 7B, where the central panels follow the choices that maximize the combined gain.
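Why a dispersed (sparse) rate distribution can help facilitatory synapses has a Jensen's-inequality flavor: the mean-field release rate is convex at low rates, so for a fixed mean extra rate a high-variance γ (k = 1, exponential) yields a larger expected release than a narrow quasi-Gaussian one (k = 20). The sketch below uses illustrative parameters and a simple Monte Carlo estimate, not the paper's Eqs. 19–21:

```python
import numpy as np

def release_rate(r, U=0.1, tau_f=0.5, tau_rec=0.05):
    # mean-field steady-state release rate of a facilitatory TM synapse
    u = U * (1 + r * tau_f) / (1 + U * r * tau_f)
    x = 1.0 / (1 + u * r * tau_rec)
    return r * u * x

rng = np.random.default_rng(0)
r_bas, mean_ext, n_samples = 0.5, 2.0, 200_000
# two gamma distributions of the extra rate with the same mean (k * theta)
r_sparse = rng.gamma(shape=1.0, scale=mean_ext / 1.0, size=n_samples)
r_dense = rng.gamma(shape=20.0, scale=mean_ext / 20.0, size=n_samples)
q_sparse = release_rate(r_bas + r_sparse).mean()
q_dense = release_rate(r_bas + r_dense).mean()
```

Despite identical mean input, q_sparse exceeds q_dense, mirroring the gain of sparse continuous distributions at facilitatory synapses.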

Figure 7.

Synaptic gain when firing rates of individual neurons (rext) were drawn from continuous distributions (γ distribution). A, Gain surfaces for a facilitatory synapse (left), for a depressing synapse (middle), and for the combined effect in a FF-EI (right). Increasing the shape parameter (k) moved the distribution from exponential to approximately Gaussian. Decreasing the scale parameter (θ) moved the distribution from a high mean and high variance (sparse) to a low mean and low variance one (dense). Black lines mark the θ that resulted in maximum combined gain for each value of k. Similar to the binary distribution, the Embedded Image is obtained by putting s1 in positive and s2 in negative gain regions. Note that the gain values are in the same range as in Figure 5A,B. B, Representative examples of γ functions used to model the distribution of rext. The nine examples correspond to the points marked in panel A. Changing k affects the shape of the distribution: exponential (top row), right-skewed (middle row), and approximately Gaussian (bottom row) shapes. Changing θ affects the scale of the distribution (sparsity of the population code): high mean and variance (left column), optimal mean and variance (middle column), and low mean and variance (right column). C, Gain curves for a facilitatory synapse (blue), for a depressing synapse (red), and for the combined effect in a FF-EI (black). These curves were obtained for a fixed k = 10 (gray lines on panel A) and gradually changing θ. The gains as a function of the γ activity distribution follow a profile similar to the binary distribution (compare with Fig. 5C). D, γ-Distributed rext (color plot) as a function of the shape parameter. Mean rext from the γ distributions obtained with the optimal θ for each value of k (black lines on panel A). As the γ shape moves from an exponential to a Gaussian one (increasing k), the mean of the optimal distribution approaches the ropt for the binary distribution.

We found that, similar to the binary distribution case, the gain for facilitatory synapses followed a non-monotonic curve as a function of θ (for a fixed k), with negative values at high θ (overly sparse distribution), a single peak at the optimal θ, and convergence to 0 at low θ (dense distribution). By contrast, depressing synapses showed negative gains, monotonically converging to zero at low θ. The combined gain reached high values when s1 synapses were in very positive and s2 synapses in very negative operating regions (Fig. 7C; see Fig. 7A, gray line).

Interestingly, not only were the gain magnitudes very similar to the ones obtained with binary distributions (compare the colorbars of Figs. 5A,B and 7A), but with continuously distributed rates the points of maximum gain were also obtained at high mean rates (relative to rδ) and are therefore representative of sparse distributions of the population activity. For increasing values of k, the skewness of these distributions approached zero (i.e., they became closer to a Gaussian) and the mean rext of the optimal θ approached the ropt obtained with binary distributions. These results further corroborate the effects of activity distribution-dependent gain modulation in presynaptic populations with STP.

Continuous basal rate distribution

In the preceding analysis we assumed that rbas is fixed and the same for all presynaptic units. A more natural scenario, however, is a continuous distribution of basal firing rates. We extended our analysis to account for this continuous rbas scenario in the same way as for rext: we modeled the distribution of basal rates with a γ distribution with varying shape and scale parameters. Varying the shape parameter (k) changed the distribution from an exponential to a quasi-Gaussian one (Fig. 8A). For each fixed shape, we controlled the mean of the distribution (Eq. 20) with the scale parameter.

Figure 8.

Continuous distribution of rbas. A, Two configurations of a γ distribution with the same mean value (mean rbas = 0.5 Hz). When k = 1, the distribution is exponential and any chosen neuron is more likely to have rbas < 0.5 Hz than rbas > 0.5 Hz. A higher shape value, k = 20, brings the distribution closer to a Gaussian and decreases the variance. B, Expected values of Embedded Image and Embedded Image for varied γ shape values. The higher k is, the smaller the variance, and the results converge to the values estimated with fixed rbas. C, As with the other variables, the optimal gains start at higher values (when k is low) and converge to the values estimated with fixed rbas for higher k. For lower rates the estimates do not diverge much (see the overlapping solid and dashed lines for rbas = 0.5 Hz), but for higher rbas the increased variance of the continuous distribution has nontrivial effects. This indicates that distribution-dependent gains from facilitatory populations might be more resilient to higher basal levels than initially predicted.

We calculated the expected values of Formula and Formula for each distribution shape (Fig. 8B) and found that these values converged to the values estimated with a fixed rbas for higher k (quasi-Gaussian) and diverged for lower k (exponential). Using these traces, we calculated the gains (Fig. 8C) and again found that the results converged to the values estimated with fixed rbas. For higher rbas, however, the differences between fixed and continuously distributed rbas were more pronounced.

The divergence we observed can be explained by the probabilities that any chosen unit has a rate below or above the mean rbas. As discussed above, a higher rbas hinders the exploitation of STP nonlinearities and, therefore, reduces the possibility of higher gains. For exponential-like distributions, a larger proportion of the population has rbas below the mean, which reduces this hindering effect, even if a smaller part of the population (with rbas above the mean) is more impaired. As the distribution approaches a Gaussian (increasing shape parameter), the proportions of the population with rbas below and above the mean become almost equal. In the limit of large k, the variance of the γ distribution approaches zero (for our fixed mean) and the gains converge to the values estimated with fixed rbas.
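The asymmetry argument can be checked directly. For an integer shape k, the γ CDF can be evaluated with the standard Erlang/Poisson identity, so no external statistics library is needed; the function name is ours:

```python
import math

def gamma_cdf(x, k, theta):
    """CDF of a gamma distribution with integer shape k and scale theta,
    via the identity P(Gamma(k, theta) <= x) = P(Poisson(x / theta) >= k)."""
    lam = x / theta
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

mean_rbas = 0.5  # Hz, as in Figure 8A

# Fraction of the population with rbas below the mean, for the two shapes:
p_exp = gamma_cdf(mean_rbas, k=1, theta=mean_rbas / 1)      # exponential
p_gauss = gamma_cdf(mean_rbas, k=20, theta=mean_rbas / 20)  # quasi-Gaussian

print(round(p_exp, 3))   # 1 - 1/e, i.e. about 0.632: most units sit below the mean
print(round(p_gauss, 3)) # much closer to one half: proportions nearly balanced
```

For k = 1, roughly 63% of units fire below the mean basal rate, so the majority of the population still benefits from the STP nonlinearity; for k = 20, the split is nearly even, which is why the gains converge to the fixed-rbas estimates.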

It is worth noting that, for higher mean rbas, the spike rate variances are also higher and the estimates of gains with fixed rbas become less accurate. This means that our simplified prediction of the hindering impact of higher rbas on the distribution-dependent gains is likely an overestimate of the actual effect in real neuronal populations. In other words, the distribution-dependent gains in facilitatory populations can be more resilient to higher rbas than a fixed-rbas model predicts.

From presynaptic gains to postsynaptic rate changes

The readout neuron in our simulations operates in a regime where presynaptic gains are reliably translated into readout firing rate gains, which is equivalent to saying that the postsynaptic transfer function is independent of the input distribution. However, both the synapses and the readout neuron's dendrites/soma can operate in a nonlinear regime and further transform the presynaptic gain described above. These nonlinearities are reflected in the transfer function of the neuron, i.e., the probability of an output spike given a certain input.

To identify the circumstances in which changes in the postsynaptic transfer function may affect the transfer of presynaptic gains into output firing rate, let us consider a neuron with two possible transfer functions (Fig. 9, blue and red curves). The transfer function TF-1 is similar to the one we considered previously in Figure 6. TF-2 shows a sharp change. Such a sharp change in the transfer function may arise, for example, because of NMDA receptors: when input is strong, postsynaptic depolarization can remove the Mg2+ block, creating a larger EPSP and increasing the spike probability (Du et al., 2017). Similarly, nonlinear local dendritic integration (Polsky et al., 2004), input correlations (de la Rocha and Parga, 2005), and voltage-dependent ion channels may also create input-dependent changes in the neuron's transfer function. When the transfer function can change between TF-1 and TF-2, the output firing rate is determined not only by the effective input (sparse > dense, for s1-facilitatory) but also by the qualitative differences between the two transfer functions.

Figure 9.

Transfer of presynaptic gains onto postsynaptic rate changes: distribution-dependent postsynaptic transfer function. For facilitatory synapses, sparse input distributions yield a higher effective input than dense distributions. If both distributions keep the neuron operating under the same transfer function, the presynaptic gain is efficiently transferred to the postsynaptic rate (1–2 or 3–4). If sparse distributions put the neuron under TF-2 while dense distributions keep it under TF-1, the presynaptic gains are magnified (1–4). Conversely, if dense distributions put the neuron under TF-2 while sparse distributions keep it under TF-1, the presynaptic gains might be overtaken by postsynaptic integration effects (2–3).

Sparse input distributions concentrate the extra incoming spikes into bursts, which could cause extra accumulation of neurotransmitter (for s1-facilitatory) at specific dendritic sites, triggering supralinear integration (TF-2, red curve). If a dense input distribution does not trigger TF-2 and instead keeps the neuron operating under TF-1, the difference between the presynaptic gains of sparse and dense distributions will be further increased (see the difference between points 1 and 4; Fig. 9).

In cases where both input distributions operate under the same transfer function, the presynaptic gains will be reliably transferred into output rates (compare points 1 and 2 for TF-1, and points 3 and 4 for TF-2, in Fig. 9). Finally, when a dense distribution of inputs makes the output neuron operate under TF-2 while a sparse distribution keeps it under TF-1, the presynaptic gains could be overcome (Fig. 9, compare points 2 and 3).
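The three scenarios can be illustrated with two hypothetical sigmoidal transfer functions. All shapes and numbers below are illustrative assumptions, not fits to the model of Figure 9; TF-2 is simply drawn sharper and with a lower half-activation point, as an NMDA-gated regime might be:

```python
import math

def sigmoid_tf(x, x_half, slope, r_max=60.0):
    """Hypothetical sigmoidal transfer function: output rate (Hz) vs effective input."""
    return r_max / (1.0 + math.exp(-(x - x_half) / slope))

def tf1(x):  # shallow transfer function (cf. TF-1)
    return sigmoid_tf(x, x_half=10.0, slope=4.0)

def tf2(x):  # sharp, supralinearly gated transfer function (cf. TF-2)
    return sigmoid_tf(x, x_half=8.0, slope=1.0)

# Effective inputs (arbitrary units): sparse > dense for s1-facilitatory.
e_dense, e_sparse = 9.0, 12.0

same_tf_gain = tf1(e_sparse) - tf1(e_dense)    # points 1 -> 2: gain transferred
magnified_gain = tf2(e_sparse) - tf1(e_dense)  # points 1 -> 4: gain magnified
reversed_case = tf2(e_dense) > tf1(e_sparse)   # points 2 vs 3: gain overtaken
print(same_tf_gain, magnified_gain, reversed_case)
```

With these illustrative parameters the dense input under TF-2 already outputs more than the sparse input under TF-1, reproducing the "overtaken" scenario of points 2 and 3.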

Linear approximation of Qs

We solve Qs numerically (Eq. 3) and show that it behaves linearly over a moderate range of rates in different STP regimes (Fig. 10). The approximation by a linear function, Qs(rbas + rδ) ≈ Qs(rbas) + Ss rδ, allows Gmax and ropt to be independent of the stimulus intensity and population size (Eqs. 13, 14).

Figure 10.

Linear approximation of Qs. A, For a fixed basal rate and increasing rδ, Qs is approximately linear for all STP regimes: facilitatory (U = 0.1), facilitatory/depressing (U = 0.4), and depressing (U = 0.7). B, The slope deviation for increasing rδ, compared with the slope for small rδ, is always smaller than 0.6% for the three synapse types. C, R2 is always close to 100% for the three synapse types. D, Absolute value of the slope deviation, similar to panel B, but for rδ departing from several different values of rbas. The gray line marks a fixed slope-deviation threshold. We observe that the linear approximation works well throughout a large region (left of the gray line) for the facilitation (left) and facilitation/depression (middle) regimes and is somewhat more constrained for depressing synapses (right). E, Similar to panel C, but for rδ departing from several different values of rbas.

To what extent is the linear approximation valid? To investigate this, we solve Qs for gradually increasing rδ (Fig. 10A), departing from a range of different basal levels. We then compare the slope for each rδ with the slope for small rδ and measure how much they deviate (Fig. 10B,D). If, for a given rbas, increasing rδ resulted in a significant change in the regressed slope Ss, then Gmax would depend on the stimulus intensity rδ. We also report the R2 statistics to confirm the accuracy of the linear approximation (Fig. 10C,E).

We observe that, for low signal-to-basal ratios, there is a wide range of rates for which the approximation is accurate, with a small slope deviation and R2 > 99.9%. Especially for low rbas, the approximation is valid over the whole range of rδ.
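The validation procedure of Figure 10B,D can be sketched generically. Since Eq. 3 is not reproduced here, the quadratic function below is a purely illustrative stand-in for Qs, a mildly supralinear release-rate curve; the point is the mechanics of the secant-slope comparison, not the specific values:

```python
def secant_slope(Q, r_bas, r_delta):
    """Slope of the chord of Q between r_bas and r_bas + r_delta (cf. Ss)."""
    return (Q(r_bas + r_delta) - Q(r_bas)) / r_delta

def slope_deviation(Q, r_bas, r_delta, r_ref=0.5):
    """Relative deviation (%) of the secant slope at r_delta from the
    small-r_delta reference slope, as in Fig. 10B,D."""
    s_ref = secant_slope(Q, r_bas, r_ref)
    return 100.0 * abs(secant_slope(Q, r_bas, r_delta) - s_ref) / s_ref

def q(r):
    """Illustrative stand-in for Qs: weakly curved, so nearly linear in r."""
    return 2.0 * r + 0.001 * r ** 2

# For a weakly curved Q, the secant slope barely changes with stimulus
# intensity, so Gmax and ropt are effectively independent of r_delta:
print(slope_deviation(q, r_bas=0.5, r_delta=10.0))  # well below 0.6%
print(slope_deviation(q, r_bas=0.5, r_delta=50.0))  # grows with r_delta
```

Whenever the deviation stays small over the rδ range of interest, the linear form Qs(rbas) + Ss rδ can safely replace the numerical solution.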

Discussion

Our results show how the activity distribution of a presynaptic population can exploit the nonlinearities of short-term synaptic plasticity, and they demonstrate the theoretical potential of synaptic dynamics to endow a postsynaptic target with the ability to discriminate weak signals from background activity fluctuations of the same amplitude. Such a mechanism has the advantage of being built into the synapses, requiring neither further recurrent computation nor any kind of supervised learning. This feature is likely to be present in different brain regions, e.g., the cerebellum and the hippocampus, and might have critical implications for general information processing in the brain.

Relevance to specific brain circuits

We have shown that STP can enhance the effective input when (1) the stimulus is sparse and temporally bursty, and (2) FFE synapses on the principal cells are facilitatory while FFE synapses on local fast-spiking inhibitory interneurons are depressing. These two conditions are fulfilled in several brain regions. Sparse coding provides many advantages for neural representations (Babadi and Sompolinsky, 2014) and associative learning (Litwin-Kumar et al., 2017). As discussed in the following, a number of experimental studies support sparse coding in several brain regions, such as the neocortex, cerebellum, and hippocampus.

In the cerebellum, glomeruli in the granular layer actively sparsify the multimodal input from mossy fibers into relatively few simultaneously bursting PFs (Billings et al., 2014) projecting to Purkinje cells (PuCs). A single PuC might sample from hundreds of thousands of PFs (Tyrrell and Willshaw, 1992; Ito, 2006). In behaving animals, PFs present two stereotypical activity patterns: a noisy basal state with rates lower than 1 Hz during long periods, interleaved with short-duration (Formula ), high-frequency (usually Formula ) bursts carrying sensory-motor information (Chadderton et al., 2004; Jörntell and Ekerot, 2006; van Beugen et al., 2013). Given the large number of PFs impinging onto a PuC, the fluctuations in the basal rate are as large as the event-related high-frequency bursts. As our analysis shows, if PF synapses were static, the PuC would not be able to discriminate between high-frequency bursts and background fluctuations. However, PF synapses show short-term facilitation when targeting PuCs and short-term depression when targeting basket cells (Atluri and Regehr, 1996; Bao et al., 2010; Blackman et al., 2013; Grangeray-Vilmint et al., 2018). Basket cells provide strong, phasic somatic inhibition to PuCs (Jörntell et al., 2010). This circuit motif closely matches the FF-EI circuit investigated in this work (Fig. 6). Based on these similarities, we argue that one functional implication of these specific STP properties is to enable the PuC to discriminate between information encoded in high-frequency bursts and background activity fluctuations.

In the neocortex, the population code in the layer 2/3 of the somatosensory (De Kock and Sakmann, 2008) and visual cortex of rats (Greenberg et al., 2008) and mice (Rochefort et al., 2009) is believed to be sparse (Petersen and Crochet, 2013), with short-lived bursts (usually Formula ) of high firing rates occurring over low rate spontaneous activity (Formula ). Additionally, it has been recently found that pyramidal cells at layer 2/3 of the mouse somatosensory cortex show short-term facilitation when targeting cells at layers 2/3 and 5 (Lefort and Petersen, 2017). The receptive field properties in the visual cortex are also consistent with the sparse code (Olshausen and Field, 1996). These characteristics suggest that the mechanism to discriminate between weak signals and background fluctuations may also be present in the neocortex. It is believed that such sparse representation at superficial cortical layers indicates strong stimulus selectivity (Petersen and Crochet, 2013), in which case the transient gain, provided by the target-dependent STP configuration of local pyramidal neurons, would be a suitable property for interlayer communication.

In the hippocampus, the Schaffer collaterals bringing signals from CA3 to CA1 operate under low basal firing rates, with evoked bursts of high-frequency activity during short periods of time (Schultz and Rolls, 1999). The synapses from pyramidal cells in CA3 to pyramidal cells in CA1 are facilitatory and provide this pathway with extra gain control (Klyachko and Stevens, 2006). At the same time, Schaffer collateral synapses onto CA1 stratum radiatum interneurons show a larger release probability than those onto pyramidal neurons (Sun et al., 2005). Therefore, this STP-based stimulus/noise discrimination mechanism is likely also used to improve the transmission of sequential activity from CA3 to CA1.

As we have pointed out above, the STP configurations in the neocortex, hippocampus, and cerebellum are consistent with the configuration that enables neural networks to take advantage of sparse coding. However, it is important to note that facilitatory excitatory inputs to other inhibitory cells also exist in the aforementioned circuits. These facilitatory inputs mostly target interneurons that form synapses on distal dendrites. The facilitatory excitatory drive to these classes of inhibitory neurons is, however, unlikely to counteract the distribution-dependent transient gains, because they produce weaker, slower, and persistent dendritic inhibition. Consistent with this idea, only parvalbumin-expressing neurons (which synapse on the soma), but not somatostatin-expressing neurons (which synapse on distal dendrites), modulate stimulus response gain (Wilson et al., 2012).

The initial release probability is the STP parameter that most clearly distinguishes Schaffer collateral synapses onto CA1 pyramidal cells from those onto CA1 interneurons (Sun et al., 2005). In line with this, our approach predicts that facilitatory mechanisms that steadily increase a low initial release probability during a fast sequence of spikes (low U) will have a greater impact on the optimal OD and gain amplitude than mechanisms for fast replenishment of resources (low τrec). However, the speed of recovery has itself been shown to be an activity-dependent feature (Fuhrmann et al., 2004; Crowley et al., 2007; Valera et al., 2012; Doussau et al., 2017), which could in principle increase the relevance of τrec.

The facilitatory or depressing nature of STP depends on the postsynaptic neuron type (Markram et al., 1998; Reyes et al., 1998; Rozov et al., 2001; Sun et al., 2005; Pelkey and McBain, 2007; Bao et al., 2010; Blackman et al., 2013; Larsen and Sjöström, 2015; Éltes et al., 2017). Target-dependent STP is a strong indication that such short-lived dynamics are relevant for specific types of information processing in the brain (Middleton et al., 2011; Naud and Sprekeler, 2018). Here, we predict that, when accompanied by the specific arrangements of target-dependent STP found experimentally in different brain regions, disynaptic inhibition can further increase the gain of sparse over dense distributions and make it robust even at higher basal activity, when the gain from facilitatory excitation decreases substantially.

Disynaptic inhibition following excitation is a common motif throughout the brain, and different classes of inhibitory neurons are believed to serve distinct computations within their local circuits (Wilson et al., 2012; Jiang et al., 2015). Despite the wide diversity of inhibitory cell types, a classification of FF-I into two main types, perisomatic-targeting and dendrite-targeting, is consistent with findings throughout the central nervous system. A remarkable attribute of this configuration is the consistency of the short-term dynamics of excitatory synapses across local circuits: depressing onto perisomatic and facilitating onto dendritic interneurons (Sun et al., 2005; Bao et al., 2010; Blackman et al., 2013; Éltes et al., 2017).

Disynaptic inhibition has been implicated in controlling the precision of a postsynaptic neuron’s response to brief stimulation in the cerebellum (Mittmann et al., 2005; Ito, 2014) and hippocampus (Pouille and Scanziani, 2001). Additionally, the combination of disynaptic inhibition with target-dependent STP has recently been associated with the ability of networks to decode multiplexed neural signals in the cortex (Naud and Sprekeler, 2018). In line with these findings, our results show a bimodal profile of the readout neuron's response to sparse or dense input codes. We also demonstrate that, alongside the sustained gain during sparse code transmission, in a dense coding scenario the system produces short periods (∼10 ms) of increased (decreased) spike probability right after stimulus onset (offset; Fig. 6B, gray line). This results from the inhibitory (GABA) conductances being slower than the excitatory (AMPA) conductances. This very short period of firing rate modulation might serve as an indication of a widespread basal rate change in the presynaptic population.

Relationship with previous work

Historically, STP has been prominently explored as a frequency filter that renders an individual neuron a low-pass filter (when its synapses are depressing) or a high-pass filter (when they are facilitatory; Markram et al., 1998; Dittman et al., 2000; Abbott and Regehr, 2004). It has been suggested that under some conditions STD can also interact with subthreshold oscillations to modulate the gain of neurons (Latorre et al., 2016). With STP, synaptic strength depends on the recent history of incoming spikes at a particular synapse. This automatically makes downstream neurons more sensitive to transient fluctuations in input spike trains. Most previous work has exploited this specific property for neural coding.

For instance, history dependence of STP means that the effect of serial correlations (that can be seen in the autocorrelogram of spike trains) and spike bursts in the presynaptic activity depends on whether the synapses express STF or STD. Synapses with STD reduce redundancy in the input spike train by reducing the PSPs of spikes that appear with a certain serial correlation or periodicity (Goldman et al., 2002). By contrast, when synapses express STF, they enhance the effect of serial correlations or spike bursts and the readout neuron can function as a burst detector (Lisman, 1997). In fact, both STF and STD can be combined to de-multiplex spike bursts from single spikes (Izhikevich et al., 2003; Middleton et al., 2011; Naud and Sprekeler, 2018). Thus, much emphasis has been put on understanding how STP can be used to extract information encoded in the pattern of spikes of a single input neuron.
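The burst-detector intuition can be made concrete with the standard deterministic per-spike updates of the Tsodyks-Markram model. The parameter values below are illustrative choices, not the values fitted in this study; the comparison is between the same five spikes delivered as a 100-Hz burst versus spread at 10 Hz:

```python
import math

def tm_release(isis, U, tau_fac, tau_rec):
    """Total expected release for a deterministic spike train, using the
    standard Tsodyks-Markram per-spike updates (facilitate u, release u*x,
    deplete x). isis: inter-spike intervals in seconds; the first is inf."""
    u, x, total = 0.0, 1.0, 0.0
    for dt in isis:
        u *= math.exp(-dt / tau_fac)                    # facilitation decays
        x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)   # resources recover
        u += U * (1.0 - u)                              # per-spike facilitation
        total += u * x                                  # expected release
        x *= (1.0 - u)                                  # resource depletion
    return total

n_spikes = 5
burst = [float("inf")] + [0.01] * (n_spikes - 1)   # 100-Hz burst
spread = [float("inf")] + [0.1] * (n_spikes - 1)   # same spikes at 10 Hz

# Facilitatory synapse (low U): the burst releases markedly more.
fac_burst = tm_release(burst, U=0.1, tau_fac=0.05, tau_rec=0.05)
fac_spread = tm_release(spread, U=0.1, tau_fac=0.05, tau_rec=0.05)

# Depressing synapse (high U): the burst is penalized instead.
dep_burst = tm_release(burst, U=0.7, tau_fac=0.02, tau_rec=0.1)
dep_spread = tm_release(spread, U=0.7, tau_fac=0.02, tau_rec=0.1)
print(fac_burst > fac_spread, dep_burst < dep_spread)
```

With facilitation faster than the burst ISI but slower than the spread ISI, the burst accumulates facilitation that the spread train loses between spikes, while under a high initial release probability the same burst mainly depletes resources.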

Here, we extend this line of work and show how STP may affect the impact of a neuronal ensemble on downstream neurons. Previous work has suggested that STP makes a neuron sensitive to transient rate changes. Given this property, when synapses show STD, input correlations can still modulate the neuron's output over a wide range of firing rates (de la Rocha and Parga, 2005). Our work reveals a new consequence of the same effect: STP endows neurons with an input-distribution-dependent gain, through which sparse-bursty codes can have a stronger downstream impact than dense codes of the same intensity. Furthermore, we investigate the relative importance of different STP parameters and baseline firing rates for these gains. This novel feature could be highly valuable in low signal-to-noise ratio conditions. Moreover, our results also show how synapses can impose further constraints on the neural code.

Experimental verification of model predictions

Experimentally, these results can be tested by measuring the distribution of evoked firing rates of neurons and the STP properties of synapses in the same brain area. Recent technological advances in stimulation systems, which allow submillisecond manipulation of the spiking activity of single and multiple cells, might soon provide the means for fine control of population spike codes in intact tissue (Shemesh et al., 2017). These, together with refined methods for single-cell-resolution imaging of entire populations (Xu et al., 2017; Weisenburger and Vaziri, 2018), may also allow scrutiny of the extent to which the proposed synaptic mechanisms for distribution-dependent gain are present in neural networks. Our prediction about the role of background activity in determining the gain of sparse or dense codes can be tested by changing the overall background activity using chemogenetic techniques.

Limitations and possible extensions

Here, we made several simplifications and assumptions to reveal that STP of synapses has important consequences for neural coding. Relaxing each of these simplifications and assumptions may affect our conclusions in certain conditions and should be investigated in further studies. In the following we briefly discuss a few crucial simplifications and how they might affect our results.

Our analyses considered the presynaptic activity to be composed of independent Poisson processes; that is, whenever we chose a set of Next presynaptic units to increase their firing rates, we chose them randomly. Because STP is a synapse-specific property, input cross-correlation will not affect PRR and Q and, therefore, the presynaptic gain. However, it is well known that input correlations can change the gain of a neuron (Kuhn et al., 2003; de la Rocha and Parga, 2005).

It is conceivable that in some conditions, input correlations can potentially neutralize the advantage of sparse codes over dense codes. The readout neuron fluctuations (and therefore, their output firing rate) are dependent on the input correlation. For the same amount of pairwise correlation, the size of fluctuations in the readout neurons is directly proportional to the number of signal-carrying units (Next). A larger Next (dense distribution) will elicit larger fluctuations than a smaller Next (sparse distribution). This is because for a larger Next more input spikes can occur together in the same time bin. Thus, input correlations may amplify the downstream impact of dense input distributions more than the sparse input distributions. The size of this effect of correlations depends on the number of inputs (Next) and the amount of correlations (both pairwise and higher-order). However, because cortical activity is weakly correlated (Ecker et al., 2010) such an effect of correlation may not be enough to completely neutralize the advantage of sparse distribution over dense distribution.

We also did not study the effect of the spatial location of synapses in transferring the advantage of sparse codes over dense codes to the readout neurons. There are at least two ways in which the dendritic locations of synapses may weaken the advantage of a sparse input distribution over a dense one. First, when synaptic strength decreases as a function of distance from the soma: in the sparse case, the synapses carrying the information may be located far from the soma, whereas for a dense input distribution at least some inputs will be closer to the soma. Therefore, even if on the presynaptic side a sparse input distribution generates a stronger output than a dense one, its effect on the postsynaptic neuron may be weakened by the weaker synapses. Even in this case, however, because of dendritic nonlinearities and Na+/Ca2+ spikes (Larkum et al., 2009), a distally located sparse distribution may still elicit a stronger response than a proximally located dense distribution. Second, the effect of synapses on certain dendrites can be cancelled by strategically placed inhibitory synapses (Gidon and Segev, 2012). A sparse distribution (because of its fewer synapses) may be cancelled or weakened by such inhibition, whereas the effect of this inhibition will be weaker on dense input distributions because many more synapses carry the input information. Thus, for sparse input distributions, their location on the neuron may be an important factor. A proper treatment of this question requires knowledge of, e.g., neuron morphology, the distribution of inhibition, and dendritic nonlinearities, and should be addressed in a separate study.

We also assumed that all synaptic weights are sampled from the same Gaussian distribution, as our goal was to consider a naive situation in which the weights have not been “trained” for any specific task. Different synaptic weight distributions may affect the value of the gains, especially when synaptic weights and inputs are associated (stimulus-specific tuning). Such distributions may arise through supervised or unsupervised learning. A systematic study of a network with stimulus-specific tuning of synaptic weights raises several pertinent questions and should be investigated in a separate study.

The transient enhancement or depression of synaptic efficacy by presynaptic mechanisms consists of many independent processes (Zucker and Regehr, 2002). The TM model is a tractable and intuitive way to account for the two phenomena of interest, but this parsimony comes at the cost of biophysical simplification. For example, it assumes that the space of available resources is a continuum (0 < x < 1), as opposed to the known discrete nature of transmitter-carrying vesicles. However, we argue that when modeling a large number of simultaneously active synapses, the variable of interest (the population PRR) can be approximated by a continuous variable. The nonuniform amount of transmitter per vesicle might further justify this assumption.

Detailed STP models that account for specific intracellular mechanisms (Dittman et al., 2000) and for the stochasticity of the release process (Sun et al., 2005; Kandaswamy et al., 2010) have been proposed in the literature. We argue that, with more complex models of STP, our results might change quantitatively, but the qualitative outcome of our analysis would remain: presynaptic short-term facilitation (depression) yields a substantial positive (negative) gain for sparse over dense population codes. Nevertheless, it would be interesting to see how the gain and optimal rate predictions may be shaped by more detailed models.

Our analyses do not account for use-dependent recovery time, changes in the readily releasable pool size (Kaeser and Regehr, 2017), or heterogeneity in vesicle properties. The effects of postsynaptic receptor desensitization and of neurotransmitter release inhibition by retrograde messengers (Brown et al., 2003) are likely to decrease the estimated gain by counteracting facilitation. Another interesting extension would be to investigate the effects of STP heterogeneity and compartment-dependent input using multicompartment neuron models (Vetter et al., 2001; Grillo et al., 2018).

If the same patterns of bursts tend to recur (e.g., PFs in the cerebellum during a continuously repeated movement), there might be an optimal interburst interval (IBIopt): if bursts arrive faster than IBIopt, the signal would be compromised (because of the slow vesicle recovery time), and if bursts are separated by intervals longer than IBIopt, no extra gain will occur. Experimental evidence points to the importance of resonance in band oscillations (Formula IBI) for the cortico-cerebellar drive (Gandolfi et al., 2013; Chen et al., 2016) and for the hippocampus (Buzsáki, 2002). In these cases, the slower interaction between different pools of vesicles (Rizzoli and Betz, 2005) is likely to play a role in information transfer. Augmentation, a form of transient synaptic enhancement that can last for seconds, is also likely to play a role in these cases (Kandaswamy et al., 2010; Deng and Klyachko, 2011).

Table 1.

List of recurrent symbols

Acknowledgments

Acknowledgements: We thank Dr. Gilad Silberberg, Dr. Erik Fransén, Dr. Philippe Isope, and Martino Sindaci for helpful suggestions and feedback.

Footnotes

  • The authors declare no competing financial interests.

  • This work was supported in part by the EU Erasmus Mundus Joint Doctorate Program EUROSPIN, The International Graduate Academy (IGA) of the Freiburg Research Services (L.T.), and the Swedish Research Council (Research Project Grant, StratNeuro, India-Sweden collaboration grants; A.K.).

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Abbott LF, Regehr WG (2004) Synaptic computation. Nature 431:796–803. doi:10.1038/nature03010
  2. Arieli A, Sterkin A, Grinvald A, Aertsen A (1996) Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science 273:1868–1871. doi:10.1126/science.273.5283.1868
  3. Atluri PP, Regehr WG (1996) Determinants of the time course of facilitation at the granule cell to Purkinje cell synapse. J Neurosci 16:5661–5671. doi:10.1523/JNEUROSCI.16-18-05661.1996
  4. Babadi B, Sompolinsky H (2014) Sparseness and expansion in sensory representations. Neuron 83:1213–1226. doi:10.1016/j.neuron.2014.07.035
  5. Bao J, Reim K, Sakaba T (2010) Target-dependent feedforward inhibition mediated by short-term synaptic plasticity in the cerebellum. J Neurosci 30:8171–8179. doi:10.1523/JNEUROSCI.0276-10.2010
  6. Billings G, Piasini E, Lőrincz A, Nusser Z, Silver RA (2014) Network structure within the cerebellar input layer enables lossless sparse encoding. Neuron 83:960–974. doi:10.1016/j.neuron.2014.07.020
  7. Blackman AV, Abrahamsson T, Costa RP, Lalanne T, Sjöström PJ (2013) Target-cell-specific short-term plasticity in local circuits. Front Synaptic Neurosci 5:11–13. doi:10.3389/fnsyn.2013.00011
  8. Brown SP, Brenowitz SD, Regehr WG (2003) Brief presynaptic bursts evoke synapse-specific retrograde inhibition mediated by endogenous cannabinoids. Nat Neurosci 6:1048–1057. doi:10.1038/nn1126
  9. Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 8:183–208. doi:10.1023/a:1008925309027
  10. Buonomano DV (2000) Decoding temporal information: a model based on short-term synaptic plasticity. J Neurosci 20:1129–1141. doi:10.1523/JNEUROSCI.20-03-01129.2000
  11. Buzsáki G (2002) Theta oscillations in the hippocampus. Neuron 33:325–340. doi:10.1016/s0896-6273(02)00586-x
  12. Chadderton P, Margrie TW, Häusser M (2004) Integration of quanta in cerebellar granule cells during sensory processing. Nature 428:856–860. doi:10.1038/nature02442
  13. Chen H, Wang YJ, Yang L, Sui JF, Hu ZA, Hu B (2016) Theta synchronization between medial prefrontal cortex and cerebellum is associated with adaptive performance of associative learning behavior. Sci Rep 6:20960. doi:10.1038/srep20960
  14. Crowley JJ, Carter AG, Regehr WG (2007) Fast vesicle replenishment and rapid recovery from desensitization at a single synaptic release site. J Neurosci 27:5448–5460. doi:10.1523/JNEUROSCI.1186-07.2007
  15. Dean P, Porrill J, Ekerot CF, Jörntell H (2009) The cerebellar microcircuit as an adaptive filter: experimental and computational evidence. Nat Rev Neurosci 11:30–43. doi:10.1038/nrn2756
  16. De Kock CPJ, Sakmann B (2008) High frequency action potential bursts (100 Hz) in L2/3 and L5B thick tufted neurons in anaesthetized and awake rat primary somatosensory cortex. J Physiol 586:3353–3364. doi:10.1113/jphysiol.2008.155580
  17. de la Rocha J, Parga N (2005) Short-term synaptic depression causes a non-monotonic response to correlated stimuli. J Neurosci 25:8416–8431. doi:10.1523/JNEUROSCI.0631-05.2005
  18. Deng PY, Klyachko VA (2011) The diverse functions of short-term plasticity components in synaptic computations. Commun Integr Biol 4:543–548. doi:10.4161/cib.4.5.15870
  19. Dittman JS, Kreitzer AC, Regehr WG (2000) Interplay between facilitation, depression, and residual calcium at three presynaptic terminals. J Neurosci 20:1374–1385. doi:10.1523/JNEUROSCI.20-04-01374.2000
  20. Doussau F, Schmidt H, Dorgans K, Valera AM, Poulain B, Isope P (2017) Frequency-dependent mobilization of heterogeneous pools of synaptic vesicles shapes presynaptic plasticity. Elife 6:e28935. doi:10.7554/eLife.28935
  21. Du K, Wu YW, Lindroos R, Liu Y, Rózsa B, Katona G, Ding JB, Kotaleski JH (2017) Cell-type-specific inhibition of the dendritic plateau potential in striatal spiny projection neurons. Proc Natl Acad Sci USA 114:E7612–E7621. doi:10.1073/pnas.1704893114
  22. Ecker AS, Berens P, Keliris GA, Bethge M, Logothetis NK, Tolias AS (2010) Decorrelated neuronal firing in cortical microcircuits. Science 327:584–587. doi:10.1126/science.1179867
  23. Éltes T, Kirizs T, Nusser Z, Holderith N (2017) Target cell type-dependent differences in Ca(2+) channel function underlie distinct release probabilities at hippocampal glutamatergic terminals. J Neurosci 37:1910–1924. doi:10.1523/JNEUROSCI.2024-16.2017
  24. Fuhrmann G, Segev I, Markram H, Tsodyks M (2002) Coding of temporal information by activity-dependent synapses. J Neurophysiol 87:140–148. doi:10.1152/jn.00258.2001
  25. Fuhrmann G, Cowan A, Segev I, Tsodyks M, Stricker C (2004) Multiple mechanisms govern the dynamics of depression at neocortical synapses of young rats. J Physiol 557:415–438. doi:10.1113/jphysiol.2003.058107
  26. Gandolfi D, Lombardo P, Mapelli J, Solinas S, D’Angelo E (2013) θ-Frequency resonance at the cerebellum input stage improves spike timing on the millisecond time-scale. Front Neural Circuits 7:64. doi:10.3389/fncir.2013.00064
  27. Gidon A, Segev I (2012) Principles governing the operation of synaptic inhibition in dendrites. Neuron 75:330–341. doi:10.1016/j.neuron.2012.05.015
  28. Goldman MS, Maldonado P, Abbott LF (2002) Redundancy reduction and sustained firing with stochastic depressing synapses. J Neurosci 22:584–591. doi:10.1523/JNEUROSCI.22-02-00584.2002
  29. Grangeray-Vilmint A, Valera AM, Kumar A, Isope P (2018) Short-term plasticity combines with excitation/inhibition balance to expand cerebellar Purkinje cell dynamic range. J Neurosci 38:5153–5167. doi:10.1523/JNEUROSCI.3270-17.2018
  30. Greenberg DS, Houweling AR, Kerr JND (2008) Population imaging of ongoing neuronal activity in the visual cortex of awake rats. Nat Neurosci 11:749–751. doi:10.1038/nn.2140
  31. Grillo FW, Neves G, Walker A, Vizcay-Barrena G, Fleck RA, Branco T, Burrone J (2018) A distance-dependent distribution of presynaptic boutons tunes frequency-dependent dendritic integration. Neuron 99:275–282. doi:10.1016/j.neuron.2018.06.015
  32. Gütig R, Sompolinsky H (2006) The tempotron: a neuron that learns spike timing-based decisions. Nat Neurosci 9:420–428. doi:10.1038/nn1643
  33. Hawkins J, Ahmad S (2016) Why neurons have thousands of synapses, a theory of sequence memory in neocortex. Front Neural Circuits 10:23. doi:10.3389/fncir.2016.00023
  34. Isaacson J, Scanziani M (2011) How inhibition shapes cortical activity. Neuron 72:231–243. doi:10.1016/j.neuron.2011.09.027
  35. Ito M (2006) Cerebellar circuitry as a neuronal machine. Prog Neurobiol 78:272–303. doi:10.1016/j.pneurobio.2006.02.006
  36. Ito M (2014) Cerebellar microcircuitry. In: Reference module in biomedical sciences. San Diego: Elsevier.
  37. Izhikevich EM, Desai NS, Walcott EC, Hoppensteadt FC (2003) Bursts as a unit of neural information: selective communication via resonance. Trends Neurosci 26:161–167. doi:10.1016/S0166-2236(03)00034-1
  38. Jackman SL, Regehr WG (2017) The mechanisms and functions of synaptic facilitation. Neuron 94:447–464. doi:10.1016/j.neuron.2017.02.047
  39. Jackman SL, Turecek J, Belinsky JE, Regehr WG (2016) The calcium sensor synaptotagmin 7 is required for synaptic facilitation. Nature 529:88–91. doi:10.1038/nature16507
  40. Jiang X, Shen S, Cadwell CR, Berens P, Sinz F, Ecker AS, Patel S, Tolias AS (2015) Principles of connectivity among morphologically defined cell types in adult neocortex. Science 350:aac9462. doi:10.1126/science.aac9462
  41. Jörntell H, Ekerot CFF (2006) Properties of somatosensory synaptic integration in cerebellar granule cells in vivo. J Neurosci 26:11786–11797. doi:10.1523/JNEUROSCI.2939-06.2006
  42. Jörntell H, Bengtsson F, Schonewille M, De Zeeuw CI (2010) Cerebellar molecular layer interneurons - computational properties and roles in learning. Trends Neurosci 33:524–535. doi:10.1016/j.tins.2010.08.004
  43. Kaeser PS, Regehr WG (2017) The readily releasable pool of synaptic vesicles. Curr Opin Neurobiol 43:63–70. doi:10.1016/j.conb.2016.12.012
  44. Kandaswamy U, Deng PY, Stevens CF, Klyachko VA (2010) The role of presynaptic dynamics in processing of natural spike trains in hippocampal synapses. J Neurosci 30:15904–15914. doi:10.1523/JNEUROSCI.4050-10.2010
  45. Kenet T, Bibitchkov D, Tsodyks M, Grinvald A, Arieli A (2003) Spontaneously emerging cortical representations of visual attributes. Nature 425:954–956. doi:10.1038/nature02078
  46. Klyachko VA, Stevens CF (2006) Excitatory and feed-forward inhibitory hippocampal synapses work synergistically as an adaptive filter of natural spike trains. PLoS Biol 4:e207. doi:10.1371/journal.pbio.0040207
  47. Kuhn A, Aertsen A, Rotter S (2003) Higher-order statistics of input ensembles and the response of simple model neurons. Neural Comput 15:67–101. doi:10.1162/089976603321043702
  48. Kuhn A, Aertsen A, Rotter S (2004) Neuronal integration of synaptic input in the fluctuation-driven regime. J Neurosci 24:2345–2356. doi:10.1523/JNEUROSCI.3349-03.2004
  49. Kumar A, Schrader S, Aertsen A, Rotter S (2008) The high-conductance state of cortical networks. Neural Comput 20:1–43. doi:10.1162/neco.2008.20.1.1
  50. Larkum ME, Nevian T, Sandler M, Polsky A, Schiller J (2009) Synaptic integration in tuft dendrites of layer 5 pyramidal neurons: a new unifying principle. Science 325:756–760. doi:10.1126/science.1171958
  51. Larsen RS, Sjöström PJ (2015) Synapse-type-specific plasticity in local circuits. Curr Opin Neurobiol 35:127–135. doi:10.1016/j.conb.2015.08.001
  52. Latorre R, Torres JJ, Varona P (2016) Interplay between subthreshold oscillations and depressing synapses in single neurons. PLoS One 11:e0145830. doi:10.1371/journal.pone.0145830
  53. Lefort S, Petersen CC (2017) Layer-dependent short-term synaptic plasticity between excitatory neurons in the C2 barrel column of mouse primary somatosensory cortex. Cereb Cortex 27:3869–3878. doi:10.1093/cercor/bhx094
  54. Lisman J (1997) Bursts as a unit of neural information: making unreliable synapses reliable. Trends Neurosci 20:38–43. doi:10.1016/S0166-2236(96)10070-9
  55. Litwin-Kumar A, Harris KD, Axel R, Sompolinsky H, Abbott L (2017) Optimal degrees of synaptic connectivity. Neuron 93:1153–1164. doi:10.1016/j.neuron.2017.01.030
  56. Markram H, Wang Y, Tsodyks M (1998) Differential signaling via the same axon of neocortical pyramidal neurons. Proc Natl Acad Sci USA 95:5323–5328. doi:10.1073/pnas.95.9.5323
  57. Middleton JW, Yu N, Longtin A, Maler L (2011) Routing the flow of sensory signals using plastic responses to bursts and isolated spikes: experiment and theory. J Neurosci 31:2461–2473. doi:10.1523/JNEUROSCI.4672-10.2011
  58. Mittmann W, Koch U, Häusser M (2005) Feed-forward inhibition shapes the spike output of cerebellar Purkinje cells. J Physiol 563:369–378. doi:10.1113/jphysiol.2004.075028
  59. Naud R, Sprekeler H (2018) Sparse bursts optimize information transmission in a multiplexed neural code. Proc Natl Acad Sci USA 115:E6329–E6338. doi:10.1073/pnas.1720995115
  60. Olshausen BA, Field DJ (1996) Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381:607–609. doi:10.1038/381607a0
  61. Pelkey KA, McBain CJ (2007) Differential regulation at functionally divergent release sites along a common axon. Curr Opin Neurobiol 17:366–373. doi:10.1016/j.conb.2007.04.008
  62. Petersen CCH, Crochet S (2013) Synaptic computation and sensory processing in neocortical layer 2/3. Neuron 78:28–48. doi:10.1016/j.neuron.2013.03.020
  63. Polsky A, Mel B, Schiller J (2004) Computational subunits in thin dendrites of pyramidal cells. Nat Neurosci 7:621–627. doi:10.1038/nn1253
  64. Pouille F, Scanziani M (2001) Enforcement of temporal fidelity in pyramidal cells by somatic feed-forward inhibition. Science 293:1159–1163. doi:10.1126/science.1060342
  65. Reyes A, Lujan R, Rozov A, Burnashev N, Somogyi P, Sakmann B (1998) Target-cell-specific facilitation and depression in neocortical circuits. Nat Neurosci 1:279–285. doi:10.1038/1092
  66. Rizzoli SO, Betz WJ (2005) Synaptic vesicle pools. Nat Rev Neurosci 6:57–69. doi:10.1038/nrn1583
  67. Rochefort NL, Garaschuk O, Milos RI, Narushima M, Marandi N, Pichler B, Kovalchuk Y, Konnerth A (2009) Sparsification of neuronal activity in the visual cortex at eye-opening. Proc Natl Acad Sci USA 106:15049–15054. doi:10.1073/pnas.0907660106
  68. Rotman Z, Klyachko VA (2013) Role of synaptic dynamics and heterogeneity in neuronal learning of temporal code. J Neurophysiol 110:2275–2286. doi:10.1152/jn.00454.2013
  69. Rotman Z, Deng PY, Klyachko VA (2011) Short-term plasticity optimizes synaptic information transmission. J Neurosci 31:14800–14809. doi:10.1523/JNEUROSCI.3231-11.2011
  70. Rozov A, Burnashev N, Sakmann B, Neher E (2001) Transmitter release modulation by intracellular Ca2+ buffers in facilitating and depressing nerve terminals of pyramidal cells in layer 2/3 of the rat neocortex indicates a target cell-specific difference in presynaptic calcium dynamics. J Physiol 531:807–826. doi:10.1111/j.1469-7793.2001.0807h.x
  71. Schultz SR, Rolls ET (1999) Analysis of information transmission in the Schaffer collaterals. Hippocampus 9:582–598. doi:10.1002/(SICI)1098-1063(1999)9:5<582::AID-HIPO12>3.0.CO;2-P
  72. Scott P, Cowan AI, Stricker C (2012) Quantifying impacts of short-term plasticity on neuronal information transfer. Phys Rev E Stat Nonlin Soft Matter Phys 85:041921. doi:10.1103/PhysRevE.85.041921
  73. Shemesh OA, Tanese D, Zampini V, Linghu C, Piatkevich K, Ronzitti E, Papagiakoumou E, Boyden ES, Emiliani V (2017) Temporally precise single-cell-resolution optogenetics. Nat Neurosci 20:1796–1806. doi:10.1038/s41593-017-0018-8
  74. Stevens CF, Wang Y (1995) Facilitation and depression at single central synapses. Neuron 14:795–802. doi:10.1016/0896-6273(95)90223-6
  75. Stimberg M, Goodman DFM, Benichoux V, Brette R (2014) Equation-oriented specification of neural models for simulations. Front Neuroinform 8:6. doi:10.3389/fninf.2014.00006
  76. Sun HY, Lyons SA, Dobrunz LE (2005) Mechanisms of target-cell specific short-term plasticity at Schaffer collateral synapses onto interneurones versus pyramidal cells in juvenile rats. J Physiol 568:815–840. doi:10.1113/jphysiol.2005.093948
  77. Thanawala MS, Regehr WG (2013) Presynaptic calcium influx controls neurotransmitter release in part by regulating the effective size of the readily releasable pool. J Neurosci 33:4625–4633. doi:10.1523/JNEUROSCI.4031-12.2013
  78. Tsodyks MV, Markram H (1997) The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability. Proc Natl Acad Sci USA 94:719–723. doi:10.1073/pnas.94.2.719
  79. Tsodyks M, Pawelzik K, Markram H (1998) Neural networks with dynamic synapses. Neural Comput 10:821–835. doi:10.1162/089976698300017502
  80. Tyrrell T, Willshaw D (1992) Cerebellar cortex: its simulation and the relevance of Marr’s theory. Philos Trans R Soc Lond B Biol Sci 336:239–257.
  81. Valera AM, Doussau F, Poulain B, Barbour B, Isope P (2012) Adaptation of granule cell to Purkinje cell synapses to high-frequency transmission. J Neurosci 32:3267–3280. doi:10.1523/JNEUROSCI.3175-11.2012
  82. van Beugen BJ, Gao Z, Boele HJ, Hoebeek F, De Zeeuw CI (2013) High frequency burst firing of granule cells ensures transmission at the parallel fiber to Purkinje cell synapse at the cost of temporal coding. Front Neural Circuits 7:95. doi:10.3389/fncir.2013.00095
  83. Vetter P, Roth A, Häusser M (2001) Propagation of action potentials in dendrites depends on dendritic morphology. J Neurophysiol 85:926–937. doi:10.1152/jn.2001.85.2.926
  84. Weisenburger S, Vaziri A (2018) A guide to emerging technologies for large-scale and whole brain optical imaging of neuronal activity. Annu Rev Neurosci 41:431–452. doi:10.1146/annurev-neuro-072116-031458
  85. Wilson NR, Runyan CA, Wang FL, Sur M (2012) Division and subtraction by distinct cortical inhibitory networks in vivo. Nature 488:343–348. doi:10.1038/nature11347
  86. Xu Y, Zou P, Cohen AE (2017) Voltage imaging with genetically encoded indicators. Curr Opin Chem Biol 39:1–10. doi:10.1016/j.cbpa.2017.04.005
  87. Zucker RS, Regehr WG (2002) Short-term synaptic plasticity. Annu Rev Physiol 64:355–405. doi:10.1146/annurev.physiol.64.092501.114547

Synthesis

Reviewing Editor: Liset Menendez de la Prida, Instituto Cajal CSIC

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: Pablo Varona.

Your ms was evaluated by two reviewers and myself. We all agree the paper has some merits but it requires some additional work before a decision can be made.

1- We feel the ms structure should be optimized. In the current version, the figures do not follow the standard order, and fundamental aspects of the model are introduced later than others. The paper would benefit from better organization, especially at the beginning, so that the modeling approach is more accessible to the reader.

2- While you devoted a section to previous work, we feel you are missing some historical perspective on the subject. There are previous reports on distinct coding schemes emerging from facilitating and depressing synapses. You need to revise the bibliography more carefully.

3- Both reviewers agree that a limitation of your model is that it considers purely passive IF neurons that depend only on the synaptic conductances. In particular, your study fails to address how the sensitivity to input distribution (and gain) is affected by interactions between presynaptic STP mechanisms and postsynaptic filtering mechanisms. These may include, for instance, effects of nonlinearities such as those introduced by NMDA receptors or HCN channels, the effect of subthreshold membrane potential oscillations, or active somatodendritic conductances (see, for instance, Latorre et al., PLoS One 2016, https://doi.org/10.1371/journal.pone.0145830; de la Rocha and Parga, J Neurosci 2005, https://doi.org/10.1523/JNEUROSCI.0631-05.2005). While we feel you can acknowledge this in the section on Limitations, we all agree your work will benefit from some additional modeling/analysis to better extrapolate your conclusions. We leave to you the decision on what is most convenient to evaluate, but we would like to see this additional work implemented somehow.

4- There are some open questions you should further consider. We compile all of them here. While we agree that each of them could be the matter of new work, we think they are useful for you to consider in relation to point 3 above. Open questions: Does the model display the same results for less populated input channels? Why was the effect of asymmetry in the synaptic weights not studied in the simulations? How do changes in the expression of postsynaptic receptors and ion channels (such as NMDA receptors and HCN channels) interplay with STP-induced nonlinear synaptic dynamics? How does synapse location (i.e., somatodendritic) interact with the STP gain and the input distribution preference shaped by facilitation and depression?

5- Please report your model code in a public repository.

6- The visual abstract is missing.

Author Response

Dear Dr. de la Prida

Thank you for giving us an opportunity to revise the manuscript. The reviewers made many insightful remarks and provided us with crucial feedback, for which we thank them as well. We also apologize for the delay in revising the manuscript.

We have now revised the manuscript according to the feedback from the reviewers. The changes in the manuscript are marked in blue.

In the following we provide point-by-point replies to show how we have addressed all the concerns raised by the reviewers. Our replies are in normal font and the reviewers' concerns are in boldface.

Best regards

Authors

1- We feel the ms structure should be optimized. In the current version, the figures do not follow the standard order, and fundamental aspects of the model are introduced later than others. The paper would benefit from better organization, especially at the beginning, so that the modeling approach is more accessible to the reader.

We agree that in the Methods section the figures are not cited in numerical order, but we believe that this helps the reader locate where the results corresponding to a given method can be found. As such, we think that the Methods are also presented in a logical fashion. In the Results section all the figures are presented and discussed in numerical order. So unless the reviewers have a particular reordering in mind, we would like to keep the flow of the manuscript the same. With the further clarifications, we think the manuscript has become more accessible. One thing that might have made the manuscript a bit cumbersome is the number of symbols we have used. To further assist the reader, we have extended Table 1 with several key variables that were previously missing from the table of symbols.

2- While you devoted a section to previous work, we feel you are missing some historical perspective on the subject. There are previous reports on distinct coding schemes emerging by facilitating and depressing synapses. You need to revise bibliography more carefully.

We agree with the reviewers that some historical perspective on the computational properties of short-term synaptic dynamics was lacking. We have extended the subsection Relationship with previous works. [See lines 696-718]

3- Both reviewers agree that a limitation of your model is that it considers purely passive IF neurons that depend only on the synaptic conductances. In particular, your study fails to address how the sensitivity to input distribution (and gain) is affected by interactions between presynaptic STP mechanisms and postsynaptic filtering mechanisms. These may include, for instance, effects of nonlinearities such as those introduced by NMDA receptors or HCN channels, the effect of subthreshold membrane potential oscillations, or active somatodendritic conductances (see, for instance, Latorre et al., PLoS One 2016, https://doi.org/10.1371/journal.pone.0145830; de la Rocha and Parga, J Neurosci 2005, https://doi.org/10.1523/JNEUROSCI.0631-05.2005). While we feel you can acknowledge this in the section on Limitations, we all agree your work will benefit from some additional modeling/analysis to better extrapolate your conclusions. We leave to you the decision on what is most convenient to evaluate, but we would like to see this additional work implemented somehow.

We agree with the reviewers that we should have discussed how non-linearities in the postsynaptic neuron (e.g. HCN channels and NMDA receptors) may affect the distribution-dependent presynaptic gains. In the following we discuss the effects of synaptic and dendritic non-linearities and how we have modified the manuscript.

Dendritic/postsynaptic non-linearities: A study of the effects of dendritic/synaptic non-linearities is a highly non-trivial task, but we have investigated the effect of the non-linearities introduced by including NMDA receptors. To this end, we simulated a two-compartment neuron model (Figure 1 below); in a single-compartment model we cannot see the effects of the non-linear change due to the removal of the magnesium (Mg2+) block. Indeed, with NMDA receptors the neuron output as a function of synaptic input magnitude (the transfer function) shows a threshold-like non-linearity.

Our main claim is that, given STP, sparse and dense distributions of inputs will result in different effective inputs (QS1, QS2) to the postsynaptic side. When we consider the case in which dense input patterns elicit weaker responses than sparse input patterns (facilitating synapses), three scenarios can arise on the postsynaptic side.

1. The effective inputs for sparse and dense inputs are too weak to activate the NMDA receptors: This would imply that the two inputs will be transformed into spikes using the same linear transfer function of the neuron (blue curve in Figure 1). The two inputs corresponding to dense and sparse are marked as points ‘1’ and ‘2’ in the figure. In this case, the gain in output rate between sparse and dense inputs will be exclusively due to the presynaptic gain.

2. The effective inputs for sparse inputs are strong enough to activate NMDA receptors, while dense inputs are not: This would be caused by the extra accumulation of neurotransmitters at specific sites due to the bursts of the sparse distribution, which a dense distribution would not attain. In this scenario, dense inputs will still be transformed to spikes by the linear transfer function of the neuron (blue curve in Figure 1) while the sparse inputs will be transformed to spikes by the non-linear transfer function (red curve in Figure 1). The two inputs corresponding to dense and sparse are marked as points ‘1’ and ‘4’ in the figure. In this case, the gain in output rate between sparse and dense inputs will be further magnified by the NMDA nonlinearity.

3. The effective inputs for sparse inputs and dense inputs are both strong enough to activate NMDA receptors: This would imply that both inputs will be transformed to spikes using the non-linear transfer function (red curve in Figure 1). The two inputs corresponding to dense and sparse are marked as points ‘3’ and ‘4’ in the figure. Similar to the first scenario, sparse and dense inputs will elicit different numbers of output spikes solely due to the presynaptic gains.

These results show that non-linearities induced by NMDA receptors should not obscure the effects of the presynaptic distribution-dependent gains, and may only further enhance them.
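The scenarios above start from the claim that, with facilitating synapses, the same total number of spikes yields a larger effective input when concentrated on a few channels. A minimal sketch with a deterministic Tsodyks-Markram facilitating synapse illustrates this (the parameters U, tau_f, tau_d and the regular spike trains are illustrative choices, not the fitted values used in the manuscript):

```python
import math

def total_release(rate, n_spikes, U=0.05, tau_f=0.5, tau_d=0.1):
    """Total synaptic resources released by one facilitating
    Tsodyks-Markram synapse driven by a regular train at `rate` Hz."""
    isi = 1.0 / rate
    u, x, released = 0.0, 1.0, 0.0
    for k in range(n_spikes):
        if k > 0:  # exponential decay/recovery between spikes
            u *= math.exp(-isi / tau_f)
            x = 1.0 - (1.0 - x) * math.exp(-isi / tau_d)
        u += U * (1.0 - u)     # facilitation jump at each spike
        r = u * x              # fraction of resources released
        released += r
        x -= r
    return released

# 40 extra spikes in a 100 ms window, distributed sparsely or densely:
q_sparse = 4 * total_release(100.0, 10)   # 4 channels firing 10 spikes each
q_dense = 40 * total_release(10.0, 1)     # 40 channels firing once each
# With facilitation, q_sparse exceeds q_dense: the effective input
# depends on the distribution, not only on the total spike count.
```

Running this gives q_sparse roughly twice q_dense, the presynaptic gain that the three postsynaptic scenarios then transform into output spikes.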

The two-compartment simulations also give us insight into how other non-linearities might affect our main result. As long as the neuron transfer function is not flat over a range of inputs, i.e., the inputs do not drive the neuron into its saturation regime, we will see the effect of the presynaptic distribution-dependent gains; the magnitude of the effect will depend on the slope of the neuron transfer function.
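As a toy illustration of this slope argument (all numbers below are hypothetical), the three scenarios can be read off a threshold-linear transfer function with an extra, steeper branch above an assumed NMDA/Mg2+-unblock threshold:

```python
def rate_ampa(q):
    """Output rate (Hz) for effective input q: threshold-linear, AMPA only."""
    return 10.0 * max(q - 1.0, 0.0)

def rate_nmda(q):
    """Same neuron with a steeper branch once q crosses the
    illustrative Mg2+-unblock threshold at q = 4."""
    return rate_ampa(q) + 30.0 * max(q - 4.0, 0.0)

# Scenario 1: both effective inputs below the NMDA threshold
gain1 = rate_nmda(3.0) - rate_nmda(2.0)        # equals the AMPA-only gain
# Scenario 2: only the (stronger) sparse input crosses the threshold
gain2_ampa = rate_ampa(5.0) - rate_ampa(3.0)
gain2_nmda = rate_nmda(5.0) - rate_nmda(3.0)   # amplified by the unblock
# Scenario 3: both inputs on the steep branch
gain3 = rate_nmda(6.0) - rate_nmda(5.0)        # set by the presynaptic
                                               # difference times the slope
```

In scenario 1 the nonlinearity is irrelevant, in scenario 2 it amplifies the presynaptic gain, and in scenario 3 the gain is again determined presynaptically, scaled by the local slope.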

We have not included this figure (Figure 1) in the manuscript, because it would require a long description of the two-compartment model. Moreover, we chose parameters that seemed reasonable, i.e., we obtained reasonable firing rates and the model behaved identically with or without NMDA receptors for weak inputs. Finally, if we were to add this model to the manuscript, it would be necessary to properly analyze when and how much non-linearity is seen, which we think is beyond the scope of this manuscript.

Therefore, we have used a schematic of the transfer function shown in Figure 1 (see below) to make the arguments in the manuscript in a more general way. We hope that the reviewers will agree with us on this choice. Based on these results and insights, we have added a new subsection, From presynaptic gains to postsynaptic rate changes, to the Discussion to address the issue of non-linear synaptic integration in the postsynaptic neuron [See lines 552-581].

Figure 1: We simulated a two-compartment neuron model in which the two compartments were coupled by a resistance. Synapses were placed on the non-spiking (dendritic) compartment and were modelled as either AMPA type or combined AMPA and NMDA type; the NMDA and AMPA components had equal area. To mimic different effective inputs (QS1, QS2) we varied the strength of the input and measured the number of spikes elicited in a short time window of 100 ms. The neuron transfer function in the absence of NMDA synapses is shown in blue: pale blue lines show individual trials and the dark blue curve is the average of 30 independent trials. The transfer function with NMDA synapses is shown in red: pale red lines show individual trials and the dark red curve is the average of 30 independent trials. Beyond a certain input, the Mg2+ block was released and the dendritic compartment underwent a strong depolarization; therefore, the slope of the neuron transfer function was higher over a small range of inputs. The parameters of the model were tuned such that, for weak inputs, the neuron responded similarly in both cases (with and without NMDA).
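The Mg2+ unblock described in the caption is commonly modelled with the Jahr-Stevens voltage dependence; a sketch of that standard form (not necessarily the exact expression used in our simulations):

```python
import math

def mg_unblock(v_mv, mg_mm=1.0):
    """Fraction of the NMDA conductance available at membrane potential
    v_mv (mV) for extracellular [Mg2+] of mg_mm (mM); Jahr-Stevens form."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-0.062 * v_mv))

# Near rest the receptor is almost fully blocked; a strong dendritic
# depolarization releases the block, producing the steeper slope:
blocked = mg_unblock(-70.0)     # ~0.04
open_frac = mg_unblock(-20.0)   # ~0.5
```

The steep, voltage-dependent rise of this factor is what turns a modest increase in effective input into a disproportionate increase in dendritic depolarization.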

Effect of correlations: The study by de la Rocha and Parga (2005) investigated the effect of correlations on the neuron transfer function. However, that analysis was done for the steady state of the neurons, i.e., the inputs were stationary and, after some time, the PSPs reached their steady-state values, so that part does not relate to our work directly.

However, this made us think about the role of input correlations. Indeed, correlations in the input could potentially reduce the gain of sparse input patterns. If there are input correlations, more inputs can spike together in a dense input pattern; by contrast, in sparse input patterns, because only a few neurons carry spikes, only a few neurons can spike together. That is, for highly correlated dense inputs, fluctuations in the input current can be larger than those observed for sparse inputs with a similar amount of pairwise correlation. In this way, input correlations could amplify the effect of dense inputs, and such amplification would be smaller for sparse inputs. The size of this effect depends on the number of inputs and the amount of correlation (both pairwise and higher-order), and significant effects can only be seen for highly correlated input patterns. However, most experimental evidence suggests that stimulus-evoked activity often has rather low correlations (e.g., see Ecker et al. 2010). We have added new text to discuss these issues [See lines 741-757].
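This intuition follows from the textbook variance of a correlated sum: for n channels with per-channel count variance v and uniform pairwise count correlation c, Var = n·v·(1 + (n-1)·c), so correlations inflate dense-pattern fluctuations far more than sparse ones (the numbers below are illustrative):

```python
def pop_count_var(n, v, c):
    """Variance of the summed spike count of n channels with per-channel
    count variance v and uniform pairwise count correlation c."""
    return n * v * (1.0 + (n - 1) * c)

# Same total of 40 input spikes, Poisson-like channels (variance == mean):
var_sparse = pop_count_var(4, 10.0, 0.1)    # 4 channels, 10 spikes each
var_dense = pop_count_var(40, 1.0, 0.1)     # 40 channels, 1 spike each
# Without correlations the two distributions fluctuate identically:
var_sparse0 = pop_count_var(4, 10.0, 0.0)
var_dense0 = pop_count_var(40, 1.0, 0.0)
```

At c = 0 both patterns have the same summed variance (40), while at c = 0.1 the dense pattern's variance is almost four times the sparse pattern's, which is the amplification discussed above.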

Effect of subthreshold oscillations: The results of Latorre et al. (2006) will indeed be relevant for more detailed accounts of neurons operating in subthreshold oscillatory regimes. We believe that some of these results can be intuitively explained by a similar rationale of qualitatively different transfer functions, as developed in our arguments above and in the subsection From presynaptic gains to postsynaptic rate changes. In short, the combined effects of input structure and STP on the readout neuron's firing, operating through changes in subthreshold oscillations, can be understood as changes in the neuron's effective transfer function (see the added section From presynaptic gains to postsynaptic rate changes).

4- There are some open questions you should further consider. We compile all of them here. While we agree that each of them could be the subject of new work, we think they are useful for you to consider in relation to the previous point 3. Open questions:

4a: Does the model display the same results for less populated input channels? Why was the effect of asymmetry in the synaptic weights not studied in the simulations?

We thank the reviewers for raising these questions. We had missed discussing the role of population size. We have now revised the text [See lines 105-113 and 329-341] to elaborate on this. Briefly, our results hold as long as N is larger than N_opt.

We have shown that, given the STP parameters, there is an optimum firing rate r_opt at which signal-carrying neurons should operate to maximize gain (Figure 2F); this optimum rate is independent of the population size N. For a given incoming stimulus rate R_ext, it is optimal to distribute the spikes over N_opt = R_ext/r_opt input channels (e.g. Figure 2F). If N <= N_opt, then we cannot talk about sparseness; the argument for sparseness arises when N >> N_opt.

When N >> N_opt, if we change the population size N while keeping all other parameters constant, OD will monotonically decrease as a function of N. However, if we change N we should also change R_ext, otherwise the signal-to-noise ratio will change. The signal-to-noise ratio here is defined as R_ext/R_bas (since the activity is Poissonian). If we keep r_bas constant and change N, then R_bas = N × r_bas will also change, so we need to scale R_ext accordingly. That is, without scaling R_ext we cannot meaningfully compare low- and high-N scenarios. Therefore, to compare different N, we scale the total input rate R_ext with N, and with this choice OD is independent of N (Eq. 7).
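The scaling argument can be made concrete with a small sketch. All numbers below are hypothetical, and n_opt and snr are helper names introduced here only for illustration:

```python
def n_opt(R_ext, r_opt):
    """Optimal number of channels over which to spread R_ext: N_opt = R_ext / r_opt."""
    return R_ext / r_opt

def snr(R_ext, N, r_bas):
    """Signal-to-noise ratio R_ext / R_bas, with baseline rate R_bas = N * r_bas."""
    return R_ext / (N * r_bas)

# To compare populations of different size N at equal signal-to-noise ratio,
# the total input rate R_ext must be scaled with N.
r_bas, snr_target = 2.0, 5.0
for N in (100, 1000):
    R_ext = snr_target * N * r_bas   # scale R_ext with N to hold SNR fixed
    print(N, R_ext, n_opt(R_ext, r_opt=50.0), snr(R_ext, N, r_bas))
```

With this scaling the SNR stays at the target value for both population sizes, while N_opt grows in proportion to R_ext, which is why the comparison across N is only meaningful once R_ext is scaled.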

Asymmetry in synaptic weights: I meant heterogeneity in synaptic weights, not asymmetry. [This question was clarified with the editor; the clarification is appended in the following lines.] Weights of dynamic synapses do affect the output when interacting with the intrinsic dynamics of the postsynaptic neurons beyond gain changes (see references cited in my report). This comment is intended to assess the robustness of the phenomena in the more realistic case of heterogeneous weights. In the discussion you seem to suggest this kind of study for a future paper, but I find it easy to test in your current model. You decide whether you want to include this or not. In my opinion, it would improve the impact of your paper.

The parameter values for every input unit in our neuronal network simulations were randomly sampled from Gaussian distributions, including the synaptic weights [See lines 446-448].

However, it is possible that input synaptic weights follow different distributions (e.g. ones that emerge through supervised/unsupervised learning). Such distributions may affect the value of the gains, especially when weights and inputs are associated (stimulus-specific tuning). Indeed, if we are allowed to change each synapse, we can find weight distributions that magnify the effects of sparse input distributions, dense ones, or both. Our goal here was to consider a naive situation in which weights were not 'trained' for any specific task. A systematic study of a network with stimulus-specific tuning of synaptic weights raises several pertinent questions and should be investigated in a separate study. We have clarified this in the text [See lines 776-782].

4b: How do changes in the expression of postsynaptic receptors and ion channels (such as NMDA receptors and HCN channels) interplay with STP-induced nonlinear synaptic dynamics?

We have addressed this question above in response to question 3 (see figure 1).

4c: How does synapse location (i.e. somatodendritic) interact with STP gain and the input distribution preference shaped by facilitation and depression?

This is an interesting question. We can think of two possibilities in which the dendritic locations of synapses may neutralize the advantage of a sparse input distribution over a dense one (with facilitatory synapses). First, when synaptic strength (measured as PSP amplitude in the soma) decreases as a function of distance from the soma: it is possible that, in the sparse case, the bursting synapses are mostly located far from the soma, while for a dense input distribution at least some inputs will be close to the soma. Therefore, a sparse distribution may have the same or even a weaker effect on the soma than dense input patterns. However, even in this case, distant synapses may be rescued by dendritic nonlinearities (as shown in Figure 1 above), calcium spikes, etc. Second, the effect of synapses on a particular dendrite may be cancelled by strategically placed inhibitory synapses (e.g. Gidon and Segev 2012): in this scenario, since sparse input is carried by fewer synapses, it is possible that they get cancelled or weakened by such inhibition. The effect of this inhibition will indeed be weaker on dense inputs, as many more synapses carry the input information.

Thus, for sparse inputs, their location on the neuron may indeed be an important factor. A proper treatment of this question requires knowledge of the neuron morphology, the distribution of inhibition, dendritic nonlinearities, etc., so we cannot expand the current manuscript to address it. However, we have discussed this question in the Discussion, and it makes for a very interesting next study [See lines 758-775].

5- Please, report your model code in a public repository.

The code will be made available on GitHub, but we were asked to remove the link for double-blind peer review.

6- The visual abstract is missing

We have uploaded a figure with the visual abstract.

Short-Term Synaptic Plasticity Makes Neurons Sensitive to the Distribution of Presynaptic Population Firing Rates
Luiz Tauffer, Arvind Kumar
eNeuro 12 February 2021, 8 (2) ENEURO.0297-20.2021; DOI: 10.1523/ENEURO.0297-20.2021

Keywords: excitation/inhibition balance; neural code; short-term plasticity; sparse code; synaptic depression; synaptic facilitation