Abstract
Inhibitory neurons take on many forms and functions. How this diversity contributes to memory function is not completely known. Previous formal studies indicate that inhibition differentiated by local and global connectivity in associative memory networks functions to rescale the level of retrieval of excitatory assemblies. However, such studies lack biological detail: they do not distinguish between neuron types (excitatory and inhibitory), use unrealistic connection schemas, and employ nonsparse assemblies. In this study, we present a rate-based cortical model in which neurons are distinguished (as excitatory, local inhibitory, or global inhibitory), connected more realistically, and in which memory items correspond to sparse excitatory assemblies. We use this model to study how local-global inhibition balance can alter memory retrieval in associative memory structures, including naturalistic and artificial structures. Experimental studies have reported that inhibitory neurons and their subtypes uniquely respond to specific stimuli and can form sophisticated, joint excitatory-inhibitory assemblies. Our model suggests such joint assemblies, as well as a distribution and rebalancing of overall inhibition between two inhibitory subpopulations, one connected to excitatory assemblies locally and the other connected globally, can quadruple the range of retrieval across related memories. We identify a possible functional role for local-global inhibitory balance in the context of choice or preference among relationships: permitting and maintaining a broader range of memory items when local inhibition is dominant and, conversely, consolidating and strengthening a smaller range of memory items when global inhibition is dominant. This model, while still theoretical, therefore highlights a potentially biologically-plausible and behaviorally-useful function of inhibitory diversity in memory.
Significance Statement
Broadly, there are two types of neurons: excitatory and inhibitory. Inhibitory neurons are amazingly diverse compared with excitatory neurons. Why? Using a computational model with realistically-sized groups of excitatory neurons (representing memories) associated together in a network of memories, we highlight a potentially biologically-plausible and behaviorally-useful function of inhibitory neuron diversity in memory. Two findings in particular stand out: (1) inhibitory diversity can quadruple the range of memory retrieval; and (2) balancing the strength of different inhibitory neurons’ influence on excitatory neurons can dramatically change how the network of memories becomes activated, balancing and extracting both geometric and topological information about the network.
Introduction
The mechanisms by which our brains flexibly perform the complex tasks of learning and memory are not completely understood. Hebbian learning (Hebb, 1949), the relative increase in synaptic strength between neurons as a result of shared, causal activity, seems important. Hebb postulated that memories are formed in the brain by assemblies of highly-interconnected neurons (Hebb, 1949). Evidence for this “neuron assembly” hypothesis was found in hippocampus, where groups of neurons become synchronously activated in response to an animal’s spatial location, indicating a neural correspondence to and potential memory of the location (Harris et al., 2003). These memories are often mutually related: in physical or behavioral space for the case of navigation (Tolman, 1948), in reward space for the case of rewarded learning tasks (Dusek and Eichenbaum, 1997), in linguistic space for the case of language comprehension (Goldstein et al., 2021), and theoretically in any arbitrary semantic space for generalized graph-based reasoning (e.g., family trees; Whittington et al., 2020). How can the structure of these mutual relations be identified dynamically in cortical networks? Inhibitory mechanisms may hold an answer. Here, we computationally explore the possible role of inhibitory circuits in extracting graph-based relationships in the space of behaviorally relevant information.
The majority of experimental and computational work focusing on assemblies as representations of memory items has focused on the role of excitatory neurons. However, emerging evidence suggests inhibitory neurons play a nontrivial role in cortical networks. Throughout the brain, inhibitory neurons have classically been thought to coarsely keep excitation in check with a broad, nonspecific blanket of inhibition (Amit et al., 1994; Brunel, 2000). But more recent work has shown inhibitory neurons are tuned to specific external stimuli (Okun and Lampl, 2008; Xue et al., 2014), have specific associations with behavior (Dudok et al., 2021), have a large diversity of forms and functions within and across brain areas (Gouwens et al., 2020; Burns and Rajan, 2021), and form inhibitory assemblies (Zhang et al., 2017), often jointly with excitatory subnetworks (Otsuka and Kawaguchi, 2009; Koolschijn et al., 2019). A hallmark of many neuropathologies is inhibitory dysfunction (Amieva et al., 2004; Baroncelli et al., 2011; Burns and Rajan, 2022; Yao et al., 2022). If specific inhibitory dysfunction alone is sufficient for explaining these pathologies, then we could expect subtle inhibitory changes to cause dramatic changes in global function in complex tasks like those involving learning and memory. A greater understanding of the neurophysiological mechanisms underlying these changes may help us target treatments for such disorders and provide fundamental insight into the computational roles of inhibitory neurons in such circuits.
Previous modeling work in a formal model with binary neurons (Haga and Fukai, 2019) has shown how anti-Hebbian learning (i.e., involving inhibitory synapses) in an associative memory model was able to extend the span of association between mutually-related memory items organized in a simple ring structure, compared with a regular Hebbian learning rule (i.e., not involving inhibitory synapses). Later work extended this formal model to arbitrary graph structures (Haga and Fukai, 2021). These results suggest inhibition may play a nontrivial role in relational memory systems. However, these models lacked biological features, most prominently a distinction between excitatory and inhibitory neuron populations, thereby breaking Dale’s Law. Dale’s Law (sometimes also called Dale’s Principle or Dale’s Hypothesis), first appearing in Eccles et al. (1954), is the view that a neuron’s terminals do not transmit multiple, differently-acting chemical or electrical signals to postsynaptic targets; e.g., an excitatory neuron has the exclusive effect of exciting postsynaptic targets and never inhibits them. Another limitation of prior work is that the excitatory assemblies were not nearly as sparse as those seen in biology and the neurons took on binary states. Nevertheless, the results indicate global functional changes can result from subtle inhibitory changes (Ferguson et al., 2013; Rich et al., 2017). This study proposes a more realistic connection scheme of distinct excitatory and inhibitory neurons to embed sparse cell assemblies which represent memory items mutually linked through arbitrary graph structures. Formulated in this way, the model allows us to confirm the previous suggestion that a balance between local inhibition and global inhibition on cell assemblies determines the scale and extent of memories retrieved in a neural network. We show this for various naturalistic and artificial associative memory structures, including a potentially behaviorally-useful function: maintaining a choice distribution at a juncture or decision point in physical or memory space. We find a balance between local and global inhibition allows control over the range of recall within arbitrary graph structures, as well as graph clustering effects which may be useful in navigation and memory tasks.
Materials and Methods
Model
In order to embed memories in the network, we generate binary patterns as vectors of length
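For concreteness, pattern generation of this kind can be sketched as follows. This is a minimal illustration only: the symbols N, P, and a, and the function name generate_patterns, are placeholders rather than the exact values and code used in our implementation.

```julia
using Random

# Minimal sketch: generate P sparse binary memory patterns over N excitatory
# neurons, each pattern having round(a*N) active neurons. N, P, and a here are
# illustrative placeholders, not the values used in the model.
function generate_patterns(N::Int, P::Int, a::Float64; rng=Random.default_rng())
    patterns = falses(P, N)
    k = round(Int, a * N)                  # active neurons per pattern
    for mu in 1:P
        active = randperm(rng, N)[1:k]     # k distinct neurons chosen at random
        patterns[mu, active] .= true
    end
    return patterns
end

patterns = generate_patterns(1000, 10, 0.05)   # e.g., 10 patterns at 5% sparsity
```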
Extended Data Figure 1-1
An example of spiking rates of all units in the stimulated pattern, its first neighbor (pattern adjacent to the stimulated pattern), the neighbor’s neighbor (second neighbor), the neighbor’s neighbor’s neighbor (third neighbor), and all other patterns. This is from a simulation using
Two populations of inhibitory neurons are also modelled, one with global connectivity (uniform connection probabilities as indicated in Fig. 1) of size
Neurons are modelled as proportions of their maximum firing rates, based on an established method (Amit et al., 1994); note that the following completely describes our implementation, including modifications, so readers need not be familiar with that prior work. At each timestep, currents are calculated for each excitatory neuron
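As a rough illustration of this style of rate-based update (not our exact equations; the gain function g, the time constant tau, and the timestep dt below are illustrative placeholders), a single step might be sketched as:

```julia
# Illustrative only: a generic rate-model update in the style of Amit et al.
# (1994). The gain function g, time constant tau, and timestep dt are
# placeholders; the exact currents and dynamics are those defined in the text.
g(I) = clamp(I, 0.0, 1.0)        # rates are proportions of the maximum rate

function step!(r::Vector{Float64}, W::Matrix{Float64}, I_ext::Vector{Float64};
               tau=0.01, dt=0.001)
    I = W * r .+ I_ext                     # total input current to each neuron
    @. r += (dt / tau) * (g(I) - r)        # relax each rate toward its gain
    return r
end
```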
Extended Data Figure 2-1
Each panel shows the spiking rates over 5 s for 100 random excitatory neurons drawn from separate, unique random seed simulations where M was a 1D-chain with
The excitatory-to-excitatory weights are considered balanced by setting
Associative memory structures
Analysis
We noted changes to c systematically changed the number of memory patterns in M which became activated during the simulated memory retrieval phase (from
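A minimal sketch of how such a count of activated patterns could be computed from the final firing rates is given below; the activation threshold (thresh) and the function name are illustrative placeholders rather than the exact criterion used in our analysis.

```julia
# Hypothetical sketch: count the memory patterns "activated" at the end of a
# simulation, defined here (for illustration only) as patterns whose mean
# assembly firing rate exceeds a placeholder threshold.
function count_activated(rates::Vector{Float64}, patterns::BitMatrix; thresh=0.1)
    n_active = 0
    for mu in 1:size(patterns, 1)
        members = findall(patterns[mu, :])               # neurons in assembly mu
        mean_rate = sum(rates[members]) / length(members)
        if mean_rate > thresh
            n_active += 1
        end
    end
    return n_active
end
```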
The covariance between two memories μ and ν is:
The correlation between two memories μ and ν is:
We then calculate the mean correlation between two memories at the shortest path distance d away from each other by:
Finally, we quantify the range of retrieval D using the following algorithm (also sketched in code below):
1. Calculate
2. D is the first value of d for which the next Y memory patterns have
We use
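As a hedged sketch of this procedure (assuming, for illustration, that the mean correlations at each shortest-path distance have already been computed, and using placeholder values for Y and a correlation cutoff theta):

```julia
# Hedged sketch of the retrieval-range calculation. corr_at_d[d] is assumed to
# hold the mean correlation between memories at shortest-path distance d from
# the stimulated memory; Y and the cutoff theta are placeholders for the
# criterion summarized in step 2 above.
function retrieval_range(corr_at_d::Vector{Float64}; Y::Int=3, theta::Float64=0.1)
    for d in 1:(length(corr_at_d) - Y)
        if all(corr_at_d[(d + 1):(d + Y)] .< theta)   # the next Y distances stay low
            return d
        end
    end
    return length(corr_at_d)    # otherwise, retrieval spans all measured distances
end
```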
We observed how the activity of the excitatory population spread through the associative memory structure for different values of c and across time. We chose to visualize this spread in three classical graphs, Zachary’s karate club graph (Zachary, 1977), the
In order to quantify the similarity between the activity of the network and graph theoretic properties in the associative memory structures, we compared the approximate steady-state activity to the community detection and classification of vertices using the label propagation algorithm (Raghavan et al., 2007). We denote two vertices, e.g., μ and ν, being members of the same community according to this algorithm with
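For readers unfamiliar with the label propagation algorithm (Raghavan et al., 2007), a simplified, self-contained sketch is given below; it operates on a boolean adjacency matrix and illustrates the algorithm rather than reproducing the exact implementation used in our analysis.

```julia
using Random

# Simplified sketch of label propagation (Raghavan et al., 2007) on a boolean
# adjacency matrix A (A[i, j] is true if vertices i and j are linked). Every
# vertex starts with a unique label and repeatedly adopts the most common label
# among its neighbors (ties broken at random) until no label changes. This is
# an illustration of the algorithm, not the exact implementation we used.
function label_propagation(A::AbstractMatrix{Bool}; maxiter=100, rng=Random.default_rng())
    n = size(A, 1)
    labels = collect(1:n)
    for _ in 1:maxiter
        changed = false
        for v in randperm(rng, n)                 # visit vertices in random order
            nbrs = findall(A[v, :])
            isempty(nbrs) && continue
            counts = Dict{Int,Int}()
            for u in nbrs
                counts[labels[u]] = get(counts, labels[u], 0) + 1
            end
            best = maximum(values(counts))
            candidates = [l for (l, c) in counts if c == best]
            newlabel = rand(rng, candidates)
            changed = changed || (newlabel != labels[v])
            labels[v] = newlabel
        end
        changed || break
    end
    return labels     # vertices sharing a label belong to the same community
end
```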
The clustering index is a measure of how our model’s activity corresponds to topological features of M. To test how the activities correspond to geometric distance for arbitrary graphs, we define a local area around a vertex in M. This local area is the closed d-neighborhood of a vertex, i.e., the set of the vertex v and all vertices within distance d as measured by their shortest path to v. For a choice of d and v, we construct a local area function
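The closed d-neighborhood can be computed directly from this definition by breadth-first search; the sketch below restates the definition in code (the function name and the adjacency-matrix representation are ours, not necessarily those of our implementation).

```julia
# Sketch of the closed d-neighborhood of a vertex v: v itself plus every vertex
# whose shortest-path distance to v is at most d, found by breadth-first search
# on a boolean adjacency matrix A.
function closed_neighborhood(A::AbstractMatrix{Bool}, v::Int, d::Int)
    dist = fill(-1, size(A, 1))        # -1 marks vertices not yet reached
    dist[v] = 0
    queue = [v]
    while !isempty(queue)
        u = popfirst!(queue)
        dist[u] == d && continue       # do not expand beyond distance d
        for w in findall(A[u, :])
            if dist[w] == -1
                dist[w] = dist[u] + 1
                push!(queue, w)
            end
        end
    end
    return findall(dist .>= 0)         # all vertices within distance d, including v
end
```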
Code availability
The model was implemented using Julia 1.5.2. A copy of the code is publicly available at https://github.com/tfburns/BurnsHagaFukai (also see Extended Data 1).
Extended Data 1
Code. Download Extended Data 1, ZIP file.
Results
The general structure of the model is illustrated in Figure 1A. Memories are modelled as strongly-interconnected assemblies of excitatory neurons. Each memory item’s assembly is also interconnected with the assemblies of the memory items to which it is linked in the associative memory structure, M. The associative memory structure can take on any form. Inhibition to the network is provided by two equally-sized populations: (1) a global inhibitory population, with an excitatory-to-global-inhibitory connection probability of 0.1 and a global-inhibitory-to-excitatory connection probability of 0.5; and (2) local inhibitory populations (one for each excitatory assembly), which are fully connected to individual excitatory assemblies in the associative memory structure. The balance between these two activities was governed by the parameter c:
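As a rough illustration of this connectivity (a sketch only: the boolean-mask representation, the global population size N_Iglob, and the function name are placeholders, and the weighting of local versus global inhibition by c is omitted), the connection masks could be constructed as follows:

```julia
using Random

# Hedged sketch of the connectivity described above. The connection
# probabilities 0.1 (excitatory -> global inhibitory) and 0.5 (global
# inhibitory -> excitatory) are from the text; everything else here is
# illustrative rather than the exact implementation.
function build_connectivity(patterns::BitMatrix, N_Iglob::Int;
                            rng=Random.default_rng())
    P, N_E = size(patterns)
    EtoG = rand(rng, N_Iglob, N_E) .< 0.1    # excitatory -> global inhibitory
    GtoE = rand(rng, N_E, N_Iglob) .< 0.5    # global inhibitory -> excitatory
    # One local inhibitory population per assembly, fully connected (in both
    # directions) with that assembly's excitatory members.
    local_members = [findall(patterns[mu, :]) for mu in 1:P]
    return EtoG, GtoE, local_members
end
```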
Extended range of retrieval
Setting M as a 1D chain with
Spread of excitation in associative memory structures
We also tested more sophisticated associative memory structures, namely: the
In most cases (karate club graph,
Correlations between the vertices (assemblies), computed from the underlying neurons (i.e., the neurons belonging to those assemblies; see Eq. 11), showed different resolutions of clustering. For most graphs, there was a trend of more, smaller clusters at
We also observed how excitation spreads across the associative memory structure over time, after activation of vertices of interest, in the Tutte and multiroom graphs. For the Tutte graph we chose the central vertex, which branches off into three separate “rooms,” and for the multiroom graph we chose a location within one of the rooms that also led through a “doorway” to a neighboring room. We chose these vertices since they represent points of behavioral interest and ecological importance in animals; they are points at which an animal may make a significant choice about which room to enter, explore, or exploit. In the Tutte graph, for
The multiroom graph showed a similar trend in broadening and maintaining a larger range of retrieval with increases in c. However, possibly because of the size of the network and because the chosen vertex was located within one of the rooms (thus biasing toward activation of that room’s other vertices, unlike the central vertex in the Tutte graph), observation of the effect required an increase in c. For illustration of the effect, we chose
The clustering and geometric indices, Q and R, for each graph, at different values of c are given in Table 1. Since R depends on a choice of distance d in the local area, we calculated R for all values of d (from 1 up to the diameter) and report the largest value of R (and its d) in Table 1 and for all values in Table 2. In general, the larger the value of Q, the more agreement between the community structure measured by label propagation and by the correlations of vertex activities in the final network states (by our model). High values of R indicate the final activity states are similar to geometric distance. We analyze the activity based on all neurons and a subset of neurons which reach a firing rate of at least 0.02 of the maximum firing rate during the simulation. We call this subset the selective neurons.
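Identifying the selective neurons is straightforward; a minimal sketch (assuming a rates matrix with one timestep per row and one neuron per column, expressed as proportions of the maximum firing rate) is:

```julia
# Sketch of identifying the "selective" neurons: those whose firing rate reaches
# at least 0.02 of the maximum rate at some point during the simulation.
function selective_neurons(rates::Matrix{Float64}; thresh=0.02)
    peak = vec(maximum(rates, dims=1))     # each neuron's peak rate over time
    return findall(peak .>= thresh)        # indices of the selective neurons
end
```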
Clustering indices (Q) using only the selective neurons are generally larger than for all neurons, indicating these more-active neurons generally contribute positively to clustering. This is especially noticeable when the network settles into a state where assemblies take on a wide range of values (e.g., in the
Geometric indices (R) were generally greater than the clustering indices, indicating a greater emphasis on the geometry rather than the topology in these memory graphs at these values of c. Nonetheless, some topological information is captured, and almost all of the geometric indices were of a comparable order to the clustering indices. As we increase c, the distance (the d at which R is largest) is increased or unchanged (i.e., not decreased). However, whether the clustering index increases with c depends on the structure of the graph. Importantly, either the clustering index or the distance becomes larger when c becomes larger, implying the approximate steady states can reflect the broader structure of the graph as the ratio of local inhibition to global inhibition is increased.
Discussion
Previous modeling studies have conflated excitatory and inhibitory neuron identities and learning rules (Griniasty et al., 1993; Haga and Fukai, 2019) or ignored inhibitory neurons’ functional participation (Amit et al., 1994) in associative memory structure retrieval. This work uniquely disentangles excitatory and inhibitory neurons and uses sparse excitatory assemblies to demonstrate the potential functional role of global-local inhibitory balance in a more biologically-plausible setting. In the simplistic case of a 1D memory chain (as might correspond to discrete memories in a sequence of events through time), shifting inhibition to a locally-dominant state quadrupled the range of activation or retrieval. In the case of more sophisticated memory structures, globally-dominant inhibition tended to emphasize finer-scale partitions of the memory structure and consolidated strong local associations, whereas locally-dominant inhibition tended to capture broader-scale partitions and allowed excitation to extend across a larger range of the memory structure.
It is important to emphasize that these results are generated in the context of a memory structure which relies on the correlation of semantically close units, implying that memory retrieval in such a structure is functionally optimized when nearby units are correlated. Biological evidence for such correlations was first prominently demonstrated in monkey anterior ventral temporal cortex by Miyashita (1988), who showed that the activity of units selective for arbitrary complex visual patterns was correlated by the stimulus-stimulus associations in the temporal ordering of the stimulus presentations. However, this kind of correlated, associative memory structure is not only found in the visual system; it is also prominent and widely studied in hippocampus. Within a spatial environment, place cells representing nearby place fields show correlated activity (Monsalve-Mercado and Roudi, 2020) and can maintain correlations in the same environment over different tasks (Hampson et al., 1996), mostly because of overlapping place fields. When the environment changes, however, these correlations are typically inconsistent with one another (Alme et al., 2014), suggesting contextual cues alter or switch between different memory structures.
In our study, we selectively stimulate single memory patterns and see memory retrieval of the pattern and surrounding associating patterns in ∼100–200 ms of simulation time. Is this biologically realistic? Single neurons in human medial temporal lobe which learn to selectively encode associative episodic memories within just a few trials can be recruited in subsequent activations within ∼500–700 ms (Ison et al., 2015); maximal pattern completion of cortical ensembles in visual cortex after subensemble optogenetic stimulation typically takes on the order of 2–4 s (Carrillo-Reid et al., 2019); biasing memory-guided spatial behavior by selectively stimulating clusters of place cells for ∼1 s has been shown to improve performance in reward-attaining behavior (Robinson et al., 2020). Therefore, the speed of memory retrieval in our model is likely on a faster timescale than should be generally expected in actual biological systems, although this could also be because of simplifications in the model, or disanalogous stimulation methods or assembly/memory structures.
Recent experimental evidence in mice (Rolotti et al., 2022) shows that when optogenetic techniques are used to induce place field formation in CA1 neurons, feedback inhibition limits the number of neurons recruited, thereby limiting the size of the neural assembly activated by the induced place field. However, using disinhibition, this effect can be nullified and the neural assemblies can be made larger. Rolotti et al. (2022) showed such disinhibition can improve performance on a head-fixed spatial goal-oriented learning task via overrepresentation of the rewarded locations in the task. Another functional benefit of such disinhibition may be in rapid place field formation, as is seen in the behavioral timescale synaptic plasticity mechanism (Bittner et al., 2017; Zhao et al., 2020; Milstein et al., 2021). Our modeling results suggest similar effects may be possible without the use of disinhibition but rather simply via a rebalancing of the relative activity or strength between different inhibitory populations.
In Rolotti et al. (2022), the feedback inhibition comes from the hippocampus, but they do not explore distinctions between different inhibitory populations therein. There are many different types of inhibitory neurons, each with distinct connectivity, dynamics, and morphology (Pelkey et al., 2017; Burns and Rajan, 2021; Campagnola et al., 2022). In our model, we speculate that the “local” inhibitory neurons are parvalbumin-expressing while “global” inhibitory neurons are somatostatin-expressing, given there exists some evidence for such connectivity profiles in visual cortex (Adesnik et al., 2012; Litwin-Kumar et al., 2016). However, it is possible different areas may recruit and use inhibitory neurons and their circuits differently, for example to develop different scales of representations in hierarchical planning (Brunec and Momennejad, 2022). It could also be the case that there are even more functional groups of inhibitory neurons involved in these phenomena (e.g., see later in this discussion regarding a potential additional “global” inhibitory group for decreasing the correlation between neighboring memory patterns).
Inhibitory neurons also contribute to the initiation, maintenance, and modulation of rhythmic oscillations in local electrical activity (Traub et al., 1998; Fries, 2005; Bartos et al., 2007; Buzsáki and Wang, 2012; Aton et al., 2013). One example is the pyramidal interneuron network gamma (PING) mechanism (Whittington et al., 1995), which can generate rhythmic dynamics that ultimately result in the synchronous firing of excitatory neurons. Classically, the PING mechanism is thought of as involving just one group of excitatory neurons and one group of inhibitory neurons, and this is generally sufficient for the generation of PING dynamics. However, Rich et al. (2017) showed that by expanding the diversity of inhibitory neurons into two groups with different recurrent disinhibitory connectivity, one weakly connected and one strongly connected, it is possible to achieve richer and more robust PING dynamics. Although we do not study disinhibition in our model and our techniques differ substantially from those of Rich et al. (2017), we follow a similar theme (albeit with a different mechanism and a different phenomenon): by considering a greater diversity of inhibitory neurons acting simultaneously in a network, we are able to generate more interesting and novel dynamics. How the roles of inhibitory diversity in different mechanisms or phenomena (e.g., the PING mechanism and the multiscale and extended retrieval of associative memory structures we demonstrate here) interact with one another is an open question for both computational and experimental neuroscientists.
Theoretically, in the absence of noise and with a sufficiently large network, an associative memory structure with N neurons can expect to accurately store (and retrieve via pattern completion) a maximum of
Conceivably, it is possible to functionally enter into the range of spatial correlation
The effects generated by these modifications, such as stable extension of the range of retrieval, appear limited because of increases in noise in strongly local-inhibition-dominant states. This is likely because of a finite field effect and may indicate a necessary minimum size of local excitatory-inhibitory assemblies for such states. For example, stability in the case of
Alternatively, it is possible this mechanism requires a hybrid sparse-dense coding schema, as has long been suggested to operate in hippocampus (Barnes et al., 1990), cerebellum (Marr, 1969), and more recently in sensory areas (Laurent, 2002; Sakata and Harris, 2009). In such a schema, sparse assemblies report their activity to densely-connected assemblies which broadcast information to other sparse assemblies. In our model, we could consider the global inhibitory population as a densely-connected assembly which broadcasts the overall level of excitation in the network to all local, sparse assemblies; unlike in classic dense-sparse schemas, however, this broadcasting population is inhibitory rather than excitatory. Through this interpretation, a reduction in the relative strength of global inhibition (as in the unstable region of
Extension of the range of retrieval was not the only apparent function of the inhibitory mechanism in sophisticated associative memory structures; the mechanism also permitted multiscale segmentation of the associative memory structure. Local-inhibition-dominant states typically activated coarser topological segments of the graphs, whereas global-inhibition-dominant states consolidated activity in more densely associated clusters, highlighting finer topological features. These results were similar to those found in a more abstract model of binary neurons (Haga and Fukai, 2021), except that the current model was unable to totally eliminate the spread of excitation, as the more abstract model (Haga and Fukai, 2021) could. This is because the current model does not include direct potentiation of excitatory weights, but rather modulation of local-global inhibitory balance. In this model, where association is embedded ubiquitously, sustaining highly-specific activity within a narrow range of memory items, or even a single memory item, requires very strong self-excitation within an assembly and stronger overall inhibition with
An intriguing aspect of this inhibitory mechanism is its ability to dramatically affect not just the range of retrieval but also which parts of the memory structure become dominant given an initial stimulation. For example, it appears that in global-inhibition-dominant states, global inhibition drives a “winner-takes-all” dynamic (Grossberg, 1973) whereby only the globally strongest memories remain active. In local-inhibition-dominant states, this “winner-takes-all” dynamic appears to dissipate and permit a general extension of retrieval, or a more egalitarian sharing of the winners. However, this extension can also be mediated, and a “winner-takes-all” dynamic can appear at the peripheries of the retrieval range, with different peripheries competing against each other (Fig. 5B). This may be considered a global state transition from “winner-takes-all” to “winner-shares-all” (Fukai and Tanaka, 1997). We therefore hypothesize that an inhibitory mechanism like the one we have described may be used to aid in the learning or retrieval of graph-based cognitive tasks in cortical networks (Whittington et al., 2020; Wang et al., 2021). Cognitive control or exploitation of this mechanism might also occur in concert with, for example, gamma oscillations, which are strongly tied to inhibitory activity (Buzsáki and Wang, 2012). This may be especially useful when faced with competing behavioral choices where maintaining the distribution of these choices is meaningful, such as in perceptual decision-making (Najafi et al., 2020). Indeed, Roach et al. (2022) report that tuned local inhibition can alter the attractor dynamics of perceptual decision-making networks to balance between the speed and accuracy of perceptual decisions.
Probing such circuits and behaviors may provide insights on the potential influence such inhibitory mechanisms have on neuropathologies, especially those associated with cognitive defects (Amieva et al., 2004; Baroncelli et al., 2011). For instance, the coordination and interaction of inhibitory-driven oscillatory activity in hippocampus and prefrontal cortex is known to play a role in spatial memory tasks (Jones and Wilson, 2005) and spatial decision-making (Tavares and Tort, 2022). This coordination and interaction can be disrupted in epilepsy, leading to decreased behavioral flexibility (Kleen et al., 2011). Perhaps the associated behavioral deficits are in part because of maladaptations or dysfunction of local-global inhibitory balance or other subtle disruptions to networks involving multiple inhibitory neuron types.
While this study has made some advances over prior models (Griniasty et al., 1993; Amit et al., 1994; Haga and Fukai, 2019) in terms of improving the “biological realism” of the model, many simplifications and unrealistic features remain. We treat neurons as single points of intracellular space, i.e., without dendrites or specific morphology; besides being unrealistic in itself, this prevents different classes of inhibitory neurons from preferentially synapsing onto different regions of other neurons, a property known to vary widely across inhibitory neurons (Otsuka and Kawaguchi, 2009; Burns and Rajan, 2021; Dudok et al., 2021). We also assume that joint excitatory-inhibitory assemblies are completely connected, a simplification that does not match biology (Otsuka and Kawaguchi, 2009; Koolschijn et al., 2019; Rolotti et al., 2022). Because of these and other limitations, whether and how actual biological networks achieve the same functional benefits of inhibitory neuron diversity described here remains unknown. Experimentalists may therefore wish to design studies to test the presence or absence of such computational benefits in biological networks with diverse inhibitory populations.
In our model, making a seemingly subtle change to the network structure by introducing some of the complexities and diversities of inhibitory neurons had a profound impact on retrieval. We have shown how this phenomenon mainly persists in a sparse, associative memory structure which obeys Dale’s Law and has more biologically-plausible connections than prior models. We have also shown and discussed some of the potential functional roles of this mechanism in graph-based cognitive tasks and discussed how this mechanism may contribute to a type of sparse-dense coding schema.
Footnotes
The authors declare no competing financial interests.
This work was partially supported by the Japan Society for the Promotion of Science KAKENHI Grants 21K15611 (to T.H.) and 18H05213 (to T.F.).
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.