## Abstract

Epilepsies are characterized by paroxysmal electrophysiological events and seizures, which can propagate across the brain. One of the main unsolved questions in epilepsy is how epileptic activity can invade normal tissue and thus propagate across the brain. To investigate this question, we consider three computational models at the neural network scale to study the underlying dynamics of seizure propagation, understand which specific features play a role, and relate them to clinical or experimental observations. Our characterization considers both the internal connectivity structure between neurons and the properties of the input. We show that a paroxysmal input is sometimes controlled by the network, while in other instances it can lead the network to itself produce paroxysmal activity, which will then propagate further to efferent networks. We further show how the details of the network architecture are essential in determining this switch to a seizure-like regime. We investigated the nature of the instability involved and, in particular, found a central role for the inhibitory connectivity. We propose a probabilistic approach to the propagative/non-propagative scenarios, which may serve as a guide to control seizures using appropriate stimuli.

- computational modeling
- epilepsy
- inhibitory population
- seizure control
- seizure propagation
- spiking network model

## Significance Statement

Our computational study investigates how epileptic activity invades normal brain tissue and shows the specific role of the inhibitory population, in both its dynamical and structural aspects, using three different neuronal network models. We find that both structural and dynamic aspects are important in determining whether seizure activity invades the network. We show the existence of a specific time window favorable to the reversal of seizure propagation by appropriate stimuli.

## Introduction

Epilepsy is one of the most common neurologic disorders (Beghi, 2020) and can take numerous forms. It is associated with the presence of paroxysmal electrophysiological events and seizures, usually recorded in humans using the electroencephalogram (EEG). However, EEG recordings do not allow us to probe the activity of single neurons within the network. More recently, recordings conducted with microelectrode arrays have made it possible to obtain spiking information from on the order of a hundred neurons in human epileptic patients (Peyrache et al., 2012; Dehghani et al., 2016; Paulk et al., 2022).

Such microelectrode recordings showed that neuronal activity during seizures does not necessarily correspond to synchronized spikes over the whole neuron population, as previously modeled (Soltesz and Staley, 2008), including models at different scales from cellular to whole-brain levels (Depannemaecker et al., 2021, 2022). In fact, it turns out that the dynamics of neural networks during seizures are more complex (Jiruska et al., 2013) and still poorly understood. In particular, it is not known how the paroxysmal activity of the seizure propagates, driving other networks into seizure activity.

Here, we investigate this problem using computational models. As a starting point, we consider examples of seizures where the inhibitory network is strongly recruited, while the firing of excitatory cells is diminished. Figure 1 shows three seizures from a patient that were recorded, using Utah arrays, before resection surgery in a case of intractable epilepsy. From these intracranial recordings, 92 neurons were identified and isolated, and were classified into the following two groups: fast-spiking (FS) neurons and regular-spiking (RS) neurons, based on spike shape, autocorrelograms, firing rates, and cell-to-cell interactions (Peyrache et al., 2012). Remarkably, direct cell-to-cell functional interactions were observed, which demonstrated that some of the FS cells are inhibitory while some of the RS cells are excitatory (for details, see Peyrache et al., 2012). The three seizures shown in Figure 1 were taken from the analysis of Dehghani et al. (2016; see this article for details) and are shown with the firing rate of each population of cells. During the seizure, we can observe a plateau of high activity of FS cells and a strongly reduced activity of RS cells. This phenomenon of unbalanced dynamics between RS and FS cells was only seen during seizures in this patient (Dehghani et al., 2016). It shows that, in these three examples, the seizure was manifested by a strong "control" by the inhibitory FS cells, which almost silenced the excitatory RS cells. Interestingly, a very different conclusion would have been reached if no discrimination between RS and FS cells had been performed, which underlines the importance of discriminating RS and FS cells for a correct interpretation of the dynamics during seizures. Based on such measurements, we built computational models with a larger number of cells to consider network effects that are not directly accessible with the recordings. We were interested in whether and how seizure activity propagates, and in the determinants of such propagation.

The region of the brain where the seizure starts is called the seizure focus, although in certain patients it can be distributed over several foci (Nadler and Spencer, 2014). From the focus, the seizure then spreads to other regions of the brain. When another region is reached, it can in turn be driven into seizure activity, in which case the seizure propagates. It can also control the incoming activity (as in Fig. 1), in which case the seizure would remain confined to a more restricted brain region.

To gain an understanding of the dynamics underlying these two scenarios, we study the response of networks using three different neuron models (adaptive exponential integrate and fire (AdEx), conductance-based adaptive exponential integrate-and-fire (CAdEx), and Hodgkin–Huxley (HH) models), interacting through conductance-based synapses, to an incoming paroxysmal (seizure-like) perturbation. We observe two types of behavior, which we represent in Figure 2: one where the incoming perturbation successfully increases the activity of the excitatory population, thus making it stronger than the input; and the other where only the inhibitory population strongly increases its activity, thus controlling the perturbation. In the first case, where the excitatory population discharges very strongly, it is therefore likely to transmit, or even amplify, the perturbation transmitted to the next cortical column. We have therefore called this situation the propagative scenario. In the opposite case, where the firing rate of the excitatory population remains much lower than the perturbation, the seizure-like event will not spread to the neighboring region, we therefore call this situation the nonpropagative scenario. We then propose a more precise approach, based on the AdEx network, that mixes structural and dynamic ingredients to unravel key aspects of the mechanisms at play. Focusing on the different input connectivity profiles for each node in the network, we are able to build separate groups of neurons that display significantly different dynamics with respect to the perturbation. Finally, we study the possibility of a proactive approach, based on the application of an extra stimulus with the aim of reversing the propagative behavior, thus controlling the spread of the seizure.

## Materials and Methods

### Computational models

For this study, we use mathematical models of electrophysiological activity based on ordinary differential equations, describing the dynamics of the membrane potentials of the neurons and their interactions.

Each neuron model in the network is described by Equations 1 and 2, the AdEx model (Brette and Gerstner, 2005; Naud et al., 2008), as follows:

$$C\frac{dV}{dt} = -g_L(V - E_L) + g_L \Delta_T \exp\left(\frac{V - V_T}{\Delta_T}\right) - w + I_{syn} \tag{1}$$

$$\tau_w\frac{dw}{dt} = a(V - E_L) - w \tag{2}$$

where *C* is the membrane capacitance, *g*_{L} the leak conductance, *E*_{L} the leak reversal potential, *V*_{T} the effective threshold, *Δ*_{T} the spike sharpness, and *w* the adaptation current, with time constant *τ*_{w} and subthreshold coupling *a*. *I*_{syn} is the synaptic input. When the membrane potential crosses a threshold *V*_{d}, a spike is emitted, and the system is reset as follows:

$$V \rightarrow V_r, \qquad w \rightarrow w + b$$

where *V*_{r} is the reset potential and *b* the spike-triggered adaptation increment.

Parameters used for the excitatory (RS) and inhibitory (FS) populations are respectively as follows:
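As a concrete illustration, the AdEx dynamics above can be integrated with a simple forward-Euler scheme. The sketch below is self-contained; the parameter values are illustrative RS-type values within the biophysical range of Naud et al. (2008), not necessarily those used in this study.

```python
import numpy as np

def simulate_adex(I_syn, dt=0.1, t_max=500.0):
    """Forward-Euler integration of a single AdEx neuron (Brette & Gerstner, 2005).

    Parameter values below are illustrative RS-type values (pF, nS, mV, ms, pA),
    not necessarily those of the study. I_syn is a constant input current in pA.
    """
    C, g_L, E_L = 200.0, 10.0, -65.0      # capacitance, leak conductance, leak reversal
    V_T, Delta_T = -50.0, 2.0             # effective threshold, spike sharpness
    a, b, tau_w = 2.0, 60.0, 500.0        # adaptation coupling, increment, time constant
    V_r, V_d = -65.0, -30.0               # reset and spike-detection potentials
    t_ref = 5.0                           # refractory period (ms)

    V, w = E_L, 0.0
    ref_until = -1.0
    spikes = []
    for i in range(int(t_max / dt)):
        t = i * dt
        if t < ref_until:                 # hold during refractoriness (sketch only)
            continue
        dV = (-g_L * (V - E_L)
              + g_L * Delta_T * np.exp((V - V_T) / Delta_T)
              - w + I_syn) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_d:                      # spike: reset membrane, increment adaptation
            spikes.append(t)
            V = V_r
            w += b
            ref_until = t + t_ref
    return spikes

# A constant 500 pA drive makes the neuron fire with spike-frequency adaptation.
spikes = simulate_adex(I_syn=500.0)
```

The spike-triggered increment *b* and the slow decay of *w* reproduce the adaptation that, at the network level, shapes the RS population response.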

To compare some of the results obtained with the AdEx model, we used two other models of neuronal activity. First, the CAdEx model, which solves some of the limitations of the AdEx model (Górski et al., 2021). The equations read:

$$C\frac{dV}{dt} = -g_L(V - E_L) + g_L \Delta_T \exp\left(\frac{V - V_T}{\Delta_T}\right) - g_A(V - E_A) + I_{syn}$$

$$\tau_A\frac{dg_A}{dt} = \frac{\bar{g}_A}{1 + \exp\left(\frac{V_A - V}{\Delta_A}\right)} - g_A$$

where *g*_{A} is the adaptation conductance, with reversal potential *E*_{A}, maximal conductance *ḡ*_{A}, half-activation voltage *V*_{A}, slope *Δ*_{A}, and time constant *τ*_{A}. When the membrane potential crosses a threshold, a spike is emitted, and the system is reset as in:

$$V \rightarrow V_r, \qquad g_A \rightarrow g_A + \delta g_A$$

Parameters used for inhibitory (FS) populations are as follows:

For all types of neuron models, the parameters have been chosen in the biophysical range (Hodgkin and Huxley, 1952; Hille, 1992; Naud et al., 2008; Górski et al., 2021) so as to keep the basal asynchronous irregular (AI) activity (Brunel, 2000) within a range of firing rates consistent with experimental observations (El Boustani et al., 2007; Destexhe, 2009; Zerlaut et al., 2018).

The network is built according to a sparse, random (Erdős–Rényi type) architecture, with a fixed probability of connection between each pair of neurons set to 5%. We consider a network model of 10,000 neurons, built according to specific properties of the cortex. This network is made of an inhibitory (FS) and an excitatory (RS) population, respectively representing 20% and 80% of the total size of the system, as previously done in the study by Carlu et al. (2020). The communication between neurons occurs through conductance-based synapses. The synaptic current is described by the following:

$$I_{syn} = g_E(E_E - V) + g_I(E_I - V)$$

where *E*_{E} = 0 mV is the reversal potential of excitatory synapses and *E*_{I} = –80 mV is the reversal potential of inhibitory synapses. *g*_{E} and *g*_{I} are, respectively, the excitatory and inhibitory conductances, which increase by a quantity *Q*_{E} = 1.5 nS and *Q*_{I} = 5 nS for each incoming spike. The increment of conductance is followed by an exponential decrease according to the following:

$$\tau_{syn}\frac{dg_{E,I}}{dt} = -g_{E,I}$$
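As a sketch of this synapse rule, the conductance trace of a neuron receiving a few presynaptic spikes can be computed as follows. The decay time constant τ_syn = 5 ms is an assumed, typical value (it is not stated in this section).

```python
import numpy as np

# Sketch of the conductance-based synapse rule: each presynaptic spike
# increments the conductance by Q, which then decays exponentially.
# tau_syn = 5 ms is an assumed, typical value (not stated in this section).
def conductance_trace(spike_times, Q, tau_syn=5.0, dt=0.1, t_max=100.0):
    n = int(t_max / dt)
    g = np.zeros(n)
    spike_idx = {int(round(s / dt)) for s in spike_times}
    for i in range(1, n):
        g[i] = g[i - 1] * (1.0 - dt / tau_syn)   # Euler step of dg/dt = -g / tau_syn
        if i in spike_idx:
            g[i] += Q                             # quantal increment per incoming spike
    return g

g_E = conductance_trace([10.0, 12.0, 14.0], Q=1.5)   # excitatory, Q_E = 1.5 nS
g_I = conductance_trace([10.0], Q=5.0)               # inhibitory, Q_I = 5 nS
# The synaptic current onto a neuron at potential V is then
# I_syn = g_E * (E_E - V) + g_I * (E_I - V), with E_E = 0 mV and E_I = -80 mV.
```

Note how closely spaced excitatory spikes summate: the peak of `g_E` exceeds a single quantum *Q*_{E} because earlier increments have not yet decayed.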

The network thus formed receives an external input, based on the activity of a third population (excitatory) of the same size as the excitatory population. Each of its neurons is connected to the rest of the network according to the same rule as mentioned earlier (fixed probability, 5% for each connection). This external population produces spikes with a Poissonian distribution at a given tunable rate. The external perturbation that mimics the incoming seizure occurs through the augmentation of this firing rate.

The shape of the latter is described by the following:

$$\nu_{ext}(t) = \beta + \alpha\left[\left(1 - e^{-(t - T_1)/\tau_{on}}\right)H(t - T_1)\,H(T_2 - t) + e^{-(t - T_2)/\tau_{off}}\,H(t - T_2)\right]$$

where *H* is the Heaviside function and *β* = 6 Hz is the basal constant input. This function takes the general form of a high plateau, where *T*_{1} and *T*_{2} are the times when the perturbation begins and ends, respectively, and *α* defines its maximal height. *τ*_{on} and *τ*_{off} are, respectively, the time constants of the exponential rise and decay of the perturbation.
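A minimal sketch of such a perturbation profile, with a Poissonian spike-generation step, could look as follows. The closed-form expression and the parameter values (*T*_{1}, *T*_{2}, *α*) are illustrative reconstructions from the description above, not the exact values of the study.

```python
import numpy as np

# Sketch of the perturbation profile described in the text: a plateau of
# height alpha on top of a basal drive beta, with exponential rise (tau_on)
# from T1 and exponential decay (tau_off) after T2. The closed form and
# parameter values are assumptions reconstructed from the description.
def drive_rate(t, beta=6.0, alpha=80.0, T1=1000.0, T2=3000.0,
               tau_on=70.0, tau_off=70.0):
    t = np.asarray(t, dtype=float)
    rise = (1.0 - np.exp(-(t - T1) / tau_on)) * (t >= T1) * (t < T2)
    decay = np.exp(-(t - T2) / tau_off) * (t >= T2)
    return beta + alpha * (rise + decay)

t = np.arange(0.0, 5000.0, 1.0)            # time axis in ms
nu = drive_rate(t)                         # rate in Hz at each time step

# Each external neuron then emits Poisson spikes at rate nu(t); a simple
# Bernoulli-per-bin approximation (1 ms bins) is enough for illustration:
rng = np.random.default_rng(0)
ext_spikes = rng.random(t.size) < nu * 1e-3
```

The basal rate *β* = 6 Hz is taken from the text; here the drive rises toward *β* + *α* during the plateau and relaxes back to *β* afterward.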

For all three types of networks, it is possible to vary the connectivity (i.e., the realization of the random connections) and the realization of the Poisson drive (i.e., the generator of the Poisson noise). It is also possible to fix the random seed either for the connectivity or for the Poisson drive (or both) to analyze specific conditions.

We create network connectivities by allowing a 5% chance of connection between any two neurons, which leads to an average connection probability of 5%, but with some variation: because of the random nature of the network, some neurons receive more afferent connections from inhibitory neurons than others, which makes them more strongly inhibited, and the same holds for excitatory connections.
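This variability can be checked directly: with 2,000 inhibitory neurons (the 20% of a 10,000-neuron network described above) and a 5% connection probability, the inhibitory in-degree of each neuron follows a binomial distribution. A minimal sketch:

```python
import numpy as np

# Sketch: with 2,000 inhibitory neurons and p = 0.05, the inhibitory in-degree
# of each neuron is Binomial(2000, 0.05): mean ~100 connections, with
# neuron-to-neuron variation (SD = sqrt(2000 * 0.05 * 0.95) ~ 9.7).
rng = np.random.default_rng(42)
n_inh, p = 2000, 0.05
in_deg = rng.binomial(n_inh, p, size=10000)   # one draw per postsynaptic neuron

print(in_deg.mean())   # close to 100
print(in_deg.std())    # close to 9.7
```

This spread in inhibitory in-degree is precisely the structural heterogeneity exploited later for the coarse graining.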

### Coarse graining and continuous analysis

To analyze in detail which mechanisms are at play in the network during a seizure-like event, we resort to a combination of two methods. The first is a so-called "structural coarse graining": we gather neuron models into *n* groups according to their inhibitory in-degree (the number of inhibitory connections they receive, as introduced before). The second is to study their time evolution through statistics of their membrane potential (mean and alignment) over these groups. In other words, at each integration time step, we obtain *n* values of mean membrane potential, one for each group, as well as *n* values of the Kuramoto order parameter (measuring alignment within groups).

To obtain the Kuramoto order parameter, we first transform the single-neuron membrane potentials into phase variables by applying a linear mapping onto an arc of the circle, and then compute

$$R(t) = \frac{1}{N}\left|\sum_{j=1}^{N} e^{i\theta_j(t)}\right|$$

where *θ*_{j}(*t*) is the phase assigned to neuron *j*. *R* = 1 implies full alignment, while *R* = 0 implies no alignment whatsoever.

Let us mention one caveat here. The membrane potentials are not mapped onto the full circle, to avoid an artificial periodicity of the obtained angles: having *V* = *V*_{r} is not the same as having *V* = *V*_{d}. One may thus ask why such a measure is used instead of the usual measures of dispersion, such as the standard deviation (SD). We use the Kuramoto order parameter because it gives a naturally normalized quantity, thus allowing a direct comparison of what is happening at each time step.
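For concreteness, a minimal sketch of this modified Kuramoto order parameter is given below; the arc width (3/4 of the circle) and the voltage bounds of the linear mapping are assumptions made for illustration.

```python
import numpy as np

# Sketch of the modified Kuramoto order parameter described in the text:
# membrane potentials are mapped linearly onto a partial arc (not the full
# circle, to avoid artificial periodicity), and R measures their alignment.
# The arc width and the voltage bounds below are illustrative assumptions.
def kuramoto_R(V, V_min=-80.0, V_max=-30.0, arc=1.5 * np.pi):
    theta = arc * (np.asarray(V, dtype=float) - V_min) / (V_max - V_min)
    return np.abs(np.exp(1j * theta).mean())

identical = np.full(100, -65.0)               # all neurons at the same potential
spread = np.linspace(-80.0, -30.0, 100)       # potentials spread over the range

print(kuramoto_R(identical))   # ~1.0: full alignment
print(kuramoto_R(spread))      # well below 1: dispersed potentials
```

Because the arc is shorter than the full circle, the two ends of the voltage range map to distinct phases, so *R* behaves as a bounded, normalized dispersion measure rather than a truly periodic one.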

### Data availability

The code/software described in the paper is freely available online at https://github.com/HumanBrainProject/PropNoProp.

## Results

We start by showing how, in networks of various neuron models, a paroxysmal external stimulation can trigger a seizure, depending on various parameters. We show how the dynamics differ from model to model and which features they share. Then, we propose a structural analysis based on the mean firing rates of individual neuron models to guide a particular coarse graining, which we use as a filter to observe the dynamics and gain further understanding, from both qualitative and quantitative perspectives. Finally, we show how this study can guide a proactive approach to reduce the chances of seizure propagation.

### Propagative and non-propagative scenarios

Throughout this study, we assume that the networks depicted in the previous section represent a small cortical area receiving connections from an epileptic focus. Specifically, the arrival of the seizure is modeled by a sudden rise in the firing rate of the external (afferent) Poisson input from the region where the seizure originates. In other words, we are not concerned with how seizures *originate* (epileptogenesis), but with how they can propagate. Therefore, we will frame our analysis in terms of two main scenarios: propagative (i.e., the network develops an excitatory firing rate greater than the input), making it able to propagate the seizure to efferent regions; and non-propagative, where the excitatory firing rate is lower than the input, thus attenuating the incoming signal. As described in the Materials and Methods section, the perturbation starts with an exponential growth followed by a plateau and ends with an exponential decrease, going back to the basal level (Fig. 3, blue curves). We show in Figure 3 the response of the various networks to this type of perturbation.

Here we can distinguish between two classes of macroscopic differences between propagative and non-propagative scenarios.

In the first class (for AdEx and CAdEx), the difference is binary: the network either features a very strong increase in the firing rate of both the inhibitory and excitatory populations, or the sharp increase in firing rate concerns only the inhibitory population, thus strongly limiting the activity of the excitatory population (and consequently preventing the seizure from spreading to the next region). From this perspective, the propagative scenario can be understood as a loss of balance between excitatory and inhibitory firing rates, which the network struggles to recover once the excitatory population has reached very high firing rates. Interestingly, these two scenarios can occur for the same global shape of the perturbation, changing only the noise and network realizations. It must be noted that the 200 Hz maximum frequencies measured here result from the temporal binning of the global spiking dynamics, taken as *T* = 5 ms, which corresponds to the refractory time of the single neurons in AdEx and CAdEx. Upon choosing a shorter binning (e.g., *T* = 1 ms), higher-frequency peaks are observed, going up to 800 Hz, thus hinting at overall faster dynamics.
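The binning effect mentioned here can be illustrated on a synthetic spike train: the same spikes yield different peak "population rates" depending on the bin width. The spike train below is purely illustrative (one unit firing at its 5 ms refractory limit).

```python
import numpy as np

# Illustration of the binning effect discussed above: the same spike train
# yields different peak rates depending on the bin width. The spike train is
# synthetic (regular firing at the 5 ms refractory limit), for illustration only.
spike_times = np.arange(0.0, 1000.0, 5.0)        # ms; regular 200 Hz firing

def peak_rate(spike_times, bin_ms, t_max=1000.0):
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.max() / (bin_ms * 1e-3)        # peak rate in spikes/s

print(peak_rate(spike_times, 5.0))   # ~200 Hz with 5 ms bins
print(peak_rate(spike_times, 1.0))   # ~1000 Hz with 1 ms bins
```

With 5 ms bins, at most one spike per refractory period fits in each bin, capping the measured rate at 200 Hz; finer bins resolve sharper transients, consistent with the higher peaks reported for *T* = 1 ms.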

In the second class (HH), there is a rather continuous difference between propagative and non-propagative scenarios, as can be seen in Figure 4*c*, depending on the amplitude of the perturbation.

### Influence of the perturbation shape

To study how the shape of the perturbation affects the network's response, in Figure 4 we screened different time constants of the exponential growth and different maximum amplitudes of the plateau, with 100 seeds (for both network and noise realizations, for each pair of values), and probed, in the case of AdEx (Fig. 4*a*) and CAdEx (Fig. 4*b*), the number of realizations that yield propagative behavior, as these models show binary scenarios. In the HH case, the perspective is a little different: we show two panels displaying the means and SDs, over realizations, of the difference in firing rate between the excitatory and Poisson populations, averaged over the plateau (Fig. 4*c*,*d*, respectively). As can be expected, for all networks (AdEx, CAdEx, and HH), the amplitude of the perturbation plays a determinant role in the type of scenario we eventually find (propagative or not), although in opposite directions and of different natures. Indeed, for both AdEx and CAdEx, increasing the amplitude increases the chance of having a propagative scenario for a fixed slope, in a binary fashion, while in the case of HH the contrary is observed, and in a continuous fashion. Also, we observe a slight coupling effect between slope and amplitude: for higher amplitudes, the propagation range extends to slower perturbations. On the contrary, in the HH network, the slope does not seem to play any major role, hinting at a much less dynamic effect: the difference manifests itself as local equilibria of the networks under consideration, reached regardless of the time course. Moreover, the SDs, in addition to showing no clear dependence on either amplitude or slope, are very small compared with the means, providing evidence that noise also does not play any significant role here. These observations highlight once again the deep differences between the two types of networks and their respective phenomenology.

Interestingly, in the case of AdEx and CAdEx, there exists a limited, bistable region, at amplitudes of ∼80 Hz, where the perturbation may or may not propagate in the network, depending on the noise realization. Thus, in this region, the scenario does not trivially depend on the amplitude and time constants of the perturbation, which makes it a perfect test bed to study more deeply the internal mechanisms at play; it will thus be the main focus of the remainder of this article.

### Influence of structural aspects on the dynamics

In the following, we turn our attention to the bistable region of AdEx networks, where the two scenarios are present, and investigate the source of the divergence. There are two main differences between the simulations under consideration: the realization of the network connectivity and the realization of the external input, as both rely on random number generators. We have therefore successively fixed each of them and observed that the two behaviors were still present; moreover, the global scenarios were indistinguishable from those shown so far. First, this allows us to fix the network connectivity (which will become determinant in this part) without losing the richness of the phenomenology. Second, this tells us that what shapes the distinction between the two phenomena is more complex than a simple question of structure, or of realization of the input. Another perspective is then needed to explore the internal dynamics of the network in both scenarios. As the models under consideration have a very large number of dimensions, as well as quite intricate structures, brute-force analytical approaches are simply not conceivable.

Let us then take a step back and investigate the relationship between the firing rate of each neuron and its number of afferent connections for the three kinds of input: excitatory, inhibitory, and Poissonian (external). Figure 5*a* shows the average firing rate of each neuron, in a non-propagative scenario, as a function of its in-degree for each type of input.

Note here that averaging over simulations for the sake of robustness might be a delicate matter, as we might lose constitutive differences in the process. As we are dealing with highly variable situations, we have to make compromises between generalizability and relevance. Therefore, we start with a single realization to then guide larger and more systematic investigations.

Interestingly, we see a much stronger influence coming from the inhibitory in-degree than from the Poissonian and excitatory ones. Counterintuitively, it even seems that the excitatory in-degree has almost no effect at all on the total measured firing rates. Indeed, from the point of view of Pearson's correlation, the inhibitory in-degree is much more (anti-)correlated with the firing rate than the excitatory in-degree (almost no correlation) or the Poissonian in-degree (little correlation). Note that we observe the same structure for propagative scenarios (results not shown). Based on these results, we can analyze whether the most salient in-degrees (inhibitory and Poissonian) have any influence on the difference between propagative and non-propagative scenarios (Fig. 5*b*). Here, we see that the global dependence of the average single-neuron firing rates on inhibitory and Poissonian connectivity does not qualitatively change between propagative and non-propagative scenarios; the differences between the two scenarios remain essentially quantitative.

To further understand the effect of the inhibitory connectivity, we chose two points from Figure 4*a*: one known to be always non-propagative, with *τ* = 70 ms and *α* = 70 Hz, and the other always propagative, with *τ* = 70 ms and *α* = 95 Hz. In both situations, we varied the probability of connection from the inhibitory population between *p* = 0.04 and *p* = 0.06, as shown in Figure 6. Note that the incoming inhibitory connectivity shifts the boundary between propagative and non-propagative behaviors: it is an important influencing factor in relation to the shape of the perturbation, and in particular its amplitude.

It is worth pointing out that these results establish a clear link between structure and dynamics, but structure is by itself not a sufficient criterion to understand the underlying mechanisms. We therefore focus on the temporal evolution of the propagating activity.

Beforehand, we take a step back and probe whether the differences in the individual mean firing rates are associated with specific roles in the dynamics. To do so, we start by classifying, for the AdEx network, the indices of neurons in the raster plot according to the total number of spikes they emit during the whole simulation. We chose for this purpose a representative propagative scenario. The sequence of propagation of the perturbation then appears visually in Figure 7*a*. We observe, in the case of propagation, a fast cascade (consistent with experimental observations; Neumann et al., 2017): some neurons are quickly driven into a sequence at the onset of the seizure. In addition, there is no perfect synchronization of the action potentials of all neurons. This is an interesting result, coherent with experimental observations on epilepsy (Jiruska et al., 2013).

Second, we examine the same situation, but sorting neuron indices as a function of the number of inhibitory inputs they receive, as shown in Figure 7*b*, since this is the most influential structural feature we observed in our model. Here too, the cascade phenomenon is clearly visible, indicating that the inhibitory input connectivity has a central influence on the dynamics at play during the perturbation in the propagative scenario.

Figure 8 shows the same pictures for CAdEx and HH networks. We see here that the behaviors of the CAdEx network are very similar to AdEx: sorting by firing rate or by inhibitory in-degree gives very similar structures, and we can also distinguish here the cascading effect at the onset of the perturbation, following the indices. HH networks show quite a different phenomenology. First, the two sortings do not show the same structures, which hints at a more subtle mapping between inhibitory in-degree and long-term single-neuron dynamics. In the firing-rate sorting, we can still distinguish three blocks of distinct activity, and thus of populations, corresponding to the three key periods of the simulation: before stimulation, at the onset, and during the stimulation. Interestingly, it seems that before and during the stimulation different populations of neuron models are distinctly mobilized. While before the stimulation the central neurons (with respect to their indices) are active, a double cascade contaminates the rest of them (toward higher and lower indices) at the onset, ending in a general surge of activity. This must be contrasted with the in-degree sorting panel, where the cascade is more unidirectional, as the main activity slides from low-connectivity indices (less connected) to the higher ones, until all neurons fire. This emphasizes the importance of the perspective chosen to analyze complex behavior: none of these perspectives alone completely explains the intricate interplay among structure, long-term, and short-term dynamics.

Altogether, these results show the relevance of adopting a perspective based on the inhibitory in-degree: it gives an operational method to rank single neurons, and this ranking is clearly associated with specific dynamic features, hence allowing us to study the role of the internal organization of the network before and during the paroxysmal event. As the cascade phenomenon is similarly visible in all types of networks, in the next section we focus on the AdEx network. We push this analysis further by comparing propagative and non-propagative scenarios, making use of the continuous measures introduced in Materials and Methods.

### Continuous measures on subgroups of neurons

Focusing on the AdEx network, we first consider groups of neurons defined by their inhibitory in-degree. Note that these groups are somewhat artificial, as they are only statistical reflections of topological aspects of the network (i.e., there is no reason to think a priori that all neurons having *n* inhibitory inputs would have more privileged links among themselves than with those having a different number). However, they allow in principle a variable degree of categorization, based on the sampling of the inhibitory in-degree distribution, which eventually leads to different levels of (nonlinear) coarse graining (although we will consider only one such sampling here). Second, we switch our analysis to continuous variables, which allow a finer and more systematic analysis of the dynamics, as they do not depend on spike times. Indeed, although spike timings are the most accessible collective measures in real-life systems, which makes them the most natural candidates for "transferable" studies, we want here to take advantage of the virtues of mathematical modeling to probe the underlying mechanisms in these simulations, to then be able to draw conclusions on more accessible observables. We focus here uniquely on membrane potentials, as they are the closest proxy of the firing dynamics in the network, and choose to use two main measures based on them: the mean *μ*_{V} and a modified Kuramoto order parameter *R*, which gives a naturally normalized measure of the instantaneous alignment (or similarity) of the membrane potentials. Both are defined in time, over a class of neurons. As randomness plays a crucial role in our simulations, through network connectivity as well as noise realization, it is important to control how much it affects the results we obtain. To do so, we start by fixing the network connectivity while averaging over noise realizations, and then average over connectivities while looking for noise realizations that lead to propagative and non-propagative scenarios for each structure.

#### Mean membrane potential in time

In Figure 9, *a* and *b*, we show the mean membrane potential *μ*_{V}, defined for each group of excitatory (RS) and inhibitory (FS) neurons, in time, for propagative (Fig. 9*a*) and non-propagative (Fig. 9*b*) scenarios. The top and bottom rows, respectively, refer to the averages and SDs over noise realizations (input), as the network connectivity is held fixed. All the data presented from this point on were obtained by regrouping neurons having the exact same inhibitory in-degree, thus corresponding to a discrete one-to-one sampling of the input distribution. Note that, given the network architecture under consideration, the number of afferent inhibitory synapses, defined over both populations of neurons, follows a binomial distribution with a mean of ∼100 connections. From that, we arrange neurons in groups of identical numbers of inhibitory connections, which gives us ∼60 groups (varying with population and connectivity) containing at least one neuron.
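The grouping procedure can be sketched as follows; the membrane potentials here are synthetic placeholders, used only to show how group-wise statistics such as *μ*_{V} are computed at a given time step.

```python
import numpy as np

# Sketch of the "structural coarse graining": neurons sharing the exact same
# inhibitory in-degree are pooled into one group, and group-wise statistics
# (e.g., the mean membrane potential mu_V) are computed at each time step.
# The membrane potentials below are synthetic placeholders for illustration.
rng = np.random.default_rng(7)
in_deg = rng.binomial(2000, 0.05, size=8000)   # inhibitory in-degrees (RS cells)
V = rng.normal(-65.0, 5.0, size=8000)          # membrane potentials at one time step

degrees, group_ids = np.unique(in_deg, return_inverse=True)
mu_V = np.array([V[group_ids == g].mean() for g in range(degrees.size)])

print(degrees.size)    # number of distinct in-degree groups (at least one neuron each)
```

Repeating this at every integration step, for RS and FS populations separately, yields the group-resolved time courses shown in Figures 9 and 10.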

To confirm that our results do not depend on the specific connectivity used, we simulated 50 networks with different connectivities (but otherwise identical parameters) and found, for each, a pair of noise realizations corresponding to propagative and non-propagative scenarios. These various networks share the same metastructure and follow similar statistics; they show that, within those choices, the existing variations do not impact the results. We applied the same method to create the different groups, but the number of groups could differ because of the random variability in the connections. As a consequence, many "extreme" groups are poorly represented among the various connectivities, which would make them hard to average; we thus discarded them. The average and SD of *μ*_{V} over the 50 different connectivities are shown in Figure 10, *a* and *b*. The white lines could be a weak manifestation of the previous effect, which makes the SD very high, coupled with the fixed range of the color scales imposed for clarity. We observe that this figure looks very similar to Figure 9, which suggests that the results are not limited to a specific connectivity.

We see from Figures 9 and 10 that the inhibitory in-degree profile seems to play a major role in the overall dynamics. Indeed, as the perturbation is growing (starting 250 ms before the maximum at 2 s), we can first observe a strong increase of the mean membrane potential of all excitatory neurons, starting from low indices, then followed by a low-potential cascade, also starting from low indices and then contaminating higher ones.

This latter effect is much clearer in the case of inhibitory neurons, where the cascade follows very well the input profile, in both propagative and non-propagative scenarios. Note that the low-potential area can be easily understood as a high-firing regime: neurons fire as soon as they leave their resting potential, thus displaying very low values of membrane potential when calculated (and sampled) over time.

Interestingly, these pictures show that, up to the decisive point of the seizures, the continuous measures look very similar, thus hinting at an instantaneous finite-size fluctuation causing the whole network to explode. Also, it is noteworthy that the new “hierarchy” set by the cascade is conserved in the non-propagative regime, while the propagative regime seems to have an overall reset effect.

Also, we see from these graphs that there is a particular time window where the variance of the mean membrane potential is larger for the most inhibitory-connected neurons, in both RS and FS populations (although it appears clearer for RS ones here, because of the need to rescale the FS colorbar to have comparable results). This increase of variance, while still present, is weaker and on a smaller time window in the average over connectivities compared with the average over noise realizations. This suggests that connectivity plays a role in the intensity of the effect, although it remains qualitatively similar. We found that this time window defines the period when the network can actually switch to propagation: the high variance corresponds to different times when various realizations “explode,” and thus defines a region of instability.

A central point to raise here is that what makes the difference between propagative and non-propagative scenarios is most likely not an infinitesimal instability defined from a macroscopic perspective [i.e., one caused by a positive eigenvalue of a Jacobian defined from a large-scale representation (e.g., a mean-field)]; otherwise, the non-propagative behavior would simply not be observable (as, except for chaotic dynamics, we do not observe unstable trajectories in phase space). Indeed, what differs between the various simulations is either the noise or the connectivity realization, which may, or may not, bring the system to a point of instability. The external Poissonian drive, with its finite-size fluctuations, is thus constitutive of the scenarios we observe.

To gain more insight into the diversity of dynamics across neuron groups, we turn our attention to a measure of alignment, or synchronization, namely the Kuramoto order parameter *R*.

#### Kuramoto order parameter

The Kuramoto order parameter represents a degree of alignment, with a value of 0 meaning no alignment and a value of 1 meaning all neurons are perfectly aligned. Figure 11, *a* and *b*, shows the Kuramoto order parameter *R*, computed in time for each group of excitatory (RS) and inhibitory (FS) neurons, averaged over noise realizations (Fig. 11*a*,*b*, top row), together with its SD over realizations (Fig. 11*a*,*b*, bottom row), in propagative (Fig. 11*a*) and non-propagative (Fig. 11*b*) scenarios (network connectivity held fixed).
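As an illustration, *R* can be computed from instantaneous neuronal phases. The sketch below assumes a common convention for assigning phases to spiking neurons (phase growing linearly from 0 to 2π between consecutive spikes); the text does not specify the exact convention used, so this is illustrative.

```python
import numpy as np

def kuramoto_R(phases):
    """Kuramoto order parameter of a group of oscillators.

    phases : array of shape (n_neurons,), instantaneous phases in radians.
    Returns R in [0, 1]: 0 = no alignment, 1 = perfect alignment.
    """
    return np.abs(np.mean(np.exp(1j * phases)))

def spike_phases(spike_trains, t):
    """Assign each neuron a phase at time t by linear interpolation
    between the two spikes bracketing t (an assumed, common convention)."""
    phases = []
    for spikes in spike_trains:
        s = np.asarray(spikes)
        before, after = s[s <= t], s[s > t]
        if before.size == 0 or after.size == 0:
            phases.append(0.0)  # no bracketing spikes: arbitrary phase
            continue
        t0, t1 = before[-1], after[0]
        phases.append(2 * np.pi * (t - t0) / (t1 - t0))
    return np.array(phases)

# fully aligned group: identical spike trains -> R = 1
aligned = [np.arange(0.0, 1.0, 0.1)] * 20
R_aligned = kuramoto_R(spike_phases(aligned, 0.55))

# uniformly staggered spike trains -> R close to 0
staggered = [np.arange(0.0, 1.0, 0.1) + k * 0.1 / 20 for k in range(20)]
R_staggered = kuramoto_R(spike_phases(staggered, 0.55))
```

A desynchronization cascade, as described below, corresponds to groups drifting from the first situation toward the second.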

Again, we reproduce the results with 50 network connectivities, for both propagative and non-propagative scenarios (Fig. 12*a*,*b*). We clearly see that the results are qualitatively similar, although with seemingly higher contrast than in Figure 11.

The cascade previously observed is clearly visible for the average *R*, in the form of a “desynchronization cascade.” For the propagative scenario, we note here a recruitment process between two radically different regimes that nonetheless share alignment features: a fluctuation-driven AI dynamics, where membrane potentials are mostly conditioned by the balance of inhibitory versus excitatory inputs, and a seizure characterized by high spiking and membrane potentials clamped by refractoriness. Interestingly, in the non-propagative scenario, it appears that the misalignment finally attained by the inhibitory neuron groups is fueled by the joint activity of the network and the input, thus hinting at an out-of-equilibrium steady state (which continues until the end of the plateau of the perturbation, 1 s later). From the SD perspective, two main features are worth pointing out. First, we again observe the instability window, characterized by a high SD between realizations in propagative scenarios. Second, we see that the two types of averaging lead to strikingly similar results, although slightly different quantitatively, with the average over connectivities leading to a higher contrast during the cascade. Therefore, our results are independent of both the noise realization and the specific connectivity, although an average over one or the other is useful to observe a typical case.

### Dynamic versus static approach

We have seen that changing the slope and the amplitude of the signal alters the chances of triggering a seizure, thus hinting that the time evolution of the perturbation is central. Then we observed a hierarchical structure setting in from the point of view of continuous measures, following the perturbation. However, fundamental questions remain. How much of this latter phenomenon is actually dynamic? Would we find the same structures if we bombarded the network with a fixed input at, say, 80 Hz? Can we observe the same dynamic structures for scenarios that are always, or never, propagative (no matter the noise or connectivity realization)? This would indicate that the structures observed thus far might have little to do with the seizure phenomenology itself but would either be the mere results of strong conditioning of the network by the level of input (if static structures are similar), or simply not yield any explanation for the instability we observe (if always/never propagative scenarios show similar features).

We now turn our attention to Figure 13*a*, which displays the static *μ _{V}* profiles in the RS population obtained for fixed external inputs (“Stat” curves), together with the profiles captured at the typical onset of the seizure, for various amplitudes: 60 Hz (never propagative), 80 Hz (sometimes propagative), and 100 Hz (always propagative). The network realization is the same as previously analyzed, except when explicitly stated (Net 2), where we refer to another connectivity realization. For the 80 Hz scenarios with the first network (the one we have been investigating so far), we kept the splitting of the realizations between propagative and non-propagative to highlight the potential differences of structures.

First, as previously observed, the profiles obtained for propagative versus non-propagative regimes are very similar for lower values of inhibitory connectivity. Then, we clearly see that the *μ _{V}* profiles extracted from the dynamic situations (hereafter called the “dynamic profiles”) are very different from the static ones.

In addition, it is worth pointing out that the profile obtained for an 80 Hz amplitude with a different random realization of the network (where all 50 noise realizations are pooled, based on the previous observation that propagative and non-propagative scenarios show very similar structures) is very similar to those already shown, with small SE. Together with the previous observation that noise and network realizations seem to play similar roles, this underlines a robust network phenomenology. Furthermore, we see that the profiles obtained for 60, 80, and 100 Hz amplitudes are different. The nature of their differences is of great interest for low indices, where we observe that the 60 and 100 Hz profiles are located on opposite sides of the central 80 Hz profile: their ordering in this region is consistent with that of their response to the perturbation we have observed so far (Fig. 4). This said, the dynamic profiles show similar qualitative features: they are all nonmonotonous and display two well-separated parts. Indeed, for low indices (until 30) *μ _{V}* is increasing, with values starting around the lowest of the static profiles (10 Hz), while their high-indices part is more aligned with the high static profiles. Interestingly, we see that for 60 and 80 Hz the right part is well aligned with the static profile obtained for similar inputs. This does not seem to be the case for 100 Hz, although the static input simulation displays some instability, which makes the comparison less relevant. Although it is not straightforward to link *μ _{V}* with the instantaneous regime, we have seen that low values can be associated with high firing (the neurons spending most of their time clamped at –65 mV). This helps in understanding what is happening here: for higher values of amplitude, the less inhibitory-connected neurons fire more, and can thus lead the rest of the network to higher activities.

Figure 13*b* shows the Kuramoto order parameter counterpart of the latter figure. Here the *R* profiles display structures quite different from those observed for *μ _{V}*. Indeed, the various static profiles do not display such clear variability as for *μ _{V}*, although small differences can still be observed: high inputs seem to show more variability at low indices, while ending at higher values for higher indices. More importantly, the dynamic profiles are here very different from the static ones and among themselves. In addition, the simulated propagative and non-propagative scenarios show little difference here as well, and the profiles corresponding to the same amplitude (80 Hz) and a different network architecture (Net 2) also overlap here. Interestingly, we can also observe that the 60 and 100 Hz profiles are different and located apart from the 80 Hz one, although they also show different magnitudes of their inverted peaks. Given that the ordering of these magnitudes is not consistent with the various degrees of instability, we suggest that the position of the peak might be the most discriminating factor to establish whether the scenario is propagative. This would be consistent with the observations we have made thus far and would confirm our previously suggested scenario: the closer we approach the center group, the more neurons are considered (binomial distribution of in-degrees). In Figure 13, the green peak (100 Hz) tells us that more neurons in that region have undergone the desynchronization cascade we mentioned earlier; that is, more neurons have already “switched sides” and entered a high-firing regime, thus giving more inertia to the cascade phenomenon. The middle scenario (80 Hz) would then sit on a tipping point, that is, a point separating two radically different dynamic regimes of the system.

These latter observations show that, from the perspective of both the mean membrane potential and the Kuramoto order parameter calculated inside the groups formed from inhibitory in-degree, we are in the presence of a structured behavior that emerges from an intricate interaction between dynamics and architecture, and which cannot be recovered from static approaches.
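The coarse-graining used throughout — grouping neurons by their inhibitory in-degree — can be sketched as follows. The Erdős–Rényi-like wiring, the network sizes, and the equal-width binning are illustrative assumptions; the paper's exact grouping procedure may differ.

```python
import numpy as np

def group_by_indegree(in_deg, n_groups=40):
    """Assign each neuron to one of `n_groups` groups of increasing
    inhibitory in-degree (equal-width bins spanning the observed range)."""
    edges = np.linspace(in_deg.min(), in_deg.max() + 1, n_groups + 1)
    return np.digitize(in_deg, edges) - 1  # labels 0 .. n_groups - 1

# hypothetical network: with random (Erdos-Renyi-like) wiring at
# connection probability p_conn, the inhibitory in-degree of each
# neuron is binomially distributed
rng = np.random.default_rng(1)
n_inh, n_neurons, p_conn = 2000, 10000, 0.05
in_deg = rng.binomial(n_inh, p_conn, size=n_neurons)

labels = group_by_indegree(in_deg)
# groups near the center of the range are the most populated, consistent
# with the "center group" containing the most neurons (binomial distribution)
group_sizes = np.bincount(labels)
```

Statistics such as the group-wise mean membrane potential or Kuramoto order parameter are then computed per label, which is how the profiles of Figure 13 are organized.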

### Can seizure propagation be controlled by external inputs?

After having established that the structure of the dynamics allows or does not allow the propagation of the paroxysmal perturbation, we now investigate whether we could use the previous finding of a strong instability window for the 80 Hz dynamic scenario to alter the fate of the AdEx network dynamics. This approach is based on the following reasoning: we have observed, with a detailed analysis, that the switch to one scenario or the other is determined in a short time window (just before the eventual seizure). Thus, we want to design a stimulation protocol that reduces the chance of seizure propagation based on this observation, but that does not require the same level of analysis, hence making it applicable online and without the need for extensive computational power. To do so, we study the region around the seizure onset to determine this relevant time window.

To achieve this, we apply a Gaussian stimulation with a 10 ms time constant and two different amplitudes (1 and 5 Hz), positive or negative, through a variation of the external excitatory input (which, depending on when the stimulation is applied, consists of the 6 Hz drive alone or of the drive plus some portion of the 80 Hz perturbation with a 100 ms time constant). For simulations performed under the same conditions, the stimulations were applied at different times, as detailed in Table 1. For a total of 100 simulations (with the same network connectivity but different noise realizations), among which 72 were propagative, Table 1 shows the relative percentage of simulations that underwent a triggering or a cancelation of the seizure, respectively.
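The composition of the external input can be sketched as below. This is a minimal illustration: the sigmoidal rise chosen for the perturbation, the parameter names, and the clamping of the rate at zero are our assumptions, not the paper's exact waveform.

```python
import numpy as np

def external_rate(t, t_stim, amp_stim, sigma=0.010,
                  drive=6.0, pert_amp=80.0, pert_rise=0.100, t_pert=2.0):
    """External excitatory rate (Hz) seen by the network: the baseline
    Poissonian drive (6 Hz), a plateau-like paroxysmal perturbation
    (80 Hz amplitude, 100 ms time constant, here an assumed sigmoidal
    rise centered at t_pert), and a Gaussian stimulation pulse of width
    `sigma` (10 ms) and signed amplitude `amp_stim` (e.g. +/-1 or
    +/-5 Hz) centered at `t_stim`.
    """
    pert = pert_amp / (1.0 + np.exp(-(t - t_pert) / pert_rise))
    stim = amp_stim * np.exp(-((t - t_stim) ** 2) / (2 * sigma**2))
    return np.maximum(drive + pert + stim, 0.0)  # a rate cannot be negative

# a -5 Hz pulse applied inside the instability window (just before 2 s)
t = np.arange(0.0, 4.0, 1e-3)
rate = external_rate(t, t_stim=1.99, amp_stim=-5.0)
```

The resulting `rate` array would then be used as the intensity of the inhomogeneous Poisson process driving the network, with the pulse timing swept over the candidate window as in Table 1.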

We see that it is possible to “reverse” the scenario from propagative to non-propagative in the time window between 1975 and 2000 ms (and vice versa, albeit over a larger time window) thanks to (or because of) the stimulation (see Table 1, Percentage of prevented events; for the opposite, see Table 1, Percentage of triggered propagation events). Notably, >50% of the seizures are prevented if a stimulation of −5 Hz is applied in this time window. This could open interesting leads in furthering qualitative comparisons between computational simulations and real-life situations, and could eventually guide future interventions.

## Discussion

In this computational work, we studied the response of various spiking neural networks to paroxysmal inputs. We observed that the same networks can display various types of responses, depending on their nature (the neuron model used at their nodes), the shape of the perturbation (here we analyzed in particular a plateau-like input with various slopes and amplitudes), and the realization of the random number generator. In the case of AdEx and CAdEx networks, two radically different responses to a qualitatively similar incoming excitatory perturbation are observed. Indeed, the latter could either recruit the excitatory population and thus allow the seizure to propagate to efferent areas or be “controlled” by the activity of the inhibitory population, keeping the excitatory population at a low activity level, thus preventing further propagation. The response of the network depends not only on the amplitude of the perturbation but also on its rising speed. This is consistent with experimental observations (Saggio et al., 2020). Interestingly, in the case of an HH network, our investigations show very different network responses, where mostly the amplitude of the perturbation plays a role and where no variability across noise realizations was observed.

A rich literature shows that seizures can be classified according to their onset/offset features described by bifurcation types (Jirsa et al., 2014; Saggio et al., 2017, 2020). The most observed bifurcation at the onset of a seizure is a saddle-node bifurcation (Saggio et al., 2017), which is characterized by an abrupt change in the baseline of the electrophysiological signal (Jirsa et al., 2014). We observed in the current work that seizures are propagative in AdEx and CAdEx networks when they rise abruptly enough in the network. There is here an interesting correspondence revealing the importance of the onset of seizure dynamics, as it has been shown from a clinical point of view (Lagarde et al., 2019). It is worth noting that the absence of such phenomenology in HH networks (for the scenarios we considered) raises interesting questions in the modeling of seizure dynamics, but also more generally in neuronal networks. How do the quantitative differences (number of variables) and qualitative differences (types of processes taken into account) in the single-neuron models affect the global dynamics? Are more precise models always the best in all respects? This underlines the importance of the choice of model and of parameters: by modeling a neuronal network and observing a phenomenon that resembles reality, we are not testing whether the specific ingredients we chose are constitutive of this phenomenon, but how they would be if they were chosen a priori. It is only the systematic cross-model observations and comparisons that can yield an answer as to which are the necessary and sufficient ingredients to observe a given phenomenon.

Note that, in clinical observations, the most accessible measurements are made on a macroscopic scale. In the study proposed here, we observe the activities at a smaller descriptive scale by building a network of neuron models. We thus have a complex system of very high dimension, rendering it impossible, a priori, to obtain a simple description of the dynamics, which motivates the statistical approach proposed here. With this type of analysis, we were able to track in time key features of the underlying dynamics, especially those supported by the structure of the network: inhibitory in-degree can be mobilized to explain global differences in network response. Indeed, we proposed a coarse-grained description of the network dynamics based on inhibitory in-degree, allowing us to capture internal processes that were not visible at first, and that play a significant role in the global out-of-equilibrium dynamics. We chose inhibitory in-degree as it was found to be the most influential aspect determining the firing rate (Fig. 5*a*). It is interesting to note that the inhibitory neurons were also the ones with the highest firing rate (∼15 Hz), while the excitatory neurons fired much lower (∼2 Hz) and the Poisson noise was also lower (∼6 Hz by construction). That difference in rates, more than the nature of the neurons, could be the reason for the disparity in influence; while this could be interesting to investigate, it is not within the scope of this study and does not change the main results, as the categorization was only used as a tool to visualize the data. This opens the way to a flexible modeling framework of internal subpopulations, whose precision can be adapted to the most significant level of description, depending on the context and the questions asked. This is a first bottom-up step toward a coarser description of the system and, hence, may guide reliable modeling attempts at larger scales.

We have also established that not only does this structure matter, but also its interaction with instantaneous finite-size fluctuations of the noise and the time evolution of the global dynamics. These are all constitutive of the observed behaviors, and none can be neglected to understand them.

Also, our results showed that, for the AdEx network, there exists a time window, characterized by a high variance across noise realizations, during which it is possible to reverse the behavior by applying an appropriate stimulation. The use of a stimulus to interrupt a seizure has been applied in the past in the case of absence seizures (Rajna and Lona, 1989). These results have been used as the basis of computational studies at the scale of the EEG (Taylor et al., 2014). Computational work on the response of a network model to stimuli designed to disrupt seizure-like activities has shown the importance of the precise timing of the stimulation (Anderson et al., 2007). Then, the use of electrode stimulation has been developed in rodents (Pais-Vieira et al., 2016). Different approaches have been implemented, including deep brain stimulation, vagus nerve stimulation (Boon et al., 2018), and magnetic stimulation (Ye and Kaszuba, 2019). However, experimental recordings of the response to stimuli do not allow us to understand the mechanisms at play in large populations of neurons. Indeed, even if progress in calcium imaging or in multielectrode arrays has made it possible over the last decade to record a large number of neurons simultaneously, we do not yet have access to the exact structure of the network they constitute. The study presented here is thus a proof of concept, based on a specific network model.

Finally, we also found that it is possible to “control” the propagation of the seizure by appropriate stimulation in a given time window. We think that this constitutes not only an important prediction of the model, but also a potentially important possibility for the treatment of some types of intractable focal epilepsies. This prediction could be tested in future modeling work at the mesoscopic scale, with realistic connectivity between the focus and neighboring areas. Such a model could be used to test the hypothesis that appropriate stimulation in areas adjacent to the focus may prevent the propagation of the seizure.

Perhaps the most exciting perspective is that the same paradigm could be used experimentally to control seizures. This would require a system to detect the onset of the seizure in the focus, and another system to deliver appropriate stimuli in adjacent areas. Such a system could be applied to experimental models of focal seizures, to evaluate whether such a paradigm could reverse the propagation, and thus generalization, of the seizure. This could be another way of controlling seizures, not by suppressing the focus, but by making sure that the paroxysmal activity does not propagate.

## Footnotes

The authors declare no competing financial interests.

This work was funded from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3) and the Centre National de la Recherche Scientifique (France).

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.