Is There Sufficient Evidence for Criticality in Cortical Systems?

Abstract Numerous studies have proposed that specific brain activity statistics provide evidence that the brain operates at a critical point, which could have implications for the brain's information processing capabilities. A recent paper reported that identical scalings and criticality signatures arise in a variety of different neural systems (neural cultures, cortical slices, anesthetized or awake brains, across both reptiles and mammals). The diversity of these states calls into question the claimed role of criticality in information processing. We analyze the methodology used to assess criticality and replicate this analysis for the spike trains of two non-critical systems. These two non-critical systems pass all the tests used to assess criticality in the aforementioned recent paper. This analysis provides a crucial control (absent from the original study) and suggests that the methodology used may not be sufficient to establish that a system operates at criticality. Hence, whether the brain operates at criticality or not remains an open question, and it is of evident interest to develop more robust methods to address it.

Whether the brain operates at criticality or not has been debated since Beggs and Plenz (2003) reported that power-law statistics of firing patterns show similar behavior to that of physical systems at a phase transition. This suggestion sparked the interest of theoreticians, in search of rigorous validation of this hypothesis and of a theory of the origin and implications of criticality in neural systems (Mora and Bialek, 2011). In tandem, experimentalists searched for evidence of criticality in neural systems and in physiological or pathological brain states (Hahn et al., 2010; Friedman et al., 2012; Massobrio et al., 2015). Some papers also hypothesized that criticality was a hallmark of healthy brain function and optimal information processing (Beggs, 2008; Shew et al., 2009; Shew and Plenz, 2013).
However, many theoretical studies have shown that the evidence provided for criticality in experiments is not specific to critical systems. Some papers proposed that simple, non-critical phenomena could account for the positive detections of criticality reported in the literature. For example, artifacts of thresholding noisy signals (Touboul and Destexhe, 2010) and intermittent activity (Miller, 1957), together with their effects on high-dimensional neural data (Laurence et al., 2014) or large-scale interacting networks (Schwab et al., 2014; Touboul and Destexhe, 2017), have been shown to generate purported signatures of critical behavior. One of the key difficulties related to the criticality hypothesis is the lack of a univocal statistical test, which in turn points to the difficulty of identifying a specific type of phase transition associated with the putative critical dynamics.
With the aim of bringing together theoretical findings and experimental data, we investigate the conclusions of a recent paper reporting remarkable power-law scalings on an extensive dataset. The paper in question analyzes neural data recorded in various species, with distinct preparations and different brain states using a unified methodology (Fontenele et al., 2019). Strikingly, data ranging from freely moving or anesthetized mammals to ex vivo preparations of reptile nervous system or cultured slices of rat cortex all show common scaling in a specific activity regime. The authors interpret this common scaling as an unspecified critical regime (which is said to be distinct from the classical mean-field directed percolation model). The extraordinary consistency of the scalings observed in different brain states and preparations is very surprising. In particular, the fact that in vitro neuronal cultures and deeply anesthetized states are found to be critical raises some intriguing questions about how that "critical" state relates to optimal information processing in the brain.
It is thus essential to determine whether the evidence provided in existing experimental studies is sufficient to conclude that the system studied operates at criticality. To assess criticality, Fontenele et al. (2019) used a test based on the relationship between the power-law scaling exponents of neuronal avalanches in experiments and in a model at criticality inspired by classical crackling-noise systems. However, the authors did not provide any control (i.e., a non-critical system) to assess whether the methodology distinguishes critical from non-critical models of neural networks. In fact, all the systems they considered passed their criticality tests.
The methodology used by Fontenele et al. (2019) consists of fitting the distributions of neuronal avalanche size (exponent τ) and duration (exponent τ_t) with power laws truncated at a cutoff. The fit is validated by comparing the Akaike information criterion (AIC) of the power-law fit with the AIC of a log-normal fit. The authors found acceptable support for power-law distributions (compared with the log-normal distribution), but noted that the exponents found are not compatible with the mean-field directed percolation systems generally used as a reference to assess criticality. Nonetheless, they classified systems as critical when the exponents satisfy Sethna's crackling relationship:

(τ_t − 1)/(τ − 1) = a, (1)

where a is the power-law scaling exponent of the average avalanche size as a function of duration. Renormalization theory shows that this scaling is universal, at criticality, for a specific class of systems called crackling-noise systems. The choice of this test implicitly assumes that the neuronal systems belong to the universality class of crackling systems, which to date remains an open question. In fact, the authors refer to our own theoretical paper (Touboul and Destexhe, 2017) as support for the use of this relationship to distinguish critical from non-critical systems. However, the results in our paper do not support the test performed in Fontenele et al. (2019). We showed that two non-critical models satisfied all the hallmarks used to identify criticality in the experimental literature. These counterexamples would not satisfy Sethna's relationship (1) in the thermodynamic limit and for the scaling of the tails of the avalanche distributions. We make no claim about the scaling of the bulk of the avalanche distributions, and our results should not be construed as demonstrating that systems satisfying (1) are critical [or that systems for which (1) is not satisfied are not critical].
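The power-law versus log-normal AIC comparison described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic Pareto-distributed "avalanche sizes", the cutoff x_min, and the simple continuous maximum-likelihood estimator are our own assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical avalanche sizes, drawn here from a Pareto distribution
# (true tail exponent 1.5) purely for illustration.
sizes = rng.pareto(a=0.5, size=5000) + 1.0

x_min = 1.0  # assumed cutoff; fits run from the smallest observed value
x = sizes[sizes >= x_min]
n = len(x)

# Continuous power-law MLE: p(x) = (tau - 1)/x_min * (x/x_min)^(-tau)
tau = 1.0 + n / np.sum(np.log(x / x_min))
loglik_pl = n * np.log((tau - 1.0) / x_min) - tau * np.sum(np.log(x / x_min))
aic_pl = 2 * 1 - 2 * loglik_pl  # one free parameter (tau)

# Log-normal fit on the same data, for comparison
shape, loc, scale = stats.lognorm.fit(x, floc=0)
loglik_ln = np.sum(stats.lognorm.logpdf(x, shape, loc, scale))
aic_ln = 2 * 2 - 2 * loglik_ln  # two free parameters (mu, sigma)

# A lower AIC indicates better support; the test accepts the power law
# when its AIC is sufficiently below that of the log-normal.
print(tau, aic_pl < aic_ln)
```

The point of such a comparison is relative, not absolute: a power law "winning" against one alternative distribution does not by itself establish criticality.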
To assess whether the two non-critical systems studied in Touboul and Destexhe (2017) indeed provide signatures distinct from the neural systems analyzed in Fontenele et al. (2019), we replicated the analysis of Fontenele et al. (2019) for the two models studied in Touboul and Destexhe (2017): the Brunel network (Brunel, 2000) and a stochastic surrogate (Touboul and Destexhe, 2017). We investigated how fitting the bulk of the distribution up to a cutoff affects the tests: we performed extensive simulations, computed avalanche size and duration distributions, fitted power-law distributions with various cutoffs, and used the AIC difference test proposed in Fontenele et al. (2019) to validate the power-law fits (100% of the n = 32,000 distributions considered in Fig. 1 passed the test). We next checked, based on the fitted exponents and for each pair of threshold values, whether (1) was statistically valid, using a two-sample t test on multiple independent repetitions of the simulation. We found a connected region of pairs of threshold values for which the statistics are statistically consistent with (1) and would therefore be classified as critical by the criterion used in Fontenele et al. (2019). These counterexamples highlight the fact that the evidence provided in Fontenele et al. (2019) is not sufficient to establish that the data they analyzed come from a system at criticality. The truncation of the data performed in Fontenele et al. (2019; with thresholds as low as 15-25 duration bins) is good practice and inevitable for experimental datasets. However, truncation may substantially alter the statistics, particularly when it comes to estimating the tails of a distribution. In Fontenele et al. (2019), the truncation is a crucial step of the methodology: fits are performed from the smallest observable avalanche up to the cutoff, and therefore small cutoffs are dominated by the distribution of small avalanches and are less likely to accurately capture the behavior of the tails. Therefore, while the authors do report evidence that the brain, in some regimes, shows statistics consistent with a given type of critical system, they did not establish that the brain operates at criticality, because the methodology used appears insufficient to distinguish critical from non-critical systems.
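The logic of the crackling-relation test can be sketched as follows, under our reading of the methodology. The avalanche definition (maximal runs of non-empty time bins), the simple continuous MLE for the exponents, the regression estimate of a, and the independent-Poisson "spike counts" standing in for a non-critical control are all our own illustrative assumptions, not the original pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def avalanches(counts):
    """Maximal runs of non-empty bins -> list of (size, duration) pairs."""
    out, size, dur = [], 0, 0
    for c in counts:
        if c > 0:
            size, dur = size + c, dur + 1
        elif dur > 0:
            out.append((size, dur))
            size, dur = 0, 0
    if dur > 0:
        out.append((size, dur))
    return out

def fit_exponents(pairs):
    """Continuous MLE exponents tau (size) and tau_t (duration), and the
    slope a of log size versus log duration (a crude stand-in for the
    scaling of mean avalanche size with duration)."""
    s = np.array([p[0] for p in pairs], float)
    d = np.array([p[1] for p in pairs], float)
    tau = 1.0 + len(s) / np.sum(np.log(s))    # s_min = 1 assumed
    tau_t = 1.0 + len(d) / np.sum(np.log(d))  # d_min = 1 assumed
    a = stats.linregress(np.log(d), np.log(s)).slope
    return tau, tau_t, a

# A deliberately non-critical control: independent Poisson spike counts.
lhs, rhs = [], []
for _ in range(10):  # independent repetitions of the "simulation"
    counts = rng.poisson(0.8, 20000)
    tau, tau_t, a = fit_exponents(avalanches(counts))
    lhs.append((tau_t - 1.0) / (tau - 1.0))
    rhs.append(a)

# The criterion classifies the data as consistent with relation (1) when
# the two samples are statistically indistinguishable (large p value).
t_stat, p = stats.ttest_ind(lhs, rhs)
print(np.mean(lhs), np.mean(rhs), p)
```

Note the asymmetry of the test: "failing to reject" equality in (1) is taken as evidence of criticality, which is precisely why non-critical controls with broad, truncated distributions can pass it.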
Brunel's model is a well-known network model displaying activity states relevant to cortical activity. The fact that this model can satisfy all aspects of the analysis in Fontenele et al. (2019) away from criticality suggests that the most parsimonious explanation of the data does not require criticality, a regime that entails fine-tuning of physiological parameters or homeostatic mechanisms constraining self-organization. Moreover, such basic phenomena appear consistent with the ubiquity of these observations across a variety of neural systems, from awake animals to ex vivo reptile preparations. This report underlines once more that the criticality hypothesis is yet to be established and that rigorous methods should be developed. To make progress in this area, experimentalists and theoreticians should come together to define precisely the type of criticality that could arise in the brain and to establish rigorous, univocal tests for it.

Code Availability
The program code used to reproduce the figure of the paper is available in Destexhe and Touboul (2021).