**presented by Tawan T. A. Carvalho and Mauricio Girardi-Schappo**
*Departamento de Fı́sica, Universidade Federal de Pernambuco, Recife, PE, Brazil*


*Recent experimental results on spike avalanches measured in the urethane-anesthetized rat cortex have revealed scaling relations that indicate a phase transition at a specific level of cortical firing rate variability. The scaling relations point to critical exponents whose values differ from those of a branching process — a model canonically employed to understand brain criticality. This suggested that a different model, with a different phase transition, might be required to explain the data. Here we show that this is not necessarily the case. By employing two different models belonging to the same universality class as the branching process (mean-field directed percolation, MF-DP) and parsing the simulation data exactly like the experimental data, we reproduce most of the experimental results. The parsing includes measuring only a very small fraction of the neurons in the network (a procedure called subsampling) and sweeping over a range of sampling time bins to define the avalanches. These data-parsing protocols are sufficient to change the measured exponents of the known underlying critical point. The portion of the experimental data where the scaling laws hold is matched by the parsed model only within a very narrow range of parameter space around the MF-DP critical point.*

The hypothesis of brain criticality relies mainly on the measurement of neuronal avalanches. Over the years, different experimental setups have yielded different avalanche exponents. The empirical evidence and the nature of the brain (a network of spiking neurons) strongly suggest that neuronal avalanches should be described by a branching process. Beggs & Plenz (2003) indeed measured neuronal avalanches that matched the branching-process exponents. However, many later experiments did not agree with the standard branching-process exponents for the avalanche distributions. The standard branching-process exponents are τ = 3/2 (size distribution), τ_{t} = 2 (duration distribution), and 1/(σ ν z) = 2 (crackling noise, or Sethna’s, scaling law), and they correspond to the so-called mean-field Directed Percolation (MF-DP) universality class. In critical systems, avalanche sizes and durations are expected to scale with each other, yielding the crackling noise scaling relation: (τ_{t} – 1) / (τ – 1) = 1/(σ ν z). Both sides of this equation can be fitted independently in experiments, and they must agree within error bars for the system to be considered critical. Fontenele et al. (2019) introduced a novel analysis of neuronal avalanche data: they used the coefficient of variation (CV) of the spiking activity to parse the network recordings in their own data (urethane-anesthetized rats), as well as in publicly available data sets (freely moving mice, ex vivo turtle, anesthetized macaque monkey, and neuronal slices). They found a startling fact: all these experiments fall on top of the same crackling noise scaling law, with 1/(σ ν z) = 1.28 ± 0.02. Although the experiments beautifully agree with one another, they fall far from the standard branching-process exponent, 1/(σ ν z) = 2.
This led the authors to ask: although the scaling law is obeyed, as expected for critical systems, what phase transition underlies the experiments’ critical point with such unusual exponents? Here, we show that subsampling holds the key to reconciling the critical exponents of the theoretical MF-DP branching process with the empirical data of Fontenele et al. and other experiments.
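The branching-process picture and the crackling-noise consistency check above can be sketched in a few lines. This is a minimal toy simulation, not the network model used in this work; the two-target branching rule, the seed, and all sample counts are illustrative assumptions:

```python
import random

def avalanche(m, rng, cap=10_000):
    """One avalanche of a branching process: each active unit excites
    each of 2 targets with probability m/2, so the branching ratio
    (mean offspring per unit) is m.  Returns (size, duration)."""
    active, size, duration = 1, 1, 1
    while active and size < cap:
        offspring = sum(
            (rng.random() < m / 2) + (rng.random() < m / 2)
            for _ in range(active)
        )
        size += offspring
        if offspring:
            duration += 1
        active = offspring
    return size, duration

rng = random.Random(0)
crit = [avalanche(1.0, rng)[0] for _ in range(5_000)]  # critical, m = 1
sub = [avalanche(0.5, rng)[0] for _ in range(5_000)]   # subcritical

# At criticality the size distribution develops a heavy power-law tail,
# so large avalanches are far more frequent than in the subcritical case.
def frac_large(sizes, s0=30):
    return sum(s >= s0 for s in sizes) / len(sizes)

# Crackling-noise consistency of the MF-DP exponents quoted above:
tau, tau_t = 3 / 2, 2
lhs = (tau_t - 1) / (tau - 1)  # equals 1/(sigma nu z) = 2 for MF-DP
```

At m = 1 the toy process sits at the MF-DP critical point; moving m away from 1 destroys the power-law tail, which is the transition the text refers to.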

To that end, we recorded from 5 urethane-anesthetized rats and compared the data to the spiking neuronal network model of Girardi-Schappo et al. (2020). Urethane is a well-established drug that produces spontaneous changes of brain states resembling sleep-state alternations (Clement et al., 2008). In the last decade, experimental preparations using urethane have helped elucidate questions concerning the mechanisms and functional relevance of state-dependent patterns of brain activity (Curto et al., 2009; Renart et al., 2010; Mochol et al., 2015; de Vasconcelos et al., 2017). This ability to promote spontaneous changes in the level of spiking variability cannot be achieved with other anesthetics, such as pentobarbital and isoflurane. In addition to promoting a richer diversity of brain states, urethane anesthesia is remarkably stable over long recording sessions. We subjected both model and experiments to the same CV-parsing procedure. However, because experiments can only record from a few hundred neurons in the rats’ brains, we likewise recorded from only 100 of the model’s 100,000 neurons. This procedure of recording from only a few units, drastically fewer than the total number of units, is known as “subsampling” the data. The model has a phase transition as a function of the excitatory/inhibitory strength ratio, given by the parameter g. At g = 1.5, it presents a well-known MF-DP critical point (calculated analytically). When parsing by CV, we found that the model could only reproduce the experimental CV distribution when recorded at its theoretical critical point (g = 1.5) or at a slightly supercritical state (g < 1.5, close to 1.5). CV time series from the model’s subcritical states did not yield a CV distribution resembling the experimental one.
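The two data-parsing ingredients described above, the CV of the binned firing rate and the subsampling of a small set of units, can be sketched as follows. The helper names, bin values, and unit counts are illustrative assumptions, not the analysis pipeline itself:

```python
import random
import statistics

def cv(binned_counts):
    """Coefficient of variation (std/mean) of a binned spike-count
    series -- the quantity used here to parse recordings by brain state."""
    mu = statistics.mean(binned_counts)
    return statistics.pstdev(binned_counts) / mu if mu > 0 else float("inf")

def subsample(spike_trains, n, rng):
    """Keep only n of the recorded units, mimicking an electrode array
    that sees a tiny fraction of the network (e.g. 100 of 100,000)."""
    kept = rng.sample(sorted(spike_trains), n)
    return {unit: spike_trains[unit] for unit in kept}

# A perfectly regular rate series has CV = 0; variability raises it:
cv_regular = cv([5, 5, 5, 5])    # 0.0
cv_bursty = cv([0, 10, 0, 10])   # 1.0 (pstdev = 5, mean = 5)

# Toy network of 1,000 units, of which only 100 are "recorded":
trains = {unit: [] for unit in range(1_000)}
recorded = subsample(trains, 100, random.Random(1))
```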

For both experiments and model, we independently fitted power laws to the avalanche size and duration distributions, and to the crackling noise scaling relation. As the CV is varied, the two sides of the crackling noise relation, the L.H.S. given by (τ_{t} – 1) / (τ – 1) and the R.H.S. given by 1/(σ ν z), intersect each other at the average <CV> = 1.46 ± 0.08 (experiments) and <CV> = 1.41 ± 0.05 (model), agreeing within error bars and indicating that both the experiments and the subsampled model are critical in the same CV range. Moreover, the crackling noise scaling laws of experiments and subsampled model agreed very well, yielding 1/(σ ν z) = 1.30 ± 0.02 (experiments) and 1/(σ ν z) = 1.34 ± 0.02 (model). The model obeys this scaling law up to 3% away from the critical point (g = 1.5) towards the supercritical state (g < 1.5). The model also agrees with the previous experiment of Fontenele et al. (2019), and with the data the authors parsed from the other experiments mentioned above. Thus, a model that is a well-known MF-DP branching process had its true exponents hidden by subsampling. The true exponents, τ = 3/2, τ_{t} = 2 and 1/(σ ν z) = 2, yield “apparent critical exponents” with values τ = 1.65 ± 0.02, τ_{t} = 1.87 ± 0.03 and 1/(σ ν z) = 1.34 ± 0.02. It is important to note that these apparent values do not pertain to any other known universality class. Rather, they correspond to the MF-DP phase transition under subsampling.
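One minimal way to fit the two sides of the crackling noise relation independently: the R.H.S. is the log-log slope of mean avalanche size versus duration, the L.H.S. comes from the fitted distribution exponents. The plain least-squares fit below is a sketch under synthetic data, not the fitting pipeline actually used:

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log y vs log x.  Applied to the mean
    avalanche size <S>(T) versus duration T, this estimates the
    R.H.S. of the crackling noise relation, 1/(sigma nu z)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic check: if <S> grows exactly as T^2 (the MF-DP value),
# the fit recovers 1/(sigma nu z) = 2.
T = list(range(1, 51))
S = [t ** 2 for t in T]

# L.H.S. from the apparent exponents reported above, tau = 1.65 and
# tau_t = 1.87, which is consistent with the fitted 1.34 +/- 0.02:
lhs_apparent = (1.87 - 1) / (1.65 - 1)  # about 1.34
```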

To investigate further the role of subsampling in the apparent exponent values, we varied the sampling fraction while keeping the model at its critical point g = 1.5, sampling avalanches at the model’s natural time step. We measured n neurons out of the model’s N = 100,000 cells, starting at the sampling fraction n/N = 0.001 (drastic subsampling) and going up to n/N = 1 (full sampling). As the fraction increases, the apparent scaling law of the model is obeyed within the range 0.01 < n/N < 0.02. For fractions 0.02 < n/N < 1, there is no apparent scaling law (i.e., at these fractions, even though the underlying system is critical by construction, subsampling completely hides the crackling noise scaling law, leading to the wrong conclusion that the system is not critical). At n/N = 1 (full sampling), the MF-DP exponents are recovered, as expected, even after parsing by CV, confirming that subsampling is the only factor hiding the model’s MF-DP phase transition. This shows how strongly subsampling affects our ability to tell a critical system from a non-critical one: subsampling can either confirm criticality with unusual apparent exponents (even when a standard MF-DP transition is present), or entirely break the crackling noise scaling law (even when the underlying system obeys it by construction). In addition, it strongly suggests that the answer to Fontenele et al.’s (2019) question (“what is the phase transition?”) is simply the long-expected one: the phase transition is MF-DP, corresponding to a standard branching process, but its true nature is hidden by subsampling effects inherent to experimental resolution.
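The avalanche definition used throughout (maximal runs of nonempty time bins, with size the total spike count and duration the number of bins) can be sketched as below. Intuitively, under strong subsampling most spikes go unseen, empty bins appear inside what is really one avalanche, and large events get chopped into smaller pieces, which is how apparent exponents can arise. The function name is a hypothetical helper:

```python
def avalanches(binned_counts):
    """Cut a binned population spike-count series into avalanches:
    an avalanche is a maximal run of nonzero bins; its size is the
    total spike count and its duration the number of bins."""
    out, size, dur = [], 0, 0
    for c in binned_counts:
        if c > 0:
            size += c
            dur += 1
        elif dur:
            out.append((size, dur))
            size = dur = 0
    if dur:  # close an avalanche still open at the end of the series
        out.append((size, dur))
    return out

# Two avalanches: (size 3, duration 2) and (size 3, duration 1).
events = avalanches([0, 2, 1, 0, 0, 3, 0])  # [(3, 2), (3, 1)]
```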

Glad you guys looked into potential deviations introduced by subsampling! That should clear some things up. Don’t forget we already know of additional issues that might prevent you from seeing these exponents, such as anesthesia and cortical-layer dependency. Nice work!

Thank you very much for the comment! The subsampling problem needed to be revisited, now using the crackling noise scaling relation, and we managed to show that the MF-DP universality class is still in the game. Regarding the experimental data, the choice of anesthesia is fundamental: urethane is well established to promote spontaneous changes in the levels of spiking variability, generating a very rich diversity of brain states. Understanding these data from urethane-anesthetized rats was a step forward, and we are committed to continuing this investigation.

We definitely agree, these are important factors to consider when analyzing cortical data…

Here, though, we are showing three things that go in parallel to those you mentioned: a) a known critical MF-DP system will appear not to be critical at all for a wide range of sampling fractions; b) for the sampling fractions in which Sethna’s scaling law is obeyed, the known MF-DP critical system will have its true exponents hidden behind unusual “apparent” exponents; and c) these apparent exponents agree very well with the data from the experiments with unusual exponents, suggesting that MF-DP is indeed lurking in the background of these systems and reconciling them with the experiments that show standard exponents.

We ourselves were surprised by these facts, because we usually tend to assume that the underlying system is scale-free (meaning that some of its properties, such as avalanches, have fractal-like behavior), and then take for granted that the underlying scale-freeness ensures that the way we sample the system doesn’t matter. We have now shown this to be false from the theoretical point of view (note that results a and b above are theoretical, and we confirmed them using an independent model that is also MF-DP, the Kinouchi-Copelli network).

Now what amazes me the most is that if we use the nLFP, as you showed in your talk, we recover the τ = 3/2 exponent very neatly. As you said, the nLFP dip is thought to be formed by many neurons firing almost together. This makes me think that the nLFP is a sort of coherent coarse-graining of the underlying activity, and that, due to renormalization around the critical point, it leaves the exponent τ untouched (although τ_{t}, related to P(T), becomes obfuscated).

These are indeed issues that need a more thorough investigation from both the theoretical and empirical points of view: whether the nLFP is indeed a coarse-grained way of coherently renormalizing (or rescaling) the data.
