**by Giorgio Nicoletti, Samir Suweis, Amos Maritan** (Università di Padova, Italy)


We present a systematic study [1] testing a recently introduced phenomenological renormalization group, proposed to coarse-grain data of neural activity from their correlation matrix [2]. The approach allows one, at least in principle, to establish whether the collective behavior of a network of spiking neurons is described by a non-Gaussian critical fixed point. We test this renormalization procedure in a variety of models, focusing in particular on the contact process, which displays an absorbing phase transition at λ=λ_{c} between a silent and an active state. We find that the results of the coarse-graining do not depend on the presence of long-range interactions and that, overall, the method proves able to distinguish the critical regime from the supercritical one. However, some scaling features persist in the supercritical regime, at least for a finite system, as we see in a contact process above λ_{c}. Our results provide both a systematic test of the method and insights into the subtleties one needs to consider when applying such phenomenological approaches directly to data to infer signatures of criticality.

[1] G. Nicoletti, S. Suweis, A. Maritan. Scaling and criticality in a phenomenological renormalization group. Phys. Rev. Research 2, 023144 (2020).

[2] L. Meshulam et al. Coarse graining, fixed points, and scaling in a large population of neurons. Phys. Rev. Lett. 123, 178103 (2019).

Hello everyone! I am Giorgio Nicoletti, a PhD student at the Laboratory of Interdisciplinary Physics at the University of Padova in Italy. This poster is about a paper we recently published, which deals with the possibility of applying a phenomenological renormalization group to single-neuron recordings. In particular, we want to understand what these methods can teach us by testing them in meaningful models of neural activity, in order to quantify their relation to criticality.

One of the most promising approaches was proposed last year by Meshulam et al., and it essentially amounts to two kinds of procedure. The first is similar to coarse-graining in direct space and amounts to clustering together pairs of maximally correlated neurons; if the system is scale invariant, we should see power-law scaling in various quantities as the clusters grow. The second approach, instead, is inspired by coarse-graining in Fourier space: we project out small-variance modes and check whether the joint probability distribution of the system approaches a critical fixed form.
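As a rough illustration of the first procedure, here is a minimal sketch of one pairwise coarse-graining step: greedily pair the two most correlated variables and sum their activity. The function name and the greedy pairing details are illustrative, not the authors' exact implementation.

```python
import numpy as np

def coarse_grain_pairs(X):
    """One direct-space coarse-graining step: greedily pair the two most
    correlated variables (columns of X) and sum their activity.

    X has shape (T, N): T time points, N variables. Returns (T, N // 2)."""
    T, N = X.shape
    corr = np.corrcoef(X, rowvar=False)
    np.fill_diagonal(corr, -np.inf)          # never pair a variable with itself
    available = np.ones(N, dtype=bool)
    clusters = []
    for _ in range(N // 2):
        # mask out variables that have already been paired
        masked = np.where(np.outer(available, available), corr, -np.inf)
        i, j = np.unravel_index(np.argmax(masked), masked.shape)
        clusters.append(X[:, i] + X[:, j])
        available[i] = available[j] = False
    return np.column_stack(clusters)

# Toy demo: coarse-grain random binary activity twice
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 16)).astype(float)
X2 = coarse_grain_pairs(X)   # 8 clusters of size 2
X4 = coarse_grain_pairs(X2)  # 4 clusters of size 4
print(X2.shape, X4.shape)    # (500, 8) (500, 4)
```

Iterating this step halves the number of variables each time; scale invariance would show up as power-law behavior of, e.g., cluster variance versus cluster size.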

We first test this approach in an archetypal model for the spreading of activity, the contact process, which displays an absorbing phase transition between a silent state and an active state. We find mixed results: this phenomenological procedure typically induces some scaling properties even away from the critical point, but the joint probability does seem to approach a fixed form, due to the strong scaling of the eigenvalues of the covariance matrix that emerges at the phase transition.
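For readers unfamiliar with the model, here is a minimal sketch of the contact process on a 1D lattice with asynchronous updates and periodic boundaries; the lattice size, number of sweeps, and update scheme are illustrative choices, not the paper's simulation setup.

```python
import numpy as np

def contact_process(N, lam, steps, seed=0):
    """Minimal 1D contact process. Active sites (1) become inactive at rate 1;
    active sites activate a random neighbour at rate lam. Returns the density
    of active sites after each Monte Carlo sweep."""
    rng = np.random.default_rng(seed)
    s = np.ones(N, dtype=int)                 # start fully active
    density = []
    for _ in range(steps):
        for _ in range(N):                    # one sweep = N single-site updates
            i = rng.integers(N)
            if s[i] == 1:
                # deactivate with prob 1/(1+lam), otherwise try to spread
                if rng.random() < 1.0 / (1.0 + lam):
                    s[i] = 0
                else:
                    j = (i + rng.choice([-1, 1])) % N
                    s[j] = 1
        density.append(s.mean())
    return np.array(density)

# Above the 1D critical point (lambda_c ~ 3.298) activity survives;
# well below it the system falls into the absorbing, all-silent state.
rho_super = contact_process(200, lam=4.0, steps=200)
rho_sub = contact_process(200, lam=1.0, steps=200)
print(rho_super[-1], rho_sub[-1])
```

The absorbing state (all sites silent) can never be left once reached, which is what makes the transition qualitatively different from equilibrium ones.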

We then also introduce some very simple models of variables that are independent conditional on the state of an external, global parameter. The idea is that in this way we can isolate the external contribution, but this class of models is also interesting because, in general, many quantities can be written exactly as a superstatistics over the external variable. If we take the simplest case, a binomial firing rate, we find that the spectrum of the covariance matrix is degenerate, so the whole procedure is essentially driven by statistical errors in the estimates of the eigenvalues. Surprisingly, we find that these statistical errors are enough to produce a fixed form of the joint probability, which undermines the precision that this type of approach can achieve.
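The degeneracy is easy to see in a hypothetical toy version of such a model: neurons fire independently given a shared, fluctuating rate. The uniform distribution of the rate and all parameter values below are my illustrative assumptions, not the paper's.

```python
import numpy as np

def latent_rate_model(T, N, seed=0):
    """Conditionally independent neurons: at each time step a global rate p_t
    is drawn, and every neuron fires independently with probability p_t.
    (Hypothetical toy model; the rate distribution is an arbitrary choice.)"""
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.1, 0.9, size=T)        # fluctuating global rate
    return (rng.random((T, N)) < p[:, None]).astype(float)

X = latent_rate_model(T=5000, N=50)
cov = np.cov(X, rowvar=False)
eig = np.sort(np.linalg.eigvalsh(cov))[::-1]

# One large collective eigenvalue from the shared rate; the remaining N-1
# eigenvalues are degenerate up to sampling noise, so their ranking is set
# purely by statistical errors in the estimate.
print(round(eig[0], 2), round(eig[1], 2), round(eig[-1], 2))
```

Since the bulk eigenvalues differ only by sampling noise, any "scaling" of their ordered values under the coarse-graining reflects estimation error rather than genuine structure.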

With our work, we have provided a systematic and solid framework for testing these methods and their relation to criticality. So far, we still lack a tight connection between phenomenological scaling along the renormalization procedure and criticality, so we need to come up with new and more precise ideas. Another fundamental question is how low-dimensional models are related to critical points, and whether we can write down more general null models that help us distinguish between critical and non-critical features that we might see in data.

Thank you for your attention! If you want to know more, you can scan the QR code to download the paper directly.