by Victor Buendia – Departamento de Electromagnetismo y Física de la Materia, Universidad de Granada, E-18071, Granada, Spain.

In this work, we studied a model proposed by Larremore et al. (Phys. Rev. Lett. 2014), where it was originally claimed that inhibition can give rise to stable states of low activity.
The model is as follows: each neuron can be either active or inactive. At each timestep, one selects a neuron, looks at its active neighbours, and sums all their contributions, which may be weighted. Then, the input Λ is passed through a response function, which returns a probability of activation (hence, it maps the input to a number in the interval [0,1]). The selected neuron activates with this probability. The response function used for most of the paper is shown at the bottom of the left panel; it is a piecewise linear function.
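For concreteness, here is a minimal Python sketch of one microscopic update, as I read it from the description above (function and variable names are illustrative, not taken from the original code):

```python
import numpy as np

def piecewise_linear(x):
    """Response function: clips the input to the interval [0, 1]."""
    return np.clip(x, 0.0, 1.0)

def update_one_neuron(state, weights, rng):
    """Pick a random neuron, sum the (possibly weighted) inputs from its
    active neighbours, and activate it with probability f(input).
    `state` is a 0/1 NumPy array, `weights` the (signed) connectivity matrix."""
    i = rng.integers(len(state))
    total_input = weights[i] @ state          # weights[i, j]: link from j to i
    state[i] = rng.random() < piecewise_linear(total_input)
    return state
```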
Originally, the model was run on weighted Erdos-Renyi networks and was analysed through the network's leading eigenvalue. For excitatory-only networks, activity starts when the leading eigenvalue equals 1, but for excitatory-inhibitory networks activity could be seen even for eigenvalues slightly below this value. How is this possible?
We decided to take a closer look at this mystery by simplifying the model and making analytical computations. On this first page, the left column shows a schematic description of the model, while the right column shows the differential equation for the activity (obtained from the microscopic model via the master equation) and its mean-field approximation: when the network is all-to-all, fluctuations around the mean input vanish, so the average of any function of the input equals the function of the average input.
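In symbols (my notation, which should correspond to the equations in the right column up to a constant setting the timescale): if s is the fraction of active neurons and Λ the input received by a randomly selected neuron, the update rule gives

\[
\dot{s} \;=\; -\,s \;+\; \big\langle f(\Lambda) \big\rangle ,
\qquad\text{mean field:}\qquad
\dot{s} \;\approx\; -\,s \;+\; f\big(\langle \Lambda \rangle\big),
\]

where the mean-field step replaces the average of the response by the response of the average input, which is exact only when the fluctuations of Λ around its mean vanish.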

The original model was run on directed Erdos-Renyi networks whose weights were distributed over a certain interval. In order to check whether the low activity came from closed loops in the network or from some fluctuation due to heterogeneity, we started eliminating elements one by one: (1) all links now have a homogeneous weight, +γ or −γ depending on the nature of the neuron (excitatory or inhibitory), and simulations were run on undirected networks; (2) networks are hyper-regular: each node has exactly the same number of excitatory and inhibitory connections, so a neuron in a network with average connectivity k=20 and α=0.2 (20% inhibitory neurons) always receives exactly 4 inhibitory inputs and 16 excitatory ones (though they are not all necessarily active at the same time); (3) we even removed the network entirely, connecting the nodes at random at each timestep (the “annealed” model, sketched below).
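A minimal sketch of the annealed, hyper-regular update (my own reconstruction of the description above, not the authors' code; it assumes neurons 0..n_exc-1 are excitatory and the rest inhibitory):

```python
import numpy as np

def annealed_step(state, n_exc, k, alpha, gamma, rng):
    """One update of the annealed model: the selected neuron receives input
    from exactly (1-alpha)*k excitatory and alpha*k inhibitory neurons,
    drawn at random at every step (no fixed network)."""
    n = len(state)
    k_inh = int(round(alpha * k))
    k_exc = k - k_inh
    i = rng.integers(n)                                   # neuron to update
    exc_in = rng.choice(n_exc, size=k_exc, replace=False)
    inh_in = n_exc + rng.choice(n - n_exc, size=k_inh, replace=False)
    total_input = gamma * (state[exc_in].sum() - state[inh_in].sum())
    # Piecewise-linear response: clip the input to [0, 1].
    state[i] = rng.random() < np.clip(total_input, 0.0, 1.0)
    return state
```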
For all these conditions, a low-activity intermediate (LAI) phase appears. This phase is delimited by two critical points: γec on the left, which coincides with the mean-field prediction for the excitatory-only system, and γc on the right, the mean-field prediction for the full system of excitatory and inhibitory neurons (compare the positions of γec and γc shown here with the ones given by the mean-field analysis on the previous page). These are genuine critical points because the transitions are continuous and the variance (σs) shows a peak that scales as expected.
As the connectivity k is increased (i.e., as we approach mean field), the activity of the LAI phase is reduced (see the top panel). Also, infinite system sizes are needed to detect activity at γec (as demonstrated in the inset).
The eigenvalue method mentioned above always coincides with the mean-field prediction, hence yielding the point γc. The condition eigenvalue = 1 corresponds to different average link weights depending on the number of inhibitory nodes. Instead of the network approach, we computed the probability distribution of the input to a given neuron, and used that distribution to evaluate all the averages beyond the mean-field approximation. The result is shown in the bottom panel, and it matches the simulation data exactly.
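This calculation can be sketched numerically: in the hyper-regular annealed setup, the numbers of active excitatory and inhibitory inputs are (approximately independent) binomial variables, so the average of the response can be computed exactly from their distribution. The code below is my own reconstruction of the idea, not the paper's script:

```python
import numpy as np
from scipy.stats import binom

def average_response(s, k, alpha, gamma):
    """<f(Lambda)> when each of the (1-alpha)*k excitatory and alpha*k
    inhibitory inputs is active independently with probability s."""
    k_inh = int(round(alpha * k))
    k_exc = k - k_inh
    f = lambda x: np.clip(x, 0.0, 1.0)        # piecewise-linear response
    avg = 0.0
    for n_e in range(k_exc + 1):
        for n_i in range(k_inh + 1):
            p = binom.pmf(n_e, k_exc, s) * binom.pmf(n_i, k_inh, s)
            avg += p * f(gamma * (n_e - n_i))
    return avg

def mean_field_response(s, k, alpha, gamma):
    """Mean-field counterpart: response of the average input."""
    k_inh = int(round(alpha * k))
    return np.clip(gamma * (k - 2 * k_inh) * s, 0.0, 1.0)
```

The stationary activity then solves the self-consistency condition s = average_response(s, k, alpha, gamma), which is, as I understand it, what the bottom panel compares against the simulations.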

The core result of the computations is a difference between the average of the response and the response of the average, which is exactly the difference between the exact equation and its mean-field approximation. The relation that links both averages is known in statistics as “Jensen's inequality”, so we coined the term “Jensen's force” for the stochastic force F able to sustain a low level of activity in the LAI phase (portrayed in the inset of the figure). In the end, the Jensen's force is nothing but the difference F = ⟨f(Λ)⟩ − f(⟨Λ⟩).
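Spelled out (my phrasing of the standard statistical statement):

\[
F \;\equiv\; \big\langle f(\Lambda) \big\rangle \;-\; f\big(\langle \Lambda \rangle\big) \;\ge\; 0
\quad\text{whenever } f \text{ is convex over the range of typical inputs.}
\]

At low activity the inputs concentrate around Λ = 0, where the piecewise-linear response (zero for negative inputs, linear for positive ones) is convex, so input fluctuations do not average out: they systematically push the activity upwards.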
At a microscopic level, in an excitatory-inhibitory system at low activity, only the excitatory part of the network is effectively active. Since the link weight is above the mean-field excitatory critical point, activity tends to grow until inhibition is activated and starts controlling excitation. Basically, at low activity the contributions to the average of the response come from inputs consisting of just one active excitatory neuron or just one active inhibitory neuron, and the second contribution is zero: negative inputs are disregarded by the response function.
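A back-of-the-envelope version of this argument (my own expansion, assuming the homogeneous-weight model with kE = (1−α)k excitatory and kI = αk inhibitory inputs and γ ≤ 1; the paper's exact prefactors may differ). At very small s the input is almost always 0, +γ (one active excitatory neighbour, probability ≈ kE·s), or −γ (one active inhibitory neighbour, probability ≈ kI·s), so

\[
\langle f(\Lambda)\rangle \simeq k_E\, s\, f(\gamma) + k_I\, s\, f(-\gamma) = k_E\, s\, \gamma,
\qquad
f\big(\langle \Lambda \rangle\big) \simeq (k_E - k_I)\, s\, \gamma,
\qquad
F \simeq k_I\, s\, \gamma > 0 .
\]

The exact equation then reads ṡ ≈ (kE γ − 1) s, so activity self-sustains as soon as γ > 1/kE, the excitatory-only mean-field threshold γec, whereas the mean-field equation alone would predict activity only above γc = 1/(kE − kI), consistent with the location of the two critical points described above.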
The dynamics of the LAI phase coincide with those of the asynchronous irregular state observed in the brain and in many classical computational-neuroscience models (see e.g. van Vreeswijk and Sompolinsky 1996). The bottom panel shows some of those characteristics: the coefficient of variation (CV) is around 1 in this phase, meaning that the dynamics are irregular and random; the cross-correlation (CC) between excitatory and inhibitory time series has its maximum at negative times, meaning that excitation precedes inhibition; and the pairwise correlation (PC) between random pairs of neurons decreases as 1/N, a characterization of asynchronous irregular states also proposed by Sompolinsky. In this case, however, correlation is maximal at the critical points, as one would expect from the theory of critical phenomena.
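For reference, a minimal sketch of how such quantities can be computed (my own illustrative helpers, assuming the CV is taken over a single neuron's inter-activation intervals and the CC over the excitatory/inhibitory population activities; these are not the scripts used for the figures):

```python
import numpy as np

def coefficient_of_variation(event_times):
    """CV of the intervals between consecutive activations of one neuron:
    CV close to 1 indicates Poisson-like, irregular dynamics."""
    intervals = np.diff(np.sort(event_times))
    return intervals.std() / intervals.mean()

def ei_cross_correlation(exc_activity, inh_activity):
    """Normalized cross-correlation between the excitatory and inhibitory
    population time series. With NumPy's lag convention, a peak at negative
    lag means the excitatory series leads the inhibitory one."""
    e = (exc_activity - exc_activity.mean()) / exc_activity.std()
    i = (inh_activity - inh_activity.mean()) / inh_activity.std()
    cc = np.correlate(e, i, mode="full") / len(e)
    lags = np.arange(-len(e) + 1, len(e))
    return lags, cc
```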

One could argue that the problem is that, in our mean-field model, both the silent and the active phase are absorbing, with s=0 and s=1 respectively. A phase where everybody is active has no dynamical behaviour, and therefore no complexity at all. If the transition were continuous, would the LAI phase be any different from an ordinary active phase?
In order to answer this question, we studied the effect of changing the response function from piecewise linear to a hyperbolic tangent, which leads to a continuous phase transition in mean field. The top panel shows that the LAI phase still appears (in mean field, the absorbing phase would extend up to γc, where activity would start growing continuously), and that its dynamical properties differ from those of the active phase: its coefficient of variation is much larger, the cross-correlation between excitatory and inhibitory time series is larger, and the peak representing the E/I lag is more pronounced.
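A quick check of why the saturating response changes the nature of the mean-field transition (assuming a response of the form f(Λ) = tanh(Λ) for Λ ≥ 0 and 0 otherwise, which is my reading of the setup): expanding the mean-field equation around s = 0,

\[
\dot{s} \;=\; -s + \tanh\!\big[(k_E - k_I)\,\gamma\, s\big]
\;\simeq\; \big[(k_E - k_I)\gamma - 1\big]\, s \;-\; \tfrac{1}{3}\,\big[(k_E - k_I)\gamma\big]^{3} s^{3},
\]

so the stationary activity grows continuously from zero once (kE − kI)γ exceeds 1, instead of jumping to the saturated state s = 1 as with the piecewise-linear response.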
Thanks for the answer! That makes sense. I’d be curious to see whether you do increase CV levels with a more complex topology/connectivity and what effects that would bring to the states that you observe in the system. For instance, if CV is also raised during the LAI phase, does it mean you lose the asynchronous irregular state?
Very interesting investigation of the Larremore model! I was surprised by the small CV at the second transition, between the LAI and the active phase. Does this mean there’s no gamma in this model capable of displaying large CV values that are found in experimental data?
Hi Tiago, thanks for your comment!
That is a very good question. In the setup that we are using, I believe it is not possible to obtain higher values of the CV. Take into account that this model has been run on hyper-regular networks, with the same weight for all the links, both excitatory and inhibitory. The only free parameter is the weight of the links, gamma, so what you see is the complete range of possibilities. In order to increase the CV, maybe a clever combination of heterogeneity in link weights and a more complex topology could lead to some multistability that enriches the system, but I am not sure.