**by Nicola Pedreschi, Wesley Clawson, Christophe Bernard, Pascale Quilichini, Alain Barrat & Demian Battaglia** (Aix-Marseille University – Institut de Neurosciences des Systèmes & Centre de Physique Théorique)


*Neural computation is associated with the emergence, reconfiguration, and dissolution of cell assemblies in the context of varying oscillatory states. Here, we describe the complex spatiotemporal dynamics of cell assemblies through temporal network formalism. We use a sliding window approach to extract sequences of networks of information sharing among single units in hippocampus and entorhinal cortex during anesthesia and study how global and node-wise functional connectivity properties evolve through time and as a function of changing global brain state (theta vs. slow-wave oscillations). First, we find that information sharing networks display, at any time, a core-periphery structure: The units participating in the core or in the periphery substantially change across time windows, with units entering and leaving the core in a smooth way. Second, we find that discrete network states can be defined on top of this continuously ongoing liquid core-periphery reorganization. Switching between network states results in a more abrupt modification of the units belonging to the core and is only loosely linked to transitions between global oscillatory states. Third, we characterize different styles of temporal connectivity that cells can exhibit within each state of the sharing network. Cells can change temporal connectivity style when the network changes state. Altogether, these findings reveal that the sharing of information mediated by the intrinsic dynamics of hippocampal and entorhinal cortex cell assemblies has a rich spatiotemporal structure, which could not have been identified by more conventional time-averaged analyses of functional connectivity.*

*Nicola Pedreschi, Christophe Bernard, Wesley Clawson, Pascale Quilichini, Alain Barrat, and Demian Battaglia. Dynamic core-periphery structure of information sharing networks in entorhinal cortex and hippocampus. Network Neuroscience 2020 4:3, 946-975*

Neural computation is associated with the emergence, reconfiguration, and dissolution of cell assemblies, i.e., ensembles of cells firing in tight synchrony, in the context of varying oscillatory states. In our work, we describe the complex spatiotemporal dynamics of cell assemblies through the temporal network formalism.

Single unit recordings of neuronal activity were acquired simultaneously from the CA1 region of the hippocampus and from the medial entorhinal cortex (mEC) (blue and orange in the top left of the figure, respectively) in 16 rats under anaesthesia (18 recordings in total). Following Clawson et al. (2019), we constructed time-resolved unweighted and weighted networks of functional connectivity, adopting a sliding window approach. Within each 10-s-long time window, we took connection weights (functional links; 0s and 1s if unweighted) between pairs of neurons (network nodes) to be proportional to the amount of *shared information* between their firing rates. Only links whose strength exceeded a general significance threshold were kept. We then slid the time window by 1 s, in order to achieve a 90% overlap between consecutive windows. This procedure thus maps each multichannel recording of length *T* seconds to a time series of *T* network representations, finally yielding a temporal network of information sharing among neurons, formed by the temporal succession of these *T* network snapshots. Cartoon representations of the temporal network snapshots *G*(*t_{a}*), *G*(*t_{b}*), and *G*(*t_{c}*) in the three highlighted time windows are shown.
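As a minimal sketch of this windowed construction (the histogram-based MI estimator, the bin count, and the fixed numeric threshold below are illustrative assumptions standing in for the significance-thresholded estimate used in the actual analysis):

```python
import numpy as np

def mutual_information(x, y, bins=4):
    """Plug-in MI estimate (bits) from a binned joint histogram of two
    firing-rate series (a crude stand-in for the paper's estimator)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def sharing_networks(rates, win=10, step=1, threshold=0.1):
    """Slide a 10-s window in 1-s steps (90% overlap) over an N x T matrix
    of 1-s firing rates; keep a weighted link i-j whenever the windowed MI
    exceeds the (here fixed, illustrative) threshold."""
    n, T = rates.shape
    snapshots = []
    for start in range(0, T - win + 1, step):
        w = rates[:, start:start + win]
        A = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                mi = mutual_information(w[i], w[j])
                if mi > threshold:
                    A[i, j] = A[j, i] = mi
        snapshots.append(A)
    return snapshots
```

A recording of *T* seconds thus yields one symmetric adjacency matrix per 1-s slide of the window.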

The two key questions regarding these temporal networks are: first, are the connections of the network stable in time, or rapidly changing? Second, does the network have a clear and specific structural organization, and if so, is it persistent in time or unstable and only transient? In order to answer the first question, we quantified, for each neuron *i*, how much its neighborhood changed between successive time windows: to this aim we computed, for each *i* and at each time *t*, the Jaccard index (cosine similarity for the weighted networks) *J_{i}*(*t*) between its neighborhoods at times *t* and *t* − 1. Values of these quantities close to or equal to 1 suggest that the node has not changed neighbors in successive time windows: hence its neighborhood shows low *liquidity*. On the contrary, values close to or equal to 0 mean that the neuron has completely changed neighbors between subsequent times: its neighborhood is highly liquid. At each time *t*, the Jaccard index values *J_{i}*(*t*), *i* ∈ [1, *N*], form the time-dependent feature vector **J**(*t*), of dimension *N*.
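The per-node liquidity measure can be written compactly from two consecutive binarized adjacency matrices; `neighborhood_jaccard` is an illustrative helper name, not a function from the paper's code:

```python
import numpy as np

def neighborhood_jaccard(A_prev, A_curr):
    """J_i(t): Jaccard index between node i's neighbor sets in two
    consecutive snapshots (1 = identical neighborhood, low liquidity;
    0 = completely renewed neighborhood, high liquidity)."""
    prev, curr = A_prev > 0, A_curr > 0
    inter = (prev & curr).sum(axis=1)
    union = (prev | curr).sum(axis=1)
    J = np.ones(len(inter))          # convention: J = 1 if both sets are empty
    np.divide(inter, union, out=J, where=union > 0)
    return J
```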

In order to answer the second question and probe for the presence of specific network architectures, we considered the *core-periphery* organization of the graph. We thus computed the *coreness coefficient* *C_{i}*(*t*) of each node *i* in each snapshot *t*, which quantifies how peripheral or centrally integrated each node is in the network. We thereby obtain a time-dependent vector **C**(*t*) of dimension *N* by computing at each time *t* the coreness *C_{i}*(*t*) for each node *i* ∈ [1, *N*] in the network of time window *t*.
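For illustration only, a node's coreness can be approximated by a normalized k-core number; this is a simplification (the coreness coefficient used in the paper is a continuous core-periphery measure, not the discrete k-core decomposition sketched here):

```python
import numpy as np

def coreness(A):
    """Normalized k-core number per node: iteratively peel minimum-degree
    nodes; a node's core number is the deepest k-core containing it,
    rescaled to [0, 1] so that 1 marks the innermost core."""
    adj = A > 0
    n = adj.shape[0]
    degree = adj.sum(axis=1).astype(int)
    alive = np.ones(n, dtype=bool)
    core = np.zeros(n, dtype=int)
    k = 0
    while alive.any():
        k = max(k, int(degree[alive].min()))
        # peel every remaining node whose residual degree is <= k
        while alive.any() and degree[alive].min() <= k:
            v = np.flatnonzero(alive & (degree <= k))[0]
            core[v] = k
            alive[v] = False
            degree[np.flatnonzero(adj[v] & alive)] -= 1
    return core / core.max() if core.max() > 0 else core.astype(float)
```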

If we look at the overall distribution of coreness values of nodes throughout a whole recording (on the right), we see that a majority of nodes have low coreness values and are therefore to be considered peripheral nodes (red dot), while there is only a small minority of core nodes (blue dot).

In order to follow dynamic changes in the coreness of individual neurons, we studied the time evolution of this feature for each neuron of each recording: on the left we plot the coreness *C^{w}_{i}*(*t*) versus time, for each node *i* ∈ [1, *N*] of a representative recording. The two highlighted lines in the figure represent the coreness evolution of two particular nodes. In light green, we show the instantaneous coreness of the node with maximum average coreness ⟨*C^{w}_{i}*(*t*)⟩_{T} (averaged over the recording length *T*). The figure shows clearly that this neuron's instantaneous coreness is always large: the corresponding neuron is persistently part of the network's core throughout the whole recording. This contrasts with the purple line, which displays the instantaneous coreness of the neuron with the largest coreness standard deviation *σ_{T}*(*C^{w}_{i}*(*t*)): the curve fluctuates from high to low coreness values, indicating that the corresponding neuron switches several times between central core positions in the network and more peripheral ones.

The continuous range of observed instantaneous coreness values and the fluctuations in individual coreness values indicate that the set of most central neurons changes in time. We thus examined whether some regions were contributing more than others to this core. To this aim, we define the core, at each time frame, as the set of neurons whose instantaneous coreness lies above the 95th percentile of the distribution. On the bottom right we then plot the *core filling factors* of the CA1 and mEC layers. We define the core filling factor of each region as the percentage of the overall number of neurons of the recording located in that region that belong to the core. We plot the time evolution of the core filling factors separately for neurons located in different hippocampal CA1 layers (light and dark blue lines, top panel) and for neurons in different medial entorhinal cortex (mEC) layers (red, orange, and yellow lines, center panel). The figure illustrates that the core filling factors vary substantially through time. In the example shown, the core filling factor of CA1 stratum pyramidale (SP) neurons increases from ~2% to nearly 7% during the recording.
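Given per-neuron instantaneous coreness values and a grouping of neurons into anatomical regions, the 95th-percentile core and the region-wise filling factors might be computed as follows (function and region names are hypothetical):

```python
import numpy as np

def core_filling_factors(coreness_t, regions):
    """Core at one time frame = neurons above the 95th percentile of the
    instantaneous coreness distribution; the filling factor of a region is
    the percentage of that region's neurons currently in the core."""
    threshold = np.percentile(coreness_t, 95)
    in_core = coreness_t > threshold
    return {name: 100.0 * in_core[idx].mean() for name, idx in regions.items()}
```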

For each recording and each time window *t*, we computed for each neuron *i* ∈ [1, *N*] several temporal network properties, tracking notably the "liquidity" of its neighborhood (Jaccard index and cosine similarity) and its position within the core-periphery architecture (weighted and unweighted instantaneous coreness values). To investigate how these properties change dynamically at the global network level, we computed for each of these four quantities the correlation between their values at different times, obtaining four correlation matrices of size *T* × *T*. For instance, the element (*t*, *t*′) of the unweighted liquidity correlation matrix is given by the Pearson correlation between the *N* values of the Jaccard coefficient computed at *t*, {*J_{i}*(*t*), *i* ∈ [1, *N*]}, and the *N* values computed at *t*′, {*J_{i}*(*t*′), *i* ∈ [1, *N*]}. The block-wise structure of these correlation matrices suggests the existence of epochs in time where neurons' feature values are strongly correlated (red blocks on and outside the diagonal). Each block on the diagonal (an epoch in which the node properties are strongly correlated) can be interpreted as a network connectivity configuration associated with specific liquidity and coreness assignments of the various neurons. We call these configurations *network states*.
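The construction of such a *T* × *T* correlation matrix, and the block structure produced by epochs of correlated features, can be demonstrated on toy data (the two "epochs" below are synthetic, not recorded data):

```python
import numpy as np

rng = np.random.default_rng(0)
# two synthetic epochs: per-neuron feature profiles that are stable within
# an epoch but differ between epochs (N = 20 neurons, T = 60 windows)
epoch_a = rng.normal(size=20)
epoch_b = rng.normal(size=20)
F = np.vstack([epoch_a + 0.1 * rng.normal(size=(30, 20)),
               epoch_b + 0.1 * rng.normal(size=(30, 20))])
# element (t, t') is the Pearson correlation between the N feature values
# at times t and t'
R = np.corrcoef(F)
within = R[:30, :30][np.triu_indices(30, 1)].mean()  # same-epoch pairs
across = R[:30, 30:].mean()                          # cross-epoch pairs
```

Same-epoch correlations come out high and cross-epoch correlations low, which is exactly the on-diagonal block structure described above.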

To quantitatively extract such discrete network states, we use the time series of the feature vectors **Θ**(*t*), **J**(*t*), **C**^{w}(*t*), and **C**(*t*). We concatenate these vectors two by two at each time (weighted with weighted, unweighted with unweighted). We then perform in each case (weighted and unweighted) an unsupervised clustering of these *T* 2*N*-dimensional feature vectors. As a result of this clustering procedure, we obtain a sequence of states (temporal clusters of the feature vectors) that the network finds itself in at different times (yellow state spectrum for the unweighted case, above; red for the weighted case, below).
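The clustering step might be sketched with a minimal k-means; this is an illustrative assumption, not a reproduction of the unsupervised clustering algorithm used in the paper:

```python
import numpy as np

def kmeans_states(X, k, iters=50, seed=0):
    """Minimal k-means: cluster the T feature vectors (rows of X) into k
    discrete network states and return the per-window state labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # squared distance of every window's feature vector to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(axis=0)
    return labels
```

For the weighted case, each row of `X` would be the concatenation of **Θ**(*t*) and **C**^{w}(*t*), e.g. `X = np.hstack([Theta, Cw])`.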

On the right are boxplots of the values of mutual information between the sequences of unweighted and weighted network states for all recordings (above, in light blue), and of those computed between the weighted (and unweighted) network state sequence and the sequence of global oscillatory states (theta, THE, or slow oscillations, SO). From these we note that network state switching can occur within each global oscillatory state. Nevertheless, it is possible that each given network state would tend to occur mostly within one specific global oscillatory state. To check whether this is the case, we computed for each network state the fraction of times that this state occurred during THE or SO epochs. On the far right, the light blue histogram corresponds to the fractions of time a network state manifested itself during the THE state (the dark blue histogram gives the same information for the SO state). Both histograms are markedly bimodal, indicating that a majority of states occur during either the THE or the SO state, but not both. In other words, network states are to a large degree oscillatory-state specific. The global oscillatory states therefore do not fully determine the observed coreness and liquidity, but most network states can be observed only during one specific global oscillatory state and not during the other.
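The mutual information between two discrete label sequences (e.g. network states vs. THE/SO labels) can be estimated from their joint histogram; `sequence_mutual_information` is a hypothetical helper name:

```python
import numpy as np

def sequence_mutual_information(a, b):
    """Mutual information (bits) between two discrete label sequences,
    estimated from their joint label histogram."""
    _, ia = np.unique(np.asarray(a), return_inverse=True)
    _, ib = np.unique(np.asarray(b), return_inverse=True)
    joint = np.zeros((ia.max() + 1, ib.max() + 1))
    np.add.at(joint, (ia, ib), 1)                 # joint label counts
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

Identical balanced binary sequences give 1 bit; independent sequences give 0.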

To refine our analysis, we now investigate and characterize the temporal network properties at the level of single neurons within each of the detected network states.

In order to do so, we computed, for each node and in each state, a set of dynamical features averaged over all time frames assigned to the specific state considered. We focus here on the weighted features, since the weighted and unweighted analyses provide similar results. The state-specific *connectivity profile* of a given neuron *i* in a given state includes: the node's strength; its connection number, i.e. the number of times the node transitions from having no edges to having at least one; its total connection time, i.e. the sum of the durations of the intervals in which the node has at least one edge; and the Fano factor, i.e. the ratio between the variance and the mean of the distribution of connection interval durations.
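Given a boolean time series marking whether a node has at least one edge in each frame of a state, these profile features might be extracted as follows (a sketch; the function and key names are illustrative):

```python
import numpy as np

def connectivity_profile(connected):
    """Per-node, per-state profile from a boolean series 'has >= 1 edge':
    number of connection onsets, total connected time, and the Fano factor
    (variance / mean) of the connected-interval durations."""
    c = np.asarray(connected, dtype=bool).astype(int)
    onsets = np.flatnonzero(np.diff(np.concatenate(([0], c))) == 1)
    offsets = np.flatnonzero(np.diff(np.concatenate((c, [0]))) == -1)
    durations = offsets - onsets + 1
    fano = durations.var() / durations.mean() if len(durations) else 0.0
    return {"n_connections": len(onsets),
            "total_time": int(c.sum()),
            "fano": float(fano)}
```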

Displaying on a radar plot the values of these network-state-averaged node-wise properties, we obtain what we call a connectivity profile for each node in each state. Traditionally, neuroscience has tried to identify hub neurons as important neurons that control the collective activity of neuronal populations better than others. In most cases, hubness has been determined uniquely in terms of large degree or strength in a static network description. Furthermore, the temptation has been strong to claim that hubs belong to a specific "neuronal elite", composed of neurons of specialized physiological types. In reality, we show here that both these ideas are incomplete. First, there are many different ways of "being important" in a temporal network, and our connectivity profiles allow us to capture some of them. Second, a neuron may not be "important" all the time by birth or technical specialization, but may just be elected as a hub at specific moments, later to become a "common person" again. We therefore find core nodes, indeed "hubs", that within a state act as streamers of information to a vast and persistent audience in a continuous manner; they are to be distinguished from peripheral nodes, behaving like callers that periodically update and are updated by a small portion of the debating (or information-processing) crowd. Then there are the free-lance and staff helpers, nodes that are neither exactly peripheral nor core nodes: they intervene in the general debate intermittently, the former in a random manner (whenever needed) and the latter in a rare but periodic manner (working hours only).

So we have seen a whole repertoire of important neuronal behaviours. Turning to the second question: is a node (a neuron) constantly a streamer of information, or a caller, or a helper, throughout a whole recording? We see that neurons do not have a fixed connectivity style but can change it across network states. Most neurons, in fact, switch to a different connectivity style from one network state to the next.

Very cool! Why do you think there’s drift in the coreness for some neurons? Do you think it’s due to the limited window of recording and that it would go back down and up again if recorded for an extended period of time? Or is it the anaesthesia? Is there any difference between rats?

Thanks for the question!

I imagine you are referring to the single-neuron coreness time series and the fact that some, very few, neurons do have quasi-constant (both high and low) coreness values.

Starting from the second question, there is no difference between rats: in all recordings (18 recordings for 16 rats) we find the same persistent (yet inherently "liquid" in terms of neuron recruitment into, or dismissal out of, the integrated core) core-periphery structure and the same repertoire of single-neuron coreness time series.

It is possible that the limited window of recording does not allow us to investigate, for example, whether some neurons display fluctuating coreness values on a much slower time scale: possibly, neurons that in our analysis display quasi-constant coreness values (especially those with high values, i.e. core nodes that remain so throughout the whole recording) are involved in more complex computational processing operations that unfold over longer times, and perhaps, if we were to record for longer, we would find fluctuations in their coreness values as well. However, most core neurons in our recordings behave as streamers of information only in some network states and display radically different computational roles in others.

I hope I answered your questions; I’d be happy to discuss further!