By: Nicole Voges
Institut de Neuroscience de la Timone, UMR 7289, Aix-Marseille Université, Marseille, France
Cognitive function arises from the coordinated activity of neural populations distributed over large-scale brain networks. However, it is challenging to understand how specific aspects of neural dynamics translate into operations of information processing and, ultimately, cognitive functions. To address this question, we combine novel approaches from information theory with computational simulations of canonical neural circuits, emulating well-defined cognitive functions. Specifically, we simulate circuits composed of one or multiple brain areas, each modeled as a 1D ring network of simple rate units. Despite its simplicity, such a model can give rise to rich neuronal dynamics [Roxin et al. 2005]. These models can be used to reproduce functions such as bottom-up transfer of stimuli, working memory, and even top-down attentional modulation [Ardid et al. 2007].
We then apply recent tools from the Information Dynamics framework to simulated data.
Information Dynamics is a novel theoretical approach that formalizes the decomposition of generic information processing into “primitive” operations of active storage, transfer, and modification of information [Wibral et al. 2017]. In particular, we analyze simulated recordings from our models, quantifying how their nonlinear dynamics implement a specific mix of these primitive processing operations, which varies with the emulated cognitive function. For instance, we show that the neuronal subsets maintaining sensory representations in working memory (via reverberant self-sustained activity) can be revealed by high values of the Active Information Storage metric. Likewise, the integration of top-down signals (mediated by nonlinear interactions between active sub-populations) is detected by increased values of information modification. Our models thus transparently highlight the capacity of information dynamics metrics to characterize which network units participate in cognition-related information processing, and how they do it. This capability can be exploited for the analysis of actual human MEG datasets.
This work has been supported by a postdoctoral grant to NV from the Institute of Language, Communication and the Brain (ILCB).
By means of computational modeling we develop a set of measures that are able to detect and track cognitive processing in the dynamics of neural circuits.
These measures are called “Information Processing Primitives” (IPPs).
Neural function emerges from neural dynamics, which in turn are grounded in neural structure. Structure, dynamics, and function are all, in principle, accessible to measurement, but how do they relate to each other?
What links them is an algorithm: a set of specific types of information processing through which dynamics build up to a function.
So, here we decompose a function performed by a circuit into simpler, low-level generic information processing operations, our “Information Processing Primitives”.
information can be held in mind, called “storage”,
information can be moved from where it was, called “transfer”
and distributed information can be locally integrated, called “information modification”.
Thus, IPPs are elementary computations to be performed on information.
Quantifying IPPs is probably difficult in empirical data.
Simulation, however, offers arbitrarily large amounts of data with well-known, controlled dynamical states
=> we know where & when to look for what!
1D ring networks are models that can reproduce cognitive functions, so we perform our algorithmic decompositions on simulations of such networks.
A 1D ring model (blue) represents a cortical area; it consists of 100 nodes representing cortical columns, coupled via spatially modulated excitatory/inhibitory connection probabilities.
It can show localized responses to stimuli, i.e., stimulus selectivity,
it can also exhibit working memory,
and a coupling of 2 rings can even exhibit bottom-up & top-down modulation.
Let’s start with two basic dynamical states of 1 ring: stationary uniform (SU) and stationary bump (SB) activity, as shown in the spatial maps (nodes vs time):
These maps show the color-coded firing rates averaged across 4000 trials.
Each trial receives a stimulus (red line) from time 100 to 250, centered at 1 out of 4 possible stimulus injection sites (red arrows).
During stimulation, both states show enhanced firing rates at the stimulus positions, called “activity bumps”.
Contrary to the SU state, the SB state retains these bumps after stimulation ends, visible when zooming in on the color scale.
Thus, SU activity shows a stimulus evoked bump while SB is a state with a self-maintained bump.
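The ring dynamics behind these two states can be sketched in a few lines. This is a minimal illustrative rate model in the spirit of Roxin et al. (2005), not the poster's actual simulation code: the cosine connectivity profile, the sigmoid transfer function, and all parameter values here are assumptions chosen for the sketch. Weaker spatial modulation (`J1`) gives an SU-like state where the stimulus-evoked bump decays, while stronger modulation can self-maintain the bump (SB-like).

```python
import numpy as np

def simulate_ring(N=100, T=500, dt=1.0, tau=10.0, J0=-2.0, J1=12.0,
                  stim_site=50, stim_on=100, stim_off=250,
                  stim_amp=3.0, stim_width=5.0):
    """Minimal 1D ring rate model (illustrative parameters only).

    Each node is a rate unit; connectivity follows a cosine profile
    J_ij = (J0 + J1*cos(theta_i - theta_j)) / N, i.e. local excitation
    and broad inhibition. A Gaussian stimulus bump is injected at
    stim_site between stim_on and stim_off.
    """
    theta = 2 * np.pi * np.arange(N) / N
    J = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N
    phi = lambda x: 1.0 / (1.0 + np.exp(-x))       # saturating transfer function
    idx = np.arange(N)
    d = np.minimum(np.abs(idx - stim_site), N - np.abs(idx - stim_site))
    stim = stim_amp * np.exp(-0.5 * (d / stim_width) ** 2)  # localized input
    steps = int(T / dt)
    r = np.zeros(N)
    rates = np.zeros((steps, N))
    for t in range(steps):
        I_ext = stim if stim_on <= t * dt < stim_off else 0.0
        r = r + dt / tau * (-r + phi(J @ r + I_ext))  # Euler integration
        rates[t] = r
    return rates
```

Running `simulate_ring()` yields a (time, nodes) array analogous to the spatial maps above, with elevated rates around the stimulated site during stimulation.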
This slide shows the results of applying our IPPs to ring model simulations.
First, the function of working memory is represented by the algorithmic decomposition associated with active and stimulus-specific information storage, calculated via the corresponding mutual informations.
Here, the spatial maps (nodes vs time) focus on node 50, the color code represents the amount of stored information which is high during stimulation and maximal at the stimulation sites.
It vanishes after stimulation for SU but not for the SB state, best seen when averaging across all nodes: the SB state (green line) exhibits persistent stimulus-specific working memory!
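The storage maps are mutual informations estimated across the trial ensemble. The following is a minimal plug-in sketch, assuming binned rates and a single-step past (a simplification of the full history embedding); real analyses typically rely on dedicated estimators rather than raw histograms.

```python
import numpy as np

def mutual_info(x, y, bins=4):
    """Plug-in mutual information (bits) between two 1-D samples,
    estimated from a joint histogram. A crude estimator, fine for a sketch."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

def active_information_storage(trials, t, k=1, bins=4):
    """AIS of one node at time t: I(past; present) across trials.

    trials: array (n_trials, T) of that node's rates. Using only a
    k-step past value is an illustrative simplification."""
    return mutual_info(trials[:, t - k], trials[:, t], bins=bins)
```

When a node's present activity is predictable from its own past across trials (as in a self-maintained bump), this quantity is high; for activity that is independent of its past, it drops toward zero.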
Second, the function of information propagation is represented by the algorithmic decomposition associated with information transfer, calculated via the transfer entropy.
Here we only show the results averaged across nodes:
Information transfer is maximal at stimulus onset and offset for both states (green & black lines).
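Transfer entropy from a source X to a target Y conditions the shared information on the target's own past, TE(X→Y) = I(Y_t; X_{t-1} | Y_{t-1}). A minimal plug-in sketch over the trial ensemble, again assuming binned rates and one-step histories (not the poster's actual estimator):

```python
import numpy as np

def _bin(v, bins):
    """Discretize a sample into equal-width bins over its range."""
    edges = np.linspace(v.min(), v.max() + 1e-12, bins + 1)
    return np.clip(np.digitize(v, edges) - 1, 0, bins - 1)

def transfer_entropy(x, y, t, bins=3):
    """TE(X->Y) at time t = I(Y_t ; X_{t-1} | Y_{t-1}) across trials.

    x, y: arrays (n_trials, T). One-step histories for simplicity."""
    yt, yp, xp = (_bin(v, bins) for v in (y[:, t], y[:, t - 1], x[:, t - 1]))
    joint = np.zeros((bins,) * 3)                 # p(y_t, y_past, x_past)
    np.add.at(joint, (yt, yp, xp), 1.0)
    joint /= joint.sum()
    p_yp = joint.sum(axis=(0, 2))                 # p(y_past)
    p_yt_yp = joint.sum(axis=2)                   # p(y_t, y_past)
    p_yp_xp = joint.sum(axis=0)                   # p(y_past, x_past)
    te = 0.0
    for i in range(bins):
        for j in range(bins):
            for k in range(bins):
                p = joint[i, j, k]
                if p > 0:
                    te += p * np.log2(p * p_yp[j] / (p_yt_yp[i, j] * p_yp_xp[j, k]))
    return te
```

A directed coupling (the target copying the source's past) yields high TE in that direction and near-zero TE in the reverse direction.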
The propagation of information can also be shown for a feed-forward chain of 3 rings, representing a bottom-up flow through the cortical hierarchy: the bottom ring R1 receives the stimulus and sends it on to hierarchically higher cortical areas.
Third, the algorithmic decomposition of information modification via the integration of input from two channels.
For this, we need 2 reciprocally connected rings R1 and R2 in 2 different modes called attend-IN and attend-OUT.
Ring1 (bottom) represents a sensory cortical area that receives input, while ring2 (top) represents a cortical area with working memory.
Attend-IN (left top) means that ring2 is in SB state – as shown in the spatial maps of the average firing rates with a memory bump!
And we now need 2 stimulation periods: a first one with a single stimulus position centered at node 50 (black bar in ring1), and a second stimulation, called CUE, at time 460 with 10 different stimulus positions: nodes 0, 10, 20, etc.
Thus, only at node 50 are the first and second stimulus positions identical, which is called a MATCH (red circle).
Only if there is a MATCH, and only in attend-IN, do we observe a firing rate enhancement upon CUE at the MATCH position, visible with a zoom-in on the color scale.
This enhancement is reflected in the synergy measured in ring1, as expressed in the formula based on the Partial Information Decomposition of Williams & Beer. We see maximal synergy upon CUE presentation at the flanks surrounding node 50, the MATCH position, present only in attend-IN (left) but not in attend-OUT. This shows the integration of information from the memory bump in ring2 (blue) and the match in ring1 (red).
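For discrete variables, the Williams & Beer decomposition with the Imin redundancy measure can be computed directly from the joint distribution of the target and the two sources. The sketch below is a generic minimal implementation of that measure, not the poster's analysis pipeline: synergy = I(T; S1,S2) − I(T; S1) − I(T; S2) + redundancy, with redundancy given by the minimum specific information over sources.

```python
import numpy as np

def _mi(pj):
    """Mutual information (bits) from a 2-D joint distribution p(t, s)."""
    pt, ps = pj.sum(1), pj.sum(0)
    nz = pj > 0
    return float((pj[nz] * np.log2(pj[nz] / np.outer(pt, ps)[nz])).sum())

def pid_synergy(p):
    """Synergy of two discrete sources about a target, following the
    Williams & Beer PID with the Imin redundancy.

    p[t, s1, s2]: joint distribution over target and both sources."""
    p = p / p.sum()
    pt = p.sum(axis=(1, 2))
    p_ts1, p_ts2 = p.sum(axis=2), p.sum(axis=1)
    p_t12 = p.reshape(p.shape[0], -1)            # sources taken jointly

    def specific(pts):
        """Specific information I(T = t; S) for each target value t."""
        ps = pts.sum(axis=0)
        out = np.zeros(pts.shape[0])
        for t in range(pts.shape[0]):
            for s in range(pts.shape[1]):
                if pts[t, s] > 0:
                    out[t] += (pts[t, s] / pt[t]) * np.log2(pts[t, s] / (ps[s] * pt[t]))
        return out

    red = float((pt * np.minimum(specific(p_ts1), specific(p_ts2))).sum())
    return _mi(p_t12) - _mi(p_ts1) - _mi(p_ts2) + red
```

The classic sanity checks: if the target is the XOR of two independent uniform bits, neither source alone carries information, and the full bit is synergy; if the target simply copies both (identical) sources, all information is redundant and synergy is zero.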