By: Ruomin Zhu, School of Physics, the University of Sydney


The rise of neuromorphic technologies brings artificial intelligence into a new regime, not only because these systems respond to electrical stimuli in a way similar to biological synapses, but also because they exhibit memory and brain-like dynamics, such as avalanches, that cannot be readily implemented in software-based artificial neural networks. Recent studies have demonstrated learning ability in neuromorphic systems composed of nanowires that self-assemble into a complex network topology.

The intrinsic dynamics of these neuromorphic networks are driven by external signals. One effective approach to studying these dynamics is through information-theoretic metrics. In this work, transfer entropy (TE) and active information storage (AIS) are employed to investigate information transduction and short-term memory in the networks. In addition, time-series analysis is performed to characterize the dynamics during network activation. The results suggest that central parts of the networks contribute the most to information flow. Most importantly, TE and AIS are found to be maximized when these networks are activated.

The performance of neuromorphic networks on benchmark tasks (memory and computing capacity) is demonstrated to depend on their internal states as well as their topological structure. The results indicate that performance is optimized when these networks are pre-initialized to a state approaching the transition from the quiescent to the active dynamical regime. Furthermore, networks with optimal computing resources (i.e. sufficiently dense/large networks) are identified for these benchmark tasks.

Panel a: Graphical representation of the network around activation.
In this work, the properties and dynamics of a nanowire network (NWN) are studied via its graphical representation. An external signal is applied to activate the system. During activation, a winner-takes-all (WTA) current pathway, consisting of a particular class of junctions, emerges in the network (yellow nodes). Edge colors represent the corresponding conductance at this time point (t = 1.4 s).


Panel b: A schematic figure showing how TE and AIS are calculated.
The transfer entropy (TE) on the edge connecting nodes 1 and 2 is calculated as the sum of the TE from node 1 to node 2 and the TE from node 2 to node 1. The outgoing TE of node 1 is calculated as the TE from node 1 to all of its adjacent nodes, and similarly for the incoming TE. The TE of node 1 is then the sum of its incoming and outgoing TE.
The active information storage (AIS) of an edge is calculated based on its past states.
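
As a concrete illustration of these definitions, below is a minimal sketch of plug-in (histogram) estimators for TE and AIS with one step of history, together with the edge TE from the schematic. The binning, history length and estimator choice are illustrative assumptions; the original analysis may use a dedicated information-dynamics toolkit with different settings.

```python
import numpy as np

def discretize(x, n_bins=8):
    """Bin a real-valued series into integer symbols 0..n_bins-1."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    return np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)

def transfer_entropy(src, dst, n_bins=8):
    """Plug-in TE(src -> dst) in bits, with one step of target history."""
    s, d = discretize(np.asarray(src), n_bins), discretize(np.asarray(dst), n_bins)
    x, y, z = s[:-1], d[:-1], d[1:]        # source past, target past, target next
    joint = np.zeros((n_bins,) * 3)
    np.add.at(joint, (z, y, x), 1.0)       # count (z, y, x) co-occurrences
    p_zyx = joint / joint.sum()
    p_yx = p_zyx.sum(axis=0)               # p(y_t, x_t)
    p_zy = p_zyx.sum(axis=2)               # p(y_{t+1}, y_t)
    p_y = p_zyx.sum(axis=(0, 2))           # p(y_t)
    te = 0.0
    for (zi, yi, xi), p in np.ndenumerate(p_zyx):
        if p > 0:   # p(z|y,x)/p(z|y) = p(z,y,x)*p(y) / (p(y,x)*p(z,y))
            te += p * np.log2(p * p_y[yi] / (p_zy[zi, yi] * p_yx[yi, xi]))
    return te

def active_information_storage(series, n_bins=8):
    """Plug-in AIS in bits: mutual information between next value and one-step past."""
    d = discretize(np.asarray(series), n_bins)
    y, z = d[:-1], d[1:]
    joint = np.zeros((n_bins, n_bins))
    np.add.at(joint, (z, y), 1.0)
    p_zy = joint / joint.sum()
    p_z, p_y = p_zy.sum(axis=1), p_zy.sum(axis=0)
    return sum(p * np.log2(p / (p_z[zi] * p_y[yi]))
               for (zi, yi), p in np.ndenumerate(p_zy) if p > 0)

def edge_te(v1, v2, n_bins=8):
    """TE of an edge: the sum of the two directed TEs, as in panel b."""
    return transfer_entropy(v1, v2, n_bins) + transfer_entropy(v2, v1, n_bins)
```

The node-level TE follows by summing `transfer_entropy` over a node's adjacent neighbors in each direction.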

Panel a: Network time series subject to a Mackey-Glass signal.
Red, green and beige shaded regions indicate pre-activation, activation and post-activation
periods, respectively. The dashed vertical green line indicates the activation time, coinciding
with formation of the first current path.
Top: The conductance time series of the network.
Bottom: The TE time series; a moving average with a window size of 0.1 s (100 steps) is applied.
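
For reference, below is a minimal sketch of a Mackey-Glass input signal and the moving average applied to the TE trace. The equation parameters (beta = 0.2, gamma = 0.1, n = 10, tau = 17) are the standard chaotic choice and are an assumption here, as are the integration step and initial history; only the 100-step (0.1 s) window comes from the caption above.

```python
import numpy as np

def mackey_glass(n_steps, dt=0.1, beta=0.2, gamma=0.1, n=10, tau=17.0):
    """Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)**n) - gamma*x(t)."""
    delay = int(round(tau / dt))
    x = np.zeros(n_steps + delay)
    x[:delay] = 1.2                        # constant initial history
    for t in range(delay, n_steps + delay):
        x_tau = x[t - delay]               # delayed value x(t - tau)
        x[t] = x[t - 1] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t - 1])
    return x[delay:]

def smooth(series, window=100):
    """Moving average; 100 steps corresponds to 0.1 s at a 1 ms sampling interval."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")
```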


Panel b: Snapshots of the network at different stages.

Nodes and edges are colored by their corresponding TE. Information dynamics are maximized around the network's activation. The regions with stronger TE are more likely to form a current pathway.

Page 3: Benchmarks of the network with different initial states.
A Mackey-Glass signal is delivered to the network. By varying the duration of the input signal, the network is pre-initialized to different internal states. Memory capacity (MC) and non-linear transformation (NLT) tests are then performed starting from those pre-initialized states.
Red, green and beige shaded regions indicate pre-activation, activation and post-activation
periods, respectively. The dashed vertical green line indicates the activation time, coinciding
with formation of the first current path.


Panel a: Memory capacity result with respect to different initial states.
The network is optimal for the MC test when its initial state is around activation.


Panel b: Active information storage of memory capacity tests.
AIS plays a key role in the MC test: AIS is maximized when the network is optimal for the MC test.


Panel c: Non-linear transformation performance with respect to different initial states.
The network’s performance in the NLT test is also optimized around its activation.


Panel d: Transfer entropy of non-linear transformation tests.
TE is maximized when the network is optimal for the NLT test.

Page 4: The influence of computing resources on the network’s performance.
In traditional computing chips, more transistors usually mean extra computing resources and thus better performance. Here, we draw an analogy: extra nanowires/connections in the network might likewise be beneficial for its performance. The same pre-initialized MC test from page 3 is applied to networks with varying computing resources.


Panel a: The MC performance of networks with 100 nanowires and sparse connections.
Networks with higher average degree, i.e., more connections, exhibit better performance in the MC test. The optimal performance of each network is achieved with an appropriate pre-initialization time.


Panel b: The MC performance of networks as a function of average degree.
Networks with 100 nanowires and average degrees varying from 5.22 to 93.68 are tested. For sparsely connected networks (⟨deg⟩ < 20), performance increases with average degree (i.e., with computing resources). For networks with 20 < ⟨deg⟩ < 70, performance plateaus. For ⟨deg⟩ > 70, performance drops significantly.
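
To make the "computing resources" axis concrete: average degree relates to connection count via ⟨deg⟩ = 2E/N. The short sketch below works out the corresponding junction counts; the random graphs are an illustrative stand-in, not the physical nanowire model.

```python
import networkx as nx

# <deg> = 2E/N, so for N = 100 nanowires, <deg> = 5.22 and 93.68 correspond
# to roughly 261 and 4684 junctions, respectively.
N = 100
for target in (5.22, 20.0, 70.0, 93.68):
    g = nx.gnp_random_graph(N, p=target / (N - 1), seed=0)
    avg_deg = 2 * g.number_of_edges() / N
    print(f"target <deg> = {target:5.2f}, realized <deg> = {avg_deg:5.2f}")
```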


Panel c: The MC performance of networks with varying number of nanowires.
The average degrees of these networks are controlled to be similar. Networks with more nanowires (more computing resources) are more likely to achieve significantly better performance.

2 thoughts on “Virtual Poster #32 – Information dynamics in neuromorphic nanowire networks”

    1. Hi Wesley! Thanks for asking!

So for the Memory Capacity task, what we do is deliver some randomly sampled signal into the system, then try to fit the past inputs from the network's current state.

For the non-linear transformation, it's a rather simple task just to test how "non-linear" or diverse the states of the network can be. We did that by inputting a sinusoidal wave and using the states of the network to fit things like a square wave/cos-wave, etc. Basically just something that cannot be generated by a linear combination of the original signal.

      If you are looking for more details or references, maybe we can talk about it through email! My email address is: rzhu0837@uni.sydney.edu.au
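
A minimal sketch of the memory capacity measure described in this reply: for each delay k, fit the past input u(t - k) from the network's current states and sum the squared correlations. The linear ridge readout and the maximum delay of 30 are illustrative assumptions, not values stated on the poster.

```python
import numpy as np

def memory_capacity(states, u, max_delay=30, ridge=1e-6):
    """states: (T, n_nodes) network readout matrix; u: (T,) random input series."""
    T, n = states.shape
    mc = 0.0
    for k in range(1, max_delay + 1):
        X, target = states[k:], u[:-k]     # reconstruct u(t - k) from state at t
        # Ridge-regularized least squares for the linear readout weights.
        w = np.linalg.solve(X.T @ X + ridge * np.eye(n), X.T @ target)
        pred = X @ w
        mc += np.corrcoef(pred, target)[0, 1] ** 2
    return mc
```

The NLT test fits the same kind of readout, but with a sinusoidal input and a nonlinear target (e.g. a square wave) that no linear combination of the input alone could produce.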
