by Alon Loeffler (The University of Sydney)

Graph theory has been extensively applied to the topological mapping of complex networks, ranging from social networks to biological systems. It has also increasingly been applied to neuroscience as a method to explore the fundamental structural and functional properties of human neural networks. Here, we apply graph theory to a model of a novel neuromorphic system constructed from self-assembled nanowires (ASNs), whose structure and function may mimic those of human neural networks. Simulations of neuromorphic nanowire networks allow us to directly examine their topology at the individual nanowire–node scale, a type of investigation that is currently practically impossible experimentally. We apply network cartographic approaches to compare neuromorphic nanowire networks with random networks (including an untrained artificial neural network), grid-like networks, and the structural network of C. elegans. We also run simulations of these networks and apply functional connectivity measures to determine how they differ. Our results demonstrate that neuromorphic nanowire networks exhibit a small-world architecture similar to that of the biological network of C. elegans, and significantly different from random and grid-like networks. Furthermore, neuromorphic nanowire networks appear more segregated and modular than random, grid-like and simple biological networks, and more clustered than artificial neural networks. Given the inextricable link between structure and function in neural networks, these results may have important implications for mimicking cognitive functions in neuromorphic nanowire networks.

Neuromorphic nanowire networks are self-assembled networks made from polymer-coated metal nanowires, such as silver or titanium. These networks exhibit recurrent nonlinear dynamics that are seen as essential for brain-like function (Avizienis et al., PLoS One, 2012). The inherent complexity of the self-assembled nanowire network confers an advantage over existing neuromorphic technologies composed of conventional, regularly and sparsely arranged electronic device components.


Albert, R., & Barabási, A. L. (2002). Statistical mechanics of complex networks. Reviews of Modern Physics, 74(1), 47–97.

Demis, E. C., Aguilera, R., Sillin, H. O., Scharnhorst, K., Sandouk, E. J., Aono, M., Stieg, A. Z., & Gimzewski, J. K. (2015). Atomic switch networks – Nanoarchitectonic design of a complex system for natural computing. Nanotechnology, 26(20), 204003.

Diaz-Alvarez, A., Higuchi, R., Sanz-Leon, P., Marcus, I., Shingaya, Y., Stieg, A. Z., Gimzewski, J. K., Kuncic, Z., & Nakayama, T. (2019). Emergent dynamics of neuromorphic nanowire networks. Scientific Reports, 9(1), 14920.

Loeffler, A., Zhu, R., Hochstetter, J., Li, M., Fu, K., Diaz-Alvarez, A., Nakayama, T., Shine, J. M., & Kuncic, Z. (2020). Topological Properties of Neuromorphic Nanowire Networks. Frontiers in Neuroscience, 14, 184.

Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of ‘small-world’ networks. Nature, 393(6684), 440–442.

A comparison between biological neural networks and Neuromorphic Nanowire Networks (specifically Silver Nanowire networks).

When we zoom in to the individual junctions between wires, we see similar ‘synaptic’ switch-like behaviours.

Ag-PVP-Ag nanowire junctions exhibit memristive switching in response to electrical inputs (Diaz-Alvarez et al., Sci. Rep., 2019).

  • Synthetic synapses are modeled as voltage-controlled memristive junctions, in which a filament forms between wires that cross over, in response to electrical inputs.
  • Once the filament is fully formed, the resistance across the junction drops by orders of magnitude, resulting in a switch-like effect from ‘OFF’ to ‘ON’.
  • There is also capacity for electron tunneling as the filament approaches full formation, which we take into consideration in our models.
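The junction behaviour described above can be sketched as a simple state-variable model. This is a minimal illustration, not the authors' actual simulator: the filament state variable, thresholds, growth rate, and conductance values below are all illustrative placeholders, and the tunnelling term is approximated as a generic exponential in the residual filament gap.

```python
import numpy as np

def junction_conductance(lam, lam_max=0.15, g_on=1e-3, g_off=1e-8):
    """Conductance of a single Ag-PVP-Ag junction (illustrative values).

    Below full filament formation the junction conducts weakly via
    electron tunnelling, approximated here as exponential in the
    remaining gap; once the filament closes, conductance snaps ON.
    """
    if lam >= lam_max:                              # filament fully formed -> ON
        return g_on
    gap = lam_max - lam                             # residual filament gap
    return g_off * np.exp(-gap / (0.2 * lam_max))   # tunnelling regime

def step_filament(lam, v, dt=1e-3, v_set=0.1, eta=1.0, lam_max=0.15):
    """Advance the filament state one time step: the filament grows when
    |v| exceeds a set threshold and otherwise decays (volatile switching)."""
    if abs(v) > v_set:
        lam += eta * (abs(v) - v_set) * dt          # field-driven growth
    else:
        lam -= eta * v_set * dt                     # spontaneous decay
    return float(np.clip(lam, 0.0, lam_max))
```

Driving such a junction with a supra-threshold voltage grows the filament until the conductance jumps from the tunnelling regime to the ON state, reproducing the switch-like OFF-to-ON transition described above.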

To this end, we created a computer model of nanowire networks in which wires are randomly placed within a virtual 2D plane of fixed size (30 × 30 μm), with the horizontal and vertical positions of the wire centres drawn from a uniform spatial distribution. The angular orientation of each wire was drawn from a uniform distribution on [0, 2π). We also make the simplifying assumption that a junction is formed wherever two wires cross over; this has a negligible effect on network functionality compared with real experimental measurements (Diaz-Alvarez et al., 2019).
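The generation step can be sketched as follows. This is a simplified stand-in for the authors' model: the wire length of 7 μm and the fixed random seed are assumptions for illustration, and crossings are detected with a standard 2D segment-intersection test.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_wires(n=100, size=30.0, length=7.0):
    """Random wires: uniform centres in a size x size plane, uniform
    orientation on [0, 2*pi). The wire length is an illustrative value."""
    c = rng.uniform(0, size, (n, 2))
    th = rng.uniform(0, 2 * np.pi, n)
    d = 0.5 * length * np.column_stack((np.cos(th), np.sin(th)))
    return c - d, c + d                      # segment endpoints (p, q)

def segments_cross(p1, q1, p2, q2):
    """Standard orientation test for 2D segment intersection."""
    def orient(a, b, c):
        return np.sign((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    return (orient(p1, q1, p2) != orient(p1, q1, q2) and
            orient(p2, q2, p1) != orient(p2, q2, q1))

def junction_list(p, q):
    """Every crossing of two wires is taken to form a junction."""
    n = len(p)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if segments_cross(p[i], q[i], p[j], q[j])]
```

The resulting junction list defines the edges of the network graph, with wires as nodes, to which the graph-theoretic measures below can then be applied.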

Our model allows us to visualise nanowire networks graphically and compare them to other types of networks (such as C. elegans, or mathematical null models, e.g. Watts–Strogatz, 1998; Barabási–Albert, 2002) using graph theory measures.

Through this, we showed that self-assembled nanowire networks typically exhibit a small-world architecture, with low path length and high node clustering. This architecture has been observed in many biological and complex networks (such as C. elegans) and is thought to allow quick and efficient information transfer across the system (Loeffler et al., 2020).
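The two ingredients of small-worldness can be computed directly from the network graph. A common summary is the small-world coefficient σ = (C/C_rand)/(L/L_rand), where C is the mean clustering coefficient, L the average shortest path length, and C_rand, L_rand the same quantities for a degree-matched random network; σ > 1 indicates small-world structure. A minimal implementation on an adjacency-set representation (this is a generic textbook computation, not the authors' specific pipeline):

```python
from collections import deque
from itertools import combinations

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph given
    as {node: set(neighbours)}."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue                        # clustering undefined -> counts as 0
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all reachable node pairs (BFS)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

def small_world_sigma(adj, c_rand, l_rand):
    """Sigma > 1 indicates small-world structure relative to a random
    baseline with matching size and degree (c_rand, l_rand)."""
    return (clustering(adj) / c_rand) / (avg_path_length(adj) / l_rand)
```

For example, a complete graph on four nodes has C = 1 and L = 1, while a five-node ring has C = 0 and L = 1.5, bracketing the high-clustering/short-path regime that small-world networks combine.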

We also found that nanowire networks typically have more highly segregated modules than both null models and C. elegans (Loeffler et al., 2020).
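Segregation here is quantified by Newman's modularity Q = Σ_c [L_c/m − (d_c/2m)²], summed over communities c, where L_c is the number of edges inside community c, d_c the total degree of its nodes, and m the total edge count. A minimal sketch (the partition itself would come from a community-detection algorithm, which is outside this snippet):

```python
def modularity(adj, communities):
    """Newman modularity Q of a given partition.

    adj:         {node: set(neighbours)} undirected graph
    communities: list of sets of nodes, one set per module
    """
    m = sum(len(nbrs) for nbrs in adj.values()) / 2.0       # total edges
    q = 0.0
    for comm in communities:
        internal = sum(1 for v in comm for w in adj[v] if w in comm) / 2.0
        degree = sum(len(adj[v]) for v in comm)
        q += internal / m - (degree / (2.0 * m)) ** 2
    return q
```

On two triangles joined by a single bridge edge, splitting at the bridge gives Q = 6/7 − 1/2 ≈ 0.36, while lumping everything into one module gives Q = 0, matching the intuition that higher Q means more segregated modules.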

Our computational model also allows us to simulate the functionality of NNNs. The model captures the physical network architecture, the individual synaptic dynamics, and the system-level collective behaviour that emerges from nonlinear network dynamics.

To test how the structure of NNNs affects their function, we applied two benchmark reservoir computing tasks, using NNNs as the reservoir. The task presented here is a nonlinear transformation (or wave-transformation) task, together with some preliminary results for a memory capacity task:

  1. Nonlinear Transformation (NLT): Feed a sinusoidal AC input signal into an input node (source). Select one node as the drain node so that a current path forms between source and drain; this is the ‘training’ period. Next, we use the voltage at each node in the network after training as a readout and train a linear regression model on these outputs. The training target is a square wave with the same frequency as the input wave.
  2. Memory Capacity (MC): Similar method, except that we convert a series of random integers into a temporal signal, and readout nodes are trained to reproduce a delayed version of the input sequence.
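The NLT pipeline can be sketched end to end. Note a loud assumption: rather than the full memristive nanowire simulation, a generic random tanh network stands in for the reservoir here, so only the readout-training structure (node states as features, ridge/linear regression to a square-wave target) mirrors the task described above; all sizes and scalings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in reservoir: the real task uses simulated nanowire node
# voltages; here a random recurrent tanh network plays that role.
n_nodes, T = 50, 1000
w_in = rng.uniform(-1, 1, n_nodes)
w_res = rng.normal(0, 1, (n_nodes, n_nodes))
w_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(w_res)))  # tame spectral radius

t = np.linspace(0, 8 * np.pi, T)
u = np.sin(t)                        # sinusoidal AC input signal
target = np.sign(u)                  # square wave, same frequency

# Drive the reservoir and record node states ("voltages") as readouts
x = np.zeros(n_nodes)
states = np.empty((T, n_nodes))
for k in range(T):
    x = np.tanh(w_in * u[k] + w_res @ x)
    states[k] = x

# Linear readout trained by ridge regression on the node states
ridge = 1e-6
w_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_nodes),
                        states.T @ target)
pred = states @ w_out
mse = np.mean((pred - target) ** 2)
```

The MC task follows the same pattern, but with a random temporal input and targets that are time-delayed copies of that input, with capacity summed over delays.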


a) Modularity:

  • NNNs = medium-highly segregated.
    • NNNs that perform best tend to have a modularity balanced between segregation and integration (0.4–0.6).
  • Random networks + BA scale-free = medium-highly integrated.
    • The modularity range for higher performance in random networks (0.2–0.3) is more highly integrated than in NNNs.

b) Average Degree:

  • Random networks: ↑ average degree = ↓ performance (beyond a threshold at avg. degree ≈ 20)
  • NNNs with higher average degree do not fail as often, potentially indicating more flexibility in task performance

c) Small Worldness:

  • Neuromorphic Nanowire Networks (NNN) = relatively high Small Worldness (↓ SW may indicate some ↑ performance)
  • Random Networks + BA scale-free networks = low Small Worldness
    • SW is not always necessary for high performance.

d) NLT vs MC:

  • Networks that do well (or poorly) on MC tend to do well (or poorly) on NLT too, and vice versa.
  • There are exceptions, such as networks that perform well on NLT but poorly on MC.
    • This may occur in more segregated networks, in which fewer parts of the network are activated, weakening memory capacity.
  • The opposite also holds: in highly integrated networks (e.g. BA or random networks), memory capacity is much stronger because more sections of the network are activated.

Network-level graph theory measures such as small-worldness and modularity do not paint the whole picture of why some networks perform better than others. We are currently exploring node-level measures, such as participation coefficient, centrality and degree, in the hope of filling these gaps.
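Of the node-level measures mentioned, the participation coefficient is the least standard, so a brief sketch may help: for node i, P_i = 1 − Σ_s (k_is/k_i)², where k_is is the number of node i's links into module s and k_i its total degree. P near 0 means a node's links stay within its own module; P near 1 means they are spread evenly across modules. This is the standard Guimerà–Amaral definition, not code from the authors' analysis:

```python
def participation_coefficient(adj, membership):
    """Participation coefficient P_i = 1 - sum_s (k_is / k_i)^2.

    adj:        {node: set(neighbours)} undirected graph
    membership: {node: module label}
    """
    p = {}
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k == 0:
            p[v] = 0.0                      # isolated node: define P = 0
            continue
        per_module = {}
        for w in nbrs:
            s = membership[w]
            per_module[s] = per_module.get(s, 0) + 1
        p[v] = 1.0 - sum((c / k) ** 2 for c in per_module.values())
    return p
```

In a network of two triangles joined by a bridge, the bridge endpoints get P = 4/9 while purely intra-module nodes get P = 0, so the measure singles out exactly the integrative "connector" nodes that network-level modularity averages away.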
