By Michelle Girvan (College Park, MD).
Empirical evidence suggests that biological and technological networks may have evolved to operate near the tipping point between quiescence and large-scale perturbation amplification in order to maximize functionality. However, proximity to criticality as an explanation for observed power-law distributions in a wide range of systems remains controversial, with many critics arguing that precise external or self-tuning to the critical point is implausible. Here we explore the dynamical and structural origins of ‘robust critical-like dynamics’ and their utility in building brain-inspired machine learning systems. We say that a system exhibits robust critical-like dynamics if, over a wide range of parameters away from the phase transition, it displays statistics that approximately match the power-law behavior expected at criticality. We hypothesize that systems exhibiting robust critical-like dynamics can leverage the functional advantages of criticality without the need for complex tuning mechanisms. We explore these questions using the machine learning framework of reservoir computing and find that an appropriate excitatory/inhibitory balance dramatically improves the robustness of the system’s information processing capabilities.