Non-random brain connectome wiring enables robust and efficient neural network function under high sparsity
McAllister, J.; Houghton, C. J.; Wade, J.; O'Donnell, C.
The connectivity of brain networks is extremely sparse due to metabolic, physical and spatial constraints. Although wiring sparsity can confer computational advantages for biological and artificial neural networks, sparse networks require fine parameter tuning and exhibit strong sensitivity to perturbations. How brains achieve their efficiency and robustness is unclear. Here we addressed this by analysing the dynamical properties of Echo State Networks with wiring based on the Drosophila melanogaster fruit fly connectome, compared with sparsity-matched random-wiring networks. We evaluated these networks on a set of eight cognitive tasks, and found that connectome-based neural networks (CoNNs) typically showed narrowly distributed task engagement across their neurons. The importance of a neuron for task performance correlated with its node degree, local clustering, and self-recurrency, and these correlations were stronger in CoNNs than in random networks. CoNNs were more robust to neuronal loss, retaining their task performance and beneficial dynamical properties, such as criticality and spectral radius, better than random networks. Similarly, CoNNs were more robust to hyperparameter variations in both input and recurrent weight scaling. Using theoretical arguments and numerical simulations, we show that excess CoNN node self-recurrency is sufficient to explain this enhanced robustness. Overall, these results identify non-random features of connectome wiring that allow brains to reconcile extreme sparsity with reliable computation.

Significance
Brain networks support robust computation even though they operate under extreme wiring sparsity due to metabolic and spatial constraints. While sparse networks typically require fine-tuning and are sensitive to perturbations, we show that biological connectomes support specialised, efficient task engagement and remain robust to neuron loss and parameter variation. We identify excess neuronal self-recurrency as a key structural feature underlying this stability. These results reveal how non-random connectivity stabilises computation in extremely sparse networks, providing principles for understanding brain function and designing robust, efficient artificial neural systems.
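The abstract's setup — a sparse random reservoir rescaled to a target spectral radius, with an adjustable self-recurrency term on the diagonal — can be sketched as a minimal Echo State Network. This is an illustrative reconstruction, not the authors' code: the network size, sparsity level, spectral radius, and the `self_rec` knob are all assumed parameters for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n=200, sparsity=0.05, spectral_radius=0.9, self_rec=0.0):
    """Sparse random reservoir rescaled to a target spectral radius.

    self_rec adds a uniform diagonal term, a crude stand-in for the
    'excess node self-recurrency' the paper identifies (hypothetical knob).
    """
    # Keep only ~sparsity fraction of random Gaussian weights
    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < sparsity)
    W += self_rec * np.eye(n)
    # Rescale so the largest eigenvalue magnitude hits the target
    W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max()
    return W

def esn_step(W, W_in, x, u, leak=1.0):
    """Standard leaky-integrator ESN state update."""
    return (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)

n, n_in = 200, 3
W = make_reservoir(n)
W_in = 0.1 * rng.standard_normal((n, n_in))  # input weight scaling
x = np.zeros(n)
for _ in range(50):  # drive the reservoir with random input
    x = esn_step(W, W_in, x, rng.standard_normal(n_in))
```

Keeping the spectral radius just below 1 is the usual way to place the reservoir near the "echo state" regime; the paper's robustness comparisons then ask how performance degrades as this and the input scaling are perturbed, or as neurons (rows/columns of `W`) are removed.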