Learning better with Dale's Law: A Spectral Perspective

Li, P.; Cornford, J.; Ghosh, A.; Richards, B.

2023-06-30 · neuroscience
bioRxiv · DOI: 10.1101/2023.06.28.546924
Abstract

Most recurrent neural networks (RNNs) do not include a fundamental constraint of real neural circuits: Dale's Law, which implies that neurons must be either excitatory (E) or inhibitory (I). Dale's Law is generally absent from RNNs because simply partitioning a standard network's units into E and I populations impairs learning. However, here we extend a recent feedforward bio-inspired EI network architecture, named Dale's ANNs, to recurrent networks, and demonstrate that good performance is possible while respecting Dale's Law. This raises the question: What makes some forms of EI network learn poorly and others learn well? And why does the simple approach of incorporating Dale's Law impair learning? Historically, the answer was thought to be the sign constraints on EI network parameters, and this was a motivation behind Dale's ANNs. However, here we show that the spectral properties of the recurrent weight matrix at initialisation have a greater impact on network performance than the sign constraints. We find that simple EI partitioning results in a singular value distribution that is multimodal and dispersed, whereas standard RNNs have a unimodal, more clustered singular value distribution, as do recurrent Dale's ANNs. We also show that the spectral properties and performance of partitioned EI networks are worse for small networks with fewer I units, and we present normalised SVD entropy as a measure of spectrum pathology that correlates with performance. Overall, this work sheds light on a long-standing mystery in neuroscience-inspired AI and computational neuroscience, paving the way for greater alignment between neural networks and biology.
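The abstract's key diagnostic, normalised SVD entropy, is easy to reproduce in spirit. Below is a minimal sketch, assuming the common definition (Shannon entropy of the singular values normalised to sum to one, divided by log n so a flat spectrum scores 1) and a deliberately naive EI partition in which each presynaptic column has a fixed sign. The exponential magnitude distribution, the 10% inhibitory fraction, and all names here are illustrative assumptions, not the paper's exact initialisation.

```python
import numpy as np

def normalised_svd_entropy(W):
    """Shannon entropy of the normalised singular value spectrum,
    divided by log(n) so a perfectly flat spectrum scores 1.0."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]                      # guard against log(0)
    return float(-(p * np.log(p)).sum() / np.log(len(s)))

rng = np.random.default_rng(0)
n, n_inh = 100, 10                    # units, inhibitory units (assumed 10%)

# Standard unconstrained initialisation: zero-mean Gaussian.
W_std = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))

# Naive EI partition: non-negative magnitudes, with the sign of each
# presynaptic column fixed by unit type (E columns +, I columns -).
mags = rng.exponential(1.0 / np.sqrt(n), size=(n, n))
signs = np.ones(n)
signs[-n_inh:] = -1.0
W_ei = mags * signs                   # broadcasts one sign per column

print("standard RNN init :", normalised_svd_entropy(W_std))
print("naive EI partition:", normalised_svd_entropy(W_ei))
```

In this toy setup the EI matrix's non-zero-mean columns contribute a large rank-one component, producing an outlier singular value and a dispersed spectrum, so it scores markedly lower than the Gaussian baseline; that is the qualitative pathology the abstract associates with poor learning.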

Matching journals

The top 4 journals are the smallest set covering 50% of the predicted probability mass (57.3% cumulative).

Rank  Journal                                          Training papers  Top %  Probability
1     PLOS Computational Biology                       1633             0.8%   22.3%
2     Neural Computation                               36               0.1%   18.4%
3     Proceedings of the National Academy of Sciences  2130             8%     8.3%
4     Frontiers in Computational Neuroscience          53               0.3%   8.3%
5     Neural Networks                                  32               0.1%   4.8%
6     Scientific Reports                               3102             38%    3.5%
7     eLife                                            5422             36%    2.1%
8     Nature Communications                            4913             49%    1.9%
9     Network Neuroscience                             116              0.5%   1.8%
10    eNeuro                                           389              5%     1.8%
11    Frontiers in Neuroscience                        223              4%     1.7%
12    PLOS ONE                                         4510             57%    1.5%
13    Neuron                                           282              7%     1.2%
14    Frontiers in Neural Circuits                     36               0.5%   0.9%
15    Entropy                                          20               0.4%   0.8%
16    Cell Reports                                     1338             32%    0.8%
17    iScience                                         1063             30%    0.8%
18    Neurocomputing                                   13               0.5%   0.8%
19    Frontiers in Cellular Neuroscience               79               1%     0.7%
20    Journal of Computational Neuroscience            23               0.4%   0.7%
21    The Journal of Neuroscience                      928              9%     0.6%
22    Biological Cybernetics                           12               0.3%   0.6%
23    Physical Review E                                95               1%     0.6%
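As a quick check of the 50% statement above, here is a short sketch with the probabilities transcribed from the table; treating the threshold as the smallest prefix of the ranking whose cumulative mass reaches 50% is an assumption about how the tool defines it.

```python
# Predicted probabilities (%) in ranked order, from the table above.
probs = [22.3, 18.4, 8.3, 8.3, 4.8, 3.5, 2.1, 1.9, 1.8, 1.8, 1.7,
         1.5, 1.2, 0.9, 0.8, 0.8, 0.8, 0.8, 0.7, 0.7, 0.6, 0.6, 0.6]

cumulative = 0.0
for k, p in enumerate(probs, start=1):
    cumulative += p
    if cumulative >= 50.0:
        print(f"top {k} journals cover {cumulative:.1f}% of the mass")
        break
# -> top 4 journals cover 57.3% of the mass
```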