Operative dimensions in unconstrained connectivity of recurrent neural networks

Krause, R.; Cook, M.; Kollmorgen, S.; Mante, V.; Indiveri, G.

2022-06-05 · neuroscience
bioRxiv · doi:10.1101/2022.06.03.494670

Recurrent Neural Networks (RNNs) are commonly used models to study neural computation. However, a comprehensive understanding of how dynamics in RNNs emerge from the underlying connectivity is largely lacking. Previous work derived such an understanding for RNNs fulfilling very specific constraints on their connectivity, but it is unclear whether the resulting insights apply more generally. Here we study how network dynamics are related to network connectivity in RNNs trained without any specific constraints on several tasks previously employed in neuroscience. Despite the apparent high-dimensional connectivity of these RNNs, we show that a low-dimensional, functionally relevant subspace of the weight matrix can be found through the identification of operative dimensions, which we define as components of the connectivity whose removal has a large influence on local RNN dynamics. We find that a weight matrix built from only a few operative dimensions is sufficient for the RNNs to operate with the original performance, implying that much of the high-dimensional structure of the trained connectivity is functionally irrelevant. The existence of a low-dimensional, operative subspace in the weight matrix simplifies the challenge of linking connectivity to network dynamics and suggests that independent network functions may be placed in specific, separate subspaces of the weight matrix to avoid catastrophic forgetting in continual learning.
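The core idea can be sketched in a few lines: take candidate components of the recurrent weight matrix, remove each one, and score it by how strongly its removal perturbs the network's local state updates; the highest-scoring ("operative") components are then used to rebuild a low-rank weight matrix. The sketch below is a simplified illustration under stated assumptions, not the paper's exact procedure: it uses the singular-vector components of W as candidate dimensions and a plain discrete-time tanh RNN, and the toy matrix W is random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # number of recurrent units (toy size)
# Stand-in for a trained recurrent weight matrix (random here, for illustration).
W = rng.standard_normal((N, N)) / np.sqrt(N)

def rnn_step(W, x):
    """One discrete-time step of a simple tanh RNN: x_{t+1} = tanh(W x_t)."""
    return np.tanh(W @ x)

def dynamics_change(W, W_ablated, states):
    """Mean change in the local state update when a component of W is removed."""
    return np.mean([np.linalg.norm(rnn_step(W, x) - rnn_step(W_ablated, x))
                    for x in states])

# Sample states along a trajectory of the intact network.
states = []
x = rng.standard_normal(N)
for _ in range(200):
    x = rnn_step(W, x)
    states.append(x.copy())

# Candidate dimensions: here, the rank-1 singular components of W (an
# assumption of this sketch; the paper defines operative dimensions via
# their influence on local dynamics, not via the SVD of W itself).
U, s, Vt = np.linalg.svd(W)

# Score each component by how much its removal perturbs the dynamics.
scores = []
for k in range(N):
    W_ablated = W - s[k] * np.outer(U[:, k], Vt[k])
    scores.append(dynamics_change(W, W_ablated, states))

# "Operative" dimensions = components whose removal changes dynamics most.
n_keep = 5
operative = np.argsort(scores)[::-1][:n_keep]

# Rebuild a low-rank weight matrix from only the operative components.
W_low = sum(s[k] * np.outer(U[:, k], Vt[k]) for k in operative)
```

In the paper's setting, a reconstruction like `W_low`, built from only a few such dimensions, suffices to recover the trained network's original task performance; the remaining high-dimensional structure of W is functionally irrelevant.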

Matching journals

The top 5 journals account for 50% of the predicted probability mass.

Rank  Journal                                          Papers in training set  Percentile  Probability
 1    PLOS Computational Biology                       1633                    Top 1%      18.8%
 2    Neural Computation                                 36                    Top 0.1%    14.8%
 3    Frontiers in Computational Neuroscience            53                    Top 0.2%     9.2%
 4    Proceedings of the National Academy of Sciences  2130                    Top 9%       6.9%
 5    Nature Communications                            4913                    Top 37%      4.0%
 6    Network Neuroscience                              116                    Top 0.3%     3.6%
 7    Scientific Reports                               3102                    Top 40%      3.3%
 8    Physical Review E                                  95                    Top 0.4%     2.7%
 9    Neural Networks                                    32                    Top 0.3%     2.6%
10    eNeuro                                            389                    Top 5%       1.9%
11    eLife                                            5422                    Top 38%      1.9%
12    Journal of Computational Neuroscience              23                    Top 0.2%     1.8%
13    Physical Review Research                           46                    Top 0.3%     1.7%
14    Cell Reports                                     1338                    Top 24%      1.7%
15    NeuroImage                                        813                    Top 4%       1.7%
16    Communications Biology                            886                    Top 17%      1.0%
17    Biological Cybernetics                             12                    Top 0.2%     1.0%
18    PLOS ONE                                         4510                    Top 64%      0.9%
19    Journal of The Royal Society Interface            189                    Top 5%       0.8%
20    Neurocomputing                                     13                    Top 0.6%     0.8%
21    Frontiers in Neuroscience                         223                    Top 7%       0.8%
22    Neuroscience                                       88                    Top 3%       0.7%
23    The Journal of Neuroscience                       928                    Top 9%       0.7%
24    Journal of Neural Engineering                     197                    Top 2%       0.6%
25    Neuron                                            282                    Top 9%       0.6%
26    Chaos, Solitons & Fractals                         32                    Top 2%       0.6%
27    PRX Life                                           34                    Top 1%       0.6%
28    Journal of Neurophysiology                        263                    Top 1%       0.6%
29    Cerebral Cortex                                   357                    Top 3%       0.5%
30    Entropy                                            20                    Top 0.6%     0.5%