
Local gated-Hebbian learning of deep cerebellar networks with quadratic classification capacity

Hiratani, N.

bioRxiv preprint (neuroscience) · 2026-04-20 · doi:10.64898/2026.04.17.718957
Abstract

A central goal of neuroscience is to understand how neural circuit architecture supports learning. While recent work has clarified the computational role of depth in sensory cortical hierarchies, it remains unclear why predominantly feedforward, non-convolutional circuits such as the cerebellum and olfactory system also contain multiple processing layers. Theoretical work in deep learning has shown that two-hidden-layer networks can achieve classification capacity that scales quadratically with the number of intermediate neurons, but these results rely on nonlocal synaptic optimization and are therefore difficult to reconcile with biological learning rules. Here, we show analytically and numerically that a two-hidden-layer network with feedforward gating can achieve quadratic capacity using local three-factor Hebbian learning when intermediate activity is sparse. This architecture supports efficient one-shot learning and, in settings where backpropagation requires many repeated weight updates, offers an advantage in learning speed. Beyond random perceptron tasks, the model also performs well on structured cerebellum-related tasks, including reinforcement-learning-based motor control. Mapping the model onto cerebellar microcircuitry further suggests functional roles for dendritic compartmentalization, branch-specific inhibition, and disinhibitory interneuron pathways. Together, these results extend the Marr-Albus-Ito framework by showing how the presence of multiple intermediate layers in cerebellum-like circuits can support fast, local, and high-capacity learning.
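The core mechanism claimed in the abstract can be made concrete in a few lines. The sketch below is a minimal toy construction, not the paper's implementation: the layer sizes, the ReLU expansion in the first hidden layer, the top-k gating rule used to enforce sparse intermediate activity, the learning rate, and the perceptron-style error standing in for the third (modulatory) factor are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the paper).
n_in, n_h1, n_h2 = 50, 200, 400
k = 20        # active second-layer units per input (~5% sparsity, assumed)
eta = 1.0     # learning rate (assumed)

W1 = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_h1, n_in))   # fixed random expansion
W2 = rng.normal(0.0, 1.0 / np.sqrt(n_h1), (n_h2, n_h1))   # second feedforward layer
w_out = np.zeros(n_h2)                                     # plastic readout weights

def forward(x):
    """Two hidden layers; the second is sparsified by feedforward gating (top-k)."""
    h1 = np.maximum(W1 @ x, 0.0)          # first hidden layer, ReLU
    drive = W2 @ h1
    h2 = np.zeros(n_h2)
    active = np.argsort(drive)[-k:]       # gate opens only the k most-driven units
    h2[active] = drive[active]
    return h2

# One pass over a random +/-1 classification task with a local rule:
# dw_i = eta * (global error signal) * (presynaptic activity h2_i).
X = rng.normal(size=(100, n_in))
labels = rng.choice([-1.0, 1.0], size=100)
for x, t in zip(X, labels):
    h2 = forward(x)
    err = t - np.sign(w_out @ h2)         # third, modulatory factor (error signal)
    w_out += eta * err * h2               # uses only locally available quantities

acc = np.mean([np.sign(w_out @ forward(x)) == t for x, t in zip(X, labels)])
print(f"accuracy after a single pass: {acc:.2f}")
```

The point of the sketch is the locality: each weight update combines only the presynaptic activity and a single global error signal, and the sparsity of the gated second layer keeps one-shot updates for different patterns from interfering with one another.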

Matching journals

The top 5 journals are the smallest set that accounts for at least 50% of the predicted probability mass.
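This cutoff can be reproduced from the listed percentages. A minimal sketch, assuming the tool marks the shortest prefix of the ranked list whose probabilities sum to at least half of the full distribution (taken here to be 100%):

```python
# Shortest prefix of the ranked list reaching 50% of the probability mass.
# Percentages are the top entries from the list below; the 50% threshold
# assumes the full distribution over all journals sums to 100%.
probs = [13.8, 11.8, 11.8, 9.7, 8.1, 6.1]
cum = 0.0
for n, p in enumerate(probs, start=1):
    cum += p
    if cum >= 50.0:
        break
print(n, round(cum, 1))   # -> 5 55.2: the top 5 journals cross the 50% mark
```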

1. PLOS Computational Biology · 1633 papers in training set · Top 2% · 13.8%
2. Frontiers in Computational Neuroscience · 53 papers in training set · Top 0.2% · 11.8%
3. Proceedings of the National Academy of Sciences · 2130 papers in training set · Top 5% · 11.8%
4. Nature Communications · 4913 papers in training set · Top 20% · 9.7%
5. Neural Computation · 36 papers in training set · Top 0.1% · 8.1%
(50% of the predicted probability mass lies above this line)
6. Scientific Reports · 3102 papers in training set · Top 21% · 6.1%
7. Neural Networks · 32 papers in training set · Top 0.2% · 2.9%
8. Physical Review E · 95 papers in training set · Top 0.4% · 2.6%
9. Cerebral Cortex · 357 papers in training set · Top 0.6% · 2.0%
10. eLife · 5422 papers in training set · Top 37% · 2.0%
11. Nature Neuroscience · 216 papers in training set · Top 4% · 1.8%
12. Communications Biology · 886 papers in training set · Top 10% · 1.6%
13. Science Advances · 1098 papers in training set · Top 21% · 1.4%
14. PLOS ONE · 4510 papers in training set · Top 59% · 1.3%
15. Physical Review Research · 46 papers in training set · Top 0.5% · 1.3%
16. Neuron · 282 papers in training set · Top 7% · 1.3%
17. Cell Reports · 1338 papers in training set · Top 28% · 1.3%
18. Nature Machine Intelligence · 61 papers in training set · Top 3% · 0.9%
19. iScience · 1063 papers in training set · Top 28% · 0.9%
20. PNAS Nexus · 147 papers in training set · Top 1% · 0.9%
21. The Journal of Neuroscience · 928 papers in training set · Top 8% · 0.8%
22. Bulletin of Mathematical Biology · 84 papers in training set · Top 2% · 0.8%
23. Network Neuroscience · 116 papers in training set · Top 1% · 0.7%
24. Journal of Computational Neuroscience · 23 papers in training set · Top 0.4% · 0.7%
25. NeuroImage · 813 papers in training set · Top 6% · 0.7%
26. Physical Review X · 23 papers in training set · Top 0.7% · 0.7%
27. eNeuro · 389 papers in training set · Top 10% · 0.7%
28. PRX Life · 34 papers in training set · Top 1% · 0.7%
29. Nature Human Behaviour · 85 papers in training set · Top 5% · 0.7%