A Closer-to-Brain Heterosynaptic Learning Rule for Spatiotemporal Spike Pattern Detection with Low-Resolution Synapse

Furuichi, S.; Kohno, T.

2026-04-22 neuroscience
10.64898/2026.04.19.719429 bioRxiv
The brain is believed to process information efficiently in a manner different from that of deep learning-based artificial intelligence (AI). Brain-like next-generation AI is gaining attention owing to its potential to perform human-like, highly adaptive, robust, and power-efficient computation. One crucial approach to realizing such AI is the bottom-up implementation of neuronal systems, capturing their electrophysiological characteristics in electronic circuits. However, this neuromorphic approach generally focuses on simplified neuronal models that disregard many biological findings. Developing closer-to-brain models is a natural direction, as they can serve as a fundamental computing model for next-generation AI. One constraint of neuromorphic circuits is the bit resolution of the synaptic efficacy memory, as the memory footprint scales with its precision. Although low-resolution synaptic efficacy is essential for minimizing memory circuit footprint and energy consumption, it generally degrades performance in many tasks, such as spatiotemporal spike pattern detection. This study proposes a closer-to-brain learning rule that incorporates heterosynaptic plasticity (HP) induced by glutamate spillover. It is demonstrated that our model mitigates the performance degradation associated with low-bit-resolution synaptic efficacy, achieving a pattern detection success rate with 3-bit synaptic efficacy comparable to that obtained with 64-bit floating-point precision. Furthermore, the findings indicate that the HP-based model accelerates the convergence of the synaptic efficacies and effectively potentiates the synapses relevant to pattern detection while suppressing irrelevant ones, thereby promoting a bimodal distribution of synaptic efficacies. These findings may provide a basic framework for constructing an energy-efficient, brain-like next-generation AI that maintains high performance under hardware constraints.
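As an illustration only (not the paper's actual learning rule), the interaction between low-resolution efficacy storage and heterosynaptic suppression described in the abstract can be sketched as follows. The names `quantize` and `hp_step` and the rate constants `ltp` and `het` are hypothetical; a 3-bit memory here simply stores one of 2^3 = 8 uniformly spaced efficacy levels:

```python
import numpy as np

def quantize(w, bits=3, w_max=1.0):
    """Round efficacies onto a uniform grid of 2**bits levels in [0, w_max]."""
    levels = 2 ** bits - 1
    return np.round(np.clip(w, 0.0, w_max) * levels / w_max) * w_max / levels

def hp_step(w, active, ltp=0.2, het=0.1):
    """One toy update: potentiate active synapses, heterosynaptically
    depress the inactive ones, then store the result at 3-bit resolution.
    Note: `het` must exceed half a quantization step (1/14 here), or the
    rounding swallows the depression entirely."""
    w = w + np.where(active, ltp, -het)
    return quantize(w, bits=3)

rng = np.random.default_rng(0)
w = rng.uniform(0.3, 0.7, size=8)          # efficacies start mid-range
active = np.array([True, True, False, False, True, False, False, False])
for _ in range(20):
    w = hp_step(w, active)

# Repeated updates drive the efficacies to the extremes of the 3-bit grid:
# active synapses saturate at w_max, inactive ones decay to 0 (a bimodal split).
```

This is only a caricature of the effect the abstract reports (bimodal efficacy distributions under 3-bit storage); the paper's rule is driven by glutamate spillover and spike timing, not by a fixed active/inactive mask.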

Matching journals

The top 8 journals account for 50% of the predicted probability mass.

1. Frontiers in Computational Neuroscience: 10.2% (53 papers in training set, Top 0.2%)
2. Neurocomputing: 7.3% (13 papers in training set, Top 0.1%)
3. Cognitive Neurodynamics: 6.9% (15 papers in training set, Top 0.1%)
4. Chaos, Solitons & Fractals: 6.9% (32 papers in training set, Top 0.2%)
5. Neural Computation: 6.5% (36 papers in training set, Top 0.1%)
6. Neural Networks: 4.9% (32 papers in training set, Top 0.1%)
7. PLOS Computational Biology: 4.6% (1633 papers in training set, Top 7%)
8. PLOS ONE: 4.0% (4510 papers in training set, Top 35%)

50% of probability mass above

9. IEEE Journal of Biomedical and Health Informatics: 3.7% (34 papers in training set, Top 0.4%)
10. Frontiers in Neuroscience: 3.6% (223 papers in training set, Top 1%)
11. Journal of Neural Engineering: 2.6% (197 papers in training set, Top 0.9%)
12. iScience: 2.6% (1063 papers in training set, Top 8%)
13. Physical Review E: 2.1% (95 papers in training set, Top 0.5%)
14. Frontiers in Neural Circuits: 2.1% (36 papers in training set, Top 0.2%)
15. Scientific Reports: 2.1% (3102 papers in training set, Top 49%)
16. IEEE Transactions on Biomedical Engineering: 1.7% (38 papers in training set, Top 0.5%)
17. Neuroscience: 1.7% (88 papers in training set, Top 1%)
18. Frontiers in Aging Neuroscience: 1.7% (67 papers in training set, Top 2%)
19. Computational and Structural Biotechnology Journal: 1.0% (216 papers in training set, Top 7%)
20. Nonlinear Dynamics: 1.0% (10 papers in training set, Top 0.4%)
21. IEEE Transactions on Neural Systems and Rehabilitation Engineering: 1.0% (40 papers in training set, Top 0.5%)
22. Neuroscience Research: 0.9% (14 papers in training set, Top 0.1%)
23. Neuroscience Bulletin: 0.9% (11 papers in training set, Top 0.5%)
24. Journal of Computational Neuroscience: 0.9% (23 papers in training set, Top 0.3%)
25. PNAS Nexus: 0.8% (147 papers in training set, Top 2%)
26. Network Neuroscience: 0.7% (116 papers in training set, Top 1%)
27. Journal of Chemical Information and Modeling: 0.7% (207 papers in training set, Top 3%)
28. Neuroinformatics: 0.7% (40 papers in training set, Top 1%)
29. National Science Review: 0.7% (22 papers in training set, Top 3%)
30. Frontiers in Human Neuroscience: 0.5% (67 papers in training set, Top 3%)