
Direct Training of Networks of Morris-Lecar Neurons with Backprop

Akbari, N.; Mason, K.; Gruber, A.; Nicola, W.

2025-11-26 neuroscience
10.1101/2025.11.23.690009 bioRxiv
Abstract

Spiking Neural Networks (SNNs) have the potential to replicate the brain's computational efficacy by explicitly incorporating action potentials, or "spikes", a feature absent from most artificial neural networks. However, training SNNs is difficult due to the non-differentiable nature of the most common spiking models: integrate-and-fire neurons. This study investigates whether some of the difficulty in training SNNs arises from the use of integrate-and-fire neurons rather than smoother alternatives, such as conductance-based neurons. To that end, we considered networks of Morris-Lecar (ML) neurons, a conductance-based neuron model that is differentiable. Networks were built using kinetic synaptic models that smoothly link presynaptic voltage dynamics directly to postsynaptic conductance changes, ensuring that all components remain fully differentiable. Switching to biophysically detailed models of synapses and neurons enabled direct end-to-end training through Backpropagation Through Time (BPTT). Biophysically detailed networks were successfully trained on image classification, regression, and time-series prediction tasks. These results demonstrate the feasibility of employing biophysically detailed, differentiable point-neuron models to create SNNs that serve as more accurate paradigms for the study of neural computation and learning. Further, this work suggests that some of the difficulty in translating gradient-based learning algorithms from machine learning may arise from model choice, rather than from SNNs being intrinsically difficult to train.

Author summary

The brain's information-processing efficiency arises in part from neurons communicating via discrete spikes. Spiking Neural Networks (SNNs) mimic this process at the neuronal level but have been difficult to train, as most machine learning algorithms are not directly applicable. Most SNNs use integrate-and-fire neurons, a modelling framework that simplifies spikes into non-differentiable, abrupt voltage changes, which makes them difficult to train with standard, powerful AI training methods that use derivatives to compute gradients (e.g. backprop). In our work, we asked whether this difficulty could be overcome with end-to-end differentiable spiking neural networks. We built fully differentiable SNNs from Morris-Lecar neurons, a biophysically detailed neuron model that produces smooth spikes, together with differentiable kinetic synapses. With the entire network being mathematically differentiable, we found that we could train it directly using standard backpropagation through time on different tasks (regression, classification, and chaotic time-series prediction). This work demonstrates that the use of integrate-and-fire models may be limiting the application of machine learning algorithms to understanding how learning functions in the brain.
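For concreteness, here is a minimal sketch of the model class the abstract describes, using the standard Morris-Lecar equations (Morris & Lecar, 1981) and a first-order kinetic synapse of the Destexhe-Mainen-Sejnowski form; the paper's exact formulation and parameter values are not given in the abstract and may differ:

$$
\begin{aligned}
C \frac{dV}{dt} &= I_{\mathrm{ext}} - g_L (V - E_L) - g_{\mathrm{Ca}}\, m_\infty(V)\,(V - E_{\mathrm{Ca}}) - g_K\, w\,(V - E_K) - g_{\mathrm{syn}}\, s\,(V - E_{\mathrm{syn}}),\\
\frac{dw}{dt} &= \phi\,\frac{w_\infty(V) - w}{\tau_w(V)},\qquad
m_\infty(V) = \tfrac{1}{2}\!\left[1 + \tanh\!\left(\tfrac{V - V_1}{V_2}\right)\right],\\
w_\infty(V) &= \tfrac{1}{2}\!\left[1 + \tanh\!\left(\tfrac{V - V_3}{V_4}\right)\right],\qquad
\tau_w(V) = 1\Big/\cosh\!\left(\tfrac{V - V_3}{2 V_4}\right),\\
\frac{ds}{dt} &= a_r\, T(V_{\mathrm{pre}})\,(1 - s) - a_d\, s,\qquad
T(V_{\mathrm{pre}}) = \frac{T_{\max}}{1 + e^{-(V_{\mathrm{pre}} - V_T)/K_p}}.
\end{aligned}
$$

Every right-hand side is smooth in $V$, $w$, $s$, and the synaptic weights, which is what makes end-to-end BPTT possible without the surrogate gradients that integrate-and-fire models require.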
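And a hypothetical, minimal JAX sketch of the same idea (not the authors' code): a forward-Euler unrolling of a small ML network with kinetic synapses, through which reverse-mode autodiff computes BPTT gradients with respect to the weight matrix. All parameter values are illustrative placeholders.

```python
import jax
import jax.numpy as jnp

# Standard Morris-Lecar constants (illustrative values, not the paper's).
C, g_L, g_Ca, g_K = 20.0, 2.0, 4.0, 8.0         # capacitance, max conductances
E_L, E_Ca, E_K = -60.0, 120.0, -84.0            # reversal potentials (mV)
V1, V2, V3, V4, phi = -1.2, 18.0, 12.0, 17.4, 0.067
a_r, a_d, V_T, K_p, T_max = 1.1, 0.19, 2.0, 5.0, 1.0  # kinetic synapse
E_syn, dt = 0.0, 0.1                            # excitatory reversal, step (ms)

def ml_step(state, I_ext, W):
    """One forward-Euler step. Every operation is smooth, so the whole
    unrolled simulation is differentiable w.r.t. the weight matrix W."""
    V, w, s = state
    m_inf = 0.5 * (1.0 + jnp.tanh((V - V1) / V2))
    w_inf = 0.5 * (1.0 + jnp.tanh((V - V3) / V4))
    tau_w = 1.0 / jnp.cosh((V - V3) / (2.0 * V4))
    g_syn = W @ s                                # postsynaptic conductances
    dV = (I_ext - g_L * (V - E_L) - g_Ca * m_inf * (V - E_Ca)
          - g_K * w * (V - E_K) - g_syn * (V - E_syn)) / C
    dw = phi * (w_inf - w) / tau_w
    T = T_max / (1.0 + jnp.exp(-(V - V_T) / K_p))  # sigmoidal transmitter release
    ds = a_r * T * (1.0 - s) - a_d * s
    return (V + dt * dV, w + dt * dw, s + dt * ds)

def loss(W, I_ext, target, n_steps=500):
    """Mean-squared error between the mean network voltage and a target trace."""
    n = W.shape[0]
    state = (jnp.full(n, -60.0), jnp.zeros(n), jnp.zeros(n))
    def body(state, _):
        state = ml_step(state, I_ext, W)
        return state, state[0]                   # record voltages at each step
    _, Vs = jax.lax.scan(body, state, xs=None, length=n_steps)
    return jnp.mean((Vs.mean(axis=1) - target) ** 2)

# BPTT is just reverse-mode autodiff through the unrolled dynamics.
grads = jax.grad(loss)(0.1 * jnp.ones((4, 4)),   # synaptic weights
                       jnp.full(4, 90.0),        # constant drive current
                       jnp.zeros(500))           # hypothetical target trace
```

In an actual training loop, `grads` would feed a standard optimizer (e.g. gradient descent or Adam) to adjust the synaptic weights, exactly as in conventional recurrent-network training.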

Matching journals

The top 5 journals account for 50% of the predicted probability mass.

Rank  Journal                                    Papers in training set  Percentile  Probability
  1   Neural Computation                                            36   Top 0.1%        23.6%
  2   PLOS Computational Biology                                  1633   Top 2%          13.1%
  3   Journal of Computational Neuroscience                         23   Top 0.1%         8.6%
  4   PLOS ONE                                                    4510   Top 33%          4.4%
  5   Frontiers in Neuroinformatics                                 38   Top 0.1%         3.8%
      -- 50% of predicted probability mass above this line --
  6   eNeuro                                                       389   Top 2%           3.8%
  7   Frontiers in Computational Neuroscience                       53   Top 0.6%         3.8%
  8   Scientific Reports                                          3102   Top 40%          3.2%
  9   Neural Networks                                               32   Top 0.2%         2.9%
 10   Chaos, Solitons & Fractals                                    32   Top 0.7%         2.5%
 11   Frontiers in Neuroscience                                    223   Top 3%           2.2%
 12   Frontiers in Neural Circuits                                  36   Top 0.2%         2.0%
 13   Neuroinformatics                                              40   Top 0.4%         1.9%
 14   Biological Cybernetics                                        12   Top 0.1%         1.6%
 15   Cognitive Neurodynamics                                       15   Top 0.3%         1.0%
 16   Network Neuroscience                                         116   Top 0.9%         0.9%
 17   Journal of Neuroscience Methods                              106   Top 1%           0.8%
 18   Brain Sciences                                                52   Top 2%           0.8%
 19   Neuroscience                                                  88   Top 2%           0.8%
 20   Journal of Neural Engineering                                197   Top 2%           0.8%
 21   Frontiers in Cellular Neuroscience                            79   Top 1%           0.8%
 22   Journal of Neurophysiology                                   263   Top 0.8%         0.8%
 23   Frontiers in Artificial Intelligence                          18   Top 0.7%         0.8%
 24   Neurocomputing                                                13   Top 0.7%         0.7%
 25   Frontiers in Physiology                                       93   Top 7%           0.7%
 26   Wellcome Open Research                                        57   Top 2%           0.7%
 27   Biology                                                       43   Top 4%           0.5%
 28   Biology Methods and Protocols                                 53   Top 3%           0.5%