From Resonance to Computation: A Six-Layer Framework for Analog Neural Processing in Coupled RLC Oscillator Networks

SENDER, J. M.

2026-04-13 neuroscience
10.64898/2026.04.09.717435 bioRxiv

Subthreshold neuronal membranes exhibit resonant, band-pass impedance characterised by an effective inductance arising from voltage-gated channel kinetics, principally Ih. This paper presents a six-layer computational framework that builds from this single-neuron RLC description to a complete account of how coupled neural oscillator networks compute. Layer 1 establishes the RLC neuron as a frequency-selective bandpass filter. Layer 2 shows that coupled RLC neurons encode relational information in phase differences (binding). Layer 3 demonstrates that networks of coupled oscillators form attractor landscapes supporting memory and pattern completion, with fixed-point, limit-cycle, and chaotic attractor classes. Layer 4 identifies the synaptic coupling matrix as a learned impedance network whose topology determines attractor geometry. Layer 5 maps neuromodulatory systems to bias controls that sweep RLC parameters (resonant frequency, quality factor, gain) without modifying stored memories. Layer 6 assembles the full system with cross-frequency multiplexing and homeostatic stabilisation. The framework is grounded in measurable electrical quantities and generates testable predictions distinguishing it from rate-coding and RC integrate-and-fire models. We explicitly address the linearisation gap between the subthreshold regime, where the RLC description is rigorous, and the nonlinear regime, where attractor dynamics operate; the noise and precision limits of analog neural computation (approximately 3.3 effective bits per neuron, compensated by massive parallelism); and the distinction between causal and correlative evidence for oscillation-based coding claims. The framework does not replace existing models; it extends them by showing that rate coding is one (coarse) description of the output of an analog computation whose richer dynamics (resonance, phase, temporal fine structure) may carry additional computational content.
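The Layer-1 claim can be made concrete with the standard resonance formulas for a parallel RLC circuit, where the membrane resistance and capacitance sit in parallel with the effective inductive branch contributed by Ih kinetics. The parameter values below are illustrative assumptions chosen to land the resonance in the theta band, not measured membrane quantities; note that the effective inductance of real membranes is enormous by electronics standards.

```python
import math

def parallel_rlc_resonance(R, L, C):
    """Resonant frequency (Hz) and quality factor of a parallel RLC circuit.

    R: membrane (leak) resistance in ohms
    L: effective inductance from slow channel kinetics, in henries
    C: membrane capacitance in farads
    """
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))  # peak of the impedance magnitude
    Q = R * math.sqrt(C / L)                       # sharpness of the bandpass peak
    return f0, Q

# Illustrative (assumed) values: 100 pF capacitance, 300 MOhm resistance,
# and an effective inductance of ~1e7 H, giving a ~5 Hz (theta-band) resonance.
f0, Q = parallel_rlc_resonance(R=300e6, L=1e7, C=100e-12)
print(f"f0 = {f0:.2f} Hz, Q = {Q:.2f}")
```

A higher Q (e.g. from neuromodulatory reduction of the leak conductance, i.e. larger R) sharpens the frequency selectivity without moving f0, which is exactly the kind of bias control the abstract attributes to Layer 5.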

Matching journals

The top 9 journals account for 50% of the predicted probability mass.

Rank | Journal | Papers in training set | Percentile | Probability
1 | Proceedings of the National Academy of Sciences | 2130 | Top 3% | 14.2%
2 | PLOS Computational Biology | 1633 | Top 4% | 8.3%
3 | Nature | 575 | Top 5% | 4.8%
4 | Nature Communications | 4913 | Top 33% | 4.8%
5 | Neural Computation | 36 | Top 0.1% | 4.2%
6 | Frontiers in Computational Neuroscience | 53 | Top 0.5% | 4.2%
7 | Nature Neuroscience | 216 | Top 2% | 3.9%
8 | Journal of The Royal Society Interface | 189 | Top 1% | 3.5%
9 | Bulletin of Mathematical Biology | 84 | Top 0.6% | 3.5%
10 | Science | 429 | Top 9% | 3.5%
11 | eLife | 5422 | Top 27% | 3.5%
12 | PLOS ONE | 4510 | Top 44% | 2.7%
13 | Cell Reports | 1338 | Top 21% | 2.1%
14 | Physical Review X | 23 | Top 0.2% | 2.0%
15 | Biological Cybernetics | 12 | Top 0.1% | 1.9%
16 | Scientific Reports | 3102 | Top 54% | 1.9%
17 | Neuron | 282 | Top 5% | 1.9%
18 | Physical Review E | 95 | Top 0.6% | 1.8%
19 | Science Advances | 1098 | Top 16% | 1.8%
20 | Physical Review Letters | 43 | Top 0.3% | 1.5%
21 | Journal of Computational Neuroscience | 23 | Top 0.3% | 1.3%
22 | The Journal of Neuroscience | 928 | Top 6% | 1.3%
23 | eNeuro | 389 | Top 7% | 1.2%
24 | Cell Systems | 167 | Top 9% | 1.2%
25 | Neural Networks | 32 | Top 0.6% | 1.1%
26 | PRX Life | 34 | Top 0.7% | 0.9%
27 | Entropy | 20 | Top 0.3% | 0.9%
28 | Physical Review Research | 46 | Top 0.7% | 0.8%
29 | Philosophical Transactions of the Royal Society B | 51 | Top 6% | 0.7%
30 | iScience | 1063 | Top 33% | 0.7%