
Neural Population Models for EEG: From Canonical Models to Alternative Model Structures

Omejc, N.; Roman, S.; Todorovski, L.; Dzeroski, S.

2026-04-14 · neuroscience
bioRxiv preprint · doi:10.64898/2026.04.10.717643

Neural population models are widely used to interpret electroencephalography (EEG), yet their relationships remain far less systematically understood than those among single-neuron models. More fundamentally, it remains unclear whether EEG can support a uniquely plausible population-level mechanism, or whether multiple structurally distinct models can explain the data equally well. To address this question, we combine comparative analysis of canonical model families with grammar-based generation of new candidate architectures. We assembled 17 canonical neural mass and phenomenological models and embedded them in a shared structural space. From their common processes, we defined a probabilistic grammar over interpretable dynamical components and developed ENEEGMA (Exploring Neural EEG Model Architectures), a Julia-based framework for grammar-based model generation, simulation, and parameter optimization, to generate additional candidate models. We then assessed both canonical and generated models by fitting them to EEG independent-component spectra from four datasets under two conditions: resting state and steady-state visual evoked potentials (SSVEP). Canonical models formed six structural clusters. Across conditions, compact low-dimensional polynomial oscillators performed best overall, with the Montbrió-Pazó-Roxin, FitzHugh-Nagumo, and Stuart-Landau models offering the best balance of fit quality, stability, and simplicity. Grammar-based exploration further showed that the space of viable EEG node models extends beyond canonical formulations: even a restricted search over 1,000 generated models produced compact alternatives competitive with nearly all canonical families and achieving the strongest cluster-level SSVEP fits. Together, these findings suggest that EEG spectra constrain plausible neural population mechanisms without uniquely determining them.
Beyond this, grammar-based model exploration provides a principled, data-driven framework for EEG-constrained model discovery.

Author summary

Electroencephalography (EEG) lets us measure brain activity non-invasively, but the signals are indirect, so we rely on mathematical models to explain how neural populations generate them. Many such models exist, yet it is unclear whether standard models cover the full range of plausible explanations for EEG data, or whether several very different models can explain the same signal equally well. In this study, we compared a broad set of established neural population models and then used a grammar-based equation discovery framework to automatically generate new candidate models from interpretable building blocks. We found that simple low-dimensional oscillator models often matched EEG spectra better than more complex canonical models. We also found that newly generated models could perform nearly as well as, and sometimes better than, established ones, especially for stimulus-driven responses. These results suggest that EEG spectra alone may not be enough to identify a unique underlying neural mechanism. More broadly, our work shows how automated, biologically informed model generation can help to compare, understand, expand, and test the space of candidate neural population models.
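The grammar-based generation described above can be illustrated with a toy probabilistic context-free grammar: production rules map a nonterminal to weighted alternatives, and repeated sampling yields symbolic right-hand sides for candidate ODE node models. This is a minimal sketch with an invented grammar, not the actual ENEEGMA grammar or its Julia implementation:

```python
import random

# Toy probabilistic grammar over polynomial terms (illustrative only):
# each nonterminal maps to (weight, expansion) pairs.
GRAMMAR = {
    "Expr": [(0.4, ["Term"]), (0.6, ["Term", " + ", "Expr"])],
    "Term": [(0.5, ["Coef", "*", "Var"]),
             (0.3, ["Coef", "*", "Var", "^2"]),
             (0.2, ["Coef", "*", "Var", "^3"])],
    "Var":  [(0.5, ["x"]), (0.5, ["y"])],
    "Coef": [(1.0, ["c"])],
}

def sample(symbol, rng):
    """Expand `symbol` by recursively sampling a weighted production."""
    if symbol not in GRAMMAR:
        return symbol  # terminal symbol: emit as-is
    weights = [w for w, _ in GRAMMAR[symbol]]
    options = [opt for _, opt in GRAMMAR[symbol]]
    choice = rng.choices(options, weights=weights)[0]
    return "".join(sample(s, rng) for s in choice)

rng = random.Random(0)
# Each call draws one candidate right-hand side, e.g. for dx/dt:
print(sample("Expr", rng))
```

In the actual framework, each sampled expression would define one equation of a candidate node model, whose free coefficients are then fit to EEG spectra; here the sketch only shows the structure-sampling step.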

Matching journals

The top 3 journals account for just over 50% of the predicted probability mass (54.1%).

| Rank | Journal | Papers in training set | Percentile | Probability |
|------|---------|------------------------|------------|-------------|
| 1 | PLOS Computational Biology | 1633 | Top 0.2% | 34.0% |
| 2 | eNeuro | 389 | Top 0.3% | 12.7% |
| 3 | Journal of Neural Engineering | 197 | Top 0.4% | 7.4% |
| 4 | NeuroImage | 813 | Top 2% | 5.0% |
| 5 | eLife | 5422 | Top 21% | 4.1% |
| 6 | Proceedings of the National Academy of Sciences | 2130 | Top 19% | 3.7% |
| 7 | Neural Computation | 36 | Top 0.1% | 3.7% |
| 8 | Frontiers in Neuroscience | 223 | Top 3% | 2.2% |
| 9 | Frontiers in Computational Neuroscience | 53 | Top 1% | 1.9% |
| 10 | Network Neuroscience | 116 | Top 0.5% | 1.9% |
| 11 | Scientific Reports | 3102 | Top 54% | 1.8% |
| 12 | PLOS ONE | 4510 | Top 52% | 1.7% |
| 13 | Journal of Neuroscience Methods | 106 | Top 1% | 1.1% |
| 14 | Journal of Neurophysiology | 263 | Top 0.6% | 1.0% |
| 15 | Journal of Computational Neuroscience | 23 | Top 0.3% | 0.8% |
| 16 | iScience | 1063 | Top 30% | 0.8% |
| 17 | Bulletin of Mathematical Biology | 84 | Top 2% | 0.8% |
| 18 | The Journal of Neuroscience | 928 | Top 8% | 0.8% |
| 19 | Neural Networks | 32 | Top 0.9% | 0.7% |
| 20 | Frontiers in Neural Circuits | 36 | Top 0.9% | 0.5% |
| 21 | Imaging Neuroscience | 242 | Top 4% | 0.5% |

The cumulative probability mass crosses 50% after rank 3.