
How to Improve the Reliability of Aperiodic Parameter Estimates in M/EEG: A Method Comparison

Kalamala, P.; Clements, G. M.; Gyurkovics, M.; Chen, T.; Low, K.; Fabiani, M.; Gratton, G.

2026-02-21 | neuroscience
bioRxiv | DOI: 10.1101/2025.11.10.687541

Interest in broadband aperiodic brain activity (the 1/f phenomenon) has increased exponentially in recent years, partly fueled by the development of tools to parameterize it (i.e., to estimate its offset/intercept and exponent/slope) from the M/EEG power spectrum. Broadband aperiodic activity needs to be separated from narrowband periodic activity before its parameters are computed. A popular method, the fooof toolbox (Donoghue et al., 2020), is based on the data-driven detection of narrowband periodic peaks, whose maximum number is set by the user. While this increases analytic flexibility, variability in the number of detected peaks may increase sensitivity to noise and reduce both the reliability of aperiodic parameter estimates and the power of analytic pipelines. Here, we investigate the effects of analytic choices (e.g., number of peaks, spectral estimation method) on metrics indicating the adequacy of spectral parameterization: the internal consistency (odd-even reliability) of aperiodic estimates, the number of outliers generated, and the ability to detect effects. Across two different data sets (resting state and task-based), we found that the reliability of intercept and slope estimates decreased as more peaks were allowed to be extracted. To ameliorate this problem, we propose a theory-driven modification of fooof, labelled censored regression, whereby a range of frequencies expected a priori to contain periodic activity is removed from all spectra, and the remaining power values are regressed on the remaining frequencies to obtain parameter estimates. This method yields more reliable and robust estimates than fooof, while avoiding overfitting.
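The censored-regression idea described in the abstract can be sketched in a few lines. This is our illustrative reconstruction, not the authors' implementation: the censored band (5-15 Hz), the simulated exponent/offset values, and the use of ordinary least squares via `numpy.polyfit` are all assumptions made for the example. The sketch simulates a spectrum with a known 1/f component plus an alpha peak, removes the band expected to contain periodic activity, and regresses log power on log frequency over the remaining points to recover the offset (intercept) and exponent (negative slope).

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.arange(1.0, 40.0, 0.5)  # Hz

# Simulated spectrum: 1/f^1.2 aperiodic component plus an alpha peak.
# (Hypothetical parameter values, chosen only for illustration.)
true_exponent, true_offset = 1.2, 1.0
power = 10**true_offset / freqs**true_exponent
power += 0.5 * np.exp(-((freqs - 10.0) ** 2) / (2 * 1.5**2))  # alpha bump
power *= 10 ** rng.normal(0, 0.01, freqs.size)                # mild log-noise

# "Censor" the frequency band assumed a priori to contain periodic
# activity (5-15 Hz here is our assumption, not the paper's setting).
keep = (freqs < 5.0) | (freqs > 15.0)

# Ordinary least squares in log-log space on the surviving points.
slope, intercept = np.polyfit(np.log10(freqs[keep]),
                              np.log10(power[keep]), deg=1)
exponent = -slope  # aperiodic exponent is the negative log-log slope

print(f"estimated exponent: {exponent:.2f}, offset: {intercept:.2f}")
```

Because the alpha peak falls entirely inside the censored band, the fit sees only the aperiodic component and recovers values close to the simulated exponent and offset, with no peak model to over- or under-fit.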

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

Rank  Journal                               Papers in training set  Percentile  Probability
1     Imaging Neuroscience                  242                     Top 0.1%    17.2%
2     Journal of Neuroscience Methods       106                     Top 0.1%    17.2%
3     NeuroImage                            813                     Top 1%      12.1%
4     eNeuro                                389                     Top 2%      4.2%
----- 50% of predicted probability mass above this line -----
5     Journal of Neural Engineering         197                     Top 0.7%    3.9%
6     European Journal of Neuroscience      168                     Top 0.1%    3.6%
7     PLOS Computational Biology            1633                    Top 10%     3.5%
8     Brain Topography                      23                      Top 0.1%    3.0%
9     Human Brain Mapping                   295                     Top 2%      2.4%
10    Clinical Neurophysiology              50                      Top 0.3%    2.0%
11    Network Neuroscience                  116                     Top 0.5%    1.9%
12    Frontiers in Neuroscience             223                     Top 4%      1.8%
13    Frontiers in Neuroinformatics         38                      Top 0.4%    1.6%
14    Neuroinformatics                      40                      Top 0.5%    1.6%
15    eLife                                 5422                    Top 46%     1.5%
16    Developmental Cognitive Neuroscience  81                      Top 0.4%    1.3%
17    BMC Bioinformatics                    383                     Top 6%      1.2%
18    Psychophysiology                      64                      Top 0.3%    1.2%
19    PLOS ONE                              4510                    Top 65%     0.9%
20    Cortex                                102                     Top 0.4%    0.9%
21    Scientific Reports                    3102                    Top 73%     0.8%
22    Journal of Neurophysiology            263                     Top 0.8%    0.8%
23    Frontiers in Human Neuroscience       67                      Top 3%      0.7%
24    Communications Biology                886                     Top 30%     0.6%