
Neural Correlates of Listening States, Cognitive Load, and Selective Attention in an Ecological Multi-Talker Scenario

Shahsavari Baboukani, P.; Ordonez, R.; Gravesen, C.; Ostergaard, J.; Rank, M. L.; Alickovic, E.; Cabrera, A. F.

2026-03-15 · neuroscience
bioRxiv, doi: 10.64898/2026.03.13.711289

This study assessed neural responses to continuous speech to classify listening state, cognitive load, and selective auditory attention in complex acoustic environments. EEG was recorded while participants listened to concurrent male and female talkers under two conditions: active listening, where attention was directed to one of two competing speakers (target vs. masker), or passive listening, where attention was diverted to a visual task. Cognitive load was varied by manipulating the target-to-masker ratio (TMR: +7 dB vs. −7 dB), with lower TMR representing more demanding listening conditions. Spectral EEG features across frequency bands were ranked with univariate statistics and used to classify listening state (active vs. passive) and cognitive load (low vs. high TMR). Auditory attention decoding (AAD) was performed using linear stimulus reconstruction to identify the target talker during active listening. Classification of listening state achieved 90.3% accuracy, and AAD reached 84.4% accuracy, demonstrating robust tracking of attentional engagement. In contrast, classification of cognitive load was near chance, suggesting that more extreme acoustic manipulations may be required to elicit distinct neural signatures. Comparable performance using a reduced set of electrodes near the ear indicates the potential for integration with wearable hearing devices. Overall, these results demonstrate that EEG can distinguish attentional states and selectively track target speech in realistic auditory scenarios. The findings provide a foundation for future applications in monitoring listening behavior, supporting auditory processing, and improving brain-controlled hearing aids in complex acoustic environments.

Highlights

- Listening state (active vs. passive) can be classified from EEG spectral features.
- Attended speech can be decoded by reconstructing speech envelopes from EEG.
- Comparable accuracy is achieved using only electrodes placed around the ears.
- EEG can monitor listening state and track auditory attention in two-speaker settings.

Graphical Abstract

EEG signals were recorded while participants listened to two concurrent speech streams, either by actively attending to one speaker or by focusing on an unrelated visual task. Spectral features of the EEG were used to classify listening state (active vs. passive) and cognitive load (low vs. high TMR). Auditory attention decoding (AAD) was performed by reconstructing the speech envelope from the EEG time signal.

[Figure 1] Classification of listening state (active vs. passive): 90.3% accuracy. EEG difference between active and passive listening. Left: power spectrum; right: topographic map (alpha band, 8–12 Hz). Classification of cognitive load (low vs. high TMR): near chance level. EEG difference between low and high TMR. Left: power spectrum; right: topographic map (alpha band, 8–12 Hz).

[Figure 2] AAD achieved 84.4% accuracy, indicating robust decoding of the attended speaker during active listening, while performance dropped to near chance during passive listening.
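The abstract's AAD method, linear stimulus reconstruction, fits a backward model that maps time-lagged EEG to the speech envelope; the decoded envelope is then correlated with each talker's envelope, and the higher correlation marks the attended talker. The sketch below illustrates this idea on synthetic data only, and is not the authors' pipeline: the signal sizes, lag count, and ridge parameter are all assumed values.

```python
# Minimal sketch of linear stimulus reconstruction for AAD on synthetic
# data. Assumptions (not from the paper): 16 channels, 10 lags, ridge
# lambda = 1.0, and an attended envelope leaking into one EEG channel.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels, n_lags = 2000, 16, 10

# Two candidate speech envelopes; the synthetic EEG tracks the attended one.
env_attended = np.abs(rng.standard_normal(n_samples))
env_ignored = np.abs(rng.standard_normal(n_samples))
eeg = rng.standard_normal((n_samples, n_channels))
eeg[:, 0] += 0.5 * env_attended  # attended envelope leaks into channel 0

def lagged_design(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel into one design matrix."""
    X = np.zeros((eeg.shape[0], eeg.shape[1] * n_lags))
    for lag in range(n_lags):
        X[lag:, lag::n_lags] = eeg[: eeg.shape[0] - lag]
    return X

# Ridge-regularized backward model: EEG lags -> attended envelope.
X = lagged_design(eeg, n_lags)
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ env_attended)
decoded = X @ w

# Decision rule: the talker whose envelope correlates more with the
# decoded envelope is declared attended.
r_att = np.corrcoef(decoded, env_attended)[0, 1]
r_ign = np.corrcoef(decoded, env_ignored)[0, 1]
print("attended" if r_att > r_ign else "ignored")
```

In practice the decoder would be trained and evaluated on separate trials (this in-sample sketch omits the split for brevity), and accuracy is reported as the fraction of trials where the attended talker wins the correlation comparison.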

Matching journals

The top 7 journals account for 50% of the predicted probability mass.

| Rank | Journal | Papers in training set | Percentile | Probability |
|---|---|---|---|---|
| 1 | Journal of Neural Engineering | 197 | Top 0.2% | 18.4% |
| 2 | Frontiers in Neuroscience | 223 | Top 0.1% | 10.0% |
| 3 | NeuroImage | 813 | Top 2% | 6.2% |
| 4 | Hearing Research | 49 | Top 0.1% | 4.5% |
| 5 | PLOS ONE | 4510 | Top 34% | 4.3% |
| 6 | Scientific Reports | 3102 | Top 32% | 3.9% |
| 7 | Ear & Hearing | 15 | Top 0.1% | 3.6% |
| 8 | Journal of Neuroscience Methods | 106 | Top 0.4% | 3.5% |
| 9 | Frontiers in Human Neuroscience | 67 | Top 0.5% | 3.5% |
| 10 | eNeuro | 389 | Top 4% | 2.7% |
| 11 | Imaging Neuroscience | 242 | Top 1% | 2.7% |
| 12 | Trends in Hearing | 12 | Top 0.1% | 2.7% |
| 13 | IEEE Transactions on Biomedical Engineering | 38 | Top 0.4% | 2.1% |
| 14 | IEEE Transactions on Neural Systems and Rehabilitation Engineering | 40 | Top 0.3% | 1.9% |
| 15 | Journal of Visualized Experiments | 30 | Top 0.2% | 1.8% |
| 16 | PLOS Computational Biology | 1633 | Top 17% | 1.6% |
| 17 | iScience | 1063 | Top 20% | 1.3% |
| 18 | European Journal of Neuroscience | 168 | Top 0.7% | 1.3% |
| 19 | IEEE Journal of Biomedical and Health Informatics | 34 | Top 1% | 1.2% |
| 20 | Frontiers in Psychiatry | 83 | Top 2% | 1.2% |
| 21 | Behavior Research Methods | 25 | Top 0.2% | 1.2% |
| 22 | The Journal of the Acoustical Society of America | 33 | Top 0.2% | 0.9% |
| 23 | Neurophotonics | 37 | Top 0.5% | 0.9% |
| 24 | Sensors | 39 | Top 2% | 0.8% |
| 25 | Frontiers in Neuroinformatics | 38 | Top 0.9% | 0.7% |
| 26 | Biomedical Signal Processing and Control | 18 | Top 0.6% | 0.6% |
| 27 | Scientific Data | 174 | Top 3% | 0.6% |
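The "top 7 journals account for 50%" claim is a cumulative-mass cutoff over the ranked probabilities. A small sketch, using the probabilities from the table above, shows how such a cutoff is computed (the function name is illustrative):

```python
# Find the smallest k such that the top-k predicted probabilities
# (in percent, sorted descending) sum to at least the target mass.
def journals_for_mass(probs, target=50.0):
    total = 0.0
    for k, p in enumerate(probs, start=1):
        total += p
        if total >= target:
            return k
    return len(probs)

# Top-10 probabilities from the table above, in percent.
probs = [18.4, 10.0, 6.2, 4.5, 4.3, 3.9, 3.6, 3.5, 3.5, 2.7]
print(journals_for_mass(probs))  # -> 7 (cumulative mass 50.9%)
```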