EEG-based classification of natural sounds reveals specialized responses to speech and music

Zuk, N. J.; Teoh, E. S.; Lalor, E. C.

bioRxiv (neuroscience), 2019-09-05. DOI: 10.1101/755553
Humans can easily distinguish many sounds in the environment, but speech and music are uniquely important. Previous studies, mostly using fMRI, have identified separate regions of the brain that respond selectively to speech and music. Yet there is little evidence that brain responses are larger and more temporally precise for human-specific sounds like speech and music, as has been found for responses to species-specific sounds in other animals. We recorded EEG as healthy adult subjects listened to various types of two-second-long natural sounds. By classifying each sound based on the EEG response, we found that speech, music, and impact sounds were classified better than other natural sounds. But unlike impact sounds, the classification accuracy for speech and music dropped for synthesized sounds that have identical "low-level" acoustic statistics based on a subcortical model, indicating a selectivity for higher-order features in these sounds. Lastly, the trends in average power and phase consistency of the two-second EEG responses to each sound replicated the patterns of speech and music selectivity observed with classification accuracy. Together with the classification results, this suggests that the brain produces temporally individualized responses to speech and music that are stronger than its responses to other natural sounds. In addition to highlighting the importance of speech and music for the human brain, the techniques used here could be a cost-effective and efficient way to study the human brain's selectivity for speech and music in other populations.

Highlights:
- EEG responses are stronger to speech and music than to other natural sounds
- This selectivity was not replicated using stimuli with the same acoustic statistics
- These techniques can be a cost-effective way to study speech and music selectivity

Matching journals

The top 4 journals account for just over 50% of the predicted probability mass.

1. Frontiers in Neuroscience: 34.7% (223 papers in training set, Top 0.1%)
2. NeuroImage: 8.5% (813 papers in training set, Top 1%)
3. Scientific Reports: 6.4% (3102 papers in training set, Top 17%)
4. eneuro: 6.4% (389 papers in training set, Top 1%)
-- 50% of probability mass above --
5. Hearing Research: 3.6% (49 papers in training set, Top 0.1%)
6. Frontiers in Human Neuroscience: 3.1% (67 papers in training set, Top 0.6%)
7. PLOS ONE: 2.8% (4510 papers in training set, Top 44%)
8. Ear & Hearing: 1.8% (15 papers in training set, Top 0.1%)
9. The Journal of Neuroscience: 1.8% (928 papers in training set, Top 5%)
10. Journal of Neuroscience Methods: 1.8% (106 papers in training set, Top 0.8%)
11. European Journal of Neuroscience: 1.7% (168 papers in training set, Top 0.4%)
12. Cerebral Cortex: 1.7% (357 papers in training set, Top 0.8%)
13. Neuroscience Letters: 1.7% (28 papers in training set, Top 0.4%)
14. Imaging Neuroscience: 1.5% (242 papers in training set, Top 2%)
15. PLOS Computational Biology: 1.5% (1633 papers in training set, Top 18%)
16. iScience: 1.2% (1063 papers in training set, Top 21%)
17. Neuroscience: 1.1% (88 papers in training set, Top 2%)
18. Brain Topography: 1.0% (23 papers in training set, Top 0.3%)
19. Communications Biology: 0.8% (886 papers in training set, Top 23%)
20. Journal of Neurophysiology: 0.8% (263 papers in training set, Top 0.9%)
21. Neurophotonics: 0.7% (37 papers in training set, Top 0.6%)
22. Journal of Neural Engineering: 0.7% (197 papers in training set, Top 2%)
23. Human Brain Mapping: 0.7% (295 papers in training set, Top 4%)
24. BMC Biology: 0.7% (248 papers in training set, Top 6%)
25. Journal of Cognitive Neuroscience: 0.5% (119 papers in training set, Top 2%)
26. Cognitive Neurodynamics: 0.5% (15 papers in training set, Top 0.6%)
27. The Journal of the Acoustical Society of America: 0.5% (33 papers in training set, Top 0.2%)
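The "50% of probability mass" cutoff is simply the first rank at which the cumulative predicted probability reaches 50%. A minimal sketch of that calculation (probability values copied from the list above; the function name is illustrative, not part of any published tool):

```python
# Predicted probabilities (%) for the ranked journals, taken from the list above.
probs = [34.7, 8.5, 6.4, 6.4, 3.6, 3.1, 2.8, 1.8, 1.8, 1.8,
         1.7, 1.7, 1.7, 1.5, 1.5, 1.2, 1.1, 1.0, 0.8, 0.8,
         0.7, 0.7, 0.7, 0.7, 0.5, 0.5, 0.5]

def journals_for_mass(probs, threshold=50.0):
    """Return how many top-ranked journals are needed for the cumulative
    probability to first reach `threshold` percent."""
    total = 0.0
    for rank, p in enumerate(probs, start=1):
        total += p
        if total >= threshold:
            return rank
    # Threshold never reached: all listed journals together fall short.
    return len(probs)

print(journals_for_mass(probs))  # → 4 (34.7 + 8.5 + 6.4 + 6.4 = 56.0 ≥ 50)
```

Note that the listed probabilities sum to only about 88%, so the remaining mass is spread over journals below the display cutoff.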