
Time-resolved hemodynamic responses to sentence-level speech perception, production, and self-monitoring

Leong, T. I.; Li, A.; Ang, J. H.; Reynolds, B. L.; Leong, C. T.; Choi, C. U.; Sereno, M. I.; Li, D.; Lei, V. L. C.; Huang, R.-S.

2026-04-14 neuroscience
10.64898/2026.04.13.715885 bioRxiv
Abstract

Functional magnetic resonance imaging (fMRI) has been widely used to explore the neural mechanisms underlying speech processing. However, the intertwining of perception and production that exists in real-world scenarios remains underexplored due to challenges such as gradient noise and head motion artifacts from speaking. Previous research has often employed sparse-sampling designs, pausing image acquisition intermittently to present auditory stimuli or record overt speech. While this approach mitigates some challenges, it cannot capture continuous brain activity during speech processing and does not separate the mixed hemodynamic responses to external and self-generated speech occurring in succession. We overcame these limitations and continuously scanned thirty-one participants as they listened to and recited English sentences. Through independent component analysis (ICA), we decomposed each functional scan into spatially independent components (ICs), identifying task-related ICs in the superior temporal cortex, inferior frontal gyrus, and orofacial sensorimotor cortex. These ICs demonstrated time-resolved hemodynamic responses corresponding to distinct stages of speech perception, planning, and production. A linear subtraction between the IC time courses from the listening-reciting (perception-to-production) and listening (perception-only) tasks further revealed a secondary hemodynamic response to self-generated speech in the superior temporal cortex. Furthermore, we established precise temporal relationships between overt speech output and the peak, rise, and fall of hemodynamic responses for each independent component. Together, we present a methodological framework that can inform future fMRI studies on naturalistic tasks involving the perception of external auditory stimuli and monitoring of self-generated sounds.

Matching journals

The top 3 journals account for 50% of the predicted probability mass.

| Rank | Journal | Papers in training set | Percentile | Probability |
|-----:|---------|-----------------------:|-----------:|------------:|
| 1 | NeuroImage | 813 | Top 0.2% | 28.6% |
| 2 | The Journal of Neuroscience | 928 | Top 0.9% | 13.1% |
| 3 | Imaging Neuroscience | 242 | Top 0.2% | 10.4% |
| | *50% of probability mass above this line* | | | |
| 4 | eNeuro | 389 | Top 1% | 5.0% |
| 5 | Frontiers in Neuroscience | 223 | Top 0.6% | 5.0% |
| 6 | Scientific Reports | 3102 | Top 21% | 5.0% |
| 7 | Human Brain Mapping | 295 | Top 1% | 4.1% |
| 8 | eLife | 5422 | Top 29% | 3.2% |
| 9 | Proceedings of the National Academy of Sciences | 2130 | Top 24% | 2.8% |
| 10 | Communications Biology | 886 | Top 4% | 2.5% |
| 11 | Journal of Cognitive Neuroscience | 119 | Top 0.8% | 1.8% |
| 12 | Frontiers in Human Neuroscience | 67 | Top 1% | 1.7% |
| 13 | Cerebral Cortex | 357 | Top 1% | 1.4% |
| 14 | Nature Communications | 4913 | Top 59% | 0.9% |
| 15 | PLOS ONE | 4510 | Top 63% | 0.9% |
| 16 | Neurophotonics | 37 | Top 0.5% | 0.9% |
| 17 | Neurobiology of Language | 28 | Top 0.1% | 0.8% |
| 18 | Science Advances | 1098 | Top 29% | 0.8% |
| 19 | PLOS Computational Biology | 1633 | Top 24% | 0.8% |
| 20 | Neuron | 282 | Top 9% | 0.7% |
| 21 | PLOS Biology | 408 | Top 22% | 0.7% |
| 22 | NeuroImage: Clinical | 132 | Top 4% | 0.7% |
| 23 | Journal of Neuroscience Methods | 106 | Top 2% | 0.5% |
| 24 | Cell Reports | 1338 | Top 36% | 0.5% |
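The "50% of probability mass" cutoff is a simple cumulative sum over the ranked probabilities: walk down the list, accumulating each journal's predicted probability, and stop at the first rank where the running total reaches 50%. A minimal sketch, using the probabilities from the table above (the site's actual computation is not shown, so this is illustrative only):

```python
# Predicted probabilities (%) in rank order, copied from the table above.
probs = [28.6, 13.1, 10.4, 5.0, 5.0, 5.0, 4.1, 3.2, 2.8, 2.5,
         1.8, 1.7, 1.4, 0.9, 0.9, 0.9, 0.8, 0.8, 0.8, 0.7,
         0.7, 0.7, 0.5, 0.5]

# Accumulate probability mass until the running total first reaches 50%.
cumulative = 0.0
for rank, p in enumerate(probs, start=1):
    cumulative += p
    if cumulative >= 50.0:
        break

print(rank, round(cumulative, 1))  # → 3 52.1
```

The top 3 journals sum to 52.1%, which is why the divider sits below rank 3 rather than at exactly 50%.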