Time-resolved hemodynamic responses to sentence-level speech perception, production, and self-monitoring
Leong, T. I.; Li, A.; Ang, J. H.; Reynolds, B. L.; Leong, C. T.; Choi, C. U.; Sereno, M. I.; Li, D.; Lei, V. L. C.; Huang, R.-S.
Functional magnetic resonance imaging (fMRI) has been widely used to explore the neural mechanisms underlying speech processing. However, the intertwining of perception and production that occurs in real-world scenarios remains underexplored, owing to challenges such as gradient noise and head motion artifacts from speaking. Previous research has often employed sparse-sampling designs, pausing image acquisition intermittently to present auditory stimuli or record overt speech. While this approach mitigates some challenges, it cannot capture continuous brain activity during speech processing and does not separate the mixed hemodynamic responses to external and self-generated speech occurring in succession. We overcame these limitations by continuously scanning thirty-one participants as they listened to and recited English sentences. Using independent component analysis (ICA), we decomposed each functional scan into spatially independent components (ICs), identifying task-related ICs in the superior temporal cortex, inferior frontal gyrus, and orofacial sensorimotor cortex. These ICs demonstrated time-resolved hemodynamic responses corresponding to distinct stages of speech perception, planning, and production. A linear subtraction between the IC time courses from the listening-reciting (perception-to-production) and listening (perception-only) tasks further revealed a secondary hemodynamic response to self-generated speech in the superior temporal cortex. Furthermore, we established precise temporal relationships between overt speech output and the peak, rise, and fall of hemodynamic responses for each independent component. Together, these findings provide a methodological framework that can inform future fMRI studies on naturalistic tasks involving the perception of external auditory stimuli and the monitoring of self-generated sounds.
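The subtraction logic described in the abstract can be illustrated with a toy numpy sketch. This is not the authors' code: the Gaussian bumps are hypothetical stand-ins for hemodynamic responses, and the timings and amplitudes are arbitrary, chosen only to show how subtracting a perception-only time course from a perception-to-production time course isolates a secondary response and how its peak, rise, and fall can be read off.

```python
import numpy as np

t = np.linspace(0, 60, 600)  # 60 s of signal at a toy sampling rate

def hrf_bump(center, width=3.0):
    """Gaussian stand-in for a hemodynamic response peak (not a real HRF)."""
    return np.exp(-0.5 * ((t - center) / width) ** 2)

# Hypothetical IC time courses for one region (e.g. superior temporal cortex):
listening = hrf_bump(15)                                 # perception-only task
listening_reciting = hrf_bump(15) + 0.8 * hrf_bump(35)   # perception, then production

# Linear subtraction isolates the secondary response to self-generated speech.
secondary = listening_reciting - listening

# Timing of the residual response: peak, plus rise/fall at half maximum.
peak_time = t[np.argmax(secondary)]
half_max = secondary.max() / 2
above = t[secondary >= half_max]
rise_time, fall_time = above[0], above[-1]
```

In this toy example the residual is the late (35 s) bump alone, so `peak_time` lands near 35 s with rise and fall roughly 3.5 s on either side; with real IC time courses the same subtraction exposes the second hemodynamic response that overlaps the first.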