Optimizing the multivariate temporal response function (mTRF) framework for better identification of neural responses to partially dependent speech variables

Dapper, K.; Hollywood, S.; Dool, T.; Butler, B.; Joanisse, M.

2026-02-26 neuroscience
10.64898/2026.02.25.707435 bioRxiv
An increasingly popular approach to investigating the neural bases of speech processing is forward modeling via a multivariate temporal response function (mTRF). This approach uses stimulus characteristics to predict neural responses, especially in EEG and MEG. A central question is how best to represent the input stimulus. For speech processing, established representations include the speech envelope or spectrogram, as well as feature-based linguistic representations of phonetic content. However, when multiple representations are used as input, a key challenge is how to isolate their relative effects. This is particularly difficult because such representations have nonvanishing mutual information. To address this problem, we propose optimizations to the mTRF framework via a novel statistical approach of cyclic permutation. We further propose methodological improvements to the mTRF model targeting three key challenges: effectively managing the spatial and temporal autocorrelations endemic to multi-sensor EEG data; mitigating the effects of endogenous drift; and introducing robust artifact rejection to enhance data quality. To demonstrate the effectiveness of this approach, we applied the method to a new EEG data set of natural-language listening in 27 adults with normal hearing. Our data showed that including ICA decomposition, artifact rejection, and cyclic permutation in an mTRF analysis improves the isolation of neural responses specific to phonetic and acoustic input variables.

Author Summary

Speech processing happens in stages: it starts with recognizing basic sounds, categorizes them into discrete units called phonemes, and proceeds to understanding words and sentences. The multivariate temporal response function (mTRF) is a method for predicting brain activity from different features of a speech stimulus. Features that can serve as input to an mTRF model include acoustic features, such as the sound envelope, as well as more abstract linguistic features, such as phonemes, the fundamental building blocks of words. One problem in speech research is distinguishing neural responses to different features. This is challenging because knowing one feature of the speech stimulus allows educated guesses about the others (the features share information) and about how that feature will behave in the future (the features are autocorrelated). Both properties make multivariate temporal statistical analysis more difficult. To address this, we propose changes to the preprocessing of the EEG recordings and a new statistical model that uses a partially rearranged version of the stimulus features to isolate the predictive power of a particular type of speech feature.
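The cyclic-permutation idea described above can be illustrated with a toy example. The sketch below is not the authors' implementation (their pipeline also involves ICA, drift correction, and artifact rejection on real EEG); it is a minimal, hypothetical demonstration using synthetic signals and a closed-form ridge fit with time-lagged features. Circularly shifting only one feature stream destroys its temporal alignment with the neural signal while preserving its autocorrelation and marginal statistics, giving a null distribution for that feature's unique contribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: an "envelope" feature and a "phoneme" feature that
# partially share information (nonzero mutual information), plus a fake
# neural signal driven by both. All names and parameters are illustrative.
n = 2000
envelope = rng.standard_normal(n)
phoneme = 0.6 * envelope + 0.8 * rng.standard_normal(n)
neural = 0.68 * envelope + 0.24 * rng.standard_normal(n) * 0 \
    + 0.3 * phoneme + 0.5 * rng.standard_normal(n) - 0.18 * envelope

def lagged(x, lags):
    """Stack time-lagged copies of a 1-D feature into a design matrix.
    np.roll wraps at the edges, which is acceptable for this toy sketch."""
    return np.column_stack([np.roll(x, k) for k in lags])

def ridge_r(X, y, lam=1.0):
    """Closed-form ridge regression; return Pearson r of prediction vs. y."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return np.corrcoef(X @ w, y)[0, 1]

lags = range(0, 5)
X_full = np.hstack([lagged(envelope, lags), lagged(phoneme, lags)])
r_full = ridge_r(X_full, neural)

# Cyclic-permutation null: circularly shift ONLY the phoneme stream by a
# large random offset; the envelope stream stays intact.
r_null = []
for _ in range(100):
    shift = int(rng.integers(n // 4, 3 * n // 4))
    X_perm = np.hstack([lagged(envelope, lags),
                        lagged(np.roll(phoneme, shift), lags)])
    r_null.append(ridge_r(X_perm, neural))

# The drop from r_full to the null distribution estimates the phoneme
# stream's unique predictive power beyond the correlated envelope.
print(round(r_full, 3), round(float(np.mean(r_null)), 3))
```

Because the shifted stream keeps its autocorrelation structure, this null is stricter than shuffling individual samples, which would also destroy the temporal dependencies that make naive permutation tests anticonservative.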

Matching journals

The top 5 journals account for 50% of the predicted probability mass.

Rank  Journal                                           Papers in training set  Percentile  Probability
1     NeuroImage                                        813                     Top 0.5%    21.8%
2     Frontiers in Neuroscience                         223                     Top 0.1%    9.8%
3     Journal of Neuroscience Methods                   106                     Top 0.1%    8.1%
4     PLOS Computational Biology                        1633                    Top 5%      7.0%
5     PLOS ONE                                          4510                    Top 26%     6.6%
      (50% of probability mass above)
6     Journal of Neural Engineering                     197                     Top 0.6%    4.7%
7     Imaging Neuroscience                              242                     Top 1%      3.5%
8     eNeuro                                            389                     Top 3%      3.5%
9     Human Brain Mapping                               295                     Top 2%      3.5%
10    Scientific Reports                                3102                    Top 39%     3.5%
11    Hearing Research                                  49                      Top 0.2%    3.5%
12    Frontiers in Human Neuroscience                   67                      Top 1%      1.6%
13    Brain Topography                                  23                      Top 0.2%    1.2%
14    Neural Computation                                36                      Top 0.8%    0.7%
15    The Journal of Neuroscience                       928                     Top 9%      0.7%
16    Ear & Hearing                                     15                      Top 0.2%    0.7%
17    Neuroscience                                      88                      Top 3%      0.7%
18    Neuroinformatics                                  40                      Top 1%      0.7%
19    Communications Biology                            886                     Top 27%     0.7%
20    Biomedical Signal Processing and Control          18                      Top 0.6%    0.6%
21    The Journal of the Acoustical Society of America  33                      Top 0.2%    0.6%
22    BMC Bioinformatics                                383                     Top 8%      0.6%