
Temporal Dissociation of Syntactic Disambiguation and Memory Retrieval during Sentence Processing: Naturalistic MEG Evidence from Interpretable Models

Dunagan, D.; Low, D. S.; Yue, S.; Meyer, L.; Hale, J.

2026-04-21 · neuroscience · bioRxiv preprint
doi:10.64898/2026.04.20.719609

Human sentence comprehension proceeds word by word, and prior research has proposed two central sources of cognitive demand during incremental processing: forward-looking disambiguation of the incoming information stream, and backward-looking retrieval from working memory of information associated with previous words. Recent work has shown that Transformer-based language models successfully predict sentence-processing load in human psycho- and neurolinguistic data by operationalizing disambiguation cost as next-token surprisal and memory-retrieval cost as normalized attention entropy (NAE). Such models, however, remain difficult to interpret, as it is not well understood which factors causally determine the cost such artificial neural networks assign to a given word. Here, we present interpretable, cognitively grounded models of disambiguation and memory retrieval and evaluate their neural alignment and spatio-temporal correlates using human magnetoencephalography responses to naturalistic narrative speech. Multivariate temporal response function modeling demonstrates, first, that these human-bias-informed models account for observed human language-processing data as well as their Transformer counterparts do. The same modeling framework then suggests that surprisal and NAE temporally dissociate in the cortical language network: surprisal predicts bilateral superior temporal gyrus and supramarginal gyrus activation at ~300-500 ms, whereas NAE predicts activity in the same regions later, at ~750-850 ms. By demonstrating that interpretable neurocomputational models can achieve meaningful brain alignment while maintaining explanatory transparency, this work offers a methodological blueprint for bridging the gap between algorithmic theory and neural implementation.
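The two predictor quantities named in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: surprisal is the standard negative log-probability of the incoming word, and NAE is assumed here to be the Shannon entropy of a word's attention distribution over preceding tokens divided by its maximum possible value; the paper's exact normalization may differ.

```python
import numpy as np

def surprisal(next_token_prob):
    """Disambiguation cost: negative log-probability (in bits) of the incoming word."""
    return -np.log2(next_token_prob)

def normalized_attention_entropy(attn_weights):
    """Memory-retrieval cost (sketch): Shannon entropy of a word's attention
    distribution over preceding tokens, divided by its maximum (log of the
    number of attended tokens) so values fall in [0, 1]. The normalization
    is an assumption, not necessarily the paper's exact formulation."""
    p = np.asarray(attn_weights, dtype=float)
    p = p / p.sum()                      # ensure a proper probability distribution
    nz = p[p > 0]                        # 0 * log 0 contributes nothing
    h = -np.sum(nz * np.log2(nz))
    return h / np.log2(len(p)) if len(p) > 1 else 0.0

# Toy example: a word the model assigns probability 0.125, attending
# uniformly over 4 preceding tokens (maximally spread retrieval).
print(surprisal(0.125))                           # 3.0 bits
print(normalized_attention_entropy([0.25] * 4))   # 1.0 (maximal entropy)
```

On this reading, a sharply peaked attention distribution (retrieval of one specific antecedent) yields NAE near 0, while diffuse attention over many prior words yields NAE near 1.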

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

1. Proceedings of the National Academy of Sciences: 33.0% (2130 papers in training set; Top 0.1%)
2. PLOS Computational Biology: 6.8% (1633 papers in training set; Top 5%)
3. Neuron: 6.8% (282 papers in training set; Top 2%)
4. The Journal of Neuroscience: 6.4% (928 papers in training set; Top 2%)
(50% of probability mass above this line)
5. eLife: 6.4% (5422 papers in training set; Top 13%)
6. Nature Communications: 4.3% (4913 papers in training set; Top 35%)
7. NeuroImage: 3.6% (813 papers in training set; Top 3%)
8. Philosophical Transactions of the Royal Society B: 2.9% (51 papers in training set; Top 2%)
9. Neurobiology of Language: 2.7% (28 papers in training set; Top 0.1%)
10. Nature Neuroscience: 2.4% (216 papers in training set; Top 3%)
11. Nature Human Behaviour: 1.9% (85 papers in training set; Top 2%)
12. Human Brain Mapping: 1.8% (295 papers in training set; Top 3%)
13. PLOS Biology: 1.7% (408 papers in training set; Top 9%)
14. Science Advances: 1.5% (1098 papers in training set; Top 20%)
15. Scientific Reports: 1.3% (3102 papers in training set; Top 64%)
16. Cell Reports: 1.2% (1338 papers in training set; Top 28%)
17. eNeuro: 0.9% (389 papers in training set; Top 8%)
18. Communications Biology: 0.9% (886 papers in training set; Top 19%)
19. Imaging Neuroscience: 0.9% (242 papers in training set; Top 3%)
20. Journal of Cognitive Neuroscience: 0.8% (119 papers in training set; Top 1%)
21. Cognition: 0.8% (44 papers in training set; Top 0.4%)
22. PLOS ONE: 0.7% (4510 papers in training set; Top 69%)
23. Communications Psychology: 0.6% (20 papers in training set; Top 0.4%)
24. Cerebral Cortex: 0.6% (357 papers in training set; Top 2%)
25. PNAS Nexus: 0.6% (147 papers in training set; Top 3%)