Transformer Language Models Reveal Distinct Patterns in Aphasia Subtypes and Recovery Trajectories
Ahamdi, S. S.; Fridriksson, J.; Den Ouden, D.
Language impairments in aphasia are characterized by varied representational disruptions that may be reflected in discourse production. This research examines the capacity of transformer-based language models, particularly GPT-2, to serve as a computational framework for analyzing variation in aphasic narrative speech. A longitudinal dataset of narrative speech samples, collected at six time points from individuals with aphasia (N = 47) as part of an intervention study, was utilized. All transcripts were processed with the GPT-2 language model to obtain activation values from each of its 12 transformer layers. Statistically significant differences in activation magnitude across aphasia subtypes were found at every layer (all p < .001), with the most pronounced effects in the deeper layers. Pairwise Tukey HSD tests revealed consistent distinctions between Broca's aphasia and both Anomic and Wernicke's aphasia, suggesting a shared activation profile between the latter two. Longitudinal tests revealed significant changes over time, especially in the final three layers (10-12). These findings suggest that transformer-based activation patterns reflect meaningful variation in aphasic discourse and could complement current diagnostic tools. Overall, GPT-2 provides a scalable means of modeling representational dynamics in aphasia and enhancing the clinical interpretability of deep language models.
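The per-layer subtype comparison described in the abstract can be sketched as a one-way ANOVA run independently at each of the 12 layers. The sketch below is illustrative only: the activation values are synthetic stand-ins (the study's actual GPT-2 features and transcripts are not reproduced here), the group means and sizes are invented, and the function name is hypothetical. Extracting the real features would typically use a library such as Hugging Face `transformers` with `output_hidden_states=True`.

```python
import numpy as np

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of 1-D samples (one per group)."""
    all_vals = np.concatenate(groups)
    grand_mean = all_vals.mean()
    k = len(groups)           # number of groups (aphasia subtypes)
    n = all_vals.size         # total number of observations
    ss_between = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

rng = np.random.default_rng(0)
n_layers = 12  # GPT-2 (small) has 12 transformer layers

# Synthetic mean-activation-magnitude features per participant, per layer,
# for three hypothetical subtype groups (Broca's, Anomic, Wernicke's).
# The means below are invented purely to make the example run.
groups_per_layer = [
    [rng.normal(loc=mu, scale=1.0, size=15) for mu in (0.0, 0.5, 0.6)]
    for _ in range(n_layers)
]

# One F statistic per layer, mirroring the paper's layer-wise testing.
f_stats = [one_way_anova_f(g) for g in groups_per_layer]
```

In practice the per-layer ANOVAs would be followed by Tukey HSD pairwise comparisons (e.g. `scipy.stats.tukey_hsd`) to identify which subtype pairs differ, as the abstract reports for Broca's versus the other two subtypes.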