Application of Explainable AI in Neuroscience: Enhancing Autism Screening
Geman, O.; Sharghilavan, S.; Abbasi, H.; Toderean, R.; Postolache, O.; Mihai, A.-S.; Karppa, M.
The main challenges in the life of a child with autism are difficulties in communication, behavior, and social interaction. Early diagnosis of this neurodevelopmental disorder improves patient outcomes by enabling more effective, personalized interventions, but diagnosis can be difficult, especially in very young children. Non-invasive, relatively accessible, and able to reflect neural function in real time, electroencephalography (EEG) shows promise in the detection of autism spectrum disorder (ASD). However, because EEG recordings are difficult even for experts to interpret, machine learning and artificial intelligence (AI) are increasingly being applied in this field. In this paper, a ResNet+BiLSTM hybrid deep network was applied and achieved high accuracy in distinguishing individuals with autism from neurotypical subjects. Since AI models typically provide predictions without clear explanations, this study employs explainable AI (XAI) methods such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) to clarify the model's decision-making. Delta, theta, alpha, beta, and gamma band activity, as well as the event-related potential (ERP) components P100, N100, P200, MMN, and P600, were extracted from the EEG recordings and compared between the autistic and neurotypical groups. By integrating SHAP and LIME, the system achieved both accurate classification and transparent explanations, pointing to EEG- and ERP-based features as reliable biomarkers for ASD.
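The abstract names a ResNet+BiLSTM hybrid applied to EEG windows. The sketch below is a minimal illustration of that general architecture in PyTorch, not the authors' implementation: the number of EEG channels, window length, layer sizes, and pooling factors are all assumptions chosen only to show how 1D residual convolutions can feed a bidirectional LSTM classifier.

```python
# Illustrative sketch of a ResNet-style 1D CNN feeding a BiLSTM classifier for
# EEG windows. All shapes and hyperparameters are assumptions, not the paper's.
import torch
import torch.nn as nn


class ResidualBlock1d(nn.Module):
    """Two 1D convolutions with a skip connection (ResNet-style)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size=7, padding=3)
        self.bn1 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size=7, padding=3)
        self.bn2 = nn.BatchNorm1d(out_ch)
        # 1x1 convolution so the skip path matches the output channel count.
        self.shortcut = (nn.Conv1d(in_ch, out_ch, kernel_size=1)
                         if in_ch != out_ch else nn.Identity())
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))


class ResNetBiLSTM(nn.Module):
    """Convolutional feature extractor followed by a bidirectional LSTM head."""

    def __init__(self, n_channels: int = 19, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            ResidualBlock1d(n_channels, 32),
            nn.MaxPool1d(4),
            ResidualBlock1d(32, 64),
            nn.MaxPool1d(4),
        )
        self.bilstm = nn.LSTM(input_size=64, hidden_size=64,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * 64, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        z = self.features(x)               # (batch, 64, time')
        z = z.transpose(1, 2)              # (batch, time', 64) for the LSTM
        out, _ = self.bilstm(z)
        return self.head(out.mean(dim=1))  # average over time, then classify


if __name__ == "__main__":
    # Dummy batch: 8 windows of 19-channel EEG, 512 samples per window.
    eeg = torch.randn(8, 19, 512)
    logits = ResNetBiLSTM()(eeg)
    print(logits.shape)  # torch.Size([8, 2])
```

A trained model of this kind could then be passed to model-agnostic explainers such as SHAP's KernelExplainer or LIME's tabular explainer over derived band-power and ERP features, which is the role SHAP and LIME play in the study described above.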
Matching journals
The top 10 journals account for 50% of the predicted probability mass.