G3DCT: An Interpretable Spatial Grid-based Framework with Temporal Convolution-Transformer for EEG Artifact Identification
He, A.; Wang, X.; Yu, J.; Wang, X.; Ge, Z.; Kong, Y.; Yang, G.; Yang, C.; Yang, C.; Cao, M.
Electroencephalography (EEG) is a fundamental tool in modern neurology, cognitive neuroscience, and brain-computer interfaces, but its practical application is often compromised by artifacts. Physiological artifacts are particularly intractable because their spectral features overlap with those of neural signals, hindering reliable EEG interpretation. In this work, we propose the Grid-based 3D Convolution-Transformer (G3DCT), an interpretable deep learning framework for EEG artifact identification. The framework embeds multi-channel EEG signals into fixed grids to leverage electrode spatial topology, employs parallel multi-branch temporal convolutions and Transformers to handle complex artifacts, and incorporates an attention module that visualizes scalp activation patterns, enhancing physiological interpretability. Evaluations on three datasets demonstrate that G3DCT outperforms existing state-of-the-art models; for challenging combined artifacts, it achieves a 2.8% F1-score gain over the second-best model. These results show that G3DCT provides an efficient and robust solution for EEG artifact identification, with the potential to improve the reliability of EEG-based applications in practice.
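The grid embedding described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the electrode-to-grid coordinates, the 5×5 grid size, and the function name `embed_to_grid` are all assumptions based on a standard 10-20 montage, since the abstract does not specify the exact layout.

```python
import numpy as np

# Hypothetical 10-20 electrode-to-grid coordinates (assumed; the paper's
# exact fixed-grid layout is not given in the abstract).
GRID_COORDS = {
    "Fp1": (0, 1), "Fp2": (0, 3),
    "F7": (1, 0), "F3": (1, 1), "Fz": (1, 2), "F4": (1, 3), "F8": (1, 4),
    "T3": (2, 0), "C3": (2, 1), "Cz": (2, 2), "C4": (2, 3), "T4": (2, 4),
    "T5": (3, 0), "P3": (3, 1), "Pz": (3, 2), "P4": (3, 3), "T6": (3, 4),
    "O1": (4, 1), "O2": (4, 3),
}

def embed_to_grid(eeg, channel_names, grid_shape=(5, 5)):
    """Embed a (n_channels, n_samples) EEG array into a (H, W, n_samples) grid.

    Each channel's time series is placed at its scalp coordinate; unused
    cells stay zero. The result preserves electrode spatial topology so a
    3D convolution can operate jointly over space and time.
    """
    n_samples = eeg.shape[1]
    grid = np.zeros((*grid_shape, n_samples), dtype=eeg.dtype)
    for ch, name in enumerate(channel_names):
        row, col = GRID_COORDS[name]
        grid[row, col, :] = eeg[ch]
    return grid
```

Under this sketch, a 19-channel recording of T samples becomes a 5×5×T tensor, which downstream spatial-temporal layers (such as the paper's multi-branch temporal convolutions) can consume directly.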