Involuntary facial muscle activity during imagined vocalisation contaminates EEG and enables emotion decoding
Tang, Y.; Corballis, P. M.; Hallum, L. E.
Abstract: Decoding imagined speech from electroencephalography (EEG) recordings is potentially useful for brain-computer interfaces. Previous studies have focused on decoding semantic information from EEG, leaving the decoding of emotion - an important component of human communication - largely unexplored. Here, we report two experiments involving participants tasked with overt (n = 14) or imagined (n = 21) emotional vocalisation in five different categories: anger, happiness, neutral, sadness, and pleasure. Throughout, we recorded 64-channel EEG; we computed time-frequency features and used a logistic-regression classifier to evaluate emotion decoding accuracy. In five participants, we also recorded facial surface electromyography (sEMG) during imagined vocalisation, and studied the contamination of EEG by sEMG. Our results show that emotion can be decoded from single-trial EEG recordings of both overt (78.1%, chance = 20%) and imagined vocalisation (36.4%). The high-gamma band (50 to 100 Hz) and lateral EEG channels (T7, T8, and proximal) were important for decoding. sEMG analysis indicated that involuntary facial muscle activity contributed to these spectral and spatial patterns during imagined vocalisation, especially during happy vocalisations. We conclude that involuntary facial muscle activity is associated with certain emotion categories (i.e., happiness), and drives above-chance decoding of emotion from single-trial EEG recordings of imagined vocalisation.
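The decoding pipeline described in the abstract (band-limited spectral features followed by a logistic-regression classifier) can be illustrated with a minimal sketch. The sampling rate, band definitions, synthetic trial data, and feature choices below are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' pipeline): decode five emotion
# categories from single-trial EEG using log band-power features, including a
# high-gamma band (50-100 Hz), and multinomial logistic regression.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
fs = 500                                             # assumed sampling rate (Hz)
n_trials, n_channels, n_samples = 200, 64, 2 * fs    # synthetic stand-in data
eeg = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 5, size=n_trials)           # anger/happy/neutral/sad/pleasure

# Frequency bands of interest; band edges are illustrative assumptions.
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30),
         "gamma": (30, 50), "high_gamma": (50, 100)}

def band_power_features(trials, fs, bands):
    """Return (n_trials, n_channels * n_bands) log band-power features."""
    freqs, psd = welch(trials, fs=fs, nperseg=fs, axis=-1)   # PSD per trial/channel
    feats = []
    for lo, hi in bands.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(psd[..., mask].mean(axis=-1)))   # mean power in band
    return np.concatenate(feats, axis=-1)

X = band_power_features(eeg, fs, bands)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, X, labels, cv=5).mean()
print(f"5-fold CV accuracy: {acc:.1%} (chance = 20%)")
```

With the random synthetic data above, accuracy should hover near the 20% chance level; the point of the sketch is only the feature-extraction and classification structure, not the reported results.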