Human brains implicitly and rapidly distinguish AI from human voices before decoding prosodic meaning
Chen, W.; Pell, M.; Jiang, X.
People encounter AI voices daily. Existing behavioral studies suggest listeners rely on prosodic cues such as intonation and expressiveness to detect audio deepfakes, reporting that AI voices sound prosodically less rich than human voices. To test whether prosodic processing drives deepfake discrimination in the neural time course of voice processing, we recorded electroencephalographic (EEG) data while participants listened to human and AI-generated speakers producing utterances in confident vs. doubtful prosody (tone of voice), with attention directed toward memorizing speaker names. We used voice cloning to control for speaker identity confounds between human and AI voices. Multivariate pattern analysis (MVPA) revealed that neural discrimination of human vs. AI voices emerged rapidly regardless of prosody (confident: 176 ms; doubtful: 134 ms), substantially preceding prosody discrimination (confident vs. doubtful within human voices: 2066 ms; within AI voices: 1366 ms). Acoustic analysis confirmed that prosodic distinctions became classifiable only at utterance offset (90% of normalized duration), converging with the neural evidence that prosody requires near-complete temporal integration. This temporal dissociation between rapid voice-source discrimination and late-emerging prosody decoding suggests that prosody plays a smaller role in audio deepfake detection than listeners retrospectively report. Representational similarity analysis further revealed that spectral envelope features (mel-frequency cepstral coefficients; MFCCs), rather than visually salient high-frequency energy differences, drove neural human-AI discrimination, with the earliest independent contribution of MFCCs (228 ms) closely following the MVPA decoding onset (134-176 ms). Future studies may manipulate specific acoustic components to establish the causal sources of this rapid and sustained neural discrimination.

Significance Statement
People encounter AI voices daily, in phone calls, navigation apps, supermarket checkouts, and subway announcements. Using electroencephalography, we show that the human brain automatically and rapidly distinguishes everyday AI voices from human speech, even without conscious attention to voice source. Although people may attribute this ability to AI voices sounding monotone or prosodically unnatural, the brain relies on subtler acoustic signatures, enabling discrimination before prosodic information becomes available. Attempts to identify the specific acoustic features driving this neural detection were inconclusive, pointing to the need for future causal investigations. We encourage engineers and policymakers to ensure that AI voices remain perceptually detectable, as increasingly humanlike AI voices could cognitively disadvantage the general public if they become indistinguishable from human speech.
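The time-resolved decoding described in the abstract (classifying human vs. AI voices from the EEG pattern at each latency) follows a standard MVPA recipe: fit one classifier per time point and track cross-validated accuracy over the epoch. The sketch below is a minimal illustration of that recipe, not the authors' analysis; the array shapes, simulated data, variable names, and the fixed AUC threshold used to flag an "onset" are all assumptions for demonstration.

```python
# Minimal time-resolved MVPA sketch: one linear classifier per time point,
# scored with cross-validated ROC-AUC. Data are simulated; shapes and names
# are illustrative assumptions, not the authors' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical preprocessed EEG epochs: (n_epochs, n_channels, n_times)
n_epochs, n_channels, n_times = 200, 64, 600
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 2, size=n_epochs)   # 0 = human voice, 1 = AI voice

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Decode the voice-source label from the channel pattern at each time point.
auc_over_time = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5, scoring="roc_auc").mean()
    for t in range(n_times)
])

# In published work, a decoding onset is usually defined statistically
# (e.g., cluster-based permutation tests against chance = 0.5); here we
# simply report the first time index exceeding an arbitrary threshold.
above = auc_over_time > 0.55
onset_idx = int(np.argmax(above)) if above.any() else None
print("First supra-threshold time index:", onset_idx)
```

In practice this per-time-point loop is typically run with dedicated tooling (for example, sliding-estimator utilities in EEG analysis packages), and onset latencies such as the 134-176 ms values reported above would be established with proper statistical correction rather than a fixed threshold.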