AI-enhanced behavioral approach to measuring hearing in infants and toddlers: Proof-of-Concept Study

Schlittenlacher, J.; Blankenship, C.; Jackson, I.; Visram, A.; Munro, K.; Hunter, L.; Moore, D. R.

2025-07-11 pediatrics
10.1101/2025.07.10.25331271 medRxiv
Objective: To show that a basic unsupervised machine learning (ML) algorithm can provide information on the direction of infant and toddler reactions to sound using non-identifiable, video-recorded facial features.

Design: Infants and toddlers were presented with warble tones or single-syllable utterances from 45 degrees to the left or right. A camera recorded their reactions, from which features such as head turns and eye gaze were extracted with OpenFace. Three clusters were formed using Expectation Maximization on 80% of the toddler data. The remaining 20%, together with all of the infant data, were used to verify whether the clusters represent groups corresponding to sound presentations to the left, to the right, and in both directions.

Study Sample: 28 infants (2-5 months) and 30 toddlers (2-4 years), all born preterm (<32 weeks gestational age), were each presented with ten sounds.

Results: The largest cluster comprised 90% of the trials with sound presentations in both directions, indicating "no decision." The remaining two clusters could be interpreted as representing reactions to the left and to the right, respectively, with average sensitivities of 96% for the toddlers and 68% for the infants.

Conclusions: A simple machine learning algorithm was able to form correct decisions on the direction of sound presentation using non-identifiable facial behavioural data.
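The clustering step described above (Expectation Maximization forming three clusters on 80% of the data, verified on the remainder) can be sketched roughly as follows. This is a minimal illustration only, not the authors' pipeline: the per-trial features here are synthetic stand-ins for the head-turn and gaze features OpenFace would supply, and the feature names and distributions are assumptions.

```python
# Sketch of EM clustering on per-trial reaction features, assuming each trial
# is summarized by two hypothetical features (e.g. mean head yaw and gaze
# angle, in degrees). Values are synthetic, not data from the study.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic trials: left turns near -45 deg, right turns near +45 deg,
# and a larger "no decision" group near 0 deg.
left = rng.normal([-45.0, -40.0], 5.0, size=(40, 2))
right = rng.normal([45.0, 40.0], 5.0, size=(40, 2))
none = rng.normal([0.0, 0.0], 5.0, size=(120, 2))
X = np.vstack([left, right, none])
rng.shuffle(X)

# 80/20 split, then EM (a Gaussian mixture) with three components,
# mirroring the three-cluster design described in the abstract.
n_train = int(0.8 * len(X))
gmm = GaussianMixture(n_components=3, random_state=0).fit(X[:n_train])

# Held-out trials are assigned to clusters; interpretation of each cluster
# (left / right / no decision) would be done post hoc, as in the study.
labels = gmm.predict(X[n_train:])
print(len(labels), sorted(set(labels)))
```

Because the clustering is unsupervised, the cluster indices carry no inherent meaning; mapping them to "left," "right," and "no decision" requires inspecting the cluster means, which is presumably why the verification step on held-out data is needed.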
