Insights into dynamic sound localisation: A direction-dependent comparison between human listeners and a Bayesian model.

McLachlan, G. A.; Majdak, P.; Reijniers, J.; Mihocic, M.; Peremans, H.

2024-04-29 · neuroscience
bioRxiv · doi:10.1101/2024.04.26.591250
Abstract

Self-motion is an essential but often overlooked component of sound localisation. While the directional information of a source is implicitly contained in head-centred acoustic cues, that acoustic input needs to be continuously combined with sensorimotor information about the head orientation in order to decode these cues to a world-centred frame of reference. Moreover, head movements significantly reduce ambiguities in the directional information provided by the incoming sound. In this work, we evaluate a Bayesian model of dynamic sound localisation by comparing its predictions to human performance measured in a behavioural sound-localisation experiment. Model parameters were set a priori, based on results from various psychoacoustic and sensorimotor studies, i.e., without any post-hoc parameter fitting to behavioural results. In a spatial analysis, we evaluated the model's capability to predict spatial localisation responses. Further, we investigated the specific effects of stimulus duration, the spatial prior, and the sizes of various model uncertainties on the predictions. The spatial analysis revealed general agreement between the predictions and the actual behaviour. Altering the model uncertainties and the stimulus duration revealed a number of interesting effects, providing new insights into modelling the human integration of acoustic and sensorimotor information in a localisation task.

Author summary

In everyday life, sound localisation requires both interaural and monaural acoustic information. In addition, sensorimotor information about the position of the head is required to create a stable and accurate representation of our acoustic environment. Bayesian inference is an effective mathematical framework to model how humans combine information from different sources and form beliefs about the world. Here, we compare the predictions of a Bayesian model for dynamic sound localisation with data from a localisation experiment. We show that the model parameter values can be derived from previous psychoacoustic and sensorimotor experiments, and that the model, without any post-hoc fitting, can predict general dynamic localisation performance. Finally, the discrepancies between the modelled and behavioural data are analysed by testing the effects of adjusting the model parameters.
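As a rough illustration of the Bayesian cue-combination principle invoked in the summary, the sketch below fuses two independent, Gaussian-distributed azimuth cues by precision weighting under a flat prior. This is a minimal textbook example, not the model evaluated in the paper; all function names and numbers are hypothetical.

```python
import numpy as np

# Minimal sketch of Bayesian cue combination under a flat prior:
# two independent Gaussian likelihoods over source azimuth, e.g. an
# acoustic cue and a sensorimotor (head-orientation-corrected) cue.
# Names and values are illustrative, not taken from the paper's model.
def fuse_gaussian_cues(mu_a, sigma_a, mu_b, sigma_b):
    w_a = 1.0 / sigma_a**2                   # precision of cue A
    w_b = 1.0 / sigma_b**2                   # precision of cue B
    mu_post = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)  # precision-weighted mean
    sigma_post = np.sqrt(1.0 / (w_a + w_b))  # posterior is tighter than either cue
    return mu_post, sigma_post

# Acoustic cue: 30 deg (sd 10); sensorimotor-corrected cue: 20 deg (sd 5).
# The fused estimate is pulled towards the more reliable cue.
mu, sd = fuse_gaussian_cues(30.0, 10.0, 20.0, 5.0)
print(f"fused azimuth: {mu:.1f} deg, sd: {sd:.1f} deg")  # ~22.0 deg, sd ~4.5
```

Precision weighting is the standard result for a product of Gaussians; the paper's actual model additionally handles dynamic head motion, spatial priors, and direction-dependent uncertainties.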

Matching journals

The top 3 journals account for 50% of the predicted probability mass.

Rank  Probability  Training papers  Percentile  Journal
   1        22.8%               33  Top 0.1%    The Journal of the Acoustical Society of America
   2        18.9%             1633  Top 1%      PLOS Computational Biology
   3         8.5%             4510  Top 21%     PLOS ONE
      -- 50% of predicted probability mass above this line --
   4         6.5%               49  Top 0.1%    Hearing Research
   5         4.0%              223  Top 1%      Frontiers in Neuroscience
   6         3.7%               23  Top 0.1%    Journal of Computational Neuroscience
   7         3.6%             3102  Top 35%     Scientific Reports
   8         3.1%               11  Top 0.1%    Journal of the Association for Research in Otolaryngology
   9         2.8%               12  Top 0.1%    Trends in Hearing
  10         2.5%               51  Top 2%      Philosophical Transactions of the Royal Society B
  11         1.2%              813  Top 5%      NeuroImage
  12         1.0%              389  Top 8%      eNeuro
  13         1.0%             1063  Top 24%     iScience
  14         0.9%              189  Top 4%      Journal of The Royal Society Interface
  15         0.8%              383  Top 7%      BMC Bioinformatics
  16         0.8%               26  Top 0.2%    Vision Research
  17         0.7%               38  Top 0.8%    Frontiers in Neuroinformatics
  18         0.7%               93  Top 6%      Frontiers in Physiology
  19         0.7%               32  Top 2%      Chaos, Solitons & Fractals
  20         0.5%              106  Top 2%      Journal of Neuroscience Methods
  21         0.5%              193  Top 6%      Royal Society Open Science
  22         0.5%               15  Top 0.3%    Ear & Hearing
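As a quick sanity check of the statement that the top 3 journals account for 50% of the predicted probability mass, the sketch below accumulates the probabilities listed in the table. The values are copied from above; the "first rank at which the cumulative mass reaches 50%" cutoff rule is an assumption about how the matching tool defines the divider.

```python
# Predicted probabilities (%) from the table above, ranks 1-22.
probs = [22.8, 18.9, 8.5, 6.5, 4.0, 3.7, 3.6, 3.1, 2.8, 2.5,
         1.2, 1.0, 1.0, 0.9, 0.8, 0.8, 0.7, 0.7, 0.7, 0.5, 0.5, 0.5]

cumulative = 0.0
for rank, p in enumerate(probs, start=1):
    cumulative += p
    if cumulative >= 50.0:  # assumed cutoff: first rank reaching 50% of the mass
        print(f"50% of probability mass reached at rank {rank} ({cumulative:.1f}%)")
        break
# -> rank 3 (50.2%), consistent with the divider placement in the table
```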