Computational modeling of head direction cells in three-dimensional space: directional encoding and visual cue manipulation
Wang, Y.; Hu, J.; Xu, S.; Xu, X.; Pan, X.; Wang, R.
Head direction (HD) cells fire selectively as a function of the animal's head azimuth and form an internal compass for navigation. They have been found in the mammalian limbic system, including the dorsal presubiculum and entorhinal cortex. The underlying network updates its directional estimate in a self-organized fashion and can be recalibrated by external sensory cues. Although ring-attractor models have been proposed to account for azimuth coding on horizontal planes, they cannot explain the conjunctive azimuth-and-pitch tuning observed in HD cells of the presubiculum of bats flying in three-dimensional (3-D) space. Based on these 3-D electrophysiological recordings, we developed a toroidal continuous attractor network that jointly encodes the horizontal azimuth and vertical pitch angle of head direction. The model reproduces the experimentally recorded tuning curves of individual HD cells and accurately encodes the bat's dynamic 3-D head direction through the HD cell population. The model also simulates the influence of horizontal visual cue manipulation on the HD system in 3-D space and predicts that horizontal landmark rotation induces a sustained azimuthal offset that persists after the cue is removed, comparable to two-dimensional experimental findings. This research clarifies how conjunctive 3-D direction codes are generated and modified by vestibular input, visual information, and recurrent connectivity. It also uncovers computational principles and properties of the brain's navigation functions in realistic 3-D environments and offers a theoretical reference for future studies.

Author summary

A central function of the brain's navigation system is to track head direction, which in terrestrial mammals is largely confined to a horizontal plane. However, for animals like bats that navigate freely in three-dimensional (3-D) space, the neural code for head direction has been found to be high-dimensional.
While ring attractor network models elegantly explain horizontal azimuth coding, how the 3-D head direction code is formed and shaped by sensory information remains unknown. Here, we develop a toroidal continuous attractor network model with vestibular and visual modules that explains the joint encoding of azimuth and pitch angles and the effect of visual cues. We introduce a toroidal topology to the continuous attractor network, and the high-dimensional angular velocity signal from the vestibular system, together with visual input, is used to update the population dynamics of the network. Our model is constrained by and reproduces 3-D electrophysiological data from flying bats, captures the distinctive tuning curves of individual head direction cells, and achieves accurate encoding of dynamic 3-D head movements by population activity. Evidence in rodents suggests that visual cue manipulation recalibrates head direction coding horizontally. Our model further simulates the influence of visual cue rotation in 3-D space, predicting a persistent, global offset in azimuth encoding that endures after cue removal and aligns with 2-D rodent experimental phenomena. This work provides a mechanistic, network-level explanation for 3-D head direction encoding and reveals how multimodal cues are integrated to form the internal compass in a volumetric world.
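The core idea of a toroidal continuous attractor can be sketched in a few lines: neurons tile a torus of preferred (azimuth, pitch) angles, local excitation plus global inhibition sustains an activity bump, a transient cue pins the bump to a direction, and the bump persists after the cue is removed. The sketch below is a minimal toy model with illustrative parameters (grid size, kernel width, inhibition strength, gains are all assumptions, not the authors' published model), and it omits the vestibular angular-velocity pathway that would move the bump.

```python
import numpy as np

# Toy toroidal continuous attractor network. All parameters are
# illustrative choices, not the authors' published model.
N = 32                                     # neurons per angular dimension
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
az, pitch = np.meshgrid(theta, theta, indexing="ij")
az, pitch = az.ravel(), pitch.ravel()      # preferred (azimuth, pitch) per neuron

def circ_dist(a, b):
    """Shortest angular distance on a circle."""
    d = np.abs(a - b)
    return np.minimum(d, 2 * np.pi - d)

sigma = 0.5                                # excitation kernel width (rad)
d2 = (circ_dist(az[:, None], az[None, :]) ** 2
      + circ_dist(pitch[:, None], pitch[None, :]) ** 2)
W = np.exp(-d2 / (2 * sigma ** 2)) - 0.06  # local excitation, global inhibition

def step(r, cue=0.0, dt=0.1):
    """One Euler step of rate dynamics with rectified, saturating drive."""
    drive = 0.08 * (W @ r) + cue
    return r + dt * (-r + np.tanh(np.maximum(drive, 0.0)))

def decode(r):
    """Population-vector estimate of the bump's (azimuth, pitch)."""
    a = np.angle(np.sum(r * np.exp(1j * az))) % (2 * np.pi)
    p = np.angle(np.sum(r * np.exp(1j * pitch))) % (2 * np.pi)
    return a, p

# Pin the bump with a visual-cue-like input at azimuth 90 deg, pitch 45 deg,
# then remove the cue: the attractor holds the direction estimate.
cue = 0.3 * np.exp(-(circ_dist(az, np.pi / 2) ** 2
                     + circ_dist(pitch, np.pi / 4) ** 2) / (2 * sigma ** 2))
r = np.zeros(N * N)
for _ in range(300):
    r = step(r, cue)
for _ in range(300):
    r = step(r)                            # cue removed; bump persists
a_hat, p_hat = decode(r)
print(np.degrees(a_hat), np.degrees(p_hat))
```

The decoded direction staying near the cue location after the cue is withdrawn is the same attractor property that, in the full model, underlies the sustained azimuthal offset following landmark rotation.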