Predictive E-prop: A biologically inspired approach to train predictive coding-based recurrent spiking neural networks
Noe, D.; Yamamoto, H.; Katori, Y.; Sato, S.
The predictive coding framework offers a compelling model for temporal signal processing in the cortex. Recent studies have explored its implementation in spiking architectures using Hebbian plasticity rules or offline learning; however, a biologically inspired model that enables gradient-based minimization of prediction errors remains an open challenge. In this work, we show that the predictive coding objective can be optimized with the online, local e-prop learning algorithm in recurrent spiking neural networks, yielding the Predictive E-prop model. We demonstrate that the model learns complex time-series signals in a purely self-supervised manner, using only its own prediction error as input, while maintaining self-sustaining activity and reproducing the target's underlying dynamics even in the absence of external stimuli. Furthermore, Predictive E-prop shows robust signal-reconstruction abilities, effectively filtering noise and successfully interpolating sparse data. A comparative study against a backpropagation-based approach reveals that the two achieve comparable performance after training, confirming the viability of our model for time-series generation tasks. These findings are particularly relevant for future developments in neuromorphic hardware, offering a purely self-supervised, gradient-based model that could provide significant advantages in power efficiency and computational ability.
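The abstract does not give implementation details, but the core idea it names — an online, local e-prop update driven by the network's own prediction error — can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the LIF dynamics (soft reset), pseudo-derivative, network sizes, and all parameter values are assumptions, and the learning signal is broadcast through fixed random weights `B` as in broadcast-style e-prop variants.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rec, T = 50, 400              # recurrent units, time steps (assumed)
dt, tau_m, thr, lr = 1.0, 20.0, 0.6, 1e-3
alpha = np.exp(-dt / tau_m)     # membrane / trace decay factor

# Weights: recurrent, error-feedback input, readout, random broadcast.
W_rec = rng.normal(0, 0.1, (n_rec, n_rec)); np.fill_diagonal(W_rec, 0)
w_in  = rng.normal(0, 0.5, n_rec)
w_out = rng.normal(0, 0.1, n_rec)
b     = rng.normal(0, 0.5, n_rec)   # fixed broadcast weights for the learning signal

target = np.sin(2 * np.pi * np.arange(T) / 100)  # toy time series

v = np.zeros(n_rec); z = np.zeros(n_rec)
z_bar = np.zeros(n_rec)             # low-pass-filtered presynaptic spikes
e_tr  = np.zeros((n_rec, n_rec))    # eligibility traces, one per synapse
y = 0.0
errors = []
for t in range(T):
    # Predictive coding: the only external input is the prediction error.
    err = y - target[t]
    v = alpha * v + W_rec @ z + w_in * err - z * thr  # LIF with soft reset
    z = (v > thr).astype(float)
    psi = 0.3 * np.maximum(0.0, 1.0 - np.abs((v - thr) / thr))  # pseudo-derivative
    z_bar = alpha * z_bar + z
    e_tr = alpha * e_tr + np.outer(psi, z_bar)        # local eligibility traces
    L = b * err                                       # broadcast learning signal
    W_rec -= lr * L[:, None] * e_tr                   # online, local e-prop update
    w_out -= lr * err * z_bar                         # delta rule on the readout
    y = float(w_out @ z_bar)                          # next-step prediction
    errors.append(err ** 2)
```

Each update uses only quantities available at the current time step and local to each synapse (the eligibility trace) plus a scalar error broadcast, which is what makes the rule online and biologically plausible in the sense the abstract describes.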