Asymmetric Reinforcement Learning Explains Human Choice Patterns in Decision-making Under Risk
Shahdoust, N.; Cowan, R. L.; Price, T. A.; Davis, T. S.; Liu, A.; Rabinovich, R.; Zarr, V.; Libowitz, M. R.; Shofty, B.; Rahimpour, S.; Borisyuk, A.; Smith, E. H.
Human decisions under uncertainty are shaped by experience, but the computations that translate expectation and experience into choice remain debated in neural and cognitive science. Prior studies highlight reinforcement learning (RL) as a unifying framework, yet it is unclear whether human behavior under risk is better captured by symmetric updating from outcomes or by asymmetric learning that weights reward and loss differently. This work examines which learning strategies better explain trial-by-trial choices under contextual uncertainty and manipulations of outcome distributions. Our results show that a risk-sensitive (RS) model with asymmetric learning rates best explains human behavior in our novel decision-making task. Fitting candidate models to individual trial histories yielded value signals that predicted both choice and response time. These results highlight that the RS model, with its asymmetric learning rates, provides a concise and identifiable account of behavior in decision-making tasks under risk.
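The core idea of asymmetric learning described above can be illustrated with a minimal sketch: a value update in which the prediction error is scaled by a different learning rate depending on its sign. This is a generic risk-sensitive TD update, not the paper's actual model specification; the function and parameter names (`rs_update`, `alpha_pos`, `alpha_neg`) are illustrative assumptions.

```python
def rs_update(value: float, outcome: float, alpha_pos: float, alpha_neg: float) -> float:
    """One risk-sensitive value update (illustrative sketch, not the paper's code).

    The reward prediction error is weighted by alpha_pos when the outcome
    exceeds the current value estimate, and by alpha_neg when it falls short,
    so gains and losses update the value estimate at different speeds.
    """
    delta = outcome - value  # reward prediction error
    alpha = alpha_pos if delta >= 0 else alpha_neg  # asymmetric learning rate
    return value + alpha * delta
```

With `alpha_neg > alpha_pos`, negative outcomes pull the value estimate down faster than equally sized positive outcomes pull it up, producing risk-averse choice; reversing the inequality produces risk-seeking behavior. The symmetric RL alternative is the special case `alpha_pos == alpha_neg`.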