Predicting continuous outcomes: Some new tests of associative approaches to contingency learning.

Chow, J.; Don, H. J.; Colagiuri, B.; Livesey, E. J.

2025-06-15 · bioRxiv (animal behavior and cognition)
doi:10.1101/2025.06.12.659290
Associative learning models have traditionally simplified contingency learning by relying on binary classification of cues and outcomes, such as administering a medical treatment (or not) and observing whether the patient recovered (or not). While successful in capturing fundamental learning phenomena across human and animal studies, these models cannot represent the variability in experience that is common in many real-world contexts. Indeed, where variation in outcome magnitude exists (e.g., severity of illness in a medical scenario), this class of models, at best, approximates the outcome mean, with no ability to represent the underlying distribution of values. In this paper, we introduce one approach to incorporating a distributed architecture into a prediction-error learning model that tracks the contingency between cues and dimensional outcomes. Our Distributed Model allows associative links to form between the cue and outcome nodes that provide a distributed representation depending on the magnitude of the outcome, thus enabling learning that extends beyond approximating the mean. Comparing the Distributed Model against a Simple Delta Model across four contingency learning experiments, we found that the Distributed Model provides a significantly better fit to the empirical data in virtually all participants. These findings suggest human learners rely on a means of encoding outcomes that preserves the continuous nature of experienced events, advancing our understanding of causal inference in complex environments.

Author Summary

When we learn about cause and effect in everyday life--such as whether a medicine helps recovery from illness--we experience outcomes that vary in degree rather than simply happening or not happening. Traditional models of how humans and animals learn have largely focused on these all-or-nothing scenarios, essentially tracking only the average value when outcomes are dimensional.
We developed a model that extends simple error-correction models to represent how people learn about relationships between cues and outcomes that can take on a range of values. Instead of just tracking the average, our Distributed Model captures the full spectrum of possible outcomes and their frequencies. We tested this model against a conventional single point-estimate approach across four experiments and found that the Distributed Model better matched how people make predictions in nearly every case. Our findings suggest that a relatively simple adjustment to conventional prediction-error learning algorithms, allowing representation of outcome magnitudes, provides a powerful way to capture the information we preserve when learning about variable outcomes. This has important implications for understanding how people make predictions and decisions in real-world situations where outcomes naturally vary, from medical treatments to environmental changes.
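The contrast between a single point-estimate delta rule and a distributed one can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the number of outcome nodes, the Gaussian teaching signal over node centers, and the learning rate are all choices made here for clarity.

```python
import numpy as np

def simple_delta(outcomes, alpha=0.1):
    """Simple Delta Model: a single weight driven toward the outcome mean
    by the prediction error on each trial."""
    v = 0.0
    for o in outcomes:
        v += alpha * (o - v)  # delta-rule update on the prediction error
    return v

def distributed_delta(outcomes, n_nodes=11, alpha=0.1):
    """Distributed sketch: weights to a bank of outcome-magnitude nodes.

    Each trial activates nodes near the observed magnitude (a Gaussian
    bump -- an illustrative assumption), and each weight is updated by
    the same delta rule, so the weight vector comes to approximate the
    outcome distribution rather than only its mean."""
    centers = np.linspace(0.0, 1.0, n_nodes)   # node centers over [0, 1]
    w = np.zeros(n_nodes)
    width = 1.0 / (n_nodes - 1)                # bump width = node spacing
    for o in outcomes:
        target = np.exp(-0.5 * ((centers - o) / width) ** 2)
        target /= target.sum()                 # teaching signal over nodes
        w += alpha * (target - w)              # element-wise delta rule
    return centers, w

# A bimodal outcome stream (magnitudes 0.2 or 0.8) separates the models:
rng = np.random.default_rng(0)
outcomes = rng.choice([0.2, 0.8], size=500)
v = simple_delta(outcomes)          # settles near the mean, ~0.5
centers, w = distributed_delta(outcomes)  # weights peak near 0.2 and 0.8
```

With these outcomes the single weight hovers near 0.5, a value that is never actually experienced, while the distributed weights develop two modes at the experienced magnitudes, preserving the shape of the outcome distribution.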

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

 #  Journal                                                                  Papers in training set  Percentile  Probability
 1  PLOS Computational Biology                                               1633                    Top 0.4%    26.1%
 2  Psychological Review                                                     19                      Top 0.1%    12.5%
 3  Proceedings of the National Academy of Sciences                          2130                    Top 7%      8.5%
 4  Behavioral Neuroscience                                                  25                      Top 0.1%    4.9%
 -- 50% of probability mass above this line --
 5  Frontiers in Artificial Intelligence                                     18                      Top 0.1%    4.4%
 6  PLOS ONE                                                                 4510                    Top 36%     4.0%
 7  Scientific Reports                                                       3102                    Top 36%     3.6%
 8  eLife                                                                    5422                    Top 25%     3.6%
 9  Journal of Cognitive Neuroscience                                        119                     Top 0.6%    2.9%
10  Nature Communications                                                    4913                    Top 43%     2.8%
11  Cognition                                                                44                      Top 0.3%    1.7%
12  Nature Human Behaviour                                                   85                      Top 2%      1.7%
13  Philosophical Transactions of the Royal Society B: Biological Sciences   53                      Top 0.5%    1.5%
14  Frontiers in Behavioral Neuroscience                                     46                      Top 0.8%    0.9%
15  Communications Psychology                                                20                      Top 0.2%    0.9%
16  F1000Research                                                            79                      Top 5%      0.8%
17  The Journal of Neuroscience                                              928                     Top 8%      0.8%
18  Neuropsychologia                                                         77                      Top 1%      0.8%
19  iScience                                                                 1063                    Top 34%     0.7%
20  Neural Computation                                                       36                      Top 0.8%    0.7%
21  Cognitive, Affective, & Behavioral Neuroscience                          25                      Top 0.3%    0.6%
22  Science Advances                                                         1098                    Top 32%     0.6%
23  Addiction Neuroscience                                                   17                      Top 0.5%    0.6%
24  Animal Cognition                                                         22                      Top 0.2%    0.6%
25  Frontiers in Psychology                                                  49                      Top 1%      0.6%
26  Progress in Neurobiology                                                 41                      Top 3%      0.5%
27  Learning & Memory                                                        23                      Top 0.2%    0.5%
28  Journal of The Royal Society Interface                                   189                     Top 6%      0.5%
29  Proceedings of the Royal Society B: Biological Sciences                  341                     Top 8%      0.5%
30  eNeuro                                                                   389                     Top 11%     0.5%