
Modelling Speed-Accuracy Tradeoffs in the Stopping Rule for Confidence Judgments

Herregods, S.; Le Denmat, P.; Desender, K.

bioRxiv (neuroscience), 2023-02-28. DOI: 10.1101/2023.02.27.530208
Abstract

Making a decision and reporting confidence in the accuracy of that decision are thought to be driven by the same mechanism: the accumulation of evidence. Previous research has shown that choices and reaction times are well accounted for by a computational model assuming noisy accumulation of evidence until crossing a decision boundary (e.g., the drift diffusion model). Decision confidence can be derived from the amount of evidence following post-decision evidence accumulation. Currently, however, the stopping rule for post-decision evidence accumulation is underspecified. Here, we quantitatively and qualitatively compare four prominent models of confidence, couched within the evidence accumulation framework, in their ability to account for this stopping rule. In two experiments, participants were instructed to make fast or accurate decisions, and to give fast or carefully considered confidence judgments. We then compared the different models in their ability to capture these speed-accuracy effects on confidence. Both qualitatively and quantitatively, the data were best accounted for by our newly proposed Flexible Collapsing Boundaries model, in which post-decision accumulation terminates once it reaches one of two opposing, slowly collapsing confidence boundaries. Inspection of the parameters of this model revealed that instructing participants to make fast versus accurate decisions influenced the height of the decision boundaries, while instructing participants to make fast versus careful confidence judgments influenced the height of the confidence boundaries. Our data show that the stopping rule for confidence judgments can be well described as an accumulation-to-bound process, and that the height of these confidence boundaries is under strategic control.
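The two-stage process the abstract describes can be sketched in code: evidence accumulates to one of two decision boundaries (yielding choice and reaction time), then accumulation continues until it hits one of two confidence boundaries that slowly collapse toward the chosen decision boundary. This is a minimal illustrative simulation, not the authors' implementation; all parameter values (drift, noise, boundary heights, collapse rate) are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift=0.5, noise=1.0, dt=0.001,
                   dec_bound=1.0, conf_bound0=1.0, collapse=0.5):
    """One trial of a drift-diffusion decision followed by post-decision
    accumulation to collapsing confidence boundaries. Parameter values
    are illustrative, not taken from the paper."""
    x, t = 0.0, 0.0
    # Decision stage: noisy accumulation until +/- dec_bound is crossed.
    while abs(x) < dec_bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    choice = 1 if x > 0 else -1
    rt = t
    # Post-decision stage: two confidence boundaries, placed symmetrically
    # around the chosen decision boundary, collapse linearly toward it.
    s = 0.0  # post-decision time
    while True:
        gap = max(conf_bound0 - collapse * s, 0.0)
        upper = choice * dec_bound + gap
        lower = choice * dec_bound - gap
        if x >= upper or x <= lower or gap <= 1e-9:
            break
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        s += dt
    # High confidence if accumulation terminated on the choice's side.
    confident = (x - choice * dec_bound) * choice >= 0
    return choice, rt, s, confident
```

Raising `dec_bound` mimics an "accurate decision" instruction (slower, more accurate choices), while raising `conf_bound0` or lowering `collapse` mimics a "careful confidence" instruction (longer post-decision accumulation), which is the dissociation the model's parameters captured.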

Matching journals

The top 4 journals account for 50% of the predicted probability mass.

Rank  Journal                                           Papers in training set  Top %     Probability
 1    PLOS Computational Biology                        1633                    Top 0.7%  22.4%
 2    Psychological Review                              19                      Top 0.1%  17.4%
 3    Scientific Reports                                3102                    Top 10%   8.4%
 4    PLOS ONE                                          4510                    Top 29%   6.3%
----- 50% of probability mass above -----
 5    eLife                                             5422                    Top 26%   3.6%
 6    Cognition                                         44                      Top 0.2%  3.1%
 7    eNeuro                                            389                     Top 4%    2.3%
 8    Journal of Cognitive Neuroscience                 119                     Top 0.7%  2.1%
 9    Nature Communications                             4913                    Top 47%   2.1%
10    Computational Psychiatry                          12                      Top 0.1%  1.9%
11    Philosophical Transactions of the Royal Society B 51                      Top 3%    1.9%
12    Frontiers in Neuroscience                         223                     Top 4%    1.7%
13    Neural Computation                                36                      Top 0.4%  1.7%
14    Proceedings of the National Academy of Sciences   2130                    Top 33%   1.7%
15    Neural Networks                                   32                      Top 0.4%  1.7%
16    Journal of Vision                                 92                      Top 0.4%  1.2%
17    Psychonomic Bulletin & Review                     14                      Top 0.1%  0.9%
18    Bulletin of Mathematical Biology                  84                      Top 2%    0.9%
19    Journal of The Royal Society Interface            189                     Top 4%    0.9%
20    Journal of Neurophysiology                        263                     Top 0.7%  0.9%
21    Cognitive, Affective, & Behavioral Neuroscience   25                      Top 0.2%  0.8%
22    Attention, Perception, & Psychophysics            17                      Top 0.1%  0.7%
23    NeuroImage                                        813                     Top 6%    0.7%
24    Nature Human Behaviour                            85                      Top 5%    0.6%
25    Journal of Computational Neuroscience             23                      Top 0.5%  0.6%
26    Frontiers in Behavioral Neuroscience              46                      Top 1%    0.6%