Which visual working memory model accounts best for target representation in the attentional blink?
Wang, S.; Karabay, A.; Akyürek, E.
The nature of working memory (WM) limitations has been a topic of long-standing debate, with several models proposed to elucidate this issue. In this study, we conducted a systematic comparison of seven visual WM models to assess their ability to account for target consolidation during the attentional blink (AB). The AB refers to the phenomenon in which participants often fail to encode the second of two targets when the interval between them is ~500 msec or less, providing an opportunity to evaluate commensurate WM limitations. Despite the growing consensus on the applicability of some WM models, such as the standard mixture model and the variable precision model, to the AB domain, no study has systematically evaluated these models in this context. We compared the performance of seven widely adopted visual WM models in four different AB datasets, drawn from three separate laboratories. We fitted each model and computed Akaike Information Criterion (AIC) values at the individual level, across conditions and experiments, and compared the models on that basis. Slot-family models most often minimized AIC for second-target reports at short lags, while variable-precision models improved at longer lags and with color targets, indicating predominantly discrete consolidation during the AB, with feature- and lag-dependent graded components. These patterns imply that failure-to-encode (guessing) dominates over low-precision encoding, except when feature content or lag affords partial consolidation, refining theories of episodic tokenization and WM consolidation during the AB.
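The per-individual AIC comparison described above can be sketched as follows. This is a minimal illustration, not the authors' analysis code: the model names, log-likelihoods, and parameter counts are hypothetical placeholders, and only the AIC formula (AIC = 2k − 2 ln L) and the minimum-AIC selection rule are taken from standard practice.

```python
def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike Information Criterion: AIC = 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits for one participant in one condition: maximized
# log-likelihoods and free-parameter counts for two of the compared
# model families (values are illustrative, not from the study).
fits = {
    "standard_mixture": {"log_lik": -410.3, "n_params": 2},
    "variable_precision": {"log_lik": -408.9, "n_params": 3},
}

aics = {name: aic(f["log_lik"], f["n_params"]) for name, f in fits.items()}
best_model = min(aics, key=aics.get)  # model minimizing AIC wins
```

In the actual study this comparison would be repeated per participant, lag, and dataset, with the winning model tallied across individuals.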