Nonparametric Bayesian Contextual Control: Integrating Automatisation and Prior Knowledge for Stable Adaptive Behaviour
Hranova, S.; Kiebel, S.; Smolka, M. N.; Schwöbel, S.
Humans have a remarkable ability to act efficiently and accurately in familiar situations while remaining flexible in novel circumstances. Nonparametric contextual inference has been proposed as a computational principle that can model how agents achieve flexible yet stable behaviour in dynamic and possibly unknown environments. However, it remains an open question how humans learn, deploy and reuse stable contextual task representations so efficiently. To address this question, we propose the nonparametric Bayesian Contextual Control (NP-BCC) model, which integrates nonparametric contextual learning with two well-established cognitive mechanisms: repetition-based automatisation and schema-like prior knowledge. These two mechanisms are assumed to support behavioural stability and facilitate novel task acquisition. Simulations in dynamic multi-armed bandit tasks of increasing difficulty illustrate how the NP-BCC can acquire and reuse contextual task representations, with the proposed mechanisms operating in the intended, functionally meaningful manner. Specifically, we show via simulations that automatisation not only enhances task performance but also stabilises contextual inference and structure learning, while structured prior knowledge accelerates the acquisition of novel contexts. We discuss the implications of our findings for computational accounts of adaptive behaviour and contextual learning, and outline directions for future empirical work, including investigations of context-dependent behavioural dysregulation relevant to conditions such as substance use disorders.

Author summary

People are very good at repeating well-learned actions in familiar situations, but they can also quickly adjust their behaviour when circumstances change. How the brain balances stability and flexibility is still not fully understood. There is growing evidence that the brain organises experience into different "contexts": mental representations of the situations we encounter.
Computational models based on this idea can in principle reproduce flexible behaviour, but they often become unstable in complex environments. To improve stability, we borrow two simple strategies from everyday human behaviour. First, people tend to repeat actions that have worked well before. Second, when facing something new, they often reuse strategies from similar past situations. Using simulations, we show that combining these strategies with context-based learning produces more reliable behaviour in the model. Prior experience helps the model understand new situations more quickly, while repeated actions help stabilise behaviour once a situation becomes familiar. Taken together, our findings show how such mechanisms can give rise to both flexible and stable behaviour in the model.
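To make the combination of strategies described above concrete, the following minimal sketch implements a nonparametric contextual bandit agent: contexts are created and assigned via a Chinese-restaurant-process prior, and a repetition bonus on the previous action stands in for automatisation. This is an illustrative toy, not the authors' NP-BCC model; the concentration parameter `ALPHA`, the `REP_BONUS` term, the Beta-Bernoulli arm model and the toy two-context environment are all assumptions chosen for the example.

```python
import random

random.seed(0)
N_ARMS = 3
ALPHA = 1.0      # CRP concentration: propensity to open new contexts (assumed value)
REP_BONUS = 0.3  # strength of the repetition bias, a stand-in for automatisation (assumed)

contexts = []    # each context: visit count "n", per-arm success/failure counts "s"/"f"

def new_context():
    return {"n": 0, "s": [0] * N_ARMS, "f": [0] * N_ARMS}

def arm_mean(ctx, a):
    # Posterior mean reward of arm a under a Beta(1, 1) prior.
    return (ctx["s"][a] + 1) / (ctx["s"][a] + ctx["f"][a] + 2)

def infer_context(arm, reward):
    # CRP posterior over contexts given one (arm, reward) observation; MAP assignment.
    total = sum(c["n"] for c in contexts) + ALPHA
    w = [(c["n"] / total) * (arm_mean(c, arm) if reward else 1 - arm_mean(c, arm))
         for c in contexts]
    w.append((ALPHA / total) * 0.5)  # a fresh context predicts reward uniformly
    best = max(range(len(w)), key=w.__getitem__)
    if best == len(contexts):
        contexts.append(new_context())
    return best

def choose_arm(ctx, last_arm):
    # Greedy on posterior means, plus a bonus for repeating the previous action.
    return max(range(N_ARMS),
               key=lambda a: arm_mean(ctx, a) + (REP_BONUS if a == last_arm else 0.0))

# Toy environment: the best arm switches halfway through, mimicking a context change.
true_p = [[0.9, 0.2, 0.2], [0.2, 0.2, 0.9]]
cur, last_arm, hits = new_context(), 0, 0
for t in range(400):
    env = 0 if t < 200 else 1
    arm = choose_arm(cur, last_arm)
    reward = 1 if random.random() < true_p[env][arm] else 0
    k = infer_context(arm, reward)          # which context explains this outcome?
    cur = contexts[k]
    cur["n"] += 1
    cur["s" if reward else "f"][arm] += 1   # update that context's arm statistics
    last_arm = arm
    hits += reward

print(len(contexts), hits)
```

The repetition bonus keeps the agent from churning between arms while a context's statistics are still noisy, which is the stabilising role the summary attributes to repeated actions; richer versions would carry priors from old contexts into new ones to model schema-like reuse.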