Abstract
This paper offers a formal account of policy learning, or habitual behavioural optimisation, under the framework of Active Inference. In this setting, habit formation becomes an autodidactic, experience-dependent process, based upon what the agent sees itself doing. We focus on the effect of environmental volatility on habit formation by simulating artificial agents operating in a partially observable Markov decision process. Specifically, we use a ‘two-step’ maze paradigm, in which the agent must decide whether to go left or right to secure a reward. We observe that in volatile environments with numerous reward locations, agents learn to adopt a generalist strategy, never forming a strong habitual preference for any maze direction. Conversely, in conservative or static environments, agents adopt a specialist strategy, forming strong preferences for policies that lead to a small number of previously observed reward locations. The pros and cons of the two strategies are tested and discussed. In general, specialisation offers greater benefits, but only when contingencies are conserved over time. We consider the implications of this formal (Active Inference) account of policy learning for understanding the relationship between specialisation and habit formation.
Author Summary
Active Inference is a theoretical framework that formalises the behaviour of any organism in terms of a single imperative: to minimise surprise. Starting from this principle, we can construct simulations of simple “agents” (artificial organisms) that can infer causal relationships and learn. Here, we expand upon existing implementations of Active Inference by enabling synthetic agents to optimise the space of behavioural policies that they can pursue. Our results show that by adapting the probabilities of certain action sequences (which may correspond biologically to synaptic plasticity), and by rejecting improbable sequences (synaptic pruning), agents can begin to form habits. Furthermore, we show that this habit formation is environment-dependent: some agents become specialised to a constant environment, while others adopt a more general strategy, each with its own pros and cons. This work has potential applications in computational psychiatry, including behavioural phenotyping to better understand disorders.
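The specialist/generalist contrast described above can be illustrated with a deliberately simplified sketch. This is not the paper's full Active Inference model: the agent here uses only a hypothetical leaky belief update and a deterministic left/right choice, and habit strength is read off as the entropy of the agent's own action history, echoing the idea that habits are learned from what the agent sees itself doing. The `volatility` parameter, update rate, and episode count are illustrative assumptions.

```python
import math
import random

def simulate(volatility, episodes=500, seed=1):
    """Toy two-step-maze analogue: go left or right for a reward whose
    location may relocate.  Habit strength = low entropy of the agent's
    own action history (an illustrative stand-in, not the paper's model)."""
    rng = random.Random(seed)
    reward_side = "left"
    belief_left = 0.5                       # belief that the reward is on the left
    counts = {"left": 0, "right": 0}        # tally of the agent's own choices
    for _ in range(episodes):
        if rng.random() < volatility:       # volatile worlds relocate the reward
            reward_side = rng.choice(["left", "right"])
        action = "left" if belief_left >= 0.5 else "right"
        counts[action] += 1
        obs = 1.0 if reward_side == "left" else 0.0
        belief_left = 0.8 * belief_left + 0.2 * obs   # leaky belief update
    p = counts["left"] / episodes
    # entropy (in bits) of the action history: 0 = fully habitual specialist
    return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

specialist = simulate(volatility=0.0)   # static reward location
generalist = simulate(volatility=0.2)   # frequently relocated reward
```

In a static environment the agent repeats one action and the entropy of its choice history collapses towards zero (a strong habit); under volatility its choices track the moving reward and the history stays high-entropy (no dominant habit).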