February 28, 2021

Learning when effort matters: Neural dynamics underlying updating and adaptation to changes in performance efficacy

To determine how much cognitive control to invest in a task, people need to consider whether exerting control matters for obtaining potential rewards. In particular, they need to account for the efficacy of their performance: the degree to which potential rewards are determined by their performance or by independent factors (e.g., random chance). Yet it remains unclear how people learn about their performance efficacy in a given environment. Here, we examined the neural and computational mechanisms through which people (a) learn and dynamically update efficacy expectations in a changing environment, and (b) proactively adjust control allocation based on their current efficacy expectations. We recorded EEG in 40 participants performing an incentivized cognitive control task, while their performance efficacy (the likelihood that reward for a given trial would be determined by performance or at random) dynamically varied over time. We found that participants continuously updated their self-reported efficacy expectations based on recent feedback, and that these updates were well described by a standard prediction error-based reinforcement learning algorithm. Paralleling findings on the updating of expected rewards, we found that model-based estimates of efficacy prediction errors were encoded by the feedback-related P3b. Updated expectations of efficacy in turn influenced the levels of effort exerted on subsequent trials, reflected in greater proactive control (indexed by the contingent negative variation [CNV]) and improved performance when participants expected their performance to be more efficacious. These findings demonstrate that learning about and adapting to the efficacy of one’s environment are underpinned by computations and neural mechanisms similar to those involved in learning about potential reward.
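The "standard prediction error-based reinforcement learning algorithm" mentioned above can be illustrated with a minimal delta-rule sketch. This is not the authors' model code; the learning rate, variable names, and binary feedback coding are assumptions for illustration only.

```python
# Hypothetical sketch of a delta-rule (prediction-error) update of an
# efficacy expectation, as described in the abstract. The learning rate
# (alpha) and the 0/1 coding of feedback are illustrative assumptions.

def update_efficacy(expectation, outcome, alpha=0.3):
    """Return the updated efficacy expectation after one trial's feedback.

    expectation: current belief (0..1) that reward is performance-determined
    outcome: 1 if this trial's reward was determined by performance, else 0
    alpha: learning rate weighting recent feedback against the prior belief
    """
    prediction_error = outcome - expectation  # efficacy prediction error
    return expectation + alpha * prediction_error

# Repeated "performance-determined" feedback pushes the expectation
# toward 1; repeated "random reward" feedback would push it toward 0.
e = 0.5
for _ in range(5):
    e = update_efficacy(e, outcome=1)
```

On this account, the same prediction-error term that drives the belief update is what the abstract reports being encoded by the feedback-related P3b.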

 bioRxiv Subject Collection: Neuroscience
