Primates and rodents can continually acquire, adapt, and transfer knowledge and skills, supporting goal-directed behavior throughout their lifespan. When contexts switch slowly, animals learn through slow processes; when contexts switch rapidly, they rely on fast processes. We build a biologically plausible modular model organized like a distributed computing system. Specifically, we emphasize the role of slow thalamocortical learning between the prefrontal cortex (PFC) and the medial dorsal thalamus (MD). Previous work has provided experimental evidence for distinct cell ensembles in the medial dorsal thalamus, each encoding a different context; however, the mechanism by which this classification is learned remains unclear. Here we show that such learning can be self-organizing in the manner of an automaton (a distributed computing system), via a combination of Hebbian learning and homeostatic synaptic scaling. In the simple case of two contexts, we show that a hierarchically structured network performs context-dependent decision making and switches smoothly between contexts. Our learning rule induces synaptic competition among thalamic cells, producing winner-take-all activity. Our theory shows that the capacity of this learning process depends on the total number of task-related hidden variables and is bounded by the system size N. We also derive analytically the effective functional connectivity as a function of an order parameter determined by the thalamocortical coupling structure.
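The proposed mechanism, Hebbian potentiation combined with homeostatic scaling of each cell's total synaptic weight, can be illustrated with a minimal sketch. This is not the paper's model: the network sizes, input patterns, learning rate, and the small symmetry-breaking initialization (standing in for random connectivity) are all illustrative assumptions. Two model MD cells receive input from two disjoint PFC ensembles, one per context, and come to encode distinct contexts through winner-take-all competition.

```python
import numpy as np

n_pfc, n_md = 20, 2  # illustrative sizes: PFC inputs, MD cells

# Two context-specific PFC activity patterns over disjoint ensembles (assumed)
contexts = np.zeros((2, n_pfc))
contexts[0, :10] = 1.0   # context A activates the first ensemble
contexts[1, 10:] = 1.0   # context B activates the second ensemble

# Near-uniform weights with a tiny asymmetry to break symmetry
# (in a full model this role is played by random connectivity)
W = np.full((n_md, n_pfc), 0.05)
W[0, 0] += 1e-3
W[1, n_pfc - 1] += 1e-3

lr, target_norm = 0.1, 1.0
for _ in range(200):
    for x in contexts:
        h = W @ x
        y = np.zeros(n_md)
        y[np.argmax(h)] = 1.0       # winner-take-all MD activity
        W += lr * np.outer(y, x)    # Hebbian potentiation of the winner
        # Homeostatic synaptic scaling: each cell's total weight norm is
        # held fixed, so potentiated synapses grow at the others' expense
        W /= np.linalg.norm(W, axis=1, keepdims=True) / target_norm

winners = [int(np.argmax(W @ c)) for c in contexts]
print(winners)  # distinct MD cells respond to distinct contexts
```

Because scaling conserves each cell's total synaptic weight, a cell that wins one context loses drive from the other, so the two cells specialize onto different contexts rather than one cell capturing both.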
bioRxiv Subject Collection: Neuroscience