The brain performs probabilistic inference to interpret the external world, but the underlying neuronal mechanisms remain poorly understood. The stimulus structure of natural scenes lives in a high-dimensional feature space, and how the brain represents and infers the joint posterior distribution in this rich, combinatorial space is a challenging problem. The difficulty is compounded at the neuronal level, since many of these features are computed in parallel by distributed neural circuits. Here, we present a novel solution to this problem. We study continuous attractor neural networks (CANNs), each representing and inferring one stimulus attribute, in which coupling between attractor networks supports sampling-based inference on the multivariate posterior over the high-dimensional stimulus features. Using perturbative analysis, we show that the dynamics of coupled CANNs realizes Langevin sampling on the stimulus feature manifold embedded in neural population responses. In our framework, feedforward inputs convey the likelihood, reciprocal connections encode the correlational priors of the stimulus features, and the internal Poisson variability of the neurons generates the random walks required for sampling. Our model achieves high-dimensional joint probability representation and Bayesian inference in a distributed manner, with each attractor network inferring the marginal posterior of its corresponding stimulus feature. Each stimulus feature can be read out with a simple linear decoder based only on the local activity of its network. Simulation experiments confirm our theoretical analysis. This study provides insight into the fundamental neural mechanisms for realizing efficient high-dimensional probabilistic inference.
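To make the sampling scheme concrete, the following is a minimal sketch (not the paper's network model) of Langevin sampling on a two-feature Gaussian posterior. It mirrors the abstract's three ingredients under simplifying assumptions: a Gaussian likelihood standing in for the feedforward input, a correlated Gaussian prior standing in for the reciprocal coupling between attractor networks, and injected white noise standing in for the neurons' internal variability; all variable names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed cues (the "feedforward input"), with Gaussian likelihood x_i ~ N(s_i, sigma_l2)
x = np.array([1.0, -0.5])
sigma_l2 = 0.5

# Correlational prior over the two stimulus features (the "reciprocal coupling")
rho = 0.6
Sigma_p = np.array([[1.0, rho],
                    [rho, 1.0]])
Lambda_p = np.linalg.inv(Sigma_p)  # prior precision matrix

def grad_log_post(s):
    """Gradient of the log posterior: likelihood pull plus prior coupling."""
    return (x - s) / sigma_l2 - Lambda_p @ s

# Langevin dynamics: ds = eps * grad log p(s|x) + sqrt(2*eps) * white noise
eps, n_steps, burn_in = 0.01, 60_000, 10_000
s = np.zeros(2)
samples = []
for t in range(n_steps):
    s = s + eps * grad_log_post(s) + np.sqrt(2 * eps) * rng.standard_normal(2)
    if t >= burn_in:
        samples.append(s.copy())
samples = np.array(samples)

# Analytic Gaussian posterior for comparison (conjugate Gaussian-Gaussian case)
Lambda_post = Lambda_p + np.eye(2) / sigma_l2
mu_post = np.linalg.solve(Lambda_post, x / sigma_l2)

print("empirical mean:", samples.mean(axis=0))
print("analytic mean: ", mu_post)
```

Because the posterior is Gaussian here, the empirical mean of the samples converges to the analytic posterior mean, which is the sanity check the simulation prints; in the paper's setting the analogous marginal statistics would be carried by each attractor network's population activity.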
bioRxiv Subject Collection: Neuroscience