A fundamental challenge at the interface of machine learning and neuroscience is to uncover computational principles that are shared between artificial and biological neural networks. In deep learning, normalization methods, such as batch normalization, weight normalization, and their many variants, help stabilize hidden unit activity and accelerate network training, and these methods have been called one of the most important recent innovations for optimizing deep networks. In the brain, homeostatic plasticity represents a set of mechanisms that also stabilize and normalize network activity to lie within certain ranges, and these mechanisms are critical for maintaining normal brain function. Here, we propose a functional equivalence between normalization methods in deep learning and homeostatic plasticity mechanisms in the brain. First, we discuss parallels between artificial and biological normalization methods at four spatial scales: normalization of a single neuron’s activity, normalization of synaptic weights of a neuron, normalization of a layer of neurons, and normalization of a network of neurons. Second, we show empirically that normalization methods in deep learning push activation patterns of hidden units towards a homeostatic state, where all neurons are equally used, a process we call "load balancing". Third, we develop a neural normalization algorithm, inspired by a phenomenon called synaptic scaling, and show that this algorithm performs competitively against existing normalization methods. Overall, we hope this connection will enable neuroscientists to propose new hypotheses for why normalization works so well in practice and new normalization algorithms based on established neurobiological principles. In return, machine learning researchers can help quantify the trade-offs of different homeostatic plasticity mechanisms in the brain and offer insights about how stability may promote plasticity.
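To make the synaptic-scaling idea concrete, the sketch below shows one common textbook formulation of the mechanism: each postsynaptic neuron multiplicatively rescales all of its incoming weights so that its average activity drifts toward a fixed homeostatic set point. This is an illustrative toy in NumPy, not the authors' actual algorithm from the paper; the function name, the ReLU activation, the target rate, and the learning rate `eta` are all assumptions made for the example.

```python
import numpy as np

def synaptic_scaling_step(W, X, target=1.0, eta=0.1):
    """One multiplicative synaptic-scaling update (illustrative sketch).

    W : (n_out, n_in) incoming weight matrix
    X : (n_in, batch) batch of input activity patterns
    """
    # Activity of each postsynaptic neuron over the batch (ReLU units).
    A = np.maximum(W @ X, 0.0)            # shape (n_out, batch)
    avg = A.mean(axis=1)                  # mean activity per neuron

    # Each neuron scales *all* of its incoming weights by the same factor,
    # nudging its average activity toward the homeostatic set point.
    scale = 1.0 + eta * (target - avg) / target
    return W * scale[:, None]             # broadcast scale over inputs

# Toy usage: positive weights and inputs, so activities start well above
# the set point and are scaled down toward it over repeated updates.
rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, size=(8, 16))
X = rng.uniform(0.0, 1.0, size=(16, 32))
for _ in range(200):
    W = synaptic_scaling_step(W, X)
mean_activity = np.maximum(W @ X, 0.0).mean(axis=1)
```

After repeated updates every neuron's mean activity sits near the shared target, which is the "load balancing" property described above: no unit is left saturated or silent. The multiplicative (rather than additive) update mirrors a key experimental feature of synaptic scaling, namely that relative weight differences within a neuron are preserved while overall drive is renormalized.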
bioRxiv Subject Collection: Neuroscience