The algorithmic rules that define deep neural networks are precisely specified; however, the principles that govern their performance remain poorly understood. Here, we use systems-neuroscience and information-theoretic approaches to analyse a feedforward neural network as it is trained to classify handwritten digits. By tracking the topology of the network as it learns, we identify three distinct phases of topological reconfiguration. Each phase brings the connections of the network into alignment with patterns of information contained in the input dataset, as well as in the preceding layers. Dimensionality reduction of the network's activity reveals a process of low-dimensional category separation over the course of learning. Our results enable a systems-level understanding of how deep neural networks function, and show how such networks reorganize their edge weights and activity patterns so as to most effectively exploit the information-theoretic content of the input data during training.
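A minimal sketch (not the authors' code) of the kind of analysis described above: train a small feedforward network on handwritten digits, then project its hidden-layer activations into two dimensions to look for low-dimensional category separation. The dataset (scikit-learn's bundled digits), architecture, and hyperparameters are illustrative assumptions, not the study's actual setup.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# 8x8 handwritten-digit images, scaled to [0, 1].
X, y = load_digits(return_X_y=True)
X = X / 16.0

# A single-hidden-layer feedforward network (architecture is an assumption).
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(X, y)

# Hidden-layer activations: relu(X @ W1 + b1) with the learned weights.
hidden = np.maximum(0, X @ clf.coefs_[0] + clf.intercepts_[0])

# Reduce activations to 2D; after training, digit classes should form
# more separable clusters here than in the raw pixel space.
proj = PCA(n_components=2).fit_transform(hidden)
print(proj.shape)  # one 2D point per input image
```

Repeating the projection at checkpoints during training, rather than only at the end, is how one would visualise the separation process as a function of learning.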
bioRxiv Subject Collection: Neuroscience