Visual acuity is better for vertical and horizontal orientations than for oblique ones. This cross-species phenomenon is often explained by "efficient coding", whereby more neurons are tuned, and tuned more sharply, to the orientations most common in natural vision. However, it is unclear whether experience alone can account for such biases. Here, we measured orientation representations in a convolutional neural network, VGG-16, trained on modified versions of ImageNet (rotated by 0, 22.5, or 45 degrees counter-clockwise from upright). Discriminability in each model was highest near the orientations that were most common in that network's training set. Furthermore, narrowly tuned units selective for the most common orientations were over-represented. These effects emerged in middle layers and increased with depth in the network. Our results suggest that biased orientation representations can emerge through experience with a non-uniform distribution of orientations. These findings thus support the efficient coding hypothesis and highlight that biased training data can systematically distort processing in CNNs.
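As an illustration of the training-set manipulation described above, a rotated ImageNet variant could be generated along the following lines. This is a minimal sketch, not the authors' pipeline: it assumes Pillow for image handling, a 224x224 input size typical for VGG-16, and the convention that a positive angle in `Image.rotate` is counter-clockwise (matching "counter-clockwise from upright").

```python
# Hedged sketch of building rotated training-image variants.
# Assumptions (not from the source): Pillow is used, inputs are 224x224,
# and the frame is kept fixed (expand=False) so image dimensions match
# across the 0, 22.5, and 45 degree conditions.
from PIL import Image

ROTATIONS = [0.0, 22.5, 45.0]  # one training set per rotation condition

def rotate_image(img: Image.Image, angle: float) -> Image.Image:
    """Rotate counter-clockwise by `angle` degrees, keeping the original frame."""
    return img.rotate(angle, resample=Image.BILINEAR, expand=False)

if __name__ == "__main__":
    # Stand-in for a real ImageNet image.
    img = Image.new("RGB", (224, 224), "gray")
    variants = {angle: rotate_image(img, angle) for angle in ROTATIONS}
    for angle, rotated in variants.items():
        print(angle, rotated.size)
```

Keeping `expand=False` means all three conditions feed the network identically sized inputs, so any difference in learned orientation tuning is attributable to image content rather than geometry of the input tensor.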
bioRxiv Subject Collection: Neuroscience