Humans can not only effortlessly recognize objects, but also characterize object categories as semantic concepts and organize them into nested hierarchical structures. Similarly, deep convolutional neural networks (DCNNs) can learn to recognize objects as accurately as humans; yet it is unclear whether they can learn semantic relatedness among objects that is not provided in the training dataset. This question matters because it may shed light on how humans acquire semantic knowledge about objects without top-down conceptual guidance. To address it, we examined the relations among object categories, indexed by representational similarity, in two typical DCNNs (AlexNet and VGG11). We found that representations of object categories were organized in a hierarchical fashion, suggesting that relatedness among objects emerged automatically when the networks learned to recognize them. Critically, the relatedness that emerged in the DCNNs closely matched the WordNet hierarchy derived from human knowledge, implying that top-down conceptual guidance may not be a prerequisite for humans to learn the relatedness among objects. Finally, the developmental trajectory of this relatedness during training revealed that the hierarchical structure was constructed in a coarse-to-fine fashion and matured before object recognition ability was fully established. Taken together, our study provides the first empirical evidence that the semantic relatedness of objects emerges as a by-product of learning to recognize objects, implying that humans may acquire semantic knowledge about objects without explicit top-down conceptual guidance.
bioRxiv Subject Collection: Neuroscience
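The analysis pipeline described in the abstract (representational similarity, hierarchical clustering of category representations, comparison against a WordNet-like taxonomy) can be illustrated with a minimal sketch. The feature vectors below are synthetic stand-ins for DCNN category-level activations, and the reference taxonomy distances are a hypothetical toy hierarchy, not the actual WordNet distances used in the study:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic stand-in for DCNN category representations: 8 categories drawn
# from 2 coarse clusters, each split into 2 finer sub-clusters, mimicking
# a nested hierarchy (e.g. animate/inanimate at the top level).
n_feat = 64
coarse = rng.normal(size=(2, n_feat))
fine = coarse.repeat(2, axis=0) + 0.5 * rng.normal(size=(4, n_feat))
cats = fine.repeat(2, axis=0) + 0.2 * rng.normal(size=(8, n_feat))

# Representational dissimilarity matrix: 1 - Pearson correlation.
rdm = squareform(pdist(cats, metric="correlation"))

# Hierarchical (agglomerative) clustering of the category representations.
Z = linkage(squareform(rdm), method="average")

# Cutting the dendrogram at 2 clusters should recover the coarse split.
labels = fcluster(Z, t=2, criterion="maxclust")

# Toy taxonomy distances (same sub-cluster = 1, same coarse cluster = 2,
# otherwise 3), analogous to comparing the emergent structure to WordNet.
ref = np.zeros((8, 8))
for i in range(8):
    for j in range(8):
        if i != j:
            ref[i, j] = 1 if i // 2 == j // 2 else (2 if i // 4 == j // 4 else 3)

# Second-order similarity: rank-correlate the two distance matrices.
iu = np.triu_indices(8, k=1)
rho, _ = spearmanr(rdm[iu], ref[iu])
print(labels, round(rho, 2))
```

Tracking `rho` and the dendrogram across training checkpoints would trace the coarse-to-fine developmental trajectory the abstract describes.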