In everyday life, we have no trouble recognizing and categorizing objects as they change in position, size, and orientation in our visual fields. This phenomenon is known as object invariance. Previous fMRI research suggests that higher-level object processing regions in the human lateral occipital cortex may link object responses from different affine states (i.e., size and viewpoint) through a general linear mapping function, with the learned mapping capable of predicting responses to novel objects. In this study, we extended this approach to examine the mapping for both Euclidean (e.g., position and size) and non-Euclidean (e.g., image statistics and spatial frequency) transformations across the human ventral visual processing hierarchy, including areas V1, V2, V3, V4, ventral occipitotemporal cortex (VOT), and lateral occipitotemporal cortex (LOT). The predicted pattern generated from a linear mapping captured a significant amount, but not all, of the variance of the true pattern across the ventral visual pathway. The derived linear mapping functions were not entirely category independent, as performance was better for the categories included in the training. Moreover, prediction performance was not consistently better in higher than in lower visual regions, nor were there notable differences between Euclidean and non-Euclidean transformations. Together, these findings demonstrate a near-orthogonal representation of object identity and non-identity features throughout the human ventral visual processing pathway, with the non-identity features largely untangled from the identity features early in visual processing.
bioRxiv Subject Collection: Neuroscience