Abstract
Object classification has been proposed as a principal objective of the primate ventral visual stream and has been used as an optimization target for deep neural network (DNN) models of the visual system. However, visual brain areas represent many different types of information, and optimizing for classification of object identity alone does not constrain how other information may be encoded in visual representations. Information about different scene parameters may be discarded altogether (“invariance”), represented in non-interfering subspaces of population activity (“factorization”), or encoded in an entangled fashion. In this work, we provide evidence that factorization is a normative principle of biological visual representations. In the monkey ventral visual hierarchy, we found that factorization of object pose and background information from object identity increased in higher-level regions and strongly contributed to improving object identity decoding performance. We then conducted a large-scale analysis of factorization of individual scene parameters – lighting, background, camera viewpoint, and object pose – in a diverse library of DNN models of the visual system. Models which best matched neural spiking, fMRI, and behavioral data from both monkeys and humans across 12 datasets tended to be those which factorized scene parameters most strongly. Notably, invariance to these parameters was not as consistently associated with matches to neural and behavioral data, suggesting that maintaining non-class information in factorized activity subspaces is often preferred to dropping it altogether. Thus, we propose that factorization of visual scene information is a widely used strategy in brains and in DNN models thereof.
Introduction
Artificial deep neural networks (DNNs) are the most predictive models of neural responses to images in the primate high-level visual cortex1,2. Many studies have reported that DNNs trained to perform image classification produce internal feature representations broadly similar to those in areas V4 and IT of the primate cortex, and that this similarity tends to be greater in models with better classification performance3. However, it remains unclear which aspects of these more performant models’ representations drive their better match to neural data. Moreover, beyond a certain threshold level of object classification performance, further improvement fails to produce a concomitant improvement in predicting primate neural responses2,4,5. This weakening trend motivates finding new normative principles, besides object classification ability, that push models to better match primate visual representations.
One strategy for achieving high object classification performance is to form neural representations that discard some (are tolerant to) or all (are invariant to) information besides object class. Invariance in neural representations is in some sense a zero-sum strategy: building invariance to some parameters improves the ability to decode others, but at the cost of losing the ability to decode the parameters that are discarded. We also note that our use of “invariance” in this context refers to invariance in neural representations, rather than behavioral or perceptual invariance6. However, high-level cortical neurons in the primate ventral visual stream are known to simultaneously encode many forms of information about visual input besides object identity, such as object pose7–10. In this work, we seek to characterize how the brain simultaneously represents these different forms of information.
In particular, we introduce methods to quantify the relationships between different types of visual information in a population code (e.g., object pose vs. camera viewpoint), and specifically the degree to which different forms of information are “factorized.” Intuitively, if the variance driven by one parameter is encoded along orthogonal dimensions of population activity space compared to the variance driven by other scene parameters, we say that this representation is factorized. We note that our definition of factorization is closely related to the existing concept of manifold disentanglement6,11 and can be seen as a generalization of disentanglement to high-dimensional visual scene parameters like object pose. Factorization can enable simultaneous decoding of many parameters at once, supporting diverse visually guided behaviors (e.g., spatial navigation, object manipulation or object classification)12.
Using existing neural datasets, we found that both factorization of and invariance to object category and position information increase across the macaque ventral visual cortical hierarchy. Next, we leveraged the flexibility afforded by in silico models of visual representations to probe different forms of factorization and invariance in more detail, focusing on several scene parameters of interest: background content, lighting conditions, object pose, and camera viewpoint. Across a broad library of DNN models that varied in their architecture and training objectives, we found that factorization of all of the above scene parameters in DNN feature representations was positively correlated with models’ matches to neural and behavioral data. Interestingly, while neural invariance to some scene parameters (background scene and lighting conditions) predicted neural fits, invariance to others (object pose and camera viewpoint) did not. Our results generalized across both monkey and human datasets using different measures (neural spiking, fMRI, and behavior; 12 datasets total) and could not be accounted for by models’ classification performance. Thus, we suggest that factorized encoding of multiple behaviorally-relevant scene variables is an important consideration, alongside other desiderata such as classification performance, in building more brain-like models of vision.
Results
Disentangling object identity manifolds in neural population responses can be achieved by qualitatively different strategies. These include building invariance of responses to non-identity scene parameters (or, more realistically, partial invariance6) and/or factorizing non-identity-driven response variance into isolated (factorized) subspaces (Figure 1A, left vs. center panels, cylindrical/spherical shaded regions represent object manifolds). Both strategies maintain an “identity subspace” in which object manifolds are linearly separable. In a non-invariant, non-factorized representation, other variables like camera viewpoint also drive variance within the identity subspace, “entangling” the representations of the two variables (Figure 1A, right; viewpoint driven variance is mainly in identity subspace, orange flat shaded region).
To formalize these different representational strategies, we introduced measures of factorization and invariance to scene parameters in neural population responses (Figure 1B; see Equations 2-4 in Methods). Concretely, invariance to a scene variable (e.g., object motion) is computed by measuring the degree to which varying that parameter alone changes neural responses, relative to the changes induced by varying other parameters (lower relative influence on neural activity corresponds to higher invariance, or tolerance, to that parameter). Factorization is computed by identifying the axes in neural population activity space that are influenced by varying the parameter of interest and assessing how much they overlap the axes influenced by other parameters (“a” in Figure 1B,C; lower overlap corresponds to higher factorization). We quantified this overlap in two different ways (“PCA based” and “covariance based” factorization, corresponding to Equations 2 and 4 in Methods), which produced similar results in subsequent analyses (unless otherwise noted, factorization scores refer to the PCA based method; the covariance method is shown in Figures 5-7 for comparison). Intuitively, a neural population in which one subpopulation encodes object identity and another, separate subpopulation encodes object position exhibits a high degree of factorization of those two parameters (note, however, that factorization may also be achieved by neural populations with mixed selectivity, in which the “subpopulations” correspond to subspaces, or independent orthogonal linear projections, of neural activity space rather than physical subpopulations). Though the example presented in Figure 1 focuses on factorization of and invariance to object identity versus non-identity variables, we stress that our definitions can be applied to any scene variables of interest. Furthermore, Figure 1 presents a simplified visual depiction of the geometry within each scene variable subspace. We emphasize that our factorization metric does not require a particular geometry within a variable’s subspace, whether a parallel, linearly ordered coding of viewpoint as in the cylindrical class manifolds of Figure 1A and 1B, or a more complex geometry lacking parallelism and/or having a more nonlinear layout.
While factorization and invariance are not mutually exclusive representational strategies, they are qualitatively different. Factorization, unlike invariance, has the potential to enable the simultaneous representation of multiple scene parameters in a decodable fashion. Intuitively, factorization increases with dimensionality, all else being equal, because higher dimensionality decreases overlap (in the limit, the angle between randomly chosen directions approaches 90°, a fully orthogonal code). For a given finite, fixed dimensionality, factorization is mainly driven by the angle between one variable’s subspace and the subspaces of the other variables, which measures the degree of contamination between them (Figure 1C; square vs. parallelogram). In a simulation, we found that the extent to which the variables of interest were represented in a factorized way (i.e., along orthogonal rather than correlated axes) influenced the ability of a linear discriminator to successfully decode both variables in a generalizable fashion from a few training samples (Figure 1C).
Given the theoretically desirable properties of factorized representations, we next asked whether such representations are observed in neural data, and how much factorization contributes empirically to downstream decoding performance in real data. Specifically, we took advantage of an existing dataset in which the tested images independently varied object identity versus object pose plus background context13. We found that both V4 and IT responses exhibited significantly greater factorization of object identity information from non-identity information than a shuffle control (which accounts for effects of the dimensionality of these regions on factorization) (Figure S1; see Methods). Furthermore, the degree of factorization increased from V4 to IT (Figure 2A). Consistent with prior studies, we also found that invariance to non-identity information increased from V4 to IT in our analysis (Figure 2A, right, solid lines)14. Invariance to non-identity information was even more pronounced when measured in the subspace of population activity capturing the bulk (90%) of identity-driven variance, as a consequence of increased factorization of identity from non-identity information (Figure 2A, right, dashed lines).
To illustrate the beneficial effect of factorization on decoding performance, we performed a statistical lesion experiment that precisely targeted this aspect of representational geometry. Specifically, we analyzed a transformed neural representation obtained by rotating the population data so that inter-class variance more strongly overlapped with the principal components of the intra-class variance in the data (see Equation 1 in Methods). Note that this transformation, designed to decrease factorization, acts on the angle between latent variable subspaces. The applied linear basis rotation leaves all other activity statistics completely intact (such as mean neural firing rates, covariance structure of the population, and its invariance to non-class variables) yet has the effect of strongly reducing object identity decoding performance in both V4 and IT (Figure 2B). Our analysis shows that maintaining invariance alone in the neural population code was insufficient to account for a large fraction of decoding performance in high-level visual cortex; factorization of non-identity variables is key to the decoding performance achieved by V4 and IT representations.
We next asked whether factorization is found in deep neural network (DNN) model representations and whether this heretofore unconsidered metric is a strong indicator of more brainlike models. When working with computational models, we have the liberty to test an arbitrary number of stimuli; we could therefore independently vary multiple scene parameters at sufficient scale to compute factorization and invariance for each, exploring factorization in DNN representations in more depth than has been measured in existing neural experiments. To gain insight back into neural representations, we also assessed the ability of each model to predict separately collected neural and behavioral data. In this fashion, we may indirectly assess the relative significance of geometric properties like factorization and invariance to biological visual representations – if, for instance, models with more factorized representations consistently match neural data more closely, we may infer that those neural representations likely exhibit factorization themselves (Figure 3). To measure factorization, invariance, and decoding properties of DNN models, we generated an augmented image set, based on the images used in the previous dataset (Figure 2), in which we independently varied the foreground object identity, foreground object pose, background identity, scene lighting, and 2D scene viewpoint. Specifically, for each base image from the original dataset, we generated sets of images that varied exactly one of the above scene parameters while keeping the others constant, allowing us to measure the variance induced by each parameter relative to the variance across all scene parameters (Figure 3, top left; 100 base scenes and 10 transformed images for each source of variation). We presented this large image dataset (4,000 images total) to models to assess the relative degree of representational factorization of and invariance to each scene parameter. We conducted this analysis across a broad range of DNNs varying in architecture, objective, and other implementational choices, to obtain the widest possible range of DNN representations for testing our hypothesis. These included models using supervised training for object classification15,16, contrastive self-supervised training17,18, and self-supervised training using auxiliary objective functions19–22 (see Methods and Table S2).
First, we asked whether, in the course of training, DNN models develop factorized representations at all. We found that the final layers of trained networks exhibited consistent increases in factorization of all tested scene parameters relative to a randomly initialized (untrained) baseline with the same architecture (Figure 4A, top row, rightward shift relative to black cross, a randomly initialized ResNet-50). By contrast, training DNNs produced mixed effects on invariance, typically increasing it for background and lighting but reducing it for object pose and camera viewpoint (Figure 4A, bottom row, leftward shift relative to black cross for left two panels). Moreover, we found that the degree of factorization in models correlated with the degree to which they predicted neural activity for single-unit IT data (Figure 4A, top row), which can be seen as correlative evidence that neural representations in IT exhibit factorization of all scene variables tested. Interestingly, we saw a different pattern for representational invariance to a scene parameter. Invariance showed mixed correlations with neural predictivity (Figure 4A, bottom row), suggesting that IT neural representations build invariance to some scene information (background and lighting) but not to others (object pose and observer viewpoint). Similar effects were observed when we assessed correlations between these metrics and fits to human behavioral data rather than macaque neural data (Figure 4B).
To assess the robustness of these findings to the choice of images and brain regions used in an experiment, we conducted the same analyses across a large and diverse set of previously collected neural and behavioral datasets, from different primate species and visual regions (6 macaque datasets13,23,24: two V4, two IT, and two behavior; 6 human datasets24–26: two V4, two HVC, and two behavior; Table S1). Consistently, increased factorization of scene parameters in model representations correlated with models being more predictive of neural spiking responses, voxel BOLD signal, and behavioral responses to images (Figure 5A, black bars; see Figure S2 for scatter plots across all datasets). Although invariance to appearance factors (background identity and scene lighting) correlated with more brainlike models, invariance to spatial transforms (object pose and camera viewpoint) consistently did not (zero or negative correlation values; Figure 5C, red and green open circles). Our results were preserved when we re-ran the analyses using only the subset of models with the identical ResNet-50 architecture (Figure S3) or when we evaluated model predictivity using representational dissimilarity matrices (RDMs) of the population instead of linear regression (encoding) fits of individual neurons or voxels (Figure S4). Furthermore, the main finding of a positive correlation between factorization and neural predictivity was robust to the particular choice of PCA threshold used to quantify factorization (Figure S5). We found similar results using a covariance based method for computing factorization that has no free parameters (Figure 5C, faded filled circles; see Equation 4 in Methods).
Finally, we tested whether our results generalized beyond the particular image set used to compute the model factorization scores in the first place. Here, instead of relying on our synthetically generated images, where each scene parameter was directly controlled, we re-computed factorization from two types of relatively unconstrained natural movies: one in which the observer moves through an urban environment (approximating camera viewpoint changes)27 and another in which objects move in front of a fairly stationary observer (approximating object pose changes)28. As with factorization measured using augmentations of synthetic images, factorization of frame-by-frame variance (local in time, presumably dominated by either object or observer motion; see Methods) from other sources of variance across natural movies (non-local in time) was correlated with improved neural predictivity in both macaque and human data, while invariance to local frame-by-frame differences was not (Figure 5B; black versus gray bars). Thus, we have shown that a main finding – the importance of object pose and camera viewpoint factorization for achieving brainlike representations – holds across types of brain signal (spiking vs. BOLD), species (monkey vs. human), cortical brain areas (V4 vs. IT), images for testing in experiments (synthetic, grayscale vs. natural, color), and image sets for computing the metric (synthetic images vs. natural movies).
Our analysis of DNN models provides strong evidence that greater factorization of a variety of scene variables is consistently associated with a stronger match to neural and behavioral data. Prior work had identified a similar correlation between object classification performance (measured by fitting a decoder for object class using model representations) and fidelity to neural data3. A priori, it is possible that the correlations we have demonstrated between scene parameter factorization and neural fits could be entirely captured by the known correlation between classification performance and neural fits2,3, as factorization and classification may themselves be correlated. However, we found that factorization scores significantly boosted the cross-validated predictive power for neural/behavioral fit performance compared to using object classification alone, and factorization boosted predictive power as much, if not slightly more, when using RDMs instead of linear regression fits to quantify the match to the brain/behavior (Figure 6). Thus, considering factorization in addition to object classification performance improves upon our prior understanding of the properties of more brainlike models (Figure 7).
Discussion
Object classification, which has been proposed as a normative principle for the function of the ventral visual stream, can be supported by qualitatively different representational geometries3,29. These include representations that are completely invariant to non-class information30,31 and representations that retain a high-dimensional but factorized encoding of non-class information, which disentangles the representation of multiple variables (Figure 1A). Here, we presented evidence that factorization of non-class information is an important strategy used, alongside invariance, by the high-level visual cortex (Figure 2) and by DNNs that are predictive of primate neural and behavioral data (Figures 4 & 5).
Prior work has indicated that building representations that support object classification performance and representations that preserve high-dimensional information about natural images are both important principles of the primate visual system1,32 (though see Conwell et al.33). Critically, our results cannot be accounted for by classification performance or dimensionality alone (Figure 6, gray and pink bars); that is, the relationship between factorization and matches to neural data was not entirely mediated by classification or dimensionality. That said, we do not regard factorization and dimensionality, or factorization and object classification performance, as mutually exclusive hypotheses for useful principles of visual representations. Indeed, high-dimensional representations could be regarded as a means to facilitate factorization, and likewise factorized representations can better support classification (Figure 1C).
Our notion of factorization is related to, but distinct from, several other concepts in the literature. Many prior studies in machine learning have considered the notion of disentanglement, often defined as the problem of inferring independent factors responsible for generating the observed data34–36. One prior study notably found that machine learning models designed to infer disentangled representations of visual data displayed single-unit responses that resembled those of individual neurons in macaque IT37. Our definition of factorization is more flexible, requiring only that independent factors be encoded in orthogonal subspaces, rather than by distinct individual neurons. Moreover, our definition applies to generative factors, such as camera viewpoint or object pose, that are multidimensional and context dependent. Factorization is also related to a measure of “abstraction” in representational geometry introduced in a recent line of work38,39, which is observed to emerge in trained neural networks12,40. In these studies, an abstract representation is defined as one in which variables are encoded and can be decoded in a consistent fashion regardless of the values of other variables. A fully factorized representation should be highly abstract according to this definition, though factorization emphasizes the geometric properties of the population representation while these studies emphasize the consequences for decoding performance in training downstream linear read-outs. Relatedly, another recent study found that orthogonal encoding of class and non-class information is one of several factors that determines few-shot classification performance41. Our work can be seen as complementary to work on representational straightening of natural movie trajectories in the population space42. This work suggested that visual representations maintain a locally linear code of latent variables like camera viewpoint, while our work focused on the global arrangement of the linear subspaces affected by different variables (e.g., overall coding of camera viewpoint driven variance versus sources of variance from other scene variables in a movie). Local straightening of natural movies was found to be important for early visual cortex neural responses but not necessarily for high-level visual cortex43 – where the present work suggests factorization may play a role.
Our work has several limitations. First, our analysis is primarily correlative. Going forward, we suggest that factorization could prove to be a useful objective function for optimizing neural network models that better resemble primate visual systems, or that factorization of latent variables should at least be a byproduct of other objectives that lead to more brain-like models. An important direction for future work is finding ways to directly incentivize factorization in model objective functions so as to test its causal impact on the fidelity of learned representations to neural data. Second, our choice of scene variables to analyze in this study was heuristic and somewhat arbitrary. Future work could consider unsupervised methods (in the vein of independent components analysis) for uncovering the latent sources of variance that generate visual data, and assess to what extent these latent factors are encoded in factorized form. Third, our work does not specify the details of how a particular scene parameter is encoded within its factorized subspace, including whether the code is linear (“straightened”) or nonlinear42,44. Neural codes could adopt different strategies resulting in similar factorization scores at the population level, each with some support in the visual cortex literature: (1) each neuron encodes a single latent variable45,46; (2) separate brain subregions encode qualitatively different latent variables, but using distributed representations within each region47–49; (3) each neuron encodes multiple variables in a distributed population code, such that the factorization of different variables is only apparent as independent directions when assessed in high-dimensional population activity space45,50. Future work can disambiguate among these possibilities by systematically examining ventral visual stream subregions9,49,51 and the single-neuron tuning curves within them52,53.
Methods
Monkey datasets
The macaque monkey datasets comprised single-unit neural recordings23, multi-unit neural recordings13, and object recognition behavior24. Single-unit spiking responses to natural images were measured in V4 and anterior ventral IT23. An advantage of this dataset is that it contains well-isolated single neurons, the gold standard in electrophysiology. Furthermore, the IT recordings were obtained from penetrating electrodes targeting the anterior ventral portion of IT near the base of the skull, reflecting the highest level of the IT hierarchy. The multi-unit dataset, by contrast, was obtained from across IT, with a bias toward locations where multi-unit arrays are more easily placed, such as CIT and PIT13, complementing the recording locations of the single-unit dataset. An advantage of the multi-unit dataset, which used chronic recording arrays, is that an order of magnitude more images were tested per recording site (see dataset comparisons in Table S1). Finally, the monkey behavioral dataset came from a third study examining the image-by-image object classification performance of macaques and humans24.
Human datasets
Three human datasets were used: two fMRI datasets and one object recognition behavior dataset4,24,25. The fMRI datasets used different images (color versus grayscale) but otherwise used fairly similar numbers of images and voxel resolutions in MR imaging. Human fMRI studies have found that different DNN layers tend to map to V4 and HVC human fMRI voxels4. The human behavioral dataset measured image-by-image classification performance and was collected in the same study as the monkey behavioral signatures24.
Computational models
In recent years, a variety of approaches to training DNN vision models have been developed that learn representations usable for downstream classification (and other) tasks. Models differ in a variety of implementational choices, including their architecture, objective function, and training dataset. In the models we sampled, objectives included supervised learning of object classification (AlexNet, ResNet), self-supervised contrastive learning (MoCo, SimCLR), and other unsupervised learning algorithms based on auxiliary tasks (e.g., reconstruction or colorization). A majority of the models that we considered relied on the widely used, performant ResNet-50 architecture, though some in our library utilized different architectures. The randomly initialized network control utilized ResNet-50 (see Figure 4A,B). The set of models we used is listed in Table S2.
Simulation of factorized versus non-factorized representational geometries
For the simulation in Figure 1C, we generated data in the following way. First, we randomly sampled the values of N=10 binary features. Feature values corresponded to positions in an N-dimensional vector space as follows: each feature was assigned an axis in N-dimensional space, and the value of each feature (+1 or –1) was treated as a coefficient indicating the position along that axis. All but two of the feature axes were orthogonal to the rest. The last two features, which served as targets for the trained linear decoders, were assigned axes whose alignment ranged from 0 (orthogonal) to 1 (identical). In the noiseless case, the factorization of these two variables with respect to one another is 1 − cos²θ, where θ is the angle between their axes. We added Gaussian noise to the positions of each data point and randomly sampled K positive and negative examples for each variable of interest to use as training data for the linear classifier (a support vector machine).
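The following is a minimal Python sketch of this simulation; the noise level, sample sizes, and the use of scikit-learn's LinearSVC are illustrative assumptions rather than the exact settings behind Figure 1C.

```python
# Sketch of the Figure 1C simulation (parameter values are assumptions).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
N = 10                     # number of binary features
NOISE_SD = 0.5             # Gaussian noise scale (assumed)
N_TRAIN, N_TEST = 20, 500  # decoder training / evaluation samples (assumed)

def make_axes(alignment):
    """Orthonormal feature axes, except the last two have cosine = alignment."""
    axes = np.eye(N)
    axes[9] = alignment * axes[8] + np.sqrt(1 - alignment**2) * np.eye(N)[9]
    return axes

def sample(axes, n):
    """Each point: sum of (+/-1) feature values times their axes, plus noise."""
    feats = rng.choice([-1.0, 1.0], size=(n, N))
    return feats @ axes + NOISE_SD * rng.normal(size=(n, N)), feats

for alignment in [0.0, 0.5, 0.9]:
    axes = make_axes(alignment)
    Xtr, ftr = sample(axes, N_TRAIN)
    Xte, fte = sample(axes, N_TEST)
    accs = [LinearSVC().fit(Xtr, ftr[:, t] > 0).score(Xte, fte[:, t] > 0)
            for t in (8, 9)]           # the two decoded target variables
    fac = 1 - alignment**2             # noiseless factorization, per the text
    print(f"factorization={fac:.2f}  mean decoding accuracy={np.mean(accs):.2f}")
```

As the alignment of the two target axes increases (factorization decreases), generalization of the linear decoders from few training samples degrades.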
Macaque neural data analyses
For the shuffle control used as a null model for factorization, we shuffled the object identity labels of the images (Figure S1). For the transformation used in Figure 2B, we computed the principal components of the mean neural activity responses to each object class (“class centers,” $x_c$), referred to as the inter-class PCs. We also computed the principal components of the data with corresponding class centers subtracted (i.e., $x - x_c$) from each activity pattern, referred to as the intra-class PCs. We transformed the data by applying to the class centers a change of basis matrix $W_{\mathrm{inter}\to\mathrm{intra}}$ that rotated each inter-class PC into the corresponding intra-class PC. That is, the class centers were transformed by this matrix, but the relative positions of activity patterns within a given class were fixed. For an activation vector $x$ belonging to a class $c$ for which the average activity vector over all images of class $c$ is $x_c$, the transformed vector was:

$$x_{\mathrm{transformed}} = W_{\mathrm{inter}\to\mathrm{intra}}\, x_c + (x - x_c) \tag{1}$$
This transformation has the effect of preserving intra-class variance statistics exactly from the original data and preserving everything about the statistics of inter-class variance except its orientation relative to intra-class variance. That is, the transformation is designed to affect (specifically decrease) factorization while controlling for all other statistics of the activity data that may be relevant to object classification performance (considering the simulation in Figure 1C of two binary variables, this basis change of the neural data in Figure 2B is equivalent to turning a square into the maximally flat parallelogram, the degenerate one where all the points are collinear).
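A minimal sketch of this transformation (Equation 1), assuming activity data X of shape (n_samples, n_units) with integer class labels; the number of rotated components k is a hypothetical choice, and components of the class centers outside the top-k inter-class subspace are dropped in this simplification.

```python
# Sketch of the factorization-reducing basis rotation (Equation 1).
import numpy as np
from sklearn.decomposition import PCA

def reduce_factorization(X, labels, k=10):
    """Rotate class centers into the intra-class subspace; keep within-class
    structure fixed. k is a hypothetical number of PC pairs to rotate."""
    classes = np.unique(labels)
    centers = np.stack([X[labels == c].mean(axis=0) for c in classes])
    class_means = centers[np.searchsorted(classes, labels)]   # x_c per sample
    residuals = X - class_means                               # x - x_c

    inter = PCA(n_components=k).fit(centers).components_      # inter-class PCs
    intra = PCA(n_components=k).fit(residuals).components_    # intra-class PCs

    # Change of basis mapping inter-class PC i onto intra-class PC i.
    W = intra.T @ inter
    return class_means @ W.T + residuals                      # Equation 1
```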
Scene parameter variation
Our generated scenes consisted of foreground objects imposed upon natural backgrounds. To measure variance associated with a particular parameter like background identity, we randomly sampled ten different backgrounds while holding the other variables (e.g., foreground object identity and pose) constant. To measure variance associated with foreground object pose, we randomly varied object angle within [-90°, 90°] along all three axes independently, object position along the two in-plane axes (horizontal [-30%, 30%] and vertical [-60%, 60%]), and object size ([×1/1.6, ×1.6]). To measure variance associated with camera position, we took crops of the image with scale uniformly varying from 20% to 100% of the image size and position uniformly distributed across the image. To measure variance associated with lighting conditions, we applied random jitters to the brightness, contrast, saturation, and hue of an image, with jitter value bounds of [-0.4, 0.4] for brightness, contrast, and saturation, and [-0.1, 0.1] for hue. These parameter choices follow standard data augmentation practices for self-supervised neural network training, as used, for example, in the SimCLR and MoCo models tested here17,18.
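Because these manipulations follow standard self-supervised augmentation practice, they can be approximated with torchvision transforms; the output resolution and the translation of the 20-100% crop scale into torchvision's area-based scale argument below are assumptions.

```python
# Approximate lighting and camera-viewpoint augmentations via torchvision,
# using the jitter bounds quoted in the text.
from torchvision import transforms

lighting = transforms.ColorJitter(
    brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1)

camera = transforms.RandomResizedCrop(
    size=224,            # output resolution (assumed)
    scale=(0.2, 1.0))    # crop scale; note torchvision interprets this as
                         # a fraction of image *area*, an approximation here

# e.g., ten lighting variants of one base image, all else held constant:
# lighting_set = [lighting(base_image) for _ in range(10)]
```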
Factorization and invariance metrics
Factorization and invariance were measured according to the following equations:

$$\mathrm{factorization}_{\mathrm{PCA}} = 1 - \frac{\mathrm{var}_{\mathrm{param}\,|\,\mathrm{other\_param\_subspace}}}{\mathrm{var}_{\mathrm{param}}} \tag{2}$$

$$\mathrm{invariance} = 1 - \frac{\mathrm{var}_{\mathrm{param}}}{\mathrm{var}_{\mathrm{all\_params}}} \tag{3}$$
Variance induced by a parameter (var_param) is computed by measuring the variance (summed across all dimensions of neural activity space) of neural responses to the 10 augmented versions of a base image, where the augmentations are those obtained by varying the parameter of interest. This quantity is then averaged across the 100 base images. The variance induced by all parameters (var_all_params) is simply the sum of the variances across all images and augmentations. To define the “other-parameter subspace,” we averaged neural responses for a given base image over all augmentations of the parameter of interest and ran PCA on the resulting set of averaged responses. The subspace was defined as the space spanned by the top PCA components containing 90% of the variance of these responses. Intuitively, this space captures the bulk of the variance driven by all parameters other than the parameter of interest (due to the averaging step). The variance of the parameter of interest within this “other-parameter subspace,” var_param|other_param_subspace, was computed in the same way as var_param but using the projections of neural activity responses onto the other-parameter subspace. In the main text, we refer to this method of computing factorization as PCA based factorization.
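A minimal sketch of the PCA based computation (Equation 2), assuming responses are arranged as a (base image × augmentation × unit) array in which the augmentations of each base image vary only the parameter of interest.

```python
# Sketch of PCA based factorization (Equation 2).
import numpy as np
from sklearn.decomposition import PCA

def pca_factorization(resp, var_threshold=0.90):
    """resp: (n_base, n_aug, n_units) responses; augmentations vary only the
    parameter of interest."""
    # var_param: variance across augmentations, summed over units,
    # averaged over base images.
    var_param = resp.var(axis=1).sum(axis=-1).mean()

    # Other-parameter subspace: average out the parameter of interest per
    # base image, then keep the top PCs explaining 90% of the variance.
    averaged = resp.mean(axis=1)                        # (n_base, n_units)
    basis = PCA(n_components=var_threshold).fit(averaged).components_

    # var_param restricted to that subspace.
    centered = resp - resp.mean(axis=1, keepdims=True)
    var_in_sub = (centered @ basis.T).var(axis=1).sum(axis=-1).mean()

    return 1.0 - var_in_sub / var_param
```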
We also considered an alternative definition of factorization, referred to as covariance based factorization. In this alternative definition, we measured the covariance matrices cov_param and cov_other_param induced by varying (in the same fashion as above) the parameter of interest and all other parameters, respectively. Factorization was measured by the following equation:

$$\mathrm{factorization}_{\mathrm{cov}} = 1 - \frac{\langle \mathrm{cov}_{\mathrm{param}},\, \mathrm{cov}_{\mathrm{other\_param}} \rangle}{\lVert \mathrm{cov}_{\mathrm{param}} \rVert\, \lVert \mathrm{cov}_{\mathrm{other\_param}} \rVert} \tag{4}$$
This is equal to 1 minus the dot product between the normalized, flattened covariance matrices; covariance based factorization is thus a measure of the discrepancy between the covariance structure induced by the parameter of interest and that induced by the other parameters. The main findings were unaffected by our choice of method for computing the factorization metric, whether PCA or covariance based (Figures 5-7). An advantage of the PCA based method is that it recovers, as an intermediate step, the linear subspaces containing the parameter-driven variance, but in so doing it requires an arbitrary choice of the explained variance threshold used to select the number of PCs. By contrast, the covariance based method is more straightforward to compute and has no free parameters. Thus, these two metrics are complementary, and somewhat analogous in methodology to two metrics commonly used for measuring dimensionality (the number of components needed to explain a certain fraction of the variance, analogous to our PCA based definition, and the participation ratio, analogous to our covariance based definition)54,55.
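In code, the covariance based metric (Equation 4) reduces to a cosine similarity over flattened covariance matrices; this sketch assumes response matrices collected under variation of the parameter of interest and of all other parameters, respectively.

```python
# Sketch of covariance based factorization (Equation 4).
import numpy as np

def cov_factorization(resp_param, resp_other):
    """resp_*: (n_images, n_units) responses under variation of the parameter
    of interest and of all other parameters, respectively."""
    c1 = np.cov(resp_param, rowvar=False).ravel()
    c2 = np.cov(resp_other, rowvar=False).ravel()
    # 1 minus cosine similarity of the flattened covariance matrices.
    return 1.0 - (c1 @ c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))
```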
Natural movie factorization metrics
For natural movies, variance is not induced by explicit control of a parameter as in our synthetic scenes, but implicitly, by treating contiguous frames (separated by 200 ms in real time) as reflecting changes in one of two motion parameters (object versus observer motion), depending on how stationary the observer is (MIT Moments in Time movie set: stationary observer; UT-Austin Egocentric movie set: nonstationary observer)27,28. Here, the all-parameters condition is simply the variance across all movie frames. In the MIT Moments in Time dataset, this includes variance across thousands of video clips taken in many different settings; in the UT-Austin Egocentric dataset, it includes variance across only 4 movies, but ones spanning long durations (3-5 hours) during which the observer translates extensively through an environment.
Thus, movie clips in the MIT Moments in Time movie set contained new scenes with different object identities, backgrounds, and lighting conditions, effectively capturing variance induced by these non-spatial parameters28. In the UT-Austin Egocentric movie set, new objects and backgrounds were encountered as the subject navigated the urban landscape27.
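A sketch of the movie-based variant, assuming model features for frames sampled 200 ms apart; treating overlapping adjacent frame pairs as the "local" condition is an illustrative assumption.

```python
# Sketch of local (frame-to-frame) versus total variance in movie features.
import numpy as np

def movie_variances(feats):
    """feats: (n_frames, n_units) model responses to consecutive frames."""
    pairs = np.stack([feats[:-1], feats[1:]], axis=1)  # (n_pairs, 2, n_units)
    var_local = pairs.var(axis=1).sum(axis=-1).mean()  # local, motion-driven
    var_all = feats.var(axis=0).sum()                  # all-parameters condition
    return var_local, var_all  # e.g., invariance = 1 - var_local / var_all
```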
Model neural encoding fits
Linear mappings between model features and neuron (or voxel) responses were computed using ridge regression (with regularization coefficient selected by cross validation) on a low-dimensional linear projection of model features (top 300 PCA components computed using images in each dataset). We also tested an alternative approach to measuring representational similarity between models and experimental data based on representational similarity analysis (RSA)56, computing dot product similarities of the representations of all pairs of images and measuring the Spearman correlation coefficient between these pairwise similarity matrices obtained from a given model and neural dataset, respectively.
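A sketch of both procedures, assuming at least 300 images; the ridge penalty grid and the 5-fold cross-validation scheme are illustrative assumptions.

```python
# Sketch of the encoding-fit and RSA comparisons between model and brain.
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def encoding_fit(model_feats, neuron_resp):
    """model_feats: (n_images, n_feats); neuron_resp: (n_images,) one unit."""
    X = PCA(n_components=300).fit_transform(model_feats)  # top 300 PCs
    ridge = RidgeCV(alphas=np.logspace(-3, 3, 13))        # CV-selected penalty
    return cross_val_score(ridge, X, neuron_resp, cv=5, scoring="r2").mean()

def rsa_score(model_feats, neural_resp):
    """Spearman correlation between dot-product similarity matrices."""
    iu = np.triu_indices(model_feats.shape[0], k=1)       # unique image pairs
    return spearmanr((model_feats @ model_feats.T)[iu],
                     (neural_resp @ neural_resp.T)[iu]).correlation
```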
Model behavioral signatures
We followed the approach of Rajalingham, Issa et al.24. We took human and macaque behavioral data from the object classification task and used it to create signatures of image-level difficulty (the “I1” vector) and image-by-distractor-object confusion rates (the “I2” matrix). We did the same for the DNN models, extracting model “behavior” by training logistic regression classifiers to classify object identity in the same image dataset used in the experiments of Rajalingham, Issa et al.24, using model layer activations as inputs. Model behavioral accuracy rates on image-by-distractor-object pairs were assessed using the classification probabilities output by the logistic regression model, and these were used to compute I1 and I2 metrics as was done for the true behavioral data. Behavioral similarity between models and data was assessed by measuring the correlation between the entries of the I1 vectors and I2 matrices (both I1 and I2 results are reported).
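A simplified sketch of the model side of this analysis; using each image's correct-class probability as the I1 proxy omits the d-prime normalization of Rajalingham, Issa et al.24, and the function name is hypothetical.

```python
# Sketch of extracting a model's image-level behavioral signature (I1 proxy).
import numpy as np
from sklearn.linear_model import LogisticRegression
from scipy.stats import spearmanr

def model_i1(feats_train, y_train, feats_test, y_test):
    """Per-image correct-class probability from a logistic-regression readout
    on model features (simplified stand-in for the I1 vector)."""
    clf = LogisticRegression(max_iter=1000).fit(feats_train, y_train)
    probs = clf.predict_proba(feats_test)
    cols = np.searchsorted(clf.classes_, y_test)   # column of the true class
    return probs[np.arange(len(y_test)), cols]

# Behavioral similarity: spearmanr(model_i1(...), primate_i1).correlation
```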
Model layer choices
The scatter plots in Figure 4A,B and Figure S2 use metrics (factorization, invariance, and goodness of neural fit) taken from the final representational layer of the network (the layer prior to the logits layer used for classification in supervised networks, prior to the embedding head in contrastive learning models, or prior to any auxiliary task-specific layers in unsupervised models trained using auxiliary tasks). However, representational geometries of model activations, and their match to neural activity and behavior, vary across layers. This variability arises because different model layers correspond to different stages of processing in the model (convolutional layers in some cases, pooling operations in others) and may even have different dimensionalities. To ensure that our results do not depend on idiosyncrasies of representations in one particular model layer and the particular network operations that precede it, summary correlation statistics in all other figures (Figures 5-7 & S3-S5) show the results of the analysis in question averaged over the five final representational layers of the model. That is, the metrics of interest (factorization, invariance, neural encoding fits, RDM correlation, behavioral similarity scores) were computed independently for each of the five final representational layers of each model, and these five values were averaged prior to computing correlations between different metrics.
Correlation of model predictions and experimental data
A Spearman rank correlation coefficient was calculated for each combination of model layer and biological dataset (6 monkey datasets and 6 human datasets). Here, we do not correct for noise in the biological data when computing the correlation coefficient, as this would require trial repeats (for computing intertrial variability) that were limited or not available in the fMRI data used. In any event, normalizing by the data noise ceiling applies a uniform scaling to all model prediction scores and does not affect model comparison, which depends only on ranking models as relatively better or worse at predicting brain data.
Finally, we estimated the effectiveness of model factorization, invariance, or dimensionality, in combination with model object classification performance, for predicting a model’s neural and behavioral fit. We performed a linear regression of the fit on the particular dual metric combination (e.g., classification plus object pose factorization) and report the Spearman correlation between the linearly weighted metric combination and the fit. The correlation was assessed on held-out models (80% of models used for training, 20% for testing), and the results were averaged over 100 randomly sampled train/test splits.
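A sketch of this dual-metric analysis, assuming one value of each metric per model; the split sizes and repeat count follow the text.

```python
# Sketch of predicting neural fit from classification plus one factorization
# metric, scored on held-out models.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

def dual_metric_score(classification, factorization, neural_fit, n_splits=100):
    X = np.column_stack([classification, factorization])  # one row per model
    scores = []
    for seed in range(n_splits):
        Xtr, Xte, ytr, yte = train_test_split(
            X, neural_fit, test_size=0.2, random_state=seed)  # 80/20 split
        pred = LinearRegression().fit(Xtr, ytr).predict(Xte)
        scores.append(spearmanr(pred, yte).correlation)
    return float(np.mean(scores))
```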
Acknowledgements
This work was performed on the Columbia Zuckerman Institute Axon GPU cluster and via generous access to Cloud TPUs from Google’s TPU Research Cloud (TRC). JWL was supported by the DOE CSGF (DE–SC0020347). EBI was supported by a Klingenstein-Simons fellowship, Sloan Foundation fellowship, and Grossman-Kavli Scholar Award. We thank Erica Shook for comments on a previous version of the manuscript. The authors declare no competing interests.
Supplement
References
- 1. Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition. PLOS Comput. Biol. 10
- 2. Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like? bioRxiv 407007. https://doi.org/10.1101/407007
- 3. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proc. Natl. Acad. Sci. 201403112. https://doi.org/10.1073/pnas.1403112111
- 4. Brain hierarchy score: Which deep neural networks are hierarchically brain-like? iScience 24
- 5. Linsley, D., et al. Performance-optimized deep neural networks are evolving into worse models of inferotemporal visual cortex. Preprint (2023). https://doi.org/10.48550/arXiv.2306.03779
- 6. Untangling invariant object recognition. Trends Cogn. Sci. 11:333–341
- 7. Functional Compartmentalization and Viewpoint Generalization Within the Macaque Face-Processing System. Science 330:845–851
- 8. Explicit information for category-orthogonal object properties increases along the ventral stream. Nat. Neurosci. 19:613–622
- 9. The ventral visual pathway: an expanded neural framework for the processing of object quality. Trends Cogn. Sci. 17:26–49
- 10. Capturing the objects of vision with neural networks. Nat. Hum. Behav. 5:1127–1144
- 11. Classification and Geometry of General Perceptual Manifolds. arXiv:1710.06487 [cond-mat, q-bio, stat]
- 12. Abstract representations emerge naturally in neural networks trained to perform multiple tasks. Nat. Commun. 14
- 13. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance. J. Neurosci. 35:13402–13418
- 14. Selectivity and Tolerance (“Invariance”) Both Increase as Visual Information Propagates from Cortical Area V4 to IT. J. Neurosci. 30:12978–12995
- 15. ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems 25:1097–1105
- 16. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs]
- 17. Momentum Contrast for Unsupervised Visual Representation Learning. arXiv:1911.05722 [cs]
- 18. A Simple Framework for Contrastive Learning of Visual Representations. arXiv:2002.05709 [cs, stat]
- 19. Contrastive Multiview Coding. arXiv:1906.05849 [cs]
- 20. Unsupervised Visual Representation Learning by Context Prediction. arXiv:1505.05192 [cs]
- 21. Mask R-CNN. Proceedings of the IEEE International Conference on Computer Vision: 2961–2969
- 22. Large Scale Adversarial Representation Learning. Advances in Neural Information Processing Systems 32
- 23. Balanced Increases in Selectivity and Tolerance Produce Constant Sparseness along the Ventral Visual Stream. J. Neurosci. 32:10170–10182
- 24. Large-Scale, High-Resolution Comparison of the Core Visual Object Recognition Behavior of Humans, Monkeys, and State-of-the-Art Deep Artificial Neural Networks. J. Neurosci. 38:7255–7269
- 25. Identifying natural images from human brain activity. Nature 452:352–355
- 26. Deep image reconstruction from human brain activity. PLOS Comput. Biol. 15
- 27. Discovering important people and objects for egocentric video summarization. IEEE Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/CVPR.2012.6247820
- 28. Monfort, M., et al. Moments in Time Dataset: one million videos for event understanding. Preprint (2019). https://doi.org/10.48550/arXiv.1801.03150
- 29. Goal-Driven Recurrent Neural Network Models of the Ventral Visual Stream. bioRxiv. https://doi.org/10.1101/2021.02.17.431717
- 30. Deep Clustering for Unsupervised Learning of Visual Features. arXiv:1807.05520 [cs]
- 31. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments. arXiv:2006.09882 [cs]
- 32. High-performing neural network models of visual cortex benefit from high latent dimensionality. bioRxiv. https://doi.org/10.1101/2022.07.13.499969
- 33. What can 1.8 billion regressions tell us about the pressures shaping high-level visual representation in brains and machines? bioRxiv. https://doi.org/10.1101/2022.03.28.485868
- 34. Disentangling by Factorising. Proceedings of the 35th International Conference on Machine Learning: 2649–2658
- 35. A Framework for the Quantitative Evaluation of Disentangled Representations. International Conference on Learning Representations
- 36. Higgins, I., et al. Towards a Definition of Disentangled Representations. Preprint (2018). https://doi.org/10.48550/arXiv.1812.02230
- 37. Unsupervised deep learning identifies semantic disentanglement in single inferotemporal neurons. arXiv:2006.14304 [q-bio]
- 38. The Geometry of Abstraction in the Hippocampus and Prefrontal Cortex. Cell 183:954–967
- 39. Tuned geometries of hippocampal representations meet the computational demands of social memory. Neuron. https://doi.org/10.1016/j.neuron.2024.01.021
- 40. Alleman, M., Lindsey, J. W. & Fusi, S. Task structure and nonlinearity jointly determine learned representational geometry. Preprint (2024). https://doi.org/10.48550/arXiv.2401.13558
- 41. Neural representational geometry underlies few-shot concept learning. Proc. Natl. Acad. Sci. 119
- 42. Primary visual cortex straightens natural video trajectories. Nat. Commun. 12
- 43. Brain-like representational straightening of natural movies in robust feedforward neural networks. The Eleventh International Conference on Learning Representations
- 44. Perceptual straightening of natural videos. Nat. Neurosci. 22
- 45. What Is the Goal of Sensory Coding? Neural Comput. 6:559–601
- 46. The Code for Facial Identity in the Primate Brain. Cell 169:1013–1028
- 47. A Cortical Region Consisting Entirely of Face-Selective Cells. Science 311:670–674
- 48. Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex. Nat. Neurosci. 16:1870–1878
- 49. A Channel for 3D Environmental Shape in Anterior Inferotemporal Cortex. Neuron 84:55–62
- 50. The importance of mixed selectivity in complex cognitive tasks. Nature 497:585–590
- 51. A new neural framework for visuospatial processing. Nat. Rev. Neurosci. 12:217–230
- 52. Norm-based face encoding by single neurons in the monkey inferotemporal cortex. Nature 442:572–575
- 53. A face feature space in the macaque temporal lobe. Nat. Neurosci. 12:1187–1196
- 54. Abbott, L. F., Rajan, K. & Sompolinsky, H. Interactions between Intrinsic and Stimulus-Evoked Activity in Recurrent Neural Networks. Preprint (2010). https://doi.org/10.48550/arXiv.0912.3832
- 55. Optimal Degrees of Synaptic Connectivity. Neuron 93:1153–1164
- 56. Representational geometry: integrating cognition, computation, and the brain. Trends Cogn. Sci. 17:401–412
Supplementary references
- 1. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance. J. Neurosci. 35:13402–13418
- 2. Balanced Increases in Selectivity and Tolerance Produce Constant Sparseness along the Ventral Visual Stream. J. Neurosci. 32:10170–10182
- 3. Large-Scale, High-Resolution Comparison of the Core Visual Object Recognition Behavior of Humans, Monkeys, and State-of-the-Art Deep Artificial Neural Networks. J. Neurosci. 38:7255–7269
- 4. Identifying natural images from human brain activity. Nature 452:352–355
- 5. Deep image reconstruction from human brain activity. PLOS Comput. Biol. 15
- 6. A Simple Framework for Contrastive Learning of Visual Representations. arXiv:2002.05709 [cs, stat]
- 7. Momentum Contrast for Unsupervised Visual Representation Learning. arXiv:1911.05722 [cs]
- 8. Improved Baselines with Momentum Contrastive Learning. arXiv:2003.04297 [cs]
- 9. Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination. arXiv:1805.01978 [cs]
- 10. What makes for good views for contrastive learning. arXiv:2005.10243 [cs]
- 11. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments. arXiv:2006.09882 [cs]
- 12. Deep Clustering for Unsupervised Learning of Visual Features. arXiv:1807.05520 [cs]
- 13. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning. arXiv:2006.07733 [cs, stat]
- 14. Gidaris, S., Bursuc, A., Komodakis, N., Pérez, P. & Cord, M. Boosting Few-Shot Visual Learning with Self-Supervision. Preprint (2019). https://doi.org/10.48550/arXiv.1906.05186
- 15. Zhang, R., Isola, P. & Efros, A. A. Colorful Image Colorization. Preprint (2016). https://doi.org/10.48550/arXiv.1603.08511
- 16. Noroozi, M. & Favaro, P. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles. Preprint (2017). https://doi.org/10.48550/arXiv.1603.09246
- 17. Large Scale Adversarial Representation Learning. Advances in Neural Information Processing Systems 32
- 18. Deep Residual Learning for Image Recognition. arXiv:1512.03385 [cs]
- 19. Zagoruyko, S. & Komodakis, N. Wide Residual Networks. Preprint (2017). https://doi.org/10.48550/arXiv.1605.07146
- 20. ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems 25:1097–1105
- 21. Going Deeper with Convolutions. arXiv:1409.4842 [cs]
- 22. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. & Wojna, Z. Rethinking the Inception Architecture for Computer Vision. Preprint (2015). https://doi.org/10.48550/arXiv.1512.00567
- 23. Densely Connected Convolutional Networks. https://doi.org/10.48550/arXiv.1608.06993
- 24. Simonyan, K. & Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. Preprint (2015). https://doi.org/10.48550/arXiv.1409.1556
- 25. Howard, A. G., et al. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. Preprint (2017). https://doi.org/10.48550/arXiv.1704.04861
- 26. Iandola, F. N., et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. Preprint (2016). https://doi.org/10.48550/arXiv.1602.07360
- 27. Aggregated Residual Transformations for Deep Neural Networks. arXiv:1611.05431 [cs]
- 28. Tan, M., et al. MnasNet: Platform-Aware Neural Architecture Search for Mobile. Preprint (2019). https://doi.org/10.48550/arXiv.1807.11626
- 29. Zhang, X., Zhou, X., Lin, M. & Sun, J. ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. Preprint (2017). https://doi.org/10.48550/arXiv.1707.01083
Copyright
© 2024, Jack W. Lindsey & Elias B. Issa
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.