Factorized visual representations in the primate visual system and deep neural networks
Abstract
Object classification has been proposed as a principal objective of the primate ventral visual stream and has been used as an optimization target for deep neural network models (DNNs) of the visual system. However, visual brain areas represent many different types of information, and optimizing for classification of object identity alone does not constrain how other information may be encoded in visual representations. Information about different scene parameters may be discarded altogether (‘invariance’), represented in non-interfering subspaces of population activity (‘factorization’) or encoded in an entangled fashion. In this work, we provide evidence that factorization is a normative principle of biological visual representations. In the monkey ventral visual hierarchy, we found that factorization of object pose and background information from object identity increased in higher-level regions and strongly contributed to improving object identity decoding performance. We then conducted a large-scale analysis of factorization of individual scene parameters – lighting, background, camera viewpoint, and object pose – in a diverse library of DNN models of the visual system. Models which best matched neural, fMRI, and behavioral data from both monkeys and humans across 12 datasets tended to be those which factorized scene parameters most strongly. Notably, invariance to these parameters was not as consistently associated with matches to neural and behavioral data, suggesting that maintaining non-class information in factorized activity subspaces is often preferred to dropping it altogether. Thus, we propose that factorization of visual scene information is a widely used strategy in brains and DNN models thereof.
eLife assessment
The study makes a valuable empirical contribution to our understanding of visual processing in primates and deep neural networks, with a specific focus on the concept of factorization. The analyses provide convincing evidence that high factorization scores are correlated with neural predictivity. This work will be of interest to systems neuroscientists studying vision and could inspire further research that ultimately may lead to better models of or a better understanding of the brain.
https://doi.org/10.7554/eLife.91685.3.sa0

eLife digest
When looking at a picture, we can quickly identify a recognizable object, such as an apple, applying a single word label to it. Although extensive neuroscience research has focused on how human and monkey brains achieve this recognition, our understanding of how the brain and brain-like computer models interpret other complex aspects of a visual scene – such as object position and environmental context – remains incomplete.
In particular, it was not clear to what extent object recognition comes at the expense of other important scene details. For example, various aspects of the scene might be processed simultaneously. On the other hand, general object recognition may interfere with processing of such details.
To investigate this, Lindsey and Issa analyzed 12 monkey and human brain datasets, as well as numerous computer models, to explore how different aspects of a scene are encoded in neurons and how these aspects are represented by computational models. The analysis revealed that preventing effective separation and retention of information about object pose and environmental context worsened object identification in monkey cortex neurons. In addition, the computer models that were the most brainlike could independently preserve the other scene details without interfering with object identification.
The findings suggest that human and monkey high-level ventral visual processing systems are capable of representing the environment in a more complex way than previously appreciated. In the future, studying more brain activity data could help to identify how rich the encoded information is and how it might support other functions like spatial navigation. This knowledge could help to build computational models that process the information in the same way, potentially improving their understanding of real-world scenes.
Introduction
Artificial deep neural networks (DNNs) are the most predictive models of neural responses to images in the primate high-level visual cortex (Cadieu et al., 2014; Schrimpf et al., 2020). Many studies have reported that DNNs trained to perform image classification produce internal feature representations broadly similar to those in areas V4 and IT of the primate cortex, and that this similarity tends to be greater in models with better classification performance (Yamins et al., 2014). However, it remains unclear which aspects of the representations of these more performant models drive them to better match neural data. Moreover, beyond a certain threshold level of object classification performance, further improvement fails to produce a concomitant improvement in predicting primate neural responses (Schrimpf et al., 2020; Nonaka et al., 2021; Linsley, 2023). This weakening trend motivates finding new normative principles, besides object classification ability, that push models to better match primate visual representations.
One strategy for achieving high object classification performance is to form neural representations that discard some (are tolerant to) or all (are invariant to) information besides object class. Invariance in neural representations is in some sense a zero-sum strategy: building invariance to some parameters improves the ability to decode others, but at the cost of rendering the invariant parameters themselves undecodable. We also note that our use of ‘invariance’ in this context refers to invariance in neural representations, rather than behavioral or perceptual invariance (DiCarlo and Cox, 2007). However, high-level cortical neurons in the primate ventral visual stream are known to simultaneously encode many forms of information about visual input besides object identity, such as object pose (Freiwald and Tsao, 2010; Hong et al., 2016; Kravitz et al., 2013; Peters and Kriegeskorte, 2021). In this work, we seek to characterize how the brain simultaneously represents different forms of information.
In particular, we introduce methods to quantify the relationships between different types of visual information in a population code (e.g., object pose vs. camera viewpoint), and specifically the degree to which different forms of information are ‘factorized’. Intuitively, if the variance driven by one parameter is encoded along orthogonal dimensions of population activity space compared to the variance driven by other scene parameters, we say that this representation is factorized. We note that our definition of factorization is closely related to the existing concept of manifold disentanglement (DiCarlo and Cox, 2007; Chung et al., 2018) and can be seen as a generalization of disentanglement to high-dimensional visual scene parameters like object pose. Factorization can enable simultaneous decoding of many parameters at once, supporting diverse visually guided behaviors (e.g., spatial navigation, object manipulation, or object classification) (Johnston and Fusi, 2023).
Using existing neural datasets, we found that both factorization of and invariance to object category and position information increase across the macaque ventral visual cortical hierarchy. Next, we leveraged the flexibility afforded by in silico models of visual representations to probe different forms of factorization and invariance in more detail, focusing on several scene parameters of interest: background content, lighting conditions, object pose, and camera viewpoint. Across a broad library of DNN models that varied in their architecture and training objectives, we found that factorization of all of the above scene parameters in DNN feature representations was positively correlated with models’ matches to neural and behavioral data. Interestingly, while neural invariance to some scene parameters (background scene and lighting conditions) predicted neural fits, invariance to others (object pose and camera viewpoint) did not. Our results generalized across both monkey and human datasets using different measures (neural spiking, fMRI, and behavior; 12 datasets total) and could not be accounted for by models’ classification performance. Thus, we suggest that factorized encoding of multiple behaviorally relevant scene variables is an important consideration, alongside other desiderata such as classification performance, in building more brainlike models of vision.
Results
Disentangling object identity manifolds in neural population responses can be achieved by qualitatively different strategies. These include building invariance of responses to non-identity scene parameters (or, more realistically, partial invariance; DiCarlo and Cox, 2007) and/or factorizing non-identity-driven response variance into isolated (factorized) subspaces (Figure 1A, left vs. center panels, cylindrical/spherical-shaded regions represent object manifolds). Both strategies maintain an ‘identity subspace’ in which object manifolds are linearly separable. In a non-invariant, non-factorized representation, other variables like camera viewpoint also drive variance within the identity subspace, ‘entangling’ the representations of the two variables (Figure 1A, right; viewpoint-driven variance is mainly in identity subspace, orange flat-shaded region).
To formalize these different representational strategies, we introduced measures of factorization and invariance to scene parameters in neural population responses (Figure 1B; see Equations 2–4 in ‘Methods’). Concretely, invariance to a scene variable (e.g., object motion) is computed by measuring the degree to which varying that parameter alone changes neural responses, relative to the changes induced by varying other parameters (lower relative influence on neural activity corresponds to higher invariance, or tolerance, to that parameter). Factorization is computed by identifying the axes in neural population activity space that are influenced by varying the parameter of interest and assessing how much they overlap the axes influenced by other parameters (‘a’ in Figure 1B and C; lower overlap corresponds to higher factorization). We quantified this overlap in two different ways (‘principal components analysis (PCA)-based’ and ‘covariance-based’ factorization, corresponding to Equations 2 and 4 in ‘Methods’), which produced similar results when compared in subsequent analyses (unless otherwise noted, factorization scores will generally refer to the PCA-based method, and the covariance method is shown in Figures 5–7 for comparison). Intuitively, a neural population in which one neural subpopulation encodes object identity and another separate subpopulation encodes object position exhibits a high degree of factorization of those two parameters (however, note that factorization may also be achieved by neural populations with mixed selectivity in which the ‘subpopulations’ correspond to subspaces, or independent orthogonal linear projections, of neural activity space rather than physical subpopulations). Though the example presented in Figure 1 focused on factorization of and invariance to object identity versus non-identity variables, we stress that our definitions can be applied to any scene variables of interest.
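These metrics can be made concrete in a short numerical sketch. The code below is our illustrative implementation, not the authors' published analysis code: `invariance` follows the variance-ratio intuition described above, and `pca_factorization` measures how much of one parameter's response variance falls inside the principal subspace capturing a chosen fraction (here 90%, to mirror the PCA threshold mentioned in the Results) of the other parameters' variance.

```python
import numpy as np

def invariance(var_param, var_total):
    # Lower relative influence of a parameter on neural activity
    # corresponds to higher invariance (tolerance) to that parameter.
    return 1.0 - var_param / var_total

def pca_factorization(resp_a, resp_other, var_thresh=0.9):
    """Sketch of a PCA-based factorization score.

    resp_a:     (samples, units) responses as parameter `a` varies alone
    resp_other: (samples, units) responses as the other parameters vary
    """
    # Principal subspace capturing `var_thresh` of other-parameter variance.
    other = resp_other - resp_other.mean(axis=0)
    _, s, vt = np.linalg.svd(other, full_matrices=False)
    var_frac = s**2 / np.sum(s**2)
    k = int(np.searchsorted(np.cumsum(var_frac), var_thresh)) + 1
    basis = vt[:k]                          # (k, units)

    # Fraction of `a`-driven variance that leaks into that subspace.
    a = resp_a - resp_a.mean(axis=0)
    overlap = np.sum((a @ basis.T) ** 2) / np.sum(a**2)
    return 1.0 - overlap                    # low overlap -> high factorization
```

In this sketch, two parameters encoded along orthogonal axes score near 1, while a parameter whose variance falls entirely within the other parameters' principal subspace scores near 0.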
Furthermore, we presented a simplified visual depiction of the geometry within each scene variable subspace in Figure 1. We emphasize that our factorization metric does not require a particular geometry within a variable’s subspace, whether parallel linearly ordered coding of viewpoint as in the cylindrical class manifolds shown in Figure 1A and B, or a more complex geometry where there is a lack of parallelism and/or a more nonlinear layout.
While factorization and invariance are not mutually exclusive representational strategies, they are qualitatively different. Factorization, unlike invariance, has the potential to enable the simultaneous representation of multiple scene parameters in a decodable fashion. Intuitively, factorization increases with higher dimensionality, all other things being equal, because higher dimensionality decreases overlap between subspaces (in the limit of high dimensions, the angle between randomly chosen directions approaches 90°, a fully orthogonal code). For a given finite, fixed dimensionality, factorization is instead mainly driven by the angle between one variable's subspace and the other variables' subspaces, which measures the degree of contamination (Figure 1C; square vs. parallelogram). In a simulation, we found that the extent to which the variables of interest were represented in a factorized way (i.e., along orthogonal axes, rather than correlated axes) influenced the ability of a linear discriminator to successfully decode both variables in a generalizable fashion from a few training samples (Figure 1C).
Given the theoretically desirable properties of factorized representations, we next asked whether such representations are observed in neural data, and how much factorization contributes empirically to downstream decoding performance in real data. Specifically, we took advantage of an existing dataset in which the tested images independently varied object identity versus object pose plus background context (Majaj et al., 2015; https://github.com/brainscore/vision/blob/master/examples/data_metrics_benchmarks.ipynb). We found that both V4 and IT responses exhibited significantly greater factorization of object identity information from non-identity information than a shuffle control (which accounts for effects on factorization due to dimensionality of these regions) (Figure 2—figure supplement 1; see ‘Methods’). Furthermore, the degree of factorization increased from V4 to IT (Figure 2A). Consistent with prior studies, we also found that invariance to non-identity information increased from V4 to IT in our analysis (Figure 2A, right, solid lines; Rust and DiCarlo, 2010). Invariance to non-identity information was even more pronounced when measured in the subspace of population activity capturing the bulk (90%) of identity-driven variance, as a consequence of increased factorization of identity from non-identity information (Figure 2A, right, dashed lines).
To illustrate the beneficial effect of factorization on decoding performance, we performed a statistical lesion experiment that precisely targeted this aspect of representational geometry. Specifically, we analyzed a transformed neural representation obtained by rotating the population data so that inter-class variance more strongly overlapped with the principal components (PCs) of the intra-class variance in the data (see Equation 1 in ‘Methods’). Note that this transformation, designed to decrease factorization, acts on the angle between latent variable subspaces. The applied linear basis rotation leaves all other activity statistics completely intact (such as mean neural firing rates, covariance structure of the population, and its invariance to non-class variables) yet has the effect of strongly reducing object identity decoding performance in both V4 and IT (Figure 2B). Our analysis shows that maintaining invariance alone in the neural population code was insufficient to account for a large fraction of decoding performance in high-level visual cortex; factorization of non-identity variables is key to the decoding performance achieved by V4 and IT representations.
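A hedged sketch of such a lesion is given below. This is our illustrative stand-in for the paper's Equation 1 (which is not reproduced here), built to have the stated qualitative properties: it isometrically rotates the inter-class (class-mean) component of the population response into the leading principal components of the intra-class variance, preserving the pairwise geometry of the class means and leaving within-class residuals untouched, while collapsing the angle between the identity and non-identity subspaces.

```python
import numpy as np

def entangle(X, y):
    """Rotate class means into the leading intra-class PCs.

    X: (samples, units) population responses; y: (samples,) class labels.
    Within-class residuals are left exactly as they were.
    """
    classes = np.unique(y)
    idx = np.searchsorted(classes, y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    resid = X - mu[idx]                    # intra-class variance
    g = mu.mean(axis=0)
    dev = mu - g                           # inter-class variance
    # Orthonormal basis spanning the class-mean deviations.
    q, _ = np.linalg.qr(dev.T)             # (units, n_classes)
    # Leading intra-class PCs, one per class-mean dimension.
    _, _, vt = np.linalg.svd(resid, full_matrices=False)
    b = vt[:dev.shape[0]]                  # (n_classes, units)
    # Re-express class-mean coordinates in the intra-class PC basis
    # (an isometry on the class means: pairwise distances are preserved).
    new_mu = g + (dev @ q) @ b
    return new_mu[idx] + resid
```

Because the class means keep their pairwise distances and the residual scatter is unchanged, any drop in decoding accuracy after this transformation can only reflect the lost factorization, in the spirit of Figure 2B.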
We next asked whether factorization is found in DNN model representations and whether this novel, heretofore unconsidered metric is a strong indicator of more brain-like models. When working with computational models, we have the liberty to test an arbitrary number of stimuli; therefore, we could independently vary multiple scene parameters at sufficient scale to enable computing factorization and invariance for each, and we explored factorization in DNN model representations in more depth than previously measured in existing neural experiments. To gain insight back into neural representations, we also assessed the ability of each model to predict separately collected neural and behavioral data. In this fashion, we may indirectly assess the relative significance of geometric properties like factorization and invariance to biological visual representations – if, for instance, models with more factorized representations consistently match neural data more closely, we may infer that those neural representations likely exhibit factorization themselves (Figure 3). To measure factorization, invariance, and decoding properties of DNN models, we generated an augmented image set, based on the images used in the previous dataset (Figure 2), in which we independently varied the foreground object identity, foreground object pose, background identity, scene lighting, and 2D scene viewpoint. Specifically, for each base image from the original dataset, we generated sets of images that varied exactly one of the above scene parameters while keeping the others constant, allowing us to measure the variance induced by each parameter relative to the variance across all scene parameters (Figure 3, top left; 100 base scenes and 10 transformed images for each source of variation). We presented this large image dataset to models (4000 images total) to assess the relative degree of representational factorization of and invariance to each scene parameter.
We conducted this analysis across a broad range of DNNs varying in architecture and objective as well as other implementational choices to obtain the widest possible range of DNN representations for testing our hypothesis. These included models using supervised training for object classification (Krizhevsky et al., 2012; He et al., 2016), contrastive selfsupervised training (He et al., 2020; Chen et al., 2020), and selfsupervised models trained using auxiliary objective functions (Tian et al., 2019; Doersch et al., 2015; He et al., 2017; Donahue and Simonyan, 2019; see ’Methods’ and Supplementary file 1b).
First, we asked whether, in the course of training, DNN models develop factorized representations at all. We found that the final layers of trained networks exhibited consistent increases in factorization of all tested scene parameters relative to a randomly initialized (untrained) baseline with the same architecture (Figure 4A, top row, rightward shift relative to black cross, a randomly initialized ResNet50). By contrast, training DNNs produced mixed effects on invariance, typically increasing it for background and lighting but reducing it for object pose and camera viewpoint (Figure 4A, bottom row, leftward shift relative to black cross for left two panels). Moreover, we found that the degree of factorization in models correlated with the degree to which they predicted neural activity for single-unit IT data (Figure 4A, top row), which can be seen as correlative evidence that neural representations in IT exhibit factorization of all scene variables tested. Interestingly, we saw a different pattern for representational invariance to a scene parameter. Invariance showed mixed correlations with neural predictivity (Figure 4A, bottom row), suggesting that IT neural representations build invariance to some scene information (background and lighting) but not to others (object pose and observer viewpoint). Similar effects were observed when we assessed correlations between these metrics and fits to human behavioral data rather than macaque neural data (Figure 4B).
To assess the robustness of these findings to the choice of images and brain regions used in an experiment, we conducted the same analyses across a large and diverse set of previously collected neural and behavioral datasets, from different primate species and visual regions (six macaque datasets [Majaj et al., 2015; Rust and DiCarlo, 2012; Rajalingham et al., 2018]: two V4, two ITC (inferior temporal cortex), and two behavior; six human datasets [Rajalingham et al., 2018; Kay et al., 2008; Shen et al., 2019]: two V4, two HVC (higher visual cortex), and two behavior; Supplementary file 1a). Consistently, increased factorization of scene parameters in model representations correlated with models being more predictive of neural spiking responses, voxel BOLD signal, and behavioral responses to images (Figure 5A, black bars; see Figure 4—figure supplements 1–3 for scatter plots across all datasets). Although invariance to appearance factors (background identity and scene lighting) correlated with more brain-like models, invariance to spatial transforms (object pose and camera viewpoint) consistently did not (zero or negative correlation values; Figure 5C, red and green open circles). Our results were preserved when we reran the analyses using only the subset of models with the identical ResNet50 architecture (Figure 5—figure supplement 1) or when we evaluated model predictivity using representational dissimilarity matrices (RDMs) of the population instead of linear regression (encoding) fits of individual neurons or voxels (Figure 5—figure supplement 2). Furthermore, the main finding of a positive correlation between factorization and neural predictivity was robust to the particular choice of PCA threshold we used to quantify factorization (Figure 5—figure supplement 3). We found similar results using a covariance-based method for computing factorization that does not have any free parameters (Figure 5C, faded filled circles; see Equation 4 in ‘Methods’).
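For reference, a minimal sketch of an RDM-based model-brain comparison (our illustration; the paper's exact distance metric and comparison statistic may differ) computes a correlation-distance RDM for each representation and compares the upper triangles of the two RDMs with a Spearman rank correlation:

```python
import numpy as np

def rdm(responses):
    """Correlation-distance RDM; responses is (stimuli, units)."""
    return 1.0 - np.corrcoef(responses)

def rdm_similarity(r1, r2):
    """Spearman rank correlation of the two RDMs' upper triangles."""
    iu = np.triu_indices_from(r1, k=1)
    a, b = r1[iu], r2[iu]
    # Rank-transform, then take the Pearson correlation of the ranks.
    ra = a.argsort().argsort()
    rb = b.argsort().argsort()
    return float(np.corrcoef(ra, rb)[0, 1])
```

A model representation and a neural population that impose the same pairwise dissimilarity structure on the stimuli yield a similarity near 1, regardless of how individual units map onto neurons.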
Finally, we tested whether our results generalized beyond the particular image set used for computing the model factorization scores in the first place. Here, instead of relying on our synthetically generated images, where each scene parameter was directly controlled, we recomputed factorization from two types of relatively unconstrained natural movies: one in which the observer moves in an urban environment (approximating camera viewpoint changes) (Lee et al., 2012) and another in which objects move in front of a fairly stationary observer (approximating object pose changes) (Monfort, 2019). Similar to the result found for factorization measured using augmentations of synthetic images, factorization of frame-by-frame variance (local in time, presumably dominated by either observer or camera motion; see ‘Methods’) from other sources of variance across natural movies (non-local in time) was correlated with improved neural predictivity in both macaque and human data, while invariance to local frame-by-frame differences was not (Figure 5B; black versus gray bars). Thus, we have shown that a main finding – the importance of object pose and camera viewpoint factorization for achieving brain-like representations – holds across types of brain signal (spiking vs. BOLD), species (monkey vs. human), cortical brain areas (V4 vs. IT), images for testing in experiments (synthetic, grayscale vs. natural, color), and image sets for computing the metric (synthetic images vs. natural movies).
Our analysis of DNN models provides strong evidence that greater factorization of a variety of scene variables is consistently associated with a stronger match to neural and behavioral data. Prior work had identified a similar correlation between object classification performance (measured by fitting a decoder for object class using model representations) and fidelity to neural data (Yamins et al., 2014). A priori, it is possible that the correlations we have demonstrated between scene parameter factorization and neural fit could be entirely captured by the known correlation between classification performance and neural fits (Schrimpf et al., 2020; Yamins et al., 2014), as factorization and classification may themselves be correlated. However, we found that factorization scores significantly boosted cross-validated predictive power of neural/behavioral fit performance compared to simply using object classification alone, and factorization boosted predictive power as much if not slightly more when using RDMs instead of linear regression fits to quantify the match to the brain/behavior (Figure 6). Thus, considering factorization in addition to object classification performance improves upon our prior understanding of the properties of more brain-like models (Figure 7).
Discussion
Object classification, which has been proposed as a normative principle for the function of the ventral visual stream, can be supported by qualitatively different representational geometries (Yamins et al., 2014; Nayebi, 2021). These include representations that are completely invariant to non-class information (Caron et al., 2019b; Caron, 2019a) and representations that retain a high-dimensional but factorized encoding of non-class information, which disentangles the representation of multiple variables (Figure 1A). Here, we presented evidence that factorization of non-class information is an important strategy used, alongside invariance, by the high-level visual cortex (Figure 2) and by DNNs that are predictive of primate neural and behavioral data (Figures 4 and 5).
Prior work has indicated that building representations that support object classification performance and representations that preserve high-dimensional information about natural images are both important principles of the primate visual system (Cadieu et al., 2014; Elmoznino and Bonner, 2022; though see Conwell et al., 2022). Critically, our results cannot be accounted for by classification performance or dimensionality alone (Figure 6, gray and pink bars); that is, the relationship between factorization and matches to neural data was not entirely mediated by classification or dimensionality. That said, we do not regard factorization and dimensionality, or factorization and object classification performance, as mutually exclusive hypotheses for useful principles of visual representations. Indeed, high-dimensional representations could be regarded as a means to facilitate factorization, and likewise factorized representations can better support classification (Figure 1C).
Our notion of factorization is related to, but distinct from, several other concepts in the literature. Many prior studies in machine learning have considered the notion of disentanglement, often defined as the problem of inferring independent factors responsible for generating the observed data (Kim and Mnih, 2018; Eastwood and Williams, 2018; Higgins, 2018). One prior study notably found that machine learning models designed to infer disentangled representations of visual data displayed single-unit responses that resembled those of individual neurons in macaque IT (Higgins et al., 2021). Our definition of factorization is more flexible, requiring only that independent factors be encoded in orthogonal subspaces, rather than by distinct individual neurons. Moreover, our definition applies to generative factors, such as camera viewpoint or object pose, that are multidimensional and context dependent. Factorization is also related to a measure of ‘abstraction’ in representational geometry introduced in a recent line of work (Bernardi et al., 2020; Boyle et al., 2024), which is observed to emerge in trained neural networks (Johnston and Fusi, 2023; Alleman et al., 2024). In these studies, an abstract representation is defined as one in which variables are encoded and can be decoded in a consistent fashion regardless of the values of other variables. A fully factorized representation should be highly abstract according to this definition, though factorization emphasizes the geometric properties of the population representation while these studies emphasize the consequences for decoding performance in training downstream linear readouts. Relatedly, another recent study found that orthogonal encoding of class and non-class information is one of several factors that determines few-shot classification performance (Sorscher et al., 2022).
Our work can be seen as complementary to work on representational straightening of natural movie trajectories in the population space (Hénaff et al., 2021). This work suggested that visual representations maintain a locally linear code of latent variables like camera viewpoint, while our work focused on the global arrangement of the linear subspaces affected by different variables (e.g., overall coding of camera viewpoint-driven variance versus sources of variance from other scene variables in a movie). Local straightening of natural movies was found to be important for early visual cortex neural responses but not necessarily for high-level visual cortex (Toosi and Issa, 2022), where the present work suggests factorization may play a role.
Our work has several limitations. First, our analysis is primarily correlative. Going forward, we suggest that factorization could prove to be a useful objective function for optimizing neural network models that better resemble primate visual systems, or that factorization of latent variables should at least be a byproduct of other objectives that lead to more brain-like models. An important direction for future work is finding ways to directly incentivize factorization in model objective functions so as to test its causal impact on the fidelity of learned representations to neural data. Second, our choice of scene variables to analyze in this study was heuristic and somewhat arbitrary. Future work could consider unsupervised methods (in the vein of independent components analysis) for uncovering the latent sources of variance that generate visual data, and assessing to what extent these latent factors are encoded in factorized form. Third, in our work we do not specify the details of how a particular scene parameter is encoded within its factorized subspace, including whether the code is linear (‘straightened’) or nonlinear (Hénaff et al., 2021; Hénaff et al., 2019). Neural codes could adopt different strategies, resulting in similar factorization scores at the population level, each with some support in visual cortex literature: (1) each neuron encodes a single latent variable (Field, 1994; Chang and Tsao, 2017), (2) separate brain subregions encode qualitatively different latent variables but using distributed representations within each region (Tsao et al., 2006; Lafer-Sousa and Conway, 2013; Vaziri et al., 2014), and (3) each neuron encodes multiple variables in a distributed population code, such that the factorization of different variables is only apparent as independent directions when assessed in high-dimensional population activity space (Field, 1994; Rigotti et al., 2013).
Future work can disambiguate among these possibilities by systematically examining ventral visual stream subregions (Kravitz et al., 2013; Vaziri et al., 2014; Kravitz et al., 2011) and the single neuron tuning curves within them (Leopold et al., 2006; Freiwald et al., 2009).
Methods
Monkey datasets
Macaque monkey datasets were of single-unit neural recordings (Rust and DiCarlo, 2012), multi-unit neural recordings (Majaj et al., 2015), and object recognition behavior (Rajalingham et al., 2018). Single-unit spiking responses to natural images were measured in V4 and anterior ventral IT (Rust and DiCarlo, 2012). An advantage of this dataset is that it contains well-isolated single neurons, the gold standard for electrophysiology. Furthermore, the IT recordings were obtained from penetrating electrodes targeting the anterior ventral portion of IT near the base of the skull, reflecting the highest level of the IT hierarchy. On the other hand, the multi-unit dataset was obtained from across IT, with a bias toward locations where multi-unit arrays are more easily placed, such as CIT and PIT (Majaj et al., 2015), complementing the recording locations of the single-unit dataset. An advantage of the multi-unit dataset, which used chronic recording arrays, is that an order of magnitude more images were tested per recording site (see dataset comparisons in Supplementary file 1a). Finally, the monkey behavioral dataset came from a third study examining the image-by-image object classification performance of macaques and humans (Rajalingham et al., 2018).
Human datasets
Three human datasets were used: two fMRI datasets and one object recognition behavior dataset (Nonaka et al., 2021; Rajalingham et al., 2018; Kay et al., 2008). The fMRI datasets used different images (color versus grayscale) but otherwise had a fairly similar number of images and a similar voxel resolution in MR imaging. Human fMRI studies have found that different DNN layers tend to map to V4 and HVC human fMRI voxels (Nonaka et al., 2021). The human behavioral dataset measured image-by-image classification performance and was collected in the same study as the monkey behavioral signatures (Rajalingham et al., 2018).
Computational models
In recent years, a variety of approaches to training DNN vision models have been developed that learn representations usable for downstream classification (and other) tasks. Models differ in a variety of implementational choices, including their architecture, objective function, and training dataset. In the models we sampled, objectives included supervised learning of object classification (AlexNet, ResNet), self-supervised contrastive learning (MoCo, SimCLR), and other unsupervised learning algorithms based on auxiliary tasks (e.g., reconstruction or colorization). A majority of the models we considered relied on the widely used, performant ResNet-50 architecture, though some in our library utilized different architectures. The randomly initialized network control utilized ResNet-50 (see Figure 4A and B). The full set of models is listed in Supplementary file 1b.
Simulation of factorized versus nonfactorized representational geometries
For the simulation in Figure 1C, we generated data as follows. First, we randomly sampled the values of N = 10 binary features. Feature values corresponded to positions in an N-dimensional vector space: each feature was assigned an axis in N-dimensional space, and the value of each feature (+1 or –1) served as a coefficient indicating the position along that axis. All but two of the feature axes were orthogonal to the rest. The last two features, which served as targets for the trained linear decoders, were assigned axes whose alignment ranged from 0 (orthogonal) to 1 (identical). In the noiseless case, the factorization of these two variables with respect to one another equals 1 minus the squared cosine of the angle between their axes. We added Gaussian noise to the positions of each data point and randomly sampled K positive and negative examples for each variable of interest as training data for the linear classifier (a support vector machine).
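The sampling scheme above can be sketched in a few lines of Python. This is a toy sketch: the noise level and alignment value are illustrative choices, not the exact settings used in Figure 1C.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
N = 10           # number of binary features
K = 100          # training examples per class
noise = 0.5      # std of Gaussian noise added to positions
alignment = 0.8  # cos(angle) between the two decoded feature axes; 0 = orthogonal

# One axis per feature; rotate the second decoded axis toward the first.
axes = np.eye(N)
axes[1] = alignment * axes[0] + np.sqrt(1 - alignment**2) * np.eye(N)[1]

# Noiseless factorization of the two decoded variables: 1 - cos^2(theta).
factorization = 1 - alignment**2

# Binary feature values (+1/-1) become coefficients along the feature axes.
F = rng.choice([-1.0, 1.0], size=(4 * K, N))
X = F @ axes + noise * rng.standard_normal((4 * K, N))

# Decode the first target variable with a linear SVM, as in Figure 1C.
clf = LinearSVC(max_iter=10000).fit(X[: 2 * K], F[: 2 * K, 0] > 0)
accuracy = clf.score(X[2 * K :], F[2 * K :, 0] > 0)
```

Note that even when the two decoded axes are partially aligned, a linear readout can still recover the target feature; what degrades is the noise robustness of that readout, which is the effect the simulation probes.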
Macaque neural data analyses
For the shuffle control used as a null model for factorization, we shuffled the object identity labels of the images (Figure 2—figure supplement 1). For the transformation used in Figure 2B, we computed the PCs of the mean neural activity response to each object class (‘class centers,’ x^{c}), referred to as the inter-class PCs v_{1}^{inter}, v_{2}^{inter}, …, v_{N}^{inter}. We also computed the PCs of the data with the corresponding class center subtracted from each activity pattern (i.e., x − x^{c}), referred to as the intra-class PCs v_{1}^{intra}, v_{2}^{intra}, …, v_{N}^{intra}. We transformed the data by applying to the class centers a change of basis matrix W_{inter→intra} that rotated each inter-class PC into the corresponding intra-class PC: W_{inter→intra} = v_{1}^{intra}(v_{1}^{inter})^{T} + … + v_{N}^{intra}(v_{N}^{inter})^{T}. That is, the class centers were transformed by this matrix, but the relative positions of activity patterns within a given class were fixed. For an activation vector x belonging to a class c for which the average activity vector over all images of class c is x^{c}, the transformed vector was x̃ = W_{inter→intra} x^{c} + (x − x^{c}).
This transformation exactly preserves the intra-class variance statistics of the original data and preserves every statistic of the inter-class variance except its orientation relative to the intra-class variance. That is, the transformation is designed to affect (specifically, decrease) factorization while controlling for all other statistics of the activity data that may be relevant to object classification performance. In terms of the simulation of two binary variables in Figure 1C, this basis change of the neural data in Figure 2B is equivalent to turning a square into the maximally flat parallelogram, the degenerate one in which all the points are collinear.
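The class-center rotation described above could be implemented along the following lines. This is a minimal numpy sketch: the function name and the use of SVD to obtain the PCs are our own choices, not specified in the text.

```python
import numpy as np

def defactorize(X, labels):
    """Rotate inter-class structure into the intra-class subspace (cf. Figure 2B).

    X: (n_samples, n_units) activity; labels: (n_samples,) integer class ids.
    Class centers are rotated so inter-class PCs align with intra-class PCs,
    while within-class geometry is left untouched.
    """
    classes = np.unique(labels)
    centers = np.stack([X[labels == c].mean(axis=0) for c in classes])
    idx = np.searchsorted(classes, labels)
    centered = X - centers[idx]  # within-class offsets (x - x^c)

    # PCs of the class centers (inter-class) and of the residuals (intra-class).
    _, _, Vinter = np.linalg.svd(centers - centers.mean(0), full_matrices=False)
    _, _, Vintra = np.linalg.svd(centered, full_matrices=False)
    k = min(len(Vinter), len(Vintra))

    # Change of basis W = sum_i v_i^intra (v_i^inter)^T.
    W = Vintra[:k].T @ Vinter[:k]

    # Transformed vector: x_tilde = W x^c + (x - x^c).
    return centered + centers[idx] @ W.T
```

By construction, within-class offsets pass through unchanged, so any change in decoding performance after this transformation is attributable to the reoriented class centers.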
Scene parameter variation
Our generated scenes consisted of foreground objects superimposed on natural backgrounds. To measure variance associated with a particular parameter, such as background identity, we randomly sampled 10 different backgrounds while holding the other variables (e.g., foreground object identity and pose) constant. To measure variance associated with foreground object pose, we randomly varied object angle in [–90°, 90°] along all three axes independently, object position along the two in-plane axes (horizontal [–30%, 30%] and vertical [–60%, 60%]), and object size [×1/1.6, ×1.6]. To measure variance associated with camera position, we took crops of the image with scale uniformly varying from 20 to 100% of the image size and position uniformly distributed across the image. To measure variance associated with lighting conditions, we applied random jitters to the brightness, contrast, saturation, and hue of an image, with jitter bounds of [–0.4, 0.4] for brightness, contrast, and saturation and [–0.1, 0.1] for hue. These parameter choices follow standard data augmentation practices for self-supervised neural network training, as used, for example, in the SimCLR and MoCo models tested here (He et al., 2020; Chen et al., 2020).
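For concreteness, the sampling bounds above might be collected into hypothetical helper functions like these (the names are ours, and the log-uniform distribution of the size factor is an assumption the text does not specify):

```python
import numpy as np

def sample_pose(rng):
    """Sample a random object pose within the bounds described above."""
    return {
        "angles_deg": rng.uniform(-90, 90, size=3),  # rotation about each axis
        "dx_frac": rng.uniform(-0.30, 0.30),         # horizontal in-plane shift
        "dy_frac": rng.uniform(-0.60, 0.60),         # vertical in-plane shift
        "scale": 1.6 ** rng.uniform(-1, 1),          # size in [1/1.6, 1.6], assumed log-uniform
    }

def sample_lighting(rng):
    """Sample a lighting jitter (brightness/contrast/saturation/hue)."""
    return {
        "brightness": rng.uniform(-0.4, 0.4),
        "contrast": rng.uniform(-0.4, 0.4),
        "saturation": rng.uniform(-0.4, 0.4),
        "hue": rng.uniform(-0.1, 0.1),
    }
```

These bounds mirror the color-jitter settings commonly used in SimCLR/MoCo-style augmentation pipelines.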
Factorization and invariance metrics
Factorization and invariance were measured according to the following equations:

factorization_{param} = 1 − var_{param|other_param_subspace} / var_{param}

invariance_{param} = 1 − var_{param} / var_{all_params}
Variance induced by a parameter (var_{param}) is computed by measuring the variance (summed across all dimensions of neural activity space) of neural responses to the 10 augmented versions of a base image, where the augmentations are obtained by varying the parameter of interest. This quantity is then averaged across the 100 base images. The variance induced by all parameters (var_{all_params}) is simply the sum of the variances across all images and augmentations. To define the ‘other-parameter subspace,’ we averaged neural responses for a given base image over all augmentations of the parameter of interest and ran PCA on the resulting set of averaged responses. The subspace was defined as the space spanned by the top PCA components containing 90% of the variance of these responses. Intuitively, this space captures the bulk of the variance driven by all parameters other than the parameter of interest (due to the averaging step). The variance of the parameter of interest within this ‘other-parameter subspace,’ var_{param|other_param_subspace}, was computed the same way as var_{param} but using the projections of neural activity responses onto the other-parameter subspace. In the main text, we refer to this method of computing factorization as PCA-based factorization.
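A sketch of the PCA-based factorization computation, under the assumption that responses are organized as a (base image × augmentation × unit) array; the function and variable names are ours:

```python
import numpy as np

def pca_factorization(responses):
    """PCA-based factorization of one scene parameter.

    responses: (n_base, n_aug, n_units) activity for n_base base images,
    each rendered with n_aug variations of the parameter of interest.
    """
    # Variance induced by the parameter, summed over units, averaged over base images.
    var_param = np.mean([r.var(axis=0).sum() for r in responses])

    # "Other-parameter subspace": PCA on responses averaged over the parameter
    # of interest, keeping the top PCs containing 90% of the variance.
    means = responses.mean(axis=1)
    means_c = means - means.mean(axis=0)
    _, s, Vt = np.linalg.svd(means_c, full_matrices=False)
    var_ratio = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(var_ratio, 0.90)) + 1
    basis = Vt[:k]  # (k, n_units)

    # Parameter-induced variance inside the other-parameter subspace.
    proj = responses @ basis.T
    var_in_other = np.mean([p.var(axis=0).sum() for p in proj])

    return 1.0 - var_in_other / var_param
```

When parameter-induced variance is orthogonal to the variance driven by the remaining parameters, the metric approaches 1; when the two occupy the same subspace, it approaches 0.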
We also considered an alternative definition of factorization, referred to as covariance-based factorization. In this alternative definition, we measured the covariance matrices cov_{param} and cov_{other_param} induced by varying (in the same fashion as above) the parameter of interest and all other parameters, respectively. Factorization was measured by the following equation:

factorization_{param} = 1 − ⟨cov_{param}, cov_{other_param}⟩ / (‖cov_{param}‖ ‖cov_{other_param}‖)

where ⟨·,·⟩ denotes the dot product between the flattened covariance matrices and ‖·‖ the corresponding norm.
This is equal to 1 minus the dot product between the normalized, flattened covariance matrices; covariance-based factorization thus measures the discrepancy between the covariance structure induced by the parameter of interest and that induced by the other parameters. The main findings were unaffected by our choice of method for computing the factorization metric, whether PCA or covariance based (Figures 5—7). An advantage of the PCA-based method is that, as an intermediate step, it recovers the linear subspaces containing parameter variance, but in so doing it requires an arbitrary choice of the explained-variance threshold used to select the number of PCs. By contrast, the covariance-based method is more straightforward to compute and has no free parameters. Thus, these two metrics are complementary and somewhat analogous in methodology to two metrics commonly used for measuring dimensionality: the number of components needed to explain a certain fraction of the variance, analogous to our original PCA-based definition, and the participation ratio, analogous to our covariance-based definition (Ding and Glanzman, 2010; Litwin-Kumar et al., 2017).
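The covariance-based metric is simpler to compute; a possible implementation, with names of our choosing:

```python
import numpy as np

def cov_factorization(resp_param, resp_other):
    """Covariance-based factorization: 1 minus the dot product between the
    normalized, flattened covariance matrices induced by the parameter of
    interest (resp_param) and by all other parameters (resp_other).

    Both inputs are (n_samples, n_units) response matrices.
    """
    c1 = np.cov(resp_param, rowvar=False).ravel()
    c2 = np.cov(resp_other, rowvar=False).ravel()
    return 1.0 - (c1 @ c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))
```

As with the PCA-based metric, identical covariance structure gives 0 and non-overlapping covariance structure gives 1, but no explained-variance threshold is needed.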
Natural movie factorization metrics
For natural movies, variance is not induced by explicit control of a parameter as in our synthetic scenes but implicitly, by treating contiguous frames (separated by 200 ms in real time) as reflecting changes in one of two motion parameters (object versus observer motion), depending on how stationary the observer is (MIT Moments in Time movie set: stationary observer; UT Austin Egocentric movie set: non-stationary observer) (Lee et al., 2012; Monfort, 2019). Here, the all-parameters condition is simply the variance across all movie frames. In the MIT Moments in Time dataset, this includes variance across thousands of video clips taken in many different settings; in the UT Austin Egocentric movie dataset, it includes variance across only four movies, but over long durations (3–5 hr) during which an observer translates extensively through an environment. Thus, movie clips in the MIT Moments in Time set contained new scenes with different object identities, backgrounds, and lighting, effectively capturing variance induced by these non-spatial parameters (Monfort, 2019). In the UT Austin Egocentric movie set, new objects and backgrounds are encountered as the subject navigates the urban landscape (Lee et al., 2012).
Model neural encoding fits
Linear mappings between model features and neuron (or voxel) responses were computed using ridge regression (with the regularization coefficient selected by cross-validation) on a low-dimensional linear projection of model features (top 300 PCA components computed using the images in each dataset). We also tested an alternative approach to measuring representational similarity between models and experimental data based on representational similarity analysis (Kriegeskorte and Kievit, 2013): we computed dot-product similarities of the representations of all pairs of images and measured the Spearman correlation coefficient between the pairwise similarity matrices obtained from a given model and neural dataset, respectively.
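Both mapping approaches can be sketched with scikit-learn and scipy. This is a simplified sketch: the alpha grid and the 5-fold cross-validation scheme are illustrative assumptions, and the function names are ours.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def encoding_fit(model_feats, voxels, n_pc=300):
    """Fit voxel responses from model features: project features onto their
    top n_pc PCA components, then fit ridge regression with a cross-validated
    regularization coefficient; returns mean held-out R^2."""
    n_pc = min(n_pc, *model_feats.shape)
    Z = PCA(n_components=n_pc).fit_transform(model_feats)
    reg = RidgeCV(alphas=np.logspace(-3, 3, 7))
    return cross_val_score(reg, Z, voxels, cv=5, scoring="r2").mean()

def rsa_similarity(feats_a, feats_b):
    """RSA: Spearman correlation between the flattened pairwise dot-product
    similarity matrices of two representations of the same image set."""
    sa = (feats_a @ feats_a.T).ravel()
    sb = (feats_b @ feats_b.T).ravel()
    return spearmanr(sa, sb)[0]
```

The PCA step keeps the regression well-conditioned when the number of model features far exceeds the number of images, which is typical for DNN layer activations.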
Model behavioral signatures
We followed the approach of Rajalingham et al., 2018. We took human and macaque behavioral data from the object classification task and used it to create signatures of image-level difficulty (the ‘I1’ vector) and image-by-distractor-object confusion rates (the ‘I2’ matrix). We did the same for the DNN models, extracting model ‘behavior’ by training logistic regression classifiers to classify object identity in the same image dataset used in the experiments of Rajalingham et al., 2018, using model layer activations as inputs. Model behavioral accuracy on image-by-distractor-object pairs was assessed using the classification probabilities output by the logistic regression model, and these were used to compute I1 and I2 metrics as was done for the true behavioral data. Behavioral similarity between models and data was assessed by measuring the correlation between the entries of the I1 vectors and I2 matrices (both I1 and I2 results are reported).
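A simplified sketch of extracting an image-level difficulty signature from model features is shown below. This omits the normalization steps of the full I1/I2 analysis in Rajalingham et al., 2018, and the function name is ours.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def i1_signature(feats_train, y_train, feats_test, y_test):
    """Image-level difficulty vector ('I1'-style): per-image probability
    assigned to the correct class by a logistic-regression readout trained
    on model layer activations."""
    clf = LogisticRegression(max_iter=1000).fit(feats_train, y_train)
    probs = clf.predict_proba(feats_test)
    # Probability of the correct class for each test image.
    return probs[np.arange(len(y_test)), clf.classes_.searchsorted(y_test)]
```

Behavioral similarity would then be the correlation between such a model-derived vector and the corresponding vector computed from primate behavioral data.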
Model layer choices
The scatter plots in Figure 4A and B and Figure 4—figure supplements 1–3 use metrics (factorization, invariance, and goodness of neural fit) taken from the final representational layer of each network (the layer prior to the logits layer used for classification in supervised networks, prior to the embedding head in contrastive learning models, or prior to any auxiliary task-specific layers in unsupervised models trained using auxiliary tasks). However, the representational geometries of model activations, and their match to neural activity and behavior, vary across layers. This variability arises because different model layers correspond to different stages of processing in the model (convolutional layers in some cases and pooling operations in others) and may even have different dimensionalities. To ensure that our results do not depend on idiosyncrasies of representations in one particular model layer and the particular network operations that precede it, summary correlation statistics in all other figures (Figures 5—7, Figure 5—figure supplements 1–3) show the results of the analysis in question averaged over the five final representational layers of the model. That is, the metrics of interest (factorization, invariance, neural encoding fits, RDM correlation, and behavioral similarity scores) were computed independently for each of the five final representational layers of each model, and these five values were averaged prior to computing correlations between metrics.
Correlation of model predictions and experimental data
A Spearman rank correlation coefficient was calculated for each model layer by biological dataset combination (six monkey datasets and six human datasets). Here, we do not correct for noise in the biological data when computing the correlation coefficient, as this would require trial repeats (for computing inter-trial variability) that were limited or unavailable in the fMRI data used. In any event, normalizing by the data noise ceiling applies a uniform scaling to all model prediction scores and does not affect model comparison, which depends only on ranking models as relatively better or worse at predicting brain data. Finally, we estimated the effectiveness of model factorization, invariance, or dimensionality in combination with model object classification performance for predicting model neural and behavioral fits by performing a linear regression on the particular dual metric combination (e.g., classification plus object pose factorization) and reporting the Spearman correlation coefficient of the linearly weighted metric combination. The correlation was assessed on held-out models (80% used for training, 20% for testing), and the results were averaged over 100 randomly sampled train/test splits.
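The dual-metric evaluation could be sketched as follows. This is an illustrative implementation: the exact split handling in the original analysis may differ, and the names are ours.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LinearRegression

def dual_metric_score(metrics, neural_fit, n_splits=100, seed=0):
    """Predict per-model neural fit from a pair of model metrics (e.g.,
    classification accuracy + pose factorization) via linear regression,
    scoring held-out models (80/20 splits) by Spearman correlation.

    metrics: (n_models, 2) array; neural_fit: (n_models,) array.
    """
    rng = np.random.default_rng(seed)
    n = len(neural_fit)
    scores = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        tr, te = idx[: int(0.8 * n)], idx[int(0.8 * n):]
        pred = LinearRegression().fit(metrics[tr], neural_fit[tr]).predict(metrics[te])
        scores.append(spearmanr(pred, neural_fit[te])[0])
    return float(np.mean(scores))
```

Because the score is a rank correlation on held-out models, it reflects how well the weighted metric combination generalizes across the model library rather than fit quality on the training models.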
Data availability
The current manuscript is a computational study, so no data have been generated for this manuscript. Publicly available datasets and models were used. Analysis code is available at https://github.com/issalab/LindseyIssaFactorization (copy archived at Issa, 2024).

Collaborative Research in Computational Neuroscience: fMRI of human visual areas in response to natural images. https://doi.org/10.6080/K0QN64NG
References

Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLOS Computational Biology 10:e1003963. https://doi.org/10.1371/journal.pcbi.1003963

Classification and geometry of general perceptual manifolds. Physical Review X 8:031003. https://doi.org/10.1103/PhysRevX.8.031003

Untangling invariant object recognition. Trends in Cognitive Sciences 11:333–341. https://doi.org/10.1016/j.tics.2007.06.010

Unsupervised visual representation learning by context prediction. IEEE International Conference on Computer Vision, pp. 1422–1430. https://doi.org/10.1109/ICCV.2015.167

Large scale adversarial representation learning. Advances in Neural Information Processing Systems.

A framework for the quantitative evaluation of disentangled representations. International Conference on Learning Representations.

What is the goal of sensory coding? Neural Computation 6:559–601. https://doi.org/10.1162/neco.1994.6.4.559

A face feature space in the macaque temporal lobe. Nature Neuroscience 12:1187–1196. https://doi.org/10.1038/nn.2363

Deep residual learning for image recognition. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. https://doi.org/10.1109/CVPR.2016.90

Mask R-CNN. 2017 IEEE International Conference on Computer Vision, pp. 2980–2988. https://doi.org/10.1109/ICCV.2017.322

Momentum contrast for unsupervised visual representation learning. IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9729–9738. https://doi.org/10.1109/CVPR42600.2020.00975

Perceptual straightening of natural videos. Nature Neuroscience 22:984–991. https://doi.org/10.1038/s41593-019-0377-4

Primary visual cortex straightens natural video trajectories. Nature Communications 12:5982. https://doi.org/10.1038/s41467-021-25939-z

LindseyIssaFactorization, version swh:1:rev:0df67f8c65db6ab3c1fd2cafdf1505116a303c0d. Software Heritage.

Disentangling by factorising. Proceedings of the 35th International Conference on Machine Learning, pp. 2649–2658.

A new neural framework for visuospatial processing. Nature Reviews Neuroscience 12:217–230. https://doi.org/10.1038/nrn3008

The ventral visual pathway: an expanded neural framework for the processing of object quality. Trends in Cognitive Sciences 17:26–49. https://doi.org/10.1016/j.tics.2012.10.011

Representational geometry: integrating cognition, computation, and the brain. Trends in Cognitive Sciences 17:401–412. https://doi.org/10.1016/j.tics.2013.06.007

ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, pp. 1097–1105.

Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex. Nature Neuroscience 16:1870–1878. https://doi.org/10.1038/nn.3555

Discovering important people and objects for egocentric video summarization. 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1346–1353. https://doi.org/10.1109/CVPR.2012.6247820

Capturing the objects of vision with neural networks. Nature Human Behaviour 5:1127–1144. https://doi.org/10.1038/s41562-021-01194-6

Selectivity and tolerance (‘invariance’) both increase as visual information propagates from cortical area V4 to IT. The Journal of Neuroscience 30:12978–12995.

Balanced increases in selectivity and tolerance produce constant sparseness along the ventral visual stream. The Journal of Neuroscience 32:10170–10182. https://doi.org/10.1523/JNEUROSCI.6125-11.2012

Deep image reconstruction from human brain activity. PLOS Computational Biology 15:e1006633. https://doi.org/10.1371/journal.pcbi.1006633

Brain-like representational straightening of natural movies in robust feedforward neural networks. The Eleventh International Conference on Learning Representations.
Article and author information
Author details
Funding
DOE CSGF (DE-SC0020347)
 Jack W Lindsey
Klingenstein-Simons Foundation (Fellowship in Neuroscience)
 Elias B Issa
Sloan Foundation (Fellowship)
 Elias B Issa
Grossman-Kavli Center at Columbia (Scholar Award)
 Elias B Issa
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
This work was performed on the Columbia Zuckerman Institute Axon GPU cluster and via generous access to Cloud TPUs from Google’s TPU Research Cloud (TRC). JWL was supported by the DOE CSGF (DE-SC0020347). EBI was supported by a Klingenstein-Simons fellowship, a Sloan Foundation fellowship, and a Grossman-Kavli Scholar Award. We thank Erica Shook for comments on a previous version of the manuscript. The authors declare no competing interests.
Version history
 Preprint posted: August 4, 2023 (view preprint)
 Sent for peer review: August 22, 2023
 Preprint posted: February 2, 2024 (view preprint)
 Preprint posted: June 5, 2024 (view preprint)
 Version of Record published: July 5, 2024 (version 1)
Cite all versions
You can cite all versions using the DOI https://doi.org/10.7554/eLife.91685. This DOI represents all versions, and will always resolve to the latest one.
Copyright
© 2024, Lindsey and Issa
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.