Neuroscience

Unsupervised changes in core object recognition behavior are predicted by neural plasticity in inferior temporal cortex

  1. Xiaoxuan Jia (corresponding author)
  2. Ha Hong
  3. James J DiCarlo

Affiliations:
  1. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, United States
  2. McGovern Institute for Brain Research, United States
  3. Harvard-MIT Division of Health Sciences and Technology, United States
  4. Center for Brains, Minds and Machines, United States
Research Article
Cite this article as: eLife 2021;10:e60830 doi: 10.7554/eLife.60830

Abstract

Temporal continuity of object identity is a feature of natural visual input and is potentially exploited – in an unsupervised manner – by the ventral visual stream to build the neural representation in inferior temporal (IT) cortex. Here, we investigated whether plasticity of individual IT neurons underlies human core object recognition behavioral changes induced with unsupervised visual experience. We built a single-neuron plasticity model combined with a previously established IT population-to-recognition-behavior-linking model to predict human learning effects. We found that our model, once constrained by neurophysiological data, largely predicted the mean direction, magnitude, and time course of human performance changes. We also found a previously unreported dependency of the observed human performance change on the initial task difficulty. This result adds support to the hypothesis that tolerant core object recognition in human and non-human primates is instructed – at least in part – by naturally occurring unsupervised temporal contiguity experience.

eLife digest

A bear is a bear, regardless of how far away it is, or the angle at which we view it. And indeed, the ability to recognize objects in different contexts is an important part of our sense of vision. A brain region called the inferior temporal (IT for short) cortex plays a critical role in this feat. In primates, the activity of groups of IT cortical nerve cells correlates with recognition of different objects – and conversely, suppressing IT cortical activity impairs object recognition behavior. Because these cells remain selective to an item despite changes of size, position or orientation, the IT cortex is thought to underlie the ability to recognise an object regardless of variations in its visual properties. How does this tolerance arise?

A property called ‘temporal continuity’ is thought to be involved – in other words, the fact that objects do not blink in and out of existence. Studies in nonhuman primates have shown that temporal continuity can indeed reshape the activity of nerve cells in the IT cortex, while behavioural experiments with humans suggest that it affects the ability to recognize objects. However, these two sets of studies used different visual tasks, so it is still unknown if the cellular processes observed in monkey IT actually underpin the behavioural effects shown in humans. Jia et al. therefore set out to examine the link between the two.

In the initial experiments, human volunteers were given, in an unsupervised manner, a set of visual tasks designed similarly to the previous tests in nonhuman primates. The participants were presented with continuous views of the same or different objects at various sizes, and then given tests of object recognition. These manipulations resulted in volunteers showing altered size tolerance over time. Aiming to test which cellular mechanism underpinned this behavioural effect, Jia et al. built a model that simulated the plasticity of individual IT cells and of the IT network, to predict the changes of object recognition observed in the volunteers. The model's accurate predictions indicate that plasticity in the IT cortex can indeed account for the behavioural changes in the volunteers. These results shed new light on the role that temporal continuity plays in vision, refining our understanding of the way the IT cortex helps to assess the world around us.

Introduction

Among visual areas, the inferior temporal (IT) cortex is thought to most directly underlie core visual object recognition in human and non-human primates (Ito et al., 1995; Rajalingham and DiCarlo, 2019). For example, simple weighted sums of IT neuronal population activity can accurately explain and predict human and monkey core object recognition (COR) performance over dozens of such tasks (Majaj et al., 2015). Moreover, direct suppression of IT activity disrupts COR behavior (Afraz et al., 2015; Rajalingham and DiCarlo, 2019). These results were found in the face of significant variation in object latent variables including size, position, and pose, and the high performance of the simple IT readout (weighted sum) rests on the fact that many individual IT neurons show high tolerance to those variables (DiCarlo et al., 2012; Hung et al., 2005; Li et al., 2009), reviewed by DiCarlo et al., 2012.

But how does the ventral stream wire itself up to construct these highly tolerant IT neurons? Simulated IT ‘neurons’ in the deep layers of artificial neural networks (ANNs) have such tolerance and provide quite accurate approximations of the adult ventral visual stream processing (Khaligh-Razavi and Kriegeskorte, 2014; Rajalingham et al., 2018; Yamins et al., 2014). However, those ANNs are produced by training with millions of supervised (labeled) training images, an experience regime that is almost surely not biologically plausible over evolution or postnatal development. That simple fact rejects all such ANNs as models of the construction of IT tolerance, regardless of whether or not the brain is executing some form of backpropagation-instructed plasticity (Lillicrap et al., 2020; Rumelhart et al., 1986). So the question remains open: how does the ventral stream wire itself up to construct a tolerant IT with minimal supervision?

The temporal stability of object identity under natural viewing (i.e., objects do not rapidly jump in and out of existence) has been proposed as a key available source of unsupervised information that might be leveraged by the visual system to construct neural tolerance, even during adulthood (Földiák, 1991; Hénaff et al., 2019; Rolls and Stringer, 2006; Wallis et al., 2009; Wiskott and Sejnowski, 2002). Consistent with this view, psychophysical results from human subjects show that unsupervised exposure to unnatural temporal contiguity experience (i.e., laboratory situations in which objects do jump in and out of existence) reshapes position tolerance (Cox et al., 2005), pose tolerance (Wallis and Bülthoff, 2001), and depth illumination tolerance (Wallis et al., 2009) as measured at the behavioral level. Similarly, neurophysiological data from adult macaque IT show that unsupervised exposure to unnatural temporal contiguity experience reshapes IT neuronal position and size tolerance (Li et al., 2009; Li and DiCarlo, 2008; Li and DiCarlo, 2010), in a manner that is qualitatively consistent with the human behavioral data.

Taken together, our working hypothesis is that the ventral visual stream is under continual reshaping pressure via unsupervised visual experience, that such experience is an important part of the construction of the tolerant representation that is ultimately exhibited at the top level of the ventral stream (IT), that the IT population feeds downstream causal mechanistic chains to drive core object discrimination behavior, and that the performance on each such behavioral task is well approximated by linear readout of IT (Hung et al., 2005; Majaj et al., 2015).

However, there is a key untested prediction in this working hypothesis: is the single neuronal plasticity in adult monkey IT quantitatively consistent with the adult human behavioral changes resulting from unsupervised temporal contiguity experience? In this study, we chose to focus on testing that missing link as it was far from obvious that it would hold up. In particular, the prior IT neurophysiology work was with basic-level objects and produced seemingly large changes (~25% change in IT selectivity per hour of exposure in Li and DiCarlo, 2010), and the prior human behavioral work was with subordinate-level objects and produced significant, but subtle, changes in behavior (e.g., ~3% performance change in Cox et al., 2005). Moreover, if we found that the link did not hold, it would call into question all of the elements of the overall working hypothesis (especially IT’s relationship to COR behavior, and/or the importance of unsupervised plasticity to the IT representation). Thus, either result would be important.

To test whether our working hypothesis is quantitatively accurate over the domain of unsupervised temporal contiguity-induced plasticity, we sought to build a model to predict the changes in human object discrimination performance that should result from temporal contiguity experience-driven changes in IT neuronal responses. This model has three components: (1) a generative IT model (constrained by prior IT population response; Majaj et al., 2015) that approximates the IT population representation space and can thus simulate the IT population response to any image of the objects (within the space) with variation in size; (2) an unsupervised plasticity rule (constrained by prior IT neural plasticity data; Li and DiCarlo, 2010) to quantitatively describe and predict firing rate (FR) changes of single IT neurons resulting from temporally contiguous pairs of experienced images and can thus be used to update the simulated IT population representation; and (3) an IT-to-COR-behavior-linking model (learned weighted sums, previously established by Majaj et al., 2015) to predict behavioral discrimination performance from the state of the IT (simulated) population both before and after each epoch of unsupervised experience.

To overcome the limitation of non-overlapping tasks in previous psychophysics and neurophysiology studies and to extend prior psychophysical work, we carried out new human behavioral experiments. Specifically, we measured the progression of changes in size-specific human object discrimination performance that resulted from unsupervised temporal contiguity experience using the same exposure paradigm as the prior monkey neurophysiology work (Li and DiCarlo, 2010). We did not use the exact same images as prior work, but we expected the model to still make accurate predictions of all behavioral changes. We made these behavioral measurements for a wide range of object discrimination tasks, ranging from subordinate (specifically different face objects) to basic level.

Because humans make sensory-independent mistakes due to lapses in attention, these sensory-independent random choices (referred to as the lapse rate) set a ceiling on measurable human behavioral performance (Prins, 2012; Wichmann and Hill, 2001). When tasks are in the saturated regime, it is hard to detect any learning effect, as any changes in sensory representation would be hidden by the behavioral ceiling (see later). Therefore, we focused our psychophysical study on the mid-range of task difficulty, where learning effects can be measured. However, this meant that the task difficulty in human psychophysics could not be in the basic object regime where the neural data were collected. Thus, to make behavioral predictions from the neural data, we took advantage of the overall model to build this bridge: we first tuned the unsupervised plasticity rule by neural data with basic-level object images (Li and DiCarlo, 2010); we then used a generative IT model – capable of simulating the response of each artificial IT neuron for a wide range of image discriminability levels – to make quantitative predictions of behavioral change in the regime where the human behavioral learning effects can be readily measured.

Indeed, our behavioral tests revealed a strong dependency of learning effect on the initial task difficulty, with initially hard (d’ < 0.5) and initially easy (d’ > 2.5) COR tasks showing smaller measured learning effects than COR tasks of intermediate initial difficulty. We found that our overall model was quite accurate in its predictions of the direction, magnitude, and time course of the changes in measured human size tolerance in the regime where behavioral effects were readily measured for all of the tested unsupervised experience manipulations. The overall model also predicted how the behavioral effect size depended on the initial d’ once we assumed behavioral lapses (Prins, 2012) in the model at approximately the same level as those inferred in our subject pool. We note that, because of the (expected) inability to observe behavioral changes for tasks with initially high d’, this study cannot confirm or refute the hypothesized linkage between IT neural effects and behavioral effects in that particular regime.

Taken together, this result shows that at least three separate types of studies (human unsupervised learning, IT unsupervised plasticity, and IT-to-COR-behavior testing) are all quantitatively consistent with each other. As such, this result adds support to the overall working hypothesis: that tolerant COR is instructed – at least in part – by naturally occurring unsupervised temporal contiguity experience that gradually reshapes the non-linear image processing of the ventral visual stream without the need for millions of explicit supervisory labels (Krizhevsky et al., 2017; LeCun et al., 1989; Riesenhuber and Poggio, 1999) and reviewed by LeCun et al., 2015.

Results

Our working hypothesis (see Introduction) predicts that IT population plasticity resulting from unsupervised visual experience should accurately predict the direction, magnitude, and time course of all changes in human object discrimination performance resulting from the same visual exposure. To quantitatively test these predictions, we first carried out a set of human psychophysical experiments with unsupervised temporal continuity experience that closely approximates the exposure paradigm that has been shown to reliably produce IT plasticity (measured as changes in size tolerance at single IT recording sites) (Li and DiCarlo, 2010).

Measuring changes in human object discrimination performance induced by unsupervised visual experience

The basic experimental strategy is that, after testing initial object discrimination performance on a set of discrimination tasks (‘Test phase,’ Figure 1A), we provide an epoch of unsupervised visual experience (‘Exposure phase,’ Figure 1A) that is expected to result in IT plasticity (based on the results of Li and DiCarlo, 2010). At the end of the exposure epoch, we remeasure discrimination performance (Test phase), then provide the next epoch of unsupervised experience (Exposure phase), etc. (see Figure 1A). This strategy allowed us to evaluate the accumulation of positive or negative behavioral changes (a.k.a. ‘learning’) resulting from four unsupervised experience epochs (400 exposure ‘trials’ each) over approximately 1.5–2 hr. We included control discrimination tasks to subtract out any general learning effects.

Figure 1 with 1 supplement
Experimental design and conceptual hypothesis.

(A) Illustration of human behavioral experimental design and an example trial from the Test phase and from the Exposure phase. (B) Top: example confusion matrix for a two-way alternative forced choice (2AFC) size-specific sub-task run during each Test phase to monitor object-specific, size-specific changes in discrimination performance (see Materials and methods). Bottom: the two unsupervised exposure flavors deployed in this study (see Materials and methods). Only one of these was deployed during each Exposure phase (see Figure 2). Exposed images of example-exposed objects (here, faces) are labeled with asterisks, and the arrows indicate the exposure events (each is a sequential pair of images). Note that other objects and sizes are tested during the Test phases, but not exposed during the Exposure phase (see d’ brackets vs. asterisks). Each bracket with a d’ symbol indicates a preplanned discrimination sub-task that was embedded in the Test phase and contributed to the results (Figure 2). In particular, performance for target objects at non-exposed size (d’ labeled with dashed lines), target objects at exposed size (d’ labeled with bold solid lines), and control objects (d’ labeled with black line) was calculated based on test phase choices. (C) Expected qualitative changes in the inferior temporal (IT) neural population representations of the two objects that result from each flavor of exposure (based on Li and DiCarlo, 2010). In each panel, the six dots show three standard sizes of two objects along the size variation manifold of each object. Assuming simple readout of IT to support object discrimination (e.g., linear discriminant, see Majaj et al., 2015), non-swapped exposure tends to build size-tolerant behavior by straightening out the underlying IT object manifolds, while swapped exposure tends to disrupt (‘break’) size-tolerant behavior by bending the IT object manifolds toward each other at the swapped size.
This study asks if that idea is quantitatively consistent across neural and behavioral data with biological data-constrained models.

Specifically, we evaluated changes in discrimination performance (relative to initial performance) of each of a set of size-specific object discrimination tasks. A total of 174 human subjects on Amazon Mechanical Turk (see Materials and methods and Kar et al., 2019; Majaj et al., 2015; Rajalingham et al., 2018) participated in this experiment.

To measure object discrimination performance in each subject, we used a set of two-way alternative forced choice (2AFC) sub-tasks (size-specific object discrimination tasks; see Materials and methods). These sub-tasks were randomly interleaved (trial by trial) in each test phase, and the key test conditions used in the analyses (brackets indicated with d’s in Figure 1B) were embedded within a balanced set of six sub-tasks and cover trials (see Figure 1B and Materials and methods).

Our first experiments used pairs of faces as the objects to discriminate, and we targeted our exposure manipulations at the big size (2× the baseline size; see Materials and methods and Figure 1; later, we targeted other pairs of objects and other sizes). Specifically, we used eight face objects from a previous study (Majaj et al., 2015). We chose these face objects at this size because, prior to unsupervised exposure, they had intermediate discriminability (mean d’ = 2.0 ± 0.1 for big size, frontal view, n = 28 pairs of faces), allowing us to measure both positive and negative changes in discrimination performance. For each subject, two target faces (manipulated during exposure) and two control faces (not shown during exposure) were randomly chosen from these eight faces.

Subjects were instructed to identify the single foreground face in a briefly presented test image (100 ms) by choosing among two alternative choice faces immediately presented after the test image, one of which is always correct (i.e., 50% chance rate). The test image contained one foreground object with variation in view (position, size, pose), overlaid on a random background (see Materials and methods for test image generation). The choice images were always baseline views (i.e., size of ~2°, canonical pose) without background.

Similar to prior work testing the effects of unsupervised exposure on single-site IT recordings (Li and DiCarlo, 2010), each experiment consisted of two phases (Figure 1A): test phases to intermittently measure the size-specific object discrimination performance (d’) for the target face pair and control face pair (three d’ measured in each group of subjects, see Figure 1B bottom); and exposure phases to provide unsupervised visual experience (pairs of images with different sizes in close temporal proximity; Figure 1A) that – based on prior work – was expected to improve or degrade the discrimination performance on the exposed objects.

The purpose of the exposure phase was to deploy unsupervised visual experience manipulations to target a particular object pair (two ‘target’ objects) at particular views (e.g., sizes) of those target objects. For each exposure event, two images, each containing a different size object (frontal; no background), were presented consecutively (100 ms each) (see Materials and methods for details). In non-swapped exposure events, both images contained the same object (expected to ‘build’ size tolerance under the temporal contiguity hypothesis). In swapped exposure events, each image contained a different target object (expected to ‘break’ size tolerance under the temporal contiguity hypothesis). The conceptual predictions of the underlying IT neural population target object manifolds (DiCarlo and Cox, 2007) are that non-swapped exposure events will straighten the manifold of each target object by associating size exemplars of the same object (as in the natural world), and that swapped exposure events will bend and decrease the separation between the two manifolds by incorrectly associating size exemplars of different objects (Figure 1C). This logic and experimental setup are adopted entirely from prior work (Li and DiCarlo, 2008; Li and DiCarlo, 2010).

In our studies here, we specifically focused on manipulating the size tolerance in the medium size (×1 of baseline view; ~2°) to big size (×2 of baseline view; ~4°) regime. Thus, the images shown during the exposure phase (indicated by * in Figure 1B) were always medium- and big-size, frontal views of the target objects. We conducted three types of unsupervised exposure experiments (u): swapped (u1), non-swapped (u2), and non-swapped followed by swapped (u3).

In experiment u1 (swapped exposure events), we found that discrimination of the target face pair viewed at big size decreased with increasing numbers of exposure events (Figure 2A; top rows; red solid line; n = 102 subjects). We found little to no change in the performance for the non-exposed (small size) versions of those same faces (black dashed line; mean initial d’ is 1.2 ± 0.1) or for non-exposed control faces (also tested at big size, black solid line). Lower panels in Figure 2A show the learning effect defined by subtracting changes in control face discrimination performance (to remove general learning effects over the experience epochs, which turned out to be small; see Figure 2A, upper panel). In sum, we demonstrated an unsupervised, object-selective, size-selective temporal contiguity-induced learning effect that was qualitatively consistent with prior work in ‘breaking’ tolerance (Cox et al., 2005; Wallis and Bülthoff, 2001) and measured the accumulation of that learning over increasing amounts of unsupervised exposure.

Figure 2
Measured human unsupervised learning effects as a function of amount of unsupervised exposure.

Each ‘exposure event’ is the presentation of two, temporally adjacent images (see Figure 1A, right). We conducted three longitudinal unsupervised exposure experiments (referred to as u1, u2, and u3). (A) Swapped exposure experiment intended to ‘break’ size tolerance (n = 102 subjects; u1). Upper panels are the changes in d’ relative to initial d’ for targeted objects (faces) at exposed size (big) (red line), control objects (other faces) at the same size (big) (black line), and targeted faces at non-exposed size (small) (dashed black line) as a function of number of exposure events prior to testing. Lower panel is the to-be-predicted learning effect determined by subtracting change of d’ for control objects from the change of d’ for target objects (i.e., red line minus black line). (B) Same as (A), but for non-swapped exposure experiment (n = 36 subjects; u2). (C) Same as (A), except for non-swapped exposure followed by swapped exposure (n = 37 subjects; u3) to test the reversibility of the learning. In all panels, performance is based on population pooled d’ (see Materials and methods). Error bars indicate bootstrapped standard error of the mean population pooled d’ (bootstrapping is performed by sampling with replacement across all trials). p-value is directly estimated from the bootstrapped distributions of performance change by comparing to no change condition. * indicates p-value<0.05; ** indicates p-value<0.01; *** indicates p-value<0.001.
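The population pooled d’ and its bootstrapped standard error described in the caption can be sketched as follows. The synthetic trial format and the clipping convention are illustrative assumptions, not the study’s exact analysis code:

```python
import random
from statistics import NormalDist, stdev

_N = NormalDist()

def pooled_dprime(trials):
    """Pooled d' = z(hit rate) - z(false-alarm rate) over all trials.
    Each trial is (target_present, chose_target); clipping rates to
    [0.01, 0.99] is an illustrative convention to avoid infinite z-scores."""
    hits = [chose for present, chose in trials if present]
    fas = [chose for present, chose in trials if not present]
    clip = lambda p: min(max(p, 0.01), 0.99)
    return (_N.inv_cdf(clip(sum(hits) / len(hits)))
            - _N.inv_cdf(clip(sum(fas) / len(fas))))

def bootstrap_sem(trials, n_boot=1000, seed=0):
    """SEM of pooled d': resample trials with replacement, recompute d',
    and take the standard deviation of the bootstrap distribution."""
    rng = random.Random(seed)
    boots = [pooled_dprime([rng.choice(trials) for _ in range(len(trials))])
             for _ in range(n_boot)]
    return stdev(boots)
```

Comparing the bootstrap distribution of performance change against the no-change condition then gives the p-values reported in the figure.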

In experiment u2 (non-swapped exposure events), we found that discrimination of the target face pair viewed at big size increased with increasing numbers of exposure events (Figure 2B; top rows; blue solid line; n = 36 subjects). As in experiment u1, we found little to no change in performance for the non-exposed (small size) versions of those same faces or for non-exposed control faces (also tested at big size, black solid line). This shows that, as predicted by the temporal contiguity hypothesis, unsupervised experience can build size tolerance at the behavioral level.

Interestingly, after ~800 exposure events, the exposure-induced learning effects appeared to plateau in both ‘breaking’ tolerance conditions (experiment u1, Figure 2A) and ‘building’ tolerance conditions (experiment u2, Figure 2B), suggesting a limit in the measurable behavioral effects (see Discussion).

To test whether this unsupervised learning effect is reversible, we measured human performance in a combined design (experiment u3) by first providing exposure epochs that should ‘build’ tolerance, followed by exposure epochs that should ‘break’ tolerance (n = 37 subjects). Consistent with the results of experiments u1 and u2, we found that size tolerance first increased with non-swapped (‘build’) exposures and then decreased with swapped (‘break’) exposures (Figure 2C), and that the effect did not spill over to the control objects.

In sum, these results confirmed that the effect of unsupervised visual experience was specific (to the manipulated objects and sizes) and strong even in adults. Furthermore, the measured human learning effect trajectories under the different unsupervised visual exposure conditions (u1, u2, u3) were taken as behavioral effects that must – without any parameter tuning – be quantitatively predicted by our working hypothesis (which links IT neural responses to COR behavior; see Introduction). We next describe how we built an overall computational model to formally instantiate that working hypothesis to make those predictions.

A generative model to simulate the population distribution of IT responses

To generate predictions of human behavioral performance, we need to measure or otherwise estimate individual IT neural responses to the same images used in the human psychophysical testing (above) for a sufficiently large set of IT neurons (a.k.a. IT population responses). Because each of the objects we used in human psychophysics had been previously tested in neural recording experiments from monkey IT, we did not collect new IT population responses (very time-consuming), but decided instead to make suitably accurate predictions of the initial population pattern of IT response for test images of those objects (i.e., the IT response patterns prior to any simulated unsupervised plasticity effects). To do this, we built a generative model of the IT population based on the previously recorded IT population response to those objects. The output of this model is the FR of a simulated IT population to one presentation of a newly rendered test image (generated from the 64 base objects used in the previous study). With this model, we could simulate the initial IT population responses to any image rendered from the psychophysically tested objects (approximately) without recording more neurons in behaving animals.

This generative IT model captures the IT neuronal representation space with a multi-dimensional Gaussian (MDG) model, assuming the distribution of IT population responses is Gaussian-like for each object (see Materials and methods for Gaussian validation) (Figure 3A). Because the MDG preserves the covariance matrix of IT responses to 64 objects, any random draw from this MDG gives rise to an object response preference profile (one response level for each of 64 objects) of a simulated IT neural site. To simulate the variance in object size, for each simulated site, we randomly chose one size-tuning kernel from a pool of size-tuning curves that we had obtained by fitting curves to real IT responses across changes in presented object size (n = 168 recording sites; data from Majaj et al., 2015). This process is repeated independently for each simulated site. Motivated by prior work (Li et al., 2009), we assumed separability of object representation and size tuning, and simulated the response to any of the 64 objects.

Figure 3 with 1 supplement see all
Generative ITmodel and validation of the IT-to-core object recognition (COR)-behavior-linking model.

(A) Generative IT model based on real IT population responses. Top left box: schematic illustration of the neuronal representation space of IT population with a multi-dimensional Gaussian (MDG) model. Each point of the Gaussian cloud is one IT neural site. Middle box: an example of simulated IT neural site. The distribution of object preference for all 64 objects is created by a sample randomly drawn from the MDG (highlighted as a red dot; each color indicates a different object). Then, a size-tuning kernel is randomly drawn from a pool of size-tuning curves (upper right box; kernels fit to real IT data) and multiplied by the object response distribution (outer product), resulting in a fully size-tolerant (i.e., separable) neural response matrix (64 objects × 3 sizes). To simulate the final mean response to individual images with different backgrounds, we added a ‘clutter’ term to each element of the response matrix (σ2clutter; see Materials and methods). To simulate the trial-by-trial ‘noise’ in the response trials, we added a repetition variance (σ2repeats; see Materials and methods). Bottom box: another example of simulated IT site. (B) Response distance matrices for neuronal responses from real IT neuronal activity (n = 168 sites) and one simulated IT population (n = 168 model sites) generated from the model. Each matrix element is the distance of the population response between pairs of objects as measured by Pearson correlation (64 objects, 2016 pairs). (C) Similarity of the model IT response distance matrix to the actual IT response distance matrix. Each dot represents the unique values of the two matrices (n = 2016 object pairs), calculated for the real IT population sample and the model IT population sample (r = 0.93 ± 0.01). (D) Determination of the two hyperparameters of the IT-to-behavior-linking model. 
Each panel shows performance (d’) as a function of number of recording sites (training images fixed at m = 20) for model (red) and real IT responses (black) for two object discrimination tasks (task 1 is easy, human pre-exposure d’ is ~3.5; task 2 is hard, human pre-exposure d’ is ~0.8; indicated by dashed lines). In both tasks, the number of IT neural sites for the IT-to-behavior decoder to match human performance is very similar (n ~ 260 sites), and this was also true for all 24 tasks (see E), demonstrating that a single set of hyperparameters (m = 20, n = 260) could explain human pre-exposed performance over all 24 tasks (as previously reported by Majaj et al., 2015). (E) Consistency between human performance and model IT-based performance of 24 different tasks for a given pair of parameters (number of training samples m = 20 and number of recording sites n = 260). The consistency between model prediction and human performance is 0.83 ± 0.05 (Pearson correlation ± SEM). (F) Manifold of the two hyperparameters (number of recording sites and number of training images) where each such pairs (each dot on the plot) yields IT-based performance that matches initial (i.e., pre-exposure) human performance (i.e., each pair yields a high consistency match between IT model readout and human behavior, as in E). The dashed line is an exponential fit to those dots at any of the three sizes as the outer product of the object and size-tuning curves (A, bottom). However, since most measured size-tuning curves are not perfectly separable across objects (DiCarlo et al., 2012; Rust and Dicarlo, 2010) and because the tested conditions included arbitrary background for each condition, we introduced independent clutter variance caused by backgrounds on top of this for each size of an object (A) by randomly drawing from the distribution of variance across different image exemplars for each object. 
We then introduced trial-wise variance for each image based on the distribution of trial-wise variance of the recorded IT neural population (Figure 3—figure supplement 1E). In sum, this model can generate a new, statistically typical pattern of IT response over a population of any desired number of simulated IT neural sites to different image exemplars within the representation space of 64 base objects at a range of sizes (here targeting ‘small,’ ‘medium,’ and ‘big’ sizes to be consistent with human behavioral tasks; see Materials and methods for details). The simulated IT population responses were all constrained by recorded IT population statistics (Figure 3—figure supplement 1). These statistics define the initial simulated IT population response patterns, and thus they ultimately influence the predicted unsupervised neural plasticity effects and the predicted behavioral consequences of those neural effects.
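The generative procedure just described can be sketched compactly. The statistics below (the object-preference Gaussian, the size-tuning kernel pool, and the clutter and repetition variances) are illustrative placeholders, not the values fit to the recorded IT population:

```python
import numpy as np

rng = np.random.default_rng(0)
N_OBJECTS, N_SIZES, N_SITES = 64, 3, 5

# Placeholder statistics standing in for the recorded-IT constraints:
object_prefs = rng.normal(0.0, 1.0, size=(N_SITES, N_OBJECTS))  # MDG draws
size_kernel_pool = np.array([[1.0, 0.8, 0.6],   # hypothetical size-tuning
                             [0.5, 1.0, 0.7],   # kernels (the real ones are
                             [0.9, 0.9, 1.0]])  # fit to recorded IT data)
sigma2_clutter, sigma2_repeats = 0.05, 0.02

def simulate_site_mean(pref):
    """Separable (object x size) response matrix via outer product, plus
    per-condition clutter variance for image backgrounds."""
    kernel = size_kernel_pool[rng.integers(len(size_kernel_pool))]
    mean_resp = np.outer(pref, kernel)            # (64 objects, 3 sizes)
    return mean_resp + rng.normal(0.0, np.sqrt(sigma2_clutter), mean_resp.shape)

def simulate_trial(mean_resp):
    """One stochastic repetition (trial-by-trial 'noise')."""
    return mean_resp + rng.normal(0.0, np.sqrt(sigma2_repeats), mean_resp.shape)

population = np.stack([simulate_site_mean(p) for p in object_prefs])
```

Any desired number of simulated sites can be drawn this way; only the constraining statistics (here invented) tie the simulation to the recorded population.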

To check whether the simulation is statistically accurate in terms of the layout of images in IT population representation space, we compared the representation similarity matrix (RSM; correlation between neuronal population responses to different images) of different draws of a simulated IT population with the RSM measured from the actual IT neural data (Figure 3B). A typical example is shown in Figure 3C, revealing a high correlation between the two RSMs (r = 0.93 ± 0.01). While this does not guarantee that any such simulated IT population is fully identical to an IT population that might exist in an actual monkey or human IT, our goal was simply to get the simulated IT population response distribution into the proper range (up to second-order statistics).
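This comparison amounts to correlating the unique off-diagonal entries of two 64 × 64 correlation-distance matrices (2016 object pairs). A minimal sketch with synthetic stand-in responses (not the recorded data):

```python
import numpy as np

def response_distance_matrix(pop):
    """pop: (n_sites, n_objects) population responses.
    Pearson-correlation distance between the population response
    vectors of every pair of objects."""
    return 1.0 - np.corrcoef(pop.T)       # (n_objects, n_objects)

rng = np.random.default_rng(1)
real_like = rng.normal(size=(168, 64))                      # stand-in for recorded IT
model_like = real_like + 0.3 * rng.normal(size=(168, 64))   # one simulated draw

d_real = response_distance_matrix(real_like)
d_model = response_distance_matrix(model_like)

# Unique object pairs: 64 * 63 / 2 = 2016, as in Figure 3C
iu = np.triu_indices(64, k=1)
similarity = np.corrcoef(d_real[iu], d_model[iu])[0, 1]
```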

A standard IT-to-COR-behavior-linking model for core object discrimination behavior

To make predictions about how IT neural changes will result in behavioral changes, we first needed a model to establish the linkage between IT population response and core object discrimination behavior prior to any experience-induced effects. We have previously found that simple weighted linear sums of IT neural responses accurately predict the performance (d’) of human object discrimination for new images of those same objects (here termed the IT-to-COR-behavior-linking model) (Majaj et al., 2015). That model has only two free hyperparameters: the number of neural sites and the number of labeled (a.k.a. ‘training’) images used to set the weights of the decoder for each object discrimination. Once those two hyperparameters are locked, it has been empirically demonstrated that the performance for any object discrimination task on new images is accurately predicted by its trained decoder (Majaj et al., 2015). To test whether the simulated IT population activity from the generative IT model (above) could quantitatively reproduce those prior results and to lock these two hyperparameters, we compared the predicted performance (for any given object recognition task) based on the simulated IT population (Figure 3D; red solid line) with the predicted performance based on the previously recorded IT neural population (black solid line). We did this as a function of number of recording sites for a set of object recognition tasks. Figure 3D illustrates two example tasks (error bar is standard error across 40 random subsamples of recording sites). As expected, we found that the model predictions overlapped with decoded performance of real IT neural sites, indicating that our generative IT model has captured the relevant components of the IT population response.

We next set out to choose the two free hyperparameters (number of sites and number of training examples). The crossing point with human performance in Figure 3D reflects how many neural sites are necessary to reach human performance level for a given number of training samples. Unlike the real IT neural data (n = 168 recording sites) that required extrapolation to estimate the number of sites matching human absolute performance (Majaj et al., 2015), we simulated up to 1000 IT sites with the generative model to cover the range of neural sites necessary to reach human performance.

Consistent with Majaj et al., 2015, we found that the number of simulated IT sites required to match human performance was similar across different tasks (260 ± 23 IT sites, given 20 training images; tested over 24 object discrimination tasks: low-variation eight-way tests comprising eight basic-level, eight car identification, and eight face identification tasks, as previously used in Majaj et al., 2015). Specifically, we here used 260 sites with 20 training samples for all tasks, and the match between the decoded simulated IT performance and human performance over all discrimination tasks was r = 0.83 ± 0.05 (n = 24 tasks), similar to the previously reported match between decoded neural IT performance and human performance for the same tasks (r = 0.868 in Majaj et al., 2015). Note that other specific combinations of the number of IT sites and the number of training examples are also suitable (Figure 3F), and we explore this later.

In sum, by setting the two decoder hyperparameters to match initial human performance, we established a fixed linear decoder rule that could be applied to our simulated IT population to quantitatively predict the expected performance of the subject (i.e., the owner of that IT population) for any object discrimination task.
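A minimal sketch of such a fixed readout: fit a weighted linear sum on a small set of labeled trials, freeze the weights, and report d’ on held-out trials. The difference-of-means weighting and the response statistics here are illustrative simplifications of the decoder family established in Majaj et al., 2015:

```python
import numpy as np

def dprime(scores_a, scores_b):
    """Sensitivity index from two sets of decision-variable scores."""
    return (scores_a.mean() - scores_b.mean()) / np.sqrt(
        (scores_a.var() + scores_b.var()) / 2.0)

rng = np.random.default_rng(0)
n_sites, n_train = 260, 20                           # the two hyperparameters
resp_a = rng.normal(0.3, 1.0, size=(120, n_sites))   # trials x sites, object A
resp_b = rng.normal(0.0, 1.0, size=(120, n_sites))   # trials x sites, object B

# Fit the weights once on the labeled examples, then freeze them.
w = resp_a[:n_train].mean(0) - resp_b[:n_train].mean(0)

d = dprime(resp_a[n_train:] @ w, resp_b[n_train:] @ w)
```

Once `w` is frozen, any change in the simulated population responses maps directly to a change in the decoded d’.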

The consequence is that, because the linkage model between the IT population and behavior is now fixed in the model, any changes in the model IT population are automatically mapped to predicted changes (if any) in behavioral performance. From here on, we locked down the generative IT model and the decoders that matched human initial performance (before learning), and later combined both of these models to make predictions of the direction and magnitude of the behavioral performance change (if any) that should result from any given change in the IT population driven by unsupervised plasticity (Figure 2).

Unsupervised IT plasticity rule

To model the IT neural population response changes that result from the unsupervised visual experience provided to the human subjects, we developed an unsupervised IT plasticity rule guided by previous studies of IT plasticity effects in the rhesus monkey that used the same paradigm of unsupervised visual experience that we provided here to our human subjects (Li et al., 2009; Li and DiCarlo, 2008; Li and DiCarlo, 2010). In particular, we set out to build an unsupervised IT plasticity rule that could predict the (mean) response change that occurs in each and every IT neuron as a result of each presented pair of temporally contiguous visual images. We assumed that the same model would also apply to human ‘IT’ without any parameter modifications (see Discussion).

Those prior monkey studies revealed that exposure to altered (‘swapped’) visual statistics typically disrupts the size tolerance of single IT neurons, while exposure to normal statistics in visual experience (non-swapped condition) typically builds size tolerance (Li and DiCarlo, 2010). To develop our unsupervised IT plasticity rule, we replicated the exact same experiment used in the monkeys on simulated IT neural sites. Figure 4A illustrates the exposure design for single IT sites, where the preferred object (P) and non-preferred object (N) of each neural site are defined by the magnitude of neuronal activity (z-scored across all objects for each IT site). Selectivity of a neural site is measured by the difference of neuronal responses to its preferred and non-preferred objects, (P − N)/(P + N), as in Li and DiCarlo, 2010.

Temporal continuity-based inferior temporal (IT) neuronal plasticity rule.

(A) Illustration of the exposure design for IT neuronal plasticity (adapted directly from Li and DiCarlo, 2010) measured with single electrodes. P refers to the preferred object of the IT unit, and N refers to the non-preferred object of that unit. (B) We developed an IT plasticity rule that modifies the model neuronal response to each image in individual neural sites according to the difference in neuronal response between lagging and leading images for each exposure event (see Materials and methods). The figure shows the model-predicted plasticity effects for a standard, size-tolerant IT neural site and 1600 exposure events (using the same exposure design as Li and DiCarlo, 2010; i.e., 400 exposure events delivered, interleaved, for each of the four black arrows in panel A) for three different plasticity rates. Dashed lines indicate the model selectivity pattern before learning for comparison. (C) Normalized change over time for modeled IT selectivity for three different plasticity rates. Dashed lines are the mean neuronal plasticity results from the same neural sites in Li and DiCarlo, 2010 (mean change in P vs. N responses, where the mean is taken over all P > N selective units that were sampled and then tested; see Li and DiCarlo, 2010). Solid lines are the mean predicted neuronal plasticity for the model IT ‘neural sites’ (where these sites were sampled and tested in a manner analogous to Li and DiCarlo, 2010; see Materials and methods). The blue line indicates the change in P vs. N selectivity at the non-swapped size, green indicates the change in selectivity at the medium size, and red indicates the change in selectivity at the swapped size. Error bars indicate standard error of the mean. (D) Mean swapped object (red) and non-swapped object (blue) plasticity that results for different model IT neuronal sub-groups – each selected according to their initial pattern of P vs. N selectivity (analogous to the same neural sub-group selection done by Li and DiCarlo, 2010; cf. their Figure 6).

We used a Hebbian-like (associative) plasticity rule (Caporale and Dan, 2008; Hebb, 1949; Oja, 1982; Paulsen and Sejnowski, 2000), which updates the firing rate (FR) of each unit for each pair of images based on the difference in neuronal FR between the lagging and leading images (see Materials and methods). Our plasticity rule states that the modification of the FR of each IT unit to the leading image at time t is equal to the difference in FR between the lagging and leading images multiplied by a plasticity rate α. This plasticity rule tends to reduce the difference in neuronal responses to consecutive images and implies a temporal association between images presented close in time. The plasticity rule is conceptually similar to previously proposed temporal continuity plasticity (a.k.a. slow feature analysis) (Berkes and Wiskott, 2005; Földiák, 1990; Földiák, 1991; Mitchison, 1991; Sprekeler et al., 2007). It is physiologically attractive because findings on short-term synaptic plasticity have revealed that synaptic efficacy changes over time in a way that reflects the history of presynaptic activity (Markram et al., 2012; Markram et al., 1997). Even though conceptually similar, our plasticity rule is a ‘descriptive’ rather than a ‘mechanistic’ rule of plasticity at all stages of the ventral stream. That is, the rule does not imply that all the underlying plasticity is in IT cortex itself – it only aims to quantitatively capture and predict the changes in IT responses resulting from unsupervised visual experience. It is general in the sense that it can make predictions for different objects or dimensions of variation, but it is (currently) limited in that it only applies to temporally paired image associations, ignores any correlation in the neural response patterns, and assumes that updates occur only in the responses to the exposed images (i.e., non-exposed object/size combinations are not affected).
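In code, the rule is a single update per exposure event. The image keys and response values below are illustrative; α is the plasticity rate (the rule's only free parameter, which the text fits to neurophysiology data):

```python
ALPHA = 0.0016  # plasticity rate in normalized response units per exposure

def unsupervised_update(fr, leading, lagging, alpha=ALPHA):
    """Hebbian-like temporal-association update: the response to the
    leading image moves toward the response to the lagging image.
    fr: dict mapping image -> normalized firing rate."""
    fr[leading] += alpha * (fr[lagging] - fr[leading])

# One exposure event: P at the medium size followed by P at the big size
fr = {('P', 'med'): 1.0, ('P', 'big'): 0.4}
unsupervised_update(fr, leading=('P', 'med'), lagging=('P', 'big'))
```

Because the update is proportional to the response difference, repeated exposure shrinks that difference and the effect saturates, as described next.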

To show the effects of this unsupervised IT plasticity rule, we illustrate with an example simulated IT neural site. The simulated neural site in Figure 4B was initialized to be – like many adult monkey IT neurons – highly size tolerant: its response to a preferred object (P) is always greater than its response to a non-preferred object (N) at each size. After applying the unsupervised exposure design in Figure 4A (400 exposure events for each arrow, 1600 exposure events in total), the responses to each of the six conditions (2 objects × 3 sizes) evolved as shown in Figure 4B. We note two important consequences of this plasticity rule. First, because the rule was designed to decrease the difference in response across time, responses to images presented consecutively tend to become more similar to each other, which results in a reduction in the response difference between P and N at both the swapped and the non-swapped sizes. Second, once the neural site reached a state in which its response is no different over consecutively exposed images, the learning effect saturates. Notably, unlike the adaptive changes in plasticity rate typical of supervised optimization of deep neural networks (Kingma and Ba, 2014), our plasticity rate is kept constant over the ‘lifetime’ of the model. The gradual shrinkage of the learning effect (Δ(P – N)/(P + N)) as more and more exposure events are provided was a consequence of the gradual reduction in the absolute difference between neuronal responses to the two consecutive images that make up each exposure event.

There is only one free parameter in our plasticity rule equation – the plasticity rate α. We determined this parameter using the single-electrode physiology data collected previously in the lab (Li and DiCarlo, 2010). Figure 4C shows the average IT plasticity effect that results from different settings of α (here the plasticity effect is defined by the normalized change in selectivity: Δ(P – N)/(P – N), exactly as was done in Li and DiCarlo, 2010). As expected, a higher plasticity rate (α) results in greater model IT plasticity effects (Figure 4C). We chose the plasticity rate (α) that best matched the prior monkey IT neurophysiology results (i.e., the α that resulted in the minimal difference between the model IT plasticity effect [solid lines] and the experimentally reported IT plasticity effect [dashed lines] for swapped, non-swapped, and medium object sizes; see Figure 4C, middle). The best α is 0.0016 nru per exposure event (nru = normalized response units; see Materials and methods for intuition about approximate spike rate changes). Once we set the plasticity rate, we locked it down for the rest of this study (except where noted later, where we test the impact of this rate).

We next asked if our IT plasticity rule naturally captured the other IT plasticity effects reported in the monkey studies (Li and DiCarlo, 2010). Specifically, it was reported that, for each neural site, the selectivity that results from a fixed amount of unsupervised exposure depends on the initial selectivity of that site. Thus, the unsupervised ‘swapped’ experience manipulation causes a reduction of selectivity for neural sites that show a moderate level of initial P (preferred object) vs. N (non-preferred object) selectivity at the swapped size, and the same amount of unsupervised experience reverses the selectivity of neuronal sites that show a low level of initial selectivity at the swapped size (i.e., causes the site to, oxymoronically, prefer object N over object P). Li and DiCarlo, 2010 also reported that the more natural, ‘non-swapped’ experience manipulation caused a building of new selectivity (for neuronal units that initially showed a strong preference for P at some sizes, but happened to have low P vs. N selectivity at the non-swapped size).

We tested for both of these effects in our model by selecting subsets of neural sites in the simulated IT population in exactly the same way as Li and DiCarlo, 2010 (sampled from n = 1000 simulated IT units) and then applying the plasticity rule to those units. We found that the model IT plasticity showed a very similar dependency on initial selectivity to the previously reported IT plasticity effects (Figure 4D; cf. Figures 6 and 7 of Li and DiCarlo, 2010).

Given that our IT plasticity rule tends to pull the responses to temporally contiguous images toward each other (Berkes and Wiskott, 2005; Földiák, 1990; Földiák, 1991; Mitchison, 1991; Sprekeler et al., 2007), it is not entirely obvious how this can build selectivity (i.e., pull the responses to P and N apart). The reason this occurs is that some IT neural sites have (by chance draw from the generative model of IT, above) initially high selectivity for P vs. N at the medium size and no selectivity at (e.g.) the big size. (Indeed, such variation in the IT population exists, as reported in Li and DiCarlo, 2010.) By design, the non-swapped (‘natural’) unsupervised exposure temporally links Pmed (high response) with Pbig, which – given the plasticity rule – tends to pull the Pbig response upward (pull it up higher than Nbig). In addition, the non-swapped exposure links Nmed (low response) with Nbig, which can pull the Nbig response downward (provided that the Nmed response is initially lower than the Nbig response). Both effects thus tend to increase the Pbig vs. Nbig response difference (i.e., both effects tend to ‘build’ selectivity for P vs. N at the big presentation size, which results in the neural site preferring object P over object N at both the medium and the big sizes – a property referred to as size ‘tolerance’). This effect can be observed in the size-tuning curves of single IT neural sites for P and N before and after learning (see Figure 3 in Li and DiCarlo, 2010). Indeed, it is this effect that conceptually motivated temporal contiguity plasticity in the first place – naturally occurring statistics can be used to equalize the responses to the same object over nuisance variables (such as size).
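The selectivity-building arithmetic described above can be checked with a toy site. The initial response values are hypothetical but match the described pattern (strong P vs. N selectivity at the medium size, none at the big size), and the update is the same temporal-association rule as in the text:

```python
ALPHA = 0.0016

def expose(fr, leading, lagging, alpha=ALPHA):
    """Leading-image response moves toward the lagging-image response."""
    fr[leading] += alpha * (fr[lagging] - fr[leading])

# Hypothetical site: selective at the medium size, unselective at the big size
fr = {'P_med': 1.0, 'N_med': 0.1, 'P_big': 0.4, 'N_big': 0.4}
init_selectivity = fr['P_big'] - fr['N_big']        # 0.0 initially

for _ in range(800):
    # Non-swapped exposure pairs medium and big views of the SAME object,
    # in both temporal orders.
    expose(fr, 'P_big', 'P_med'); expose(fr, 'P_med', 'P_big')
    expose(fr, 'N_big', 'N_med'); expose(fr, 'N_med', 'N_big')

final_selectivity = fr['P_big'] - fr['N_big']
```

P_big is pulled up toward P_med while N_big is pulled down toward N_med, so big-size selectivity is built from none.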

In sum, our very simple IT plasticity rule quantitatively captures the average IT plasticity effects for which its only free parameter was tuned, and it also naturally captures the more subtle IT neural changes that have been previously described.

Putting together the overall model to predict human unsupervised learning effects

To summarize, we have (1) built and tested a generative IT model that captured the object representation space and variability in the actual primate IT population; (2) locked down a set of parameters of a linear decoder rule that quantitatively links the current state of the simulated IT population to initial human performance on any discrimination task (including the ones we plan to test); and (3) defined an IT plasticity rule that describes how each individual IT neural site changes as a result of each unsupervised exposure event, and we locked down the only free parameter (plasticity rate) in that rule to match existing monkey IT plasticity data (see Figure 1—figure supplement 1A). At this point, we could – without any further parameter tuning – combine each of these three model components into a single overall model that predicts the direction, magnitude, and time course of human unsupervised learning effects that should result from any unsupervised learning experiment using this exposure paradigm (pairwise temporal image statistics).

Specifically, to generate the predictions for each of the unsupervised learning experiments (u: u1, u2, u3; see Figure 2), we (1) initialized a potential adult human IT (from the generative IT model) with a total of 260 simulated IT recording sites; (2) built linear decoders for the planned object discrimination tasks that read from all 260 sites, using 20 training examples for each and every task; (3) froze the parameters of all such decoders (i.e., froze the resulting weighting of each simulated IT neural site on the ‘subject’s’ object choice decision); (4) ‘exposed’ the IT model population to the same unsupervised temporal exposure history as the human subjects, using the IT plasticity rule to update the responses of each simulated IT neural site after each temporally adjacent image exposure pair (note that the plasticity trajectory of each neural site is dependent on both its initial object/size response matrix [1] and the sequence of images applied during unsupervised experience [u]); (5) measured the changes in ‘behavioral’ performance of the overall model (changes in object discrimination performance of the [frozen] decoders [2]); and (6) took those changes as the model predictions of the changes in human performance that should result from that unsupervised experience (u). Again, we emphasize that, while the overall model relies heavily on data and parameters derived explicitly or implicitly from these prior studies (Li and DiCarlo, 2010; Majaj et al., 2015), no components or parameters of this model nor its predictions depended on the behavioral data collected in this study.
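Steps (1)–(6) can be sketched end to end in miniature. The response statistics and the single swapped-like pairing below are simplified placeholders for the full generative model and exposure sequences, but the structure (draw a population, freeze a decoder, apply plasticity, re-measure decoded d’) is the same:

```python
import numpy as np

def dprime(a, b):
    return (a.mean() - b.mean()) / np.sqrt((a.var() + b.var()) / 2.0)

def run_experiment(n_sites=260, n_train=20, n_exposures=400,
                   alpha=0.0016, noise=0.5, seed=0):
    rng = np.random.default_rng(seed)
    mu_p = rng.normal(0.1, 0.05, n_sites)  # (1) mean responses to P (invented)
    mu_n = rng.normal(0.0, 0.05, n_sites)  #     and to N at the tested size

    def trials(mu, n):                     # trial-by-trial variability
        return mu + rng.normal(0.0, noise, (n, n_sites))

    # (2)-(3) fit a linear decoder on labeled trials, then freeze it
    w = trials(mu_p, n_train).mean(0) - trials(mu_n, n_train).mean(0)
    behavior = lambda mp, mn: dprime(trials(mp, 200) @ w, trials(mn, 200) @ w)

    d_pre = behavior(mu_p, mu_n)
    for _ in range(n_exposures):           # (4) plasticity pulls P and N together
        mu_p, mu_n = (mu_p + alpha * (mu_n - mu_p),
                      mu_n + alpha * (mu_p - mu_n))
    d_post = behavior(mu_p, mu_n)          # (5)-(6) change in decoded behavior
    return d_pre, d_post

d_pre, d_post = run_experiment()
```

With the decoder frozen, the swapped-like exposure degrades the decoded d’, mirroring the predicted ‘breaking’ effect.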

To give robust model estimates of the average predicted behavioral effects, we repeated this process (1–6) 100 times for each experiment (u) and averaged the results, which is analogous to running multiple subjects and averaging their results (as we did with the human data; see Figure 2). For clarity, we note that the prediction stochasticity is due to random sampling of the IT generative population, the clutter variability introduced in the generative IT model when generating the initial population response for each test image, the trial-by-trial variation in the simulated IT responses, the random unsupervised exposure event sequence (see Materials and methods), and randomly drawn test images, all of which we expect to average out.

Note that, in expecting that these overall model predictions might be accurate, we are implicitly making the following assumptions: (1) monkey IT and human IT are approximately identical (Kriegeskorte et al., 2008; Rajalingham et al., 2015), (2) the linkage of IT to behavioral performance is approximately identical (as suggested by Majaj et al., 2015), (3) human IT unsupervised plasticity is the same as monkey IT unsupervised plasticity, and (4) humans do not re-learn or otherwise alter the assumed mechanistic linkage between IT and behavior during or after unsupervised visual experience (at least not at the time scales of these experiments: 1.5–2 hr).

Results: predicted learning vs. observed learning

Figure 5A, D, E show the model-predicted learning effects (black solid line) for each of the three unsupervised experiments (u1, u2, u3) plotted on top of the observed measured human learning effects (red line, reproduced from the learning effects shown in Figure 2 bottom). For each experiment, we found that the overall model did a very good job of predicting the direction, magnitude, and time course of the changes in human behavior. The directional predictions are not surprising given prior qualitative results, but the accurate predictions of the magnitude and time course are highly non-trivial (see below). Despite these prediction successes, we also noticed that the predictions were not perfect, most notably after large numbers of unsupervised exposures (e.g. Figure 5E, rightmost points), suggesting that one or more of our assumptions and corresponding model components are not entirely accurate (see Discussion).

Overall model-predicted learning effects vs. actual learning effects.

(A) Overall model-predicted learning effect (solid black line) for experiment u1 (swapped exposure) with the IT-to-behavior-linking model matched to initial human performance (hyperparameters: number of training images m = 20, number of model neural sites n = 260; see Figure 3) and the IT plasticity rate matched to prior IT plasticity data (0.0016; see Figure 4). Red line indicates the measured human learning effect (reproduced from Figure 2A, lower). Gray lines indicate model predictions for a four-times-smaller and a four-times-larger plasticity rate. Error bars are standard error over 100 runs of the overall model; see text. (B) Decoder hyperparameter space: number of training samples and number of neural features (recording sites). The dashed line indicates pairs of hyperparameters that give rise to IT-to-behavior performances that closely approximate initial (pre-exposure) human object recognition performance over all tasks. (C) Predicted unsupervised learning effects for different choices of hyperparameters (in all cases, the IT plasticity rate was 0.0016 – i.e., matched to the prior IT plasticity data; see Figure 4). The two black lines (nearly identical, and thus appearing as one line) are the overall model-predicted learning effects that result from the hyperparameters indicated by the black dots (i.e., two possible correct settings of the decoder portion of the overall model, as previously established by Majaj et al., 2015). Green and blue lines are the overall model predictions that result from hyperparameters that do not match human initial performance (i.e., non-viable IT-to-behavior-linking models). (D) Predicted learning effect (black line) and measured human learning effect (red) for building size-tolerance exposure. (E) Model-predicted learning effect (black line) and measured human learning effect (red) for building and then breaking size-tolerance exposure. 
In both (D) and (E), the overall model used the same parameters as in (A) (i.e., IT plasticity rate of 0.0016, number of training samples m = 20, and number of model neural sites n = 260).

Given the surprising overall quantitative accuracy of the model predictions straight ‘out of the box,’ we wondered if those predictions might somehow occur even for models that we had not carefully tuned to the initial (pre-exposure) human performance and the previously reported IT plasticity. That is, which components of the model are critical to this predictive success? We tested this in two ways (focusing here on experiment u1).

First, we built model variants in which the IT plasticity rate (α) was either four times smaller or four times bigger than empirically observed in the prior monkey IT neurophysiology (solid gray lines) and re-ran the entire simulation procedure (above). In both cases, the predictions of these (non-biology-matched) model variants were now clearly different in magnitude than the observations (Figure 5A). This result is arguably the strongest evidence that the single-unit IT plasticity effects fully account for – and do not over-account for – the human unsupervised learning effects presented thus far.

Second, we built model variants in which the two decoder hyperparameters (number of neural sites and number of training images) were no longer correctly aligned with the initial human performance levels. Figure 5B illustrates the two-dimensional hyperparameter space, and the dashed line represents potential choices of the two hyperparameters that match human initial performance (the IT-to-COR-behavior-matching manifold; Figure 3F). Regions above (or below) that manifold indicate hyperparameter choices where the decoders perform better (or worse) than initial human performance. We found that the unsupervised learning effects predicted by the overall model (Figure 5C, two black lines on top of each other corresponding to two choices of hyperparameters, black dots in Figure 5B) continued to approximate the human learning effects well. This was also true for other combinations of hyperparameters along the dashed black manifold in Figure 5B (~10 combinations tested; results were similar to those shown in Figure 5C–E, not shown). In other words, for model settings in which the model variant was in line with the biological initial state, the predictions of the unsupervised learning effects remained similarly accurate. This is a nice robustness check on the model simulations and predictions. (However, as a side note orthogonal to our goals here, this result also means that, as in prior work [Majaj et al., 2015], we cannot use this analysis to determine which of these model variants is more matched to the biology.)

In contrast, when we built model variants in which the choices of the two hyperparameters did not match human initial performance, the unsupervised learning effect predicted by the overall model clearly differed from the observed human learning effect. Specifically, when an overall model started off with ‘super-human’ performance, it overpredicted the learning effect; and when a different model started off as ‘sub-human,’ it underpredicted the learning effect.

In sum, it is not the case that any model of the form we have built here will produce the correct predictions – proper (biological) setting of the unsupervised IT plasticity rate and proper (biological) setting of the IT-to-COR-behavior-linkage model are both critical. It is important to note that we did not tune these biologically constrained hyperparameters based on fitting the unsupervised behavioral learning effects in Figure 2 – they were derived in accordance with prior neurobiological work as outlined above.

The unsupervised learning effect depended on the initial task difficulty

So far, we have established a quantitative overall model that quite accurately predicted the direction, magnitude, and time course of learning effects resulting from a range of unsupervised exposure manipulations. For each of those tests, we focused on object discrimination tasks that had an intermediate level of initial task difficulty (face discrimination tasks with initial d’ around 2.0), so that we had dynamic range to see both increases and decreases in performance (e.g., Figure 2). However, we noticed that our IT plasticity rule seemed to imply that those learning effects would depend on the strength of the initial selectivity of individual IT neural sites for the exposed objects (i.e., the initial P vs. N response difference). The intuition is that this response difference is the driving force for IT plasticity updates (e.g., no difference leads to no update, large difference leads to large update). This in turn implied that the learning effect size should depend on the initial task performance (d’).

To test for this dependence, we focused on the unsupervised size tolerance ‘breaking’ manipulation (as in u1, Figure 2A, but with 800 unsupervised ‘swapped’ exposures; see Materials and methods) and tested new sets of human subjects using a wide range of initial task difficulties, ranging from subordinate object discriminations (low d’) to basic-level object discriminations (high d’). We focused on 13 size-specific object discrimination sub-tasks with either small-medium-size swapping exposure or medium-big-size swapping exposure. Each subject received only one exposure variant (see Materials and methods). For each exposure variant, 20–40 new human subjects were tested, and we quantified the unsupervised learning effect (‘breaking’) as the change (from initial) in performance (relative to control objects, as in Figure 2A).

Figure 6B shows the unsupervised learning effect plotted against pre-exposure task difficulty for all 13 object discrimination tasks. This result not only confirms that this unsupervised learning effect is observed for a range of object discriminations (e.g., not just face objects), but also reveals a relationship between task difficulty (d’) and the magnitude of that learning effect. In particular, for initially easy tasks (d’ > ~2.5) and initially difficult tasks (d’ < ~0.5), we observed a smaller learning effect than for tasks with intermediate initial performance.

Figure 6 with 1 supplement.
Learning effect as a function of initial task difficulty.

(A) Illustration of the saturation of measured d’ that results from the assumption that the subject guesses on a fraction of trials (lapse rate), regardless of the quality of the sensory evidence provided by the visually evoked inferior temporal (IT) neural population response (x-axis). (B) Measured human learning effect for different tasks (colored crosses) as a function of initial (pre-exposure) task difficulty (d’), with comparison to model predictions with or without a lapse rate (dots). Each cross or dot is a specific discrimination task. For crosses, different colors indicate different types of tasks and exposures: green indicates the small-size face discrimination learning effect induced with medium-small swapped exposure (n = 100 subjects); blue indicates the big-size face discrimination learning effect induced with medium-big swapped exposure (n = 161 subjects); red indicates the small-size basic-level discrimination learning effect induced with medium-small swapped exposure (n = 70 subjects). Performance is based on population pooled d’. Error bars indicate bootstrapped standard error of the mean population pooled d’ (bootstrapping is performed by sampling with replacement across all trials). The p-value is estimated directly from the bootstrapped distributions of performance change by comparison to the no-change condition. * indicates p-value < 0.05.

We found that our overall model quite naturally – for the reasons outlined above – predicted the smaller learning effect for initially difficult tasks (the left side of Figure 6B). Notably, the model as defined above did not naturally predict the lack of observed learning effects for the initially easy tasks (high initial d’) – it tended to overpredict the magnitude of behavioral changes that would result in those high-d’ tasks (see Figure 6B, black dots). However, we realized that, if we assumed that the model also has a lapse rate similar to that of humans (Prins, 2012), then this discrepancy might go away. That is, we assume that there is some non-zero fraction of trials for which the subject guesses or makes motor errors, regardless of the quality of the sensory-driven information. The intuition here is that human subjects make task-independent mistakes (‘lapses’), and even a low rate of random lapses puts a ceiling on the d’ value that can be experimentally measured (Figure 6A). In the context of our learning experiments, this assumption would mean that the underlying neural representation might indeed be changing a great deal (at least that is what our current model predicts), but those changes cannot be measured as changes in human performance in the face of a lapse-rate-induced measurement ceiling (e.g., a sensory evidence d’ of 5 could change to a sensory evidence d’ of 3.5 [a large d’ change of 1.5], but we would measure an observed behavioral d’ of ~3 in both cases and thus report a behavioral d’ change of ~0). In contrast, the overall model that we described above had a zero lapse rate, which meant that we could measure changes in its behavioral performance for even very large initial d’ values.
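The measurement ceiling can be made concrete with a short calculation. Assuming an unbiased two-choice observer whose accuracy on non-lapse trials is Φ(d′/2) (a common signal-detection simplification; the exact mapping used in the paper's analyses may differ), the measured d′ saturates well below the underlying sensory d′:

```python
from statistics import NormalDist

_nd = NormalDist()

def observed_dprime(sensory_dprime: float, lapse_rate: float) -> float:
    """Map an underlying sensory d' to the behaviorally measured d',
    assuming the subject guesses (p = 0.5) on a `lapse_rate` fraction
    of trials of a two-choice task, independent of sensory evidence."""
    p_sensory = _nd.cdf(sensory_dprime / 2.0)          # accuracy from sensory evidence alone
    p_observed = (1 - lapse_rate) * p_sensory + lapse_rate * 0.5
    return 2.0 * _nd.inv_cdf(p_observed)               # invert back to a measured d'

# With a 10% lapse rate, a large sensory change (5.0 -> 3.5) becomes a
# much smaller measured change (roughly 3.2 -> 2.7):
print(observed_dprime(5.0, 0.10), observed_dprime(3.5, 0.10))
```

Under these assumptions, a sensory d′ change of 1.5 in the high-d′ regime shrinks to a measured change of well under 0.5, consistent with the intuition in the text.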

To explore this, we asked: what is the (mean) lapse rate of the human subjects in our experiments? To estimate this, we used half our human data to rank the average initial human performance of each task, which we take as an estimate of the ordering of those tasks in terms of available sensory evidence. We then used the remaining data to plot the observed human performance on each task (Figure 6—figure supplement 1A). We found that the average performance tended to plateau around 90%, which we take as an indication of a performance ceiling that cannot be explained by a lack of sensory evidence, and thus we attribute to multiple types of downstream errors collectively referred to as lapses. We also directly quantified the distribution of task performance accuracy for basic-level tasks (easy tasks: d’ > 2.5) in our experiments (Figure 6) and found that the distribution has a maximum of 95.0% (Figure 6—figure supplement 1B). These analyses suggest that the lapse rate of our subject pool is ~10% (i.e., 95% accuracy for two-choice tasks with perfect sensory evidence), which is consistent with prior work on human lapse rates (<20%; Manning et al., 2018). We simulated the effect of a 9% lapse rate and a 20% lapse rate (i.e., we told the model to make random guesses on 9% or 20% of trials, regardless of the strength of sensory evidence) and found that these new overall models reasonably explained the dependence of the observed magnitude of human d’ changes on initial human d’ (Figure 6B).
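Mechanically, adding a lapse rate to the model amounts to overriding the sensory decision with a coin flip on a random fraction of trials. A minimal sketch of that procedure (not the paper's actual simulation code):

```python
import random

def apply_lapses(sensory_correct, lapse_rate, rng=random):
    """Given per-trial booleans (was the sensory decision correct?),
    return the behavioral outcomes after lapses: on a lapse trial the
    subject guesses between the two choices with p = 0.5."""
    outcomes = []
    for correct in sensory_correct:
        if rng.random() < lapse_rate:
            outcomes.append(rng.random() < 0.5)  # random guess, evidence ignored
        else:
            outcomes.append(correct)
    return outcomes
```

With `lapse_rate=0.09` or `0.20`, measured accuracy (and hence measured d′) is capped even when every sensory decision is correct.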

In sum, we interpret the lapse rate analyses not as a failure of the overall model, but instead as a limitation of our psychophysical experiments in this study. That is, the lapse rate estimate is consistent with the hypotheses that, in the high initial d’ range, the IT population is changing (indeed, the exposure conditions are close to the conditions of the original monkey neural experiments; Li and DiCarlo, 2010), but that, in the face of a lapse rate, the behavioral consequences of those changes are predicted to be small relative to the effects of downstream biological variability. That being said, it also means that the current study is simply not able to test the IT plasticity to behavioral-learning linkage in the initial high d’ range, and we take that as a target for future experimental work (see Discussion).

Discussion

The goal of this study was to ask if previously reported temporal contiguity-driven unsupervised plasticity in IT neurons quantitatively accounts for temporal contiguity-driven unsupervised learning effects in humans.

To do that, we built an overall computational model to predict human performance changes resulting from plasticity in the firing rates (FRs) of individual IT neural sites under the paradigm of unsupervised temporal contiguity exposure (temporally contiguous pairs of images). The overall model had three core components: (1) a generative model of a baseline adult IT neuronal population, (2) an IT-population-to-COR-behavior-linking model (adopted directly from Majaj et al., 2015), and (3) an IT plasticity rule that aimed to capture and predict how pairs of temporally associated images lead to updates in the (future) image-driven responses of each individual IT neural site. Each of these three model components was guided by prior work and constrained by prior data to set its parameters, and the combined overall computational model was directly used to predict human learning effects without further parameter tuning.

To test the overall model, we asked the model to predict the human performance changes for three separate unsupervised learning experiments and compared those predictions with the human performance changes (averaged over human subjects) that we measured in those three experiments. We found that the direction, magnitude, and time course of those mean unsupervised learning effects were all quite well predicted by the overall model (but not perfectly predicted). We also found that the model could naturally explain the dependence of the measured unsupervised learning on initial object discrimination difficulty, but that it could only fully do so when we assumed model ‘behavioral’ lapse rates that were similar to that estimated in our human subject pool.

In sum, this work establishes a quantitative linking model between the plasticity in individual IT neurons and human behavioral changes (both improvements and disruptions) for temporal contiguity-driven unsupervised learning for the designed tasks. More broadly, the accuracy of the model predictions supports the overarching hypothesis that temporally contiguous unsupervised learning could help shape neural representations that underlie robust (i.e., tolerant) COR, even in adults.

We were somewhat surprised that the overall model did such an accurate job of predicting the human learning effects over time essentially from predicted updates on the responses of IT neural sites. This was surprising because the overall model implicitly assumes that monkey IT and human IT are approximately identical (Kriegeskorte et al., 2008; Rajalingham et al., 2018), the linkage of IT to COR behavior is approximately identical in monkeys and humans (as previously suggested; Majaj et al., 2015), human unsupervised IT plasticity is the same as monkey IT plasticity, and that little or no behaviorally relevant plastic changes occur in the mechanistic linkage between IT and behavior during or after unsupervised visual experience (at least not at the time scales of these experiments: 1.5–2 hr). Of course, the results here do not prove all of the above assumptions to be correct. Indeed, another interpretation of these results is that many of those assumptions are incorrect, but that the errors they each induce in the model predictions coincidentally cancel each other out. However, based on both the prior work cited above and the current study, we believe that it is more parsimonious to assume that the model is predictively accurate because all of the above are approximately correct until further experiments show otherwise.

We think it is quite interesting and demonstrative of this approach that the overall model could predict behavioral learning effects in the low initial d’ and moderate initial d’ regime for different exposure types (build or break), even though the images used in those tasks are in a different d’ regime from the specific images used in the neuronal plasticity experiments used to tune the model learning (i.e., those neural experiments were done with images in a high d’ regime). We note that the ability to make such tests in these other d’ regimes is the key reason that we built the overall composite model in the first place: a generative IT model (simulating IT population patterns to a wide range of image conditions) + an IT neural-to-behavior-linking model + an IT plasticity model (independent of specific images). End-to-end models such as this allow us to predict effects under conditions that no experiment has yet tested, and to then test some of them as we have done here. That is, the integrated model imparts new generalizable knowledge and understanding.

We noted a small discrepancy between the predictions of the model and the human learning data at the longest exposure durations that we tested (1600 exposure events; ~1.5 hr; see Figure 5A,C), where the model predicted slightly stronger behavioral changes than measured. The most likely possibility is that learning over long periods of unsupervised exposure involves more complicated neural mechanisms than those that are captured by our simplified unsupervised IT plasticity rule. For example, perhaps the plasticity rate slows down as the subject fatigues in the experiment. Or perhaps the plasticity mechanisms involve some type of renormalization of the responses of each IT neuron to retain some selectivity to different objects, as motivated by prior theoretical work on temporal contiguity learning (Sprekeler et al., 2007; Wiskott and Sejnowski, 2002). Similarly, plasticity along the ventral stream could involve homeostatic range adjustment, which is fundamental to individual neurons (Turrigiano and Nelson, 2004), as motivated by studies of LTP and LTD plasticity in V1 neurons (e.g., BCM rules; Bienenstock et al., 1982; Toyoizumi et al., 2005). While we did not explicitly model any of these neural plasticity effects, they could be explored in future modeling studies with even tighter tests on neurons and behavior than we achieved here.

The dependency of behavioral learning effects on initial task difficulty is also an important observation. The empirical human results show that tasks with moderate initial difficulty give rise to maximum measurable learning effects in the paradigm used here. The learning effects for hard to moderate tasks are naturally predicted with the initial overall model. However, for easy tasks (typically d’ > ~2.5), our initial overall model predicted a larger learning effect than we empirically observed in humans. In hindsight, this discrepancy seems obvious if we assume that human subjects make errors unrelated to the sensory evidence (e.g., due to inattention or motor error). In prior work, this is quantified as a lapse rate (Madigan and Williams, 1987; Manning et al., 2018; Prins, 2012; Wichmann and Hill, 2001). Our simulations and previous studies suggest that the existence of a lapse rate imposes a ceiling on measured performance (Gold and Ding, 2013; Pisupati et al., 2021; Prins, 2012; Wichmann and Hill, 2001) and thus would strongly mask measured performance changes in that (high d’) regime. In other words, any performance changes within that ceiling would be virtually undetectable. Even though we were able to indirectly estimate the lapse rate of our subject pool, we did not make direct measurements of lapse rate. Furthermore, the influence of lapse rate on learning measurements could be complex; for example, lapse rate can fluctuate across time depending on brain state (Ashwood et al., 2020). Therefore, future experiments are needed to try to better estimate unsupervised performance changes in the high d’ regime (perhaps with the same images as the neural experiments) and to compare the results with this model and others.
This might be done by precisely measuring the lapse rate of each human subject to possibly allow extraction of even small accuracy changes within each subject, controlling/minimizing lapse rate in a lab environment, giving larger experimental doses of unsupervised exposure, or all of these. However, our results here cannot rule out the possibility that no behavioral effects will be found in this high initial performance regime, no matter how strong the unsupervised exposure dose. Such a finding would falsify the model presented here.

In this work, since we wanted to establish a quantitative link between single IT neuronal plasticity and human unsupervised learning effects, we matched many of the conditions to those used in prior IT plasticity work. For example, we used the exact same exposure procedure to introduce temporal continuity-based learning for size tolerance (Li and DiCarlo, 2010). However, our study still included several differences compared to prior IT plasticity work. For completeness, we list these differences and our assumptions here. First, since we rely on a previous IT population response dataset (Majaj et al., 2015) to create a generative IT model and establish the link between IT population activity and human object recognition performance, we rendered test images from the same 3D objects as Majaj et al., 2015 with background variation and cover trials to prevent pixel matching in the human object discrimination tasks. We assumed that the IT neurons are not making net unsupervised changes during each test phase in their image preference patterns as a result of the image background or randomly interleaved cover images, and therefore, that the net learning effect does not depend on what is shown during each test phase. Second, and related, we assumed that the IT generative model could reasonably approximate the variations in IT responses caused by changes in background clutter. Third, the human discrimination tasks are in a lower d’ regime compared to the images used in the prior electrophysiology work. We assumed that the learning rate estimated in the high d’ regime for an IT population generalizes to IT neural plasticity in this lower regime.
Because the model predictions turned out to be surprisingly accurate, we take these differences and the corresponding assumptions (above) as a strength of this study rather than as a weakness, yet the differences do admit the possibility that a fortuitous coincidence in assumption errors led to accurate model predictions rather than our preferred interpretation that the assumptions and the model are all approximately correct. Future neural recordings in non-human primates with behavioral learning measurement could be helpful to directly validate the link suggested by our model using the exact same objects and test images as the ones in our human psychophysics.

Our study tested unsupervised learning effects for multiple types of object discriminations (Figure 6). However, because learning effects are easiest to measure in the initial task difficulty regime of subordinate-level discrimination (Figure 6), we focused much of the psychophysical testing here on face discrimination tasks. This choice was only for convenience, and none of the results here or in prior psychophysical work (Cox et al., 2005; Wallis and Bülthoff, 2001) suggest that temporal continuity-induced changes in face discrimination are different than those of any other subordinate object discriminations. However, it is well established that IT neurons that respond more to faces than other objects are strongly spatially clustered within both the monkey and the human ventral stream (Kanwisher et al., 1997; Tsao et al., 2003), and those neurons often have responses that could be used to support discrimination behavior among different faces (Chang and Tsao, 2017). Do our results imply that IT neurons within these large clusters are more (or less) susceptible to unsupervised plasticity than other IT neurons with face discrimination signals that are not clustered? No, the results here do not add evidence for or against those hypotheses. Briefly, the IT-to-behavior-linking model used here assumes that each task will be driven by the IT neurons that are most useful for that task, as discovered by linear classifiers (Majaj et al., 2015). This linking model does not explicitly care whether the IT neurons are clustered or not. The null hypothesis here is that all IT neurons have equal plastic potential, regardless of whether they are strongly clustered (‘face patch’) or not (with the degree of observed plasticity then depending only on the visual input statistics). Alternative hypotheses are that neurons within IT face patches have higher (or lower) plastic potential than other IT face-encoding neurons or other IT neurons in general.
Our results are consistent with the null hypothesis (i.e., all IT neurons equally potentially plastic), provided that the average plasticity rate is in line with the average IT plasticity rate observed in the prior monkey studies. More targeted future neurophysiology experiments could test these specific alternative hypotheses.

Tolerant object recognition and temporal continuity-driven unsupervised learning

Human (and monkey) visual object recognition is highly tolerant to object viewpoint, even under short, but natural, viewing durations of ~200 ms; this ability is referred to as ‘core object recognition’ (COR) (DiCarlo et al., 2012). Much evidence suggests that this ability derives from neural non-linear processing (and thus neural re-representation) of the incoming image along the ventral visual stream, and some ANN models have become reasonably accurate emulators of that non-linear processing and of its supported COR behavior (Cadieu et al., 2014; Krizhevsky et al., 2017; Kubilius et al., 2018; Yamins et al., 2014). However, because the ‘learning’ of those models is highly non-biological (in the sense that millions of labeled images are used to explicitly supervise the learning), a key question remains completely open: how does the ventral stream develop its non-linear processing strategy?

One proposed idea is that, during postnatal development and continuing into adulthood, naturally occurring temporally continuous visual experience can implicitly instruct plasticity mechanisms along the ventral stream that, working together, lead to transform-invariant object representations (Berkes and Wiskott, 2005; Einhäuser et al., 2005; Földiák, 1991; Wallis et al., 2009). Intuitively, the physics of time and space in our natural world constrains the visual experience we gain in everyday life. Because identity-preserving retinal projections often occur closely in time, the spatiotemporal continuity of our viewing experience could thus be useful for instructing the non-linear processing that in turn supports highly view-tolerant object recognition behavior. Under this hypothesis, objects do not need to be labeled per se; they are simply the sources that statistically ‘travel together’ over time.

We are not the first to propose this overarching hypothesis or variants of it as the theoretical idea dates back to at least ~1960 (Attneave, 1954; Barlow, 1961). Földiák suggested that the internal representation should mimic physical entities in real life, which are subject to continuous changes in time (Földiák, 1990; Földiák, 1991). This process is purely unsupervised and achieves transformation invariance by extracting slow features from quickly varying sensory inputs (Berkes and Wiskott, 2005; Sprekeler et al., 2007; Wiskott and Sejnowski, 2002). A range of mathematical implementations of learning rules (Berkes and Wiskott, 2005; Földiák, 1991; Isik et al., 2012; Körding et al., 2004; Wiskott and Sejnowski, 2002) all include variants of this same conceptual idea: to achieve response stability of each neuron over time (while also maintaining response variance over the full population of neurons). Various synaptic plasticity mathematical rules and associated empirical observations support this form of unsupervised learning: Hebbian learning (Hebb, 1949; Földiák, 1991; Löwel and Singer, 1992; Paulsen and Sejnowski, 2000), anti-Hebbian learning (Földiák, 1990; Mitchison, 1991; Pehlevan et al., 2017), BCM rule (Bienenstock et al., 1982; Toyoizumi et al., 2005), and spike-timing-dependent plasticity (Caporale and Dan, 2008; Markram et al., 1997; Rao and Sejnowski, 2001). This prior work showed that unsupervised learning of neural representations of objects through temporal continuity was possible, at least in theory.
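As a concrete illustration of this family of learning rules, here is a minimal, stdlib-only sketch of a Földiák-style ‘trace’ rule, in which a slowly decaying trace of a unit's past activity gates a Hebbian weight update so that inputs arriving close together in time become associated with the same unit (the function name and parameter values are illustrative, not taken from any of the cited papers):

```python
from math import sqrt

def trace_rule_step(w, x, y, y_trace, eta=0.5, alpha=0.01):
    """One update of a trace-based Hebbian rule.
    w: weight vector; x: current input vector; y: current output
    activity; y_trace: leaky trace of past output activity.
    Returns the (normalized) new weights and the updated trace."""
    y_trace = (1 - eta) * y + eta * y_trace          # mix present and past activity
    w = [wi + alpha * y_trace * xi for wi, xi in zip(w, x)]
    norm = sqrt(sum(wi * wi for wi in w))            # normalization keeps weights bounded,
    return [wi / norm for wi in w], y_trace          # preserving selectivity across units
```

Presenting input B immediately after input A leaves a nonzero trace from A's response, so B also strengthens its weights onto the unit: the unit's response becomes stable (‘slow’) across the temporally contiguous pair, which is the core computational idea shared by the rules cited above.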

Human psychophysics studies have provided empirical evidence supporting the role of unsupervised temporal contiguity plasticity in visual object recognition. Wallis and Bülthoff found that unsupervised exposure to temporal image sequences of different views of different faces led to performance deficits compared to sequences of the same face (Wallis and Bülthoff, 2001; Wallis and Bülthoff, 1999). They also pointed out that these results were only observed in similar face pairs (i.e., low d’) rather than very distinct faces (i.e., higher d’). Cox et al. showed that ‘swapped’ unsupervised experience of pairs of images across saccades could reduce (‘break’) the position tolerance of object discrimination (Cox et al., 2005). Balas and Sinha showed that observing object motion can increase both generalization to nearby views and selectivity to exposed views (Balas and Sinha, 2008). These behavioral observations revealed that unsupervised temporal contiguity constantly contributes to the tolerance of object recognition behavior, even in adults, and thus it must be inducing some kind of underlying neural change somewhere in the brain.

Our human psychophysical results reported here extend this prior work in four ways. First, we measured the learning effects over prolonged periods of time, which allowed us to test for accumulation and saturation. Second, we found that the behavioral learning effect is reversible (Figure 2C). Third, we found that this unsupervised learning effect depended on initial task difficulty, which might explain why some studies report stronger effects than others. For example, Wallis and Bülthoff found that the learning effects on view tolerance were only observed in similar face pairs rather than very distinct faces (Wallis and Bülthoff, 2001), and those similar face pairs have initial d’ that happens to reside in the mid-range where we predict/observe the largest behavioral effects (Figure 6B). Fourth, and most importantly, we designed our unsupervised visual statistical manipulations in the same way as previous monkey neurophysiology experiments, which allowed us to quantitatively compare our human behavioral results with prior monkey neuronal results.

Because IT is, among other ventral stream areas, thought to most directly underlie object discrimination behavior (DiCarlo et al., 2012; Ito et al., 1995; Rajalingham and DiCarlo, 2019) and IT plasticity has been found in many studies (Baker et al., 2002; Logothetis et al., 1995; Messinger et al., 2001), reviewed by Op de Beeck and Baker, 2010, it is natural to ask if temporally contiguous unsupervised experience also leads to plastic changes in IT neurons. Miyashita and colleagues showed that neurons in the temporal lobe shape their responses during learning of arbitrarily paired images such that each neuron’s response becomes more similar to images that were presented nearby in time (Miyashita, 1988; Miyashita, 1993; Naya et al., 2003; Sakai and Miyashita, 1991). Li and DiCarlo directly tested the role of unsupervised visual experience in IT neuronal tolerance by manipulating the identities and properties of objects presented consecutively in time (Li and DiCarlo, 2008; Li and DiCarlo, 2010). They found that, over ~1.5 hr of unsupervised exposure to ‘swapped’ temporal statistics, the size and position tolerance of IT neuronal responses were significantly modified, and that these changes were not reward or task dependent (Li and DiCarlo, 2012). Qualitatively similar temporal continuity-dependent neuronal plasticity has also been observed in rodents during development (Matteucci and Zoccolan, 2020).

Although prior experimental work seemed qualitatively consistent with the overarching theoretical idea, it did not demonstrate that the behavioral learning effects could be explained by the IT neural effects. The results of our study here show that those two effects are quantitatively consistent with each other – the behavioral effects can be largely accounted for by the IT neural effects. While this extends the work of others in the area (see Introduction), some studies have reported null results or have proposed alternative mechanisms. Okamura et al. showed that continuous motion or view presentation is not necessary to form tolerance; rather, enough exposure to different views (even in random sequence) can support view-invariant object recognition (Okamura et al., 2014). This evidence suggests an alternative, or complementary, mechanism for forming tolerant object recognition beyond temporal continuity. Van Meel and Op de Beeck investigated whether temporal continuity experience can alter size-tolerant representation in human LOC using fMRI and reported no observable effects (Van Meel and Op de Beeck, 2020). However, because no behavioral learning effects were reported and the fMRI signal has limited spatial and temporal resolution, this null result may not be inconsistent with the results presented here. Looking in rodents, Crijns et al. tested the temporal contiguity hypothesis in adult rat primary visual cortex and found that the tolerance of orientation selectivity across spatial frequency was not affected by temporal continuity manipulation (Crijns et al., 2019), which may be caused by a different representational mechanism at lower levels of the visual hierarchy. On the other hand, Matteucci and Zoccolan reported that reduced temporal continuity experience in early postnatal life led to a loss of complex cell functional properties in rat V1 (Matteucci and Zoccolan, 2020).
In sum, the literature is still highly varied and future neurophysiological and behavioral experiments are necessary to test the boundary conditions of temporal contiguity induced effects.

Future directions

We believe that models similar to the one proposed here will be an important future direction in harmonizing results across spatial scales (neurons to behavior) and across species (rodents to primates to humans), such as the studies outlined above. A second future direction is to extend our current overall model to other types of tolerance, such as view invariance or position invariance. This could be done by collecting further psychophysical data, adding proper tuning kernels to the current generative IT model, and using the same IT plasticity rule and decoding model. A third future direction is to extend our current model to other objects beyond those that have been tested in monkeys and humans. This could be achieved by testing new IT population responses to new and old objects and then embedding the new objects in the MDG model of the IT population representation space based on neuronal population response similarity. Alternatively, we could use image-computable deep ANN models that quite accurately predict ventral stream neuronal population responses (Kubilius et al., 2018; Yamins et al., 2014) and use the ‘IT’ layer to build a much larger representation space of objects. A fourth future direction is to develop new unsupervised learning algorithms that implement some of the core ideas of temporal contiguity learning, but are scaled to produce high-performing visual systems that are competitive with state-of-the-art neural network systems trained by full supervision. Many computational efforts have touched on this direction (Agrawal et al., 2015; Bahroun and Soltoggio, 2017; Goroshin et al., 2014; Higgins et al., 2016; Kheradpisheh et al., 2016; Lotter et al., 2016; Srivastava et al., 2015; Wang and Gupta, 2015; Whitney et al., 2016), and some are just beginning to make predictions about the responses along the ventral stream (Zhuang et al., 2021).
A key next step will be to put those full-scale models to experimental test at both the neurophysiological and behavioral levels.

Materials and methods

Datasets from prior work

To build a quantitative linking model that predicts unsupervised learning effects in humans from neuronal responses in IT, we used three experimental datasets: (1) human data: human psychophysics performance data collected with Amazon Mechanical Turk; (2) IT population data: simultaneous recordings of 168 sites with multi-electrode Utah arrays implanted in monkey IT (from a previous study; Majaj et al., 2015); and (3) IT single-site learning data: multi-unit activity recorded with single electrodes in monkey IT (from a previous study; Li and DiCarlo, 2010). All processed data are available at https://github.com/jiaxx/temporal_learning_paper (copy archived at swh:1:rev:bb355bb96286db2148c3abdc8f71b5880f657c5f; Jia, 2021).

IT population dataset

Multi-electrode (Utah array) recordings from two awake macaque monkeys provided the responses of 168 multi-unit IT neural sites to 64 objects (5760 high-variation naturalistic images) for modeling use. Each image was presented for 100 ms and repeated between 25 and 50 times. Spike counts were binned in the time window 70–170 ms post stimulus onset and averaged across repetitions to produce a 5760 by 168 neural response pattern array. The 64 exemplar objects come from eight categories (animals, boats, cars, chairs, faces, fruits, planes, and tables). Images were generated by placing a single exemplar object on a randomly drawn natural scene background at a wide range of positions, sizes, and poses. Images were presented at 8° diameter at the center of gaze to awake, fixating animals in a rapid serial visual presentation (RSVP) procedure. See Majaj et al., 2015 for details.
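The construction of the response array can be sketched as follows (synthetic data and a hypothetical input layout; the actual dataset format differs):

```python
def build_response_matrix(spikes, n_images, n_sites, window=(0.070, 0.170)):
    """Bin spikes in a fixed post-stimulus window and average across
    repetitions to form an images x sites response matrix.
    spikes[(image, site)] is a list of repetitions, each a list of
    spike times in seconds relative to stimulus onset."""
    t0, t1 = window
    R = [[0.0] * n_sites for _ in range(n_images)]
    for (img, site), reps in spikes.items():
        counts = [sum(t0 <= t < t1 for t in rep) for rep in reps]
        R[img][site] = sum(counts) / len(counts)   # repetition-averaged spike count
    return R
```

Applied to the full dataset, this yields the 5760 × 168 array used for modeling.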

IT plasticity dataset

Physiology data on unsupervised IT plasticity effects, measured as multi-unit activity recorded at single electrodes in macaque monkey IT cortex, were reanalyzed from Li and DiCarlo, 2010 (n = 42 multi-unit activity [MUA] sites). We refer to these as ‘neural sites.’ The firing rate (FR) of each site was measured in response to preferred (P) and non-preferred (N) objects, each presented at a range of sizes, and was re-measured after different amounts of unsupervised exposure to evaluate the effect of that exposure on those response measures. See Li and DiCarlo, 2010 for details.

Image generation

We used the same 3D object models as a previously published IT-behavior study (Majaj et al., 2015) and applied the same rendering mechanism (ray-tracing software) to each 3D object while parametrically varying its position, rotation, and size, projecting the object onto a randomly chosen unique natural background (from a pool of 130 images) to generate new test image examples. All images were achromatic. The ground truth of each image was the identity of the generating 3D model, and this was used to evaluate performance accuracy. This naturalistic image generation allowed us to gain full control of all the object-related metadata in the images while preserving a relatively natural COR experience.

For each object, we predefined a ‘baseline view’ (i.e., exact center of gaze, size of ~2° or 1/3 of the diameter of the image, and canonical pose; see Methods of Majaj et al., 2015; Rajalingham et al., 2018). Variations in size, position, and rotation are transformations relative to the baseline view of the object. Since our focus here was unsupervised learning of size-tolerant object selectivity, we intentionally introduced more images that only vary in size to measure size tolerance. Medium-sized objects were the ‘baseline’ size (~2°). Small-sized objects were 0.5× of baseline (~1°). Big-sized objects were 2× that of baseline (~4°). All test images for different sizes were generated with random naturalistic backgrounds. We thus created a set of 240 ‘size test’ images per object (i.e., 80 images per object at each of the three test sizes). The final test images were each 512 × 512 pixels and were always presented to the subject at a total extent of ~7° of visual angle at the center of gaze (as in the prior neurophysiology studies above).

To neutralize possible size-specific attentional effects and possible size-specific adaptation effects, we presented these ‘size test’ images intermixed with other ‘cover’ images of the same objects. These cover images were generated using mild variation in all of the object view parameters. Specifically, we sampled randomly and uniformly from the following ranges: [−1.2°,+1.2°] for object position in both azimuth (h) and elevation (v); [−2.4°,+2.4°] for rotation in all three axes; and [x0.7, x1.3] for size. These cover images were mixed randomly with the ‘size test’ images (above) at a ratio of 1 cover image per ‘size test’ image to generate a set of psychophysical test images for each subject (illustrated in Figure 1—figure supplement 1B). The behavioral results from the cover images were not part of the analyses.
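
The cover-image view parameters described above can be sketched in a few lines. This is a hypothetical reconstruction; the function and field names are ours, not from the original stimulus-generation code, but the sampling ranges are those stated in the text.

```python
import random

# Hypothetical sketch of the cover-image parameter sampling described above;
# only the numeric ranges come from the text.
def sample_cover_params(rng=random):
    return {
        "azimuth_deg":   rng.uniform(-1.2, 1.2),                      # position (h)
        "elevation_deg": rng.uniform(-1.2, 1.2),                      # position (v)
        "rotation_deg":  [rng.uniform(-2.4, 2.4) for _ in range(3)],  # 3 axes
        "size_scale":    rng.uniform(0.7, 1.3),                       # vs. baseline
    }
```

Each cover image would then be rendered from one such parameter draw and mixed 1:1 with the ‘size test’ images.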

Human psychophysics and analysis

All human experiments were done in accordance with the MIT Committee on the Use of Humans as Experimental Subjects (COUHES). We used Amazon Mechanical Turk (MTurk), an online platform where subjects can participate in non-profit psychophysical experiments for payment based on the duration of the task. The description of each task clearly states that participation is voluntary and that subjects may quit at any time. Subjects can preview each task before agreeing to participate and are informed that anonymity is assured and that the researchers will not receive any personal information. MTurk requires subjects to read task descriptions before agreeing to participate. If subjects successfully complete the task, they anonymously receive payment through the MTurk interface. Because it is easier and faster to recruit subjects through MTurk, we could collect a much larger dataset than with traditional in-lab human psychophysics.

A total of 505 (174 subjects in Figure 2 and 331 subjects in Figure 6) subjects successfully completed our tasks published through Amazon’s Mechanical Turk. Subjects who failed to complete the task or follow the instructions were rejected. Aspects of COR performance were measured based on the behavioral report following each test image presentation (Rajalingham et al., 2018). Previous work compared the results of COR tasks measured in the laboratory setting with controlled viewing with results measured via Amazon MTurk and found virtually identical results (Pearson correlation 0.94 ± 0.01; from Majaj et al., 2015).

Each behavioral experiment contained two types of phases: a test phase in which specific aspects of object discrimination performance were measured (see below) and an exposure phase in which pairs of temporally contiguous images were experienced (see Figure 1). The main experiment consisted of five test phases (200 trials each; 6–8 min) and four interleaved exposure phases (400 exposure events each; 12–20 min) that together allowed us to measure exposure-induced changes in size-specific object discrimination over time (total experiment time ranged from 90 min to 120 min).

Test phase

Our goal was to measure the discriminability of targeted (exposed) pairs of objects at targeted (exposed) sizes (and, as references, we also measured discriminability for control object pairs and for target objects at a non-exposed size). Conceptually, each such discrimination sub-task (size-specific object discrimination task) is a generated set of images from object A at a specific size that must be discriminated from a generated set of images of object B at a specific size, and mapped to the same object at a medium size (e.g., see Figure 1B, choice images). For clarity, we note that, given this design, the only variation in each of these sub-task image test sets was the image background. These size-specific sub-tasks were randomly interleaved with cover trials to disguise this underlying fact from the subject (see Figure 1—figure supplement 1B).

To measure performance on each sub-task, we used a 2AFC design. Each 2AFC trial started with a central fixation point. Subjects were requested to fixate the black fixation point because the test image was always presented briefly at that location and they might miss it otherwise. After 500 ms, the fixation dot disappeared and a test image appeared centered at dot location (center of the screen) for 100 ms, followed by the presentation of two ‘choice’ images presented on the left and right of the screen (Figure 1A). One of the choice images always matched the identity (or category) of the object that was used to generate the test image and was thus the correct choice, and its location was randomly assigned on each trial (50% on the right and 50% on the left). After mouse-clicking a choice image, the subject was given another fixation point (i.e., the next test phase trial began). No feedback on correctness of the choice was given.

To measure size-specific discrimination performance, we created size-specific 2AFC sub-tasks. Specifically, each sub-task was a balanced (i.e., 50%/50%) set of size-specific test images generated from objects A and B (see above), and the two choices presented after each test image were ‘clean’ examples of objects A and B at a standard (‘medium’) size (Figure 1A).

For each subject, the test images were pseudorandomly drawn from a test image pool that contained the desired number of ‘size test’ images and cover images (Figure 1—figure supplement 1B). Among the 200 trials (50 test images of each test object; four objects in total), 40% contained the ‘size test’ images (20 for each object; 10 for small and 10 for big), 10% contained baseline views (medium size; five for each object), and the remaining 50% test images were ‘cover images’ (see above) that were not used in analyses (see Figure 1—figure supplement 1B for example test images). The number of test images for target and control object pairs was thus balanced. The number of test images for small and big sizes was also balanced regardless of exposure type. As a result, for each subject, we created six size-specific 2AFC sub-tasks in total (three different sizes for each object pair) regardless of exposure type. The number of test images for target and control face pairs at different sizes in each test phase is specified in Figure 1—figure supplement 1B.
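
The trial-pool composition above (40% ‘size test’, 10% baseline, 50% cover, over four objects) can be sketched as follows. This is a minimal illustration with our own labels; the counts follow the proportions stated in the text.

```python
import random

# Sketch of one subject's 200-trial test-phase pool (labels are illustrative):
# per object: 10 small + 10 big 'size test', 5 medium baseline, 25 cover.
def build_test_pool(objects=("A", "B", "C", "D")):
    pool = []
    for obj in objects:                  # 50 test images per object
        pool += [(obj, "small")] * 10    # 'size test' images
        pool += [(obj, "big")] * 10      # 'size test' images
        pool += [(obj, "medium")] * 5    # baseline views
        pool += [(obj, "cover")] * 25    # cover images (excluded from analyses)
    random.shuffle(pool)                 # pseudorandom trial order
    return pool
```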

To evaluate exposure-induced learning effects, we only calculated the discrimination performance of three exposure-relevant sub-tasks (preplanned, Figure 1B): (1) the sub-task with exposed (target) objects at the exposure-manipulated size (Figure 1B, red or blue d’); (2) the sub-task with non-exposed (control) objects at the exposure-manipulated size (Figure 1B, black d’); and (3) the sub-task with exposed (target) objects at the non-manipulated size (Figure 1B, dashed black d’). For example, one subject might have been randomly assigned to exposure type = (experiment u1, swapped condition), target size = (big size), target objects = (face A, face B), control objects = (face C, face D). In this example, each test phase aimed to measure performance (d’) on three specific sub-tasks: [face A big vs. face B big], [face C big vs. face D big], and [face A small vs. face B small].

The sizes of the subject groups are provided in Results. The test trials for size and objects were always balanced in each subject group. In Figure 2, the subject groups differ in the exposure type (three subject groups). In each of these three groups, the target exposure size was the big size, and within each group, the specific face objects for target and control were randomly selected for each subject. In Figure 6, the 13 subject groups correspond to the 13 sub-tasks that were targeted for exposure (see below). Within each subject group, the targeted type of object (i.e., face or basic level) and the targeted exposure size (i.e., medium-big or medium-small) was the same for all subjects, and within each group, the specific objects for target and control were randomly selected within the targeted type.

We computed the d’ for each exposure-relevant sub-task (typically three d’ values for each subject group; see Figure 1B) based on the population (pooled) confusion matrix of the entire subject group. For each sub-task, we constructed a 2 × 2 confusion matrix by directly filling the behavioral choices into hit, miss, false alarm, and correct rejection according to the stimulus and response of each trial (Figure 1B). From the pooled confusion matrix, we computed the d’ for each sub-task using standard signal detection theory (d’ = Z(TPR) – Z(FPR), where Z is the inverse of the cumulative Gaussian distribution function, and TPR and FPR are true-positive and false-positive rates, respectively). The d’ value was bounded within −7.4 to 7.4 (via an epsilon of 0.0001). The mean d’ for each sub-task of each subject group was determined by averaging the d’ calculated from each bootstrapped subject sample (which converges to the d’ of the pooled confusion matrix). The error bar (bootstrapped standard error) of performance represents the standard deviation of the population pooled d’ over all bootstrap samples (1000 samples in each case), obtained by sampling with replacement across all trials (aggregated for each subject group). The p-value was estimated directly from the bootstrapped distribution of performance change by comparison to the no-change condition (by definition, 0).
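
The d’ computation from a pooled 2 × 2 confusion matrix can be sketched directly from the formula above. The function name is ours; the epsilon clipping of the rates is what bounds d’ to roughly ±7.4.

```python
from statistics import NormalDist

# Sketch of d' = Z(TPR) - Z(FPR) from a pooled 2x2 confusion matrix; clipping
# the rates by epsilon = 0.0001 bounds d' to approximately +/-7.4.
def dprime(hit, miss, fa, cr, eps=1e-4):
    clip = lambda p: min(max(p, eps), 1 - eps)
    z = NormalDist().inv_cdf            # inverse cumulative Gaussian
    tpr = clip(hit / (hit + miss))      # true-positive rate
    fpr = clip(fa / (fa + cr))          # false-positive rate
    return z(tpr) - z(fpr)
```

For example, a perfect confusion matrix (no misses or false alarms) yields the bounded maximum of ~7.4 rather than infinity.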

Exposure phase

Each exposure trial (a.k.a. exposure ‘event’) in the exposure phase was intended to deliver a pair of temporally contiguous images at the center of gaze. Each trial initiated with the presentation of a small, central black dot (~0.5°), and the subject was required to mouse-click on that dot (this is intended to naturally bring the center of gaze to the dot). Immediately after a successful mouse-click (within 0.5° of the dot), two images were presented sequentially at the location of the black dot. Each image was shown for 100 ms with no time lag between them. After the event, the black dot reappeared at a new, randomly chosen location (out of nine possible locations) on the screen (i.e., the next exposure trial began). The details of those images are described below in the context of the specific experiments carried out.

Because we here focused on the effects of unsupervised exposure events on size tolerance, the size of object in each of the two sequential images was always different and always included the medium (‘baseline’) size: either big-sized objects paired with medium-sized objects or small-sized objects paired with medium-sized objects. In either variant, the order of those two images was counterbalanced, as in Li and DiCarlo, 2010 (e.g., approximately half of the events transitioned from medium to big objects and the other half from big to medium objects; signified by the double-headed arrows in Figure 1B).

Flavors of unsupervised exposure

Following prior work (Cox et al., 2005; Li and DiCarlo, 2010; Wallis and Bülthoff, 2001), there are two basic flavors of unsupervised exposure. The first flavor is referred to as swapped exposure, in which the two images within each exposure event are generated from different objects (here, at different sizes). Based on prior work (Cox et al., 2005; Wallis and Bülthoff, 2001), this exposure flavor is expected to gradually ‘break’ (disrupt) size-tolerant discrimination of those two objects. The second flavor is non-swapped exposure, in which the two images are generated from the same object (here, at different sizes). While this flavor has been less studied in human psychophysics, based on prior IT neurophysiology results (Li and DiCarlo, 2010), it is expected to gradually build size-tolerant discrimination of those two objects.

Experimental designs

Our main experimental goal was to test the directions, magnitudes, and temporal profiles of changes in size-tolerant object discrimination (assessed in the test phases, see above) resulting from different types of unsupervised exposure conditions (Figures 1 and 2). To do that, we deployed the two flavors (above) in three types of unsupervised experience (u), and each subject was tested in only one of those three types. The first type (u1) was a series of swapped exposure epochs (intuitively, this aims for maximal ‘breaking’). The second type (u2) was a series of non-swapped exposure epochs (intuitively, this aims for maximal ‘building’). The third type (u3) was two swapped exposure epochs followed by two non-swapped exposure epochs (intuitively, this aims to test the reversibility of the unsupervised learning).

Each type of experiment lasted for about 90 min, and each consisted of nine phases in total: five test phases (200 test images each) and four exposure epochs (400 exposure events in each epoch; Figure 1A). This experiment was done with face objects only in a total of 174 subjects over all conditions (u1 = 102 subjects, u2 = 36 subjects, u3 = 37 subjects).

Our secondary experimental goal was to study how the learning effect depends on the perceptual similarity of the exposed objects. To do this, we chose pairs of objects covering a wide range of initial discrimination difficulties. Intuitively, it is easier to discriminate an elephant from a pear than it is to discriminate an apple from a pear. Specifically, we chose a total of 13 size-specific object pairs selected from a set of eight face objects (n = 10 pairs) and six basic-level objects (n = 3 pairs). For subjects exposed to faces, the control objects were also faces; for subjects exposed to basic-level objects, the control objects were other basic-level objects. These 13 pairs were selected based on pilot experiments suggesting that they would span a range of initial discrimination performance. Indeed, when tested in the full experiment (below), we found that mean human initial discrimination difficulties ranged broadly (d’ range: 0.4–2.6, based on the first test phase). We thus ran 13 groups of subjects (i.e., one group per target object/size pair) with ~20–40 subjects per group. Because the goal here was to test the magnitude of size-specific learning (not the time course), we tested only the ‘swapped’ flavor of unsupervised experience using just one long exposure epoch (consisting of 800 exposure events). Each subject was exposed to only one pair of objects and to one size variant of the exposure: small-medium-size swapping or medium-big-size swapping. We bracketed that unsupervised exposure with one pre-exposure test phase (200 trials) and one post-exposure test phase (also 200 trials). The learning effect was always measured at the exposed size (e.g., if exposed with small-medium swapping, the learning effect was measured as the performance change in the small-size discrimination task for the exposed object pair), subtracting the performance change for the control object pair at the exposed size (all exactly analogous to Figure 2A).

Conducting multiple tests of the same null hypothesis, and rejecting that hypothesis if any one of those tests is significant, increases the likelihood of incorrectly rejecting the null hypothesis by pure chance. To set an appropriate rejection level, a multiple-comparison correction (e.g., Bonferroni or FDR) must be applied, which adjusts the alpha level of each test to account for the number of tests of the same null hypothesis. In our testing of learning effects over exposure amount (Figure 2), we are not asking whether the learning effect at any exposure amount is different from 0, which would require a multiple-comparison correction for the number of tests. Instead, each point is a single test of a different null hypothesis: ‘There is no learning effect at exposure amount x.’ These results demonstrate how the learning effect changes as a function of exposure time for different exposure types. Therefore, we do not believe that a multiple-comparison correction is applicable in this situation. In the statistical test for the learning effect of different tasks (Figure 6), the dependent variables are the observed learning effects for tasks that differ in initial task difficulty. We are asking whether the learning effect measured at 800 exposures for a given task is significantly different from 0 (the null hypothesis for all tasks). Thus, there is only one comparison per dependent variable; therefore, we believe that a multiple-comparison correction is not necessary here. If we were asking whether any learning effect was observed for any one of the measured tasks given the exposure, then a multiple-comparison correction would be necessary, but that is not the question being asked here. Instead, we are simply showing the trend of the effect size for each tested task, with bootstrapped standard deviations of the mean, to demonstrate the relationship between initial task difficulty and the learning effect size.

Generative IT model

We modeled the IT neuronal population response with a multidimensional Gaussian (MDG) model, based on the IT population dataset collected from monkey IT cortex. This model assumes that the distribution of IT population responses (the distribution of the mean responses of individual IT neurons to all images of an object) to each object is Gaussian-like. We tested this hypothesis with a normality test and found that 81.25% (52 out of 64 distributions for 64 objects) of the IT population response distributions were consistent with a Gaussian (rejection threshold p<0.01). This MDG model preserves the covariance matrix of neuronal responses to all 64 objects that have been tested in monkey IT cortex. A random draw (of a 64 × 1 vector) from the MDG is conceptualized as the average response (over image repetitions) of a simulated IT recording site to each of the 64 objects. To generate the simulated IT tuning over changes in object size, for each simulated IT site, we multiplied (outer product) that 64 × 1 vector with a size-tuning kernel (1 × 3 vector) that was randomly sampled from a batch of size-tuning curves (Figure 3—figure supplement 1B) that we had obtained by fitting curves to real IT responses across changes in presented object size (n = 168 recording sites; data from Majaj et al., 2015). This gives rise to perfectly size-tolerant simulated IT neurons (i.e., by construction, the tuning over object identity and over size are perfectly separable). Note that this produced a population of simulated IT neurons with a broad range of size tuning, with that range approximating the one observed across actual IT neurons. The distribution of the variance across size (the variance across size reflects the shape of the size-tuning curve; e.g., 0 variance corresponds to a flat tuning curve across sizes) is shown in Figure 3—figure supplement 1C.
To introduce more biological realism and to approximate the fact that each image is presented on a random background, we randomly jittered each value in the 64 × 3 matrix by a zero mean, iid shift of each matrix element (randomly drawn from the distribution of variance across image exemplars for each object from IT neural data [Figure 3—figure supplement 1D]; σ2clutter). Given this procedure, we could generate a potentially infinite number of simulated IT neurons and their (mean) responses to each and every image condition of interest. We verified that, even with the simplifying assumptions imposed here, the population responses of simulated IT populations were quite similar to the actual IT neural population responses (in the sense of image distances in the IT population space [Figure 3B] and variance level [Figure 3D]).
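
The generative steps described in the two paragraphs above (MDG draw, outer product with a size-tuning kernel, clutter jitter) can be sketched as follows. All statistics here are stand-ins; in the paper they are estimated from the recorded IT data, and the variable names are ours.

```python
import numpy as np

# Illustrative sketch of the generative IT model: object tuning drawn from a
# multidimensional Gaussian, multiplied (outer product) by a size-tuning
# kernel so identity and size tuning are separable by construction, then
# jittered per entry to mimic random image backgrounds.
rng = np.random.default_rng(0)
n_objects, n_sizes = 64, 3

mu = np.zeros(n_objects)                        # mean response per object (stand-in)
cov = np.eye(n_objects)                         # object covariance matrix (stand-in)
size_kernel_pool = rng.random((168, n_sizes))   # stand-in for fitted size-tuning curves
sigma2_clutter = 0.1                            # variance across image exemplars (stand-in)

def simulate_site():
    obj_tuning = rng.multivariate_normal(mu, cov)                    # 64-vector
    kernel = size_kernel_pool[rng.integers(len(size_kernel_pool))]   # 3-vector
    resp = np.outer(obj_tuning, kernel)                              # 64 x 3, separable
    resp += rng.normal(0.0, np.sqrt(sigma2_clutter), resp.shape)     # clutter jitter
    return resp
```

Repeating `simulate_site` yields an arbitrarily large simulated IT population with fixed MDG and kernel-pool statistics.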

To generate a hypothetical IT neural (model) population, we simply repeated the above process to obtain the requested number of model neurons in the simulated population (note: the MDG and the size-tuning kernel pool was always fixed). In addition, when we ‘recorded’ from these neurons (e.g., in Figure 3A), we additionally added response ‘noise’ that was independently drawn on each repetition of the same image (σ2repeats; mean zero, variance scaled with the mean to approximate known IT Poisson repetition ‘noise’; Figure 3—figure supplement 1E).

IT-to-behavior-linking model

To generate behavioral performance predictions from model IT population responses, we applied a previously defined IT-to-recognition-behavior-linking model (Majaj et al., 2015). In that study, the authors used actual IT neural population responses to show that a set of possible IT-to-behavior-linking models could each accurately describe and predict human performance on all tested recognition tasks within the reliability limits of the data. Here we used one of the simplest, most biologically plausible of those models – a linking model that seeks to infer the test image’s true label by computing the Pearson correlation between the mean IT population response to each possible object class (computed on the IT responses to the training images) and the IT population response evoked by the current test image (note that test images are never used in the training of decoders). In other words, the model’s ‘choice’ of object category for each test image was taken to be the choice object whose (simulated) IT population mean (over the training images) was closest to the population vector evoked by the current test image. The only difference from the prior work (Majaj et al., 2015) is that here we used simulated IT neurons (see Generative IT model) to drive the ‘behavior’ of the model. (Note that the linking model has two key hyperparameters [see Results]; for each simulation run, we held those constant.)
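
The correlation-based decoder described above can be sketched in a few lines. The function and argument names are ours; the rule itself (pick the class whose mean training response correlates best with the test response) is as stated in the text.

```python
import numpy as np

# Sketch of the correlation-based linking model: the predicted label is the
# object class whose mean training-image population response has the highest
# Pearson correlation with the test image's population response.
def decode(test_resp, class_means):
    # test_resp: (n_neurons,); class_means: dict mapping label -> (n_neurons,)
    def pearson(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(class_means, key=lambda c: pearson(test_resp, class_means[c]))
```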

Since the model (IT population + linking model) could now be treated as a behaving ‘subject,’ we analyzed the behavioral choices in exactly the same way as the actual human behavioral choices to arrive at d’ values that could be directly compared (i.e., generate a confusion matrix for each 2AFC sub-task, see above).

Similarly, to test a new model ‘subject,’ we simply generated an entirely new IT model population (see above) and then found the parameters of the IT-to-behavior-linking model for that subject.

To simulate human lapses (see Results), we introduced a (fixed) percentage of trials in each of the ‘behavioral’ confusion matrices where model responses were randomly chosen. Note that, when initial d’ is below ~2, the lapse rate most consistent with the data (9%) has little influence on measurable performance (see Figure 6A) and thus only a minor effect on the model in Figure 5. Therefore, all predictions in Figure 5 were made with 0% lapse rate.
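
The lapse model above amounts to replacing the model's choice with a random one on a fixed fraction of trials. A minimal sketch, with our own names:

```python
import random

# Minimal sketch of the lapse model: on a fixed fraction of trials the model's
# choice is replaced by a random pick between the two response options.
def apply_lapse(choice, options, lapse_rate=0.09, rng=random):
    return rng.choice(options) if rng.random() < lapse_rate else choice
```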

Unsupervised IT plasticity model

We built a descriptive (non-mechanistic) learning rule based on the same fundamental concept as previous computational models of temporal continuity learning (Földiák, 1990; Földiák, 1991; Sprekeler et al., 2007; Wiskott and Sejnowski, 2002), differing only in its mathematical implementation. In our setup, there are always only two images in each exposure event (a leading image and a lagging image). Our plasticity rule states that, after each exposure event, the mean firing rate (FR) response to the leading image is updated as follows:

ΔFR_leading = α (FR_lagging − FR_leading)

This plasticity rule tends to reduce the response difference between the two exposed images (i.e., it tends to create response stability over time, assuming that the statistics of the future are similar to those of the past). In our overall model, we apply this plasticity rule to each and every simulated IT neuron after each and every exposure event. Note that, under repeated exposure events, the FR to all images will continue to change until there is no difference in the responses to the leading and lagging images, at which point the responses reach a steady state.
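
The update rule and its steady-state behavior can be written out directly. The function name is ours; alpha stands for the plasticity rate fitted later in this section.

```python
# Sketch of the plasticity rule above: after each exposure event, the response
# to the leading image moves a fraction alpha toward the response to the
# lagging image.
def update_leading(fr_leading, fr_lagging, alpha=0.0016):
    return fr_leading + alpha * (fr_lagging - fr_leading)

# With repeated exposure, the response difference shrinks toward zero; here
# the lagging response is held fixed purely for illustration.
fr = 1.0
for _ in range(10000):
    fr = update_leading(fr, 0.0)
```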

Compared with previous plasticity rules (e.g., Hebbian rule) for temporal continuity learning, our plasticity rule is relatively simple. Our plasticity rule updates each IT unit’s output FR directly rather than its input weights (Földiák, 1991). Based on immediate activity history, our learning rule continuously changes each unit’s output by pulling its responses to consecutive images closer until reaching steady state. This learning rule has several features. First, it is temporally asymmetric, which means that the direction of rate change of the leading image depends on the sequence of leading and lagging image. In other words, the response to the lagging image is going to pull the response to the leading image toward it. However, since our experiments randomized the leading and lagging images on each exposure trial, this results in a change in the response to both images rather than an asymmetric change. Second, the effect of our plasticity rule is constrained to exposed image pairs and ignores any correlation in the neural representation space. Even though we do not yet have experimental data to accurately generalize the plasticity rule further than what has been presented in this paper, it is potentially generalizable to other types of tolerance (position, pose) and to other exposure paradigms.

The neural plasticity data were collected by selecting preferred (P) and non-preferred (N) images for each unit, corresponding to two different objects (Li and DiCarlo, 2010). We set the neural plasticity rate of the simulated neurons to match that observed in biological IT neurons. To do this, we focused on the same initial high-d’ regime in which the neural plasticity data were collected. Specifically, for each simulated IT site, objects P and N were chosen independently out of the 64 objects based on the site’s mean response to each object (most likely to be in the high-d’ regime). The plasticity rule was applied to each simulated site as it underwent unsupervised exposure, with the neural response function updated based on its responses to images containing objects P and N at the exposed sizes (see Materials and methods: Generative IT model for details). Because of the initial randomness in the size-tuning kernel selected for each neural site (Figure 3—figure supplement 1B, C) and the clutter variance introduced for the site’s responses to different image exemplars (Figure 3—figure supplement 1D), the response profile of each simulated neuron is unique, and thus the update during simulated unsupervised learning is not always in the same direction across the population. The averaged learning effect across all simulated neurons was then computed (as if these neurons had been observed in an experiment), and that simulated learning effect was compared to the averaged learning effect observed in the biological IT neurons. The plasticity rate was optimized to minimize this difference. The (fixed) plasticity rate determined in this way could then be used with the plasticity rule to compute the expected individual IT neuronal response pattern changes to any pair of images for which the image-driven responses are both known.
In this study, that means we could apply it to any images in the space of the generative model of IT, but we note that this same plasticity rule could be applied to other models of IT responses (e.g., those from contemporary image-computable models; Kubilius et al., 2018; Yamins et al., 2014). However, it is important to note that the learning rate value is determined by the IT population statistics and the plasticity rule chosen here and thus should not be taken as a universal value. Changes in the statistics of the simulated IT population (i.e., covariance matrix, variance across sizes, clutter variance, etc.) can influence the initial state of the IT population and, as a consequence, influence both the fitted plasticity rate and the predicted changes for each simulated IT neuron.

The plasticity rate that best matches the neural data is 0.0016 nru per exposure event (nru = normalized response units). The normalized response is calculated as Δ(P – N)/(P – N), where P and N represent the z-scored FR (across all objects) to preferred and non-preferred objects. The z-score is measured in units of standard deviations from the mean; therefore, 1 normalized response unit is 1 std of the response (FR) distribution across all tested objects. Since the mean multi-unit FR is 90 ± 23 spk/s (std across objects) for the IT population across 64 objects, we estimate that 1 nru is ~23 spk/s. Therefore, 0.0016 nru corresponds to an FR change of ~0.035 spk/s per exposure event, which means that ~30 exposure events of this kind would give rise to a 1 spk/s change in P vs. N selectivity.
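
The unit conversion above can be checked with two lines of arithmetic (values as quoted in the text; the final figures are approximate and depend on rounding):

```python
# Spelling out the nru -> spk/s conversion quoted above.
std_fr = 23.0                    # 1 nru ~ 1 std of FR across objects, in spk/s
rate_nru = 0.0016                # fitted plasticity rate, nru per exposure event
rate_spk = rate_nru * std_fr     # FR change per exposure event, in spk/s
events_per_spk = 1.0 / rate_spk  # events needed for a ~1 spk/s selectivity change
```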

Data availability

All data generated or analyzed during this study are included in the manuscript and supporting files, in the most useful format (https://github.com/jiaxx/temporal_learning_paper (copy archived at https://archive.softwareheritage.org/swh:1:rev:bb355bb96286db2148c3abdc8f71b5880f657c5f)). Datasets from previous studies (the IT population dataset [Majaj et al., 2015] and the IT plasticity data [Li & DiCarlo, 2010]) are also compiled in the most useful format and saved at the same GitHub location. Original datasets for the previous studies can be obtained by directly contacting the corresponding authors of those studies (Majaj et al., 2015; Li & DiCarlo, 2010).


Decision letter

  1. Michael J Frank
    Senior Editor; Brown University, United States
  2. Thomas Serre
    Reviewing Editor; Brown University, United States
  3. Takeo Watanabe
    Reviewer; Brown University, Rhode Island, United States
  4. Hans Op de Beeck
    Reviewer; KU Leuven, Leuven, Belgium

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

This manuscript addresses a major unknown in the literature. Previous empirical studies on the potential role of unsupervised temporal contiguity learning have observed behavioral effects in relatively difficult object discriminations and changes in neural selectivity for very easy object discriminations at the level of single neurons. Linking these two observations is not trivial, and the current manuscript is a very important step towards filling this gap.

Decision letter after peer review:

Thank you for submitting your article "Unsupervised changes in core object recognition behavior are predicted by neural plasticity in inferior temporal cortex" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Michael Frank as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Takeo Watanabe (Reviewer #2); Hans Op de Beeck (Reviewer #3).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

As the editors have judged that your manuscript is of interest, but as described below that additional experiments are required before it is published, we would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). First, because many researchers have temporarily lost access to the labs, we will give authors as much time as they need to submit revised manuscripts. We are also offering, if you choose, to post the manuscript to bioRxiv (if it is not already there) along with this decision letter and a formal designation that the manuscript is "in revision at eLife". Please let us know if you would like to pursue this option. (If your work is more suitable for medRxiv, you will need to post the preprint yourself, as the mechanisms for us to do so are still in development.)

Summary:

This paper raised the very interesting and fundamental question of how unsupervised exposure to objects makes the ventral visual system learn to better recognize objects in a temporally tolerant way. To address this question, an overall unsupervised computational model was built. The model had three core components: a generative model of a baseline monkey IT population, an IT-population-to-behavior linking model, and an IT plasticity rule that predicts how responses to temporally associated object images develop. Sophisticated human psychophysical experiments were also conducted to examine how well the model predicts human object recognition performance. The model, constrained by empirically based monkey IT neural parameters, predicted the human behavioral results remarkably well. These results suggest that this unsupervised learning framework successfully links empirical and theoretical learning results from monkey IT populations to human learning of object recognition.

The reviewers agreed that the manuscript would make a welcome addition to a relatively sparse literature that tries to bridge the behavioral and neurophysiological levels. The general approach was deemed elegant, and the psychophysics results were considered surprisingly clean for experiments conducted at this scale on AMT (given the extensive subject training required). These are arguments that the paper has both the novelty and the quality to be a good fit for the journal.

However, there was also a general sense that the story was somewhat oversold – seemingly providing an almost definitive answer to the questions asked, and in particular on the ability to bridge the two levels. The authors have not yet convinced the reviewers that this is an appropriate conclusion. First, the model fit does not appear to be solid enough, in particular in the high-sensitivity range in which most of the neural data were obtained. Furthermore, the conditions of the neurophysiological and the psychophysical experiments were not sufficiently matched, and for too many of the differences we do not know to what extent they matter. There were also substantial issues brought up on the modeling side, with one reviewer even attempting a crude implementation of the model to understand how parameter-free it truly is.

Overall, this is a manuscript with great promise, but important caveats were found and controls will need to be done.

Essential revisions:

The manuscript can improve substantially in the message it gives about the level of understanding that has been reached with this work.

1. The results confirm that for large image differences there is a smaller behavioral effect of exposure than needed to really say that the missing link has been resolved. Even a model with a relatively large lapse rate does not explain this. There is very little learning for initial d'>>2, even though the model with a 9% lapse rate still predicts it. Also, the lapse rate is a handy solution, but not a very convincing argument. This is a problem, because all the neural findings were obtained with large image differences. The authors do not emphasize this enough, but it should almost be written in red letters across the manuscript, because it totally changes the interpretation of the findings. As we understand the manuscript, α is estimated in Figure 4 using image pairs with a very high d', and then the prediction of the learning effect in Figure 6B fails for such image pairs. Isn't this a major problem? Please comment.

2. The authors made the behavioral task artificially difficult by introducing changes to the test paradigm for which it has not been shown how they might affect the learning effect induced by temporal contiguity exposure at the neural level. This might again affect whether the obtained α can be relied upon. Here are some of the differences. First, in the behavioral experiments, the test phase showed the stimuli on a background. This was not done at the neural level -- do the neural effects of temporal contiguity survive adding a background? Second, in the behavioral experiments, cover trials were interleaved to 'disguise' the simplicity of the testing. How would these cover trials impact neural recordings? Third, the behavioral experiment asks not only for invariance across size, but also in position (at least in screen coordinates), and apparently also pose (p. 3). The neural findings are not necessarily generalizable to this more complicated situation. Fourth, the image discriminability is not matched between the new behavioral experiments and the old electrophysiology. Most of the data here are obtained with faces, which is similar to previous behavioral experiments in this domain, but not representative for the very large image differences in electrophysiology. We encourage the authors to think themselves for ways to address these concerns. The ideal way to address these issues would be to re-do the neural data collection. That is obviously out of scope for a revision given that the current manuscript does not present new neural data. New psychophysical data would not help because the changes with respect to neural recordings are necessary to bring performance below ceiling. Maybe this could be done by switching to another dependent variable that remains relevant at ceiling, such as reaction time data? However, this would add an additional layer of complexity and assumptions about how RT relates to neural selectivity. Maybe the authors can consider this. 
If not, then we would be amenable to the authors (i) acknowledging this discrepancy and toning down their conclusions accordingly, (ii) indicating which differences might be most important (or speculating on why they might not be important), and (iii) mentioning explicitly that further neural recordings are necessary to decide the issue.

3. The authors should also be more complete about the discrepancies in the literature that put further doubt on how accurate our estimate is of the actual effect size and the boundary conditions under which effects of temporal contiguity exposure occur. There are null results in the literature. For example, Crijns et al., 2019 found no effect in rat V1. Nor did a recent human fMRI study (Van Meel et al., 2020, NeuroImage), which used an exposure paradigm with objects and size changes that is very similar to what is used in the current manuscript. This literature is not mentioned, or is even cited wrongly (Crijns et al.). I have difficulty bringing all these null results into agreement with the current set of findings. We might be overestimating the effect of temporal contiguity based upon how the authors have summarized the literature. In terms of future directions, there is clearly also a need for further neural investigation of the boundary conditions of temporal contiguity effects.

4. There was a level of skepticism regarding how truly parameter free the model actually is. A model that goes from single cell responses to behavior necessarily has quite a few moving parts and associated design choices. For some of these choices it would be comforting to see what happens when they are dropped or altered. To what extent is the apparent zero-parameter match of the learning rates subject to an implicit model fitting by means of design choices? Specifically, the authors use the same size tuning template for all neurons in the population, combined with multiplicative noise. It feels as if this is not a random choice. The reviewers suspect that it will – at least on average – preserve the performance of the classifier across different object scales, because the classifier computes correlation coefficients and hence normalizes responses. It also means that the update during learning for the non-swapped exposure is – again on average – in the same direction for all neurons (either increasing or decreasing the rate, depending on the size tuning template). Is the improvement of discrimination performance in the model independent of the specific size tuning template that is used? Or does it depend on whether the firing rate at the large size is higher or lower than at the medium size? An independent sampling of size tuning for the different neurons would have been the more natural choice. It would be comforting to see the results hold up.

Please note that one of the reviewers went to the trouble of quickly coding up a toy variant of the model. They reported that the learning rate in the model appeared to be controllable by setting noise levels, and that the consistent size tuning altered how much the model's classification performance improves in the non-swap exposure scenario. It was also noted that the improvement for non-swap and the deterioration for swap are actually even more pronounced without the consistency in size tuning, which perhaps speaks in favor of at least the qualitative aspect of the model. The question thus remains whether it is possible to get both learning rates to match the data without the tuning consistency. This is an important point, and we expect additional comments and controls.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for submitting your article "Unsupervised changes in core object recognition behavior are predicted by neural plasticity in inferior temporal cortex" for consideration by eLife. Your article has been reviewed by 2 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Michael Frank as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Takeo Watanabe (Reviewer #2); Hans Op de Beeck (Reviewer #3).

The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this letter to help you prepare a revised submission.

Essential Revisions:

As requested by Reviewer #2, please:

1) Add a brief discussion regarding differences in how objects/faces are encoded in the visual cortex and possible implications (or lack thereof) for the interpretation of results.

2) Confirm that corrections for multiple comparisons were implemented.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "Unsupervised changes in core object recognition behavior are predicted by neural plasticity in inferior temporal cortex" for further consideration by eLife. Your revised article has been evaluated by Michael Frank (Senior Editor) and a Reviewing Editor.

The manuscript has been improved but there are some remaining issues that need to be addressed, as outlined below:

The argument that you are making regarding the absence of correction for multiple comparisons sounds reasonable, but it is open to debate. While we are sympathetic to your point about letting "the reader [to] decide on what inferences to draw from the data, confidence intervals and associated statistical test", we also feel that in the current manuscript form, the reader might not realize the absence of corrections. As a compromise, we suggest that you add a paragraph in the methods including your justification to make this clear.

https://doi.org/10.7554/eLife.60830.sa1

Author response

Summary:

This paper raised the very interesting and fundamental question of how unsupervised exposure to objects makes the ventral visual system learn to better recognize objects in a temporally tolerant way. To address this question, an overall unsupervised computational model was built. The model had three core components: a generative model of a baseline monkey IT population, an IT-population-to-behavior linking model, and an IT plasticity rule that predicts how responses to temporally associated object images develop. Sophisticated human psychophysical experiments were also conducted to examine how well the model predicts human object recognition performance. The model, constrained by empirically based monkey IT neural parameters, predicted the human behavioral results remarkably well. These results suggest that this unsupervised learning framework successfully links empirical and theoretical learning results from monkey IT populations to human learning of object recognition.

The reviewers agreed that the manuscript would make a welcome addition to a relatively sparse literature that tries to bridge the behavioral and neurophysiological levels. The general approach was deemed elegant, and the psychophysics results were considered surprisingly clean for experiments conducted at this scale on AMT (given the extensive subject training required). These are arguments that the paper has both the novelty and the quality to be a good fit for the journal.

We are pleased to hear that the reviewers appreciate the bridge we aim to build in this line of research and the specific contributions of this particular study. We note that the “clean” Amazon Mechanical Turk (AMT) data is likely a result of the very large number of subjects that the platform allowed us to study.

However, there was also a general sense that the story was somewhat oversold – seemingly providing an almost definitive answer to the questions asked, and in particular on the ability to bridge the two levels. The authors have not yet convinced the reviewers that this is an appropriate conclusion. First, the model fit does not appear to be solid enough, in particular in the high-sensitivity range in which most of the neural data were obtained. Furthermore, the conditions of the neurophysiological and the psychophysical experiments were not sufficiently matched, and for too many of the differences we do not know to what extent they matter. There were also substantial issues brought up on the modeling side, with one reviewer even attempting a crude implementation of the model to understand how parameter-free it truly is.

Overall, this is a manuscript with great promise, but important caveats were found and controls will need to be done.

Our mistake – we did not intend to claim that this study provides a definitive answer to the motivating question. Instead, our goal was to try to clearly explain what we did (a complicated set of things to carry out and to walk the reader through clearly in a single paper) and then describe how things turned out (a surprisingly good match between the predictions and the data, even though that did not need to be the case). However, on re-reading the paper with these reviewer comments in mind, we agree that our language and phrasing overstate the inferences that can be drawn from these results. As such, we have done our best to modify the text in the abstract, introduction, and discussion to focus on the approach and results, and to avoid language that implies conclusions beyond those results.

In particular, we agree that, because the study we carried out did not show clear psychophysical effects at large initial d’ values (where most of the neural plasticity data were collected), it is impossible for us to know for sure from this study alone that the linkage we infer will indeed hold in that (high initial d’) regime; we can say only that all indications thus far suggest that it might. As such, we now emphasize our uncertainty about what might be going on for high initial d’ tasks, we situate our results in that context, and we propose potential new experiments and models to resolve the issue.

In addition, we have made the following changes:

– We carried out additional analysis to test the hypothesis of lapse rate in our human subject pool and justified its value in our overall model (reviewers’ comment 1).

– We added discussion on condition differences between the neurophysiological and the psychophysical experiments. We explain why we see those differences as a strength rather than a weakness of the study, but we also take care in describing the limits of the inferences that we draw from these results, including the possibility that all of the condition differences have fortuitously conspired to result in largely accurate model predictions (reviewers’ comments 1 and 2).

– We added discussion acknowledging the range of results (especially null results) in the literature on the learning effects induced with temporal continuity and we outline future experiments to test potential limits of learning induced via temporal continuity experience (reviewers’ comment 3).

– We clarified that our size tuning curve for each simulated unit is randomly chosen rather than the same for all units (reviewers’ comment 4).

– We do not want to claim our model is parameter free, only that no parameters were tuned in making the predictions about the size of the behavioral effects in the low and mid d’ regimes. We have now clarified that our model is constrained by IT data (by adding a supplemental figure illustrating the variance distributions from IT data) and emphasized it is not fit to the to-be-predicted behavioral changes (reviewers’ comment 4).

Each of these changes is summarized in detail below in response to specific questions.

Essential revisions:

The manuscript can improve substantially in the message it gives about the level of understanding that has been reached with this work.

1. The results confirm that for large image differences there is a smaller behavioral effect of exposure than needed to really say that the missing link has been resolved. Even a model with a relatively large lapse rate does not explain this. There is very little learning for initial d'>>2, even though the model with a 9% lapse rate still predicts it. Also, the lapse rate is a handy solution, but not a very convincing argument. This is a problem, because all the neural findings were obtained with large image differences. The authors do not emphasize this enough, but it should almost be written in red letters across the manuscript, because it totally changes the interpretation of the findings. As we understand the manuscript, α is estimated in Figure 4 using image pairs with a very high d', and then the prediction of the learning effect in Figure 6B fails for such image pairs. Isn't this a major problem? Please comment.

We agree that we should have explained this issue far earlier in the paper, and we have now done so. To summarize:

The reviewers are correct that most of the IT neural data were collected for relatively large object differences (e.g. dog vs. boat). With those large differences, typical human behavioral performance at discriminating among objects is likely high (e.g. d’ greater than ~3; though we do not have tests with the actual objects used in the neural study, as those images were not derived from 3D object models). If human subjects have behavioral lapses (i.e. they make errors unrelated to the sensory evidence provided), then our modeling suggests that there is a ceiling on measurable performance, and thus that detecting behavioral changes of the magnitudes the neural data predict in this behavioral performance regime (i.e. easy discrimination tasks) would be essentially minimal or impossible given the exposure dose we provided here (see Figure 6A). In fact, attentional lapses are almost inevitable in most psychophysical tests (Prins, 2012). As such, we do not consider this to be a “handy solution” (to use the Reviewer’s phrase), but an experimental limitation that is predicted by standard psychophysical models (i.e. a non-zero amount of stochasticity caused by attentional state fluctuations or mistakes in motor action) (Prins, 2012; Wichmann and Hill, 2001).

Because we were aware of this experimental limitation, at the start of our study we focused on object discrimination tasks with initially moderate d’ values (i.e. moderately difficult tasks). The key idea was that we could tune the plasticity of the simulated IT population in the same d’ regime where the neural plasticity data were collected (high initial d’; (Li and DiCarlo, 2010)) by selecting the preferred and non-preferred objects from a total of 64 objects for each recording site (clarified in Methods lines 1347-1354), and then – because we also have a separate model of the initial IT population patterns across all levels of image discriminability – ask how well the overall model predicts in the regime where the behavioral effects can be measured (low to moderate initial d’s). And we report that this prediction is quantitatively quite accurate (but not perfect) in these behavioral performance regimes for a range of exposure types. These results are the main contribution of the study (Figure 5A, D and E), and we hope that the reviewers will agree that this quantitative correspondence did not have to emerge from the experiments we conducted, and is thus a contribution. Moreover, in some sense this quantitative correspondence is, at least to us, even more surprising because, as the reviewers correctly point out, we tuned the learning rate of the neural plasticity model based on measurements of neural responses to images in the high behavioral performance regime! This quantitative correspondence is also notable because there are differences between the conditions used in the behavioral and (two) neural experiments that the model had to successfully bridge (see response to reviewers’ comment 2).

The accuracy of these predictions in the low and moderate initial d’ regime implies that the linkage is likely to apply across all behavioral regimes. However, we agree with the reviewer(s) that we have not directly shown that the linkage is accurate in the high initial d’ regime. Indeed, we did not observe strong behavioral changes in that regime with the unsupervised exposure dose we were able to deliver in this study. Why not? In the paper, we put forward the hypothesis that even small lapse rates (i.e. small levels of stochasticity caused by attentional state fluctuation or mistakes in motor action) prevent us (or anyone) from measuring changes in behavioral accuracy in that regime given the dose of unsupervised exposure we deployed. In the new version, we provide further evidence in support of that hypothesis by showing ceiling performance in subjects measured with basic level tasks (ceiling is not 100%, which indicates lapse rate is not 0; new Supp. Figure 3). However, we agree that an alternative explanation – one that we previously failed to emphasize – is the possibility that no behavioral effects will ever be found in this high initial performance regime, no matter how strong the unsupervised exposure dose (Introduction lines 107-109; Discussion lines 705-707), and that our results presented in this study cannot directly test the linkage of the neural effects to the behavioral effects in this regime (because we found no behavioral effects to predict!). As a result, we cannot yet guarantee that the proposed linkage model holds over all initial d’ regimes.

To further explore and clearly demarcate what we can say and cannot say as a result of our experiments (explained above), we have done the following:

1. To infer the empirical lapse rate of our subject pool, we estimated the performance ceiling of subjects in our study. To do so, we used half of the trials to rank-order all the tasks we tested in Figure 6 from difficult to easy, and then plotted the mean population performance of the other half of the trials on those rank-ordered tasks (panel A). This plot shows a plateau of mean performance around 90% correct, which we take as an estimate of the mean measurable performance ceiling that can be observed in our subject pool even if perfect sensory evidence were provided. From this performance ceiling, we can infer a mean lapse rate of our subject pool (i.e. a 20% lapse rate gives a 90% performance ceiling). This is still within the reasonable range of human lapse rates (<20%; (Manning et al., 2018)). Furthermore, the cumulative distribution of all bootstrapped population performances from our basic-level tasks (panel B) showed a maximum performance of 95%. Taken together, this evidence indicates the existence of a performance ceiling and justifies our use of a lapse rate in our full model when making predictions of behavioral changes. We have now included this plot as supplemental Figure 3 and added text to the Results (lines 594-607). We are aware that even though we were able to indirectly estimate the lapse rate of our subject pool, this is not a direct measurement in a well-controlled condition; we specifically discuss this point in the Discussion (lines 687-707) and propose alternative possibilities and future directions.

While we were redoing the old and new model simulations for Figure 6, we found an error in our previous implementation of the lapse rate. That implementation error resulted in the previously submitted Figure 6B plot having the correct qualitative form, but not quantitatively correct values. We have now double checked the simulations in Figure 6 to fix this problem. And in the resubmission, we have updated Figure 6 to show the predicted learning effect with three different lapse rates to help make clear how lapse rate affects both the observed initial performance (x axis) and the learning effect size that is expected to be observed (y axis). Specifically, we now include the base model with no lapse rate at all (i.e. lapse rate = 0%), the base model with a lapse rate determined from the maximum task performance observed in our subject pool (accuracy ~95.5%, lapse rate = 9%), and the lapse rate inferred from the sigmoidal estimate of the performance ceiling in our subject pool (accuracy ~90%, lapse rate = 20%; see Supp. Figure 3).
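The arithmetic behind these lapse-rate choices can be illustrated with a toy calculation. The sketch below is our own illustrative code (not the implementation used in the paper); it assumes a two-choice task, the standard lapse model (observed accuracy = (1 − lapse) × sensory accuracy + lapse × 0.5), and an equal-variance signal detection mapping (d’ = 2·z(proportion correct)). It shows why a 9% lapse rate caps observable accuracy at ~95.5% and a 20% lapse rate at ~90%, and why a fixed underlying d’ improvement becomes progressively harder to measure as the initial d’ grows toward that ceiling:

```python
from statistics import NormalDist

_Z = NormalDist()  # standard normal distribution

def observed_accuracy(sensory_accuracy, lapse_rate):
    """Standard lapse model: on a lapse trial the subject guesses
    (50% correct in a two-choice task); otherwise accuracy tracks
    the sensory evidence."""
    return (1 - lapse_rate) * sensory_accuracy + lapse_rate * 0.5

def accuracy_to_dprime(accuracy):
    """Equal-variance signal detection for a two-choice task."""
    return 2 * _Z.inv_cdf(accuracy)

# Even perfect sensory evidence cannot exceed a ceiling of 1 - lapse/2:
print(observed_accuracy(1.0, 0.09))   # ceiling near 0.955
print(observed_accuracy(1.0, 0.20))   # ceiling near 0.90

# A fixed underlying improvement of +0.4 d' is easy to measure at
# moderate initial d' but shrinks as performance approaches the ceiling:
for d0 in (0.5, 1.5, 2.5, 3.5):
    p0 = _Z.cdf(d0 / 2)              # sensory accuracy before exposure
    p1 = _Z.cdf((d0 + 0.4) / 2)      # sensory accuracy after exposure
    measured = (accuracy_to_dprime(observed_accuracy(p1, 0.09))
                - accuracy_to_dprime(observed_accuracy(p0, 0.09)))
    print(f"initial d' = {d0}: measured d' change = {measured:.2f}")
```

The illustrative +0.4 d’ increment and the particular initial d’ values are arbitrary choices for the demonstration; the qualitative point is only that the same underlying change yields a smaller measured change near the lapse-imposed ceiling.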

2. We now explain in the Introduction that we have little to no sensitivity to measure behavioral effects in the high initial d’ regime where the original neural data were collected, so we needed a different approach (Introduction: lines 90-100).

3. We now emphasize that model predictions are made by tuning the unsupervised learning portion of the model to neural data in the high initial d’ regime (where the neural data were collected) and then making full model predictions in the low to moderate initial d’ regimes where behavioral learning effects are predicted to occur (Introduction: lines 90-100; Discussion: lines 664-673; Methods: lines 1347-1354).

4. We modified the writing in the introduction and in the discussion to make it clear that – because of the limits of the psychophysical experiments that we did -- we (the field) do not yet know what is going on in the high initial d’ regime (where the original neural data were collected), and, based on our modeling, we propose future psychophysical experiments that should produce effects in that regime (if the overall model is correct). (Introduction: lines 108-110; Discussion: 703-705)

Specific changes in manuscript text are (all modified text in the resubmitted manuscript are highlighted for visibility):

To address point 1 above, we modified result section for Figure 6 (lines 578-614):

“We found that our overall model quite naturally – for the reasons outlined above – predicted the smaller learning effect for initially difficult tasks (the left side of Figure 6B). […] That being said, it also means that the current study is simply not able to test the IT-plasticity to behavioral-learning linkage in the initial high d’ range, and we take that as a target for future experimental work (see Discussion).”

To address point 1 above, we modified the Discussion section (lines 687-707)

“The dependency of behavioral learning effects on initial task difficulty is also an important observation. […] However, our results here cannot rule out the possibility that no behavioral effects will be found in this high initial performance regime, no matter how strong the unsupervised exposure dose. Such a finding would falsify the model presented here.”

To bring up this issue early, in Introduction section (lines 90-109):

“Because humans make sensory-independent mistakes due to lapses in attentional state, these sensory-independent random choices (referred to as the lapse rate) set a ceiling on measurable human behavioral performance (Prins, 2012; Wichmann and Hill, 2001). When tasks are in this saturated regime, it is hard to detect any learning effect, as any change in sensory representation would be hidden by the behavioral ceiling (see later). Therefore, we focused our psychophysical study on the mid-range of task difficulty, where learning effects can be measured. However, this meant that the task difficulty in the human psychophysics could not be in the basic object regime where the neural data were collected. Thus, to make behavioral predictions from the neural data, we took advantage of the overall model to build this bridge: we first tuned the unsupervised plasticity rule with neural data from basic-level object images (Li and DiCarlo, 2010); we then used a generative IT model – capable of simulating the response of each artificial IT neuron for a wide range of image discriminability levels – to make quantitative predictions of behavioral change in the regime where the human behavioral learning effects can be readily measured.

Indeed, our behavioral tests revealed a strong dependency of the learning effect on initial task difficulty, with initially hard (d’<0.5) and initially easy (d’>2.5) COR tasks showing smaller measured learning effects than COR tasks of intermediate initial difficulty. We found that our overall model was quite accurate in its predictions of the direction, magnitude, and time course of the changes in measured human size tolerance in the regime where behavioral effects were readily measured, for all of the tested unsupervised experience manipulations. The overall model also predicted how the behavioral effect size depended on the initial d’ once we assumed behavioral lapses (Prins, 2012) in the model at approximately the same level as those inferred in our subject pool. We note that, because of the (expected) inability to observe behavioral changes for tasks with initially high d’, this study cannot confirm or refute the hypothesized linkage between IT neural effects and behavioral effects in that particular regime.”

To clearly explain how we determined the learning rate in high d’ regime and applied it to other task difficulties, in the Discussion section (lines 664-673):

“We think it is quite interesting and demonstrative of this approach that the overall model could predict behavioral learning effects in the low initial d’ and moderate initial d’ regime for different exposure types (build or break), even though the images used in those tasks are in a different d’ regime from the specific images used in the neuronal plasticity experiments used to tune the model learning (i.e. those neural experiments were done with images in a high d’ regime). We note that the ability to make such tests in these other d’ regimes is the key reason that we built the overall composite model in the first place: a generative IT model (simulating IT population patterns to a wide range of image conditions) + an IT neural-to-behavior linking model + an IT plasticity model (independent of specific images). End-to-end models such as this allow us to predict effects under conditions that no experiment has yet tested, and to then test some of them as we have done here. That is, the integrated model imparts new generalizable knowledge and understanding.”

In the methods section, we clearly explained how we obtained learning rate and factors that could influence its value (lines 1347-1369):

“The neural plasticity data were collected by selecting preferred (P) and nonpreferred (N) images for each unit, which are two different objects (Li and DiCarlo, 2010). […] Changes in the statistics of the simulated IT population (i.e. covariance matrix, variance across sizes or clutter variance, etc) can influence the initial state of the IT population, and as a consequence influence both plasticity rate value and the predicted changes for each simulated IT neuron.”

To clearly state the limitation of our work in the high d’ regime (point 4 above), we added the following to the Introduction (lines 107-109):

“We note that, because of the (expected) inability to observe behavioral changes for tasks with initially high d’, this study cannot confirm or refute the hypothesized linkage between IT neural effects and behavioral effects in that particular regime.”

In the Discussion section (lines 705-707):

“However, our results here cannot rule out the possibility that no behavioral effects will be found in this high initial performance regime, no matter how strong the unsupervised exposure dose. Such a finding would falsify the model presented here.”

2. The authors made the behavioral task artificially difficult by introducing changes to the test paradigm for which it has not been shown how they might affect the learning effect induced by temporal contiguity exposure at the neural level. This might again affect whether the obtained α can be relied upon. Here are some of the differences.

The reviewers are correct in that we did not expose or test the subjects with any of the exact stimuli that were used in the monkey neural experiments. Instead, we rely on a model of IT neural activity that applies over a range of conditions and make behavioral change predictions in image exposure and test conditions where the neural plasticity data were not measured. Because the predictions turn out to be accurate across this range of conditions, we consider this to be a strength, rather than a weakness, of the study. If the predictions had turned out to be inaccurate, then each of the differences between the neural and the behavioral data would become important to consider in trying to understand where the model is broken (e.g. IT does not actually support this behavior, or it does not support it as modeled, or it is not plastic in humans, etc.). Indeed, if the results had revealed that prediction failure, we would have started carrying out new experiments to try to pin down the condition difference(s) that might isolate the source of the prediction failure. However, that is not what we found.

Of course, there remains the possibility that the match of the model predictions to the empirical data (in the low and mid d’ range) derives from the fortuitous (or unfortunate, depending on your point of view) coincidence of separate effects of each of the differences that the reviewer points out below. In the new manuscript, we have now clearly indicated these differences (Discussion: lines 708-728), and the possibility, however unlikely, that a fortuitous coincidence of model assumption failures is still a possible explanation of our finding of a good alignment between the model predictions and the empirical behavioral data.

For clarity, we describe those model assumptions below as they relate to the condition differences that the reviewers raise.

First, in the behavioral experiments, the test phase showed the stimuli on a background. This was not done at the neural level – do the neural effects of temporal contiguity survive adding a background? Second, in the behavioral experiments, cover trials were interleaved to 'disguise' the simplicity of the testing. How would these cover trials impact neural recordings?

Because both of these questions are asking about condition differences in test images, we addressed them together as follows.

First, it is important to clarify that the main learning effect of temporal continuity is introduced during the exposure phase, where the exposed images are presented without background in both the neural and behavioral experiments. We consider that if neural plasticity is induced mainly during unsupervised exposure (as found in the original Li et al. studies), then alterations to the test images relative to the Li et al. studies (e.g. adding background or introducing cover trials during test) should not change the neural plasticity (Discussion: lines 716-718).

Second, it is important to emphasize that the model of (initial) generative IT population activity was derived from IT population recordings with images of objects on naturalistic backgrounds (similar to those used in the test images here). That generative IT model was constrained by those IT data. The IT-COR-behavior linking model that linearly reads out from the model IT population representation to make predictions about human performance is also based on the same IT population recording dataset. Therefore, we here used the same method to generate test images with backgrounds and variations for human psychophysics, so that we could use the IT-COR-behavior model to establish the link between the simulated IT population and measured human behavior. Thus, for our modeling purposes and the behavioral experiments we aimed to explain, we think the choice of test images is justified. We have added additional text in the Discussion (lines 712-716) to clarify this for the reader.

Third, the behavioral experiment asks not only for invariance across size, but also in position (at least in screen coordinates), and apparently also pose (p. 3). The neural findings are not necessarily generalizable to this more complicated situation.

From this comment, it appears that we failed to clearly explain the experiments we did. During the exposure phase, the only variation we introduced is object size (Introduction: lines 83-86). We did not (in this study) consider unsupervised exposures to changes in position or pose. However, during the test phase, we introduced changes in size, position, pose and background. We did this to make sure that each subject was engaged in each task as a genuine object discrimination task (rather than a pairwise image discrimination task). Thus, most of the test trials were there to ensure that the subjects were working in this mode, and we later sorted out the specific image pairs where the prior work allowed us to make predictions (i.e. changes in object size). Thus, most of the test trials were cover trials in that sense. We have now clarified this in the Discussion (lines 712-716).

In addition, we agree with the reviewer that all of our results only apply to learning about changes in object size. We have now further clarified that in the Introduction (lines 83-85), and we have added text in the Discussion (lines 817-819) to emphasize that the effects of other types of invariance are still to be tested. We also now state that the model we provide here can be used to make predictions about those behavioral effects, but we cannot yet say whether those predictions will be accurate.

Fourth, the image discriminability is not matched between the new behavioral experiments and the old electrophysiology. Most of the data here are obtained with faces, which is similar to previous behavioral experiments in this domain, but not representative for the very large image differences in electrophysiology.

This is addressed in the replies to the reviewers’ comment 1 above. In short, the model allows us to make predictions across those differences, and the predictions turned out to be accurate (lines 664-673).

We encourage the authors to think themselves for ways to address these concerns. The ideal way to address these issues would be to re-do the neural data collection. That is obviously out of scope for a revision given that the current manuscript does not present new neural data. New psychophysical data would not help because the changes with respect to neural recordings are necessary to bring performance below ceiling. Maybe this could be done by switching to another dependent variable that remains relevant at ceiling, such as reaction time data? However, this would add an additional layer of complexity and assumptions about how RT relates to neural selectivity. Maybe the authors can consider this. If not, then we would be amenable to the authors (i) acknowledging this discrepancy and toning down their conclusions accordingly, (ii) indicating which differences might be most important (or speculating on why they might not be important), and (iii) mentioning explicitly that further neural recordings are necessary to decide the issue.

The reviewer has very nicely recapitulated our thought process on how to proceed with changes to the manuscript. As suggested, we have now more narrowly scoped the inferences we draw, acknowledged these issues up front in the manuscript (lines 86-87), and discussed them extensively in the Discussion.

The only future-oriented point on which we differ slightly with the reviewers is that, while we agree that more neural recordings might be useful in the low to moderate initial d’ performance regime, to our mind the most important thing that needs to be done next is even more stringent psychophysical studies in the high initial d’ performance regime (or with a higher unsupervised exposure dose). Based on the results presented here, the neural data clearly predict that internally accumulated sensory evidence in this regime is strongly weakened by unsupervised exposure, but the linking model – including its lapse component – tells us that those effects will be very difficult to measure. Now that we have the results of the current study, we are on much more solid ground to proceed with those experiments in a follow-up study, and if we (or others) fail to find behavioral effects after dealing with the psychophysical challenges, this would importantly falsify the current leading linking model of IT to core object recognition behavior. We discuss this now in the modified Discussion, along with proposals for further neural recordings to further bridge differences between the neural and behavioral studies.

Specific changes in manuscript text are in the Discussion section (lines 704-724):

“In this work, since we want to establish a quantitative link between single IT neuronal plasticity and human unsupervised learning effect, we matched many of the conditions to those used in prior IT plasticity work. […] Future neural recordings in non-human primates with behavioral learning measurement could be helpful to directly validate the link suggested by our model using the exact same objects and test images as the ones in our human psychophysics.”

3. The authors should also be more complete about the discrepancies in the literature that put further doubt on how accurate our estimate is of the actual effect size and the boundary conditions under which effects of temporal contiguity exposure occur. There are null results in the literature. For example, Crijns et al., 2019 found no effect in rat V1. Nor did a recent human fMRI study (Van Meel et al., 2020, NeuroImage), which used an exposure paradigm with objects and size changes that is very similar to what is used in the current manuscript. This literature is not mentioned, or even cited wrongly (Crijns et al.,). I have a difficulty with bringing all these null results in agreement with the current set of findings. We might overestimate the effect of temporal contiguity based upon how the authors have summarized the literature. In terms of future directions, there is clearly also a need for further neural investigation of the boundary conditions of temporal contiguity effects.

We have now corrected the citation (line 793-794) and added a paragraph in Discussion to address the null results in the literature and the necessity to test the limits of temporal continuity learning effects (Discussion: lines 795-813).

“Although prior experimental work seemed qualitatively consistent with the overarching theoretical idea, it did not demonstrate that the behavioral learning effects could be explained by the IT neural effects. […] In sum, the literature is still highly varied and future neurophysiological and behavioral experiments are necessary to test the boundary conditions of temporal contiguity induced effects. ”

4. There was a level of skepticism regarding how truly parameter free the model actually is. A model that goes from single cell responses to behavior necessarily has quite a few moving parts and associated design choices.

As the reviewers pointed out, it is impossible to have a parameter-free model that goes from single-neuron plasticity to behavior. Instead, our model is a physiological-data-constrained model. Here are our design choices. First, we constrained the simulated IT population based on real IT recording data, which includes the IT representation of different objects (simulated by a multivariate Gaussian based on the covariance matrix of the population), variance across stimulus size (simulated by extracting size tuning kernels from real IT neurons), variance across random backgrounds (simulated by introducing variance for image exemplars from the data distribution; Figure 3—figure supplement 1D) and variance across repeats (simulated by introducing variance across repeats with values from the real distribution; Figure 3—figure supplement 1E). We have now added a new supplemental figure to show the distributions of variance (Figure 3—figure supplement 1). These distributions constrained the representation and variance of the simulated IT population, which in turn influence the initial response profiles of individual simulated neurons.

Second, we constrained the learning rate based on the neuronal plasticity of single IT recording sites. We hypothesized a learning rule with one free parameter – the learning rate, which describes the rate of neural activity change for consecutive images. We tuned the value of that learning rate ONLY by matching the population learning effect of the simulated IT population with the measured learning effect of real IT neurons. Therefore, the learning rate is specific to the statistics of the simulated IT population and is constrained by real IT plasticity data.

Third, we constrained the IT-COR-behavior linking model parameters (specifically, the number of training samples and the population size) based on human initial performance (in effect, implementing the linking model proposed by prior work (Majaj et al., 2015)).

Most importantly, all of the above parameters were locked before we made predictions of human learning effects (except for a final introduction of a lapse rate to explain the ceiling effect for easy tasks). Therefore, our model is not parameter free overall. However, all free parameters were constrained by real data and were not tuned to the behavioral learning effects that we measured here. We emphasize that there is no parameter tuning when predicting the human learning effects. In practice, we did not include the lapse rate in the first model prediction because its importance only became obvious in retrospect and, given the reviewers’ suggestions (above), we have done more work to independently justify the choice of this parameter (Figure 6—figure supplement 1: indication of ceiling performance). In sum, all of the model behavioral effects in Figures 5A, D and E were true (i.e. parameter free) predictions. The prediction in Figure 6B is largely parameter free (new Figure 6B, black dots), with the exception of the lapse rate, which is reasonably constrained by human ceiling performance (see response to reviewers’ comment 1).
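To make the one-parameter plasticity rule concrete, here is a minimal sketch in Python. The specific update form (the response to the trailing image of an exposed pair drifting toward the response to the leading image at rate `alpha`) and the simple grid search over `alpha` are our hypothetical simplifications for illustration; the actual rule and tuning procedure are those described in the Methods.

```python
import numpy as np

def exposure_update(r_lead, r_trail, alpha):
    """One unsupervised exposure event: the image eliciting `r_trail`
    follows the image eliciting `r_lead` in time. Each simulated
    neuron's response to the trailing image moves toward its response
    to the leading image at plasticity rate `alpha`.
    (A hypothetical simplification of the one-parameter rule.)"""
    return r_trail + alpha * (r_lead - r_trail)

def fit_alpha(target_effect, r_lead, r_trail, n_exposures,
              candidates=np.linspace(0.0, 0.2, 201)):
    """Pick the alpha whose simulated population effect (mean response
    change after `n_exposures` events) best matches the learning effect
    measured in the real IT plasticity data (`target_effect`)."""
    best, best_err = candidates[0], np.inf
    for a in candidates:
        r = r_trail.copy()
        for _ in range(n_exposures):
            r = exposure_update(r_lead, r, a)
        err = abs(np.mean(r - r_trail) - target_effect)
        if err < best_err:
            best, best_err = a, err
    return best
```

Note that, as stated above, the fitted `alpha` is specific to the population statistics it was tuned on and is not a universal value.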

We now emphasize the importance of constraining models by experimental data and have made the following text changes:

In the legend of Figure 1 (line 175):

“This study asks if that idea is quantitatively consistent across neural and behavioral data with biological data constrained models.”

In the Results section (lines 292-298):

“In sum, this model can generate a new, statistically typical pattern of IT response over a population of any desired number of simulated IT neural sites to different image exemplars within the representation space of 64 base objects at a range of sizes (here targeting “small”, “medium”, and “big” sizes to be consistent with human behavioral tasks; see Methods for details). The simulated IT population responses were all constrained by recorded IT population statistics (Figure 3—figure supplement 1). These statistics define the initial simulated IT population response patterns and thus they ultimately influence the predicted unsupervised neural plasticity effects and the predicted behavioral consequences of those neural effects.”

Lines 458-462:

“At this point, we could – without any further parameter tuning – combine each of these three model components into a single overall model that predicts the direction, magnitude and time course of human unsupervised learning effects that should result from any unsupervised learning experiment using this exposure paradigm (pairwise temporal image statistics).”

Lines 474-477:

“Again we emphasize that, while the overall model relies heavily on data and parameters derived explicitly or implicitly from these prior studies (Li and DiCarlo, 2010; Majaj et al., 2015), no components or parameters of this model and nor its predictions depended on the behavioral data collected in this study.”

Lines 531-535:

“In sum, it is not the case that any model of the form we have built here will produce the correct predictions – proper (biological) setting of the unsupervised IT plasticity rate and proper (biological) setting of the IT-to-COR-behavior linkage model are both critical. It is important to note that we did not tune these biologically constrained hyperparameters based on fitting the unsupervised behavioral learning effects in Figure 2 – they were derived in accordance with prior neurobiological work as outlined above.”

In Discussion (lines 637-639):

“Each of these three model components was guided by prior work and constrained by recorded IT data to set its parameters, and the combined overall computational model was directly used to predict human learning effect without further parameter tuning.”

For some of these choices it would be comforting to see what happens when they are dropped or altered.

We strongly resonate with this comment; it captures our own first reaction to the match between the model and the results. That is, we wondered: maybe all models produce the same result? We took exactly the strategy the reviewer proposed – we tested how altered parameters (i.e. changing the model) would influence the match to the results in Figure 5. Here, we altered three parameters: the plasticity rate, the number of training samples and the number of recording sites (lines 506-535). We found that the plasticity rate can significantly alter the predicted learning effect magnitude so that it no longer matches the observed learning effect magnitude. We also found that there is a manifold in the decoder parameter space that matches human initial performance. As long as the parameters are on this manifold, the human learning effect predicted by each of those models is similar to the measured learning effect. These counterfactual results gave us comfort that the observed match between the (original) model and the behavioral data was interesting, and hence we included that in the original submission.

Modified text in Results (lines 526-525):

“In contrast, when we built model variants in which the choices of the two hyperparameters did not match human initial performance, the unsupervised learning effect predicted by the overall model clearly differed from the observed human learning effect. Specifically, when an overall model started off with “super-human” performance, it overpredicted the learning effect, and when a different model started off “sub-human”, it underpredicted the learning effect.

In sum, it is not the case that any model of the form we have built here will produce the correct predictions – proper (biological) setting of the unsupervised IT plasticity rate and proper (biological) setting of the IT-to-COR-behavior linkage model are both critical. It is important to note that we did not tune these biologically constrained hyperparameters based on fitting the unsupervised behavioral learning effects in Figure 2 – they were derived in accordance with prior neurobiological work as outlined above.”

It is worth noting that the simulated IT population responses were all constrained by recorded IT population statistics. Changes in the statistics of the simulated IT population (i.e. covariance matrix, variance across sizes or clutter variance, etc) can influence the initial state of the IT population, and as a consequence influence both the learning rate value and the predictions. However, since deviations from the real IT population live in a much higher-dimensional parameter space, we did not explore further in this direction, but rather locked the generative IT model according to the real IT population statistics. Future work with proper constraints could be done in this direction.

Modified text in Results (lines 295-298):

“The simulated IT population responses were all constrained by recorded IT population statistics (Figure 3—figure supplement 1). These statistics define the initial simulated IT population response patterns and thus they ultimately influence the predicted unsupervised neural plasticity effects and the predicted behavioral consequences of those neural effects.”

Modified text in Methods (lines 1356-1360):

“However, it is important to note that the learning rate value is determined by the IT population statistics and the plasticity rule chosen here and thus should not be taken as a universal value. Changes in the statistics of the simulated IT population (i.e. covariance matrix, variance across sizes or clutter variance, etc) can influence the initial state of the IT population, and as a consequence influence both plasticity rate value and the predicted changes for each simulated IT neuron.”

To what extent is the apparent zero-parameter match of the learning rates subject to an implicit model fitting by means of design choices? Specifically, the authors use the same size tuning template for all neurons in the population, combined with multiplicative noise. It feels as if this is not a random choice. The reviewers suspect that it will – at least on average – preserve the performance of the classifier across different object scales, because the classifier computes correlation coefficients and hence normalizes responses. It also means that the update during learning for the non-swapped exposure is – again on average – in the same direction for all neurons (either increasing or decreasing the rate, depending on the size tuning template).

First, we would like to clarify that we did not (and do not) claim a zero-parameter match for estimating the neural plasticity rate. Instead, the neural plasticity rate is tuned so that the simulated IT neurons match the amplitude of changes observed in the prior IT plasticity studies (Li and DiCarlo, 2010). Because that plasticity rate is tuned using simulated neurons from the generative model of IT and the size tuning kernels, any changes in those response profiles might change the value of the plasticity rate needed to match those prior data. Note that, because of this, we do not view the plasticity rate that we report as a key claim of our study.

Second, this comment implies that we were not clear about our method of simulating neural size tuning in our original manuscript. To clarify, for each simulated IT site, we randomly draw a size tuning kernel from a pool of ~168 size tuning kernels (fit to the mean size tuning of each IT site to all objects) that we had previously fit to real IT data. Therefore, the distribution of size tuning in the simulations is, by construction, matched to the distribution of size tuning observed in IT (Figure 3—figure supplement 1B and C). Because of this randomness in the size tuning kernel selected for each neural site, and the variance introduced for the site’s responses to different image exemplars, the response profile of each simulated neuron is unique, and thus the updates during simulated unsupervised learning are not always in the same direction (Methods: lines 1354-1358). To demonstrate this point more clearly, we have added another simulated IT site in Figure 3A, and added example IT recordings across size and objects in Figure 3—figure supplement 1A.

Modified text in methods (lines 1354-1358):

“Because of the initial randomness in the size tuning kernel selected for each neural site (Figure 3—figure supplement 1B and C) and the clutter variance introduced for the site’s responses to different image exemplars (Figure 3—figure supplement 1D), the response profile of each simulated neuron is unique and thus the update direction during simulated unsupervised learning is not always in the same direction across the population.”

Is the improvement of discrimination performance in the model independent of the specific size tuning template that is used? Or does it depend on whether the firing rate at the large size is higher or lower than at the medium size? An independent sampling of size tuning for the different neurons would have been the more natural choice. It would be comforting to see the results hold up.

As clarified before, we did independent sampling for size tuning curves. Therefore, the results presented here are based on random sampling of size tunings for different neurons, which is exactly what the reviewers suggested here.

However, since it caused confusion in the previous version, we have now added another simulated IT example in Figure 3 to reflect the random sampling and rephrased the method of size tuning creation and selection in the figure legend, result section and method section to emphasize this point.

In the figure legend: “Then, a size tuning kernel is randomly drawn from a pool of size tuning curves (upper right box; kernels fit to real IT data) and multiplied by the object response distribution (outer product), resulting in a fully size-tolerant (i.e. separable) neural response matrix (64 objects x 3 sizes).”

In the result section: “To simulate the variance in object size, for each simulated site, we randomly chose one size tuning kernel from a pool of size tuning curves that we had obtained by fitting curves to real IT responses across changes in presented object size (n=168 recording sites; data from (Majaj et al., 2015)). This process is repeated independently for each simulated site.”

In the method section: “To generate the simulated IT tuning over changes in object size, for each simulated IT site, we multiplied (outer product) that 64x1 vector with a randomly chosen size tuning kernel (1x3 vector) that was randomly sampled from a batch of size tuning curves that we had obtained by fitting curves to real IT responses across changes in presented object size (n=168 recording sites; data from (Majaj et al., 2015)).”

Previously in the model schematic in Figure 3 there was only one simulated IT site and could have caused confusion, so we have added one more example simulated site to the figure to show the fact that each simulated IT site has a size tuning kernel randomly drawn from the pool of kernels fit to real IT data.
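For readers who want the mechanics of the random-kernel sampling at a glance, here is a minimal sketch. The means, covariance, kernel values, and clutter noise level below are placeholder assumptions for illustration only; the actual model draws these quantities from the fitted IT population statistics and the pool of 168 kernels fit to real recording sites.

```python
import numpy as np

rng = np.random.default_rng(0)

n_objects, n_sizes, n_kernels = 64, 3, 168
mu = np.zeros(n_objects)               # placeholder: fitted IT mean responses
cov = np.eye(n_objects)                # placeholder: fitted IT covariance
# placeholder stand-in for the pool of size tuning kernels fit to real sites
kernel_pool = rng.uniform(0.2, 1.0, size=(n_kernels, n_sizes))

def simulate_site(clutter_sd=0.1):
    """One simulated IT site: a 64-object response vector multiplied by an
    independently drawn size tuning kernel (outer product), plus
    exemplar-to-exemplar ('clutter') variance."""
    obj_resp = rng.multivariate_normal(mu, cov)        # 64-object tuning
    kernel = kernel_pool[rng.integers(n_kernels)]      # random 1x3 kernel
    resp = np.outer(obj_resp, kernel)                  # 64 x 3, separable
    return resp + rng.normal(0.0, clutter_sd, resp.shape)
```

Because each call draws its own kernel and clutter noise, no two simulated sites share a response profile, which is why the learning updates are not all in the same direction across the population.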

Please note that one of the reviewers actually went to the trouble of quickly coding up a toy variant of the model. They reported that it appeared as if the learning rate in the model could be controlled by setting noise levels, and that the consistent size tuning altered how much the model's classification performance improves for the non-swap exposure scenario. It was also noted that the improvement for non-swap and the deterioration for swap are actually even more pronounced without the consistency in size tuning, which maybe speaks in favor of at least the qualitative aspect of the model. The question thus remains whether it is possible to get both learning rates to match the data without the tuning consistency. This is an important point, and we expect additional comments and controls.

We are very impressed and thankful that the reviewers took the time to simulate a variant of the overall model! However, we think it should be clear at this point that we did not use fixed size tuning and our model did not assume tuning consistency. The results presented here represent random draws from the size tuning kernels extracted from real data. We have also explained, at the beginning of our response to this comment, how we optimized the learning rate for our simulated IT population and what factors can influence its value (specifically, the IT population statistics). To further clarify, the learning rate value is specific to the IT population statistics and may not be a universal value. As also stated in the reply above, we do not view the plasticity rate that we report as a key claim of our study. We have now added clarification in the text (Methods: lines 1356-1360).

We are fully aware that the IT representation (simulated with the multidimensional Gaussian model), the size tuning kernels (variance across size) and the noise levels (variance across samples or repeats) can influence the learning rate value. We have modified the Results and Methods sections to make this point clear.

Result lines 295-298:

“The simulated IT population responses were all constrained by recorded IT population statistics (Figure 3—figure supplement 1). These statistics define the initial simulated IT population response patterns and thus they ultimately influence the predicted unsupervised neural plasticity effects and the predicted behavioral consequences of those neural effects.”

Methods: lines 1356-1360

“However, it is important to note that the learning rate value is determined by the IT population statistics and the plasticity rule chosen here and thus should not be taken as a universal value. Changes in the statistics of the simulated IT population (i.e. covariance matrix, variance across sizes or clutter variance, etc) can influence the initial state of the IT population, and as a consequence influence both plasticity rate value and the predicted changes for each simulated IT neuron.”

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Essential Revisions:

As requested by Reviewer #2, please:

1) Add a brief discussion regarding differences in how objects/faces are encoded in the visual cortex and possible implications (or lack thereof) for the interpretation of results.

We interpret the reviewer’s comment as asking us to update the manuscript to discuss the possibility that the unsupervised plasticity effects of IT neurons are equivalent for all face-selective neurons, regardless of whether those neurons are clustered within face patches (in monkeys; within FFA or OFA in humans) or are scattered throughout IT. As such, we have added the following paragraph to the Discussion (lines 735-755):

“Our study tested unsupervised learning effects for multiple types of object discriminations (Figure 6). […] More targeted future neurophysiology experiments could test these specific alternative hypotheses.”

2) Confirm that corrections for multiple comparisons were implemented.

When multiple tests of the same null hypothesis are conducted and any one of them is allowed to reject that null hypothesis, the likelihood of incorrectly rejecting the null hypothesis by pure chance increases. To set an appropriate rejection level, a correction for multiple comparisons (e.g. Bonferroni correction or FDR) is conducted, which adjusts the α level for each test to account for the number of tests of the same null hypothesis.
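For reference, the Bonferroni adjustment described above simply divides α by the number of tests sharing the null hypothesis; a minimal sketch:

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Reject each of n tests of the same null hypothesis only if its
    p-value falls below alpha divided by the number of tests."""
    corrected = alpha / len(p_values)
    return [p < corrected for p in p_values]
```

For example, with three tests at α = 0.05 the per-test threshold becomes 0.05 / 3 ≈ 0.0167, so `bonferroni_reject([0.03, 0.004, 0.2])` returns `[False, True, False]`.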

In our testing of learning effects over exposure amount (Figure 2), we are not asking whether the learning effect for any exposure amount is different from 0, which would require a multiple comparisons correction for the number of tests. Instead, each point is a single test of its own null hypothesis: “There is no learning effect at exposure amount x.” Therefore, we do not believe the Bonferroni correction is applicable in this situation, and we leave it to the reader to decide on what inferences to draw from the data, confidence intervals and associated statistical tests. If the reviewer prefers, we could test/show only the maximum exposure point for panels A and B, as that is the planned test of the simple null hypothesis: “Unsupervised exposure of this type for the maximum duration we could supply causes no change in discrimination performance."

In the statistical test of the learning effect across tasks (Figure 6), the dependent variable is the observed learning effect for tasks that differ in initial task difficulty. Here, we ask whether the learning effect measured at 800 exposures for a given task differs significantly from 0 (the null hypothesis for each task). Because there is only one comparison per task, we believe a multiple-comparisons correction is not necessary here. If we were asking whether any learning effect is observed for any one of the measured tasks, a Bonferroni correction would be required, but that is not the question being asked here. Instead, we simply show the trend of the effect size for each tested task, with bootstrapped standard deviations of the mean, and leave it to the reader to draw inferences about the relationship between initial task difficulty and learning effect size (the main goal of this figure). Indeed, if the reviewer prefers, we could drop the significance-testing symbols from Figure 6 without loss of content.
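The bootstrapped standard deviation of the mean mentioned above can be sketched as follows. This is a generic resampling sketch, not the authors' analysis code, and the data values are invented placeholders rather than the per-task learning effects from Figure 6.

```python
import random

random.seed(0)

# Invented per-subject learning effects (placeholder values, not study data)
learning_effects = [0.02, 0.05, -0.01, 0.04, 0.03, 0.06, 0.01, 0.05]

def bootstrap_sd_of_mean(data, n_boot=10000):
    """Estimate the SD of the sample mean by resampling with replacement."""
    means = []
    for _ in range(n_boot):
        sample = [random.choice(data) for _ in data]  # resample n items
        means.append(sum(sample) / len(sample))
    grand = sum(means) / n_boot
    var = sum((m - grand) ** 2 for m in means) / n_boot
    return var ** 0.5

sd = bootstrap_sd_of_mean(learning_effects)
```

The resulting `sd` approximates the standard error of the mean and would serve as the error bar for one task; repeating this per task yields the bootstrapped error bars described for Figure 6.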

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

The manuscript has been improved but there are some remaining issues that need to be addressed, as outlined below:

The argument that you are making regarding the absence of correction for multiple comparisons sounds reasonable, but it is open to debate. While we are sympathetic to your point about letting "the reader [to] decide on what inferences to draw from the data, confidence intervals and associated statistical test", we also feel that in the current manuscript form, the reader might not realize the absence of corrections. As a compromise, we suggest that you add a paragraph in the methods including your justification to make this clear.

We understand that whether or not to apply a correction for multiple comparisons in our case may be open to debate in the field. As suggested by the editor, we have added our justification to the Methods section to make clear why we did not apply a correction for multiple comparisons in our analysis. Please see the modified text below (lines 1300-1317):

“Conducting multiple tests of the same null hypothesis, and allowing any one of those tests to reject it, increases the likelihood of incorrectly rejecting the null hypothesis by pure chance. […] Instead, we are simply showing the trend of the effect size for each tested task, with bootstrapped standard deviations of the mean, to demonstrate the relationship between initial task difficulty and the learning effect size.”

https://doi.org/10.7554/eLife.60830.sa2

Article and author information

Author details

  1. Xiaoxuan Jia

    1. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
    2. McGovern Institute for Brain Research, Cambridge, United States
    Present address
    Allen Institute for Brain Science, Seattle, United States
    Contribution
    Conceptualization, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing
    For correspondence
    jxiaoxuan@gmail.com
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-5484-9331
  2. Ha Hong

    1. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
    2. McGovern Institute for Brain Research, Cambridge, United States
    3. Harvard-MIT Division of Health Sciences and Technology, Cambridge, United States
    Present address
    Caption Health, 2000 Sierra Point Pkwy, Brisbane, United States
    Contribution
    Data curation, Software, Methodology, Writing - review and editing
    Competing interests
    No competing interests declared
  3. James J DiCarlo

    1. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
    2. McGovern Institute for Brain Research, Cambridge, United States
    3. Center for Brains, Minds and Machines, Cambridge, United States
    Contribution
    Conceptualization, Resources, Supervision, Funding acquisition, Investigation, Visualization, Methodology, Writing - review and editing
    Competing interests
    No competing interests declared

Funding

National Institutes of Health (2-RO1-EY014970-06)

  • James J DiCarlo

Simons Foundation (SCGB [325500])

  • James J DiCarlo

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

This work was supported in part by a grant from the National Institutes of Health to JJD (2-RO1-EY014970-06) and by the Simons Foundation (SCGB [325500] to JJD). We would like to thank Charles Cadieu, Dan Yamins, Ethan Solomon, Nuo Li, Elias Issa, Rishi Rajalingham, and Arash Afraz for helpful discussions.

Ethics

Human subjects: All human experiments were done in accordance with the MIT Committee on the Use of Humans as Experimental Subjects (COUHES; protocol number 0812003043). We used Amazon Mechanical Turk (MTurk), an online platform where subjects participate in non-profit psychophysical experiments for payment based on the duration of the task. The description of each task clearly states that participation is voluntary and that subjects may quit at any time. MTurk requires subjects to read task descriptions before agreeing to participate, and subjects can preview each task beforehand. Subjects are also informed that anonymity is assured and that the researchers will not receive any personal information. Subjects who successfully complete the task receive payment anonymously through the MTurk interface.

Senior Editor

  1. Michael J Frank, Brown University, United States

Reviewing Editor

  1. Thomas Serre, Brown University, United States

Reviewers

  1. Takeo Watanabe, Brown University, Rhode Island, United States
  2. Hans Op de Beeck, KU Leuven, Leuven, Belgium

Publication history

  1. Received: July 8, 2020
  2. Accepted: June 10, 2021
  3. Accepted Manuscript published: June 11, 2021 (version 1)
  4. Accepted Manuscript updated: June 17, 2021 (version 2)
  5. Version of Record published: July 30, 2021 (version 3)

Copyright

© 2021, Jia et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
