Abstract
Visual detection is a fundamental natural task. Detection becomes more challenging as the similarity between the target and the background in which it is embedded increases, a phenomenon termed “similarity masking”. To test the hypothesis that V1 contributes to similarity masking, we used voltage sensitive dye imaging (VSDI) to measure V1 population responses while macaque monkeys performed a detection task under varying levels of target-background similarity. Paradoxically, we find that during an initial transient phase, V1 responses to the target are enhanced, rather than suppressed, by target-background similarity. This effect reverses in the second phase of the response, so that in this phase V1 signals are positively correlated with the behavioral effect of similarity. Finally, we show that a simple model with delayed divisive normalization can qualitatively account for our findings. Overall, our results support the hypothesis that a nonlinear gain control mechanism in V1 contributes to perceptual similarity masking.
Introduction
Searching for, and detecting, visual targets in our environment is a ubiquitous natural task that our visual system performs exceptionally well. A key feature of behavioral detection performance is that the texture similarity between the target and the background in which it is embedded profoundly affects target detectability. The more similar the target and the background are, the harder it is to detect the target (Campbell & Kulikowski 1966, Foley 1994, Sebastian et al 2017, Stromeyer & Julesz 1972, Watson & Solomon 1997, Wilson et al 1983). This phenomenon, which is termed “similarity masking”, is the foundation of camouflage.
An example of similarity masking is illustrated in Figure 1. Detecting a low contrast oriented visual target is easy on a uniform gray background (Fig. 1A). Detectability decreases when the target has an orientation similar to that of the background (Fig. 1B). The neural basis of similarity masking is not well understood. The main goal of the current study was to test the hypothesis that neural interactions between the representations of the target and background in the primary visual cortex (V1) contribute to the perceptual effect of similarity masking.
The responses of visual neurons to a target can be strongly modulated by the context in which the stimulus is presented. Such contextual modulations have powerful, complex and diverse effects in the visual cortex (Allman et al 1985, Angelucci et al 2017, Angelucci & Bressloff 2006, Bai et al 2021, Cavanaugh et al 2002b, Henry et al 2020, Michel et al 2018, Sceniak et al 1999, Shushruth et al 2012). Most of these effects reflect sublinear interactions between the target and the background, suggesting that they could potentially contribute to behavioral masking effects. If nonlinear computations in V1 contribute to similarity masking, we would predict that the signals evoked by a target will be maximally reduced by a background that is similar to the target.
We tested this hypothesis by measuring V1 population responses in macaque monkeys while they performed a visual detection task under masking conditions (Fig. 1C-E). Because the nature of contextual modulations in V1 is complex, a second goal of our study was to quantitatively characterize the spatiotemporal dynamics of V1 population responses to different combinations of targets and backgrounds.
As a first step, we characterized the behavioral effects of similarity masking in two macaque monkeys, demonstrating clear effects of similarity on target detectability and reaction times. These results confirm that macaque monkeys are a good animal model for human similarity masking.
Second, we used voltage-sensitive dye imaging (VSDI; (Grinvald & Hildesheim 2004, Seidemann et al 2002, Shoham et al 1999)) to measure V1 population responses at two scales: the scale of the retinotopic map and the scale of orientation columns, while the monkeys performed the similarity masking detection task. To study the effect of similarity masking on the neural detection sensitivity, we constructed a task-specific decoder at each scale. Each decoder first pools the responses using a scale-dependent spatial template and then combines these responses over time to form a decision variable. The distributions of the decision variable in target-present vs. target-absent trials are used to compute neural sensitivity that can be compared to behavioral sensitivity (Seidemann & Geisler 2018).
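The decoder logic described above can be sketched in a few lines. The following is a minimal illustrative implementation, not the analysis code used in the study; the function name, the array shapes, and the equal-variance Gaussian d′ formula are our assumptions:

```python
import numpy as np

def neural_sensitivity(resp_present, resp_absent, template, t_weights):
    """Neural detection sensitivity (d') from single-trial population responses.

    resp_present, resp_absent : arrays of shape (trials, pixels, frames)
    template  : spatial pooling weights, shape (pixels,)
    t_weights : temporal weights combining the pooled frames, shape (frames,)
    """
    # Pool each frame with the spatial template, then combine the pooled
    # frames over time into a scalar decision variable per trial.
    dv_p = resp_present.transpose(0, 2, 1) @ template @ t_weights
    dv_a = resp_absent.transpose(0, 2, 1) @ template @ t_weights
    # d' under an equal-variance Gaussian assumption on the decision variable.
    pooled_sd = np.sqrt(0.5 * (dv_p.var(ddof=1) + dv_a.var(ddof=1)))
    return (dv_p.mean() - dv_a.mean()) / pooled_sd
```

Because the decision variable is a scalar per trial, the resulting neural d′ can be compared directly to the behavioral d′ measured in the same sessions.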
We found that V1 population responses to the target and background display two distinct phases: an initial transient phase that starts at response onset, and a second phase that lasts until stimulus offset or the animal’s response. Surprisingly, the first phase displays a paradoxical effect; during this phase the target-evoked response is strongest when the target and background are similar and is therefore anticorrelated with behavior. This effect reverses in the second phase, so that in this phase the target-evoked response is reduced with increased target-background similarity. V1 responses during this second phase are therefore consistent with behavior.
We also observed complex spatiotemporal dynamics of the population response to the target and background stimuli, including a repulsion of V1 columnar-scale representation of target orientation in the direction away from the background orientation.
Finally, we show that a simple dynamic population gain control model can qualitatively account for our physiological and behavioral results, and that the estimated properties of the gain-control mechanism are consistent with a principled computational approach to feature encoding and decoding. Overall, our results are consistent with the hypothesis that contextual interactions between the representations of the target and background in V1 are likely to contribute to the perceptual phenomena of similarity masking.
Results
Behavioral effect of target-background similarity masking
To study the neural basis of visual similarity masking, we trained two monkeys (Macaca mulatta) to perform a visual detection task in which a small horizontal target appeared on a larger background at a known location in half of the trials (Fig. 1E). The monkey indicated target absence by maintaining fixation and target presence by making a saccadic eye movement to the target location as soon as it was detected. Within a block of trials, the contrasts of the target and the background were fixed, while the orientation of the background varied randomly from trial to trial, allowing us to test for the effect of target-background orientation similarity on behavioral and neural detection sensitivities.
We tested the behavioral effect of similarity masking over five combinations of target and background contrasts (Fig. 2A). For each combination, we measured the behavioral sensitivity as a function of background orientation. Performance with no background (uniform gray screen) served as a baseline (Fig. 2A, dashed horizontal lines). Performance as a function of background orientation was fitted with an inverted Gaussian function. At all five target and background contrast combinations, detection sensitivity was lowest when background and target orientations matched, confirming that the similarity masking effect documented in human subjects also occurs in macaques. These results demonstrate that macaque monkeys are a good animal model for studying the neural basis of human similarity masking.
The supplementary information includes a perceptual demonstration of similarity masking for a wide range of target amplitudes, orientations, and spatial-frequencies (Supplementary Material: “Orientation Masking Demo”). This demonstration can give the reader an intuitive sense of the masking effects studied here.
We also examined the effect of target-background orientation similarity on the monkeys’ reaction times (Fig. 2C). We find two distinct effects of orientation similarity on reaction times. At higher target and background contrasts, reaction times are maximal when the background and target have the same orientation (when detectability is lowest and the task is hardest) and monotonically decrease as target-background similarity decreases (detectability increases and the task becomes easier). Surprisingly, at lower target and background contrasts, reaction times are short when the background matches the target orientation, increase as the background-target orientation difference increases, and then drop again as the background approaches the orientation orthogonal to the target. Thus, under these conditions, we see an interesting decoupling between difficulty and reaction time, so that reaction times can be shortest in the harder conditions. This surprising effect is present in both monkeys. Some of the complex neural dynamics described below could explain this interesting effect (see Discussion).
Our next goal was to test the hypothesis that contextual interactions between the representations of the target and background in V1 contribute to the observed behavioral similarity masking results.
Neural population responses to target and background stimuli in macaque V1
While the monkeys performed the similarity masking detection task, we used VSDI to measure V1 population responses to the target and the background. In each cranial window, we first used a fast and efficient VSDI protocol to obtain a detailed retinotopic map (Yang et al 2007). We then positioned the target so that its neural representation fell at the center of our imaging area.
The target elicits V1 population activity at two fundamental spatial scales. At the large retinotopic scale, the target evokes an activity envelope that spreads over several mm2 and is well fitted by a two-dimensional (2D) Gaussian (Fig. 3B, top row; (Chen et al 2006, Chen et al 2012, Sit et al 2009)). Our 8x8 mm2 imaging area allows us to capture this entire target-responsive region. Because the background is much larger than the target and is centered at the same location in the visual field, it produces a relatively uniform response within the imaging area (Fig. 3B, second row). Similarly, the target-plus-background stimulus elicits activity within the entire imaged area, with a relatively elevated activity at the retinotopic region corresponding to the target location (Fig. 3B, 3rd row). However, the target-evoked response in the presence of the background (response to target plus background minus response to the background alone) appears significantly weaker than the response to the target alone (Fig. 3B, 1st vs. 4th row). This reduced target-evoked response in the presence of the background could contribute to the perceptual masking effect of the background. Our goal here was to determine how this sublinear interaction between the response to the target and background depends on target-background similarity in orientation.
In addition to the retinotopic-scale response envelope, fine scale response modulations at a scale of individual orientation columns (diameter of ∼0.3 mm) reflect the orientation of the target and background. These columnar scale modulations have a relatively small amplitude and therefore appear as small ripples riding on top of the larger retinotopic response envelope. The relatively small amplitude of the VSDI responses at the columnar scale is due to a mixture of robust non-orientation-selective population responses in V1 as well as optical and biological blurring (Chen et al 2012). We can selectively access the columnar scale signals by spatially filtering the responses at the scale of the orientation columns. Despite their small relative amplitude, these columnar-scale signals provide high quality single-trial orientation decoding (Benvenuti et al 2018).
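The spatial filtering step can be illustrated with a simple difference-of-Gaussians bandpass. The cutoff scales and pixel size below are hypothetical placeholders, not the filter parameters used in the study:

```python
import numpy as np

def _gauss_blur(img, sigma):
    # Separable Gaussian blur via 1-D convolutions along each image axis.
    r = int(np.ceil(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def columnar_bandpass(frame, pix_mm, low_mm=0.8, high_mm=0.1):
    """Difference-of-Gaussians spatial bandpass: removes the broad retinotopic
    envelope (scales > low_mm) and pixel noise (scales < high_mm), leaving
    modulations near the ~0.3 mm columnar scale. Cutoffs are illustrative."""
    return _gauss_blur(frame, high_mm / pix_mm) - _gauss_blur(frame, low_mm / pix_mm)
```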
Retinotopic-scale effect of target-background similarity masking
To study the effect of similarity masking on V1 responses at the retinotopic scale, we used an optimal linear decoder of V1 population responses (Chen et al 2006, Chen et al 2008) that allows us to assess the neural detection sensitivity of V1 population responses (i.e., how well one can detect the target from single-trial V1 population responses) (Fig. 3C-D). The retinotopic decoder takes into account the location and shape of the envelope of the target-evoked response (Fig. 3D), as well as the structure of the noise covariance matrix (Fig. 3C; (Chen et al 2006)).
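For Gaussian noise, an optimal linear decoder of this kind reduces to a matched filter for the mean target-evoked envelope, whitened by the noise covariance: w ∝ Σ⁻¹μ. A minimal sketch (the function name and the ridge regularization are our additions):

```python
import numpy as np

def retinotopic_decoder_weights(mean_target_resp, noise_cov, ridge=1e-6):
    """Optimal linear detection weights w ∝ Σ⁻¹μ: a matched filter for the
    mean target-evoked envelope μ, whitened by the noise covariance Σ."""
    n = noise_cov.shape[0]
    # A small ridge term keeps the solve stable when Σ is near-singular.
    w = np.linalg.solve(noise_cov + ridge * np.eye(n), mean_target_resp)
    return w / np.linalg.norm(w)
```

When the noise covariance is diagonal, these weights reduce to the target-evoked envelope itself; correlated noise rotates the template away from the envelope.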
Figure 4 summarizes the dynamics of the retinotopic template output in response to the V1 signals across all of our experiments for two combinations of target and background contrasts (see Table 1 and Fig. 4 – figure supplement 1 for the full set of tested target/background combinations). When presented on a uniform gray background, the target-related retinotopic signal begins to rise ∼40 ms after target onset, reaches its peak ∼100 ms after stimulus onset, and remains high for the next 100 ms (Fig. 4B,H, black curve). However, when the same target is added to the background, the target-related retinotopic signals display a wide range of responses that depend on background orientation (Fig. 4B,H, colored curves).
Our main interest here is in the target-evoked response in the presence of the background (Fig. 4C,I), which can be extracted by subtracting the response to the background alone (Fig. 4A,G) from the response to the target plus background (Fig. 4B,H). If V1 contextual interactions at the retinotopic scale contribute to the behavioral effect of similarity masking, we would expect that target-evoked responses would be weakest with high target-background similarity (similar target and background orientations) and strongest with low target-background similarity (orthogonal target and background orientations).
Surprisingly, we find that the target-evoked response in V1 displays two distinct phases, with the early phase showing a paradoxical neural dependence on target-background orientation similarity that is anticorrelated with the behavioral masking effect, and with a later phase that is consistent with the behavioral masking effect. Specifically, in the early phase which starts at response onset, the target-evoked response is highest when the background matches the target orientation even though behaviorally this is the condition in which detection performance is the worst. However, after this initial phase, the high-similarity target-evoked response starts to drop, while the low-similarity target-evoked response continues to build up, so that in the later stages the target-evoked response is strongest on the dissimilar background and weakest on the similar background, consistent with the behavioral effect of similarity.
To quantify the relation between the effects of target-background orientation similarity on V1 population responses and behavior, we computed the correlation between the effect of orientation similarity on behavior (Fig. 4F,L) and its effect on the target-evoked neural responses in individual 10 ms frames (Fig. 4E,K). This analysis reveals a robust paradoxical negative correlation between the early neural V1 response and behavior, weak positive correlation between the late neural V1 response and behavior, and no correlation between behavior and the integrated neural response. This result was obtained from averaging the response across all trials irrespective of whether the monkey made the correct decision.
To examine whether decision- and/or attention-related signals have a major contribution to the observed biphasic dynamics, we repeated the analysis on only the hit and correct-rejection trials (Fig. 4 – figure supplement 2B-C). Our results are qualitatively the same for the subset of correct trials, indicating that decision- and/or attention-related signals are unlikely to play a major role in the observed dynamics.
Because the target and background are defined by their orientation, the correspondence between the neural signals in V1 and behavior may be better captured by V1 responses at the columnar scale. Our next step was therefore to examine the dynamics of the columnar-scale target-evoked responses in V1.
Neural effects of target-background similarity masking at the scale of orientation columns
To study the effect of similarity masking on V1 responses at the columnar scale, we developed a linear columnar decoder of the VSDI signals (Fig. 3E). The columnar decoder takes into account the location of the orientation columns within the retinotopic envelope of the target-evoked response. Because the target is horizontal, the output of the columnar template is expected to be positive for the horizontal target and background stimuli and negative for the vertical background stimulus (since the horizontal and vertical columnar maps are anti-correlated).
As with the output of the retinotopic-scale template, the output of the columnar-scale template displays two distinct phases. Figure 5 shows the time course of the columnar template signals to background alone (Fig. 5A,G), the target plus background (Fig. 5B,H), and target-evoked response in the presence of the background (Fig. 5C,I). In the early phase, the target-evoked response is highest when the background and target have similar orientations, producing a paradoxical neural response that is anti-correlated with the behavioral masking effect (Fig. 5E,K). In the second phase the trend reverses and the target-evoked response is strongest on the dissimilar background and weakest on the similar background, consistent with the behavioral effect of similarity (Fig. 5E,K). Similar results were obtained with other target and background contrast combinations (Fig. 5 – figure supplement 1).
Again, to examine whether decision- and/or attention-related signals have a major contribution to the observed biphasic dynamics at the columnar scale, we examined the behavioral correlations with hit and correct-rejection trials only. We found only minor differences in the target-evoked response and behavioral correlations (Fig. 4 – figure supplement 2D-E), indicating that the observed biphasic dynamics at the columnar scale are unlikely to have a major top-down contribution.
Because the first phase of the response is shorter than the second phase, when V1 response is integrated over both phases, the overall response is positively correlated with the behavioral masking effect (Fig. 5D,J, figure supplement 1C). Therefore, our results suggest that the neural masking effect at the columnar scale in V1 could play a major role in the behavioral similarity masking effects.
Dynamics of columnar-scale orientation population trajectories
Our decoding analysis focuses on the columnar-scale orientation signals along the 0°-90° axis and reveals complex columnar-scale dynamic interactions between the target-evoked response and the response evoked by the background (Fig. 5). To examine these dynamics in more detail, we performed two types of population-vector analyses (Fig. 6). We began by assigning each pixel within the retinotopic footprint of the target-evoked response to one of 12 equally spaced preferred orientations, creating 12 orientation selective clusters of pixels (Fig. 6B-C). We then computed for each stimulus the response in each orientation selective cluster in each frame and displayed, in the first analysis, the population orientation tuning curve as a function of time (Fig. 6D), and in the second analysis the population vector dynamic trajectory in the polar space spanned by the 12 orientations (Fig. 6E); i.e., the orientation θ and magnitude Rθ of the peak of the population response over time.
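The second analysis amounts to a standard circular population-vector computation; because orientation is periodic over 180°, cluster angles are doubled before vector averaging and the decoded angle is halved afterwards. A minimal sketch (names and the response normalization are illustrative, not the study's code):

```python
import numpy as np

def population_vector(cluster_resp, thetas_deg):
    """Decode orientation and magnitude from orientation-cluster responses.

    cluster_resp : responses of the orientation-selective pixel clusters
    thetas_deg   : preferred orientations of the clusters, in degrees
    Returns (theta_deg, R): the population-vector angle in [0, 180) and its
    length, normalized by the summed response (assumed non-negative here).
    """
    ang = np.deg2rad(2.0 * np.asarray(thetas_deg, dtype=float))
    z = np.sum(np.asarray(cluster_resp, dtype=float) * np.exp(1j * ang))
    theta = (np.rad2deg(np.angle(z)) / 2.0) % 180.0
    return theta, np.abs(z) / np.sum(cluster_resp)
```

Applying this per frame yields the trajectory (θ, Rθ) of the population response over time.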
The first population vector analysis reveals that V1 responses to the target or background alone are consistent with the stimulus orientation. In background only trials, shortly after stimulus onset the peak of the population tuning curve closely matches background orientation (Fig. 7A, F, top row, red arrow and horizontal line). Similar results were obtained in the target only trials, where the peak of the population response tuning curve matches target orientation (Fig. 7A, F, 4th row, green arrow and horizontal line).
Target plus background responses display complex spatiotemporal dynamics. To examine the dynamics of the target-evoked response in the presence of the background, we subtracted the background only response from the target plus background response (Fig. 7A, F, 3rd row). The results reveal complex target-background interactions which could lead to a population tuning peak (white curve) that significantly deviates from the target orientation. For example, in some conditions, we observe an orientation tuning peak that is repelled from target orientation in the direction away from background orientation. An interesting goal of future studies would be to examine potential perceptual correlates of these interactions.
In the second population vector analysis, we plotted the response trajectories for each stimulus using the vector representation in polar coordinates (Fig. 7B-E, G-J). After stimulus onset, the population vector for background only moves in the direction corresponding to the background orientation (Fig. 7B,G) and for target only moves in direction corresponding to the target (Fig. 7E,J).
The trajectories in the target plus background conditions are more complex. For example, when background orientation is at +/-45 deg to the target, the population response is initially dominated by the background, but then in mid-flight, the population response changes direction and turns toward the direction of the target orientation.
Such complex interactions can be used to constrain models of V1 population response.
Dynamic gain control model qualitatively captures similarity masking effects in V1
Our next goal was to determine whether the observed interactions between the background- and target-evoked responses can be qualitatively captured by a gain control model. In this model, each orientation column was tuned to one of 12 equally spaced orientations. The responses of each orientation column were specified by the simple normalization model summarized in Figure 8A. Specifically, the spatiotemporal input stimulus generates an excitation signal and a normalization signal that are both linear with the input root mean square (rms) contrast. The normalization signal is then combined with a normalization constant to obtain the normalization factor. The normalized response is obtained by dividing the excitation signal by the normalization factor. The final response is then obtained by applying a response exponent p, which is similar to applying a spiking nonlinearity. Importantly, the excitation and normalization signals can differ in their spatial extent, orientation tuning width, and temporal impulse response (see Methods for model parameters).
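The temporal core of such a model can be sketched as two low-pass stages with different time constants feeding a divisive stage. All parameter values below are illustrative, not the fitted values reported in Methods:

```python
import numpy as np

def normalization_dynamics(drive_e, drive_n, tau_e=20.0, tau_n=60.0,
                           sigma=0.1, p=2.0, dt=1.0):
    """Delayed divisive normalization in discrete time.

    drive_e, drive_n : excitatory and normalization input drives per time
    step (equal-length arrays); in the full model these are contrast-linear
    and differ in spatial extent and orientation tuning.
    Setting tau_n > tau_e delays the normalization signal relative to the
    excitation signal, producing a transient overshoot at response onset.
    """
    E = N = 0.0
    out = np.empty(len(drive_e))
    for t, (de, dn) in enumerate(zip(drive_e, drive_n)):
        E += (de - E) * dt / tau_e        # fast low-pass excitation
        N += (dn - N) * dt / tau_n        # slower low-pass normalization
        out[t] = (E / (N + sigma)) ** p   # divisive gain control + exponent
    return out
```

Because the normalization stage integrates more slowly, the ratio overshoots shortly after stimulus onset and then settles to a lower sustained level, mirroring the transient-then-sustained shape of the measured responses.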
We find that this simple model can qualitatively capture our key results. First, in response to background alone (Fig. 8B), the modeled population vector peaked at ∼100 ms after stimulus onset and then dropped to a lower amplitude, as in our data (Fig. 5A,G). This reduction in response amplitude was due to a normalization signal that was delayed relative to the excitation signal. Second, as in the real data, the response to the target plus background is less than the sum of the responses to each component separately. Third, as in our physiological results, the target-evoked response in the presence of the background is biphasic, having a brief early component in which the response is enhanced by target-background similarity, and a longer-lasting late component in which the response is suppressed by target-background similarity (Fig. 8D). This leads to an early phase in which the response is anticorrelated with the behavioral effect of similarity masking, and a late phase and an integrated response that are positively correlated with the behavioral effect of similarity masking (Fig. 8E,F).
Finally, this simple model can also display the curved trajectories of the population vector in response to the target plus background (compare Fig. 8H to Fig. 7C,H).
Additional results show that the model is relatively insensitive to the spatial extent of the normalization signal (Fig. 8 – figure supplement 1). The model predicts very similar temporal dynamics with the spatial extent of the background mask as small as twice the size of the target (Fig. 8 – figure supplement 2). When the background and target are the same size, the model predicts that a sufficiently high-contrast background will also drive the same biphasic temporal dynamics (Fig. 8 – figure supplement 3, rows 7 and 8). To account for the orientation-dependent neural and behavioral masking effects, the model requires an orientation-tuned normalization (Fig. 8 – figure supplement 4).
Overall, our results suggest that a simple model with delayed and orientation tuned divisive gain control can qualitatively capture the complex spatiotemporal dynamics of V1 population responses to localized oriented targets added to oriented backgrounds.
Discussion
To test the hypothesis that nonlinear computations in V1 contribute to the perceptual effect of similarity masking, we used voltage-sensitive dye imaging (VSDI) to measure neural population responses from V1 in two macaque monkeys while they performed a visual detection task in which a small oriented target was detected in the presence of a larger background of varied orientations. Like human observers, the monkeys were strongly affected by the orientation similarity of the target and the background (Fig. 2). Their detection threshold increased with increased target-background orientation similarity, while their reaction times showed complex, and in some cases non-monotonic, dependency on target-background orientation similarity.
To quantify the neural effects of similarity masking, we measured neural sensitivity to the target at two fundamental spatial scales of V1 topographic representations: the large scale of the retinotopic map and the finer scale of the columnar orientation map. We discovered that at both scales, V1 population responses to the target and background display two distinct phases (Fig. 4B, H, Fig. 5B, H). In the initial transient phase, the target-evoked V1 response is strongest when the target and background have similar orientations. At this early phase, V1 responses are therefore paradoxically anti-correlated with the behavioral effect of similarity masking (Fig. 4E, K, Fig. 5E, K). In the second phase, the masking effect reverses, and the target-evoked response is maximally reduced when the target and background are similar. In this second, sustained phase, V1 population responses are therefore consistent with the behavioral similarity masking effect. To explore the possibility that these biphasic dynamics reflect contributions from decision- and/or attention-related top-down signals rather than from low-level nonlinear encoding mechanisms in V1, we re-examined our results while excluding error trials (Fig. 4 – figure supplement 2). We found that the biphasic dynamics hold even for the subset of correct trials, reducing the likelihood that decision/attention-related signals play a major role in explaining our results.
The positive correlation between the neural and behavioral masking effects occurred earlier (Fig. 5E, K vs. Fig. 4E, K) and was more robust at the columnar scale than at the retinotopic scale (Fig. 5D, J vs. Fig. 4D, J; see also Fig. 4 – figure supplement 1C, and Fig. 5 – figure supplement 1C). In addition, while the temporally integrated columnar response was positively correlated with behavior across all target and background contrasts tested (Fig. 5E,K, figure supplement 1D), the integrated retinotopic responses were uncorrelated, or in some cases anticorrelated, with behavior (Fig. 4E,K, figure supplement 1D). These results suggest that behavioral performance in our task is dominated by columnar scale V1 signals in the second phase of the response. To the best of our knowledge, this is the first demonstration of such decoupling between V1 responses at the retinotopic and columnar scales, and the first demonstration that columnar scale signals are a better predictor of behavioral performance in a detection task.
Due to the challenges of setting up these experiments, we were unable to collect all target/background contrast combinations from both monkeys. However, in the common conditions, the results appear similar in the two animals, and the key results seem to be robust to the contrast combination in the animal where a wider range of contrast combinations was tested (Fig. 4 – figure supplement 1, and Fig. 5 – figure supplement 1).
We find that when the target and background have similar orientations, columnar-scale information about the target is restricted to the first phase of the response and then largely disappears during the second phase of the response. These physiological results could be related to the surprising mismatch between task difficulty and reaction times (Fig. 2C). Rather than having reaction times that monotonically increase with task difficulty, in our masking detection task, reaction times can be shortest when target and background orientations match, even though it is hardest to detect the target under these conditions. The short reaction time to this stimulus may be the consequence of the target information being best represented in the early phase of the response.
The nature of contextual modulations in V1 is quite complex (Angelucci et al 2017, Angelucci & Bressloff 2006, Bai et al 2021, Cavanaugh et al 2002b, Henry et al 2020, Michel et al 2018, Polat et al 1998, Sceniak et al 1999, Shushruth et al 2012). A second goal of our study was to quantitatively characterize the spatiotemporal dynamics of columnar-scale V1 population responses to targets and backgrounds of different orientations and contrasts. Using a dynamic population vector analysis, we find that in the presence of an oriented background, the peak of the population orientation tuning to the target can deviate significantly from target orientation. For example, in some conditions, we observe a population orientation tuning peak that is repelled away from target orientation in the direction opposite to background orientation (Fig. 7E). These orientation-dependent interactions could contribute to nonveridical perceptual representations of orientation such as in the well-known tilt illusion effect (Clifford 2014, Schwartz et al 2007, Wenderoth & Johnstone 1987). An important goal for future studies would be to test for this possibility.
Using the population vector analysis, we find that columnar scale V1 representations are initially dominated by the orientation of the background. The target orientation then appears in the second phase of the response, which leads to curved population vector trajectories (Fig. 7C, H). Identifying possible perceptual consequences of such dynamic and complex trajectories, and understanding the neural circuit mechanisms that give rise to such responses, are two important goals for future work.
Nonlinear response properties in V1 are commonly modeled as a consequence of a divisive gain control mechanism (Albrecht & Geisler 1991, Carandini & Heeger 1994, Heeger 1991, Heeger 1992, Sit et al 2009). As a first step toward understanding the mechanisms that could give rise to the observed V1 responses, we tested whether a simple dynamic gain control model could account for our findings (Fig. 8). We find that a simple gain control model can qualitatively account for our results, but that in order to do so, the model has to display two important properties. First, to account for the biphasic nature of V1 response, the divisive normalization signals have to be delayed relative to the excitatory signal. Second, in order to account for the reduced neural sensitivity with target-background similarity in the second phase of the response, the divisive normalization signal has to be orientation selective (Fig. 8 – figure supplement 4). Because in primates and carnivores, robust orientation selectivity first emerges in V1 (Hubel & Wiesel 1959, Hubel & Wiesel 1968), these results suggest that a significant portion of the nonlinear interactions observed in the current study originate in V1 rather than being inherited from the ascending inputs that V1 receives from the LGN. While our experimental and computational results point to a delayed gain control signal that operates at the level of V1, they do not directly speak to the circuit and biophysical mechanisms that contribute to the implementation of this gain control in V1.
Multiple candidate mechanisms for implementing gain control in V1 have been proposed (Angelucci et al 2017, Angelucci & Bressloff 2006, Ozeki et al 2009, Rubin et al 2015, Tsodyks et al 1997). Our results provide new and powerful constraints for such mechanistic models.
A key difference between our study and previous center-surround studies (e.g., (Cavanaugh et al 2002a, Cavanaugh et al 2002b, Henry et al 2020, Shushruth et al 2012)) is the stimuli that we used. First, in our experiments, the target and the mask were additive, while in most previous center-surround studies the target occludes the background. Such studies therefore restrict the mask to the surround, while our study allows target-mask interactions at the center. Second, most previous center-surround studies have a sharp-edged target/surround border, while in our experiments no sharp edges were present. Unpublished results from our lab suggest that such sharp edges have a large impact on V1 population responses. Third, our stimuli were flashed for a short interval of 250 ms, corresponding to the typical duration of a fixation in natural vision, while most previous center-surround studies used either longer-duration drifting stimuli or very short-duration random-order stimuli for reverse-correlation analysis.
Because our targets are added to the background rather than occluding it, it is likely that a significant portion of the behavioral and neural masking effects that we observe come from target-mask interactions at the target location rather than from the effect of the mask in the surround. Several lines of evidence support this possibility. First, in human subjects, perceptual similarity masking effects can be almost entirely accounted for by target-mask interactions at the target location and are recapitulated when the mask has the same size and location as the target (Sebastian et al 2017). There is a reduction in masking when the background is windowed to the target envelope, but this effect is due to removing background within the target envelope (Sebastian et al 2017). Second, in our computational model (Fig. 8), the effects of mask orientation on the dynamics of the response are qualitatively similar if the mask is restricted to the size and location of the target and mask contrast is increased (Fig. 8 – figure supplement 3). Third, in our model, the results are qualitatively the same when the spatial pooling region for the normalization signal is the same as that for the excitation signal (Fig. 8 – figure supplement 1). These considerations suggest that center-surround interactions may not be necessary for neural and behavioral masking effects with additive targets.
Finally, we note that the tuned similarity normalization that explains the neural and behavioral similarity-masking effects reported here is consistent with a principled encoding strategy for feature detection under natural conditions. When viewing a static scene under natural conditions, human and non-human primates make 3-4 saccadic eye movements per second, with fixations between saccades of 200-300 ms. Given the typical size of the saccades, most visual receptive fields are stimulated during each fixation by a largely statistically independent random sample of the natural image (Frazor & Geisler 2006). Analysis of the responses of linear receptive fields to random samples of natural images shows that the standard deviation of the response increases in proportion to the product of the luminance, the contrast, and the similarity of the natural background to the receptive field (Sebastian et al 2017). Thus, divisive normalization by the product of luminance, contrast, and similarity causes the standard deviation of the responses across natural images to be much more constant (i.e., nearly independent of the luminance, contrast, and similarity of the background within the receptive field). This more constant standard deviation makes it possible, with relatively simple decoders, to reach near-optimal feature-detection performance under the high levels of stimulus uncertainty that occur under natural conditions (Schwartz & Simoncelli 2001, Sebastian et al 2017).
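The scaling argument above can be illustrated with a small simulation. The sketch below is illustrative Python (the study's analyses were in Matlab), with a generic random filter standing in for a receptive field: the standard deviation of a linear template response across random backgrounds grows in proportion to background luminance and contrast, which is exactly what divisive normalization by that product would undo; similarity enters analogously for structured backgrounds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unit-norm random linear filter, standing in for a receptive field template.
template = rng.standard_normal(256)
template /= np.linalg.norm(template)

def response_std(luminance, contrast, n=4000):
    # Random backgrounds with the given mean luminance and RMS contrast;
    # the filter-response std across backgrounds scales as luminance * contrast.
    backgrounds = luminance * (1.0 + contrast * rng.standard_normal((n, 256)))
    return np.std(backgrounds @ template)
```

Doubling either luminance or contrast roughly doubles the response standard deviation, so dividing responses by an estimate of the luminance-contrast(-similarity) product keeps response variability nearly constant across backgrounds.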
Acknowledgements
We thank members of the Seidemann and Geisler laboratories for their assistance with this project. This work was supported by NIH grants EY-016454 to E.S., EY-024662 to W.S.G. and E.S., BRAIN U01-NS099720 to E.S. and W.S.G., and DARPA-NESD0-N66001-17-C-4012 to E.S.
Methods
All procedures have been approved by the University of Texas Institutional Animal Care and Use Committee and conform to NIH standards.
Widefield Voltage-Sensitive Dye Imaging
The experimental technique for widefield voltage-sensitive dye (VSD) imaging of neural response in awake, behaving macaques was adapted from previous studies (Bai et al 2021, Chen et al 2006, Chen et al 2008, Chen et al 2012). Briefly, two adult male macaque monkeys (Monkey H and Monkey T) were implanted with a metal head post and metal recording chambers located over the dorsal portion of V1, a region representing the lower contralateral visual field at eccentricities of 2–5°. Craniotomy and durotomy were performed. A transparent artificial dura made of silicone was used to protect the brain while allowing optical access for imaging (Arieli et al 2002)(Fig. 3A). Experiments were conducted in the left hemisphere chamber of Monkey H, and in both left and right hemisphere chambers of Monkey T.
VSD imaging was used to record neural population activity at high resolution in space and time (Shoham et al 1999). Before each experiment, VSD (RH1691 or RH1838) was topically applied to the cortex for ∼2 hours to allow the VSD molecules to bind to cellular membranes. In Monkey H, fluorescence from neural activity was recorded with an Imager 3001 (Optical Imaging, Inc.) using a tungsten-halogen light source (Zeiss). An infrared eye-tracker (Dr Bouis Inc.) was used to monitor eye position. In Monkey T, fluorescence was recorded using custom Matlab software interfaced to a PCO Edge 4.2 sCMOS camera (Excelitas PCO GmbH) with an X-Cite 110LED light source (Excelitas Technologies Corp). Eye position was monitored using an EyeLink 1000 Plus video eye-tracker (SR Research Ltd).
Both imaging systems were interfaced to a double-SLR-lens macro system with housing for dichroic mirrors between the two SLR lenses. The combination of a 50 mm fixed-focus objective lens (cortex end, Nikkor 50mm f/1.2) and an 85 mm fixed-focus camera lens (Canon EF 85mm f/1.2L USM) provided 1.7x magnification, corresponding to imaging approximately an 8 x 8 mm2 area of cortex. Fluorescence signals were measured through a dichroic mirror (650-nm long-pass filter) and an emission filter (RG 665). VSD molecules were excited by light at 630 nm. Imaging data were collected at 512 × 512 resolution at 100 Hz. Data acquisition was time-locked to the animal's heartbeat (EKG QR up-stroke, HP Patient Monitor HP78352C). More details about optical imaging with VSD in behaving monkeys are described elsewhere (Bai et al 2021, Chen et al 2006, Chen et al 2008, Chen et al 2012).
Prior to the main experiments, VSD imaging was used to obtain a precise retinotopic map of the entire recording area (Fig. 3 – figure supplement 1; (Yang et al 2007)). In two of the three chambers, retinotopic maps indicated that V1 extended into the lunate sulcus. In the third chamber, V1 terminated ∼0.75 mm from the lunate sulcus. The area used for decoding analysis was chosen to lie entirely within V1.
Behavioral Task and Visual Stimulation
Monkeys were trained to detect a small additive horizontal Gabor target (4 cpd, σ = 0.14°, 0.33° FWHM envelope) centered on a sinusoidal grating background mask of the same spatial frequency (4° raised-cosine windowed). The background grating was oriented at 0°, ±15°, ±30°, ±45°, ±60°, and 90° from the Gabor target orientation. Both the Gabor and the background grating were bright-centered; that is, the 0°-orientation background was completely in phase with the target. The contrasts of the target and background were varied in combinations of levels, reported in Michelson contrast:
For each experiment, a Fixation recording block and a Detection recording block were made using the same target and background conditions. In both blocks, the target and background were centered at a fixed position for each experiment corresponding to the working cortical chamber. This position varied between experiments from 1.6 to 3 deg of visual angle eccentricity from the fixation point and from 20 to 50 deg of polar angle from the vertical meridian in the corresponding hemifield (Fig. 3 – figure supplement 1). At these coordinates, the spatial extent of the target was fully imaged through the cortical window, and the larger oriented background uniformly activated the entire imaging area.
In the Fixation block, the monkeys were required to maintain fixation for each imaging trial while either the target or full-field sinusoidal gratings at 100% contrast were flashed at 5 Hz (60 ms ON, 140 ms OFF) for 1.0 s. These recordings were processed to obtain the retinotopic and columnar orientation response maps that were used to decode the Detection recording responses from the same experiment day (Fig. 3C-E).
In the Detection recording blocks, a background with a random orientation appeared on every trial, with a 50% chance of an accompanying Gabor target (Fig. 1C-E). The monkeys were tasked with reporting the presence of the target. Each trial began with fixation on a bright 0.1° square. An auditory tone and the dimming of the fixation square cued the monkey to the start of the detection task trial. 250 ms later, the background, with or without the target, was presented. The monkeys were trained to maintain gaze at the fixation cue on target-absent trials, or to saccade to and hold gaze (for 150 ms) at the target position to indicate target detection (with a 75 ms minimum allowed reaction time). When the target was present, it remained on screen for a maximum of 250 ms or was extinguished immediately upon the monkeys' saccade initiation. The monkey was given 600 ms to make the saccade or to hold fixation, and was rewarded on correct choices: stay (correct reject) on target-absent trials, or saccade to target (hit) on target-present trials. The target and background contrast levels were fixed for each recording block. The probabilities of each oriented background and of target presence were balanced within each recording block.
A separate target-only Detection block on a uniform gray background was also recorded on each experiment day using the same routine as above. This block was later used as the reference data to normalize response amplitudes across experiment days.
Experiments were conducted with custom code using the TEMPO real-time control system (Reflective Computing). The visual stimulus was presented on a Sony CRT (1024x768 @ 100 Hz), placed 108 cm from the animal (50 pixels per degree), with a mean luminance of 50 cd/m2. The visual stimulus was generated using in-house real-time graphics software (glib).
Behavior Performance and Reaction Time
Behavioral performance was calculated for each background orientation and reported in units of the detection sensitivity index d' (d-prime). D-prime and criterion were estimated as:
where Φ−1(x) is the inverse of the cumulative normal distribution, and P(HIT) and P(FA) represent the proportions of hits and false alarms, respectively. To avoid leaving out the data in some conditions (e.g., when there are no false alarms), we scaled all the proportions to be between 0.005 and .
The mapping between the unbiased percentage of correct responses and d' is as follows and is depicted in Figure 2B:
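As a concrete illustration, the standard signal-detection formulas above can be sketched in Python (the study's analyses were in Matlab). The 0.005 lower clip follows the text; the upper clip is an assumed symmetric value, since the upper bound is truncated in the text.

```python
from statistics import NormalDist

Z = NormalDist().inv_cdf      # Φ⁻¹: inverse cumulative normal
PHI = NormalDist().cdf        # Φ: cumulative normal

def clip(p, lo=0.005, hi=0.995):
    # Keep proportions away from 0 and 1 so Φ⁻¹ stays finite.
    # Lower bound 0.005 is from the text; the upper bound is an assumed
    # symmetric value (the text's value is truncated).
    return min(max(p, lo), hi)

def dprime(p_hit, p_fa):
    # d' = Φ⁻¹(P(HIT)) − Φ⁻¹(P(FA))
    return Z(clip(p_hit)) - Z(clip(p_fa))

def criterion(p_hit, p_fa):
    # Standard signal-detection criterion.
    return -0.5 * (Z(clip(p_hit)) + Z(clip(p_fa)))

def percent_correct_unbiased(d):
    # Mapping between unbiased proportion correct and d' (cf. Fig. 2B).
    return PHI(d / 2.0)
```

For example, a hit rate of 0.9 with a false-alarm rate of 0.1 gives d' ≈ 2.56 with zero criterion (no bias).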
D-prime performance across orientations was fitted with an inverted, dc-shifted Gaussian:
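A minimal sketch of such a fit, assuming the form d'(θ) = B − A·exp(−θ²/2σ²) and using a grid search over σ with a linear solve for the offset B and depth A (illustrative Python; the paper does not specify its fitting procedure, and the function name is hypothetical):

```python
import numpy as np

def fit_inverted_gaussian(theta_deg, d):
    # Fit d'(θ) ≈ B - A * exp(-θ² / (2σ²)): for each candidate σ on a grid,
    # solve for (B, A) by linear least squares; keep the best σ.
    theta_deg = np.asarray(theta_deg, float)
    d = np.asarray(d, float)
    best = (np.inf, None)
    for sigma in np.linspace(5.0, 90.0, 341):
        g = np.exp(-theta_deg**2 / (2.0 * sigma**2))
        X = np.column_stack([np.ones_like(g), -g])   # columns for B and A
        coef, *_ = np.linalg.lstsq(X, d, rcond=None)
        err = np.sum((X @ coef - d) ** 2)
        if err < best[0]:
            best = (err, (coef[0], coef[1], sigma))   # (B, A, sigma)
    return best[1]
```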
The animals' reaction times were calculated from stimulus onset to the onset of the saccade. Consequently, there was no reaction-time measurement on trials in which the animals remained fixated at the fixation point.
VSD Imaging
For each trial, an image sequence was captured for a total of 1.2 s, including pre-stimulus and post-stimulus frames. The image sequence was analyzed to extract the response using a variant of previously reported routines (Bai et al 2021, Chen et al 2006, Chen et al 2008, Chen et al 2012).
Image stabilization was introduced as the first stage of pre-processing to de-accentuate blood vessel edges in the ΔF/F response map caused by micro-movements of the camera and/or the cortex during imaging. The image intensity across time at each individual pixel was modelled with separable motion-free (Ix0(t)) and motion-related components as follows:
For each trial, a single global motion vector was obtained by estimating the translational motion of the center portion of the images (1/4 of the imaging area). The motion coefficients for each pixel were then obtained by least-squares fitting to the model. The motion-corrected image is Ix0(t). Compared with the traditional image-registration approach, this approach to image stabilization has the advantage of correcting for non-rigid movements (rotations, expansions/contractions, affine transformations, local distortions, etc.) and sub-pixel motion.
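The per-pixel fit might be sketched as follows. Since the exact model equation is not reproduced here, this illustrative Python sketch assumes a simple additive-linear dependence of pixel intensity on the global motion estimates mx(t), my(t); the function name and parametrization are hypothetical.

```python
import numpy as np

def stabilize_pixel(I, mx, my):
    # One pixel's intensity time course I(t) is regressed on the global
    # motion estimates, assuming I(t) ≈ I0 + a*mx(t) + b*my(t) (an assumed
    # reading of the elided model); a, b are fit by least squares and the
    # motion-locked component is subtracted out.
    X = np.column_stack([np.ones_like(mx), mx, my])
    coef, *_ = np.linalg.lstsq(X, I, rcond=None)
    return I - coef[1] * mx - coef[2] * my   # motion-corrected time course
```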
Retinotopic and Columnar Template Decoding
Template decoding was used to summarize the retinotopic and columnar response for each image frame. The retinotopic response map of the target and the columnar orientation map of the imaging area were estimated from the Fixation recording blocks, in which responses were evoked by visual stimulation at 5 Hz. The preprocessing steps for the Fixation recording blocks were: image stabilization, 5 Hz FFT response extraction, ΔF/F normalization, then down-sampling to 128x128 pixels (from 512x512). Baseline fluorescence (F0) was estimated as the average fluorescence over frames -80 to 0 ms relative to stimulus onset. ΔF/F was calculated as:
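The ΔF/F computation can be sketched as follows (illustrative Python; the frame-time vector and array layout are assumptions):

```python
import numpy as np

def dff(frames, t_ms, baseline=(-80, 0)):
    # frames: (n_frames, H, W) fluorescence images;
    # t_ms: frame times relative to stimulus onset, in ms.
    pre = (t_ms >= baseline[0]) & (t_ms <= baseline[1])
    F0 = frames[pre].mean(axis=0)      # baseline fluorescence per pixel
    return (frames - F0) / F0          # ΔF/F per frame and pixel
```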
The retinotopic response map of the Gabor target (Hret(x)) was estimated by fitting a 2D Gaussian to the 5 Hz flashing-Gabor amplitude response. The full-ROI (8x8 mm2 imaging area) columnar orientation response maps (Hori(x)) were estimated from the flashing full-field gratings, where the 5 Hz FFT grating response amplitudes were bandpass filtered from 0.8 to 3.0 cycles/mm and the orientation tuning of each pixel was estimated as described previously (Chen et al 2012). Subsequently, the full-ROI orientation map was windowed by the retinotopic map to co-localize the retinotopic and columnar decoders. The columnar map comprised the pixelwise response magnitude (A(x)) and tuning angle (θ(x)), represented in modified Euler form:
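One standard way to obtain such magnitude and angle maps is a vector sum over the grating responses on the doubled-angle circle, z(x) = Σk Rk(x)·exp(i2θk), with A(x) = |z(x)| and θ(x) = arg(z(x))/2. A minimal Python sketch of this assumed computation (the paper refers to Chen et al 2012 for the exact procedure):

```python
import numpy as np

def orientation_map(responses, angles_deg):
    # Vector sum over grating orientations; doubling the angle maps the
    # 180°-periodic orientation domain onto the full circle.
    # responses: (n_orientations, ...) response amplitude per grating angle.
    phasors = np.exp(2j * np.deg2rad(np.asarray(angles_deg, float)))
    z = np.tensordot(phasors, np.asarray(responses, float), axes=([0], [0]))
    return np.abs(z), np.rad2deg(np.angle(z)) / 2.0   # A(x), θ(x)
```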
These maps served as templates for decoding the retinotopic and columnar responses from the Detection blocks. To further reduce the effect of motion artefacts, a pixel-wise reliability-weighted approach was adopted. The Detection blocks were preprocessed with image stabilization, down-sampling to 128x128, and ΔF/F normalization as above. From the pre-processed images, the retinotopic-scale variance and columnar-scale variance were calculated to be used as reliability weights.
The variance was obtained from condition-mean-subtracted residuals taken across all frames and trials, with the ΔF/F response (Rret(x,t)) bandpass filtered between 0.8 and 3.0 cycles/mm (Rcol(x,t)) for the columnar-scale variance. Reliability weighting was implemented by normalizing each template pixel by the corresponding pixel-wise variance.
Two different columnar decoding methods were employed. The first examined the overall response aligned to the orientation of the Gabor (0° orientation tuning axis). A second columnar decoding scheme was employed to examine the full orientation population response. This second scheme comprised 12 decoders that evenly partitioned the orientation space, such that each decoder contained the column response magnitude in the subset of pixels tuned within ±7.5° of centers at -75° to 90° in 15° steps. Each decoder therefore represents the population response of similarly tuned neural ensembles spanning the orientation space every 15°. Each decoder was normalized by the summed response magnitude of its pixel subset; in this way, each sub-population response was equally represented in the population tuning curve.
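A minimal sketch of this 12-decoder partition (illustrative Python; the pixel-wise reliability weighting is omitted for clarity, and the function name is hypothetical):

```python
import numpy as np

def population_tuning(resp, theta_map, amp_map, centers=np.arange(-75, 91, 15)):
    # One decoder per 15° orientation bin: the magnitude-weighted response over
    # pixels tuned within ±7.5° of the bin center, normalized by the summed
    # response magnitude of that pixel subset (per the text).
    out = []
    for c in centers:
        diff = (theta_map - c + 90.0) % 180.0 - 90.0   # wrap to (-90°, 90°]
        w = amp_map * (np.abs(diff) <= 7.5)
        out.append(np.sum(w * resp) / np.sum(w))
    return np.array(out)
```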
The formulation for the reliability-weighted templates and the decoding is summarized below for the retinotopic response time course (rret(t)), the columnar response time course (r090(t)), and the columnar population tuning time course (rθ(t)).
Response Pooling across Experiments
Across experiments, the varying effectiveness of VSD staining led to large variations in the noise level and amplitude of the ΔF/F response. For pooling data across experiments, response amplitudes were normalized based on singular value decomposition (SVD). Here, the decoded response of the reference target-only Detection block was used. These were trials of varying target contrast levels on the uniform, mean-luminance background instead of the oriented grating background, collected on the same day. The assumption is that the neural response amplitude and dynamics to the target should be the same irrespective of the experiment day. Retinotopic and columnar decoder responses for the target-only block were calculated as for the background detection block described above. Target contrast responses for each experiment were interpolated so that all assessed target contrast levels across experiments were represented; SVD was then performed over the frames -100 to 200 ms around stimulus onset across all experiments. In this way, the first component of the SVD (SVD1) represented the average neural dynamics of the target contrast response, and the SVD1 coefficients represent the magnitude of this target contrast response on each experiment day. Responses from each experiment were then scaled to match the experiment with the largest SVD1 coefficient. This was performed separately for each decoded response. When pooling data for the mean and standard deviation, the inverse of the squared scaling factor was used as the reliability weight.
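The SVD-based scaling step can be sketched as follows (illustrative Python; the sign ambiguity of SVD components is handled with an absolute value, and the function name is hypothetical):

```python
import numpy as np

def svd_scale_factors(R):
    # R: (n_experiments, n_frames) array; each row is one experiment's decoded
    # target-contrast response time course (already interpolated to common
    # contrast levels). SVD1 captures the shared dynamics; each experiment's
    # projection onto it measures that day's response magnitude.
    U, S, Vt = np.linalg.svd(R, full_matrices=False)
    coef = np.abs(U[:, 0] * S[0])     # |SVD1 coefficient| per experiment
    return coef.max() / coef          # scale each day to the largest-amplitude day
```

Per the text, the inverse of each squared scale factor would then serve as the reliability weight when pooling across experiments.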
Lastly, responses are expressed as a modified z-score, obtained by normalizing each response by its standard deviation. To calculate this standard deviation, responses were grouped by the presented stimuli, and the mean of each group was subtracted. The residuals from this mean subtraction, over frames 50 to 250 ms post stimulus onset, were pooled according to the aforementioned experiment reliability weights Wexp to obtain the response standard deviation.
Responses beyond the saccade may contain unwanted signals. Frames beyond the reaction time for each trial were therefore omitted from summary statistics. For integrated responses within trials, responses were averaged up to the frame of the saccade. For frame-by-frame averaging across trials for response time courses, trials were dropped from the average beyond their reaction-time frame.
Descriptive Trends across Background Orientation
Trends were fitted to the normalized VSD response across background orientations (e.g., gray curve in Fig. 4D). VSD responses were first averaged over a specific range of frames, and for illustration, the trends of the averaged response across orientations were fitted with either a flat line or a Gaussian. The best-fitting trend by the F-test was chosen for display in the figures.
Behavior Correlation
The retinotopic and the columnar response time courses were correlated against the monkey's behavioral judgments. Pearson's correlation coefficient was estimated between the instantaneous response in time and the overall behavioral sensitivity index d'. To account for the different trial counts between background orientations, a trial-count-weighted version of the correlation coefficient was adopted:
The p-values for the weighted coefficients were estimated using the standard t-score, replacing the degrees of freedom with an entropy-based effective estimate from the weights.
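The trial-count-weighted correlation can be sketched as follows (illustrative Python; the exact weighting convention in the paper's elided formula may differ):

```python
import numpy as np

def weighted_corr(x, y, w):
    # Pearson correlation with per-point weights (here, trial counts):
    # weighted means, weighted covariance, weighted variances.
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.asarray(w, float) / np.sum(w)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (y - my) ** 2))
```

With equal weights this reduces to the ordinary Pearson coefficient.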
Normalization model of orientation masking dynamics
A simple model of the neuronal population response with divisive normalization described our results qualitatively. In this model, each orientation column was tuned to one of 12 different orientations: -75° to 90° in 15° increments. The responses of each orientation column were specified by the simple normalization model summarized in Figure 8. The input stimulus is specified by the contrast and orientation of the target, the contrast and orientation of the background, and the duration of the stimulus: s = (CT, θT, CB, θB, D). In the model, the input stimulus generates an excitation signal re(t|s, θmax) that is linear with stimulus contrast and a normalization signal rn(t|s, θmax) that is also linear with stimulus contrast. Without loss of generality, all signals were scaled by the 24%-contrast target-only response averaged over 50-200 ms. These excitation and normalization signals are controlled by the excitation and normalization parameters, Ωe and Ωn, described below. The normalization signal is then combined with a normalization constant r0 to obtain the normalization factor; the normalization constant limits how small the normalization factor can become. The normalized response is obtained by dividing the excitation signal by the normalization factor. The final response is then obtained by applying a response exponent p, which is similar to applying a spiking nonlinearity:
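The divisive step can be sketched directly from this description, using the r0 and p values stated later in this section (illustrative Python; the study's implementation was in Matlab):

```python
import numpy as np

def final_response(r_e, r_n, r0=0.03125, p=2.0):
    # Divide the excitation signal by the normalization factor (normalization
    # signal plus constant r0), then apply the response exponent p.
    return (np.asarray(r_e, dtype=float) / (np.asarray(r_n, dtype=float) + r0)) ** p
```

When the normalization signal is still near zero, r0 alone limits the gain (a unit excitation gives (1/0.03125)² = 1024); once the normalization signal approaches the excitation signal, the response is strongly suppressed.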
In Figure 8, we show the final responses corresponding to the center of the stimulus x0. The excitatory response at that location, for background and for target plus background, is obtained by convolving the effective input contrast signals with the spatiotemporal impulse-response function and evaluating it at x0
where cBe (x,t) and cTBe (x,t) are the effective input contrast signals, and he (x,t) is the spatiotemporal impulse response function. The effective excitatory contrast of the background for the orientation channel with preferred orientation θmax is given by
where CB(x) is the background contrast, σe is the falloff parameter of the column's orientation tuning function, and w(t; D) is a temporal pulse function of width D. Similarly, the effective target contrast is given by
The effective excitatory contrast for target plus background is slightly more complicated because there can be some contrast summation or cancellation depending on the phase and orientation of the target relative to the background
where λ (x) is the contrast correction factor,
The spatiotemporal impulse response function is the separable product of a Gaussian distribution and a gamma distribution
where se is the standard deviation of the 2D Gaussian and ae, be are the shape and scale parameters of the gamma distribution.
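The consequence of giving the normalization signal slower gamma dynamics can be illustrated numerically. The shape and scale values below are illustrative only (the fitted parameter values are not reproduced here); r0 = 0.03125 and p = 2 follow the text.

```python
import math

def gamma_pdf(t, a, b):
    # Gamma density with shape a and scale b, used as a temporal kernel.
    return 0.0 if t <= 0 else t**(a - 1) * math.exp(-t / b) / (math.gamma(a) * b**a)

dt = 0.005                                     # 5 ms time step
ts = [i * dt for i in range(160)]              # 0 - 0.8 s
h_e = [gamma_pdf(t, 4.0, 0.010) for t in ts]   # fast excitatory kernel (illustrative)
h_n = [gamma_pdf(t, 4.0, 0.030) for t in ts]   # slower normalization kernel

def convolve(x, h):
    # Discrete causal convolution, scaled by dt.
    return [dt * sum(x[j] * h[i - j] for j in range(i + 1)) for i in range(len(x))]

pulse = [1.0 if t < 0.25 else 0.0 for t in ts]            # 250 ms stimulus
r_e = convolve(pulse, h_e)                                # excitation signal
r_n = convolve(pulse, h_n)                                # delayed normalization signal
r = [(e / (n + 0.03125)) ** 2 for e, n in zip(r_e, r_n)]  # r0 and p from the text
```

Because r_n lags r_e, the normalized response r shows a large initial transient that is later quenched as the normalization signal catches up, reproducing the biphasic character described in the Results.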
The formulas for the normalization signal are the same as for the excitatory signal, except that the four parameters are allowed to differ: Ωe = (σe, se, ae, be), Ωn = (σn, sn, an, bn).
Parameter values were adopted from known properties of single neurons in primary visual cortex. The response exponent p was constrained to 2.0, consistent with single-neuron contrast-response functions (Albrecht & Hamilton 1982, Geisler & Albrecht 1997, Sclar et al 1990). It was assumed that the peak orientations of the excitatory signal and the suppressive normalization signal, θmax, were the same (Cavanaugh et al 2002a), but that the orientation bandwidth of the normalization signal was greater than that of the excitation signal, σn > σe (Cavanaugh et al 2002b), and that the spatial pooling region for the normalization signal was larger than that for the excitation signal, sn > se (Cavanaugh et al 2002a, Cavanaugh et al 2002b, Levitt & Lund 2002, Sceniak et al 2001). As shown in Fig. 8 – figure supplement 1, we found that setting sn > se had little effect on the modeling of our empirical results (Fig. 5); for simplicity, our final model assumed sn = se. Finally, it was assumed that the temporal dynamics of the normalization signal, determined by parameters an and bn, are slower than those of the excitation signal, determined by parameters ae and be (Groen et al 2022, Zhou et al 2019).
The following were the parameter values used in Figure 8: r0 = 0.03125,
All analyses were done using Matlab R2018a.
Statistics
Two animals were examined to verify the consistency of the experimental approach and results. Multiple recordings were made from each animal. The number of recordings was based on previous experience; no statistical method was used to predetermine sample size.
Statistical analyses were conducted in Matlab (R2018a).
Data and Code Availability
The data and custom code from this study will be made available upon manuscript acceptance.
References
- Motion Selectivity and the Contrast-Response Function of Simple Cells in the Visual Cortex. Visual Neuroscience 7:531–46
- Striate Cortex of Monkey and Cat: Contrast Response Function. Journal of Neurophysiology 48:217–37
- Stimulus specific responses from beyond the classical receptive field: neurophysiological mechanisms for local-global comparisons in visual neurons. Annu Rev Neurosci 8:407–30
- Circuits and Mechanisms for Surround Modulation in Visual Cortex. Annu Rev Neurosci 40:425–51
- Contribution of feedforward, lateral and feedback connections to the classical receptive field center and extra-classical receptive field surround of primate V1. Visual Perception, Part 1, Fundamentals of Vision: Low and Mid-Level Processes in Perception 154:93–120
- Dural substitute for long-term imaging of cortical activity in behaving monkeys and its clinical implications. Journal of Neuroscience Methods 114:119–33
- Similar masking effects of natural backgrounds on detection performances in humans, macaques, and macaque-V1 population responses. J Neurophysiol 125:2125–34
- Scale-Invariant Visual Capabilities Explained by Topographic Representations of Luminance and Texture in Primate V1. Neuron 100:1–9
- Orientational selectivity of the human visual system. J Physiol 187:437–45
- Summation and Division by Neurons in Primate Visual Cortex. Science 264:1333–36
- Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. Journal of Neurophysiology 88:2530–46
- Selectivity and spatial distribution of signals from the receptive field surround in macaque V1 neurons. Journal of Neurophysiology 88:2547–56
- Optimal decoding of correlated neural population responses in the primate visual cortex. Nat Neurosci 9:1412–20
- Optimal Temporal Decoding of V1 Population Responses in a Reaction-Time Detection Task. J Neurophysiol 99:1366–79
- The relationship between voltage-sensitive dye imaging signals and spiking activity of neural populations in primate V1. Journal of Neurophysiology :3281–95
- The tilt illusion: phenomenology and functional implications. Vision Res 104:3–11
- Human luminance pattern-vision mechanisms: masking experiments require a new model. Journal of the Optical Society of America A 11:1710–9
- Local luminance and contrast in natural images. Vision Res 46:1585–98
- Visual cortex neurons in monkeys and cats: Detection, discrimination, and identification. Visual Neuroscience 14:897–919
- VSDI: A new era in functional imaging of cortical dynamics. Nature Reviews Neuroscience 5:874–85
- Temporal dynamics of neural responses in human visual cortex. J Neurosci 42:7562–80
- Nonlinear model of neural responses in cat visual cortex. Computational Models of Visual Processing :119–33
- Normalization of Cell Responses in Cat Striate Cortex. Visual Neuroscience 9:181–97
- Distinct spatiotemporal mechanisms underlie extra-classical receptive field modulation in macaque V1 microcircuits. eLife 9
- Receptive fields of single neurones in the cat's striate cortex. The Journal of Physiology 148:574–91
- Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology 195:215–43
- The spatial extent over which neurons in macaque striate cortex pool visual signals. Visual Neuroscience 19:439–52
- Nonlinear Lateral Interactions in V1 Population Responses Explained by a Contrast Gain Control Model. J Neurosci 38:10069–79
- Inhibitory Stabilization of the Cortical Network Underlies Visual Surround Suppression. Neuron 62:578–92
- Collinear stimuli regulate visual responses depending on cell's contrast threshold. Nature 391:580–84
- The Stabilized Supralinear Network: A Unifying Circuit Motif Underlying Multi-Input Integration in Sensory Cortex. Neuron 85:402–17
- Visual spatial characterization of macaque V1 neurons. J Neurophysiol 85:1873–87
- Contrast's effect on spatial summation by macaque V1 neurons. Nat Neurosci 2:733–39
- Space and time in visual context. Nat Rev Neurosci 8:522–35
- Natural signal statistics and sensory gain control. Nature Neuroscience 4:819–25
- Coding of image contrast in central visual pathways of the macaque monkey. Vision Research 30:1–10
- Constrained sampling experiments reveal principles of detection in natural scenes. Proc Natl Acad Sci U S A 114:E5731–40
- Dynamics of depolarization and hyperpolarization in the frontal cortex and saccade goal. Science 295:862–65
- Linking V1 Activity to Behavior. Annual Review of Vision Science 4:287–310
- Imaging cortical dynamics at high spatial and temporal resolution with novel blue voltage-sensitive dyes. Neuron 24:791–802
- Strong recurrent networks compute the orientation tuning of surround modulation in the primate primary visual cortex. J Neurosci 32:308–21
- Complex dynamics of V1 population responses explained by a simple gain-control model. Neuron 24:943–56
- Spatial-frequency masking in vision: critical bands and spread of masking. Journal of the Optical Society of America 62:1221–32
- Paradoxical Effects of External Modulation of Inhibitory Interneurons. The Journal of Neuroscience 17:4382–88
- Model of visual contrast gain control and pattern masking. Journal of the Optical Society of America A 14:2379–91
- Possible neural substrates for orientation analysis and perception. Perception 16:693–709
- Spatial-Frequency Tuning of Orientation Selective Units Estimated by Oblique Masking. Vision Research 23:873–82
- Rapid and precise retinotopic mapping of the visual cortex obtained by voltage sensitive dye imaging in the behaving monkey. J Neurophysiol 98:1002–14
- Predicting neuronal dynamics with a delayed gain control model. PLoS Comput Biol 15
Copyright
© 2023, Chen et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.