Stimulus-dependent relationships between behavioral choice and sensory neural responses
Abstract
Understanding perceptual decision-making requires linking sensory neural responses to behavioral choices. In two-choice tasks, activity-choice covariations are commonly quantified with a single measure of choice probability (CP), without characterizing their changes across stimulus levels. We provide theoretical conditions for stimulus dependencies of activity-choice covariations. Assuming a general decision-threshold model, which comprises both feedforward and feedback processing and allows for a stimulus-modulated neural population covariance, we analytically predict a very general and previously unreported stimulus dependence of CPs. We develop new tools, including refined analyses of CPs and generalized linear models with stimulus-choice interactions, which accurately assess the stimulus- or choice-driven signals of each neuron, characterizing stimulus-dependent patterns of choice-related signals. With these tools, we analyze CPs of macaque MT neurons during a motion discrimination task. Our analysis provides preliminary empirical evidence for the promise of studying stimulus dependencies of choice-related signals, encouraging further assessment in wider data sets.
Introduction
How perceptual decisions depend on the responses of sensory neurons is a fundamental question in systems neuroscience (Parker and Newsome, 1998; Gold and Shadlen, 2001; Romo and Salinas, 2003; Gold and Shadlen, 2007; Siegel et al., 2015; van Vugt et al., 2018; O’Connell et al., 2018; Steinmetz et al., 2019). The seminal work of Britten et al., 1996 showed that responses from single cells in area MT of monkeys during a motion discrimination task covaried with behavioral choices. Similar activity-choice covariations have been found in many sensory areas during a variety of both discrimination and detection two-choice tasks (see Nienborg et al., 2012; Cumming and Nienborg, 2016, for reviews). Identifying which cells encode choice, and how and when they encode it, is essential to understand how the brain generates behavior based on sensory information.
With two-choice tasks, Choice Probability (CP) has been the most prominent measure (Britten et al., 1996; Parker and Newsome, 1998; Nienborg et al., 2012) used to quantify activity-choice covariations. Although early studies (Britten et al., 1996; Dodd et al., 2001) explored potential dependencies of the CP on the stimulus content, no significant evidence of a CP stimulus dependency was found. Accordingly, it has become common to report for each neuron a single CP value to quantify the strength of activity-choice covariations. This scalar CP value has typically been calculated either only from trials with a single, non-informative stimulus level (e.g. Dodd et al., 2001; Parker et al., 2002; Krug et al., 2004; Wimmer et al., 2015; Katz et al., 2016; Wasmuht et al., 2019), or by pooling trials across stimulus levels (the so-called grand CP [Britten et al., 1996]) under the assumption that choice-related neural signals are separable from stimulus-driven responses (e.g. Verhoef et al., 2015; Pitkow et al., 2015; Smolyanskaya et al., 2015; Bondy et al., 2018). Alternatively, a single CP is sometimes obtained simply by averaging CPs across stimulus levels (e.g. Cai and Padoa-Schioppa, 2014; Latimer et al., 2015; Liu et al., 2016). Even when activity-choice covariations are modeled jointly with other covariates of the neural responses using Generalized Linear Models (GLMs) (Truccolo et al., 2005; Pillow et al., 2008), the stimulus level and the choice value are usually used as separate predictors of the responses (Park et al., 2014; Runyan et al., 2017; Scott et al., 2017; Pinto et al., 2019; Minderer et al., 2019).
This focus on characterizing a neuron by a single CP value is mirrored in existing theoretical studies, which rely on a standard feedforward model of decision making in which a neural representation of the stimulus is converted by a threshold mechanism into a behavioral choice (Shadlen et al., 1996; Cohen and Newsome, 2009b; Haefner et al., 2013), assuming a single, zero-signal stimulus level and hence ignoring stimulus dependencies of CPs. Furthermore, so far no analytical mechanistic model accounts for feedback contributions to activity-choice covariations, which are known to be important empirically (Nienborg and Cumming, 2009; Cumming and Nienborg, 2016; Bondy et al., 2018).
The main contribution of this work is to extend CP analysis from reporting a single CP value for each cell to a more complete characterization of within-cell patterns of choice-related activity across stimulus levels. First, we extended the analytical results of Haefner et al., 2013 to the general case of informative stimuli and to include both feedforward and feedback sources of the covariation between the choice and each cell. Our results predict that CP stimulus dependencies can appear in a cell-specific way because of stimulus dependencies of cross-neuronal correlations. We show that they can also appear for all neurons because of the transformation of the neural representation of the stimulus into a binary choice, if the decision-making process relies on a threshold mechanism (or threshold criterion) to convert a continuous decision variable into a binary choice. Second, we developed two new analytical methods (a refined CP analysis and a new generalized linear model with stimulus-choice interactions) with increased power to detect stimulus dependencies in activity-choice covariations. Our new CP analysis isolates within-cell stimulus dependencies of activity-choice covariations from across-cells heterogeneity in the magnitude of the CP values, which may hinder their detection (Britten et al., 1996). Third, we applied this analysis framework to the classic data set of Britten et al., 1996, containing recordings from neurons in visual cortical area MT, and found evidence for our predicted population-level threshold-induced dependency but also additional interesting cell-specific dependencies. We found consistent results on the existence of stimulus-choice interactions in neural activity both with our refined CP analysis and using generalized linear models with interaction terms.
Finally, we show that the main properties of the additional dependencies found can be explained by modeling the cross-neuronal correlation structure induced by gain fluctuations (Goris et al., 2014; Ecker et al., 2014; Kayser et al., 2015; Schölvinck et al., 2015), which have been shown to explain a substantial amount of response variability in MT visual cortex (Goris et al., 2014).
Results
We will first present the analysis of a theoretical model of how informative stimuli modulate choice probabilities. We will then analyze MT visual cortex neuronal responses from Britten et al., 1996, applying new methods developed to quantify stimulus-dependent activity-choice covariations with CPs and GLMs. This analysis provides preliminary empirical evidence in support of using these new methods for studying stimulus dependencies of activity-choice covariations.
A general account for choice-related neural signals in the presence of informative stimuli
In a two-choice psychophysical task, such as a stimulus discrimination or detection task, a neuron is said to contain a ‘choice-related signal’, or ‘decision-related signal’, when its activity carries information about the behavioral choice above and beyond the information that it carries about the stimulus (Britten et al., 1996; Parker and Newsome, 1998; Nienborg et al., 2012). The interpretation of choice-related signals in terms of decision-making mechanisms is, however, difficult. Much progress in our understanding of their meaning has relied on using models to derive mathematically the relationship between the underlying decision-making mechanisms and the different measures of activity-choice covariation (Haefner et al., 2013; Pitkow et al., 2015) usually used to quantify choice-related signals.
The most widely used measure of activity-choice covariation for tasks involving two choices is choice probability, $\mathrm{CP}$. The $\mathrm{CP}$ is defined as the probability that a random sample of neural activity from all trials with behavioral choice $D=1$ is larger than one sample randomly drawn from all trials with choice $D=-1$ (Britten et al., 1996; Parker and Newsome, 1998; Nienborg et al., 2012; Haefner et al., 2013):
where $r$ is any measure of the neural activity, which we will here consider to be the neuron’s per-trial spike count. Another prominent measure of choice-related signals is choice correlation, $\mathrm{CC}$ (Pitkow et al., 2015). This quantity is defined under the assumption that the binary choice $D$ is mediated by an intermediate continuous decision value, $d$. This value may represent the brain’s estimate of the stimulus, or an internal belief about the correct choice. The definition of CC further assumes that the categorical choice $D$ is related to $d$ via a thresholding operation, such that the choice depends on whether $d$ is smaller or larger than a threshold $\theta$ (Gold and Shadlen, 2007). Its expression is as follows:
where $\mathrm{cov}(r,d)$ is the covariance of the neural responses with $d$, and $\mathrm{var}\,r$ and $\mathrm{var}\,d$ their variances across trials. Perhaps the simplest measure of activity-choice covariation, which has been used in empirical studies (Mante et al., 2013; Ruff et al., 2018), is what we call the choice-triggered average, $\mathrm{CTA}$, defined as the difference between a neuron’s average spike count $r$ across trials with behavioral decision $D=1$ minus the average spike count in trials with decision $D=-1$:
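Written out, the three measures take the following forms (a reconstruction consistent with the verbal definitions above; for discrete spike counts, ties in Equation 1 are conventionally counted with weight $1/2$):

```latex
\mathrm{CP}_i \;=\; p\!\left( r_i^{(1)} > r_i^{(-1)} \right),
  \qquad r_i^{(1)} \sim p(r_i \mid D=1),\;\; r_i^{(-1)} \sim p(r_i \mid D=-1)
  \tag{1}
```

```latex
\mathrm{CC}_i \;=\; \frac{\mathrm{cov}(r_i, d)}{\sqrt{\mathrm{var}\, r_i \,\mathrm{var}\, d}}
  \tag{2}
```

```latex
\mathrm{CTA}_i \;=\; \left\langle r_i \right\rangle_{D=1} \;-\; \left\langle r_i \right\rangle_{D=-1}
  \tag{3}
```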
The CP and CTA quantify activity-choice covariations without assumptions about the underlying decision-making mechanisms. However, their interpretation has commonly (Nienborg et al., 2012) been informed in previous analytical and computational studies by assuming a specific feedforward decision-threshold model of choice-related signals (Shadlen et al., 1996; Cohen and Newsome, 2009b). Haefner et al., 2013 used that model to derive an analytical expression for the CP valid under two assumptions that are often violated in practice: first, the model assumes a causally feedforward structure in which sensory responses cause the decision, and second, it assumes that both decisions are equally likely. However, the presence of informative stimuli leads to one choice being more likely than the other, hampering the application of the analytical results to grand CPs and to detection tasks (Bosking and Maunsell, 2011; Smolyanskaya et al., 2015), which involve informative stimuli. Furthermore, decision-related signals have empirically been shown to reflect substantial feedback components (Nienborg and Cumming, 2009; Nienborg et al., 2012; Macke and Nienborg, 2019). We will next extend this previous model (Haefner et al., 2013) to obtain a general expression of the CP valid for informative stimuli and regardless of the feedforward or feedback origin of the dependencies between the neural responses and the decision variable.
We first consider the most generic model, in which we simply assume that the response $r_i$ of the $i$-th sensory neuron covaries with the behavioral decision $D$, but without making any assumption about the origin of that covariation (Figure 1A). We find that, to a first approximation (exact solution provided in Methods), the CP of cell $i$ captures the difference between the distributions $p(r_i|D=1)$ and $p(r_i|D=-1)$ resulting from a difference in their means, and hence is related to the CTA:
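Under a Gaussian approximation to the two choice-conditioned response distributions, this first-order relation can be written as (a reconstruction, chosen to be term-by-term consistent with Equations 5 and 7 as discussed below):

```latex
\mathrm{CP}_i \;\approx\; \frac{1}{2} \;+\; \frac{1}{2\sqrt{\pi}}\,
  \frac{\mathrm{CTA}_i}{\sqrt{\mathrm{var}\, r_i}}
  \tag{4}
```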
The $\mathrm{CTA}$ generically quantifies the linear dependencies between responses and choice, and this approximation of the CP does not depend on their feedforward or feedback origin (Figure 1A). We next add the assumption that the relationship between a neuron’s response and the choice is mediated by the continuous variable $d$, as commonly assumed by previous studies and described above (Figure 1B). This splits any correlation between the neural response $r_i$ and choice $D$ into the product of the two respective correlations: $\mathrm{corr}(r_i,D)=\mathrm{corr}(r_i,d)\,\mathrm{corr}(d,D)={\mathrm{CC}}_i\,\mathrm{corr}(d,D)$, where ${\mathrm{CC}}_i=\mathrm{corr}(r_i,d)$ is the choice correlation as defined in Equation 2. It follows (see Methods) that:
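With the mediating variable $d$, the normalized CTA of cell $i$ factorizes as (a reconstruction; the normalized $\mathrm{CTA}_d$ term is exactly the quantity discussed next):

```latex
\frac{\mathrm{CTA}_i}{\sqrt{\mathrm{var}\, r_i}} \;=\; \mathrm{CC}_i \,
  \frac{\mathrm{CTA}_d}{\sqrt{\mathrm{var}\, d}}
  \tag{5}
```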
where ${\mathrm{CTA}}_{d}$ is the average difference in $d$ between the two choices, in analogy to the ${\mathrm{CTA}}_{i}$ for neuron $i$. Equation 5 describes how activity-choice covariations appear in the model (Figure 1C): the threshold mechanism dichotomizes the space of the decision variable, resulting in a different mean of $d$ for each choice, which is quantified in ${\mathrm{CTA}}_{d}$. If the activity of cell $i$ is correlated with the decision variable $d$ (nonzero ${\mathrm{CC}}_{i}$), the ${\mathrm{CTA}}_{d}$ is then reflected in the ${\mathrm{CTA}}_{i}$ of the cell. In previous theoretical work (Haefner et al., 2013), the distribution over $d$ was assumed to be fixed and centered on the threshold value $\theta$. Here, we remove that assumption and consider that $d$ may not be centered on the threshold if the stimulus is informative, containing evidence in favor of one of the two choices, or if the choice is otherwise biased. In those cases, the normalized ${\mathrm{CTA}}_{d}$ in Equation 5, namely ${\mathrm{CTA}}_{d}/\sqrt{\mathrm{var}\,d}$, can be determined (see Materials and methods) in terms of the probability of choosing choice 1, ${p}_{\text{CR}}\equiv p(D=1)=p(d>\theta )$, which we call the ‘choice rate’. Since the decision variable is determined as the combination of the responses of many cells, its distribution is well approximated by a Gaussian distribution, but now with a nonzero mean determined by the stimulus content. With this assumption, the normalized ${\mathrm{CTA}}_{d}$ for ${p}_{\text{CR}}=0.5$ is equal to $4/\sqrt{2\pi}$, and for other ${p}_{\text{CR}}$ values it differs by a scaling factor
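This scaling factor has the form (a reconstruction from the stated normalization, $h(p_{\text{CR}}=0.5)=1$, and the Gaussian assumption on $d$):

```latex
h(p_{\text{CR}}) \;=\; \frac{\sqrt{2\pi}}{4}\,
  \frac{\varphi\!\left(\Phi^{-1}(p_{\text{CR}})\right)}{p_{\text{CR}}\left(1-p_{\text{CR}}\right)}
  \tag{6}
```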
where $\varphi (x)$ is the density function of a zero-mean, unit-variance Gaussian distribution, and ${\mathrm{\Phi}}^{-1}$ is the corresponding inverse cumulative density function. By construction, $h({p}_{\text{CR}})=1$ for ${p}_{\text{CR}}=0.5$, where it has its minimum. Given the factor $h({p}_{\text{CR}})$, combining Equations 4 and 5 we can relate CP and CC across different choice rates ${p}_{\text{CR}}$, corresponding to different stimulus levels, irrespective of whether the CP is caused by feedforward or feedback signals. In the linear approximation (see Methods for the exact formula and derivation with the decision-threshold model), this relationship reads:
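Combining the pieces above, the linear approximation reads (a reconstruction; at $p_{\text{CR}}=0.5$, where $h=1$, it reduces to the Haefner et al., 2013 form $\mathrm{CP}_i \approx 1/2 + (\sqrt{2}/\pi)\,\mathrm{CC}_i$):

```latex
\mathrm{CP}_i(p_{\text{CR}}) \;\approx\; \frac{1}{2} \;+\;
  \frac{\sqrt{2}}{\pi}\, h(p_{\text{CR}})\, \mathrm{CC}_i(p_{\text{CR}})
  \tag{7}
```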
For equal fractions of choices, ${p}_{\text{CR}}=0.5$, this CP expression corresponds to the linear approximation derived in Haefner et al., 2013. Note that extending the CP formula to ${p}_{\text{CR}}\ne 0.5$ required us to also make explicit the dependency of the choice correlations on the choice rate, ${\mathrm{CC}}_{i}({p}_{\text{CR}})$. Unlike $h({p}_{\text{CR}})$, which is an effect of the decision-making threshold mechanism and shared by all neurons, ${\mathrm{CC}}_{i}({p}_{\text{CR}})$ is specific to and generally different for each neuron, reflecting its role in the perceptual decision-making process. A CC stimulus dependence may arise as a result of stimulus-dependent decision feedback (Haefner et al., 2016; Bondy et al., 2018; Lange and Haefner, 2017), or other sources of stimulus-dependent cross-neuronal correlations (Ponce-Alvarez et al., 2013; Orbán et al., 2016) such as shared gain fluctuations (Goris et al., 2014). In fact, we will show below that gain-induced stimulus-dependent cross-neuronal correlations account for observed features in our empirical data. Note that we do not distinguish between CC stimulus dependencies and a dependence of the CC on ${p}_{\text{CR}}$. We do not make this distinction here because, most generally, a change in the stimulus level results in a change of ${p}_{\text{CR}}$, and the two cannot be disentangled. However, the ${p}_{\text{CR}}$ more generally depends on other factors, such as the reward value, attention level, or arousal state, and in Equation 7 the separate dependencies on the stimulus and ${p}_{\text{CR}}$ can be explicitly indicated as ${\mathrm{CC}}_{i}({p}_{\text{CR}},s)$ when the experimental paradigm allows these two influences to be separated.
For simplicity, we presented above only the general relationship between the CP and CC in Equation 7, derived as a linear approximation for weak activity-choice covariations, as this is the regime relevant for single sensory neurons. See Methods for the exact analytical solution from the threshold model (Equation 16) and Appendix 1 for its derivation. Despite the assumption of weak activity-choice covariations, this approximation is very close over the empirically relevant range of CCs (Figure 2A–B). Below we will focus on a concrete type of CC stimulus dependence, namely one originating from gain fluctuations, but it is clear from Equation 7 that any CC stimulus dependence will modify the $\mathrm{CP}({p}_{\text{CR}})$ shape induced purely by the threshold effect. A summary of the overall relation between the CP, $\mathrm{CTA}$, and $\mathrm{CC}$ is provided in Figure 2D.
The model provides a concrete prediction of a stereotyped dependence of the CP on ${p}_{\text{CR}}$ through $h({p}_{\text{CR}})$ when the choice-related signals are mediated by an intermediate decision variable $d$, which is testable using data. First, under the assumption that the CC is constant, and therefore $h({p}_{\text{CR}})$ is the only source of CP dependence on ${p}_{\text{CR}}$, for a positive CC ($\mathrm{CP}>0.5$) the $\mathrm{CP}({p}_{\text{CR}})$ should have a minimum at ${p}_{\text{CR}}=0.5$ and increase symmetrically as ${p}_{\text{CR}}$ deviates from 0.5 as the result of a change in the stimulus in either direction (Figure 2A). When the CC is negative ($\mathrm{CP}<0.5$), the $\mathrm{CP}({p}_{\text{CR}})$ should have a maximum at ${p}_{\text{CR}}=0.5$ and analogously decrease symmetrically as ${p}_{\text{CR}}$ deviates from 0.5. Second, since the influence of $h({p}_{\text{CR}})$ is multiplicative, it creates higher absolute differences in the CP across different stimulus levels for cells with a stronger CP (either larger or smaller than 0.5). Third, the dependence on $h({p}_{\text{CR}})$ is weak for a wide range of ${p}_{\text{CR}}$ values (Figure 2A), making it empirically detectable only when including highly informative stimuli in the analysis to obtain ${p}_{\text{CR}}$ values very different from 0.5. However, for those ${p}_{\text{CR}}$ values, CP estimates are less reliable, because the choice is expected to be inconsistent with the sensory information in only a few trials, meaning that one of the two distributions $p(r_i|D=1)$ or $p(r_i|D=-1)$ is poorly sampled. This means that, to detect the $h({p}_{\text{CR}})$ modulation for single cells, many trials would be needed for each value of ${p}_{\text{CR}}$ to obtain good estimates. Because $h({p}_{\text{CR}})$ is common to all cells, averaging $\mathrm{CP}({p}_{\text{CR}})$ profiles across cells can also improve the estimation.
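To illustrate how weak the threshold-induced modulation is over the central range of choice rates, $h(p_{\text{CR}})$ can be evaluated numerically (a minimal sketch in plain Python; the closed form used here is a reconstruction normalized so that $h(0.5)=1$):

```python
import math

def norm_pdf(x):
    """Standard normal density φ(x)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_ppf(p, tol=1e-10):
    """Inverse standard normal CDF Φ^{-1}(p), computed by bisection."""
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def h(p_cr):
    """Threshold-induced scaling factor, normalized so that h(0.5) = 1."""
    return (math.sqrt(2.0 * math.pi) / 4.0) * norm_pdf(norm_ppf(p_cr)) / (p_cr * (1.0 - p_cr))

# The modulation stays close to 1 near p_CR = 0.5 and grows only for
# extreme choice rates, i.e. highly informative stimuli:
for p in (0.5, 0.7, 0.9, 0.95):
    print(f"h({p}) = {h(p):.3f}")
```

By symmetry of the Gaussian, $h$ takes the same value at $p_{\text{CR}}$ and $1-p_{\text{CR}}$, matching the predicted symmetric increase of the CP away from $p_{\text{CR}}=0.5$.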
This averaging may also help to isolate the $h({p}_{\text{CR}})$ modulation, assuming that cell-specific ${\mathrm{CP}}_{i}$ stimulus dependencies introduced through choice correlations ${\mathrm{CC}}_{i}$ are heterogeneous across cells and average out. We refer to Appendix 1 for a detailed analysis of the statistical power for the detection of $h({p}_{\text{CR}})$ as a function of the number of trials and cells used to estimate an average $\mathrm{CP}({p}_{\text{CR}})$ profile. We will present below (Section ‘Stimulus dependencies of choice-related signals in the responses of MT cells’) evidence for the $h({p}_{\text{CR}})$ modulation from a reanalysis of the data in Britten et al., 1996.
The structure of CP stimulus dependencies induced by response gain fluctuations
We will now focus on a concrete source of stimulus-dependent correlations that leads to a non-constant $\mathrm{CC}({p}_{\text{CR}})$, namely the effect of gain fluctuations on the stimulus-response relationship (Goris et al., 2014; Ecker et al., 2014; Kayser et al., 2015; Schölvinck et al., 2015). Goris et al., 2014 showed that gain fluctuations could explain 75% of the response variability of monkey MT cells presented with drifting gratings. We derive the CP dependencies on ${p}_{\text{CR}}$ in a feedforward model of decision-making (Shadlen et al., 1996; Haefner et al., 2013) that also models the effect of gain fluctuations on the responses. The feedforward model considers a population of sensory responses, $\overrightarrow{r}=({r}_{1},\mathrm{\dots},{r}_{n})$, with tuning functions $\overrightarrow{f}(s)=({f}_{1}(s),\mathrm{\dots},{f}_{n}(s))$, responses ${r}_{i}={f}_{i}(s)+{\xi}_{i}$, and a covariance structure $\mathrm{\Sigma}$ of the neurons’ intrinsic variability ${\xi}_{i}$. The responses are read out into the decision variable with a linear decoder
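The linear readout can be written as (reconstructed from the surrounding text):

```latex
d \;=\; \overrightarrow{w}^{\top}\overrightarrow{r} \;=\; \sum_{i=1}^{n} w_i\, r_i
\tag{8}
```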
where $\overrightarrow{w}$ are the readout weights. The categorical choice $D$ is made by comparing $d$ to a threshold θ. With this model, the general expression of Equation 7 reduces to
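Substituting the decoder's covariance terms into the general linear approximation gives (a reconstruction consistent with the quantities identified immediately below):

```latex
\mathrm{CP}_i(p_{\text{CR}}) \;\approx\; \frac{1}{2} \;+\;
  \frac{\sqrt{2}}{\pi}\, h(p_{\text{CR}})\,
  \frac{\left(\Sigma(s)\overrightarrow{w}\right)_i}
       {\sqrt{\mathrm{var}\, r_i}\,\sqrt{\overrightarrow{w}^{\top}\Sigma(s)\overrightarrow{w}}}
  \tag{9}
```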
where ${(\mathrm{\Sigma}(s)\overrightarrow{w})}_{i}=\mathrm{cov}({r}_{i},d)$ and $\mathrm{var}\,d={\overrightarrow{w}}^{\top}\mathrm{\Sigma}(s)\overrightarrow{w}$. This expression corresponds to the one derived by Haefner et al., 2013, except for $h({p}_{\text{CR}})$ and for the fact that we now explicitly indicate the dependence of the correlation structure $\mathrm{\Sigma}(s)$ on the stimulus. The expression relates the CP magnitude to single-unit properties, such as the neurometric sensitivity, as well as to population properties, such as the decoder pooling size and the magnitude of the cross-neuronal correlations, which determine the CC (Shadlen et al., 1996; Haefner et al., 2013). In particular, if the decoding weights are optimally tuned to the structure of the covariability $\mathrm{\Sigma}({p}_{\text{CR}}=0.5)$ at the decision boundary, this results in a proportionality between ${\mathrm{CP}}_{i}({p}_{\text{CR}}=0.5)$ and the neurometric sensitivity of the cells: ${\mathrm{CP}}_{i}({p}_{\text{CR}}=0.5)\propto {f}_{i}^{\prime}/{\sigma}_{{r}_{i}}$ (Haefner et al., 2013), as has been experimentally observed (Britten et al., 1996; Parker and Newsome, 1998). While this feedforward model is generic, we concretely study CC stimulus dependencies induced by the effect of global gain response fluctuations on cross-neuronal correlations. Following Goris et al., 2014, we modeled the responses of cell $i$ in trial $k$ as ${f}_{ik}(s)={g}_{k}{f}_{i}(s)$, where $g_k$ is a gain modulation factor shared by the population. We assume that the readout weights $\overrightarrow{w}$ are stimulus-independent. As a consequence, the covariance of population responses $\mathrm{\Sigma}$ has a component due to the gain fluctuations:
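The resulting covariance decomposition takes the form (a reconstruction; both terms are identified in the following sentence):

```latex
\Sigma(s) \;=\; \overline{\Sigma} \;+\; \sigma_G^2\,
  \overrightarrow{f}(s)\,\overrightarrow{f}^{\top}(s)
  \tag{10}
```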
where ${\sigma}_{G}^{2}$ is the variance of the gain $g$ and $\overline{\mathrm{\Sigma}}$ is the covariance not associated with the gain, which for simplicity we assume to be stimulus independent. The component of the cross-neuronal covariance matrix $\mathrm{\Sigma}$ induced by gain fluctuations is proportional to the tuning curves ($\propto \overrightarrow{f}(s){\overrightarrow{f}}^{\top}(s)$). A deviation $\mathrm{\Delta}s\equiv s-{s}_{0}$ of the stimulus from the uninformative stimulus ${s}_{0}$ produces a change $\mathrm{\Delta}\overrightarrow{f}={\overrightarrow{f}}^{\prime}({s}_{0})\mathrm{\Delta}s$ in the population firing rates, which affects the variability of the responses, the variability of the decoder, and their covariance, all of which vary with $\mathrm{\Delta}s$. Because the variance of the decoder $\mathrm{var}\,d={\overrightarrow{w}}^{\top}\mathrm{\Sigma}(s)\overrightarrow{w}$ and the covariance $\mathrm{cov}({r}_{i},d)={(\mathrm{\Sigma}(s)\overrightarrow{w})}_{i}$ both depend on the concrete form of the readout weights, the effect of gain-induced stimulus dependencies on the CP is specific to each decoder. Under the assumption of an optimal linear decoder at the decision boundary ${s}_{0}$ ($\overrightarrow{w}\propto {\mathrm{\Sigma}}^{-1}{\overrightarrow{f}}^{\prime}({s}_{0})$), we obtain an approximation of the CC dependence on the stimulus deviation $\mathrm{\Delta}s$ from ${s}_{0}$ (see Methods for details):
where the slope is determined by the coefficient ${\beta}_{{p}_{\text{CR}}}={\sigma}_{G}{\lambda}_{i}\left[1-{\mathrm{CC}}_{i}^{2}({p}_{\text{CR}}=0.5)\right]$, with ${\lambda}_{i}$ being the fraction of the variance of cell $i$ caused by the gain fluctuations (Methods). The choice rate ${p}_{\text{CR}}$ is determined by the stimulus $\mathrm{\Delta}s$, as characterized by the psychometric function. For this form of the slope coefficient ${\beta}_{{p}_{\text{CR}}}$, obtained with an optimal decoder, all the factors contributing to it are positive (Figure 2C). In Appendix 4 we further analytically describe how gain fluctuations introduce CP stimulus dependencies not only for an optimal decoder, but also for any unbiased decoder. In contrast to the factor $h({p}_{\text{CR}})$, the pattern of $\mathrm{CP}({p}_{\text{CR}})$ profiles produced by the gain fluctuations is cell-specific, with a stronger asymmetric component for cells with higher ${\lambda}_{i}$ (Figure 2C). Furthermore, while the effect of the multiplicative modulation $h({p}_{\text{CR}})$ changes sign depending on whether $\mathrm{CC}>0$ or $\mathrm{CC}<0$, the gain-induced contribution in Equation 11 is additive. As seen in Figure 2C, for cells with a weak activity-choice covariation for uninformative stimuli ($\mathrm{CP}({p}_{\text{CR}}=0.5)$ close to 0.5), this implies that the CP of a neuron can actually change from below 0.5 to above 0.5 across the stimulus range presented in the experiment.
Stimulus dependencies of choice-related signals in the responses of MT cells
In the light of our findings above, we reanalyzed the classic Britten et al., 1996 data containing responses of neurons in area MT during a coarse motion direction discrimination task (see Methods for a description of the data set). Our objective is to identify any patterns of CP dependence on the choice rate/stimulus level. First, we describe our results testing for the threshold-induced CP stimulus dependence, $h({p}_{\text{CR}})$, and then, more generally, we characterize the $\mathrm{CP}({p}_{\text{CR}})$ patterns found in the data using clustering analysis. Finally, as an alternative to CP analysis, we show how to extend Generalized Linear Models (GLMs) of neural activity to include stimulus-choice interaction terms that incorporate the stimulus dependencies of activity-choice covariations derived with our theoretical approach and found above in the MT data.
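As a sketch of the kind of GLM extension mentioned here, a Poisson GLM can model the spike count with the stimulus $s$, the choice $D$, and a multiplicative stimulus-choice interaction $s \times D$ as predictors (a simplified illustration with hypothetical variable names and synthetic data, not the authors' implementation; fitted by Newton's method on the Poisson log-likelihood):

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=50):
    """Fit a Poisson GLM with log link by Newton's method.

    X: (trials, predictors) design matrix; y: observed spike counts.
    Returns the maximum-likelihood coefficients."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)            # predicted rates
        grad = X.T @ (y - mu)            # gradient of the log-likelihood
        hess = X.T @ (X * mu[:, None])   # observed Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n = 4000
s = rng.uniform(-1.0, 1.0, n)            # signed stimulus level (hypothetical)
D = rng.choice([-1.0, 1.0], n)           # behavioral choice
# Design matrix: intercept, stimulus, choice, and stimulus-choice interaction
X = np.column_stack([np.ones(n), s, D, s * D])
beta_true = np.array([1.0, 0.5, 0.3, 0.2])   # hypothetical ground truth
y = rng.poisson(np.exp(X @ beta_true))       # synthetic spike counts

beta_hat = fit_poisson_glm(X, y)
print("interaction coefficient:", beta_hat[3])
```

A nonzero interaction coefficient indicates that the choice-related modulation of the firing rate changes with the stimulus level, which is precisely the kind of stimulus dependence that a single stimulus predictor plus a single choice predictor cannot capture.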
Testing the presence of a threshold-induced CP stimulus dependence in experimental data
We start by describing how to analyze within-cell $\mathrm{CP}({p}_{\text{CR}})$ profiles to test the existence of the threshold-induced modulation. The theoretically derived properties of $h({p}_{\text{CR}})$ suggest several empirical signatures that will be reflected in the within-cell $\mathrm{CP}({p}_{\text{CR}})$ profiles. First, because $h({p}_{\text{CR}})$ introduces a multiplicative modulation of the choice correlation, for informative stimuli it leads to an increase of the CP for cells with positive choice correlation ($\mathrm{CP}>0.5$) and to a decrease for cells with negative choice correlation ($\mathrm{CP}<0.5$). Second, because $h({p}_{\text{CR}})$ is multiplicative, the absolute magnitude of the modulation will be higher for cells with stronger choice correlation, that is, CPs most different from 0.5. Third, the effect of $h({p}_{\text{CR}})$ is strongest when one choice dominates, and hence most noticeable for highly informative stimuli.
These properties of $h({p}_{\text{CR}})$ indicate that, to detect this modulation, it is necessary to examine within-cell $\mathrm{CP}({p}_{\text{CR}})$ profiles isolated from across-cells heterogeneity in the magnitude of the CP values. Ideally, we would like to calculate a $\mathrm{CP}({p}_{\text{CR}})$ profile for each cell and analyze the shape of these single-cell profiles. However, given the available number of trials, estimates of $\mathrm{CP}({p}_{\text{CR}})$ profiles for single cells are expected to be noisy. The estimation error of the CP is higher when ${p}_{\text{CR}}$ is close to 0 or 1, the same ${p}_{\text{CR}}$ values for which the $h({p}_{\text{CR}})$ modulation would be most noticeable. The standard error of $\widehat{\mathrm{CP}}$ can be approximated as $\mathrm{SEM}(\widehat{\mathrm{CP}})\approx 1/\sqrt{12K{p}_{\text{CR}}(1-{p}_{\text{CR}})}$ (Bamber, 1975; Hanley and McNeil, 1982, see Methods), where $K$ is the number of trials. In the Britten et al. data set the number of trials varies for different stimulus levels, and is most frequently $K=30$ for highly informative stimuli. In that case, for ${p}_{\text{CR}}=0.9$, only three trials for choice $D=-1$ are expected, and $\mathrm{SEM}(\widehat{\mathrm{CP}})\approx 0.18$. As can be seen from Figure 2A, this error surpasses the order of magnitude of the CP modulations expected from $h({p}_{\text{CR}})$. This means that we need to combine CP estimates of adjacent ${p}_{\text{CR}}$ values, and/or combine estimated $\mathrm{CP}({p}_{\text{CR}})$ profiles across neurons, to reduce the standard error (see Appendix 1 for a detailed analysis of the statistical power for the detection of $h({p}_{\text{CR}})$).
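The error approximation quoted here is easy to evaluate (a minimal sketch using the stated formula and the trial numbers discussed in the text):

```python
import math

def sem_cp(K, p_cr):
    """Approximate standard error of a CP estimate from K trials with
    choice rate p_cr (Bamber, 1975; Hanley and McNeil, 1982)."""
    return 1.0 / math.sqrt(12.0 * K * p_cr * (1.0 - p_cr))

# With K = 30 trials and p_CR = 0.9, only ~3 trials fall on the rare choice,
# and the estimation error is large relative to the expected h(p_CR) modulation:
print(f"SEM at K=30, p_CR=0.9: {sem_cp(30, 0.9):.2f}")   # ~0.18
print(f"SEM at K=30, p_CR=0.5: {sem_cp(30, 0.5):.2f}")
```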
When averaging CPs across neurons, two considerations are important. First, cells that for ${p}_{\text{CR}}=0.5$ have a CP higher or lower than 0.5 should be separated, given that the sign of the CC leads to an inversion of the profile resulting from $h({p}_{\text{CR}})$ (Equation 7). If not separated, the $h({p}_{\text{CR}})$ dependence would average out, or the average $\mathrm{CP}({p}_{\text{CR}})$ profile would reflect the proportion of cells with CPs higher or lower than 0.5 in the data set. Second, the average should correspond to an average across cells of within-cell $\mathrm{CP}({p}_{\text{CR}})$ profiles, and hence it should only include cells for which a full $\mathrm{CP}({p}_{\text{CR}})$ profile can be calculated. This is important because for each cell $i$ the $h({p}_{\text{CR}})$ modulation is relative to the value of ${\mathrm{CP}}_{i}({p}_{\text{CR}}=0.5)$. If a different subset of cells were included in the average of the CP at each ${p}_{\text{CR}}$ value, the resulting shape across ${p}_{\text{CR}}$ values of the averaged CPs would not be an average of within-cell $\mathrm{CP}({p}_{\text{CR}})$ profiles; instead, it would reflect the heterogeneity in the magnitude of the CP values across the subsets of cells averaged at each ${p}_{\text{CR}}$ value. In the single-cell recordings from Britten et al., the range of stimulus levels used varies across neurons, and for a substantial part of the cells a full $\mathrm{CP}({p}_{\text{CR}})$ profile cannot be constructed. Following the second consideration, those cells were excluded from the analysis to prevent them from contributing to the average only at certain ${p}_{\text{CR}}$ values.
We derived the following refined procedure to analyze $\mathrm{CP}({p}_{\text{CR}})$ profiles. As a first step, we constructed a $\mathrm{CP}({p}_{\text{CR}})$ profile for each cell. First, for each cell and each stimulus coherence level we calculated a CP estimate if at least four trials were available for each decision. For the experimental data set, CPs are always estimated from their definition (Equation 1), and we only use the theoretical expression of $h({p}_{\text{CR}})$ to fit the modulation of the experimentally estimated $\mathrm{CP}({p}_{\text{CR}})$ profiles. Second, as a first way to improve the CP estimates, we binned ${p}_{\text{CR}}$ values into five bins and assigned stimulus coherence levels to the bins according to the psychometric function that maps stimulus levels to ${p}_{\text{CR}}$, with the central bin containing the trials from the zero-signal stimulus. A single CP value per bin for each cell was then obtained as a weighted average of the CPs from the stimulus levels assigned to each bin. The weights were calculated as inversely proportional to the standard error of the estimates, giving more weight to the most reliable CPs (see Methods). The results that we present hereafter are all robust to the selection of the minimum number of trials and the binning intervals. Unless otherwise stated, in all following analyses we included all the cells ($N=107$) for which we had data to compute CPs in all five bins, thus allowing us to estimate a full within-cell $\mathrm{CP}({p}_{\text{CR}})$ profile. As a second step, we averaged the within-cell $\mathrm{CP}({p}_{\text{CR}})$ profiles across cells, taking into account the two considerations above. As before, averages were weighted by inverse estimation errors.
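The first step of this procedure can be sketched as follows (a simplified illustration, not the authors' code; the CP is estimated nonparametrically from its definition via the rank-sum/ROC statistic, and bin averages use inverse-SEM weights as described):

```python
import math
from collections import defaultdict

def estimate_cp(counts_choice1, counts_choice2):
    """Nonparametric CP: probability that a response from choice-1 trials
    exceeds one from choice-2 trials (ties count 1/2), i.e. the ROC area."""
    n1, n2 = len(counts_choice1), len(counts_choice2)
    wins = sum((a > b) + 0.5 * (a == b)
               for a in counts_choice1 for b in counts_choice2)
    return wins / (n1 * n2)

def cp_profile(trials, bin_of_level, min_trials=4):
    """Build a binned CP(p_CR) profile for one cell.

    trials: dict mapping stimulus level -> (counts for D=1, counts for D=-1)
    bin_of_level: dict mapping each stimulus level to a p_CR bin index
    """
    per_bin = defaultdict(list)
    for level, (r1, r2) in trials.items():
        if len(r1) < min_trials or len(r2) < min_trials:
            continue  # skip poorly sampled stimulus levels
        K = len(r1) + len(r2)
        p_cr = len(r1) / K
        sem = 1.0 / math.sqrt(12.0 * K * p_cr * (1.0 - p_cr))
        per_bin[bin_of_level[level]].append((estimate_cp(r1, r2), sem))
    # One CP per bin: weighted average, weights inversely proportional to SEM
    return {b: sum(cp / sem for cp, sem in v) / sum(1.0 / sem for cp, sem in v)
            for b, v in per_bin.items()}
```

Averaging the resulting per-cell profiles across cells (again with inverse-error weights, and splitting cells by $\mathrm{CP} \gtrless 0.5$) then yields the group profiles analyzed next.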
Figure 3A shows the averaged $\mathrm{CP}({p}_{\text{CR}})$ profiles. To assess the statistical significance of the CP dependence on ${p}_{\text{CR}}$, we developed a surrogates method to test whether a pattern consistent with the predicted CP increase for informative stimuli could appear under the null hypothesis that the CP has a constant value independent of ${p}_{\text{CR}}$ (see Methods). For the cells with average CP higher than 0.5, we found that the modulation of the CP was significant ($p=0.0006$), with higher CPs obtained for ${p}_{\text{CR}}$ close to 0 or 1, in agreement with the model. For cells with average CP lower than 0.5, the modulation was not significant ($p=0.26$). While an actual absence of modulation would imply that the choice-related signals in these neurons are not mediated by a continuous intermediate decision variable but may be due, for example, to categorical feedback, we point out the lower power of this statistical test, both because fewer neurons are in the $\mathrm{CP}<0.5$ group and because the expected effect size is lower. First, there were 74 cells with CP higher than 0.5 but only 33 with CP lower than 0.5, meaning that the estimation error is larger for the average $\mathrm{CP}({p}_{\text{CR}})$ profile of the cells with $\mathrm{CP}<0.5$. Second, as the modulation predicted by $h({p}_{\text{CR}})$ is multiplicative, its impact is expected to be smaller when the magnitude of $\mathrm{CP}-0.5$ is smaller. Figure 3A shows that CP values are on average closer to 0.5 for the cells with $\mathrm{CP}<0.5$, in agreement with Figure 5 of Britten et al., 1996. This means that fewer cells classified in the group with $\mathrm{CP}<0.5$ have choice-related responses.
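A minimal version of such a surrogates test can be sketched as follows. The modulation statistic and the shuffling scheme here (permuting bin labels independently within each cell, which preserves each cell's CP magnitudes while destroying any systematic ${p}_{\text{CR}}$ dependence) are illustrative assumptions; the exact surrogate construction is given in the Methods.

```python
import numpy as np

def modulation_stat(profile):
    """Modulation statistic: mean CP at the extreme p_CR bins minus
    the CP at the central (uninformative-stimulus) bin."""
    p = np.asarray(profile, dtype=float)
    return 0.5 * (p[0] + p[-1]) - p[len(p) // 2]

def surrogate_pvalue(profiles, n_surr=10000, rng=None):
    """Permutation test of the null hypothesis that CP(p_CR) is flat.
    profiles: (n_cells, n_bins) array of within-cell CP(p_CR) values.
    Bin labels are shuffled independently within each cell (an assumed
    scheme, not necessarily the paper's exact one)."""
    rng = np.random.default_rng(rng)
    profiles = np.asarray(profiles, dtype=float)
    observed = modulation_stat(profiles.mean(axis=0))
    count = 0
    for _ in range(n_surr):
        shuffled = np.array([rng.permutation(row) for row in profiles])
        if modulation_stat(shuffled.mean(axis=0)) >= observed:
            count += 1
    return (count + 1) / (n_surr + 1)   # add-one correction
```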
Therefore, the fact that we cannot validate the prediction of an inverted symmetric $h({p}_{\text{CR}})$ modulation for the cells with $\mathrm{CP}<0.5$ with respect to the cells with $\mathrm{CP}>0.5$ is not strong evidence against the existence of a threshold-induced CP stimulus dependence. We further confirmed the robustness of the results in a wider set of cells. For this purpose, we repeated the analysis forming subsets that separately included cells with a computable CP for the three bins with ${p}_{\text{CR}}$ lower than or equal to 0.5, and for the three bins with ${p}_{\text{CR}}$ higher than or equal to 0.5. Also in this case the observed $\mathrm{CP}({p}_{\text{CR}})$ pattern was significant ($p=0.0013$) for cells with average CP higher than 0.5 (Figure 3B, $N=171$), and non-significant for cells with CP lower than 0.5 ($p=0.20$).
Interestingly, the identified significant $\mathrm{CP}({p}_{\text{CR}})$ dependence for the cells with $\mathrm{CP}>0.5$ goes beyond the symmetric threshold-induced modulation predicted by $h({p}_{\text{CR}})$, both in magnitude and shape (Figure 2A), since the increase is bigger for ${p}_{\text{CR}}$ values close to 1 than to 0. This implies that the choice correlation for each neuron, ${\mathrm{CC}}_{i}({p}_{\text{CR}})$, must systematically change with ${p}_{\text{CR}}$ as well, contributing to the overall CP stimulus dependency observed. In particular, the observed average $\mathrm{CP}({p}_{\text{CR}})$ profile indicates that the CP increase appears to be higher for ${p}_{\text{CR}}>0.5$. This asymmetry is consistent with results reported in Britten et al., 1996, who found a significant but modest effect of coherence direction on the CP (see their Figure 3). By experimental design, the direction of the dots corresponding to choice $D=1$ was tuned for each cell separately to coincide with its most responsive direction. This asymmetry therefore indicates that CPs tend to increase more when the stimulus provides evidence for the direction eliciting a higher response. However, Britten et al., 1996 found no significant relation between the global magnitude of the firing rate and the CP (see their Figure 3), and we confirmed this lack of relation specifically for the subset of $N=107$ cells (no significant correlation between average rate and average CP values, $p=0.33$). This eliminates the possibility that higher CPs for ${p}_{\text{CR}}>0.5$ are due only to higher responses, and suggests a richer underlying structure of $\mathrm{CP}({p}_{\text{CR}})$ patterns, which we will investigate next using cluster analysis to identify the predominant patterns shared by the within-cell $\mathrm{CP}({p}_{\text{CR}})$ profiles.
Characterizing the experimental patterns of CP stimulus dependencies with cluster analysis
We carried out unsupervised $k$-means clustering (Bishop, 2006) to examine the patterns of $\mathrm{CP}({p}_{\text{CR}})$ without a priori assumptions about a modulation $h({p}_{\text{CR}})$ associated with the threshold effect. Clustering was performed on $\mathrm{CP}({p}_{\text{CR}})-0.5$, with each cell represented as a vector in a five-dimensional space, where five is the number of ${p}_{\text{CR}}$ bins used to summarize the data as described above. To consider both the shape and sign of the modulation, distances between neurons were calculated with the cosine distance between their $\mathrm{CP}({p}_{\text{CR}})$ profiles (one minus the cosine of the angle between the two vectors). Clustering was performed for a range of specified numbers of clusters. Specifying the existence of two clusters, we naturally recovered the distinction between cells with CP higher or lower than 0.5 (Figure 4A). The statistical significance of any ${p}_{\text{CR}}$ modulation was again assessed by constructing surrogate $\mathrm{CP}({p}_{\text{CR}})$ profiles and repeating the clustering analysis on those surrogates. As before, a significant dependence of the CP on ${p}_{\text{CR}}$ was found only for the cluster associated with CP higher than 0.5 ($p=0.0007$ for $\mathrm{CP}>0.5$ and $p=0.21$ for $\mathrm{CP}<0.5$).
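The clustering step can be sketched as a small $k$-means under the cosine distance (a minimal sketch with random restarts and an empty-cluster guard, not the code used for the analysis). Normalizing the profile vectors to unit length lets standard centroid updates operate on the unit sphere:

```python
import numpy as np

def cosine_kmeans(X, k, n_restarts=10, n_iter=100, rng=None):
    """k-means on CP(p_CR) - 0.5 profile vectors under the cosine
    distance (one minus the cosine of the angle between two vectors)."""
    rng = np.random.default_rng(rng)
    Xn = np.asarray(X, dtype=float)
    Xn = Xn / np.linalg.norm(Xn, axis=1, keepdims=True)  # unit sphere
    best_cost, best = np.inf, None
    for _ in range(n_restarts):
        centers = Xn[rng.choice(len(Xn), size=k, replace=False)]
        for _ in range(n_iter):
            sims = Xn @ centers.T          # cosine similarities
            labels = sims.argmax(axis=1)   # nearest center per cell
            new = np.array([Xn[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
            new /= np.linalg.norm(new, axis=1, keepdims=True)
            if np.allclose(new, centers):
                break
            centers = new
        cost = (1.0 - sims[np.arange(len(Xn)), labels]).sum()
        if cost < best_cost:
            best_cost, best = cost, (labels, centers)
    return best
```

With two clusters on profiles of opposite sign, the assignment recovers the split between cells with CP above and below 0.5.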
As mentioned above, the divergence of the average $\mathrm{CP}({p}_{\text{CR}})$ profile from $h({p}_{\text{CR}})$ for cells with $\mathrm{CP}>0.5$ suggests that cell-specific modulations are introduced through ${\mathrm{CC}}_{i}({p}_{\text{CR}})$. While the variability of individual ${\mathrm{CP}}_{i}({p}_{\text{CR}})$ profiles (Figure 3C) is expected to substantially reflect the high estimation errors of $\widehat{\mathrm{CP}}$ for single cells (Figure 3D), the presence of subclusters can identify $\mathrm{CP}({p}_{\text{CR}})$ patterns common across cells.
We proceed to examine subclusters within the $\mathrm{CP}>0.5$ cluster with a significant $\mathrm{CP}({p}_{\text{CR}})$ profile, excluding from our analysis cells within the $\mathrm{CP}<0.5$ cluster (analogous results were found when increasing the number of clusters in a non-hierarchical way, without a priori excluding these cells, see Appendix 3—figure 1A). Average $\mathrm{CP}({p}_{\text{CR}})$ profiles obtained when inferring two subclusters of cells with $\mathrm{CP}>0.5$ are shown in Figure 4B. For both subclusters the $\mathrm{CP}({p}_{\text{CR}})$ dependence is significant ($p=0.0008$ for cluster 2 and $p=0.0026$ for cluster 3, respectively, in Figure 4B). The larger cluster has a more symmetric shape of dependence on ${p}_{\text{CR}}$, with an increase of CP in both directions when the stimulus is informative, consistent with the prediction of a threshold-induced CP stimulus dependence $h({p}_{\text{CR}})$. For the smaller cluster the dependence is asymmetric, with a CP increase when the stimulus direction is consistent with the preferred direction of the cells and a decrease in the opposite direction. We verified that no significant difference exists between the firing rates of the cells in the two subclusters (Wilcoxon rank-sum test, $p=0.23$). The monotonic shape of the second subcluster mirrors the dependency produced by response gain fluctuations as predicted by the gain model described above. This suggests that the neurons in this subcluster differ from the neurons in the other subcluster by a substantially larger gain-induced variability, a testable prediction for future experiments that we discuss further below.
Introducing a second cluster allows for representing each neuron’s $\mathrm{CP}({p}_{\text{CR}})$ dependency in the two-dimensional space (Figure 4C) spanned by the mean profiles of the three clusters. The horizontal axis corresponds to the separation between the two initial clusters, and is closely aligned with the departure of the average CP from 0.5. The vertical axis is defined by the vectors corresponding to the centers of the two subclusters and hence is determined separately for the cells with average CP higher and lower than 0.5 (see Methods for details, and Appendix 3—figure 1A). The vertical axis is associated with the degree to which the $\mathrm{CP}({p}_{\text{CR}})$ dependence is symmetric or asymmetric with respect to ${p}_{\text{CR}}=0.5$. Cells for which the CP increases consistently with their preferred direction of motion coherence lie on the upper half-plane. To further support this interpretation of the axes, we repeated the clustering procedure replacing the non-parametric $k$-means procedure with a parametric procedure that defines the subclusters with a symmetric and an asymmetric template, respectively. The data is distributed approximately equally in both spaces (Figure 4C–D).
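The parametric variant can be sketched by scoring each centered profile against a symmetric and an asymmetric template; the specific template vectors below are illustrative assumptions, not the templates used in the analysis:

```python
import numpy as np

# Illustrative templates over five p_CR bins (assumed shapes):
# a symmetric U-shape versus a monotonic, asymmetric ramp.
SYMMETRIC = np.array([1.0, 0.3, 0.0, 0.3, 1.0])
ASYMMETRIC = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

def assign_template(profile):
    """Assign a CP(p_CR) - 0.5 profile to the template with the
    higher cosine similarity; returns 'symmetric' or 'asymmetric'."""
    v = np.asarray(profile, dtype=float)

    def cos(u, w):
        return u @ w / (np.linalg.norm(u) * np.linalg.norm(w))

    sims = {"symmetric": cos(v, SYMMETRIC),
            "asymmetric": cos(v, ASYMMETRIC)}
    return max(sims, key=sims.get)
```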
Similar results were also obtained when increasing the number of clusters non-hierarchically. Introducing a third cluster for all cells leaves almost unaltered the cluster of cells with CP lower than 0.5 (Appendix 3—figure 1B). The cluster of cells with CP higher than 0.5 splits into two subclusters analogous to the ones found from cells with CP higher than 0.5 alone. The distinction between cells with more symmetric and asymmetric $\mathrm{CP}({p}_{\text{CR}})$ dependencies is robust to the selection of a larger number of clusters, that is, clusters with these types of dependencies remain large when allowing for the discrimination of more patterns (Appendix 3—figure 1C). However, we do not mean to claim that the variety of $\mathrm{CP}({p}_{\text{CR}})$ profiles across cells can be reduced to three separable clusters. As reflected in the distributions in Figure 4C–D, the clusters are not neatly separable. Indeed, a richer variety of profiles would be expected if the properties of $\mathrm{CP}({p}_{\text{CR}})$ profiles across cells were associated with their tuning properties and the structure of feedback projections, as we further argue in the Discussion. The predominance of a symmetric and an asymmetric pattern would only reflect which $\mathrm{CP}({p}_{\text{CR}})$ shapes are predominantly shared across cells.
This clustering analysis confirms the presence of shared patterns of CP stimulus dependence across cells, whose shape is compatible with the analytical predictions from the threshold- and gain-related dependencies. The symmetric component of CP stimulus dependence is congruent with $h({p}_{\text{CR}})$ (Equation 6), albeit with a larger magnitude than predicted (Figures 2A and 3A, and additional analysis of the statistical power in Appendix 1). This stronger modulation suggests an additional symmetric contribution of the choice correlation $\mathrm{CC}({p}_{\text{CR}})$ and/or a dynamic feedback reinforcing the stronger modulation for highly informative stimuli. However, while the cluster analysis separates the predominant $\mathrm{CP}({p}_{\text{CR}})$ patterns, the Britten et al. data lacks the statistical power to further distinguish between $h({p}_{\text{CR}})$ and symmetric $\mathrm{CC}({p}_{\text{CR}})$ contributions with a similar shape.
Gain-induced CP stimulus dependencies in the MT responses
Three key features of the $\mathrm{CP}({p}_{\text{CR}})$ dependencies observed for the MT cells are qualitatively explained by introducing shared gain fluctuations in the decision threshold model described above (Figure 4E) – the first two manifesting themselves at the population (cluster) level and the third at the level of individual neurons. First, shared gain variability predicts the existence of the asymmetric CP stimulus dependence seen in cluster 3 (Equation 11 and Figure 2C). Second, the average CP of the asymmetric cluster 3 is lower than the average CP of the symmetric cluster 2 (compare red and green profiles in Figure 4B and E). And third, if gain variability is indeed a driving factor for the observed asymmetry in cluster 3, then within this cluster, neurons with a higher amount of gain variability should also have a steeper $\mathrm{CP}({p}_{\text{CR}})$ profile, a prediction we could confirm as described in the next paragraph.
To test this prediction, for each neuron in cluster 3 we first computed the degree of asymmetry of its $\mathrm{CP}({p}_{\text{CR}})$ profile directly from the data, by fitting a quadratic function to $\mathrm{CP}({p}_{\text{CR}})$ (Methods). Next, and independently of this, we used the method of Goris et al., 2014 to estimate the amount of gain variability for each neuron. Knowing each neuron’s gain variability allowed us to predict each neuron’s degree of asymmetry (slope of $\mathrm{CP}({p}_{\text{CR}})$ as determined by ${\beta}_{{p}_{\text{CR}}}$, using Equation 11). We indeed found a significant correlation between the predicted and the observed slopes ($r=0.58$, $p=0.0018$), supporting the conclusion that shared gain variability underlies the observed asymmetric shape of $\mathrm{CP}({p}_{\text{CR}})$ for the neurons in cluster 3. For cluster 2, in which the symmetric pattern is predominant, no analogous correlation was found ($r=0.15$, $p=0.35$). It is important to note that the asymmetry predicted by the gain variability overestimates the actually observed one by an order of magnitude (average observed slope of $0.002\pm 0.0003$ compared to an average predicted slope of $0.034\pm 0.008$). However, this is not surprising given our simplifying assumption of a single global gain factor across the whole population, whereas in practice the gain fluctuations are likely inhomogeneous across the population. Furthermore, the actual readout used by the brain may deviate from the optimal one, further reducing the expected match between predictions and observations. A more precise modeling of CP–stimulus dependencies would require measurements of the cross-neuronal correlation structure, which is not available from the single-unit recordings of Britten et al., 1996 but will be for future population recordings.
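The quadratic-fit step can be sketched as follows (a minimal sketch; centering ${p}_{\text{CR}}$ at 0.5 so that the linear coefficient isolates the odd, asymmetric component of the profile is our reading of the procedure described in the Methods):

```python
import numpy as np

def asymmetry_slope(pcr_bins, cp_profile):
    """Fit CP(p_CR) with a quadratic in (p_CR - 0.5). The linear
    coefficient captures the odd (asymmetric) part of the profile,
    while the quadratic coefficient captures the symmetric U-shape."""
    x = np.asarray(pcr_bins, dtype=float) - 0.5   # center at p_CR = 0.5
    c2, c1, c0 = np.polyfit(x, np.asarray(cp_profile, dtype=float), 2)
    return c1
```

A purely symmetric profile yields a slope near zero, while a monotonic profile yields its linear trend; the per-neuron slopes can then be correlated with the slopes predicted from the estimated gain variability.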
Modeling stimulus-dependent choice-related signals with GLMs
The implications of a stimulus-dependent relationship between the behavioral choice and sensory neural responses are not restricted to measuring them as CPs, for which activity-choice covariations are quantified without incorporating other explanatory factors of neural responses. To further substantiate the existence of this stimulus-dependent relationship in MT data, and to understand how our model predictions could help to refine other analytical approaches, we examined how representing that relationship can improve statistical models of neural responses. In particular, we study how the stimulus-dependent choice-related signals that we discovered may inform the refinement of Generalized Linear Models (GLMs) of neural responses (Truccolo et al., 2005; Pillow et al., 2008). In the last few years, GLMs have been used for modelling choice dependencies together with the dependence on other explanatory variables, such as the external stimulus, response memory, or interactions across neurons (Park et al., 2014; Runyan et al., 2017). Typically, in a GLM of firing rates each explanatory variable contributes a multiplicative factor that modulates the mean of a Poisson process. In their classical implementation, the choice modulates the firing rate as a binary gain factor, with a different gain for each of the two choices (Park et al., 2014; Runyan et al., 2017; Pinto et al., 2019). The multiplicative nature of this factor already introduces some covariation between the impact of the choice on the rate and that of the other explanatory variables. However, using a single regression coefficient to model the effect of the choice on the neural responses may be insufficient if choice-related signals are stimulus dependent, as suggested by our theoretical and experimental analysis.
We developed a GLM (see Methods) that can model stimulus dependencies of choice signals (or, in other words, stimulus-choice interactions) by including multiple choice-related predictors that allow for a different strength of dependence of the firing rate on the choice for different subsets of stimulus levels (via the choice rate, ${p}_{\text{CR}}$). We fitted this model, which we call the stimulus-dependent-choice GLM, to MT data and compared its cross-validated performance against two traditional GLMs. In the first type, called the stimulus-only GLM, the rate in each trial is predicted based only on the external stimulus level. In the second type, which we call the stimulus-independent-choice GLM and which corresponds to the traditional way to include choice signals in a GLM (Park et al., 2014; Runyan et al., 2017; Scott et al., 2017; Pinto et al., 2019; Minderer et al., 2019), the effect of choice is additionally included, but using only a single, stimulus-independent choice predictor.
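The difference between the two choice models amounts to the design matrix of the GLM. The sketch below (our own illustrative implementation, not the fitting code used for the results) builds either a single choice column or one choice column per ${p}_{\text{CR}}$ bin, and fits the Poisson GLM by maximum likelihood with a log link:

```python
import numpy as np
from scipy.optimize import minimize

def design_matrix(stim_level, choice, pcr_bin, n_bins, stimulus_dependent):
    """Build a Poisson-GLM design matrix.
    stimulus-independent-choice GLM: [1, stim, choice]
    stimulus-dependent-choice GLM:   [1, stim, choice * 1(bin = b), b = 0..n_bins-1]"""
    n = len(stim_level)
    cols = [np.ones(n), np.asarray(stim_level, dtype=float)]
    if stimulus_dependent:
        for b in range(n_bins):
            cols.append(np.asarray(choice, dtype=float)
                        * (np.asarray(pcr_bin) == b))
    else:
        cols.append(np.asarray(choice, dtype=float))
    return np.column_stack(cols)

def fit_poisson_glm(X, y):
    """Maximum-likelihood Poisson regression with a log link
    (minimal sketch; drops the constant log(y!) term)."""
    def nll(beta):
        eta = X @ beta
        return np.sum(np.exp(eta) - y * eta)
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x
```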
To compare the models, we separated the trials recorded from each MT cell (Britten et al., 1996) into training and testing sets, and calculated the average cross-validated likelihood for each type of model on the held-out testing set. To quantify the increase in predictability when adding the choice as a predictor, we defined the relative increase in likelihood (RIL) as the increase in likelihood from further adding the choice as a predictor, relative to the increase from previously adding the stimulus as a predictor. RIL measures the relative influence of the choice and the sensory input on the neural responses. Figure 5A compares the cross-validated RIL values obtained on MT neural data when fitting either the stimulus-independent-choice or the stimulus-dependent-choice GLM. We found that RIL values were mostly higher when allowing for multiple choice parameters, both in terms of average RIL values (Figure 5C) and in terms of the proportion of cells in each cluster for which the RIL was higher than a certain threshold, here selected to be 10% (Figure 5B).
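The cross-validation and RIL computation can be sketched as follows; the exact RIL formula below is our reading of the definition given in the text (gain from adding choice on top of the stimulus, relative to the gain from adding the stimulus to a baseline mean-rate model):

```python
import numpy as np

def poisson_loglik(y, mu):
    """Poisson log-likelihood up to the -log(y!) constant, which
    cancels when comparing models on the same held-out trials."""
    y, mu = np.asarray(y, dtype=float), np.asarray(mu, dtype=float)
    return np.sum(y * np.log(mu) - mu)

def cross_validated_ll(fit, predict, X, y, n_folds=5):
    """Average held-out log-likelihood over contiguous folds."""
    n = len(y)
    lls = []
    for test in np.array_split(np.arange(n), n_folds):
        train = np.setdiff1d(np.arange(n), test)
        params = fit(X[train], y[train])
        lls.append(poisson_loglik(y[test], predict(X[test], params)))
    return np.mean(lls)

def ril(ll_baseline, ll_stim, ll_stim_choice):
    """Relative increase in likelihood (assumed form): the gain from
    adding the choice on top of the stimulus, relative to the gain
    from adding the stimulus to a baseline mean-rate model."""
    return (ll_stim_choice - ll_stim) / (ll_stim - ll_baseline)
```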
GLMs that include stimulus-choice interaction terms can be used not only to better describe the firing rate of neural responses, but also to more precisely distinguish neurons or areas by their choice signals. To illustrate this point, we show how adding the interaction terms may change the relative comparison of cells by their RIL values. Consider the three neurons with the highest RIL for the stimulus-dependent-choice GLM (Figure 5A, with corresponding $\mathrm{CP}({p}_{\text{CR}})$ profiles shown in Figure 5D). The ranking of cells 1 and 2 by RIL flips with respect to the stimulus-independent-choice GLM because of the higher $\mathrm{CP}({p}_{\text{CR}})$ modulation of cell 2. Similarly, while the RIL values with multiple choice parameters for cells 1 and 3 are close, the RIL of cell 3 is substantially lower with a single choice parameter, indicating that its pattern of stimulus dependence is less well captured by a single parameter. The degree to which a model with interaction terms improves predictability will depend on the shape of the $\mathrm{CP}({p}_{\text{CR}})$ patterns, which themselves are expected to vary across areas or across cells with different tuning properties. For example, we see in Figure 5C that for the cluster with an asymmetric $\mathrm{CP}({p}_{\text{CR}})$ profile (cluster 3), the average RIL with only one choice parameter suggests that these cells are not choice driven. The reason is that for the cells in this cluster the sign of the choice influence on the rate can be stimulus dependent, which is impossible to model with a single choice parameter. Furthermore, the profile of the GLM choice parameters across stimulus levels provides a characterization of stimulus-dependent choice-related signals analogous to the $\mathrm{CP}({p}_{\text{CR}})$ profile, in this case within the GLM framework, hence allowing efficient inference including principled regularization and the ability to account for a range of factors beyond choices and stimuli.
Overall, we expect that accounting for stimulus-choice interactions in GLMs will allow for a more accurate assessment of the relative importance of stimulus and choice on neural responses.
Discussion
Our work makes several contributions to the understanding of how choice and stimulus signals in neural activity are coupled. First, we derived a general analytical model of perceptual decision-making predicting how the relationship between sensory responses and choice should depend on stimulus strength, regardless of whether this relationship is due to feedforward or feedback choice-related signals. The key model assumption is that the link between sensory responses and choices is mediated by a continuous decision variable and a thresholding mechanism. Second, we designed new, more powerful methods to measure within-cell dependencies of choice probabilities (CPs) on stimulus strength. Third, we studied CP stimulus dependencies in the classic dataset of Britten et al., 1996. Interestingly, we found a rich and previously unknown structure in how CPs in MT neurons depend on stimulus strength. In addition to a symmetric dependence predicted by the thresholding operation, we found an asymmetric dependence which we could explain by incorporating previously proposed gain fluctuations (Goris et al., 2014) in our model, thereby introducing a stimulus-dependent component in the cross-neuronal covariance. Finally, we showed that generalized linear models (GLMs) that account for stimulus-choice interactions better explain sensory responses in MT and allow for a more accurate characterization of how stimulus-driven and how choice-driven a cell’s response is.
Advances on analytical solutions of choice probabilities
Previous work has demonstrated that analytically solving models of perceptual decision-making can lead to important new insights on the interpretation of the relationship between neural activity and choice in terms of decision-making computations (Bogacz et al., 2006; Gold and Shadlen, 2007; Haefner et al., 2013). In particular, previous analytical work on CPs has shown how experimentally measured CPs relate to the readout weights by which sensory neurons contribute to the internal stimulus decoder in a feedforward model, assuming both choices are equally likely (Haefner et al., 2013; Pitkow et al., 2015). Here, we provided a general analytical solution for CPs in a more general model, with informative stimuli resulting in an unbalanced choice rate, valid for both feedforward and feedback choice signals. We derived the analytical dependency of the CP on the probability of one of the choices (${p}_{\text{CR}}\equiv p(\mathrm{choice}=1)$), which mediates the dependence of the CP on the stimulus strength. Our model is therefore directly applicable to both discrimination and detection tasks, for any stimulus strength that elicits both choices. As we demonstrated, these advances in the analytical solution of the decision-threshold model allowed for detecting and interpreting stimulus dependencies of choice-related signals in neural activity.
Characterization of patterns of choice probability stimulus dependencies from sensory neurons
Characterizing within-cell stimulus dependencies of activity-choice covariations at the population level requires isolating these dependencies from across-cell heterogeneity in the magnitude of the CP values. Our analytical analysis suggests possible reasons why previous attempts failed to find stimulus dependencies of CPs in real neural data. First, the magnitude of the CP dependence on ${p}_{\text{CR}}$ is proportional to the magnitude of choice-related signals (i.e. how different CPs are from 0.5). This implies that neuron-specific dependencies need to be characterized for each cell individually, relative to the CP obtained with the uninformative stimulus. Only neurons for which a full individual CP profile can be estimated should be averaged to determine stimulus dependencies at the population level; otherwise the overall average CP profile of stimulus dependence will be dominated by variability associated with the different subsets of neurons contributing to the CP estimate at each stimulus level. Second, the predicted threshold-induced direction of the CP dependence on ${p}_{\text{CR}}$ is different for neurons with CP larger or smaller than 0.5, that is, neurons more responsive to opposite choices. This opposite modulation can cancel out the magnitude of the overall threshold-induced dependence of the CP on stimulus strength when averaging over all neurons, as done in previous analyses (Britten et al., 1996). Informed by these insights, we characterized the within-cell dependencies of choice-related signals on stimulus strength. The application of our refined methods to the classic neural data from MT neurons during a perceptual decision-making task of Britten et al., 1996 allowed us to find stimulus dependencies of CPs, where previous analyses had not detected a significant effect.
Our understanding of how CP-stimulus dependencies may arise within the decision-making process, and the methods we used to measure these dependencies in existing data, will allow future studies to perform more fine-grained analyses and to interpret choice-related signals more appropriately. Traditional analyses computed a single CP value for each neuron by either concentrating on zero-signal trials (e.g. Dodd et al., 2001; Parker et al., 2002; Krug et al., 2004; Wimmer et al., 2015; Katz et al., 2016; Wasmuht et al., 2019) or calculating grand CPs (Britten et al., 1996) across stimulus levels (e.g. Cai and Padoa-Schioppa, 2014; Verhoef et al., 2015; Latimer et al., 2015; Pitkow et al., 2015; Smolyanskaya et al., 2015; Liu et al., 2016; Bondy et al., 2018). Grand CPs are calculated either directly as a weighted average of the CPs estimated for each stimulus level, or by pooling the responses from trials of all stimulus levels after subtracting an estimate of the stimulus-related component (Kang and Maunsell, 2012). Our theoretical CP analysis shows that the latter procedure also corresponds to a specific type of weighted average (Appendix 2). Using the so-computed individual CP values for each cell, areas or populations were then often ranked in terms of their average CP values per neuron. Areas with higher CP values are then identified as areas key for decision-making (e.g. Nienborg and Cumming, 2006; Cai and Padoa-Schioppa, 2014; Pitkow et al., 2015; Yu et al., 2015).
However, if CPs depend on ${p}_{\text{CR}}$, a single grand CP value clearly cannot summarize this dependence. The use of average single CPs may thus introduce confounds in their comparison and miss important cell-specific information. For example, $\mathrm{CP}({p}_{\text{CR}})$ patterns with different signs for different ${p}_{\text{CR}}$ values will result in lower average CP values. Similarly, the comparison of the grand CP of a cell across tasks may mostly reflect changes in the stimulus levels sampled in each task, leading to a change in how much the $\mathrm{CP}(s)$ associated with each stimulus level contributes to the grand CP. As a result, the change in the grand CP may be interpreted as indicating the existence of task-dependent choice-related signals, even if the $\mathrm{CP}(s)$ profile is invariant. In the same way, if the structure of $\mathrm{CP}({p}_{\text{CR}})$ patterns covaries with the tuning properties, the comparison of the grand CP across cells with different tuning properties may mostly depend on the sampling of stimulus levels. This limitation is not specific to average CP values, and applies to other measures that consider choice-related and stimulus-driven components of the response as separable, such as partial correlations (e.g. Zaidel et al., 2017). Our work instead indicates that the shape of the $\mathrm{CP}({p}_{\text{CR}})$ patterns cannot be summarized by the average, and this shape may be informative about the role of the activity-choice covariations when comparing across cells with different tuning properties, cells from different areas, or across tasks (e.g. Romo and Salinas, 2003; Nienborg and Cumming, 2006; Nienborg et al., 2012; Krug et al., 2016; Sanayei et al., 2018; Shushruth et al., 2018; Jasper et al., 2019; Steinmetz et al., 2019). Our new methods allow quantifying these CP patterns to better characterize the covariations between neural activity and choice across neurons and populations.
A key novelty introduced in our study is the development of a model-inspired methodological procedure for identifying genuine within-cell $\mathrm{CP}({p}_{\text{CR}})$ profiles that would otherwise be masked by across-cell heterogeneity in the magnitude of the CP values. As representative examples of how our procedure may find previously unnoticed patterns of CP dependencies, we discuss the previous analyses in Britten et al., 1996 and in Dodd et al., 2001. Britten et al., 1996 analyzed the dependence of the CP on the stimulus strength at the population level (see their Figure 3). In particular, for each stimulus level they averaged the CP of all cells for which an estimate of the CP was calculated, without separating cells with CP higher or lower than 0.5. Furthermore, in their data set the stimulus levels vary across cells, and hence in their analysis different subsets of cells contribute to the CP average at each stimulus level. Dodd et al., 2001 presented a scatter plot of the CPs for all cells and stimulus levels (see their Figure 6). Although this analysis did not average cells with $\mathrm{CP}>0.5$ and $\mathrm{CP}<0.5$, the scatter plot does not represent the cell identity of each dot. This means that it is not possible to trace the within-cell $\mathrm{CP}(s)$ profiles. As in the case of Britten et al., also in Dodd et al., 2001 the sampled stimulus levels varied across cells, further confounding the within-cell $\mathrm{CP}(s)$ profiles with the heterogeneity of CP magnitudes across cells. As shown by our analysis of the data of Britten et al., 1996, our analytical tools can extract additional findings from these data by removing some potential confounds that may have obscured the presence of within-cell CP patterns.
It is important to note, however, that our model-based results do not imply in any way that these previous papers reached inaccurate conclusions, as these analyses were done for purposes other than discovering the within-cell patterns predicted by our models. In particular, most of the analysis of Dodd et al., 2001 used only CPs calculated from trials with non-informative stimuli, and their main results did not rely on the evaluation of CP stimulus dependencies. Similarly, while Britten et al., 1996 used z-scoring to calculate grand CPs combining all stimulus levels, their analysis did not involve the comparison of grand CPs across areas or types of cells with different tuning properties. As discussed above, it is for this kind of comparison, when the patterns of $\mathrm{CP}({p}_{\text{CR}})$ profiles may themselves vary across the groups of cells compared, that reducing $\mathrm{CP}({p}_{\text{CR}})$ profiles to a single CP value may confound the comparison.
Generalized linear models with stimulus-choice interactions
Our work also has implications for improving generalized linear models (GLMs) of neural activity, which are very widely used to describe neural responses in the presence of many explanatory variables that could predict the neuron’s firing rate, such as the external stimulus, motor variables, autocorrelations or refractory periods, and the interaction with other neurons (Truccolo et al., 2005). While usually the stimulus and the choice are treated as separate explanatory variables (e.g. Park et al., 2014; Runyan et al., 2017; Scott et al., 2017; Pinto et al., 2019; Minderer et al., 2019), we used GLMs including explicit interactions between choice and stimulus to show that, consistent with the finding of non-constant $\mathrm{CP}({p}_{\text{CR}})$ patterns, these models improved the goodness of fit for the responses of MT cells. Importantly, making the choice term depend on the choice rate, ${p}_{\text{CR}}$, affected the quantification of how stimulus-driven or choice-driven different cells are, quantified as the increase in predictive power when further adding the choice as a predictor after the stimulus. This suggests a more fine-grained way to compare the degree of a neuron’s association with the behavioral choice or the stimulus, for example across neuron types or brain areas (Runyan et al., 2017; Pinto et al., 2019; Minderer et al., 2019). Our GLMs with multiple choice parameters associated with subsets of stimulus levels also allow characterizing the patterns in the vector of choice parameters analogously to our analysis of $\mathrm{CP}({p}_{\text{CR}})$ patterns. Furthermore, our approach can be extended straightforwardly to GLMs that model the influence of the choice across the time course of the trials (Park et al., 2014), by making the stimulus-choice interaction terms time-dependent. GLMs with time-dependent stimulus-choice interaction terms can also be useful for experimental settings with multiple sensory cues presented at different times (e.g. Romo and Salinas, 2003; Sanayei et al., 2018) or a continuous time-dependent stimulus (Nienborg and Cumming, 2009), to account for a difference in the interaction of stimuli with the choice depending on the time they are presented. Similarly, the interaction terms may also help to model the influence of choice history on the processing of sensory evidence in subsequent trials (Tsunada et al., 2019; Urai et al., 2019), in which case the interaction terms would be between the stimulus and the choice from the previous trial.
Patterns of stimulus-choice interactions as a signature of mechanisms of perceptual decision-making
Theoretical and experimental evidence suggests that the patterns of stimulus dependence of choice-related signals may be informative about the mechanisms of perceptual decision-making. Activity-choice covariations have been characterized in terms of the structure of cross-neuronal correlations and of feedforward and feedback weights (Shadlen et al., 1996; Cohen and Newsome, 2009b; Nienborg and Cumming, 2010; Haefner et al., 2013; Cumming and Nienborg, 2016). Stimulus dependencies may be inherited from the dependence of cross-neuronal correlations on the stimulus (Kohn and Smith, 2005; Ponce-Alvarez et al., 2013), or from decision-related feedback signals (Bondy et al., 2018). Experimental (Nienborg and Cumming, 2009; Cohen and Maunsell, 2009a; Bondy et al., 2018) and theoretical (Lee and Mumford, 2003; Maunsell and Treue, 2006; Wimmer et al., 2015; Haefner et al., 2016; Ecker et al., 2016) work indicates that top-down modulations of sensory responses play an important role in the perceptual decision-making process. In particular, feedback signals are expected to show cell-specific stimulus dependencies associated with the tuning properties (Lange and Haefner, 2017). Different coding theories attribute different roles to the feedback signals, for example, conveying prediction errors (Rao and Ballard, 1999) or prior information for probabilistic inference (Lee and Mumford, 2003; Fiser et al., 2010; Haefner et al., 2016; Tajima et al., 2016; Bányai and Orbán, 2019; Bányai et al., 2019; Lange and Haefner, 2020). Accordingly, characterizing the stimulus dependencies of activity-choice covariations in connection with the tuning properties of cells is expected to provide insights into the role of feedback signals and may help to discriminate between alternative proposals. Such an analysis would require simultaneous recordings of populations of neurons tiling the space of receptive fields, and the joint characterization of the cross-neuronal correlations and tuning properties.
Although this is beyond the scope of this work, we have shown that the analysis methods we proposed are capable of identifying a nontrivial structure in the stimulus dependencies of choice-related signals. A better understanding of their differences across brain areas, across cells with different tuning properties, or for different types of sensory stimuli, promises further insights into the mechanisms of perceptual decision-making.
While we here analyzed single-cell recordings, our conclusions hold for any type of recordings used to study activity-choice covariations. This spans the range from single units (Britten et al., 1996) and multiunit activity (Sanayei et al., 2018) to measurements resulting from different imaging techniques at different spatial scales, such as intrinsic imaging or fMRI (Choe et al., 2014; Thielscher and Pessoa, 2007; Runyan et al., 2017; Michelson et al., 2017). Given the increasing availability of population recordings, larger numbers of trials due to chronic recordings, and the advent of stimulation techniques to help discriminate the origin of the choice-related signals (Cicmil et al., 2015; Tsunada et al., 2016; Yang et al., 2016; Lakshminarasimhan et al., 2018; Fetsch et al., 2018; Yu and Gu, 2018), we expect our tools to help gain new insights into the mechanisms of perceptual decision-making.
Materials and methods
We here describe the derivations of the analytical CP solutions and our new methods to analyze stimulus dependencies in choice-related responses, and we describe the data set from Britten et al., 1996 in which we test for the existence of stimulus dependencies.
An exact CP solution for the threshold model
We first derive our analytical CP expression valid in the presence of informative stimuli, decision-related feedback, and top-down sources of activity-choice covariation, such as prior bias, trial-to-trial memory, or internal state fluctuations. We follow Haefner et al., 2013 and assume a threshold model of decision making, in which the choice $D$ is triggered by comparing a decision variable $d$ with a threshold θ, so that choice $D=1$ is made if $d>\theta $, and $D=-1$ otherwise. The identification of the binary choices as $D=\pm 1$ is arbitrary and an analogous expression would hold with another mapping of the categorical variable. The choice probability (Britten et al., 1996) of cell $i$ is defined as
and measures the separation between the two choice-specific response distributions $p({r}_{i}\mid D=1)$ and $p({r}_{i}\mid D=-1)$. It quantifies the probability that responses to choice $D=1$ are higher than responses to $D=-1$. If there is no dependence between the choice and the responses, this probability is $\mathrm{CP}=0.5$. To obtain an exact solution of the CP, we assume that the distribution $p({r}_{i},d)$ of the responses ${r}_{i}$ of cell $i$ and the decision variable $d$ can be well approximated by a bivariate Gaussian. Under this assumption, following Haefner et al., 2013 (see their Supplementary Material), the probability of the responses for choice $D=1$ follows the distribution
where a more parsimonious expression is obtained using the z-score ${z}_{i}=({r}_{i}-\langle {r}_{i}\rangle )/{\sigma}_{{r}_{i}}$. This distribution is a skew-normal (Azzalini, 1985), where $\varphi (\cdot ;0,1)$ is the standard normal distribution with zero mean and unit variance, and $\mathrm{\Phi}(\cdot ;0,1)$ is its cumulative function. Furthermore, ${\sigma}_{{r}_{i},d}$ is the covariance of ${r}_{i}$ and $d$, ${\sigma}_{d\mid {r}_{i}}$ is the conditional standard deviation of $d$ given ${r}_{i}$, and the probability of $D=1$ is
$${p}_{\text{CR}}\equiv p(D=1)=p(d>\theta )=\mathrm{\Phi}\left(\frac{\langle d\rangle -\theta}{{\sigma}_{d}};0,1\right),$$
which determines the rate of each choice over trials. The choice $D=-1$ could equally be taken as the choice of reference, resulting in an analogous formulation. Intuitively, ${p}_{\text{CR}}$ increases when the mean of the decision variable $\langle d\rangle $ is higher than the threshold θ, and decreases when its standard deviation ${\sigma}_{d}$ increases. The form of the distribution of Equation 13 can be synthesized in terms of ${p}_{\text{CR}}$ and the correlation coefficient ${\rho}_{{r}_{i}d}$, which Pitkow et al., 2015 named the choice correlation (${\mathrm{CC}}_{i}$). In particular, defining $\alpha \equiv {\rho}_{{r}_{i}d}/\sqrt{1-{\rho}_{{r}_{i}d}^{2}}$ and $c\equiv {\mathrm{\Phi}}^{-1}({p}_{\text{CR}})/\sqrt{1-{\rho}_{{r}_{i}d}^{2}}$,
$$p({z}_{i}\mid D=1)=\frac{1}{{p}_{\text{CR}}}\,\varphi ({z}_{i};0,1)\,\mathrm{\Phi}(\alpha {z}_{i}+c;0,1).$$
The CP is completely determined by $p({z}_{i}\mid D=1)$ and $p({z}_{i}\mid D=-1)$, and these distributions depend only on ${p}_{\text{CR}}$ and the correlation coefficient ${\rho}_{{r}_{i}d}$. Plugging the distribution of Equation 15 into the definition of the CP (Equation 12), an analytical solution is obtained:
where $\mathrm{T}$ is Owen’s T function (Owen, 1956). In Appendix 1, we provide further details of how this expression is derived. For an uninformative stimulus (${p}_{\text{CR}}=0.5$), the function $\mathrm{T}$ reduces to the arctangent and the exact result obtained in Haefner et al., 2013 is recovered. The dependence on ${\rho}_{{r}_{i}d}$ can be understood because under the Gaussian assumption the linear correlation captures all the dependence between the responses and the decision variable $d$. The dependence on ${p}_{\text{CR}}$ reflects the influence of the threshold mechanism, which maps the dependence of ${r}_{i}$ on $d$ into a dependence on choice $D$ by partitioning the space of $d$ into two regions.
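Independently of the analytical solution, the CP defined in Equation 12 can be estimated nonparametrically from the choice-conditioned responses as the area under the ROC curve. A minimal sketch (function name and simulated responses are ours, not from the original analysis code):

```python
import numpy as np

def choice_probability(r_pref, r_null):
    """CP of a cell: probability that a response from the D=1 distribution
    exceeds one from the D=-1 distribution (ROC area; ties count 1/2,
    as in the Mann-Whitney U statistic)."""
    r_pref = np.asarray(r_pref, float)
    r_null = np.asarray(r_null, float)
    greater = (r_pref[:, None] > r_null[None, :]).mean()
    ties = (r_pref[:, None] == r_null[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(0)
# identical response distributions for the two choices -> CP close to 0.5
cp_null = choice_probability(rng.normal(10, 2, 500), rng.normal(10, 2, 500))
# fully separated distributions -> CP = 1
cp_sep = choice_probability([11, 12, 13], [1, 2, 3])
```

The pairwise comparison is quadratic in the number of trials, which is unproblematic for the trial counts typical of these data sets.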
While Equation 16 provides an exact solution of the CP, in the Results section we present and mostly focus on a linear approximation to understand how the stimulus content modulates the choice probability. This approximation is derived (Appendix 1) in the limit of a small ${\rho}_{{r}_{i}d}$, which leads to CPs close to 0.5, as usually measured in sensory areas (Nienborg et al., 2012). However, as we show in the Results and further justify in the Appendix, this approximation is robust for a wide range of ${\rho}_{{r}_{i}d}$ values. The linear approximation relates the choice probability to the Choice Triggered Average (CTA) (Haefner, 2015; Chicharro et al., 2017), defined as the difference of the mean responses for each choice (Equation 3). Given the binary nature of choice $D$, the $\mathrm{CTA}$ is directly proportional to the covariance of the responses and the choice: ${\mathrm{CTA}}_{i}=\mathrm{cov}({r}_{i},D)/[2p(D=1)p(D=-1)]$. [Note: This relation holds for the covariance between any variable $x$ and a binary variable $D$, and independently of the convention adopted for the values of $D$: in general, the factor 2 has to be replaced by $a-b$ for $D=a,b$ instead of $D=1,-1$.] This relation between ${\mathrm{CTA}}_{i}$ and $\mathrm{cov}({r}_{i},D)$, given the factorization $\mathrm{corr}({r}_{i},D)={\mathrm{CC}}_{i}\,\mathrm{corr}(d,D)$ resulting from the mediating decision variable $d$ in the threshold model, allows expressing the ${\mathrm{CTA}}_{i}$ as in Equation 5, connecting ${\mathrm{CTA}}_{i}$ to the choice-triggered average of $d$, ${\mathrm{CTA}}_{d}$. This connection indicates that in the threshold model ${\mathrm{CTA}}_{i}$ is expected to be stimulus dependent, since an informative stimulus $s$ shifts the mean of $d$, thus altering the dichotomization of $d$ produced by the threshold θ. The exact form of ${\mathrm{CTA}}_{d}$ depends on the distribution $p(d)$.
However, since $d$ is determined by a whole population of neurons, its distribution is expected to be well approximated by a Gaussian distribution, even if the distribution of neural responses of any single neuron is not Gaussian. With this Gaussian approximation, the normalized ${\mathrm{CTA}}_{d}$ in Equation 5, namely ${\mathrm{CTA}}_{d}/\sqrt{\mathrm{var}(d)}$, is specified in terms of the probability of choosing choice 1, ${p}_{\text{CR}}\equiv p(D=1)=p(d>\theta )$, by the factor $h({p}_{\text{CR}})$ (Equation 6). In more detail, the CTA is
$${\mathrm{CTA}}_{i}={\mathrm{CC}}_{i}\,{\sigma}_{{r}_{i}}\,h({p}_{\text{CR}}),\qquad h({p}_{\text{CR}})=\frac{\varphi \left({\mathrm{\Phi}}^{-1}({p}_{\text{CR}});0,1\right)}{{p}_{\text{CR}}(1-{p}_{\text{CR}})}.$$
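For a Gaussian decision variable, the factor $h({p}_{\text{CR}})$ can be checked numerically. The sketch below is our own implementation, assuming $h({p}_{\text{CR}})=\varphi ({\mathrm{\Phi}}^{-1}({p}_{\text{CR}}))/[{p}_{\text{CR}}(1-{p}_{\text{CR}})]$, which follows from the means of the two truncated parts of a Gaussian; it compares the closed form against a Monte Carlo estimate obtained by thresholding samples of $d$:

```python
import numpy as np
from scipy.stats import norm

def h(p_cr):
    """Normalized CTA of a Gaussian decision variable, CTA_d / sigma_d,
    as a function of the choice rate p_CR = p(d > theta)."""
    return norm.pdf(norm.ppf(p_cr)) / (p_cr * (1 - p_cr))

# Monte Carlo check: dichotomize a standard normal d at the threshold
# theta that yields p_CR = 0.7
rng = np.random.default_rng(2)
d = rng.standard_normal(1_000_000)
theta = norm.ppf(1 - 0.7)
cta_d = d[d > theta].mean() - d[d <= theta].mean()
```

Note that $h$ is symmetric around ${p}_{\text{CR}}=0.5$ and nearly flat in its vicinity, consistent with the multiplicative symmetric modulation discussed in the Results.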
Neuronal data
To study stimulus dependencies in the relationship between the responses of sensory neurons and the behavioral choice, we analyzed the data from Britten et al., 1996, publicly available in the Neural Signal Archive (http://www.neuralsignal.org). In particular, we analyzed data from file nsa2004.1, which contains single-unit responses of macaque MT cells during a random dot discrimination task. This file contains 213 cells from three monkeys. We also used file nsa2004.2, which contains paired single-unit recordings from 38 sites from one monkey. In the experimental design, for the single-unit recordings the direction tuning curve of each neuron was used to assign a preferred-null axis of stimulus motion, such that opposite directions along the axis yield a maximal difference in responsiveness (Bair et al., 2001). For paired recordings, the direction of stimulus motion was selected based on the direction tuning curves of the two neurons, and the criterion used to assign it varied depending on the similarity between the tuning curves. For cells with similar tuning, a compromise between the preferred directions of the two neurons was made. For cells with different tuning, the axis was chosen to match the preference of the most responsive cell. To minimize the influence of this selection of the motion direction on our analysis, we only analyzed the most responsive cell from each site. Accordingly, our initial data set consisted of a total of 251 cells. The same qualitative results were obtained when limiting the analysis to data from nsa2004.1 alone. Further criteria regarding the number of trials per stimulus level were used to select the cells. As discussed below, if not indicated otherwise, we present the results from 107 cells that fulfilled all the required criteria.
Analysis of stimulus-dependent choice probabilities
Our analysis of the stimulus dependencies of choice probabilities is based on examining the patterns in the $\mathrm{CP}({p}_{\text{CR}})$ profile as a function of the probability ${p}_{\text{CR}}\equiv p(D=1)$. We here describe how these profiles are constructed, the surrogates-based method used to assess the significance of stimulus dependencies, and the clustering analysis used to identify different stimulus dependence patterns. Matlab functions are available at https://github.com/DanielChicharro/CP_DP (Chicharro, 2021; copy archived at swh:1:rev:5850c573860eb04317e7dc550f96b1f47ca91c6a) to calculate weighted average CPs, to obtain CP profiles, and to generate surrogates consistent with the null hypothesis of a constant CP.
Profiles of CP as a function of the choice rate
We constructed $\mathrm{CP}({p}_{\text{CR}})$ profiles instead of $\mathrm{CP}(s)$ profiles based on the prediction from the theoretical threshold model of the modulatory factor $h({p}_{\text{CR}})$. We estimated the ${p}_{\text{CR}}$ value associated with each random dots coherence level using the psychophysical function of each monkey separately. For each coherence level, we calculated a CP value if at least 15 trials were available in total, and at least four for each choice. In the original analysis of Britten et al., 1996, stimulus dependencies $\mathrm{CP}(s)$ were examined by averaging across cells the CP at each coherence level. This analysis did not separate the within-cell stimulus dependencies $\mathrm{CP}(s)$ from variability due to changes in choice probabilities across cells. In particular, in the data set the stimulus levels presented vary across cells, which means that for each coherence level the average CP does not only reflect any potential stimulus dependence of the CP but also which subset of cells contributes to the average at that level. Therefore, we binned the range of ${p}_{\text{CR}}$ in a way that for each cell at least one stimulus level mapped to each bin of ${p}_{\text{CR}}$. We here present the results using five bins defined as $[0,0.3]$, $[0.3,0.5-\epsilon ]$, $[0.5-\epsilon ,0.5+\epsilon ]$, $[0.5+\epsilon ,0.7]$, and $[0.7,1]$, where ε was selected such that only trials with the uninformative (zero coherence) stimulus fell in the central bin. Results are robust to the exact definition of the bins. We selected larger bins for highly informative stimulus levels for two reasons. First, the stimulus levels used in the experimental design do not uniformly cover the range of ${p}_{\text{CR}}$: there are more stimulus levels corresponding to ${p}_{\text{CR}}$ values close to ${p}_{\text{CR}}=0.5$. Second, the CP estimates are noisier for highly informative stimuli.
In particular, the standard error of the CP estimates depends on the magnitude of the CP itself (Bamber, 1975; Hanley and McNeil, 1982), but for small $|\mathrm{CP}-0.5|$ it can be approximated as
$${\sigma}_{\widehat{\mathrm{CP}}}\approx \sqrt{\frac{1}{12\,K\,{p}_{\text{CR}}(1-{p}_{\text{CR}})}},$$
where $K$ is the number of trials. The product ${p}_{\text{CR}}(1-{p}_{\text{CR}})$ is maximal at ${p}_{\text{CR}}=0.5$ and decreases quadratically when ${p}_{\text{CR}}$ approaches 0 or 1. Furthermore, in the data set the number of trials $K$ is higher for stimuli with low information, while most frequently $K=30$ for highly informative stimuli. We used these estimates of the $\widehat{\mathrm{CP}}$ error to combine the CPs of ${M}_{k}$ different stimulus levels assigned to the same bin $k$ of ${p}_{\text{CR}}$. The average $\mathrm{CP}({p}_{\mathrm{CR},k})$ for bin $k$ was calculated as $\mathrm{CP}({p}_{\mathrm{CR},k})={\sum}_{j=1}^{{M}_{k}}{w}_{j}\mathrm{CP}({s}_{jk})$, with normalized weights proportional to $\sqrt{{K}_{j}{p}_{\mathrm{CR},j}(1-{p}_{\mathrm{CR},j})}$. A full profile $\mathrm{CP}({p}_{\text{CR}})$ could be constructed for 107 cells, while for the remaining cells a CP value could not be calculated for at least one of the bins because of the criteria on the number of trials. Together with the profile $\mathrm{CP}({p}_{\text{CR}})$, we also obtained an estimate of its error as a weighted average of the errors, which corresponds to
where $\langle {w}_{\mathrm{U}}\rangle $ is the average of the unnormalized weights ${w}_{\mathrm{U},j}\equiv \sqrt{{K}_{j}{p}_{\mathrm{CR},j}(1-{p}_{\mathrm{CR},j})}$. Following this procedure, we can iteratively calculate weighted averages of the CPs across different sets. In particular, we used this same type of average to obtain averaged $\mathrm{CP}({p}_{\text{CR}})$ profiles across cells. Importantly, in contrast to the analysis of Britten et al., 1996, we first separated the cells into two groups, with a positive or negative average $\mathrm{CP}-0.5$ value, given that the effect of $h({p}_{\text{CR}})$ predicts an inverse modulation by ${p}_{\text{CR}}$.
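The weighted combination of CPs within a bin can be sketched as follows (a minimal implementation with our own function name; the weights are proportional to $\sqrt{{K}_{j}{p}_{\mathrm{CR},j}(1-{p}_{\mathrm{CR},j})}$, that is, inversely proportional to the approximate CP standard error):

```python
import numpy as np

def weighted_bin_cp(cps, n_trials, p_crs):
    """Combine the CPs of the stimulus levels assigned to one p_CR bin,
    weighting each level inversely to its approximate estimation error."""
    cps, K, p = (np.asarray(x, float) for x in (cps, n_trials, p_crs))
    w_u = np.sqrt(K * p * (1 - p))   # unnormalized weights ~ 1 / SE(CP)
    w = w_u / w_u.sum()
    return float(np.sum(w * cps))

# levels with more trials, or with p_CR closer to 0.5, weigh more
cp_bin = weighted_bin_cp([0.55, 0.60, 0.52], [60, 30, 30], [0.55, 0.8, 0.9])
```

The same weighting can be applied recursively, for example to average $\mathrm{CP}({p}_{\text{CR}})$ profiles across cells.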
Surrogates to test the significance of CP stimulus dependencies
Given a certain average profile $\mathrm{CP}({p}_{\text{CR}})$, we want to assess whether the pattern observed is compatible with the null hypothesis of a constant CP value for all ${p}_{\text{CR}}$ values. In particular, because the error of the CP estimates is sensitive to the number of trials $K$ and to ${p}_{\text{CR}}$ (Equation 18), we want to rule out that any observed structure is only a consequence of changes of $K$ and ${p}_{\text{CR}}$ across the bins used to calculate the $\mathrm{CP}({p}_{\text{CR}})$ profiles. For this purpose, we developed a procedure to build surrogate data sets that are compatible with the hypothesis of a flat $\mathrm{CP}({p}_{\text{CR}})$ and that preserve, at each stimulus level, the number of trials for each choice of the original data. The surrogates are built by shuffling the trials across stimulus levels to destroy any stimulus dependence of the CP. However, because the responsiveness of the cell changes across levels according to its direction tuning curve, responses need to be normalized before the shuffling. Kang and Maunsell, 2012 showed that, to avoid underestimating the CPs, this normalization should take into account that mean responses at each level are determined by the conditional mean response for each choice and also by the choice rate. Under the assumption of a constant CP, they proposed an alternative z-scoring, which estimates the mean and standard deviation correcting for the different contributions of trials corresponding to the two choices (see Appendix 2 for details of their method).
We applied the z-scoring of Kang and Maunsell, 2012 to pool the responses within an interval of stimulus levels with low information, preserving only the separation of trials corresponding to each choice. We selected the interval from −1.6% to 1.6% of coherence values, which comprises a third of the informative coherence levels used in the experiments. Because these stimuli have low information, they lead to ${p}_{\text{CR}}$ values close to ${p}_{\text{CR}}=0.5$, and hence we can approximate the CP as constant within this interval. The fact that the factor $h({p}_{\text{CR}})$ is almost constant around ${p}_{\text{CR}}=0.5$ (see Figure 2A) further supports this approach. We used this pool of neural responses to sample responses for all stimulus levels in the surrogate data set. For each stimulus level of the surrogate data, the number of trials for each choice was preserved as in the original data. In these surrogates, apart from random fluctuations, any structure in the $\mathrm{CP}({p}_{\text{CR}})$ profiles can only be produced by the changes in $K$ and ${p}_{\text{CR}}$ across bins. To test for the existence of significant stimulus dependencies in the original $\mathrm{CP}({p}_{\text{CR}})$ profiles, we calculated the differences $\mathrm{\Delta}{\mathrm{CP}}_{k}=\mathrm{CP}({p}_{\mathrm{CR},k+1})-\mathrm{CP}({p}_{\mathrm{CR},k})$ for the bins $k=1,\mathrm{\dots},4$. To test for an asymmetric pattern with respect to ${p}_{\text{CR}}=0.5$, the average of $\mathrm{\Delta}{\mathrm{CP}}_{k}$ across bins was calculated. To test for a symmetric pattern, the sign of the difference was flipped for the bins corresponding to ${p}_{\text{CR}}<0.5$ before averaging. When testing for a pattern consistent with the modulation predicted by the threshold model, the shape was inverted for cells with an average CP lower than 0.5. The same procedure was applied to each surrogate $\mathrm{CP}({p}_{\text{CR}})$ profile.
We generated 8000 surrogates and estimated the p-value as the fraction of surrogates for which the average $\mathrm{\Delta}\mathrm{CP}$ was higher than for the original data.
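The surrogate test can be sketched as follows (function names, the toy data, and the simplified CP estimator are ours; the proper normalization of Kang and Maunsell, 2012 is not reproduced here). Responses are resampled from pooled low-information trials, separately per choice and preserving the per-level trial counts, and the p-value is the fraction of surrogates whose average $\mathrm{\Delta}\mathrm{CP}$ exceeds the observed one:

```python
import numpy as np

rng = np.random.default_rng(3)

def cp(r1, r0):
    """ROC-area CP between responses for choice D=1 (r1) and D=-1 (r0)."""
    r1, r0 = np.asarray(r1, float), np.asarray(r0, float)
    return ((r1[:, None] > r0[None, :]).mean()
            + 0.5 * (r1[:, None] == r0[None, :]).mean())

def mean_dcp(levels):
    """Average difference of CP between consecutive stimulus levels."""
    return float(np.diff([cp(a, b) for a, b in levels]).mean())

def surrogate_pvalue(levels, pool1, pool0, n_surr=1000):
    """levels: per-level pairs (responses for D=1, responses for D=-1),
    ordered by p_CR.  pool1/pool0: normalized responses pooled over the
    low-information levels, kept separate by choice."""
    obs = mean_dcp(levels)
    surr = [mean_dcp([(rng.choice(pool1, len(a)), rng.choice(pool0, len(b)))
                      for a, b in levels]) for _ in range(n_surr)]
    return float(np.mean(np.array(surr) >= obs))

# toy cell whose CP grows across stimulus levels (shifts are our choice)
levels = [(rng.normal(m, 1, 50), rng.normal(0, 1, 50)) for m in (0, 1, 2, 3)]
pool1, pool0 = rng.normal(0, 1, 300), rng.normal(0, 1, 300)
p_val = surrogate_pvalue(levels, pool1, pool0)
```

Because the surrogates keep the per-choice trial counts of the original data, any residual structure in their profiles reflects only the estimation-error dependence on $K$ and ${p}_{\text{CR}}$.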
Clustering analysis
We used clustering analysis to examine the patterns in the $\mathrm{CP}({p}_{\text{CR}})$ profiles beyond the stereotyped shape $h({p}_{\text{CR}})$ predicted from the threshold model. We first used nonparametric $k$-means clustering for an exploratory analysis of which patterns are more common among the 107 cells for which a complete $\mathrm{CP}({p}_{\text{CR}})$ profile could be constructed. The clustering was implemented by calculating cosine distances between vectors defined as $\mathrm{CP}({p}_{\text{CR}})-0.5$. The selection of this distance is consistent with the prediction of the threshold model that a different pattern is expected for cells with a CP higher or lower than 0.5. We examined the patterns associated with the clusters as a function of the number of clusters to identify robust patterns of dependence (see Appendix 3—figure 1B,C). We then focused on a symmetric and an asymmetric pattern of $\mathrm{CP}({p}_{\text{CR}})$ with respect to ${p}_{\text{CR}}=0.5$, for cells with an average CP higher than 0.5. To better interpret these two clusters, we complemented the analysis with a parametric clustering approach in which a symmetric and an asymmetric template were a priori selected to cluster the $\mathrm{CP}({p}_{\text{CR}})$ profiles. To assess the significance of the $\mathrm{CP}({p}_{\text{CR}})$ patterns, we repeated the same clustering procedure for surrogate data generated as described above. We refer to Appendix 3 for a more detailed description of the construction, visualization, and significance assessment of the $\mathrm{CP}({p}_{\text{CR}})$ patterns.
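The referenced repository provides Matlab code; a rough Python equivalent of the cosine-distance $k$-means (our own minimal implementation, applied here to toy profiles) normalizes the centered profiles to unit length, so that squared Euclidean distance becomes monotonic in cosine distance:

```python
import numpy as np

def cosine_kmeans(profiles, k, n_iter=50, seed=0):
    """k-means on CP(p_CR) - 0.5 vectors under cosine distance: rows are
    L2-normalized so that ||u - v||^2 = 2(1 - cos(u, v)), then standard
    k-means iterations are run with renormalized centroids."""
    rng = np.random.default_rng(seed)
    X = np.asarray(profiles, float) - 0.5
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # greedy farthest-point initialization of the k centers
    idx = [int(rng.integers(len(X)))]
    for _ in range(k - 1):
        d = np.min(((X[:, None] - X[idx][None]) ** 2).sum(-1), axis=1)
        idx.append(int(np.argmax(d)))
    centers = X[idx].copy()
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                c = X[labels == j].mean(0)
                centers[j] = c / np.linalg.norm(c)
    return labels

# toy profiles: a symmetric and an asymmetric CP(p_CR) shape plus noise
rng = np.random.default_rng(4)
sym = 0.5 + np.array([-0.05, -0.02, 0.0, -0.02, -0.05])
asym = 0.5 + np.array([-0.05, -0.02, 0.0, 0.02, 0.05])
profiles = np.vstack([sym + rng.normal(0, 0.005, 5) for _ in range(10)]
                     + [asym + rng.normal(0, 0.005, 5) for _ in range(10)])
labels = cosine_kmeans(profiles, 2)
```

With the cosine distance, two profiles with the same shape but different overall deviation from 0.5 fall in the same cluster, which is the property motivating this choice in the text.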
The effect of response gain fluctuations on choice probabilities
To model the effect on the CP of response gain fluctuations, we adopted a classic feedforward encoding/decoding model (Shadlen et al., 1996; Haefner et al., 2013), with a linear decoder $d={\overrightarrow{w}}^{\top}\overrightarrow{r}$ (Equation 8), for which the CP depends on cross-neuronal correlations and the readout weights $\overrightarrow{w}$ following Equation 9. This expression can be derived from Equation 7 by directly calculating the choice correlation from its definition (Equation 2). The expressions $\mathrm{cov}({r}_{i},d)={(\mathrm{\Sigma}(s)\overrightarrow{w})}_{i}$ and ${\sigma}_{d}^{2}={\overrightarrow{w}}^{\top}\mathrm{\Sigma}(s)\overrightarrow{w}$ are obtained as derived in the Supplementary Material S1 of Haefner et al., 2013. For this model, if the readout weights are optimized for the form of covariability of the uninformative stimulus ${s}_{0}$ at the decision boundary, the CPs are proportional to the neurometric sensitivity of the cells (Haefner et al., 2013; Pitkow et al., 2015), a relationship for which there is some experimental support (e.g. Britten et al., 1996; Parker et al., 2002, reviewed in Nienborg et al., 2012). In more detail, modeling the responses as ${r}_{i}={f}_{i}(s)+{\xi}_{i}$, with tuning functions $\overrightarrow{f}(s)=({f}_{1}(s),\mathrm{\dots},{f}_{n}(s))$ and a covariance structure $\mathrm{\Sigma}$ of the neurons’ intrinsic variability ${\xi}_{i}$, the optimal readout weights have the form
$$\overrightarrow{w}\propto {\mathrm{\Sigma}}^{-1}({s}_{0})\,{\overrightarrow{f}}^{\prime}({s}_{0}),$$
where ${\overrightarrow{f}}^{\prime}({s}_{0})$ and $\mathrm{\Sigma}({s}_{0})$ are the derivative of the tuning curves and the response covariance matrix, respectively, for $s={s}_{0}$. With these optimal weights, the covariability of the population responses is unbundled, with ${\mathrm{\Sigma}}^{-1}({s}_{0})$ canceling the effect of $\mathrm{\Sigma}({s}_{0})$ in $\mathrm{cov}({r}_{i},d)={(\mathrm{\Sigma}(s)\overrightarrow{w})}_{i}$, and for each cell the CC is proportional to its own neurometric sensitivity ${f}_{i}^{\prime}/{\sigma}_{{r}_{i}}$, namely
While this expression is valid for the uninformative stimulus ${s}_{0}$, we examined how this CP expression is perturbed for other, informative stimuli $s$ in the presence of gain fluctuations that make the covariance structure $\mathrm{\Sigma}(s)$ stimulus dependent, altering the structure for which the readout weights are optimized. Goris et al., 2014 estimated that in MT gain fluctuations accounted for more than 75% of the variance in the responses to sinusoidal gratings, and we found that in the data set of Britten et al., 1996 gain fluctuations also explain a large fraction of the trial-to-trial variability of the neurons (62 ± 25% across neurons). Trial-to-trial excitability fluctuations are modeled as a gain modulatory factor ${g}_{k}$, such that the tuning function for cell $i$ in trial $k$ is ${f}_{ik}(s)={g}_{k}{f}_{i}(s)$. In general, the magnitude of the gain may vary across cells, as well as the degree to which the gain cofluctuates across cells. We here modeled a global gain fluctuation affecting the response of the whole population. Given the gain variability, the covariance structure can be partitioned as in Equation 10, as the sum of a component $\overline{\mathrm{\Sigma}}$ unrelated to the gain fluctuations – which for simplicity we consider to be stimulus independent – and the gain-induced covariance ${\sigma}_{G}^{2}\overrightarrow{f}(s){\overrightarrow{f}}^{T}(s)$. In a first-order approximation, a change $\mathrm{\Delta}s=s-{s}_{0}$ in the stimulus leads to a change in the covariance structure such that
$$\mathrm{\Sigma}(s)\approx \mathrm{\Sigma}({s}_{0})+{\sigma}_{G}^{2}\left[\overrightarrow{f}({s}_{0}){\overrightarrow{f}}^{\prime T}({s}_{0})+{\overrightarrow{f}}^{\prime}({s}_{0}){\overrightarrow{f}}^{T}({s}_{0})\right]\mathrm{\Delta}s,$$
where $\mathrm{\Sigma}({s}_{0})=\overline{\mathrm{\Sigma}}+{\sigma}_{G}^{2}\overrightarrow{f}({s}_{0}){\overrightarrow{f}}^{T}({s}_{0})$ is the covariance structure for which the weights are optimized. Combining this covariance structure with the form of the optimal readout weights (Equation 20), we derive the changes in $\mathrm{cov}({r}_{i},d)$, ${\sigma}_{{r}_{i}}^{2}$, and ${\sigma}_{d}^{2}$ with $\mathrm{\Delta}s$, and, given Equation 9, determine the perturbation of the CP, leading to the following CP expressions
where ${\sigma}_{d}^{2}({s}_{0})={\overrightarrow{w}}^{\top}\mathrm{\Sigma}({s}_{0})\overrightarrow{w}$, ${\sigma}_{{r}_{i}}^{2}({s}_{0})={\mathrm{\Sigma}}_{ii}({s}_{0})$, ${\beta}_{{\sigma}_{r}}\equiv {\sigma}_{G}^{2}{f}_{i}({s}_{0}){f}_{i}^{\prime}({s}_{0})/{\sigma}_{{r}_{i}}^{2}({s}_{0})$, and ${\lambda}_{i}^{2}\equiv {\sigma}_{{r}_{i}g}^{2}({s}_{0})/{\sigma}_{{r}_{i}}^{2}({s}_{0})$, as introduced in Equation 11, with ${s}_{0}$ resulting in ${p}_{\text{CR}}=0.5$ for an unbiased decoder. Equation 23a relates the choice correlation for ${p}_{\text{CR}}=0.5$ to the choice correlation ${\mathrm{CC}}_{i0}({p}_{\text{CR}}=0.5)$ that cell $i$ would have if there were no gain fluctuations (${\sigma}_{G}^{2}=0$). The coefficient ${\beta}_{\mathrm{CC}}\equiv \sqrt{1-{\lambda}_{i}^{2}}$ indicates a decrease of the CC in the presence of gain fluctuations, because of the increase in the response variability produced by the fluctuations, namely ${\sigma}_{{r}_{i}g}^{2}={\sigma}_{G}^{2}{f}_{i}^{2}$. Equation 23b describes the $\mathrm{CC}({p}_{\text{CR}})$ profile induced by the gain fluctuations. The second summand corresponds to the increase in the choice correlation due to a new component of $\mathrm{cov}({r}_{i},d)$ proportional to $\mathrm{\Delta}s$, given that the whole population response determining $d$ is jointly modulated by the gain. In the first summand, the factor $\left[1-{\beta}_{{\sigma}_{r}}\mathrm{\Delta}s({p}_{\text{CR}})\right]$ indicates an attenuation of ${\mathrm{CC}}_{i}({p}_{\text{CR}}=0.5)$ analogous to $\sqrt{1-{\lambda}_{i}^{2}}$ in Equation 23a, associated with an increase of the variance of the responses ${r}_{i}$ due to $\mathrm{\Delta}s$.
Rearranging the terms in Equation 23b, and taking into account the form of the CC for ${p}_{\text{CR}}=0.5$ (Equation 21), the expression in Equation 11 is obtained, which indicates that the overall effect of the gain fluctuations is an increase of the choice correlation for the stimuli to which the cell is more responsive. A more general form of this expression is derived in Appendix 4, valid for any unbiased decoder.
Apart from producing an asymmetric $\mathrm{CP}({p}_{\text{CR}})$ pattern, the gain fluctuations also create a negative covariation between the CP at ${p}_{\text{CR}}=0.5$ and the degree of asymmetry of the $\mathrm{CP}({p}_{\text{CR}})$ pattern. This covariation appears because the cells with a higher portion of their variability driven by the gain fluctuations (higher ${\lambda}_{i}$) have a higher attenuation of ${\mathrm{CC}}_{i}({p}_{\text{CR}}=0.5)$, given Equation 23a, while both a higher ${\lambda}_{i}$ and a smaller ${\mathrm{CC}}_{i}({p}_{\text{CR}}=0.5)$ lead to an increase in the slope ${\beta}_{{p}_{\text{CR}}}\equiv {\sigma}_{G}{\lambda}_{i}(1-{\mathrm{CC}}_{i}^{2}({p}_{\text{CR}}=0.5))$ of the dependence on $\mathrm{\Delta}s({p}_{\text{CR}})$ in Equation 23b. Furthermore, a smaller ${\mathrm{CC}}_{i}({p}_{\text{CR}}=0.5)$ also leads to a smaller effect of the multiplicative symmetric modulation $h({p}_{\text{CR}})$, further contributing to the negative covariation between the magnitude of the CP and the predominance of the symmetric or asymmetric pattern.
To illustrate the properties common to the model and to the $\mathrm{CP}({p}_{\text{CR}})$ patterns from the MT data, Figure 4E shows CPs from Equation 23 as a function of ${p}_{\text{CR}}$ for examples combining four values of ${\mathrm{CC}}_{i0}({p}_{\text{CR}}=0.5)$ and two values of ${\sigma}_{G}^{2}$, while the other parameters of the cell responses are kept constant. In particular, to determine ${\lambda}_{i}^{2}$ only in terms of the strength of the gain, we fixed the rate to ${f}_{i}({p}_{\text{CR}}=0.5)=10\ \mathrm{spikes}/\mathrm{s}$ and considered the variance not associated with the gain to be equal to that rate, so that ${\lambda}_{i}^{2}=1/(1+0.1/{\sigma}_{G}^{2})$. Accordingly, the values of ${\sigma}_{G}^{2}$ in Figure 4E, namely ${\sigma}_{G}^{2}=0.1$ and ${\sigma}_{G}^{2}=0.01$, correspond to ${\lambda}_{i}^{2}=0.5$ and ${\lambda}_{i}^{2}=0.09$, respectively. Further analysis of the model is provided in Appendix 4, where we also discuss the form of the $\mathrm{CP}({p}_{\text{CR}})$ pattern produced by gain fluctuations when the decoder is composed of two pools with opposite choice preferences (Shadlen et al., 1996).
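The qualitative behavior of the gain model can also be checked numerically without the first-order approximation. The sketch below (toy tuning curves and parameters are our own choices, not those of Figure 4E) builds $\mathrm{\Sigma}(s)=\overline{\mathrm{\Sigma}}+{\sigma}_{G}^{2}\overrightarrow{f}(s){\overrightarrow{f}}^{T}(s)$, fixes the readout weights at the uninformative stimulus, and evaluates the choice correlations ${\mathrm{CC}}_{i}=\mathrm{cov}({r}_{i},d)/({\sigma}_{{r}_{i}}{\sigma}_{d})$ away from ${s}_{0}$:

```python
import numpy as np

def choice_correlations(s, f, df, sigma_bar, sigma_g2, s0=0.0):
    """CC_i(s) for a linear readout d = w^T r with weights fixed at s0,
    w = Sigma(s0)^{-1} f'(s0), under a global gain fluctuation that makes
    Sigma(s) = sigma_bar + sigma_g2 * f(s) f(s)^T."""
    Sigma0 = sigma_bar + sigma_g2 * np.outer(f(s0), f(s0))
    w = np.linalg.solve(Sigma0, df(s0))
    Sigma = sigma_bar + sigma_g2 * np.outer(f(s), f(s))
    cov_rd = Sigma @ w                      # cov(r_i, d) for each cell i
    var_d = w @ Sigma @ w
    return cov_rd / np.sqrt(np.diag(Sigma) * var_d)

# toy population: linear tuning with heterogeneous slopes (our choice)
n = 6
slopes = np.linspace(-1.0, 1.0, n)
f = lambda s: 10.0 + slopes * s
df = lambda s: slopes
sigma_bar = 10.0 * np.eye(n)                # gain-unrelated variability

cc_flat = choice_correlations(0.0, f, df, sigma_bar, sigma_g2=0.1)
cc_up = choice_correlations(2.0, f, df, sigma_bar, sigma_g2=0.1)
```

For the cells whose responsiveness increases with the stimulus (positive slopes), the choice correlation at the informative stimulus exceeds its value at ${s}_{0}$, reproducing the asymmetric modulation described in the text.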
To experimentally estimate the coefficients ${\beta}_{{p}_{\text{CR}}}^{(\mathrm{exp})}$, we fitted a quadratic regression of the CPs on the stimulus levels. To theoretically estimate the coefficients ${\beta}_{{p}_{\text{CR}}}^{(\mathrm{th})}$, we used the negative binomial model of Goris et al., 2014 to estimate ${\sigma}_{G}^{2}$ for each cell and used the form ${\beta}_{{p}_{\text{CR}}}\equiv {\sigma}_{G}{\lambda}_{i}(1-{\mathrm{CC}}_{i}^{2}({p}_{\text{CR}}=0.5))$ predicted by the gain model (Equation 11) to estimate ${\beta}_{{p}_{\text{CR}}}^{(\mathrm{th})}$.
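The experimental estimate can be sketched as an ordinary quadratic fit (the per-level data below are hypothetical, and taking the linear coefficient as the asymmetric slope is our reading of the procedure):

```python
import numpy as np

# hypothetical per-level data for one cell: signed stimulus levels and CPs
s_levels = np.array([-0.5, -0.2, -0.05, 0.0, 0.05, 0.2, 0.5])
cp_levels = np.array([0.52, 0.55, 0.57, 0.58, 0.60, 0.62, 0.66])

# quadratic regression CP(s) ~ c0 + c1*s + c2*s^2; the linear coefficient
# c1 is the asymmetric component, a stand-in for beta_pCR^(exp)
c2, c1, c0 = np.polyfit(s_levels, cp_levels, 2)
```

With stimulus levels placed symmetrically around zero, the linear coefficient decouples from the even terms, which keeps the asymmetry estimate unaffected by the symmetric modulation.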
Generalized linear models with interactions between stimulus and choice predictors
We implemented a new GLM, called the stimulus-dependent-choice GLM, that includes regression coefficients quantifying the effect on the firing rate of interactions between stimulus and choice. This model of the firing rate of each neuron was compared to two simpler and more traditional models: a stimulus-only GLM, which includes only stimulus predictors of the neuron’s firing rate, and a stimulus-independent-choice GLM, which includes, together with the stimulus predictors, a single stimulus-independent choice predictor.
In more detail, all three GLMs were Poisson models in which the mean firing rate $\mu ({r}_{i})$ of cell $i$ was generally expressed by the following equation:
The terms $\sum_{j=0}^{4}a_{j}s^{j}$ are present in all three types of GLM and model the stimulus influence with a fourth-order polynomial of the stimulus level. These are the only terms of the stimulus-only GLM.
The choice dependence is modeled by $\sum_{j=1}^{N_{c}}I_{P_{j}}(p_{\text{CR}})b_{j}D$, with the parameter $N_{c}$ ($N_{c}\in \{1,2,3\}$) setting the number of possible different levels of stimulus-dependent choice (we restricted the fitting to up to three different choice levels for simplicity, and because we found empirically that this works well for the MT data analyzed here). $I_{P_{j}}(p_{\text{CR}})$ is an indicator function that equals one if a $p_{\text{CR}}$ value belongs to the subset $P_{j}$ of values associated with the choice parameter $b_{j}$, and is zero otherwise. For the stimulus-independent-choice model, we set $N_{c}=1$, so that the choice affects the predicted responses equally for all stimulus levels. For the stimulus-dependent-choice GLM, we set $N_{c}>1$ and determined the subsets of stimulus levels associated with each parameter using the $\mathrm{CP}(p_{\text{CR}})$ profiles as a first characterization of the stimulus dependencies. As in the CP analysis, for each cell we determined which coherence values could be included given a criterion requiring a minimum number of trials for each choice (at least 4). The existence of non-monotonic $\mathrm{CP}(p_{\text{CR}})$ profiles, such as the symmetric pattern around $p_{\text{CR}}=0.5$, indicated that it would be suboptimal to tile the domain of $p_{\text{CR}}$ with $N_{p}$ bins and assign a different choice-parameter level to each bin. Accordingly, we first estimated the $\mathrm{CP}(p_{\text{CR}})$ profile of each cell and then used $k$-means clustering with a Euclidean distance to cluster the components of $\mathrm{CP}(p_{\text{CR}})$, corresponding to the bins of $p_{\text{CR}}$, into $N_{c}$ subsets. A different GLM choice parameter $b_{j}$ was then assigned to each choice-parameter level $j=1,\dots,N_{c}$.
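The structure of the predictors can be sketched as follows (Python; a minimal illustration with hypothetical variable names, not the original fitting code; the resulting design matrix would then be passed to any Poisson-GLM fitting routine):

```python
import numpy as np

def design_matrix(s, D, level_of_trial, n_levels):
    """Predictors of the stimulus-dependent-choice GLM (sketch).

    s              : stimulus level of each trial
    D              : choice of each trial (+1 or -1)
    level_of_trial : choice-parameter level j assigned to each trial's
                     p_CR bin by the clustering step (0 .. n_levels-1)
    n_levels       : N_c, the number of choice-parameter levels
    """
    s = np.asarray(s, dtype=float)
    # stimulus part: fourth-order polynomial, sum_j a_j s^j
    X_stim = np.column_stack([s ** j for j in range(5)])
    # choice part: indicator functions I_{P_j}(p_CR) multiplying D
    X_choice = np.zeros((len(s), n_levels))
    for j in range(n_levels):
        X_choice[:, j] = (level_of_trial == j) * D
    return np.hstack([X_stim, X_choice])
```

With $N_{c}=1$ the choice part reduces to the stimulus-independent-choice model; dropping the choice columns gives the stimulus-only model.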
We compared the predictive power of the three types of models using cross-validation. To prevent the fitted choice parameters from being affected by the ratio of trials with each choice, we matched the number of trials of each choice used to fit the model at each choice-parameter level. We first merged into two pools, one for each choice separately, the trials of all stimulus levels assigned to the same choice-parameter level. We then set the number of trials from each pool included in the fitting set to 80% of the trials available in the smaller pool, hence matching the number of trials selected for each choice. The remaining trials were left for the testing set. This procedure was repeated for each choice-parameter level, and a GLM was fitted on the fitting set obtained by combining the selected trials for all levels. This random separation into fitting and testing data sets was repeated 50 times and the average predictive power was calculated. Performance was then quantified by the increase in the likelihood of the data in the testing set with respect to the likelihood of the null model, which assumes a constant firing rate ($L_{0}$). To determine whether incorporating the choice as a predictor improved the prediction, we examined the relative increase in likelihood (RIL), defined as the ratio of the likelihood increase $L(\text{choice},\text{stimulus})-L(\text{stimulus})$ to the increase $L(\text{stimulus})-L_{0}$. For the stimulus-dependent-choice models, we selected the most predictive model from $N_{c}=2,3$. To evaluate the improvement when considering stimulus-dependent choice influences, we compared the RIL obtained for the stimulus-dependent-choice and stimulus-independent-choice models.
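The trial-matching step and the RIL computation can be sketched as follows (Python; illustrative helper names of our own, under the 80% fitting fraction described above):

```python
import numpy as np

rng = np.random.default_rng(0)

def matched_split(idx_pos, idx_neg, frac=0.8):
    """For one choice-parameter level: select an equal number of trials per
    choice for the fitting set (frac of the smaller pool); the remaining
    trials of both pools form the testing set."""
    n_fit = int(frac * min(len(idx_pos), len(idx_neg)))
    pos = rng.permutation(idx_pos)
    neg = rng.permutation(idx_neg)
    fit = np.concatenate([pos[:n_fit], neg[:n_fit]])
    test = np.concatenate([pos[n_fit:], neg[n_fit:]])
    return fit, test

def relative_increase_in_likelihood(L_choice_stim, L_stim, L_null):
    """RIL = (L(choice, stimulus) - L(stimulus)) / (L(stimulus) - L_0)."""
    return (L_choice_stim - L_stim) / (L_stim - L_null)

# toy example: 10 trials for one choice, 6 for the other
fit, test = matched_split(np.arange(10), np.arange(10, 16))
print(len(fit), len(test))  # 8 8
```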
Code availability
The code for the analysis of Choice Probability stimulus dependencies and GLMs with stimulus-choice interaction terms is available at https://github.com/DanielChicharro/CP_DP (copy archived at swh:1:rev:5850c573860eb04317e7dc550f96b1f47ca91c6a).
Appendix 1
The analytical threshold model of choice probability
We here provide details of how the analytical CP expression of Equation 16 is obtained from the definition of Choice Probability (Equation 12) when the probability of the responses for each choice has the form of Equation 15, derived from the threshold model. We subsequently characterize the statistical power for the detection of the threshold-induced modulation $h(p_{\text{CR}})$ as a function of the magnitude of the CP, the number of trials, and the number of cells used to estimate an average $\mathrm{CP}(p_{\text{CR}})$ profile.
Derivation of the CP analytical expression
Plugging the distribution of Equation 15 into the definition of the CP we get
This expression is derived analogously to Equation S1.2 in Haefner et al., 2013, and generalizes the case examined there, which corresponds to $c=0$. To solve this expression, we use some results involving integrals of normal distributions:
where $b=\sqrt{1+\alpha^{2}}$ and $\mathrm{T}$ is Owen's T function (Owen, 1956). The equality above is valid for the cases $q=0,n=1,2$, and $q=1,n=1$, which we used to derive the expressions of the CP and CTA. Using the equality for $q=0,n=1,2$ in Equation (A1), we obtain the CP expression of Equation 16. The case $q=1,n=1$ allows deriving the expression of the CTA (Equation 17) by calculating $\langle r_{i}\rangle_{D=1}$ and $\langle r_{i}\rangle_{D=-1}$ using the form of Equation 15 for $P(r_{i}\mid D=\pm 1)$.
The CP linear approximation of Equation 4 is generically valid when the activity-choice covariations are well captured by the linear dependence between the responses and the choice. It can be derived generically with a first-order approximation in which $d$ or $D$ only affects the mean of the distribution of the responses $r_{i}$. We here only present a restricted derivation, specifically from the exact CP solution resulting from the threshold model. It can be checked that the same approximation follows, for example, from the exact CP solution obtained when taking the conditional distributions $p(r_{i}\mid D)$ to be Gaussians (Dayan and Abbott, 2001; Carnevale et al., 2013) instead of skew-normals (Equation 15) as for the threshold model. Expanding Equation 16 in terms of $\rho_{r_{i}d}$ we get a polynomial approximation
This expansion contains only odd-order terms, producing a symmetry of $\mathrm{CP}-0.5$ with respect to the sign of $\rho_{r_{i}d}$. This explains why Haefner et al., 2013 found the linear approximation to be accurate for a wide range of $\rho_{r_{i}d}$ values: the choice correlation needs to be high for the contribution of $\rho_{r_{i}d}^{3}$ to be relevant. Up to third order, only the linear and cubic terms contribute.
Since $\Phi^{-1}(0.5)=0$, for $p_{\text{CR}}$ values for which $|1-(\Phi^{-1}(p_{\text{CR}}))^{2}|<1$ the third-order term makes a smaller contribution than for the uninformative case. This holds for $\Phi^{-1}(p_{\text{CR}})\in (-\sqrt{2},\sqrt{2})$, which leads to $p_{\text{CR}}\in (0.08,0.92)$. This means that the linear approximation is expected to be even better in this range than for $p_{\text{CR}}=0.5$. Furthermore, for $(\Phi^{-1}(p_{\text{CR}}))^{2}<1$ the third-order contribution is positive, so that for the $p_{\text{CR}}$ values fulfilling this constraint, $p_{\text{CR}}\in (0.16,0.84)$, the linear approximation is expected to underestimate the CP. The range of $p_{\text{CR}}$ in which the linear approximation underestimates or overestimates the CP can be seen in Figure 2 of the main article.
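The quoted $p_{\text{CR}}$ ranges follow directly from the standard normal CDF and can be checked numerically (Python; a self-contained check using only the standard library):

```python
import math

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# |1 - (Phi^{-1}(p_CR))^2| < 1  <=>  Phi^{-1}(p_CR) in (-sqrt(2), sqrt(2))
print(round(Phi(-math.sqrt(2)), 2), round(Phi(math.sqrt(2)), 2))  # 0.08 0.92

# positive third-order contribution: (Phi^{-1}(p_CR))^2 < 1
print(round(Phi(-1.0), 2), round(Phi(1.0), 2))  # 0.16 0.84
```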
Statistical power for the detection of thresholdinduced CP stimulus dependencies
In Figure 2, we showed the shape and magnitude of the threshold-induced CP modulation for different values of $\mathrm{CP}(p_{\text{CR}}=0.5)$. The magnitude of this modulation is small and is most noticeable for the extreme $p_{\text{CR}}$ values, for which the estimation of the CPs is also the poorest. As discussed in the Results, we calculated weighted average CPs both across stimulus levels and across cells to reduce the standard error of the resulting averaged within-cell $\mathrm{CP}(p_{\text{CR}})$ profiles. We here characterize the statistical power for the detection of this CP modulation as a function of the magnitude of $\mathrm{CP}(p_{\text{CR}}=0.5)$, the number of cells, and the number of trials per stimulus level. For this purpose, we generated responses following the probability distribution analytically derived in the model (Equation 15). We selected five $\mathrm{CP}(p_{\text{CR}}=0.5)$ levels, namely $\{0.55,0.6,0.65,0.7,0.75\}$, and for each we simulated cell responses corresponding to different $p_{\text{CR}}$ values. We simulated responses for a collection of 5000 cells, selecting their $\mathrm{CP}(p_{\text{CR}}=0.5)$ values from a uniform distribution centered at each $\mathrm{CP}(p_{\text{CR}}=0.5)$ level, with a range of width 0.1. We repeated the same procedure using different numbers of trials. For each selection of the number of trials, we generated three times that number of trials for the $p_{\text{CR}}$ bin corresponding to an uninformative stimulus, to allow the shuffling of trials used in the construction of surrogate data, as for the experimental data. We estimated averaged $\mathrm{CP}(p_{\text{CR}})$ profiles for different numbers $N$ of cells. We repeated this estimation 500 times, independently sampling at random from the 5000 cells the $N$ cells used for the average. For each of these repetitions, we generated the surrogate data required to implement the surrogates test described in Methods.
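The CP estimator applied to such simulated responses is the standard ROC-area (Mann-Whitney) statistic; a toy version (Python; using a Gaussian choice-conditioned shift rather than the skew-normal of Equation 15, purely to illustrate the estimator) is:

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_cp(r_pos, r_neg):
    """CP as the probability that a response from D=+1 trials exceeds a
    response from D=-1 trials (area under the ROC curve; ties count 1/2)."""
    diff = r_pos[:, None] - r_neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# toy check: a small choice-related shift yields a CP slightly above 0.5
r_neg = rng.normal(10.0, 2.0, size=1000)
r_pos = rng.normal(10.5, 2.0, size=1000)
print(empirical_cp(r_pos, r_neg))
```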
Appendix 1—figure 1 shows the p-values obtained when testing for a symmetric increase of the CP at extreme $p_{\text{CR}}$ values. As expected, p-values decrease with the number of cells, the number of trials, and the magnitude of the CP. This characterization of the p-values indicates the utility of our method to calculate averaged within-cell $\mathrm{CP}(p_{\text{CR}})$ profiles, which combines CP estimates across neighboring stimulus levels, analogously to increasing the number of trials, as well as averaging CP estimates across cells. We can compare these predicted p-values with the p-values obtained when analyzing the experimental data. Concretely, for cluster 2 in Figure 4B, which contains $N=48$ cells, the average $\mathrm{CP}(p_{\text{CR}}=0.5)$ value is $\langle \mathrm{CP}(p_{\text{CR}}=0.5)\rangle =0.57$ and, for each of the two $p_{\text{CR}}$ bins most distant from $p_{\text{CR}}=0.5$, the average number of trials available is $\langle K\rangle =147$, comprising all stimulus levels assigned to those bins. These values of $\langle \mathrm{CP}(p_{\text{CR}}=0.5)\rangle$, $N$, and $\langle K\rangle$ resulted in a p-value of $p=0.0008$ (Figure 4B), which is substantially smaller than predicted by the simulations of Appendix 1—figure 1A. Accordingly, while the model correctly predicts the existence of a symmetric CP modulation with higher CP values for extreme $p_{\text{CR}}$ values, it underestimates its actual strength. As discussed in Results, this stronger symmetric modulation may be due to other symmetric contributions from $\mathrm{CC}(p_{\text{CR}})$ in addition to $h(p_{\text{CR}})$, or to a dynamic amplification of the threshold-induced modulation, for example due to reinforcing decision-related feedback signals.
Independently of the origin of this larger effect size, the method developed to better characterize within-cell $\mathrm{CP}(p_{\text{CR}})$ dependencies allowed us to detect the actual presence of CP stimulus dependencies in the Britten et al., 1996 data, and promises to be a useful tool for characterizing the properties of these dependencies.
Appendix 2
The relation between the weighted average CP and the grand CP of zscored responses
We here describe the connection between a weighted average CP and the corrected z-scoring procedure of Kang and Maunsell, 2012, used to calculate a grand CP pooling the responses across stimulus levels. We will use $z_{c}$ to refer to the responses with the corrected z-scoring, as opposed to the standard z-scoring $z$. For each stimulus level $s$, the corrected z-score is calculated as $z_{c}=(r-\tilde{\mu}_{r|s})/\tilde{\sigma}_{r|s}$ with
where $\mu_{r|D=\pm 1,s}$ are the mean responses to each choice for the stimulus $s$, $\sigma_{r|D=\pm 1,s}^{2}$ are the variances of the responses to each choice for the stimulus $s$, and $\Delta \mu_{r|D,s}=\mu_{r|D=1,s}-\mu_{r|D=-1,s}$ is the CTA for the fixed stimulus level $s$.
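As an illustration, this normalization can be sketched as follows (Python; the exact form of $\tilde{\mu}_{r|s}$ and $\tilde{\sigma}_{r|s}$ in Equation A5 is not reproduced in the text above, so the code assumes the balanced form of Kang and Maunsell, 2012: the mean and variance of an equal-weight mixture of the two choice-conditional response distributions):

```python
import numpy as np

def corrected_zscore(r, D):
    """Corrected z-scoring of the responses at one stimulus level (sketch).

    Assumed balanced statistics: mu_tilde is the average of the two
    choice-conditional means, and sigma_tilde^2 is the variance of the
    equal-weight mixture of the two choice-conditional distributions.
    """
    mu_pos, mu_neg = r[D == 1].mean(), r[D == -1].mean()
    var_pos, var_neg = r[D == 1].var(), r[D == -1].var()
    dmu = mu_pos - mu_neg                       # CTA at this stimulus level
    mu_tilde = 0.5 * (mu_pos + mu_neg)
    sigma_tilde = np.sqrt(0.5 * (var_pos + var_neg) + (dmu / 2.0) ** 2)
    return (r - mu_tilde) / sigma_tilde
```

Under this assumed form, the choice-conditional means of $z_{c}$ are $\pm\Delta\mu_{r|D,s}/(2\tilde{\sigma}_{r|s})$, consistent with the derivation that follows.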
The relation between the CP and the CTA of Equation 4 holds due to the definition of the measures and does not depend on the nature of the response variable, and hence also holds when the CP is calculated not for the raw responses but for their corrected z-scores. Furthermore, the relation holds when the CP is calculated for a fixed stimulus level, or when the responses are pooled across stimulus levels to calculate the grand CP. For the latter case
where we drop the cell index $i$ for simplicity. We here use the notation $\Delta \mu_{z_{c}|D}=\mathrm{CTA}_{z_{c}}$ and $\sigma_{z_{c}}^{2}=\mathrm{var}(z_{c})$, in correspondence with Equation 4. Furthermore, in this section we need to explicitly differentiate which measures are calculated for a fixed stimulus, as indicated by the conditioning on $s$ in the subindex, and which measures are calculated from the distribution of the pooled responses, in which case there is no stimulus subindex, for example for $\Delta \mu_{z_{c}|D}$. The grand $\mathrm{CP}_{z_{c}}$ and $\Delta \mu_{z_{c}|D}$ are calculated from the distributions $p(z_{c}\mid D=\pm 1)$ obtained after pooling, while for a fixed stimulus $\mathrm{CP}_{z_{c}}(s)$ and $\Delta \mu_{z_{c}|D,s}$ are calculated from $p(z_{c}\mid D=\pm 1,s)$. The choice rate $p_{\text{CR}}$ is defined as in the main text as $p_{\text{CR}}\equiv p(D=1\mid s)$, although previously the dependence on the fixed stimulus was implicit.
We use the definition of $z_{c}$ according to Equation (A5) to calculate $\Delta \mu_{z_{c}|D}$. Given the definition of $\tilde{\mu}_{r|s}$ and $\tilde{\sigma}_{r|s}$, we obtain that the conditional means $\mu_{z_{c}|D=1,s}$ and $\mu_{z_{c}|D=-1,s}$ are equal to $\mu_{z_{c}|D=1,s}=\Delta \mu_{r|D,s}/(2\tilde{\sigma}_{r|s})$ and $\mu_{z_{c}|D=-1,s}=-\Delta \mu_{r|D,s}/(2\tilde{\sigma}_{r|s})$, respectively. We calculate $\Delta \mu_{z_{c}|D}$ as follows:
The first equality corresponds to the definition of $\Delta \mu_{z_{c}|D}$. In the second equality, we have expressed the means $\mu_{z_{c}|D=\pm 1}$ in terms of the conditional means $\mu_{z_{c}|D=\pm 1,s}$ using the general property that the mean of a variable is equal to the average of its conditional means. The third equality results from inserting the actual values of $\mu_{z_{c}|D=\pm 1,s}$. Given Equation (A7), the choice-triggered average obtained after pooling the normalized responses $z_{c}$ across stimulus levels corresponds to a weighted average of $\Delta \mu_{r|D,s}/\tilde{\sigma}_{r|s}$ across stimulus levels. Indeed, the factor $[p(s\mid D=1)+p(s\mid D=-1)]/2$ is properly normalized and plays the role of a weight $w_{z_{c}}(s)$, since $\int \mathrm{d}s\,p(s\mid D=\pm 1)=1$ and hence $\int \mathrm{d}s\,w_{z_{c}}(s)=1$. Moreover, $\tilde{\sigma}_{r|s}$ only introduces a second-order correction with respect to the standard normalization with $\sigma_{r|s}$. In particular, given the skew-normal distributions (Equation 15) resulting from the threshold model, both $\Delta \mu_{r|D,s}^{2}$ and $\sigma_{r|D=\pm 1,s}^{2}$ depend quadratically on the strength of the activity-choice covariations, as determined by the choice correlation (Arnold and Beaver, 2000; Azzalini, 2005). Neglecting this second-order correction we have that $\Delta \mu_{r|D,s}/\tilde{\sigma}_{r|s}\cong \Delta \mu_{r|D,s}/\sigma_{r|s}$ and $\sigma_{z_{c}}\cong \sigma_{z}=1$. Furthermore, taking into account the general relation between the CP and CTA (Equation 4), $\Delta \mu_{r|D,s}/\sigma_{r|s}$ approximates $2\sqrt{\pi}(\mathrm{CP}_{r}(s)-1/2)$ and, analogously, as indicated in Equation (A6), $\Delta \mu_{z_{c}|D}/\sigma_{z_{c}}$ approximates $2\sqrt{\pi}(\mathrm{CP}_{z_{c}}-1/2)$. Altogether, Equation (A7) can be expressed for the CP as
As mentioned above, the weights $w_{z_{c}}(s)\equiv [p(s\mid D=1)+p(s\mid D=-1)]/2$ are properly normalized to $\int w_{z_{c}}(s)\,\mathrm{d}s=1$, and hence $\mathrm{CP}_{z_{c}}$ is approximated as a weighted average of the CPs of the responses at each stimulus level, $\mathrm{CP}_{r}(s)$. This shows that the use of corrected z-scores to pool responses across stimulus levels to calculate a grand CP is, in the linear approximation, equivalent to calculating a weighted average of the CPs at each stimulus level, with a specific selection of the average weights, namely $w_{z_{c}}(s)$. An analogous derivation with the uncorrected z-scoring shows that in that case the pooling across stimulus levels is associated with an improper use of unnormalized weights, which analytically confirms the arguments and simulations in Kang and Maunsell, 2012 indicating that a grand CP calculated with the standard z-scoring provides a biased estimate of an underlying stimulus-independent CP (see a detailed derivation in section S2 of Chicharro et al., 2019). The weights $w_{z_{c}}(s)$ differ from the ones inversely proportional to the standard error of the CP estimates (Equation 18). In particular, if the stimulus set is designed such that across all stimulus levels the rate of the choices is balanced, that is, if $p(D=1)=p(D=-1)$, then these weights simplify to $w_{z_{c}}(s)=p(s)$; that is, the CPs are averaged according to the relative number of trials available for each stimulus level. When there are no CP stimulus dependencies, the weights related to the estimation error are preferable, since the grand CP will then provide a better estimate of the underlying constant CP. In the presence of CP stimulus dependencies, any grand CP calculated as a weighted average across stimulus levels may introduce confounds into the comparison of grand CPs across cell types, areas, or tasks.
For example, if the distribution of the presented stimuli $p(s)$ is not uniform, the weights $w_{z_{c}}(s)=p(s)$ will assign a higher weight to the $\mathrm{CP}(s)$ of certain stimulus levels, and differences in the grand CP across cells may reflect for which stimulus levels the cells compared have a higher $\mathrm{CP}(s)$. Accordingly, characterizing the $\mathrm{CP}(s)$ patterns can also help to understand whether differences in grand CPs reflect functionally meaningful differences or are produced by the weighted averaging in the grand CP estimation.
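In the linear approximation, the grand CP is thus just a weighted average of the per-stimulus CPs; a minimal sketch (Python; hypothetical numbers) makes the weighting explicit:

```python
import numpy as np

def weighted_average_cp(cp_s, p_s_given_pos, p_s_given_neg):
    """Grand CP_zc approximated as a weighted average of the per-stimulus
    CPs, with weights w(s) = [p(s|D=1) + p(s|D=-1)] / 2."""
    w = 0.5 * (np.asarray(p_s_given_pos) + np.asarray(p_s_given_neg))
    assert np.isclose(w.sum(), 1.0)  # weights are properly normalized
    return float(w @ np.asarray(cp_s))

# with balanced choices w(s) = p(s): a non-uniform p(s) pulls the grand CP
# toward the CPs of the over-represented stimulus levels
print(weighted_average_cp([0.5, 0.6, 0.7], [0.5, 0.3, 0.2], [0.5, 0.3, 0.2]))  # ~0.57
```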
Appendix 3
Clustering analysis
We here provide further details about the alternative procedures used to cluster the $\mathrm{CP}(p_{\text{CR}})$ profiles, about the visualization of the clusters, and about how to assess the significance of the $\mathrm{CP}(p_{\text{CR}})$ patterns. As a first step, we implemented a nonparametric $k$-means clustering analysis to cluster the $\mathrm{CP}(p_{\text{CR}})$ profiles of the 107 cells for which a full profile could be constructed. We started with $C=2$ clusters (Figure 4A) and found that this nonparametric approach, when using the cosine distance, recovered qualitatively the same patterns obtained when separating a priori the cells into those with an average CP higher or lower than 0.5 (Figure 3A). Of the two cluster patterns, only that of the cells with an average CP higher than 0.5 was found significant (see below for details on the significance analysis). Given this difference in significance, we subsequently increased the number of clusters in two alternative ways. In a first approach, we a priori separated the cells with an average CP higher or lower than 0.5 and continued the clustering analysis separately for these two groups. Appendix 3—figure 1A shows the obtained subclusters with $C=2$ for the two groups separately. As a second approach, we increased the number of clusters to $C=3$ without any previous separation into two groups. The resulting clusters (Appendix 3—figure 1B) indicated that the separation of the two subclusters for the cells with average CP higher than 0.5 appears naturally without enforcing the separation. Increasing the number of clusters without any a priori separation provided evidence that the two main patterns for cells with average CP higher than 0.5 are robust and still contain a substantial portion of the cells even for $C=6$ (Appendix 3—figure 1C). We therefore focused our subsequent analysis on characterizing the features of these symmetric and asymmetric patterns.
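The clustering step can be sketched as follows (Python; a numpy-only spherical k-means with deterministic farthest-point initialization, standing in for the library routine actually used; in practice one would cluster the $\mathrm{CP}(p_{\text{CR}})$ profiles, or their deviations from 0.5, with several restarts):

```python
import numpy as np

def kmeans_cosine(profiles, n_clusters, n_iter=50):
    """k-means under the cosine distance (sketch): Lloyd iterations on
    unit-norm vectors with renormalized centroids (spherical k-means)."""
    X = np.asarray(profiles, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # deterministic farthest-point initialization
    centers = [X[0]]
    while len(centers) < n_clusters:
        sim = np.max(X @ np.array(centers).T, axis=1)
        centers.append(X[np.argmin(sim)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmax(X @ centers.T, axis=1)  # nearest center in cosine
        for k in range(n_clusters):
            members = X[labels == k]
            if len(members):
                m = members.mean(axis=0)
                centers[k] = m / np.linalg.norm(m)
    return labels
```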
To evaluate the significance of the $\mathrm{CP}(p_{\text{CR}})$ patterns found with the clustering analysis, we repeated the same clustering procedure for the surrogate data generated as described in Methods. For each surrogate, each of the $C$ clusters found was associated with the most similar original pattern among the ones being tested. For example, in Figure 4B, when testing the significance of the symmetric and asymmetric patterns for cells with average CP higher than 0.5, each of the two surrogate cluster patterns was assigned to the most similar of the symmetric and asymmetric patterns. Subsequently, the average of $\Delta \mathrm{CP}_{k}$ across bins was calculated for the original and surrogate profiles as explained in the Methods. The p-value corresponding to each original pattern was calculated from the number of surrogate patterns associated with it for which the average $\Delta \mathrm{CP}_{k}$ was higher.
To visualize the clusters in Figure 4 and Appendix 3—figure 1A, we constructed orthonormal axes using either the vectors corresponding to the centers of the clusters or the selected templates, for nonparametric and parametric clustering, respectively. In the case of nonparametric clustering, the x-axis corresponds to the separation between the two initial clusters, and is closely aligned with the departure of the average CP from 0.5. The y-axis was built as a projection, orthogonal to the x-axis, of the vector connecting the centers of the two subclusters. When templates were used, the x-axis corresponds to the template with a constant CP, and the y-axis was built as an orthogonal projection of the template with an asymmetric profile (a vector with a positive unit slope).
Appendix 4
The effect of gain fluctuations on the CP
We here derive a general CP expression accounting for the effect of response gain fluctuations, valid for a feedforward encoding/decoding decision-making model with any unbiased weights, and subsequently focus on the decoder based on two pools of cells with opposite choice preference, as previously studied in Shadlen et al., 1996. As described in Methods, we consider a decoder $d=\overrightarrow{w}^{\top}\overrightarrow{r}$ (Equation 8), estimating the decision variable $d$ from the responses $r_{i}=f_{i}(s)+\xi_{i}$, with tuning functions $\overrightarrow{f}(s)=(f_{1}(s),\dots,f_{n}(s))$ and a covariance structure $\Sigma (s)$ of the neurons' intrinsic variability $\xi_{i}$. The general expression of the CP, valid for any stimulus level and agnostic about the source of the activity-choice covariations, is
and, as described in Equation 9, the CC is determined by the covariance matrix $\mathrm{\Sigma}(s)$ and the readout weights $\overrightarrow{w}$ as
where $\mathrm{cov}(r_{i},d)(s)=(\Sigma (s)\overrightarrow{w})_{i}$ and $\sigma_{d}^{2}(s)=\overrightarrow{w}^{\top}\Sigma (s)\overrightarrow{w}$. Given that the covariance matrix has a structure $\Sigma (s)=\overline{\Sigma}+\sigma_{G}^{2}\overrightarrow{f}(s)\overrightarrow{f}^{\top}(s)$ (Equation 10), with the component $\sigma_{G}^{2}\overrightarrow{f}(s)\overrightarrow{f}^{\top}(s)$ induced by the gain fluctuations, a change $\Delta s=s-s_{0}$ in the stimulus level departing from the uninformative stimulus $s_{0}$ alters the covariance structure as indicated in Equation 22. This leads to the following perturbation of the CC
which is a generalization of Equation 23b, where $\Delta \sigma_{r_{i}}^{2}(s)$, $\Delta \sigma_{d}^{2}(s)$, and $\Delta \mathrm{cov}(r_{i},d)(s)$ are the changes produced in the variances and covariance by $\Delta s$. From Equations (A10) and 22, the change in the variability of the responses is $\Delta \sigma_{r_{i}}^{2}(s)=2\sigma_{G}^{2}f_{i}(s_{0})f_{i}^{\prime}(s_{0})$, the change in the variability of the decision variable is $\Delta \sigma_{d}^{2}(s)=2\sigma_{G}^{2}\overrightarrow{w}^{\top}\overrightarrow{f}(s_{0})\,\overrightarrow{w}^{\top}\overrightarrow{f}^{\prime}(s_{0})$, and the covariance varies as $\Delta \mathrm{cov}(r_{i},d)(s)=\sigma_{G}^{2}f_{i}(s_{0})\overrightarrow{w}^{\top}\overrightarrow{f}^{\prime}(s_{0})+\sigma_{G}^{2}f_{i}^{\prime}(s_{0})\overrightarrow{w}^{\top}\overrightarrow{f}(s_{0})$. Furthermore, for any unbiased decoder $\overrightarrow{w}^{\top}\overrightarrow{f}^{\prime}(s_{0})=1$, so that $d=\overrightarrow{w}^{\top}\overrightarrow{r}$ properly estimates $d(s)-d(s_{0})=\overrightarrow{w}^{\top}\overrightarrow{f}^{\prime}(s_{0})\Delta s=\Delta s$ (Moreno-Bote et al., 2014). Taking this into account, Equation (A11) can be expressed as
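The quantities entering this perturbation can be verified numerically (Python; a two-cell toy with hypothetical tuning curves and baseline covariance, computing the choice correlations from the covariance structure of Equations 9 and 10):

```python
import numpy as np

def choice_correlations(Sigma, w):
    """CC_i = cov(r_i, d) / (sigma_ri * sigma_d), with d = w^T r."""
    cov_rd = Sigma @ w
    sigma_d = np.sqrt(w @ Sigma @ w)
    return cov_rd / (np.sqrt(np.diag(Sigma)) * sigma_d)

# hypothetical opposite-preference tuning curves around s0 = 0
f = lambda s: np.array([10.0 + 2.0 * s, 10.0 - 2.0 * s])
Sigma_bar = np.array([[10.0, 1.0], [1.0, 10.0]])
sigma_G2 = 0.05
w = np.array([0.25, -0.25])  # unbiased: w^T f'(s0) = 1

for s in (0.0, 1.0):
    Sigma = Sigma_bar + sigma_G2 * np.outer(f(s), f(s))  # Equation 10
    print(choice_correlations(Sigma, w))
```

At $s_{0}$ the two CCs are exactly opposite, while a stimulus change $\Delta s$ modulates their magnitudes through the gain term, as in Equation (A11).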
Here $\lambda_{i}\equiv \sigma_{r_{i}g}(s_{0})/\sigma_{r_{i}}(s_{0})$ quantifies the portion of the response variance of cell $i$ caused by the gain fluctuations, as defined in Equation 11, and $\lambda_{d}\equiv \overrightarrow{w}^{\top}\overrightarrow{f}(s_{0})/\sigma_{d}(s_{0})$ quantifies the portion of the variance of the decision variable caused by the gain fluctuations. We define $\eta_{i}\equiv f_{i}^{\prime}(s_{0})/\sigma_{r_{i}}(s_{0})$, which quantifies the neurometric sensitivity of the cell, and $\eta_{d}\equiv 1/\sigma_{d}(s_{0})$, which quantifies the behavioral sensitivity. Note that Pitkow et al., 2015 defined the so-called neural threshold $\theta_{i}$ and behavioral threshold $\theta_{d}$ such that $\eta_{i}=1/\theta_{i}$ and $\eta_{d}=1/\theta_{d}$, but in our case the measures of sensitivity are better suited to describe the dependence of the CC. In particular, the $\mathrm{CC}_{i}(s_{0})$ of the uninformative stimulus has an attenuation factor that depends on the relative increase in variability in the responses and in the decision variable, each determined by the product $\lambda_{k}\eta_{k}$ of the sensitivity to the change in the stimulus and the relative magnitude of the variance produced by the gain fluctuations. On the other hand, the additional contribution to $\mathrm{CC}_{i}(s)$ depends on the cross-products $\lambda_{k}\eta_{k^{\prime}}$ of the relative magnitude of the gain-related variance for the cell and the sensitivity of the decision variable, or vice versa. Rearranging the terms, this equation can also be expressed as
From this expression, Equation 23b in Methods is recovered for the case of the optimal decoder, since the CC with the optimal decoder at $s_{0}$ is equal to the ratio of the neural and the behavioral sensitivity (Pitkow et al., 2015), that is, $\mathrm{CC}(s_{0})=\eta_{i}/\eta_{d}$, as indicated in Equation 21. For this optimal decoder the second summand of Equation (A13) cancels out, while in the first the product $\mathrm{CC}_{i}(s_{0})\,\eta_{i}/\eta_{d}$ equals $\mathrm{CC}_{i}^{2}(s_{0})$, ensuring that the slope of the dependence on $\Delta s$ is positive. More generally, $\lambda_{d}\propto \overrightarrow{w}^{\top}\overrightarrow{f}(s_{0})$ is zero when the decoder is uncorrelated with the magnitude of the tuning curves. In this case the performance of the decoder is not affected by global gain fluctuations when presenting an uninformative stimulus (Moreno-Bote et al., 2014; Ecker et al., 2016), and the gain does not contribute to the variability of $d$ or to its covariance with the cell responses. This additional assumption was used to derive Equation 23a, such that only the variance of the cell changes with respect to the case of no gain fluctuations.
We now additionally examine the decoder formed by two pools of cells with opposite choice preference, because of the role it has played in previous accounts of activity-choice covariations (Shadlen et al., 1996; Cohen and Newsome, 2009b). We consider the particular configuration examined in Haefner et al., 2013, such that the decoder is formed by $n$ neurons divided into two pools of $n/2$ neurons, all with the same variance $\sigma_{r_{i}}^{2}(s_{0})$, with the same intra-pool covariance $\mathrm{cov}_{\parallel}(r_{i},r_{j})$ for all pairs of cells within the same pool and the same inter-pool covariance $\mathrm{cov}_{\perp}(r_{i},r_{j})$ for all pairs across pools. The readout weights all have the same magnitude, with opposite sign for the cells of the two pools. For this configuration, Haefner et al., 2013 derived (see their Suppl. Material S5) that $\mathrm{CC}_{i}(s_{0})\approx \pm \sqrt{1/n+\Delta \varrho /2}$ in the limit of a large pool, where $\Delta \varrho \equiv (1-2/n)\rho_{\parallel}-\rho_{\perp}$ is the difference between the intra- and inter-pool correlations, and the sign of the choice correlation is opposite across pools. They also showed that $\sigma_{d}(s_{0})=n\sigma_{r_{i}}(s_{0})\mathrm{CC}_{i}(s_{0})/c$, with $c$ being a normalization factor of the weights ensuring that the decoder is unbiased ($\overrightarrow{w}^{\top}\overrightarrow{f}^{\prime}(s_{0})=1$). Accordingly, for this decoder $\eta_{i}/\eta_{d}=nf_{i}^{\prime}(s_{0})\mathrm{CC}_{i}(s_{0})/c$. Assuming that for a neuron/anti-neuron pair in the two pools the firing rates have derivatives of the same magnitude and opposite sign, $c=n\langle f^{\prime}(s_{0})\rangle$, where $\langle f^{\prime}(s_{0})\rangle$ is the average of the magnitudes of the derivatives. This leads to $\eta_{i}/\eta_{d}=f_{i}^{\prime}(s_{0})\mathrm{CC}_{i}(s_{0})/\langle f^{\prime}(s_{0})\rangle$.
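The approximation $\mathrm{CC}_{i}(s_{0})\approx \pm \sqrt{1/n+\Delta \varrho /2}$ can be checked directly against the exact CC computed from the block covariance matrix (Python; hypothetical correlation values):

```python
import numpy as np

n = 100                        # two pools of n/2 cells
rho_in, rho_out = 0.2, 0.1     # intra- and inter-pool correlations
var = 4.0                      # common response variance

# block correlation structure, scaled to the common variance
C = np.full((n, n), rho_out)
C[:n // 2, :n // 2] = rho_in
C[n // 2:, n // 2:] = rho_in
np.fill_diagonal(C, 1.0)
Sigma = var * C

# equal-magnitude weights, opposite sign across pools
w = np.concatenate([np.ones(n // 2), -np.ones(n // 2)])
cc = (Sigma @ w) / (np.sqrt(np.diag(Sigma)) * np.sqrt(w @ Sigma @ w))

pred = np.sqrt(1.0 / n + ((1.0 - 2.0 / n) * rho_in - rho_out) / 2.0)
print(cc[0], pred)  # the exact CC matches the formula
```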
Further following the idea that the two pools contain neurons and anti-neurons with the same response properties but opposite choice preference (Cohen and Newsome, 2009b), $\lambda_{d}\propto \overrightarrow{w}^{\top}\overrightarrow{f}(s_{0})=0$, since for the uninformative stimulus each neuron with tuning curve $f_{k}(s_{0})$ is paired with a cell in the other pool with the same firing rate and an equal weight of opposite sign. Accordingly, for this decoder Equation (A13) takes the form
Given the structure of the covariance matrix, this decoder is in fact optimal if, furthermore, all cells have the same derivative $f_{i}^{\prime}(s_{0})$, in which case Equation (A14) equals Equation 11 and for all cells the $\mathrm{CP}(s)$ pattern has a positive slope in its dependence on $\Delta s$. More generally, with this two-pool decoder and covariance matrix, the $\mathrm{CP}(s)$ pattern can have a negative slope for cells with larger derivatives. Indeed, only for the optimal decoder does the general model of Equation (A13) guarantee a positive slope.
Finally, in Appendix 4—figure 1 we further characterize the dependencies predicted by the model when using an optimal decoder (Equations 11 and 23). The covariation of the coefficients ${\beta}_{{p}_{\mathrm{CR}}}\equiv {\sigma}_{G}{\lambda}_{i}(1-{\mathrm{CC}}_{i}^{2}({p}_{\mathrm{CR}}=0.5))$ from Equation 23b and ${\beta}_{\mathrm{CC}}\equiv \sqrt{1-{\lambda}_{i}^{2}}$ from Equation 23a, which modulate the strength of the $\mathrm{CP}({p}_{\text{CR}})$ dependence and the magnitude of ${\mathrm{CP}}_{i}({p}_{\text{CR}}=0.5)$, respectively, is shown in Appendix 4—figure 1A. We determined ${\lambda}_{i}^{2}$ only in terms of the strength of the gain, as for Figure 4E, namely as ${\lambda}_{i}^{2}=1/(1+0.1/{\sigma}_{G}^{2})$. The range ${\sigma}_{G}^{2}\in [0,\,0.5]$ corresponds to ${\lambda}_{i}^{2}\in [0,\,0.83]$. Appendix 4—figure 1B,C further illustrates how combinations of different ${\mathrm{CC}}_{i0}({p}_{\text{CR}}=0.5)$ and ${\sigma}_{G}^{2}$ values populate the 2D space of $\mathrm{CP}({p}_{\text{CR}})$ profiles, as in Figure 4D,E. $\mathrm{CP}({p}_{\text{CR}})$ profiles were simulated by randomly sampling the average CP values from those observed for the MT cells. For Appendix 4—figure 1B,D, the fluctuation gains were uniformly sampled from the interval ${\sigma}_{G}^{2}\in [0,\,0.1]$, corresponding to ${\lambda}_{i}^{2}\in [0,\,0.5]$. For Appendix 4—figure 1C,E, the 2D space was not evenly sampled, simulating a further dependence between ${\mathrm{CC}}_{i0}({p}_{\text{CR}}=0.5)$ and ${\sigma}_{G}^{2}$ values, which determines the exact balance between the symmetric and asymmetric dependencies observed in the average profiles associated with each cluster.
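The gain-to-$\lambda$ mapping used above is a one-liner; a quick check of the endpoint values, using only the relation ${\lambda}_{i}^{2}=1/(1+0.1/{\sigma}_{G}^{2})$ stated in the text, gives:

```python
# lambda_i^2 as a function of the gain fluctuation strength sigma_G^2,
# as defined in the text: lambda_i^2 = 1 / (1 + 0.1 / sigma_G^2).
def lam2(sigma_g2):
    return 0.0 if sigma_g2 == 0 else 1.0 / (1.0 + 0.1 / sigma_g2)

print(lam2(0.5))  # upper end of sigma_G^2 in [0, 0.5] -> ~0.83
print(lam2(0.1))  # upper end of sigma_G^2 in [0, 0.1] -> 0.5
```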
Data availability
No data was collected as part of this study.

The Neural Signal Archive, ID: Macaque. A relationship between behavioral choice and the visual responses of neurons in macaque MT.
References

A class of distributions which includes the normal ones. Scandinavian Journal of Statistics 12:171–178.

The skew-normal distribution and related multivariate families. Scandinavian Journal of Statistics 32:159–188. https://doi.org/10.1111/j.1467-9469.2005.00426.x

Correlated firing in macaque visual area MT: time scales and relationship to behavior. The Journal of Neuroscience 21:1676–1697. https://doi.org/10.1523/JNEUROSCI.21-05-01676.2001

The area above the ordinal dominance graph and the area below the receiver operating characteristic graph. Journal of Mathematical Psychology 12:387–415. https://doi.org/10.1016/0022-2496(75)90001-2

Noise correlations and perceptual inference. Current Opinion in Neurobiology 58:209–217. https://doi.org/10.1016/j.conb.2019.09.002

Attention improves performance primarily by reducing interneuronal correlations. Nature Neuroscience 12:1594–1600. https://doi.org/10.1038/nn.2439

Feedforward and feedback sources of choice probability in neural population responses. Current Opinion in Neurobiology 37:126–132. https://doi.org/10.1016/j.conb.2016.01.009

Book: Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Cambridge, Massachusetts: The MIT Press.

Perceptually bistable three-dimensional figures evoke high choice probabilities in cortical area MT. The Journal of Neuroscience 21:4809–4821. https://doi.org/10.1523/JNEUROSCI.21-13-04809.2001

On the structure of neuronal population activity under fluctuations in attentional state. The Journal of Neuroscience 36:1775–1789. https://doi.org/10.1523/JNEUROSCI.2044-15.2016

Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences 14:119–130. https://doi.org/10.1016/j.tics.2010.01.003

Neural computations that underlie decisions about sensory stimuli. Trends in Cognitive Sciences 5:10–16. https://doi.org/10.1016/S1364-6613(00)01567-9

The neural basis of decision making. Annual Review of Neuroscience 30:535–574. https://doi.org/10.1146/annurev.neuro.29.051605.113038

Predicting perceptual decisions using visual cortical population responses and choice history. The Journal of Neuroscience 39:6714–6727. https://doi.org/10.1523/JNEUROSCI.0035-19.2019

Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. Journal of Neuroscience 25:3661–3673. https://doi.org/10.1523/JNEUROSCI.5106-04.2005

Comparing perceptual signals of single V5/MT neurons in two binocular depth tasks. Journal of Neurophysiology 92:1586–1596. https://doi.org/10.1152/jn.00851.2003

Defining the V5/MT neuronal pool for perceptual decisions in a visual stereo-motion task. Philosophical Transactions of the Royal Society B: Biological Sciences 371:20150260. https://doi.org/10.1098/rstb.2015.0260

Inferring decoding strategies for multiple correlated neural populations. PLOS Computational Biology 14:e1006371. https://doi.org/10.1371/journal.pcbi.1006371

Characterizing and interpreting the influence of internal variables on sensory activity. Current Opinion in Neurobiology 46:84–89. https://doi.org/10.1016/j.conb.2017.07.006

Hierarchical Bayesian inference in the visual cortex. Journal of the Optical Society of America A 20:1434–1448. https://doi.org/10.1364/JOSAA.20.001434

Choice (history) correlations in sensory cortex: cause or consequence? Current Opinion in Neurobiology 58:148–154. https://doi.org/10.1016/j.conb.2019.09.005

Feature-based attention in visual cortex. Trends in Neurosciences 29:317–322. https://doi.org/10.1016/j.tins.2006.04.001

Decision-related activity in sensory neurons: correlations among neurons and with behavior. Annual Review of Neuroscience 35:463–483. https://doi.org/10.1146/annurev-neuro-062111-150403

Macaque V2 neurons, but not V1 neurons, show choice-related activity. Journal of Neuroscience 26:9567–9578. https://doi.org/10.1523/JNEUROSCI.2256-06.2006

Correlations between the activity of sensory neurons and behavior: how much do they tell us about a neuron's causality? Current Opinion in Neurobiology 20:376–381. https://doi.org/10.1016/j.conb.2010.05.002

Tables for computing bivariate normal probabilities. The Annals of Mathematical Statistics 27:1075–1090. https://doi.org/10.1214/aoms/1177728074

Bridging neural and computational viewpoints on perceptual decision-making. Trends in Neurosciences 41:838–852. https://doi.org/10.1016/j.tins.2018.06.005

Encoding and decoding in parietal cortex during sensorimotor decision-making. Nature Neuroscience 17:1395–1403. https://doi.org/10.1038/nn.3800

Neuronal activity and its links with the perception of multi-stable figures. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 357:1053–1062. https://doi.org/10.1098/rstb.2002.1112

Sense and the single neuron: probing the physiology of perception. Annual Review of Neuroscience 21:227–277. https://doi.org/10.1146/annurev.neuro.21.1.227

Flutter discrimination: neural codes, perception, memory and decision making. Nature Reviews Neuroscience 4:203–218. https://doi.org/10.1038/nrn1058

Cognition as a window into neuronal population space. Annual Review of Neuroscience 41:77–97. https://doi.org/10.1146/annurev-neuro-080317-061936

Cortical state determines global variability and correlations in visual cortex. Journal of Neuroscience 35:170–178. https://doi.org/10.1523/JNEUROSCI.4994-13.2015

A computational analysis of the relationship between neuronal and behavioral responses to visual motion. The Journal of Neuroscience 16:1486–1510. https://doi.org/10.1523/JNEUROSCI.16-04-01486.1996

Comparison of decision-related signals in sensory and motor preparatory responses of neurons in area LIP. The Journal of Neuroscience 38:6350–6365. https://doi.org/10.1523/JNEUROSCI.0668-18.2018

Population code dynamics in categorical perception. Scientific Reports 6:22536. https://doi.org/10.1038/srep22536

Neural correlates of perceptual choice and decision making during fear-disgust discrimination. Journal of Neuroscience 27:2908–2917. https://doi.org/10.1523/JNEUROSCI.3024-06.2007

Causal contribution of primate auditory cortex to auditory perceptual decision-making. Nature Neuroscience 19:135–142. https://doi.org/10.1038/nn.4195

Choice-related activity in the anterior intraparietal area during 3D structure categorization. Journal of Cognitive Neuroscience 27:1104–1115. https://doi.org/10.1162/jocn_a_00773

Origins of choice-related activity in mouse somatosensory cortex. Nature Neuroscience 19:127–134. https://doi.org/10.1038/nn.4183
Decision letter

Joshua I Gold, Senior Editor; University of Pennsylvania, United States

Kristine Krug, Reviewing Editor; University of Oxford, United Kingdom

Jochen Braun, Reviewer; Otto-von-Guericke-University Magdeburg, Germany

Andrew J Parker, Reviewer; University of Oxford, United Kingdom

Klaus Wimmer, Reviewer; Centre de Recerca Matemàtica, Spain
In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.
Acceptance summary:
Perceptual decisions rely on neural activity of sensory neurons. This work presents new model-based analytical tools to understand the relationship between sensory stimulus, sensory neural activity and perceptual choice. Dependencies between neural and behavioural observables in studies of decision making can be quite complex and non-intuitive, as different observables depend in different ways on unobserved hidden states (e.g., decision variable, decision criterion). This paper derives these dependencies for standard assumptions (Gaussian variability) and predicts how dependencies will change under more realistic assumptions (gain modulation). This is a thoughtful reworking of previously published data, making the suggestion that there is a functional relationship between ongoing neural activity and behavioral decisions. This new analysis is, however, still limited by the available data. Ultimately, this paper suggests new avenues that should be explored by neuroscientists when modelling perceptual decisions.
Decision letter after peer review:
Thank you very much for submitting your article "Stimulus-dependent relationships between behavioral choice and sensory neural responses" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Joshua Gold as the Senior Editor. The following individuals involved in the review of your submission have agreed to reveal their identity: Jochen Braun (Reviewer #1); Andrew J Parker (Reviewer #3).
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
As the editors have judged that your manuscript is of interest, but, as described below, additional experiments/analyses are required before it is published, we would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). First, because many researchers have temporarily lost access to their labs, we will give authors as much time as they need to submit revised manuscripts. We are also offering, if you choose, to post the manuscript to bioRxiv (if it is not already there) along with this decision letter and a formal designation that the manuscript is "in revision at eLife". Please let us know if you would like to pursue this option. (If your work is more suitable for medRxiv, you will need to post the preprint yourself, as the mechanisms for us to do so are still in development.)
Summary:
The manuscript "Stimulus-dependent relationships between behavioral choice and sensory neural responses" by Haefner and colleagues addresses interpretations of choice probability, which is the capacity of sensory neurons to predict the decisions made by experimental subjects in sensory decision tasks. Variability in the responses of individual neurons is predictive of the subject's choice. This paper further develops previous theoretical findings concerning this "choice probability". At the centre of the new modelling approach is the important relationship between these variable responses and the strength of the stimulus-related component of neuronal firing, and how this can be formalised in an algorithm.
Essential revisions:
This manuscript further develops previous theoretical findings concerning "choice probability", the relationship of the trial-to-trial variation of firing rates of a (sensory) neuron and the behavioral choice of a subject in perceptual decision making experiments. CPs and their interpretation have a long history, including the debate about their origin (do CPs reflect a causal effect of sensory neural variability on the decision, or do they arise from top-down feedback signals?). Here, the authors identify an additional factor by showing that CP depends on the "choice rate" (the fraction of left vs. right choices). Since this choice rate depends on stimulus strength, CP itself will depend on stimulus strength. They also show experimental evidence for this effect (in the classical Britten et al. dataset) that has been overlooked until now, and present methods for refined analyses linking the responses of sensory neurons to choices.
The reviewers raised a number of central concerns about the relationship between the model and the available empirical data. They agree that the relationship between pCR and CP must be firmly established as a general experimental fact, or reasons must be given why it is apparent in some conditions and not others. Also, the theoretical interpretation of the relationship between pCR and CP needs to deliver accurate predictions of the experimental results and to demonstrate consistency with theoretical models of how other factors affect CP (interneuronal correlation and stimulus sensitivity being the primary variables of concern here).
1. The relationship between pCR and CP appears to be more at the core of this work than the related stimulus strength vs. CP, but there is a quantitative mismatch between model prediction and empirically observed data, especially in Figure 3a and Figure 4b. Interpretation of Figure 3a: our initial concern when reading the paper was that the predicted effect of choice rate on CP (dotted line in Figure 3a) is so small that there will not be enough statistical power to detect it in the available experimental data. Moreover, the analysis is complicated by the fact that the effect of h(p_{CR}) is partly masked by the gain modulation effect (Figure 4b). However, for the n=48 neurons (Figure 4b) that show the predicted U-shape of CP vs. p_{CR}, the effect is actually much larger than the theoretical prediction (e.g. for p_{CR} ~ 0.1 the CP is close to 0.65 when in theory it should be ~0.58). What could be the reason for this quantitative mismatch?
We would like to see in this regard:
a. A clear discussion situating the results within the available empirical literature on CP, especially with regard to quantitative effect sizes in Britten et al. 1996 and Dodd et al. 2001.
We agree that in some previous papers CP is computed from z-scored spike counts across different stimulus conditions (and thus different choice rate values). However, there are several other papers (including Britten et al. 1996, Nienborg and Cumming 2009, Wimmer et al. 2015, Katz et al. 2016) that compute CP from "zero coherence trials" with choice rate ~ 0.5 (or the equivalent for binocular disparity in V5/MT: Dodd et al. 2001; Krug et al. 2004; Wasmuht et al. 2019). These papers should be mentioned in order to avoid the impression that all previous work suffers from a (slightly) incorrect way of computing CP.
b. Reanalysis of the main effect in different subsets of cells. We are concerned that modelling the group for which CP < 0.5 includes a lot of noise, because the members of this group are almost always non-significant, as shown in Dodd et al. and in Britten et al. (see their Figure 5). If you look at Dodd et al., there is no sign of this relationship between pCR and CP. Moreover, in Dodd et al. there are very few 'wrong way' choice probabilities: so few, in fact, that they are within the statistical bounds of repeated measures. So the procedure of separating the CPs into >0.5 and <0.5 is not likely to alter the result at all from the published figure in Dodd et al. It might be helpful to recalculate the results for Figure 3A including only those neurons that show an individually significant CP > 0.5.
c. A simulation of the factors that affect expected effect size. The simulation should show what size of the effect is expected given the limited amount of noisy data (number of neurons, trials, etc.). This would allow you to determine whether the experimental result lies with the confidence bounds of the theoretical prediction.
There is apparently no such relationship between CP and p_{CR} in the Dodd et al. data (but it could perhaps show up when selecting and binning the neurons as described in the manuscript, line 288). In any case, a possible explanation for the discrepancy could be that the expected relationship is so small that it cannot reliably be detected from ~100 neurons. We also think this issue can be clarified by simulating the size of the effect for limited data.
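As a rough indication of what such a simulation would show for single-neuron data, the sketch below (assuming, purely for illustration, a true CP of 0.58 and 25 trials per choice) estimates the sampling spread of a per-neuron CP estimate:

```python
import random
from statistics import NormalDist

random.seed(7)

true_cp, n_trials, n_reps = 0.58, 25, 2000  # illustrative assumptions

# A Gaussian 'neuron' whose choice-1 responses are shifted by mu has
# CP = Phi(mu / sqrt(2)); invert to get the mu that yields true_cp.
mu = NormalDist().inv_cdf(true_cp) * 2 ** 0.5

def cp_estimate():
    r0 = [random.gauss(0.0, 1.0) for _ in range(n_trials)]
    r1 = [random.gauss(mu, 1.0) for _ in range(n_trials)]
    # Mann-Whitney AUC by direct pair counting (fine for small n).
    wins = sum(1 for a in r1 for b in r0 if a > b)
    return wins / (n_trials * n_trials)

ests = [cp_estimate() for _ in range(n_reps)]
mean = sum(ests) / n_reps
sd = (sum((e - mean) ** 2 for e in ests) / n_reps) ** 0.5
print(round(mean, 3), round(sd, 3))
```

With these assumed numbers the per-neuron estimate scatters with a standard deviation of roughly 0.08, so a predicted shift of ~0.01 in CP is invisible at the single-neuron level and can only emerge after averaging over many cells; a full version of this exercise would vary the neuron counts, trial counts and true effect size.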
2. Clarifications are further needed in terms of how the fluctuations in the stimulus-related gain of neuronal firing are responsible for the emergence of stronger CPs at higher performance levels. As we understand it, this is required both in terms of the formal implementation in the model and in terms of the proposed neurobiological implementation.
The authors present a detailed analysis of abstract decision-making models. They relate noisy neuronal responses r_i, a hypothetical covert decision variable d and decision threshold θ, and an overt behavioural choice D. The authors assume a bivariate Gaussian association between r_i and d, with a certain correlation coefficient, and from this minimal basis derive exact or approximate expressions for the choice rate pCR, choice probability CP_i, choice correlation CC_i between r_i and d, and choice-triggered averages of r_i, CTA_i, and of d, CTA_d. The treatment extends previous work in that it covers the entire range of choice rates pCR, not only the special case pCR = 0.5.
A key result is that choice probability CP_i changes multiplicatively as a function of pCR, increasing as the decision grows more consistent in either direction, with the baseline level set by the choice correlation CC_i. An important implication is that the dependence of CP_i on pCR, which is shared by all cells and expected to be U-shaped, can be averaged over cells with different choice correlations CC_i, provided that cells with positive and negative choice correlations are distinguished.
To test these predictions, the authors reanalyze MT recordings from Britten et al. (1996) and were able to confirm a U-shaped dependence of average CP on pCR, which was statistically significant for cells with positive CC_i. However, contrary to predictions, the U-shaped dependence was asymmetric and more pronounced when the more frequent choice (pCR > 0.5) was consistent with the preferred stimulus of the cell (positively correlated cells, CC_i > 0). A cluster analysis of the empirical individual-cell dependencies of CP_i on pCR revealed, in addition to the predicted U-shaped dependencies, the presence of unexpected monotonically increasing dependencies.
To clarify the origin of these unexpected dependencies, the authors consider the effect of trial-to-trial response gain fluctuations (Goris et al., 2014) and, with the help of the model of these authors, confirm that gain fluctuations account for 62% of the observed trial-to-trial variance. The authors point out that gain fluctuations add a stimulus-dependent component to the noise covariance of neural responses, which is inherited by the choice correlation CC_i and by the choice probability CP_i. In Methods and Supplementary Text S4, the authors derive that this stimulus-dependent component can itself depend asymmetrically on pCR, to an extent that is specific to each cell (i.e., to the specific coupling between response and gain). Unfortunately, the authors do not offer an intuitive argument about the origin of this asymmetry.
Please explain how gain fluctuations lead to rising dependencies on pCR in some cells. Which cells are these? Do they combine strong stimulus modulation with weak choice correlation? Without such an explanation, the entire cluster analysis appears incomplete and ultimately pointless.
3. Following on from this, there is also the question of how the stronger CP with increasing pCR could be implemented in actual neural terms. Would, for example, the pool size of neurons contributing to the decision change? With regard to the model, how does the pCR × CP effect intersect with the experimentally much stronger interaction between neurometric sensitivity and the size of the CP?
The authors analyse this from the classical Britten et al. [1] data set, producing a framework for analysis in Figure 2 and the outcome of the analysis in Figures 3 and 4 of the paper. This type of analysis is not new. The specific form of the plots in Figures 2, 3 and 4 appears in Figure 6 of Dodd et al. [2], while Figure 3 in Britten et al. delivers almost the same information.
In regard to the clustering analysis, it seems that the big driver for the formation of clusters is the division between CP > 0.5 and CP < 0.5. For values of CP < 0.5, there is not really a functional account of these, as they do not relate to the tuning of the cells for motion. The lack of functional meaning is highlighted by the fact that cluster 1 in Figure 4a (blue: CP < 0.5) is statistically non-significant. Not unrelated to this lack of significance is the fact that Figure 13 of Britten et al. and Figure 6 of Parker et al. [3] show that CPs are stronger for neurons that are more sensitive in the visual task. The usual interpretations of this are either the intuitive claim that more sensitive neurons are more tightly involved in the task and therefore have higher CPs or, more subtly, that neurons with weaker sensitivity have lower degrees of interneuronal correlation.
The rest of the analysis in this paper advances the idea that fluctuations in the stimulus-related gain of neuronal firing are responsible for the emergence of stronger CPs at higher performance levels. The authors write on ll. 243–244 that "Briefly, the contribution of gain modulations to the covariance of the responses increases with neuronal firing rates, which in turn are stimulus-modulated as determined by tuning functions." However, the lead author has already published a nice theoretical summary [4] showing that CP is related to the level of interneuronal correlations in the pool. Indeed, that analysis showed that under some conditions (a large pool, correlated noise, and at least one or two members of the pool contributing significantly to the perceptual readout) one might take CP as a substitute indicator for the interneuronal correlation of the decision pool. In the light of the earlier analysis, the present paper does not address the very relevant question of changes in the membership of the neuronal pool with stimulus strength. In effect, if readout weights change with stimulus strength, then CP will be expected to change. Equally, if pool membership changes, then interneuronal correlation may be expected to change. We did not see anything in the analysis here that definitively ties down the change in CP to stimulus-related gain changes.
ll. 242–243: "We derived the specific form of CC_i(p_CR) predicted from gain modulations in the threshold decision model to explain additional CP stimulus dependencies beyond the symmetric modulation by h(p_CR). Briefly, the contribution of gain modulations to the covariance of the responses increases with neuronal firing rates, which in turn are stimulus-modulated as determined by tuning functions. This leads to an asymmetric component of CP(p_CR), with higher CPs for p_CR values associated with stimuli preferred by the cell. Furthermore, while stronger gain fluctuations increase this asymmetric stimulus dependence, they also decrease the magnitude of the cell-specific CP because they add variability to the responses unrelated to the choice."
As neuronal sensitivity has normally been measured, the change in response due to a change in the stimulus is assessed relative to the variability of neuronal firing. The description above implies that, in the Haefner models, stimulus modulations translate into response changes on top of which random gain modulations are applied. At first sight, there does not seem to be any room in the model for low-firing-rate, low-variability neurons to contribute to CP, even though such neurons may have high neurometric sensitivity. On the other hand, it may well be that the new Haefner model all shakes down to give the established experimental result that CP is linked to neuronal sensitivity. If that is correct, then the paper is currently rather obscure on this point, and it would be useful for the paper to lay this out clearly.
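The mechanism quoted above (gain contributions to response covariance growing with firing rate) can be illustrated with a minimal doubly stochastic model in the spirit of Goris et al., 2014; the gain variance, rates and trial counts below are assumptions for illustration only:

```python
import math
import random

random.seed(3)

sigma_g2 = 0.3       # variance of the multiplicative gain (assumed)
n_trials = 100_000   # trials per condition (assumed)

def poisson(lam):
    """Knuth's Poisson sampler (adequate for the small rates used here)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

def mean_and_var(rate):
    """Doubly stochastic spike counts: gamma gain (mean 1) times a Poisson."""
    shape = 1.0 / sigma_g2   # gamma with mean 1 and variance sigma_g2
    counts = [poisson(random.gammavariate(shape, sigma_g2) * rate)
              for _ in range(n_trials)]
    m = sum(counts) / n_trials
    v = sum((c - m) ** 2 for c in counts) / n_trials
    return m, v

# Predicted variance under this model: rate + sigma_g2 * rate**2,
# so the gain contribution grows quadratically with the firing rate.
m_lo, v_lo = mean_and_var(2.0)
m_hi, v_hi = mean_and_var(10.0)
print(m_lo, v_lo)   # variance near 2 + 0.3*4 = 3.2
print(m_hi, v_hi)   # variance near 10 + 0.3*100 = 40
```

The variance grows approximately as $f+\sigma_G^2 f^2$, so the gain-driven component dominates at high rates; this is the sense in which stimulus modulation of firing rates feeds into stimulus-dependent covariance, separately from questions about pool membership or readout weights.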
4. Editing for accessibility and readability
Unfortunately, our genuine enthusiasm for the manuscript is somewhat dampened by its length, by its opacity in places, and by the high degree of topic familiarity that it presupposes. For example, the discussion of grand CP on page 7 and in section S2 of the supplementary material, is difficult to follow even for someone in the field. Accordingly, if readability could be improved, the usefulness would be even greater.
We appreciate the theoretical advance of the paper. It is very useful to have equations that clarify the relationship between previously used measures (CP, CTA, CC) and the effect of informative stimuli. Overall, by the very nature of the topic, the paper is rather technical. Our impression was that the paper is not easy to read, in particular when we think about a broader readership that is not familiar with details of the theory of CP and the typical interpretation of CP measurements. We understand that this is not an easy job, because the interpretation of CP is complicated (bottom-up vs. top-down contributions, relationship to spike count correlations, etc.), but it would help to revise the Results section and add some more detail and (where possible) intuitive interpretation of the theoretical and experimental results to guide the reader. As an example, it turns out that gain fluctuations are an important factor in explaining CP vs. p_{CR}, but this is only treated very briefly in the Results (the gain fluctuation model is not explained, no figure shows how well the model explains the variability observed in the data, etc.).
[Editors' note: further revisions were suggested prior to acceptance, as described below.]
Thank you very much for submitting your article "Stimulus-dependent relationships between behavioral choice and sensory neural responses" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Joshua Gold as the Senior Editor. The following individuals involved in the review of your submission have agreed to reveal their identity: Jochen Braun (Reviewer #1); Klaus Wimmer (Reviewer #2).
The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this to help you prepare a final revised submission.
All the reviewers agreed on the significant theoretical advance of your manuscript. However, there were some concerns about the generality of the proposed model given that the data supporting it have limitations. Therefore, we propose a small but essential revision that makes it clear to the reader that the empirical support for the new model requires further explorations. Additionally, we have added the reviewers' further suggestions for you as recommendations.
Essential Revision:
1. Overall, the empirical part looks a little bit like an attempt to drag meaning out of a weak relationship, at least as measured experimentally in the single, but classical data set of Britten et al. The more detailed explanation of the clustering analysis reveals that the main effect of interest is driven by one critical data point from the original Britten et al. data. This is the value of CP for pCR = 0.85, in the data set for which CP > 0.5. Much of the paper depends on just how confident we are about a difference in CP that rises from 0.57 for pCR=0.5 to 0.64 for pCR=0.85. The SEMs calculated for individual neurons in Figure 3d (lower right panel) are not encouraging in this regard.
We think that further empirical tests are needed to obtain clarity about significance about key elements of the proposed model but this is beyond the scope of the current manuscript. It should be flagged to the reader, though.
In order to clarify to the reader that the empirical part provides only an initial analysis – as was also explicitly mentioned by the authors in their response to the previous reviews – we ask the authors to insert something close to the following statement in the abstract and in the first paragraph of the results:
This paper provides preliminary empirical evidence for the promise of studying stimulus dependencies of choicerelated signals, which requires further exploration in a wider data set.
2. The revisions are acceptable in that the presentation is indeed streamlined in many respects (thank you!). However, one large reservation remains: the main findings are NOT explained in an intuitive manner.
Basically, when we distinguish neural responses associated with choice D=1 from those associated with choice D=0, we obtain two more or less distinct distributions, P(R|D=1) and P(R|D=0). When these distributions are well separated, the choice probability CP (the average probability that a sample from R|D=1 is larger than a sample from R|D=0) is somewhat larger than when the distributions overlap more.
However, both qualitative and quantitative aspects of the relation between choice probability CP and choice rate pCR are highly dependent on assumptions. For example, the symmetric U-shape depends on Gaussian variability of responses. When responses are assumed to be Poisson-variable, for example, the symmetric U-shape is replaced by a monotonic decline (Matlab code available upon request).
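The Gaussian case is straightforward to reproduce numerically. The following sketch (with an assumed response-to-decision-variable correlation, assumed criterion values, and Monte Carlo trial counts chosen for illustration) computes CP as the Mann-Whitney AUC at two criterion settings:

```python
import math
import random

random.seed(1)

rho = 0.4       # correlation between response r and decision variable d (assumed)
N = 200_000     # trials per criterion setting (assumed)

def cp_at_criterion(theta):
    """CP of a Gaussian 'neuron' when the choice is D = 1 iff d > theta."""
    r1, r0 = [], []
    for _ in range(N):
        d = random.gauss(0.0, 1.0)
        r = rho * d + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
        (r1 if d > theta else r0).append(r)
    # CP = P(response on a choice-1 trial exceeds one on a choice-0 trial),
    # i.e. the Mann-Whitney AUC, computed from the choice-1 rank sum.
    ranked = sorted([(v, 1) for v in r1] + [(v, 0) for v in r0])
    rank1 = sum(i + 1 for i, (_, k) in enumerate(ranked) if k == 1)
    n1, n0 = len(r1), len(r0)
    return (rank1 - n1 * (n1 + 1) / 2) / (n1 * n0)

cp_mid = cp_at_criterion(0.0)       # p_CR = 0.5
cp_side = cp_at_criterion(-1.0364)  # p_CR about 0.85
print(cp_mid, cp_side)
```

Under these Gaussian assumptions, CP at p_CR ≈ 0.85 comes out larger than at p_CR = 0.5, giving the symmetric U-shape; as the reviewer notes, other noise families can change both the size and the shape of this dependence.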
Gain modulation changes the picture in exactly this way. After gain modulation, response variability is no longer Gaussian and, additionally, the set of responses with the larger average is more affected than the set with the smaller average. In other words, gain modulation alters the shape of the response distributions and, in consequence, also the shape of CP = f(pCR).
In spite of this reservation, I have to confess that this paper has been quite useful to me, because it forced me to think through these issues.
3. With regard to the dependence of the results on one critical data point in the Britten et al. data set (CP for pCR = 0.85) and the question of significance in other data sets, various suggestions for further data worth testing were made.
We wonder, for instance, whether Geoff Ghose's lab (https://pubmed.ncbi.nlm.nih.gov/30067123/ ; https://pubmed.ncbi.nlm.nih.gov/19109454/ ) is a better source of data to support this exercise. In particular, his recordings could be searched for signs of noisy gain modulations, which is the mechanism that lies at the core of this analysis. Other suggestions were, as mentioned before, the Dodd et al. 2001 / Wasmuht et al. 2019 data.
But we agree this is beyond the current scope of the manuscript.
4. Line 10: "activity-choice covariations are traditionally quantified with a single measure of choice probability (CP), without characterizing their changes across stimulus levels". I appreciate the need to demonstrate novelty in the paper, but this statement does a disservice to earlier researchers who recognized the possibility of a stimulus-choice interaction but did not find any evidence of such an interaction. The earlier papers are clear on this point. So the novelty here is not a failure of earlier researchers to think through their results carefully; the novelty is an apparent improvement in the sensitivity of the analysis methods. This may sound less exciting but is still important, if correct.
5. Line 65 "if the decision process uses a decision threshold"
6. Line 96 "choice probability, CP, defined as the probability that a random sample from all trials"; the reader only learns on line 99 what this is a random sample of. It would read better as "a random sample of neural activity r" and then explain in a separate sentence what r might be in any particular experimental situation.
7. Line 147: "threshold value 𝜃" the authors insist on referring to this parameter 𝜃 as a threshold value. This usage will inflame debate as to whether there is a "high threshold assumption" baked into this model, where "high threshold" here means a classical high threshold in visual detection models, as opposed to a signal detection model. As classical high threshold theory is now rejected for detection models, I think it would be better here and throughout to refer to 𝜃 as a criterion value, which is what it is and better aligns with the language of signal detection theory.
8. Lines 129–169: this development is still very difficult to follow and labors over what are some fairly basic points. It would be better rewritten with better structure at about half the length. For example, compare lines 111–112 and 134–135, which could be combined into a single point that is made once at the right stage in the argument.
9. Lines 170–209: the writing continues in a stilted manner with multiple cross-references to other material.
10. Lines 173–4 "We here will refer to CP stimulus dependencies and CP(pCR) patterns interchangeably". We rather fear that this is going to cause a lot of readers to stumble. I can see that these two are interchangeable from the theoretical perspective of these authors, but many will think that behavior pCR is dependent on a number of factors other than the stimulus. In the field of monkey neurophysiology, pCR will depend on reward, attention, arousal and so forth, in a way that the stimulus does not. What we think the authors actually mean is something like "Within the structure of our model, there is a fixed relationship between the dependence of CP on the stimulus strength and the dependence of CP on the choice rate pCR, for each threshold."
11. Lines 263–265 "we show how to extend Generalized Linear Models (GLMs), a popular model to characterize the factors modulating neural activity, to include stimulus-choice interaction terms". As a statistical procedure, this is fairly routine stuff and could be abbreviated considerably.
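Indeed, as a sketch of how routine the procedure is, a Poisson GLM with a stimulus-choice interaction can be fitted in a few lines. The data below are made up, and the hand-rolled IRLS fit stands in for any standard GLM package:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
s = rng.uniform(-1, 1, n)                        # signed stimulus level (hypothetical)
c = rng.integers(0, 2, n).astype(float)          # behavioral choice (0/1)
X = np.column_stack([np.ones(n), s, c, s * c])   # design with stimulus-choice interaction
beta_true = np.array([1.0, 0.8, 0.4, -0.5])
y = rng.poisson(np.exp(X @ beta_true))           # simulated spike counts

def fit_poisson_glm(X, y, n_iter=50):
    """Poisson GLM (log link) fitted by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)                    # model rate per trial
        # Newton step: (X' W X)^{-1} X' (y - mu), with W = diag(mu)
        beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return beta

beta_hat = fit_poisson_glm(X, y)
# beta_hat[3] estimates the stimulus-choice interaction coefficient
```

The interaction coefficient is the quantity of interest here: a nonzero estimate indicates that the choice-related modulation of firing changes with stimulus level.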
12. Lines 376–377 "Specifying the existence of two clusters, we naturally recovered the distinction between cells with CP higher or lower than 0.5". We struggled to find a clear and unambiguous summary of the statistics associated with this. We can see that a consistent pattern emerges when the cluster number is increased from 2 to 3, but looking at the distributions in Figure 4c and Figure 4d does not appear to reveal clusters. The significance values in Figures 4a and 4b relate to the significance of the modulation effect for each cluster, not the significance of the cluster separations. The methods section and supplementary analysis section cross-reference each other but neither seems to answer the simple question of whether 1 versus 2 clusters is statistically justified, let alone the step from 2 to 3.
13. Lines 612–638: This discussion suggests that there may be within-cell changes in CP as a function of pCR that may have been hidden by pooling across populations of cells. But in the end, this paper has problems in detecting real changes in this relationship at the level of recordings from single cells. The small SEMs that attach to the data in Figure 3a do indeed reflect pooling across a population sample; they do not relate to changes in individual neurons. The panel in Figure 3d shows the true picture for individual cells. So this discussion begs the question as to why the population analyses in Dodd et al. do not show the predicted relationships. The point made by the authors about within-cell changes does not appear to be material in this regard.
https://doi.org/10.7554/eLife.54858.sa1

Author response
Essential revisions:
This manuscript further develops previous theoretical findings concerning "choice probability", the relationship between the trial-to-trial variation of firing rates of a (sensory) neuron and the behavioral choice of a subject in perceptual decision-making experiments. CPs and their interpretation have a long history, including the debate about their origin (do CPs reflect a causal effect of sensory neural variability on the decision, or do they arise from top-down feedback signals?). Here, the authors identify an additional factor by showing that CP depends on the "choice rate" (the fraction of left vs. right choices). Since this choice rate depends on stimulus strength, CP itself will depend on stimulus strength. They also show experimental evidence for this effect (in the classical Britten et al. dataset) that has been overlooked until now, and present methods for refined analysis for linking the responses of sensory neurons to choices.
The reviewers raised a number of central concerns about the relationship between the model and the available empirical data. They agree that the relationship between pCR and CP must be firmly established as a general experimental fact, or reasons given why it is apparent in some conditions and not others. Also, the theoretical interpretation of the relationship between pCR and CP needs to deliver accurate predictions of the experimental results and also to demonstrate consistency with theoretical models of how other factors affect CP (interneuronal correlation and stimulus sensitivity, being the primary variables of concern here).
1. The relationship between pCR and CP appears to be more at the core of this work than the related relationship between stimulus strength and CP, but there is a quantitative mismatch between model prediction and empirically observed data, especially in Figure 3a and Figure 4b. Interpretation of Figure 3a. Our initial concern when reading the paper was that the predicted effect of choice rate on CP (dotted line in Figure 3a) is so small that there will not be enough statistical power to detect it in the available experimental data. Moreover, the analysis is complicated by the fact that the effect of h(pCR) is partly masked by the gain modulation effect (Figure 4b). However, for the n=48 neurons (Figure 4b) that show the predicted U-shape of CP vs. pCR, the effect is actually much larger than the theoretical prediction (e.g. for pCR ≈ 0.1 the CP is close to 0.65 when in theory it should be ~0.58). What could be the reason for this quantitative mismatch?
We now better clarify that the CP(p_{CR}) dependency is the result of two factors: h(p_{CR}), shared by the entire population, and CC(p_{CR}), which is cell-specific. Our model predicts the former and allows us to infer the latter. These revisions are reported in lines 160–164 and 170–174.
We thank the Reviewers for the opportunity to clarify these important points. As we now discuss in lines 170–174, we refer to a CP(s) stimulus dependence or CP(p_{CR}) dependence interchangeably, since most often, and in particular in our case, p_{CR} values change due to changes in stimulus levels, and hence the two quantities are intertwined and monotonically related. As discussed in lines 160–164, according to the model in Equation 7, a dependence of the CP on the p_{CR}, and by extension on the stimulus level, may appear in two ways. First, through the modulation factor h(p_{CR}), which arises due to the thresholding operation when converting an internal, continuous estimate of the stimulus into a binary decision. Independent of that, it can appear in a second way, due to a stimulus-dependent choice correlation CC(s), for instance as the result of stimulus-dependent cross-neuronal correlations. Importantly, the U-shape due to the factor h(p_{CR}) is common to all cells, while the CP stimulus dependencies induced by CC are cell-specific (as motivated in lines 164–167 and lines 672–680). This makes both effects dissociable empirically – at least in principle.
We agree with the Reviewer that the modulation induced by h(p_{CR}) is small, and that it is therefore challenging to detect in existing experimental data. This is why we developed our refined methods, which allow examining within-cell CP modulations by the p_{CR}. As we argue in lines 195–202, for each individual cell i the CP_{i}(p_{CR}) profile jointly reflects h(p_{CR}) and any potential dependence CC_{i}(p_{CR}). The average across a population of neurons will therefore average out cell-specific CP_{i}(p_{CR}) dependencies associated with CC_{i}(p_{CR}). This will make the U-shape dependency due to h(p_{CR}), which is shared by the entire neural population and does not average away, more visible.
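This averaging argument can be illustrated with a toy simulation (an illustrative sketch of ours, not an analysis from the manuscript; the U-shape, the CC values, the trend magnitudes, and the noise levels are all made up):

```python
import numpy as np

rng = np.random.default_rng(2)
p_cr = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
h = 1.0 + 1.5 * (p_cr - 0.5) ** 2             # shared U-shaped modulation (illustrative)
n_cells = 300
cc = rng.uniform(0.05, 0.2, n_cells)          # cell-specific choice correlations
trend = rng.normal(0.0, 0.1, n_cells)         # cell-specific CC(p_CR) trends, zero-mean
# CP_i(p_CR) = 0.5 + h(p_CR) * CC_i + cell-specific trend + estimation noise
cp = (0.5 + h * cc[:, None]
      + trend[:, None] * (p_cr - 0.5)
      + rng.normal(0, 0.02, (n_cells, 5)))
avg = cp.mean(axis=0)                         # population-average profile
asymmetry = abs(avg[-1] - avg[0])             # residue of the cancelled linear trends
u_depth = 0.5 * (avg[0] + avg[-1]) - avg[2]   # surviving shared U-shape
```

Individual profiles are dominated by their idiosyncratic trends, yet in the average the zero-mean cell-specific components largely cancel, leaving the shared U-shape clearly visible (u_depth well above the residual asymmetry).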
Empirically, we found that the average <CP(p_{CR})> in Figure 3 has an asymmetric dependency that goes beyond that predicted by the h(p_{CR}) factor, which motivated us to use a cluster analysis to further separate h(p_{CR}) from cell-specific contributions through CC(p_{CR}) and to shed further light on general patterns of CC_{i}(p_{CR}) dependencies in the data.
Still, the cluster analysis cannot by itself separate h(p_{CR}) from other CC(p_{CR}) patterns that have the same shape (that is only possible through changes in the experimental design that dissociate stimulus strength from p_{CR}). It only separates the most predominant profiles of CP(p_{CR}) dependencies across cells. This means that the symmetric dependency of cluster 2 in Figure 4B may result not only from h(p_{CR}), but also from other symmetric CC(p_{CR}) patterns shared by the cells. Its symmetric shape is compatible with the symmetric shape of h(p_{CR}) predicted by the model. However, as pointed out by the Reviewers, there is a quantitative mismatch in its magnitude. We now address this mismatch in the revised text (lines 424–431). First, the mismatch can be due to additional symmetric dependencies through CC(p_{CR}) patterns that are also shared across the cells. Second, the quantitative mismatch can be due to the abstraction of the decision-making dynamics necessary to derive the analytical model. The analytical framework of the model, as developed by the work of Shadlen et al., 1996, Haefner et al., 2013, and others, constitutes an abstraction of any biological implementation of the decision process. The quantitative deviations could be caused by effects neglected by the model, such as a dynamic feedback contribution amplifying the U-shape dependence. In this regard, we consider that the qualitative prediction of the U-shape of h(p_{CR}) is itself remarkable, given the degree of abstraction of the CP model.
Furthermore, our empirical characterization of existing patterns of CP-stimulus dependencies and our theoretical analyses of their contributing factors and predicted shapes are themselves original contributions, because these dependencies have mostly gone unnoticed in previous analyses or have been ignored because no interpretation was available. Although motivated by the challenge of testing the prediction of the U-shape modulation, our new refined methods to examine within-cell CP(p_{CR}) profiles are agnostic about their origin and, as we have shown, provide a powerful tool to characterize this dimension of the cells' choice-related neural responses, which has not been thoroughly explored yet.
Importantly, despite focusing on the two main predominant patterns of the cluster analysis, we do not claim that those are the only patterns present. On the contrary, as we discuss in lines 672–683, previous theoretical and experimental work supports the presence of a richer structure of CP(p_{CR}) dependencies, which we were not able to characterize because of the limitations of the data in terms of number of trials and number of cells. However, with rapidly improving recording technology allowing for chronic recordings (i.e. many more trials than currently) in addition to recording many neurons concurrently, we expect future studies to map this structure in much greater detail, resulting in insights into the functional role and place in the microcircuit of the different clusters of neurons exposed by our analysis.
We would like to see in this regard:
a. A clear discussion of, and situating of, the results within the available empirical literature on CP, especially with regard to the quantitative effect sizes in Britten et al. 1996 and Dodd et al. 2001.
We agree that in some previous papers CP is computed from z-scored spike counts across different stimulus conditions (and thus different choice rate values). However, there are several other papers (including Britten et al. 1996, Nienborg and Cumming 2009, Wimmer et al. 2015, Katz et al. 2016) that compute CP from "zero coherence trials" with choice rate ~ 0.5 (or the equivalent for binocular disparity in V5/MT: Dodd et al. 2001; Krug et al. 2004; Wasmuht et al. 2019). These papers should be mentioned in order to avoid the impression that all previous work suffers from a (slightly) incorrect way of computing CP.
We now discuss prior studies more accurately, by better delineating their respective contributions and by distinguishing between those reporting CP(s=0) and those computing a Grand CP. We clarify that our primary goal is to advance and promote the analysis of the CP dependence on p_{CR}/stimulus level based on a deeper understanding of the nature of decision-related signals (both choice and detect probabilities) in sensory neurons. Both Grand CPs (computed in various ways) and CP(s=0) are scalar summaries of the richer and more informative underlying function that considers the CP changes across stimulus values. We clarified the importance of isolating within-cell modulations of the CP by p_{CR} from the across-cell heterogeneity of CP magnitudes, which is key for studying the within-cell patterns of CP as a function of p_{CR}. These revisions are reported in lines 37–44 and 57–59.
We thank the reviewer for pointing out the necessity of further clarifying how we propose to study CP(p_{CR}) patterns building on previous studies. We now cite this previous work in lines 37–44, better recognizing the importance of previous contributions and better specifying that in many cases CPs have been calculated only from non-informative stimuli, or alternatively weighted averages or corrected z-scoring are used to combine data across stimulus levels. We emphasize the novelty of our consideration of CP stimulus dependencies with respect to these alternatives (lines 57–59). We also emphasize that our proposal to construct within-cell CP(p_{CR}) profiles is not meant to serve only, or even primarily, as a way to improve the estimation of a single grand CP value per cell. The calculation of a single CP from trials with non-informative stimuli is meant to study choice-driven neural responses isolated from stimulus-driven signals. Conversely, as discussed in lines 57–59 and 683–690, our aim is to (a) theoretically motivate the need to study the shape of the CP(p_{CR}) patterns, and (b) demonstrate our power to do so with refined methods using a classic and public dataset as an example.
We have now improved the description of how our refined analysis differs from the previous analysis in Britten et al. 1996 (lines 616–621). We have also incorporated into this comparison the important work of Dodd et al. 2001 (lines 621–633). In the original submission we explicitly referred to the key differences with respect to the analysis in Figure 3 of Britten et al. only in the Discussion. We have now expanded the description of the key steps of our analysis (lines 271–307) to more clearly indicate why it should improve the characterization of within-cell CP dependencies on p_{CR}. The key difference with respect to Figure 3 of Britten et al. 1996 and Figure 6 of Dodd et al. 2001 is that in our analysis we separate the within-cell modulations of the CP by p_{CR} from the across-cell heterogeneity of the CP magnitudes. Conversely, in Britten et al. 1996 an average of CP values across cells was carried out at each stimulus level without ensuring that the same cells were contributing to each of these averages. Similarly, in a scatter plot of the form presented in Figure 6a of Dodd et al. 2001, the cell identity of each dot is lost, and hence it is not possible to trace the within-cell profile CP(p_{CR}), making it harder to discriminate within-cell modulations from across-cell CP heterogeneity. The construction of CP(p_{CR}) profiles for each cell isolates within-cell modulations. The fact that the modulation h(p_{CR}) is multiplicative, as we indicate in lines 189–190 and 275–277, makes the detection of modulations even more sensitive to the isolation of within-cell comparisons, since the magnitude of any modulation is relative to the magnitude of the CP value obtained with the non-informative stimulus. Furthermore, CP(p_{CR}) profiles also have the advantage that they can subsequently be used with cluster analysis, as we do, but also with other pattern recognition methods to characterize which patterns of CP stimulus dependencies are predominant across cells.
As we explain in lines 297–307, an important step when averaging across cells is that we need to ensure that our average CP profiles, as presented in Figure 3, really correspond to an average of the individual within-cell CP profiles of the cells included in the analysis, and are not the result of different subpopulations of neurons contributing to the average at each dot of the curve. That is, our average should correspond to an average of the within-cell CP modulations, and be isolated from heterogeneity in the CP magnitude across cells. This distinguishes our refined analysis from the type of analysis previously used in Britten et al. 1996 (Figure 3) or Dodd et al. 2001 (Figure 6), in which across-cell CP variability may hide within-cell modulations. We have discussed these differences between these previous works and our new method in detail in lines 616–638. In addition, we have also improved the comparison of our analysis with the one of Britten et al. in the Introduction (lines 67–69). In particular, we corrected a key problem in the old text (old lines 71–74 in the previous submission) that misleadingly suggested that the key feature of our novel method was the separation of cells of opposite choice preferences. We now clarify that the key feature is, more broadly, the characterization of within-cell CP profiles.
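The difference between pooled per-level averages and within-cell profiles can be made concrete with a small simulation (an illustrative sketch with invented numbers, not the actual data or analysis code; the sampling scheme only mimics the fact that different cell subsets can contribute at different stimulus levels):

```python
import numpy as np

rng = np.random.default_rng(3)
p_cr = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
h = 1.0 + 1.5 * (p_cr - 0.5) ** 2            # shared multiplicative U-shape (made up)
n_cells = 500
cc = rng.uniform(0.02, 0.25, n_cells)        # heterogeneous CP magnitudes across cells
cp = 0.5 + h * cc[:, None] + rng.normal(0, 0.01, (n_cells, 5))

# Pooled analysis: a different subset of cells contributes at each p_CR level
# (here, hypothetically, high-CC cells are under-sampled away from p_CR = 0.5).
extreme = np.array([True, True, False, True, True])
p_test = np.where((cc[:, None] > 0.15) & extreme, 0.3, 0.9)
tested = rng.random((n_cells, 5)) < p_test
pooled = np.array([cp[tested[:, j], j].mean() for j in range(5)])
pooled_depth = 0.5 * (pooled[0] + pooled[-1]) - pooled[2]

# Within-cell analysis: keep only cells tested at every level, average their own profiles.
full = tested.all(axis=1)
within = cp[full].mean(axis=0)
within_depth = 0.5 * (within[0] + within[-1]) - within[2]
```

In this toy setting the within-cell average preserves a clear U-shape (within_depth well above zero), whereas the pooled per-level average largely masks it, because the changing cell composition trades off against the modulation.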
b. Reanalysis of the main effect in different subsets of cells. We are concerned that modelling the group for which CP < 0.5 includes a lot of noise, because the members of this group are almost always nonsignificant, as shown in Dodd et al. and in Britten et al. (see their Figure 5). If you look at Dodd et al., there is no sign of this relationship between pCR and CP. Moreover, in Dodd et al. there are very few 'wrong way' choice probabilities: so few, in fact, that they are within the statistical bounds of repeated measures. So the procedure of separating the CPs into >0.5 and <0.5 is not likely to alter the result at all from the published figure in Dodd et al. It might be helpful to recalculate the results for Figure 3A but only including those neurons that show an individually significant CP > 0.5.
We completely agree with these suggestions. We note that cells with CP<0.5 were already excluded from our further cluster-based analysis, and this is better specified in the revision. We also agree that it is useful to better discuss the implications of Dodd's work for our results, and the suggestions that arise from our work on how to extend Dodd's previous work. These issues have been clarified in revision. These revisions are reported in lines 330–343, 297–307, and 616–633.
This is a good suggestion, and we agree that this issue needs clarification. We agree with the Reviewers that "a separation of CPs into >0.5 and <0.5 is not likely to alter the result at all from the published Figure 6 in Dodd et al. 2001". However, as discussed above and now emphasized in lines 297–307 and 616–629, apart from the separation between cells with CPs >0.5 and <0.5, another important component of our refined analysis is the isolation of within-cell CP(p_{CR}) profiles from the across-cell CP heterogeneity. In Figure 3A of Britten et al. 1996 the fact that the average of CPs at each coherence level is calculated without a separation of CPs into >0.5 and <0.5 affects the identification of the U-shape, but this lack of separation is not the only factor hiding the CP(p_{CR}) patterns. In more detail, in the data of Britten et al. 1996 the set of coherence levels used varies across cells, which means that a different subpopulation of cells contributes to each dot of their Figure 3A, so that the analysis of within-cell CP(p_{CR}) profiles is lost. This is the reason why in our analysis we take special care that the same neurons are averaged at each p_{CR} level, even at the cost of excluding a portion of the cells from the analysis because a CP value cannot be calculated for all p_{CR} levels.
In the case of Dodd et al. 2001, all the results of the paper rely on the analysis of CPs from zero-disparity trials, and hence their main conclusions stand and are not affected by our new advances. We make this point clear in our reference to Dodd et al. 2001 (lines 629–633). The point of their work that could be revisited in light of our advances is that, in the specific case of their Figure 6A, the capacity to identify CP(p_{CR}) dependencies may have been affected by the lack of isolation of within-cell CP(p_{CR}) modulations. From our understanding of the experimental procedure of Dodd et al. 2001, the range of disparity levels changes from unit to unit, usually comprising five or seven levels. This means that different cells may contribute more dots in a certain range of the y-axis (% Choice PREF). Moreover, it is not possible to identify the within-cell CP(p_{CR}) profiles from their published scatter plot. Furthermore, in our understanding, Figure 6A includes a dot for each stimulus condition in which the animal made at least one incorrect response. As Dodd et al. indicate, this is reflected in the increasing spread of the values at the top and the bottom of the ordinate axis of Figure 6A. We attenuated this kind of problem in our refined method by constructing p_{CR} bins within which a weighted average of several CPs is calculated (lines 308–318). Because the CP(p_{CR}) profile is a vector constructed using 5 bins, the estimate for each bin is already more reliable than individual estimates for single stimulus levels.
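The binning step can be sketched as follows (hypothetical single-cell numbers; weighting each CP by its trial count is one plausible choice, not necessarily the exact weighting used in the manuscript):

```python
import numpy as np

def binned_cp_profile(p_cr_values, cp_values, n_trials, bin_edges):
    """Collapse one cell's per-stimulus-level CPs into p_CR bins,
    weighting each CP by its number of trials."""
    profile = np.full(len(bin_edges) - 1, np.nan)
    idx = np.digitize(p_cr_values, bin_edges) - 1   # bin index per stimulus level
    for b in range(len(bin_edges) - 1):
        m = idx == b
        if m.any():
            profile[b] = np.average(cp_values[m], weights=n_trials[m])
    return profile

# hypothetical data for one cell: CPs measured at 7 stimulus levels
p_cr = np.array([0.08, 0.2, 0.35, 0.5, 0.66, 0.8, 0.93])
cps = np.array([0.68, 0.64, 0.62, 0.60, 0.63, 0.65, 0.69])
n_tr = np.array([30, 40, 60, 120, 60, 40, 30])
edges = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # 5 bins, as in the profiles
profile = binned_cp_profile(p_cr, cps, n_tr, edges)
```

Each entry of `profile` then averages over more trials than any single stimulus level, which is the reliability gain described above; the resulting 5-element vectors can be averaged across cells or fed to a cluster analysis.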
Altogether, our method was specifically designed to better estimate within-cell CP(p_{CR}) profiles, and this analysis may reveal structure that is difficult to appreciate from the joint scatter plot of CPs from all cells. We now indicate this point also with regard to the discussion of Figure 6 of Dodd et al. 2001 (lines 621–633).
[Speculative note: We think it would be very interesting to collect more data in the experimental paradigm of Dodd et al., ideally with chronic recordings of many neurons at the same time. Comparing the results of our analysis of the CP(p_{CR}) dependency for such a dataset with one collected in a more conventional discrimination task (e.g. motion direction discrimination) might provide deeper insights into the perceptual decision-making mechanisms underlying both, and how they might differ. For instance, one might find that the h(p_{CR}) dependency is absent in the rotating cylinder task, suggesting that the decision in that task – unlike in a motion direction discrimination task – is not mediated by a continuous internal estimate that is being thresholded.]
We also agree with the reviewers that "modelling the group for which CP < 0.5 includes a lot of noise". Indeed, in lines 330–339 we indicate that the fact that we do not find a significant modulation consistent with the factor h(p_{CR}) for the cells with CP < 0.5 can be explained by two factors. One factor is that fewer cells are included in this group, so that the estimated average CP profile is noisier. The other factor is related to the point of the Reviewers, namely the fact that the h(p_{CR}) modulation is multiplicative to CP − 0.5, and hence if few cells in the group with CP < 0.5 have a CP magnitude significantly different from 0.5, then CP − 0.5 = 0 + noise, and a consistent effect of the multiplicative factor h(p_{CR}) cannot be isolated when averaging across cells. We have now further highlighted (lines 339–343) this fact that, as pointed out by the Reviewers, "CP < 0.5 includes a lot of noise", which explains why the prediction of the U-shape modulation could only be corroborated as significant for the cells with CP > 0.5. As a result, the lack of a significant inverted U-shape modulation for the cells with CP < 0.5 does not constitute evidence against our model. Furthermore, note that the cells with CP < 0.5 do not affect the subsequent discrimination between a symmetric and an asymmetric pattern in Figure 4b (cluster 1 is the same in Figures 4a and 4b), since they are already excluded from that analysis, as we now highlight (lines 386–387).
c. A simulation of the factors that affect expected effect size. The simulation should show what size of the effect is expected given the limited amount of noisy data (number of neurons, trials, etc.). This would allow you to determine whether the experimental result lies with the confidence bounds of the theoretical prediction.
There is apparently no such relationship between CP and p_{CR} in the Dodd et al. data (but it could perhaps show up when selecting and binning the neurons as described in the manuscript, line 288). In any case, a possible explanation for the discrepancy could be that the expected relationship is so small that it cannot reliably be detected from ~100 neurons. We also think this issue can be clarified by simulating the size of the effect for limited data.
We now include a power analysis based on simulations that provides a clearer intuition about the expectations for effect size as a function of the number of trials, the number of neurons, and the magnitude of the CPs. These simulations are reported in lines 1238–1276.
We thank the Reviewers for the suggestion to perform simulations that provide additional intuition about the expected statistical power for the detection of the CP modulation h(p_{CR}). We have now implemented simulations of how the detection of a significant modulation h(p_{CR}) depends on the number of trials, the number of neurons, and the magnitude of the CPs (see new section S1.2 in the Supplementary Material). As expected, these simulations show that increasing the number of cells and of trials used to estimate an average CP(p_{CR}) profile increases the statistical power for the detection of a CP modulation. This indicates the utility of our refined analysis of within-cell CP(p_{CR}) dependencies, which can substantially improve the statistical power of the analysis by efficiently combining CP values, both across stimulus levels and across cells. Most importantly, in our method CP values are combined while preserving the characterization of within-cell CP(p_{CR}) profiles.
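A minimal version of such a power analysis could look like the sketch below. The parameter values (modulation depth, per-cell noise level, significance criterion) are illustrative placeholders, not those of section S1.2:

```python
import numpy as np

def detection_power(n_cells, depth=0.02, sem_per_cell=0.05, n_sims=500, seed=4):
    """Fraction of simulated experiments in which a U-shaped modulation of the
    within-cell CP profile is detected by a one-sided test on the
    extreme-vs-center CP contrast. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    detected = 0
    for _ in range(n_sims):
        # per-cell contrast: mean CP at extreme p_CR bins minus CP at the central bin;
        # sem_per_cell stands in for the estimation noise set by the trial count
        contrast = depth + rng.normal(0, sem_per_cell, n_cells)
        t = contrast.mean() / (contrast.std(ddof=1) / np.sqrt(n_cells))
        detected += t > 1.645             # ~ one-sided 5% criterion (normal approx.)
    return detected / n_sims

power_small = detection_power(n_cells=10)
power_large = detection_power(n_cells=100)
```

With these placeholder numbers, ten cells give little chance of detecting a 0.02 modulation, while a hundred cells detect it almost always, which is the qualitative point about combining CP values across cells.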
These simulations also indicate that the p-values obtained for the experimental data, in particular p = 0.0008 for the red curve of Figure 4B, are smaller than predicted from the model. This new analysis complements the comparison of the experimental effect size (red curve) and predicted effect size (dashed black curve) in Figure 3A, and in general the comparison of the experimental results (Figures 3–4) with the analytical effect sizes displayed in Figure 2, which already pointed to the higher magnitude of the experimental effect size. In the revised paper we now address this quantitative mismatch between the predictions of the h(p_{CR}) modulation and the CP(p_{CR}) pattern obtained for the symmetric cluster (red curve) in Figure 4b (lines 424–431).
As already discussed in a previous reply, the quantitative mismatch can be explained either by the presence of further symmetric sources of CP stimulus dependence through CC(p_{CR}), and/or by dynamic feedback amplifying the threshold-induced U-shape dependence. Given the level of abstraction of our CP model, we think it is remarkable that qualitative evidence compatible with h(p_{CR}) is found. Furthermore, independently of the quantitative fit to the prediction of the h(p_{CR}) modulation, it is in itself an achievement of our method that the characterization of within-cell CP(p_{CR}) profiles allows identifying any existing pattern of CP stimulus dependence, since their existence had previously gone unnoticed. This identification is a first step towards inferring the underlying mechanisms producing a characteristic structure of CP(p_{CR}) patterns, as we discuss in lines 672–683.
Finally, we agree with the Reviewers that our analysis applied to the Dodd et al. data may show the h(p_{CR}) dependence (but see the speculative note above).
2. Clarifications are further needed in terms of how the fluctuations in the stimulus-related gain of neuronal firing are responsible for the emergence of stronger CPs at higher performance levels. As we understand it, this is required both in terms of the formal implementation in the model and in terms of the proposed neurobiological implementation.
The authors present a detailed analysis of abstract decision-making models. They relate noisy neuronal responses r_{i}, a hypothetical covert decision variable d and decision threshold θ, and an overt behavioural choice D. The authors assume a bivariate Gaussian association between r_{i} and d, with a certain correlation coefficient, and from this minimal basis derive exact or approximate expressions for choice rate pCR, choice probability CP_{i}, choice correlation CC_{i} between r_{i} and d, and choice-triggered averages of r_{i}, CTA_{i}, and of d, CTA_{d}. The treatment extends previous work in that it covers the entire range of choice rates pCR, not only the special case pCR = 0.5.
A key result is that choice probability CP_{i} changes multiplicatively as a function of pCR, increasing as the decision grows more consistent in either direction, with the baseline level set by choice correlation CC_{i}. An important implication is that the dependence of CP_{i} on pCR, which is shared by all cells and expected to be U-shaped, can be averaged over cells with different choice correlations CC_{i}, provided that cells with positive and negative choice correlations are distinguished.
To test these predictions, the authors reanalyze MT recordings from Britten et al. (1996) and were able to confirm a U-shaped dependence of average CP on pCR, which was statistically significant for cells with positive CC_{i}. However, contrary to predictions, the U-shaped dependence was asymmetric and more pronounced when the more frequent choice (pCR > 0.5) is consistent with the preferred stimulus of the cell (positively correlated cells, CC_{i} > 0). A cluster analysis of empirical individual-cell dependencies of CP_{i} on pCR revealed, in addition to the predicted U-shaped dependencies, the presence of unexpected monotonically increasing dependencies.
To clarify the origin of these unexpected dependencies, the authors consider the effect of trial-to-trial response gain fluctuations (Goris et al., 2014) and, with the help of the model of these authors, confirm that gain fluctuations account for 62% of the observed trial-to-trial variance. The authors point out that gain fluctuations add a stimulus-dependent component to the noise covariance of neural responses, which is inherited by choice correlation CC_{i} and by choice probability CP_{i}. In Methods and Supplementary Text S4, the authors derive that this stimulus-dependent component can itself depend asymmetrically on pCR, to an extent that is specific to each cell (i.e., the specific coupling between response and gain). Unfortunately, the authors do not offer an intuitive argument about the origin of this asymmetry.
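The stimulus-dependent noise component follows directly from the structure of the Goris et al. 2014 model, in which a gamma-distributed gain multiplies the Poisson rate, so that Var[r] = rate + σ_G² · rate². A short simulation (arbitrary rates and gain variance, purely for illustration) shows the resulting stimulus-dependent Fano factor:

```python
import numpy as np

def gain_modulated_counts(rate, sigma_g2, n, rng):
    """Poisson counts with a gamma-distributed multiplicative gain
    (mean 1, variance sigma_g2), as in the Goris et al. 2014 model:
    Var[r] = rate + sigma_g2 * rate**2, so the Fano factor grows with rate."""
    gain = rng.gamma(shape=1.0 / sigma_g2, scale=sigma_g2, size=n)
    return rng.poisson(gain * rate)

rng = np.random.default_rng(5)
sigma_g2 = 0.1
low = gain_modulated_counts(rate=2.0, sigma_g2=sigma_g2, n=200_000, rng=rng)
high = gain_modulated_counts(rate=20.0, sigma_g2=sigma_g2, n=200_000, rng=rng)
fano_low = low.var() / low.mean()     # ~ 1 + 0.1 * 2  = 1.2
fano_high = high.var() / high.mean()  # ~ 1 + 0.1 * 20 = 3.0
```

Stimulus levels that drive the cell harder thus inherit more gain variance, which is how the gain fluctuations make the noise (co)variance, and through it CC_{i} and CP_{i}, stimulus-dependent.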
Please explain how gain fluctuations lead to rising dependencies on pCR in some cells. Which cells are these? Do they combine strong stimulus modulation with weak choice correlation? Without such an explanation, the entire cluster analysis appears incomplete and ultimately pointless.
We now provide a more complete description of the gain modulation model and how it relates to the results of the cluster analysis in the Results section. Some of the final questions raised by the Reviewers are excellent but would require data characterizing the cells and their responses (e.g., the structure of cross-neuronal correlations; which layer; excitatory or inhibitory; targets of feedback projections; projections to other cells within the same column, the same area, other cortical areas, thalamus, …) beyond those available in the Britten et al. data. We performed the cluster analysis to demonstrate that there is structure in the CP(p_{CR}) dependencies, and that beyond the modulation h(p_{CR}) common to all cells there is additional structure that is cell-specific and can be expected to relate to the cell properties mentioned above. We are now handing this back to the field to figure out how these structures correlate with other properties, since we believe this will provide insights into the neural circuits underlying perceptual decision-making. Together with the more fine-grained improvements reported below, we specifically improved the presentation of the gain model in Results (lines 205-256), Methods (lines 882-957) and Suppl. Information S4.
We thank the Reviewers for the summary of our findings. We now better explain that the finding of an asymmetric component of the CP(p_{CR}) dependence is not "contrary to predictions" of our model, as presented in Eq 7. We indicate in lines 160-164 that the CP(p_{CR}) dependence can be caused by the threshold-induced factor h(p_{CR}) but also by the choice correlations CC(p_{CR}). We indicate in lines 197-200 that averaging across neurons is expected to help isolate the h(p_{CR}) pattern because it is common to all cells, while cell-specific patterns are expected to be introduced by CC_{i}(p_{CR}). The identification of the U-shape of h(p_{CR}) relies on the assumption that CC(p_{CR}) is stimulus independent, or that the CP(p_{CR}) dependencies induced by CC(p_{CR}) are sufficiently heterogeneous across cells that they average out, so that the predominant modulation observable in the average profile <CP(p_{CR})> is h(p_{CR}). The fact that we find an asymmetric component of CP(p_{CR}) goes against this assumption, suggesting that a shared pattern of CC(p_{CR}) exists among part of the cells (lines 349-353), but not against the predictions of our model. Indeed, as we discuss in lines 164-167 and 672-680, there is evidence from previous theoretical and experimental work suggesting that choice correlations CCs have cell-specific stimulus dependencies, and hence, given Eq 7, it should be expected that CPs also have cell-specific stimulus-dependent patterns. As we discuss in lines 683-692, our refined method to characterize within-cell CP(p_{CR}) should help to examine this dimension of the data, which has not been thoroughly explored yet and which can provide insights into the nature of the underlying decision-making mechanisms beyond what is captured by a single CP value.
As mentioned above, the finding of the asymmetric pattern is evidence of a stimulus dependence of the CC. This raises the question of what the origin of this component of the CC stimulus dependence, shared across neurons, might be. As we mention in lines 164-167 and 672-680, it can be expected that CC stimulus dependencies have a rich structure reflecting, for example, the structure of feedback connections. However, as indicated in lines 458-460, examining the structure of CP(p_{CR}) patterns associated with the computational decision-making mechanisms would require a characterization of the cross-neuronal correlation structure of the responses, which is beyond the single-cell recordings of Britten et al. This is why we chose to examine whether the asymmetric component of CP(p_{CR}) could be explained by a nonspecific mechanism for which prior empirical support exists (Goris et al. 2014), namely the presence of trial-to-trial gain fluctuations.
We have now substantially improved the description of the gain model (lines 205-256) and added new analyses quantifying how well the gain model can predict the experimentally observed asymmetric CP pattern extracted from the cluster analysis (lines 443-460). In particular, we now also report that the theoretical coefficients estimated from the gain model significantly correlate with the experimentally estimated coefficients for the asymmetric cluster, although overestimating their magnitude.
The separation into a new section for the gain model (lines 205-256) helps to clearly differentiate between the generic model introduced before and the specific model that considers gain fluctuations as the source of a CC(p_{CR}) contribution. In this section we describe the main components of the model and introduce the resulting analytical CP expression (Eq 11), which in the previous version was only provided in Methods. For this model of gain fluctuations, we specifically adopted a purely feedforward encoding/decoding model, as previously studied in Shadlen et al. (1996) and Haefner et al. (2013). Equation 9 is a generalization of the CP expression of Haefner et al. (2013) to all stimulus levels. We now further explain this extension in connection with the previous results of Haefner et al., which characterized the dependence between CPs and properties such as the neurometric sensitivity and the cross-neuronal correlation structure (lines 218-227).
In more detail, we consider that the neural decoder that determines the choice is tuned to be optimal at the decision boundary, that is, that the readout weights are tuned to the structure of covariability in the population responses for the non-informative stimuli. For this optimal readout, the CP is proportional to the neurometric sensitivity (Haefner et al. 2013, Pitkow 2015). However, in the presence of gain fluctuations, this structure of covariability changes for different stimuli due to the additional gain-related component in Eq 10. We now explain in more detail how the CC stimulus dependence appears as a consequence of the stimulus dependence of the gain-related component of the cross-neuronal correlations in Eq 10 (lines 234-243).
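As an intuition for why shared gain fluctuations make the covariance structure stimulus dependent, a minimal sketch (our illustration; the neuron pair, rates and gain SD are hypothetical, not fitted values) simulates two neurons whose tuned mean rates are scaled by a common multiplicative gain, in the spirit of Goris et al. 2014, and shows that the gain-induced covariance grows with firing rate:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200_000
sigma_g = 0.2  # SD of the shared multiplicative gain (mean gain = 1)

def responses(mean_rates):
    """Trial-by-neuron responses: shared gain times tuned rate, plus private noise."""
    mean_rates = np.asarray(mean_rates, dtype=float)
    g = 1.0 + sigma_g * rng.standard_normal(n_trials)  # shared gain, one value per trial
    private = rng.standard_normal((n_trials, 2)) * np.sqrt(mean_rates)  # Poisson-like noise
    return g[:, None] * mean_rates[None, :] + private

# the same pair of neurons probed at a weak and a strong stimulus (tuned mean rates)
cov_weak = np.cov(responses([5.0, 4.0]).T)[0, 1]      # theory: 0.2**2 * 5 * 4   = 0.8
cov_strong = np.cov(responses([40.0, 32.0]).T)[0, 1]  # theory: 0.2**2 * 40 * 32 = 51.2
```

Because the gain-induced covariance scales with the product of the tuned rates, the cross-neuronal correlation structure, and hence the choice correlation it supports, inherits the stimulus dependence of the tuning functions.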
We have also improved the description of the formal implementation of the model in Methods (lines 882-957) and in section S4 of the Suppl. Material. In Eq 23 and lines 924-934 we explain how the changes in the cross-neuronal correlation structure of Eq 22 create additional response variability that attenuates the CC, but also create a new component of covariation between the decision variable and the single-cell responses due to the gain fluctuations shared across the neural population. A more detailed derivation is now provided in section S4.
Moreover, to further understand which properties of the single-cell responses and of the decoder determine the form of the asymmetric gain-induced CP(p_{CR}) component, in S4 we further generalize the model (lines 1468-1484). This generalization is valid not only for an optimal decoder but for any unbiased decoder. We describe how the stimulus dependencies of CC induced by the gain are determined in Equation S12. The shape of these dependencies depends on cell-specific properties, namely on the neurometric sensitivity of the cells and on the relative contribution of the gain to their variability, as well as on population properties, such as the behavioral sensitivity of the decoder to changes in the stimulus and the relative contribution of the gain to the variability of the decision variable. In the particular case of the model with an optimal decoder on which we focus in the main article, as explained in lines 244-245, for each cell the strength of the asymmetric component is determined by the CC magnitude for p_{CR} = 0.5 as well as by the relative contribution of gain to its variability (λ_{i}).
Additionally, to further connect our model to previous computational (Shadlen et al. 1996) and analytical (Haefner et al. 2013) studies of the relation between CPs and the structure of cross-neuronal correlations, we now analyze in detail the effect of gain fluctuations when the decoder is formed by two pools of neurons/anti-neurons with opposite choice preference (lines 1427-1452).
While the gain model explains the main features of the CP(p_{CR}) patterns observed with the cluster analysis, we would like to highlight that the value of the cluster analysis stands independently of modeling the observed patterns with the gain model. This is now clearer since the results of the cluster analysis and the gain model are in separate subsections. As we now discuss in lines 349-353 and 381-382, the cluster analysis identifies the existence of a statistically significant asymmetric CP(p_{CR}) pattern together with the symmetric pattern that the threshold-induced modulation h(p_{CR}) predicts. The presence of this asymmetric pattern is a signature of the existence of stimulus-dependent choice correlations CC(p_{CR}), something that had not been identified in previous studies of CPs, but that is consistent with theoretical and experimental work (lines 672-680) studying the structure of stimulus-dependent decision-related feedback signals and, more broadly, stimulus-dependent cross-neuronal correlations. In this sense, our cluster analysis resolves the apparent contradiction between expectations from this body of previous work and the lack of evidence of CC stimulus dependencies.
A full characterization of the CP(p_{CR}) patterns in relation to cell types would require a joint characterization of properties such as the connectivity structure and the cross-neuronal correlation structure, which are not available from the single-unit recordings of Britten et al. 1996 (lines 458-460). Despite this limitation, the cluster analysis is useful because it reveals a significant, previously unnoticed structure of CP(p_{CR}) and validates the use of within-cell CP(p_{CR}) profiles to study the interaction between stimulus-driven and choice-driven signals in neural responses. Similarly, the gain model we present shows that an asymmetric component of the CP stimulus dependence can be caused by a generic mechanism such as trial-to-trial gain fluctuations, and is informative about the relative strength of the asymmetric CP stimulus dependencies for the cluster in which the asymmetric dependence is predominant (lines 443-460), although it overestimates their magnitude. We expect that the analysis of within-cell CP(p_{CR}) patterns will in future work be a useful tool to identify a finer structure of CP stimulus dependencies across cells, distinctive of specific mechanisms of the decision-making process, such as the structure of stimulus-dependent decision-related feedback.
3. Following on from this, there is also the question of how the stronger CP with increasing pCR could be implemented in actual neural terms. Would, for example, the pool size of neurons contributing to the decision change? With regard to the model, how does the pCR × CP effect intersect with the experimentally much stronger interaction effect of neurometric sensitivity and the size of the CP?
The h(pCR) modulation that we derive does not require any change in "implementation", pool size, or membership of the decision pool. This modulation is already embedded (undescribed) in the Shadlen et al. 1996 model and in the Haefner et al. 2013 model when considering a p_{CR} different from 0.5. Our results extend the results of Haefner et al. to pCR ≠ 0.5 and thereby open the door to a principled study of the CP(pCR) relationship. In fact, without our results – both on h(pCR) and on the impact of the known gain variability – a measurement of a U-shaped or monotonic CP(pCR) relationship might lead to suggestions that pool size and/or pool membership must be changing with pCR/stimulus (which appears rather unlikely, given that this would imply a change of the decoder driven precisely by the property to be decoded). These revisions are reported in lines 210-231 and 1405-1415.
The authors analyse this from the classical Britten et al. [1] data set, producing a framework for analysis in Figure 2 and the outcome of the analysis in Figures 3 and 4 of the paper. This type of analysis is not new. The specific form of the plots in Figures 2, 3 and 4 appears in Figure 6 of Dodd et al. [2], while Figure 3 in Britten et al. delivers almost the same information.
We respectfully disagree with this summary. All panels of Figure 2 represent novel mathematical results and insights that have not previously appeared anywhere. While Britten et al. and Dodd et al. present plots similar to those in Figure 3, our method introduces a key new refinement: separating within-cell patterns of CP stimulus dependence from the across-cell heterogeneity of CP magnitudes. Please find above, in reply to point (1a) of the Reviewers, a more detailed discussion of how we improved the comparison with these previous studies. A comparison with these figures from Dodd et al. and Britten et al. is now discussed in the Discussion, lines 612-638.
In regard to the clustering analysis, it seems that the big driver for the formation of clusters is the division between CP > 0.5 and CP < 0.5. For values of CP < 0.5, there is not really a functional account, as they do not relate to the tuning of the cells for motion. The lack of functional meaning is highlighted by the fact that cluster 1 in Figure 4a (blue: CP < 0.5) is statistically non-significant. Not unrelated to this lack of significance is the fact that Figure 13 of Britten et al. and Figure 6 of Parker et al. [3] show that CPs are stronger for neurons that are more sensitive in the visual task. The usual interpretations of this are either the intuitive claim that more sensitive neurons are more tightly involved in the task and therefore have higher CPs or, more subtly, that neurons with weaker sensitivity have lower degrees of interneuronal correlation.
Please see our previous answers regarding CP < 0.5 in reply to point (1b) of the Reviewers. The Reviewers' point about the relationship across cells between the magnitude of the CP and the neurometric threshold (d′) is orthogonal to the analysis of the within-cell dependence of CP as a function of p_{CR} that we focus on here. Put differently, a single neuron's CP magnitude may be a function of this single neuron's d′ and is additionally modulated by p_{CR}, which is shared by all neurons by virtue of being a behavioral quantity. Furthermore, as we now explain in our improved description of the gain model, the neurometric sensitivity also affects the strength of the CP stimulus dependence induced by the gain.
The rest of the analysis in this paper advances the idea that fluctuations in the stimulus-related gain of neuronal firing are responsible for the emergence of stronger CPs at higher performance levels. The authors write on ll. 243-244 that "Briefly, the contribution of gain modulations to the covariance of the responses increases with neuronal firing rates, which in turn are stimulus-modulated as determined by tuning functions." However, the lead author has already published a nice theoretical summary [4] showing that CP is related to the level of interneuronal correlations in the pool. Indeed, that analysis showed that under some conditions (large pool, correlated noise, and at least one or two members of the pool contributing significantly to the perceptual readout) one might take CP as a substitute indicator for the interneuronal correlation of the decision pool. In the light of the earlier analysis, the present paper does not address the very relevant question of changes in the membership of the neuronal pool with stimulus strength. In effect, if readout weights change with stimulus strength, then CP will be expected to change. Equally, if pool membership changes, then interneuronal correlation may be expected to change. We did not see anything in this analysis that definitively ties the change in CP to stimulus-related gain changes.
ll. 242-243: "We derived the specific form of CC_{i}(p_{CR}) predicted from gain modulations in the threshold decision model to explain additional CP stimulus dependencies beyond the symmetric modulation by h(p_{CR}). Briefly, the contribution of gain modulations to the covariance of the responses increases with neuronal firing rates, which in turn are stimulus-modulated as determined by tuning functions. This leads to an asymmetric component of CP(p_{CR}), with higher CPs for p_{CR} values associated with stimuli preferred by the cell. Furthermore, while stronger gain fluctuations increase this asymmetric stimulus dependence, they also decrease the magnitude of the cell-specific CP because they add variability to the responses unrelated to the choice."
The gain model examines the effect on CP(p_{CR}) of a particular structure of stimulus-dependent cross-neuronal correlations, namely that produced by gain fluctuations shared across neurons. We clarified the advances with respect to Haefner et al. 2013. We assume that only the cross-neuronal correlations are stimulus dependent, while the readout weights are independent of the stimulus. We believe this assumption is reasonable, since the cross-neuronal correlations depend on the dynamics of the network, while the weights are expected to be hardwired in the network structure, and the decoder is not expected to depend on the very sensory stimulus it has to decode. We did not claim (nor could we, given the available data, as explained in the 'In Brief' summary in reply to point 2 above) that gain variability is the only explanation for a monotonic increase of CP with stimulus strength. What we show is that the gain model explains the main features of the patterns of CP stimulus dependence observed with the cluster analysis. We have reinforced these conclusions with an additional analysis of the predictive power of the gain model (lines 443-460). The paper has been revised to make the gain model clearer (lines 205-256, 882-957).
As neuronal sensitivity has normally been measured, the change in response due to a change in the stimulus is assessed relative to the variability of neuronal firing. The description above implies that stimulus modulations in the Haefner models translate into response changes on top of which random gain modulations are applied. At first sight, there does not seem to be any room in the model for low-firing-rate, low-variability neurons to contribute to CP, even though such neurons may have high neurometric sensitivity. On the other hand, it may well be that the new Haefner model all shakes down to give the established experimental result that CP is linked to neuronal sensitivity. If that is correct, then the paper is currently rather obscure on this point, and it would be useful for the paper to lay this out clearly.
As mentioned above, our model does not amend the findings of Haefner et al. 2013, whose analytical CP model supports the experimental finding that CP is proportional to the neurometric sensitivity. Haefner et al. derived that relation for p_{CR} = 0.5, and it equally holds in our extended model, hence supporting that neurons with low firing rate and low variability can contribute to the internal decoder. Following this and other suggestions (see reply to point 2), we have improved the description of the gain model to better explain the role of the neurometric sensitivity, both in determining the CP magnitude at p_{CR} = 0.5 and in the strength of the modulation induced by the gain. The connection with the neurometric sensitivity can be found in lines 220-227 and 1405-1415.
We now provide an expanded discussion of all the points raised by the Reviewers in point 3.
As detailed in reply to point (2), we have extended the analysis of the CP stimulus-dependency patterns extracted from the cluster analysis jointly with the predictions of the gain model (lines 443-460), and substantially rewritten the presentation of the gain-induced CP model in the main text (lines 205-256), Methods (lines 882-957), and Suppl. Material (section S4). We now better explain the role of single-cell properties, namely the neurometric sensitivity of the cells and the relative contribution of the gain to their variability, as well as population properties, such as the behavioral sensitivity of the decoder to changes in the stimulus and the relative contribution of the gain to the variability of the decision variable.
As indicated in lines 59-61 and 125-128, our general model is agnostic with respect to the feedforward or feedback origins of activity-choice covariations. The only assumption is that "the link between sensory responses and choices is mediated by a continuous decision variable and a thresholding mechanism" (lines 529-530). This general model (Eq 7) identifies two sources of CP stimulus dependencies (lines 160-164): the stereotypical, threshold-induced modulation h(p_{CR}), common to all cells, and potential stimulus dependencies inherited from the choice correlation CC(p_{CR}). To study the form of CC(p_{CR}) associated with a specific source of cross-neuronal correlations, we focused on the model of gain-induced correlations of Goris et al. 2014 (Eq 10). When studying these gain-induced CP stimulus dependencies we follow Haefner et al. 2013 and adopt a traditional, purely feedforward encoding/decoding model. Eq 9 is equal to the expression derived by Haefner et al. 2013, except for the factor h(p_{CR}) and for allowing the cross-neuronal correlation structure to be stimulus dependent.
We now highlight in lines 230-231 that we model the effect of these gain-induced stimulus-dependent cross-neuronal correlations while assuming that the readout weights, and hence the population size, are stimulus independent. We believe it is reasonable to assume that the readout weights are independent of the stimulus value, since otherwise the form of the decoder would depend precisely on what the decoder estimates. On the other hand, we admit that the traditional linear decoder (Eq 8) used in the analytical model, following Shadlen et al. 1996 and Haefner et al. 2013 among others, is an approximation of the internal neural decoder, and that stimulus-dependent weights may reflect dynamic aspects of the decision-making process neglected when using the linear decoder. Either way, the assumption of stimulus-independent weights can also be taken as a modeling choice to determine which CP stimulus dependencies can be derived purely from the stimulus-dependent cross-neuronal correlations induced by gain fluctuations.
In reply to point (2) of the Reviewers above we already described our improvements connecting the extended model to the previous analysis of Haefner et al. 2013, which examined the interpretability of CPs in relation to properties such as the neurometric sensitivity and cross-neuronal correlations for the case of uninformative stimuli. We here address the more specific points of the Reviewers to complement that description. As we now describe in lines 210-229, the CP model including gain fluctuations is a generalization to all stimulus levels of the model of Haefner et al. 2013, and hence their characterization of the CP in relation to the neurometric sensitivity and cross-neuronal correlations for uninformative stimuli still holds and is consistent with our analysis. Indeed, all the analyses of Haefner et al. 2013 focus on the properties of the term CC(p_{CR} = 0.5) in Eq 11, and not on the rest of this expression. The key difference and novelty with respect to this previous work is that we do not focus on characterizing which neural properties determine the magnitude of CPs across neurons, but on within-cell changes of the CP across stimulus levels.
In lines 218-227 we refer to the previous results of Haefner et al. 2013, indicating that the relation between the CP and the neurometric sensitivity for p_{CR} = 0.5 also holds in our model. This is described in more detail in lines 1016-1018 and Equation 21 in Methods. This means that, as in the previous model of Haefner et al. 2013, neurons with low firing rate and low variability can have a high CP at p_{CR} = 0.5 if they have high neurometric sensitivity. The strength of the asymmetric CP(p_{CR}) pattern is also related to the neurometric sensitivity, since in Equation 11 the slope depends on the factor [1 − CC_{i}(p_{CR} = 0.5)] and CC_{i}(p_{CR} = 0.5) is proportional to the neurometric sensitivity. Apart from the case of an optimal decoder studied in the main text, we now derive in Suppl. Material S4 a more general gain model valid for any unbiased decoder, which explains more generally how the strength of the gain-induced asymmetric stimulus dependence depends on the neurometric sensitivity of each cell (denoted η_{i} in Eqs S12 and S13).
The previous conclusions of the model of Haefner et al. 2013 regarding the connection between CPs and the cross-neuronal correlation structure also hold in our extended model, since they describe properties of the CP for p_{CR} = 0.5, not properties of the dependence CP(p_{CR}) across p_{CR} values (or, equivalently, across stimulus levels). We believe this is clearer in this revised version, since we now describe the gain model in a separate section (lines 205-256) and we explicitly show the extended feedforward model of Eq 9, which shows the dependence of the CP on the cross-neuronal correlation structure. In lines 218-234 we explain how this model extends that of Haefner et al. 2013. To further facilitate the connection of our model with previous work studying the relation between CPs and cross-neuronal correlations, in Suppl. Material S4 we additionally derived the form of CP(p_{CR}) dependencies for the paradigmatic model of a decoder formed by two pools of neurons/anti-neurons (lines 1427-1452). For this model, Haefner et al. 2013 found that, as indicated by the Reviewers, the CP for the uninformative stimulus is determined by the cross-neuronal correlations, in particular by the difference between within-pool and between-pool correlations (see now lines 1434-1437). We show that also for this particular two-pool decoder, gain fluctuations are expected to produce an asymmetric CP stimulus dependence, as modeled in Equation S14.
Regarding the novelty of our methods of analysis with respect to Figure 3 of Britten et al. 1996 and Figure 6 of Dodd et al. 2001, please see our detailed reply above to point (1) raised by the Reviewers. Very briefly, the key difference is that our method isolates within-cell CP(p_{CR}) profiles from the heterogeneity of the CP magnitude across cells. This is now repeatedly emphasized in the revised paper, and especially described in lines 297-307 and 612-638.
We agree with the Reviewers' description of the reason why no significant modulation of the CP with p_{CR} is found for the cells with CP < 0.5. In lines 330-339 we provide two explanations for this lack of significance. Apart from the smaller statistical power due to the smaller size of the group of cells with CP < 0.5, we also indicated that these cells have a smaller CP magnitude. Because the factor h(p_{CR}) is multiplicative in CP − 0.5, a smaller |CP − 0.5| results in a weaker modulation of CP by p_{CR}. We have now elaborated on this second point following the Reviewers' argument that in a few cases CPs < 0.5 are significant (lines 339-343). We indicate that the fact that we do not observe the inverted U-shaped dependence predicted by our model for the cells with CP < 0.5 is therefore not strong evidence against the existence of a threshold-induced CP stimulus dependence.
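The multiplicative argument can be made concrete with a toy calculation (illustrative numbers only, not fitted values): if CP(p_CR) = 0.5 + h(p_CR)·(CP_0.5 − 0.5), the same factor h produces an absolute change that shrinks as CP_0.5 approaches 0.5, so cells with CP near 0.5 show an almost undetectable modulation:

```python
h = 1.3  # hypothetical threshold-induced factor at an extreme choice rate

def modulated_cp(cp_half):
    """CP at the extreme choice rate under a purely multiplicative modulation."""
    return 0.5 + h * (cp_half - 0.5)

# absolute shift in CP for cells with progressively larger |CP - 0.5|
shifts = {cp0: modulated_cp(cp0) - cp0 for cp0 in (0.51, 0.57, 0.65)}
# cp0 = 0.51 shifts by only 0.003, whereas cp0 = 0.65 shifts by 0.045
```

The same reasoning applies below 0.5: a cell with CP = 0.49 would move by only −0.003, which is why an inverted U-shape is hard to detect in that group.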
We agree with the Reviewers that the separation into two clusters reflects the division between cells with CP > 0.5 and CP < 0.5. However, beyond this separation, the cluster analysis naturally subdivides the group of cells with CP > 0.5 into cells with a predominantly symmetric or asymmetric CP(p_{CR}) pattern, and the identification of these patterns is not driven by the sign of CP − 0.5. We have now highlighted that the separation of clusters 2 and 3 in Figure 4b is a subdivision of the cells with CP > 0.5 (lines 386-387), although in fact these results are robust: analogous results are found when identifying three clusters without a priori excluding the cells with CP < 0.5 (lines 387-389).
4. Editing for accessibility and readability
Unfortunately, our genuine enthusiasm for the manuscript is somewhat dampened by its length, by its opacity in places, and by the high degree of topic familiarity that it presupposes. For example, the discussion of the grand CP on page 7 and in section S2 of the supplementary material is difficult to follow even for someone in the field. Accordingly, if readability could be improved, the usefulness would be even greater.
We appreciate the theoretical advance of the paper. It is very useful to have equations that clarify the relationship between previously used measures (CP, CTA, CC) and the effect of informative stimuli. Overall, by the very nature of the topic, the paper is rather technical. Our impression was that the paper is not easy to read, in particular when we think about a broader readership that is not familiar with the details of the theory of CP and the typical interpretation of CP measurements. We understand that this is not an easy job because the interpretation of CP is complicated (bottom-up vs. top-down contributions, relationship to spike count correlations, etc.), but it would help to revise the Results section and add some more details and (where possible) intuitive interpretation of the theoretical and experimental results to guide the reader. As an example, it turns out that gain fluctuations are an important factor in explaining CP vs. pCR, but this is only treated very briefly in the Results (the gain fluctuation model is not explained, no figure shows how well the model explains the variability observed in the data, etc.).
We did our very best to improve readability while at the same time addressing all concerns and suggestions of the Reviewers. We sincerely appreciate the Reviewers' effort in helping us to improve every aspect of the paper.
We thank the Reviewers for indicating the necessity of making the paper more accessible. Throughout the paper we have worked on improving its readability. We simplified the description of the grand CP in the main text and moved it to the Discussion (old lines 245-266, now lines 593-603) to make clear the effect that CP stimulus dependencies may have on the interpretation of grand CPs and the additional information that CP(p_{CR}) profiles can provide. We simplified section S2, focusing on the connection between the corrected z-score of Kang and Maunsell and a weighted CP average, which is the result relevant for our reasoning (lines 584-588). The additional description of how, for the standard z-score, the bias in the estimated grand CP (first pointed out by Kang and Maunsell) can be understood as a consequence of an unnormalized weighted average has been removed, since it is not necessary for our reasoning. See the reply to point (11) below for more details on our improvement of section S2.
We appreciate that, in particular, our explanation of the gain model in the main text was insufficient, and that it was unclear how to situate our contributions in relation to the previous work of Haefner et al. 2013. We now better explain the model in the Results in a separate subsection (lines 205-256), and we have also rewritten the description in Methods (lines 882-957) and Suppl. Material S4. As described in detail in reply to previous points of the Reviewers, we now much better describe the neuronal properties, such as the neurometric sensitivity, that determine the strength of an asymmetric CP(p_{CR}) pattern induced by gain fluctuations.
Furthermore, we have expanded the analysis of the symmetric and asymmetric clusters extracted from the cluster analysis using the gain model (lines 443-453). We now also indicate that in cluster 3, where the asymmetric pattern is predominant, the coefficients estimated with the gain model are significantly correlated with the experimental estimates, although overestimating their magnitude. We also better explain the theoretical and experimental limitations of the analysis of the gain model (lines 454-460), despite the merits of this simple model in explaining the main features of the additional asymmetric CP pattern that could not be explained by the threshold-induced stimulus dependency.
Finally, we acknowledge that we have struggled with the trade-off between length and understandability. Invariably, any point that we cut from the text will have to be filled in by an interested reader. And we worry that the effort this would demand of an interested reader would exceed the annoyance, for a reader interested in other aspects of our work, of having to skim over seemingly irrelevant text. However, we of course welcome any specific suggestions of where to expand and what to cut beyond the changes in the current revision.
[Editors' note: further revisions were suggested prior to acceptance, as described below.]
Essential Revision:
1. Overall, the empirical part looks a little bit like an attempt to drag meaning out of a weak relationship, at least as measured experimentally in the single, but classical data set of Britten et al. The more detailed explanation of the clustering analysis reveals that the main effect of interest is driven by one critical data point from the original Britten et al. data. This is the value of CP for pCR = 0.85, in the data set for which CP > 0.5. Much of the paper depends on just how confident we are about a difference in CP that rises from 0.57 for pCR=0.5 to 0.64 for pCR=0.85. The SEMs calculated for individual neurons in Figure 3d (lower right panel) are not encouraging in this regard.
We think that further empirical tests are needed to obtain clarity about the significance of key elements of the proposed model, but this is beyond the scope of the current manuscript. It should be flagged to the reader, though.
In order to clarify to the reader that the empirical part provides only an initial analysis – as was also explicitly mentioned by the authors in their response to the previous reviews – we ask the authors to insert something close to the following statement in the abstract and in the first paragraph of the results:
This paper provides preliminary empirical evidence for the promise of studying stimulus dependencies of choicerelated signals, which requires further exploration in a wider data set.
We added the following sentence in the Abstract:
“Our analysis provides preliminary empirical evidence for the promise of studying stimulus dependencies of choicerelated signals, encouraging further assessment in wider data sets.” (lines 2022)
At the beginning of the Results section we also added these sentences (lines 8586):
“This analysis provides preliminary empirical evidence in support of using these new methods for studying stimulus dependencies of activitychoice covariations.”
2. The revisions are acceptable in that the presentation is indeed streamlined in many respects (thank you!). However, one large reservation remains: the main findings are NOT explained in an intuitive manner.
Basically, when we distinguish neural responses associated with choice D=1 from those associated with choice D=0, we obtain two more or less distinct distributions, P(R|D=1) and P(R|D=0). When these distributions are well separated, then the choice probability CP (the average probability that a response drawn from P(R|D=1) is larger than one drawn from P(R|D=0)) is somewhat larger than when these distributions are more overlapping.
However, both qualitative and quantitative aspects of the relation between choice probability CP and choice rate PCR are highly dependent on assumptions. For example, the symmetric U-shape depends on Gaussian variability of responses. When responses are assumed to be Poisson-variable, for example, the symmetric U-shape is replaced by a monotonic decline (Matlab code available upon request).
We really appreciate the Reviewer taking time to probe the robustness of our analytical results. Below, we include Matlab code for a simple simulation based on the neuron-antineuron model (Britten et al. 1992) showing that our results also hold for Poisson neurons.
As explained in lines 759-771, the shape of the factor h(p_{CR}) in the choice-triggered average CTA_{i} (Equation 17) is determined based only on the assumption that the distribution p(d) is Gaussian. This approximation is likely excellent due to the Central Limit Theorem, since d is the combination of many sensory neurons. We now further justify this in lines 153-155. On the other hand, the relationship between the CTA_{i} and CP_{i} also depends on the Gaussian assumption for the neural response distribution. While this assumption is less accurate (e.g. when Poisson), the CTA-CP relationship derived by Haefner et al. 2013 is very robust to deviations, down to only a few spikes per trial (see Figure S2 in Haefner et al. 2013). As part of an earlier preliminary version of our study (Chicharro et al. bioRxiv cited below), we had already performed numerical simulations with Poisson responses to verify the robustness of our analytical results and confirmed the above insights. In our simulations, we found divergences that were minor and had a different shape than the additional asymmetries that the gain model could explain. See Figure 3CD in:
Decision-related signals in the presence of nonzero signal stimuli, internal bias, and feedback. Daniel Chicharro, Stefano Panzeri, Ralf M. Haefner. bioRxiv 118398; doi: https://doi.org/10.1101/118398
Based on the insights in the manuscript and numerical checks in the above two references, we can only speculate that in the Reviewer's simulation the choice correlation might not have been constant as a function of the stimulus, in which case the shape of the threshold-induced modulation h(p_{CR}) cannot be isolated from CC(p_{CR}). The threshold-induced U-shape, albeit slightly distorted, persists down to populations of only 2 Poisson neurons firing only 5 spikes per trial on average (the more of either, the better our approximations). We also include in Author response image 1 a figure showing multiple simulations with only 30 neurons. Equivalent results are obtained by increasing both neuron number and within-pool noise correlations.
stim = (-0.6:0.2:0.6); % change in the firing rate with informative stimuli
for i = 1:length(stim)
    r1 = poissrnd(5+stim(i),15,1e4); % 15 neurons & 1e4 trials
    r2 = poissrnd(5-stim(i),15,1e4); % 15 antineurons & 1e4 trials
    choice = sign(mean(r1)-mean(r2)); % difference of average population responses
    choice(choice==0) = 2*binornd(1,0.5,1,sum(choice==0))-1; % break ties at random
    pcr(i) = sum(choice==1)/length(choice); % choice ratio
    for j = 1:size(r1,1)
        cp2(j,i) = ChoiceProbability(r1(j,choice==1),r1(j,choice==-1));
    end
end
plot(pcr,mean(cp2),'b.');
(ChoiceProbability(x,y) is a function returning the choice probability value for the two sets of responses x, y)
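For completeness, the computation such a helper performs can be sketched in a few lines. This is an illustrative Python version (the name `choice_probability` and its signature are our own, not the authors' Matlab helper): CP is the probability that a random response from the first set exceeds one from the second, with ties counted as one half, i.e. the area under the ROC curve comparing the two choice-conditioned response distributions.

```python
import numpy as np

def choice_probability(r_d1, r_d0):
    """P(a random response from choice-1 trials exceeds one from choice-0 trials).

    Ties count as 1/2, so this equals the area under the ROC curve for the two
    choice-conditioned distributions (0.5 = fully overlapping, 1 = fully separated).
    """
    r_d1 = np.asarray(r_d1, dtype=float)
    r_d0 = np.asarray(r_d0, dtype=float)
    diff = r_d1[:, None] - r_d0[None, :]  # all pairwise comparisons
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()
```

For identical sets this returns 0.5 and for well-separated sets it approaches 1, matching the intuition in point (2) above that better-separated distributions yield larger CP.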
Gain modulation changes the picture in exactly this way. After gain modulation, response variability is no longer Gaussian and, additionally, the set of responses with the larger average is more affected than the set with the smaller average. In other words, gain modulation alters the shape of response distributions and in consequence also the shape of CP = f(PCR).
Equation 7 shows that the dependence of CP on p_{CR} can be decomposed into two parts. The first part is the modulation h(p_{CR}), shared by all neurons. As discussed above, this modulation appears in the CTA_{i} under the assumption of p(d) being Gaussian, is inherited by the CP_{i} (Equation 7), and is robust for distributions p(r) that depart from Gaussianity. The second part is a modulation specific to each neuron that is due to a stimulus dependence of the choice correlation CC_{i}. To be precise: it is not the non-Gaussianity of p(r) that determines this dependence of CP on p_{CR}; it is the stimulus dependence of CC_{i} (Equation 7).
These insights were the motivation for our gain model: the idea that the stimulus-dependent covariance of Equation 10 would likely result in a stimulus-dependent CC(s) that is then reflected in CP(p_{CR}). It is through this additional contribution CC(p_{CR}), and not by modifying the threshold-induced U-shape factor h(p_{CR}), that gain fluctuations contribute to the CP stimulus dependencies.
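A minimal numerical illustration of why gain fluctuations matter (a Python sketch with assumed parameter values, not the manuscript's actual model): multiplying a Poisson rate by a fluctuating gain inflates the variance by a term growing with the squared mean, so variability becomes super-Poisson and the higher-rate (preferred-stimulus) response distribution is distorted more, which is what can feed a stimulus-dependent CC.

```python
import numpy as np

def gain_fano(rate, sigma_g=0.2, n_trials=200_000, seed=0):
    """Fano factor of Poisson counts whose rate is scaled by a fluctuating gain.

    By the law of total variance, Var ~ rate + sigma_g^2 * rate^2, so the Fano
    factor grows roughly as 1 + sigma_g^2 * rate: overdispersion increases
    with the mean rate.
    """
    rng = np.random.default_rng(seed)
    gain = 1.0 + sigma_g * rng.standard_normal(n_trials)
    gain = np.clip(gain, 0.0, None)  # keep rates non-negative
    counts = rng.poisson(gain * rate)
    return counts.var() / counts.mean()
```

With sigma_g = 0.2, the Fano factor is roughly 1.2 at 5 spikes/trial but roughly 1.8 at 20 spikes/trial: the set of responses with the larger average is affected more, as described in the reviewer's point above.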
In spite of this reservation, I have to confess that this paper has been quite useful to me, because it forced me to think through these issues.
Thank you very much. Making these issues explicit, and shedding light on the deeper relationships underlying CP, has been a major motivation for our paper.
3. With regards to the dependence of the results on one critical data point in the Britten et al. data set (CP for pCR = 0.85) and the question of significance to other data sets, various suggestions for further data worth testing were made.
We wonder, for instance, whether Geoff Ghose's lab (https://pubmed.ncbi.nlm.nih.gov/30067123/ ; https://pubmed.ncbi.nlm.nih.gov/19109454/ ) is a better source of data to support this exercise. In particular, his recordings could be searched for signs of noisy gain modulations, which is the mechanism that lies at the core of this analysis. Other suggestions were, as mentioned before, the Dodd et al. 2001/Wasmuht et al. 2019 data.
But we agree this is beyond the current scope of the manuscript.
Thank you very much for these suggestions. We agree with the Reviewers that further data sets will need to be examined in the future to confirm the existence of CP stimulus dependencies. In fact, the shape of any dependence associated with the choice correlation CC(p_{CR}) in particular can be expected to depend on the specific role of the cells and on the task, so it would be very interesting to examine the CP stimulus dependencies in different data sets and to understand how any difference in their shape across data sets can be explained in terms of the particularities of the tasks or the properties of the cells. We also have high hopes for new datasets that come with cell type and layer information, as well as for chronic recordings in which the same neurons are held across multiple days, allowing for more accurate CP estimates.
4. Line 10: "activity-choice covariations are traditionally quantified with a single measure of choice probability (CP), without characterizing their changes across stimulus levels" I appreciate the need to demonstrate novelty in the paper, but this statement does a disservice to earlier researchers who recognized the possibility of a stimulus-choice interaction but did not find any evidence of such an interaction. The earlier papers are clear on this point. So the novelty here is not the failure of earlier researchers to think through their results carefully; the novelty here is an apparent improvement in sensitivity of the analysis methods. This may sound less exciting but is still important, if correct.
We agree with the Reviewers. Indeed, in the original submission we were giving historical context to the focus on a single CP based on early works such as the one of Britten et al. 1996, which examined the possibility of CP stimulus dependencies but did not find a significant dependence. This was lost in the simplifications of the Introduction. We have now recovered this important explanation in lines 36-38. We have also replaced 'traditionally' with 'commonly' in the piece of text cited by the Reviewers.
5. Line 65 "if the decision process uses a decision threshold"
Please see our explanation under (7) below. We expanded this sentence to include
'if the decision-making process relies on a threshold mechanism (or threshold criterion) to convert a continuous decision variable into a binary choice' (lines 66-67)
6. Line 96 "choice probability, CP, defined as the probability that a random sample from all trials"; the reader only learns on line 99 what this is a random sample of. It would better read as "a random sample of neural activity r " and then explain in a separate sentence what r might be in any particular experimental situation.
Thank you, we now indicate that it is a sample of neural activity earlier in the explanation (line 99).
7. Line 147: "threshold value 𝜃 " the authors insist on referring to this parameter 𝜃 as a threshold value. This usage will inflame debate as to whether there is a "high threshold assumption" baked into this model, where "high threshold" here means a classical high threshold in visual detection models, as opposed to a signal detection model. As classical high threshold theory is now rejected for detection models, I think it would be better here and throughout to refer to 𝜃 as a criterion value, which is what it is and better aligns with the language of signal detection theory.
Thanks for pointing out this potential misunderstanding. We use 'threshold' as it is used, for example, in the literature on decision-making models that consider the accumulation of evidence until a threshold is reached, triggering a decision, and also often in the literature examining decisions in LIP. For example, the idea of a decision threshold is used in this way in the review paper by Gold and Shadlen 2007. The models used in Shadlen et al. 1996 and in Haefner et al. 2013 can be viewed as simplified, analytically tractable versions of this type of model.
To minimize misunderstandings, we added further explanation in the Introduction (see our answer to point (5)) that allows understanding the meaning of this decision threshold. We believe that this initial clarification, together with the detailed explanation of the meaning of the decision threshold early in the Results section and in Figure 1, should avoid confusion. The revised text reads as follows (lines 64-67):
'We show that they can also appear for all neurons because of the transformation of the neural representation of the stimulus into a binary choice, if the decision-making process relies on a threshold mechanism (or threshold criterion) to convert a continuous decision variable into a binary choice.'
8. Lines 129-169: this development is still very difficult to follow and labors over what are some fairly basic points. It would better be rewritten with better structure at about half the length. For example, compare lines 111-112 and 134-135, which could be combined into a single point that is made once at the right stage in the argument.
We simplified this explanation, removing the more technical details of the threshold model not yet required at this point (lines 122-123). However, we believe these are important conceptual points, carefully relating the neuron-specific quantities to the choice-specific ones, and we worry that shortening would increase the risk of a first-time reader missing important information. Also, some level of redundancy between lines 111-112 (now 116-117) and 134-135 (now 136-137) is justified by the discursive structure of the text: lines 111-112 serve to contrast with the subsequent explanation, which indicates that, despite the lack of additional assumptions in the definition of the CP and CTA, their interpretation has been mostly led by a feedforward model. On the other hand, in lines 134-135 we are already describing the assumptions of a threshold mechanism, introducing the continuous decision variable d.
9. Lines 170-209: the writing continues in a stilted manner with multiple cross-references to other material.
To improve readability, we have now connected the explanations of lines 174-179 with the previous text. Nonetheless, we believe all of these are important points, and that the cross-references are important too. We do not know how to simplify without omissions, or without risking comprehension by a first-time reader. However, we would gladly implement any specific suggestion.
This section accomplishes 3 things:
 Explaining the possible sources of variation in p_{CR}.
 Justifying the use of the linear relationship for all our explanations and pointing to the exact formulas.
 Laying out the empirical predictions of our theoretical results.
10. Lines 173-4 "We here will refer to CP stimulus dependencies and CP(𝑝CR ) patterns interchangeably". We rather fear that this is going to cause a lot of readers to trip over. I can see that these two are interchangeable from the theoretical perspective of these authors, but many will think that the behavioral pCR depends on a number of factors other than the stimulus. In the field of monkey neurophysiology, pCR will depend on reward, attention, arousal and so forth, in a way that the stimulus does not. What we think the authors actually mean is something like "Within the structure of our model, there is a fixed relationship between the dependence of CP on the stimulus strength and the dependence of CP on the choice rate pCR, for each threshold."
We have now clarified this point. We now write in lines 174-179: "Note that we do not distinguish between CC stimulus dependencies and a dependence of the CC on p_{CR}. We do not make this distinction here because most generally a change in the stimulus level results in a change of p_{CR}, and the two cannot be disentangled. However, p_{CR} more generally depends on other factors such as the reward value, attention level, or arousal state, and in Equation 7 the separate dependencies on the stimulus and p_{CR} can be explicitly indicated as CC_{i}(p_{CR}, s) when the experimental paradigm allows one to separate these two influences."
So the model is flexible enough to accommodate different dependencies of CC on the stimulus s and on p_{CR}, because the details of these dependencies are irrelevant for the model; that is, the threshold model does not characterize the form of CC(p_{CR}) or CC(p_{CR}, s). The reason why we do not distinguish between the dependence on p_{CR} and on s is that in most experimental settings the two are intertwined. This is the case, for example, in the Britten et al. data we reanalyze, in which a relation p_{CR}(s) is estimated experimentally by the psychometric function. We hope that explicitly mentioning that Equation 7 can be rewritten with CC_{i}(p_{CR}, s), when these two effects are separable, will avoid the misunderstanding that our model is limited to assuming a one-to-one relation between p_{CR} and stimulus levels.
11. Lines 263-265 "we show how to extend Generalized Linear Models (GLMs), a popular model to characterize the factors modulating neural activity, to include stimulus-choice interaction terms" As a statistical procedure, this is fairly routine stuff and could be abbreviated considerably.
Thanks, we have shortened this description, now in lines 270 and 478.
12. Lines 376-377 "Specifying the existence of two clusters, we naturally recovered the distinction between cells with CP higher or lower than 0.5" We struggled to find a clear and unambiguous summary of the statistics associated with this. We can see that a consistent pattern emerges when the cluster number is increased from 2 to 3, but looking at the distributions in Figure 4c and Figure 4d does not appear to reveal clusters. The significance values in Figures 4a and 4b relate to the significance of the modulation effect for each cluster, not the significance of the cluster separations. The methods section and supplementary analysis section cross-reference each other, but neither seems to answer the simple question of whether 1 versus 2 clusters is statistically justified, let alone the step from 2 to 3.
As correctly indicated by the Reviewers, the significance values relate to the modulation effects, and not to the cluster separations. The analysis in Supplementary figure S2c indicates that the distinction between the symmetric and asymmetric clusters is robust, in the sense that they still contain a substantial number of cells even when the number of clusters is allowed to be 6. However, our objective here was not to conclude that there is a concrete number of separable clusters. Indeed, we argue in the Discussion that, from considerations about the structure of stimulus-dependent feedback signals, a richer structure of CP stimulus dependencies could be expected to exist (lines 683-694). This richer, uncharacterized structure may explain the lack of separability of the clusters in the distributions of Figure 4c and 4d, although it may be due as well to noise in the estimated CP values. We limited the analysis to three clusters because of the limited statistical power of the data set. We considered a third cluster because qualitatively it was instructive to detect a CP dependence that was not symmetric, as predicted by the threshold effect h(p_{CR}), since an asymmetric CP dependence implies (according to Equation 7) that it must originate from a choice correlation CC(p_{CR}) modulation.
We now clarify in lines 425-431 that we do not claim that these clusters are the only existing ones, or even that a finite number of CP(p_{CR}) patterns exists, leading to a finite number of clusters. For example, if the CP(p_{CR}) profiles were associated with the structure of stimulus-dependent feedback across cells with different tuning functions, a continuum of CP(p_{CR}) dependencies would be expected, in agreement with the continuum of tuning functions.
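For readers unfamiliar with this kind of cluster analysis, the logic can be sketched briefly. The following Python toy (the profile shapes, the choice of k-means, and all parameter values are illustrative assumptions, not the manuscript's actual procedure) groups per-cell CP(p_{CR}) profiles by shape: a symmetric U-shaped profile, as predicted by the threshold effect, versus a monotonically rising (asymmetric) one.

```python
import numpy as np

def kmeans_profiles(profiles, k=2, n_iter=50):
    """Tiny k-means on CP(p_CR) profiles; the first k profiles seed the centroids."""
    profiles = np.asarray(profiles, dtype=float)
    centers = profiles[:k].copy()
    labels = np.zeros(len(profiles), dtype=int)
    for _ in range(n_iter):
        # squared Euclidean distance of every profile to every centroid
        dists = ((profiles[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = profiles[labels == c].mean(axis=0)
    return labels

# Toy CP(p_CR) profiles on a grid of choice rates (illustrative shapes only):
p_cr = np.linspace(0.15, 0.85, 7)
u_shape = 0.55 + 0.8 * (p_cr - 0.5) ** 2   # symmetric, threshold-induced pattern
asym = 0.50 + 0.15 * p_cr                  # asymmetric, e.g. gain-induced pattern
cells = [u_shape, asym,
         u_shape + 0.01, asym - 0.01,
         u_shape - 0.01, asym + 0.01]      # six "cells", two shape families
labels = kmeans_profiles(cells, k=2)
```

With well-separated shape families the two clusters recover the symmetric/asymmetric distinction; with noisy CP estimates and a possible continuum of profiles, as discussed above, the separation is necessarily less clean.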
13. Lines 612-638 This discussion suggests that there may be within-cell changes in CP as a function of pCR that may have been hidden by pooling across populations of cells. But in the end, this paper has problems in detecting real changes in this relationship at the level of recordings from single cells. The small SEMs that attach to the data in Figure 3a do indeed reflect pooling across a population sample; they do not relate to changes in individual neurons. The panel in Figure 3d shows the true picture for individual cells. So this discussion begs the question as to why the population analyses in Dodd et al. do not show the predicted relationships. The point made by the authors about within-cell changes does not appear to be material in this regard.
The difference may lie in the fact that, as indicated in lines 302-303, our average corresponds to an average across cells of within-cell CP(p_{CR}) profiles. The importance of this is discussed in lines 304-309 and further motivated in lines 569-576. When we average across cells we are averaging their individual within-cell CP(p_{CR}) profiles. Therefore, for each particular p_{CR} value all cells contribute in the same way.
As explained in lines 629-634, from Figure 6 of Dodd et al. 2001 we cannot directly read the within-cell CP(p_{CR}) profiles. Given that, in our understanding, the sets of stimulus levels presented to each cell were different, it is not possible to think of an average of all the dots in Figure 6a with CP>0.5 as analogous to an average of the within-cell CP(p_{CR}) profiles of the cells with CP>0.5. We do not know, though, how strong the variability in the set of stimulus levels presented to each cell was, and whether a clear CP(p_{CR}) dependence would be found using our method. We do not rule out that in that data set there is really no significant CP stimulus modulation, but we believe that our argument about how to isolate within-cell CP(p_{CR}) profiles – even if averaged – from across-cell heterogeneities is still applicable in this case.
We do appreciate that a higher number of trials per neuron would be required to definitively answer the question about the nature of the CP(p_{CR}) relationship, and that this manuscript provides theory and insights that will only be fully exploitable in the future. With regard to Dodd et al., it is entirely possible that CPs in MT during the rotating cylinder task do not depend on p_{CR}, while they do depend on p_{CR} during a classic motion discrimination task. This is exactly what would be expected if the CPs during the rotating cylinder task were mostly due to feedback from a binary variable – which is entirely plausible – rather than linked to the decision through a continuous decision variable. Only in the latter case does our theory predict the symmetric h(p_{CR}) relationship.
https://doi.org/10.7554/eLife.54858.sa2
Article and author information
Author details
Funding
National Institute of Neurological Disorders and Stroke (R01 NS108410)
 Stefano Panzeri
National Institute of Neurological Disorders and Stroke (U19 NS107464)
 Stefano Panzeri
National Eye Institute (R01 EY028811)
 Ralf M Haefner
Fondation Bertarelli
 Daniel Chicharro
National Institute of Neurological Disorders and Stroke (U19 NS118246)
 Ralf M Haefner
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
This work was supported by the BRAIN Initiative (grants No. R01 NS108410 and No. U19 NS107464 to SP, U19 NS118246 to RMH), the Fondation Bertarelli, and the CRCNS program (grant R01 EY028811 to RMH). We thank KH Britten, WT Newsome, MN Shadlen, S Celebrini, and JA Movshon for making their data available in the Neural Signal Archive (http://www.neuralsignal.org/), and W Bair for maintaining this database.
Senior Editor
 Joshua I Gold, University of Pennsylvania, United States
Reviewing Editor
 Kristine Krug, University of Oxford, United Kingdom
Reviewers
 Jochen Braun, Otto-von-Guericke-University Magdeburg, Germany
 Andrew J Parker, University of Oxford, United Kingdom
 Klaus Wimmer, Centre de Recerca Matemàtica, Spain
Publication history
 Received: January 2, 2020
 Accepted: April 6, 2021
 Accepted Manuscript published: April 7, 2021 (version 1)
 Accepted Manuscript updated: April 9, 2021 (version 2)
 Version of Record published: June 7, 2021 (version 3)
 Version of Record updated: June 15, 2021 (version 4)
Copyright
© 2021, Chicharro et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.