Value-based attentional capture affects multi-alternative decision making
Reply to annotation by Chau and colleagues
Sebastian Gluth(1), Mikhail S. Spektor(1,2), Jörg Rieskamp(1)
(1) Department of Psychology, University of Basel, Switzerland
(2) Department of Psychology, University of Freiburg, Germany
In their annotation, Chau and colleagues replied in five points to our eLife publication, in which
we reported that the central behavioral effect of Chau et al. (2014, Nature Neuroscience;
henceforth Chau2014) is neither replicable (i.e., it is not found in our new data) nor
reproducible (i.e., it is not present in their original data when analyzed correctly). Here, we
respond to their reply point by point to provide corrections and clarifications.
1) Positive and negative distractor effects. In their reply, Chau and colleagues emphasized
that they had reported both positive and negative distractor effects in their original paper. This
is an astonishing change of perspective, given that the original paper put a strong and
unequivocal focus on the positive effect. For example, the abstract (p. 463) reads:
“The model predicts […] greater difficulty choosing between two options in the presence of a
third very poor, as opposed to very good, alternative. Both investigation of human decision-making
and […] bore out this prediction.”
Consistently, both Figure 1c and Figure S1 in Chau2014 show that their proposed biophysical
model predicts a negative effect of HV-D. The effect might be stronger in more difficult
trials, but the direction is the same irrespective of trial difficulty.
In their reply, Chau and colleagues now refer to Figure S8 to point out that they also reported
negative distractor effects. However, this figure refers to the behavioral results of a task that
differed from the fMRI experiment and that was tested with only 12 participants. Crucially,
this effect is not predicted by their model.
2) Positive distractor effect predictions. Chau and colleagues state that the biophysical
model is unlikely to be unique in predicting positive distractor effects. We fully agree with
this statement and therefore stress the importance of (quantitative) model comparisons, an
approach that we used in our work but that was not used in Chau2014.
3) Relative accuracy vs. absolute accuracy. In their reply, Chau and colleagues argue that
investigating relative choice accuracy “is preferred” to absolute choice accuracy. In contrast,
we would argue that for a comprehensive understanding of the behavioral results, it is
important to look at both relative and absolute choice accuracy together with a detailed
analysis of the different types of errors that participants can make. In contrast to Chau2014, our
paper provides this full picture of participants’ behavior.
4) The (HV-LV)×(HV-D) interaction effect. The reply by Chau and colleagues suggests that
the interaction between HV-LV and HV-D could be used to test for positive or negative
distractor effects. We want to reiterate here that the main hypothesis of Chau2014 and our
study did not concern the interaction term but the “main” effect of HV-D. To test for the
HV-D effect, however, it is important to calculate the interaction term based on mean-centered (or
standardized) variables. In our paper, we list several standard textbooks and seminal
publications that stress the necessity of this step to avoid non-essential multicollinearity. We
also show that this multicollinearity is dramatic in the case of Chau2014 (r = .862 with
uncentered variables as opposed to r = -.064 with mean-centered variables). Finally, using
both multiple-regression analyses and simulations we provide converging evidence that the
negative HV-D effect reported in Chau2014 was merely a by-product of the incorrect
implementation of the interaction term.
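The consequence of (not) mean-centering can be illustrated with a few lines of simulated data. The following sketch uses independent, uniformly distributed value differences on an arbitrary positive scale; these are illustrative assumptions of ours, not the actual trial design of Chau2014:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical positive value differences (arbitrary scale, illustration only)
hv_lv = rng.uniform(1, 5, n)  # HV - LV
hv_d = rng.uniform(1, 5, n)   # HV - D

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Interaction built from raw (uncentered) variables:
# strongly collinear with the HV-D main-effect regressor
r_raw = corr(hv_d, hv_lv * hv_d)

# Interaction built from mean-centered variables:
# the non-essential multicollinearity disappears
c = lambda x: x - x.mean()
r_centered = corr(hv_d, c(hv_lv) * c(hv_d))

print(r_raw, r_centered)  # high positive vs. close to zero
```

Because all value differences are positive, the raw product tracks each of its factors, which is exactly the non-essential multicollinearity that mean-centering removes.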
Concerning the interaction itself, we think this effect is not interpretable and should not be
used to infer the presence or absence of distractor effects: In our publication (Figure 5–figure
supplement 2), we show that even a correctly calculated interaction term tends to become
significantly positive in simulated data that were generated without any interaction effect.
The reason for this statistical artifact appears to be the positive correlation between HV-LV
and HV-D (the artifact disappears when simulating data based on trials in which HV-LV and
HV-D are uncorrelated).
5) Sample size. Chau and colleagues point out that their study included not only the fMRI
sample of 21 participants but also 4 pilot datasets with a total of 34 participants. Furthermore,
they mention a new dataset with 40 participants. They claim that both positive and negative
distractor effects are present in these data.
We appreciate that, with their reply, the authors made all their data available in an online
repository. This gave us the opportunity to analyze the complete dataset, in particular those data
that we had no access to before (i.e., the pilot data and the new data). As in our publication,
we conducted the multiple regression analysis on relative choice accuracy that included the
HV-D term and the interaction term (HV-LV)×(HV-D), correctly based on mean-centered
variables. The script to analyze these data can be found on our OSF project website
(https://osf.io/8r4fh/).
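The general form of this regression can be sketched as follows. This is a minimal illustration with ordinary least squares; the function and variable names are our own choices for this sketch, and the actual analysis is in the linked OSF script:

```python
import numpy as np

def distractor_regression(rel_accuracy, hv_lv, hv_d):
    """Regress relative choice accuracy on HV-LV, HV-D, and their
    interaction, with the interaction computed from mean-centered
    variables to avoid non-essential multicollinearity."""
    hv_lv_c = hv_lv - hv_lv.mean()
    hv_d_c = hv_d - hv_d.mean()
    X = np.column_stack([
        np.ones_like(hv_lv),  # intercept
        hv_lv,                # main effect of HV-LV (choice difficulty)
        hv_d,                 # main effect of HV-D (the effect of interest)
        hv_lv_c * hv_d_c,     # correctly computed interaction term
    ])
    betas, *_ = np.linalg.lstsq(X, rel_accuracy, rcond=None)
    return betas  # [intercept, b_hv_lv, b_hv_d, b_interaction]
```

In the group-level analyses, such coefficients would be estimated per participant and then tested against zero with a one-sample t-test.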
Taking all pilot data together, we found no evidence for a negative HV-D effect on relative
choice accuracy. The effect is even significantly positive (t(33) = 2.13, p = .040; d = 0.37).
This might be seen as evidence for a divisive normalization effect, but the robustness of the
effect is questionable, given that the effect does not reach significance when omitting the
interaction term (t(33) = 1.67, p = .105; d = 0.29). Interestingly, even when using the
incorrectly determined interaction term, no negative effect is observed (t(33) = 0.28, p = .778;
d = 0.05), contrary to Chau2014’s suggestion.
With respect to the new dataset, we found a remarkably strong negative effect of HV-D (t(39)
= -12.48, p < .001; d = -1.97). This might seem like tremendous support for Chau2014. Note,
however, that in these new data 50% of trials have positive values for HV, LV, and D, and
50% have negative values (a fact not mentioned in the reply by Chau and colleagues).
Since all other experiments (of Chau2014 and our study) used only positive values for the
options, we restricted the analysis to positive-value trials. In this case, the HV-D effect is no
longer significant (t(39) = -1.04, p = .305; d = -0.16).
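As a sanity check on the reported statistics, the effect sizes follow from the t-values via the standard one-sample conversion d = t/√n, assuming n = 34 pooled pilot participants and n = 40 participants in the new dataset, as stated above:

```python
import math

def cohens_d_from_t(t, n):
    """Cohen's d for a one-sample (or paired) t-test: d = t / sqrt(n)."""
    return t / math.sqrt(n)

# Pooled pilot data (n = 34)
print(round(cohens_d_from_t(2.13, 34), 2))    # 0.37
# New dataset (n = 40)
print(round(cohens_d_from_t(-12.48, 40), 2))  # -1.97
print(round(cohens_d_from_t(-1.04, 40), 2))   # -0.16
```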
Thus, the strong negative effect of HV-D in this new dataset is driven by trials in which all
options have negative values. This may point to an interesting violation of independence in
decision making in the loss domain, but too little is known about this novel experiment at the
current stage to warrant any conclusions. Most importantly, when focusing only on options
with positive values as they had been used in all previous studies, again no evidence for a
negative HV-D effect could be observed. This finding (together with the results of the pilot
datasets) lends further support to our conclusion that the behavioral effect of Chau2014 is not
replicable.
Posted by eLife staff on behalf of Gluth et al.