Wei Mun Chan

Annotations

  1. Mechano-redox control of integrin de-adhesion
    The assembly and function of the platelet flow chambers are described in more detail in Bio-protocol (Dupuy et al., 2019)

    Comment on Version 4

A citation to Bio-protocol (Dupuy et al., 2019) was added to the start of the "Platelet adhesion assays in flow chambers" section in the Materials and methods:

The assembly and function of the platelet flow chambers are described in more detail in Bio-protocol (Dupuy et al., 2019).

    The following citation has been added to the Reference list:

Dupuy, A., Lining, A. J., Passam, F. H. (2019). Straight Channel Microfluidic Chips for the Study of Platelet Adhesion under Flow. Bio-protocol 9(6): e3195. DOI: https://doi.org/10.21769/BioProtoc.3195

Added by eLife editorial staff

  2. Multi-neuron intracellular recording in vivo via interacting autopatching robots
Manual patching yields are generally not reported in the literature.

    Comment on Version 3

Due to an internal miscommunication, we realised that we had in fact reached out to the wrong individual and not the corresponding author of Jouhanneau et al. 2015. We apologise for the oversight and have therefore replaced the incorrect sentence in the originally published article, “To investigate the yields that manual multi-patch clampers expect, we reached out to the authors of the Jouhanneau et al. 2015 paper but they did not respond.”, with “Manual patching yields are generally not reported in the literature.”

    Added on behalf of the authors by eLife editorial staff

  3. Dissection of affinity captured LINE-1 macromolecular complexes

    Dear John,

Thanks for highlighting this. Our production team has fixed the order of the figures so that Figure 1 is displayed in the appropriate section.

    Added by eLife editorial staff

    This is a reply.
  4. Value-based attentional capture affects multi-alternative decision making

Annotation to “Value-based attentional capture affects multi-alternative decision making”

    Bolton KH Chau(1)*, Nils Kolling(2,3), Laurence T Hunt(3), Mark E Walton(2,3), and Matthew FS Rushworth(2,3)

    1. Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong
2. Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK
    3. Wellcome Centre for Integrative Neuroimaging (WIN), University of Oxford, Oxford OX3 9DU, UK

(1) Positive and negative distractor effects (Introduction, second paragraph). The 2014 paper reported not only evidence of a positive distractor effect but also negative distractor effects predicted by divisive normalization (Louie et al., 2014).

    The positive distractor effect was observed in the behavioural data reported in the main manuscript and supplementary materials. It was also apparent in the signal recorded from ventromedial prefrontal cortex (vmPFC).

It is, however, important to note that the positive distractor effect was most robust on difficult trials where the value difference between the two available options (high value, HV, and low value, LV, options) was small. By contrast, on easy trials (when there was a large value difference between HV and LV) there was a negative distractor effect (Supplementary Information figure 8: SI.8) similar to that predicted by the divisive normalization model. The effect was particularly strong in the experiment reported in SI.8. Overall there was a negative distractor (D) effect, but the HV×D and LV×D interaction terms suggested that the distractor effect became positive on difficult trials.

    In summary, there was evidence for both positive and negative distractor effects and the sign of the effect depended on the difficulty of the trial.

    (2) Positive distractor effect predictions (Introduction, second paragraph). It is indeed true that the 2014 paper showed that a biophysical attractor model predicts a positive distractor effect in difficult decisions (decisions in which the high value, HV, and low value, LV, option values are close). However, it is unlikely to be unique in making this prediction. For example, modified versions of other models such as the drift diffusion model will make a similar prediction if separate diffusion processes occur in parallel to mediate competitions for selection between not just HV and LV but between HV and the distractor (D) and between LV and D. If the parallel drift diffusion processes mediating the various competitions interact then higher D values will delay the selection of either HV or LV until strong evidence for one or other option has accumulated.
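The intuition that interacting parallel races predict a positive distractor effect can be illustrated with a toy simulation. The sketch below uses a simple deterministic race with lateral inhibition (not the biophysical attractor model of the 2014 paper, and not the authors' actual code); all parameter and drift values are made up for illustration. Raising the distractor's input delays the moment at which the best option reaches threshold:

```python
import numpy as np

def race_time(drifts, w=0.2, dt=0.01, threshold=1.0, max_t=50.0):
    """Deterministic race between accumulators with lateral inhibition.

    Each accumulator integrates its own drift minus inhibition
    proportional to the other accumulators' activity. Returns the
    time at which the first accumulator crosses threshold and the
    index of the winner.
    """
    drifts = np.asarray(drifts, dtype=float)
    x = np.zeros_like(drifts)
    t = 0.0
    while t < max_t:
        inhibition = w * (x.sum() - x)   # input from the other racers
        x = np.maximum(x + dt * (drifts - inhibition), 0.0)
        t += dt
        if x.max() >= threshold:
            return t, int(x.argmax())
    return max_t, int(x.argmax())

# HV = 1.0, LV = 0.9; only the distractor input differs between runs.
rt_low_d, winner_low = race_time([1.0, 0.9, 0.2])
rt_high_d, winner_high = race_time([1.0, 0.9, 0.8])

# HV (index 0) wins in both cases, but the higher-value distractor
# slows the race, i.e. a positive distractor value delays selection.
print(rt_low_d, rt_high_d)
```

Because the dynamics are deterministic, HV wins in both runs, and the run with the more valuable distractor takes longer to reach threshold, mirroring the verbal argument above.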

    (3) Relative accuracy vs absolute accuracy (Introduction, last paragraph). Looking at the relative choices between two options (i.e. relative accuracy) is a critical behavioural index for testing whether and how the presence of a third option impacts on decisions between other choices in a manner that violates the principle of independence from irrelevant alternatives. This is the case for the 2014 paper, the Louie2013 paper and other papers that looked at decoy effects (e.g. Usher and McClelland, 2004, Psychol Rev; Roe et al., 2001, Psychol Rev; Tsetsos et al., 2010, PNAS; Li et al., 2018, PNAS). The approach of looking at relative accuracy is preferred to absolute accuracy, because it dissociates the critical effect of D on decisions between HV and LV (which is central to a positive/negative distractor effect or a decoy effect) and the straightforward effect that D’s value has on choices of D itself (which is well-reported in most decision making studies).
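The distinction between the two accuracy measures can be made concrete with a minimal sketch (the choice data below are made up for illustration and do not come from either study). Relative accuracy conditions on the trials in which one of the two real options was chosen, so the distractor's direct attractiveness is factored out:

```python
import numpy as np

# Hypothetical choice codes for a three-option task:
# 0 = chose HV, 1 = chose LV, 2 = chose the distractor D.
choices = np.array([0, 0, 1, 0, 2, 0, 1, 0, 2, 0])

# Absolute accuracy: HV choices out of all trials,
# so choices of D count against it.
absolute_accuracy = np.mean(choices == 0)

# Relative accuracy: HV choices out of HV-vs-LV choices only.
hv_lv_trials = choices[choices != 2]
relative_accuracy = np.mean(hv_lv_trials == 0)

print(absolute_accuracy, relative_accuracy)   # 0.6 vs 0.75
```

In this toy example the two D choices lower absolute accuracy but leave relative accuracy untouched, which is exactly the dissociation described above.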

    (4) The (HV-LV)×(HV-D) interaction effect (Results, “Experiment 4 and reanalysis of the Chau2014 dataset”). Gluth and colleagues suggest that the negative HV-D effect (or “positive distractor effect”) in the 2014 paper was found as a result of “an incorrect implementation of the interaction term (HV-LV)×(HV-D)”. It is indeed possible to debate how best to calculate and interpret an interaction term correctly but for that very reason it is advisable to examine the effect of the distractor at different levels of HV-LV. Just such analyses were reported in the 2014 paper. They were performed by median splitting the data by HV-LV (i.e. dividing the trials into easy and hard trials) and testing the effect of the distractor on easy and difficult trials separately (SI.4). We showed that there were opposite distractor effects on trials with these two levels of HV-LV. Gluth and colleagues performed a very similar analysis in Figure 5-figure supplement 4 of their paper using their own data as well as the data from the 2014 paper. However, they took a more ambitious approach of separating the trials into twelve levels (as opposed to two levels) and claimed that a significant negative HV-D effect was only found in one of the twelve levels. In addition, Gluth and colleagues combined data across their own experiments and tested the effect of (HV-LV)×(HV-D). They showed that there was a trend for a positive effect of (HV-LV)×(HV-D), although from Figure 5A it is unclear whether it passed the significance threshold (it is hard to see whether the error bar showing the 95% confidence interval passes the zero line or not). It is possible that in the current paper, Gluth and colleagues have some evidence for both positive and negative distractor effects in their own data.

(5) Sample size (Results, “Experiment 4 and reanalysis of the Chau2014 dataset”). It is important to be clear that while the 2014 paper reported the behaviour of 21 participants that had also participated in an fMRI study, it also reported several additional behavioural experiments in the Supplementary Information. As a result, 55 data sets in total were reported in 2014. In order to avoid any ambiguity about the availability of these data, we have requested that the eLife editors link all these data to Gluth and colleagues’ manuscript with immediate effect so that they can be examined by any interested party at any time (https://doi.org/10.5061/dryad.040h9t7). In 2017 an additional 40 data sets were collected. These have not previously been reported but again we have requested that the eLife editors link all these data to this manuscript with immediate effect so that they can be examined by any interested party at any time. In summary, 1) positive distractor effects similar to those discussed by Chau et al., 2014, 2) negative distractor effects (i.e. divisive normalization effects) similar to those predicted by Louie et al., 2011, and 3) attentional capture effects such as those that Gluth and colleagues focus on in the current manuscript are all simultaneously present in these 95 data sets. The effects are not mutually exclusive.

    Posted by eLife staff on behalf of Chau et al.

  5. Value-based attentional capture affects multi-alternative decision making

    Reply to annotation by Chau and colleagues

    Sebastian Gluth(1), Mikhail S. Spektor(1,2), Jörg Rieskamp(1)

    1. Department of Psychology, University of Basel, Switzerland
    2. Department of Psychology, University of Freiburg, Germany

    In their annotation, Chau and colleagues replied in 5 points to our eLife publication in which we reported that the central behavioral effect of Chau et al. (2014, Nature Neuroscience; henceforth Chau2014) is neither replicable (i.e., is not found in our new data) nor reproducible (i.e., it is not present in their original data when analyzed correctly). Here, we respond to their reply in a step-by-step manner to provide some corrections and clarifications.

1) Positive and negative distractor effects. In their reply, Chau and colleagues emphasized that they had reported both positive and negative distractor effects in their original paper. This appears to be an astonishing change of perspective, given that the original paper put a strong and unequivocal focus on the positive effect. For example, the abstract (p. 463) reads:

“The model predicts […] greater difficulty choosing between two options in the presence of a third very poor, as opposed to very good, alternative. Both investigation of human decision-making and […] bore out this prediction.”

    Consistently, both Figure 1c and Figure S1 in Chau2014 show that their proposed biophysical model predicts a negative effect of HV-D. The effect might be stronger in more difficult trials, but the direction is the same irrespective of trial difficulty.

    In their reply, Chau and colleagues now refer to Figure S8 to point out that they also reported negative distractor effects. However, this figure refers to the behavioral results of a task that differed from the fMRI experiment and that was tested with only 12 participants. Crucially, this effect is not predicted by their model.

    2) Positive distractor effect predictions. Chau and colleagues state that the biophysical model is unlikely to be unique in predicting positive distractor effects. We fully agree with this statement and thus want to stress the importance of (quantitative) model comparisons, an approach that we used in our work but that has not been used in Chau2014.

    3) Relative accuracy vs. absolute accuracy. In their reply, Chau and colleagues argue that investigating relative choice accuracy “is preferred” to absolute choice accuracy. In contrast, we would argue that for a comprehensive understanding of the behavioral results, it is important to look at both relative and absolute choice accuracy together with a detailed analysis of the different types of errors that participants can make. Contrary to Chau2014, our paper provides this full picture of participants’ behavior.

4) The (HV-LV)×(HV-D) interaction effect. The reply by Chau and colleagues suggests that the interaction between HV-LV and HV-D could be used to test for positive or negative distractor effects. We want to reiterate here that the main hypothesis of Chau2014 and our study did not concern the interaction term but the “main” effect of HV-D. To test for the HV-D effect, however, it is important to calculate the interaction term based on mean-centered (or standardized) variables. In our paper, we list several standard textbooks and seminal publications that stress the necessity of this step to avoid non-essential multicollinearity. We also show that this multicollinearity is dramatic in the case of Chau2014 (r = .862 with uncentered variables as opposed to r = -.064 with mean-centered variables). Finally, using both multiple-regression analyses and simulations, we provide converging evidence that the negative HV-D effect reported in Chau2014 was merely a by-product of the incorrect implementation of the interaction term.
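The multicollinearity point can be illustrated with a small numeric sketch. The values below are simulated (sorted uniform draws standing in for HV > LV > D), not the actual stimulus values of either study, so the correlations will not match the r = .862 and r = -.064 reported above; the sketch only shows the qualitative effect of centering:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated option values with HV > LV > D on every trial.
vals = np.sort(rng.uniform(0, 100, size=(10_000, 3)), axis=1)
d, lv, hv = vals[:, 0], vals[:, 1], vals[:, 2]
hv_lv, hv_d = hv - lv, hv - d

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Uncentered interaction term: strongly correlated with HV-D itself,
# so as a regressor it competes with the main effect of HV-D.
r_uncentered = corr(hv_d, hv_lv * hv_d)

# Mean-centering both components before multiplying removes most of
# this non-essential multicollinearity.
hv_lv_c, hv_d_c = hv_lv - hv_lv.mean(), hv_d - hv_d.mean()
r_centered = corr(hv_d_c, hv_lv_c * hv_d_c)

print(r_uncentered, r_centered)   # large positive vs. near zero
```

With the uncentered product the correlation is large and positive, while after mean-centering it is close to zero, which is why the sign of the fitted HV-D coefficient can flip depending on how the interaction term is built.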

    Concerning the interaction itself, we think this effect is not interpretable and should not be used to infer the presence or absence of distractor effects: In our publication (Figure 5–figure supplement 2), we show that even a correctly calculated interaction term tends to become significantly positive in simulated data in which this interaction effect was assumed to be nonexistent when generating the data. The reason for this statistical artifact seems to be the positive correlation between HV-LV and HV-D (the artifact disappears when simulating data based on trials in which HV-LV and HV-D are uncorrelated).

5) Sample size. Chau and colleagues point out that their study included not only the fMRI sample of 21 participants but also 4 pilot datasets with a total of 34 participants. Furthermore, they mention a new dataset with 40 participants. They claim that both positive and negative distractor effects are present in these data.

We appreciate that, with their reply, the authors made all their data available at an online repository. This gave us the opportunity to analyze the complete dataset, in particular those data that we had no access to before (i.e., the pilot data and the new data). As in our publication, we conducted the multiple regression analysis on relative choice accuracy that included the HV-D term and the interaction term (HV-LV)×(HV-D) that was correctly based on mean-centered variables. The script to analyze these data can be found on our OSF project website (https://osf.io/8r4fh/).

    Taking all pilot data together, we found no evidence for a negative HV-D effect on relative choice accuracy. The effect is even significantly positive (t(33) = 2.13, p = .040; d = 0.37). This might be seen as evidence for a divisive normalization effect, but the robustness of the effect is questionable, given that the effect does not reach significance when omitting the interaction term (t(33) = 1.67, p = .105; d = 0.29). Interestingly, even when using the incorrectly determined interaction term no negative effect is observed (t(33) = 0.28, p = .778; d = 0.05), contrary to Chau2014’s suggestion.

With respect to the new dataset, we found a remarkably strong negative effect of HV-D (t(33) = -12.48, p < .001; d = -1.97). This might seem like tremendous support for Chau2014. Note, however, that these new data contain 50% trials with positive values for HV, LV and D, and 50% trials with negative values (a fact not mentioned in the reply by Chau and colleagues). Since all other experiments (of Chau2014 and our study) used only positive values for the options, we restricted the analysis to positive-value trials. In this case, the HV-D effect is no longer significant (t(33) = -1.04, p = .305; d = -0.16).

Thus, the strong negative effect of HV-D in this new dataset is driven by trials in which all options have negative values. This may point to an interesting violation of independence in decision making in the loss domain, but too little is known about this novel experiment at the current stage to warrant any conclusions. Most importantly, when focusing only on options with positive values, as they had been used in all previous studies, again no evidence for a negative HV-D effect could be observed. This finding (together with the results of the pilot datasets) lends further support to our conclusion that the behavioral effect of Chau2014 is not replicable.

    Posted by eLife staff on behalf of Gluth et al.

  7. Measuring NDC80 binding reveals the molecular basis of tension-dependent kinetochore-microtubule attachments
    General assessment:

**Comment on Version 4**

    The first part of the author response (up to the heading "Essential revisions") was mistakenly omitted from the original article and has been added in the new version.

    Added by eLife editorial staff.

  8. Adaptation to constant light requires Fic-mediated AMPylation of BiP to protect against reversible photoreceptor degeneration

In the annotation by Ron and colleagues, the seminal study on eukaryotic Fic-mediated AMPylation of BiP was not mentioned. In Ham et al. (JBC, 2014), eukaryotic Fic was shown to be localized to the ER, and its major substrate was shown to be the ER chaperone BiP. In vitro, AMPylation was shown to occur on Threonine 366, a residue found in the “rigid nucleotide binding domain” that appears to be exposed primarily when BiP is in its active conformation. Subsequent studies, as mentioned by Ron and colleagues, demonstrated that another site, Threonine 518, was also AMPylated by Fic.

We agree that ultimately it will be important to detect the temporal and spatial changes in specific AMPylation sites of BiP during adaptation of the visual system to constant light. The dynamic changes visualized by ER stress sensors in subsets of cells in the retina and lamina are an important reminder, however, of the challenge of detecting specific PTMs in such complex mixtures by mass spectrometry approaches, in contrast to in vitro material from cell lines, overexpression systems, or material isolated from homogeneous cell populations. Hopefully, cell-type-specific tagging of BiP will allow us to address this issue in the future.

    Lest we forget, fic knockout and BiP-T366A flies are viable, but neurologically challenged. And so, with the beauty of genetics, we discover new relevant modifications in a complex system.

    Andrew Moehlman, Amanda Casey, Kelly Servage, Kim Orth, Helmut Krämer, UT Southwestern Medical Center, Dallas, 75390; Kim.Orth@UTSouthwestern.edu; Helmut.Kramer@UTSouthwestern.edu.

    Annotation "added by eLife editorial staff" on the behalf of the authors

    This is a reply.
  9. The zinc-finger transcription factor Hindsight regulates ovulation competency of Drosophila follicles
    The ex vivo follicle rupture assay was performed as described previously (Deady and Sun, 2015) with more details at Bio-protocol (Knapp et al., 2018).

    Comment on Version 3

    Citation to the Bio-protocol method added to the "Ex vivo follicle rupture, gelatinase assay, and quantitative RT-PCR" section of the Materials and methods.

    Added by eLife editorial staff

  10. Identification and functional characterization of muscle satellite cells in Drosophila
    The detailed protocol of muscle injury can be found at Bio-protocol (Chakraborty et al., 2018).

    Comment on Version 3

    Citation to the Bio-protocol method added to the "Muscle injury" section of the Materials and methods.

    Added by eLife editorial staff