Towards a Neurometric-based Construct Validity of Trust

  1. Department of Psychology, National Taiwan University, Taipei, Taiwan
  2. Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan
  3. Center for Artificial Intelligence and Advanced Robotics, National Taiwan University, Taipei, Taiwan
  4. Gordon F. Derner School of Psychology, Adelphi University, Garden City, NY, 11530
  5. Institute of Psychology, Leiden University, Leiden, The Netherlands
  6. Leiden Institute for Brain and Cognition (LIBC), Leiden, The Netherlands
  7. Department of Psychology, Rutgers University, Newark, NJ, 07102
  8. Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755

Peer review process

Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, and public reviews.



  • Reviewing Editor
    Clare Press
    University College London, London, United Kingdom
  • Senior Editor
    Michael Frank
    Brown University, Providence, United States of America

Reviewer #1 (Public Review):

The authors aimed to develop a whole-brain multivariate pattern predicting decisions to trust and to use this pattern to assess the construct validity of the concept of trust. To this end, they used machine learning to develop and validate a whole-brain pattern capable of predicting decisions to trust in three previously published fMRI datasets in which participants played an economic trust game. They then assessed how this pattern was expressed in several other published fMRI datasets operationalizing various psychological concepts. They observed that the trust pattern could discriminate between risky and safe economic decisions and between different emotional states, but could not discriminate between several other concepts, such as rewards/losses, famous/unfamiliar face perception, etc. Spatial similarity analyses across datasets showed converging results.

This study adopts a rigorous analytical approach, examining fMRI data from thousands of participants spanning fifteen datasets to investigate the relationship between the multivariate pattern of trust and other psychological concepts. Researchers interested in the concept of trust will find this work valuable. More importantly, it exemplifies the potential of using brain data to explore the construct validity of psychological concepts through this methodological approach.

Despite the strengths of this study, there are several points that, in my view, need further attention:

1. The trust pattern developed and validated by the authors is based on a single type of task, the economic trust game. The multivariate trust pattern is therefore heavily dependent on how trust is specifically defined and operationalized within this task, which may limit its generalizability. Without evidence that the model generalizes to other operationalizations of trust, the authors should interpret their results more conservatively and present the pattern as capturing the "decision to trust in an economic context".

2. In datasets 1-1 and 1-2, trust is operationalized as a form of social gambling, where participants choose to share money (trust) with someone else, hoping to triple their investment but risk losing it all, with the alternative being to keep the money (distrust). However, these datasets also include non-social control conditions (the lottery condition in Fareri et al., 2012, and the computer condition in Fareri et al., 2015), which are not discussed in this paper. Evaluating how the trust model behaves in these control conditions seems crucial, as they provide the closest comparison to similar tasks that exclude the trust component. If the trust model is not specific to social decisions in the original datasets (i.e., it cannot distinguish between gambling and not gambling), this significant limitation should be addressed and discussed.

3. The analytical strategy used to establish convergent and discriminant validity is based on the significance of the average group accuracy of forced-choice tests, which assess the capacity of the model to discriminate between different concepts (e.g., rewards vs. losses, safety vs. risk). The model is taken to be specific to trust when the accuracy is not significantly different from chance, and related to the other construct when the accuracy is significantly above chance. However, the interpretability of a non-significant result depends on the power of the test, and in several cases the sample sizes were relatively small. The use of one-tailed tests exacerbates this issue, since only effects in the hypothesized direction can reach significance. These analyses could be improved by adopting an approach that explicitly evaluates support for the null (e.g., equivalence testing or Bayes factors), by setting a higher bar for what is considered a generalization of the model, or by interpreting the results more carefully, recognizing that absence of evidence is not necessarily evidence of absence.
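The power concern can be made concrete with a toy calculation. All numbers below are invented for illustration (they are not from the paper's datasets): with a small sample, even a 70% forced-choice accuracy is far from significance, so a non-significant result cannot by itself be read as evidence that the model fails to discriminate the conditions.

```python
import numpy as np
from scipy.stats import binomtest

# Hypothetical within-subject pattern-expression values for the trust
# condition and a comparison condition (10 subjects; made-up numbers).
expr_trust = np.array([0.8, 0.5, 1.2, -0.1, 0.9, 0.3, 0.7, 1.1, -0.2, 0.6])
expr_other = np.array([0.1, 0.4, 0.2, 0.3, -0.5, 0.6, 0.0, 0.2, 0.1, -0.3])

# Forced-choice test: a subject counts as "correct" when the trust
# condition shows the higher pattern expression.
n_correct = int(np.sum(expr_trust > expr_other))
n = len(expr_trust)
accuracy = n_correct / n  # 7/10 here

# One-tailed test (only above-chance accuracy can reach significance)
# vs. two-tailed: with n = 10, even 70% accuracy is not significant,
# so "not significant" is weak evidence for discriminant validity.
p_one = binomtest(n_correct, n, 0.5, alternative="greater").pvalue
p_two = binomtest(n_correct, n, 0.5, alternative="two-sided").pvalue
print(accuracy, p_one, p_two)
```

Here the one-tailed p-value is about 0.17 despite 70% accuracy, which is exactly the situation where an equivalence test or Bayes factor would be needed to argue positively for the null.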

Reviewer #2 (Public Review):

The authors set out to characterise "trust" in terms of a spatial pattern of neural responses, and then validate whether different tasks, in different datasets, express this pattern or do not express it, according to their hypotheses. They based their approach on linear classifiers (Support Vector Machines), which they trained to distinguish trust from distrust in an investment game, and then applied the classifier to other datasets. Additionally, they performed visualisations of the similarity among participants and among tasks in their neural responses, using dimensionality reduction techniques.
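The train-then-transfer logic described above can be sketched roughly as follows. Everything here is a simulated stand-in (dimensions, signal structure, and variable names are assumptions for illustration, not the authors' actual pipeline): a linear SVM is fit on one dataset's trust/distrust maps, and the fixed weight map is then applied unchanged to a held-out dataset.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(42)
n_trials, n_voxels = 100, 500

# Simulated voxel-wise activation maps: trust trials carry a weak signal
# in the first 50 "voxels"; distrust trials are pure noise.
signal = np.zeros(n_voxels)
signal[:50] = 0.5
X_trust = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + signal
X_distrust = rng.normal(0.0, 1.0, (n_trials, n_voxels))
X_train = np.vstack([X_trust, X_distrust])
y_train = np.array([1] * n_trials + [0] * n_trials)  # 1 = trust, 0 = distrust

# Train the linear SVM on the "development" dataset.
clf = LinearSVC(C=1.0, max_iter=10000).fit(X_train, y_train)

# Apply the fixed weight map to a simulated held-out dataset to test
# whether the learned pattern generalises beyond the training data.
labels_new = np.repeat([1, 0], 20)
X_new = rng.normal(0.0, 1.0, (40, n_voxels)) + signal * labels_new[:, None]
held_out_accuracy = clf.score(X_new, labels_new)
print("held-out accuracy:", held_out_accuracy)
```

The crucial point the reviewers return to is the second step: the classifier's weights are frozen, so performance on new datasets reflects generalisation of the pattern, not refitting.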

The key strength of this study is the use of multiple datasets to test whether a single study's characterisation of trust, in terms of a spatial pattern of neural responses, generalises to other tasks and populations. This is a nice use for existing data, which bolsters the interpretation of fMRI results, demonstrating that they are generalisable. While I am not a specialist in decoding methods, the analyses appear to have been performed conscientiously and to a high standard. The manuscript is also clearly written.

It's worth noting an obvious but important statistical point. In this study, the *inability* of a classifier to distinguish between conditions in particular datasets is taken as evidence that those conditions do not differ in terms of the effect of interest (trust). In this case, these results make sense, in that they are consistent with the authors' hypotheses. However, there are various reasons why the classifier may not work well on particular datasets - e.g. differences in noise, or a lack of linear separability between patterns (which might mandate a non-linear classifier or a different SVM kernel). Therefore, any null result obtained with classical statistics should be interpreted with caution.
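The linear-separability caveat is easy to demonstrate on toy data (hypothetical, unrelated to the paper's datasets): on XOR-structured classes, a linear SVM sits near chance while an RBF-kernel SVM separates the same data, so a linear classifier's failure need not mean the conditions carry no distinguishing information.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# XOR-style toy data: the two classes are not linearly separable.
X = rng.uniform(-1.0, 1.0, (400, 2))
y = (np.sign(X[:, 0]) == np.sign(X[:, 1])).astype(int)

# Same data, two kernels: only the non-linear kernel can capture
# the XOR structure.
linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf").fit(X, y).score(X, y)

print("linear SVM accuracy:", linear_acc)  # near chance
print("RBF SVM accuracy:", rbf_acc)
```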

Reviewer #3 (Public Review):

This is a timely and impressive study that applies a neuroscientific approach to provide an objective measurement of the psychological construct of trust. Drawing links from psychometrics, the presented neurometric approach will be beneficial to many open research questions within and beyond the field.

There are multiple strengths to highlight. First, the study followed and moved beyond best practices in psychometrics research to establish the neurometrics of trust. Second, it made use of multiple datasets to rigorously validate the model and test its specificity and generalizability. The choice of these datasets was well justified and informed by previous studies. Third, the study combined a series of data-driven approaches to provide converging and complementary evidence for their neurometric model, setting an excellent example for future work in this vein.

There were a few things that would be helpful to clarify, on top of the already comprehensive paper. First, it would help to draw an even closer side-by-side analogy between neurometrics and psychometrics. This work will conceivably benefit both psychology and neuroscience; an illustration (such as a box) detailing each element of neurometrics alongside its psychometric counterpart would be very helpful for many researchers. Relatedly, I am curious what the "end product" of the neurometric approach will be. In psychometrics, the product is naturally the scale/questionnaire, together with the associated validity and reliability checks. Is the multivariate pattern map the product here, or something else? Practically, how can users make use of the maps as easily as they use a questionnaire?

Second, the relationship between trust and no-reward (and similarly between distrust and reward) is indeed puzzling. The authors attributed this to the non-linear nature of the methodology, but if that is true, does this non-linearity also hamper the other results? It is perhaps worth checking the reward-related maps at the decision stage (to reflect anticipation) rather than the outcome stage (where participants actually saw the win/loss).

Lastly, the measurement of "pattern expression" and the associated "expression difference" lacks a detailed explanation: what do the magnitude and sign mean, and how should they be interpreted?
