Dissociable Roles of Reward Prediction Error in the Contrasting Mood Dynamics of Depression and Anxiety

  1. Center for Neurocognition and Social Behavior, Institute of Artificial Intelligence, Shenzhen University of Advanced Technology, Shenzhen, China
  2. Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (BNU), Faculty of Psychology, Beijing Normal University, Beijing, China
  3. CNRS - Centre d’Economie de la Sorbonne, Panthéon-Sorbonne University, Paris, France
  4. Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
  5. State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
  6. Chinese Institute for Brain Research, Beijing, China
  7. University of Groningen, Department of Biomedical Sciences of Cells & Systems, Section Cognitive Neuroscience, University Medical Center Groningen, Groningen, Netherlands

Peer review process

Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, and public reviews.


Editors

  • Reviewing Editor
    Andreea Diaconescu
    University of Toronto, Toronto, Canada
  • Senior Editor
    Jonathan Roiser
    University College London, London, United Kingdom

Reviewer #1 (Public review):

This is a very interesting paper. The research question is intriguing, allowing the authors to address commonly observed comorbidities between depression and anxiety and their dissociable and opposite relationship to mood fluctuations and sensitivity to reward prediction errors. The computational analyses are very in-depth, including many state-of-the-art checks and validations. Another strength is the inclusion of several large or very large samples, including a patient sample in addition to the general population sample.

I have the following questions:

(1) Factor analysis: I found the hierarchical organization of the factors interesting. While this is a very common procedure in, for example, the field of intelligence (producing sub-scores and a general g factor), it is not yet widely used in computational psychiatry (though it has been validated before for anxiety/depression, so it is used here with good reason). I was also impressed by the methodological depth; in particular, the analysis was notably thorough (for example, repeating the EFA on the second half of the data set). I have one question, though: is the sample size too small for the exploratory analyses, given the number of items? Given the stability across the half-split, I imagine it is not. Perhaps the authors could spell out how many items there are, state what the recommended standard for a subject-to-item ratio would be, and comment on this. On a very technical point, the authors should specify how they extracted the factor scores for the other data sets (using the Thurstone or the Bartlett method?). In my experience (though not with hierarchical factor analysis), Bartlett can be somewhat better than the default (Thurstone), in that the resulting factors more closely recapitulate the factor correlations in the original sample, and a person's factor score does not depend on the responses of other participants in the sample. Could you also comment on similarities or divergences between this hierarchical factor analysis approach and the one recently used transdiagnostically in Wise et al. (2026, Translational Psychiatry)?
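For reference, the two scoring methods mentioned above differ only in the weight matrix applied to the standardized item responses. Below is a minimal numpy sketch of the two estimators, using a hypothetical loading matrix; this illustrates the general technique, not the authors' actual pipeline:

```python
import numpy as np

def factor_scores(Z, loadings, uniquenesses, method="bartlett"):
    """Factor scores from standardized responses Z (n x p).

    loadings: p x k loading matrix (Lambda); uniquenesses: length-p
    vector of unique variances (the diagonal of Psi).
    """
    L = np.asarray(loadings, dtype=float)
    uniq = np.asarray(uniquenesses, dtype=float)
    if method == "thurstone":
        # Regression (Thurstone) scores: F = Z R^{-1} Lambda, with the
        # model-implied correlation matrix R = Lambda Lambda' + Psi.
        R = L @ L.T + np.diag(uniq)
        return Z @ np.linalg.solve(R, L)
    if method == "bartlett":
        # Bartlett (weighted least squares) scores:
        # F = Z Psi^{-1} Lambda (Lambda' Psi^{-1} Lambda)^{-1}
        psi_inv = np.diag(1.0 / uniq)
        return Z @ psi_inv @ L @ np.linalg.inv(L.T @ psi_inv @ L)
    raise ValueError(f"unknown method: {method}")
```

With the Bartlett weights, a participant's scores depend only on their own responses and the fitted loadings, which is the independence property noted above.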

(2) Linking factors to task parameters: As I understand it, the authors relate the orthogonalized depression/anxiety factors to task parameters (the sensitivity of mood to RPEs, and mood variability) using correlations. To better understand how this relates to other commonly used approaches, I would pose two questions:

(i) What are the correlations when the full (non-orthogonalized) factor scores for depression and anxiety are used? Are the signs the same?

(ii) What are the results when, instead of the independent correlations, the authors perform b_RPE ~ anxiety + depression (again using the non-orthogonalized factors)?

I'm assuming all of these analyses should give the same results if the authors' hypothesis of opposing effects of anxiety and depression holds true.
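The logic behind questions (i) and (ii) can be illustrated with a small simulation: when two correlated predictors have opposing true effects, the zero-order correlations are attenuated by the shared variance but keep their signs, while the joint regression recovers both effects. This uses synthetic data with assumed effect sizes, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical factor scores: anxiety and depression share a common
# component (r ~ 0.6) and have opposing true effects on RPE sensitivity.
common = rng.standard_normal(n)
anxiety = 0.77 * common + 0.63 * rng.standard_normal(n)
depression = 0.77 * common + 0.63 * rng.standard_normal(n)
b_rpe = 0.5 * anxiety - 0.5 * depression + rng.standard_normal(n)

# (i) Zero-order correlations: attenuated by the shared variance,
# but the signs still reflect the opposing effects.
r_anx = np.corrcoef(anxiety, b_rpe)[0, 1]
r_dep = np.corrcoef(depression, b_rpe)[0, 1]

# (ii) Joint regression b_RPE ~ anxiety + depression: both coefficients
# are recovered near their true values despite the collinearity.
X = np.column_stack([np.ones(n), anxiety, depression])
beta, *_ = np.linalg.lstsq(X, b_rpe, rcond=None)
```

Under the opposing-effects hypothesis, both approaches should therefore agree in sign, which is what the question above asks the authors to verify.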

Minor comments:

(1) The authors should state when the data were collected for each study. AI capabilities have increased massively since ~2020, in quite specific steps (with the public release of new AI models), meaning that if the data were collected recently, AI could likely have completed the tasks and questionnaires without detection.

(2) The authors should include a statement in the methods section indicating that checks for AI participation were performed. If none were, could any be done now? Recent papers (Westwood, PNAS, 2025; van der Stigchel, PNAS, 2026) point to this risk since at least the release of o4-mini (used in the cited paper to create very human-like behaviour).

(3) It would have been good to collect questionnaires for other psychiatric traits thought to be unrelated, such as compulsivity or schizophrenia symptoms, to check the specificity of the results, also under the assumption that higher scores on either of these skewed questionnaires can pick up individual differences in 'bad questionnaire completion'. The authors should comment on the absence of other questionnaires in the limitations section of the discussion.

(4) The authors could include a more explicit sentence in the abstract stating that the anxiety result did not hold up in the clinical population.

Reviewer #2 (Public review):

Summary:

Despite their common co-occurrence, depression and anxiety are known to alter mood fluctuations in opposite ways. Here, the authors aimed to distinguish depression-specific, anxiety-specific, and psychopathology-general effects of reward processing on mood fluctuations, focusing on reward prediction errors (RPEs), which are known to be linked to mood fluctuations. This mechanistic study aims to uncover the process through which these psychopathologies are associated with mood modulations. The authors were able to test their hypothesis appropriately and obtained results corroborating their conclusions.

This work provides a convincing demonstration of the relevance of computational psychiatry (Huys et al., 2016) and the use of decision neuroscience to shed light on the interplay of anxiety, depression, and mood.

Strengths:

The authors used a tripartite model to distinguish depression vs anxiety, as well as a computational model distinguishing reward expectation (EV in the model) from outcome processing through RPE, which are two sequential cognitive processes.

The manuscript adequately addresses the concerns one might have regarding risk attitudes and regarding the reporting of trend-level statistical results.

Weaknesses:

The sample size of the clinical sample (N=116) may not be sufficient to detect anxiety-specific effects, given the high rate of comorbid anxious depression. It would be beneficial to report the number of MDD vs GAD vs anxious-depression diagnoses in the clinical population, as this would likely shed light on the power limitations.
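As a rough benchmark for this concern, the smallest correlation detectable at conventional power can be approximated from the Fisher z transform. This is a back-of-envelope sketch assuming a simple two-sided Pearson correlation test, not a full power analysis of the authors' models:

```python
import math
from statistics import NormalDist

def min_detectable_r(n, alpha=0.05, power=0.80):
    """Smallest Pearson r detectable by a two-sided test at the given
    alpha and power, using the Fisher z approximation (SE = 1/sqrt(n - 3))."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return math.tanh((z_alpha + z_power) / math.sqrt(n - 3))
```

At N=116, this gives a minimum detectable r of roughly 0.26 at 80% power, so genuinely smaller anxiety-specific effects could easily be missed in the clinical sample.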

Reviewer #3 (Public review):

Summary:

In this submission, Wang and colleagues jointly examine the association between depression and anxiety symptoms and individuals' affective reactivity to reward prediction errors in Rutledge et al.'s gambling paradigm. Taking a bifactor approach to anxiety and depression in several non-clinical samples (and one clinical sample), the authors find that anxiety-specific symptoms relate to over-reactivity of mood to reward prediction errors (RPEs) as well as heightened mood variability, while depression-specific symptoms relate to blunted mood sensitivity to RPEs. These depression-specific, but not anxiety-specific, relationships replicated in the patient sample.

Strengths:

I was impressed that the data-driven, transdiagnostic approach employed by the authors uncovered specific relationships between anxiety- and depression-specific factors and RPE reactivity in a well-characterized task and computational model, especially in a non-clinical sample. This sheds new light on how these affective processes may be perturbed, and importantly in different ways, by anxiety and depression symptoms. Likewise, the replication of the depression-specific finding (RPE hypo-reactivity) in a clinical sample was nice to see.

Weaknesses:

(1) While the anxiety- and depression-specific factors had differential effects on mood variability (Figure 2A-D) and RPE reactivity (Figure 2E-G) in all samples, such that the correlations between the two factors and these mood parameters were significantly different, the anxiety factor was not consistently (significantly) associated with either mood-related parameter across samples. However, the authors resolve anxiety-specific predictive effects when they collapse across datasets. While it is intuitive that achieving a larger effective sample size would afford the power necessary to detect such individual differences, this struck me as a major caveat for this set of results.

(2) The authors observe associations between the 'common factor' of depression and anxiety and risk-attitude tendencies, presumably via the alpha (exponent) parameter in a prospect-theory-type subjective value model. But where is this analysis explained? (i.e., how was this model formulated, and how were the risk-attitude parameters estimated?) What is the interpretation of this finding? Is there precedent for examining risk attitudes in this task? And why would these predictive effects be observed only in relation to the common, but not the unique, factors of anxiety and depression?
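Regarding the point above about collapsing across datasets to resolve the anxiety-specific effects: one standard way to pool correlational results across samples is fixed-effect combination of Fisher-z-transformed correlations. The sketch below illustrates that general approach, and is not necessarily the authors' exact procedure:

```python
import math

def pool_correlations(rs, ns):
    """Fixed-effect pooling of Pearson correlations via Fisher's z
    transform, weighting each sample by n - 3 (the inverse variance
    of z). Returns the pooled r and its standard error on the z scale."""
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_pooled = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    se = 1 / math.sqrt(sum(ws))
    return math.tanh(z_pooled), se
```

Because the pooled standard error shrinks with the total sample size, modest per-sample correlations can reach significance only in the combined analysis, which is exactly the caveat raised in weakness (1).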
