Daily life fluctuations in affect predict within-person changes in a real-world measure of cognitive processing speed

  1. Centro de Neurociencias Cognitivas, Universidad de San Andrés, Buenos Aires, Argentina
  2. Global Brain Health Institute, Trinity College Dublin, Dublin, Ireland
  3. School of Psychology, Trinity College Dublin, Dublin, Ireland
  4. Department of Psychiatry, Trinity College Dublin, Dublin, Ireland
  5. Department of Psychology, University of California Berkeley, Berkeley, United States
  6. Ageing Research and Development Division, Institute of Public Health, Dublin, Ireland
  7. The Bamford Centre for Mental Health and Wellbeing, Ulster University, Belfast/Coleraine, United Kingdom

Peer review process

Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, public reviews, and a provisional response from the authors.


Editors

  • Reviewing Editor
    Shuo Wang
    Washington University in St. Louis, St. Louis, United States of America
  • Senior Editor
    Michael Frank
    Brown University, Providence, United States of America

Reviewer #1 (Public review):

Summary:

This study investigates the relationship between affective shifts and cognitive performance in a daily-life setting.

Strengths:

The evidence provided is compelling: the findings are conceptually replicated in three samples of adequate size, and the data are analyzed with statistical rigor, using methods beyond the current state of the art in applied research. For example, the authors use two-step multilevel vector autoregressive models adapted to allow the inclusion of covariates, with contemporaneous effects corrected for temporal relations and background covariates. In addition, the authors use beautiful visualizations to convey the different samples used (Figure 1) and intuitive, rich figures to convey their results.

In summary, the authors were able to convincingly show that higher negative affect is linked to slower cognitive processing speed, with results supporting their conclusions.

Weaknesses:

I have one major concern. Although a check for careless responding has been conducted on the basis of long reaction times, I wonder whether any other sanity checks for careless responding were done beyond long response times. For example, a lack of variability in EMA items over a run of subsequent occasions (say, 15) is often seen as an indicator of careless responding, especially when using VAS items. Line 693 states, "We added a small amount of random noise, ranging from -0.1 to +0.1, to each EMA time series to allow models to converge when EMA time series showed minimal variance over time", which I understand, but this lack of variability could also be caused by participants no longer taking the study seriously. For datasets 1 and 2, this might be more difficult to assess (due to the limited response values), but perhaps the authors can get an indication of this in dataset 3?

Reviewer #2 (Public review):

Summary:

In this paper, Fittipaldi et al. assessed whether cognitive processing speed, as operationalized by the Digital Questionnaire Response Time (DQRT), and affect (both positive and negative) are related contemporaneously and temporally (i.e., in lagged fashion), both between and within subjects. At the between-person level, they found a positive relationship between DQRT and negative affect, and the opposite for positive affect. The pattern was similar at the within-subject contemporaneous level.

The authors further test Granger causality in the dynamics, for both Affect -> DQRT and DQRT -> Affect. They find that affect at t-1 is associated with DQRT in the same manner as in the other models (positively for negative affect, negatively for positive affect). Interestingly, DQRT -> Affect was largely non-significant for most affect items.

This study adds important information on the associations between affect and cognitive measures outside the lab, showcasing a methodological approach to translate laboratory research to new contexts.

Strengths

Overall, this study has a strong methodological approach, which is commendable. The use of three independent samples with different affective measures is a good way to showcase the validity of the findings. The multi-level modelling approach is also done thoroughly and appropriately within the context of MLVAR modelling. The findings are also well visualized, making it easy to follow along with the interconnected and potentially confusing analyses.

Weaknesses

The authors use the DQRT as a measure of cognitive processing speed, which is not fully validated or substantiated as such. The authors do address this as a limitation, but I believe it warrants a much broader discussion, as the construct being assessed may not be the construct the authors intend. This makes it difficult to ascertain whether the conclusion drawn (that affect impacts cognitive function) is valid. I would rather frame the finding as an association between affect and response times, which could reflect many different things, be it careless responding or other mechanisms at play.

Author response:

Reviewer #1:

We thank the reviewer for this important point. Beyond long reaction times, we did not originally exclude participants based on low EMA variability. We agree this is a relevant concern, particularly given the need to add small random noise to some EMA series for model convergence. In the revised manuscript, we will assess additional indicators of careless responding, including within-person EMA variability (e.g., standard deviation or proportion of modal responses) following Jaso et al., 2022 criteria. We will conduct sensitivity analyses excluding low-variability responses or participants and report whether these checks affect the robustness of the results. We will also clarify in the Discussion that minimal EMA variance may reflect either true affective stability or reduced engagement, and discuss how this ambiguity may affect interpretation.
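The variability screen proposed above could be sketched as follows. This is a minimal illustration only: the function name, the SD cutoff, and the modal-response cutoff are our own hypothetical choices, not the authors' actual pipeline, and any real thresholds would need to be calibrated (e.g., following Jaso et al., 2022).

```python
import numpy as np

def flag_low_variability(ema_values, sd_threshold=0.5, modal_threshold=0.9):
    """Flag a participant's EMA series as potentially careless.

    Illustrative heuristics only: a series is flagged when its
    within-person SD falls below sd_threshold, or when a single
    (modal) response accounts for more than modal_threshold of
    all occasions. Thresholds are hypothetical.
    """
    values = np.asarray(ema_values, dtype=float)
    within_sd = values.std(ddof=1)  # within-person variability
    # Proportion of occasions equal to the most frequent response
    _, counts = np.unique(values, return_counts=True)
    modal_prop = counts.max() / len(values)
    return bool(within_sd < sd_threshold or modal_prop > modal_threshold)

# A flat series (no variance across occasions) is flagged:
print(flag_low_variability([50, 50, 50, 50, 50]))  # True
# A varying series is not:
print(flag_low_variability([30, 55, 42, 60, 48]))  # False
```

A screen of this kind would flag exactly the series that currently require added random noise to converge, which is why the two issues are linked.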

Reviewer #2:

We thank the reviewer for raising this fundamental conceptual concern. We agree that more research is needed to fully understand the processes captured by DQRT. In the revised manuscript, we will more clearly reference and summarize prior validation work from our lab providing strong support for a cognitive characterization of DQRT as a measure of cognitive processing speed, while also explicitly acknowledging potential confounds and limitations (Teckentrup et al., 2025). We will clarify that our DQRT computation followed those validated procedures, including exclusion of extreme values above the sample-specific median + 2 SD. In addition, consistent with Reviewer #1’s comment, we will expand the Discussion of how potential careless responding and non-cognitive factors may influence DQRT. We will further tone down language implying causal inference.
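The exclusion rule mentioned above (dropping values above the sample-specific median + 2 SD) can be sketched as follows; the function and variable names are ours for illustration, not taken from the validated procedures in Teckentrup et al. (2025).

```python
import numpy as np

def exclude_extreme_dqrt(response_times):
    """Drop response times above the sample median + 2 SD.

    Illustrative sketch of the stated exclusion rule: the cutoff is
    computed from the full sample, then values above it are removed.
    """
    rts = np.asarray(response_times, dtype=float)
    cutoff = np.median(rts) + 2 * rts.std(ddof=1)
    return rts[rts <= cutoff]

# An extreme outlier is excluded; ordinary values are retained:
print(exclude_extreme_dqrt([1.2, 1.5, 1.8, 2.0, 60.0]))
```

Because the cutoff uses the median rather than the mean as its anchor, a single extreme value inflates the threshold less than a purely mean-based rule would.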

References

Jaso, B. A., Kraus, N. I., & Heller, A. S. (2022). Identification of careless responding in ecological momentary assessment research: From posthoc analyses to real-time data monitoring. Psychological Methods, 27(6), 958.

Teckentrup, V., Rosická, A. M., Donegan, K. R., Gallagher, E., Hanlon, A. K., & Gillan, C. M. (2025). Digital questionnaire response time (DQRT): A ubiquitous and low-cost digital assay of cognitive processing speed. Behavior Research Methods, 57(7), 200.
