Abstract
Optimistically biased belief updating is essential for mental health and resilience in adversity. Here, we asked how experiencing the COVID-19 pandemic affected optimism biases in updating beliefs about the future. One hundred and twenty-three participants estimated the risks of experiencing adverse future life events in the face of belief-disconfirming evidence, either outside the pandemic (n=58) or during the pandemic (n=65). While belief updating was optimistically biased and reinforcement-learning-like outside the pandemic, the bias faded during the pandemic, and belief updating became more rational and Bayesian-like. This malleability of anticipating the future during the COVID-19 pandemic was further underpinned by a lower integration of positive belief-disconfirming information, fewer but stronger negative estimations, and more confidence in base rates. The findings offer a window into the putative cognitive mechanisms of belief updating during the COVID-19 pandemic, driven more by quantifying the uncertainty of the future than by the motivational salience of optimistic outlooks.
Introduction
Anticipating the future is an essential part of human thinking. Beliefs about the future guide how we understand the world; updating them is vital for learning to make better predictions and to generalize across contexts. Interestingly, belief updating tends to be optimistically biased1. Even when confronted with negative information, humans often downplay its importance to maintain an optimistic view of the future. The underlying mechanism of optimistically biased belief updating involves an asymmetry in learning from positive and negative belief-disconfirming information2–4, which can unfold in two ways, following reinforcement learning (RL) or Bayes' rule5.
Conceptually, reinforcement learning (RL) and Bayesian models of belief updating are complementary but make different assumptions about the hidden process humans may use to adjust their beliefs when faced with information that contradicts them. RL models assume that belief updating is proportional to the estimation error, which expresses the difference between how likely someone believes they are to experience a future life event and the event's actual prevalence in the general population. This difference can be positive or negative. A scaling and an asymmetry parameter quantify the propensity to consider the estimation error's magnitude and its valence, respectively. Together, these two free parameters form the learning rate, which indicates how fast and how biased participants are when updating their beliefs.
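The paper does not spell out its exact equations, but the RL-like rule described above can be sketched as follows. The specific parameterization (learning rate = scaling ± asymmetry, update proportional to the absolute estimation error) is an illustrative assumption consistent with this literature, not the authors' verified implementation:

```python
def rl_update(e1, base_rate, scaling, asymmetry):
    """RL-like belief update: proportional to the estimation error,
    with valence-dependent learning rates (illustrative sketch)."""
    ee = e1 - base_rate  # estimation error: own first estimate minus base rate
    # positive EE (initial overestimation) = good news; negative EE = bad news
    lr = scaling + asymmetry if ee > 0 else scaling - asymmetry
    update = lr * abs(ee)  # magnitude of the belief change
    # the second estimate moves toward the base rate
    return e1 - update if ee > 0 else e1 + update
```

With asymmetry > 0, good news receives a larger learning rate than bad news, producing the optimism bias; an asymmetry of zero yields unbiased updating.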
In contrast, Bayesian models assume that, following Bayes' rule, the posterior (updated) belief is a new hypothesis formed by weighing prior knowledge against new evidence. The prior knowledge consists of information about the prevalence of life events in the general population. The new evidence comprises various alternative hypotheses: how likely a specific event is to occur or not occur for oneself, compared to the likelihood that it will or will not happen to others. This probabilistic adjustment of beliefs about future life events can be considered an approximation of a participant's confidence in the future. The two free parameters of the Bayesian belief-updating model scale how much the initial belief deviates from the updated, posterior belief (i.e., the scaling parameter) and the propensity to consider the valence of this deviation (i.e., the asymmetry parameter).
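By contrast, a rational Bayesian updater weighs the prior against the evidence rather than scaling the raw error. The sketch below illustrates only the general principle, combining prior belief and evidence in log-odds space with a confidence weight; the paper's actual model compares likelihoods of the event occurring for oneself versus others, which this simplified form does not reproduce:

```python
import math

def to_logodds(p):
    """Convert a probability (0-1) to log-odds."""
    return math.log(p / (1.0 - p))

def from_logodds(x):
    """Convert log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def bayesian_update(prior, evidence, weight):
    """Posterior belief as a confidence-weighted mix of prior and
    evidence in log-odds space (illustrative sketch only)."""
    posterior = (1.0 - weight) * to_logodds(prior) + weight * to_logodds(evidence)
    return from_logodds(posterior)
```

Here, a weight of 1 corresponds to full trust in the new evidence (e.g., the base rate), while a weight of 0 leaves the prior untouched; intermediate weights yield posteriors between the two.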
Although RL-like and Bayesian updating models make different assumptions about the updating strategy, they are complementary and powerful formalizations of human reasoning. Both models provide insight into hidden, latent variables of the updating process. Most notably, the learning rate and its components, the scaling and asymmetry parameters, can vary between individuals and contexts and, through this variance, offer possible explanations for idiosyncrasies in belief-updating behavior and its cognitive biases.
The COVID-19 pandemic represented a chronic, adverse life context, drastically altering individual and social realities. The existing literature documenting the impact of the COVID-19 pandemic and the associated changes in everyday life on mental health has shown that stress, anxiety, and depression have increased during the pandemic6,7. Moreover, previous work has demonstrated that belief updating during reversal learning became more erratic and was linked to a rise in paranoia during the COVID-19 pandemic across the US8.
However, it is unknown how optimism biases in belief updating about the future and their underlying putative mechanisms changed during the experience of such an unprecedented, adverse life event. Two competing hypotheses can be formulated. On the one hand, maintaining optimistically biased belief updating under lasting, adverse life conditions may be adaptive: optimism biases are beneficial for exploratory behavior, reduce stress, and improve mental and physical health and well-being9–15. These benefits promote resilience, which is especially important for fitness and survival during a pandemic16,17. On the other hand, optimism biases can lead to suboptimal decision-making, and contextual factors such as acute stress, perceived threat, and depression have been shown to reduce or even reverse optimistically biased belief updating18–24. The latter findings suggest that optimistically biased belief updating should be weaker when experiencing a pandemic.
We leveraged a belief-updating dataset from 123 participants tested between 2019 and 2022 to adjudicate between these alternative hypotheses. Among them, fifty-eight participants were tested outside the context of the COVID-19 pandemic, either in October 2019, three months before the outbreak in France (n=30), or two years later in June 2022 (n=28), after the lifting of the sanitary state of emergency. Their belief-updating behavior was compared to that of sixty-five participants tested during the sanitary state of emergency due to the COVID-19 outbreak in France, either during the first, very strict lockdown of social and economic life (e.g., stay-at-home orders; schools, shops, and museums closed) from March to April 2020 (n=34) or one year later in May 2021 (n=31), when the lockdown was less strict (e.g., schools open, museums and shops closed, part-time curfew) but the COVID-19 pandemic was still unfolding. Belief updating was measured with a behavioral task that asked participants to estimate their risk of experiencing adverse future life events before and after receiving information about these events' actual base rates. Observed belief-updating behavior was fitted with an RL-like and a Bayesian updating model to gain insight into potential underlying strategies of belief updating. The learning rates were compared across groups for insight into how experiencing the COVID-19 pandemic changed beliefs about the future and their updating in the face of belief-disconfirming evidence.
Results
Effects of experiencing the COVID-19 pandemic on optimistically biased belief updating
A linear mixed effects (LME) model was fitted to belief updates to test whether belief updating was less or more biased during the COVID-19 pandemic. The model found a significant estimation-error-valence-by-context interaction (β = −5.54, SE = 1.69, t(232) = −3.28, p = 0.001, 95% CI [−8.87 – −2.21]; SI Table 2), which held when further controlling for the distance variable (SI Table 3). The power of this effect was 75%, leaving a 25% risk of a type II error. As shown in Figures 1a and b, optimistically biased belief updating disappeared during the COVID-19 pandemic compared to participants tested outside the pandemic. More specifically, it was decreased among participants tested during the initial strict COVID-19-related lockdown in March and April 2020 (EE valence by context 1: β = −7.39, SE = 2.29, t(228) = −3.32, p = 0.002, 95% CI [−11.91 – −2.86]; SI Table 4), as well as in May 2021 (EE valence by context 2: β = −5.59, SE = 2.36, t(228) = −2.37, p = 0.02, 95% CI [−10.24 – −0.93]; SI Table 4), compared to those tested before the outbreak in October 2019. The bias re-emerged among participants tested one year later, at the lifting of the sanitary state of emergency in June 2022, returning to levels akin to those observed before the pandemic in October 2019 (EE valence by context 3: β = −2.11, SE = 2.46, t(228) = −0.86, p = 0.39, 95% CI [−6.95 – 2.73]; Figure 1a, SI Table 4). The effect of the COVID-19 pandemic on belief updating was driven by a significant decrease in belief updating following good news during the pandemic compared to outside it (t(121) = 2.66, p = 0.009, Cohen's d = 0.48, two-sample, two-tailed t-test; Figure 1b). No contextual group difference was observed for belief updating following bad news (t(121) = −1.77, p = 0.08, Cohen's d = −0.32, two-sample, two-tailed t-test; Figure 1b).
This effect was reproduced when fitting an analogous LME to the belief updates observed in the group of participants (n=28) who were tested both before and during the pandemic (EE valence by context interaction: β = −7.66, SE = 1.49, t(103) = −5.13, p < 0.001, 95% CI [−10.62 – −4.70]; SI Figure 1, SI Table 5). Moreover, previous studies of optimistically biased belief updating calculated the estimation error (EE) as the difference between the estimate for someone else (eBR) and the base rate (BR), following EE = eBR − BR4,5,25,26. When categorizing trials as good or bad news based on this alternative EE calculation, the context-by-EE-valence interaction remained significant (SI Table 6). Note that all effects were controlled for participants' age, years of higher education, gender, confidence in the base rates, belief-updating task design, and estimation error magnitude.
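For illustration, the trial classification under both EE definitions can be sketched as follows. The variable names (e1 for the first self-estimate, ebr for the estimate for someone else, br for the base rate) follow the task description, while the function itself is a hypothetical helper, not the study's code:

```python
def classify_trial(e1, ebr, br, use_alternative=False):
    """Classify a trial as good or bad news from the estimation error.
    Default: EE = E1 - BR (own first estimate minus base rate).
    Alternative: EE = eBR - BR, as in refs 4, 5, 25, 26 (sketch)."""
    ee = (ebr - br) if use_alternative else (e1 - br)
    # a positive EE means the base rate is lower than expected: good news
    # (trials with EE == 0 are treated as bad news here for simplicity)
    return "good news" if ee > 0 else "bad news"
```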

Behavioral results.
(a) Boxplots display the belief-updating bias (i.e., the difference between the belief update for good news and the belief update for bad news) in each of the four participant groups, tested before the pandemic in October 2019 (n=30), during the first lockdown from March to April 2020 (n=34), under less restrictive measures in May 2021 (n=31), and at the end of the pandemic in June 2022 (n=28). (b) Belief updating for good and bad news during (n=65) and outside the pandemic (n=58). (c) Confidence ratings and (d) estimation errors for bad and good news during and outside the pandemic. Boxplots in all panels display 95% confidence intervals, with boxes indicating the interquartile range from the Q1 25th to the Q3 75th percentile. The horizontal black lines indicate medians, and whiskers span 1.5 times the interquartile range. The dots correspond to individual participants. The squares in the boxplots in (b) correspond to mean observed updates (purple) and mean modelled updates (blue; averaged across 1000 estimations) from the best-fitting models in each context, which were the optimistically biased RL-like model of belief updating outside the pandemic and the rational Bayesian model of belief updating during the COVID-19 pandemic. The source data file provides exact p-values. *p < 0.05, two-sample two-tailed t-tests; *p < 0.05, two-sample one-tailed t-tests.
Effects of experiencing the COVID-19 pandemic on belief updating variables
As shown in Figure 1c, experiencing the COVID-19 pandemic influenced participants' confidence in the base rates, with significantly lower confidence ratings among those tested outside the pandemic compared to those tested during it (β = 14.11, SE = 4.52, t(233) = 3.12, p = 0.002, 95% CI [5.19 – 23.02]; SI Table 7). Moreover, a significant EE-valence-by-context interaction (β = 2.19, SE = 0.67, t(233) = 3.28, p = 0.001, 95% CI [0.88 – 3.51]; SI Table 8) was found for absolute estimation error magnitude. This finding indicated that participants tested during the pandemic overestimated their risk of experiencing adverse future life events relative to base rates to a greater extent than participants tested outside the pandemic (t(121) = −3.01, p = 0.003, Cohen's d = −0.54; Figure 1d). In contrast, the two groups did not differ significantly in the magnitude of negative estimation errors (i.e., initial underestimations relative to base rates; t(121) = −0.49, p = 0.63, Cohen's d = −0.09, two-sample, two-tailed t-tests; Figure 1d). This finding contrasts with the observed difference in how often participants made positive estimation errors (i.e., the number of good news trials): participants tested during the pandemic overestimated less frequently than participants tested outside the pandemic (t(121) = 2.40, p = 0.02, Cohen's d = 0.43, two-sample, two-tailed t-test). No significant group difference was found for the frequency of underestimations (i.e., the number of bad news trials; t(121) = −1.85, p = 0.07, Cohen's d = −0.33, two-sample, two-tailed t-test). These results indicated that participants held fewer but stronger negative future outlooks during the pandemic compared to those tested outside the pandemic.

Computational model comparisons.
Twelve alternative models from the RL-like (blue) and Bayesian (orange) updating model families were fitted to observed belief updates for participants tested during the COVID-19 pandemic (left columns) and outside the pandemic (right columns). (a) Protected exceedance probabilities for each of the 12 alternative models, i.e., the probability that the model predominates in the population above and beyond chance. (b) Posterior model attributions. Colored cells display the probability that individual participants (y-axis) are best explained by a model version (x-axis). (c) Estimated model frequencies correspond to how many participants are expected to be best described by a model version, with error bars corresponding to standard deviations. The red line indicates the null hypothesis that all model versions are equally likely in the cohort (chance level). Labels on the x-axes of the histogram and bar graphs indicate the model versions with non-silenced parameters (S – scaling, A – asymmetry, PR – personal relevance of events).
Effects of experiencing the COVID-19 pandemic on putative mechanisms of belief updating
We then sought to identify which putative strategy participants used to update their beliefs about the future during and outside the pandemic. To answer this question, we used computational modeling and model comparisons to adjudicate between twelve alternative models. This approach revealed that belief updating outside the pandemic was more RL-like and optimistic (pxp = 1, Ef = 0.77), while during the pandemic, it was best explained by a rational Bayesian updating model (pxp = 0.90, Ef = 0.43; Figure 2a-c). Similar findings were obtained when conducting model comparisons in the participants tested both before and during the lockdown (n=28; SI Figure 3).
Effects of experiencing the COVID-19 pandemic on hidden, latent variables of belief updating
Next, we compared the effects of experiencing the COVID-19 pandemic on the learning rate and its components. To show that this adverse-context effect was indeed mediated by alterations in asymmetrical learning, we compared the scaling and asymmetry parameters obtained from the overall best-fitting model across the whole dataset of n=123 participants. This was Model 1, the optimistically biased RL-like model of belief updating (Ef = 0.40, pxp = 0.99, SI Figure 6).
A linear mixed effects (LME) model, analogous to the LME fitted to observed belief updates, was fitted to the learning rates and detected a main effect of EE valence (β = 0.09, SE = 0.01, t(236) = 7.14, p = 1.18e-11, 95% CI [0.06 – 0.11]; SI Table 9) and a significant EE-valence-by-context interaction (β = −0.03, SE = 0.02, t(236) = −2.11, p = 0.04, 95% CI [−0.07 – −0.002]; Figure 3a, SI Table 9). A main effect of EE valence (β = 0.08, SE = 0.02, t(105) = 3.22, p = 0.002, 95% CI [0.03 – 0.12]; SI Table 10) and of context (β = −0.10, SE = 0.03, t(105) = −3.10, p = 0.002, 95% CI [−0.17 – −0.04]; SI Table 10) on learning rates was detected when comparing the participants who were tested both before and during the pandemic. As shown in Figure 3a, all participants' learning rates were lower in response to bad news than to good news. Still, the difference between good and bad news learning rates was significantly reduced for participants tested during the pandemic. In line with the observed belief updating after good and bad news, the effect of context on the learning rates was driven by a decrease in the learning rates from positive estimation errors in participants tested during the pandemic compared to participants tested outside the pandemic (t(121) = 2.17, p = 0.03, Cohen's d = 0.39, two-sample, two-tailed t-test). The two groups did not differ in their learning rates from negative estimation errors (t(121) = 0.87, p = 0.39, Cohen's d = 0.16, two-sample, two-tailed t-test).
Parameter recovery was successful for the scaling (r = 0.92, p < 0.001) and asymmetry (r = 0.82, p < 0.001) components of the learning rates (Figure 3b), which indicated that the model gave identifiable values for these parameters (parameter recovery was also conducted on each group and each model family separately; results are reported in SI section 2b and SI Figure 4). We were therefore able to explore potential group differences in the learning rate components in more detail. Linear mixed effects modeling found a main effect of context for the asymmetry component (β = −0.04, SE = 0.02, t(117) = −2.32, p = 0.02, 95% CI [−0.07 – −0.01]; Figure 3c, SI Table 11), but not for the scaling component (β = −0.07, SE = 0.05, t(117) = −1.54, p = 0.13, 95% CI [−0.16 – 0.02]; Figure 3c, SI Table 12). The average asymmetry of learning rates was positive in both groups but significantly smaller in participants tested during the pandemic than in those tested outside it (t(121) = 2.00, p = 0.048, Cohen's d = 0.36, two-sample, two-tailed t-test; Figure 3c). This result indicated that participants considered positive estimation errors more than negative ones, but less so when experiencing the COVID-19 pandemic. Similar results were found in the within-subject group (n = 28), with a significant main effect of context on asymmetry (β = −0.06, SE = 0.02, t(51) = −3.72, p = 0.001, 95% CI [−0.09 – −0.03]; SI Table 13), but not on scaling (β = −0.10, SE = 0.05, t(51) = −1.96, p = 0.06, 95% CI [−0.21 – 0.003]; SI Table 14).
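The logic of parameter recovery can be illustrated with a minimal, self-contained sketch: simulate belief updates from known scaling and asymmetry parameters under an RL-like rule, add noise, refit by grid search, and check that the generating values are recovered. The update rule, noise level, and grid below are illustrative assumptions, not the paper's exact implementation:

```python
import random

def simulate_updates(params, errors):
    """Simulate belief updates from an RL-like model (sketch):
    update = (scaling + asymmetry) * |EE| for good news (EE > 0),
    update = (scaling - asymmetry) * |EE| for bad news (EE < 0)."""
    s, a = params
    return [(s + a if ee > 0 else s - a) * abs(ee) for ee in errors]

def fit_grid(updates, errors, grid):
    """Recover parameters by least-squares grid search (illustrative)."""
    best, best_sse = None, float("inf")
    for s in grid:
        for a in grid:
            pred = simulate_updates((s, a), errors)
            sse = sum((u - p) ** 2 for u, p in zip(updates, pred))
            if sse < best_sse:
                best, best_sse = (s, a), sse
    return best

# generate noisy updates from known parameters and check recovery
random.seed(1)
errors = [random.uniform(-30, 30) for _ in range(40)]  # 40 trials, as in the task
true = (0.4, 0.2)  # hypothetical generating scaling and asymmetry
noisy = [u + random.gauss(0, 0.5) for u in simulate_updates(true, errors)]
grid = [i / 100 for i in range(0, 101, 2)]
recovered = fit_grid(noisy, errors, grid)
```

Repeating this over many generating parameter sets and correlating generated with recovered values (Pearson's r, as in Figure 3b) quantifies how identifiable the parameters are.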

Parameter comparisons between participants tested during (n=65) and outside (n=58) the COVID-19 pandemic.
(a) Learning rates. Boxplots display 95% confidence intervals for learning rates from the RL-like updating model, which assumed that updating is proportional to the estimation error, with an asymmetry and a scaling learning rate component. (b) Parameter recovery for the learning rate components of the overall best-fitting Model 1 (n=123). Pearson's correlations between generating and recovered parameters for the scaling (left panel) and asymmetry (right panel) learning rate components. r – Pearson's correlation coefficient (tested against zero). Source data and exact p-values are provided as a Source Data file. (c) Group comparisons for scaling and asymmetry components. Boxplots display 95% confidence intervals for the learning rate's scaling (left panel) and asymmetry (right panel) components. Boxes in all boxplots correspond to the interquartile range from the Q1 25th percentile to the Q3 75th percentile. The horizontal black lines indicate medians, and whiskers span 1.5 times the interquartile range. The dots correspond to individual participants. *p < 0.05. P-values were obtained with two-sample, two-tailed t-tests between groups, and exact p-values are provided in the source data file.
Discussion
This study investigated how experiencing the COVID-19 pandemic impacted the optimism biases in updating beliefs about the future. Belief updating was optimistically biased before the COVID-19 outbreak, faded during the COVID-19 pandemic, and re-emerged after the pandemic. The lack of optimistically biased belief updating during the pandemic was related to three effects: (1) a decreased sensitivity to positive belief-disconfirming information, (2) fewer but stronger negative beliefs about the future, and (3) more confidence in base rates. Computational modeling showed that belief updating during the pandemic was best described by a rational Bayesian model, whereas an optimistic RL-like model best approximated belief updating outside the pandemic. Both models showed that the attenuated optimistically biased belief updating during the pandemic was not due to a learning deficit: the groups were similar in how much they integrated overall evidence for or against initial beliefs. Instead, it was explained by a diminished learning asymmetry in considering positive belief-disconfirming evidence that paralleled the observed belief-updating behavior.
The finding that optimistically biased belief updating faded during the pandemic favors the hypothesis that experiencing an adverse life event such as a pandemic weakens optimistic outlooks. It further aligns with the body of research that has explored the malleability of optimistically biased belief updating and information integration under acute threat, stress, and mood disorders such as depression18–24,27. While our findings align with this previous work, we also observed a difference. Notably, our sample tested during the pandemic considered positive, favorable information less while showing no change in the consideration of negative, unfavorable information. Differences in populations and task designs might explain these discrepancies. However, they could also be specific to experiencing the COVID-19 pandemic, which involved an immediate, unpredictable, and global health threat with high uncertainty about its outcome and significant psychological repercussions6,28.
Mental health assessments during the COVID-19 pandemic indicated that anxiety, stress, paranoia, and depression became more prevalent in the population7,8. The rapid spread of SARS-CoV-2 and the emergence of COVID-19 cases worldwide constituted a challenging situation that, within five months, shifted from an elusive and distant threat to an immediate and drastic health and economic crisis. All citizens were confronted daily with alarming figures such as infection rates or mortality, and rather mundane everyday activities, from grocery shopping to jogging, became stressful and threatening situations during which one could catch a potentially fatal infection. Moreover, for many, the COVID-19-related lockdowns of social and economic life implied a physical cut-off from friends and relatives, and life plans, routines, and activities were severely disrupted. They also implied substantial economic risk. Previous research has shown that economic uncertainties, particularly during periods of marked economic inequality and epidemics, can contribute to belief-updating fallacies reflected in the rise of conspiracy theories29.
We did not collect physiological measures of stress or information about participants' COVID-19 infection status, which precludes a direct exploration of the immediate effects of experiencing the infection on belief-updating behavior and of the potential interaction with anxiety and stress levels. Although subjective ratings of the perceived risk of death from COVID-19 correlated negatively with the belief-updating bias measured during the pandemic, this result was obtained retrospectively in a subset of participants (SI section 4). We thus cannot directly attribute the observed lack of optimistically biased belief updating during the lockdown to psychological causes such as heightened anxiety and stress. This limitation is noteworthy, as the impact of experiencing the pandemic on belief updating about the future could differ between those who directly experienced infection and those who remained uninfected. It is also important to acknowledge that our study was temporally and geographically limited to the context of the COVID-19 outbreak in France. Cultural variations and differences in governmental responses to contain the spread of SARS-CoV-2 may have impacted the optimism biases in belief updating differently.
The observed lack of optimistically biased belief updating may be interpreted as an adaptive response to the experience of an unprecedented level of uncertainty and chronic threat during a global crisis. Although we did not have access to anxiety and stress perceptions during and outside the pandemic, our computational modeling results corroborate this interpretation to some extent. Notably, belief updating was more optimistically biased and RL-like outside the pandemic and more rational and Bayesian-like during the pandemic. The biased RL-like updating behavior observed outside the pandemic indicated that participants relied on the motivational salience of positive estimation errors to update their beliefs about the future by trial and error. This finding aligns with past work showing that RL-like updating models best explain belief updating in non-threatening, non-stressful, and predictable laboratory contexts5. It further suggests that RL strategies are a computationally efficient way to guide decision-making and belief formation when the environment is stable and predictable30. For instance, in environments with well-defined reward structures, the human brain has been shown to rely efficiently on RL and avoid the computational overhead associated with Bayesian-like inference31. On the contrary, belief updating was more rational and Bayesian-like during the COVID-19 pandemic, indicating that participants weighed the uncertainty of evidence for and against their prior beliefs. This finding aligns with research using Bayesian networks to model semantic knowledge processing under uncertainty32 and with work that uses Bayes' rule to understand how humans learn and choose under uncertainty33–35.
It is essential to acknowledge that computational modeling provides insight into potential mechanisms, but it does not permit inferences about whether humans indeed update beliefs in the way the best-fitting model assumes. Other models, such as evidence accumulation models, may also capture how humans update their beliefs about the future and their immediate surroundings36. Unfortunately, we did not assess reaction times during belief updating, which are crucial for fitting evidence accumulation models such as drift-diffusion models to observed behavior. However, we can infer from our findings that the two model families employed to fit observed belief-updating behavior represented two different but complementary prediction strategies, deployed to function under the uncertainty of real-life conditions. We call for more studies investigating these computational models' psychological and biological validity under certainty and uncertainty.
In this study, we tested how actual adverse experiences affect the updating of negative future outlooks in healthy participants, in analogy to studies conducted in depressed patients19,20,24 following the cognitive model of depression37. One open question is whether our findings were specific to the adverse framing of the events38–40. We argue that under normal, non-adverse contexts, belief updating should also be optimistically biased for positive life events, as shown by previous research41,42. However, how contexts such as experiencing a challenging or favorable situation influence the updating of beliefs about positive and negative outlooks remains an open question.
In conclusion, our results provide insight into the resilience and adaptability of belief-updating processes during and following the COVID-19 pandemic. They demonstrate the malleability of the human ability to anticipate the future and how it can adapt to real-life conditions under which an overly optimistic view of future risks would be harmful.
Methods
Ethical considerations
The study protocol followed the Declaration of Helsinki and was approved by the local ethics committee at Sorbonne University. All participants provided informed consent. The authors declare no competing interests.
Participants
One hundred and twenty-five participants (mean age = 37.50 ± 1.28, 99 females), assigned to four different groups, were recruited for the study (see Table 1 and SI Table 15) via public advertisement. Two participants from the group tested in June 2022 were excluded from the analyses because they always indicated the same risk estimate for each event.
Experimental design
The first group of 30 participants (mean age = 33.73 ± 1.96, 18 females) was recruited in October 2019, before the COVID-19 outbreak in France (Figure 4a). These participants were tested in the laboratory. A second group of 34 participants (mean age = 42.24 ± 3.34, 25 females) was recruited from March to April 2020 for online testing during the first COVID-19-related lockdown of social and economic life, with schools closed. A third group of 31 participants (mean age = 42.42 ± 3.35, 20 females) was recruited and tested online in May 2021, immediately after the last lockdown and still during the COVID-19 pandemic. A fourth group of 30 participants (mean age = 34.66 ± 2.71, 16 females) was recruited at the lifting of the COVID-19-related state of emergency and tested in the laboratory in June 2022 (Figure 4a). This group was also used to rule out a potential effect of task design: half of them (n = 15) performed a one-run task design, and the other half (n = 15) performed a two-run task design (see the belief-updating task description below for details). Note that the 30 participants tested before the COVID-19 pandemic were recontacted during the first strict lockdown to re-perform the belief-updating task online (Figure 4a). This allowed us to check for the effects of experiencing a COVID-19-related lockdown within the same cohort of participants. Two of the thirty participants in this group did not respond; therefore, the sample size for the within-group test-retest analyses was 28 participants.

Experimental design.
(a) Timeline of testing. Four groups were tested: before the COVID-19 outbreak in October 2019, during the first complete lockdown of social and economic life in March and April 2020, after a partial lockdown in May 2021, and after the lifting of the pandemic-related state of emergency in June 2022. (b) Belief-updating task. Panels show subsequent appearances on the screen within a good news trial (left panels) and a bad news trial (right panels). Responses were self-paced. The task goal was to estimate the risk of experiencing different adverse future life events (e.g., tooth decay) for oneself (E1) and for somebody else (eBR) before and after (E2) being presented with information about the event's prevalence in the general population (i.e., the base rate).
Sample sizes:
The sample sizes were determined by a power analysis using the power curve function in R (version 1.2.5033) and building on the good news/bad news bias observed in the first group tested in October 2019 before the COVID-19 outbreak in France. The sample size required to replicate a significant effect of estimation error valence on the updating with a power between 80% and 90% lay between 28 and 35 participants, respectively.

Sociodemographic data for all four groups (N = 123).
♀: Female; ♂: Male; Note: education is the number of years completed in higher education after a high school diploma.
Belief Updating Task
All participants performed a belief-updating task (Figure 4b). For in-person testing, stimulus presentation and response recording were done with the Psychophysics Toolbox in MATLAB (R2018b, Update 6, version 9.5.0.1265761). Online testing was done using Qualtrics (Qualtrics software, March 2020 version, Copyright 2020 Qualtrics; available at https://www.qualtrics.com).
The task comprised 40 trials, each presenting one of 40 adverse lifetime events and its base rate. In each trial, participants were asked to estimate the likelihood of experiencing an adverse event in the future, for themselves and for somebody else, before and after receiving information about the likelihood of its occurrence in the general population (i.e., the base rate). The adverse life events and their actual base rates were taken from previously published work in healthy controls2,43. The base rates were approximately normally distributed (W = 0.952, p = 0.088, Shapiro-Wilk test) and ranged between 10% and 70%, with a mean of 40%. Participants rated their estimates between 3% and 77%, which ensured that even for the most likely (base rate = 70%) and most unlikely (base rate = 10%) events there remained room (7 percentage points) to update beliefs toward the base rates42,43. Moreover, all statistical models included the absolute estimation errors as a control for variance potentially explained by different estimation error magnitudes42,43.
In more detail, as illustrated in Figure 4b, each of the 40 trials began with the presentation of an adverse life event. Participants estimated their own risk and subsequently the risk of someone else of their age and gender. Then the base rate of the event in the general population was displayed on the computer screen. Participants rated their confidence in the accuracy of the presented base rate. Finally, they re-estimated their risk of experiencing the event, now informed by the base rate.
The task design varied between some groups:
Fifty-eight participants underwent assessment outside the COVID-19 pandemic, with 45 performing a two-run task design (n = 30 tested before the outbreak in October 2019; n = 15 tested at the end of the pandemic-related state of emergency in June 2022). The remaining 13 participants tested outside the pandemic performed the one-run task design, like the 65 participants tested during the pandemic.
In the two-run task design, participants performed a first run of 40 trials. Each trial started with the display of an adverse lifetime event. Participants were asked to estimate the risk of experiencing this event in the future for themselves (E1 rating) and for somebody else (eBR rating). At the end of each trial, they received information about the event’s base rate and rated their confidence in it. In a second run, they saw an adverse future life event and its base rate on each trial and then re-estimated their risk (E2 rating) on a trial-by-trial basis.
The one-run task design is displayed in Figure 4b and consists of one run of 40 trials. Within each trial, participants first estimated the risk of experiencing a future adverse lifetime event for themselves (E1 rating) and for somebody else (eBR rating), were presented with the base rate for this event (BR), rated their confidence in the base rate and re-estimated their risk of experiencing the event in the future (E2 rating). Note that all analyses were controlled for these differences in task design, which had non-significant effects on belief updating, confidence ratings, estimation error magnitude, and learning rates (see corresponding SI tables with LME results).
Belief updating task measures of interest:
The estimation error indicated whether participants overestimated or underestimated their likelihood of experiencing an adverse event (E1) relative to its actual base rate (BR). The estimation error (EE) was calculated according to equation i:
The estimation error was further used to categorize trials into good or bad news trials:
For good news trials, the estimation error was positive (EE > 0), which indicated an overestimation of one’s likelihood of experiencing an adverse life event relative to the base rate of that event (E1 > BR). For bad news trials, the estimation error was negative (EE < 0), which indicated an underestimation of one’s likelihood of experiencing an adverse event relative to its actual base rate (E1 < BR).
The main variable of interest was the magnitude of belief updating (UPD), which was calculated as the difference between the first (E1) and the second (E2) estimate after receiving information about the base rate (BR). Notably, the update was calculated for good and bad news trials, respectively, following equation ii:
Lastly, the difference between updating after good and bad news was calculated to assess the updating bias following equation iii:
A positive difference indicated that participants updated their beliefs about their lifetime risks of experiencing adverse life events to a greater extent following good news than following bad news.
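The equations referenced above are not reproduced in this excerpt; a reconstruction consistent with the definitions given in the text (and with the worked examples in the SI) is:

```latex
\begin{align}
\mathrm{EE} &= \mathrm{E1} - \mathrm{BR} && \text{(i)}\\
\mathrm{UPD}_{\mathrm{good}} &= \mathrm{E1} - \mathrm{E2}, \qquad
\mathrm{UPD}_{\mathrm{bad}} = \mathrm{E2} - \mathrm{E1} && \text{(ii)}\\
\mathrm{UPD}_{\mathrm{bias}} &= \mathrm{UPD}_{\mathrm{good}} - \mathrm{UPD}_{\mathrm{bad}} && \text{(iii)}
\end{align}
```

With this sign convention, updating toward the base rate yields positive UPD for both trial types, and a positive bias reflects larger updates after good news.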
For each participant, trials that did not receive a response (on average 0.44 trials per subject) and trials with an EE = 0 (on average 0.63 trials per subject) were excluded from the analyses.
The distance measured the extent to which participants considered their probability of experiencing a given adverse event (E1) to differ from the lifetime risk of someone from a similar socio-economic background (eBR). It was calculated as distance = eBR − E1; if positive, it reflected an optimistic bias in initial estimates. An additional analysis controlling for this measure is reported in the SI (SI Table 3).
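The per-participant measures above can be sketched in a few lines. This is an illustrative implementation (not the authors' code), assuming per-trial arrays `e1`, `br`, `e2`, and `ebr` with NaN marking missed trials:

```python
import numpy as np

def updating_measures(e1, br, e2, ebr):
    """Compute belief-updating measures for one participant.

    e1, br, e2, ebr: per-trial first estimates, base rates, second
    estimates, and estimates for somebody else; NaN marks trials
    without a response.
    """
    e1, br, e2, ebr = map(np.asarray, (e1, br, e2, ebr))
    ee = e1 - br                              # estimation error (eq. i)
    # Exclude missed trials and trials with EE = 0, as in the analyses.
    keep = ~np.isnan(e1) & ~np.isnan(e2) & (ee != 0)
    good = keep & (ee > 0)                    # good news: E1 > BR
    bad = keep & (ee < 0)                     # bad news:  E1 < BR
    upd_good = np.mean(e1[good] - e2[good])   # update after good news (eq. ii)
    upd_bad = np.mean(e2[bad] - e1[bad])      # update after bad news (eq. ii)
    bias = upd_good - upd_bad                 # updating bias (eq. iii)
    distance = np.mean(ebr[keep] - e1[keep])  # optimism in initial estimates
    return {"upd_good": upd_good, "upd_bad": upd_bad,
            "bias": bias, "distance": distance}
```

For example, a trial with E1 = 60, BR = 40, E2 = 50 is a good news trial with an update of 10 percentage points toward the base rate.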
Model-free statistical analyses of observed belief updating behavior:
The main aim of this study was to assess how belief updating was affected by the context of experiencing the COVID-19 pandemic.
We therefore conducted between-context analyses, contrasting groups tested during the pandemic (i.e., during the first lockdown in March/April 2020 and immediately after the last lockdown one year later in May 2021) with groups tested outside the pandemic context (i.e., before the outbreak in October 2019 and one year after the last lockdown in June 2022). All statistical tests were conducted using the MATLAB Statistical Toolbox (MATLAB 2018b, MathWorks) and JASP (JASP 0.16.4).
A first linear mixed effects model (LME 1) was fitted to the belief updating, following equation iv:
The model included fixed effects for estimation error magnitude (|EE|), estimation error valence (EE valence, coded −1 for bad news trials and 1 for good news trials), context (coded 0 for outside and 1 for during the COVID-19 pandemic), task design (coded 1 for one-run, 2 for two-run design), age, gender (coded 0 for male, 1 for female participants), level of education, and the interaction of interest EE valence by context. The model also included random intercepts nested by subject number and random slopes for estimation error magnitude and valence.
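Equation iv itself is not reproduced in this excerpt; a sketch consistent with the fixed and random effects just listed (coefficient numbering illustrative, i indexing trials and j subjects) is:

```latex
\begin{aligned}
\mathrm{UPD}_{ij} = {} & \beta_0 + \beta_1 |\mathrm{EE}|_{ij} + \beta_2\,\mathrm{valence}_{ij}
+ \beta_3\,\mathrm{context}_j + \beta_4\,(\mathrm{valence}_{ij} \times \mathrm{context}_j) \\
& + \beta_5\,\mathrm{design}_j + \beta_6\,\mathrm{age}_j + \beta_7\,\mathrm{gender}_j
+ \beta_8\,\mathrm{education}_j \\
& + u_{0j} + u_{1j}\,|\mathrm{EE}|_{ij} + u_{2j}\,\mathrm{valence}_{ij} + \varepsilon_{ij}
\qquad \text{(iv)}
\end{aligned}
```

Here u0j is the subject-level random intercept and u1j, u2j the random slopes for estimation error magnitude and valence.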
Subsequently, the same Linear Mixed Effects (LME) model was applied again to the belief update to explore the categorical effect of context in conjunction with EE valence. This allowed for a more specific comparison of the impact of EE valence between contexts (groups): those tested before the COVID-19 pandemic outbreak in October 2019 (baseline) compared to those tested during the initial COVID-19-related lockdown (context 1), those tested immediately after the last lockdown during the pandemic in Mai 2021 (context 2), and those tested one year post-pandemic in June 2022 (context 3), respectively.
Post-hoc two-tailed and one-tailed t-tests were conducted to characterize the directionality of detected main effects and interactions.
Post-hoc power analysis
The best fitting computational models of belief updating in each context (i.e., during and outside the pandemic) and their free parameters were used to simulate new belief updates44. Simulations were repeated 1000 times. At each iteration, the above-described linear mixed effects model (equation iv) was fitted to the simulated belief updates. The frequency across the 1000 iterations with which the LME detected a significant interaction of valence by context on simulated belief updating indicates the power of this interaction effect and, conversely, the chance of a type II error, i.e., failing to detect the effect when it was present.
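The simulate-then-refit logic can be sketched as follows. This is a toy version, not the paper's procedure: the generative effect sizes are invented for illustration, and a plain OLS stands in for the full mixed model of equation iv:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate_power(n_outside=58, n_during=65, n_iter=200, alpha=0.05):
    """Simulation-based power for the valence-by-context interaction.

    Toy generative assumptions (not the paper's fitted parameters):
    outside the pandemic, updating is larger after good news; during
    the pandemic, the valence effect is absent.
    """
    hits = 0
    for _ in range(n_iter):
        rows = []
        for context, n in ((0, n_outside), (1, n_during)):
            for _subj in range(n):
                for valence in (-1, 1):        # bad vs good news
                    mu = 8 + (0 if context else 4) * valence
                    rows.append({"upd": rng.normal(mu, 5),
                                 "valence": valence, "context": context})
        df = pd.DataFrame(rows)
        fit = smf.ols("upd ~ valence * context", df).fit()
        hits += fit.pvalues["valence:context"] < alpha
    return hits / n_iter
```

The returned fraction of iterations with a significant interaction is the estimated power; one minus that fraction approximates the type II error rate.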
Model-based analyses of belief-updating behavior
To gain more insight into the putative cognitive mechanisms of belief updating during and outside the COVID-19 pandemic, two families of non-linear computational models were fitted to the observed belief updating behavior, which are specified below.
Model specifications
(1) Reinforcement learning model of belief updating
A Reinforcement learning-like model assumed that belief updating is proportional to the magnitude of the estimation error following Kuzmanovic and Rigoux, 20175. The learning rate scaled the effect of the estimation error on belief updating following the generic equation v:
Importantly, the learning rate was estimated for good and bad news trials separately and following equations vi and vii:
For both types of trials, the learning rate was composed of two components that varied across participants. The scaling parameter (S) measured the extent to which a participant took the estimation error into account when updating beliefs. The asymmetry parameter (A) indicated to what extent the belief updating differed for positive and negative estimation errors. The priors for scaling and asymmetry were untransformed and unbounded. The mean of the prior distribution for scaling was set to one. Thus, a scaling of one meant that the updating magnitude equaled the estimation error magnitude. The mean of the prior distribution for the asymmetry parameter was set to zero. An asymmetry parameter value larger than zero meant positively biased updating, whereas an asymmetry parameter smaller than zero meant negatively biased belief updating.
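A minimal sketch of the RL-like update is given below. Since equations v–vii are not reproduced in this excerpt, it assumes an additive learning-rate composition (lr_good = S + A, lr_bad = S − A), a common choice consistent with the parameter descriptions above; the paper's exact parameterization may differ:

```python
def rl_update(e1, br, s=1.0, a=0.0):
    """RL-like belief update (sketch of equations v-vii).

    Assumes the additive composition lr_good = s + a, lr_bad = s - a
    (an illustrative assumption, not necessarily the paper's exact form).
    Returns the predicted second estimate E2.
    """
    ee = e1 - br                      # estimation error (eq. i)
    lr = s + a if ee > 0 else s - a   # valence-dependent learning rate
    # The update moves E1 toward the base rate in proportion to the EE:
    return e1 - lr * ee
```

With s = 1 and a = 0, the predicted second estimate equals the base rate, matching the statement that a scaling of one means the updating magnitude equals the estimation error magnitude; a > 0 produces larger updates after good news.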
A version of the RL-like model of belief updating took the personal relevance (PR) of presented adverse future life events into account following equations viii and ix:
The PR weighed the estimation error (EE) and corresponded to the difference between the estimated base rate (eBR – the estimated risk for somebody else) and the initial estimate (E1 – the estimated risk for oneself). Based on the sign of this difference between eBR and E1, the PR was calculated following equations x to xii:
(2) Bayesian belief updating model
A second family of computational models was fitted to belief updating behavior and assumed that belief updating was proportional or equal to the Bayes rule, following equations xiii and xiv5:
The scaling parameter (S) corresponded to the tendency of participants to update their beliefs in response to the presented base rate following Bayes’ rule. A scaling smaller than one (S < 1) indicated less belief updating than predicted by the Bayes rule, and a scaling larger than one (S > 1) indicated more updating than predicted by the Bayes rule.
The Bayes rule was used to define a Bayesian second estimate (E2b, the updated belief), which was calculated following equations xv and xvi:
With the Prior = P(BR), corresponding to the base rate (BR) of each event following equation xvii:
The Likelihood Ratio (LHR) indicates the probability of the initial estimate (E1) relative to the likelihood of the alternative estimated base rate (eBR) following equation xviii:
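Since equations xiii–xviii are not reproduced in this excerpt, here is an illustrative implementation using the standard odds form of Bayes' rule, with the base rate as prior and a likelihood ratio comparing E1 with eBR; the paper's exact parameterization may differ:

```python
def bayesian_update(e1, br, ebr, s=1.0):
    """Bayesian second estimate E2b (sketch of equations xiii-xviii).

    Probabilities are given in percent. The likelihood ratio and the
    odds-form update are illustrative assumptions consistent with the
    definitions in the text.
    """
    p1, pbr, pebr = e1 / 100, br / 100, ebr / 100
    lhr = (p1 / pebr) / ((1 - p1) / (1 - pebr))  # likelihood ratio (eq. xviii)
    prior_odds = pbr / (1 - pbr)                 # prior from base rate (eq. xvii)
    post_odds = prior_odds * lhr
    e2b = 100 * post_odds / (1 + post_odds)      # Bayesian second estimate
    # Scaling s moves the reported estimate proportionally toward E2b:
    return e1 + s * (e2b - e1)
```

In this form, a participant whose own estimate equals the estimate for somebody else (E1 = eBR) has a likelihood ratio of one, so the rational second estimate is simply the base rate; s < 1 then yields less updating than Bayes' rule prescribes, s > 1 more.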
Alternative models of these two model families (RL and Bayesian) were fitted to the observed belief-updating behavior. Each model alternative represented a different combination of free parameters composing the learning rate to test a total of 12 assumptions about the cognitive process underlying belief updating:
RL model 1. Belief updating is asymmetrical and proportional to the estimation error: S + A varied across participants.
RL model 2. Belief updating is non-asymmetrical and proportional to the estimation error: S varied, A was silent (fixed to zero).
RL model 3. Asymmetrical updating is equal to the estimation error: S is fixed (to one), and A varied.
RL model 4. Updating equals the estimation error: S and A were fixed.
RL model 5. Belief updating is asymmetrical, proportional to the estimation error, and moderated by the personal relevance of events (PR): S + A varied. PR was weighting the EE following equations x., xi. and xii.
RL model 6. Belief updating is non-asymmetrical and proportional to the estimation error moderated by PR: S varied, A was fixed, and PR weighted the EE following equations x., xi. and xii.
RL model 7. Asymmetrical updating equals the estimation error moderated by PR: S was fixed, A varied, and PR weighted the EE following equations x., xi. and xii.
RL model 8. Updating equals the estimation error moderated by PR: S and A were fixed, and PR weighted the EE following equations x., xi. and xii.
Bayesian model 1. Belief updating is asymmetrical and proportional to Bayes rule: S and A varied.
Bayesian model 2. Belief updating is proportional to a rational Bayes rule: S varied, and A was fixed.
Bayesian model 3. Belief updating equals an asymmetrical Bayes rule: S was fixed, and A varied.
Bayesian model 4. Belief updating equals a rational Bayes rule: S and A were fixed.
Model estimation
Models were estimated following the procedure reported by Kuzmanovic and Rigoux, 20175 and Bottemanne et al., 202224. In short, models were not hierarchical, and parameter estimation was thus less sensitive to differences in group sample sizes. For each participant, optimal scaling and asymmetry parameter values were obtained using variational Bayesian inference implemented in the VBA toolbox45.
Model comparisons
The free energy approximations of each model’s evidence in each participant were entered into a random effects Bayesian model comparison that yielded the two criteria considered for model selection: the estimated model frequency (Ef) in each group (i.e., the expected frequency of each model in the population) and the protected exceedance probability (pxp), which corresponded to the probability that a given model predominates in the population, above and beyond chance.
Parameter recovery
Parameter recovery analysis was conducted to check whether the free parameters of the winning models were identifiable and described the data better than any other set of parameters. The procedure was the same as reported in Bottemanne et al., 202224. In short, to validate the accuracy of the fitting procedure in providing meaningful parameter values, simulated belief updating data were generated using the observed parameter values for both the optimistic RL model and the optimistic Bayesian updating model. Subsequently, we applied the fitting procedure to these simulated data to iteratively ‘recover’ the parameters. The means of the parameters were set to correspond to the observed sample means (i.e., scaling = 0.39 ± 0.02, asymmetry = 0.07 ± 0.01 for the RL model; scaling = 0.42 ± 0.03, asymmetry = 0.05 ± 0.01 for the Bayesian model). This process was repeated to simulate 40 belief updates for each of 123 simulated participants. The model was then inverted by fitting it to the simulated data, yielding a new set of recovered values for scaling and asymmetry. Finally, the recovered and estimated parameters were compared by assessing their correlation using Pearson’s correlation coefficients.
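The simulate-refit-correlate loop can be sketched with a simplified one-parameter model. This toy version (not the VBA-based procedure) generates updates as UPD = s · EE plus noise, re-estimates s by least squares, and correlates generating with recovered values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def recover_scaling(n_subjects=123, n_trials=40, noise_sd=2.0):
    """Parameter-recovery sketch for a one-parameter RL-like model.

    Generating values are drawn around the reported sample mean
    (scaling = 0.39); everything else is illustrative.
    """
    true_s, recovered_s = [], []
    for _ in range(n_subjects):
        s = rng.normal(0.39, 0.1)                   # generating scaling
        ee = rng.uniform(-40, 40, n_trials)         # estimation errors
        upd = s * ee + rng.normal(0, noise_sd, n_trials)
        s_hat = np.sum(ee * upd) / np.sum(ee ** 2)  # least-squares estimate
        true_s.append(s)
        recovered_s.append(s_hat)
    r, p = stats.pearsonr(true_s, recovered_s)
    return r, p
```

A high Pearson correlation between generating and recovered parameters indicates that the fitting procedure returns meaningful, identifiable values.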
Parameter comparisons
To compare learning rates and learning rate components across groups, we used the parameters from the optimistically biased RL-like model (RL model 1), which performed best when fitted to the whole dataset (Ef = 0.40, pxp = 0.99) and reproduced the observed updating behavior as shown in SI Figure 6.
Individual learning rates from this RL model 1 and their scaling and asymmetry components were the dependent variables (DV) of the following generic linear mixed effects model (equation xix and xx):
The model included fixed effects for news valence (valence, coded 1 for good news, −1 for bad news), context (coded 0 for outside the pandemic, 1 for during the pandemic), task design (coded 1 for one-run and 2 for two-run), age, gender (coded 0 for male, 1 for female participants), and level of education. It also tested the interaction of interest context by valence. The intercept was nested at random by subject number.
Post-hoc one-sample and two-sample t-tests were conducted to characterize the directionality of effects.
Model recovery
To check if the 12 models were identifiable, a model recovery analysis was conducted using the VBA toolbox. In more detail, behavior was simulated using each of the 12 models with parameters estimated from participants’ actual behavior. Each simulated dataset was then fitted with all 12 models, and the model comparison procedure was performed to evaluate whether the generating model could be correctly identified. This resulted in a 12x12 confusion matrix comparing the performance of all models in fitting each simulated dataset (SI Figure 5). The matrix shows the estimated frequency of each fitted model (y axis) for the behavior generated by each model (x axis) and provides evidence for strong recovery of nearly all models, importantly including the two winning models: the optimistically biased RL-like model and the rational Bayesian model of belief updating. This analysis thus rules out that the two model families were confused and mitigates concerns about the validity of the model selection.
Additional files
Supplementary information
1. Additional behavioral analysis
a. Within-group comparisons

Belief updating within the same group of participants tested before and during the COVID-19 pandemic (n=28).
Boxplots display 95% confidence intervals for belief updating after bad (left panel) and good (right panel) news, during and outside the pandemic. Boxes indicate the interquartile range from the 25th (Q1) to the 75th (Q3) percentile. The horizontal black lines indicate medians, and whiskers extend to the most extreme values within 1.5 times the interquartile range. The dots correspond to individual participants. The source data file provides exact p-values. *p < 0.05, two-sample, two-tailed t-tests.
Belief updating was compared within a group of participants (n=28), who were tested both before (October 2019) and during the COVID-19 pandemic (March-April 2020). A mixed effects linear regression analysis of belief updating showed a significant main effect of EE valence (β = 4.05, SE = 1.22, t(103) = 3.31, p = 0.001, 95% CI [1.63 – 6.47]), as well as a significant effect of testing context (β = −4.41, SE = 1.49, t(103) = −2.95, p = 0.004, 95% CI [−7.37 – −1.45]), and more importantly a significant EE valence by testing context interaction (β = −7.66, SE = 1.49, t(103) = −5.13, p < 0.001, 95% CI [−10.62 – −4.70], SI Figure 1, SI Table 4). A significant main effect of gender was also found (β = 3.53, SE = 1.64, t(103) = 2.15, p = 0.03, 95% CI [0.27 – 6.79]), with women updating their beliefs more than men. Post-hoc t-tests revealed that participants tested before the emergence of the pandemic updated their beliefs more after receiving good news (mean UPDgood = 14.38 ± 1.14) than after receiving bad news (mean UPDbad = 6.23 ± 1.17; t(27) = 4.93, p < 0.001, Cohen’s d = 0.93, paired sample, two-tailed t-test). This asymmetry in belief updating was not observed when the same 28 participants were tested during the first lockdown (mean UPDgood = 2.25 ± 1.76; mean UPDbad = 9.30 ± 2.47; t(27) = −1.84, p = 0.08, Cohen’s d = −0.35, paired sample, two-tailed t-test).
b. Sources of variance in belief updating
In Figure 1b, participants tested during the COVID-19 pandemic showed more variance in belief updating in response to both good and bad news than those tested outside the pandemic. This variability might be because they ignored the base rates when updating their beliefs, possibly influenced by the shift to online testing during the pandemic.
If participants paid attention to the base rates, they were expected to update toward the base rate, yielding positive values for the update. On the contrary, ignoring the base rates can be reflected by updating away from the base rate, with second estimates (E2) that, in the case of good news trials, lie above the first estimate (E1; e.g., E1 = 60%, BR = 40%, E2 = 70%, UPD = E1 − E2 = −10%). Likewise, in the case of bad news trials, second estimates may lie below the first estimate (e.g., E1 = 20%, BR = 40%, E2 = 10%, UPD = E2 − E1 = −10%). We examined the number of trials with such paradoxical second estimates yielding negative values for the belief update. We found no significant difference (t(121) = 1.77, p = 0.08, Cohen’s d = 0.32, two-sample two-tailed t-test) between the number of paradoxical trials in participants tested outside (6.09 ± 0.47 trials on average) and during (4.77 ± 0.56 trials on average) the pandemic. This suggests that participants tested online exhibited no greater propensity for paradoxical responses than those tested in person (non-significant effect of context on the number of paradoxical trials: β = −0.19, SE = 1.57, t(234) = −0.12, p = 0.90, 95% CI [−3.28 – 2.90]; SI Table 16).
Second, we tested if the observed variance in belief updating between the groups was due to second estimates that over- and undershot the base rates. For instance, individuals who undershot indicated second estimates below base rates signaling good news (e.g., E1 = 40%, BR = 20%, E2 = 10%). In contrast, individuals who overshot indicated second estimates above the base rates signaling bad news (e.g., E1 = 10%, BR = 20%, E2 = 40%). Undershooting might indicate an attuned sensitivity to good news and overshooting an attuned sensitivity to bad news. We found that participants tested outside the COVID-19 pandemic undershot more often (on average 4.48 ± 0.46 trials) than they overshot (on average 2.38 ± 0.33 trials; t(57) = 3.76, p < 0.001, Cohen’s d = 0.49, paired-sample, two-tailed t-test). This propensity aligned with the bias to update beliefs more after good than after bad news in this group. Conversely, participants tested during the pandemic did not show a significant difference in the number of trials on which they under- (2.00 ± 0.32 trials on average) or overshot (3.08 ± 0.73 trials on average; t(64) = −1.34, p = 0.19, Cohen’s d = −0.17, paired-sample, two-tailed t-test), aligning with the absence of the good news/bad news bias in belief updating. Critically, when comparing the two groups, we found a significant positive testing context by shooting type interaction (β = 1.66, SE = 0.50, t(233) = 3.33, p = 0.001, 95% CI [0.68 – 2.65]; SI Table 17). Post-hoc t-tests showed that participants tested outside the COVID-19 pandemic undershot more often than participants tested during the pandemic (t(121) = 4.48, p < 0.001, Cohen’s d = 0.81, two-sample two-tailed t-test). No differences were observed between the groups regarding overshooting (t(121) = −0.84, p = 0.40, Cohen’s d = −0.15, two-sample two-tailed t-test).
These findings indicated that participants tested outside the pandemic were more sensitive to good news, corroborating the findings of a more robust good news/bad news bias in belief updating found in this group.
c. Optimism biases in initial beliefs
As shown in SI Figure 2, participants displayed an optimism bias in their initial belief estimates: they estimated that adverse events are more likely to happen to others than to themselves (β = 3.02, SE = 0.86, t(232) = 3.53, p = 0.001, 95% CI [1.33 – 4.71]; SI Table 18). However, the pandemic context had no significant effect (β = −1.91, SE = 3.00, t(232) = −0.64, p = 0.52, 95% CI [−7.82 – 4.00]; SI Table 18). We conclude from these additional analyses that the pandemic context specifically influenced optimistically biased belief updating but did not affect the optimism bias in initial beliefs.
Note that the optimism bias is the propensity to believe that negative events are more likely to happen to others than to oneself. In contrast, the terms optimistically biased belief updating and optimism biases in belief updating used throughout the main text refer specifically to the valence effect in belief updating, i.e., updating beliefs more in response to good news than to bad news.

Optimism bias in initial beliefs about adverse future life events.
First estimates of the likelihood of an adverse life event happening to oneself (left) or someone else (right), before (n=58) and during (n=65) the COVID-19 pandemic. Boxplots display 95% confidence intervals, with boxes indicating the interquartile range from the 25th (Q1) to the 75th (Q3) percentile. The horizontal black lines indicate medians, and whiskers extend to the most extreme values within 1.5 times the interquartile range. The single dot and vertical line in the middle correspond to the means and standard errors. The contiguous dots correspond to individual participants.
2. Additional computational analyses
a. Model comparisons among participants tested both before and during the pandemic
Bayesian model comparison in the group of participants tested both before and during the lockdown showed that belief updating was more RL-like and optimistic before the lockdown (Ef = 0.73, pxp = 1), and rational Bayesian-like during the lockdown (Ef = 0.61, pxp = 0.99; SI Figure 3).

Estimated model frequencies for participants tested both before and during the COVID-19 pandemic.
(a) Posterior model attributions. Colored cells display the probability that individual participants (y-axis) will be best explained by a model version (x-axis). (b) Estimated model frequencies. The histograms display average posterior model frequencies that reflect how many participants are expected to be best described by a model version, with error bars corresponding to standard deviations. The red line indicates the null hypothesis that all model versions are equally likely in the cohort (chance level). Labels on the x-axis of the histograms indicate the model versions by their non-silenced parameters (S: scaling, A: asymmetry, PR: personal relevance factor).
b. Parameter recovery for the winning model in each group of participants
To ensure that parameter estimates were robust for the best-fitting models within each context (i.e., during and outside the pandemic), we conducted parameter recovery in each group of participants. For the participants tested outside the pandemic, scaling and asymmetry parameters were strongly recovered by the RL-like model (Model 1; SI Figure 4, scaling: r = 0.92, p = .001; asymmetry: r = 0.75, p < .001). For the group tested during the pandemic, recovery of the scaling (r = 0.85, p < .001) and asymmetry (r = 0.71, p < .001) parameters by the Bayesian model of belief updating was good (SI Figure 4).

Parameter recovery for the winning model family according to context.
Pearson’s correlation between generating and recovered parameters for the scaling (upper panel) and asymmetry (lower panel) learning rate components in participants tested outside (n = 58; left panel) and during (n = 65; right panel) the pandemic. The blue dotted lines correspond to 95% confidence intervals. r: Pearson’s correlation coefficient, tested against zero.
c. Model recovery and confusion analysis of the computational models of Belief Updating
To ensure the robustness and specificity of our computational models, we performed a model recovery analysis. Using simulated data, we tested whether each of the 12 models in our comparison framework could be correctly identified under the estimation and selection criteria used in our study. Results indicated that all models were well recovered (SI Figure 5), except for the models in both families with fully fixed parameters (i.e., scaling = 1 and asymmetry = 0), which were better recovered by the model with the scaling parameter fixed but the asymmetry parameter estimated iteratively.

Model recovery confusion matrix.
The matrix displays the estimated model frequencies from the model recovery analysis. Each column represents the generative model used to simulate behavioral data, while each row indicates the model used to recover data during the fitting procedure. Higher values along the diagonal (blue) indicate successful recovery, confirming that each model can be reliably distinguished from the others. Off-diagonal values (gray) reflect potential misattributions.
3. Comparison of observed and modeled behavior
To check if the overall best fitting optimistically biased RL model reproduced observed belief updating behavior, we compared the observed belief updates in each participant to the updates predicted by the overall winning model (model 1 – RL model with both parameters estimated). This comparison is depicted in SI Figure 6, which highlights a strong correspondence between the actual data and the modeled estimates.

Observed and modelled belief updating for the whole participant sample (n=123).
This figure illustrates the percentage of belief update for each participant (blue line) and the estimated belief update (black line) from the overall best fitting optimistically biased RL-like model of belief updating. The shaded blue area reflects the variance in the observed data. The colored background highlights the four groups of participants tested in different contexts: before the COVID-19 pandemic (gray), during the 1st lockdown (red), at the time of the last lockdown's release (beige), and one year later (green).
4. Correlations between self-reported attitudes during the COVID-19 lockdown and belief updating
Self-reported anxiety and perceived-risk measures experienced during the lockdown were collected in a subset of participants (n=40, comprising n=21 tested both before and during the 1st strict lockdown and n=19 tested solely during the 1st lockdown). These reports were given retrospectively at the release of the 1st lockdown in summer 2020, when the pandemic was still unfolding (SI Table 1).
Exploratory correlations revealed some noteworthy trends, although these did not survive correction for multiple comparisons. We found that participants who reported having perceived a greater risk of death due to contagion were also those who were less optimistically biased when updating their beliefs about adverse future life risks during the first strict COVID-19-related lockdown (r = −0.36, p = 0.02, Pearson’s correlation).
Moreover, parameter estimates from the computational models of belief updating showed associations with specific survey responses: The Bayesian model’s scaling parameter correlated positively with adherence to distancing measures (r = 0.41, p = 0.01) and negatively with the need for social contact (r = −0.37, p = 0.02). This result indicated that participants who were updating their beliefs faster were more likely to follow preventive guidelines and felt less social craving. Meanwhile, the asymmetry parameter correlated negatively with mask wearing (r = −0.41, p = 0.01), positively with physical contact with close others (r = 0.32, p = 0.04) and satisfaction with social interactions (r = 0.33, p = 0.04). This suggests that participants who displayed some asymmetry in belief updating during the COVID-19 pandemic were less likely to comply with mask-wearing rules and more likely to engage in social interactions.
Note that the correlations between these questionnaire measures of behavior during COVID-19-related lockdowns and the free parameters hold for both model families (RL and Bayesian). However, for the sake of parsimony, we reported only the correlations with the parameters from the Bayesian model, as it provided the best fit to the observed belief updating behavior during the COVID-19 period.
5. Belief updating task instructions
The following instructions (translated from French) were displayed to participants prior to performing the task:
This task measures your beliefs about future life events. You will be presented with 40 different future life events, one at a time. Each time, you are asked to estimate:
1. The likelihood of the event happening in your future life.
2. The likelihood of the event happening to someone else your age and gender.
You will then see the base rate for the event and will be asked to:
3. Rate your confidence in the base rate information – how much you believe it to be accurate on a scale between 0 and 100%.
At the end of each trial and after having seen the base rate for the event you are again asked to estimate:
4. The likelihood of the event happening in your future life.
Please estimate the likelihoods of the future life events on a scale between a minimum likelihood of 3% and a maximum likelihood of 77%.
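Under the trial structure described above, each trial yields a first self-estimate (E1), an estimate for others (eBR), the presented base rate (BR), a confidence rating, and a second self-estimate (E2). A minimal sketch of the derived quantities referenced in the tables below (variable names and the sign convention, positive update meaning movement toward the base rate, are common choices in this task family rather than quoted from the paper):

```python
def trial_measures(e1, e2, br):
    """Derived quantities for one belief-updating trial.
    Estimates and base rates are percentages, clipped to the
    task's 3-77% response range."""
    clip = lambda v: max(3, min(77, v))
    e1, e2, br = clip(e1), clip(e2), clip(br)
    ee = e1 - br                       # signed estimation error
    good_news = ee > 0                 # overestimated own risk: desirable error
    # belief update, signed so that positive = movement toward the base rate
    upd = (e1 - e2) if good_news else (e2 - e1)
    return ee, upd, good_news

# Hypothetical good-news trial: estimated 40%, base rate 20%, revised to 30%
ee, upd, good = trial_measures(40, 30, 20)
# → (20, 10, True): overestimated the risk by 20, updated 10 toward the base rate
```

An optimism bias in this framework shows up as systematically larger updates after good news (`good_news == True`) than after bad news.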

Survey responses from n=40 participants tested during the pandemic.


Linear Mixed-Effects Model results fitting the average Belief Updates (UPD) in participants tested outside (n=58) and during (n=65) the pandemic


Linear Mixed-Effects Model results fitting the average Belief Updates (UPD) in participants tested outside (n=58) and during (n=65) the pandemic, corrected for distance defined by the difference between the estimate for oneself (E1) and for others (eBR).


Linear Mixed-Effects Model results fitting the average Belief Updates (UPD) in participants tested before the COVID-19 outbreak in France (October 2019, n=30, baseline), and comparing them to participants tested during the first lockdown in March/April 2020 (n=34, context 1), one year later in May 2021 with less strict measures in place (n=31, context 2), and at the lifting of the state of health emergency in June 2022 (n=28, context 3).


Linear Mixed-Effects Model results fitting the average Belief Updates (UPD) in participants tested both before and during the pandemic (n = 28)


Linear Mixed-Effects Model results fitting the average belief updates (UPD) in participants tested outside (n=58) and during (n=65) the pandemic, corrected for distance, and with estimation errors (EE) calculated based on the estimate for someone else (eBR)

Linear Mixed-Effects Model results fitting the average confidence ratings in participants tested outside (n=58) and during (n=65) the pandemic

Linear Mixed-Effects Model results fitting the average absolute Estimation Error (EE) in participants tested outside (n=58) and during (n=65) the pandemic

Linear Mixed-Effects Model results fitting the average Learning Rates from the RL-like model in participants tested outside (n=58) and during (n=65) the pandemic

Linear Mixed-Effects Model results fitting the average Learning Rates from the RL-like model in participants tested both before and during the pandemic (n = 28)

Linear Mixed-Effects Model results fitting the average asymmetry in the RL-like model in participants tested outside (n=58) and during (n=65) the pandemic

Linear Mixed-Effects Model results fitting the average scaling in the RL-like model in participants tested outside (n=58) and during (n=65) the pandemic

Linear Mixed-Effects Model results fitting the average asymmetry in the RL-like model in participants tested both before and during the pandemic (n = 28)

Linear Mixed-Effects Model results fitting the average scaling in the RL-like model in participants tested both before and during the pandemic (n = 28)

Sociodemographic data (N = 123)

Linear Mixed-Effects Model results fitting the average number of paradoxical trials in participants tested outside (n=58) and during (n=65) the pandemic

Linear Mixed-Effects Model results fitting the average number of under- and overshooting trials in participants tested outside (n=58) and during (n=65) the pandemic

Linear Mixed-Effects Model results fitting initial beliefs about the likelihood of adverse future life events for oneself (E1) and for others (eBR) in participants tested outside (n=58) and during (n=65) the pandemic.
Note that the perspective regressor (coded 0 for E1 and 1 for eBR) tested whether and how beliefs differed when assessed for oneself versus for others.
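As a minimal illustration of this coding (hypothetical numbers; the paper's analyses use full mixed-effects models with random participant effects, not this simple fixed-effects version), regressing initial beliefs on a 0/1 perspective dummy recovers the self-versus-other mean difference as the slope:

```python
from statistics import mean

# Hypothetical initial estimates: E1 (self, perspective=0) and eBR (others, perspective=1)
e1 = [22, 30, 18, 26]
ebr = [28, 35, 25, 32]

y = e1 + ebr
x = [0] * len(e1) + [1] * len(ebr)

# OLS slope and intercept for a single binary regressor
xbar, ybar = mean(x), mean(y)
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
intercept = ybar - slope * xbar

# With 0/1 dummy coding: intercept = mean(E1), slope = mean(eBR) - mean(E1)
```

A positive slope therefore indicates that adverse events were judged more likely for others than for oneself, the classic signature of comparative optimism.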
References
- 1. Unrealistic optimism about future life events. Journal of Personality and Social Psychology 39:806–820
- 2. How unrealistic optimism is maintained in the face of reality. Nature Neuroscience 14:1475–1479
- 3. Forming Beliefs: Why Valence Matters. Trends in Cognitive Sciences 20:25–33
- 4. Influence of vmPFC on dmPFC Predicts Valence-Guided Belief Formation. J. Neurosci 38:7996–8010
- 5. Valence-Dependent Belief Updating: Computational Validation. Front. Psychol 8:1087
- 6. COVID-19 and mental health: A review of the existing literature. Asian Journal of Psychiatry 52:102066
- 7. Mental Health and COVID-19: Early evidence of the pandemic's impact: Scientific brief, 2 March 2022. World Health Organization (2022)
- 8. Paranoia and belief updating during the COVID-19 crisis. Nat Hum Behav 5:1190–1202
- 9. Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin 103:193–210
- 10. Distinguishing optimism from neuroticism (and trait anxiety, self-mastery, and self-esteem): A reevaluation of the Life Orientation Test. Journal of Personality and Social Psychology 67:1063–1078
- 11. Optimism, pessimism, and psychological well-being. In: Chang E. C. (ed.)
- 12. Does Happiness Promote Career Success? Journal of Career Assessment 16:101–116
- 13. Optimism. Clinical Psychology Review 30:879–889
- 14. Dispositional optimism. Trends in Cognitive Sciences 18:293–299
- 15. Is Optimism Associated With Healthier Cardiovascular-Related Behavior?: Meta-Analyses of 3 Health Behaviors. Circ Res 122:1119–1134
- 16. The Glass is Half-Full: Overestimating the Quality of a Novel Environment is Advantageous. PLoS ONE 7:e34578
- 17. The evolution of misbelief. Behav Brain Sci 32:493–510
- 18. Depressive symptoms are associated with unrealistic negative predictions of future life events. Behaviour Research and Therapy 44:861–882
- 19. Losing the rose tinted glasses: neural substrates of unbiased belief updating in depression. Front. Hum. Neurosci 8
- 20. Depression is related to an absence of optimistically biased belief updating about future life events. Psychol. Med 44:579–592
- 21. Updating Beliefs under Perceived Threat. J. Neurosci 38:7901–7911
- 22. Risk Perception and Optimism Bias during the Early Stages of the COVID-19 Pandemic. https://doi.org/10.31234/osf.io/epcyb
- 23. Self-beneficial belief updating as a coping mechanism for stress-induced negative affect. Scientific Reports 11:17096
- 24. Evaluation of Early Ketamine Effects on Belief-Updating Biases in Patients With Treatment-Resistant Depression. JAMA Psychiatry 79:1124
- 25. How Robust Is the Optimistic Update Bias for Estimating Self-Risk and Population Base Rates? PLoS ONE 9:e98848
- 26. Self‐specific Optimism Bias in Belief Updating Is Associated with High Trait Optimism. Behavioral Decision Making 28:281–293
- 27. Under Threat, Weaker Evidence Is Required to Reach Undesirable Conclusions. J. Neurosci 41:6502
- 28. The psychological impact of quarantine and how to reduce it: rapid review of the evidence. The Lancet 395:912–920
- 29. Conspiracy Theories: A Public Health Concern and How to Address It. Front. Psychol 12:682931
- 30. Reinforcement Learning and Episodic Memory in Humans and Animals: An Integrative Framework. Annu. Rev. Psychol 68:101–128
- 31. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience 8:1704–1711
- 32. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann
- 33. How to Grow a Mind: Statistics, Structure, and Abstraction. Science 331:1279–1285
- 34. Optimal Predictions in Everyday Cognition. Psychol Sci 17:767–773
- 35. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 349:273–278
- 36. Evidence accumulation is biased by motivation: A computational account. PLoS Comput Biol 15:e1007089
- 37. Cognitive therapy of depression. New York: Guilford Press
- 38. Unrealistic optimism about future life events: a cautionary note. Psychol Rev 118:135–154
- 39. A pessimistic view of optimistic belief updating. Cogn Psychol 90:71–127
- 40. Optimism where there is none: Asymmetric belief updating observed with valence-neutral life events. Cognition 218:104939
- 41. Optimistic belief updating despite inclusion of positive events. Learning and Motivation 58:88–101
- 42. Optimistic update bias holds firm: Three tests of robustness following Shah et al. Consciousness and Cognition: An International Journal 50:12–22
- 43. A guideline and cautionary note: How to use the belief update task correctly. Methods in Psychology 6:100091
- 44. Ten simple rules for the computational modeling of behavioral data. eLife 8:e49547
- 45. VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data. PLoS Comput Biol 10:e1003441
Cite all versions
You can cite all versions using the DOI https://doi.org/10.7554/eLife.101157. This DOI represents all versions, and will always resolve to the latest one.
Copyright
© 2024, Khalid et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.