Abstract
Optimistically biased belief updating is essential for mental health and resilience in adversity. Here, we asked how experiencing the COVID-19 pandemic affected optimism biases in updating beliefs about the future. One hundred and twenty-three participants estimated the risks of experiencing adverse future life events in the face of belief-disconfirming evidence either outside the pandemic (n=58) or during the pandemic (n=65). While belief updating was optimistically biased and reinforcement-learning-like outside the pandemic, the bias faded, and belief updating became more rational and Bayesian-like during the pandemic. This malleability of anticipating the future during the COVID-19 pandemic was further underpinned by a lower integration of positive belief-disconfirming information, fewer but stronger negative estimations, and more confidence in base rates. The findings offer a window into the putative cognitive mechanisms of belief updating during the COVID-19 pandemic, driven more by quantifying the uncertainty of the future than by the motivational salience of optimistic outlooks.
Introduction
Anticipating the future is an essential part of human thinking. These beliefs guide how we understand the world; updating them is vital for learning to make better predictions and to generalize across contexts. Interestingly, belief updating tends to be optimistically biased(1). Even when confronted with negative information, humans often downplay its importance to maintain an optimistic view of the future. The underlying mechanism of optimistically biased belief updating involves an asymmetry in learning from positive and negative belief-disconfirming information(2)(3)(4). This optimism bias can be formalized in two ways(5). Reinforcement learning (RL) models assume that belief updating is proportional to the estimation error, which reflects a form of surprise when we face information that disconfirms our initial beliefs. For example, the positive surprise of learning that the risk of contracting a disease is lower than initially thought carries motivational salience, which favors lowering the initial risk estimate while discouraging updates that would worsen beliefs(5).
In contrast, many aspects of human reasoning, including belief updating, can be formalized using Bayes’ rule. Bayesian updating models assume that initial beliefs are prior probabilities, or guesses, that express how strongly a person predicts future life events within a finite set of possible outcomes before being presented with evidence in line with or at odds with this initial belief. Following Bayes’ rule, a belief is updated by combining the observed evidence for the initial belief with its likelihood ratio, which expresses the evidence’s diagnosticity(6). For example, updating one’s risk of contracting a disease in a Bayesian way means quantifying the uncertainty of this event for oneself compared to others. This updating can be optimal and unbiased, which we refer to here as rational Bayesian. Alternatively, it can be optimistically biased, with more Bayesian belief updating when the odds against contracting a disease are favorable.
Although RL-like and Bayesian updating models make different assumptions about the updating strategy, they are complementary and powerful formalizations of human reasoning. Both models provide insight into hidden, latent variables of the updating process. Most notably, the learning rate indicates how quickly a person updates their beliefs. It comprises two free parameters: The scaling parameter weighs how much the estimation error is considered following an RL-like updating model or the product of the evidence for the prior and its diagnosticity following a Bayesian updating model. The asymmetry parameter expresses how much this scaling differs in the face of positive and negative belief-disconfirming evidence. Both the scaling and asymmetry parameters can vary between individuals and contexts and, through this variance, offer possible explanations for the idiosyncrasy in belief-updating behavior and its deviation from rationality.
The COVID-19 pandemic represented a chronic, adverse life context, drastically altering individual and social realities. The existing literature documenting the impact of the COVID-19 pandemic and the associated changes in everyday life on mental health has shown that stress, anxiety, and depression increased during the pandemic(7)(8). Moreover, previous work has demonstrated that belief updating during reversal learning became more erratic and was linked to a rise in paranoia during the COVID-19 pandemic across the US(9). However, it is unknown how optimism biases in belief updating about the future and their underlying putative mechanisms changed during the experience of such an unprecedented, adverse life event. We considered two competing hypotheses. On the one hand, maintaining optimistically biased belief updating under lasting, adverse life conditions may be adaptive. The optimism bias benefits exploratory behavior, reduces stress, and improves mental and physical health and well-being(10)(11)(12)(13)(14)(15). These benefits promote resilience, especially for fitness and survival during a pandemic(15)(16)(17).
On the contrary, optimism biases can lead to suboptimal decision-making. Contextual factors such as acute stress, perceived threat, and depression have been shown to reduce or even reverse optimistically biased belief updating(17)(18)(19)(20)(21)(22)(23). These findings suggest that optimistically biased belief updating should be weaker when experiencing a pandemic.
We leveraged a belief-updating dataset from 123 participants tested between 2019 and 2022 to arbitrate between these alternative hypotheses. Among them, fifty-eight participants were tested outside the context of the COVID-19 pandemic, either in October 2019, three months before the outbreak in France (n=30), or two years later in June 2022 (n=28), after the lifting of the sanitary state of emergency. Their belief updating behavior was compared to that of sixty-five participants tested during the sanitary state of emergency due to the COVID-19 outbreak in France. This was either during the first, very strict lockdown of social and economic life (e.g., schools, shops, and museums closed; stay-at-home orders) from March to April 2020 (n=34) or one year later in May 2021 (n=31), when lockdowns were less strict (e.g., schools open, museums and shops closed, part-time curfew) but the COVID-19 pandemic was still unfolding. Belief updating was measured by a behavioral task that asked participants to estimate their risk of experiencing adverse future life events before and after receiving information about these events’ actual base rates. Observed belief updating behavior was fitted to an RL-like and a Bayesian updating model to gain insight into potential underlying strategies of belief updating. The learning rates were compared across groups for insight into how experiencing the COVID-19 pandemic changed beliefs about the future and their updating in the face of belief-disconfirming evidence.
Methods
Ethical considerations
The study protocol followed the Declaration of Helsinki and was approved by the local ethics committee at Sorbonne University. All participants provided informed consent. The authors declare no competing interests.
Participants
One hundred twenty-five participants (mean age = 37.50 ± 1.28, 99 females), allocated to four different groups, were recruited for the study (see Table 1 and SI Table 15) via a public advertisement. Two participants from the group tested in June 2022 were excluded from the analyses because they always indicated the same risk estimate for each event.
Experimental design
As shown in Figure 1a, a first group of 30 participants (mean age = 33.73 ± 1.96, 18 females) was recruited in October 2019, before the COVID-19 outbreak in France. These participants were tested in the laboratory. A second group of 34 participants (mean age = 42.24 ± 3.34, 25 females) was recruited for online testing during the first COVID-19-related lockdown of social and economic life, with schools closed, from March to April 2020. A third group of 31 participants (mean age = 42.42 ± 3.35, 20 females) was recruited and tested online immediately after the last lockdown, while the COVID-19 pandemic was still ongoing, in May 2021. A fourth group of 30 participants (mean age = 34.66 ± 2.71, 16 females) was recruited at the lifting of the COVID-19 pandemic-related state of emergency and tested in the laboratory in June 2022. This group was also used to rule out a possible effect of task design: half of them performed a one-run task design (n = 15), and the other half performed a two-run task design (n = 15; see the belief-updating task description below). Note that the 30 participants tested before the COVID-19 pandemic were recontacted during the first strict lockdown to re-perform the belief updating task online. This allowed us to check for the effects of experiencing a COVID-19-related lockdown within the same cohort of participants. Two of the thirty participants of this group did not respond. Therefore, the sample size for the within-group test-retest analyses was 28 participants.
Sample sizes:
The sample sizes were determined by a power analysis using the power curve function in R (version 1.2.5033) and building on the good news/bad news bias observed in the first group tested in October 2019 before the COVID-19 outbreak in France. The sample size required to replicate a significant effect of estimation error valence on the absolute updating with a power between 80% and 90% lay between 28 and 35 participants, respectively.
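The logic of this sample-size determination can be illustrated with a simulation-based sketch. This is not the authors' R power-curve procedure; it is a generic Monte Carlo illustration, and the standardized effect size (0.55), t cutoff, and simulation count are assumptions chosen only to show how power grows with sample size:

```python
# Rough Monte Carlo sketch of the power-analysis idea: estimate the
# probability of detecting a within-subject good/bad-news updating
# difference at a given sample size. Effect size and cutoff are assumed.
import random
import statistics

def simulated_power(n, effect=0.55, sims=2000, t_crit=2.05, seed=1):
    """Fraction of simulated experiments whose paired t statistic
    exceeds t_crit (approximate 5% cutoff for n around 30)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        # standardized per-subject updating bias (good minus bad news)
        diffs = [rng.gauss(effect, 1.0) for _ in range(n)]
        t = statistics.mean(diffs) / (statistics.stdev(diffs) / n ** 0.5)
        hits += t > t_crit
    return hits / sims

power_28 = simulated_power(28)  # roughly 80% under these assumptions
```

Under these assumed values, power at n = 28 lands near the 80% threshold reported above, and larger samples push it toward 90%.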
Belief Updating Task
All participants performed a belief-updating task (Figure 1b). For in-person testing, stimulus presentation and response recording were done with the Psychophysics toolbox in MATLAB (R2018b, Update 6, version 9.5.0.1265761). The online testing was done using Qualtrics (Qualtrics Software, version March 2020 of Qualtrics, Copyright 2020 Qualtrics. Available at https://www.qualtrics.com).
The task comprised 40 trials with 40 adverse lifetime events and base rates. In each trial, participants were asked to estimate the likelihood of experiencing an adverse event in the future for themselves and somebody else before and after receiving information about the likelihood of occurrence in the general population (i.e., the base rate). The adverse life events and their actual base rates were taken from previously published work in healthy controls(2). The base rates were uniformly distributed and ranged between 10 and 70%. Participants were instructed to rate their estimates between 3% and 77%.
The task design variations:
Fifty-eight participants underwent assessment outside the COVID-19 pandemic, with 45 performing a two-run task design (n=30 tested before the outbreak in October 2019; n = 15 tested at the end of the sanitary state of emergency in June 2022). The remaining 13 participants tested outside the pandemic performed the one-run task design like the 65 participants tested during the pandemic.
In the two-run task design, participants performed a first run of 40 trials. Each trial started with the display of an adverse lifetime event. Participants were asked to estimate the risk of experiencing this event in the future for themselves (E1 rating) and for somebody else (eBR rating). At the end of each trial, they received information about the event’s base rates and rated their confidence. In a second run, they saw an adverse future life event and its base rate on each trial. They then re-estimated their risk (E2 rating) on a trial-by-trial basis. The one-run task design is displayed in Figure 4b and consisted of a single run of 40 trials. Within each trial, participants first estimated the risk of experiencing a future adverse lifetime event for themselves (E1 rating) and for somebody else (eBR rating), were presented with the base rate for this event (BR), rated their confidence in the base rate, and re-estimated their risk of experiencing the event in the future (E2 rating). Note that all analyses were controlled for these differences in task design, which had non-significant effects on belief updating, confidence ratings, estimation error magnitude, and learning rates (see SI tables 2, 3, 5, 6, 7).
Belief updating task measures of interest:
The estimation error indicated whether participants overestimated or underestimated their likelihood of experiencing an adverse event (E1) relative to its actual base rate (aBR). The estimation error (EE) was calculated according to equation i:

EE = E1 - aBR (i)
The estimation error was further used to categorize trials into good or bad news trials:
For good news trials, the estimation error was positive (EE > 0), which indicated an overestimation of one’s likelihood of experiencing an adverse life event relative to the base rate of that event (E1 > aBR). For bad news trials, the estimation error was negative (EE < 0), which indicated an underestimation of one’s likelihood of experiencing an adverse event relative to its actual base rate (E1 < aBR).
The main variable of interest was the magnitude of belief updating (UPD), which was calculated as the difference between the first (E1) and the second (E2) estimate after receiving information about the base rate (BR). Notably, the update was calculated for good and bad news trials, respectively, following equation ii:

UPD(good news) = E1 - E2; UPD(bad news) = E2 - E1 (ii)
Lastly, the difference between updating after good and bad news was calculated to assess the updating bias following equation iii:

updating bias = UPD(good news) - UPD(bad news) (iii)
A positive difference indicated that participants updated their beliefs about their lifetime risks of experiencing adverse life events to a greater extent following good news than following bad news.
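The task measures above (equations i to iii) can be sketched in a few lines; the trial values below are hypothetical and serve only to illustrate the sign conventions:

```python
# Minimal sketch of the task measures: EE = E1 - aBR (equation i), and
# valence-specific updates signed so that moving toward the base rate
# counts as positive (equation ii). Example values are hypothetical.

def estimation_error(e1, abr):
    return e1 - abr  # (i): > 0 = good news (overestimation), < 0 = bad news

def update(e1, e2, abr):
    ee = estimation_error(e1, abr)
    return (e1 - e2) if ee > 0 else (e2 - e1)  # (ii)

# Good news trial: initial risk 40%, base rate 25%, revised down to 30%.
upd_good = update(40, 30, 25)   # 10: lowered the estimate by 10 points
# Bad news trial: initial risk 10%, base rate 30%, revised up to 12%.
upd_bad = update(10, 12, 30)    # 2: raised the estimate by only 2 points

updating_bias = upd_good - upd_bad  # (iii): positive = optimistic updating
```

Here the agent moves 10 points toward a reassuring base rate but only 2 points toward a worrying one, yielding a positive updating bias of 8.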
For each participant, trials that did not receive a response (on average, 0.44 trials per subject) and trials with an EE = 0 (on average, 0.63 trials per subject) were excluded from the analyses.
The distance measured the extent to which participants considered their probability of experiencing a given adverse event (E1) to differ from the lifetime risk of someone from a similar socio-economic background (eBR). It was calculated as distance = eBR - E1. If positive, it reflected an optimistic bias in initial estimates.
Model-free statistical analyses of observed belief updating behavior:
The main aim of this study was to assess how belief updating was affected by the context of experiencing the COVID-19 pandemic.
We therefore conducted between-context analyses, contrasting groups tested during the pandemic (i.e., during the first lockdown in March/April 2020 and immediately after the last lockdown one year later, in May 2021) and outside the pandemic context (i.e., before the outbreak in October 2019 and one year after the pandemic, in June 2022). All statistical tests were conducted using the MATLAB Statistical Toolbox (MATLAB 2018b, MathWorks) and JASP (JASP 0.16.4).
A first linear mixed effects model (LME 1) was fitted to the absolute belief updating, following equation iv:

|UPD| ~ |EE| + EEvalence + context + EEvalence × context + task design + age + gender + education + (1 + |EE| + EEvalence | subject) (iv)
The model included fixed effects for estimation error magnitude (|EE|), estimation error valence (EEvalence, coded -1 for bad news trials and 1 for good news trials), context (coded 0 for outside and 1 for during the COVID-19 pandemic), task design (coded 1 for one-run, 2 for two-run design), age, gender (coded 0 for male, 1 for female participants), level of education, and the interaction of interest EE valence by context. The model also included random intercepts nested by subject number and random slopes for estimation error magnitude and valence.
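The categorical codings entering this model can be sketched as follows. This is an illustration of the coding scheme only, not the authors' fitting code (the models were fitted in MATLAB and JASP), and the two example trials are hypothetical:

```python
# Sketch of the fixed-effect codings used in LME 1: valence coded -1/1,
# context coded 0/1, and their product forming the interaction of interest.
trials = [
    {"valence": "good", "context": "pandemic"},
    {"valence": "bad",  "context": "outside"},
]
code_valence = {"bad": -1, "good": 1}
code_context = {"outside": 0, "pandemic": 1}

design = [
    {
        "EEvalence": code_valence[t["valence"]],
        "context": code_context[t["context"]],
        # interaction of interest: EE valence by context
        "EEvalence:context": code_valence[t["valence"]] * code_context[t["context"]],
    }
    for t in trials
]
```

With this coding, the interaction column is non-zero only for trials from the pandemic context, so its coefficient captures how the good/bad-news asymmetry shifts during the pandemic.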
Subsequently, the same linear mixed effects (LME) model was applied again to the absolute belief update to explore the categorical effect of context in conjunction with EE valence. This allowed for a more specific comparison of the impact of EE valence between contexts (groups): those tested before the COVID-19 pandemic outbreak in October 2019 (baseline) compared to those tested during the initial COVID-19-related lockdown (context 1), those tested immediately after the last lockdown during the pandemic in May 2021 (context 2), and those tested one year post-pandemic in June 2022 (context 3), respectively.
Post-hoc two-tailed and one-tailed t-tests were conducted to characterize the directionality of detected main effects and interactions.
Model-based analyses of belief-updating behavior
To gain more insight into the putative cognitive mechanisms of belief updating during and outside the COVID-19 pandemic, two families of non-linear computational models, specified below, were fitted to the observed belief updating behavior.
Model specifications
(1) Reinforcement learning model of belief updating.
A reinforcement learning-like model assumed that belief updating is proportional to the magnitude of the estimation error, following Kuzmanovic and Rigoux, 2017(5). The learning rate scaled the effect of the estimation error on belief updating following the generic equation v:
Importantly, the learning rate was estimated for good and bad news trials separately and following equations vi and vii:
For both types of trials, the learning rate was composed of two components that varied across participants. The scaling parameter (S) measured the extent to which a participant took the prediction error into account when updating beliefs. The asymmetry parameter (A) indicated to what extent the belief updating differed for positive and negative estimation errors. The priors for scaling and asymmetry were untransformed and unbounded. The mean of the prior distribution for scaling was set to 1. Thus, a scaling of 1 meant that the updating magnitude equaled the estimation error magnitude. The mean of the prior distribution for the asymmetry parameter was set to zero. An asymmetry parameter value larger than zero meant positively biased updating, whereas an asymmetry parameter smaller than zero meant negatively biased belief updating.
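One RL-like updating step (the generic form of equation v) can be sketched as follows. The additive combination of scaling and asymmetry (LR = S + A on good news, S - A on bad news) is an assumption made for illustration; the paper's exact parameterization is given by its equations vi and vii, and the parameter values below are hypothetical:

```python
# Sketch of one RL-like updating step, assuming UPD = LR * |EE| with an
# assumed additive learning rate: LR = S + A (good news), S - A (bad news).

def rl_update(e1, abr, scaling, asymmetry):
    """Return the new estimate after seeing the base rate aBR."""
    ee = e1 - abr                                   # estimation error (equation i)
    lr = max(scaling + (asymmetry if ee > 0 else -asymmetry), 0.0)
    upd = lr * abs(ee)                              # update magnitude
    return e1 - upd if ee > 0 else e1 + upd         # move toward the base rate

# Optimistically biased agent (A > 0): moves further after good news.
e2_good = rl_update(40, 25, scaling=0.4, asymmetry=0.1)  # 40 - 0.5*15 = 32.5
e2_bad  = rl_update(10, 25, scaling=0.4, asymmetry=0.1)  # 10 + 0.3*15 = 14.5
```

With A = 0 the agent updates symmetrically; with S = 1 and A = 0 the second estimate lands exactly on the base rate, matching the interpretation of the prior means described above.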
A version of the RL-like model of belief updating took the personal relevance (PR) of presented adverse future life events into account following equations viii and ix:
The PR weighed the estimation error (EE) and corresponded to the difference between the estimated base rate (eBR — the estimated risk for somebody else) and the initial estimate (E1 — the estimated risk for oneself). Based on the sign of this difference between eBR and E1, the PR was calculated following equations x to xii:
(2) Bayesian belief updating model
A second family of computational models was fitted to belief updating behavior and assumed that belief updating was proportional or equal to the Bayes rule, following equations xiii and xiv:
The scaling parameter (S) corresponded to the tendency of participants to update their beliefs in response to the presented base rate. A scaling smaller than one (S < 1) indicated lesser belief updating than what was predicted by the Bayes rule, and a scaling larger than one (S > 1) indicated more updating than predicted by the Bayes rule.
The Bayes rule was used to define a Bayesian second estimate (E2b, the updated belief), which was calculated following equations xv and xvi:
With the Prior = P(BR), corresponding to the base rate (BR) of each event following equation xvii:
The Likelihood Ratio (LHR) indicates the probability of the initial estimate (E1) relative to the likelihood of the alternative estimated base rate (eBR) following equation xviii:
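The logic of this Bayesian second estimate can be sketched as follows. Taking the likelihood ratio as the odds of E1 against the odds of eBR is an assumption based on the description above (the paper's exact forms are its equations xv to xviii), and the probabilities used are hypothetical:

```python
# Sketch of a Bayesian second estimate (E2b), assuming
# posterior odds = prior odds x LHR, with the likelihood ratio taken as
# the odds of the initial estimate E1 against the odds of eBR.

def bayesian_e2(br, e1, ebr):
    """All inputs are probabilities in (0, 1); returns the updated belief."""
    prior_odds = br / (1 - br)                      # prior from the base rate
    lhr = (e1 / (1 - e1)) / (ebr / (1 - ebr))       # diagnosticity of the evidence
    posterior_odds = prior_odds * lhr
    return posterior_odds / (1 + posterior_odds)

# Base rate 30%, own estimate 40%, estimate for somebody else 25%:
e2b = bayesian_e2(0.30, 0.40, 0.25)  # about 0.46
```

Because the agent rated their own risk as higher than others' (E1 > eBR), the likelihood ratio exceeds 1 and the updated belief lands above the base rate; a rational Bayesian updater (S = 1, A = 0) would report E2b exactly.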
Alternative models of these two model families (RL and Bayesian) were fitted to the observed belief-updating behavior. Each model alternative represented a different combination of free parameters composing the learning rate to test a total of 12 assumptions about the cognitive process underlying belief updating:
RL model 1. Belief updating is asymmetrical and proportional to the estimation error: S and A varied across participants.
RL model 2. Belief updating is non-asymmetrical and proportional to the estimation error: S varied, A was silent (fixed to zero).
RL model 3. Asymmetrical updating is equal to the estimation error: S was fixed (to 1), and A varied.
RL model 4. Updating equals the estimation error: S and A were fixed.
RL model 5. Belief updating is asymmetrical, proportional to the estimation error, and moderated by the personal relevance of events (PR): S and A varied, and PR weighted the EE following equations v and vi.
RL model 6. Belief updating is non-asymmetrical and proportional to the estimation error moderated by PR: S varied, A was fixed, and PR weighted the EE following equations v and vi.
RL model 7. Asymmetrical updating equal to the estimation error moderated by PR: S was fixed, A varied, and PR weighted the EE following equations v and vi.
RL model 8. Updating equals the estimation error moderated by PR: S and A were fixed, and PR weighted the EE following equations v and vi.
Bayesian model 1. Belief updating is asymmetrical and proportional to Bayes rule: S and A vary.
Bayesian model 2. Belief updating is proportional to a rational Bayes rule: S varies, and A is fixed.
Bayesian model 3. Belief updating equals an asymmetrical Bayes rule: S is fixed, and A varies.
Bayesian model 4. Belief updating equals a rational Bayes rule: S and A are fixed.
Model estimation
Models were estimated following the procedure reported by Kuzmanovic and Rigoux 2017 and Bottemanne et al., 2022(5)(21). In short, models were not hierarchical, and parameter estimation was thus less sensitive to differences in group sample sizes. For each participant, optimal scaling and asymmetry parameter values were obtained using Bayesian variational inferences implemented in the VBA toolbox(24).
Model comparisons
The free energy approximations of each model’s evidence in each participant were entered into a random-effects Bayesian model comparison that yielded the two criteria considered for model selection: the estimated model frequency (Ef) in each group and the exceedance probability (pxp), which corresponded to the likelihood that the model best describes the observed belief updating behavior.
Parameter recovery
Parameter recovery analysis was conducted to check whether the free parameters of the winning models were identifiable and described the data better than any other set of parameters. The procedure was the same as reported in Bottemanne et al. 2022(21). In short, to validate the accuracy of the fitting procedure in providing meaningful parameter values, simulated belief updating data were generated using the observed parameter values for both the optimistic RL model and the optimistic Bayesian updating model. Subsequently, the fitting procedure was applied to these simulated data to iteratively ‘recover’ the parameters. The means of the parameters were set to correspond to the observed sample means (i.e., scaling = 0.37 ± 0.02, asymmetry = 0.06 ± 0.01 for the RL model; scaling = 0.40 ± 0.02, asymmetry = 0.04 ± 0.01 for the Bayesian model). This process was iterated to simulate 40 belief updates for each of 123 synthetic participants. The model was then inverted by fitting it to the simulated data, yielding a new set of recovered values for scaling and asymmetry. Finally, the recovered and estimated parameters were compared by assessing their correlation using Pearson’s correlation coefficients.
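The recovery logic can be illustrated with a toy simulation. This mirrors the simulate-refit-correlate loop only; the actual study inverted the models with the VBA toolbox, and here the refit is a simple per-valence least-squares slope under the assumed additive learning rate (LR = S ± A), with generating distributions loosely matched to the sample means quoted above:

```python
# Toy parameter-recovery check: simulate updates from known S and A,
# refit each synthetic subject, and verify that recovered parameters
# correlate with the generating ones.
import random

def simulate_and_recover(n_subjects=123, n_trials=40, seed=0):
    rng = random.Random(seed)
    true_s, true_a, rec_s, rec_a = [], [], [], []
    for _ in range(n_subjects):
        s = max(rng.gauss(0.37, 0.1), 0.05)          # generating scaling
        a = rng.gauss(0.06, 0.05)                    # generating asymmetry
        lr = {1: s + a, -1: s - a}
        # each trial: valence v (good: 1, bad: -1) and |EE| drawn uniformly
        trials = [(rng.choice([1, -1]), rng.uniform(1, 40)) for _ in range(n_trials)]
        data = [(v, ee, lr[v] * ee + rng.gauss(0, 0.5)) for v, ee in trials]

        def slope(v):
            # least-squares slope through the origin recovers LR for valence v
            pts = [(ee, u) for vv, ee, u in data if vv == v]
            return sum(x * y for x, y in pts) / sum(x * x for x, _ in pts)

        lr_g, lr_b = slope(1), slope(-1)
        true_s.append(s); true_a.append(a)
        rec_s.append((lr_g + lr_b) / 2); rec_a.append((lr_g - lr_b) / 2)
    return true_s, rec_s, true_a, rec_a

def pearson(x, y):
    n = len(x); mx = sum(x) / n; my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x); vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

ts, rs, ta, ra = simulate_and_recover()
```

High correlations between generating and recovered values indicate that scaling and asymmetry are identifiable from this amount of data, which is the criterion applied in the study.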
Parameter comparisons
To compare learning rates and learning rate components across groups, we used the parameters from the optimistically biased RL-like model (RL model 1), which performed best when fitted to the whole dataset (Ef = 0.40, pxp = 1).
Individual learning rates from this RL model 1 and their scaling and asymmetry components were the dependent variables (DV) of the following generic linear mixed effects model (equations xix and xx):

DV ~ valence + context + valence × context + task design + age + gender + education + (1 | subject) (xix, xx)
The model included fixed effects for news valence (valence, coded 1 for good news, -1 for bad news), context (coded 0 for outside the pandemic, 1 for during the pandemic), task design (coded 1 for one-run and 2 for two-run), age, gender (coded 0 for male, 1 for female participants), and level of education. It also tested the interaction of interest, context by valence. Random intercepts were nested by subject number.
Post-hoc one-sample and two-sample t-tests were conducted to characterize the directionality of effects.
Results
Effects of experiencing the COVID-19 pandemic on the optimism bias in belief updating
A linear mixed effects (LME) model was fitted to absolute belief updates to test whether belief updating was less or more biased during the COVID-19 pandemic. The model found a significant interaction of estimation error valence by context (β = -5.54, SE = 1.69, t(232) = -3.28, p = 0.001, 95% CI [-8.87 – -2.21]; SI table 2). As shown in Figures 1a and b, the optimism bias in belief updating disappeared during the COVID-19 pandemic compared to participants tested outside the pandemic. More specifically, it was decreased among participants tested during the initial COVID-19-related strict lockdown in March and April 2020 (EEvalence by context 1: β = -7.39, SE = 2.29, t(228) = -3.32, p = 0.002, 95% CI [-11.91 – -2.86]; SI table 3), as well as in May 2021 (EEvalence by context 2: β = -5.59, SE = 2.36, t(228) = -2.37, p = 0.02, 95% CI [-10.24 – -0.93]; SI table 3), compared to those tested before the outbreak in October 2019, respectively. The bias re-emerged among participants tested one year later, at the lifting of the sanitary state of emergency in June 2022, returning to levels akin to those observed before the pandemic in October 2019 (EEvalence by context 3: β = -2.11, SE = 2.46, t(228) = -0.86, p = 0.39, 95% CI [-6.95 – 2.73]; Figure 2a, SI table 3). The effect of the COVID-19 pandemic on belief updating was driven by a significant decrease in belief updating following good news during the pandemic compared to participants tested outside the pandemic (t(121) = 2.66, p = 0.009, Cohen’s d = 0.48, two-sample, two-tailed t-test, Figure 2b). No contextual group difference was observed for belief updating following bad news (t(121) = -1.77, p = 0.08, Cohen’s d = -0.32, two-sample, two-tailed t-test, Figure 1b).
This effect could be reproduced when fitting an analogous LME to belief updates observed in a group of participants (n=28) who were tested both before and during the pandemic (EE valence by context interaction: β = -7.66, SE = 1.49, t(103) = -5.13, p < 0.001, 95% CI [-10.62 – 4.70]; SI figure 1, SI table 4). Note that all effects were controlled for participant age, years of higher education, gender, confidence in the base rates, belief updating task design, and estimation error magnitude.
Effects of COVID-19-related lockdown on belief updating variables
As shown in Figure 2c, experiencing the COVID-19 pandemic influenced participants’ confidence in the base rates, with significantly lower confidence ratings observed among those tested outside the pandemic compared to those tested during it (β = 14.11, SE = 4.52, t(233) = 3.12, p = 0.002, 95% CI [5.19 – 23.02]; SI Table 5). Moreover, a significant interaction of EE valence by context (β = 2.19, SE = 0.67, t(233) = 3.28, p = 0.001, 95% CI [0.88 – 3.51]; SI Table 6) was found for absolute estimation error magnitude. This finding indicated that participants tested during the pandemic overestimated their risk of experiencing adverse future life events relative to base rates to a greater extent than participants tested outside the pandemic (t(121) = -3.01, p = 0.003, Cohen’s d = -0.54, Figure 2d). On the contrary, the two groups did not differ significantly in the magnitude of negative estimation errors (i.e., initial underestimations relative to base rates; t(121) = -0.49, p = 0.63, Cohen’s d = -0.09, two-sample, two-tailed t-tests; Figure 2d). This finding contrasts with the observed difference in how often they made positive estimation errors (i.e., the number of good news trials). Participants tested during the pandemic overestimated less frequently than participants tested outside the pandemic (t(121) = 2.40, p = 0.02, Cohen’s d = 0.43). No significant difference between groups was found for the frequency of underestimations (i.e., reflected by the number of bad news trials; t(121) = -1.85, p = 0.07, Cohen’s d = -0.33). These results indicated that participants held fewer but more strongly negative future outlooks during the pandemic compared to those tested outside the pandemic.
Effects of COVID-19-related lockdown on putative mechanisms of belief updating.
We then sought to identify which putative strategy participants used to update their beliefs about the future during and outside the pandemic. To answer this question, we used computational modeling and model comparisons to arbitrate between twelve alternative models. This approach revealed that belief updating outside the pandemic was more RL-like and optimistic (pxp = 1, Ef = 0.77), while during the pandemic, it was best explained by a rational Bayesian updating model (pxp = 0.90, Ef = 0.43; Figure 3a-c). Similar findings were obtained when conducting model comparisons in the participants tested before and during the lockdown (n=28, SI Figure 2).
Effects of the Covid-19-related lockdown on hidden, latent variables of belief updating
Next, we compared the effects of experiencing the COVID-19 pandemic on the learning rates and their components.
A linear mixed effects model (LME), analogous to the LME fitted to observed belief updates, was fitted to the learning rates and detected a main effect of EE valence (β = 0.09, SE = 0.01, t(236) = 7.14, p = 1.18e-11, 95% CI [0.06 – 0.11]) and a significant interaction of EE valence by context (β = -0.03, SE = 0.02, t(236) = -2.11, p = 0.04, 95% CI [-0.07 – -0.002]; Figure 4a, SI table 7). A main effect of EE valence (β = 0.07, SE = 0.02, t(105) = 3.22, p = 0.002, 95% CI [0.03 – 0.12]; SI table 8) and of context (β = -0.10, SE = 0.03, t(105) = -3.10, p = 0.002, 95% CI [-0.17 – -0.04]; SI table 8) on learning rates was detected when comparing the participants who were tested both before and during the pandemic. As shown in Figure 4a, all participants’ learning rates were lower in response to bad news than to good news. Still, the difference between good and bad news learning rates was significantly reduced for participants tested during the pandemic. In line with the observed belief updating after good and bad news, the effect of context on the learning rates was driven by a decrease in the learning rates from positive estimation errors in participants tested during the pandemic compared to participants tested outside the pandemic (t(121) = 2.17, p = 0.03, Cohen’s d = 0.39). The two groups did not differ in their learning rates from negative estimation errors (t(121) = 0.87, p = 0.39, Cohen’s d = 0.16, two-sample, two-tailed t-tests).
Parameter recovery was successful for the scaling and asymmetry components of the learning rates (Figure 4b), which indicated that the model gave identifiable values for these parameters. We were therefore able to explore potential group differences in the learning rate components in more detail. Linear mixed effects modeling found a main effect of context for the asymmetry component (β = -0.04, SE = 0.02, t(117) = -2.32, p = 0.02, 95% CI [-0.07 – -0.01]; Figure 4c, SI table 9), but not for the scaling component (β = -0.07, SE = 0.05, t(117) = -1.54, p = 0.13, 95% CI [-0.16 – 0.02]; Figure 3c, SI table 10). The average asymmetry of learning rates was positive in both groups but significantly smaller in participants tested during the pandemic than in those tested outside it (t(121) = 2.00, p = 0.048, Cohen’s d = 0.36, two-sample, two-tailed t-test, Figure 4c). This result indicated that participants considered positive estimation errors more than negative ones, but less so when experiencing the COVID-19 pandemic. Similar results were found in the within-subject group (n = 28), with a significant main effect of context on asymmetry (β = -0.06, SE = 0.02, t(51) = -3.72, p = 0.001, 95% CI [-0.09 – -0.03]; SI table 11), but not on scaling (β = -0.10, SE = 0.05, t(51) = -1.96, p = 0.06, 95% CI [-0.21 – 0.003]; SI table 12).
Discussion
This study investigated how experiencing the COVID-19 pandemic affected the optimism bias in updating beliefs about the future. Belief updating was optimistically biased before the COVID-19 outbreak; the bias faded during the pandemic and reemerged after it. The lack of optimistically biased belief updating during the pandemic was related to three effects: (1) a decreased sensitivity to positive belief-disconfirming information, (2) fewer but stronger negative beliefs about the future, and (3) more confidence in base rates. Computational modeling showed that a rational Bayesian model best described belief updating during the pandemic.
In contrast, an optimistic RL-like model best approximated belief updating outside the pandemic. Both models showed that the attenuated optimism bias in belief updating during the pandemic was not due to a learning deficit: the groups were similar in how much overall evidence for or against initial beliefs they integrated. Instead, it was explained by a diminished learning asymmetry in weighting positive belief-disconfirming evidence, which paralleled the observed belief-updating behavior.
The finding that optimistically biased belief updating faded during the pandemic favors the hypothesis that experiencing an adverse life event such as a pandemic weakens optimistic outlooks. It further aligns with the body of research exploring the malleability of optimistically biased belief updating and information integration under acute threat, stress, and mood disorders such as depression(17)(18)(19)(20). While our findings align with this previous work, we also observed a difference. Notably, our sample tested during the pandemic weighted positive, favorable information less while showing no change in how negative, unfavorable information was weighted. Differences in populations and task designs might explain this discrepancy. However, it could also be specific to experiencing the COVID-19 pandemic, which involved an immediate, unpredictable, and global health threat with high uncertainty about its outcome and significant psychological repercussions(7)(25).
Mental health assessments during the COVID-19 pandemic indicated that anxiety, stress, paranoia, and depression became more prevalent in the population(8)(9). The rapid spread of SARS-CoV-2 and the emergence of COVID-19 cases worldwide constituted a challenging situation that, within five months, shifted from an elusive and distant threat to an immediate and drastic health and economic crisis. All citizens were confronted daily with alarming figures such as infection rates or mortality, and rather mundane everyday activities, from grocery shopping to jogging, became stressful and threatening situations during which one could catch a potentially fatal infection. Moreover, for many, the COVID-19-related lockdowns of social and economic life implied a physical cut-off from friends and relatives, severely disrupted life plans, routines, and activities, and entailed a substantial financial risk. Previous research has shown that economic uncertainties, particularly during marked economic inequality and epidemics, can contribute to belief-updating fallacies, reflected by the rise of conspiracy theories(26). We did not collect information about participants’ COVID-19 infection status, which precludes a direct exploration of the immediate effects of experiencing the infection on belief-updating behavior and its potential interaction with anxiety and stress levels. This limitation is noteworthy, as the impact of experiencing the pandemic on belief updating about the future could differ between those who were directly infected and those who remained uninfected. It is also important to acknowledge that our study was temporally and geographically limited to the context of the COVID-19 outbreak in France. Cultural variations and differences in governmental responses to contain the spread of SARS-CoV-2 may have impacted optimism biases differently.
The observed lack of optimistically biased belief updating may be interpreted as an adaptive response to the experience of an unprecedented level of uncertainty and chronic threat during a global crisis. Although we did not have access to anxiety and stress perceptions during and outside the pandemic, our computational modeling results corroborate this interpretation to some extent. Notably, belief updating was more optimistically biased and RL-like outside the pandemic and more rational and Bayesian-like during it. The biased RL-like updating behavior observed outside the pandemic indicated that participants relied on the motivational salience of positive estimation errors to update their beliefs about the future by trial and error. This finding aligns with past work showing that RL-like updating models best explain belief updating in non-threatening, non-stressful, and predictable laboratory contexts(5). It further suggests that RL strategies are a computationally efficient way to guide decision-making and belief formation when the environment is stable and predictable(27). For instance, in environments with well-defined reward structures, the human brain has been shown to rely efficiently on RL and avoid the computational overhead associated with Bayesian-like inference(28).
On the contrary, belief updating was more rational and Bayesian-like during the COVID-19 pandemic, indicating that participants weighed the uncertainty of evidence for and against their prior beliefs. This finding aligns with research using Bayesian networks to model semantic knowledge processing under uncertainty and with work applying Bayes’ rule to understand how humans learn and choose under uncertainty(30)(31)(32).
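The contrast between the two model families can be sketched as follows. This is a simplified illustration rather than the models fitted in the study: the RL rule shifts the estimate toward the base rate with a valence-dependent learning rate, while the Bayesian rule combines the prior estimate and the base rate weighted by their assumed precisions; the learning rates and precision values here are arbitrary assumptions.

```python
def rl_update(e1, base_rate, lr_good=0.5, lr_bad=0.3):
    """RL-like delta rule: shift the first estimate toward the base rate,
    scaled by a valence-dependent learning rate. A base rate below E1 is
    good news (the risk is lower than initially thought)."""
    error = base_rate - e1
    lr = lr_good if error < 0 else lr_bad
    return e1 + lr * error

def bayesian_update(e1, base_rate, prior_precision=1.0, evidence_precision=1.0):
    """Rational Bayesian-like rule: precision-weighted average of the prior
    estimate and the base rate, independent of valence."""
    w = evidence_precision / (prior_precision + evidence_precision)
    return e1 + w * (base_rate - e1)

# Good vs bad news with symmetric 20-point estimation errors:
print(rl_update(40, 20), rl_update(20, 40))              # 30.0 26.0 (asymmetric)
print(bayesian_update(40, 20), bayesian_update(20, 40))  # 30.0 30.0 (symmetric)
```

With symmetric estimation errors, the RL rule produces a larger update for good news (the optimism bias), whereas the Bayesian rule updates identically in both directions, moving only as a function of the relative uncertainty of prior and evidence.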
It is essential to acknowledge that computational modeling provides insight into potential mechanisms, but it does not license inferences about whether humans indeed update beliefs in the way the best-fitting model assumes. Other models, such as evidence accumulation models, may also capture how humans update their beliefs about the future and their immediate surroundings(33). Unfortunately, we did not assess reaction times during belief updating, which are crucial for fitting evidence accumulation models such as drift-diffusion models to observed behavior. However, we can infer from our findings that the two model families employed to fit the observed belief-updating behavior represented two different but complementary prediction strategies, deployed as a function of the uncertainty of real-life conditions. We call for more studies investigating these computational models’ psychological and biological validity under certainty and uncertainty.
In conclusion, our results provide insight into the resilience and adaptability of belief-updating processes during and following the COVID-19 pandemic. They demonstrate the malleability of the human ability to anticipate the future and its capacity to adapt to real-life conditions under which an overly optimistic view of future risks would be harmful.
Acknowledgements
We thank Tali Sharot for helpful comments on the results. This work was supported by core funding from the Paris Brain Institute Foundation.
Additional information
Author Contributions
OM, HB, PF, and LS designed the study; OM, HB, LT, and TC collected data; IK, OM, LT, and TC analyzed the data under supervision from LS; IK, OM, and LS wrote the first draft of the manuscript; and all authors contributed to the final text.
Competing Interest Statement
The authors have no financial or non-financial competing interests as defined by Nature Research, or other interests that might be perceived to influence the interpretation of the article.
Supplementary Information
1. Additional behavioral analysis
a. Within-group comparisons
Belief updating was compared within a group of participants (n = 28) who were tested both before (October 2019) and during the COVID-19 pandemic (March–April 2020). A mixed effects linear regression analysis of belief updating showed a significant main effect of EE valence (β = 4.05, SE = 1.22, t(103) = 3.31, p = 0.001, 95% CI [1.63 – 6.47]), a significant main effect of testing context (β = -4.41, SE = 1.49, t(103) = -2.95, p = 0.004, 95% CI [-7.37 – -1.45]), and, more importantly, a significant EE valence by testing context interaction (β = -7.66, SE = 1.49, t(103) = -5.13, p < 0.001, 95% CI [-10.62 – -4.70]; SI Figure 1, SI table 4). A significant main effect of gender was also found (β = 3.53, SE = 1.64, t(103) = 2.15, p = 0.03, 95% CI [0.27 – 6.79]), with women updating their beliefs more than men. Post-hoc t-tests revealed that participants tested before the emergence of the pandemic updated their beliefs more after receiving good news (mean UPDgood = 14.38 ± 1.14) than after receiving bad news (mean UPDbad = 6.23 ± 1.17; t(27) = 4.93, p < 0.001, Cohen’s d = 0.93, paired-sample t-test). This optimism bias in belief updating was not observed when the same 28 participants were tested during the first lockdown (mean UPDgood = 2.25 ± 1.76; mean UPDbad = 9.30 ± 2.47; t(27) = -1.84, p = 0.08, Cohen’s d = -0.35, paired-sample t-test).
b. Sources of variance in belief updating.
In Figure 1b, participants tested during the COVID-19 pandemic showed more variance in belief updating in response to both good and bad news than those tested outside the pandemic. This variability might be because they ignored the base rates when updating their beliefs, possibly influenced by the shift to online testing during the pandemic.
If participants paid attention to the base rates, they were expected to update toward the base rate, yielding positive values for the update. On the contrary, ignoring the base rates can be reflected by updating away from the base rate, with second estimates (E2) that, in the case of good news trials, lie above the first estimate (E1; e.g., E1 = 60%, BR = 40%, E2 = 70%, UPD = E1 - E2 = -10%). Likewise, in the case of bad news trials, second estimates may lie below the first estimate (e.g., E1 = 20%, BR = 40%, E2 = 10%, UPD = E2 - E1 = -10%). We examined the number of trials with such paradoxical second estimates yielding negative values for the belief update. We found no significant difference (t(121) = 1.77, p = 0.08, Cohen’s d = 0.32) between the number of paradoxical trials in participants tested outside (6.09 ± 0.47 trials on average) and during (4.77 ± 0.56 trials on average) the pandemic. This suggests that participants tested online exhibited no greater propensity for paradoxical responses than those tested in person.
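The update sign convention and the paradoxical case can be sketched as a small helper, assuming, as in the worked examples above, that good news means the official base rate lies below the first estimate:

```python
def belief_update(e1, base_rate, e2):
    """Signed belief update: positive when the second estimate moves toward
    the base rate, negative (a 'paradoxical' trial) when it moves away."""
    good_news = base_rate < e1  # official risk is lower than first estimate
    return (e1 - e2) if good_news else (e2 - e1)

# Worked examples from the text:
assert belief_update(60, 40, 70) == -10  # good news, paradoxical update
assert belief_update(20, 40, 10) == -10  # bad news, paradoxical update
assert belief_update(60, 40, 50) == 10   # good news, update toward base rate
```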
Second, we tested whether the observed variance in belief updating between the groups was due to second estimates that over- or undershot the base rates. For instance, individuals who undershot indicated second estimates below base rates signaling good news (e.g., E1 = 40%, BR = 20%, E2 = 10%). In contrast, individuals who overshot indicated second estimates above base rates signaling bad news (e.g., E1 = 10%, BR = 20%, E2 = 40%). Undershooting might indicate an attuned sensitivity to good news, and overshooting an attuned sensitivity to bad news. We found that participants tested outside the COVID-19 pandemic undershot more often (on average 4.48 ± 0.46 trials) than they overshot (on average 2.38 ± 0.33 trials; t(57) = 3.76, p < 0.001, Cohen’s d = 0.49, paired-sample, two-tailed t-test). This propensity aligned with the bias to update beliefs more after good than after bad news in this group. Conversely, participants tested during the pandemic showed no significant difference between the number of trials in which they undershot (2.00 ± 0.32 trials on average) and overshot (3.08 ± 0.73 trials on average; t(64) = -1.34, p = 0.19, Cohen’s d = -0.17, paired-sample, two-tailed t-test), aligning with the absence of optimistically biased belief updating. Critically, when comparing the two groups, we found a significant positive testing context by shooting type interaction (β = 1.66, SE = 0.50, t(233) = 3.33, p = 0.001, 95% CI [0.68 – 2.65]; SI table 14). Post-hoc t-tests showed that participants tested outside the COVID-19 pandemic undershot more often than participants tested during the pandemic (t(121) = 4.48, p < 0.001, Cohen’s d = 0.81, two-sample, two-tailed t-test). No differences were observed between the groups regarding overshooting (t(121) = -0.84, p = 0.40, Cohen’s d = -0.15, two-sample, two-tailed t-test).
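This trial classification can be sketched as follows, under the same good/bad news convention as above; the function name and labels are illustrative, not taken from the analysis code:

```python
def classify_shoot(e1, base_rate, e2):
    """Classify a trial as 'undershoot' (E2 crosses below a base rate that
    signaled good news), 'overshoot' (E2 crosses above a base rate that
    signaled bad news), or 'within' otherwise."""
    good_news = base_rate < e1
    if good_news and e2 < base_rate:
        return "undershoot"
    if not good_news and e2 > base_rate:
        return "overshoot"
    return "within"

# Worked examples from the text:
assert classify_shoot(40, 20, 10) == "undershoot"  # good news, E2 below BR
assert classify_shoot(10, 20, 40) == "overshoot"   # bad news, E2 above BR
```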
These findings indicated that participants tested outside the pandemic were more sensitive to good news, corroborating the findings of a more robust good news/bad news bias in belief updating found in this group.
2. Additional computational analyses
a. Model comparisons among participants tested before and during the pandemic.
We then conducted a Bayesian model comparison of the model fits of belief updating in the group of participants tested both before and during the lockdown. This cohort’s belief updating was more RL-like before the lockdown (Ef = 0.73, pxp = 1) and more rational Bayesian-like during the lockdown (Ef = 0.61, pxp = 0.99; SI Figure 2).
3. Task instructions
The following instructions were displayed to participants in French prior to performing the task (translated here into English):

“This is a test that measures your beliefs about future life events. 40 events will be presented to you in succession.

For each of them, you will be asked to indicate:

The probability that it will occur in your future life (For you: what is the estimated probability that this event will happen to you)

The probability that it will happen to someone else (For someone else: in your opinion, what is the probability that someone else will experience this event)

You will then be shown the official probability that this event occurs for a person in your socio-demographic group. Next, you will be asked to enter:

Your level of confidence in the official rate, i.e., whether you think it is correct or a polling error.

The probability that it will occur in your future life (For you: now that you know the INSEE estimate, what do you think is the probability that this event will happen to you).

Note that all the events presented have an estimated probability between 3% and 77%.”
4. SI tables: 1 – 15
References
- 1. Unrealistic optimism about future life events. Journal of Personality and Social Psychology 39:806–820
- 2. How unrealistic optimism is maintained in the face of reality. Nature Neuroscience 14:1475–1479
- 3. Forming Beliefs: Why Valence Matters. Trends in Cognitive Sciences 20:25–33
- 4. Influence of vmPFC on dmPFC Predicts Valence-Guided Belief Formation. J. Neurosci 38:7996–8010
- 5. Valence-Dependent Belief Updating: Computational Validation. Front. Psychol 8
- 6. Optimistic update bias holds firm: Three tests of robustness following Shah et al. Consciousness and Cognition: An International Journal 50:12–22
- 7. COVID-19 and mental health: A review of the existing literature. Asian Journal of Psychiatry 52
- 8. Mental Health and COVID-19: Early evidence of the pandemic’s impact: Scientific brief
- 9. Paranoia and belief updating during the COVID-19 crisis. Nat Hum Behav 5:1190–1202
- 10. Illusion and well-being: A social psychological perspective on mental health. Psychological Bulletin 103:193–210
- 11. Distinguishing optimism from neuroticism (and trait anxiety, self-mastery, and self-esteem): A reevaluation of the Life Orientation Test. Journal of Personality and Social Psychology 67:1063–1078
- 12. Is Optimism Associated With Healthier Cardiovascular-Related Behavior?: Meta-Analyses of 3 Health Behaviors. Circ Res 122:1119–1134
- 13. Does Happiness Promote Career Success? Journal of Career Assessment 16:101–116
- 14. Dispositional optimism. Trends in Cognitive Sciences 18:293–299
- 15. Optimism. Clinical Psychology Review 30:879–889
- 16. The Glass is Half-Full: Overestimating the Quality of a Novel Environment is Advantageous. PLoS ONE 7
- 17. Depressive symptoms are associated with unrealistic negative predictions of future life events. Behaviour Research and Therapy 44:861–882
- 18. Updating Beliefs under Perceived Threat. J. Neurosci 38:7901–7911
- 19. Losing the rose tinted glasses: neural substrates of unbiased belief updating in depression. Front. Hum. Neurosci 8
- 20. Depression is related to an absence of optimistically biased belief updating about future life events. Psychol. Med 44:579–592
- 21. Evaluation of Early Ketamine Effects on Belief-Updating Biases in Patients With Treatment-Resistant Depression. JAMA Psychiatry 79
- 22. “Risk perception and optimism bias during the early stages of the COVID-19 pandemic”. PsyArXiv
- 23. Self-beneficial belief updating as a coping mechanism for stress-induced negative affect. Scientific Reports 11
- 24. VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data. PLoS Comput Biol 10
- 25. The psychological impact of quarantine and how to reduce it: rapid review of the evidence. The Lancet 395:912–920
- 26. Conspiracy Theories: A Public Health Concern and How to Address It. Front. Psychol 12
- 27. Reinforcement Learning and Episodic Memory in Humans and Animals: An Integrative Framework. Annu. Rev. Psychol 68:101–128
- 28. Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nature Neuroscience 8:1704–1711
- 29. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann
- 30. How to Grow a Mind: Statistics, Structure, and Abstraction. Science 331:1279–1285
- 31. Optimal Predictions in Everyday Cognition. Psychol Sci 17:767–773
- 32. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 349:273–278
- 33. Weaker Evidence Is Required to Reach Undesirable Conclusions. J. Neurosci 41
Copyright
© 2024, Khalid et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.