Neuroscience

Paranoia as a deficit in non-social belief updating

  1. Erin J Reed
  2. Stefan Uddenberg
  3. Praveen Suthaharan
  4. Christoph H Mathys
  5. Jane R Taylor
  6. Stephanie Mary Groman
  7. Philip R Corlett  Is a corresponding author
  1. Interdepartmental Neuroscience Program, Yale School of Medicine, United States
  2. Yale MD-PhD Program, Yale School of Medicine, United States
  3. Princeton Neuroscience Institute, Princeton University, United States
  4. Department of Psychiatry, Connecticut Mental Health Center, Yale University, United States
  5. Scuola Internazionale Superiore di Studi Avanzati (SISSA), Italy
  6. Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich and ETH Zurich, Switzerland
Research Article
Cite this article as: eLife 2020;9:e56345 doi: 10.7554/eLife.56345

Abstract

Paranoia is the belief that harm is intended by others. It may arise from selective pressures to infer and avoid social threats, particularly in ambiguous or changing circumstances. We propose that uncertainty may be sufficient to elicit learning differences in paranoid individuals, without social threat. We used reversal learning behavior and computational modeling to estimate belief updating across individuals with and without mental illness, online participants, and rats chronically exposed to methamphetamine, an elicitor of paranoia in humans. Paranoia is associated with a stronger prior on volatility, accompanied by elevated sensitivity to perceived changes in the task environment. Methamphetamine exposure in rats recapitulates this impaired uncertainty-driven belief updating and rigid anticipation of a volatile environment. Our work provides evidence of fundamental, domain-general learning differences in paranoid individuals. This paradigm enables further assessment of the interplay between uncertainty and belief-updating across individuals and species.

eLife digest

Everyone has had fleeting concerns that others might be against them at some point in their lives. Sometimes these concerns can escalate into paranoia and become debilitating. Paranoia is a common symptom in serious mental illnesses like schizophrenia. It can cause extreme distress and is linked with an increased risk of violence towards oneself or others. Understanding what happens in the brains of people experiencing paranoia might lead to better ways to treat or manage it.

Some experts argue that paranoia is caused by errors in the way people assess social situations. An alternative idea is that paranoia stems from the way the brain forms and updates beliefs about the world. Now, Reed et al. show that both people with paranoia and rats exposed to a paranoia-inducing substance expect the world will change frequently, change their minds often, and have a harder time learning in response to changing circumstances.

In the experiments, human volunteers with and without psychiatric disorders played a game where the best choices change. Then, the participants completed a survey to assess their level of paranoia. People with higher levels of paranoia predicted more changes would occur and made less predictable choices. In a second set of experiments, rats were put in a cage with three holes where they sometimes received sugar rewards. Some of the rats received methamphetamine, a drug that causes paranoia in humans. Rats given the drug also expected the location of the sugar reward would change often. The drugged animals had a harder time learning and adapting to changing circumstances.

The experiments suggest that paranoia draws on brain processes present in both humans and rats, a far less social species, and that it makes updating beliefs harder. This may help scientists understand what causes paranoia and develop therapies or drugs that reduce it. It may also help scientists understand why humans are prone to believing conspiracies during societal crises like wars or natural disasters. This is particularly important now as the world grapples with climate change and a global pandemic. Reed et al. note that paranoia may impede the coordination of collaborative solutions to these challenging situations.

Introduction

Paranoia is excessive concern that harm will occur due to the deliberate actions of others (Freeman and Garety, 2000). It manifests along a continuum of increasing severity (Freeman et al., 2005; Freeman et al., 2010; Freeman et al., 2011; Bebbington et al., 2013). Fleeting paranoid thoughts prevail in the general population (Freeman, 2006). A survey of over 7000 individuals found that nearly 20% believed people were against them at times in the past year; approximately 8% felt people had intentionally acted to harm them (Freeman et al., 2011). At a national level, paranoia may fuel divisive ideological intolerance. Historian Richard Hofstadter famously described catastrophizing, context-insensitive political discourse as the ‘paranoid style’:

“The paranoid spokesman sees the fate of conspiracy in apocalyptic terms—he traffics in the birth and death of whole worlds, whole political orders, whole systems of human values. He is always manning the barricades of civilization. He constantly lives at a turning point [emphasis added]” (Hofstadter, 1964).

At its most severe, paranoia manifests as rigid beliefs known as delusions of persecution. These delusions occur in nearly 90% of first episode psychosis patients (Freeman, 2007). Psychostimulants also elicit severe paranoid states. Methamphetamine evokes new paranoid ideation particularly after repeated exposure or escalating doses (86% and 68%, respectively, in a survey of methamphetamine users) (Leamon et al., 2010).

Paranoia has thus far defied explanation in mechanistic terms. Sophisticated Game Theory driven approaches (such as the Dictator Game [Raihani and Bell, 2018; Raihani and Bell, 2017]) have largely re-described the phenomenon — people who are paranoid have difficulties in laboratory tasks that require trust (Raihani and Bell, 2019). However, this is not driven by personal threat per se, but by negative representations of others (Raihani and Bell, 2018; Raihani and Bell, 2017). We posit that such representations are learned (Fineberg et al., 2014; Behrens et al., 2008), via the same fundamental learning mechanisms (Cramer et al., 2002) that underwrite non-social learning in non-human species (Heyes and Pearce, 2015). We hypothesize that aberrations to these domain-general learning mechanisms underlie paranoia. One such mechanism involves the judicious use of uncertainty to update beliefs: Expectations about the noisiness of the environment constrain whether we update beliefs or dismiss surprises as probabilistic anomalies. The higher the expected uncertainty (i.e., ‘I expect variable outcomes’), the less surprising an atypical outcome may be, and the less it drives belief updates (‘this variation is normal’). Unexpected uncertainty, in contrast, describes perceived change in the underlying statistics of the environment (Yu and Dayan, 2005; Payzan-LeNestour and Bossaerts, 2011; Payzan-LeNestour et al., 2013) (i.e. ‘the world is changing’), which may call for belief revision.
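The damping effect of expected uncertainty on belief updating can be made concrete with a one-step precision-weighted (Kalman-style) Gaussian update. This toy example is our illustration, not part of the authors' model: the higher the expected outcome noise, the smaller the learning rate, so the same surprise moves the belief less.

```python
def update(belief, observation, belief_var, expected_noise_var):
    """Precision-weighted belief update: the noisier outcomes are expected
    to be, the smaller the learning rate, so surprises move beliefs less."""
    learning_rate = belief_var / (belief_var + expected_noise_var)
    return belief + learning_rate * (observation - belief)

# The same surprising observation (1.0, against a belief of 0.0) under
# low versus high expected outcome noise:
low_noise = update(0.0, 1.0, belief_var=1.0, expected_noise_var=0.1)    # large revision
high_noise = update(0.0, 1.0, belief_var=1.0, expected_noise_var=10.0)  # small revision
```

With low expected noise the belief moves almost all the way to the observation (learning rate 1/1.1 ≈ 0.91); with high expected noise it barely moves (1/11 ≈ 0.09), illustrating why 'I expect variable outcomes' dismisses surprises as probabilistic anomalies.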

Since excessive unexpected uncertainty is a signal of change, it might drive the recategorization of allies as enemies, which is a tenet of evolutionary theories of paranoia (Raihani and Bell, 2019). We tested the hypothesis that this drive to flexibly recategorize associations extends to non-social, domain-general inferences. We dissected learning mechanisms under expected and unexpected uncertainty – probabilistic variation and changes in underlying task structure (volatility). Here, volatility is a property of the task. Unexpected uncertainty is the perception of that volatility. Participants completed a non-social, three-option learning task which challenged them to form and revise associations between stimuli (colored card decks) and outcomes (points rewarded and lost), in addition to their beliefs about the volatility of the task environment. They encountered expected uncertainty as probabilistic win or loss feedback (‘each option yields positive and negative outcomes, but in different amounts’), and unexpected uncertainty as reassignment of reward probabilities between options (‘sometimes the best option may change,’ reversal events). Although reversal events elicit unexpected uncertainty by driving re-evaluation of the options, participants increasingly anticipate reversals and develop expectations about the stability of the task environment. We implemented an additional task manipulation: a shift in the underlying probabilities themselves (contingency transition, unsignaled to the participants), that effectively changes task volatility. 
Armed with the task structure and participants’ choices, we applied a Hierarchical Gaussian Filter (HGF) model (Mathys et al., 2011; Mathys et al., 2014) which allowed us to infer participants’ initial beliefs (i.e., priors) about task volatility, their readiness to learn about changes in the task volatility itself (meta-volatility learning rate) and learning rates that captured their expected and unexpected uncertainty regarding the task.

We examined the behavioral and computational correlates of paranoia both in-person and in a large online sample, spanning patients and healthy controls with varying degrees of paranoia. We also undertook a pre-clinical replication in rodents exposed chronically to saline or methamphetamine to determine whether a drug known to elicit paranoia in humans might induce similar perceptions of unexpected uncertainty, without contingency transition (Groman et al., 2018). We predicted that people with paranoia and rats administered methamphetamine would exhibit stronger priors on volatility, facilitating aberrant learning through unexpected uncertainty. We further hypothesized that this learning style would manifest as frequent and unnecessary choice switching (increased choice stochasticity and ‘win-switch’ behavior) rather than increased sensitivity to negative feedback (increased ‘lose-switch’ behavior/decreased ‘lose-stay’ behavior).

Results

We analyzed belief updating across three reversal-learning experiments (Figure 1): an in-laboratory pilot of patients and healthy controls, stratified by stable, paranoid personality trait (Experiment 1); four online task variants administered to participants via the Amazon Mechanical Turk (MTurk) marketplace (Experiment 2); and a re-analysis of data from rats on chronic, escalating doses of methamphetamine, a translational model of paranoia (Experiment 3) (Groman et al., 2018).

Probabilistic reversal learning task.

(a) Human paradigm: participants choose between three decks of cards with different colored backs (Blue, Red, and Green) and different, unknown probabilities of reward and loss. (b) Reward contingency schedule for the in-laboratory experiment (reward probabilities associated with the different colored decks, Blue, Red, Green, across trials and blocks). On trial 81, the probability context shifted from 90%, 50%, and 10% (dark grey) to 80%, 40%, and 20% without warning (light grey). (c) Reward contingency schedules for the online experiment. (d) Rat paradigm: subjects choose between three noseports (Blue, Red, Green, for illustrative purposes) with different probabilities of sucrose pellet reward. (e) Reward contingency schedule for the rat experiment (Groman et al., 2018). Performance-dependent reversals occur after a certain number of choices of the high-reward deck. Performance-independent reversals occur regardless of participant behavior.

Experiment 1

First, we explored trans-diagnostic associations between paranoia and reversal-learning in-person. Participants with and without psychiatric diagnoses (mood disorders: anxiety, depression, bipolar disorder, n = 8; schizophrenia spectrum: schizophrenia or schizoaffective disorder, n = 8; and healthy controls, n = 16), completed questionnaire versions of the Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II) screening assessment (Ryder et al., 2007), Beck’s Anxiety Inventory (BAI) (Beck et al., 1988), Beck’s Depression Inventory (BDI) (Beck et al., 1961), and demographic assessments (Table 1). Approximately two-thirds of participants endorsed three or fewer items on the SCID-II paranoid personality subscale (median = 1 item). Participants who endorsed four or more items were classified as high paranoia (n = 11), consistent with the diagnostic threshold for paranoid personality disorder. Low paranoia (n = 21) and high paranoia groups did not differ significantly by age, nor were there significant group associations with gender, educational attainment, ethnicity, or race, although a larger percentage of paranoid participants identified as racial minorities or ‘not specified’ (Table 1). Diagnostic category (i.e., healthy control, mood disorder, or schizophrenia spectrum) was significantly associated with paranoia group membership, χ2 (2, n = 32)=12.329, p=0.002, Cramer’s V = 0.621, as was psychiatric medication usage, χ2 (1, n = 32)=9.871, p=0.003, Cramer’s V = 0.555. These differences were due to the higher proportion of healthy controls in the low paranoia group. 
As expected, paranoia, BAI, and BDI scores were significantly elevated in the high paranoia group relative to low paranoia controls (Table 1; paranoia: mean difference (MD) = 0.536, CI=[0.455, 0.618], t(30)=13.476, p=2.92E-14, Hedges’ g = 5.016; BAI: MD = 0.585, CI=[0.239, 0.931], t(30)=3.453, p=0.002, Hedges’ g = 1.285; BDI: MD = 0.427, CI=[0.078, 0.775], t(11.854)=2.67, p=0.021, Hedges’ g = 1.255).

Table 1
In Lab vs. Online Version 3.
| Measure | In Lab: Low Paranoia (n=21) | In Lab: High Paranoia (n=11) | Statistic | p-value | Online V3: Low Paranoia (n=56) | Online V3: High Paranoia (n=16) | Statistic | p-value |
| Demographics |  |  |  |  |  |  |  |  |
| Age (years) | 36.0 [3.2] | 38.9 [3.9] | -0.531 (27)† | 0.6 | 38.6 [1.6] | 32.9 [1.7] | 2.441 (41.8)† | 0.019 |
| Gender |  |  | 0.006 (1)‡ | 1§ |  |  | 0.780 (1)‡ | 0.410 |
| % Female | 71.4% | 72.7% | n/a | n/a | 50.0% | 62.5% | n/a | n/a |
| % Male | 28.6% | 27.3% | n/a | n/a | 50.0% | 37.5% | n/a | n/a |
| % Other or not specified | 0% | 0% | n/a | n/a | 0% | 0% | n/a | n/a |
| Education |  |  | 4.972 (6)‡ | 0.638§ |  |  | 5.351 (6)‡ | 0.549§ |
| % High school degree or equivalent | 19.0% | 45.5% | n/a | n/a | 16.1% | 6.3% | n/a | n/a |
| % Some college or university, no degree | 14.3% | 0% | n/a | n/a | 17.9% | 25.0% | n/a | n/a |
| % Associate degree | 9.5% | 9.1% | n/a | n/a | 12.5% | 12.5% | n/a | n/a |
| % Bachelor's degree | 23.8% | 27.3% | n/a | n/a | 35.7% | 56.3% | n/a | n/a |
| % Master's degree | 9.5% | 0% | n/a | n/a | 14.3% | 0% | n/a | n/a |
| % Doctorate or professional degree | 4.8% | 0% | n/a | n/a | 1.8% | 0% | n/a | n/a |
| % Completed some postgraduate | 0% | 0% | n/a | n/a | 1.8% | 0% | n/a | n/a |
| % Other / not specified | 19.0% | 18.2% | n/a | n/a | 0% | 0% | n/a | n/a |
| Ethnicity |  |  | 0.134 (1)‡ | 1§ |  |  | 0.117 (1)‡ | 1§ |
| % Hispanic, Latino, or Spanish origin | 23.8% | 18.2% | n/a | n/a | 8.9% | 6.3% | n/a | n/a |
| % Not of Hispanic, Latino, or Spanish origin | 76.2% | 81.8% | n/a | n/a | 91.1% | 93.8% | n/a | n/a |
| Race |  |  | 6.250 (4)‡ | 0.186§ |  |  | 5.368 (4)‡ | 0.229§ |
| % White | 61.9% | 36.4% | n/a | n/a | 85.7% | 75.0% | n/a | n/a |
| % Black or African American | 19.0% | 36.4% | n/a | n/a | 0% | 12.5% | n/a | n/a |
| % Asian | 14.3% | 9.1% | n/a | n/a | 3.6% | 6.3% | n/a | n/a |
| % American Indian or Alaska Native | 4.8% | 0% | n/a | n/a | 1.8% | 6.3% | n/a | n/a |
| % Multiracial | 0% | 0% | n/a | n/a | 3.6% | 0% | n/a | n/a |
| % Other / not specified | 0% | 18.2% | n/a | n/a | 5.4% | 0% | n/a | n/a |
| Mental Health |  |  |  |  |  |  |  |  |
| Psychiatric diagnosis |  |  | 12.329 (2)‡ | 0.002§ |  |  | 7.850 (3)‡ | 0.039§ |
| % No psychiatric diagnosis | 71.4% | 9.1% | adj. residuals | 0.004 | 71.4% | 50.0% | adj. residuals | 0.465 |
| % Schizophrenia spectrum | 19.0% | 36.4% | adj. residuals | 0.546 | 0% | 6.3% | adj. residuals | 0.307 |
| % Mood disorder | 9.5% | 54.5% | adj. residuals | 0.020# | 21.4% | 43.8% | adj. residuals | 0.356 |
| % Not specified | 0% | 0% | adj. residuals | n/a | 7.1% | 0% | adj. residuals | 0.751 |
| % Medicated | 23.8% | 81.8% | 9.871 (1)‡ | 0.003§ | 7.1% | 31.3% | 8.730 (2)‡ | 0.023§ |
| Beck's Anxiety Inventory | 0.27 [0.08] | 0.85 [0.17] | -3.453 (30)† | 0.002 | 0.24 [0.04] | 0.90 [0.20] | -3.303 (16.179)† | 0.004 |
| Beck's Depression Inventory | 0.23 [0.05] | 0.66 [0.15] | -2.67 (11.854)† | 0.021 | 0.25 [0.04] | 1.03 [0.19] | -3.951 (16.659)† | 0.001 |
| SCID Paranoia Personality Score | 0.09 [0.02] | 0.63 [0.04] | -13.476 (30)† | 2.92E-14 | 0.1 [0.02] | 0.72 [0.04] | -16.551 (70)† | 6.712E-26 |
| Reversal Learning Performance |  |  |  |  |  |  |  |  |
| Total points earned | 7061.9 [286.9] | 6290.9 [372.2] | 1.608 (30)† | 0.118 | 7533.0 [143.8] | 6503.1 [340.6] | 3.177 (70)† | 0.002 |
| Total reversals achieved | 4.8 [0.7] | 2.5 [0.8] | 2.145 (30)† | 0.04 | 6.3 [0.3] | 4.9 [0.8] | 1.758 (20.14)† | 0.094 |
| % Achieving reversals | 90.5% | 72.7% | 1.407 (1)‡ | 0.327§ | 100% | 87.5% | 7.200 (1)‡ | 0.047§ |
| Trials to first reversal | 29.2 [4.5] | 27.9 [11] | 0.136 (25)† | 0.893 | 20.0 [1.7] | 13.7 [1.8] | 1.774 (68)† | 0.081 |
| % Recovering post-reversal | 81.0% | 54.5% | 2.490 (1)‡ | 0.213§ | 91.1% | 69.0% | 3.482 (1)‡ | 0.097§ |
| Trials to switch | 1.68 [0.22] | 1.43 [0.20] | 0.671 (24)† | 0.509 | 2.1 [0.2] | 2.6 [0.6] | -1.088 (64)† | 0.280 |
| Trials to recovery | 3.75 [0.51] | 4 [0.93] | -0.285 (21)† | 0.779 | 2.9 [0.3] | 4.9 [0.8] | -2.694 (60)† | 0.009 |
| Win-switch rate, block 1 (90-50-10) | 0.08 [0.03] | 0.24 [0.09] | -1.742 (12.379)† | 0.106 | 0.04 [0.01] | 0.13 [0.05] | -1.906 (15.762)† | 0.075 |
| Win-switch rate, block 2 (80-40-20) | 0.07 [0.04] | 0.21 [0.1] | -1.601 (30)† | 0.12 | 0.02 [0.01] | 0.12 [0.05] | -2.02 (15.915)† | 0.061 |
| Lose-stay rate, block 1 (90-50-10) | 0.19 [0.03] | 0.13 [0.06] | 0.919 (30)† | 0.365 | 0.30 [0.03] | 0.39 [0.06] | -1.425 (70)† | 0.158 |
| Lose-stay rate, block 2 (80-40-20) | 0.26 [0.05] | 0.12 [0.05] | 1.817 (30)† | 0.079 | 0.33 [0.03] | 0.37 [0.06] | -0.554 (70)† | 0.581 |
| Null trials | 8.5 [2.8] | 10.4 [3.7] | -0.391 (30)† | 0.699 | n/a | n/a | n/a | n/a |

† Independent samples t-test: t-value (df); two-tailed p-values reported.
‡ Exact test, chi-square coefficient (df).
§ Exact significance (2-sided).
¶ Equal variances not assumed.
# Not significant (Bonferroni correction).

Participants completed a three-option reversal-learning task in which they chose between three decks of cards with hidden reward probabilities (Figure 1a and b). They selected a deck on each turn and received positive or negative feedback (+100 or −50 points, respectively). They were instructed to find the best deck, with the caveat that the best deck may change. Undisclosed to participants, reward probabilities switched among decks after selection of the highest-probability option in nine out of ten consecutive trials (‘reversal events’). Thus, the task was designed to elicit expected uncertainty (probabilistic reward associations) and unexpected uncertainty (reversal events), requiring participants to distinguish probabilistic losses from changes in the underlying deck values. In addition, reward contingencies changed from 90%, 50%, and 10% chance of reward to 80%, 40%, and 20% between the first and second halves of the task (‘contingency transition’; block 1 = 80 trials, 90-50-10%; block 2 = 80 trials, 80-40-20%, unsignaled to the participants). This transition altered the volatility of the task environment, thereby making it more difficult to achieve reversals and often delaying their occurrence. Successful achievement of reversals was contingent upon adapting stay-versus-switch strategies, thereby testing subjects’ ability to update beliefs about the overall task volatility (‘metavolatility learning’). High paranoia subjects achieved fewer reversals (MD = −2.31, CI=[−4.504, −0.111], t(30)=-2.145, p=0.04, Hedges’ g = 0.798), but total points earned did not significantly differ, suggesting that there was no penalty for the different behaviors expressed by the more paranoid subjects (Table 1). We predicted that paranoia would be associated with unexpected uncertainty-driven belief updating.
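For concreteness, the reversal rule can be sketched as a short simulation. This is our illustrative reconstruction, not the authors' task code: the resetting ten-trial counter, the rotation of probabilities among decks at a reversal, and the policy receiving the true probabilities are simplifying assumptions.

```python
import random
from collections import deque

def simulate_task(policy, n_trials=160, seed=0):
    """Minimal sketch of the three-option reversal task: +100/-50 feedback,
    a reversal when the best option was chosen on >= 9 of the last 10 trials,
    and an unsignaled contingency transition halfway through."""
    rng = random.Random(seed)
    probs = [0.9, 0.5, 0.1]          # block 1 contingencies (90-50-10)
    recent_best = deque(maxlen=10)   # was the best deck chosen on recent trials?
    reversals, points = 0, 0
    for t in range(n_trials):
        if t == n_trials // 2:       # contingency transition to 80-40-20 (unsignaled)
            ranked = sorted(range(3), key=lambda d: -probs[d])
            new = [0.0, 0.0, 0.0]
            for rank, d in enumerate(ranked):
                new[d] = [0.8, 0.4, 0.2][rank]
            probs = new
        choice = policy(probs)
        points += 100 if rng.random() < probs[choice] else -50
        recent_best.append(choice == max(range(3), key=lambda d: probs[d]))
        if len(recent_best) == 10 and sum(recent_best) >= 9:
            probs = probs[1:] + probs[:1]   # reversal: reassign probabilities among decks
            recent_best.clear()
            reversals += 1
    return reversals, points

def oracle(probs):
    """A policy that always picks the current best deck."""
    return max(range(3), key=lambda d: probs[d])
```

Under these assumptions an oracle that always tracks the best deck triggers a reversal every ten trials, while noisier, switch-prone policies (like the win-switching seen in high paranoia) accumulate the nine-of-ten criterion more slowly, achieving fewer reversals.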

Experiment 2

We aimed to replicate and extend our investigation of paranoia and reversal-learning in a larger online sample. We administered three alternative task versions to control for the contingency transition (Figure 1c). Version 1 (n = 45 low paranoia, 20 high paranoia) provided a constant contingency of 90-50-10% reward probabilities (Easy-Easy); version 2 (n = 69 low paranoia, 18 high paranoia) provided a constant contingency of 80-40-20% (Hard-Hard); version 3 (n = 56 low paranoia, 16 high paranoia) served to replicate Experiment 1 with a contingency transition from 90-50-10% to 80-40-20% (Easy-Hard); version 4 (n = 64 low paranoia, 19 high paranoia) provided the reverse contingency transition, 80-40-20% to 90-50-10% (Hard-Easy). The stable contingencies (versions 1 and 2) lacked contingency transitions. Versions 3 and 4 manipulated task volatility mid-way, although the contingency transition was not signaled to participants. We predicted that high paranoia participants would find versions 3 and 4 particularly challenging. Given that version 3 is easier to learn initially, we expected its participants to develop stronger priors and thus be more confounded by the contingency transition, compared to version 4 participants.

Participants’ demographic and mental health questionnaire responses did not differ significantly across task version experiments (Table 2). Total points and reversals achieved suggest variations in task difficulty (Table 2, version effects: points earned, F(3, 299)=32.288, p=4.16E-18, ηp2=0.245; reversals achieved, F(3, 299)=4.329, p=0.005, ηp2=0.042), but there was no significant association between task version and attrition rate (52.7%, 52.9%, 54.6%, and 53.1% attrition, respectively; χ2(3, n = 752)=0.167, p=0.983, Cramer’s V = 0.015).

Table 2
Online experiment.
| Measure | V1: Low Paranoia (n=45) | V1: High Paranoia (n=20) | V2: Low Paranoia (n=69) | V2: High Paranoia (n=18) | V3: Low Paranoia (n=56) | V3: High Paranoia (n=16) | V4: Low Paranoia (n=64) | V4: High Paranoia (n=19) | Version effect | p-value | Paranoia effect | p-value | Interaction | p-value |
| Demographics |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Age (years) | 36.5 [1.5] | 35.4 [2.4] | 36.2 [1.4] | 39.5 [2.8] | 38.6 [1.6] | 32.9 [1.7] | 37.6 [1.3] | 30.7 [1.6] | 1.12 (3)†† | 0.342 | 3.202 (1)†† | 0.075 | 2.619 (3)†† | 0.051 |
| Gender |  |  |  |  |  |  |  |  | 7.29 (6)‡ | 0.238§ | 1.373 (2)‡ | 0.503§ | n/a | n/a |
| % Female | 44.4% | 45.0% | 47.8% | 50.0% | 50.0% | 62.5% | 57.8% | 73.7% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Male | 55.6% | 55.0% | 50.7% | 50.0% | 50.0% | 37.5% | 42.2% | 26.3% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Other or not specified | 0% | 0% | 1.4% | 0% | 0% | 0% | 0% | 0% | n/a | n/a | n/a | n/a | n/a | n/a |
| Education |  |  |  |  |  |  |  |  | 15.9 (21)‡ | 0.812‖ | 7.326 (7)‡ | 0.4§ | n/a | n/a |
| % High school degree or equivalent | 17.8% | 20.0% | 13.0% | 16.7% | 16.1% | 6.3% | 25.0% | 10.5% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Some college or university, no degree | 22.2% | 30.0% | 24.6% | 22.2% | 17.9% | 25.0% | 25.0% | 26.3% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Associate degree | 13.3% | 15.0% | 17.4% | 22.2% | 12.5% | 12.5% | 9.4% | 21.1% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Bachelor's degree | 33.3% | 35.0% | 40.6% | 22.2% | 35.7% | 56.3% | 28.1% | 31.6% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Master's degree | 8.9% | 0% | 2.9% | 0% | 14.3% | 0% | 7.8% | 10.5% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Doctorate or professional degree | 4.4% | 0% | 0% | 5.6% | 1.8% | 0% | 1.6% | 0% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Completed some postgraduate | 0% | 0% | 1.4% | 5.6% | 1.8% | 0% | 3.1% | 0% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Other / not specified | 0% | 0% | 0% | 5.6% | 0% | 0% | 0% | 0% | n/a | n/a | n/a | n/a | n/a | n/a |
| Income |  |  |  |  |  |  |  |  | 14.961 (18)‡ | 0.671‖ | 1.177 (6)‡ | 0.981§ | n/a | n/a |
| Less than $20,000 | 24.4% | 25.0% | 24.6% | 33.3% | 17.9% | 37.5% | 23.4% | 15.8% | n/a | n/a | n/a | n/a | n/a | n/a |
| $20,000 to $34,999 | 40.0% | 25.0% | 20.3% | 22.2% | 33.9% | 31.3% | 28.1% | 31.6% | n/a | n/a | n/a | n/a | n/a | n/a |
| $35,000 to $49,999 | 15.6% | 15.0% | 18.8% | 16.7% | 12.5% | 6.3% | 18.8% | 15.8% | n/a | n/a | n/a | n/a | n/a | n/a |
| $50,000 to $74,999 | 13.3% | 35.0% | 20.3% | 5.6% | 21.4% | 12.5% | 18.8% | 21.1% | n/a | n/a | n/a | n/a | n/a | n/a |
| $75,000 to $99,999 | 4.4% | 0% | 7.2% | 11.1% | 8.9% | 6.3% | 7.8% | 15.8% | n/a | n/a | n/a | n/a | n/a | n/a |
| Over $100,000 | 0% | 0% | 5.8% | 5.6% | 3.6% | 6.3% | 1.6% | 0% | n/a | n/a | n/a | n/a | n/a | n/a |
| Not specified | 2.2% | 0% | 2.9% | 5.6% | 1.8% | 0% | 1.6% | 0% | n/a | n/a | n/a | n/a | n/a | n/a |
| Cognitive Reflection |  |  |  |  |  |  |  |  | 11.922 (9)‡ | 0.223‖ | 7.002 (3)‡ | 0.071§ | n/a | n/a |
| % Answering 0/3 correctly | 11.1% | 25.0% | 10.1% | 11.1% | 17.9% | 25.0% | 15.6% | 26.3% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Answering 1/3 correctly | 4.4% | 5.0% | 15.9% | 11.1% | 8.9% | 25.0% | 14.1% | 15.8% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Answering 2/3 correctly | 13.3% | 25.0% | 15.9% | 16.7% | 19.6% | 25.0% | 21.9% | 31.6% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Answering 3/3 correctly | 71.1% | 45.0% | 58.0% | 61.1% | 53.6% | 25.0% | 48.4% | 26.3% | n/a | n/a | n/a | n/a | n/a | n/a |
| Ethnicity |  |  |  |  |  |  |  |  | 5.162 (3)‡ | 0.157§ | 3.715 (1)‡ | 0.069§ | n/a | n/a |
| % Hispanic, Latino, or Spanish origin | 4.4% | 15.0% | 1.4% | 0% | 8.9% | 6.3% | 1.6% | 15.8% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Not of Hispanic, Latino, or Spanish origin | 95.6% | 85.0% | 98.6% | 100.0% | 91.1% | 93.8% | 98.4% | 84.2% | n/a | n/a | n/a | n/a | n/a | n/a |
| Race |  |  |  |  |  |  |  |  | 19.559 (15)‡ | 0.173‖ | 9.626 (5)‡ | 0.084§ | n/a | n/a |
| % White | 82.2% | 75.0% | 84.1% | 88.9% | 85.7% | 75.0% | 85.9% | 73.7% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Black or African American | 6.7% | 15.0% | 5.8% | 11.1% | 0% | 12.5% | 4.7% | 10.5% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Asian | 8.9% | 10.0% | 7.2% | 0% | 3.6% | 6.3% | 7.8% | 0% | n/a | n/a | n/a | n/a | n/a | n/a |
| % American Indian or Alaska Native | 0% | 0% | 0% | 0% | 1.8% | 6.3% | 0% | 0% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Multiracial | 2.2% | 0% | 1.4% | 0% | 3.6% | 0% | 1.6% | 15.8% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Other / not specified | 0% | 0% | 1.4% | 0% | 5.4% | 0% | 0% | 0% | n/a | n/a | n/a | n/a | n/a | n/a |
| Mental Health |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Psychiatric diagnosis |  |  |  |  |  |  |  |  | 10.783 (9)‡ | 0.292‖ | 2.960 (3)‡ | 0.361§ | n/a | n/a |
| % No psychiatric diagnosis | 73.3% | 80.0% | 60.9% | 55.6% | 71.4% | 50.0% | 65.6% | 42.1% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Schizophrenia spectrum | 2.2% | 0% | 0% | 0% | 0% | 6.3% | 0% | 0% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Mood disorder | 13.3% | 15.0% | 27.5% | 22.2% | 21.4% | 43.8% | 26.6% | 31.6% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Not specified | 11.1% | 5.0% | 11.6% | 22.2% | 7.1% | 0% | 7.8% | 26.3% | n/a | n/a | n/a | n/a | n/a | n/a |
| % Medicated | 8.9% | 10.0% | 13.0% | 22.2% | 7.1% | 31.3% | 14.1% | 10.5% | 3.575 (6)‡ | 0.744§ | 4.164 (2)‡ | 0.121§ | n/a | n/a |
| Beck's Anxiety Inventory | 0.34 [0.06] | 0.52 [0.14] | 0.31 [0.04] | 0.6 [0.13] | 0.24 [0.04] | 0.90 [0.20] | 0.33 [0.06] | 0.79 [0.18] | 1.244 (3) | 0.2941 | 38.752 (1)†† | 1.63E-09 | 2.577 (3)†† | 0.0539 |
| Beck's Depression Inventory | 0.36 [0.07] | 0.86 [0.15] | 0.32 [0.05] | 0.79 [0.13] | 0.25 [0.04] | 1.03 [0.19] | 0.38 [0.07] | 1.06 [0.20] | 1.023 (3) | 0.3827 | 74.528 (1)†† | 3.62E-16 | 1.089 (3)†† | 0.3542 |
| SCID Paranoia Personality Score | 0.11 [0.02] | 0.67 [0.04] | 0.11 [0.02] | 0.61 [0.03] | 0.1 [0.02] | 0.72 [0.04] | 0.11 [0.02] | 0.65 [0.03] | 1.297 (3) | 0.2756 | 879.379 (1)†† | 4.81E-91 | 2.018 (3)†† | 0.1114 |
| Reversal Learning Performance |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Total points earned | 8656.7 [182.9] | 8372.5 [405.2] | 6045.7 [135.7] | 6266.7 [288.0] | 7533.0 [143.8] | 6503.1 [340.6] | 7171.1 [175.6] | 6510.5 [403.6] | 32.288 (3) | 4.16E-18 | 6.175 (1)†† | 0.0135 | 2.258 (3)†† | 0.0818 |
| Total reversals achieved | 7.2 [0.3] | 6.5 [0.5] | 5.5 [0.3] | 5.7 [0.5] | 6.3 [0.3] | 4.9 [0.8] | 5.9 [0.3] | 4.8 [0.6] | 4.329 (3) | 0.005 | 5.762 (1)†† | 0.017 | 1.101 (3)†† | 0.349 |
| % Achieving reversals | 100% | 100% | 98.6% | 94.4% | 100% | 87.5% | 96.9% | 94.7% | 2.26 (3)‡ | 0.598§ | 4.4 (1)‡ | 0.058§ | n/a | n/a |
| Win-switch rate, block 1 (90-50-10) | 0.09 [0.03] | 0.09 [0.04] | 0.07 [0.01] | 0.11 [0.05] | 0.04 [0.01] | 0.13 [0.05] | 0.1 [0.03] | 0.21 [0.06] | 2.284 (3) | 0.079 | 7.117 (1)†† | 0.008 | 1.15 (3)†† | 0.329 |
| Win-switch rate, block 2 (80-40-20) | 0.05 [0.02] | 0.08 [0.03] | 0.04 [0.01] | 0.05 [0.04] | 0.02 [0.01] | 0.12 [0.05] | 0.06 [0.02] | 0.15 [0.05] | 2.067 (3) | 0.105 | 9.918 (1)†† | 0.002 | 1.174 (3)†† | 0.32 |
| Lose-stay rate, block 1 (90-50-10) | 0.27 [0.03] | 0.34 [0.05] | 0.37 [0.03] | 0.34 [0.04] | 0.3 [0.03] | 0.39 [0.06] | 0.32 [0.03] | 0.34 [0.04] | 0.561 (3) | 0.641 | 1.834 (1)†† | 0.177 | 0.754 (3)†† | 0.521 |
| Lose-stay rate, block 2 (80-40-20) | 0.28 [0.03] | 0.23 [0.05] | 0.4 [0.03] | 0.32 [0.05] | 0.33 [0.03] | 0.37 [0.06] | 0.29 [0.03] | 0.33 [0.06] | 2.47 (3) | 0.062 | 0.177 (1)†† | 0.674 | 0.834 (3)†† | 0.476 |
| Reaction time, block 1 | 433.6 [28.8] | 789.3 [282.7] | 548.1 [77.8] | 365.6 [26.4] | 448 [60.1] | 442.1 [59.5] | 557.2 [108.2] | 530 [130.2] | 0.793 (3) | 0.499 | 0.161 (1)†† | 0.689 | 1.727 (3)†† | 0.161 |
| Reaction time, block 2 | 370.7 [23.3] | 494.3 [88.6] | 465.3 [61.6] | 331.4 [22.9] | 391.7 [52.3] | 555.9 [121.2] | 385.4 [29.2] | 504.1 [82.7] | 0.394 (3) | 0.757 | 1.92 (1)†† | 0.167 | 1.949 (3)†† | 0.122 |

†† Univariate analysis, F-statistic (df); df error = 306.
‡ Exact test, chi-square coefficient (df).
§ Exact significance (2-sided).
‖ Monte Carlo significance (2-sided).

Across task versions, high paranoia participants endorsed higher BAI and BDI scores (n = 73 high paranoia, 234 low paranoia; BAI: F(1, 299)=38.752, p=1.63E-09, ηp2=0.115; BDI: F(1, 299)=74.528, p=3.62E-16, ηp2=0.20; Table 2). Both correlated with paranoia (BAI: Pearson’s r = 0.450, p=1.09E-16, CI=[0.348, 0.55]; BDI: Pearson’s r = 0.543, p=6.26E-25, CI=[0.448, 0.638]). Trial-by-trial reaction time did not differ significantly between low and high paranoia (Table 2), but high paranoia participants earned fewer total points (F(1, 299)=6.175, p=0.014, ηp2=0.020) and achieved fewer reversals (F(1, 299)=5.762, p=0.017, ηp2=0.019; Table 2). Deck choice perseveration after negative feedback (lose-stay behavior) did not significantly differ by paranoia group, but choice switching after positive feedback (win-switch behavior) was elevated in high paranoia (block 1: F(1, 299)=7.117, p=0.008, ηp2=0.023; block 2: F(1, 299)=9.918, p=0.002, ηp2=0.032; Table 2).
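The win-switch and lose-stay rates reported above can be computed directly from choice and outcome sequences. The sketch below shows one conventional definition; it is our illustration, and the authors' exact preprocessing (e.g., handling of null trials) may differ.

```python
def switch_stay_rates(choices, wins):
    """Win-switch: fraction of rewarded trials followed by a different choice.
    Lose-stay: fraction of unrewarded trials followed by the same choice."""
    win_switch = [c2 != c1 for c1, c2, w in zip(choices, choices[1:], wins) if w]
    lose_stay = [c2 == c1 for c1, c2, w in zip(choices, choices[1:], wins) if not w]
    rate = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return rate(win_switch), rate(lose_stay)

# Hypothetical choices over five trials of a three-deck task (decks 0-2; 1 = win):
ws, ls = switch_stay_rates([0, 1, 1, 2, 2], [1, 1, 0, 0, 1])
```

In this toy sequence one of two wins is followed by a switch and one of two losses is followed by a stay, so both rates come out to 0.5; elevated win-switch rates, as seen in high paranoia, correspond to abandoning an option despite positive feedback.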

Experiment 3

To translate across species, we performed a new analysis of published data from rats exposed to chronic methamphetamine (Groman et al., 2018). Rats chose between three operant chamber noseports with differing probabilities of sucrose reward (70%, 30%, and 10%; Figure 1d and e). Contingencies switched between the 70% and 10% noseports after selection of the highest reinforced option in 21 out of 30 consecutive trials (Figure 1e). This task was most similar in structure to the first blocks of online versions 2 and 4; there was no mid-task change in task volatility (no contingency transition). Rats were tested for 26 within-session reversal blocks (Pre-Rx, n = 10 per group), administered saline or methamphetamine according to a 23-day schedule mimicking the escalating doses and frequencies of chronic human methamphetamine users (Groman et al., 2018), and tested once per week for four weeks following completion of the drug regimen (Post-Rx; n = 10 saline, 7 methamphetamine) (Groman et al., 2018). Relative to rats exposed to saline, those exposed to methamphetamine exhibited increased win-switch behavior, similar to what we observed in the high paranoia human participants; additionally, unlike humans, they perseverated after negative feedback (Groman et al., 2018).

Computational modeling

We employed hierarchical Gaussian filter (HGF) modeling to compare belief updating across individuals with low and high paranoia, as well as across human participants and rats exposed to methamphetamine (Table 3). We paired a three-level perceptual model with a softmax decision model dependent upon third level volatility (Figure 2a). We inverted the model from subject data (trial-by-trial choices and feedback) to estimate parameters for each individual (Figure 2b). Level 1 (x1) characterizes trial-by-trial perception of task feedback (win or loss in humans, reward or no reward in rats), Level 2 (x2) distinguishes stimulus-outcome associations (deck or noseport values), and Level 3 (x3) renders perception of the overall task volatility (i.e., frequency of reversal events, changes in the stimulus-outcome associations).
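The decision model maps beliefs to choice probabilities. One common coupling in HGF analyses ties the softmax inverse temperature to the current volatility estimate, e.g. β = exp(−μ3), so that a subject who believes the task is volatile chooses more stochastically. The exact mapping used here is not specified in this excerpt, so the sketch below should be read as an assumption about the form of the volatility-dependent softmax.

```python
import math

def choice_probs(deck_values, mu3):
    """Softmax over deck values with inverse temperature beta = exp(-mu3):
    the more volatile the subject believes the task to be, the flatter
    (more stochastic) the resulting choice distribution."""
    beta = math.exp(-mu3)
    weights = [math.exp(beta * v) for v in deck_values]
    z = sum(weights)
    return [w / z for w in weights]

# Same deck values under a low versus a high volatility belief:
low_vol = choice_probs([1.0, 0.0, 0.0], mu3=0.0)   # sharper preference for deck 0
high_vol = choice_probs([1.0, 0.0, 0.0], mu3=2.0)  # closer to uniform
```

This coupling is one way elevated volatility priors can produce the increased choice stochasticity (win-switching) described above, without any change in the learned deck values themselves.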

Table 3
ANOVA results for HGF parameters.
| Parameter | Block effect†: Statistic§ | p-value | Group effect‡: Statistic§ | p-value | Interaction effect: Statistic§ | p-value |
| Experiment 1 |  |  |  |  |  |  |
| ω3 | 11.672 (1) | 0.002 | 1.294 (1) | 0.264 | 6.948 (1) | 0.013 |
| µ30 | 25.904 (1) | 1.809E-5 | 7.063 (1) | 0.012 | 5.344 (1) | 0.028 |
| κ | 7.768 (1) | 0.009 | 7.599 (1) | 0.010 | 0.003 (1) | 0.960 |
| ω2 | 2.182 (1) | 0.150 | 4.186 (1) | 0.050 | 0.058 (1) | 0.811 |
| µ20 | 4.831 (1) | 0.036 | 1.261 (1) | 0.270 | 0.370 (1) | 0.547 |
| BIC | 0.061 (1) | 0.807 | 8.801 (1) | 0.006 | 1.7 (1) | 0.202 |
| Experiment 2, Version 3 |  |  |  |  |  |  |
| ω3 | 14.932 (1) | 0.0002 | 1.128 (1) | 0.292 | 1.406 (1) | 0.240 |
| µ30 | 64.651 (1) | 1.54E-11 | 6.366 (1) | 0.014 | 0.003 (1) | 0.959 |
| κ | 15.53 (1) | 0.0002 | 13.521 (1) | 0.0005 | 0.011 (1) | 0.916 |
| ω2 | 0.027 (1) | 0.869 | 8.70 (1) | 0.004 | 0.090 (1) | 0.765 |
| µ20 | 11.432 (1) | 0.001 | 0.030 (1) | 0.864 | 0.203 (1) | 0.653 |
| BIC | 1.110E-5 (1) | 0.997 | 16.336 (1) | 0.0001 | 1.678 (1) | 0.199 |
| Experiment 3: Rats |  |  |  |  |  |  |
| ω3 | 30.086 (1) | 6.2785E-5 | 4.579 (1) | 0.049 | 9.058 (1) | 0.009 |
| µ30 | 31.416 (1) | 5.0188E-5 | 8.454 (1) | 0.011 | 5.159 (1) | 0.038 |
| κ | 9.132 (1) | 0.009 | 13.356 (1) | 0.002 | 2.644 (1) | 0.125 |
| ω2 | 32.192 (1) | 4.4173E-5 | 22.344 (1) | 0.0003 | 18.454 (1) | 0.001 |
| µ20 | 5.226 (1) | 0.037 | 0.368 (1) | 0.553 | 2.087 (1) | 0.169 |
| BIC | 5.052 (1) | 0.040 | 1.890 (1) | 0.189 | 0.331 (1) | 0.573 |

† Block refers to first versus second half in human studies, Pre-Rx versus Post-Rx in rat studies.
‡ Group refers to low versus high paranoia in humans, saline versus methamphetamine in rats.
§ F-statistic (degrees of freedom); df error = 30 in Experiment 1, 70 in Experiment 2, Version 3, and 50 in Experiment 3: Rats; split-plot ANOVA (i.e., repeated measures with between-subjects factor).

Hierarchical Gaussian Filter (HGF) model parameters.

(a) 3-level HGF perceptual model (blue) with a softmax decision model (green). Level 1 (x1): trial-by-trial perception of win or loss feedback. Level 2 (x2): stimulus-outcome associations (i.e., deck values). Level 3 (x3): perception of the overall reward contingency context. The impact of phasic volatility upon x2 is captured by κ (i.e., coupling). Tonic volatility modulates x3 and x2 via ω3 and ω2, respectively. μ30 is the initial value of the third level volatility belief. (b) HGF model parameter estimates from each of our three studies (in laboratory, online, rat - columns), ω3, μ30, κ, and ω2, displayed hierarchically, in rows, in parallel with the position of the particular parameter in the model depiction in a). Parameters replicate across high paranoia groups in the in-laboratory experiment (n = 21 low paranoia [gray], 11 high paranoia [orange]; dark bars are initial task blocks, lighter bars follow the contingency transition); the analogous online task (version 3, n = 56 low paranoia [gray], 16 high paranoia [orange]; dark bars are initial task blocks, lighter bars follow the contingency transition); and rats exposed to chronic, escalating saline or methamphetamine (n = 10 per group, Pre-Rx [dark gray]; Post-Rx, n = 10 saline [light gray], seven methamphetamine [orange]). Center lines depict medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles, outliers are represented by dots; crosses represent sample means; data points are plotted as open circles. *p≤0.05, **p≤0.01, ***p≤0.001.

Belief trajectories were unique to each subject due to the probabilistic, performance-dependent nature of the task, so we estimated initial beliefs (priors) for x2 and x3 (μ20 and μ30, respectively). We also estimated ω2, the tonic volatility of stimulus-outcome associations. Lower ω2 indicates that subjects are slower to adjust beliefs about the value of each option; they maintain rigid beliefs about the underlying probabilities. The κ parameter captures the impact of phasic volatility on updating stimulus-outcome associations. In the setting of our experiments, κ approximates the influence of unexpected uncertainty. Higher κ implies faster updating of stimulus-outcome associations – that is, participants are more likely to perceive volatility as reversal events. Our final parameter of interest, ω3, characterizes perception of ‘meta-volatility,’ such as changes in the frequency of reversal events (Lawson et al., 2017). The lower ω3, the slower a subject is to adjust their volatility belief; they adhere more rigidly to their volatility prior (μ30).
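The update scheme these parameters govern can be sketched in code. The following is a deliberately simplified, illustrative Python version of the three-level HGF updates (after Mathys and colleagues): it tracks a single option's value belief (μ2) and a volatility belief (μ3), with κ coupling the levels and ω2/ω3 setting tonic volatility. The full HGF tracks exact precisions across all three decks; the function name and default values here are illustrative, not the fitted model.

```python
import math

def run_simplified_hgf(outcomes, mu3_0=0.5, kappa=0.5, omega2=-2.0, omega3=-6.0):
    """Illustrative, simplified sketch of 3-level HGF belief updating for
    one option. mu2: belief (log-odds) that the option is rewarding;
    mu3: belief about the (log) volatility of that association.
    Several precision terms of the full model are approximated."""
    mu2, sigma2 = 0.0, 1.0     # level-2 belief and its uncertainty
    mu3, sigma3 = mu3_0, 1.0   # level-3 volatility belief and its uncertainty
    trajectory = []
    for y in outcomes:         # y in {0, 1}: loss / win feedback (level 1)
        p = 1.0 / (1.0 + math.exp(-mu2))        # predicted win probability
        delta1 = y - p                          # level-1 prediction error
        # Phasic volatility, coupled via kappa, inflates level-2 uncertainty,
        # so a high volatility belief mu3 speeds up value updating:
        nu = math.exp(kappa * mu3 + omega2)
        sigma2_hat = sigma2 + nu
        sigma2 = 1.0 / (1.0 / sigma2_hat + p * (1.0 - p))
        mu2 += sigma2 * delta1                  # precision-weighted update
        # Volatility prediction error: was the level-2 change bigger or
        # smaller than the current volatility belief predicted?
        delta2 = (sigma2 + (sigma2 * delta1) ** 2) / sigma2_hat - 1.0
        sigma3_hat = sigma3 + math.exp(omega3)  # tonic drift via omega3
        w2 = nu / sigma2_hat
        sigma3 = 1.0 / (1.0 / sigma3_hat + 0.5 * kappa ** 2 * w2)
        mu3 += sigma3 * 0.5 * kappa * w2 * delta2
        trajectory.append((mu2, mu3))
    return trajectory
```

In this sketch, raising μ30 or κ makes the value belief μ2 swing harder after each surprise, which is the mechanistic reading of the group differences reported below.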

Priors did not differ between groups at x2 (Table 3), but paranoid individuals and rats exposed to methamphetamine exhibited elevated μ30; that is, they expected greater task volatility (Figure 2b, blue). In Experiment 1, we observed an interaction between task block and paranoia group (F(1, 30)=5.344, p=0.028, ηp2=0.151; Table 1): μ30 differed between high and low paranoia in both blocks (block 1, F(1, 30)=4.232, p=0.048, ηp2=0.124, MD = 0.658, CI=[0.005, 1.312]; block 2, F(1, 30)=7.497, p=0.010, ηp2=0.20, MD = 1.598, CI=[0.406, 2.789]), but only low paranoia subjects significantly updated their priors between block 1 and block 2 (F(1, 30)=39.841, p=5.85E-07, ηp2=0.570, MD = 1.504, CI=[1.017, 1.99]). In Experiment 2, the analogous task design (version 3) demonstrated significant effects of block (F(1, 70)=64.652, p=1.54E-11, ηp2=0.480, MD = 1.303, CI=[0.980, 1.627]) and paranoia (F(1, 70)=6.366, p=0.014, ηp2=0.083, MD = 0.909, CI=[0.191, 1.628]; Table 1). Rats showed a similar effect following methamphetamine exposure, with a significant time (Pre-Rx, Post-Rx) by treatment (methamphetamine, saline) interaction (F(1, 15)=5.159, p=0.038, ηp2=0.256; pre versus post methamphetamine effect: F(1, 15)=12.186, p=0.003, MD = 1.265, CI=[−0.493, 2.037]; Pre-Rx mean [standard error]=−1.25 [0.56] saline, −0.77 [0.80] methamphetamine; Post-Rx: m = −0.69 [0.74] saline, 0.58 [0.73] methamphetamine). Random effects meta-analyses confirmed significant cross-experiment replication of elevated μ30 in human participants with paranoia (in laboratory and online version 3; MDMETA = 1.110, CI=[0.927, 1.292], zMETA = 11.929, p=8.356E-33) and across humans with paranoia and rats exposed to methamphetamine (MDMETA = 2.090, CI=[0.123, 4.056], zMETA = 2.083, p=0.037). Both paranoid humans and rats administered chronic methamphetamine had strong beliefs that the task contingencies would change rapidly and unpredictably – in other words, they expected frequent reversal events.
Methamphetamine exposure made rats behave like humans with high paranoia (Figure 2b, Post-Rx condition, orange). This is particularly striking when compared to human data from the first task block (before contingency transition), when task designs are most similar across experiments.
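The cross-experiment replication tests above are random-effects meta-analyses of mean differences. As a rough sketch of how such a pooled estimate can be computed (assuming a DerSimonian-Laird estimator over study-level mean differences and standard errors; the authors' exact implementation may differ):

```python
import math

def random_effects_meta(mean_diffs, std_errs, z_crit=1.96):
    """DerSimonian-Laird random-effects meta-analysis of mean differences.
    Returns the pooled effect, its 95% CI, and the z statistic."""
    w = [1.0 / se ** 2 for se in std_errs]               # fixed-effect weights
    fixed = sum(wi * d for wi, d in zip(w, mean_diffs)) / sum(w)
    # Cochran's Q heterogeneity statistic and between-study variance tau^2
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, mean_diffs))
    df = len(mean_diffs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight each study by total (within + between) variance
    w_star = [1.0 / (se ** 2 + tau2) for se in std_errs]
    pooled = sum(wi * d for wi, d in zip(w_star, mean_diffs)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    z = pooled / se_pooled
    ci = (pooled - z_crit * se_pooled, pooled + z_crit * se_pooled)
    return pooled, ci, z
```

For example, `random_effects_meta([1.1, 0.9], [0.33, 0.37])` would pool two hypothetical study-level mean differences into a single MDMETA, CI, and zMETA of the kind reported above.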

Paranoid participants and methamphetamine-exposed rats updated stimulus-outcome associations more strongly in response to perceived volatility (e.g., correctly or incorrectly inferred reversals; Figure 2b). κ showed significant paranoia group and block effects across the in-laboratory experiment and online version 3 (Table 1; paranoia effects, in laboratory: F(1, 30)=7.599, p=0.010, ηp2=0.202, MD = 0.081, CI=[0.021, 0.140]; online version 3: F(1, 70)=13.521, p=0.0005, ηp2=0.162, MD = 0.068, CI=[0.031, 0.104]; MDMETA = 0.079, CI=[0.063, 0.095], zMETA = 9.502, p=2.067E-21; see Table 3 for block effects). κ increased from baseline in rats on methamphetamine, yielding significant effects of treatment (F(1, 15)=13.356, p=0.002, ηp2=0.471, MD = 0.045, CI=[0.019, 0.072]) and time (F(1, 15)=9.132, p=0.009, ηp2=0.378, MD = 0.041, CI=[0.012, 0.069]); however, the interaction between time and treatment did not reach statistical significance (Table 3; Pre-Rx m = 0.499 [0.015] saline, 0.523 [0.040] methamphetamine; Post-Rx: m = 0.518 [0.053] saline, 0.585 [0.029] methamphetamine). Replication of group effects was significant across all three experiments (MDMETA = 2.063, CI=[0.341, 3.785], zMETA = 2.348, p=0.019). Thus, learning was more strongly driven by unexpected uncertainty in high paranoia participants and rats chronically administered methamphetamine; they were faster to interpret volatility as reversal events than their low paranoia and saline-exposed counterparts.

Expected uncertainty (ω2) was decreased in paranoid participants and rats exposed to methamphetamine (Figure 2b). In laboratory and online (version 3), paranoid individuals were slower to update stimulus-outcome associations in response to expected uncertainty (Table 1; ω2 paranoia effect, in laboratory: F(1, 30)=4.186, p=0.050, ηp2=0.122, MD = −1.188, CI=[−2.375, −0.002]; online version 3: F(1, 70)=8.7, p=0.004, ηp2=0.111, MD = −0.993, CI=[−1.665, −0.322]; MDMETA = −1.154, CI=[−1.455, −0.853], zMETA = −7.521, p=5.450E-14). The effects of methamphetamine exposure in rats were consistent (MDMETA = −1.992, CI=[−3.318, −0.665], zMETA = −2.943, p=0.003) yet more striking, with a strongly negative ω2 accounting for the more pronounced lose-stay behavior, or perseveration, in rats (time by treatment interaction, F(1, 15)=18.454, p=0.001, ηp2=0.552; pre versus post methamphetamine: F(1, 15)=42.242, p=1.0E-5, ηp2=0.738, MD = −1.604, CI=[−2.130, −1.078]; Pre-Rx m = 0.198 [0.33] saline, −0.036 [0.42] methamphetamine; Post-Rx: m = −0.023 [0.56] saline, −1.640 [0.71] methamphetamine). High paranoia humans and rats exposed to methamphetamine maintained rigid beliefs about the underlying option probabilities relative to low paranoia and saline controls. This was associated with perseverative behavior in the rats but not in humans.

Meta-volatility learning (ω3) was similarly decreased across paranoia and methamphetamine-exposed groups (in laboratory, online version 3, and rats: MDMETA = −1.155, CI=[−2.139, −0.171], zMETA = −2.3, p=0.021), suggesting more reliance on expected task volatility (i.e., anticipated frequency of reversal events) than on actual task feedback. In the laboratory experiment, we observed a block by paranoia group interaction (Table 1, F(1, 30)=6.948, p=0.010, ηp2=0.188). Post-hoc tests differentiated first and second blocks for the low paranoia group only (F(1, 30)=26.640, p=1.5E-5, ηp2=0.470, MD = −0.876, CI=[−1.222, −0.529]). The paranoia effect did not reach statistical significance for online version 3 (block effect only, F(1, 70)=14.932, p=0.0002, ηp2=0.176, MD = −0.692, CI=[−1.050, −0.335]; Table 3), but random effects meta-analysis confirmed a significant paranoia group difference (in laboratory and online version 3: MDMETA = −0.341, CI=[−0.522, −0.159], zMETA = −3.68, p=0.0002). Methamphetamine exposure rendered ω3 more negative in rats (time by treatment interaction, F(1, 15)=9.058, p=0.009, ηp2=0.376; pre versus post methamphetamine: F(1, 15)=30.668, p=5.7E-5, ηp2=0.672, MD = −1.210, CI=[−1.676, −0.745]; Pre-Rx m = −0.692 [0.44] saline, −0.607 [0.51] methamphetamine; Post-Rx: m = −1.044 [0.44] saline, −1.817 [0.32] methamphetamine). These data indicate that paranoia and methamphetamine are associated with slower learning about changes in task volatility, suggesting greater reliance on volatility priors than task feedback.

In summary, our modeling analyses suggest the following about paranoid humans and methamphetamine-exposed animals: they expect the task to be volatile (high μ30), their expectations about task volatility are rigid (low ω3), and they misinterpret probabilistic errors and task volatility as signals that the task has fundamentally changed (high κ, low ω2).

We applied False Discovery Rate (FDR) correction for multiple comparisons to each model parameter (Hochberg and Benjamini, 1990). κ group effects survived correction within each experiment (Table 4). In addition to κ, μ30 survived in Experiment 1; μ30 and ω2 survived in online version 3; and μ30, ω2, and ω3 survived as group effects in Experiment 3. Such correction is not yet standard practice with this modeling approach (Lawson et al., 2017; Powers et al., 2017; Sevgi et al., 2016), but we believe it should be; when effects survive correction, we should have greater confidence in them.
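For reference, the Benjamini-Hochberg step-up procedure behind the FDR correction can be sketched as follows. This is a generic implementation at FDR level α = 0.05; the rank-dependent thresholds k/m × α are the "critical values" reported in Table 4.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure. Returns a list of booleans
    marking which hypotheses are rejected at false discovery rate alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k whose ordered p-value clears its threshold
    max_k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            max_k = rank
    # Reject that hypothesis and every smaller-p hypothesis
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= max_k:
            reject[i] = True
    return reject
```

With four parameters per experiment (m = 4), the thresholds are 0.0125, 0.025, 0.0375, and 0.05, matching the critical values in Table 4.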

Table 4
Corrections for multiple comparisons.
                       Group effect†                                              Interaction effect‡
              Survives Bonferroni?§  Survives FDR?  Critical value  B-H p-value   Survives Bonferroni?§  Survives FDR?  Critical value  B-H p-value
Experiment 1
ω3            N/A                    N/A            0.05            0.264         No                     No             0.0125          0.052
µ30           Yes                    Yes            0.025           0.024         No                     No             0.025           0.056
κ             Yes                    Yes            0.0125          0.04          N/A                    N/A            0.05            0.96
ω2            No                     No             0.0375          0.0667        N/A                    N/A            0.0375          1.081
Experiment 2, Version 3
ω3            N/A                    N/A            0.05            0.292         N/A                    N/A            0.0125          0.96
µ30           No                     Yes            0.0375          0.0187        N/A                    N/A            0.05            0.959
κ             Yes                    Yes            0.0125          0.002         N/A                    N/A            0.0375          1.221
ω2            Yes                    Yes            0.025           0.008         N/A                    N/A            0.025           1.53
Experiment 3: Rats
ω3            No                     Yes            0.05            0.049         Yes                    Yes            0.025           0.018
µ30           Yes                    Yes            0.0375          0.0147        No                     No             0.0375          0.0507
κ             Yes                    Yes            0.025           0.004         N/A                    N/A            0.05            0.125
ω2            Yes                    Yes            0.0125          0.0012        Yes                    Yes            0.0125          0.004
  1. N/A denotes p-values that were not significant before correction. † Low versus high paranoia in humans, saline versus methamphetamine in rats. ‡ Group by time (i.e., first versus second half in human studies, Pre-Rx vs Post-Rx in rat studies). § p-value < 0.0125.

Paranoia effects across task versions

To examine the relationship between beliefs about contingency transition and paranoia within our HGF parameters, we performed split-plot, repeated measures ANOVAs across all four task versions. Paranoia group effects were specific to versions of the task in which we explicitly manipulated uncertainty via a contingency transition that increased volatility (Figure 3, Table 5, versions 3 and 4). Specifically, we observed paranoia by version interactions for κ (F(3, 299)=4.178, p=0.006, ηp2=0.040) and ω2 (F(3, 299)=2.809, p=0.040, ηp2=0.027; Table 2). Post-hoc tests confirmed that significant paranoia group effects were restricted to version 3 (κ: F(1, 299)=12.230, p=0.001, ηp2=0.039, MD = 0.068, CI=[0.03, 0.106]; ω2: F(1, 299)=8.734, p=0.003, ηp2=0.028, MD = −0.993, CI=[−1.655, −0.332]), with a trend for version 4 (ω2: F(1, 299)=2.909, p=0.089, ηp2=0.010, MD = −0.528, CI=[−1.138, 0.081], Figure 3a). μ30 also exhibited a paranoia by version trend (Table 2, F(3, 299)=2.329, p=0.075, ηp2=0.023), largely driven by version 3 (F(1, 299)=6.206, p=0.013, ηp2=0.020, MD = 0.909, CI=[0.191, 1.628]; Figure 3a). There were no significant paranoia effects or interactions for ω3 (Table 5). In sum, our contingency shift manipulation – from easily discerned options to underlying probabilities that are closer together – increased unexpected uncertainty the most, particularly in highly paranoid participants, compared to the other task versions.

Table 5
Experiment 2 effects across block, paranoia group, and task version.
        Block                Group              Version            Block*group*version  Group*version      Block*group        Block*version
        F (df)†    P         F (df)    P        F (df)    P        F (df)    P          F (df)    P        F (df)    P        F (df)      P
ω3      3.722 (1)  0.055     0.499 (1) 0.481    2.061 (3) 0.105    0.415 (3) 0.742      1.005 (3) 0.391    0.145 (1) 0.704    7.0155 (3)  1.42E-4
µ30     288.1 (1)  1.01E-45  2.604 (1) 0.108    2.321 (3) 0.075    0.261 (3) 0.853      2.329 (3) 0.075    0.281 (1) 0.597    0.061 (3)   0.98
κ       120.9 (1)  7.65E-24  3.602 (1) 0.059    5.06 (3)  0.002    0.08 (3)  0.971      4.178 (3) 0.006    1.028 (1) 0.312    2.559 (3)   0.055
ω2      35.3 (1)   7.92E-9   4.435 (1) 0.036    4.155 (3) 0.007    0.166 (3) 0.919      2.809 (3) 0.04     2.387 (1) 0.123    8.697 (3)   1.5E-5
µ20     71.3 (1)   1.33E-15  0.242 (1) 0.623    0.616 (3) 0.605    1.081 (3) 0.358      0.412 (3) 0.744    0.057 (1) 0.812    1.505 (3)   0.213
BIC     56.6 (1)   6.23E-13  8.073 (1) 0.005    5.385 (3) 0.001    0.262 (3) 0.853      4.927 (3) 0.002    0.451 (1) 0.502    11.905 (3)  2.19E-07
  1. † F-statistic (degrees of freedom); df error = 299; split-plot ANOVA (i.e., repeated measures with two between-subjects factors).


Paranoia effects across task versions.

(a) Estimated model parameters derived from participant choices in response to the tasks. Low paranoia is shown in gray, high paranoia is shown in orange. μ30, κ, and ω2 are shown in separate panels (top, middle, and bottom panels, respectively; y-axes). X-axes depict each separate online task version from Experiment 2 (version 1: Easy-Easy, version 2: Hard-Hard, version 3: Easy-Hard, version 4: Hard-Easy). (b) Behavior. Win-switch rate (top): paranoid participants switched between decks more frequently after positive feedback. Rates are collapsed across all task versions and blocks (paranoia group effect; n = 234 low paranoia [gray], 73 high paranoia [orange]). U-value (bottom): a measure of choice stochasticity, calculated for low (gray) and high (orange) paranoia participants and collapsed across task blocks. U-values are shown separately for each online task version (1 through 4, as in part a). In versions 3 and 4 only (the versions containing unsignaled contingency transitions), paranoid participants showed higher U-values, suggesting increasingly stochastic switching rather than perseverative returns to a previously rewarding option. Center lines show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles, outliers are represented by dots; crosses represent sample means; data points are plotted as open circles. P-values correspond to estimated marginal means post-hoc comparisons: *p≤0.05, **p≤0.01, ***p≤0.001.

Covariate analyses

We completed three ANCOVAs for each HGF parameter derived from Experiment 2: demographics (age, gender, ethnicity, and race); mental health factors (medication usage, diagnostic category, BAI score, and BDI score); and metrics and correlates of global cognitive ability (educational attainment, income, and cognitive reflection; Tables 6 and 7). For κ, our metric of unexpected uncertainty, the paranoia by version interaction remained robust across all three ANCOVAs (demographics: F(3, 294)=3.753, p=0.011, ηp2=0.037; mental health: F(3, 257)=4.417, p=0.005, ηp2=0.049; cognitive: F(3, 290)=4.304, p=0.005, ηp2=0.043). The paranoia by version trend of μ30 diminished with inclusion of demographic, mental health, and cognitive covariates (demographic: F(3, 294)=1.997, p=0.119, ηp2=0.020; mental health: F(3, 257)=1.942, p=0.123, ηp2=0.022; cognitive: F(3, 290)=2.193, p=0.089, ηp2=0.022). The paranoia by version interaction for ω2 was robust to mental health and cognitive factors (F(3, 257)=3.617, p=0.014, ηp2=0.041; F(3, 290)=3.017, p=0.030, ηp2=0.030). A paranoia group effect and a paranoia by version trend remained with inclusion of demographics (ω2, paranoia effect: F(1, 294)=4.275, p=0.040, ηp2=0.014; interaction: F(3, 294)=2.507, p=0.059, ηp2=0.025). Thus κ – participants’ perception of unexpected uncertainty – was the only parameter whose main effect of paranoia (higher κ in high paranoia participants) and paranoia-by-version interaction (higher κ as a function of increasing unexpected volatility in version 3) survived covariation for demographic, mental health, and cognitive factors. We are most confident that high paranoia participants have higher unexpected uncertainty, which drives their excessive updating of stimulus-outcome associations.

Table 6
Experiment 2 ANCOVAs.
                                  df      ω3: F     p         µ30: F    p         κ: F      p         ω2: F     p
Demographics (age, gender, ethnicity, and race)
Block                             1, 294  0.328     0.568     10.835    0.001     3.425     0.066     2.711     0.101
Block * Age                       1, 294  0.659     0.418     2.035     0.155     2.195     0.14      0.212     0.646
Block * Gender                    1, 294  0.363     0.547     0.105     0.746     4.042     0.046     0.096     0.757
Block * Ethnicity                 1, 294  0.016     0.901     0.042     0.837     0.268     0.605     0.024     0.876
Block * Race                      1, 294  3.244     0.073     0.279     0.598     0.082     0.775     1.386     0.24
Block * Paranoia Group            1, 294  0.001     0.969     0.162     0.687     0.738     0.391     1.189     0.277
Block * Version                   3, 294  7.61      7.25E-05  0.561     0.641     2.568     0.055     8.613     1.97E-05
Block * Paranoia Group * Version  3, 294  0.451     0.717     0.135     0.939     0.119     0.949     0.1       0.96
Age                               1, 294  3.054     0.082     2.974     0.086     2.101     0.149     2.339     0.128
Gender                            1, 294  0.438     0.509     0.02      0.886     0.005     0.941     0.014     0.905
Ethnicity                         1, 294  0.029     0.865     0.059     0.808     0.087     0.768     0.221     0.639
Race                              1, 294  0.072     0.789     2.218     0.138     0.373     0.542     0.333     0.564
Paranoia Group                    1, 294  4.71E-04  0.983     0.741     0.39      1.795     0.182     3.302     0.071
Version                           3, 294  1.845     0.14      1.914     0.128     4.975     0.002     3.786     0.011
Paranoia Group * Version          3, 294  0.935     0.424     1.911     0.129     3.599     0.014     1.919     0.127
Mental health factors (medication usage, diagnostic category, BAI score, and BDI score)
Block                             1, 257  3.333     0.069     95.753    3.12E-19  25.498    8.78E-07  8.341     0.004
Block * BAI                       1, 257  0.26      0.611     1.532     0.217     2.852     0.093     0.394     0.531
Block * BDI                       1, 257  0.009     0.926     0.208     0.649     6.55      0.011     0.597     0.441
Block * Medication Usage          1, 257  0.027     0.87      1.288     0.258     0.691     0.407     0.871     0.352
Block * Diagnostic Category       1, 257  1.366     0.244     1.785     0.183     0.063     0.803     0.208     0.649
Block * Paranoia Group            1, 257  0.068     0.795     0.298     0.586     0.298     0.586     0.007     0.935
Block * Version                   3, 257  5.872     0.001     0.531     0.662     0.906     0.439     6.16      0.0005
Block * Paranoia Group * Version  3, 257  1.024     0.383     0.869     0.458     0.266     0.85      0.095     0.963
BAI                               1, 257  1.108     0.294     0.012     0.913     0.954     0.33      0.921     0.338
BDI                               1, 257  0.037     0.848     0.574     0.449     1.343     0.248     2.372     0.125
Medication Usage                  1, 257  0.327     0.568     0.058     0.81      0.002     0.966     0.467     0.495
Diagnostic Category               1, 257  4.252     0.04      0.004     0.949     1.443     0.231     1.743     0.188
Paranoia Group                    1, 257  0.057     0.811     0.233     0.63      1.032     0.311     1.695     0.194
Version                           3, 257  3.183     0.025     2.73      0.045     5.274     0.002     4.468     0.004
Paranoia Group * Version          3, 257  0.311     0.818     2.307     0.077     4.556     0.004     3.397     0.019
Global cognitive ability (educational attainment, income, and cognitive reflection)
Block                             1, 290  1.19E-04  0.991     51.264    7.60E-12  28.675    1.83E-07  18.388    2.51E-05
Block * Education                 1, 290  0.603     0.438     0.001     0.975     0.033     0.856     0.258     0.612
Block * Income                    1, 290  1.211     0.272     2.874     0.091     3.483     0.063     2.421     0.121
Block * Cognitive Reflection      1, 290  1.83      0.177     0.709     0.401     1.221     0.27      4.667     0.032
Block * Paranoia Group            1, 290  0.005     0.946     0.359     0.55      0.263     0.608     0.885     0.348
Block * Version                   3, 290  8.861     1.27E-05  0.182     0.909     2.325     0.075     8.815     1.35E-05
Block * Paranoia Group * Version  3, 290  0.826     0.48      0.478     0.698     0.15      0.929     0.3       0.825
Education                         1, 290  0.111     0.739     0.578     0.448     1.395     0.239     0.608     0.436
Income                            1, 290  2.763     0.098     1.382     0.241     0.055     0.814     1.035     0.31
Cognitive Reflection              1, 290  0.164     0.686     12.807    0.0004    0.224     0.636     0.807     0.37
Paranoia Group                    1, 290  0.069     0.793     0.555     0.457     2.477     0.117     4.715     0.031
Version                           3, 290  2.104     0.1       2.55      0.056     5.53      0.001     3.799     0.011
Paranoia Group * Version          3, 290  1.288     0.279     2.568     0.055     4.469     0.004     2.793     0.041
Table 7
Modified Cognitive Reflection Questionnaire Items.
Item  Prompt
1     A folder and a paper clip cost $1.10 in total. The folder costs $1.00 more than the paper clip. How much does the paper clip cost?
2     If it takes 5 clerks 5 min to review five applications, how long would it take 100 clerks to review 100 applications?
3     In a garden, there is a cluster of weeds. Every day, the cluster doubles in size. If it takes 48 days for the cluster to cover the entire garden, how long would it take for the cluster to cover half of the garden?

Relationships between parameters and paranoia

We found a significant correlation between κ and paranoia scores (Figure 4). However, depression and anxiety were also related to κ, and indeed, paranoia and depression correlate with one another, in our data and in other studies (Na et al., 2019). In order to explore commonalities among the rating scales in the present data, we performed a principal component analysis (Figure 5), identifying three principal components. The first principal component (PC1) explained 82.3% of the variance in the scales and loaded similarly on anxiety, depression, and paranoia. It correlated significantly with kappa (r = 0.272, p=0.021). Depression, anxiety, and paranoia all contribute to PC1. We suggest that this finding is consistent with the idea that depression and anxiety represent contexts in which paranoia can flourish and, likewise, that harboring a paranoid stance toward the world can induce depression and anxiety.

Correlations between κ and symptoms, with and without paranoia scores of zero.

Paranoia (SCID-II, top), depression (BDI, middle), and anxiety (BAI, bottom). (a) Among all 72 subjects from online version 3, κ correlates with paranoia (r = 0.30, p=0.011, top) and depression (r = 0.250, p=0.034, middle), but not anxiety (r = 0.210, p=0.077, bottom). (b) Among participants who endorse at least one paranoia item (SCID-II paranoia >0, n = 39), κ correlates with paranoia (r = 0.588, p=8.1E-5, top), depression (r = 0.427, p=0.007, middle), and anxiety (r = 0.367, p=0.021, bottom). All correlations are two-tailed.

Dimensionality reduction analysis.

Principal component analysis (PCA) was performed on behavioral data to explain the relationship between κ and the rating scales - paranoia (SCID), depression (BDI) and anxiety (BAI). (a) Scree plot of PCA illustrates percent of variance for each component explained by SCID, BDI and BAI. (b) Principal component 1 (PC1) plotted against κ values. κ correlates with PC1 (r = 0.272, p=0.021).

Multiple regression

In order to make the case that our observations were most relevant to paranoia, we examined the effects of paranoia, anxiety, and depression on κ within the online version 3 dataset with multiple regression. A significant regression equation was found (F(3, 68)=3.681, p=0.016), with an R2 of 0.140. Participants’ predicted κ equaled 0.486 + 0.062 (PARANOIA) + 0.012 (BDI) − 0.006 (BAI). Paranoia was a significant predictor of κ (β = 0.343, t = 2.470, p=0.016, CI=[0.012, 0.113]) but depression and anxiety were not (BDI: β = 0.086, t = 0.423, p=0.674, CI=[−0.043, 0.066]; BAI: β = −0.043, t = −0.218, p=0.828, CI=[−0.063, 0.050]). Examination of correlation plots for κ (Figure 4) revealed a much stronger relationship when analyses were restricted to individuals with paranoia scores greater than 0 (i.e., endorsement of at least one item); among participants who denied all questionnaire items, a minority (seven out of 32) exhibited elevated κ. To account for the possibility that some individuals with severe paranoia may avoid disclosing sensitive information, we performed additional analyses of participants who endorsed one or more paranoia items. The correlation between paranoia and κ in the first block of the task increased from r = 0.3, p=0.011, CI=[0.074, 0.497] (all participants, n = 72) to r = 0.588, p=8.0E-5, CI=[0.335, 0.762] (participants with paranoia >0, n = 39). In this subset, a significant regression equation was also found (F(3, 35)=6.322, p=0.002), with an R2 of 0.351 (Figure 4). Participants’ predicted κ was equal to 0.432 + 0.150 (PARANOIA) + 0.013 (BDI) − 0.004 (BAI). Paranoia was a significant predictor of κ (β = 0.538, t = 2.983, p=0.005, CI=[0.048, 0.252]) but depression and anxiety were not (BDI: β = 0.111, t = 0.494, p=0.624, CI=[−0.041, 0.067]; BAI: β = −0.035, t = −0.163, p=0.872, CI=[−0.057, 0.049]). Thus, paranoia predicted kappa across participants; anxiety and depression did not.
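The regression equations above are ordinary least squares fits of κ on the paranoia, BDI, and BAI scores. A minimal sketch of such a fit, returning both raw coefficients (the prediction equation) and standardized betas (the β values reported above), using NumPy; the function and data layout are illustrative, not the authors' analysis script:

```python
import numpy as np

def multiple_regression(y, predictors):
    """OLS regression of y on several predictors via least squares.
    Returns the intercept, raw coefficients, standardized betas, and R^2.
    predictors: dict mapping name -> 1-D sequence of values."""
    y = np.asarray(y, dtype=float)
    names = list(predictors)
    # Design matrix: a column of ones (intercept) plus one column per predictor
    X = np.column_stack(
        [np.ones_like(y)] + [np.asarray(predictors[n], dtype=float) for n in names]
    )
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ coef
    r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    # Standardized beta: raw coefficient scaled by sd(x) / sd(y)
    betas = {n: coef[k + 1] * X[:, k + 1].std() / y.std() for k, n in enumerate(names)}
    return coef[0], dict(zip(names, coef[1:])), betas, r2
```

Called with κ as `y` and paranoia, BDI, and BAI as predictors, this yields an equation of the same form as "predicted κ = 0.486 + 0.062 (PARANOIA) + ...".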

Behavior and simulations

Win-switching was the prominent behavioral feature of both paranoid participants and rats exposed to methamphetamine (Table 1, Table 2; Groman et al., 2018). Collapsed across blocks and task versions, our Experiment 2 data demonstrated a main effect of paranoia group (Figure 3b; F(1, 299)=9.207, p=0.003, ηp2=0.030, MD = 0.059, CI=[0.021, 0.097]; version trend: F(3, 299)=2.263, p=0.081, ηp2=0.022; low paranoia: m = 0.06 [0.01], high paranoia: m = 0.12 [0.02]). To elucidate whether this behavior was stochastic or predictable (e.g., switching back to a previously rewarding option), we calculated U-values (Kong et al., 2017), a metric of behavioral variability employed by behavioral ecologists (increasingly an inspiration for human behavioral analysis [Fung et al., 2019]), particularly with regard to predator-prey relationships (Humphries and Driver, 1970). When a predator approaches, a prey animal’s best course of action is to behave randomly, or in a protean fashion, in order to evade capture (Humphries and Driver, 1970). The more protean or stochastic the behavior, the closer the U-value is to 1. Across task blocks, paranoid participants exhibited elevated choice stochasticity (paranoia by version interaction, F(3, 298)=3.438, p=0.017, ηp2=0.033; Table 2). Post-hoc tests indicated that this stochasticity was specific to versions with a contingency transition, suggesting a relationship to unexpected uncertainty (Figure 3b; version 3, F(1, 298)=17.585, p=3.6E-5, ηp2=0.056, MD = 0.071, CI=[0.038, 0.104]; version 4, F(1, 298)=6.397, p=0.012, ηp2=0.021, MD = 0.039, CI=[0.009, 0.07]). In short, our task manipulation, increasing unexpected volatility, increased win-switching behavior and stochastic choice more in more paranoid participants.
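The U-value is, at its core, a normalized Shannon entropy of observed behavior: 0 when one option is chosen every time, 1 when behavior is maximally random. A minimal sketch over simple choice frequencies follows; the published metric (Kong et al., 2017) is computed over choice-sequence patterns, so treat this as an approximation of the idea rather than the exact formula.

```python
import math
from collections import Counter

def u_value(choices, n_options=3):
    """Behavioral stochasticity as normalized Shannon entropy of choice
    frequencies: 0 = fully predictable, 1 = maximally random.
    (Approximation: the published U-value is computed over sequence
    patterns rather than single-choice frequencies.)"""
    n = len(choices)
    counts = Counter(choices)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy / math.log2(n_options)  # normalize by maximum entropy
```

A subject who always returns to the same deck scores 0; one who samples all three decks equally often scores 1, the protean extreme.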

To test the adequacy of our model, we simulated data for each subject in online version 3 and determined whether key behavioral effects (Figure 7a, Table 1, Table 8) were present. Using individually estimated HGF parameters to generate ten simulations per participant, we recapitulated both elevated win-switch behavior (paranoia effect, F(1, 70)=15.394, p=2.01E-4, ηp2=0.180, MD = 0.186, CI=[0.091, 0.28]) and choice stochasticity (U-value; paranoia effect, F(1, 70)=13.362, p=0.0005, ηp2=0.160, MD = 0.065, CI=[0.030, 0.101]) in simulated paranoid participants (Figure 7b; simulated win-switch rate, low paranoia: m = 0.24 [0.02], high paranoia: m = 0.43 [0.04]; simulated U-value, low paranoia: m = 0.851 [0.008], high paranoia: m = 0.916 [0.016]). Neither real nor simulated data showed any significant relationship between lose-stay behavior and paranoia (Table 1, Table 2, Table 8). To demonstrate the effects of parameters on task performance, we performed additional simulations in which we doubled or halved a single parameter at a time from the baseline average of low paranoia participants. These results confirmed the impact of κ, ω2, and ω3 on win-shift behavior (Figure 4). Parameter recovery revealed significant correlations for κ and ω2 between original subject parameters and those estimated from simulations (Figure 6; ω2: r = 0.702, p=2.52E-11, CI=[0.557, 0.805]; κ: r = 0.305, p=0.011, CI=[0.072, 0.506]). Higher level parameters (ω3, μ30) were less consistently recovered, as noted in previous publications (Bröker et al., 2018). Thus, the model we chose, with meta-volatility and three coupled layers of belief, successfully simulates the key features of paranoid behavior: higher win-switching and stochastic choice.
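The simulate-then-refit logic behind parameter recovery can be illustrated with a much simpler model than the HGF. The sketch below simulates choices from a two-armed Rescorla-Wagner learner with a known learning rate, then recovers that rate by grid-search maximum likelihood; the function names, grid, and settings are illustrative, not the authors' pipeline.

```python
import math
import random

def simulate_rw(alpha, beta, n_trials, p_reward=(0.8, 0.2), rng=None):
    """Simulate a 2-armed Rescorla-Wagner learner with softmax choice.
    Returns (choices, outcomes)."""
    rng = rng or random.Random(0)
    q = [0.0, 0.0]
    choices, outcomes = [], []
    for _ in range(n_trials):
        p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))  # P(choose arm 1)
        c = 1 if rng.random() < p1 else 0
        r = 1.0 if rng.random() < p_reward[c] else 0.0
        q[c] += alpha * (r - q[c])                           # RW value update
        choices.append(c)
        outcomes.append(r)
    return choices, outcomes

def fit_alpha(choices, outcomes, beta=5.0):
    """Grid-search maximum-likelihood estimate of the learning rate,
    assuming the inverse temperature beta is known."""
    best_alpha, best_ll = None, -float("inf")
    for a in (i / 100 for i in range(1, 100)):
        q, ll = [0.0, 0.0], 0.0
        for c, r in zip(choices, outcomes):
            p1 = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
            ll += math.log(p1 if c == 1 else 1.0 - p1)
            q[c] += a * (r - q[c])
        if ll > best_ll:
            best_alpha, best_ll = a, ll
    return best_alpha
```

Repeating this over many simulated subjects and correlating the generating parameters with the refit parameters yields recovery correlations of the kind reported for κ and ω2 above.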

Parameter effects on simulated task performance.

We simulated behavior from low paranoia participants (online Version 3, n = 54) to evaluate the effects of κ, μ30, ω2, and ω3 on win-shift and lose-stay rates. Estimated perceptual parameters were averaged across subjects to create a single set of baseline parameters. Additional parameter sets were created by doubling or halving one parameter at a time (e.g., 2κ or 0.5κ), while the others were held constant (n.b., 2ω2 violated model assumptions and was excluded from analysis). We also included the average parameter values of rats exposed to methamphetamine (Meth). Ten simulations were run per subject for each condition (i.e., parameter set). Win-shift and lose-stay rates were calculated, then averaged across simulations and subjects. Rates from each condition were divided by the baseline condition rate to generate relative win-shift and lose-stay rates. We compared relative rates for each condition to the baseline (relative rate of 1, depicted as the dotted line; paired t-tests, Bonferroni-corrected p-values). Of note, baseline parameters were positive for κ and ω2, and negative for μ30 and ω3. Consequently, the doubled (2x) condition makes μ30 and ω3 more negative (lower). Box plots: center lines show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles, outliers are represented by dots; crosses represent sample means; data points are plotted as open circles; *p≤0.05, **p≤0.01, ***p≤0.001.

Parameter recovery.

(a) Actual subject trajectory: this is an example choice trajectory from one participant (top). The layers correspond to the three layers of belief in the HGF model (depicted in Figure 2a). Focusing on the low-level beliefs (yellow box): the purple line represents the subject’s estimated first-level belief about the value of choosing deck 1; blue, their belief about the value of choosing deck 2; and red, their belief about the value of choosing deck 3. Simulated subject trajectory represents the estimated beliefs from choices simulated from that participant’s estimated perceptual parameters (middle), and Recovered subject trajectory represents what happens when we re-estimate beliefs from the simulated choices (bottom). Crucially, simulated trajectories closely align with real trajectories (the increases and decreases in estimated beliefs about the values of each deck [purple, blue, red lines] align with each other across actual, simulated, and recovered trajectories), although trial-by-trial choices (colored dots and arrows) occasionally differ. Outcomes (1 or 0; black dots and arrows) remain the same. (b) Actual versus Recovered: these data represent the belief parameters estimated from the participant’s responses (Actual) compared to those estimated from the choices simulated with the participant’s perceptual parameters (Recovered). Actual and Recovered values significantly correlate for ω2 (r = 0.702, p=2.52E-11) and κ (r = 0.305, p=0.011) but not ω3 (r = 0.172, p=0.16) or µ30 (r = 0.186, p=0.13). Box plots: gray indicates low paranoia, orange designates high paranoia; center lines depict medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles, outliers are represented by dots; crosses represent sample means; data points are plotted as open circles. Online version 3 dataset.

Table 8
Simulations and behavior.
                                  Win-switch rate     U-value              Lose-stay rate
Effect                    Df      F       p-value     F       p-value      F       p-value
Experiment 1
Block                     1, 30   1.465   0.236       16.999  0.0003       1.334   0.257
Block * Paranoia Group    1, 30   0.602   0.444       2.393   0.132        2.575   0.119
Paranoia Group            1, 30   3.579   0.068       3.312   0.079        2.283   0.141
Experiment 2, Version 3
Block                     1, 70   0.935   0.337       10.153  0.002        0.122   0.728
Block * Paranoia Group    1, 70   0.001   0.982       0.003   0.958        1.93    0.169
Paranoia Group            1, 70   12.698  0.001       19.209  4.03E-05     1.095   0.299
Simulations†
Block                     1, 70   0.176   0.676       3.335   0.072        5.073   0.027
Block * Paranoia Group    1, 70   2.039   0.158       2.624   0.11         0.036   0.85
Paranoia Group            1, 70   15.394  0.0002      13.362  0.0005       0.042   0.839
  1. †Simulated data from experiment 2, Version 3.

Alternate models

Our model is complex, and simpler reinforcement learning models might explain behavior on this task. Given the win-switching behavior we sought to understand, we fit a model from Lefebvre and colleagues that instantiates biased belief updating via differential weighting of positive and negative prediction errors (Lefebvre et al., 2018). Fitting this model to online version 3, we saw no significant paranoia group differences in learning rates for positive or negative prediction errors in parameters derived from all 180 trials (independent samples t-test: α+, t(70)=−0.532, p=0.597; α−, t(70)=0.963, p=0.339), nor did we see any significant block*paranoia or paranoia group effects by repeated measures ANOVA (block*paranoia: α+, F(1, 70)=0.188, p=0.732; α−, F(1, 70)=0.378, p=0.540; paranoia group: α+, F(1, 70)=0.243, p=0.623; α−, F(1, 70)=1.292, p=0.260). See Table 9.
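The core of this alternative model is a Rescorla-Wagner update with asymmetric learning rates for positive and negative prediction errors. A minimal sketch of that update rule follows (the function name and interface are illustrative; the full model also includes a softmax choice rule with inverse temperature β, as in Table 9):

```python
def dual_rate_q_update(q, choice, reward, alpha_pos, alpha_neg):
    """One Lefebvre-style update: separate learning rates are applied
    depending on the sign of the prediction error. Returns the updated
    Q-values and the prediction error."""
    delta = reward - q[choice]                       # prediction error
    alpha = alpha_pos if delta > 0 else alpha_neg    # asymmetric weighting
    q = list(q)                                      # copy; don't mutate input
    q[choice] += alpha * delta
    return q, delta
```

With alpha_pos > alpha_neg, good news moves beliefs more than bad news; the group comparison above asks whether paranoia shifts this asymmetry, and it does not.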

Table 9
Alternative models fail to capture paranoia group differences.
                        Low Paranoia (n=56)†                  High Paranoia (n=16)†                 Paranoia Group Effect‡     Paranoia x Block Effect‡
                        Mean    SEM    95% CI                 Mean    SEM    95% CI                 F (df)          P          F (df)          P
Q-learning with learning rates for positive and negative prediction errors
Positive prediction error (α+)
1st half                0.463   0.038  [0.388, 0.538]         0.475   0.071  [0.335, 0.616]         0.243 (1, 70)   0.623      0.118 (1, 70)   0.732
2nd half                0.476   0.039  [0.398, 0.555]         0.535   0.074  [0.379, 0.672]
Negative prediction error (α-)
1st half                0.421   0.022  [0.377, 0.464]         0.365   0.041  [0.284, 0.446]         1.292 (1, 70)   0.260      0.320 (1, 70)   0.573
2nd half                0.386   0.021  [0.344, 0.427]         0.364   0.039  [0.285, 0.442]
Inverse temperature (β)
1st half                271     74.0   [126, 416]             147     133    [-114, 408]            1.626 (1, 70)   0.207      0.043 (1, 70)   0.837
2nd half                316     82.3   [155, 477]             145     132    [-114, 403]
2-level HGF with softmax decision model
µ2
1st half                -0.059  0.081  [-0.218, 0.100]        -0.303  0.157  [-0.611, 0.005]        3.039 (1, 70)   0.086      0.385 (1, 70)   0.537
2nd half                -0.244  0.082  [-0.405, -0.082]       -0.566  0.155  [-0.869, -0.262]
Inverse temperature (β)
1st half                131     30.6   [71.3, 191]            35.3    6.20   [23.2, 47.5]           2.665 (1, 70)   0.107      0.250 (1, 70)   0.619
2nd half                119     30.6   [58.7, 179]            52.1    12.1   [28.3, 75.9]
  1. † Online version 3 data. ‡ Repeated measures ANOVA.

We can also simplify within our Hierarchical Gaussian Filter framework. The model we chose had three layers of beliefs, and the highest level seemed to capture most of the task and paranoia effects of interest (Figure 8). To confirm this suspicion, we removed the third layer, fitting an HGF model that had beliefs about outcomes and deck values but no beliefs about volatility, no unexpected-volatility learning rate, and no meta-volatility. This model failed to capture the task effects or group differences in its parameters (see Table 9).

Behavioral data and simulations.

(a) Plots of in-laboratory and online behavioral metrics. Win-switch rate (switching after positive feedback), U-value (behavioral stochasticity) and Lose-stay rate (perseverating after a loss). Low paranoia participants are shown in gray, High paranoia in orange. Win-switch rates and U-values are collapsed across blocks. For Lose-stay rates, darker colors are block one data and lighter colors are block two data. Behavioral switching patterns replicate across in-laboratory and online version three experiments. Perseveration after negative feedback (lose-stay behavior) did not significantly differ between paranoia groups or task block. (b) Simulated data generated from HGF perceptual parameters (version 3). Win-switch rate, U-value and Lose-stay rate of the simulated data are depicted. The model-simulated data replicate the win-switch and U-value behavioral differences between high and low paranoia participants presented in panel a. Like the real participants, there was no difference in lose-stay rates in the simulated data. Center lines show the medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5 times the interquartile range from the 25th and 75th percentiles; outliers are represented by dots; crosses represent sample means; data points are plotted as open circles. *p≤0.05, **p≤0.01, ***p≤0.001. Plots of participant behavioral metrics (a) are presented side by side with simulated data (b).

Therefore, a more complicated model, one that captures higher-level beliefs about contingency transitions or learning when to learn, seems most appropriate, and indeed, that type of model was able to simulate the key features of our data (Palminteri et al., 2017). Future work will compare and contrast different potential computational models, including but not limited to Bayesian hidden state Markov models (Hampton et al., 2006), as well as switching (Gershman et al., 2014) and volatile Kalman filters (Piray and Daw, 2020).

Clustering analysis

Given the apparent similarity in effects of paranoia and methamphetamine in humans and rats, respectively (Figure 2b), we searched for latent structure in our data using two-step cluster analysis (Tkaczynski, 2017). This approach sorts subjects into groups (clusters) on the basis of experimenter-selected variables, such as estimated model parameters. The goal is to find distinct subsets in the data such that each cluster exhibits a cohesive pattern of relationships between the variables. Whereas some clustering approaches require the experimenter to predefine the expected number of clusters, two-step clustering determines both the optimal number of clusters and the composition of each cluster. The greater the similarity (or homogeneity) within a group and the greater the difference between groups, the better the clustering.
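The cohesion/separation logic behind the silhouette coefficient can be illustrated with a hand-rolled computation. This is a generic sketch on made-up one-dimensional data; the study's two-step clustering operated on four model parameters with its own distance measure and software:

```python
def silhouette(points, labels):
    """Mean silhouette coefficient for a clustering of 1-D points.
    For each point: a = mean distance to its own cluster, b = mean
    distance to the nearest other cluster; s = (b - a) / max(a, b).
    Generic illustration only, not the study's clustering pipeline."""
    def mean_dist(p, members):
        return sum(abs(p - q) for q in members) / len(members)

    score = 0.0
    for i, (p, lab) in enumerate(zip(points, labels)):
        own = [q for j, (q, l) in enumerate(zip(points, labels))
               if l == lab and j != i]
        others = {l for l in labels if l != lab}
        if not own or not others:
            continue  # singleton clusters contribute nothing here
        a = mean_dist(p, own)
        b = min(mean_dist(p, [q for q, l in zip(points, labels) if l == other])
                for other in others)
        score += (b - a) / max(a, b)
    return score / len(points)
```

Well-separated groups score near 1; arbitrary groupings score near (or below) 0, which is why an average silhouette of 0.7 counts as good cohesion and separation.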

Considering that paranoia and methamphetamine exposure share a pattern of elevated μ30 and κ accompanied by decreased ω2 and ω3 (Table 10), we hypothesized that these four variables would yield a distinct cluster: a ‘paranoid style’ across species. We analyzed μ30, κ, ω2, and ω3 estimates derived from the first block of experiment one and online version 3 (pre-context change data, because rats do not experience a context shift) together with post-chronic exposure rat data (methamphetamine and saline). We identified two clusters with good cohesion and separation, meaning that subjects sorted into two groups (each containing rodents and humans) whose parameter values were close to the centroid, or mean, of their own cluster and as far as possible from the centroid of the other cluster (average silhouette coefficient = 0.7; cluster size ratio = 2.46; Figure 9a). All parameters contributed to clustering; κ contributed most strongly (Figure 9b). Importantly, the cluster solution did not separate rats from humans (despite the differences in task structure, incentives, manipulanda, and phylogeny). Relative to the overall distribution, Cluster 1 was characterized by high κ and μ30, and decreased ω2 and ω3. Cluster 1 membership was significantly associated with high paranoia and methamphetamine exposure, χ2(1, n = 121)=29.447, p=5.75E-8, Cramer’s V = 0.493 (Figure 9c). Notably, no participants in the low paranoia group with paranoia scores above zero were ascribed Cluster 1 membership. The cluster solution was robust to validation by split-half analysis (removing half of the participants and repeating the clustering), removal of the rat subjects, and removal of human participants.
In each case, we identified two clusters with good cohesion and separation (Split-half 1, n = 19 cluster 1, 42 cluster 2: silhouette coefficient = 0.6; Split-half 2, n = 17 cluster 1, 43 cluster 2: silhouette coefficient = 0.7; No Rat, n = 26 cluster 1, 78 cluster 2: silhouette coefficient = 0.7; Rat Only, n = 6 cluster 1, 11 cluster 2: silhouette coefficient = 0.7). In summary, paranoid participants and methamphetamine-exposed rats cluster together (high μ30, high κ, low ω2, and low ω3), suggesting that these parameters share an underlying generative process and that paranoia and methamphetamine have similar effects on reversal-learning.
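The cluster-by-group association statistic is a standard Pearson chi-square with Cramér's V. A minimal sketch of the computation for a 2 × 2 table follows; the cell counts in the test are invented for illustration and are not the study's data:

```python
def chi_square_2x2(table):
    """Pearson chi-square and Cramer's V for a 2x2 contingency table
    (e.g., rows: cluster membership; columns: group). Illustrative
    helper; the paper's exact cell counts are not reproduced here."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, cells in enumerate([(a, b), (c, d)]):
        for j, obs in enumerate(cells):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    # For a 2x2 table (df = 1), Cramer's V reduces to sqrt(chi2 / n)
    v = (chi2 / n) ** 0.5
    return chi2, v
```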

Table 10
Summary of paranoia/methamphetamine effects on belief-updating.
| Parameter | In lab | Online | Rats |
| --- | --- | --- | --- |
| ω3 | | | |
| µ30‡§ | | | |
| κ | | | |
| ω2‡¶ | | | |
| µ02 | - | - | - |

⇡ ⇣ Non-significant increase/decrease in high paranoia or meth, relative to low paranoia or saline. ↑ ↓ Trend-level increase/decrease in high paranoia or meth, relative to low paranoia or saline. ⬆ ⬇ Significantly higher/lower in high paranoia or meth, relative to low paranoia or saline. - No significant findings or trends. † Baseline trend; parameter decreases in second block for low but not high paranoia. ‡ Version 3 only. § Trend-level significance disappears with inclusion of demographic covariates. ¶ Significance reduced to trend with inclusion of demographic covariates.

Cluster analysis of HGF parameters.

Two-step cluster analysis of model parameters (ω3, μ30, κ, ω2) across rat and human data sets (rat, post-Rx; in laboratory and online version 3, block 1). Automated clustering yielded an optimal two clusters with good cohesion and separation (average silhouette coefficient = 0.7; cluster size ratio = 2.46). (a) Density plots for μ30, κ, ω2, and ω3 (light pink) depict cluster-specific distributions for each parameter (red). Unlike frequency histograms (which depict the number of data points in bins), density plots employ smoothing to prioritize distribution shape and are not restricted by bin size. Beneath each density plot, box plots of overall median, 25th quartile, and 75th quartile for each parameter are aligned (pink), with cluster medians and quartiles superimposed (red). Relative to the overall distribution, Cluster 1 (n = 35) medians are elevated for μ30 and κ, decreased for ω2 and ω3. Cluster 2 (n = 86) falls within each overall distribution. (b) Predictor importance of included parameters. Consistent with the color scheme in Figure 2a, uncertainty weighting parameters (κ, ω2, ω3) are depicted in purple and the prior μ30 in blue. (c) Distribution of cluster identities within groups. Black bars signify the proportion of group members assigned to Cluster 1 and gray bars represent the proportion of group members assigned to Cluster 2. Cluster 1 membership is significantly associated with paranoia and methamphetamine groups (χ2(1, n = 121)=29.447, p=5.75E-8). Columns display means [standard error] or percentage of participants within the described category, test statistics, and p-values. Independent samples t-test: t-value (df). Two-tailed p-values reported. Chi-square coefficient (df). §Fisher’s exact test, exact significance (2-sided). Equal variances not assumed. #Not significant (Bonferroni correction). ††Data presented in Figure 8; repeated measures ANOVA, paranoia group trend or effect: F(df), p; estimated marginal means and standard error.
‡‡Data presented in Figure 2; repeated measures ANOVA, F(df), p. In laboratory: paranoia x block interactions for ω3, μ30; paranoia group effects for κ, ω2. Version 3: paranoia group effects reported. See Table 3 for complete ANOVA results. Version columns display means [standard error] or percentage of participants within the described category. ††Univariate analysis, F(df). Exact test, chi-square coefficient (df). §Exact significance (2-sided). ||Monte Carlo significance (2-sided). ‡‡Data presented in Figure 3; repeated measures ANOVA, F(df), p. Mean values collapsed across blocks.

Discussion

During non-social probabilistic reversal-learning, paranoid individuals and rats chronically exposed to methamphetamine have higher initial expectations of task volatility (μ30). In other words, they start the task anticipating more changes in stimulus-outcome associations, and they switch choices readily and excessively in anticipation of reversal events. By relying more on their expectations of volatility than on actual experience (exemplified by switching even after positive feedback), they are slower to learn about changes in task volatility. This manifests as decreased meta-volatility learning (ω3) and failure to significantly adjust μ30 after contingency transitions. More paranoid individuals are similarly slower to adjust expected deck values (lower ω2) but faster to attribute volatility to reversal events (elevated κ), perceiving change (unexpected uncertainty) instead of normal statistical variation (expected uncertainty). They sit at Hofstadter’s ‘turning point’, constantly expecting change but never learning appropriately from it.

In the reversal learning literature, choice switching after positive feedback has garnered less attention than perseverative behavior and sensitivity to negative feedback (Izquierdo et al., 2017; Waltz, 2017). Individuals with depression and schizophrenia seemingly perseverate less than healthy controls, but this has formerly been attributed to increased sensitivity to negative feedback (Waltz, 2017; Robinson et al., 2012). However, elevated win-switch tendencies have been reported in youths with bipolar disorder, major depressive disorder, and anxiety disorder (Dickstein et al., 2010). A prior study in people with schizophrenia described excessive win-switch behavior that correlated with the severity of delusional beliefs and hallucinations (Waltz, 2017). Likewise, an elevated prior on environmental volatility (μ30) and higher sensitivity to this volatility (κ) have been observed in HGF analyses of 2-choice probabilistic reversal-learning in medicated and unmedicated patients with schizophrenia (Deserno, 2018). These authors did not explore paranoia specifically.

We assessed paranoia across the continuum of health and mental illness, provided three choice options, and explicitly manipulated unexpected volatility across task versions. The version that shifted from an easier to a more difficult contingency context (version 3) was associated with paranoia group effects on μ30, κ, and ω2, and a meta-analytic effect on ω3. Furthermore, this contingency transition – an exposure to truly unexpected volatility – rendered low paranoia controls more similar to their paranoid counterparts by decreasing their meta-volatility learning (ω3). Paranoid participants responded to contingency transitions in versions 3 and 4 by switching stochastically. These findings suggest a continuum of behavioral responses to volatility, moving from optimal learning to diminished feedback sensitivity (i.e., decreased ω3 in low paranoia participants) and from diminished feedback sensitivity (lower ω3 and increased win-switching in high paranoia participants) toward complete dissociation from experienced feedback (stochastic switching).

Unexpected uncertainty, the perception of change in the probabilities of the environment (particularly ‘unsignaled context switches’ [Yu and Dayan, 2005], which increase unexpected volatility), is thought to promote abandonment of old associations and new learning. However, our results suggest that this response might vary according to a hierarchy of belief. Paranoid participants were quick to abandon ‘best deck’ associations and explore alternative options (i.e., x2 beliefs), but in turn they relied more on their higher-level beliefs about the task volatility (x3 beliefs) and less on sensory feedback (lower meta-volatility learning). Our analysis of covariates warrants specific focus on κ, the sensitivity to unexpected volatility. Other parameter-paranoia associations did not endure after controlling for demographic factors (age, gender, ethnicity, and race), although we see their derangement in our rodent study as well as in the significant meta-analytic effects across our experiments. Furthermore, these demographic factors are themselves strong predictors of paranoia (Holt and Albert, 2006; Iacovino et al., 2014; Mahoney et al., 2010). It is notable, too, that κ was the most powerful discriminator of the two clusters of human and animal participants. We conclude that elevated κ (belief updating tethered to unexpected volatility) is the parameter change most robustly associated with paranoia. Doubling κ in our simulations induced significantly more win-switching.

Multiple neurobiological manipulations may induce such win-switching behavior. Lesions of the mediodorsal thalamus in non-human primates (Chakraborty et al., 2016) or neurons projecting from the amygdala to orbitofrontal cortex in rats (Groman et al., 2019) engender win-switching. Unexpected uncertainty, and the κ parameter of the HGF in particular (Marshall et al., 2016), are thought to be signaled via the locus coeruleus and noradrenaline (Yu and Dayan, 2005; Payzan-LeNestour and Bossaerts, 2011; Payzan-LeNestour et al., 2013; Tervo et al., 2014). This mechanism is thought to modulate switching versus staying behaviors (Kane et al., 2017; Aston-Jones and Cohen, 2005; Aston-Jones et al., 1999; Eldar et al., 2013), as well as responses to stress (Borodovitsyna et al., 2018; McCall et al., 2015; Atzori et al., 2016) and subliminal fear cues (Liddell et al., 2005) to coordinate fight-or-flight responses (Atzori et al., 2016). The dual role of the locus coeruleus in recognizing and responding to threats as well as unexpected uncertainty suggests that dysfunction could produce both paranoia and the inferential abnormalities we observed. Methamphetamine may induce similar dysfunction (Ferrucci et al., 2019; Ferrucci et al., 2013; Ferrucci et al., 2008). Acute moderate doses increase pre-synaptic catecholamine release, particularly noradrenaline (Rothman et al., 2001), and induce exploratory locomotive effects modulated through adrenoceptors on dopamine neurons (Ferrucci et al., 2013).

Excessive release of noradrenaline from the locus coeruleus into the anterior cingulate cortex drives feedback insensitivity and stochastic switching behavior in rats completing a three-option counter prediction task (Tervo et al., 2014). Evolutionarily, departure from predictable, rational actions might offer an adaptive mechanism for escape from intractable threat. As a protean defense mechanism, behavioral stochasticity impedes predators’ abilities to create accurate, actionable countermeasures (Humphries and Driver, 1970; Richardson et al., 2018; Humphries and Driver, 1967). If driven by excessive unexpected uncertainty, underwritten by noradrenaline, protean defense may represent a heavily conserved, continuous common mechanism underlying vigilance and false alarms (Aston-Jones et al., 1994; Rajkowski et al., 1994; Usher et al., 1999), arousal-linked attentional biases (Eldar et al., 2013) and selective processing of social threats. However, protean behaviors are not necessarily adaptive. Pathological insensitivity to feedback and reliance on internal beliefs over evidence constitute a ‘break from reality’ – in other words, psychosis.

Efference copy models of motor control (Wolpert and Ghahramani, 2000) have been invoked to explain psychotic symptoms (Blakemore et al., 2000; Blakemore et al., 1998; Blakemore et al., 1999; Blakemore et al., 2002; Frith et al., 2000a; Frith et al., 2000b; Shergill et al., 2005; Shergill et al., 2014). Aberrant mismatches between expected and experienced sensory consequences of actions, weighted by their uncertainty (Wolpert and Ghahramani, 2000), can lead to the misattribution of one’s movements to an external agent (Blakemore et al., 2000; Blakemore et al., 1998; Blakemore et al., 1999; Blakemore et al., 2002; Frith et al., 2000a; Frith et al., 2000b; Shergill et al., 2005; Shergill et al., 2014). Since we model others’ intentions with reference to our model of ourselves (Friston and Frith, 2015), volatile experiences of one’s body and actions will lead to uncertain and ultimately more threatening inferences about others (Friston and Frith, 2015). This would be entirely consistent with the present observations.

When confronted with intractable unexpected uncertainty, our participants rely on higher-level beliefs about the task environment. When humans experience non-social volatility (for example, through threats to their sense of control [Whitson and Galinsky, 2008] or exposure to surprising non-social stimuli [Proulx et al., 2012; Heine et al., 2006]), they appeal to the influence of powerful enemies, even when those enemies’ influence is not obviously linked to the volatility (Sullivan et al., 2010). Our account places the locus of paranoia at the level of the individual. Here, our account departs from evolutionary accounts of paranoia grounded in coalitional threat (Raihani and Bell, 2019); persecutors are not scapegoats that increase group cohesion. Rather, when paranoid, we have a ready explanation for hazards. With a well-defined persecutor in mind, a volatile world may be perceived to have less randomly distributed risk (Sullivan et al., 2010). However, paranoia might become a self-fulfilling prophecy, engendering more volatility and negative social interactions. This aspect may be captured in our task through win-switch behavior. By failing to incorporate positive feedback from the best option, paranoid individuals sample sub-optimal options, which deliver misleading positive feedback.

There are some important limitations to our conclusions. Compared with humans, rats are relatively asocial. But they are not completely asocial. In our experiment they were housed in pairs, and, more broadly, they evince social affiliative interactions with other rats (Donaldson et al., 2018; Kondrakiewicz et al., 2019; Urbach et al., 2010). A further limitation centers on the comparability of our experimental designs. In humans our comparisons were both within (contingency transition) and between groups (low versus high paranoia). In rats, the model was also mixed with some between (saline versus methamphetamine) and some within-subject (pre versus post chronic treatment) comparisons. We should be clear that there was no contingency context transition in the rat study. However, just as that transition made low paranoia humans behave like high paranoia, chronic methamphetamine exposure made rats behave on a stable contingency much like high paranoia humans - even in the absence of contingency transition. The comparable results across species, despite these differences, warrant the inference that our basic, relatively asocial, approach provides a robust tool for computational dissection of learning mechanisms.

Social interactions play a rich and undeniable role in paranoia, but translational, domain-general approaches may ultimately facilitate biological insights into paranoia, psychosis and delusions (Corlett and Fletcher, 2014; Feeney et al., 2017). Whilst we contend that our task is relatively free of social features (certainly compared to others [Raihani and Bell, 2017]), the possibility remains that the elevated U-values in our participants are reflective of attempts (and perhaps failures) to predict our intentions as experimenters. Indeed, this is a possibility raised previously with regards to simple conditioned behaviors in experimental animals. Even during Pavlovian conditioning, animals may attempt to infer a generative model of the task environment, which might, ultimately, include the experimenter arranging the contingencies (Gershman and Niv, 2012; Gershman and Niv, 2010). It is possible that all instances of human cognitive testing involve an element of inference by the participant with regards to the intentions of the experimenter, whether or not the task at hand is explicitly social, and indeed, all cognitive functions may be aimed at or modulated by such inferences (Turner et al., 1994).

In summary, a strong belief in the volatility of the world necessitates hypervigilance and a facility with change. However, in paranoia, that belief in the volatility of the world is itself resistant to change, making it difficult to reassure, teach, or change the minds of people who are paranoid. They remain ‘on guard,’ adhering to expectations over evidence. By using a non-social task, we have shown that this paranoid style is not restricted to the social domain, and that it can be modeled in relatively asocial animals. Additionally, our domain-general approach reaffirms the merit of establishing expectations of a stable, predictable environment to promote recovery from paranoia-associated illness (Powers et al., 2018). We note with interest the apparent relationship between conspiratorial ideation and societal crisis situations (terrorist attacks, plane crashes, natural disasters or war) throughout history, with peaks around the great fire of Rome (AD 64), the industrial revolution, the beginning of the cold war, 9/11, and contemporary financial crises (van Prooijen and Douglas, 2017). In today’s world of escalating uncertainty and volatility – particularly environmental climate change and viral pandemics – our findings suggest that the paranoid style of inference may prove particularly maladaptive for coordinating collaborative solutions.

Materials and methods

Experiments were conducted at Yale University and the Connecticut Mental Health Center (New Haven, CT) in strict accordance with Yale University’s Human Investigation Committee and Institutional Animal Care and Use Committee. Informed consent was provided by all research participants.

Experiment 1

Request a detailed protocol

English-speaking participants aged 18 to 65 (n = 34) were recruited from the greater New Haven area through public fliers and mental health provider referrals. Exclusion criteria included history of cognitive or neurologic disorder (e.g., dementia), intellectual impairment, or epilepsy; current substance dependence or intoxication; cognition-impairing medications or doses (e.g. opiates, high dose benzodiazepines); history of special education; and color blindness. Participants were classified as healthy controls (n = 18), schizophrenia spectrum patients (schizophrenia or schizoaffective disorder; n = 8), and mood disorder patients (depression, bipolar disorder, generalized anxiety disorder, post-traumatic stress disorder; n = 8) on the basis of clinician referrals and/or self-report. Participants were compensated $10 for enrolment with an additional $10 upon completion. Two healthy controls were excluded from analyses due to failure to complete the questionnaires and suspected substance use, respectively.

Experiment 2

Request a detailed protocol

332 participants were recruited online via Amazon Mechanical Turk (MTurk). The study advertisement was accessible to MTurk workers with a 90% or higher HIT approval rate located within the United States. To discourage bot submissions and verify human participation, we required participants to answer open-ended free response questions; submit unique, separate completion codes for the behavioral task and questionnaires; and enter MTurk IDs into specific boxes within the questionnaires. All submissions were reviewed for completion code accuracy, completeness of responses (i.e., declining no more than 30% of questionnaire items), quality of free response items (e.g., length, appropriate grammar and content), and use of virtual private servers (VPS) to submit multiple responses and/or conceal non-US locations (Dennis et al., 2018). Upon approval, workers were compensated $6. Those who scored in the top 25% on the card game (reversal-learning task) earned a $2 bonus. We rejected or excluded 19 submissions that geolocation services (https://www.iplocation.net/) identified as originating outside of the United States or from suspected server farms, four submissions for failure to manually enter MTurk ID codes, and two submissions for insufficient questionnaire completion. Submissions with grossly incorrect completion codes were rejected without further review.

Experiment 3

Request a detailed protocol

Subject information, behavioral data acquisition, and behavioral analyses were described previously (Groman et al., 2018). Long Evans rats (Charles River; n = 20) ranged from 7 to 9 weeks of age. Rats were exposed to escalating doses and frequency of saline (n = 10) or methamphetamine (n = 10, three withdrawn during dosing), imitating patterns of human methamphetamine users (Segal et al., 2003; Han et al., 2011). Prior to dosing (Pre-Rx), rats completed 26 within-session reversal sessions, including up to eight reversals per session. Post-dosing (Post-Rx), rats completed one test session per week for four weeks. Computational model parameters were estimated from each session and averaged across treatment conditions to yield one Pre-Rx and Post-Rx set of parameters per rat.

Behavioral task

Request a detailed protocol

Participants completed a 3-option probabilistic reversal-learning paradigm. Three decks of cards were displayed on a computer monitor for 160 trials. Participants selected a deck on each trial by pressing the predesignated key. We advised participants that each deck contained winning and losing cards (+100 and −50 points), but in different amounts. We also stated that the best deck may change. Participants were instructed to find the best deck and earn as many points as possible. Probabilities switched between decks when the highest probability deck was selected in 9 out of 10 consecutive trials (performance-dependent reversal). Every 40 trials the participant was provided a break, following which probabilities automatically reassigned (performance-independent reversal).
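The performance-dependent reversal rule can be sketched as a scheduler that watches the last ten choices. The random choice policy and the shuffle-based reassignment of probabilities are our assumptions for illustration (the paper says only that probabilities switched between decks), and the break-triggered reversal every 40 trials is omitted:

```python
import random
from collections import deque

def run_task(n_trials=160, probs=(0.9, 0.5, 0.1), seed=0):
    """Sketch of the performance-dependent reversal rule: reward
    probabilities are reassigned among the decks once the highest
    probability deck has been chosen on 9 of the last 10 trials.
    The choice policy here is random, a stand-in for a participant;
    the performance-independent reversal at each break is omitted."""
    rng = random.Random(seed)
    probs = list(probs)
    recent = deque(maxlen=10)   # sliding window of "chose best deck?"
    reversals = 0
    for _ in range(n_trials):
        choice = rng.randrange(len(probs))        # placeholder policy
        best = probs.index(max(probs))
        recent.append(choice == best)
        if len(recent) == 10 and sum(recent) >= 9:
            rng.shuffle(probs)                    # reassign contingencies
            recent.clear()
            reversals += 1
    return reversals
```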

In Experiment 1, the task was presented via Eprime 2.0 software (Psychology Software Tools, Sharpsburg, PA). Participants were limited to a 3 s response window, after which the trial would time out and record a null response. A fixation cross appeared during variable inter-trial intervals (jittering). Task pacing remained independent of response time. In block 1 (trials 1–80) the reward probabilities (contingency) of the three decks were 90%, 50%, and 10% (90-50–10%). Without cue or warning (i.e. unsignaled to the participants) the contingency transitioned to 80%, 40%, and 20% (80-40–20%) upon initiation of block 2 (trials 81–160).

In Experiment 2, the task was administered via web browser link from the MTurk marketplace. We changed the task timing to self-paced and eliminated null trials and inter-trial jittering. A progress tracker was provided every 40 trials. Workers were randomly assigned to one of four task versions, using restricted block randomization to ensure comparable numbers of high paranoia participants across task versions. Version 1 had a constant contingency of 90-50-10%. Version 2 maintained a constant contingency of 80-40-20%. Version 3 replicated the 90-50-10% (block 1) to 80-40-20% (block 2) context transition of Experiment 1. Version 4 presented the reversed contingency transition, 80-40-20% (block 1) to 90-50-10% (block 2). We analyzed attrition rates across the four versions.
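Restricted block randomization can be sketched as follows. This generic version balances version counts in shuffled blocks of four; the authors additionally balanced high paranoia counts across versions, a stratification detail omitted here:

```python
import random

def block_randomize(n_participants, conditions=("v1", "v2", "v3", "v4"), seed=0):
    """Restricted block randomization: each consecutive block of
    len(conditions) participants contains every condition exactly once,
    in shuffled order. Generic sketch, not the authors' assignment code."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = list(conditions)
        rng.shuffle(block)          # randomize order within the block
        assignments.extend(block)
    return assignments[:n_participants]
```

The restriction guarantees that group sizes can never drift apart by more than one block's worth of participants, unlike simple randomization.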

Questionnaires

Request a detailed protocol

Following task completion, questionnaires were administered via the Qualtrics survey platform (Qualtrics Labs, Inc, Provo, UT). Items included demographic information (age, gender, educational attainment, ethnicity, and race) and mental health questions (past or present diagnosis, medication use, Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II) (Ryder et al., 2007), Beck’s Anxiety Inventory (BAI) (Beck et al., 1988), and Beck’s Depression Inventory (BDI) (Beck et al., 1961)). We removed the single suicidality question from the BDI for Experiment 2. Experiment 2 included additional items: income, three cognitive reflection questions (Table 7), and three free response items (‘What do you think the card game was testing?’, ‘Did you use any particular strategy or strategies? If yes, please describe’, and ‘Did you find yourself switching strategies over the course of the game?’). We quantified trait-level paranoia using the paranoid personality subscale of the SCID-II, and we included an ideas-of-reference item from the schizotypy subscale (‘When you are out in public and see people talking, do you often feel that they are talking about you?’). This item, along with other SCID-II items, has previously been included as a metric of paranoia in the general population (Bebbington et al., 2013; Bell and O'Driscoll, 2018). Participants who endorsed four or more paranoid personality items (i.e., the cut-off for the top third identified in Experiment 1) were classified as ‘high paranoia.’ Each participant’s SCID-II, BAI, and BDI scores were normalized by total scale items answered. Response rates were higher than 90% for all questionnaire items and scales (Table 11).
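The normalization and cut-off logic can be sketched as a small scoring helper. This is illustrative only; `score_scale` and its argument convention are ours, not the study's code:

```python
def score_scale(responses, cutoff=4):
    """Score a questionnaire subscale in which items may be declined.
    `responses` holds 1 (endorsed), 0 (not endorsed), or None (declined).
    Returns the item-normalized score and a high/low classification based
    on the raw count of endorsed items (cutoff of four endorsed paranoia
    items, as in the text). Hypothetical helper for illustration."""
    answered = [r for r in responses if r is not None]
    endorsed = sum(answered)
    normalized = endorsed / len(answered) if answered else 0.0
    return normalized, ("high" if endorsed >= cutoff else "low")
```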

Table 11
Questionnaire item completion (% responses).
| Questionnaire/subscale | Experiment 1 | Experiment 2 |
| --- | --- | --- |
| Age | 90.6% | 99.7% |
| Gender | 100.0% | 100.0% |
| Ethnicity | 100.0% | 100.0% |
| Race | 100.0% | 100.0% |
| Education | 100.0% | 99.7% |
| Meds | 100.0% | 90.6% |
| Dx | 100.0% | 94.1% |
| Income | N/A | 98.0% |
| SCID-II Paranoia - all items | 96.9% | 94.1% |
| SCID-II Paranoia - one item missing | 3.1% | 5.5% |
| SCID-II Paranoia - three items missing | 0.0% | 0.3% |
| Cognitive reflection - all items | N/A | 97.7% |
| Beck's Anxiety Inventory (BAI) - all items | 90.6% | 96.7% |
| BAI - one item missing | 3.1% | 2.9% |
| BAI - two items missing | 6.3% | 0.3% |
| Beck's Depression Inventory (BDI) - all items | 100.0% | 99.0% |
| BDI - one item missing | 0.0% | 1.0% |

Behavioral analysis

Request a detailed protocol

We analyzed tendencies to choose alternative decks after positive feedback (win-switch) and select the same deck after negative feedback (lose-stay). Win-switch rates were calculated as the number of trials in which the participant switched after positive feedback divided by the number of trials in which they received positive feedback. Lose-stay rates were calculated as number of trials in which a participant persisted after negative feedback divided by total negative feedback trials. In Experiment 1, we excluded post-null trials from these analyses. To further characterize switching behavior, we calculated U-values, a measure of choice stochasticity:

(1) U-value = −[Σ_{i=1}^{β} α_i · log(α_i)] / log(β)

where β is the number of possible choice options (i.e., card decks or noseports) and α_i is the relative frequency of choice option i (Kong et al., 2017). To avoid any choice counterbalancing effects across reversals, choice frequencies were determined by the underlying probabilities of the decks rather than their physical attributes (e.g., deck position or color). Additional behavioral analyses included trials to first reversal, trials to post-reversal recovery, and trials to post-reversal switch. The latter two were restricted to the first reversal in the first block. Post-reversal trials were counted from the first negative-feedback trial following the true reversal event. Recovery was defined as switching to the best deck and staying for at least one additional trial.
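These metrics can be expressed compactly in code. The following is an illustrative Python sketch (not the analysis code used in the study), with choices coded 1–3 and outcomes coded 1 (win) or 0 (loss), as in the modeling section:

```python
import math

def win_switch_rate(choices, outcomes):
    """Fraction of positive-feedback trials followed by a switch to another deck."""
    switches, wins = 0, 0
    for t in range(len(choices) - 1):
        if outcomes[t] == 1:                 # positive feedback on trial t
            wins += 1
            if choices[t + 1] != choices[t]:
                switches += 1
    return switches / wins if wins else float("nan")

def lose_stay_rate(choices, outcomes):
    """Fraction of negative-feedback trials followed by a repeat of the same deck."""
    stays, losses = 0, 0
    for t in range(len(choices) - 1):
        if outcomes[t] == 0:                 # negative feedback on trial t
            losses += 1
            if choices[t + 1] == choices[t]:
                stays += 1
    return stays / losses if losses else float("nan")

def u_value(choices, n_options=3):
    """Choice stochasticity: entropy of choice frequencies normalized by the
    maximum possible entropy given the number of options."""
    freqs = [choices.count(k) / len(choices) for k in set(choices)]
    return -sum(a * math.log(a) for a in freqs if a > 0) / math.log(n_options)
```

A U-value of 1 indicates maximally stochastic choice among the available options; a U-value of 0 indicates complete stereotypy.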

Computational modeling

Materials

The Hierarchical Gaussian Filter (HGF) toolbox v5.3.1 is freely available for download in the TAPAS package at https://translationalneuromodeling.github.io/tapas (Mathys et al., 2011; Mathys et al., 2014). We installed and ran the package in MATLAB and Statistics Toolbox Release 2016a (MathWorks, Natick, MA).

Perceptual parameter estimation


In the human reversal-learning experiments, we estimated perceptual parameters individually for the first and second halves of the task (i.e., blocks 1 and 2). Each participant’s choices (i.e., deck 1, 2, or 3) and outcomes (win or loss) were entered as separate column vectors with rows corresponding to trials. Wins were encoded as ‘1’, losses as ‘0’, and choices as ‘1’, ‘2’, or ‘3’. We selected the autoregressive 3-level HGF multi-arm bandit configuration for our perceptual model and paired it with the softmax-mu03 decision model.

Rat reversal-learning data were entered similarly, with choices designated as ‘1’, ‘2’, or ‘3’ and reward presence or absence encoded as ‘1’ or ‘0’, respectively. Perceptual parameters were estimated as a single block per session and averaged across Pre-Rx or Post-Rx sessions for each subject. Because the contingency remained 70–30–10%, we used the default start-point values of µ2 and µ3, as in the block-one estimations for the human reversal-learning experiments.
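To give intuition for the decision model, the softmax rule maps the perceptual model's predicted values onto choice probabilities. In the µ3-coupled variant, the decision temperature is tied to the current volatility belief, so that higher expected volatility yields noisier choices. A minimal Python sketch follows; the coupling beta = exp(−µ3) is illustrative, and the TAPAS configuration files should be consulted for the exact parameterization:

```python
import math

def softmax_choice_probs(predicted_values, mu3):
    """Map predicted deck values to choice probabilities, with decision noise
    coupled to the current volatility belief mu3 (illustrative coupling:
    higher mu3 -> lower inverse temperature -> flatter choice probabilities)."""
    beta = math.exp(-mu3)                    # assumed inverse-temperature coupling
    logits = [beta * v for v in predicted_values]
    m = max(logits)                          # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

Under this coupling, a participant who believes the environment is highly volatile (large µ3) chooses more stochastically across the three decks, which is one route from volatility beliefs to the elevated win-switch rates reported here.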

Simulations


We performed ten simulations per participant (online version 3) to determine whether our parameter estimates and model successfully captured behavioral differences between groups (e.g., win-switch rates). Each simulation required the participant’s actual data (i.e., the column vectors ‘outcomes’ and ‘choices’) and the corresponding set of derived perceptual parameters. On each trial, a new choice was simulated conditional on the actual inputs in previous trials.

To illustrate the effects of each parameter on task behavior, we established a baseline set of perceptual parameters containing the average values from the low paranoia participants (online version 3) and then doubled or halved one parameter at a time. We then ran 10 simulations per subject for each of the following conditions: baseline, 2κ, 0.5κ, 2µ30, 0.5µ30, 2ω3, 0.5ω3, 2ω2, 0.5ω2, and the average perceptual parameters (κ, µ30, ω3, and ω2) from Post-Rx methamphetamine rats. The 2ω2 condition yielded parameters in a region where model assumptions were violated (negative posterior precision error message) and was excluded from further analysis. Win-switch and lose-stay rates were calculated from each simulation as follows and then averaged for each condition:

Win-switch rate = (number of trials in which choice switched after positive feedback) / (total positive-feedback trials)

Lose-stay rate = (number of trials in which choice repeated after negative feedback) / (total negative-feedback trials)

For each participant, we divided rates derived from each condition by the baseline rates to determine relative win-switch and lose-stay rates. We compared each relative rate to the baseline condition (i.e., 1.0) with paired-samples t-tests using Bonferroni-corrected p-values.

Parameter recovery


We performed perceptual parameter estimation (see above) on 10 simulations per subject using first-block data from online version 3. These simulations were generated from each subject’s corresponding perceptual parameters. We averaged recovered parameters across simulations and compared them between low and high paranoia groups (Figure 7).

Alternative models


We employed a Q-learning model with separate parameter weights for positive and negative prediction errors to determine whether differential weighting might contribute to paranoia group effects. This model has been described previously (Lefebvre et al., 2018). We also evaluated whether a simpler two-level HGF model might suffice to capture paranoia group differences. To sever the third level from the model, we fixed the log-κ parameter at negative infinity (additionally setting its prior variance to zero) and similarly fixed the values of µ3, ω3, ω2, and Φ3 at those previously assigned in the configuration file. Parameter estimation was performed as described above, with a softmax decision model.
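The asymmetric Q-learning comparison model can be sketched as follows: an illustrative Python rendering of a Rescorla–Wagner update with separate learning rates for positive and negative prediction errors, plus a softmax choice rule. Parameter names are ours, not those of the fitted implementation:

```python
import math

def asymmetric_q_update(q, choice, reward, alpha_pos, alpha_neg):
    """Update the value of the chosen deck in place, weighting positive and
    negative prediction errors with separate learning rates."""
    delta = reward - q[choice]                   # reward prediction error
    alpha = alpha_pos if delta > 0 else alpha_neg
    q[choice] += alpha * delta
    return q

def softmax(q, beta):
    """Map deck values to choice probabilities with inverse temperature beta."""
    m = max(q)                                   # subtract max for stability
    exps = [math.exp(beta * (v - m)) for v in q]
    z = sum(exps)
    return [e / z for e in exps]
```

With alpha_pos > alpha_neg, wins move deck values more than losses of equal surprise, the kind of differential weighting this control analysis was designed to detect.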

Statistics

Unless otherwise specified, statistical analyses and effect size calculations were performed in IBM SPSS Statistics, Version 25 (IBM Corp., Armonk, NY), with an alpha of 0.05. Box plots were created with the web tool BoxPlotR (Spitzer et al., 2014). Model parameters were corrected for multiple comparisons using the Benjamini–Hochberg (false discovery rate) method. Bonferroni-corrected results were largely consistent (Table 4).

To compare questionnaire item means between two groups (Table 1, low versus high paranoia), we conducted independent-samples t-tests. To compare questionnaire item means across paranoia groups and task versions (Table 2), we employed univariate analyses. Associations between characteristic frequencies and subject group or task version were evaluated with chi-square exact tests (two groups) or Monte Carlo tests (more than two groups). Pearson correlations established the associations between paranoia and BDI scores, BAI scores, win-switch rates, and κ. We selected two-tailed p-values where applicable and assumed normality. Multiple regressions were conducted with κ estimates from the first task block as the dependent variable and paranoia, BAI, and BDI scores from online version 3 as predictors.

To compare HGF parameter estimates and behavioral patterns (win-switch, U-value, lose-stay) across block, paranoia group (Experiment 1, Experiment 2 version 3), and/or task version (Experiment 2), we employed repeated-measures and split-plot ANOVAs (block as the within-subject factor; paranoia group and task version as between-subjects factors). We similarly evaluated Experiment 3 parameter estimates for treatment-by-time interactions. For Experiment 2, we performed ANCOVAs for μ30, κ, ω2, and ω3 to evaluate three sets of covariates: (1) demographics (age, gender, ethnicity, and race); (2) mental health factors (medication usage, diagnostic category, BAI score, and BDI score); and (3) metrics and correlates of global cognitive function (educational attainment, income, and cognitive reflection). Unless otherwise stated, post-hoc tests were conducted as least significant difference (LSD)-corrected estimated marginal means.

Meta-analyses were conducted using random effects models with the R Metafor package (Viechtbauer, 2010). Mean differences were assessed for low versus high paranoia groups in the in-laboratory experiment and online version 3. Standardized mean differences (methamphetamine or high paranoia versus saline or low paranoia) were employed to account for the differences in task design between animal and human studies.
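The standardized mean difference used in the cross-species meta-analysis is the difference in group means scaled by the pooled standard deviation (Cohen's d). A minimal Python sketch for intuition; metafor additionally applies small-sample corrections (Hedges' g), omitted here:

```python
import math
from statistics import mean, variance

def standardized_mean_difference(group1, group2):
    """Cohen's d: difference in group means divided by the pooled standard
    deviation, making effects comparable across differently scaled measures
    (e.g., rat and human task parameters)."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * variance(group1) +
                  (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    return (mean(group1) - mean(group2)) / math.sqrt(pooled_var)
```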

We selected a two-step clustering approach to automatically determine the optimal cluster count and cluster group assignment. Clustering variables included paranoia-relevant parameter estimates (μ30, κ, ω2, and ω3) from Experiment 1 (block 1), online version 3 (block 1), and rats (Post-Rx), entered as continuous variables with a log-likelihood distance measure, a maximum cluster count of 15, and Schwarz’s Bayesian Criterion (BIC) as the clustering criterion. We validated our clustering solution by sorting the data into two halves and running separate cluster analyses. We also compared cluster solutions derived exclusively from rat data versus human data. A chi-square test determined the significance of the association between cluster membership and group (methamphetamine/high paranoia versus saline/low paranoia).

Data availability


Data are available on ModelDB (McDougal et al., 2017; http://modeldb.yale.edu/258631) with accession code p2c8q74m.

References

    Frith CD, Blakemore SJ, Wolpert DM (2000b) Abnormalities in the awareness and control of action. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 355:1771–1788. https://doi.org/10.1098/rstb.2000.0734
    Heyes C, Pearce JM (2015) Not-so-social learning strategies. Proceedings of the Royal Society B: Biological Sciences 282:20141709. https://doi.org/10.1098/rspb.2014.1709
    Hofstadter R (1964) The Paranoid Style in American Politics. Harper's Magazine.

Decision letter

  1. Geoffrey Schoenbaum
    Reviewing Editor; National Institute on Drug Abuse, National Institutes of Health, United States
  2. Floris P de Lange
    Senior Editor; Radboud University, Netherlands
  3. Geoffrey Schoenbaum
    Reviewer; National Institute on Drug Abuse, National Institutes of Health, United States

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

In this study, the authors tested the ability of humans and rats to track probabilities of reward in a 3-option discrimination task. Paranoia/meth use was associated with worse performance on the task, reflected in fewer reversals in humans and increases in suboptimal win-switch and lose-stay responding, and these tendencies were associated with an increase in the model parameter reflecting phasic volatility and a reduction in the model parameter reflecting contextual volatility. The authors conclude that alterations in perceptions of environmental volatility – uncertainty – may play a significant causal role in paranoia.

Decision letter after peer review:

Thank you for submitting your article "A paranoid style of belief updating across species" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Geoffrey Schoenbaum as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by Floris de Lange as the Senior Editor.

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

We would like to draw your attention to changes in our policy on revisions we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, when editors judge that a submitted work as a whole belongs in eLife but that some conclusions require a modest amount of additional new data, as they do with your paper, we are asking that the manuscript be revised to either limit claims to those supported by data in hand, or to explicitly state that the relevant conclusions require additional supporting data.

Our expectation is that the authors will eventually carry out the additional experiments and report on how they affect the relevant conclusions either in a preprint on bioRxiv or medRxiv, or if appropriate, as a Research Advance in eLife, either of which would be linked to the original paper.

Summary:

In this study, the authors tested the ability of humans and rats to track probabilities of reward in a 3-option discrimination task. Performance was challenged by outright reversal of the reward probabilities of the different options (phasic volatility) as well as by a shift in the spread of probabilities across blocks (contextual volatility). The effects of different types of volatility on performance were modeled and correlated with paranoia in the humans and with effects of methamphetamine in the rats, the use of which has been associated with paranoia in humans. Paranoia/meth use was associated with worse performance on the task, reflected in fewer reversals in humans and increases in suboptimal win-switch and lose-stay responding, and these tendencies were associated with an increase in the model parameter reflecting phasic volatility and a reduction in the model parameter reflecting contextual volatility. The authors conclude that alterations in perceptions of environmental volatility – uncertainty – may play a significant causal role in paranoia. Overall the reviewers agreed that the paper addressed an important question using an exciting combination of behavior and computational models, and that the results were compelling and potentially important. The main concerns revolved around a desire for more clarity in the presentation and some effort to contrast the current results with other possible models.

Essential revisions:

Key areas of revision were threefold. Together these encompass most of the individual reviewer remarks, which are left below to be addressed rather than reproduced here.

1) Two of the reviewers found it difficult to follow some of the logic and explanations. So the most important revision is to make the specific points raised in the reviews more clear while at the same time simplifying the results to be more digestible for readers who are not computational modelers. This might include removing some experiments, showing more data initially, etc.

2) Remove the rat experiment – it does not really match the others and the paper will be much simpler without it. Of course if the authors disagree, this is their prerogative but in this case it needs to be better explained why the differences are not important. For instance, it is somewhat unclear how the rat behavior can effectively model context volatility as it does not include this in the design. It could also be included as supplemental perhaps.

3) Two reviewers questioned whether the model used is superior to simpler models in interpreting the behavior. Some comparison would be useful to show that the three level model applied is superior.

Reviewer #1:

In this study, the authors tested the ability of humans and rats to track probabilities of reward in a 3-option discrimination task. Performance was challenged by outright reversal of the reward probabilities of the different options (phasic volatility) as well as by a shift in the spread of probabilities across blocks (contextual volatility). The effects of different types of volatility on performance were modeled and correlated with paranoia in the humans and with effects of methamphetamine in the rats, the use of which has been associated with paranoia in humans. Paranoia/meth use was associated with worse performance on the task, reflected in fewer reversals in humans and increases in suboptimal win-switch and lose-stay responding, and these tendencies were associated with an increase in the model parameter reflecting phasic volatility and a reduction in the model parameter reflecting contextual volatility. The authors conclude that alterations in perceptions of environmental volatility – uncertainty – may play a significant causal role in paranoia.

Overall I really like the use of the task variants and modeling to identify links between paranoia and simple learning parameters. I did find it hard to decipher some of the Results sections and the modeling. I think the paper would benefit greatly from being written with more up front handholding for readers who are not well-versed in these concepts. This might be accomplished by laying out more clearly how the different parameters can be understood both intuitively to impact learning/paranoia as well as how they are directly related to behavior in the tasks. This might include presenting more of the behavioral data. Currently all that is presented are the model parameters. It would be more convincing I think if the actual performance was shown from the subjects and then from the model, along with the parameters.

As part of this, I also am not sure the rodent data really fits. I like its inclusion in principle, but the task does not correspond directly to the variants used in humans. Specifically it lacks the shift in context volatility. This seems crucial to me. I think perhaps it might be removed to simplify the presentation. Likewise the task variants that do not include this could be removed.

On an interpretive level, I had two further questions. The first is whether it is possible to reproduce the performance with simpler models? Or how much of an improvement is gained with the use of the more complex model? Beyond this I also wonder if the authors believe that some of the effects might be compensatory – that is, if I understand correctly, they are arguing that there is less of an impact of context volatility on behavior in the experimental subjects. If this is true, it seems to me it might lead to more surprise when there are sudden changes in reward probability when a reversal occurs….?

Reviewer #2:

The authors ran 2 experiments in human subjects (one in the lab, the other online) and re-analysed behavioural data in rats and found that: 1) in humans, paranoid scores are correlated with an impairment in volatility monitoring according to a Bayesian meta-learning framework; 2) in rats, methamphetamine administration (a pharmacological manipulation that induces paranoia in humans) impairs uncertainty monitoring in a similar way. Overall, I liked this paper; I think it represents an important contribution.

My main questions / suggestions are about the choice of model-free metrics and statistical analyses, and about the computational modeling inferences.

1) In Experiment 1, the difference between high/low paranoia is on the “number of reversals” variable. In Experiment 2 (and in the rats) the difference between high/low paranoia (placebo/methamphetamine) is captured by the “win-switch” rate. However, I could not find the “win-switch” rate measure for Experiment 1 and the “number of reversals” metric for Experiment 2. The authors should report the same behavioural metrics for all experiments.

2) Even if expected, the correlation between depression, anxiety and paranoia is a bit annoying. I am convinced that paranoia is the main determinant of the computational effects, but I think the authors could provide some additional evidence that this is the case. A possible solution could be to use a structural equation modeling. Another (possibly better) solution would be to run a PCA on the three scales (the average scores, not necessarily the individual items): my prediction is that the first component will have positive loadings on the three scales and the second will be specific to the paranoia scale. They could then correlate the PCA values instead of the scores of the scales.

3) I think that, in addition to the current model, the authors could also test a simple RL model with different learning rates for positive and negative prediction errors (see Lefebvre et al., NHB, 2017). I think the readers would be interested in knowing these results, as the learning rate asymmetry has been shown to correlate with the striatum, as methamphetamine affects dopamine, and also because there seems to be an affective component to paranoia. This analysis could be done in parallel (not in antagonism) with the main model and reported in the SI.

Reviewer #3:

This study takes on the hypothesis that paranoia is actually due to dysfunction in recognizing volatility. They address this through two human experiments (one in-person comparing individuals with and without psychiatric diagnoses; one online using Amazon Mechanical Turk) and a rat experiment in which rats are exposed to methamphetamine or saline. They justify their claims by fitting a model using a Hierarchical Gaussian Filter (HGF) and identifying changes in the underlying parameters, particularly identifying larger priors for higher volatility in the high paranoia group and in the methamphetamine-exposed rats.

The strength of this paper is that it uses a simple task to explore important topics, particularly a transdiagnostic perspective. However, we have several serious problems with the manuscript, including both the communication and the experiments and analysis itself. While we laud the authors for attempting to compare experiments across species, we do not find the rat experiment a good parallel for the human.

Major concerns:

– Overall, it was very difficult to read this paper. Even with multiple read-throughs each and multiple discussions between the reviewers (senior PI and graduate student), we are not sure that we understand the manuscript, its goals, or its conclusions. Many of the figures are not referenced in the text (Tables 5 and 9 are never referenced in the paper at all), many of the figures are unclear as to their purpose (what is being plotted in Figure 4?), and many of the figures are very poorly explained (we finally concluded that we are supposed to track colors not position in Figure 3). A careful use of supplemental figures, a better track to the storyline, and better communication overall would improve this paper dramatically.

– The rat experiment is interesting, but it is not a good comparison for the human data. The rat experiment is within-subject, comparing pre and post-manipulation, while the human data is between-subject, comparing high and low paranoia scores. Furthermore, the experiment itself is completely different. While the human experiments had three decks that changed throughout, the rats had two changing and one unchanging deck. We recommend removing the rat experiment.

Unclear concerns

– These Bayesian models (such as the HGF shown in Figure 2) are notoriously unstable. Very small changes can produce dramatically different results. How independent are these variables? Are there other models that can be compared?

– The authors seem to be trying to make the argument that the real issue with paranoia is not the social decision-making process, but rather an underlying issue with measuring volatility (and particularly meta-volatility). As such, the title of the paper should be changed. The important part of this paper (as we understand it) is not the cross-species translation (which is problematic at best), but rather the new model of paranoia as a dysfunction elsewhere than the social sphere.

– The authors need to add citations for using rats exposed to methamphetamine as a model for paranoia. While there appears to be research supporting this method, the authors do not actually cite it. To our knowledge, it is not appropriate to describe methamphetamine as a locus coeruleus disruption or as a change in the noradrenergic gain. Yet, the discussion about the rat experiment seems to be based on noradrenergic gain manipulations.

Specific concerns

– Figure 1: what is the difference between performance independent and performance dependent changes? Explain in figure caption.

– Figure 2B: Once we finally realized that the key to this figure were the colors, we liked that the authors kept the colors consistent across the rats and humans, since the rat comparisons were pre-Rx instead of having two blocks, and therefore likely indistinguishable from the low-paranoia group pre-Rx. However, it makes comparison of the figures confusing, because we expect the comparisons across the figures to mean the same thing to compare the outcomes. This figure requires much better and clearer explanations.

– Figure 3: Is there a reason that the authors expected version 3 to be significant over version 4? Why might the order of context change matter (or not matter)?

– Figure 4: This figure is confusing and at the moment does not provide additional understanding of the results. Consider relabeling and adjusting figure caption to explain what is in the figure and move the results to the Results section or display in a table (or both), or otherwise remove it in total.

– Figure 5 seems important, but shouldn't we see this for all of the important variables? We thought the argument was primarily about metavolatility rather than phasic volatility coupling.

– Figure 7 is important, but was poorly labeled and mostly impenetrable. In particular, panel a is completely unclear. There are no labels, no explanation for colors or any other components. What is the difference between simulated and "recovered" parameters?

– Figure 8 claims a replication between in-person and online, but the online results appear significant, while the in-person results do not.

– Figures 9 and 10 are impenetrable. What is this cluster analysis and how is it done?

– Figure 11 seems more appropriate to a supplemental figure.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "A paranoid style of belief updating across species" for further consideration by eLife. Your revised article has been evaluated by Floris de Lange (Senior Editor) and a Reviewing Editor.

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance. In particular, while the concerns of reviewer 1 and 2 were addressed, and the manuscript is markedly improved in terms of clarity, reviewer 3 still has some remaining requests. They are described below.

Reviewer #1:

The authors have addressed my concerns.

Reviewer #2:

The authors successfully addressed my concerns.

Reviewer #3:

Overall, the paper is much clearer in its explanation of the design, analyses, and simulations. The authors have also made a clearer argument for including the rat data. However, we believe they still need to explicitly state the limitations of the human and rat experiments. Additionally, the graphs still need to be brought up to the level of clarity of the writing (particularly Figure 7). In summary, the authors have successfully clarified many questions about the analyses and conclusions of the paper, yet additional work is needed surrounding the rat vs. human experimental comparisons.

Major concerns:

– The title needs to be changed, as it is misleading regarding the findings. What seems to be the main argument is that paranoia-like behaviors are evident in belief-updating outside of a social lens, so perhaps something clearer could be something about how paranoia may arise from belief-updating, rather than social cognition. For example, we recommend a title such as "Paranoia may arise from general problems in belief-updating rather than specific social cognition" or something like that.

– Add a paragraph outlining the limitations of cross-species comparison, particularly the fact that the rats are compared within subject, while the humans are compared between subjects.

– The social nature of rats is heavily debated, and while we know they are not as social as humans, there may be some sociality for the rats. Nevertheless, the task is still asocial, and therefore assists the argument of the paper. However, if the authors are going to discuss that rats are asocial animals, we think they should include a paragraph discussing the support for and against this statement, and relate it to the asocial nature of the task. Along with this argument, please mention how rats were housed, which speaks to their sociality.

– Figure 7 has not been adequately addressed from the first round of revisions. It is still largely impenetrable. For instance, what is the left side of 7A? What are the lines? Can you describe what "choice trajectory" is? Phrases like "the purple shaded error bars indicate…" would be very helpful to the reader.

– In general, the figures need more work. Fonts are too small (particularly Figure 4), which makes it difficult to really interpret the graphs. Along with that, many of the graphs have a lot of panels and not a lot of text to describe what each of the panels means. Thorough explication of the figures would improve the paper tremendously.

https://doi.org/10.7554/eLife.56345.sa1

Author response

Reviewer #1:

In this study, the authors tested the ability of humans and rats to track probabilities of reward in a 3-option discrimination task. Performance was challenged by outright reversal of the reward probabilities of the different options (phasic volatility) as well as by a shift in the spread of probabilities across blocks (contextual volatility). The effects of different types of volatility on performance were modeled and correlated with paranoia in the humans and with effects of methamphetamine in the rats, the use of which has been associated with paranoia in humans. Paranoia/meth use was associated with worse performance on the task, reflected in fewer reversals in humans and increases in suboptimal win-switch and lose-stay responding, and these tendencies were associated with an increase in the model parameter reflecting phasic volatility and a reduction in the model parameter reflecting contextual volatility. The authors conclude that alterations in perceptions of environmental volatility – uncertainty – may play a significant causal role in paranoia.

Overall I really like the use of the task variants and modeling to identify links between paranoia and simple learning parameters. I did find it hard to decipher some of the Results sections and the modeling. I think the paper would benefit greatly from being written with more up front handholding for readers who are not well-versed in these concepts. This might be accomplished by laying out more clearly how the different parameters can be understood both intuitively to impact learning/paranoia as well as how they are directly related to behavior in the tasks. This might include presenting more of the behavioral data. Currently all that is presented are the model parameters. It would be more convincing I think if the actual performance was shown from the subjects and then from the model, along with the parameters.

We are glad that the reviewer liked our work. We agree that some of our presentation could be made more accessible to readers with different backgrounds in modeling experience. In the much-revised version of the paper, we signpost the modeling results much more clearly. As requested, we show model simulations next to behavioral data.

As part of this, I also am not sure the rodent data really fits. I like its inclusion in principle, but the task does not correspond directly to the variants used in humans. Specifically it lacks the shift in context volatility. This seems crucial to me. I think perhaps it might be removed to simplify the presentation. Likewise the task variants that do not include this could be removed.

We take the point. The paper was unwieldy. As we argue above, we would prefer not to remove either these rat data or these data from the task variants.

The different task variants serve as important controls for our volatility manipulation in version 3 – the task version that increased uncertainty about the task in a manner that distinguished the high from the low paranoia participants.

With regards to these rat data – we now state more explicitly what the differences are between the tasks – and what the similarities are. The reviewer is correct, we do not manipulate the volatility context in the rodent task like we did in version 3 of the human task. However, we believe these rat data are still key and informative. When confronted with increased task volatility, even low-paranoia participants began to behave more stochastically. The high paranoia participants evinced this stochasticity, even before the contextual shift towards higher volatility, during the easy task blocks. The easy task blocks are more similar to the contingencies that the rats experienced. Chronic exposure to methamphetamine made rats behave similarly to high-paranoia humans on this comparable contingency. We believe that is worth reporting. It supports further exploration of this task in a rodent setting, with all of the tools available to computational behavioral neuroscience, in order to better understand and ultimately treat paranoia.

On an interpretive level, I had two further questions. The first is whether it is possible to reproduce the performance with simpler models, and how much of an improvement is gained with the use of the more complex model?

In response to this reviewer and the others, we fit simpler models. Those simpler models did not capture the behavioral effects or group differences in our data and as such, we conclude that our three-layer model is the most appropriate.

Beyond this, I also wonder if the authors believe that some of the effects might be compensatory – that is, if I understand correctly, they are arguing that there is less of an impact of context volatility on behavior in the experimental subjects. If this is true, it seems to me it might lead to more surprise at the sudden changes in reward probability when a reversal occurs?

This is an interesting thought. To phrase it differently: might the increased expectation of volatility in paranoid participants be adaptive somehow? In the unprecedented uncertainty that we are currently experiencing, people who had prepared all along (and been ridiculed for it) might feel validated. Others – expertly reviewed by Raihani and Bell2 – have evoked “the smoke detector principle”3,4 to explain paranoia – that is, a series of false alarms (even if costly) is preferable to a catastrophic miss5. However, our data advise against the conclusion that paranoia is an adaptive solution to high volatility. This is because, in addition to high expected volatility, paranoid participants also appear impaired at learning from volatility (captured in our κ parameter): they expect volatility (captured by μ30) but cannot use it adaptively to update their beliefs appropriately (more negative ω3). This would seem an extremely deleterious combination, and one which captures the broad reach of paranoia and the fact that it fails to satisfy its adherents – there is always something new to be concerned about, some new dimension that one's persecutors can reach into.

Reviewer #2:

The authors ran 2 experiments in human subjects (one in the lab, the other online) and re-analysed behavioural data in rats, and found that: 1) in humans, paranoia scores are correlated with an impairment in volatility monitoring according to a Bayesian meta-learning framework; 2) in rats, methamphetamine administration (a pharmacological manipulation that induces paranoia in humans) impairs uncertainty monitoring in a similar way. Overall, I liked this paper; I think it represents an important contribution.

My main questions / suggestions are about the choice of model-free metrics, statistical analyses, and computational modeling inferences.

1) In Experiment 1, the difference between high/low paranoia is on the “number of reversals” variable. In Experiment 2 (and in the rats) the difference between high/low paranoia (placebo/methamphetamine) is captured by the “win-switch” rate. However, I could not find the “win-switch” rate measure for Experiment 1 or the “number of reversals” metric for Experiment 2. The authors should report the same behavioural metrics for all experiments.

We are grateful for this opportunity to clarify. Average win-switch rates and numbers of reversals are reported in Tables 1 and 2. We recognize that the tables were perhaps too densely populated with information. The reporting of behavioral data is consistent between Experiments 1 and 2, with the exception of study-specific metrics such as the number of null trials, which only occurred in Experiment 1. Behavioral metrics for Experiment 3 have been previously published (see Groman et al., 20186).
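For readers less familiar with these model-free metrics, win-switch and lose-stay rates can be computed directly from a choice/outcome sequence. The sketch below is purely illustrative (the function names are hypothetical; this is not the analysis code used in the paper):

```python
# Illustrative only: hypothetical helpers for the two model-free metrics
# discussed here, computed from parallel lists of choices and outcomes.

def win_switch_rate(choices, outcomes):
    """Proportion of rewarded ('win') trials followed by a switch to another option."""
    wins = switches_after_win = 0
    for t in range(len(choices) - 1):
        if outcomes[t] == 1:                 # rewarded trial
            wins += 1
            if choices[t + 1] != choices[t]:
                switches_after_win += 1
    return switches_after_win / wins if wins else 0.0

def lose_stay_rate(choices, outcomes):
    """Proportion of unrewarded ('lose') trials followed by repeating the same option."""
    losses = stays_after_loss = 0
    for t in range(len(choices) - 1):
        if outcomes[t] == 0:                 # unrewarded trial
            losses += 1
            if choices[t + 1] == choices[t]:
                stays_after_loss += 1
    return stays_after_loss / losses if losses else 0.0

# Tiny worked example: of three wins before the final trial, two are followed
# by a switch, so the win-switch rate is 2/3.
choices = [0, 0, 1, 1, 2]
outcomes = [1, 1, 0, 1, 0]
print(win_switch_rate(choices, outcomes))
print(lose_stay_rate(choices, outcomes))
```

High win-switch and lose-stay rates are suboptimal on this task, which is why they serve as simple signatures of stochastic responding.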

2) Even if expected, the correlation between depression, anxiety and paranoia is a bit annoying. I am convinced that paranoia is the main determinant of the computational effects, but I think the authors could provide some additional evidence that this is the case. A possible solution could be to use a structural equation modeling. Another (possibly better) solution would be to run a PCA on the three scales (the average scores, not necessarily the individual items): my prediction is that the first component will have positive loadings on the three scales and the second will be specific to the paranoia scale. They could then correlate the PCA values instead of the scores of the scales.

This is a great suggestion. We performed the PCA as suggested, combining these data from the SCID Paranoia questions and the Beck Depression and Beck Anxiety Inventories. The scree plot depicts the three-principal-component solution. We regressed each on the kappa parameter, and only principal component 1 correlated with kappa. Unpacking the contribution of each scale to PC1, it is clear that depression, anxiety and paranoia all contribute to PC1. We suggest that this finding is consistent with the idea that depression and anxiety represent contexts in which paranoia can flourish and, likewise, that harboring a paranoid stance toward the world can induce depression and anxiety. We report this analysis in the revised version of the manuscript. The multiple regression that we included in the manuscript does, however, suggest that the relationship between paranoia and kappa is paramount, since, in that model, kappa was not related to depression or anxiety, but remained significantly related to paranoia.
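To make the suggested analysis concrete, here is a minimal sketch of the PCA-then-regress approach, using random placeholder data (the data and variable names are illustrative; this is not the analysis code or data used for the manuscript):

```python
# Hedged sketch of the reviewer's suggestion: PCA over three z-scored scale
# scores (paranoia, depression, anxiety), then relating a fitted model
# parameter (here labeled kappa) to the component scores. Data are random
# placeholders, not the study data.
import numpy as np

rng = np.random.default_rng(0)
n = 100
scales = rng.standard_normal((n, 3))   # columns stand in for paranoia, BDI, BAI
kappa = rng.standard_normal(n)         # stands in for the fitted HGF parameter

# PCA via SVD of the z-scored scale matrix
Z = (scales - scales.mean(0)) / scales.std(0, ddof=1)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
pc_scores = Z @ Vt.T                   # participant scores on PC1-PC3
loadings = Vt                          # each row: one PC's loadings on the scales

# Relate kappa to each component score (here via simple Pearson correlation)
for i in range(3):
    r = np.corrcoef(pc_scores[:, i], kappa)[0, 1]
    print(f"PC{i + 1}: r with kappa = {r:.3f}")
```

Inspecting `loadings` row by row shows how much each scale contributes to each component, which is the step that reveals whether PC1 is a shared distress factor or specific to paranoia.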

3) I think that, in addition to the current model, the authors could also test a simple RL model with different learning rates for positive and negative prediction errors (see Lefebvre et al., NHB, 2017). I think readers would be interested in knowing these results, as the learning-rate asymmetry has been shown to correlate with the striatum, as methamphetamine affects dopamine, and because there seems to be an affective component to paranoia. This analysis could be done in parallel with (not in antagonism to) the main model and reported in the SI.

Thank you. We fit this model. We find no difference in prediction error weightings between our high and low paranoia participants. This simpler model does not capture the patterns in our data. We now report this analysis in the revised paper.
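For context, the model the reviewer describes is a Rescorla–Wagner learner whose learning rate depends on the sign of the prediction error. Below is a minimal simulation sketch of such a model; the parameter values and reward probabilities are illustrative, not fitted to our data:

```python
# Sketch of an asymmetric-learning-rate RL model (after Lefebvre et al., 2017):
# a Rescorla-Wagner learner with separate learning rates for positive and
# negative prediction errors, choosing among options via softmax.
import math
import random

def simulate(alpha_pos, alpha_neg, beta, reward_probs, n_trials, seed=0):
    rng = random.Random(seed)
    n_opts = len(reward_probs)
    Q = [0.0] * n_opts                       # learned option values
    choices, outcomes = [], []
    for _ in range(n_trials):
        # softmax choice over current values
        exps = [math.exp(beta * q) for q in Q]
        z = sum(exps)
        u, acc, c = rng.random(), 0.0, n_opts - 1
        for i, e in enumerate(exps):
            acc += e / z
            if u <= acc:
                c = i
                break
        r = 1 if rng.random() < reward_probs[c] else 0
        delta = r - Q[c]                     # reward prediction error
        alpha = alpha_pos if delta > 0 else alpha_neg
        Q[c] += alpha * delta                # asymmetric value update
        choices.append(c)
        outcomes.append(r)
    return choices, outcomes, Q

# Illustrative run: optimistic learner (alpha_pos > alpha_neg) on three decks
choices, outcomes, Q = simulate(0.4, 0.1, 5.0, [0.9, 0.5, 0.1], 200)
```

Fitting `alpha_pos` and `alpha_neg` separately per participant, and comparing their difference across groups, is the analysis we ran; as noted above, it did not distinguish high- from low-paranoia participants.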

Reviewer #3:

This study takes on the hypothesis that paranoia is actually due to dysfunction in recognizing volatility. They address this through two human experiments (one in-person comparing individuals with and without psychiatric diagnoses; one online using Amazon Mechanical Turk) and a rat experiment in which rats are exposed to methamphetamine or saline. They justify their claims by fitting a model using a Hierarchical Gaussian Filter (HGF) and identifying changes in the underlying parameters, particularly identifying larger priors for higher volatility in the high paranoia group and in the methamphetamine-exposed rats.

The strength of this paper is that it uses a simple task to explore important topics, particularly a transdiagnostic perspective. However, we have several serious problems with the manuscript, including both the communication and the experiments and analyses themselves. While we laud the authors for attempting to compare experiments across species, we do not find the rat experiment a good parallel for the human one.

Major concerns:

– Overall, it was very difficult to read this paper. Even with multiple read-throughs each and multiple discussions between the reviewers (senior PI and graduate student), we are not sure that we understand the manuscript, its goals, or its conclusions. Many of the figures are not referenced in the text (Tables 5 and 9 are never referenced in the paper at all), many of the figures are unclear as to their purpose (what is being plotted in Figure 4?), and many of the figures are very poorly explained (we finally concluded that we are supposed to track colors not position in Figure 3). A careful use of supplemental figures, a better track to the storyline, and better communication overall would improve this paper dramatically.

This is a fair criticism. We had prepared our work as a paper with supplementary materials. Unfortunately, eLife does not permit supplementary materials and so we had to integrate them into our manuscript. This made the piece unwieldy. We have thoroughly revised the manuscript for clarity of presentation. We feel it is much improved. We hope that the PI and graduate student agree.

– The rat experiment is interesting, but it is not a good comparison for the human data. The rat experiment is within-subject, comparing pre and post-manipulation, while the human data is between-subject, comparing high and low paranoia scores. Furthermore, the experiment itself is completely different. While the human experiments had three decks that changed throughout, the rats had two changing and one unchanging deck. We recommend removing the rat experiment.

We respectfully disagree. There are key differences between the human and rat tasks, of course; however, the rat methamphetamine manipulation captures the apparent stochasticity of the high-paranoia participants even in response to the simple, or easy, contingency. Our data show that this stochasticity arises in high-paranoia humans and methamphetamine-exposed rats for exactly the same computational reasons. As such, we prefer to retain the rat experiment. In the much-revised manuscript, which we hope is much clearer, we now emphasize the task differences so that readers are aware of them.

Unclear concerns

– These Bayesian models (such as the HGF shown in Figure 2) are notoriously unstable. Very small changes can produce dramatically different results. How independent are these variables? Are there other models that can be compared?

In response to these reviewers and all other reviewers, we fit simpler models (one reinforcement learning model and one simpler HGF model). Those models failed to capture the task-induced and group differences that we sought to explain. Taken together with the fact that our chosen model yields parameters which, when used to simulate data, recapitulate the win-switching and stochastic behavior we observed in high paranoia, we believe that the model is the most appropriate. We hope that our more careful and clear unpacking of our modeling approach and our data is more interpretable and understandable.

The only choices to which the HGF modeling results are sensitive are the priors on the estimated parameters. While the choice of priors affects the model's performance, the nature of this effect is different from that seen in chaotic systems, which the reviewer seems to be referring to. In our model, small changes to priors lead to small changes in estimated parameters and inferred belief trajectories.

– The authors seem to be trying to make the argument that the real issue with paranoia is not the social decision-making process, but rather an underlying issue with measuring volatility (and particularly meta-volatility). As such, the title of the paper should be changed. The important part of this paper (as we understand it) is not the cross-species translation (which is problematic at best), but rather the new model of paranoia as a dysfunction elsewhere than the social sphere.

We agree: this paper is about volatility processing as a mechanism for paranoia, relatively free from the social domain (which has been the focus for most paranoia research in humans). We disagree that the cross-species part is problematic, as we have outlined above. In fact, the rodent data are an important keystone of our argument. Compared to humans and non-human primates, rodents are relatively asocial. They are also free of the socioeconomic factors often associated with paranoia. The observation of similar behaviors in a similar task under the influence of manipulations relevant to human paranoia bolsters our argument that the observed “style” of learning dysfunction is not restricted to the social domain. One of the biggest take-home points and implications of non-social learning dysfunction is that future studies can explore the neural substrates of paranoia-relevant learning mechanisms in animal models without needing to emulate the complexities of paranoid social relationships. But we agree that our aim – to deliver an account of paranoia that focuses not on the social, but on basic belief updating mechanisms – could have been clearer. We now clarify in the revised manuscript.

– The authors need to add citations for using rats exposed to methamphetamine as a model for paranoia. While there appears to be research supporting this method, the authors do not actually cite it. To our knowledge, it is not appropriate to describe methamphetamine as a locus coeruleus disruption or as a change in noradrenergic gain. Yet the discussion about the rat experiment seems to be based on noradrenergic gain manipulations.

We now cite the extensive literature on methamphetamine’s impact on noradrenaline release and locus-coeruleus function (see for example7-9).

Specific concerns

– Figure 1: what is the difference between performance independent and performance dependent changes? Explain in figure caption.

Thank you; we now explain in the caption that performance-dependent reversals are elicited after a certain number of correct responses, whereas performance-independent reversals are imposed regardless of participant behavior.

– Figure 2B: Once we finally realized that the key to this figure was the colors, we liked that the authors kept the colors consistent across the rats and humans, since the rat comparisons were pre-Rx instead of having two blocks, and therefore likely indistinguishable from the low-paranoia group pre-Rx. However, it makes comparison of the figures confusing, because we expect the comparisons across the figures to mean the same thing when comparing the outcomes. This figure requires much better and clearer explanations.

We have revised the legend and in-text description of this figure.

– Figure 3: Is there a reason that the authors expected version 3 to be significant over version 4? Why might the order of context change matter (or not matter)?

We thought that moving from an easy to a harder task context would be significant because the easy context would be easier to acquire than the hard one. If the hard context were completed first, the expectations would be weaker and so less confounded by the subsequent changes in the underlying contingencies.

– Figure 4: This figure is confusing and at the moment does not provide additional understanding of the results. Consider relabeling and adjusting figure caption to explain what is in the figure and move the results to the Results section or display in a table (or both), or otherwise remove it in total.

We have removed what was Figure 4.

– Figure 5 seems important, but shouldn't we see this for all of the important variables? We thought the argument was primarily about metavolatility rather than phasic volatility coupling.

Kappa is the parameter that replicated across all experimental contexts and survived correction for multiple statistical comparisons as well as correction for all the potential demographic confounders we queried. It is the parameter in which we are most confident for explaining the group differences. We correlated it with paranoia as a further test of our hypothesis. We did not feel it appropriate to correlate every parameter from the model with every clinical variable. The point is that the volatility learning rate and the priors on volatility capture the differences between the groups and travel together in all three of the experiments we report (as evidenced by the meta-analytic p-value and cluster analyses).

– Figure 7 is important, but was poorly labeled and mostly impenetrable. In particular, panel a is completely unclear. There are no labels, no explanation for colors or any other components. What is the difference between simulated and "recovered" parameters?

We now clearly unpack the figures in their legends and in the text. Recovered parameters are those that we estimate back from simulated behavioral choices; the simulated choices themselves were generated from parameters fit to real behavioral data. That the recovered parameters correlate with those fit to the real data suggests that we have an appropriate model that recapitulates behavioral choices matching those we observed experimentally.
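The general logic of parameter recovery can be sketched with a toy model: simulate choices from known parameters, re-fit the model to the simulated choices, and correlate true with recovered values. The sketch below substitutes a simple one-parameter learner for the full HGF, so every name and value is illustrative rather than part of our actual pipeline:

```python
# Generic parameter-recovery sketch (not the authors' HGF pipeline).
# A one-parameter Rescorla-Wagner learner stands in for the perceptual model;
# the learning rate alpha is "recovered" by grid-search maximum likelihood.
import math
import random

def simulate(alpha, beta, probs, n, rng):
    """Simulate softmax choices and outcomes from a learner with rate alpha."""
    Q = [0.5] * len(probs)
    data = []
    for _ in range(n):
        exps = [math.exp(beta * q) for q in Q]
        z = sum(exps)
        u, acc, c = rng.random(), 0.0, len(probs) - 1
        for i, e in enumerate(exps):
            acc += e / z
            if u <= acc:
                c = i
                break
        r = 1 if rng.random() < probs[c] else 0
        data.append((c, r))
        Q[c] += alpha * (r - Q[c])
    return data

def nll(alpha, beta, data, n_opts):
    """Negative log-likelihood of observed choices under a candidate alpha."""
    Q = [0.5] * n_opts
    total = 0.0
    for c, r in data:
        exps = [math.exp(beta * q) for q in Q]
        total -= math.log(exps[c] / sum(exps))
        Q[c] += alpha * (r - Q[c])
    return total

rng = random.Random(1)
grid = [i / 50 for i in range(1, 50)]           # candidate learning rates
true, recovered = [], []
for _ in range(20):                              # 20 synthetic "participants"
    a = rng.uniform(0.05, 0.6)                   # known "true" parameter
    data = simulate(a, 4.0, [0.8, 0.5, 0.2], 300, rng)
    a_hat = min(grid, key=lambda g: nll(g, 4.0, data, 3))  # re-fit
    true.append(a)
    recovered.append(a_hat)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

r = pearson(true, recovered)   # good recovery shows up as a strong positive r
```

A parameter whose true and recovered values correlate well is identifiable from behavior; that is the property we report for kappa.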

– Figure 8 claims a replication between in-person and online, but the online appear significant, while the in-person do not.

We replicate the broad pattern of changes in behaviors and model parameters across the three experiments – as evidenced by the meta-analytic p-value analysis and the cluster analysis. The results are not necessarily completely identical but they are highly consistent across the studies.

– Figures 9 and 10 are impenetrable. What is this cluster analysis and how is it done?

We now unpack the cluster analysis more clearly in the manuscript.

– Figure 11 seems more appropriate to a supplemental figure.

We have removed this figure.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

The manuscript has been improved but there are some remaining issues that need to be addressed before acceptance. In particular, while the concerns of reviewer 1 and 2 were addressed, and the manuscript is markedly improved in terms of clarity, reviewer 3 still has some remaining requests. They are described below.

Reviewer #3:

Overall, the paper is much clearer in its explanation of the design, analyses, and simulations. The authors have also made a clearer argument for including the rat data. However, we believe they still need to explicitly state the limitations of the human and rat experiments.

We now explicitly state the limitations as requested, acknowledging both the design differences and the broader debate about sociality in rats:

“There are some important limitations to our conclusions. Compared with humans, rats are relatively asocial. But they are not completely asocial. In our experiment they were housed in pairs, and, more broadly, they evince social affiliative interactions with other rats. A further limitation centers on the comparability of our experimental designs. In humans our comparisons were both within (contingency transition) and between groups (low versus high paranoia). In rats, the model was also mixed with some between (saline versus methamphetamine) and some within-subject (pre versus post chronic treatment) comparisons. We should be clear that there was no contingency context transition in the rat study. However, just as that transition made low paranoia humans behave like high paranoia humans, chronic methamphetamine exposure made rats behave on a stable contingency much like high paranoia humans – even in the absence of contingency transition.”

Additionally, the graphs still need to be brought up to the level of clarity of the writing (particularly Figure 7). In summary, the authors have successfully clarified many questions about the analyses and conclusions of the paper, yet additional work is needed surrounding the rat vs. human experimental comparisons.

We included vector (PDF) files of the figures, which should be clearer than the embedded figures. We feel the best way to improve clarity is with more detailed figure legends, which we now include, with a particular emphasis on Figure 7.

We agree that some of the figures were dense and perhaps distracting. For example, Figure 10 was intended to be a supplementary figure depicting our control analyses for the clustering. As we noted previously, eLife does not allow supplementary figures. On the basis of the reviewer’s comments, we opted to remove Figure 10 and describe the results of the control analyses in the text. We deemed another large multi-paneled figure to be surplus to requirements.

Major concerns:

– The title needs to be changed, as it is misleading regarding the findings. What seems to be the main argument is that paranoia-like behaviors are evident in belief-updating outside of a social lens, so perhaps something clearer could be something about how paranoia may arise from belief-updating, rather than social cognition. For example, we recommend a title such as "Paranoia may arise from general problems in belief-updating rather than specific social cognition" or something like that.

While we disagree that the title is misleading, we have changed the title at the reviewer’s request. We chose:

“Paranoia as a deficit in non-social belief updating”

– Add a paragraph outlining the limitations of cross-species comparison, particularly the fact that the rats are compared within subject, while the humans are compared between subjects.

Per above, this has been noted as a limitation in the Discussion:

“There are some important limitations to our conclusions. Compared with humans, rats are relatively asocial. But they are not completely asocial. In our experiment they were housed in pairs, and, more broadly, they evince social affiliative interactions with other rats. A further limitation centers on the comparability of our experimental designs. In humans our comparisons were both within (contingency transition) and between groups (low versus high paranoia). In rats, the model was also mixed with some between (saline versus methamphetamine) and some within-subject (pre versus post chronic treatment) comparisons. We should be clear that there was no contingency context transition in the rat study. However, just as that transition made low paranoia humans behave like high paranoia humans, chronic methamphetamine exposure made rats behave on a stable contingency much like high paranoia humans – even in the absence of contingency transition.”

– The social nature of rats is heavily debated, and while we know they are not as social as humans, there may be some sociality for the rats. Nevertheless, the task is still asocial, and therefore assists the argument of the paper. However, if the authors are going to discuss that rats are asocial animals, we think they should include a paragraph discussing the support for and against this statement, and relate it to the asocial nature of the task.

Again, per above, this statement has been made.

Along with this argument, please mention how rats were housed, which speaks to their sociality.

We now state:

“Compared with humans, rats are relatively asocial. But they are not completely asocial. In our experiment they were housed in pairs, and, more broadly, they evince social affiliative interactions with other rats1-3.”

Rats were housed in pairs. This has also been noted:

“In our experiment they were housed in pairs,”

– Figure 7 has not been adequately addressed from the first round of revisions. It is still largely impenetrable. For instance, what is the left side of 7A? What are the lines? Can you describe what "choice trajectory" is? Phrases like "the purple shaded errorbars indicate…" would be very helpful to the reader.

We are sorry that the figure was largely impenetrable. This type of figure is typically included in supplementary files. However, as noted previously, eLife does not allow supplementary figures. We include the figure in order to highlight that actual participant choices and inferred beliefs (following our observing-the-observer approach), as well as beliefs inferred from simulated choices (themselves grounded in perceptual parameters estimated from actual behavior), are very similar. It is encouraging that the kappa parameter (which captures the impact of phasic volatility on belief updating, survives correction for multiple comparisons, replicates across studies in its association with paranoia and paranoia-relevant states, and drives clustering) is well recovered.

We now break down the legend panel by panel, describing each feature.

We hope that it is now clearer.

Here is the new legend for Figure 7:

“Figure 7. Parameter recovery. a, Actual subject trajectory: this is an example choice trajectory from one participant (top). The layers correspond to the three layers of belief in the HGF model (depicted in Figure 2A). Focusing on the low-level beliefs (yellow box): the purple line represents the subject’s estimated first-level belief about the value of choosing deck 1; blue, their belief about the value of choosing deck 2; and red, their belief about the value of choosing deck 3. Simulated subject trajectory represents the estimated beliefs from choices simulated from estimated perceptual parameters from that participant (middle), and Recovered subject trajectory represents what happens when we re-estimate beliefs from the simulated choices (bottom). Crucially, simulated trajectories closely align with real trajectories (the increases and decreases in estimated beliefs about the values of each deck [purple, blue, red lines] align with each other across actual, simulated and recovered trajectories), although trial-by-trial choices (colored dots and arrow) occasionally differ. Outcomes (1 or 0; black dots and arrows) remain the same. b, Actual versus Recovered: these data represent the belief parameters estimated from the participant’s responses (Actual) compared to those estimated from the choices simulated with the participant’s perceptual parameters (Recovered). Actual and Recovered values significantly correlate for 𝛚2 (r=0.702, p=2.52E-11) and 𝛋 (r=0.305, p=0.011) but not 𝛚3 (r=0.172, p=0.16) or 𝛍30 (r=0.186, p=0.13). Box plots: gray indicates low paranoia, orange designates high paranoia; center lines depict medians; box limits indicate the 25th and 75th percentiles; whiskers extend 1.5”

– In general, the figures need more work. Fonts are too small (particularly Figure 4), which makes it difficult to really interpret the graphs. Along with that, many of the graphs have a lot of panels and not a lot of text to describe what each of the panels means. Thorough explication of the figures would improve the paper tremendously.

Per our response above, we have included PDF vector files that can be readily enlarged. We feel the panels are all necessary. The best way to improve the figures, we believe, is to increase the detail included in the legends, which we now do throughout. We also removed Figure 10. Its many panels and complexity ultimately distracted from the simple message that the cluster analysis is robust to removal of various halves of the data.

References

1 Palminteri, S., Wyart, V. & Koechlin, E. The Importance of Falsification in Computational Cognitive Modeling. Trends Cogn Sci 21, 425-433, doi:10.1016/j.tics.2017.03.011 (2017).

2 Raihani, N. J. & Bell, V. An evolutionary perspective on paranoia. Nat Hum Behav 3, 114-121, doi:10.1038/s41562-018-0495-0 (2019).

3 Nesse, R. M. The smoke detector principle: Signal detection and optimal defense regulation. Evol Med Public Health 2019, 1, doi:10.1093/emph/eoy034 (2019).

4 Nesse, R. M. The smoke detector principle. Natural selection and the regulation of defensive responses. Ann N Y Acad Sci 935, 75-85 (2001).

5 Green, M. J. & Phillips, M. L. Social threat perception and the evolution of paranoia. Neurosci Biobehav Rev 28, 333-342, doi:10.1016/j.neubiorev.2004.03.006 (2004).

6 Groman, S. M., Rich, K. M., Smith, N. J., Lee, D. & Taylor, J. R. Chronic Exposure to Methamphetamine Disrupts Reinforcement-Based Decision Making in Rats. Neuropsychopharmacology 43, 770-780, doi:10.1038/npp.2017.159 (2018).

7 Ferrucci, M. et al. The Effects of Amphetamine and Methamphetamine on the Release of Norepinephrine, Dopamine and Acetylcholine From the Brainstem Reticular Formation. Front Neuroanat 13, 48, doi:10.3389/fnana.2019.00048 (2019).

8 Ferrucci, M., Giorgi, F. S., Bartalucci, A., Busceti, C. L. & Fornai, F. The effects of locus coeruleus and norepinephrine in methamphetamine toxicity. Curr Neuropharmacol 11, 80-94, doi:10.2174/157015913804999522 (2013).

9 Ferrucci, M., Pasquali, L., Paparelli, A., Ruggieri, S. & Fornai, F. Pathways of methamphetamine toxicity. Ann N Y Acad Sci 1139, 177-185, doi:10.1196/annals.1432.013 (2008).

https://doi.org/10.7554/eLife.56345.sa2

Article and author information

Author details

  1. Erin J Reed

    1. Interdepartmental Neuroscience Program, Yale School of Medicine, New Haven, United States
    2. Yale MD-PhD Program, Yale School of Medicine, New Haven, United States
    Contribution
    Conceptualization, Data curation, Formal analysis, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-1669-1929
  2. Stefan Uddenberg

    Princeton Neuroscience Institute, Princeton University, Princeton, United States
    Contribution
    Software, Writing - review and editing
    Competing interests
    No competing interests declared
  3. Praveen Suthaharan

    Department of Psychiatry, Connecticut Mental Health Center, Yale University, New Haven, United States
    Contribution
    Formal analysis, Visualization, Writing - review and editing
    Competing interests
    No competing interests declared
  4. Christoph H Mathys

    1. Scuola Internazionale Superiore di Studi Avanzati (SISSA), Trieste, Italy
    2. Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich and ETH Zurich, Zurich, Switzerland
    Contribution
    Software, Formal analysis, Writing - original draft, Writing - review and editing
    Competing interests
    No competing interests declared
  5. Jane R Taylor

    Department of Psychiatry, Connecticut Mental Health Center, Yale University, New Haven, United States
    Contribution
    Resources, Supervision, Writing - review and editing
    Competing interests
    No competing interests declared
  6. Stephanie Mary Groman

    Department of Psychiatry, Connecticut Mental Health Center, Yale University, New Haven, United States
    Contribution
    Conceptualization, Resources, Software, Formal analysis, Supervision, Methodology, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-5231-0612
  7. Philip R Corlett

    Department of Psychiatry, Connecticut Mental Health Center, Yale University, New Haven, United States
    Contribution
    Conceptualization, Resources, Supervision, Funding acquisition, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing
    For correspondence
    philip.corlett@yale.edu
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-5368-1992

Funding

NIMH (R01MH12887)

  • Philip R Corlett

NIMH (R21MH120799-01)

  • Stephanie Mary Groman
  • Philip R Corlett

International Mental Health Research Organization (Janssen Rising Star Translational Research Award)

  • Philip R Corlett

Interacting Minds Centre (Pilot Project Award)

  • Philip R Corlett

NIH (Medical Scientist Training Program Training Grant)

  • Erin J Reed

NIH (GM007205)

  • Erin J Reed

NINDS (Neurobiology of Cortical Systems Grant)

  • Erin J Reed

NINDS (T32 NS007224)

  • Erin J Reed

Gustavus and Louise Pfeiffer Research Foundation (Fellowship)

  • Erin J Reed

NSF (DGE1122492)

  • Stefan Uddenberg

NSF (DGE1752134)

  • Stefan Uddenberg

NIDA (DA041480)

  • Stephanie Mary Groman

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

This work was supported by the Yale University Department of Psychiatry, the Connecticut Mental Health Center (CMHC), and the Connecticut State Department of Mental Health and Addiction Services (DMHAS). It was funded by an IMHRO/Janssen Rising Star Translational Research Award, an Interacting Minds Centre (Aarhus) Pilot Project Award, NIMH R01MH12887 (PRC), and NIMH R21MH120799-01 (PRC and SMG). EJR was supported by the NIH Medical Scientist Training Program Training Grant, GM007205; NINDS Neurobiology of Cortical Systems Grant, T32 NS007224; and a Gustavus and Louise Pfeiffer Research Foundation Fellowship. SU received funding from NSF Fellowships DGE1122492 and DGE1752134. SMG and JRT were supported by NIDA DA041480. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors thank Dr. James Waltz for providing an earlier version of the reversal-learning E-Prime code. The authors acknowledge the help, support, and advice of Dr. Sarah Fineberg, Dr. Albert Powers III, and Dr. Pantelis Leptourgos.

Ethics

Human subjects: Experiments were conducted at Yale University and the Connecticut Mental Health Center (New Haven, CT) in strict accordance with Yale University's Human Investigation Committee and Institutional Animal Care and Use Committee. Informed consent was provided by all research participants (Yale HIC# 2000022111: Beliefs and Personality Traits).

Animal experimentation: This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All of the animals were handled according to protocols approved by the institutional animal care and use committee (IACUC) at Yale University.

Senior Editor

  1. Floris P de Lange, Radboud University, Netherlands

Reviewing Editor

  1. Geoffrey Schoenbaum, National Institute on Drug Abuse, National Institutes of Health, United States

Reviewer

  1. Geoffrey Schoenbaum, National Institute on Drug Abuse, National Institutes of Health, United States

Publication history

  1. Received: February 24, 2020
  2. Accepted: May 22, 2020
  3. Accepted Manuscript published: May 26, 2020 (version 1)
  4. Accepted Manuscript updated: May 27, 2020 (version 2)
  5. Version of Record published: June 30, 2020 (version 3)
  6. Version of Record updated: July 7, 2020 (version 4)

Copyright

© 2020, Reed et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 1,919
    Page views
  • 243
    Downloads
  • 0
    Citations

Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.
