Conformist social learning leads to self-organised prevention against adverse bias in risky decision making

  1. Wataru Toyokawa  Is a corresponding author
  2. Wolfgang Gaissmaier
  1. Department of Psychology, University of Konstanz, Germany
  2. Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Germany

Abstract

Given the ubiquity of potentially adverse behavioural bias owing to myopic trial-and-error learning, it seems paradoxical that improvements in decision-making performance through conformist social learning, a process widely considered to be bias amplification, still prevail in animal collective behaviour. Here we show, through model analyses and large-scale interactive behavioural experiments with 585 human subjects, that conformist influence can indeed promote favourable risk taking in repeated experience-based decision making, even though many individuals are systematically biased towards adverse risk aversion. Although strong positive feedback conferred by copying the majority’s behaviour could result in unfavourable informational cascades, our differential equation model of collective behavioural dynamics identified a key role for increasing exploration by negative feedback arising when a weak minority influence undermines the inherent behavioural bias. This ‘collective behavioural rescue’, emerging through coordination of positive and negative feedback, highlights a benefit of collective learning in a broader range of environmental conditions than previously assumed and resolves the ostensible paradox of adaptive collective behavioural flexibility under conformist influences.

Editor's evaluation

The authors use reinforcement learning and dynamic modeling to formalize the favorable effects of conformity on risk taking, demonstrating that social influence can produce an adaptive risk-seeking equilibrium at the population level. The work provides a rigorous analysis of a paradoxical interplay between social and economic choice.

https://doi.org/10.7554/eLife.75308.sa0

eLife digest

When it comes to making decisions, like choosing a restaurant or political candidate, most of us rely on limited information that is not accurate enough to find the best option. Considering others’ decisions and opinions can help us make smarter choices, a phenomenon called “collective intelligence”.

Collective intelligence relies on individuals making unbiased decisions. If individuals are biased toward making poor choices over better ones, copying the group’s behavior may exaggerate those biases. And humans are persistently biased: to avoid repeated failure, they tend to avoid risky behavior, often choosing safer alternatives even when risk-taking might bring a greater long-term benefit. This bias may hamper collective intelligence.

Toyokawa and Gaissmaier show that learning from others helps humans make better decisions even when most people are biased toward risk aversion. The study first used computer modeling to assess the effect of individual bias on collective intelligence. Then, Toyokawa and Gaissmaier conducted an online investigation in which 185 people performed a task that involved choosing a safer or riskier alternative, and 400 people completed the same task in groups of 2 to 8. The online experiment showed that participating in a group changed the learning dynamics to make information sampling less biased over time. This mitigated people’s tendency to be risk-averse when risk-taking is beneficial.

The model and experiments help explain why humans have evolved to learn through social interactions. Social learning and the tendency of humans to conform to the group’s behavior mitigates individual risk aversion. Studies of the effect of bias on individual decision-making in other circumstances are needed. For example, would the same finding hold in the context of social media, which allows individuals to share unprecedented amounts of sometimes incorrect information?

Introduction

Collective intelligence, a self-organised improvement of decision making among socially interacting individuals, has been considered one of the key evolutionary advantages of group living (Harrison et al., 2001; Krause and Ruxton, 2002; Sumpter, 2005; Ward and Zahavi, 1973). Although the information each individual can access may be subject to uncertainty, information transfer through the adaptive use of social cues filters such ‘noise’ out (Laland, 2004; Rendell et al., 2010), making individual behaviour on average more accurate (Hastie and Kameda, 2005; King and Cowlishaw, 2007; Simons, 2004). Evolutionary models (Boyd and Richerson, 1985; Kandler and Laland, 2013; Kendal et al., 2005) and empirical evidence (Toyokawa et al., 2014; Toyokawa et al., 2019) have both shown that the benefit brought by the balanced use of both socially and individually acquired information usually outweighs the cost of possibly creating an alignment of suboptimal behaviour among individuals by herding (Bikhchandani et al., 1992; Giraldeau et al., 2002; Raafat et al., 2009). This prediction holds as long as individual trial-and-error learning leads to higher accuracy than merely random decision making (Efferson et al., 2008). Copying a common behaviour exhibited by many others is adaptive if the output of these individuals is expected to be better than uninformed decisions.

However, both humans and non-human animals suffer not only from environmental noise but also commonly from systematic biases in their decision making (e.g. Harding et al., 2004; Hertwig and Erev, 2009; Real, 1981; Real et al., 1982). Under such circumstances, simply aggregating individual inputs does not guarantee collective intelligence because a majority of the group may be biased towards suboptimization. A prominent example of such a potentially suboptimal bias is risk aversion that emerges through trial-and-error learning with adaptive information-sampling behaviour (Denrell, 2007; March, 1996). Because it is a robust consequence of decision making based on learning (Hertwig and Erev, 2009; Yechiam et al., 2006; Weber, 2006; March, 1996), risk aversion can be a major constraint of animal behaviour, especially when taking a high-risk high-return behavioural option is favourable in the long run. Therefore, the ostensible prerequisite of collective intelligence, that is, that individuals should be unbiased and more accurate than mere chance, may not always hold. A theory that incorporates dynamics of trial-and-error learning and the learnt risk aversion into social learning is needed to understand the conditions under which collective intelligence operates in risky decision making.

Given that behavioural biases are omnipresent and learning animals rarely escape from them, it may seem that social learning, especially the ‘copy-the-majority’ behaviour (aka, ‘conformist social learning’ or ‘positive frequency-based copying’; Laland, 2004), whereby the most common behaviour in a group is disproportionately more likely to be copied (Boyd and Richerson, 1985), may often lead to maladaptive herding, because recursive social interactions amplify the common bias (i.e. a positive feedback loop; Denrell and Le Mens, 2007; Denrell and Le Mens, 2017; Dussutour et al., 2005; Raafat et al., 2009). Previous studies in humans have indeed suggested that individual decision-making biases are transmitted through social influences (Chung et al., 2015; Bault et al., 2011; Suzuki et al., 2016; Shupp and Williams, 2008; Jouini et al., 2011; Moussaïd et al., 2015). Nevertheless, the collective improvement of decision accuracy through simple copying processes has been widely observed across different taxa (Sasaki and Biro, 2017; Seeley et al., 1991; Alem et al., 2016; Sumpter, 2005; Harrison et al., 2001), including the very species known to exhibit learnt risk-taking biases, such as bumblebees (Real, 1981; Real et al., 1982), honeybees (Drezner-Levy and Shafir, 2007), and pigeons (Ludvig et al., 2014). Such observations may indicate, counter-intuitively, that social learning may not necessarily trap animal groups in suboptimization even when most of the individuals are suboptimally biased.

In this paper, we propose a parsimonious computational mechanism that accounts for the emerging improvement of decision accuracy among suboptimally risk-averse individuals. In our agent-based model, we allow our hypothetical agents to compromise between individual trial-and-error learning and the frequency-based copying process, that is, a balanced reliance on social learning that has been repeatedly supported in previous empirical studies (e.g. Deffner et al., 2020; McElreath et al., 2005; McElreath et al., 2008; Toyokawa et al., 2017; Toyokawa et al., 2019). This is a natural extension of some previous models that assumed that individual decision making was regulated fully by others’ beliefs (Denrell and Le Mens, 2007; Denrell and Le Mens, 2017). Under such extremely strong social influence, exaggeration of individual bias was inevitable because information sampling was always directed towards the most popular alternative, often resulting in a mismatch between the true environmental state and what individuals believed (‘collective illusion’; Denrell and Le Mens, 2017). By allowing a mixture of social and asocial learning processes within a single individual, the emergent collective behaviour is able to remain flexible (Aplin et al., 2017; Toyokawa et al., 2019), which may allow groups to escape from the suboptimal behavioural state.

We focused on a repeated decision-making situation where individuals updated their beliefs about the value of behavioural alternatives through their own action–reward experiences (experience-based task). Experience-based decision making is widespread in animals that learn in a range of contexts (Hertwig and Erev, 2009). The time-depth interaction between belief updating and decision making may create a non-linear relationship between social learning and individual behavioural biases (Biro et al., 2016), which we hypothesised is key in improving decision accuracy in self-organised collective systems (Harrison et al., 2001; Sumpter, 2005).

In the study reported here, we first examined whether a simple form of conformist social influence can improve collective decision performance in a simple multi-armed bandit task, using an agent-based model simulation. We found that promotion of favourable risk taking can indeed emerge across different assumptions and parameter spaces, including individual heterogeneity within a group. This phenomenon apparently arises from a non-linear effect of social interactions, which we call collective behavioural rescue. To disentangle the core dynamics behind this ostensibly self-organised process, we then analysed a differential equation model representing approximate population dynamics. Combining these two theoretical approaches, we identified that a combination of positive and negative feedback loops underlies collective behavioural rescue, and that the key mechanism is a promotion of information sampling by modest conformist social influence.

Finally, to investigate whether the assumptions and predictions of the model hold in reality, we conducted a series of online behavioural experiments with human participants. The experimental task was basically a replication of the task used in the agent-based model described above, although the parameters of the bandit tasks were modified to explore wider task spaces beyond the simplest two-armed task. Experimental results show that the human collective behavioural pattern was consistent with the theoretical prediction, and model selection and parameter estimation suggest that our model assumptions fit well with our experimental data. In sum, we provide a general account of the robustness of collective intelligence even under systematic risk aversion and highlight a previously overlooked benefit of conformist social influence.

Results

The decision-making task

The minimal task that allowed us to study both learnt risk aversion and conformist social learning was a two-armed bandit task where one alternative provided a constant payoff πs (safe option s) and the other alternative provided stochastic payoffs drawn from a Gaussian distribution, πr ~ N(μ, s.d.) (risky option r; Figure 1a). Unless otherwise stated, we followed the same task setup as Denrell, 2007, who mathematically derived the condition under which individual reinforcement learners would exhibit risk aversion. In the main analysis, we focus on the case where the risky alternative had a higher mean payoff than the safe alternative (i.e. producing more payoff on average in the long run; a positive risk premium [positive RP]), meaning that choosing the risky alternative was the optimal strategy for a decision maker aiming to maximise accumulated payoffs. Unless otherwise stated, the total number of decision-making trials (time horizon) was set to T = 150 in the main simulations described below.
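As a toy illustration (our own sketch, not the authors’ code; the function name is ours), the payoff scheme can be written in a few lines of Python, with defaults matching the values used in Figure 1:

```python
import random

def payoff(option, pi_s=1.0, mu=1.5, sd=1.0):
    """One payoff draw: the safe option pays pi_s with certainty,
    while the risky option pays a Gaussian draw from N(mu, sd)."""
    if option == "safe":
        return pi_s
    return random.gauss(mu, sd)
```

With these defaults the risk premium is positive (μ = 1.5 > πs = 1), so always choosing the risky option maximises the expected payoff per trial.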

Figure 1 with 6 supplements
Mitigation of suboptimal risk aversion by social influence.

(a) A schematic diagram of the task. A safe option provides a constant reward πs = 1, whereas a risky option provides a reward randomly drawn from a Gaussian distribution with mean μ = 1.5 and s.d. = 1. (b, c) The emergence of suboptimal risk aversion (the hot stove effect) depending on a combination of the reinforcement learning parameters; (b) under no social influence (i.e. the copying weight σ = 0), and (c) under social influences with different values of the conformity exponent θ and the copying weight σ. The dashed curve is the asymptotic equilibrium at which asocial learners are expected to end up choosing the two alternatives with equal likelihood (i.e. Pr,t = 0.5), which is given analytically by β = (2 − α)/α (Denrell, 2007). The coloured background is a result of the agent-based simulation with total trials T = 150 and group size N = 10, showing the average proportion of choosing the risky option in the second half of the learning trials (t > 75) under a given combination of the parameters. (d) The differences between the mean proportion of risk aversion of asocial learners and that of social learners, highlighting regions in which performance is improved (orange) or undermined (purple) by social learning.

To maximise one’s own long-term individual profit under such circumstances, it is crucial to strike the right balance between exploiting the option that has seemed better so far and exploring the other options to seek informational gain. Because of the nature of adaptive information sampling under such exploration–exploitation trade-offs, lone decision makers often end up being risk averse, trying to reduce the chance of further failures once the individual has experienced an unfavourable outcome from the risky alternative (March, 1996; Denrell, 2007; Hertwig and Erev, 2009), a phenomenon known as the hot stove effect. Within the framework of this task, risk aversion is suboptimal in the long run if the risk premium is positive (Denrell and March, 2001).

The baseline model

For the baseline asocial reinforcement learning, we assumed a standard, well-established model that is a combination of the Rescorla–Wagner learning rule and softmax decision making (Sutton and Barto, 2018; see Materials and methods for the full details). There are two parameters: a learning rate (α) and an inverse temperature (β). The larger the α, the more weight is given to recent experiences, making the agent’s belief updating more myopic. The parameter β regulates how sensitive the choice probability is to the belief about the option’s value (i.e. controlling the proneness to explore). As β → 0, the softmax choice probability approximates a random choice (i.e. highly explorative). Conversely, as β → +∞, it asymptotes to a deterministic choice in favour of the option with the highest subjective value (i.e. highly exploitative).

Varying these two parameters systematically, it is possible to see under what conditions trial-and-error learning leads individuals to be risk averse (Figure 1b). Suboptimal risk aversion becomes prominent when value updating in learning is myopic (i.e. when α is large) or action selection is exploitative (i.e. when β is large) or both (the blue area of Figure 1b). Under such circumstances, the hot stove effect occurs (Denrell, 2007): Experiences of low-value payoffs from the risky option tend to discourage decision makers from further choosing the risky option, trapping them in the safe alternative. In sum, whenever the interaction between the two learning parameters α(β+1) exceeds a threshold value, which was 2 in the current example, decision makers are expected to become averse to the risky option (the black solid lines in Figure 2). The hot stove effect is known to emerge in a range of model implementations and has been widely observed in previous human experiments (March, 1996; Denrell, 2007; Hertwig and Erev, 2009).
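This threshold can be illustrated with a minimal re-implementation of the asocial learner (our own sketch, not the authors’ code; initialising both option values at the safe payoff is our assumption). With α = 0.5 and β = 7, α(β + 1) = 4 > 2 and simulated learners mostly settle on the safe option, whereas with α = 0.2 and β = 2, α(β + 1) = 0.6 < 2 and risk seeking persists:

```python
import math
import random

def run_asocial(alpha, beta, T=150, pi_s=1.0, mu=1.5, sd=1.0):
    """One Rescorla-Wagner learner with softmax choice on the two-armed
    bandit; returns the proportion of risky choices in the second half."""
    q = [1.0, 1.0]                      # q[0]: safe, q[1]: risky
    risky_late = 0
    for t in range(T):
        # softmax probability of choosing the risky option
        p_risky = 1.0 / (1.0 + math.exp(-beta * (q[1] - q[0])))
        choice = 1 if random.random() < p_risky else 0
        reward = pi_s if choice == 0 else random.gauss(mu, sd)
        q[choice] += alpha * (reward - q[choice])   # Rescorla-Wagner update
        if t >= T // 2:
            risky_late += choice
    return risky_late / (T - T // 2)

random.seed(1)
mean_high = sum(run_asocial(0.5, 7.0) for _ in range(1000)) / 1000  # hot stove regime
mean_low = sum(run_asocial(0.2, 2.0) for _ in range(1000)) / 1000   # below threshold
```

The asymmetry arises mechanically: a bad draw from the risky option lowers its value estimate and thereby suppresses further sampling of it, so the estimate is rarely corrected upwards.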

Figure 2 with 2 supplements
The effect of social learning on average decision performance.

The x axis is the product of the two reinforcement learning parameters, α(β + 1), namely, the susceptibility to the hot stove effect. The y axis is the mean probability of choosing the optimal risky alternative in the last 75 trials in a two-armed bandit task whose setup was the same as in Figure 1. The black solid curve is the analytical prediction of the asymptotic performance of individual reinforcement learning with infinite time horizon (T → +∞; Denrell, 2007). The analytical curve shows a choice shift emerging at α(β + 1) = 2; that is, individual learners ultimately prefer the safe to the risky option in the current setup of the task when α(β + 1) > 2. The dotted curves are mean results of agent-based simulations of social learners with two different mean values of the copying weight σ ∈ {0.25, 0.5} (green and yellow, respectively) and asocial learners with σ = 0 (purple). The difference between the agent-based simulation with σ = 0 and the analytical result was due to the finite number of decision trials in the simulation; hence, the longer the horizon, the closer they become (Figure 2—figure supplement 1). Each panel shows a different combination of the inverse temperature β and the conformity exponent θ.

The conformist social influence model

We next considered a collective learning situation in which a group of multiple individuals perform the task simultaneously and individuals can observe others’ actions. We assumed a simple frequency-based social cue specifying the distribution of individual choices (McElreath et al., 2005; McElreath et al., 2008; Toyokawa et al., 2017; Toyokawa et al., 2019; Deffner et al., 2020). We assumed that individuals could not observe others’ earnings, ensuring that they could not obtain information about payoffs that had become unavailable to them through their own choices (i.e. forgone payoffs; Denrell, 2007; Yechiam and Busemeyer, 2006).

A realised payoff was independent of others’ decisions and was drawn solely from the payoff probability distribution specific to each alternative (and hence no externality was assumed), thereby ensuring there would be no direct social competition over the monetary reward (Giraldeau and Caraco, 2000) nor normative pressure towards majority alignment (Cialdini and Goldstein, 2004; Mahmoodi et al., 2018). The value of social information was assumed to be only informational (Efferson et al., 2008; Nakahashi, 2007). Nevertheless, our model may apply to the context of normative social influences, because what we assumed here was modification in individual choice probabilities by social influences, irrespective of underlying motivations of conformity.

To model a compromise between individual trial-and-error learning and the frequency-based copying process, we formulated the social influences on reinforcement learning as a weighted average between the asocial (A) and social (S) processes of decision making, that is, Pi,t = (1 − σ)Ai,t + σSi,t, where Pi,t is the individual’s net probability of choosing option i ∈ {r, s} at time t and σ is the weight given to the social influence (copying weight).

In addition, the level of social frequency dependence was determined by another social learning parameter θ (conformity exponent), such that Si,t = Ni,tθ / (Nr,tθ + Ns,tθ), where Ni,t is the number of agents who chose option i at time t (see the Materials and methods for the exact formulation). The larger the θ, the more the net choice probability favours a common alternative chosen by the majority of the group at the moment (a conformity bias; Boyd and Richerson, 1985). Note that there is no actual social influence when θ = 0, because in this case the ‘social influence’ favours a uniformly random choice, irrespective of which behaviour is common.
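Putting the two equations together, the net probability of choosing the risky option can be written as a single function (a minimal sketch; the function name is ours, and the paper’s exact formulation, including the handling of zero counts, is given in the Materials and methods):

```python
def choice_prob(a_risky, n_risky, n_safe, sigma, theta):
    """Net probability of choosing the risky option: a weighted average of
    the asocial probability a_risky and the conformist frequency-dependent
    term S = Nr^theta / (Nr^theta + Ns^theta)."""
    s_risky = n_risky**theta / (n_risky**theta + n_safe**theta)
    return (1.0 - sigma) * a_risky + sigma * s_risky
```

For example, with θ = 4 a 7-vs-3 majority for the risky option yields S = 7⁴/(7⁴ + 3⁴) ≈ 0.97, a disproportionate pull towards the majority, whereas θ = 0 always yields S = 0.5, i.e. no effective social influence.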

Our model is a natural extension of both the asocial reinforcement learning and the model of ‘extreme conformity’ assumed in some previous models (e.g. Denrell and Le Mens, 2017), as these conditions can be expressed as a special case of parameter combinations. We explore the implications of this extension in the Discussion. The descriptions of the parameters are summarised in Table 1.

Table 1
Summary of the learning model parameters.
Symbol | Meaning | Range of the value
α | Learning rate | [0, 1]
β | Inverse temperature | [0, +∞)
α(1 + β) | Susceptibility to the hot stove effect | [0, +∞)
σ | Copying weight | [0, 1]
θ | Conformity exponent | (−∞, +∞)

The collective behavioural rescue effect

Varying these two social learning parameters, σ and θ, systematically, we observed a mitigation of suboptimal risk aversion under positive frequency-based social influences. As shown in Figure 1c, even with a strong conformity bias (θ > 1), social influence widened the region of parameter combinations where the majority of decision makers could escape from suboptimal risk aversion (the increase of the red area in Figure 1c). The increment of the area of adaptive risk seeking was greater with θ = 1 than with θ = 4. When θ = 1, a large copying weight (σ) could eliminate almost all the area of risk aversion (Figure 1c; see also Figure 1—figure supplement 1 for a greater range of parameter combinations), whereas when θ = 4, there was also a region in which optimal risk seeking was weakened (Figure 1d). On the other hand, such substantial switching of the majority to being risk seeking did not emerge in the negative risk premium (negative RP) task (Figure 1—figure supplement 3), although there was a parameter region where the proportion of suboptimal risk seeking increased relative to that of individual learners (Figure 1—figure supplement 6). Naturally, increasing the copying weight towards σ → 1 eventually approximated chance-level performance in both the positive and negative RP cases (Figure 1—figure supplement 1, Figure 1—figure supplement 3). In sum, simulations suggest that conformist social influence widely promoted risk seeking under the positive RP, and that such a promotion of risk seeking was less evident in the negative RP task.
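The rescue effect can be reproduced qualitatively with a compact agent-based sketch (our own re-implementation under simplifying assumptions: agents respond to the previous trial’s choice counts, which here include their own choice, and option values start at the safe payoff):

```python
import math
import random

def simulate_group(n_agents=10, sigma=0.3, theta=2.0, alpha=0.5, beta=7.0,
                   T=150, pi_s=1.0, mu=1.5, sd=1.0):
    """Group of reinforcement learners mixing softmax choice with conformist
    copying of the previous trial's choice frequencies; returns the mean
    proportion of risky choices in the second half of the trials."""
    q = [[1.0, 1.0] for _ in range(n_agents)]        # [safe, risky] values
    prev = [random.randrange(2) for _ in range(n_agents)]
    risky_late = 0
    for t in range(T):
        n_safe, n_risky = prev.count(0), prev.count(1)
        current = []
        for i in range(n_agents):
            a_risky = 1.0 / (1.0 + math.exp(-beta * (q[i][1] - q[i][0])))
            s_risky = n_risky**theta / (n_risky**theta + n_safe**theta)
            p_risky = (1.0 - sigma) * a_risky + sigma * s_risky
            c = 1 if random.random() < p_risky else 0
            r = pi_s if c == 0 else random.gauss(mu, sd)
            q[i][c] += alpha * (r - q[i][c])
            current.append(c)
            if t >= T // 2:
                risky_late += c
        prev = current
    return risky_late / (n_agents * (T - T // 2))
```

In this sketch, groups with σ = 0.3 and θ = 2 tend to retain substantially more risk taking than σ = 0 groups despite the same hot-stove susceptibility α(β + 1) = 4, mirroring the pattern in Figure 2.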

Figure 2 highlights the extent to which risk aversion was relaxed through social influences. Individuals with a positive copying weight (σ > 0) could maintain a high proportion of risk seeking even in the region of high susceptibility to the hot stove effect (α(β + 1) > 2). Although social learners eventually fell into a risk-averse regime with increasing α(β + 1), risk aversion was largely mitigated compared to the performance of individual learners with σ = 0. Interestingly, the probability of choosing the optimal risky option was maximised at an intermediate value of α(β + 1) when the conformity exponent was large (θ = 4) and the copying weight was high (σ = 0.5).

In the region of less susceptibility to the hot stove effect (α(β+1)<2), social influence could enhance individual optimal risk seeking up to the theoretical benchmark expected in individual reinforcement learning with an infinite time horizon (the solid curves in Figure 2). A socially induced increase in risk seeking in the region α(β+1)<2 was more evident with larger β, and hence with smaller α to satisfy α(β+1)<2. The smaller the learning rate α, the longer it would take to achieve the asymptotic equilibrium state, due to slow value updating. Asocial learners, as well as social learners with high σ (=0.5) coupled with high θ (=4), were still far from the analytical benchmark, whereas social learners with weak social influence σ=0.25 were nearly able to converge on the benchmark performance, suggesting that social learning might affect the speed of learning. Indeed, a longer time horizon T=1075 reduced the advantage of weak social learners in this α(β+1)<2 region because slow learners could now achieve the benchmark accuracy (Figure 2—figure supplement 1 and Figure 2—figure supplement 2).

Approaching the benchmark with an elongated time horizon, and the concomitant reduction in the advantage of social learners, was also found in the high susceptibility region α(β + 1) ≫ 2, especially for those who had a high conformity exponent (θ = 4) (Figure 2—figure supplement 1). Notably, however, facilitation of optimal risk seeking became even more evident in the other intermediate region 2 < α(β + 1) < 4. This suggests that merely speeding up or slowing down learning cannot satisfactorily account for the qualitative ‘choice shift’ emerging through social influences.

We obtained similar results across different settings of the multi-armed bandit task, such as a skewed payoff distribution in which either large or small payoffs were randomly drawn from a Bernoulli process (March, 1996; Denrell, 2007; Figure 1—figure supplement 4) and an increased number of options (Figure 1—figure supplement 5). Further, the conclusion still held for an alternative model in which social influences modified the belief-updating process (the value-shaping model; Najar et al., 2020) rather than directly influencing the choice probability (the decision-biasing model), as assumed in the main text thus far (see Supplementary Methods; Figure 1—figure supplement 2). One could posit many other, more complex social learning processes that may operate in reality; however, a comprehensive search of the possible model space is beyond the scope of the current study. Importantly, decision biasing was found to fit our behavioural experimental data better than value shaping (Figure 6—figure supplement 2), leading us to focus our analysis on the decision-biasing model.

The robustness of individual heterogeneity

We have thus far assumed no parameter variation across individuals in a group, so as to focus on the qualitative differences between social and asocial learners’ behaviour. However, individual differences in development, state, or experience, or variations in behaviour caused by personality traits, might either facilitate or undermine collective decision performance. Especially if a group is composed of both types of individuals, those who are less susceptible to the hot stove effect (α(β + 1) < 2) as well as those who are more susceptible (α(β + 1) > 2), it remains unclear who benefits from the rescue effect: Is it only those individuals with α(β + 1) > 2 who enjoy the benefit, or can collective intelligence benefit the group as a whole? For the sake of simplicity, here we considered groups of five individuals, composed of either homogeneous (yellow in Figure 3) or heterogeneous (green, blue, purple in Figure 3) individuals. Individual values of a focal behavioural parameter were varied across individuals in a group. Other, non-focal parameters were identical across individuals within a group. The basic parameter values assigned to the non-focal parameters were α = 0.5, β = 7, σ = 0.3, and θ = 2, chosen so that the homogeneous group could generate the collective rescue effect. The groups’ mean values of the various focal parameters were matched to these basic values.

Figure 3
The effect of individual heterogeneity on the proportion of choosing the risky option in the two-armed bandit task.

(a) The effect of heterogeneity of α, (b) β, (c) σ, and (d) θ. Individual values of a focal behavioural parameter were varied across individuals in a group of five. Other non-focal parameters were identical across individuals within a group. The basic parameter values assigned to non-focal parameters were α=0.5, β=7, σ=0.3, and θ=2, and groups’ mean values of the various focal parameters were matched to these basic values. We simulated 3 different heterogeneous compositions: The majority (3 of 5 individuals) potentially suffered the hot stove effect αi(βi+1)>2 (a, b) or had the highest diversity in social learning parameters (c, d; purple); the majority were able to overcome the hot stove effect αi(βi+1)<2 (a, b) or had moderate heterogeneity in the social learning parameters (c, d; blue); and all individuals had αi(βi+1)>2 but smaller heterogeneity (green). The yellow diamond shows the homogeneous groups’ performance. Lines are drawn through average results across the same compositional groups. Each round dot represents a group member’s mean performance. The diamonds are the average performance of each group for each composition category. For comparison, asocial learners’ performance, with which the performance of social learners can be evaluated, is shown in gray. For heterogeneous α and β, the analytical solution of asocial learning performance is shown as a solid-line curve. We ran 20,000 replications for each group composition.

Figure 3a shows the effect of heterogeneity in the learning rate (α). Heterogeneous groups performed better on average than the homogeneous group (represented by the yellow diamond). The heterogeneous groups owed this overall improvement to the large rescue effect operating for individuals who had a high susceptibility to the hot stove effect (α(β + 1) > 2). On the other hand, the performance of less susceptible individuals (α(β + 1) < 2) was slightly undermined compared to the asocial benchmark performance shown in grey. Notably, however, how large the detrimental effect was for the low-susceptibility individuals depended on the group’s composition: The undermining effect was largely mitigated when low-susceptibility individuals (α(β + 1) < 2) made up a majority of the group (3 of 5; the blue line), whereas they performed worse than the asocial benchmark when the majority were those with high susceptibility (purple).

The advantage of a heterogeneous group was also found for the inverse temperature (β), although the impact of the group’s heterogeneity was much smaller than that for α (Figure 3b). Interestingly, no detrimental effect for individuals with α(β+1)<2 was found in association with the β variations.

On the other hand, individual variations in the copying weight (σ) had an overall detrimental effect on collective performance, although individuals in the highest diversity group could still perform better than the asocial learners (Figure 3c). Individuals who had an intermediate level of σ achieved relatively higher performance within the group than those who had either higher or lower σ. This was because individuals with lower σ could benefit less from social information, while those with higher σ relied so heavily on social frequency information that behaviour was barely informed by individual learning, resulting in maladaptive herding or collective illusion (Denrell and Le Mens, 2017; Toyokawa et al., 2019). As a result, the average performance decreased with increasing diversity in σ.

Such a substantial effect of individual differences was not observed for the conformity exponent θ (Figure 3d), where individual performance was almost stable regardless of whether the individual was heavily conformist (θi = 8) or even negatively dependent on social information (θi = −1). The existence of a few conformists in a group could not by itself trigger positive feedback unless other individuals also relied on social information in a conformist-biased way, because the flexible behaviour of non-conformists could keep the group’s choice distribution nearly flat (i.e. Ns ≈ Nr). Therefore, the existence of individuals with small θ in a heterogeneous group could prevent the strong positive feedback from being immediately elicited, compensating for the potential detrimental effect of maladaptive herding by strong conformists.

Overall, the relaxation of, and possibly the complete rescue from, a suboptimal risk aversion in repeated risky decision making emerged in a range of conditions in collective learning. It was not likely a mere speeding up or slowing down of learning process (Figure 2—figure supplement 1 and Figure 2—figure supplement 2), nor just an averaging process mixing performances of both risk seekers and risk-averse individuals (Figure 3). It depended neither on specific characteristics of social learning models (Figure 1—figure supplement 2) nor on the profile of the bandit task’s setups (Figure 1—figure supplement 4). Instead, our simulation suggests that self-organisation may play a key role in this emergent phenomenon. To seek a general mechanism underlying the observed collective behavioural rescue, in the next section we show a reduced, approximated differential equation model that can provide qualitative insights into the collective decision-making dynamics observed above.

The simplified population dynamics model

To obtain a qualitative understanding of the self-organisation that appears responsible for the adaptive behavioural shift observed in our individual-based simulation, we developed a reduced model that approximates the temporal change in the behaviour of an ‘average’ individual, that is, the average dynamics of a population of multiple individuals, in which the computational details of reinforcement learning were purposely ignored. Such a dynamic modelling approach has been commonly used in population ecology and collective animal behaviour research and has proven highly useful in disentangling the factors underlying complex systems (e.g. Beckers et al., 1990; Goss et al., 1989; Seeley et al., 1991; Sumpter and Pratt, 2003; Harrison et al., 2001).

Specifically, we considered a differential equation that focuses only on increases and decreases in the number of individuals choosing the risky option (NR) and the safe option (NS) with either a positive (+) or a negative (−) ‘attitude’ (or preference) towards the risky option (Figure 4a). The part of the population that has a positive attitude (NS+ and NR+) is more likely to move to, and stay at, the risky option, whereas the part that has a negative attitude (NS− and NR−) is more likely to move to, and stay at, the safe option. Movements in the opposite direction also exist, such as moving to the risky option while holding a negative attitude (at rate PR−), but at a lower rate than PS−, as depicted by the thickness of the arrows in Figure 4a. We defined the probability of moving towards the option matching one’s attitude (PS− = PR+ = ph) to be higher than that of moving in the opposite direction (PR− = PS+ = pl), that is, ph > pl. The probabilities pl and ph can be seen, approximately, as the per capita rates of exploration and exploitation, respectively.

Figure 4 with 1 supplement see all
The population dynamics model.

(a) A schematic diagram of the dynamics. Solid arrows represent a change in population density between connected states at a time step. The thicker the arrow, the larger the per capita rate of behavioural change. (b, c) The results of the asocial, baseline model where PS− = PR+ = ph and PR− = PS+ = pl (ph > pl). Both figures show the equilibrium bias towards risk seeking (i.e., NR − NS) as a function of the degree of risk premium e as well as of the per capita probability of moving to the less preferred behavioural option pl. (b) The explicit form of the curve is given by −n(ph − pl){(1 − e)ph − e·pl} / [(ph + pl){(1 − e)ph + e·pl}]. (c) The dashed curve is the analytically derived neutral equilibrium of the asocial system that results in NR* = NS*, given by e = ph/(ph + pl). (d) The equilibrium of the collective behavioural dynamics with social influences. The numerical results were obtained with NS,t=0− = NS,t=0+ = 5, NR,t=0 = 10, and ph = 0.7.

An attitude can change when the risky option is chosen. We assumed that a proportion e (0 ≤ e ≤ 1) of the risk-taking part of the population would have a good experience, thereby holding a positive attitude (i.e. NR+ = eNR). The rest of the risk-taking population would have a negative attitude (i.e. NR− = (1 − e)NR). This proportion e can be interpreted as an approximation of the risk premium under Gaussian payoff noise, because the larger e is, the more individuals are expected to encounter a better experience than they would from the safe choice. The full details are shown in the Materials and methods (Table 2).

Table 2
Summary of the differential equation model parameters.
Symbol | Meaning | Range of the value
NR+ | Density of individuals choosing R and preferring R | NR+ = eNR
NR− | Density of individuals choosing R and preferring S | NR− = (1 − e)NR
NS+ | Density of individuals choosing S and preferring R | –
NS− | Density of individuals choosing S and preferring S | –
pl | Per capita rate of moving to the unfavourable option | 0 ≤ pl ≤ ph ≤ 1
ph | Per capita rate of moving to the favourable option | 0 ≤ pl ≤ ph ≤ 1
e | Per capita rate of becoming enchanted with the risky option | [0, 1]
σ | Social influence weight | [0, 1]
θ | Conformity exponent | (−∞, +∞)

To confirm that this approximated model can successfully replicate the fundamental property of the hot stove effect, we first describe the asocial behavioural model without social influence. The baseline, asocial dynamic system has a locally stable non-trivial equilibrium that gives NS* > 0 and NR* > 0, where the asterisk denotes the equilibrium density at which the system stops changing (dNS/dt = dNR/dt = 0). At equilibrium, the ratio between the number of individuals choosing the safe option S and the number choosing the risky option R is given by NS* : NR* = e(pl/ph) + (1 − e)(ph/pl) : 1, indicating that risk aversion (defined as the case where a larger part of the population chooses the safe option; NS* > NR*) emerges when the inequality e < ph/(ph + pl) holds.
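This equilibrium ratio can be checked numerically. Below is a minimal discrete-time sketch of the baseline dynamics (our own illustrative reimplementation, not the authors’ code), in which attitudes within the risky state are redistributed each step as NR+ = eNR and NR− = (1 − e)NR, while safe-state individuals keep their attitude:

```python
# Minimal discrete-time sketch of the asocial baseline dynamics.
# States: S+ and S- (safe, positive/negative attitude) and R (risky);
# attitudes within R redistribute each step as R+ = e*R and R- = (1-e)*R.
def asocial_equilibrium(ph=0.7, pl=0.2, e=0.55, n_steps=2000):
    s_pos, s_neg, r = 5.0, 5.0, 10.0        # initial densities (total n = 20)
    for _ in range(n_steps):
        r_pos, r_neg = e * r, (1.0 - e) * r
        # Attitude-matched moves occur at rate ph, mismatched moves at pl.
        new_s_pos = s_pos - ph * s_pos + pl * r_pos   # S+ -> R out; R+ -> S in
        new_s_neg = s_neg - pl * s_neg + ph * r_neg   # S- -> R out; R- -> S in
        new_r = r + ph * s_pos + pl * s_neg - pl * r_pos - ph * r_neg
        s_pos, s_neg, r = new_s_pos, new_s_neg, new_r
    return s_pos + s_neg, r

ns, nr = asocial_equilibrium()
# Analytic prediction: NS/NR = e*(pl/ph) + (1 - e)*(ph/pl)
ratio_pred = 0.55 * (0.2 / 0.7) + 0.45 * (0.7 / 0.2)
# ns/nr converges to ratio_pred (~1.73): risk aversion despite e > 1/2
```

With e = 0.55 and ph = 0.7, pl = 0.2 (illustrative values), the neutral point is e* = ph/(ph + pl) ≈ 0.78, so the population settles on the safe side even though the risk premium is positive.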

Figure 4b visually shows that the population is indeed attracted to the safe option S (that is, NS > NR) in a wide range of the parameter region even when there is a positive ‘risk premium’, defined as e > 1/2. Although individuals choosing the risky option are more likely to become enchanted with it than to be disappointed (i.e., eNR = NR+ > (1 − e)NR = NR−), the risk-seeking equilibrium (defined as NS < NR) becomes less likely to emerge as the exploration rate pl decreases, consistent with the hot stove effect caused by asymmetric adaptive sampling (Denrell, 2007). Risk seeking never emerges when e ≤ 1/2, which is also consistent with the results of reinforcement learning.

This dynamics model provides an illustrative understanding of how the asymmetry of adaptive sampling causes the hot stove effect. Consider the case of high inequality between exploitation (ph) and exploration (pl), namely, ph ≫ pl. Under such a condition, the state S−, that is, choosing the safe option with a negative attitude, becomes a ‘dead end’ from which individuals can seldom escape once entered. However, if the inequality ph ≫ pl is relaxed so that a substantial fraction of the population can come back to R− from S−, the increasing number of people belonging to R+ (that is, NR+) can eventually exceed the number of people ‘spilling out’ into S−. This illustrative analysis shows that the hot stove effect can be overcome if the number of people who get stuck in the dead end S− can somehow be reduced, and this is possible if one can increase the ‘come-backs’ to R−. In other words, any mechanism that increases PR− relative to PS− should overcome the hot stove effect.

Next, we assumed a frequency-dependent reliance on social information operating in this population dynamics. Specifically, we considered that the net per capita probability of choosing each option, P, is a weighted average of the asocial baseline probability (p) and the social frequency influence (F), namely, P = (1 − σ)p + σF. Again, σ is the weight of social influence, and we assumed a conformity exponent θ in the social frequency influence F such that F = Niθ/(NSθ + NRθ), where i ∈ {S, R} (see Materials and methods).

Through numerical analyses, we confirmed that social influence can indeed increase the flow-back rate PR−, which raises the possibility of a risk-seeking equilibrium NR > NS (Figure 4d; see Figure 4—figure supplement 1 for a wider parameter region). For an approximation of the bifurcation analysis, we recorded the equilibrium density of the risky state NR* starting from various initial population distributions (that is, varying NR,t=0 and NS,t=0 = 20 − NR,t=0). Figure 5 shows the conditions under which the system ends up in risk-seeking equilibrium. When the conformity exponent θ is not too large (θ < 10), there is a region in which risk seeking is the unique equilibrium, irrespective of the initial distribution, attracting the population even from an extremely biased starting point such as NR,t=0 = 0 (Figure 5).
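To make this concrete, the same kind of discrete-time sketch can be extended with the frequency-dependent rule P = (1 − σ)p + σF (again our own illustrative reimplementation under the assumptions above, with parameter values taken from the figure legends, not the authors’ exact numerical code):

```python
# Sketch of the dynamics with conformist social influence: each per capita
# rate becomes P = (1 - sigma)*p + sigma*F_i, where
# F_i = N_i^theta / (N_S^theta + N_R^theta).
def social_equilibrium(ph=0.7, pl=0.2, e=0.7, sigma=0.5, theta=1.0,
                       n_steps=5000):
    s_pos, s_neg, r = 5.0, 5.0, 10.0        # NS+, NS-, NR (total N = 20)
    for _ in range(n_steps):
        ns = s_pos + s_neg
        f_r = r**theta / (r**theta + ns**theta)   # social frequency influence
        f_s = 1.0 - f_r
        p_r_pos = (1 - sigma) * ph + sigma * f_r  # S+ -> R
        p_r_neg = (1 - sigma) * pl + sigma * f_r  # S- -> R (the 'come-back')
        p_s_pos = (1 - sigma) * pl + sigma * f_s  # R+ -> S
        p_s_neg = (1 - sigma) * ph + sigma * f_s  # R- -> S
        r_pos, r_neg = e * r, (1.0 - e) * r
        new_s_pos = s_pos - p_r_pos * s_pos + p_s_pos * r_pos
        new_s_neg = s_neg - p_r_neg * s_neg + p_s_neg * r_neg
        new_r = (r + p_r_pos * s_pos + p_r_neg * s_neg
                 - p_s_pos * r_pos - p_s_neg * r_neg)
        s_pos, s_neg, r = new_s_pos, new_s_neg, new_r
    return s_pos + s_neg, r

ns0, nr0 = social_equilibrium(sigma=0.0)  # asocial baseline: risk averse
ns1, nr1 = social_equilibrium(sigma=0.5)  # moderate conformity (theta = 1)
```

With e = 0.7, ph = 0.7, and pl = 0.2, the asocial population is risk averse because e < ph/(ph + pl) ≈ 0.78, whereas a moderate social influence weight with a linear conformity response (θ = 1) shifts the equilibrium across the NR = NS boundary into risk seeking.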

Figure 5 with 1 supplement see all
The approximate bifurcation analysis.

The relationships between the social influence weight σ and the equilibrium number of individuals in the risky behavioural state NR*, across different conformity exponents θ ∈ {0, 1, 2, 10} and different values of risk premium e ∈ {0.55, 0.65, 0.7, 0.75}, are shown as black dots. The background colours indicate regions where the system approaches either risk aversion (NR < NS; blue) or risk seeking (NR > NS; red). The horizontal dashed line is NR = NS = 10. Two locally stable equilibria emerge when θ ≥ 2, which suggests that the system undergoes a bifurcation when σ is sufficiently large. The other parameters are set to ph = 0.7, pl = 0.2, and N = 20.

Under the conformist bias θ ≥ 2, two locally stable equilibria exist. Strong positive feedback dominates the system when both σ and θ are large. Therefore, the system can end up in either equilibrium depending solely on the initial density distribution, consistent with the conventional view of herding (Denrell and Le Mens, 2017; Toyokawa et al., 2019). This is also consistent with a well-known result of collective foraging by pheromone trail ants, which react to social information in a conformity-like manner (Beckers et al., 1990; Harrison et al., 2001).

Notably, however, even with a positive conformist bias such as θ = 2, there is a region with a moderate value of σ where risk seeking remains the unique equilibrium when the risk premium is high (e ≥ 0.7). In this regime, the benefit of collective behavioural rescue can dominate without any possibility of maladaptive herding.

It is worth noting that in the case of θ = 0, where individuals simply make a random choice at rate σ, risk aversion is also relaxed (Figure 5, the leftmost column), and the adaptive risky shift even emerges around 0.25 < σ < 1. However, this ostensible behavioural rescue is due solely to the additional random exploration that reduces PS−/(PS− + PR−), mitigating stickiness to the dead-end state S−. As σ → 1 with θ = 0, therefore, the risky shift eventually disappears because individuals choose between S and R almost at random.

However, the collective risky shift observed in the conditions of θ > 0 cannot be explained solely by this mere addition of exploration. A weak conformist bias (i.e. a linear response to the social frequency; θ = 1) monotonically increases the equilibrium density NR* with increasing social influence σ, going beyond the level of risky shift achieved by the addition of random choice (Figure 5). Therefore, although the collective rescue may indeed owe part of its mitigation of the hot stove effect to increased exploration, the further enhancement of risk seeking cannot be fully explained by exploration alone.

The key is the interaction between negative and positive feedback. As discussed above, risk aversion is reduced if the ratio PS−/(PS− + PR−) decreases, either by increasing PR− or by reducing PS−. The per capita probability of choosing the safe option with a negative attitude, PS− = (1 − σ)ph + σNSθ/(NRθ + NSθ), becomes smaller than the baseline exploitation probability ph when NSθ/(NRθ + NSθ) < ph. Even though the majority of the population may still choose the safe alternative, and hence NS > NR, this inequality can nevertheless hold for a sufficiently small value of θ. Crucially, the reduction of PS− leads to a further reduction of PS− itself through decreasing NS, thereby further weakening the social influence supporting the safe option. Such a negative feedback process weakens the concomitant risk aversion. Naturally, this negative feedback is maximised at θ = 0.
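A quick numerical check, with illustrative numbers rather than fitted values, shows why a small θ matters here: with a safe-choosing majority of NS = 13 versus NR = 7 and ph = 0.7, the conformist term stays below ph at θ = 1, so social influence weakens safe-choosing, but it exceeds ph at θ = 2:

```python
# PS- = (1 - sigma)*ph + sigma*F_S falls below ph exactly when
# F_S = NS^theta / (NS^theta + NR^theta) < ph.
def p_safe_negative(n_s, n_r, ph, sigma, theta):
    f_s = n_s**theta / (n_s**theta + n_r**theta)
    return (1 - sigma) * ph + sigma * f_s

n_s, n_r, ph, sigma = 13, 7, 0.7, 0.3   # a safe-choosing majority
weak = p_safe_negative(n_s, n_r, ph, sigma, theta=1)    # F_S = 0.65 < ph
strong = p_safe_negative(n_s, n_r, ph, sigma, theta=2)  # F_S ~ 0.78 > ph
# weak = 0.685 < ph = 0.7 < strong ~ 0.723
```

So the same majority that would amplify risk aversion under a strong conformist bias (θ = 2) instead erodes it under a weak one (θ = 1), which is the negative-feedback condition described above.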

Once the negative feedback has weakened the underlying risk aversion, the majority of the population eventually choose the risky option, an effect evident in the case of θ = 0 (Figure 5). What operates uniquely in cases of θ > 0 is that, because NR is by now a majority, positive feedback starts: thanks to the conformist bias, the inequality NR > NS is further amplified. In this phase, the larger θ is, the more strongly the inequality NSθ/(NRθ + NSθ) < ph holds. Such positive feedback never operates when θ ≤ 0.

In conclusion, it is the synergy of negative and positive feedback that explains the full range of the adaptive risky shift. Neither positive nor negative feedback alone can account for both the accuracy and the flexibility emerging through collective learning and decision making. The results are qualitatively unchanged across a range of different combinations of e, pl, and ph (Figure 4—figure supplement 1 and Figure 5—figure supplement 1). It is worth noting that when e < 0.5, this social frequency-dependent population tends to exhibit risk aversion (Figure 5—figure supplement 1), consistent with the result of the agent-based simulation for the case where the mean payoff of the risky option was smaller than that of the safe option (Figure 1—figure supplement 3). Therefore, the system does not mindlessly prefer risk seeking; it becomes risk prone only when doing so is favourable in the long run.

An experimental demonstration

One hundred eighty-five adult human subjects performed the individual task without social interactions, while 400 subjects performed the task collectively with group sizes ranging from 2 to 8. We confirmed that the model predictions were qualitatively unchanged across the experimental settings used in the online experiments (Figure 1—figure supplement 5).

We used four different task settings. Three of them were positive risk premium (positive RP) tasks that had an optimal risky alternative, while the other was a negative risk premium (negative RP) task that had a suboptimal risky alternative. On the basis of both the agent-based simulation (Figure 1 and Figure 1—figure supplement 3) and the population dynamics (Figure 5 and Figure 5—figure supplement 1), we hypothesised that conformist social influence promotes risk seeking to a lesser extent when the RP is negative than when it is positive. We also expected that whether the collective rescue effect emerges under positive RP settings depends on learning parameters such as αi(βi+1) (Figure 1—figure supplement 5d-f).

The Bayesian model comparison (Stephan et al., 2009) revealed that participants in the group condition were more likely to employ decision-biasing social learning than either asocial reinforcement learning or the value-shaping process (Figure 6—figure supplement 2). Therefore, in the following analysis, we focus on results obtained from the decision-biasing model fit. Individual parameters were estimated using a hierarchical Bayesian method whose performance had been supported by the parameter recovery (Figure 6—figure supplement 3).

Parameter estimation (Table 3) showed that individuals in the group condition across all four tasks were likely to use social information in their decision making at a rate ranging between 4% and 18% (Mean σ; Table 3), and that mean posterior values of θ were above 1 for all four tasks. These suggest that participants were likely to use a mix of individual reinforcement learning and conformist social learning.

Table 3
Means and 95% Bayesian credible intervals (shown in square brackets) of the global parameters of the learning model.

The group condition and individual condition are shown separately. All parameters satisfied the Gelman–Rubin criterion R̂ < 1.01. All estimates are based on over 500 effective samples from the posterior.

Task (category) | 1-risky-1-safe (positive RP) | 1-risky-3-safe (positive RP) | 2-risky-2-safe (positive RP) | 1-risky-1-safe (negative RP)
Group | n = 123 | n = 97 | n = 87 | n = 93
μlogit α | –2.2 [–2.8, –1.5] | –1.8 [–2.3, –1.4] | –1.7 [–2.1, –1.3] | –0.09 [–0.7, 0.6]
(Mean α) | 0.10 [0.06, 0.18] | 0.14 [0.09, 0.20] | 0.15 [0.11, 0.21] | 0.48 [0.3, 0.6]
μlogit β | 1.4 [1.1, 1.6] | 1.5 [1.3, 1.8] | 1.3 [1.0, 1.5] | 1.2 [1.0, 1.5]
(Mean β) | 4.1 [3.0, 5.0] | 4.5 [3.7, 6.0] | 3.7 [2.7, 4.5] | 3.3 [2.7, 4.5]
μlogit σ | –2.4 [–3.1, –1.8] | –2.1 [–2.6, –1.6] | –2.1 [–2.5, –1.7] | –2.0 [–2.7, –1.5]
(Mean σ) | 0.08 [0.04, 0.14] | 0.11 [0.07, 0.17] | 0.11 [0.08, 0.15] | 0.12 [0.06, 0.18]
μθ = mean θ | 1.4 [0.58, 2.3] | 1.6 [0.9, 2.4] | 1.8 [1.0, 2.9] | 1.6 [0.9, 2.3]
Individual | n = 45 | n = 51 | n = 64 | n = 25
μlogit α | –2.1 [–3.1, –0.87] | –2.1 [–2.6, –1.6] | –1.3 [–2.1, –0.50] | –1.3 [–2.2, –0.4]
(Mean α) | 0.11 [0.04, 0.30] | 0.11 [0.07, 0.17] | 0.21 [0.11, 0.38] | 0.2 [0.1, 0.4]
μlogit β | 0.42 [–0.43, 1.1] | 0.91 [0.63, 1.2] | 0.76 [0.42, 1.1] | 1.2 [0.9, 1.4]
(Mean β) | 1.5 [0.65, 3.0] | 2.5 [1.9, 3.3] | 2.1 [1.5, 3.0] | 3.3 [2.5, 4.1]

To address whether the behavioural data are well explained by our social learning model and whether collective rescue was indeed observed for social learning individuals, we conducted agent-based simulations of the fit computational model with the calibrated parameters, including 100,000 independent runs for each task setup (see Materials and methods).

The results of the agent-based simulations agreed with our hypotheses (Figure 6). Overall, the 80% Bayesian credible intervals of the predicted performance in the group condition (shades of orange in Figure 6) cover an area of more risk taking than the area covered by the individual condition (shades of grey). As predicted, in the negative RP task, social learning promoted suboptimal risk taking for some values of α(β+1), but the magnitude appeared smaller than in the positive RP tasks. Additionally, increasing σi led to an increasing probability of risk taking in the positive RP tasks (Figure 6a–c), whereas in the negative RP task, increasing σ did not always increase risk taking (Figure 6d).

Figure 6 with 3 supplements see all
Prediction of the fit learning model.

Results of a series of agent-based simulations with individual parameters that were drawn randomly from the best fit global parameters. Independent simulations were conducted 100,000 times for each condition. Group size was fixed to six for the group condition. Lines are means (black-dashed: individual, coloured-solid: group) and the shaded areas are 80% Bayesian credible intervals. Mean performances of agents with different σi are shown in the colour gradient. (a) A two-armed bandit task. (b) A 1-risky-3-safe (four-armed) bandit task. (c) A 2-risky-2-safe (four-armed) bandit task. (d) A negative risk premium two-armed bandit task.

However, a complete switch of the majority’s behaviour from the suboptimal safe options to the optimal risky option (i.e. Pr > 0.5 for the two-armed task and Pr > 0.25 for the four-armed task) was not widely observed. This might be attributable to the low copying weight (σ), coupled with the lower αi(βi+1) of individual learners (mean [median] = 0.8 [0.3]) relative to social learners (mean [median] = 1.1 [0.5]; Table 3). The weak average reliance on social learning (σi) hindered a strong collective rescue effect because strong positive feedback could not form robustly.

To quantify the effect size of the relationship between the proportion of risk taking and each subject’s best fit learning parameters, we analysed a generalised linear mixed model (GLMM) fitted with the experimental data (see Materials and methods; Table 4). Within the group condition, the GLMM analysis showed a positive effect of σi on risk taking for every task condition (Table 4), which supports the simulated pattern. Also consistent with the simulations, in the positive RP tasks, subjects exhibited risk aversion more strongly when they had a higher value of αi(βi+1) (Figure 6—figure supplement 1a-c). There was no such clear trend in data from the negative RP task, although we cannot make a strong inference because of the large width of the Bayesian credible interval (Figure 6—figure supplement 1d). In the negative RP task, subjects were biased more towards the (favourable) safe option than subjects in the positive RP tasks (i.e. the intercept of the GLMM was lower in the negative RP task than in the others).

Table 4
Means and 95% Bayesian credible intervals (CIs; shown in square brackets) of the posterior estimations of the mixed logit model (generalised linear mixed model) that predicts the probability of choosing the risky alternative in the second half of the trials (t > 35).

All parameters satisfied the Gelman–Rubin criterion R̂ < 1.01. All estimates are based on over 500 effective samples from the posterior. Coefficients whose CI is either below or above 0 are highlighted.

Task (category) | 1-risky-1-safe (positive RP) | 1-risky-3-safe (positive RP) | 2-risky-2-safe (positive RP) | 1-risky-1-safe (negative RP)
Sample size | n = 168 | n = 148 | n = 151 | n = 118
Intercept | –0.1 [–0.6, 0.3] | –1.1 [–1.5, –0.6] | –0.8 [–1.2, –0.4] | –3.5 [–4.4, –2.7]
Susceptibility to the hot stove effect (α(β+1)) | –0.9 [–1.3, –0.4] | –1.0 [–1.5, –0.5] | –0.9 [–1.3, –0.6] | 0.6 [–0.1, 1.4]
Group (no = 0 / yes = 1) | 0.0 [–0.7, 0.7] | –0.2 [–1.0, 0.7] | 0.4 [–0.5, 1.2] | 3.8 [2.7, 4.9]
Group × α(β+1) | 0.6 [0.0, 1.1] | 0.4 [0.0, 0.9] | 0.3 [–0.1, 0.7] | –1.1 [–1.9, –0.3]
Group × copying weight σ | 1.4 [0.5, 2.3] | 1.9 [0.8, 3.0] | 2.2 [0.4, 4.0] | 3.8 [2.2, 5.3]
Group × conformity exponent θ | –0.7 [–0.9, –0.5] | 0.2 [0.0, 0.5] | –0.3 [–0.5, –0.1] | –1.8 [–2.1, –1.5]

In sum, the experimental data analysis supports our prediction that conformist social influence promotes favourable risk taking even if individuals are biased towards risk aversion. The GLMM generally agreed with the theoretical prediction, and the fitted computational model that was supported by the Bayesian model comparison confirmed that the observed pattern was indeed likely to be a product of the collective rescue effect by conformist social learning. As predicted, the key was the balance between individual learning and the use of social information. In the Discussion, we consider the effect of the experimental setting on human learning strategies, which can be explored in future studies.

Discussion

We have demonstrated that frequency-based copying, one of the most common forms of social learning strategy, can rescue decision makers from committing to adverse risk aversion in a risky trial-and-error learning task, even though a majority of individuals are potentially biased towards suboptimal risk aversion. Although an extremely strong reliance on conformist influence can raise the possibility of getting stuck on a suboptimal option, consistent with the previous view of herding by conformity (Raafat et al., 2009; Denrell and Le Mens, 2017), the mitigation of risk aversion and the concomitant collective behavioural rescue could emerge in a wide range of situations under modest use of conformist social learning.

Neither the averaging process of diverse individual inputs nor the speeding up of learning could account for the rescue effect. Individual diversity in the learning rate (αi) was beneficial for group performance, whereas diversity in the social learning weight (σi) undermined the average decision performance, a pattern that cannot be explained by a simple monotonic relationship between diversity and the wisdom of crowds (Lorenz et al., 2011). Self-organisation through collective behavioural dynamics emerging from experience-based decision making must therefore be responsible for the seemingly counter-intuitive phenomenon of collective rescue.

Our simplified differential equation model has identified a key mechanism of the collective behavioural rescue: the synergy of positive and negative feedback. Despite conformity, the probability of choosing the suboptimal option can decrease below what is expected from individual learning alone. Indeed, an inherent individual preference for the safe alternative, expressed by the softmax function exp(βQs)/(exp(βQs) + exp(βQr)), is mitigated by the conformist influence NSθ/(NSθ + NRθ) as long as the former is larger than the latter. In other words, risk aversion was mitigated not because the majority chose the risky option, nor because individuals were simply attracted towards the majority. Rather, participants’ choices became riskier even though the majority chose the safer alternative at the outset. Under social influences (whether from informational or normative motivations), individuals become more explorative and are likely to continue sampling the risky option even after being disappointed by poor rewards. Once individual risk aversion is reduced, fewer individuals choose the suboptimal safe option, which further shrinks the majority choosing the safe option. This negative feedback facilitates individuals revisiting the risky alternative. Such an attraction to the risky option allows more individuals, including those who are currently sceptical about its value, to experience a large bonanza from the risky option, ‘gluing’ them to the risky alternative for a while. Once a majority of individuals are glued to the risky alternative, positive feedback from conformity kicks in, and optimal risk seeking is further strengthened.

Models of conformist social influence have suggested that majority influence on individual decision making can lead a group as a whole into a collective illusion, in which individuals learn to prefer whichever behavioural alternative is supported by many other individuals (Denrell and Le Mens, 2007; Denrell and Le Mens, 2017). However, previous empirical studies have repeatedly demonstrated that collective decision making under frequency-based social influences is broadly beneficial and can maintain more flexibility than models of herding and collective illusion suggest (Toyokawa et al., 2019; Aplin et al., 2017; Beckers et al., 1990; Seeley et al., 1991; Harrison et al., 2001; Kandler and Laland, 2013). For example, Aplin et al., 2017 demonstrated that populations of great tits (Parus major) could switch their behavioural tradition after an environmental change even though individual birds were likely to have a strong conformist tendency. A similar phenomenon has also been reported in humans (Toyokawa et al., 2019).

Although these studies did not focus on risky decision making, and hence individuals were not inherently biased, experimentally induced environmental change created a situation in which a majority of individuals exhibited an outdated, suboptimal behaviour. As we have shown, a collective learning system can rescue performance even when the individual distribution is strongly biased towards the suboptimal direction at the outset. The great tit and human groups were able to switch their tradition because of, rather than despite, the conformist social influence, thanks to the synergy of negative and positive feedback processes. Such a synergistic interaction could not be predicted by collective illusion models in which individual decision making is determined fully by majority influence, because no negative feedback would be able to operate.

Through online behavioural experiments using a risky multi-armed bandit task, we have confirmed our theoretical prediction that simple frequency-based copying could mitigate risk aversion that many individual learners, especially those who had higher learning rates or lower exploration rates or both, would have exhibited as a result of the hot stove effect. The mitigation of risk aversion was also observed in the negative RP task, in which social learning slightly undermined the decision performance. However, because riskiness and expected reward are often positively correlated in a wide range of decision-making environments in the real world (Frank, 2009; Pleskac and Hertwig, 2014), the detrimental effect of reducing optimal risk aversion when risk premium is negative could be negligible in many ecological circumstances, making the conformist social learning beneficial in most cases.

Yet a majority, albeit a smaller one, still showed risk aversion. The weak reliance on social learning, which affected less than 20% of decisions, was unable to facilitate strong positive feedback. This limited use of social information might have been due to the lack of normative motivations for conformity and to the stationarity of the task. In a stable environment, learners can eventually gather enough information as trials proceed, which may have made them less curious about information gathering, including social learning (Rendell et al., 2010). In reality, people may use more sophisticated social learning strategies in which reliance on social information changes flexibly over trials (Deffner et al., 2020; Toyokawa et al., 2017; Toyokawa et al., 2019). Future research should consider more strategic use of social information and examine the conditions that elicit heavier reliance on conformist social learning in humans, such as normative pressures to align with the majority, volatility in the environment, time pressure, or an increasing number of behavioural options (Muthukrishna et al., 2016), coupled with much larger group sizes (Toyokawa et al., 2019).

The low learning rate α, which was at most 0.2 for many individuals in all the experimental tasks except the negative RP task, should also have limited the potential benefits of collective rescue in our experiment, because the benefit of mitigating the hot stove effect would be minimal, or hardly realised, under such a small susceptibility to it. Although we believe that the simplest stationary environment was a necessary first step in building our understanding of the collective behavioural rescue effect, we suggest that future studies use a temporally unstable (‘restless’) bandit task to elicit both a higher learning rate and a heavier reliance on social learning, so as to investigate the possibility of a stronger effect. Indeed, previous studies with changing environments have reported learning rates as high as α > 0.5 (Toyokawa et al., 2017; Toyokawa et al., 2019; Deffner et al., 2020), under which individual learners should suffer the hot stove trap more often.

Information about others’ payoffs might also be available in addition to inadvertent social frequency cues in some social contexts (Bault et al., 2011; Bolton and Harris, 1999). Knowing others’ payoffs allows one to use the ‘copy-successful-individuals’ strategy, which has been suggested to promote risk seeking irrespective of the risk premium because at least a subset of a population can be highly successful by sheer luck in risk taking (Baldini, 2012; Baldini, 2013; Takahashi and Ihara, 2019). Additionally, cooperative communications may further amplify the suboptimal decision bias if information senders selectively communicate their own, biased, beliefs (Moussaïd et al., 2015). Therefore, although communication may transfer information about forgone payoffs of other alternatives, which could mitigate the hot stove effect (Denrell, 2007; Yechiam and Busemeyer, 2006), future research should explore the potential impact of active sharing of richer information on collective learning situations (Toyokawa et al., 2014).

In contrast, previous studies suggested that competitions or conflicts of interest among individuals can lead to better collective intelligence than fully cooperative situations (Conradt et al., 2013) and can promote adaptive risk taking (Arbilly et al., 2011). Further research will identify conditions under which cooperative communication containing richer information can improve decision making and drive adaptive cumulative cultural transmission (Csibra and Gergely, 2011; Morgan et al., 2015), when adverse biases in individual decision-making processes prevail.

The generality of our dynamics model should apply to various collective decision-making systems, not only to human groups. Because it is a fundamental property of adaptive reinforcement learning, risk aversion due to the hot stove effect should be widespread in animals (Real, 1981; Weber et al., 2004; Hertwig and Erev, 2009). Therefore, its solution, the collective behavioural rescue, should also operate broadly in collective animal decision making because frequency-based copying is one of the common social learning strategies (Hoppitt and Laland, 2013; Grüter and Leadbeater, 2014). Future research should determine to what extent the collective behavioural rescue actually impacts animal decision making in wider contexts, and whether it influences the evolution of social learning, information sharing, and the formation of group living.

We have identified a previously overlooked mechanism underlying the adaptive advantages of frequency-based social learning. Our results suggest that an informational benefit of group living could exist well beyond simple informational pooling where individuals can enjoy the wisdom of crowds effect (Ward and Zahavi, 1973). Furthermore, the flexibility emerging through the interaction of negative and positive feedback suggests that conformity could evolve in a wider range of environments than previously assumed (Aoki and Feldman, 2014; Nakahashi et al., 2012), including temporally variable environments (Aplin et al., 2017). Social learning can drive self-organisation, regulating the mitigation and amplification of behavioural biases and canalising the course of repeated decision making under risk and uncertainty.

Materials and methods

The baseline asocial learning model and the hot stove effect

Request a detailed protocol

We assumed that the decision maker updates their value of choosing the alternative i (∈ {s, r}) at time t following the Rescorla–Wagner learning rule: Qi,t+1 ← (1−α)Qi,t + απi,t, where α (0 ≤ α ≤ 1) is a learning rate, manipulating the step size of the belief updating, and πi,t is a realised payoff from the chosen alternative i at time t (Sutton and Barto, 2018). The larger the α, the more weight is given to recent experiences, making reinforcement learning more myopic. The Q value for the unchosen alternative is unchanged. Before the first choice, individuals had no previous preference for either option (i.e. Qr,1 = Qs,1 = 0). The Q values were then translated into choice probabilities through a softmax (or multinomial-logistic) function such that Pi,t = exp(βQi,t)/(exp(βQs,t) + exp(βQr,t)), where β, the inverse temperature, is a parameter regulating how sensitive the choice probability is to the value estimate Q (i.e. controlling the proneness to explore).
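The update and choice rules above can be written compactly in code. The paper's simulations were run in R; the following is a minimal Python sketch for illustration (function names are ours):

```python
import math

def rw_update(q, reward, alpha):
    """Rescorla-Wagner update: move the value estimate toward the
    realised payoff by step size alpha (the learning rate)."""
    return (1 - alpha) * q + alpha * reward

def softmax_choice_prob(q_risky, q_safe, beta):
    """Probability of choosing the risky option under the softmax rule;
    beta (inverse temperature) controls the proneness to explore."""
    er, es = math.exp(beta * q_risky), math.exp(beta * q_safe)
    return er / (er + es)
```

With equal Q values the softmax gives 0.5 for either option regardless of β; a larger β makes the choice probability more sensitive to the difference in Q values.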

In such a risk-heterogeneous multi-armed bandit setting, reinforcement learners are prone to exhibiting suboptimal risk aversion (March, 1996; Denrell, 2007; Hertwig and Erev, 2009), even though they could have achieved high performance in a risk-homogeneous task where all options have an equivalent payoff variance (Sutton and Barto, 2018). Denrell, 2007 mathematically derived the condition under which suboptimal risk aversion arises, depicted by the dashed curve in Figure 1b. In the main analysis, we focused on the case where the risky alternative had μ=1.5 and s.d.=1 and the safe alternative generated πs=1 unless otherwise stated, that is, where choosing the risky alternative was the optimal strategy for a decision maker in the long run.

Collective learning and social influences

Request a detailed protocol

We extended the baseline model to a collective learning situation in which a group of 10 individuals completed the task simultaneously and individuals could obtain social information. For social information, we assumed a simple frequency-based social cue specifying distributions of individual choices (McElreath et al., 2005; McElreath et al., 2008; Toyokawa et al., 2017; Toyokawa et al., 2019; Deffner et al., 2020). Following the previous modelling of social learning in such multi-agent multi-armed bandit situations (e.g. Aplin et al., 2017; Barrett et al., 2017; McElreath et al., 2005; McElreath et al., 2008; Toyokawa et al., 2017; Toyokawa et al., 2019; Deffner et al., 2020), we assumed that social influences on reinforcement learning would be expressed as a weighted average between the softmax probability based on the Q values and the conformist social influence, as follows:

(1) Pi,t = (1−σ) · exp(βQi,t) / [exp(βQr,t) + exp(βQs,t)] + σ · (Ni,t−1 + 0.1)^θ / [(Ns,t−1 + 0.1)^θ + (Nr,t−1 + 0.1)^θ]

where σ was a weight given to the social influence (copying weight) and θ was the strength of conformist influence (conformity exponent), which determines the influence of social frequency on choosing the alternative i at time t-1, that is, Ni,t-1. The larger the conformity exponent θ, the higher the influence that was given to an alternative that was chosen by more individuals, with non-linear conformist social influence arising when θ>1. We added a small number, 0.1, to Ni,t-1 so that an option chosen by no one (i.e., Ni,t-1=0) could provide the highest social influence when θ<0 (negative frequency bias). Although this additional 0.1 slightly reduces the conformity influence when θ>0, we confirmed that the results were qualitatively unchanged. Note also that in the first trial t=1, we assumed that the choice was determined solely by the asocial softmax function because there was no social information available yet.
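Equation 1 can be sketched directly. Below is an illustrative Python version (the published code is in R; the function name and argument layout are ours):

```python
import math

def social_choice_prob(q, n_prev, sigma, beta, theta, option):
    """Equation 1: a weighted average of the asocial softmax probability
    and the conformist social influence. `q` maps option names ('r', 's')
    to Q values; `n_prev` maps them to the previous trial's choice counts."""
    exp_q = {i: math.exp(beta * q[i]) for i in q}
    asocial = exp_q[option] / sum(exp_q.values())
    # +0.1 lets an option chosen by no one still carry social weight
    freq = {i: (n_prev[i] + 0.1) ** theta for i in n_prev}
    social = freq[option] / sum(freq.values())
    return (1 - sigma) * asocial + sigma * social
```

With σ = 0 this reduces to the plain softmax; with θ > 1 the majority option receives a disproportionately large share of the social term.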

Note that when σ=0, there is no social influence, and the decision maker is considered an asocial learner. It is also worth noting that when σ=1 with θ>1, individual choices become fully contingent on the group’s most common behaviour, which was assumed in some previous models of strong conformist social influences in sampling behaviour (Denrell and Le Mens, 2017). The descriptions of the parameters are shown in Table 1. The simulations were run in R 4.0.2 (https://www.r-project.org) and the code is available at (the author’s github repository).

The approximated dynamics model of collective behaviour

Request a detailed protocol

We assume a group of N individuals who exhibit two different behavioural states: choosing a safe alternative S, exhibited by NS individuals; and choosing a risky alternative R, exhibited by NR individuals (N = NS + NR). We also assume that there are two different ‘inner belief’ states, labelled ‘−’ and ‘+’. Individuals who possess the negative belief prefer the safe alternative S to R, while those who possess the positive belief prefer R to S. A per capita probability of choice shift from one behavioural alternative to the other is denoted by P. For example, PS− means the individual probability of changing the choice to the safe alternative S from the risky alternative R under the negative belief. Because there exist NR− individuals who chose R with belief −, the total number of individuals who ‘move on’ to S from R at one time step is PS−NR−. We assume that the probability of shifting to the more preferable option is larger than that of shifting to the less preferable option, that is, PS− > PR− and PR+ > PS+ (Figure 4a).

We assume that the belief state can change upon choosing the risky alternative. We define the per capita probability of entering the + state, that is, of acquiring a higher preference for the risky alternative, as e (0 ≤ e ≤ 1), and hence NR+ = eNR. The remaining individuals who choose the risky alternative enter the − belief state, that is, NR− = (1−e)NR.

We define e so that it can be seen as the risk premium of the gamble. For example, imagine a two-armed bandit task equipped with one risky arm subject to Gaussian noise and one sure arm. The larger the mean expected reward of the risky option (i.e. the higher the risk premium), the more of those who choose the risky arm are expected to obtain a larger reward than the safe alternative would provide. Assuming e > 1/2 therefore approximates a situation where risk seeking is optimal in the long run.

Here, we focus only on the population dynamics: if more people choose S, NS increases; if more people choose R, NR increases. As a consequence, the system may eventually reach an equilibrium state in which both NS and NR no longer change. If the equilibrium state of the population (denoted by *) satisfies NR* > NS*, we say that the population exhibits risk seeking, escaping from the hot stove effect. For the sake of simplicity, we assumed pl = PR− = PS+ and ph = PR+ = PS−, where 0 ≤ pl ≤ ph ≤ 1, for the asocial baseline model.

Considering NR+=eNR and NR-=(1-e)NR, the dynamics are written as the following differential equations:

(2) dNR/dt = pl NS− − ph(1−e)NR + ph NS+ − pl e NR,
dNS−/dt = −pl NS− + ph(1−e)NR,
dNS+/dt = −ph NS+ + pl e NR.

Overall, our model crystallises the asymmetry emerging from adaptive sampling, which is considered a fundamental mechanism of the hot stove effect (Denrell, 2007; March, 1996): once decision makers underestimate the expected value of the risky alternative, they start avoiding it and do not have another chance to correct the error. In other words, although there would potentially be more individuals who obtain a preference for R by choosing the risky alternative (i.e. e > 0.5), this asymmetry, arising from the adaptive balance between exploration and exploitation, may constantly increase the number of people who possess a preference for S due to underestimation of the value of the risky alternative. If our model captures these asymmetric dynamics properly, the relationship between e (i.e. the potential goodness of the risky option) and pl/ph (i.e. the exploration–exploitation balance) should account for the hot stove effect, as suggested by previous learning model analysis (Denrell, 2007). The equilibrium analysis was conducted in Mathematica (code is available online). The results are shown in Figure 4.
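The baseline system (Equation 2) can also be integrated numerically. The paper's equilibrium analysis was done in Mathematica; the following is an illustrative Python sketch using forward-Euler integration, with transition probabilities and the initial condition chosen by us for illustration:

```python
def simulate_baseline(e, p_l, p_h, n_total=100.0, dt=0.01, steps=200000):
    """Forward-Euler integration of the baseline dynamics (Equation 2).
    n_r: individuals choosing the risky option R; n_sm / n_sp:
    individuals choosing the safe option S with the '-' / '+' belief."""
    n_r = n_total / 2.0
    n_sm = n_sp = n_total / 4.0
    for _ in range(steps):
        d_r = p_l * n_sm - p_h * (1 - e) * n_r + p_h * n_sp - p_l * e * n_r
        d_sm = -p_l * n_sm + p_h * (1 - e) * n_r
        d_sp = -p_h * n_sp + p_l * e * n_r
        n_r += d_r * dt
        n_sm += d_sm * dt
        n_sp += d_sp * dt
    return n_r, n_sm + n_sp
```

With e = 0.7 (risk seeking is optimal) but exploitative transition rates (p_h much larger than p_l), the equilibrium satisfies NS* > NR* — the hot stove effect; with more explorative rates (p_l closer to p_h) the population escapes it.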

Collective dynamics with social influences

Request a detailed protocol

For social influences, we assumed that the behavioural transition rates, PS and PR, would depend on the number of individuals NS and NR as follows:

(3) PS− = (1−σ)ph + σ NS^θ / (NR^θ + NS^θ),
PR− = (1−σ)pl + σ NR^θ / (NR^θ + NS^θ),
PS+ = (1−σ)pl + σ NS^θ / (NR^θ + NS^θ),
PR+ = (1−σ)ph + σ NR^θ / (NR^θ + NS^θ),

where σ is the weight of social influence and θ is the strength of the conformist bias, corresponding to the agent-based learning model (Table 1). Other assumptions were the same as in the baseline dynamics model, which was a special case of this social influence model with σ = 0. Because the system was not analytically tractable, we obtained numeric solutions across different initial distributions of NS,t=0 and NR,t=0 for various combinations of the parameters.
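The socially influenced dynamics (Equation 3) can be solved numerically in the same way as the baseline system. A hedged Python sketch (the paper's numeric solutions were obtained elsewhere; parameter values below are illustrative):

```python
def simulate_social(e, p_l, p_h, sigma, theta,
                    n_total=100.0, dt=0.01, steps=200000):
    """Forward-Euler integration of the collective dynamics with
    conformist social influence (Equation 3): per capita transition
    rates blend the asocial rates (p_l, p_h) with a frequency-dependent
    conformist term, weighted by sigma."""
    n_r = n_total / 2.0          # choosing the risky option R
    n_sm = n_sp = n_total / 4.0  # choosing S with '-' / '+' belief
    for _ in range(steps):
        n_s = n_sm + n_sp
        f_r = n_r**theta / (n_r**theta + n_s**theta)  # conformist term for R
        f_s = 1.0 - f_r
        p_s_minus = (1 - sigma) * p_h + sigma * f_s   # Equation 3
        p_r_minus = (1 - sigma) * p_l + sigma * f_r
        p_s_plus = (1 - sigma) * p_l + sigma * f_s
        p_r_plus = (1 - sigma) * p_h + sigma * f_r
        d_r = (p_r_minus * n_sm + p_r_plus * n_sp
               - p_s_minus * (1 - e) * n_r - p_s_plus * e * n_r)
        d_sm = -p_r_minus * n_sm + p_s_minus * (1 - e) * n_r
        d_sp = -p_r_plus * n_sp + p_s_plus * e * n_r
        n_r += d_r * dt
        n_sm += d_sm * dt
        n_sp += d_sp * dt
    return n_r, n_sm + n_sp
```

Under a positive risk premium (e > 1/2) and exploitative asocial rates, a moderate copying weight σ with a linear frequency dependence (θ = 1) raises the equilibrium number of risk takers relative to the asocial case — the collective behavioural rescue.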

The online experiments

The experimental procedure was approved by the Ethics Committee at the University of Konstanz (‘Collective learning and decision-making study’). Six hundred nineteen English-speaking subjects [294 self-identified as women, 277 as men, 1 as other, and 47 unspecified; mean (minimum, maximum) age = 35.2 (18, 74) years] participated in the task through the online experimental recruiting platform Prolific Academic. We excluded subjects who disconnected from the online task before completing at least the first 35 rounds from our computational model-fitting analysis, resulting in 585 subjects (the detailed distribution of subjects for each condition is shown in Table 3). A parameter recovery test had suggested that the sample size was sufficient to reliably estimate individual parameters using a hierarchical Bayesian fitting method (see below; Figure 6—figure supplement 3).

Design of the experimental manipulations

Request a detailed protocol

The group size was manipulated by randomly assigning different capacities of a ‘waiting lobby’ where subjects had to wait until other subjects arrived. When the lobby capacity was 1, which happened with probability 0.1, the individual condition started upon the first subject’s arrival. Otherwise, the group condition started when there were more than three people 3 min after the lobby opened (see Appendix 1 Supplementary Methods). If there were only two or fewer people in the lobby at this stage, each subject was assigned to the individual condition. Note that some groups in the group condition ended up with only two individuals because one individual dropped out during the task.

We used three different tasks: a 1-risky-1-safe task, a 1-risky-3-safe task, and a 2-risky-2-safe task, in which one risky option was expected to give a higher payoff than the other options on average (that is, tasks with a positive risk premium [positive RP]). To confirm our prediction that a risky shift would not strongly emerge when the risk premium was negative (i.e. risk seeking was suboptimal), we also conducted another 1-risky-1-safe task with a negative risk premium (the negative RP task). Participants’ goal was to gather as much payoff as possible, as monetary incentives were tied to individual performance. In the negative RP task, risk aversion was favourable instead. All tasks had 70 decision-making trials. The task proceeded on a trial basis; that is, trials of all individuals in a group were synchronised. Subjects in the group condition could see social frequency information, namely, how many people chose each alternative in the preceding trial. No social information was available in the first trial. The tasks were assigned randomly as a between-subjects condition, and subjects were allowed to participate in one session only.

We employed a skewed payoff probability distribution rather than a normal distribution for the risky alternative, and we conducted not only a two-armed task but also four-armed bandit tasks, because our pilot study had suggested that subjects tended to have a small susceptibility to the effect (αi(βi+1) < 2), and hence we needed more difficult settings than the conventional Gaussian-noise binary-choice task to elicit risk aversion from individual decision makers. Running agent-based simulations, we confirmed that the task setups used in the experiment could elicit the collective rescue effect (Figure 1—figure supplement 5 and Figure 1—figure supplement 6).

The details of the task setups are as follows:

The 1-risky-1-safe task (positive RP)
Request a detailed protocol

The optimal risky option produced either 50 or 550 points at probability 0.7 and 0.3, respectively (the expected payoff was 200). The safe option produced 150 points (with a small amount of Gaussian noise with s.d. = 5).

The 1-risky-3-safe task (positive RP)
Request a detailed protocol

The optimal risky option produced either 50 or 425 points at probability 0.6 and 0.4, respectively (the expected payoff was 200). The three safe options each produced 150, 125, and 100 points, respectively, with a small Gaussian noise with s.d. = 5.

The 2-risky-2-safe task (positive RP)
Request a detailed protocol

The optimal risky option produced either 50 or 425 points at probability 0.6 and 0.4, respectively (the expected payoff was 200). The two safe options each produced 150 and 125 points, respectively, with a small Gaussian noise with s.d. = 5. The suboptimal risky option, whose expected value was 125, produced either 50 or 238 points at probability 0.6 and 0.4, respectively.

The 1-risky-1-safe task (negative RP)
Request a detailed protocol

The setting was the same as in the 1-risky-1-safe positive RP task, except that the expected payoff from the risky option was smaller than the safe option, producing either 50 or 220 points at probability 0.7 and 0.3, respectively (the expected payoff was 101).
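As a sanity check on the expected payoffs stated in the task descriptions above, the values can be computed directly. A small Python sketch (payoffs and probabilities copied from the descriptions; the function name is ours):

```python
def expected_payoff(outcomes):
    """Expected value of a discrete payoff distribution given as
    (payoff, probability) pairs."""
    return sum(x * p for x, p in outcomes)

# Risky options used in the experiment:
positive_rp_risky = expected_payoff([(50, 0.7), (550, 0.3)])  # 1-risky-1-safe
four_arm_risky = expected_payoff([(50, 0.6), (425, 0.4)])     # four-armed tasks
suboptimal_risky = expected_payoff([(50, 0.6), (238, 0.4)])   # 2-risky-2-safe
negative_rp_risky = expected_payoff([(50, 0.7), (220, 0.3)])  # negative RP task
```

Against the safe option's 150 points, the positive RP risky options (expected payoff 200) carry a risk premium of +50, the suboptimal risky option is roughly 125 (125.2 exactly), and the negative RP risky option (101) carries a negative premium.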

We have confirmed through agent-based model simulations that the collective behavioural rescue could emerge in tasks equipped with the experimental settings (Figure 1—figure supplement 5). We have also confirmed that risk seeking does not always increase when risk premium is negative (Figure 1—figure supplement 6). With the four-armed tasks we aimed to demonstrate that the rescue effect is not limited to binary-choice situations. Other procedures of the collective learning task were the same as those used in our agent-based simulation shown in the main text. The experimental materials including illustrated instructions can be found in Video 1 (individual condition) and Video 2 (group condition).

Video 1
A sample screenshot of the online experimental task (Individual condition).

This video was taken for demonstration purposes only and hence is not associated with any actual participant’s behaviour.

Video 2
A sample screenshot of the online experimental task with N = 3 (group condition).

This video was taken for demonstration purposes only and hence is not associated with any actual participant’s behaviour. Also note that actual participants could see only one browser window each in the experimental sessions.

The hierarchical Bayesian model fitting

To fit the mixed logit model (GLMM) as well as the learning model, we used a hierarchical Bayesian method. For the learning model, we estimated the global means (μα, μβ, μσ, and μθ) and global variances (vα, vβ, vσ, and vθ) for each of the four experimental conditions and for the individual and group conditions separately. For the individual condition, we assumed σ = 0 for all subjects and hence no social learning parameters were estimated. Full details of the model-fitting procedure and prior assumptions are given in the Supplementary Methods. The R and Stan code used in the model fitting is available from an online repository.

The GLMM

Request a detailed protocol

We conducted a mixed logit model analysis to investigate the relationship between the proportion of choosing the risky option in the second half of the trials (Pr,t>35) and the fit learning parameters (αi(βi+1), σi, and θi). Since no social learning parameters exist in the individual condition, a dummy variable for the group condition was included (Gi = 1 if individual i was in the group condition and 0 otherwise). The formula used is logit(Pr,t>35) = γ0 + γ1αi(βi+1) + γ2Gi + γ3Giαi(βi+1) + γ4Giσi + γ5Giθi + ϵi + ϵg, where ϵi and ϵg are the random effects of individual and group, respectively. The model fitting using the Markov chain Monte Carlo (MCMC) method was the same as that used for the computational model fitting, and the code is available from the repository shown above.

Model and parameter recovery, and post hoc simulation

Request a detailed protocol

To assess the adequacy of the hierarchical Bayesian model-fitting method, we tested how well the hierarchical Bayesian method (HBM) could recover ‘true’ parameter values that were used to simulate synthetic data. We simulated artificial agents’ behaviour assuming that they behave according to the social learning model with each parameter setting. We generated ‘true’ parameter values for each simulated agent based on the experimentally fit global parameters (Table 1; parameter recovery test 1). In addition, we ran another recovery test using arbitrary global parameters that deviated from the experimentally fit values (parameter recovery test 2), to confirm that our fitting procedure was not just ‘attracted’ to the fit values. We then simulated synthetic behavioural data and recovered their parameter values using the HBM described above. Both parameter recovery tests showed that all the recovered individual parameters were positively correlated with the true values, with correlation coefficients all larger than 0.5. We also confirmed that 30 of 32 global parameters in total were recovered within the 95% Bayesian credible intervals, and that even the two non-recovered parameters (μβ for the 2-risky-2-safe task in parameter recovery test 1 and μα for the 1-risky-3-safe task in parameter recovery test 2) did not deviate much from the true values (Figure 6—figure supplement 3).

We compared the baseline reinforcement learning model, the decision-biasing model, and the value-shaping model (see Supplementary Methods) using Bayesian model selection (Stephan et al., 2009). The model frequency and exceedance probability were calculated based on the Widely Applicable Information Criterion (WAIC) values for each subject (Watanabe and Opper, 2010). We confirmed accurate model recovery by simulations using our task setting (Figure 6—figure supplement 2).

We also ran a series of individual-based model simulations using the calibrated global parameter values for each condition. First, we randomly sampled a set of agents whose individual parameter values were drawn from the fit global parameters. Second, we let this synthetic group of agents perform the task for 70 rounds. We repeated these steps 100,000 times for each task setting and for each individual and group condition.

Appendix 1

Supplementary methods

An analytical result derived by Denrell, 2007

In the simplest setup of the two-armed bandit task, Denrell, 2007 derived an explicit form for the asymptotic probability of choosing the risky alternative Pr (as t → ∞) as follows:

(4) Pr = 1 / (1 + exp[αβ²·s.d.² / (2(2−α)) − β(μ − πs)]).

Equation 4 identifies a condition under which reinforcement learners exhibit risk aversion. In fact, when there is no risk premium (i.e. μ ≤ πs), the condition of risk aversion always holds, that is, Pr < 0.5. Consider the case where risk aversion is suboptimal, that is, μ > πs. Equation 4 suggests that suboptimal risk aversion emerges when learning is myopic (i.e. when α is large) and/or decision making is less explorative (i.e. when β is large). For instance, when the payoff distribution of the risky alternative is set to μ = πs + 0.5 and s.d.² = 1, the condition of risk aversion, Pr < 0.5, holds under β > (2−α)/α, which corresponds to the area above the dashed curve in Figure 1b in the main text. Risk aversion becomes more prominent when the risk premium μ − πs is small and/or the payoff variance s.d.² is large.
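Equation 4 is straightforward to evaluate numerically. A short Python sketch (the numerical check mirrors the μ = πs + 0.5, s.d.² = 1 case discussed above; the function name is ours):

```python
import math

def asymptotic_risk_choice_prob(alpha, beta, sd, mu, pi_s):
    """Equation 4 (Denrell, 2007): asymptotic probability of choosing
    the risky alternative for an asocial Rescorla-Wagner learner."""
    exponent = (alpha * beta**2 * sd**2) / (2 * (2 - alpha)) - beta * (mu - pi_s)
    return 1.0 / (1.0 + math.exp(exponent))
```

With α = 0.5 the risk-aversion threshold is β > (2 − α)/α = 3: a learner with β = 4 ends up risk averse (Pr < 0.5), while a more explorative learner with β = 2 does not.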

The online experiments

Subjects

The positive risk premium (positive RP) tasks were conducted between August and October 2020 (recruiting 492 subjects), while the negative risk premium (negative RP) task was conducted in September 2021 (recruiting 127 subjects) in response to the comments from peer reviewers. All subjects declared their residence in the United Kingdom, the United States, Ireland, or Australia. All subjects consented to participation through an online consent form at the beginning of the task. We excluded subjects who disconnected from the online task before completing at least the first 35 rounds from our computational model-fitting analysis, resulting in 467 subjects for the positive RP tasks and 118 subjects for the negative RP task (the detailed distribution of subjects for each condition is shown in Table 1 in the main text). The task was available only to English-speaking subjects, who had to be 18 years old or older. Only subjects who passed a comprehension quiz at the end of the instructions could enter the task. Subjects were paid 0.8 GBP as a show-up fee as well as an additional bonus payment depending on their performance in the decision-making task. In the positive RP tasks, 500 artificial points were converted to 8 pence, while in the negative RP task 500 points were converted to 10 pence so as to compensate for the less productive environment, resulting in a bonus ranging between £1.00 and £3.50.

Sample size

Our original target sample size for the positive RP tasks was 50 subjects for the individual condition and 150 subjects for the group condition where our target average group size was 5 individuals per group. For the negative RP task, we aimed to recruit 30 individuals for the individual condition and 100 individuals (that is, 20 groups of 5) for the group condition. Subjects each completed 70 trials of the task. The sample size and the trial number had been justified by a model recovery analysis of a previous study (Toyokawa et al., 2019).

Because of the nature of the ‘waiting lobby’, which was available only for 3 min, we could not fully control the exact size of each experimental group. Therefore, we set the maximum capacity of a lobby to 8 individuals for the 1-risky-1-safe task, which was conducted in August 2020, so as to buffer potential dropouts during the waiting period. Since we learnt that dropping out happened far less often than we originally expected, we reduced the lobby capacity to 6 for both the 1-risky-3-safe and the 2-risky-2-safe task, which were conducted in October 2020. As a result, we had 20 groups (mean group size = 6.95), 21 groups (mean group size = 4.7), 19 groups (mean group size = 4.3), and 21 groups (mean group size = 4.4) for the 1-risky-1-safe, 1-risky-3-safe, 2-risky-2-safe, and negative risk premium 2-armed tasks, respectively. Although we could not achieve the targeted sample size, partly due to dropouts during the task and to a fatal error occurring in the experimental server in the first few sessions of the four-armed tasks, the parameter recovery test with N=105 suggested that the current sample size should be reliable enough to estimate social influences for each subject (Figure 6—figure supplement 3).

The hierarchical Bayesian parameter estimation

We used the hierarchical Bayesian method (HBM) to estimate the free parameters of our learning model. The HBM allowed us to estimate individual differences while bounding this individual variation by the group-level (i.e. hyper-) parameters. To do so, we used the following non-centred reparameterisation (the ‘Matt trick’):

logit(αi) = μα + vα·αraw,i

where μα is a global mean of logit(αi) and vα is a global scale parameter of the individual variations, which is multiplied by a standardised individual random variable αraw,i. We used a standardised normal prior distribution centred on 0 for μα and an exponential prior for vα. The same method was applied to the other learning parameters βi, σi, and θi.
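As an illustration of this reparameterisation (the actual fitting was done in Stan; the Python sketch below only shows how an individual's learning rate is assembled from the global parameters):

```python
import math

def inv_logit(x):
    """Inverse-logit link, mapping the real line to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def learning_rate(mu_alpha, v_alpha, alpha_raw_i):
    """Non-centred ('Matt trick') reparameterisation: an individual's
    learning rate is the global mean plus the global scale times a
    standardised individual random variable, mapped back to (0, 1)."""
    return inv_logit(mu_alpha + v_alpha * alpha_raw_i)
```

Because αraw,i is standardised, the sampler explores a well-scaled space regardless of how small vα becomes, which is the usual motivation for the non-centred form.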

We assumed that the ‘raw’ values of individual random variables (αraw,i, βraw,i, σraw,i, θraw,i) were drawn from a multivariate normal distribution. The correlation matrix was estimated using a Cholesky decomposition with a weakly informative Lewandowski–Kurowicka–Joe prior that gave a low likelihood to very high or very low correlations between the parameters (McElreath, 2020; Deffner et al., 2020).

Model fitting

All models were fitted using the Hamiltonian Monte Carlo engine CmdStan 2.25.0 (https://mc-stan.org/cmdstanr/index.html) in R 4.0.2 (https://www.r-project.org). Each model was run with at least six parallel chains, and we confirmed convergence of the MCMC using both the Gelman–Rubin statistic criterion R̂ ≤ 1.01 and effective sample sizes greater than 500. The R and Stan code used in the model fitting is available from an online repository.

The value-shaping social influence model

We considered another implementation of social influences in reinforcement learning, namely, a value-shaping (Najar et al., 2020) (or ‘outcome-bonus’ Biele et al., 2011) model rather than the decision-biasing process assumed in our main analyses. In the value-shaping model, social influence modifies the Q value’s updating process as follows:

Qi,t+1 ← (1−α)Qi,t + α(πi,t + σvs·π̄·Ni,t−1^θ / (Ns,t−1^θ + Nr,t−1^θ))

where the social frequency cue acts as an additional ‘bonus’ to the value, weighted by σvs (σvs > 0) and standardised by π̄, the expected payoff from choosing randomly among all alternatives. Here we assumed no direct social influence on the action selection process (i.e. σ = 0 in our main model). We confirmed that the collective behavioural rescue could emerge when the inverse temperature β was sufficiently small (Figure 1—figure supplement 2). Whether other types of models would fit human data better than those considered here is beyond the scope of this article, but it is an interesting question for future research. For such an attempt, see Najar et al., 2020.
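The value-shaping update can be sketched as follows. This is an illustrative Python version (the fitted models were implemented in Stan; names are ours, and the degenerate case where no option was chosen in the previous trial is not handled):

```python
def value_shaping_update(q, reward, n_prev, alpha, sigma_vs, theta,
                         pi_bar, option):
    """Value-shaping update: the social frequency cue enters as a bonus
    added to the payoff before the Rescorla-Wagner update, rather than
    biasing the action selection directly."""
    freq = {i: n_prev[i] ** theta for i in n_prev}
    bonus = sigma_vs * pi_bar * freq[option] / sum(freq.values())
    return (1 - alpha) * q + alpha * (reward + bonus)
```

With σvs = 0 this reduces to the plain Rescorla–Wagner rule, so the value-shaping and decision-biasing models differ only in where the social term enters the learning process.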

Data availability

Code for the agent-based simulation as well as for the experimental data analyses can be found in the main author's Github repository https://github.com/WataruToyokawa/ToyokawaGaissmaier2021 (copy archived at swh:1:rev:6fca0b26c33af3a5b3c415719fa3df0dced15149).

References

  1. Boyd R, Richerson PJ (1985) Culture and the Evolutionary Process. Chicago, IL: University of Chicago Press.
  2. Csibra G, Gergely G (2011) Natural pedagogy as evolutionary adaptation. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 366:1149–1157. https://doi.org/10.1098/rstb.2010.0319
  3. Giraldeau LA, Caraco T (2000) Social Foraging Theory. Princeton, NJ: Princeton University Press. https://doi.org/10.1515/9780691188348
  4. Giraldeau LA, Valone TJ, Templeton JJ (2002) Potential disadvantages of using socially acquired information. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 357:1559–1566. https://doi.org/10.1098/rstb.2002.1065
  5. Krause J, Ruxton GD (2002) Living in Groups. Oxford: Oxford University Press.
  6. Sutton RS, Barto AG (2018) Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
  7. Watanabe S, Opper M (2010) Asymptotic equivalence of Bayes cross validation and widely applicable information criterion in singular learning theory. Journal of Machine Learning Research 11:12.

Decision letter

  1. Mimi Liljeholm
    Reviewing Editor; University of California, Irvine, United States
  2. Michael J Frank
    Senior Editor; Brown University, United States

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Decision letter after peer review:

[Editors’ note: the authors submitted for reconsideration following the decision after peer review. What follows is the decision letter after the first round of review.]

Thank you for submitting the paper "Conformist social learning leads to self-organised prevention against adverse bias in risky decision making" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, including Mimi Liljeholm as the Reviewing Editor and Reviewer #3, and the evaluation has been overseen by a Senior Editor.

We are sorry to say that, after consultation with the reviewers, we have decided that this work will not be considered further for publication by eLife.

There was consensus among the reviewers that the paper addresses an important and impactful topic. The influence of conformity on economic choice is still largely unexplored, and the rigorous modeling approach employed here is valuable. However, a primary weakness is the lack of integration with the relevant empirical and theoretical context, which imperils both the novelty and interpretability of the results:

First, it is unclear whether these findings constitute enough of an advance over those reported by Denrell and Le Mens (2007) to warrant publication in eLife. Second, there is no effort to incorporate processes supporting normative and informational conformity into the models. Notably, these issues are somewhat connected, in that a formal integration of normative and informational conformity with sampling-based collective rescue might go a long way towards distinguishing this work from that of Denrell and Le Mens. Thus, while these issues are too open-ended for a revision decision at eLife, the enthusiasm among reviewers was such that, should these concerns be fully addressed, the paper might be considered again as a new submission.

The specific comments from the reviewers are appended below for further reference.

Reviewer #1:

In this study the authors investigated the effect of social influence on individuals' risk aversion in a 2-armed bandit task. Using a reinforcement learning model and a dynamic model they showed that social influence can in fact diminish risk aversion. The authors then conducted a series of online experiments and found that their experimental findings were consistent with the prediction of their simulations.

The authors addressed an important question in the literature and adopted an interesting approach by first making predictions using simulations and then verifying those predictions with experimental data. The modelling has been conducted very carefully.

However, I have some concerns about the interpretation of the findings, which might be addressed using additional analysis and/or rewriting some parts of the manuscript. The study does not clarify whether in this task participants copy others to maximize their accuracy (informational conformity) or alternatively to be aligned with others (normative conformity). It is possible that participants became riskier simply because most of the group chose the riskier option (regardless of the outcome). In addition, an earlier study showed that people make riskier decisions when they decide alongside other people. This might be a potential confound of this study.

One potentially interesting design would be to test people in a situation where only the minority of the group members choose the optimal option (riskier option). If participants' choices become riskier even in this condition, we can conclude that they were not just copying the majority, but were maximising their reward by observing others' decisions and outcomes.

In this study the authors investigated the effect of social influence on individuals' risk aversion in a 2-armed bandit task. Using a reinforcement learning model and a dynamic model they showed that social influence can in fact diminish risk aversion. The authors then conducted a series of online experiments and observed the same effect in their experimental data as well. The research question is timely, and the modelling has been done carefully. However, I have some comments and concerns about the interpretations of their findings.

– My first question is: do the participants copy others because the risky option sounds better in terms of reward, or because being in alignment with others is itself rewarding? This brings us to the distinction between informational and normative influence. For example, a recent study showed that copying others is not necessarily motivated by maximising accuracy (Mahmoodi et al. 2018; see also Cialdini and Goldstein 2004). In their experimental data, the authors found that participants do not copy others (choosing the risky options) as much as they should. Does this suggest that their conformity toward others cannot be fully explained by informational motives (where the aim of conformity is to maximise payoff/accuracy)? I suggest that the authors discuss each of these possibilities and then explain to which of these two types of influence their findings belong.

– An earlier study showed that people's decisions become riskier when they make decisions with others (Bault et al., PNAS, 2011). Could this explain the findings presented in this paper? Can the models distinguish between these two types of change in behaviour? I strongly suggest that the authors discuss the Bault et al. paper and how their findings deviate from that study.

– In one section the authors show that reducing heterogeneity in groups undermines group performance. It brought to my attention a study (Lorenz et al., PNAS, 2011) which suggested that social influence can undermine the wisdom of crowds by reducing the heterogeneity of opinions. It seems that the authors are presenting the same phenomenon as that suggested by Lorenz and colleagues. I suggest that the authors cite that study and discuss how their results are related to it and whether their findings broaden our understanding of the effect of heterogeneity on collective performance.

– Nothing can be found about the model in the main text. Similarly, some of the terms are not even defined before they are used in the main text. For example, the term "asocial learning" is only defined in the Figure 2 caption. I suggest that the authors briefly explain the model and the key terms in the main text before presenting the results. I also strongly suggest that the authors mention in the main text that the details of the model are presented in the Methods.

The introduction reads: "social influence does not mindlessly increase risk seeking; instead, it may work only when to do so is adaptive". I believe this sentence is vague in its current form. I suggest that the authors elaborate on it, especially on the last part (i.e., "when to do so is adaptive").

Reviewer #2:

The paper considers an interesting puzzle. While most psychological studies of conformity tend to focus on negative effects, animal research highlights positive effects of conformity. The current analysis tries to clarify this apparent puzzle by clarifying one positive effect of conformity: Reduction of the hot stove effect that impairs maximization when taking risk is optimal.

The paper can be improved by building on previous efforts (e.g., Denrell and Le Mens, 2007) to clarify the impact of social influence on the hot stove effect. It would also be good to try to simplify the model, and use the same tasks in the theoretical analysis and the experimental study.

One interesting open question involves the impact of the increase in the information, available today (in social networks like Facebook) concerning the behavior of other individuals. I think that the current analysis predicts an increase in risk taking.

The paper's main shortcoming is the fact that it is difficult to understand how it adds to the observations presented in Denrell and Le Mens (2007), and more recent research by Le Mens. It is possible that the authors can address this shortcoming by clarifying the difference between pure conformity (or imitation) and the impact of social influence examined by Le Mens and his co-authors.

Another shortcoming involves the difference between the choice task analyzed in the theoretical analysis and the task examined in the experiment. The theoretical analysis focuses on normal distributions, and the experiment focuses on asymmetric bimodal distributions. The authors suggest that they chose to switch to asymmetric bimodal distributions because the hot stove effect exhibited by human subjects in the case of normal distributions is not strong. If this is the case, it would be good to adjust the theoretical model and use one that better captures human behavior.

A third shortcoming involves the complexity of the theoretical model. Since this model is only used to demonstrate that conformity can reduce the hot stove effect, and is not supposed to capture the exact magnitude of the two effects, I could not understand why it includes so many parameters. For example, it would be nice to add only one parameter to the basic reinforcement learning model. If more parameters are needed, it would be good to show why they are needed.

Reviewer #3:

The authors use reinforcement learning and dynamic modeling to formalize the favorable effects of conformity on risk taking, demonstrating that social influence can produce an adaptive risk-seeking equilibrium at the population level. The work provides a rigorous analysis of a paradoxical interplay between social and economic choice.

Conformity is commonly attributed to either an intrinsic reward of group membership, or to inferences about the optimality of others' behavior (i.e., normative vs. informational). Neither of these aspects of conformity is addressed here, which limits the interpretability of the results. For example, if there is an intrinsic reward associated with majority alignment, that should contribute to the reinforcement of such decisions; moreover, inferences about the optimality of observed behavior likely change from early trials, in which others can be assumed to simply explore, to later trials, in which the decisions of others may be indicative of their success. The work would be more impactful if it considered how these factors might affect the potential for collective rescue.

An interesting question is whether a substantial payoff contingent on choosing a risky option may serve to reinforce the act of risk taking itself, and how such processes might propagate social influence across environments.

I suspect that the paper was initially written with the Methods following the Introduction, and with the Methods being subsequently moved to the end without much additional editing. As a result, very few of the variables and concepts (e.g., conformist influence, copying weight, positive/negative feedback) are defined in the main text upon first mention, which makes for extremely onerous reading.

Cases where the risky option yields a lower mean payoff, producing a potentially detrimental social influence, should be given full weight in the main text, and should have been included in the behavioral study. Generally, the discrepancy between the modeling and behavioral results is a bit disappointing. It is unclear why the behavioral experiment was not designed so as to create the most relevant conditions.

My greatest concern is that the work does not integrate properly with its theoretical and empirical context. Additional analyses assessing the relative contributions of normative and informational conformity to socially induced risk-seeking would be helpful.

[Editors’ note: further revisions were suggested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "Conformist social learning leads to self-organised prevention against adverse bias in risky decision making" for further consideration by eLife. Your revised article has been evaluated by Michael Frank (Senior Editor) and a Reviewing Editor.

The manuscript has been improved, and Reviewer 2 recommends acceptance at this point, but Reviewer 3 has some remaining concerns, summarized below. We invite you to address all remaining concerns in a second round of revisions. Make sure to include point-by-point replies to each of Reviewer 3's recommendations.

1) Although the organization and writing are improved, the manuscript still has some way to go before it is ready for publication. For example, the basic aims and methods should be stated in one or two sentences at the end of the first, or at most second, paragraph of the introduction, giving the reader a clear sense of where things are going. Moreover, the "Agent-based model" section should be shortened (to include only what is needed to conceptually understand the model, leaving details for a table and the methods) and better integrated with the introduction, rather than inserted as (what appears to be) a super-section.

2) Just as the intro should include a concise description of the model, it should highlight the online experiments, and how they relate to the modeling.

3) The result section should start with a paragraph outlining the various hypotheses and corresponding analyses (i.e., a "roadmap" for the section).

4) Please quantify the performance of your model relative to others with formal comparisons (e.g., Bayesian Model Selection).

5) Please quantify all claims of associations with effect sizes and clearly justify all parameter cut-offs/values.

6) Streamline figures and include predicted/observed result plots wherever possible.

Reviewer #3:

The authors showed that when individuals learn how to decide in a situation where they can choose between a risky and a safe option, they might overcome maladaptive biases (e.g., exaggerated risk aversion when risk taking would be beneficial) by conforming with group behaviour. Strengths include rigorous and innovative computational modeling; a weakness might be that the set-up of the empirical study did not actually widely provoke the behavioral phenomena in question, e.g., social learning (which is arguably at the core of the research question). Even though I am reviewing a revised manuscript, I would hope the authors find a way to further improve clarity in the presentation of their research question and results.

My main concern was that even in the revised version I read, I found the paper not as accessible as I think it should be for the wide readership of a journal like eLife; oftentimes I found that things you could communicate in a straightforward way are expressed in an overly complicated, verbose manner. When reading the previous reviews after having read the revised paper, I also got the feeling that there were some misunderstandings. You clarified these specific points well in your responses, I think, but the lack of clarity might be even more drastic for the interdisciplinary readership this journal aims at, as compared to the experts the journal has recruited for this review. I will try to give some examples below.

The introduction should, imho, provide a general intro to the question of the paper and how you arrived at that question, avoiding too much technical jargon. After having read the paper, I realized that the research question is pretty straightforward (and interesting) and derived from 1/2 previous observations, but this didn't become clear on the very first read.

Just as an example, some of the first sentences are…

"One rationale behind this optimistic view might come from the assumption that individuals tend to prefer a behavioural option that provides larger net benefits in utility over those providing lower outcomes. Therefore, even though uncertainty makes individual decision-making fallible, statistical filtering through informational pooling may be able to reduce uncertainty by cancelling out such noise."

This is only one example (of something I noted throughout the paper, and which also came up in the previous round of reviews) where I think these two sentences require at least a little more background in decision-making (net benefits, utility, uncertainty, noise, statistical filtering, informational pooling) to be understandable.

Other terminology, e.g., collective illusion, opportunity costs, description-based vs. experience-based risk-taking paradigms, frequency-based influences, should either be defined in the text or replaced by a more accessible description in the introduction.

I know it is sometimes hard to mentalize which terminology others not working on the same things might struggle with, but given that this is an interdisciplinary journal, might it perhaps make sense to ask a researcher friend who is not exactly working on this topic to give it a read?

"such a risk-taking bias constrained by the fundamental nature of learning may function independently from the adaptive risk perception (Frey et al., 2017), potentially preventing adaptive risk taking."

Unclear without knowing or looking up the Frey paper – shorten ("to be too risk-averse might be maladaptive in some contexts"?) or explain.

Lines 74-82: extremely long sentence – I think it can easily be simplified? E.g., "previous studies have neglected contexts where individuals learn about the environment both from their own and others' experiences".

Lines 84-89 are really long, too.

I would have been interested to learn more about the online experiments in the intro.

I do understand the reasoning behind copying and pasting the Agent-Based Model description after having read the previous reviews, but it confused me when first reading the article; I think it needs to be better integrated with the Intro/Results sections (the first paragraph still reads like intro/background, but then it is about the authors' current study). I fear that readers won't know where they are in the paper at that point. (I much prefer journals with the Methods-Results order rather than the one eLife uses, as this would naturally circumvent this problem, but that's the challenge here, I reckon.)

Results

Subheadings: Make clearer which is the section that describes simulations and which is the empirical section.

Can you maybe start by reiterating, in a structured way, which parameters you set to which values (and why) in the simulation before describing their effects, how many trials you simulate (you start speaking of elongated time horizons but do not mention the original horizon length other than in the figure legend?), etc.?

I know the Najar work and I think it is cool that you can generalize your results to a value-based framework, but I do not think readers who do not know the Najar study will be able to follow this as it is described now, which makes it more confusing than interesting. So either elaborate on what this means (accessibly to non-initiated readers) or banish it to the Supplement (which would be a shame).

Would the value-based model fit the empirical data better?

When reading the first part of the Results section I constantly wondered: how did the group behave / how was the behaviour of the group determined in the simulation? Was variability considered? (This is something that has been manipulated in some empirical studies building on descriptive risk scenarios, e.g., Suzuki et al.) It becomes clear when reading on/looking at Figure 3, but it is such a crucial point that it needs to be made clear from the beginning.

I think it is a major limitation that actual social learning was extremely limited in the empirical study, given that the paper claims to provide a formal account of the function of social learning in this situation. I would have thought that trying to provoke more use of social influence, by altering the experimental setup in the way the authors propose in their discussion, would have been important; given that this can be done online, it would also have been a feasible option.

Also, empirically, susceptibility to the hot stove effect, i.e., α_i(β_i + 1), seems to be very low – zero for most of the participants in some scenarios, according to Figure 6A-C? Isn't this concerning, given that this is at the core of what the authors want to explain?

"those with a higher value of the susceptibility to the hot stove effect (α_i(β_i + 1)) were less likely to choose the risky alternative, whereas those who had a smaller value of α_i(β_i + 1) had a higher chance of choosing the safe alternative (Figure 6a-c)," -- I'm confused – should it read "those who had a smaller value of α_i(β_i + 1) had a higher chance of choosing the *risky* alternative"?

Could you please quantify this correlation in terms of an effect size? The association is not linear, from the Figure? Please specify.

Is the association driven by α, by β, or really by the product?

"The behaviour in the group condition supports our theoretical predictions. In the PRP tasks, the proportion of choosing the favourable risky option increased with social influence (σ_i), particularly for individuals who had a high susceptibility to the hot stove effect. On the other hand, social influence had little benefit for those who had a low susceptibility to the hot stove effect (e.g., α_i(β_i + 1) ≤ 0.5)." Can you quantify this statistically (effect sizes, etc.)?

Did you do any form of model selection on the empirical data (with different set-ups of your models, the reduced models (e.g., without σ), or the Najar type of model) to demonstrate that it is really your theoretically proposed model that fits the data best (e.g., Bayesian Model Selection)? Please include this in the main manuscript. See Palminteri et al., TiCS, on why this might be important.

I think Figure 6 is really overloaded. The legend somehow looks as if it belongs only to panel c? The coloured plots are individual data points as a function of group size (which is what?) or copying weight? For me, in the current formatting, the dots were too small to detect a continuous colour-coding scheme (only yellow vs. purple). Are the solid lines simulated data? Can you show a regression line for the empirical data to allow for comparisons? Does it not differ substantially from the model predictions? I suggest making different plots for different purposes (compare predicted behaviour to empirical behaviour, show the effect of copying weight, show the effect of group size, show the simulation where you plug in σ̄ > 0.4, etc.).

"In keeping with this, if we extrapolated the larger value of the copying weight (i.e., σ̄_i > 0.4) into the best-fitting social learning model with the other parameters calibrated, a strong collective rescue became prominent" – sorry, where exactly does the value σ̄ > 0.4 come from for this analysis? Please give more detail/contextualise better.

Please try to be consistent with terminology: α_i(β_i + 1) is sometimes called 'susceptibility' and sometimes 'susceptibility value', which might be confusing, given that in some published articles susceptibility refers to susceptibility to social influence, which would be another parameter. I suggest going through the manuscript once more and strictly using only one term for each parameter (the one you introduce in the table).

https://doi.org/10.7554/eLife.75308.sa1

Author response

[Editors’ note: the authors resubmitted a revised version of the paper for consideration. What follows is the authors’ response to the first round of review.]

The specific comments from the reviewers are appended below for further reference.

Reviewer #1:

In this study the authors investigated the effect of social influence on individuals' risk aversion in a 2-armed bandit task. Using a reinforcement learning model and a dynamic model they showed that social influence can in fact diminish risk aversion. The authors then conducted a series of online experiments and found that their experimental findings were consistent with the prediction of their simulations.

The authors addressed an important question in the literature and adopted an interesting approach by first making predictions using simulations and then verifying those predictions with experimental data. The modelling has been conducted very carefully.

However, I have some concerns about the interpretation of the findings, which might be addressed using additional analysis and/or rewriting some parts of the manuscript.

The study does not clarify whether in this task participants copy others to maximize their accuracy (informational conformity) or alternatively to be aligned with others (normative conformity). It is possible that participants became riskier simply because most of the group chose the riskier option (regardless of the outcome).

We agree with the reviewer's point that both informational and normative motivations could underlie conformity behaviour. It is correct that our model does not specify underlying individual motivations. In other words, the model does not depend on whether there are normative motivations to align with the majority or only informational motivations for conformity; the result of our model holds irrespective of them. Whatever the proximate reasons behind the conformist social influence, as long as individual choices are influenced by many others' behaviour, experience with the (otherwise avoided) risky alternative can increase, which results in mitigation of the hot stove effect. To make this point clearer, we have added text in the Model section in the main text as follows:

– (Line 159 – 167): “A payoff realised was independent of others’ decisions and it was drawn solely from the payoff probability distribution specific to each alternative, thereby we assume neither direct social competition over the monetary reward (Giraldeau & Caraco, 2000) nor normative pressures towards majority alignment (Cialdini & Goldstein, 2004; Mahmoodi et al., 2018). The value of social information was assumed to be only informational (Nakahashi, 2012). Nevertheless, our model could apply to the context of normative social influences, because what we assumed here was modifications in individual choice probabilities due to social influences, irrespective of underlying motivations of conformity.”

For the sake of simplicity, in the online experiment we aimed to limit the underlying motivation for using social information to the informational one. Therefore, participants in our experiment had no direct monetary incentives for aligning with the majority and were kept anonymous during the task, which we thought minimised the possibility of evoking the normative motivation. Nevertheless, we agree with the reviewer's point that the distinction between informational and normative conformity is important in the broader context of social influence, and both types of motivations could in fact have operated together in the experiment, as in many real-world situations. We have added discussion to elaborate this point (see below). In particular, we expect that the weight for social learning (that is, the σ parameter in our model) would increase if both informational and normative motivations for conformity operated together, which would either promote a more robust rescue effect if σ is still not too large, or trigger maladaptive herding (i.e., collective illusion) if σ becomes too large. The discussion we have added is as follows:

– (Lines 572 – 586): “The weak reliance on social learning, which affected only about 15% of decisions, was unable to facilitate strong positive feedback. The little use of social information might have been due to the lack of normative motivations for conformity and to the stationarity of the task. In a stable environment, learners could eventually gather enough information as trials proceeded, which might have made them less curious about information gathering including social learning (Rendell et al., 2010). In reality, people might use more sophisticated social learning strategies whereby they change the reliance on social information flexibly over trials (Deffner et al., 2020; Toyokawa et al., 2017, 2019). Future research should consider more strategic use of social information, and will look at the conditions that elicit heavier reliance on the conformist social learning in humans, such as normative pressures for aligning with majority, volatility in the environment, time pressure, or an increasing number of behavioural options (Muthukrishna et al., 2015), coupled with larger group sizes (Toyokawa et al., 2019).”
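To make the learning rule discussed in these responses concrete, the following is a minimal toy sketch, not the authors' actual implementation: the payoff distributions, parameter values (α, β, θ, group size), and initialisation are illustrative assumptions chosen only to show how a copying weight σ mixes an asocial softmax policy with a frequency-dependent conformist term.

```python
import math
import random

def simulate_group(sigma, n_agents=8, n_trials=100, alpha=0.5, beta=5.0,
                   theta=2.0, seed=42):
    """Fraction of risky choices made by a group of Rescorla-Wagner
    learners whose softmax policy is mixed (weight sigma) with a
    conformist term based on the group's last-trial choice frequencies.
    Option 0 is safe (sure payoff 1.0); option 1 is risky (mean 1.5,
    sd 1.0), so risk taking is favourable on average. Illustrative only."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_agents)]        # Q[i] = [safe, risky]
    last = [rng.randrange(2) for _ in range(n_agents)]  # initial choices
    risky = 0
    for _ in range(n_trials):
        n_r = sum(last)
        n_s = n_agents - n_r
        # frequency-dependent conformist influence with exponent theta
        conf = n_r**theta / (n_s**theta + n_r**theta)
        new = []
        for i in range(n_agents):
            e_s = math.exp(beta * Q[i][0])
            e_r = math.exp(beta * Q[i][1])
            # mixed policy: (1 - sigma) * softmax + sigma * conformity
            p_risky = (1 - sigma) * e_r / (e_s + e_r) + sigma * conf
            c = 1 if rng.random() < p_risky else 0
            # payoff drawn independently of others' decisions
            payoff = rng.gauss(1.5, 1.0) if c == 1 else 1.0
            Q[i][c] += alpha * (payoff - Q[i][c])
            new.append(c)
            risky += c
        last = new
    return risky / (n_agents * n_trials)
```

Comparing, e.g., `simulate_group(0.0)` with `simulate_group(0.3)` gives a feel for whether a weak social influence lifts risky sampling; since the dynamics are stochastic, any single run is only suggestive.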

In addition to that, an earlier study showed that people make riskier decisions when they make decisions alongside other people. This might be a potential confound of this study.

We now cite some earlier studies, highlighting how our approach differs qualitatively from the previous literature. Previous studies investigating social influence on risky decision making have focused mainly on description-based tasks, where information sampling from experience does not play an important role in decision making. In contrast, our focus here is on experience-based (i.e., learning-based) risky decision making, where information sampling processes are responsible for the proximate causes of risk aversion, whose mechanisms can be independent of the utility-function-based risk sensitivity measured in description-based tasks. This distinction is important because the very nature of information sampling in the experience-based task plays the core role in both the hot stove effect and the collective rescue effect. To make this point clear, we have added sentences in the Introduction as follows:

– (Lines 72 – 82): How, if at all, can group-living animals improve collective decision accuracy while suppressing the potentially deleterious constraint of decision-making biases arising through trial-and-error learning? One strong candidate for explaining this gap is the fact that studies of human social learning in risky decision making have focused only on either description-based gambles (Chung et al., 2015; Bault et al., 2011; Suzuki et al., 2016; Shupp and Williams, 2008) or extreme conformity where individual choices are regulated fully by others' behaviour (Denrell and Le Mens, 2007, 2016), but not on experience-based situations where both individual and social learning affect behavioural outcomes, a form of decision making widespread in group-living animals and humans (Hertwig and Erev, 2009; Camazine et al., 2001; Toyokawa et al., 2019).

One potentially interesting design would be to test people in a situation where only the minority of the group members choose the optimal option (riskier option). If participants' choices become riskier even in this condition, we can conclude that they were not just copying the majority, but were maximising their reward by observing others' decisions and outcomes.

The situation described by the reviewer here is exactly what happened in our results. Risk-aversion was mitigated not because the majority chose the risky option, nor were individuals simply attracted towards the majority. Rather, participants' choices became riskier even though the majority chose the safer alternative at the outset. The mechanism behind such an ostensibly 'minority effect' is explained in the main text as follows, and we now highlight more clearly what this means:

– (Lines 516 – 526): Despite conformity, the probability of choosing the suboptimal option can decrease from what is expected by individual learning alone. Indeed, an inherent individual preference for the safe alternative, expressed by the softmax function exp(βQ_s)/(exp(βQ_s) + exp(βQ_r)), is always mitigated by the conformist influence N_s^θ/(N_s^θ + N_r^θ) as long as the former is larger than the latter. In other words, risk-aversion was mitigated not because the majority chose the risky option, nor were individuals simply attracted towards the majority. Rather, participants' choices became riskier even though the majority chose the safer alternative at the outset. Intuitively, under social influences (whether from informational or normative motivations), individuals become more explorative, and are likely to continue sampling the risky option even after becoming disappointed by poor rewards.
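To make the inequality in the quoted passage concrete, here is a hypothetical one-step calculation (a sketch with illustrative numbers, not fitted parameters) showing how even a safe-leaning majority weakens an agent's asocial safe preference, provided the softmax term exceeds the conformist term:

```python
import math

# Illustrative values: a strong asocial preference for the safe option
beta, Q_s, Q_r = 8.0, 1.0, 0.6
p_safe_asocial = math.exp(beta * Q_s) / (math.exp(beta * Q_s) + math.exp(beta * Q_r))

# Conformist influence when 3 of 4 group members currently choose safe
theta, N_s, N_r = 2.0, 3, 1
p_safe_conf = N_s**theta / (N_s**theta + N_r**theta)  # = 0.9

# A weak copying weight sigma mixes the two terms
sigma = 0.3
p_safe_mixed = (1 - sigma) * p_safe_asocial + sigma * p_safe_conf

# Because the conformist term (0.9) is below the asocial softmax term
# (~0.96), the mixed probability of choosing safe drops, i.e., the agent
# samples the risky option more often despite a safe-choosing majority.
assert p_safe_conf < p_safe_asocial > p_safe_mixed
```

The direction of the effect flips only if the conformist term exceeds the asocial safe preference, which is the regime where strong conformity could instead amplify the bias.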

In this study the authors investigated the effect of social influence on individuals' risk aversion in a 2-armed bandit task. Using a reinforcement learning model and a dynamic model they showed that social influence can in fact diminish risk aversion. The authors then conducted a series of online experiments and observed the same effect in their experimental data as well. The research question is timely, and the modelling has been done carefully. However, I have some comments and concerns about the interpretations of their findings.

– My first question is do the participants copy others because the other risky option sounds better in terms of reward or because they copy others just because being in alignment with others is rewarding? This brings us to the distinction between informational and normative influence. For example, a recent study showed that copying others is not necessarily motivated by maximising accuracy (Mahmoodi et al. 2018, see also Cialdinin and Goldstein 2004). In their experimental data, the authors found that participants do not copy others (choosing the risky options) as much as they should do. Does it suggest that their conformity toward others cannot be fully explained by informational motives (where the aim of conformity is to maximise payoff/accuracy). I suggest that the authors discuss each of these possibilities and then explain to which of these two types of influence their findings belong to.

We agree that these two different motivations for conformity might both have played a role in our experimental setup, although we did not explicitly distinguish these factors in our theoretical development. We have discussed this point further and edited some text in the Discussion, as shown in our response to Reviewer 1 above.

– An earlier study showed that people's decisions become riskier when they make decisions with others (Bault et al. PNAS 2011). Could this explain the findings that are presented in this paper? Can the models distinguish between these two types of change in behaviour? I strongly suggest the authors to discuss the Bault et al. paper and discuss how their findings deviate from this study.

We have added text explaining the relationship between our study and other studies using description-based gambling tasks such as Bault et al. (2011), as shown in our response to reviewer 2 (see above). In general, Bault et al. (2011) focuses on a description-based task in which individuals can access the profiles of the gambles, whereas our focus is on experience-based decision making, in which information sampling through choices is crucial. The fact that previous human collective risky decision-making studies have been dominated by description-based gambling seems to account for the ostensible gap between the maladaptive collective illusion reported in human conformity studies and the collective intelligence documented in animal conformity studies. Since information sampling through experience is the crucial factor in our results, the rescue effect would never emerge if we used description-based tasks.

Another key difference between our model and Bault et al. (2011) is whether others’ payoff information was available. Bault et al. focused on a situation in which participants could see others’ payoffs, hence assuming richer social information transmission than is assumed by our frequency-based social learning model. The implications of this difference are discussed in detail as follows:

– (Lines 600 – 609): Information about others’ payoffs might also be available in addition to inadvertent social frequency cues in some social contexts (Bault et al., 2011), especially with the aid of online communication tools or benevolent pedagogical acts from others. Although communicative acts may transfer information about behavioural alternatives that one has never tried before and may inform about forgone payoffs from other alternatives, which could mitigate the hot stove effect (Denrell, 2007; Yechiam and Busemeyer, 2006), it may further amplify the suboptimal decision bias if information senders, despite their cooperative motivation, selectively filter out some pieces of information they think are redundant (Moussaïd et al., 2015).

– In one section the authors show that reducing heterogeneity in groups undermines group performance. This brought to my attention a study (Lorenz et al. PNAS 2011) which suggested that social influence can undermine the wisdom of crowds by reducing the heterogeneity of opinions. It seems that the authors are presenting the same phenomenon as that suggested by Lorenz and colleagues. I suggest that the authors cite that study and discuss how their results are related to it and whether their findings broaden our understanding of the effect of heterogeneity on collective performance.

Thank you for asking us to clarify the relation to this important study. We now explain explicitly why the collective rescue effect we find cannot be explained by the monotonic relationship between diversity and the wisdom of crowds (as occurred in Lorenz et al., 2011). The following is the discussion that we added:

– (Lines 505 – 513): Neither the averaging process of diverse individual inputs nor the speeding up of learning could account for the rescue effect. The individual diversity in the learning rate (α) was beneficial for the group performance, whereas that in the social learning weight (σ) undermined the average decision performance, which could not be explained simply by a monotonic relationship between diversity and the wisdom of crowds (Lorenz et al., 2011). Self-organisation through collective behavioural dynamics emerging from experience-based decision making must be responsible for this seemingly counter-intuitive phenomenon of collective rescue.

– Nothing can be found about the model in the main text. Similarly, some of the terms are not even defined before they are used in the main text. For example, the term "asocial learning" is only defined in the Figure 2 caption. I suggest that the authors briefly explain the model and the key terms in the main text before presenting the result. I also strongly suggest that the authors mention in the main text that the detail of the model is presented in the methods.

We now include an ‘Agent-Based Model’ section right after the Introduction (lines 94 – 196), explaining the details of both the task setups and the reinforcement learning models. We introduce the term ‘asocial learning’ in this new section as follows:

– (Lines 188 – 196): Note that, when σ = 0, there is no social influence, and the decision maker is considered as an asocial learner. It is also worth noting that, when σ = 1 with θ > 0, individual choices are assumed to be contingent fully upon majority's behaviour, which was assumed in some previous models of strong conformist social influences in sampling behaviour (Denrell and LeMens, 2016). Our model is a natural extension of both the asocial reinforcement learning and the model of ‘extreme conformity’, as these conditions can be expressed as a special case of parameter combinations. We will discuss the implications of this extension in the Discussion. The descriptions of the parameters are shown in Table 1.
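For concreteness, the decision rule described in this quoted passage, an asocial value-based choice mixed with frequency-dependent copying, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact implementation; the function name, the example numbers, and the softmax form of the asocial component are our own assumptions:

```python
import math

def choice_prob_safe(q_safe, q_risky, n_safe, n_risky, sigma, theta, beta=1.0):
    """P(choose safe) as a mixture of an asocial softmax over learned
    Q-values and frequency-dependent (conformist) social influence.
    sigma: copying weight; theta: conformity exponent."""
    asocial = math.exp(beta * q_safe) / (math.exp(beta * q_safe) + math.exp(beta * q_risky))
    social = n_safe ** theta / (n_safe ** theta + n_risky ** theta)
    return (1.0 - sigma) * asocial + sigma * social

# sigma = 0: pure asocial reinforcement learning (no social influence)
p_asocial = choice_prob_safe(1.0, 0.5, n_safe=3, n_risky=7, sigma=0.0, theta=2.0)
# sigma = 1: 'extreme conformity', driven entirely by the majority
p_conform = choice_prob_safe(1.0, 0.5, n_safe=3, n_risky=7, sigma=1.0, theta=2.0)
print(round(p_asocial, 3), round(p_conform, 3))  # → 0.622 0.155
```

As the two boundary cases show, σ = 0 ignores the group entirely, while σ = 1 with θ > 0 follows the (here risky-leaning) majority regardless of the learned Q-values.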

We mention that the details of the dynamics model and those of the online experiments are presented in the Method as follows:

– (Line 329): The full details of this dynamics model are shown in the Method and Table 3.

– (Lines 436 – 439): The experimental task was basically a replication of the agent-based model described above, although the parameters of the bandit tasks were different (see the Method for the details of the experimental procedures; Supplementary Figure 11).

The introduction reads: “social influence does not mindlessly increase risk seeking; instead, it may work only when to do so is adaptive.” I believe this sentence is vague in its current form. I suggest that the authors elaborate on it, especially on the last part (i.e., “to do so is adaptive”).

To clarify this point, we have changed the sentence as follows:

– (Lines 208 – 211): Interestingly, such a switch to risk seeking did not emerge when risk aversion was actually optimal (Supplementary Figure 9), suggesting that social influence does not always increase risk seeking; instead, the effect seems to be more prominent especially when risk seeking is beneficial in the long run.

Reviewer #2:

The paper considers an interesting puzzle. While most psychological studies of conformity tend to focus on negative effects, animal research highlights positive effects of conformity. The current analysis tries to clarify this apparent puzzle by clarifying one positive effect of conformity: Reduction of the hot stove effect that impairs maximization when taking risk is optimal.

The paper can be improved by building on previous efforts (e.g., Denrell and Le Mens, 2007) to clarify the impact of social influence on the hot stove effect. It would also be good to try to simplify the model, and use the same tasks in the theoretical analysis and the experimental study.

We agree with the reviewer’s point that our theory could become more impactful by relating it to previous models such as Denrell & Le Mens (2007; 2016). We have explained the critical difference between our model and the previous model, highlighting how our model can be considered a natural extension of previous conformity models. Notably, our model includes the cases explored by Denrell & Le Mens (2016) as an extreme setting of the social learning parameters where individual decision making is regulated fully by conformist social influence (that is, σ = 1 and θ > 1 for all individuals). Although the minor technical details of our model and theirs are not identical, we were indeed able to replicate the pattern they found (i.e., the collective illusion caused by copying the majority’s behaviour), especially when the social learning parameters (i.e., σ and θ) were very high. The texts we added are as follows:

–(Lines 188 – 196): “Note that, when σ = 0, there is no social influence, and the decision maker is considered as an asocial learner. It is also worth noting that, when σ = 1 with θ > 0, individual choices are assumed to be contingent fully upon majority's behaviour, which was assumed in some previous models of strong conformist social influences in sampling behaviour (Denrell & LeMens, 2016). Our model is a natural extension of both the asocial reinforcement learning and the model of ‘extreme conformity’, as these conditions can be expressed as a special case of parameter combinations. We will discuss the implications of this extension in the Discussion. The descriptions of the parameters are shown in Table 1.”

To keep the model as simple as possible, we have added only two parameters for the frequency-based social learning processes, namely, σ (copying weight) and θ (conformity exponent). Previous studies have established that these two processes (i.e., the rate of social learning and the strength of conformity) affect collective dynamics differently (e.g., Kandler & Laland, 2013; Toyokawa et al., 2019). Therefore, we must consider these two parameters explicitly. We could have made our model more complex and more realistic by, for instance, considering temporally changing social influences (Toyokawa et al., 2019; Deffner et al., 2020), which we believe is worth exploring in future studies. However, we aimed to limit our analysis to the simplest case so as to connect to the literature on reinforcement learning and the hot stove effect (Denrell, 2007).

The discrepancy between our theoretical model and the online experimental tasks has now been resolved by additional simulations shown in Supplementary Figure 11 (Figure 1 – figure supplement 5). Here, we have shown that the rescue effect emerges robustly across the different settings of the bandit tasks used in the online experiment. The reason we focused on the Gaussian distribution task in the main result of the Agent-Based Model section was mathematical tractability. The Gaussian task is theoretically well established, and its analytical solution is available (Denrell, 2007), which made our findings much clearer because we can be sure that the performance of social learners truly deviates from the analytical solution for asocial reinforcement learners’ performance (Figure 1).
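The hot stove effect at issue here can be reproduced in a few lines. The following minimal simulation is our own sketch, not the paper's code: the payoff settings (safe pays 1.0 for sure; risky pays Gaussian with a higher mean of 1.2 and sd 1.5) and the parameter values are arbitrary illustrative choices. A learner with a high susceptibility α(β + 1) takes the favourable risky option less often than a low-susceptibility learner:

```python
import math
import random

def risky_choice_fraction(alpha, beta, trials=150, rng=None):
    """One asocial softmax learner on a two-armed task: 'safe' pays 1.0
    for sure; 'risky' pays Gaussian(mean=1.2, sd=1.5), i.e. a positive
    risk premium. Returns the fraction of risky choices in one run."""
    rng = rng or random.Random()
    q = {"safe": 0.0, "risky": 0.0}
    n_risky = 0
    for _ in range(trials):
        p_risky = 1.0 / (1.0 + math.exp(-beta * (q["risky"] - q["safe"])))
        choice = "risky" if rng.random() < p_risky else "safe"
        payoff = rng.gauss(1.2, 1.5) if choice == "risky" else 1.0
        q[choice] += alpha * (payoff - q[choice])  # Rescorla-Wagner update
        n_risky += choice == "risky"
    return n_risky / trials

rng = random.Random(1)
# High susceptibility alpha*(beta+1): unlucky risky draws suppress further
# sampling of the risky arm, so its value is rarely corrected upward.
hot = sum(risky_choice_fraction(0.6, 6.0, rng=rng) for _ in range(500)) / 500
# Low susceptibility: slower forgetting, weaker bias.
cool = sum(risky_choice_fraction(0.1, 6.0, rng=rng) for _ in range(500)) / 500
print(hot, cool)
```

The asymmetry driving the effect is visible in the update rule: a bad risky payoff both lowers the risky Q-value and, through the softmax, reduces the chance of ever resampling and correcting it.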

One interesting open question involves the impact of the increase in the information available today (in social networks like Facebook) concerning the behavior of other individuals. I think that the current analysis predicts an increase in risk taking.

We have added some discussion of this issue in the Discussion section as follows:

– (Lines 600 – 609): “Information about others’ payoffs might also be available in addition to inadvertent social frequency cues in some social contexts (Bault et al., 2011), especially with the aid of online communication tools or benevolent pedagogical acts from others. Although communicative acts may transfer information about behavioural alternatives that one has never tried before and may inform about forgone payoffs from other alternatives, which could mitigate the hot stove effect (Denrell, 2007; Yechiam and Busemeyer, 2006), it may further amplify the suboptimal decision bias if information senders, despite their cooperative motivation, selectively filter out some pieces of information they think are redundant (Moussaïd et al., 2015).”

The paper's main shortcoming is the fact that it is difficult to understand how it adds to the observations presented in Denrell and Le Mens (2007), and more recent research by Le Mens. It is possible that the authors can address this shortcoming by clarifying the difference between pure conformity (or imitation) and the impact of social influence examined by Le Mens and his co-authors.

Denrell and Le Mens (2007, 2016) are indeed very relevant to our topic. We cite these two papers in our revised manuscript. Their 2007 paper considered the opinion dynamics of a pair of individuals, while their 2016 paper extended it to multiple players (n ≥ 2), which is more relevant to our model. The most crucial difference between their 2016 model and ours is that, whilst they considered only a very strong conformity bias whereby individual choices were determined fully by other people’s opinion states, we have considered a wider range of conformist social influences, from extremely weak (σ = 0; asocial reinforcement learning) to extremely strong (σ = 1; akin to the strong conformity assumed in Denrell and Le Mens (2016)). As we showed in our results, this relaxation, allowing intermediate levels of conformist social influence in decision making, is the necessary condition for generating the collective rescue effect. To clarify this point, we have modified several texts as follows:

– (Lines 72 – 82): “How, if at all, can group-living animals improve collective decision accuracy while suppressing the potentially deleterious constraint of decision-making biases through trial-and-error learning? One of the strong candidates for explaining this gap is the fact that studies in human social learning in risky decision making have focused only on either the description-based gambles (Chung et al., 2015; Bault et al., 2011; Suzuki et al., 2016; Shupp and Williams, 2008) or extreme conformity where individual choices are regulated fully by others’ behaviour (Denrell and Le Mens, 2007, 2016), but not on experience-based situations where both individual and social learning affect behavioural outcomes, a form of decision making widespread in group-living animals and humans (Hertwig and Erev, 2009; Camazine et al., 2001; Toyokawa et al., 2019).”

– (Lines 188 – 196): “Note that, when σ = 0, there is no social influence, and the decision maker is considered as an asocial learner. It is also worth noting that, when σ = 1 with θ > 0, individual choices are assumed to be contingent fully upon majority's behaviour, which was assumed in some previous models of strong conformist social influences in sampling behaviour (Denrell and LeMens, 2016). Our model is a natural extension of both the asocial reinforcement learning and the model of ‘extreme conformity’, as these conditions can be expressed as a special case of parameter combinations. We will discuss the implications of this extension in the Discussion. The descriptions of the parameters are shown in Table 1.”

– (Lines 288 – 292): “This was because individuals with lower σ could benefit less from social information, while those with higher σ relied so heavily on social frequency information that behaviour was barely informed by individual learning, resulting in maladaptive herding or collective illusion (Denrell and Le Mens, 2016; Toyokawa et al., 2019).”

– (Lines 495 – 504): “We have demonstrated that frequency-based copying, one of the most common forms of social learning strategy, can rescue decision makers from committing to adverse risk aversion in a risky trial-and-error learning task, even though a majority of individuals are potentially biased towards suboptimal risk aversion. Although an extremely strong reliance on conformist influence can raise the possibility of getting stuck on a suboptimal option, consistent with the previous view of herding by conformity (Raafat et al., 2009; Denrell and Le Mens, 2016), the mitigation of risk aversion and the concomitant collective behavioural rescue could emerge in a wide range of situations under modest use of conformist social learning.”

– (Lines 557 – 560): “Such a synergistic interaction between positive and negative feedback could not be predicted by the collective illusion models where individual decision making is determined fully by the majority influence because no negative feedback would be able to operate.”

Another shortcoming involves the difference between the choice task analyzed in the theoretical analysis and the task examined in the experiment. The theoretical analysis focuses on normal distributions, and the experiment focuses on asymmetric bimodal distributions. The authors suggest that they chose to switch to asymmetric bimodal distributions because the hot stove effect exhibited by human subjects in the case of normal distributions is not strong. If this is the case, it would be good to adjust the theoretical model and use a model that better captures human behavior.

We conducted additional simulations using the same bandit task setups as in the online experiment, confirming that the results do not change across conditions. Please find the details of this point in our reply to reviewer #2 above.

A third shortcoming involves the complexity of the theoretical model. Since this model is only used to demonstrate that conformity can reduce the hot stove effect, and is not supposed to capture the exact magnitude of the two effects, I could not understand why it includes so many parameters. For example, it would be nice to add only one parameter to the basic reinforcement learning model. If more parameters are needed, it would be good to show why they are needed.

As discussed in our reply above, we agree that the model should be kept as simple as possible. We believe that the current model is one of the simplest forms that can capture both the reliance on social influence (captured by σ) and the strength of conformity (captured by θ) separately.

Reviewer #3:

The authors use reinforcement learning and dynamic modeling to formalize the favorable effects of conformity on risk taking, demonstrating that social influence can produce an adaptive risk-seeking equilibrium at the population level. The work provides a rigorous analysis of a paradoxical interplay between social and economic choice.

Conformity is commonly attributed to either an intrinsic reward of group membership, or to inferences about the optimality of others' behavior (i.e., normative vs. informational). Neither of these aspects of conformity is addressed here, which limits the interpretability of the results. For example, if there is an intrinsic reward associated with majority alignment, that should contribute to the reinforcement of such decisions; moreover, inferences about the optimality of observed behavior likely change from early trials, in which others can be assumed to simply explore, to later trials, in which the decisions of others may be indicative of their success. The work would be more impactful if it considered how these factors might affect the potential for collective rescue.

An interesting question is whether a substantial payoff contingent on choosing a risky option may serve to reinforce the act of risk taking itself, and how such processes might propagate social influence across environments.

I suspect that the paper was initially written with the Methods following the Introduction, and with the Methods being subsequently moved to the end without much additional editing. As a result, very few of the variables and concepts (e.g., conformist influence, copying weight, positive/negative feedback) are defined in the main text upon first mention, which makes for extremely onerous reading.

We have substantially revised the structure of the manuscript, now placing the ‘Agent-Based Model’ section between the Introduction and the Results. In the current form, all key parameters (namely, the conformist influence and the copying weight) as well as other important concepts (e.g., positive feedback) are defined when they first appear:

– (Line 61 – 64): “Given that behavioural biases are ubiquitous and learning animals rarely escape from them, it may seem that conformist social influences may often lead to suboptimal herding or collective illusion through recursive amplification of the majority influence (i.e., positive feedback)”

Also, we deleted the term ‘negative feedback’ from the Introduction; it now first appears at line 412, where its meaning is clearer:

– (Lines 410 – 413): “Crucially, the reduction of P_s leads to further reduction of P_s itself through decreasing N_s, thereby further decreasing the social influence supporting the safe option, N_s^θ / (N_r^θ + N_s^θ). Such a negative feedback process weakens the concomitant risk aversion.”
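The negative feedback in this quoted passage can be illustrated numerically. A minimal sketch (our own; the group size of 10 and θ = 2 are arbitrary illustrative choices) shows that, with a conformity exponent above 1, losing a few safe-choosers cuts the social support for the safe option disproportionately:

```python
def social_support_safe(n_safe, n_risky, theta):
    """Frequency-dependent social influence favouring the safe option:
    N_s**theta / (N_s**theta + N_r**theta)."""
    return n_safe ** theta / (n_safe ** theta + n_risky ** theta)

# As the number of safe-choosers in a group of 10 falls from 6 to 4,
# the conformist pull toward 'safe' collapses faster than linearly.
for n_safe in (6, 5, 4):
    print(n_safe, round(social_support_safe(n_safe, 10 - n_safe, theta=2.0), 3))
# → 6 0.692 / 5 0.5 / 4 0.308
```

A 20-point drop in the safe majority (60% → 40%) thus removes almost 40 points of social support, which is why a weak minority influence can keep eroding the inherited bias.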

Cases where the risky option yields a lesser mean payoff, producing a potentially detrimental social influence, should be given full weight in the main text, and should have been included in the behavioral study. Generally, the discrepancy between the modeling and behavioral results is a bit disappointing. It is unclear why the behavioral experiment was not designed so as to create the most relevant conditions.

To address this important point, we have conducted an additional series of experiments in which the risky option yields a smaller mean payoff than the safe alternative (namely, the negative risk premium [NRP] task), and report on it both in the theoretical part (see Supplementary Figure 18 [Figure 6 – figure supplement 2]) and in the experimental part (Figure 6 and Table 2; see also Supplementary Figure 19). In general, the model prediction was supported by the data from the NRP condition, suggesting that social influences could be slightly detrimental in such a condition because the promotion of exploration increased suboptimal risk taking. Nevertheless, the extent to which risk taking was increased by social influence in the NRP task was smaller than the extent to which optimal risk taking was increased in the positive risk premium tasks. Also, a previous study found that risk and reward are often positively correlated in many real-life circumstances (Pleskac and Hertwig, 2014), suggesting that situations where social influence is detrimental might be less common than situations where it is beneficial. Therefore, our conclusion that conformist social learning is more likely to promote adaptive risk taking should hold widely.

To highlight the results from these additional analyses and experiments, we have modified several parts of our texts as follows:

– (Lines 434 – 445): “To investigate whether the collective rescue effect can operate in reality, we conducted a series of online behavioural experiments using human participants. The experimental task was basically a replication of the agent-based model described above, although the parameters of the bandit tasks were different (see the Method for the details of the experimental procedures; Supplementary Figure 11). One hundred eighty-five adult human subjects performed the individual task without social interactions, while 400 subjects performed the task collectively with group sizes ranging from 2 to 8 (Supplementary Figures 17 and 19). We used four different settings for the multiarmed bandit tasks. Three of them were positive risk premium (PRP) tasks that had an optimal risky alternative, while the other was a negative risk premium (NRP) task that had a suboptimal risky alternative (see Methods).”

– (Lines 446 – 455): “The behavioural results with statistical model fitting confirmed the predictions of the theoretical model. In the PRP tasks, subjects who had a larger estimated value of the susceptibility to the hot stove effect (αi(βi + 1)) were less likely to choose the risky alternative, whereas those who had a smaller value of αi(βi + 1) had a higher chance of choosing the risky alternative (Figure 6a–c), consistent with the theory of the hot stove effect (Figure 2, Supplementary Figure 11). In the NRP task, individuals tended to choose the favourable safe option more often than they chose the risky option in a range of the susceptibility value αi(βi + 1) (Figure 6d), which was also consistent with the model prediction (Supplementary Figure 18).”

– (Lines 480 – 493): “In the NRP task, conformist social influence undermined the proportion of choosing the optimal safe option and increased adverse risk seeking, although a complete switch of the majority's behaviour to the suboptimal risky option did not happen (Figure 6d; Supplementary Figure 19). Such promotion of suboptimal risk taking was particularly prominent when the susceptibility value αi(βi + 1) was large. Nonetheless, the extent to which risk taking was increased in the NRP condition was smaller than that in the PRP tasks, consistent with our model prediction that conformist social learning is more likely to promote favourable risk taking (Supplementary Figure 18). It is worth noting that the estimated learning rates (i.e., mean αi = 0.48) in the NRP task were larger than those in the PRP tasks (mean αi < 0.21; Table 2), making social learning particularly deleterious when risk taking is suboptimal (Supplementary Figure 18). In the Discussion, we will discuss the effect of the experimental setting on human learning strategies, which can be explored in future studies.”

– (Lines 561 – 571): “Through online behavioural experiments using a risky multi-armed bandit task, we have confirmed our theoretical prediction that simple frequency-based copying could mitigate risk aversion that many individual learners, especially those who had higher learning rates and/or lower exploration rates, would have exhibited as a result of the hot stove effect. The mitigation of risk aversion was also observed in the NRP task, in which social learning slightly undermined the decision performance. However, because riskiness and expected reward are often positively correlated in a wide range of decision-making environments in the real world (Pleskac and Hertwig, 2014), the detrimental effect of reducing optimal risk aversion when risk premium is negative could be negligible in many ecological circumstances, making the conformist social learning beneficial in most cases.”

My greatest concern is that the work does not integrate properly with its theoretical and empirical context. Additional analyses assessing the relative contributions of normative and informational conformity to socially induced risk-seeking would be helpful.

We have conducted additional simulations with bandit task settings identical to the experimental tasks (see Supplementary Figure 11 (Figure 1 – figure supplement 5) for the PRP tasks and Supplementary Figure 18 (Figure 6 – figure supplement 2) for the NRP task). We have confirmed that our theoretical results hold robustly across the range of different task settings, strengthening the implications of the findings.

We are fully aware of the important distinction between normative and informational motivations underlying the use of social information. We have explicitly stated at lines 159 – 167 that we limited our analysis to the informational context, and we suggest some future directions in the Discussion at lines 572 – 586, as follows:

– (Line 159 – 167): “A payoff realised was independent of others’ decisions and it was drawn solely from the payoff probability distribution specific to each alternative, thereby we assume neither direct social competition over the monetary reward (Giraldeau and Caraco, 2000) nor normative pressures towards majority alignment (Cialdini and Goldstein, 2004; Mahmoodi et al., 2018). The value of social information was assumed to be only informational (Nakahashi, 2012). Nevertheless, our model could apply to the context of normative social influences, because what we assumed here was modifications in individual choice probabilities due to social influences, irrespective of underlying motivations of conformity.”

– (Lines 572 – 586): “The weak reliance on social learning, which affected only about 15% of decisions, was unable to facilitate strong positive feedback. The little use of social information might have been due to the lack of normative motivations for conformity and to the stationarity of the task. In a stable environment, learners could eventually gather enough information as trials proceeded, which might have made them less curious about information gathering including social learning (Rendell et al., 2010). In reality, people might use more sophisticated social learning strategies whereby they change the reliance on social information flexibly over trials (Deffner et al., 2020, Toyokawa et al., 2017, Toyokawa et al., 2019). Future research should consider more strategic use of social information, and will look at the conditions that elicit heavier reliance on the conformist social learning in humans, such as normative pressures for aligning with majority, volatility in the environment, time pressure, or an increasing number of behavioural options (Muthukrishna et al., 2015), coupled with larger group sizes (Toyokawa et al., 2019).”

[Editors’ note: what follows is the authors’ response to the second round of review.]

Essential revisions:

The manuscript has been improved, and Reviewer 2 recommends acceptance at this point, but Reviewer 3 has some remaining concerns, summarized below. We invite you to address all remaining concerns in a second round of revisions. Make sure to include point-by-point replies to each of Reviewer 3's recommendations.

1) Although the organization and writing are improved, the manuscript has some ways to go before it is ready for publication. For example, the basic aims and methods should be stated in one or two sentences at the end of the first, or at most second, paragraph of the introduction, giving the reader a clear sense of where things are going. Moreover, the "Agent-based model" section should be shortened (to include only what is needed to conceptually understand the model, leaving details for a table and the methods) and better integrated with the introduction, rather than inserted as (what appears to be) a super-section.

We thank the editors and the reviewers for this valuable suggestion. We fully agree with the value of giving readers a clear sense of the article’s structure at the beginning of the Introduction. To do this, while addressing the other points related to the Introduction (see below), we have revised the Introduction sentence by sentence and have made the central question and background of this paper much clearer in the first two paragraphs. In particular, the aim of this paper is summarised at the end of the second paragraph of the Introduction:

– Lines 60 – 63: “A theory that incorporates dynamics of trial-and-error learning and the learnt risk aversion into social learning is needed to understand the conditions under which collective intelligence operates in risky decision making.”

Also, we have deleted the subsection “Agent-based model” and integrated its contents into the end of the Introduction and the beginning of the Result. In both places, we defined technical terms as soon as they first appeared, and verbally described the concept and assumptions of the computational model. Please see the subsections “The decision-making task”, “The baseline model”, and “The conformist social influence model” in the Result section.

2) Just as the intro should include a concise description of the model, it should highlight the online experiments, and how they relate to the modeling.

Highlighting the online experiment and articulating the relationship between the experiment and models in the Introduction is a wonderful suggestion. We have modified the Introduction to make the aim of the experiment clear. The modified text is as follows:

– Lines 122 – 130: “Finally, to investigate whether the assumptions and predictions of the model hold in reality, we conducted a series of online behavioural experiments with human participants. The experimental task was basically a replication of the task used in the agent-based model described above, although the parameters of the bandit tasks were modified to explore wider task spaces beyond the simplest two-armed task. Experimental results show that the human collective behavioural pattern was consistent with the theoretical prediction, and model selection and parameter estimation suggest that our model assumptions fit well with our experimental data.”

3) The result section should start with a paragraph outlining the various hypotheses and corresponding analyses (i.e., a "roadmap" for the section).

We thank the editors for this insightful suggestion. Sketching a roadmap before showing detailed results is a great idea. To guide readers smoothly from the Introduction to the Result section, we have outlined an overview of our analysis at the end of the Introduction:

– Lines 109 – 132: “In the study reported here, we firstly examined whether a simple form of conformist social influence can improve collective decision performance in a simple multi-armed bandit task using an agent-based model simulation. We found that promotion of favourable risk taking can indeed emerge across different assumptions and parameter spaces, including individual heterogeneity within a group. This phenomenon occurs thanks, apparently, to the non-linear effect of social interactions, namely, collective behavioural rescue. To disentangle the core dynamics behind this ostensibly self-organised process, we then analysed a differential equation model representing approximate population dynamics. Combining these two theoretical approaches, we identified that it is a combination of positive and negative feedback loops that underlies collective behavioural rescue, and that the key mechanism is a promotion of information sampling by modest conformist social influence.

Finally, to investigate whether the assumptions and predictions of the model hold in reality, we conducted a series of online behavioural experiments with human participants. The experimental task was basically a replication of the task used in the agent-based model described above, although the parameters of the bandit tasks were modified to explore wider task spaces beyond the simplest two-armed task. Experimental results show that the human collective behavioural pattern was consistent with the theoretical prediction, and model selection and parameter estimation suggest that our model assumptions fit well with our experimental data. In sum, we provide a general account of the robustness of collective intelligence even under systematic risk aversion and highlight a previously overlooked benefit of conformist social influence.”

As we believe that repeating the general outline at the beginning of the Result section would be redundant, we have instead placed short introductory sentences at the beginning of each subsection of the Result section. In particular, our experimental hypotheses are now clearly stated at the beginning of the “Experimental demonstration” subsection as follows:

– Lines 507 – 512: “On the basis of both the agent-based simulation (Figure 1 and Supplementary Figure 9) and the population dynamics (Figure 5 and Supplementary Figure 16), we hypothesised that conformist social influence promotes risk seeking to a lesser extent when the RP is negative than when it is positive. We also expected that whether the collective rescue effect emerges under positive RP settings depends on learning parameters such as αi(βi+1) (Supplementary Figure 11d–f).”

4) Please quantify the performance of your model relative to others with formal comparisons (e.g., Bayesian Model Selection).

We really appreciate this insightful suggestion. The formal model comparison using Bayesian model selection based on WAIC has much strengthened our finding. We have now included both the model recovery check and the model comparison result in the new Supplementary Figure 18 (Figure 6 —figure supplement 2) on page 54. The successful model recovery ensured that the hierarchical Bayesian model-fitting method could reliably differentiate between the candidate models, and the model comparison confirmed that the decision-biasing model used in our main analysis was favoured. We have added text describing this finding as follows:

– Lines 277 – 286: “Further, the conclusion still held for an alternative model in which social influences modified the belief-updating process (the value-shaping model; Najar et al., 2020) rather than directly influencing the choice probability (the decision-biasing model) as assumed in the main text thus far (see Supplementary Methods; Supplementary Figure 8). One could derive many other more complex social learning processes that may operate in reality; however, the comprehensive search of possible model space is beyond the current interest. Yet, decision biasing was found to fit better than value shaping with our behavioural experimental data (Supplementary Figure 18), leading us to focus our analysis on the decision-biasing model.”

– Lines 513 – 517: “The Bayesian model comparison (Stephan et al., 2009) revealed that participants in the group condition were more likely to employ decision-biasing social learning than either asocial reinforcement learning or the value-shaping process (Supplementary Figure 18). Therefore, in the following analysis we focus on results obtained from the decision-biasing model fit.”

– Lines 950 – 956: “We compared the baseline reinforcement learning model, the decision biasing model, and the value-shaping model (see Supplementary Methods) using Bayesian model selection (Stephan et al., 2009). The model frequency and exceedance probability were calculated based on the Widely Applicable Information Criterion (WAIC) values for each subject (Watanabe and Opper, 2010). We confirmed accurate model recovery by simulations using our task setting (Supplementary Figure 18).”
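For readers unfamiliar with WAIC, it can be computed from posterior draws of the pointwise log-likelihood roughly as follows. This is a generic sketch of Watanabe’s standard formula, not the authors’ actual code:

```python
import numpy as np

def waic(loglik):
    """WAIC from posterior draws of the pointwise log-likelihood.
    loglik has shape (n_posterior_samples, n_observations).
    WAIC = -2 * (lppd - p_waic); lower values indicate better
    expected predictive accuracy (Watanabe, 2010)."""
    # log pointwise predictive density: log of the posterior-mean likelihood
    lppd = np.log(np.mean(np.exp(loglik), axis=0)).sum()
    # effective number of parameters: sum of per-observation variances
    p_waic = np.var(loglik, axis=0, ddof=1).sum()
    return -2.0 * (lppd - p_waic)
```

Per-subject WAIC values like these can then feed into random-effects Bayesian model selection (model frequencies and exceedance probabilities) as in Stephan et al. (2009).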

5) Please quantify all claims of associations with effect sizes and clearly justify all parameter cut-offs/values.

6) Streamline figures and included predicted/observed result plots wherever possible.

We thank the editors and the reviewers for this great suggestion. Having conducted an additional analysis using a standard GLMM with hierarchical Bayesian estimation, we have quantified all the empirical findings with effect sizes accompanied by Bayesian credible intervals. Please see the new Table 3 (page 28) for the estimated coefficients of the GLMM, as well as the new Supplementary Figure 17 (Figure 6 —figure supplement 1; page 53) for predictions from the fitted GLMM. We have also removed the arbitrary parameter cut-offs and now show the effect of the copying weight in a continuous manner in Figure 6 (page 26).

To streamline the two different types of empirical analyses (the fitted computational model and the raw data with the fitted GLMM), we have separated them into the computational model prediction (Figure 6) and the GLMM regression on the experimental data (Supplementary Figure 17). The matched pattern between the two supports the conclusion that the fitted computational model was able to reproduce the participants’ actual behaviour. To highlight these points, we have added a new paragraph to the Result section as follows:

– Lines 548 – 560: “To quantify the effect size of the relationship between the proportion of risk taking and each subject’s best fit learning parameters, we analysed a generalised linear mixed model (GLMM) fitted with the experimental data (see Methods; Table 3). Within the group condition, the GLMM analysis showed a positive effect of σ on risk taking for every task condition (Table 3), which supports the simulated pattern. Also consistent with the simulations, in the positive RP tasks, subjects exhibited risk aversion more strongly when they had a higher value of αi(βi+1) (Supplementary Figure 17a–c). There was no such clear trend in data from the negative RP task, although we cannot make a strong inference because of the large width of the Bayesian credible interval (Supplementary Figure 17d). In the negative RP task, subjects were biased more towards the (favourable) safe option than subjects in the positive RP tasks (i.e., the intercept of the GLMM was lower in the negative RP task than in the others).”
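Schematically, the GLMM quoted above plausibly takes a form like the following, with a logistic link and subject-level random intercepts (the coefficient names γ and the random-effect shorthand here are ours, with per-task intercepts omitted for brevity; the exact specification is given in the manuscript’s Methods):

```latex
\operatorname{logit} \Pr(\text{risky choice by subject } i)
  = \gamma_0 + \gamma_1\,\sigma_i + \gamma_2\,\alpha_i(\beta_i + 1) + u_i,
\qquad u_i \sim \mathcal{N}(0, \tau^2)
```

Here σ_i is subject i’s copying weight and α_i(β_i + 1) is the susceptibility to the hot stove effect; a positive γ₁ corresponds to the reported positive effect of social influence on risk taking.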

Reviewer #3:

The authors showed that when individuals learn about how they should decide in a situation where they can choose between a risky and a safe option, they might overcome maladaptive biases (e.g. exaggerated risk-aversion when risk-taking would be beneficial) by conforming with group behaviour. Strengths include rigorous and innovative computational modeling, a weakness might be that the set-up of the empirical study did not actually widely provoke behavioral phenomena at question, e.g. social learning (which is arguably at the core of the research question). Even though I am reviewing a revised manuscript I would hope the authors find a way to further improve clarity in the presentation of their research question and results.

My main concern was that even in the revised version I read, I found the paper not as accessible as I think it should be for a wide readership of a journal like eLife; and often times I found that things you could communicate in a straightforward way are put too complicated/expressed very verbosely. When reading the previous reviews after having read the revised paper, I also got the feeling that there were some misunderstandings. You clarified these specific points well in your responses, I think, but the lack in clarity might be even more drastic with an interdisciplinary readership this journal aims at as compared to the experts the journal has recruited now for this review? I will try to give some examples below.

We thank the reviewer very much for this valuable suggestion. We fully agree that the previous manuscript was not very accessible to the wider audience we aim to reach. We have revised the manuscript substantially to improve its clarity, accessibility, and rigor. In particular, both the Introduction and the introductory paragraphs of the Result section have been rewritten to clearly articulate our research question and aims. In the following, we explain, point by point, how we have revised the manuscript in response to each of the reviewer’s concerns.

The introduction should, imho provide a general intro to the question of the paper and how you've arrived to ask that question, avoiding too much technical jargon. After having read the paper, I realized that the research question is pretty straight-forward (and interesting) and derived from 1/2 previous observations, but this didn't become clear on the very first read.

Just as an example, some of the first sentences are…

"One rationale behind this optimistic view might come from the assumption that individuals tend to prefer a behavioural option that provides larger net benefits in utility over those providing lower outcomes. Therefore, even though uncertainty makes individual decision-making fallible, statistical filtering through informational pooling may be able to reduce uncertainty by cancelling out such noise."

This is only 1 example (it is something I noted throughout the paper… and also came up in the previous round of reviews) where I think these 2 sentences require some if not a little more background in decision-making (net benefits, utility, uncertainty, noise, stat filtering, informational pooling) to be understandable.

We thank the reviewer for this valuable feedback. The sentence referred to here was in the first paragraph of the Introduction of the previous manuscript, which was indeed not very accessible to many interdisciplinary readers. Having restructured the Introduction, we believe that the aims and motivations behind this study are now clearer and self-explanatory. The first paragraph of the Introduction now reads as follows:

– Lines 29 – 46: “Collective intelligence, a self-organised improvement of decision making among socially interacting individuals, has been considered one of the key evolutionary advantages of group living (Camazine et al., 2001; Krause and Ruxton, 2002; Sumpter, 2006; Ward and Zahavi, 1973). Although what information each individual can access may be a subject of uncertainty, information transfer through the adaptive use of social cues filters such ‘noises’ out (Laland, 2004; Rendell et al., 2010), making individual behaviour on average more accurate (Hastie and Kameda, 2005; King and Cowlishaw, 2007; Simons, 2004). Evolutionary models (Boyd and Richerson, 1985; Kandler and Laland, 2013; Kendal et al., 2005) and empirical evidence (Toyokawa et al., 2014, 2019) have both shown that the benefit brought by the balanced use of both socially and individually acquired information is usually larger than the cost of possibly creating an alignment of suboptimal behaviour among individuals by herding (Bikhchandani et al., 1992; Giraldeau et al., 2002; Raafat et al., 2009). This prediction holds as long as individual trial-and-error learning leads to higher accuracy than merely random decision making (Efferson et al., 2008). Copying a common behaviour exhibited by many others is adaptive if the output of these individuals is expected to be better than uninformed decisions.”

Other terminology like e.g. collective illusion, opportunity costs, description-based vs. experienced-based risk-taking paradigms, frequency-based influences, would be nice to be either defined in the text or replaced by a more accessible description in the introduction.

I know it is sometimes hard to mentalize which terminology others not working on the same things might struggle with, but given that this is an interdisciplinary journal, might it perhaps make sense to ask a researcher friend who is not exactly working on this topic to give it a read?

We thank the reviewer so much for identifying these reader-unfriendly technical terms. We have defined both “collective illusion” and “frequency-based” where they first appear and have eliminated the other terms from the text. Please see the revised passages listed below. We have also asked several colleagues from different fields to read the manuscript, which we believe has made it much more accessible. The modified sentences are as follows:

– Lines 94 – 95: “a mismatch between the true environmental state and what individuals believed (’collective illusion’; Denrell and Le Mens, 2016).”

– Lines 65 – 71: “it may seem that social learning, especially the ’copy-the-majority’ behaviour (aka, ’conformist social learning’ or ’positive frequency-based copying’; Laland, 2004), whereby the most common behaviour in a group is disproportionately more likely to be copied (Boyd and Richerson, 1985), may often lead to maladaptive herding, because recursive social interactions amplify the common bias (i.e., a positive feedback loop; Denrell and Le Mens, 2007, 2016; Dussutour et al., 2005; Raafat et al., 2009).”
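The positive frequency-based copying rule quoted above can be sketched in a few lines of code. This is an illustrative sketch using the letter’s notation (conformity exponent θ), not the manuscript’s exact implementation:

```python
import numpy as np

def conformist_probs(freq, theta=2.0):
    """Probability of copying option j under positive frequency-dependent
    copying: freq[j]**theta / sum(freq**theta). With theta > 1, the majority
    option is copied disproportionately more often than its raw frequency
    (the 'copy-the-majority' rule)."""
    w = np.asarray(freq, dtype=float) ** theta
    return w / w.sum()
```

For example, if 3 of 4 demonstrators chose option A, its raw frequency is 0.75, but with θ = 2 the copying probability rises to 0.9, which is the disproportionate amplification that can fuel both informational cascades and, as the paper argues, collective rescue.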

"such a risk-taking bias constrained by the fundamental nature of learning may function independently from the adaptive risk perception (Frey et al., 2017), potentially preventing adaptive risk taking."

Unclear without knowing or looking up the Frey paper – shorten ("to be too risk-averse might be maladaptive in some contexts"?) or explain.

We fully agree that the sentence was unclear. Indeed, merely mentioning that risk aversion may be adaptive in some contexts (Real and Caraco, 1986; McNamara and Houston, 1992; Yoshimura and Clark, 1991), and that risk aversion may arise from different mechanisms (Frey et al., 2017), was not directly related to the focus of this paper. What we wanted to highlight in the second paragraph of the Introduction was the omnipresent possibility of risk aversion arising from reinforcement learning. In the revised version, therefore, we concentrate on this focal point and have deleted those tangential topics.

– Lines 47 – 63: “However, both humans and non-human animals suffer not only from environmental noise but also commonly from systematic biases in their decision making (e.g., Harding et al., 2004; Hertwig and Erev, 2009; Real, 1981; Real et al., 1982). Under such circumstances, simply aggregating individual inputs does not guarantee collective intelligence because a majority of the group may be biased towards suboptimization. A prominent example of such a potentially suboptimal bias is risk aversion that emerges through trial-and-error learning with adaptive information-sampling behaviour (Denrell, 2007; March, 1996). Because it is a robust consequence of decision making based on learning (Hertwig and Erev, 2009; Yechiam et al., 2006; Weber, 2006; March, 1996), risk aversion can be a major constraint of animal behaviour, especially when taking a high-risk high-return behavioural option is favourable in the long run. Therefore, the ostensible prerequisite of collective intelligence, that is, that individuals should be unbiased and more accurate than mere chance, may not always hold. A theory that incorporates dynamics of trial-and-error learning and the learnt risk aversion into social learning is needed to understand the conditions under which collective intelligence operates in risky decision making.”

Line 74-82 extremely long sentence – I think can easily be simplified? – "previous studies have neglected contexts where individuals learn about the environment both by own and others' experiences"

We agree that the sentence was too long. Because it relates to the central motivation behind our choice of model and question, we have elaborated it into two paragraphs as follows:

– Lines 83 – 99: “In this paper, we propose a parsimonious computational mechanism that accounts for the emerging improvement of decision accuracy among suboptimally risk-aversive individuals. In our agent-based model, we allow our hypothetical agents to compromise between individual trial-and-error learning and the frequency-based copying process, that is, a balanced reliance on social learning that has been repeatedly supported in previous empirical studies (e.g., Deffner et al., 2020; McElreath et al., 2005, 2008; Toyokawa et al., 2017, 2019). This is a natural extension of some previous models that assumed that individual decision making was regulated fully by others’ beliefs (Denrell and Le Mens, 2007, 2016). Under such extremely strong social influence, exaggeration of individual bias was always the case because information sampling was always directed towards the most popular alternative, often resulting in a mismatch between the true environmental state and what individuals believed (’collective illusion’; Denrell and Le Mens, 2016). By allowing a mixture of social and asocial learning processes within a single individual, the emergent collective behaviour is able to remain flexible (Aplin et al., 2017; Toyokawa et al., 2019), which may allow groups to escape from the suboptimal behavioural state.”

– Lines 100 – 108: “We focused on a repeated decision-making situation where individuals updated their beliefs about the value of behavioural alternatives through their own action–reward experiences (experience-based task). Experience-based decision making is widespread in animals that learn in a range of contexts (Hertwig and Erev, 2009). The time-depth interaction between belief updating and decision making may create a non-linear relationship between social learning and individual behavioural biases (Biro et al., 2016), which we hypothesised is key in improving decision accuracy in self-organised collective systems (Camazine et al., 2001; Sumpter, 2006).”

Line 84-89 is really long, too.

I would have been interested to learn more about the online experiments in the intro.

We thank the reviewer very much for pointing this out. We fully agree. In the revised version, we give a more accessible roadmap of the paper at the end of the Introduction, rather than a long single-sentence summary. In this roadmap, we describe the experiment in more detail and highlight the relationship between the theoretical models and the experiment. The revised paragraphs of the Introduction are as follows:

– Lines 109 – 132: “In the study reported here, we firstly examined whether a simple form of conformist social influence can improve collective decision performance in a simple multi-armed bandit task using an agent-based model simulation. We found that promotion of favourable risk taking can indeed emerge across different assumptions and parameter spaces, including individual heterogeneity within a group. This phenomenon occurs thanks, apparently, to the non-linear effect of social interactions, namely, collective behavioural rescue. To disentangle the core dynamics behind this ostensibly self-organised process, we then analysed a differential equation model representing approximate population dynamics. Combining these two theoretical approaches, we identified that it is a combination of positive and negative feedback loops that underlies collective behavioural rescue, and that the key mechanism is a promotion of information sampling by modest conformist social influence.

Finally, to investigate whether the assumptions and predictions of the model hold in reality, we conducted a series of online behavioural experiments with human participants. The experimental task was basically a replication of the task used in the agent-based model described above, although the parameters of the bandit tasks were modified to explore wider task spaces beyond the simplest two-armed task. Experimental results show that the human collective behavioural pattern was consistent with the theoretical prediction, and model selection and parameter estimation suggest that our model assumptions fit well with our experimental data. In sum, we provide a general account of the robustness of collective intelligence even under systematic risk aversion and highlight a previously overlooked benefit of conformist social influence.”

I do understand the reasoning of copy and pasting the agent Based Model description after having read the previous reviews, but it confused me when reading the article first; I think it needs to be integrated with Intro/Results section better (the first paragraph reads like intro/background still, but then it is about the authors' current study). I fear that readers don't know where they are in the paper at that point. (I much prefer journals with the Methods-Results order rather than the one eLife uses, as this would naturally circumvent this problem, but that's the challenge here, I reckon.)

We fully agree that the conceptual description of the method should be integrated into the Introduction and the Result section. In the revised manuscript, we have elaborated the concept of the model in as accessible a language as possible. In particular, the new subsections “The decision-making task” (page 5), “The baseline model” (page 7), “The conformist social influence model” (page 8), and “The simplified population dynamics model” (page 16) have been substantially revised to give conceptual verbal descriptions of the assumptions and formulation before the detailed results.

Results

Subheadings: Make clearer which is the section that describes simulations and which is the empirical section.

Thank you for this valuable suggestion. We have now fully separated the subheadings of the theoretical part from those of the experimental results, so that the empirical results appear only in the subsection “An experimental demonstration” (page 21).

Can you maybe start with reiterating in a structured way which parameters you set to which values and why in the simulation before describing the effects of it, how many trials you simulate (you start speaking of elongated time horizons but do not mention the original horizon length other than in the figure legend?) etc?

We have added the parameter values used in the simulations before describing the results. The revisions we made are as follows:

– Lines 146 – 147: “Unless otherwise stated, the total number of decision-making trials (time horizon) was set to T = 150 in the main simulations described below.”

– Line 301 – 307: “Individual values of a focal behavioural parameter were varied across individuals in a group. Other non-focal parameters were identical across individuals within a group. The basic parameter values assigned to non-focal parameters were α = 0.5, β = 7, σ = 0.3, and θ = 2, which were chosen so that the homogeneous group could generate the collective rescue effect. The groups’ mean values of the various focal parameters were matched to these basic values.”

I know the Najar work and I think it is cool that you can generalize your results also to a value-based framework, but I do not think readers that do not know the Najar study will be able to follow this at it is described now, which makes it more confusing than interesting. So, either elaborate what this means (accessible to non-initiated readers) or ban to the Supplement (would be a shame).

Would the value-based model fit the empirical data better?

Thank you so much for this valuable comment. As described in our response to the editors’ point (4), we have conducted a Bayesian model comparison and established that the decision-biasing model fits better than both the value-shaping model and the baseline reinforcement learning model. We verbally describe the value-shaping model in the following paragraph, while the full details are given in the Supplementary Methods.

– Lines 277 – 286: “Further, the conclusion still held for an alternative model in which social influences modified the belief-updating process (the value-shaping model; Najar et al., 2020) rather than directly influencing the choice probability (the decision-biasing model) as assumed in the main text thus far (see Supplementary Methods; Supplementary Figure 8). One could derive many other more complex social learning processes that may operate in reality; however, the comprehensive search of possible model space is beyond the current interest. Yet, decision biasing was found to fit better than value shaping with our behavioural experimental data (Supplementary Figure 18), leading us to focus our analysis on the decision-biasing model.”
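For readers unfamiliar with the distinction in this passage, a schematic contrast between the two social learning processes might look like this. Both forms are hypothetical sketches (in particular, `bonus` is our illustrative symbol, not the manuscript’s); the exact equations are in the Supplementary Methods:

```python
import numpy as np

def softmax(q, beta):
    """Numerically stable softmax with inverse temperature beta."""
    z = beta * q - np.max(beta * q)
    e = np.exp(z)
    return e / e.sum()

def decision_biasing_probs(q, freq, beta, sigma, theta):
    """Decision biasing: social frequency information enters only at the
    choice stage, mixed with the asocial softmax by the copying weight sigma."""
    social = freq ** theta / (freq ** theta).sum()
    return (1 - sigma) * softmax(q, beta) + sigma * social

def value_shaping_update(q, freq, reward, chosen, alpha, bonus):
    """Value shaping (cf. Najar et al., 2020): social frequency information
    instead biases the reinforcement update of the chosen option's value.
    'bonus' is a hypothetical social-bonus weight for illustration."""
    q = q.copy()
    social = freq[chosen] / freq.sum()
    q[chosen] += alpha * (reward + bonus * social - q[chosen])
    return q
```

The key difference is *where* social information enters: at the moment of choice (decision biasing) versus inside the belief-updating rule (value shaping); the model comparison in Supplementary Figure 18 adjudicates between these two loci.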

When reading the first part of the Results section I constantly wondered: How did the group behave / How was the behaviour of the group determined in the simulation? Was variability considered? (this is something that's been manipulated in some empirical studies building on descriptive risk scenarios, e.g. Suzuki et al). It becomes clear when reading on/looking at Figure 3, but it is such a crucial point that it needs to be made clear from the beginning.

This is a great suggestion, and we agree on the importance of considering individual variability. To make this point clear in the Introduction, we have included it in the “roadmap” paragraph as follows:

– Lines 111 – 113: “We found that promotion of favourable risk taking can indeed emerge across different assumptions and parameter spaces, including individual heterogeneity within a group.”

I think it is a major limitation that in the empirical study actual social learning was extremely limited, given that the paper claims to provide a formal account of the function of social learning in this situation?…. I would have thought that indeed trying to provoke more use of social influence by altering the experimental setup in a way the authors propose in their discussion would have been important, and given that this can be done online, also a feasible option.

We thank the reviewer for pointing out this limitation of the current study. We fully agree that increasing the copying weight through experimental manipulation is an important future direction. As discussed in the main text (lines 649 – 676), one such promising manipulation would be a ‘restless’ bandit task, which is theoretically expected to induce both a higher learning rate and a higher copying weight. Nevertheless, we believe that a direct link between the simplest form of the theory (that is, a static bandit task) and experimental findings was a necessary first step towards developing further theoretical hypotheses in more complex settings. For the current purposes, therefore, we leave the restless bandit task as future work.

Also, empirically, susceptibility to the hot stove effect, i.e., αi(βi + 1), seems to be very low – zero for most of the participants in some scenarios according to Figure 6A,B,C? – isn't this concerning, given that this is at the core of what the authors want to explain?

We thank the reviewer for raising this point; it is a wonderful question. The average susceptibility to the hot stove effect across the conditions was about 0.2 – 0.6, not zero (which was not visually obvious due to the scale of the x-axis, but can be derived from the fitted parameter values shown in Table 2). Such a low level of susceptibility would have been problematic in the 1-risky-1-safe task, because asocial individuals were unlikely to suffer from the hot stove effect there (see Supplementary Figure 11a). Therefore, we additionally conducted the two other positive RP 4-armed tasks, in which risk aversion was expected to emerge even when α(β + 1) was as small as these values (Supplementary Figure 11b, c). However, as discussed in the main text, manipulating both task and environment to elicit a higher learning rate as well as heavier reliance on social learning is indeed an interesting future direction.

"those with a higher value of the susceptibility to the hot stove effect (αi(βi + 1)) were less likely to choose the risky alternative, whereas those who had a smaller value of αi(βi + 1) had a higher chance of choosing the safe alternative (Figure 6a-c), " -- I'm confused- should it read "those who had a smaller value of αi(βi + 1) had a higher chance of choosing the ‘risky’ alternative"?

Could you please quantify this correlation in terms of an effect size? The association is not linear, from the Figure? Please specify.

Is the association driven by α or by β or really by the product?

The reviewer is correct: the original sentence described the effect in the opposite, wrong direction. To quantify the effect size, we have conducted an additional GLMM analysis, and the results now read:

– Lines 548 – 560: “To quantify the effect size of the relationship between the proportion of risk taking and each subject’s best fit learning parameters, we analysed a generalised linear mixed model (GLMM) fitted with the experimental data (see Methods; Table 3). Within the group condition, the GLMM analysis showed a positive effect of σ on risk taking for every task condition (Table 3), which supports the simulated pattern. Also consistent with the simulations, in the positive RP tasks, subjects exhibited risk aversion more strongly when they had a higher value of αi(βi+1) (Supplementary Figure 17a–c). There was no such clear trend in data from the negative RP task, although we cannot make a strong inference because of the large width of the Bayesian credible interval (Supplementary Figure 17d). In the negative RP task, subjects were biased more towards the (favourable) safe option than subjects in the positive RP tasks (i.e., the intercept of the GLMM was lower in the negative RP task than in the others).”

The product αi(βi + 1) was derived in the theoretical development of Denrell (2007) and has been used in our own theoretical analysis as well (Figure 2). Of course, the learning rate (α) and the inverse temperature (β) are different free parameters that play different functional roles in the learning algorithm. In the context of the hot stove effect, however, they play correlated roles, which allows us to compress one dimension of the analysis. Thanks to this, we can understand both the theory (Figure 2) and the empirical results (Figure 6 and Supplementary Figure 17) in a simple way. Therefore, for the sake of brevity, we believe that treating them as the product αi(βi + 1) is more straightforward for the current purpose than treating them separately.
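The role of this product can be reproduced with a few lines of simulation: holding β fixed, an asocial learner with a larger α(β + 1) takes a favourable risky option less often. This is an illustrative sketch of the hot stove effect with hypothetical payoff values, not the manuscript’s simulation code:

```python
import numpy as np

def risk_taking_rate(alpha, beta, trials=150, reps=500, seed=0):
    """Proportion of risky choices by an asocial Rescorla-Wagner learner
    choosing between a safe option (payoff 1) and a favourable risky option
    (Gaussian, mean 1.5, sd 1) via softmax. Payoffs and horizon are
    illustrative only."""
    rng = np.random.default_rng(seed)
    risky_count = 0
    for _ in range(reps):
        q = np.zeros(2)                      # q[0] = safe, q[1] = risky
        for _ in range(trials):
            p_risky = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
            c = int(rng.random() < p_risky)  # 1 = risky chosen
            r = 1.0 if c == 0 else rng.normal(1.5, 1.0)
            q[c] += alpha * (r - q[c])       # Rescorla-Wagner update
            risky_count += c
    return risky_count / (trials * reps)
```

For instance, with β = 7 fixed, raising α from 0.1 to 0.7 (i.e., α(β + 1) from 0.8 to 5.6) should markedly reduce the risky-choice rate, even though the risky option is better on average: bad draws suppress the risky option’s value, and the learner then rarely samples it again to correct the underestimate.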

"The behaviour in the group condition supports our theoretical predictions. In the PRP tasks, the proportion of choosing the favourable risky option increased with social influence (i) particularly for individuals who had a high susceptibility to the hot stove effect. On the other hand, social influence had little benefit for those who had a low susceptibility to the hot stove effect (e.g., αi(βi + 1) {less than or equal to} 0.5)." Can you quantify this with statistically (effect sizes etc)?

We thank the reviewer very much for suggesting that we formally quantify the effect sizes. As described above, we conducted a GLMM analysis and confirmed that the pattern that emerged in the experiment matched the prediction of the calibrated computational model well. We believe this additional analysis has made our findings more convincing.
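The qualitative pattern the GLMM captures can be sketched with a bare logistic link (all coefficient values below are hypothetical placeholders, not the fitted estimates from Table 3):

```python
import math

def p_risk(susceptibility, group, b0=-0.5, b1=-0.6, b2=0.4):
    """Logistic link: logit P(risk) = b0 + b1*susceptibility + b2*group.

    b1 < 0 encodes the hot stove effect (higher alpha*(beta+1) means
    less risk taking); b2 > 0 encodes the benefit of the group
    condition. All coefficients are illustrative only.
    """
    z = b0 + b1 * susceptibility + b2 * group
    return 1.0 / (1.0 + math.exp(-z))

# Higher susceptibility lowers risk taking; the group condition raises it.
print(p_risk(2.0, 0), p_risk(0.5, 0), p_risk(2.0, 1))
```

The actual analysis additionally includes random effects per subject and per group; this sketch only illustrates the direction of the fixed effects being quantified.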

Did you do any form of model selection on the empirical data (with different set ups of your models, the reduced models (e.g. without σ), or the Najar type of model) to demonstrate that it is really your theoretically proposed model that fits the data best (e.g. Bayesian Model Selection)? Please include in the main manuscript. See Palminteri, TiCS on why this might be important.

I think Figure 6 is really overloaded. The legend somehow looks as if it belongs only to panel c? The coloured plots are individual data points as a function of group size (which is what?) or copying weight? For me, in the current formatting, the dots were too small to detect a continuous colour-coding scheme (only yellow vs purple). Are the solid lines simulated data? Can you show a regression line for the empirical data to allow for comparisons? Does it not differ substantially from the model predictions? I suggest making different plots for different purposes (compare predicted behaviour to empirical behaviour, show the effect of copying weight, show the effect of group size, show the simulation where you plug in σ>0.4).

Thank you so much also for this terrific suggestion. We have included both the model recovery test and a model comparison based on Bayesian model selection in the main text (Supplementary Figure 18 [Figure 6 —figure supplement 2]) on page 54. Please see our response to the editor’s comment (4) for more details.
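As a minimal sketch of what such a model comparison does (hypothetical log-evidence values; the actual analysis uses a hierarchical Bayesian model-selection procedure, not this simplification):

```python
import math

def model_probabilities(log_evidences):
    """Posterior model probabilities under a uniform prior over models:
    a numerically stable softmax over the log model evidences."""
    m = max(log_evidences.values())  # subtract max to avoid overflow
    weights = {k: math.exp(v - m) for k, v in log_evidences.items()}
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}

# Hypothetical log evidences for three candidate models, e.g. an asocial
# baseline, a reduced model without sigma, and the full social model:
log_ev = {"asocial": -1205.0, "reduced_no_sigma": -1198.0, "full_social": -1190.0}
print(model_probabilities(log_ev))
```

Because evidences enter on the log scale, even modest differences translate into near-decisive posterior probabilities for the winning model.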

"In keeping with this, if we extrapolated the larger value of the copying weight (i.e., σ̄i > 0.4) into the best-fitting social learning model with the other parameters calibrated, a strong collective rescue became prominent" – sorry, where exactly does the value σ̄ > 0.4 come from for this analysis? Please give more detail/contextualise better.

This was indeed helpful feedback. As described in our responses to the editor’s comments (5) and (6), we have separated the computational model prediction and the data with a regression line into two figures. We have also varied σ gradually across the range of individually fit σ values for each task, rather than just showing an arbitrary high value (which was set to σ>0.4 in the previous manuscript). We believe that the current presentation of the empirical results allows readers to easily differentiate the data themselves from the model prediction.

Please try to be consistent with terminology: αi(βi + 1) is sometimes called ‘susceptibility’ or ‘susceptibility value’, which might be confusing, given that in some published articles susceptibility refers to susceptibility to social influence, which would be another parameter. I suggest going through the manuscript once more and strictly using only one term for each parameter (the one you introduce in the table).

Thank you so much again for your thorough review and insightful comments. We have gone through the text again and made all the terms consistent.

https://doi.org/10.7554/eLife.75308.sa2

Article and author information

Author details

  1. Wataru Toyokawa

    Department of Psychology, University of Konstanz, Konstanz, Germany
    Contribution
    Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing - original draft, Writing - review and editing
    For correspondence
    wataru.toyokawa@uni-konstanz.de
    Competing interests
    No competing interests declared
    ORCID icon "This ORCID iD identifies the author of this article:" 0000-0001-8558-8568
  2. Wolfgang Gaissmaier

    1. Department of Psychology, University of Konstanz, Konstanz, Germany
    2. Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz, Germany
    Contribution
    Conceptualization, Funding acquisition, Project administration, Visualization, Writing - original draft, Writing - review and editing
    Competing interests
    No competing interests declared

Funding

Deutsche Forschungsgemeinschaft (EXC 2117 - 422037984)

  • Wataru Toyokawa
  • Wolfgang Gaissmaier

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

This work was funded by a Small Project Grant from the Centre for the Advanced Study of Collective Behaviour, the University of Konstanz (S20-06), by the University of Konstanz Committee on Research (FP031/19), and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2117–422037984. We thank Iain Couzin, Lucy Aplin, Brendan Barrett, Ralf Kurvers, Charley Wu, Gota Morishita, and Anita Todd for many helpful comments on earlier versions of this paper.

Ethics

Human subjects: The experimental procedure was approved by the Ethics Committee at the University of Konstanz ('Collective learning and decision-making study'). All subjects consented to participation through an online consent form at the beginning of the task.

Senior Editor

  1. Michael J Frank, Brown University, United States

Reviewing Editor

  1. Mimi Liljeholm, University of California, Irvine, United States

Publication history

  1. Preprint posted: February 23, 2021 (view preprint)
  2. Received: November 5, 2021
  3. Accepted: April 1, 2022
  4. Version of Record published: May 10, 2022 (version 1)

Copyright

© 2022, Toyokawa and Gaissmaier

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

Wataru Toyokawa, Wolfgang Gaissmaier (2022) Conformist social learning leads to self-organised prevention against adverse bias in risky decision making. eLife 11:e75308. https://doi.org/10.7554/eLife.75308
