Abstract
Human perception is susceptible to social influences. To determine if and how individuals opportunistically integrate real-time social information about noisy stimuli into their judgment, we tracked perceptual accuracy and confidence in social (dyadic) and non-social (solo) settings using a novel continuous perceptual report (CPR) task with peri-decision wagering. In the dyadic setting, most participants showed a higher degree of perceptual confidence. In contrast, average accuracy did not improve compared to solo performance. Underlying these net effects, partners in a dyad exhibited mutual convergence of accuracy and confidence, benefitting the less competent or confident individual at the expense of the better-performing partner. In conclusion, real-time social information asymmetrically shapes human perceptual decision-making, with dyads expressing more confidence without a matching gain in overall competence.
Introduction
Natural behavior is a time-continuous process that depends on past events, reward contingencies, expectations, and social context. To behave adaptively in uncertain environments, individuals collect and evaluate dynamic sensory evidence. Perceptual decision-making encompasses two main dimensions: competence (accuracy of the percept) and confidence in the perceptual experience. Perceptual confidence, the individual’s belief regarding the accuracy of one’s own perception via metacognitive assessment of the perceptual history, plays a key role in perceptual decision-making (Fleming and Lau, 2014; Kepecs and Mainen, 2012; Yeung and Summerfield, 2012). Social settings can change decision competence and confidence (Bahrami et al., 2010; Bang et al., 2017; Esmaily et al., 2023; Pescetelli et al., 2016; Pescetelli and Yeung, 2022). In a social interaction, the assessment of one’s own and the other’s behavior depends on the individuals’ skill level: incompetent individuals not only perform poorly but are also unable to recognize this (Kruger and Dunning, 1999). However, it remains uncertain whether this metacognitive bias applies to basic perceptual phenomena, specifically the relationship between perceptual accuracy and perceptual confidence. Furthermore, it is not clear how real-time social influence manifests when individual decisions are combined with social signals. To address these gaps, we investigate whether, under which conditions, and to what extent unconstrained social information exchange affects the continuous accuracy and confidence reports of individuals during the perception of ambiguous visual stimuli.
Perceptual decision-making is susceptible to both informational and normative social influences (Frith and Singer, 2008; Takagaki and Krug, 2020; Terenzi et al., 2021; Toelch and Dolan, 2015; Van Den Bos et al., 2013). For instance, by integrating the perceptual report of a partner with one’s own subjective experience, the quality of perceptual judgments can be optimized and result in collective benefits, but only under certain conditions (Bahrami et al., 2010, 2012a; Baumgart et al., 2020). In addition, social information can speed up decision-making by reducing exploration time, but it can also introduce biases and false beliefs (Bang and Frith, 2017). For example, social conformity biases information uptake towards majority choices in perceptual decision-making (Germar et al., 2016; Toelch et al., 2018). Confidence reports in particular have been shown to have a major influence during social exchange. Judging the competence of others is difficult because it requires performance monitoring over time. Therefore, agents often use the confidence signals of a partner as a surrogate for assessing the competence of others (Bang and Frith, 2017). Furthermore, confidence signals can serve as a channel for social information transfer. For instance, the expressed confidence range can adapt to specific social partners to achieve more optimal group decisions (Bang et al., 2017). Access to choices coupled with the confidence of another player has been reported to modulate post-decision wagers: dyadic choice agreement increased confidence levels while disagreement reduced them. Furthermore, compared to solo performance, dyadic agreement improved confidence and accuracy more than disagreement reduced them (Pescetelli et al., 2016).
Accuracy and confidence measures generally covary (Baumgart et al., 2020; Khalvati et al., 2021; Pescetelli et al., 2016). However, how real-time social feedback during continuous evidence accumulation affects each measure is poorly understood.
Previous studies of social perceptual decision-making are characterized by the following aspects: First, sensory stimuli were presented in a serial and random manner, whereas natural sensory perception is an ongoing and correlated process. Second, trial-based, discrete paradigms have been used, with delays between perceptual and metacognitive reports. This makes metacognitive judgments liable to include information that was not available at the time of the perceptual report (Navajas et al., 2016; Yeung and Summerfield, 2012). Third, choices were limited to two options, e.g., left vs right (Esmaily et al., 2023; Kiani and Shadlen, 2009) or first vs second interval (Bahrami et al., 2010; Pescetelli and Yeung, 2022), not allowing the report of graded sensory perception. Fourth, social exchange and joint decision-making were imposed, resulting in interdependent dyadic performance (Bahrami et al., 2010; Bang et al., 2017; Baumgart et al., 2020). Furthermore, choices have been recorded in a serial manner, with individual choices preceding joint choices. To understand how decision confidence and accuracy evolve during the course of dynamic decision-making and how both factors are shaped by the availability of social information, novel experimental methods allowing unrestricted access to continuous reports are needed. Indeed, recent work began to demonstrate the influence of continuous confidence reports during dyadic co-action, although still in a two-alternative and serial design. Dynamic, mutually visible confidence reports elicited a greater increase of perceptual confidence during agreement and a smaller reduction during disagreement, compared to static reports (Pescetelli and Yeung, 2020), and also resulted in confidence alignment between dyadic human partners (Pescetelli and Yeung, 2022).
To overcome these limitations, a new approach that partly reconciles the mismatch between real-life continuous perception and experiments has recently emerged: continuous psychophysics (Bonnen et al., 2017, 2015; Huk et al., 2018). This technique quantifies various sensorimotor and cognitive processes in real-time via continuous perceptual reports (CPR). CPRs have been shown to be viable alternatives for estimating perceptual uncertainty parameters (Straub and Rothkopf, 2022). As perceptual reports are displayed continuously, we argue that CPRs are particularly useful to measure interacting processes that unfold over time, such as the integration of noisy sensory and social information. To that end, we developed a novel CPR paradigm that allows the assessment of individual decision-making during social (dyadic) and non-social (solo) decision-making. Notably, our social setting did not enforce dyadic payoff interdependence: the payoffs of the two players did not influence each other. Participants were free to monitor and opportunistically incorporate information provided by the other player, without being compelled to do so by the task design.
Our task was set up to dissociate the perceptual report and the associated confidence via real-time (“peri-decision”) wagering, using standardized, easily comparable visual cues (see Methods). Previous work resorted to post-decision confidence assessments like (i) numerical ratings (Boldt and Yeung, 2015; Fleming et al., 2010), (ii) post-decision wagering (Moreira et al., 2018; Persaud et al., 2007), or (iii) opt-out/decline response options (Gail et al., 2004; Hanks and Summerfield, 2017; Khalvati et al., 2021; Kiani and Shadlen, 2009; Komura et al., 2013; Smith, 1997). Here, in contrast, we use continuous wagering to allow immediate access to the full perceptual report of others, including accuracy and confidence. With this approach, we aimed to answer the following questions: First, do humans express perceptual confidence in real-time? Second, do accuracy and the expression of confidence depend on the social setting? We hypothesize that humans signal perceptual confidence based on a real-time evaluation of the visual scene and incorporate others’ perceptual choices and confidence to better interpret ambiguous sensory information. Specifically, we expect the weighted integration of sensory and social evidence to result in more accurate and confident perceptual reports. We conjectured that a partner’s task competence is a key driver of information integration during social interaction, with more dyadic benefit for participants performing worse in the solo condition.
Results
To assess the integration of social information, we developed a behavioral paradigm (Figure 1, Supplementary Figure 1a and Supplementary Video 1, Continuous Perceptual Report task, ‘CPR’, see Methods) that enables continuous “peri-decision” wagering on the accuracy of perceptual judgments about the direction of a noisy random dot pattern (RDP). Participants used a joystick to signal their perceived direction (angle) and confidence (eccentricity, the deviation from the central position). Their task was to maximize the monetary reward score by following the direction of the RDP as accurately as possible, so that their cursor (partial circle, ‘response arc’) would overlap with occasionally presented small reward targets appearing in the direction of the coherent motion (Supplementary Figure 1b). Notably, increasing joystick eccentricity shortened the arc length, making it harder to hit the target. In case of a successful target hit, the reward score was calculated as the product of report accuracy and confidence (and was zero otherwise, Figure 1d). This reward scheme was intended to elicit continuous peri-decision wagering, by awarding large rewards for accurate and confident reports, small rewards for less accurate and/or less confident reports, and omitting rewards for inaccurate but confident reports.
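The scoring rule lends itself to a compact summary; the following is a minimal Python sketch with illustrative names, not the authors' task code:

```python
def reward_score(accuracy: float, eccentricity: float, hit: bool) -> float:
    """Reward earned on a given target presentation.

    On a successful hit, the score is the product of report accuracy
    (angular match with the coherent motion direction, 0..1) and
    confidence (joystick eccentricity, 0..1); a miss earns nothing.
    Higher eccentricity also shortens the response arc, so confident
    wagers are riskier but pay more when correct.
    """
    return accuracy * eccentricity if hit else 0.0
```

Because eccentricity both scales the payoff and shrinks the arc, the expected value of a confident wager hinges on how likely the report is to be accurate, which is what makes the joystick placement a continuous wager.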
Participants played the game in different experimental settings: alone (solo) and together with a partner (dyadic). In dyadic experiments, we continuously presented the perceptual (joystick) reports – perceived motion direction and associated confidence – of both participants in a mutually visible manner. In the context of this paper, we define as social information the perceptual report of the partner. In addition, ‘task competence’ refers to perceptual accuracy, while ‘performance’ relates to the behavioral outcome (reward score).
Humans express perceptual confidence in real-time through peri-decision wagering
Confidence measures have been shown to scale with evidence accumulation time and perceptual performance (Balsdon et al., 2021, 2020; Kiani and Shadlen, 2009). Our task design intended to capture perceptual confidence changes via real-time wagering behavior. First, we verified that participants used peri-decision wagering while playing the CPR game. To that end, we quantified if the overall quality of the solo report depended on the RDP motion coherence. The following parameters were considered: joystick position (accuracy & eccentricity), the proportion of successfully collected targets (‘hit rate’, Figure 2a), and joystick response lag after an RDP direction change (Figure 2b-c, Supplementary Figure 1d).
Participants responded, on average, 643 ms ± 78 ms (Mean ± IQR, coherence pooled) after an RDP direction change, with higher motion coherence causing faster stimulus-following responses. Low RDP coherence resulted in a breakdown of motion tracking, increasing the variance of cross-correlation peaks. This motion-tracking profile reflects the average evidence integration time, with faster, more reliable responses suggesting higher levels of confidence.
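The lag estimate described here can be illustrated by cross-correlating the stimulus direction trace with the joystick direction trace; a toy sketch (the sampling step, variable names, and the restriction to non-negative lags are our assumptions, not the authors' analysis code):

```python
import numpy as np

def response_lag_ms(stim_dir, joy_dir, dt_ms):
    """Estimate the joystick lag as the non-negative delay (in ms) that
    maximizes the cross-correlation between the mean-centered stimulus
    and report direction traces."""
    stim = np.asarray(stim_dir, float) - np.mean(stim_dir)
    joy = np.asarray(joy_dir, float) - np.mean(joy_dir)
    n = len(stim)
    # Dot product of the overlapping segments for each candidate lag k
    xc = [np.dot(stim[: n - k], joy[k:]) for k in range(n // 2)]
    return int(np.argmax(xc)) * dt_ms
```

With noisy stimuli, the cross-correlation peak flattens and its location becomes more variable, which is the breakdown of motion tracking reported above.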
Despite high inter-subject variability, we found that motion coherence robustly impacted all behavioral response measures: hit rates, joystick accuracy, and joystick eccentricity (linear mixed effects model – see Supplementary Table 1 to Supplementary Table 4). This suggests that participants were able to adapt their behavioral response to varying stimulus difficulty to maximize the monetary outcome. Hence, participants understood the contingencies of the game and were able to wager on their percepts. In addition to this consistent joystick response modulation, we found that hit rates, while generally increasing with motion coherence, dropped for 24 of 38 participants (63%) at the most salient RDP coherence, suggesting overconfident or risk-seeking joystick placement. These findings imply faster evidence integration, resulting in higher perceptual accuracy and confidence levels.
To assess if subjects used joystick eccentricity as a proxy for perceptual confidence, we adapted the metacognitive performance analysis of the area under the receiver operating characteristic curve (AUC) for confidence ratings to our continuous joystick responses (Fleming and Lau, 2014; Maniscalco and Lau, 2014, 2012). We used the distributions of joystick response measurements to infer whether there is a relationship between response accuracy and eccentricity, separately for each RDP coherence level. To that end, we median-split the joystick responses into high- and low-eccentricity distributions. We then analyzed the AUC to quantify whether the accuracy of these distributions differed. High accuracy was indeed related to higher eccentricity, suggesting an ongoing metacognitive assessment of the perceptual report that is reflected in the eccentric joystick placement (Supplementary Figure 2). Thus, participants optimized joystick placement in both response dimensions (accuracy and eccentricity) when the task was easier, which provides further evidence for real-time peri-decision wagering and for the link between joystick eccentricity and perceptual confidence.
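The median-split AUC analysis can be sketched as follows; a simplified illustration assuming per-sample accuracy and eccentricity values at one coherence level (names are ours, and the AUC is computed via the Mann-Whitney U relation rather than an explicit ROC curve):

```python
import numpy as np

def auc(high, low):
    """Probability that a sample from `high` exceeds one from `low`
    (area under the ROC curve, via the Mann-Whitney U statistic)."""
    high, low = np.asarray(high), np.asarray(low)
    greater = (high[:, None] > low[None, :]).sum()
    ties = (high[:, None] == low[None, :]).sum()
    return (greater + 0.5 * ties) / (len(high) * len(low))

def eccentricity_split_auc(accuracy, eccentricity):
    """Median-split responses into high/low-eccentricity halves and ask
    whether accuracy differs between them (AUC > 0.5 means the more
    eccentric, i.e. confident, reports were also the more accurate)."""
    accuracy = np.asarray(accuracy)
    eccentricity = np.asarray(eccentricity)
    med = np.median(eccentricity)
    return auc(accuracy[eccentricity > med], accuracy[eccentricity <= med])
```

An AUC reliably above 0.5 across coherence levels is the signature of metacognitive sensitivity reported here.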
In summary, the solo CPR data indicate that participants positioned the joystick to actively wager on their own percept. As the eccentric joystick position was the only response dimension that could be chosen freely via metacognitive assessment of the current perceptual process, it can be treated as a proxy measure of subjective perceptual confidence. Usage and range of this response parameter varied widely between participants, suggesting individual confidence ranges. These findings imply that our CPR game makes it possible to continuously assess participants’ perceptual processes and associated confidence based on the behavioral report. In the next section, we examine whether and to what extent participants incorporated the perceptual report of a second player into their own decision-making process and how this affected perceptual accuracy and confidence.
Social setting changes perceptual confidence during real-time decision-making
Previous studies have shown that, under certain conditions, two participants are more successful in perceptual decision-making than the more competent player on their own (Bahrami et al., 2012b; Pescetelli and Yeung, 2022). We asked whether participants performed the CPR task better when a second player was playing along, and if so, whether changes in perceptual competence or confidence were the driving forces behind any improvement. To that end, we developed a two-player (dyadic) CPR task (Figure 1a) to assess whether and how participants incorporated information from another player into their own perceptual report. Dyadic conditions could be real, with two human participants playing simultaneously (‘HH dyad’; participants were introduced to the other player beforehand), or simulated, with one participant (who was led to believe they were playing with another participant) performing alongside a computer agent (‘HC dyad’). This section covers the results of human-human dyadic experiments. Both participants reacted to the same RDP and could observe both cursors as well as feedback of immediate and cumulative scores for themselves and the other player (see Methods). Importantly, we did not enforce a competitive or cooperative context – the individual payoff did not directly depend on the performance of the partner. Thus, participants could freely choose whether to use or ignore the perceptual report of the other player.
We pooled all experimental sessions of each participant according to social context (solo vs dyadic, within-subject). Average response lags after a direction change were significantly different in solo and dyadic experiments (Solo: 643 ms ± 78 ms [Mean ± IQR]; Dyadic: 662 ms ± 105 ms; Wilcoxon signed rank test, n = 34, Z = −2.98, p < 0.01), although the average response lag difference was small. We also found a small but significant improvement in average individual score between solo and dyadic experiments (Solo: 0.2357 ± 0.0696 [Mean ± IQR]; Dyadic: 0.247 ± 0.049; Wilcoxon signed rank test, n = 34 (subjects), Z = 2.31, p < 0.05). Compared to the solo CPR, 68% of participants (23/34) achieved a higher score when co-acting with a partner (Figure 3a). On the level of a dyad, the combined score of two players working together was higher than that of the same two players working alone (Mean difference = 0.023, Wilcoxon signed rank test, n = 50 (dyads), Z = −3.36, p < 0.001). Thus, social context seemed to improve overall task outcome (‘reward score’).
Next, we assessed whether changes in perceptual competence (discrete: hit rate; continuous: accuracy) or confidence were the driving factors behind this improvement. Hit rates were unaffected by social setting (Solo: 0.3887 ± 0.1054 [Mean ± IQR]; Dyadic: 0.3909 ± 0.0638; Wilcoxon signed rank test, n = 34 (subjects), Z = 0.6582, p = 0.51). Furthermore, average response accuracy in dyadic experiments did not change for 94% (32/34) of participants (Figure 3b; within subjects: Wilcoxon rank-sum test, coherence pooled, number of tests: 34 (subjects), Bonferroni-corrected significance threshold = 0.0015; across subjects: Wilcoxon signed rank test, Median AUC = 0.49, Z = −1.94, p = 0.0523). Thus, the access to reports of others did not improve average CPR competence. However, 94% of participants (32/34) placed their joysticks at a significantly different eccentric position when playing in a dyadic setting (Figure 3b; within subjects: Wilcoxon rank-sum test, coherence pooled, number of tests: 34 (subjects), Bonferroni-corrected significance threshold = 0.0015; across subjects: Wilcoxon signed rank test, Median AUC = 0.57, Z = 2.47, p < 0.05), suggesting altered confidence reports in a social setting. We estimated the directionality of the dyadic vs solo social modulation with the area under the receiver-operating characteristic (AUC) for each player (Figure 3c + Supplementary Figure 3, see Methods). Of all participants with significantly different joystick eccentricity in solo and dyadic conditions, 38% - 56% (min and max across coherence levels) increased their joystick eccentricity when playing with a partner, while 12% - 26% changed to a more central position, i.e., played more conservatively (Figure 3c, Wilcoxon rank-sum test, number of tests: 34 (subjects) * 7 (coherence levels) = 238, Bonferroni-corrected significance threshold = 2.1008e-04).
Thus, most participants achieved a better score by signaling, on average, more perceptual confidence during their continuous perceptual report in a social setting. The observed bidirectional effect might be explained by dyadic convergence, as initially less confident participants seemed to increase their perceptual wagers, while initially more accurate individuals declined in accuracy (Figure 3c, plot of social modulation vs solo performance; see also Supplementary Figure 4). We further explore this hypothesis in the next section.
Performance difference between participants determines dyadic effect
When comparing all solo vs all dyadic sessions, our results suggest that perceptual confidence but not competence is modulated by the social setting. We next asked whether the observed social modulation can be explained by within-dyad differences in solo behavior. Intuitively, we hypothesized that a larger difference in solo performance between subjects would lead to a stronger modulation by the social context, because no additional information can be derived from observing a very similar partner (Figure 4a, ‘bow-tie’). On the other hand, earlier work suggested less successful perceptual decision-making in dyadic settings when perceptual sensitivities or confidence differ strongly between participants (Bahrami et al., 2010; Pescetelli and Yeung, 2022). To test these rival hypotheses, we analyzed the social modulation by contrasting the dyadic vs solo AUC for each player (all solo sessions of this subject vs specific dyadic session, RDP coherence pooled). Overall, we observed three behavioral patterns, across the range of individual solo differences (Figure 4a): (i) both participants improved in a social setting (AUC > 0.5; Accuracy: 14% of dyads, Eccentricity: 48% of dyads), (ii) both participants got worse (AUC < 0.5; Accuracy: 38%, Eccentricity: 16%) and (iii) one player improved while the other got worse (Accuracy: 48%, Eccentricity: 36%). Across the participants, the confidence increased while accuracy slightly decreased in the dyadic setting (Figure 4a – histogram, Wilcoxon signed rank test (n = 100), Accuracy: Median = 0.48, Z = - 2.93, p < 0.01; Eccentricity: Median = 0.58, Z = 2.92, p < 0.01), confirming earlier within-subject findings for confidence (Figure 3). The significant shift of the median from 0.5 indicates asymmetric social modulation. Crucially, in contrast to earlier studies, our data does not reveal systematic dyadic benefits for dyads with similar perceptual accuracy or confidence. 
Instead, there was a positive (but not significant) correlation between the average social modulation within a dyad and solo difference between players (Supplementary Figure 5a).
To further examine the relationship between solo and dyadic performance, we tested the social modulation difference between the two players in the dyad, again as a function of individual differences in the solo condition. The absolute social modulation difference increased with a larger difference in solo performance, as indicated by the running averages and the better fits of a U-shaped (quadratic) function compared to a linear function (Figure 4b, Adj. R2: Accuracy: Linear = 0.0335 vs. Quadratic = 0.0620; Eccentricity: Linear = −0.0186 vs. Quadratic = 0.2271). Thus, more dissimilar solo performance elicited larger differences in social modulation.
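The model comparison used here, linear vs quadratic fits ranked by adjusted R², can be sketched in a few lines (a generic illustration, not the authors' fitting code):

```python
import numpy as np

def adjusted_r2(x, y, degree):
    """Fit a polynomial of the given degree and return adjusted R^2,
    which penalizes the extra parameters of higher-order fits so that
    linear and quadratic models can be compared fairly."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n, p = len(x), degree + 1  # p = number of fitted coefficients
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)
```

A higher adjusted R² for the quadratic fit, as reported for eccentricity, indicates that the extra curvature is not merely absorbing noise.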
Furthermore, concerning the direction of this relationship, we hypothesized that the worse solo player would benefit from a social setting, relative to the better solo player (here and elsewhere, “worse”/“better” mean less/more confident/accurate, correspondingly). Figure 4c (right column, see also Figure 4a) illustrates two possible scenarios: for dyad x, the individually worse player benefits from social context while the better solo player gets worse. For dyad y, both players get better, but the individually worse one improves more. We correlated the signed social modulation difference between the two members of each dyad with the difference in solo behavior. For both eccentricity and accuracy, a negative correlation was found between the within-dyad difference in social modulation and the difference in solo task (Figure 4c, Pearson correlation, n = 50, Eccentricity: r = −0.74, p < 0.05; Accuracy: r = −0.6, p < 0.01; compared to permuted dyads, see Methods). Thus, in line with our prediction, worse solo players either improve more or at least get less bad, relative to the initially better partner (who is either improving less or getting worse).
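The comparison against permuted dyads can be illustrated with a standard permutation test, shuffling which players' differences are paired together (a generic sketch; the authors' exact permutation scheme may differ):

```python
import numpy as np

def permutation_p(solo_diff, social_diff, n_perm=10000, seed=0):
    """Two-sided permutation p-value for the Pearson correlation between
    within-dyad solo differences and social-modulation differences,
    against a null built by shuffling the pairing of dyads."""
    rng = np.random.default_rng(seed)
    obs = np.corrcoef(solo_diff, social_diff)[0, 1]
    null = np.array([
        np.corrcoef(solo_diff, rng.permutation(social_diff))[0, 1]
        for _ in range(n_perm)
    ])
    # Fraction of shuffled pairings at least as extreme as the data
    return float(np.mean(np.abs(null) >= np.abs(obs)))
```

A small p-value indicates that the negative coupling between solo difference and social-modulation difference is specific to the real dyad pairings.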
The observed effects could be explained by confidence convergence during dyadic interaction, which has been reported to impact the perceptual accuracy of individual and dyadic decisions (Bang et al., 2017; Pescetelli and Yeung, 2022). To directly test this, we contrasted the absolute eccentricity and accuracy differences between the two players in the dyadic vs solo setting. Compared to solo experiments, 74% of dyads (37/50) exhibited a smaller eccentricity difference when playing together (Figure 4d, Wilcoxon signed rank test, n = 50, Z = 3.46, p < 0.001). This finding and the relative benefit for the worse solo player (Figure 4c) suggest dyadic convergence for metacognitive confidence, and to a smaller extent for accuracy, in a social context. This convergence is asymmetric because initially more confident players were affected less, and more accurate players were affected more, than their counterparts (Figure 3c), resulting in an overall positive shift for confidence and a slight negative shift for accuracy (Figure 4a).
Lastly, we compared the average eccentricity and the average accuracy correlations between the two players, in solo and dyadic context. As expected, there was no correlation between solo reports, but the participants’ behavior became significantly correlated when they played together (Supplementary Figure 5c, Pearson’s correlation, n = 50, Accuracy: Solo: r = 0.19, p = 0.18, Dyadic: r = 0.48, p < 0.001; Eccentricity: Solo: r = 0.11, p = 0.44, Dyadic: r = 0.54, p < 0.001).
Perceptual accuracy improves with reliable social signaling
As expected, the quality of the solo perceptual report declined in a comparable fashion across participants for low stimulus coherence (Figure 2). We wondered if this perceptual breakdown led to the relatively small accuracy modulation by the social context described above (Figure 3 and Figure 4). Based on earlier work on Bayesian integration and social conformity (De Martino et al., 2017; Germar et al., 2016; Khalvati et al., 2021; Park et al., 2017), we expected that the integration of information from a partner would be weighted by the reliability of that partner’s accuracy. We hypothesized that participants would integrate more social evidence when it was reliably accurate, regardless of the stimulus noise. Furthermore, we asked whether incorporating social signals into human decision-making requires graded, accuracy-dependent confidence signaling by others.
To that end, we developed a computer player that was programmed to accurately represent the nominal RDP direction (± Gaussian noise; note that even 0% coherence had a correct “nominal” direction), with a fixed “confidence” of 0.5 (± Gaussian noise in a.u.) across all coherence levels at all times (Supplementary Figure 6). In such human-computer (HC) dyads, the computer player was physically impersonated by one of the experimenters, who pretended to be the partner. Thus, participants believed that they played the game with another human. The computer player was set up to report motion direction with a constant, human-like latency (508 ms ± Gaussian noise). The computer response was not affected by the cursor of the human participants, resulting in a situation where the social signals could only unilaterally affect the human player. Crucially, unlike real human partners, the computer player did not provide useful information regarding its tracking confidence. With this condition, we aimed to evaluate whether human participants would integrate social cues about the motion direction into their own reports while the partner’s confidence report was uninformative. We nevertheless expected a response accuracy improvement, especially when sensory evidence became degraded at low coherences. Furthermore, we hypothesized that the reliable nature of the computer partner would result in riskier, more eccentric joystick placement in human players.
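The computer player's report policy reduces to a direction that faithfully tracks the nominal stimulus plus a flat, uninformative confidence signal; a per-frame sketch (the noise SDs and clipping are illustrative assumptions, as the exact parameter values are not given here):

```python
import numpy as np

def computer_report(nominal_dir_deg, rng, dir_sd=10.0,
                    conf_mean=0.5, conf_sd=0.05):
    """One display frame of the simulated partner's report.

    The reported direction is the nominal RDP direction plus Gaussian
    noise (so it stays accurate even at 0% coherence), while the
    'confidence' hovers around a fixed mid-range value and therefore
    carries no information about coherence or tracking quality.
    """
    direction = (nominal_dir_deg + rng.normal(0.0, dir_sd)) % 360.0
    confidence = float(np.clip(rng.normal(conf_mean, conf_sd), 0.0, 1.0))
    return direction, confidence
```

Because the direction channel is reliable and the confidence channel is flat, any human adjustment to this partner isolates the use of directional social evidence from the use of graded confidence signals.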
By accurately representing the stimulus direction, the computer player triggered profound behavioral effects (Supplementary Table 2 to Supplementary Table 4). In this setting, participants collected more targets with a higher score (Figure 5a). Compared to human-human dyads, 48% of participants (16/33) improved their accuracy (while none became worse) across all coherence levels (Figure 5b-d, within-subjects: Wilcoxon rank-sum test, coherence pooled, number of tests = 33 (subjects), Bonferroni-corrected significance threshold = 0.0015; across subjects: Wilcoxon signed rank test, Median AUC = 0.57, Z = 4.85, p < 0.001; see Supplementary Figure 7 for comparison to solo behavior). In particular, 52% of participants (17/33) showed a significant accuracy boost at 0% coherence (Wilcoxon rank-sum test, number of tests: 33 (subjects) * 7 (coherence levels) = 231, Bonferroni-corrected significance threshold = 2.1645e-04, Median AUC across subjects at 0% coherence = 0.73). Thus, participants integrated reliable sensory-social direction cues to improve their task performance, especially when the stimulus was ambiguous, suggesting a unilateral convergence towards the computer player. However, despite more accurate task performance, most participants did not improve in the eccentricity dimension (Figure 5b-d). In fact, compared to an interaction with a real human counterpart, 64% of participants showed less confidence while only 18% improved when playing with the computer player (within-subjects: Wilcoxon rank-sum test, coherence pooled, number of tests = 33 (subjects), Bonferroni-corrected significance threshold = 0.0015; across subjects: Wilcoxon signed rank test, Median AUC = 0.45, Z = −3.15, p < 0.01). Subjects were particularly affected when the task was easy (98% coherence, Median AUC across subjects = 0.36). This too suggests confidence convergence towards the relatively invariant, low-confidence computer player, even with otherwise reliably accurate direction signaling.
Social modulation of confidence and accuracy co-varies across dyads
So far, we analyzed the accuracy and the eccentricity modulations independently. Here we investigate the link between the two report dimensions. In both dyadic conditions, subject-wise social modulation of perceptual confidence correlated with the change in accuracy (Figure 6, HH: Pearson’s correlation, n = 100, r = 0.55, p < 0.001; HC: n = 33, r = 0.43, p < 0.05), suggesting that the gain or the loss in accuracy leads to reappraisal of confidence. Interestingly, partners’ initial confidence (mis)match in solo experiments did not correlate with dyadic improvements or deteriorations in accuracy (Supplementary Figure 8, Pearson’s correlation, n = 50, r = 0.24, p = 0.09), unlike in the previous study that investigated a similar relationship (Pescetelli and Yeung, 2022).
Despite the positive covariation between the social modulation of accuracy and eccentricity, the human-computer experiment demonstrated that social evidence integration does not require metacognitively sensitive, graded confidence signaling by the partner. However, the apparent dissociation between the average improvement of accuracy and the average decline of confidence is still grounded in a lawful relationship between the two response dimensions. Players who gained substantially more accuracy tended to show a confidence increase, or at least a smaller confidence decrease, compared to players who did not change much in accuracy (Figure 6, HC vs HH: n = 98, r = 0.62, p < 0.001). To conclude, both individually varying, coherence-dependent human reports and reliably accurate direction reports coupled with uninformative confidence expression by the simulated partner influenced human perceptual decisions and resulted in converging, socially conforming behavior.
Discussion
In this study, we assessed continuous human perceptual decision-making in social and non-social settings, with a newly developed paradigm, where subjects wager on the correctness of their motion percept in real-time. Overall, in dyadic settings, we find higher perceptual confidence but no gain in accuracy. We demonstrate that convergence during dyadic co-action underlies this net effect: the magnitude and directionality of the behavioral change depends on competence and confidence of the social partners.
In contrast to the general increase in confidence we observed, previous studies did not report an overall rise in confidence (Bang et al., 2017; Pescetelli et al., 2016; Pescetelli and Yeung, 2022). However, some earlier work demonstrated gains in dyadic competence – i.e., joint decision-making outperforming even the better of the two partners – after partners exchanged their individual confidence (Bahrami et al., 2010; Pescetelli et al., 2016). Likewise, competence can increase even during dyadic co-action, especially for participants with similar confidence (Pescetelli and Yeung, 2022). Despite presenting subjective confidence continuously and saliently as part of the perceptual report, our dyadic task did not evoke overall improvements in accuracy (a continuous measure of competence) or hit rate (a discrete measure of competence), suggesting a metacognitive bias in social conditions. The resulting score (which combines accuracy, hit rate, and confidence), however, did improve in the dyadic setting, suggesting a form of dyadic benefit dissociated from perceptual competence. This indicates that real-time social feedback boosts confidence in one’s perception without a corresponding enhancement in competence.
This apparent disconnect between confidence (which can be construed as perceived competence) and actual competence is further reflected in the differences in social modulation as a function of solo performance. We found that individually least confident players exhibited the largest increase in confidence in dyadic settings. Conversely, the most individually accurate players declined in accuracy. Reminiscent of previous work on metacognitive biases in individuals (Kruger and Dunning, 1999), we show disproportional effects of social context on perceptual confidence in less competent players. Less skilled individuals benefit from the social information of their superior partners, while proficient individuals give little weight to the information of their less skilled partners.
Along these lines, we found convergence in accuracy and especially confidence between dyadic partners. Direction and magnitude of this effect were determined by the difference in performance of the two players, with the worse solo player changing more, and in a more beneficial way, during dyadic experiments. Larger solo differences between participants resulted in larger differences in social modulation between the partners. Dyadic confidence convergence (Esmaily et al., 2023; Pescetelli and Yeung, 2022) and confidence matching (Bang et al., 2017) have been described before. Here, we additionally show systematic social modulation based not only on accuracy and confidence differences between participants but also on their initial solo performance. However, unlike previous work, in which similar confidence (Pescetelli and Yeung, 2022) or perceptual sensitivity (Bahrami et al., 2012a, 2010; Baumgart et al., 2020) correlated with higher dyadic competence, we do not find systematic dyadic competence benefits (in our case, average accuracy within a dyad) for subjects with similar task competence or confidence. We speculate that the type and modality of social feedback and interaction might underlie these differences. Explicit verbal communication (Bahrami et al., 2012a, 2010) or periods of metacognitive introspection in which prior individual decisions are evaluated (Pescetelli and Yeung, 2022) might have elicited competence improvements.
The control experiment with a reliably accurate, simulated dyadic partner who exhibited a constant, intermediate level of confidence irrespective of task difficulty evoked vastly improved hit rates and accuracy, especially when sensory information was ambiguous. At the same time, human confidence reports became more conservative when playing with the “conservative” simulated partner, especially at high stimulus coherence. Improvements in competence together with declining confidence further support dyadic convergence, because the human player gravitated towards the report of the simulated partner. Thus, instead of using the reliably accurate information provided by the computer player to become more accurate and more confident, convergence interfered with fully maximizing the reward score. High-accuracy, low-confidence simulated partners have recently been shown to elicit more conservative confidence reports during binary dyadic decision-making (Esmaily et al., 2023). Beyond these results, our experiments demonstrate that humans do not require meaningful confidence expression to recognize and utilize differences in task competence. Our findings indicate a possible dissociation between the direction of accuracy and confidence alignment. At the same time, we observe a positive covariation of accuracy and confidence modulation by the dyadic context, suggesting a retention of metacognitive sensitivity under social influence. We conclude that performance history and temporal reliability might be important factors in addition to explicitly signaled confidence, especially when these information streams are not congruent.
Systematic changes towards group consensus (‘social conformity’) have been shown to bias decision-making towards majority choices (De Martino et al., 2017; Germar et al., 2016; Park et al., 2017; Toelch and Dolan, 2015). The dyadic convergence we and others observe might be the basis for social conformity in larger group settings. Critically, dyadic convergence bilaterally affects both partners, but asymmetrically. For instance, the more confident players adjust their confidence very little but such individuals may disproportionally influence group consensus.
In our experiment, participants were not instructed to cooperate or compete with one another. They did not jointly reach a perceptual decision but instead co-acted under independent reward contingencies. This difference from previous reports (Bahrami et al., 2012b, 2012a, 2010; Pescetelli et al., 2016) is crucial for interpreting our results, since participants could ignore the overt behavior of the other player. Therefore, any social modulation or correlated behavior observed in our experiment can be attributed to a spontaneous, self-regulated process. We interpret our findings as evidence that in social situations most people spontaneously and opportunistically integrate the judgments of others into their own decisions, even when social interaction is not incentivized or enforced. In line with this argument, humans seem to naturally follow gaze signals and choice preferences of others, suggesting the utilization of others’ thoughts and intentions (Bayliss et al., 2007; Madipakkam et al., 2019; Mitsuda and Masaki, 2018). Furthermore, human co-action seems to result in attentional attraction or withdrawal in some dyads (Dosso et al., 2018). As a next step, it would be interesting to test whether face-to-face co-action through a transparent shared visual display induces even stronger social effects than separate experimental booths (Moeller et al., 2023).
The advent of new techniques such as time-continuous decision-making (Bonnen et al., 2015; Huk et al., 2018; Noel et al., 2023, 2022) and hyperscanning (Babiloni and Astolfi, 2014; Czeszumski et al., 2020) makes it possible to ask how evolving decision variables are represented in the neural circuitry underlying flexible behavior. This is an important step beyond the traditional approach based on discrete, trial-based decisions. Adapting this approach, we demonstrate real-time influence of social information on human perceptual decisions. Studies investigating the neuronal correlates of similar perceptual decisions have demonstrated faster and more accurate behavioral responses when the sensory evidence resulted in earlier and more reliable neuronal changes (Fan et al., 2018; Gold and Stocker, 2017; Kiani and Shadlen, 2009). It has also been shown that microstimulation- and optogenetically-elicited inputs can be integrated into perceptual decisions (Fetsch et al., 2018, 2014; Salzman et al., 1990). Along these lines, we propose that reliably accurate real-time social information is multiplexed with sensory signals, possibly resulting in enhanced encoding already in cortical neurons representing relevant sensory dimensions.
In summary, we show that the presence of a co-acting social partner adaptively changes continuous perceptual decisions, resulting in mutual but asymmetric convergence and a net dyadic benefit. This is particularly apparent in a strong improvement of competence, confidence, and reward score in the worse partner. The better partner, on the other hand, becomes less accurate and/or slightly less confident. These lawful relationships between confidence and competence modulations demonstrate the importance of considering these two measures concurrently, both within each participant and across interacting partners. These results advance our understanding of how humans evaluate and incorporate social information, especially in real-time decision-making situations that do not permit slow and careful deliberation.
Methods
Study design and participants
Data were recorded from 38 human participants (median age: 26.17 years, IQR: 4.01; 13 of whom had corrected vision) between January 2022 and August 2023. Prior to the experimental sessions, each participant was trained on two occasions. During the experimental phase, participants played three variations of the experimental paradigm: alone (solo), with a human player (Human-Human dyad), and with a computer player (Human-Computer dyad). The experimental order was mixed and largely determined by the availability of participants (Supplementary Figure 1a). Most participants in this study originated from central Europe or South Asia. All procedures performed in this study were approved by the Ethics Board of the University of Göttingen (Application 171/292).
Experimental setup
Participants sat in separate experimental booths with identical hardware. They were instructed to rest their head on a chinrest, placed 57 cm away from the screen (Asus XG27AQ, 27” LCD). A single-stick joystick (adapted analog multifunctional joystick (Sasse), resolution: 10-bit, 100 Hz) was anchored to an adjustable platform placed in front of the participants, at a height of 75 cm from the floor. The joystick was calibrated before data acquisition to ensure comparable readouts. Screens were calibrated to be isoluminant. Two speakers (Behringer MS16), one for each setup, were used to deliver auditory feedback at 70 dB SPL.
The experimental paradigm was programmed in MWorks (Version 0.10 - https://mworks.github.io). Two iMac Pro computers (Apple, MacOS Mojave 10.14.6) served as independent servers for each setup booth. These computers were controlled by an iMac Pro (Apple, MacOS Mojave 10.14.6). Custom-made plugins for MWorks were used to generate and display the stimuli, to handle the data acquisition from the joystick (10 ms sampling rate), and to incorporate all data from both servers into a single data file.
Continuous perceptual report (CPR) game
Participants were instructed to maximize their monetary outcome in a motion tracking game. In this game, subjects watched a frequently changing random dot pattern on the screen and used a joystick (Szul et al., 2020) to indicate their current motion direction percept. The joystick controlled an arc-shaped response cursor on the screen (partial circle with fixed eccentricity, Solo: 2 degrees of visual angle (‘dva’) radius from the center of the screen; Dyadic: 1.8 dva & 2 dva radius). The angular direction of the joystick was linked to the cursor’s polar center position. In addition, the joystick eccentricity was permanently coupled to the cursor’s width (see below, 13 – 180 degrees). This resulted in a continuous representation of the joystick position along its two axes. By moving the joystick, participants could rotate and shape the cursor. At unpredictable times (1% probability every 10 ms), a small white disc (‘reward target’, diameter: 0.5 dva) appeared on the screen for a duration of 50 ms at 2.5 dva eccentricity, congruent with the motion direction of the stimulus. Whenever a target appeared in line with the cursor, the target was considered collected (‘hit’) and the score of the participant increased. To help participants perform this alignment to the best of their perceptual abilities, a small triangular reference point was added to the center of the cursor. Throughout the experiment, participants were required to maintain gaze fixation on a central fixation cross (2.5 dva radius tolerance window) or the cursor would disappear and no targets could be collected until fixation was resumed.
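To make the target-appearance and collection rules concrete, they can be sketched as follows. This is a minimal Python illustration of the logic described above, not the MWorks implementation; the function names are ours.

```python
import random

def circular_distance(a, b):
    """Smallest absolute angular difference between two directions (degrees)."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def is_hit(target_direction, cursor_direction, cursor_width):
    """A target counts as collected ('hit') when it appears in line with the
    arc, i.e. within half the arc width of the cursor's center direction."""
    return circular_distance(target_direction, cursor_direction) <= cursor_width / 2

def target_appears(p=0.01):
    """Targets appear at unpredictable times: an independent 1% chance on
    every 10 ms sample, giving an approximately flat hazard rate."""
    return random.random() < p
```

For example, a 60-degree-wide cursor centered at 350 deg collects a target presented at 10 deg (angular distance 20 deg), whereas a cursor at 0 deg misses a target at 90 deg.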
In the solo experiments, the cursor was always red. In dyadic conditions, the two cursors, present on screen simultaneously, were red and green (isoluminant at 17.5 cd/m2 ± 1 cd/m2). During dyadic experiments, the position of the two cursors switched between stimulus cycles, with the red cursor always starting above, but not overlapping, the green cursor. Each cursor color was permanently associated with one of the two experimental booths. After the mid-session break, participants switched booths, contributing an equal amount of data for each setup (600 reward targets, ∼20 min, up to 17 stimulus cycles). Players initiated new stimulus cycles with a joystick movement. Each stimulus cycle could last up to 75 seconds, during which the RDP’s motion direction and coherence changed at pseudorandomized intervals, resulting in the presentation of 30 stimulus states per cycle.
Random dot pattern (RDP)
We used a circular RDP (8 dva radius) with white dots on a black background. Each dot had a diameter of 0.1 dva, moved at 8 dva/s, and had a lifetime of 25 frames (208 ms). The overall dot density was 2.5 dots/dva. The stimulus patch was centrally located on the screen. The central part of the stimulus (5 dva diameter) was blacked out; in this area we presented the fixation cross and the response arc. The RDP motion direction was randomly seeded and set to change instantly by either 15, 45, 90, or 135 deg after a pseudo-randomized time interval of 1250 ms to 2500 ms. Whether the signal moved in a clockwise or counterclockwise direction was random. Only signal dots altered their direction. After every 10 RDP direction changes, the dot coherence changed pseudo-randomly to the coherence level that had been presented least. Seven coherence conditions were tested: 0, 8, 13, 22, 36, 59, 98%.
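The coherence schedule – always switching to the least-presented level – can be sketched as follows (an illustrative Python reconstruction of the rule described above, not the actual stimulus code; names are ours):

```python
import random

COHERENCES = [0, 8, 13, 22, 36, 59, 98]  # percent, as in the experiment

def next_coherence(presentation_counts):
    """After every 10 direction changes, pick the coherence level that has
    been presented least so far; ties are broken at random so the order
    remains unpredictable while presentation counts stay balanced."""
    fewest = min(presentation_counts[c] for c in COHERENCES)
    candidates = [c for c in COHERENCES if presentation_counts[c] == fewest]
    return random.choice(candidates)
```

This balancing rule guarantees that, over a session, all seven coherence conditions are sampled approximately equally often.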
Gaze control
Participants were required to maintain gaze fixation at the center of the screen throughout each stimulus cycle. We used a white cross (0.3 dva diameter) as an anchor point for the participants’ gaze. The diameter of the fixation window was set to 5 dva. An eye tracker (SR Research, EyeLink 1000 Plus) was used to monitor gaze position in real-time. If the gaze position left the fixation window for more than 300 ms, the player’s arc would disappear from the screen, preventing target collection. In addition, an increase in the size of the fixation cross, together with a change in color (white to red), signaled to the participants that fixation was broken. As soon as the gaze re-entered the fixation window, visual parameters were reset to their original values and the arc would reappear, allowing the player to continue target collection.
Reward score
Participants were incentivized to maximize their monetary outcome by collecting as many targets as possible with the highest possible score. The minimum polar distance between the arc’s center position and the target’s center (‘accuracy’) as well as the angular width of the arc (‘eccentricity’) at the moment of collection were taken into account when calculating the score:
where RDPdir refers to the direction of the random dot pattern and JSdir refers to the direction of the joystick at sample i.
where Eccentricity varies between 0 and 1 for minimal and maximal eccentric joystick positions, respectively.
Thus, narrower and more accurately placed cursors caused higher reward scores.
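For intuition only, the following sketch shows one way a score could combine the two report dimensions. The exact weighting is defined by the equations referenced above; this Python stand-in is our own illustration, not the authors’ formula, and merely reproduces the qualitative rule that narrower, more accurately placed cursors score higher:

```python
def illustrative_score(rdp_dir, js_dir, eccentricity):
    """Hypothetical stand-in for the reward score (NOT the published formula):
    accuracy falls off with the circular angular error between stimulus
    direction and joystick direction, and eccentricity (0..1, cursor width
    narrowing with larger values) scales the payout."""
    error = abs(rdp_dir - js_dir) % 360
    error = min(error, 360 - error)   # circular angular error, 0..180 deg
    accuracy = 1 - error / 180        # 1 = perfectly aligned, 0 = opposite
    return accuracy * eccentricity
```

Under this toy rule, a perfectly aligned, maximally eccentric (narrow) cursor earns the full score, while misalignment or a wide, low-eccentricity cursor reduces it.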
Feedback signals
Various feedback signals were provided throughout the experiment to inform participants about their short- and long-term performance. All feedback signals were mutually visible.
Immediate feedback
Immediately after a target was collected, visual and acoustic signals were provided simultaneously. The auditory feedback consisted of a 200 ms long sinusoidal pure tone at a frequency determined by the score. Each tone corresponded to a reward range of 12.5%, with lower pitches corresponding to lower reward scores. We used 8 notes from the C5 major scale (523, 587, 659, 698, 784, 880, 988, 1047 Hz). Sounds were on- and off-ramped using a 50 ms Hanning window. No sound feedback was given for missed targets. In solo experiments, the visual feedback consisted of a 2 dva wide circle, filled in proportion to the score, in the same color as the player’s arc. The circle was presented in the center of the screen, behind the fixation cross, for 150 ms. In dyadic conditions, the visual feedback consisted of half a disc for each player, and both color-coded halves were mutually visible.
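The score-to-pitch mapping described above (eight notes, each covering a 12.5% reward range) can be sketched as follows (illustrative Python; the experiment used MWorks, and the function name is ours):

```python
NOTES_HZ = [523, 587, 659, 698, 784, 880, 988, 1047]  # C5 major scale

def feedback_tone(score):
    """Map a reward score in [0, 1] to one of 8 tones; each tone covers a
    12.5% score range, with lower pitch signaling lower reward."""
    index = min(int(score * 8), 7)  # clamp so score == 1.0 maps to the top note
    return NOTES_HZ[index]
```

For instance, a score of 0.5 falls in the fifth 12.5% bin and maps to the 784 Hz note.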
Short-term feedback
During each stimulus cycle, a running average of the reward score was displayed for each player with a 0.9 dva wide, color-coded ring around the circumference of the RDP (18.2 dva and 19.4 dva diameter). After every target presentation, the filled portion of the ring updated. To avoid spatial biasing, the polar zero position of the ring changed randomly with every stimulus cycle.
Long-term feedback
Cumulative visual feedback was provided after each stimulus cycle (during the inter-cycle intervals) for 2000 ms. It displayed the total reward score accumulated across all cycles as a colored bar graph located at the center of the screen. A grey bar (2 dva wide, max height: 10 dva) indicated the maximal possible cumulative score after each cycle. A colored bar next to it (same dimensions) showed how much was collected by the player so far. In dyadic experiments, red and green bars would be shown on either side of the grey bar. In solo experiments, a red arc was shown to the left of the grey bar. The configuration of the visual stimuli and task parameters is illustrated in Figure 1 and as a supplementary video file (Supplementary Video 1).
Behavioral analysis
Performance metrics were extracted and averaged in time windows of 30 frames (∼250 ms) that were either target- or state-aligned. Target-alignment refers to time windows prior to the first target presentation of each stimulus state. Target-aligned data were only considered if the first target appeared at least 1000 ms after the direction change. This analysis approach was chosen to (i) allow adequate time for a response and (ii) avoid the prediction of motion direction based on earlier target locations. State-alignment refers to a 30-frame time window before a motion direction change of the stimulus. Stimulus states in which fixation breaks exceeded 10% of the state duration were excluded from the performance analyses.
Statistical analysis
For population analyses, data were first averaged within-subject. Bootstrapped 95% confidence intervals were estimated with 1000 repetitions (Matlab: bootci). Differences between experimental conditions were tested with a two-sided Wilcoxon signed rank test (Matlab: signrank, for paired samples) or a two-sided Wilcoxon rank sum test (Matlab: ranksum, for unpaired samples). Bonferroni correction was applied for multiple testing. Whether or not coherence was pooled is indicated for each test. Within-subject effect size was estimated with the area under the receiver operating characteristic curve (‘AUC’, Matlab: perfcurve). AUC values of 0.5 indicated similar distributions; AUC values of 0 or 1 indicated perfectly separated distributions. Directionality of social modulation was inferred from the AUC (larger vs smaller than 0.5). Correlation coefficients were calculated with a Pearson correlation (Matlab: corrcoef). Average response lags between stimulus and response were estimated with the maximum cross-correlation coefficient (Matlab: xcorr). Social modulation differences between dyadic players were compared to the baseline modulation of shuffled dyadic partners (Matlab: randperm). We fitted three Generalized Linear Mixed Models (GLMM; Baayen, 2008) which differed in their response variable and the size of the data set analyzed but had identical fixed effects structures and largely identical random effects structures: one model each for the probability of a target hit (model 1a), joystick eccentricity (model 1b), and joystick accuracy (model 1c) as the response. All three aimed at estimating the extent to which the respective response variable was affected by the fixed effects of experimental condition (solo or human-computer dyad), random dot pattern coherence, stimulus duration, stimulus number, block number, and day number.
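As an illustration of the AUC effect-size measure used above, the AUC can be computed directly from pairwise comparisons between the two conditions, which is equivalent to the Mann–Whitney U statistic divided by the product of the sample sizes (a Python sketch, not the Matlab analysis code; names are ours):

```python
def auc_effect_size(solo, dyadic):
    """Area under the ROC curve as a within-subject effect size: the
    probability that a randomly drawn dyadic value exceeds a randomly drawn
    solo value (ties count half). 0.5 indicates overlapping distributions;
    0 or 1 indicates perfectly separated ones."""
    wins = sum(1.0 if d > s else 0.5 if d == s else 0.0
               for d in dyadic for s in solo)
    return wins / (len(dyadic) * len(solo))
```

An AUC above 0.5 then indicates that the dyadic values tend to exceed the solo values, which is how the directionality of social modulation is read off.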
We hypothesized that the effect of coherence depended on the condition; thus, we included the interaction between these two predictors in the fixed effects part of the model. To avoid pseudo-replication and account for the possibility that the response was influenced by several layers of non-independence, we included three random intercept effects, namely those of the ID of the participant, the ID of the test day (nested in participant; hereafter ‘day ID’), and the ID of the block (nested in participant and day; hereafter ‘block ID’). The reason for including the latter two was that it could be reasonably assumed that the performance of participants varied between test days and also between blocks tested on the same day. To avoid an ‘overconfident model’ and keep the type I error rate at the nominal level of 0.05, we included all theoretically possible random slopes (Barr et al., 2013; Schielzeth and Forstmeier, 2009). These were those of condition, coherence, their interaction, stimulus duration, stimulus number, block number, and day number within participant; coherence, stimulus duration, stimulus number, and block number within day ID; and finally coherence, stimulus duration, and stimulus number within block ID. Originally, we also included estimates of the correlations among random intercepts and slopes in each model, but due to convergence and identifiability problems (recognizable by absolute correlation parameters being close to 1; Matuschek et al., 2017) we had to exclude all or several of these estimates from the full models (see Supplementary Table 1 for detailed information).
For each model, we conducted a full-null model comparison, which aims at avoiding ‘cryptic multiple testing’ and keeping the type I error rate at the nominal level of 0.05 (Forstmeier and Schielzeth, 2011). As we had a genuine interest in all predictors present in the fixed effects part of each model, the null models comprised only the intercept in the fixed effects part but were otherwise identical to the respective full model. This full-null model comparison utilized a likelihood ratio test (Dobson, 2001). Tests of individual effects were also based on likelihood ratio tests, comparing the full model with each of a set of reduced models lacking the fixed effects one at a time.
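The full-null comparison boils down to a likelihood ratio test: twice the difference in log-likelihoods is compared against a chi-square distribution with degrees of freedom equal to the number of dropped fixed-effect parameters. A minimal Python sketch (not the R analysis code; restricted to even df, where the chi-square survival function has a closed form, and the function names are ours):

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function of the chi-square distribution for even df, using
    the closed-form series exp(-x/2) * sum_{k<df/2} (x/2)^k / k!.
    Sufficient for a sketch; a real analysis would use a stats library."""
    assert df % 2 == 0 and df > 0
    term = math.exp(-x / 2)
    total, factor = term, term
    for k in range(1, df // 2):
        factor *= (x / 2) / k
        total += factor
    return total

def likelihood_ratio_test(loglik_full, loglik_null, df_diff):
    """Full-null model comparison: the statistic 2*(llf - lln) is
    approximately chi-square distributed under the null model."""
    stat = 2 * (loglik_full - loglik_null)
    return stat, chi2_sf_even_df(stat, df_diff)
```

For example, a full model improving the log-likelihood by 1 unit over the null while adding 2 parameters yields a statistic of 2.0 and p = exp(-1) ≈ 0.37, i.e. no evidence against the null model.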
Model implementation
We fitted all models in R (version 4.3.2; R Core Team, 2023). In model 1a, we included the response as a two-column matrix with the number of targets hit and not hit in the first and second column, respectively (Baayen, 2008). The model was fitted with a binomial error structure and logit link function (McCullagh and Nelder, 1989). In essence, such models model the proportion of targets hit. We are aware that in principle one would need an ‘observation level random effect’ which would link the number of targets hit and not hit in a given state. However, in a relatively large proportion of states (19.7%) only a single target appeared, and in the majority of states (47.0%) only two targets appeared, making it unlikely that such a random effect could be fitted successfully.
Models 1b and 1c were fitted with a beta error distribution and logit link function (Bolker, 2008). Models fitted with a beta error distribution cannot cope with values in the response being exactly 0 or 1. Hence, when such values were present in a given response variable, we transformed them as suggested by Smithson & Verkuilen (2006). Model 1a was fitted using the function glmer of the package lme4 (version 1.1-34; Bates et al., 2015), and models 1b and 1c were fitted using the function glmmTMB of the equally named package (version 1.1.8; Brooks et al., 2017). We determined model stability by dropping levels of the random effects factors one at a time, fitting the full model to each of the subsets, and finally comparing the range of fixed effects estimates obtained from the subsets with those obtained from the model fitted on the respective full data set. This revealed all models to be of good stability. We estimated 95% confidence limits of model estimates and fitted values by means of parametric bootstraps (N = 1000 bootstraps; function bootMer of the package lme4 for model 1a and function simulate of the package glmmTMB for models 1b and 1c). The response was not overdispersed (maximum dispersion parameter: 1.0).
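The transformation proposed by Smithson & Verkuilen (2006) for moving responses off the [0, 1] boundaries is a one-liner; here as a Python sketch (the analysis itself was done in R, and the function name is ours), with n the number of observations:

```python
def squeeze_unit_interval(y, n):
    """Compress a response from [0, 1] into the open interval (0, 1), as
    suggested by Smithson & Verkuilen (2006): y' = (y * (n - 1) + 0.5) / n.
    Beta-distributed models cannot handle responses of exactly 0 or 1."""
    return (y * (n - 1) + 0.5) / n
```

With n = 100 observations, for example, an exact 0 becomes 0.005 and an exact 1 becomes 0.995, while interior values barely move.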
Acknowledgements
This work was supported by German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), SFB 1528 - Cognition of Interaction, project A01, and the Leibniz Collaborative Excellence grant K265/2019 “Neurophysiological mechanisms of primate interactions in dynamic sensorimotor settings” (PRIMAINT). We thank Fred Wolf for useful discussions.
Additional information
Author contribution
Conceptualization: F.S., A.C., A.G., I.K., S.T.; Methodology: F.S., A.C., S.T.; Investigation: F.S.; Analysis: F.S., R.M.; Software: F.S.; Writing – Original Draft: F.S., A.C., I.K.; Writing – Review & Editing: all authors; Funding Acquisition: A.G, I.K., S.T.; Resources: S.T.; Supervision: I.K, S.T.
Declaration of Interests
The authors declare no competing interests.
Lead contact
Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Felix Schneider (fschneider@dpz.eu).
Data and code availability
The dataset generated during this study is available at CLOUD INFORMATION (LINK). The MATLAB code generated during this study is available at GitHub (https://github.com/SocCog-Team/CPR/tree/main/Publications/2023_perceptual_confidence).
Supplementary Figures
Supplementary Tables
References
- Analyzing Linguistic Data: A Practical Introduction to Statistics using R. Cambridge University Press. https://doi.org/10.1017/CBO9780511801686
- Social neuroscience and hyperscanning techniques: Past, present and future. Neuroscience & Biobehavioral Reviews 44:76–93. https://doi.org/10.1016/j.neubiorev.2012.07.006
- What failure in collective decision-making tells us about metacognition. Philosophical Transactions of the Royal Society B: Biological Sciences 367:1350–1365. https://doi.org/10.1098/rstb.2011.0420
- Together, slowly but surely: The role of social interaction and feedback on the build-up of benefit in collective decision-making. Journal of Experimental Psychology: Human Perception and Performance 38:3–8. https://doi.org/10.1037/a0025708
- Optimally Interacting Minds. Science 329:1081–1085. https://doi.org/10.1126/science.1185718
- Separable neural signatures of confidence during perceptual decisions. eLife 10. https://doi.org/10.7554/eLife.68491
- Confidence controls perceptual evidence accumulation. Nat Commun 11. https://doi.org/10.1038/s41467-020-15561-w
- Confidence matching in group decision-making. Nat Hum Behav 1. https://doi.org/10.1038/s41562-017-0117
- Making better decisions in groups. R Soc Open Sci 4. https://doi.org/10.1098/rsos.170193
- Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language 68:255–278. https://doi.org/10.1016/j.jml.2012.11.001
- Fitting Linear Mixed-Effects Models Using lme4. J Stat Soft 67. https://doi.org/10.18637/jss.v067.i01
- Neurophysiological correlates of collective perceptual decision-making. Eur J Neurosci 51:1676–1696. https://doi.org/10.1111/ejn.14545
- Affective evaluations of objects are influenced by observed gaze direction and emotional expression. Cognition 104:644–653.
- Shared Neural Markers of Decision Confidence and Error Detection. J Neurosci 35:3478–3484. https://doi.org/10.1523/JNEUROSCI.0797-14.2015
- Ecological Models and Data in R. Princeton, NJ: Princeton University Press.
- Continuous psychophysics: Target-tracking to measure visual sensitivity. Journal of Vision 15. https://doi.org/10.1167/15.3.14
- Dynamic mechanisms of visually guided 3D motion tracking. Journal of Neurophysiology 118:1515–1531. https://doi.org/10.1152/jn.00831.2016
- glmmTMB Balances Speed and Flexibility Among Packages for Zero-inflated Generalized Linear Mixed Modeling. The R Journal 9. https://doi.org/10.32614/RJ-2017-066
- Hyperscanning: A Valid Method to Study Neural Inter-brain Underpinnings of Social Interaction. Front Hum Neurosci 14. https://doi.org/10.3389/fnhum.2020.00039
- Social Information Is Integrated into Value and Confidence Judgments According to Its Reliability. J Neurosci 37:6066–6074. https://doi.org/10.1523/JNEUROSCI.3880-16.2017
- An Introduction to Generalized Linear Models, Second Edition. Chapman & Hall/CRC Texts in Statistical Science. Chapman and Hall/CRC. https://doi.org/10.1201/9781420057683
- The Influence of Co-action on a Simple Attention Task: A Shift Back to the Status Quo. Front Psychol 9. https://doi.org/10.3389/fpsyg.2018.00874
- Interpersonal alignment of neural evidence accumulation to social exchange of confidence. eLife 12. https://doi.org/10.7554/eLife.83722
- Ongoing, rational calibration of reward-driven perceptual biases. eLife 7:1–26. https://doi.org/10.7554/eLife.36018
- Effects of Cortical Microstimulation on Confidence in a Perceptual Decision. Neuron 83:797–804. https://doi.org/10.1016/j.neuron.2014.07.011
- Focal optogenetic suppression in macaque area MT biases direction discrimination and decision confidence, but only transiently. eLife 7:1–23. https://doi.org/10.7554/eLife.36523
- How to measure metacognition. Frontiers in Human Neuroscience 8:1–9. https://doi.org/10.3389/fnhum.2014.00443
- Relating Introspective Accuracy to Individual Differences in Brain Structure. Science 329:1541–1543. https://doi.org/10.1126/science.1191883
- Cryptic multiple hypotheses testing in linear models: overestimated effect sizes and the winner’s curse. Behav Ecol Sociobiol 65:47–55. https://doi.org/10.1007/s00265-010-1038-5
- The role of social cognition in decision making. Phil Trans R Soc B 363:3875–3886. https://doi.org/10.1098/rstb.2008.0156
- Perception-related Modulations of Local Field Potential Power and Coherence in Primary Visual Cortex of Awake Monkey during Binocular Rivalry. Cerebral Cortex 14:300–313. https://doi.org/10.1093/cercor/bhg129
- Social conformity is due to biased stimulus processing: electrophysiological and diffusion analyses. Social Cognitive and Affective Neuroscience 11:1449–1459. https://doi.org/10.1093/scan/nsw050
- Visual Decision-Making in an Uncertain and Dynamic World. Annu Rev Vis Sci 3:227–250. https://doi.org/10.1146/annurev-vision-111815-114511
- Perceptual Decision Making in Rodents, Monkeys, and Humans. Neuron 93:15–31. https://doi.org/10.1016/j.neuron.2016.12.003
- Beyond Trial-Based Paradigms: Continuous Behavior, Ongoing Neural Activity, and Natural Stimuli. J Neurosci 38:7551–7558. https://doi.org/10.1523/JNEUROSCI.1920-17.2018
- A computational framework for the study of confidence in humans and animals. Phil Trans R Soc B 367:1322–1337. https://doi.org/10.1098/rstb.2012.0037
- Bayesian inference with incomplete knowledge explains perceptual confidence and its deviations from accuracy. Nat Commun 12. https://doi.org/10.1038/s41467-021-25419-4
- Representation of Confidence Associated with a Decision by Neurons in the Parietal Cortex. Science 324:759–764. https://doi.org/10.1126/science.1169405
- Responses of pulvinar neurons reflect a subject’s confidence in visual categorization. Nature Neuroscience.
- Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology 77:1121–1134. https://doi.org/10.1037/0022-3514.77.6.1121
- The influence of gaze direction on food preferences. Sci Rep 9. https://doi.org/10.1038/s41598-019-41815-9
- Signal Detection Theory Analysis of Type 1 and Type 2 Data: Meta-d′, Response-Specific Meta-d′, and the Unequal Variance SDT Model. In: Fleming SM, Frith CD, editors. The Cognitive Neuroscience of Metacognition. Berlin, Heidelberg: Springer Berlin Heidelberg. 25–66. https://doi.org/10.1007/978-3-642-45190-4_3
- A signal detection theoretic approach for estimating metacognitive sensitivity from confidence ratings. Consciousness and Cognition 21:422–430. https://doi.org/10.1016/j.concog.2011.09.021
- Balancing Type I error and power in linear mixed models. Journal of Memory and Language 94:305–315. https://doi.org/10.1016/j.jml.2017.01.001
- Generalized Linear Models. Boston, MA: Springer US. https://doi.org/10.1007/978-1-4899-3242-6
- Subliminal gaze cues increase preference levels for items in the gaze direction. Cognition and Emotion 32:1146–1151. https://doi.org/10.1080/02699931.2017.1371002
- Human and macaque pairs employ different coordination strategies in a transparent decision game. eLife 12. https://doi.org/10.7554/eLife.81641
- Post-decision wagering after perceptual judgments reveals bi-directional certainty readouts. Cognition 176:40–52. https://doi.org/10.1016/j.cognition.2018.02.026
- Post-decisional accounts of biases in confidence. Current Opinion in Behavioral Sciences 11:55–60. https://doi.org/10.1016/j.cobeha.2016.05.005
- Coding of latent variables in sensory, parietal, and frontal cortices during closed-loop virtual navigation. eLife 11. https://doi.org/10.7554/eLife.80280
- Causal inference during closed-loop navigation: parsing of self- and object-motion. Phil Trans R Soc B 378. https://doi.org/10.1098/rstb.2022.0344
- Integration of individual and social information for decision-making in groups of different sizes. PLoS Biol 15. https://doi.org/10.1371/journal.pbio.2001958
- Post-decision wagering objectively measures awarenessNat Neurosci 10:257–261https://doi.org/10.1038/nn1840
- The perceptual and social components of metacognitionJournal of Experimental Psychology: General 145:949–965https://doi.org/10.1037/xge0000180
- Benefits of spontaneous confidence alignment between dyad membersCollective Intelligence 1https://doi.org/10.1177/26339137221126915
- The effects of recursive communication dynamics on belief updatingProc R Soc B 287https://doi.org/10.1098/rspb.2020.0025
- Cortical microstimulation influences perceptual judgements of motion directionNature 346:174–177https://doi.org/10.1038/346174a0
- Conclusions beyond support: overconfident estimates in mixed modelsBehavioral Ecology 20:416–420https://doi.org/10.1093/beheco/arn145
- The uncertain response in humans and animalsCognition 62:75–97https://doi.org/10.1016/S0010-0277(96)00726-3
- A better lemon squeezer? Maximum-likelihood regression with beta-distributed dependent variablesPsychological Methods 11:54–71https://doi.org/10.1037/1082-989X.11.1.54
- Putting perception into action with inverse optimal control for continuous psychophysicseLife 11https://doi.org/10.7554/eLife.76635
- The validity and consistency of continuous joystick response in perceptual decision-makingBehav Res 52:681–693https://doi.org/10.3758/s13428-019-01269-3
- The effects of reward and social context on visual processing for perceptual decision-makingCurrent Opinion in Physiology 16:109–117https://doi.org/10.1016/j.cophys.2020.08.006
- Determinants and modulators of human social decisionsNeuroscience & Biobehavioral Reviews 128:383–393https://doi.org/10.1016/j.neubiorev.2021.06.041
- Informational and Normative Influences in Conformity from a Neurocomputational PerspectiveTrends in Cognitive Sciences 19:579–589https://doi.org/10.1016/j.tics.2015.07.007
- Neural substrates of norm compliance in perceptual decisionsSci Rep 8https://doi.org/10.1038/s41598-018-21583-8
- Social modulation of decision-making: a cross-species reviewFront Hum Neurosci 7https://doi.org/10.3389/fnhum.2013.00301
- Metacognition in human decision-making: confidence and error monitoringPhil Trans R Soc B 367:1310–1321https://doi.org/10.1098/rstb.2011.0416
Copyright
© 2024, Schneider et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.