Abstract
It is widely agreed that people make irrational decisions in the presence of irrelevant distractor options. However, there is little consensus on whether decision making is facilitated or impaired by the presence of a highly rewarding distractor, or on whether the distractor effect operates at the level of options’ component attributes rather than at the level of their overall value. To reconcile these different claims, we argue that it is important to consider the diversity of ways in which people make decisions. We focus on a recent debate over whether people combine choice attributes in an additive or multiplicative way. Employing a multi-laboratory dataset investigating the same decision making paradigm, we demonstrated that people used a mix of both approaches, and that the extent to which each approach was used varied across individuals. Critically, we identified that this variability was correlated with the effect of the distractor on decision making. Individuals who tended to use a multiplicative approach, and hence focused on overall value, showed a positive distractor effect. In contrast, in individuals who tended to use an additive approach, driven by component attributes, the opposite, negative distractor effect (divisive normalisation) was prominent. These findings suggest that distractor effects can operate at the level of overall choice values and concur with recent behavioural and neuroscience findings that multiple distractor effects co-exist.
Introduction
Psychologists, economists and neuroscientists have been interested in whether and how decision making is influenced by the presence of unchooseable distractor options. Rationally, choices ought to be unaffected by distractors; however, it has been demonstrated repeatedly that this is not the case in human decision making. For example, the presence of a strongly rewarding, yet unchooseable, distractor can either facilitate or impair decision making (Louie et al., 2013; Chau et al., 2014; Webb et al., 2020). Which effect predominates depends on the distractor’s interactions with the chooseable options which, in turn, are a function of their values (Chau et al., 2020).
Intriguingly, most investigations have considered the interaction between distractors and chooseable options either at the level of their overall utility or at the level of their component attributes, but not both. Recently, however, one study has considered both possible levels of option-distractor interactions and argued that the distractor effect operates mainly at the attribute level rather than the overall utility level (Cao and Tsetsos, 2022). When the options are composed of component attributes (for example, one attribute might indicate the probability of an outcome if the option is chosen while the other might indicate the magnitude of the outcome), it is argued that the distractor exerts its effects through interactions with the attributes of the chooseable options. However, as has been argued in other contexts, just because one type of distractor effect is present does not preclude another type from existing; different distractor effects are not mutually exclusive (Chau et al., 2020; Kohl et al., 2023).
Moreover, the fact that people have diverse ways of making decisions is often overlooked. We argue that this diversity can make different forms of distractor effect more or less prominent in different circumstances.
Multiple distractor effects have been proposed. At the level of the overall utility of the choice, the divisive normalization model suggests that the presence of a valuable distractor can impair decision accuracy; this is sometimes known as a negative distractor effect (Louie et al., 2013; Webb et al., 2020; Kohl et al., 2023). Conversely, an attractor network model suggests the opposite – a valuable distractor can improve decision accuracy by slowing down decision speed, which is sometimes known as a positive distractor effect (Chau et al., 2014; Chang et al., 2019; Chau et al., 2020; Kohl et al., 2023). At the level of the component attributes, the selective integration model emphasizes that individual attributes of the distractor interact with individual attributes of the chooseable options and distort the way they are integrated. Although these models are sometimes discussed as if they were mutually exclusive, it is possible that some, if not all, of them can co-exist. For example, in a two-attribute decision making scenario, each option and distractor fall at different points in a two-dimensional decision space defined by the two component attributes. Whether the distractor facilitates or impairs decision accuracy depends on the exact locations of the chooseable options and distractor (Dumbalska et al., 2020). Alternatively, the decision space can be defined by the sum of and differences in the two chooseable options’ overall values, and it has been shown that positive and negative distractor effects predominate in different parts of this decision space (Chau et al., 2020; Kohl et al., 2023). Hence, it is unlikely that the distractor affects decision making in a single, monotonic way. In addition, although most studies focused on analyzing distractor effects at the group level, they often involved a mix of individuals showing positive and negative distractor effects (Chang et al., 2019; Webb et al., 2020).
It is possible that such variability in distractor effects could be related to variability in people’s ways of combining attributes during decision making.
Options that we choose in everyday life are often multi-attribute in nature. For example, a job-seeker may consider the salary and the chance of successfully getting the job before submitting the application. An ideal way of combining the two attributes (the salary and success rate) is by calculating their product, i.e. employing the Expected Value model or a multiplicative rule (von Neumann and Morgenstern, 1944). However, it has been argued that, instead of following this ideal rule, people use a simpler additive rule to combine attributes, in which individual attributes of the same option are added together, often via a weighted-sum procedure (Farashahi et al., 2019; Cao and Tsetsos, 2022). This is equivalent to assuming that people compare pairs of attributes separately, without integration. As such, it is easy to take a dichotomous view that people use either a multiplicative or an additive rule in their decision making. Intriguingly, however, it has recently been shown that, at the level of each individual human or monkey, decision making involves a combination of both rules (Scholl et al., 2014; Bongioanni et al., 2021). The two computational strategies may rely on distinct neuronal mechanisms, one in parietal cortex estimating the perceptual differences between stimuli, leading to an additive rule, and a more sophisticated one in prefrontal cortex tracking integrated value, leading to a multiplicative rule. It is possible to devise a composite model by having a single parameter (an integration coefficient) to describe, for each individual decision maker, the extent to which decision-making is based on multiplication/addition. In other words, the integration coefficient captures individual differences in people’s and animals’ degree of attribute combination during decision making (Figure 1). Here we consider whether such individual differences could result in different forms of distractor effect.
In the current study, we re-analysed data collected from three different laboratories that involved 144 human participants choosing between two options in the presence of a distractor (Figure 1a and b) (Chau et al., 2014; Gluth et al., 2018; Chau et al., 2020). Recently, these data have been fitted using a multiplicative rule, an additive rule, and a multiplicative rule with divisive normalization (Cao and Tsetsos, 2022). It was argued that participants’ choice behaviour was best described by the additive rule and that the previously reported positive distractor effect was absent when utility was estimated using the additive rule. Here, we fitted the data using the same models and procedures, but also considered an additional composite model to capture individual variations in the relative use of multiplicative and additive rules (Figure 1c). We found that this composite model provides the best account of participants’ behaviour. Critically, those who employed a more multiplicative style of integrating choice attributes also showed stronger positive distractor effects, whereas those who employed a more additive style showed negative distractor effects. These findings concur with neural data demonstrating that the medial prefrontal cortex computes the overall values of choices in ways that go beyond simply adding their components together, and it is the neural site at which positive distractor effects emerge (Barron et al., 2013; Chau et al., 2014; Noonan et al., 2017; Papageorgiou et al., 2017; Fouragnan et al., 2019; Bongioanni et al., 2021).
Results
This study analysed empirical data acquired from human participants (five datasets; N=144) performing the multi-attribute decision-making task described in Figure 1 (Chau et al., 2014; Gluth et al., 2018).
Participants were tasked with maximising their rewards by choosing between options that were defined by different reward magnitudes (X) and probabilities (P), depicted as rectangular bars of different colours and orientations, respectively (Figure 1b). The task involved, as a control, Two-Option Trials, on which participants were offered two options. It also involved Distractor Trials, on which two chooseable options were presented alongside a distractor option which could not be selected. The two chooseable options and the unchooseable distractor option are referred to as the higher value (HV), lower value (LV), and distractor value (DV) options, based on their utility. We began our analyses by defining utility as Expected Value (i.e., EV = X × P).
Was a distractor effect, on average, absent?
We used a general linear model (GLM), GLM1, to examine whether a distractor effect was present in participants’ choice behaviour. GLM1 included three regressors to predict the choice of the HV option: an HV − LV term that represents choice difficulty (i.e., as the difference in value between HV and LV becomes smaller, it becomes more difficult to select the better option), a DV − HV term that represents the relative distractor value, and an (HV − LV)(DV − HV) interaction term that examines whether the distractor effect was modulated as a function of choice difficulty. Similar approaches have been used previously (Chau et al., 2014; Gluth et al., 2018; Chau et al., 2020; Cao and Tsetsos, 2022; Kohl et al., 2023). On Distractor Trials (Figure 2a), the results of GLM1 showed a positive DV − HV effect [β = 0.0835, t(143) = 3.894, p = 0.000151] and HV − LV effect [β = 0.486, t(143) = 20.492, p < 10^−43], and a negative (HV − LV)(DV − HV) interaction effect [β = −0.0720, t(143) = −3.668, p = 0.000344]. However, Cao and Tsetsos (Cao and Tsetsos, 2022) suggested that the same analysis should be applied to the control Two-Option Trials, with each control trial analysed by including a hypothetical distractor identical to the actual distractor present in the matched experimental Distractor Trial; they also suggested specific methods for trial matching. Theoretically, a distractor effect should be absent when GLM1 is applied to these control trials because no distractor was actually present. Surprisingly, however, and consistent with the finding of Cao and Tsetsos, the Two-Option Trials (Figure 2b) displayed a significant DV − HV effect [β = 0.0685, t(143) = 2.877, p = 0.00463] and an (HV − LV)(DV − HV) interaction effect [β = −0.0487, t(143) = −2.477, p = 0.0144], alongside the expected HV − LV effect [β = 0.635, t(143) = 27.975, p < 10^−59].
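To illustrate the structure of this analysis, here is a minimal sketch of GLM1 as a logistic regression on z-scored regressors. This is not the authors’ code (the paper used MATLAB’s glmfit), and the simulated participant values are arbitrary; it only shows how the three regressors enter the model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate one participant's 150 Distractor Trials (values are arbitrary)
n = 150
hv = rng.uniform(0.4, 1.0, n)         # higher-value option
lv = hv - rng.uniform(0.05, 0.35, n)  # lower-value option
dv = rng.uniform(0.1, 1.0, n)         # distractor

def z(x):
    return (x - x.mean()) / x.std()   # z-score a regressor

# GLM1 design matrix: intercept, z(HV-LV), z(DV-HV), z((HV-LV)(DV-HV))
X = np.column_stack([np.ones(n), z(hv - lv), z(dv - hv),
                     z((hv - lv) * (dv - hv))])

# Generate choices with a known positive choice-difficulty (HV-LV) effect
true_beta = np.array([0.3, 1.0, 0.0, 0.0])
y = rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))  # 1 = chose HV

def neg_log_lik(beta):
    # Negative binomial log-likelihood of the observed HV choices
    q = np.clip(1 / (1 + np.exp(-X @ beta)), 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(q) + (1 - y) * np.log(1 - q))

beta_hat = minimize(neg_log_lik, np.zeros(4), method="BFGS").x
# beta_hat[1] recovers the positive HV-LV (choice-difficulty) effect
```

In the real analysis the same design is fitted per participant and the coefficients are then tested against zero across the 144 participants.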
One key test of whether a distractor effect was present was to compare the strength of distractor effects on the experimental Distractor Trials and the control Two-Option Trials. This was achieved by adapting GLM1 into GLM2 and then matching the control Two-Option Trials to the Distractor Trials (here we followed exactly the approach suggested by Cao and Tsetsos; more details are presented in the section GLM analysis of relative choice accuracy). In addition to the regressors involved in GLM1, GLM2 included a binary variable T describing the trial type (i.e., 0 = Two-Option Trials and 1 = Distractor Trials), and all original GLM1 regressors were multiplied by this variable T. Hence, a stronger distractor effect on Distractor Trials than on control trials should manifest as a significant (DV − HV)T effect or (HV − LV)(DV − HV)T effect. However, neither the (DV − HV)T nor the (HV − LV)(DV − HV)T term was significant. While these results may seem to suggest that a distractor effect was not present at the overall group level, we argue that the precise way in which a distractor affects decision making depends on how individuals integrate the attributes.
Identifying individual variability in combining choice attributes
During multi-attribute decision making, people integrate information from different attributes. However, the method of integration can be highly variable across individuals and this, in turn, has an impact on how a distractor effect manifests. Hence, it is imperative to first consider individual differences in how participants integrate the choice attributes. Previous work has demonstrated that choice behaviour may not be adequately described by either additive or multiplicative models alone, but rather by a combination of both (Bongioanni et al., 2021). We fitted these models to the Two-Option Trial data, such that the fitting was independent of any distractor effects, and tested which model best describes participants’ choice behaviour. In particular, we included the same set of models suggested by Cao and Tsetsos, namely an EV model, an additive utility (AU) model, and an EV model combined with divisive normalisation. In addition, we included a composite model which uses an integration coefficient to incorporate both additive and multiplicative methods of combining option attributes. A larger integration coefficient indicates that an individual places more weight on using a multiplicative rule than an additive rule.
When a Bayesian model comparison was performed, the results showed that the composite model provided the best account of participants’ choice behaviour (Figure 3; exceedance probability = 1.000, estimated model frequency = 0.8785). Figures 3c and d show the fitted parameters of the composite model: 𝜂, the integration coefficient determining the relative weighting of the additive and multiplicative value (M = 0.324, SE = 0.0214); γ, the magnitude/probability weighting ratio (M = 0.415, SE = 0.0243); and ϑ, the inverse temperature (M = 9.643, SE = 0.400). Our finding that the average integration coefficient 𝜂 was 0.324 coincides with previous evidence that people are biased towards using an additive, rather than a multiplicative, rule. However, it also shows that, rather than being fully additive (𝜂 = 0) or multiplicative (𝜂 = 1), people’s choice behaviour is best described as a mixture of both.
Multiplicative style of integrating choice attributes was associated with a significant positive distractor effect
It has been shown that evaluations of choices driven by more than just an additive combination of attribute features depend on the medial and/or adjacent ventromedial prefrontal cortex (Bongioanni et al., 2021; Papageorgiou et al., 2017). On the other hand, a positive distractor effect is also linked to the modulation of decision signals in a similar prefrontal region (Chau et al., 2014). On the basis of these findings, and given their similar anatomical associations, we expected that a positive distractor effect may be closely related to the use of a multiplicative method of choice evaluation. As such, we proceeded to explore how the distractor effect (i.e., the effect of (DV − HV)T obtained from GLM2; Figure 2c) was related to the integration coefficient (𝜂) of the winning composite model via a Spearman’s rank correlation (Figure 4). As expected, a significant positive correlation was observed [r(142) = 0.286, p = 0.000548].
This correlation could be driven by three possible patterns of distractor effect. First, greater integration coefficients (i.e., being more multiplicative) could be related to more positive distractor effects. Second, smaller integration coefficients (i.e., being more additive) could be related to more negative distractor effects. Third, both positive and negative distractor effects could be present, but appear separately in predominantly multiplicative and additive individuals respectively. To test which was the case, we used the mean integration coefficient value to divide the participants into two groups [Multiplicative Group (N = 71) and Additive Group (N = 73)]. We then analysed the data from each group using GLM2 (Figures 5b and c). We found that the distractor effect, reflected in the (DV − HV)T term, was significantly greater in the Multiplicative Group than in the Additive Group (Figure 5; t(142) = 3.792, p = 0.00022). Critically, the distractor effects were significant within each group but bore opposite signs: the distractor effect was positive in the Multiplicative Group [β = 0.105, t(70) = 3.438, p = 0.000991] but negative in the Additive Group [β = −0.0725, t(72) = −2.053, p = 0.0437].
Finally, we performed two additional analyses that revealed comparable results to those shown in Figure 5. In the first analysis, reported in Supplementary Figure 1, we added an HV + LV term to the GLM, because this term was included in some analyses of a previous study that used the same dataset (Chau et al., 2020). In the second analysis, reported in Supplementary Figure 2, we replaced the utility terms of GLM2. Since the above analyses involved HV, LV, and DV values defined by the normative Expected Value model, here we re-defined the values using the composite model prior to applying GLM2. The results of both analyses remained broadly comparable: in the Multiplicative Group a significant positive distractor effect was found in both analyses, and in the Additive Group a significant negative distractor effect was found in one analysis, with a similar, though non-significant, trend in the other.
Discussion
It has been widely agreed that humans make irrational decisions in the presence of distractor options. However, there has been little consensus on how exactly distractors influence decision making. Does a seemingly attractive distractor impair or facilitate decision making? Does the distractor influence decision making at the level of overall utility or individual choice attributes? Often, one assumption behind these questions is that there is only a single type of distractor effect. Here, we demonstrated that, instead of a unidirectional effect, the way a distractor influences decision making depends on individuals’ styles of integrating choice attributes during decision making. More specifically, those who employed a multiplicative style of integrating attributes were prone to a positive distractor effect. Conversely, those who employed an additive style of integrating attributes were prone to a negative distractor effect. These findings show that the precise way in which distractors affect decision making depends on an interaction between the distractor value and people’s style of decision making.
At the neuroanatomical level, the negative distractor effect is mediated by the posterior parietal cortex (PPC) (Louie et al., 2011; Chau et al., 2014), but the same region is also crucial for perceptual decision making processes (Shadlen and Shohamy, 2016). The additive heuristics for combining choice attributes are closer to a perceptual evaluation because distances in this subjective value space correspond linearly to differences in physical attributes of the stimuli, whereas objective (multiplicative) value has a non-linear relation with them (cf. Figure 1c). It is well understood that many sensory mechanisms, such as in primates’ visual systems or fruitflies’ olfactory systems, are subject to divisive normalization (Carandini and Heeger, 2012). Hence, the additive heuristics that are more closely based on sensory mechanisms could also be subject to divisive normalization, leading to negative distractor effects in decision making.
In contrast, the positive distractor effect is mediated by the medial prefrontal cortex (mPFC) (Chau et al., 2014; Fouragnan et al., 2019), which is also crucial for representing abstract cognitive spaces. In situations that require combining multiple dimensions non-linearly, such as multi-attribute decision making, understanding social relations, and navigating abstract knowledge, the mPFC integrates the dimensions to achieve a closer approximation of objective value, sometimes using a “grid-like” code (Constantinescu et al., 2016; Bongioanni et al., 2021; Park et al., 2021). Indeed, disrupting the mPFC in macaques also impaired their use of a multiplicative strategy. Hence, a positive distractor effect appears only when the mPFC constructs the actual expected value non-linearly from the component attributes, a more refined strategy than the one followed by the PPC.
Other studies have provided evidence suggesting that multiple forms of distractor effect can co-exist. For example, when people were asked to choose between three food items, most people were poorer at choosing the best option when the worst option (i.e. the distractor) was still an appealing option (Louie et al., 2013; Webb et al., 2020). In other words, most people showed negative distractor effects, which is predicted by divisive normalization models (Carandini and Heeger, 2012; Louie et al., 2013).
However, it is noticeable that the degree of negative distractor effect varies across individuals, and some even showed the reverse, positive distractor effect. Similarly, it has been shown that, in social decision making, choice utility was best described by a divisive normalization model (Chang et al., 2019).
However, at the same time, a positive distractor effect was found in choice behaviour, such that greater distractor values were associated with more choices of the best option. The effect became even more robust when individual variability was considered in a mixed effects model. Together, these results suggest that divisive normalization (which predicts negative distractor effects) and the positive distractor effect may co-exist, but predominate in different aspects of decision making.
Indeed, it is possible that multiple forms of distractor effect can co-exist because of their different neuroanatomical origins. In human and monkey, a positive distractor effect was found in decision signals in the mPFC (Chau et al., 2014; Fouragnan et al., 2019), whereas divisive normalization was found in decision signals in the PPC (Louie et al., 2011; Chau et al., 2014). As such, it should be expected that while these opposite distractor effects might sometimes diminish one another, disruption in one of the brain regions might result in the expression of the distractor effect related to the other brain region.
Indeed, this idea is supported by empirical data that the parietal-related, negative distractor effect was more prominent in humans and monkeys with a lesion in the medial prefrontal cortex (Noonan et al., 2010; Noonan et al., 2017). In addition, the prefrontal-related, positive distractor effect was more prominent after the parietal cortex was transiently disrupted using transcranial magnetic stimulation (Kohl et al., 2023). These findings concur with the general notion that decision making is mediated by a distributed neural circuit, rather than a single, localized brain region.
Methods
Multi-attribute decision-making task and datasets
The current study re-analysed five published datasets of empirical data based on a multi-attribute decision-making task (Chau et al., 2014; Gluth et al., 2018). The experimental task involved participants making a decision between two options in the absence (Two-Option Trials) or presence (Distractor Trials) of a third distractor option that could not be chosen. During each trial, two or three stimuli associated with different reward magnitudes and probabilities (represented by colours and orientations) were randomly presented in selected screen quadrants (Figure 1). Immediately following stimulus onset (0.1 s), options that were available for selection were surrounded by orange boxes, while the distractor option that could not be selected was surrounded by a purple box. The choice of the participant was indicated by the change in colour of the surrounding box from orange to red. At the end of each trial, the edge of each stimulus changed to yellow if the choice was rewarded, and to grey if the choice was not rewarded. A total of 144 human participants were included in the analysis, with data from the original fMRI dataset (N = 21; Chau et al., 2014) and additional replication experiments (Experiments 1, 2, 3, and 4) performed by Gluth and colleagues (N = 123; Gluth et al., 2018).
GLM analysis of relative choice accuracy
The choice accuracy data from the Two-Option Trials and Distractor Trials were analysed separately using GLM1:
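Based on the regressor descriptions in the Results, GLM1 takes the form of a logistic regression (a reconstruction; the β labels are ours):

```latex
\operatorname{logit} p(\text{choose } HV) = \beta_0 + \beta_1\, z(HV - LV) + \beta_2\, z(DV - HV) + \beta_3\, z\big((HV - LV)(DV - HV)\big) + \varepsilon
```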
where HV, LV, and DV refer to the values of the chooseable higher value option, chooseable lower value option, and distractor, respectively. z(x) refers to the z-scoring of term x, which was applied to all terms within the GLMs. The unexplained error is represented by ε. Trials in which the distractor or an empty quadrant was mistakenly chosen were excluded from the analysis. Fitting was performed using MATLAB’s glmfit function by assuming a binomial distribution.
The choice accuracy data of the Two-Option Trials and Distractor Trials were then analysed together using GLM2, which followed exactly the procedures described by Cao and Tsetsos (Cao and Tsetsos, 2022). To combine both trial types into a single GLM and assess distractor-related interactions, GLM2 introduced a binary variable (T) encoding trial type (i.e., 0 = Two-Option Trials and 1 = Distractor Trials):
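GLM2 extends GLM1 with the trial-type variable T and its interactions (a reconstruction from the description above, using the same notation as GLM1):

```latex
\begin{aligned}
\operatorname{logit} p(\text{choose } HV) ={}& \beta_0 + \beta_1\, z(HV - LV) + \beta_2\, z(DV - HV) + \beta_3\, z\big((HV - LV)(DV - HV)\big) \\
&+ \beta_4\, T + \beta_5\, z(HV - LV)\, T + \beta_6\, z(DV - HV)\, T + \beta_7\, z\big((HV - LV)(DV - HV)\big)\, T + \varepsilon
\end{aligned}
```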
We focused on the relative accuracy of Distractor Trials compared to Two-Option Trials by considering the proportion of H choices among trials with H or L choices and excluding the small number of D-choice trials. We also identified matched Two-Option Trials for each Distractor Trial. The 150 Distractor Trials yielded 149 unique conditions, each with a unique combination of probability (P) and magnitude (X) attributes across the three options (H, L, and D). However, the 150 Two-Option Trials contained only 95 unique conditions with a unique combination of P and X across the two available options (H and L). As the Distractor and Two-Option Trials do not exhibit a one-to-one mapping, some Distractor Trials have more than one matched Two-Option Trial. The counts of matched Two-Option Trials were used as ‘observation weights’ in the GLM. By identifying all matched Two-Option Trials of every Distractor Trial, the baselining approach introduces ‘trial-by-trial baseline accuracy’ as a new dependent variable, as described by Cao and Tsetsos (Cao and Tsetsos, 2022).
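The matching and weighting logic can be sketched as follows. The (P, X) condition tuples here are hypothetical toy values, not task stimuli; the real analysis matches each Distractor Trial to control trials sharing the full set of H and L attributes.

```python
from collections import Counter

# Each trial reduced to its (P, X) attribute tuples; values are hypothetical.
# Two-Option Trials: (H, L). Distractor Trials: (H, L, D).
two_option_trials = [((0.7, 0.6), (0.5, 0.3)),
                     ((0.7, 0.6), (0.5, 0.3)),
                     ((0.9, 0.2), (0.4, 0.8))]
distractor_trials = [((0.7, 0.6), (0.5, 0.3), (0.3, 0.9)),
                     ((0.9, 0.2), (0.4, 0.8), (0.6, 0.5))]

# Count how many Two-Option Trials share each (H, L) condition ...
counts = Counter(two_option_trials)

# ... and use that count as the observation weight of the matched Distractor Trial
weights = [counts[(h, l)] for (h, l, d) in distractor_trials]
# The first distractor condition has two matched control trials, the second has one
```

Because the mapping is many-to-one, these counts enter the GLM as observation weights rather than each control trial being entered separately.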
Computational modelling
To determine which model best describes participants’ choice behaviour, we followed the procedures carried out by Cao and Tsetsos (Cao and Tsetsos, 2022). We fitted three models to choice data from the control Two-Option Trials, namely the Expected Value (EV) model, the Additive Utility (AU) model, and the Expected Value and Divisive Normalisation (EV+DN) model. In this study, we also included an additional composite model. For each model, we applied the softmax function as the basis for estimating choice probability:
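For two chooseable options, the softmax rule gives the probability of choosing the HV option as:

```latex
p(H \text{ over } L) = \frac{e^{\vartheta U_H}}{e^{\vartheta U_H} + e^{\vartheta U_L}} = \frac{1}{1 + e^{-\vartheta (U_H - U_L)}}
```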
where Ui refers to the utility of option i (either the HV or LV option) and ϑ refers to the inverse temperature parameter. Four models were used to estimate the options’ utility Ui, based on their corresponding reward magnitude Xi and probability Pi (both rescaled to the interval between 0 and 1): the Expected Value model, the Additive Utility model, the Expected Value and Divisive Normalisation model, and the Composite model.
Expected Value (EV) model
This model employs a multiplicative rule for estimating utility:
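That is, utility is the product of magnitude and probability:

```latex
U_i = X_i \times P_i
```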
Additive Utility (AU) model
This model employs an additive rule for estimating utility based on the magnitude Xi and probability Pi as follows:
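That is, a weighted sum of the two attributes:

```latex
U_i = \gamma X_i + (1 - \gamma) P_i
```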
where γ is the magnitude/probability weighting ratio (0 ≤ γ ≤ 1).
Expected Value and Divisive Normalisation (EV+DN) model
We included the EV+DN model following the procedures carried out by Cao and Tsetsos (Cao and Tsetsos, 2022). Compared to the EV model, here utilities were normalised by the values of all presented options as follows:
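In one common divisive-normalisation form (our reconstruction of the description above; published implementations sometimes add a saturation constant to the denominator), each option’s expected value is divided by the sum of the expected values of all presented options:

```latex
U_i = \frac{EV_i}{\sum_j EV_j}, \qquad EV_i = X_i \times P_i
```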
where EVi, Xi, and Pi denote the expected value, reward magnitude, and reward probability of option i, respectively.
Composite model
We further explored the possibility that behaviour may be described as a mixture of both additive AU and multiplicative EV models (Scholl et al., 2014; Bongioanni et al., 2021):
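With 𝜂 the integration coefficient, the composite utility can be written as an 𝜂-weighted mixture of the multiplicative and additive utilities (a reconstruction consistent with the parameter definitions below):

```latex
U_i = \eta\,(X_i \times P_i) + (1 - \eta)\,\big(\gamma X_i + (1 - \gamma) P_i\big)
```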
where γ is the magnitude/probability weighting ratio (0 ≤ γ ≤ 1) and 𝜂 is the integration coefficient (0 ≤ 𝜂 ≤ 1) determining the relative weighting of the additive and multiplicative value. Simulations reported in Bongioanni et al., 2021 show that the composite model can be accurately discriminated from the simple additive and multiplicative models, even when the latter include non-linear distortions of the magnitude and probability dimensions. Additionally, it was shown that the parameters of the composite model are recovered accurately.
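To make the four utility rules concrete, here is a small Python sketch (the study’s analyses were implemented in MATLAB; the function names and default parameter values here are ours):

```python
import numpy as np

def utility(X, P, model, gamma=0.5, eta=0.5):
    """Option utilities under the four candidate models.

    X, P  : arrays of reward magnitudes and probabilities (rescaled to [0, 1])
    gamma : magnitude/probability weighting ratio (AU and composite models)
    eta   : integration coefficient (composite model; 0 = additive, 1 = multiplicative)
    """
    X, P = np.asarray(X, float), np.asarray(P, float)
    if model == "EV":          # multiplicative rule
        return X * P
    if model == "AU":          # additive (weighted-sum) rule
        return gamma * X + (1 - gamma) * P
    if model == "EV+DN":       # expected values, divisively normalised across options
        ev = X * P
        return ev / ev.sum()
    if model == "composite":   # eta-weighted mixture of the EV and AU utilities
        return eta * X * P + (1 - eta) * (gamma * X + (1 - gamma) * P)
    raise ValueError(f"unknown model: {model}")

def p_choose_first(U, theta):
    """Softmax probability of choosing option 0 over option 1."""
    return 1 / (1 + np.exp(-theta * (U[0] - U[1])))
```

Note that the composite model reduces to the EV model when eta = 1 and to the AU model when eta = 0, which is why a single fitted 𝜂 can place each participant on the additive-multiplicative continuum.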
Model fitting and comparison
The empirical and model-predicted choice probabilities were used to calculate the binomial log-likelihood, following the procedures described by Cao and Tsetsos (Cao and Tsetsos, 2022):
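With pe and pm defined below, the binomial log-likelihood summed over trials is (a reconstruction from the description):

```latex
LL = \sum_{\text{trials}} \big[\, p_e \ln p_m + (1 - p_e) \ln (1 - p_m) \,\big]
```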
Here, pe and pm represent the empirical and model-predicted p(H over L), respectively.
Model fitting was performed on the behavioural data to maximise the log-likelihood summed over trials. A grid of randomly generated starting values for the free parameters of each model was used to fit each participant’s data at least 10 times to avoid local optima. Model fitting was performed using MATLAB’s fmincon function with the maximum number of function evaluations and iterations set to 5000 and the optimality and step tolerance set to 10^−10. The variational Bayesian analysis (VBA) toolbox (Daunizeau et al., 2014; Rigoux et al., 2014) was used to calculate each model’s posterior frequency and protected exceedance probability. Aligning with the methods used by Cao and Tsetsos (Cao and Tsetsos, 2022), only trials in which H or L responses were made were included in the analysis. Trials in which participants opted for the unchooseable D were excluded.
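The fitting procedure can be illustrated with a minimal parameter-recovery sketch in Python (the paper used MATLAB’s fmincon; the synthetic task values, the number of restarts shown, and the parameter bounds here are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic Two-Option Trials: attribute values rescaled to [0, 1]
n = 300
X1, P1 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
X2, P2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)

def composite_u(X, P, gamma, eta):
    # eta-weighted mixture of multiplicative (X*P) and additive utilities
    return eta * X * P + (1 - eta) * (gamma * X + (1 - gamma) * P)

def neg_ll(params, choices):
    # Negative binomial log-likelihood of choosing option 1 under softmax
    gamma, eta, theta = params
    du = composite_u(X1, P1, gamma, eta) - composite_u(X2, P2, gamma, eta)
    pm = np.clip(1 / (1 + np.exp(-theta * du)), 1e-9, 1 - 1e-9)
    return -np.sum(choices * np.log(pm) + (1 - choices) * np.log(1 - pm))

# Generate choices from known parameters, then refit from random starting values
gamma0, eta0, theta0 = 0.4, 0.3, 9.0
du0 = composite_u(X1, P1, gamma0, eta0) - composite_u(X2, P2, gamma0, eta0)
choices = (rng.random(n) < 1 / (1 + np.exp(-theta0 * du0))).astype(float)

bounds = [(0, 1), (0, 1), (0, 50)]  # gamma, eta, theta
fits = [minimize(neg_ll,
                 [rng.uniform(0, 1), rng.uniform(0, 1), rng.uniform(1, 20)],
                 args=(choices,), method="L-BFGS-B", bounds=bounds)
        for _ in range(10)]
best = min(fits, key=lambda f: f.fun)
gamma_hat, eta_hat, theta_hat = best.x
```

As in the paper’s procedure, multiple random restarts guard against local optima, and the best-fitting parameter set is the one with the lowest negative log-likelihood.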
Five-fold cross-validation was performed for model comparison. First, the 150 trials for each participant were divided into five folds. Four folds were then used to generate best-fitting parameters, which were used to calculate the log-likelihood summed across trials in the held-out fold. This process was repeated five times, once per held-out fold, and the log-likelihoods were averaged across folds to generate the cross-validated log-likelihood for each model. We then simulated each model’s behaviour using the best-fitting parameters obtained during model fitting. The simulated behaviour was cross-fitted to all models to calculate the log-likelihoods summed over trials. Lastly, the goodness-of-fit of the models was evaluated using Bayesian model comparison.
Acknowledgements
This work was supported by the Hong Kong Research Grants Council (15105522) and Wellcome Trust grant 221794/Z/20/Z.
References
- Online evaluation of novel choices by simultaneous representation of multiple memories. Nat Neurosci 16:1492–1498
- Activation and disruption of a neural mechanism for novel choice in monkeys. Nature 591:270–274
- Clarifying the role of an unavailable distractor in human multiattribute choice. eLife 11
- Normalization as a canonical neural computation. Nat Rev Neurosci 13:51–62
- Comparing value coding models of context-dependence in social choice. Journal of Experimental Social Psychology 85
- A neural mechanism underlying failure of optimal choice with multiple alternatives. Nat Neurosci 17:463–470
- Consistent patterns of distractor effects during decision making. eLife 9
- Organizing conceptual knowledge in humans with a gridlike code. Science 352:1464–1468
- A map of decoy influence in human multialternative choice. Proc Natl Acad Sci U S A 117:25169–25178
- Flexible combination of reward information across primates. Nat Hum Behav 3:1215–1224
- The macaque anterior cingulate cortex translates counterfactual choice value into actual behavioral change. Nat Neurosci 22:797–808
- Value-based attentional capture affects multi-alternative decision making. eLife 7
- Intraparietal stimulation disrupts negative distractor effects in human multi-alternative decision-making. eLife 12
- Reward value-based gain control: divisive normalization in parietal cortex. J Neurosci 31:10627–10639
- Normalization is a general neural mechanism for context-dependent decision making. Proceedings of the National Academy of Sciences
- Contrasting Effects of Medial and Lateral Orbitofrontal Cortex Lesions on Credit Assignment and Decision-Making in Humans. J Neurosci 37:7023–7035
- Separate value comparison and learning mechanisms in macaque medial and lateral orbitofrontal cortex. Proc Natl Acad Sci U S A 107:20547–20552
- Inverted activity patterns in ventromedial prefrontal cortex during value-guided decision-making in a less-is-more task. Nature Communications 8
- Inferences on a multidimensional social hierarchy use a grid-like code. Nat Neurosci 24:1292–1301
- A role beyond learning for NMDA receptors in reward-based decision-making - a pharmacological study using d-cycloserine. Neuropsychopharmacology 39:2900–2909
- Decision Making and Sequential Sampling from Memory. Neuron 90:927–939
- Theory of Games and Economic Behavior. Princeton: Princeton University Press
- Divisive normalization does influence decisions with multiple alternatives. Nat Hum Behav 4:1118–1120
Copyright
© 2023, Wong et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.