Overt visual attention modulates decision-related signals in ventral and dorsal medial prefrontal cortex

  1. Center for Computational Psychiatry, Icahn School of Medicine at Mount Sinai, New York, United States
  2. Department of Psychology, The Ohio State University, Columbus, United States
  3. Department of Economics, The Ohio State University, Columbus, United States
  4. Department of Psychology, University of California Los Angeles, Los Angeles, United States

Peer review process

Revised: This Reviewed Preprint has been revised by the authors in response to the previous round of peer review; the eLife assessment and the public reviews have been updated where necessary by the editors and peer reviewers.


Editors

  • Reviewing Editor
    Redmond O'Connell
    Trinity College Dublin, Dublin, Ireland
  • Senior Editor
    Joshua Gold
    University of Pennsylvania, Philadelphia, United States of America

Reviewer #1 (Public review):

Summary:

This study builds upon a major theoretical account of value-based choice, the 'attentional drift diffusion model' (aDDM), and examines whether and how this might be implemented in the human brain using functional magnetic resonance imaging (fMRI). The aDDM states that the process of internal evidence accumulation across time should be weighted by the decision maker's gaze, with more weight being assigned to the currently fixated item. The present study aims to test whether there are (a) regions of the brain where signals related to the currently presented value are affected by the participant's gaze; (b) regions of the brain where previously accumulated information is weighted by gaze.

To examine this, the authors developed a novel paradigm that allowed them to dissociate currently and previously presented evidence, at a timescale amenable to measuring neural responses with fMRI. They asked participants to choose between bundles or 'lotteries' of food items, which they revealed sequentially and slowly to the participant across time. This allowed modelling of the haemodynamic response to each new observation in the lottery, separately for previously accumulated and currently presented evidence.

Using this approach, they find that regions of the brain supporting valuation (vmPFC and ventral striatum) have responses reflecting gaze-weighted valuation of the currently presented item, whereas regions previously associated with evidence accumulation (preSMA and IPS) have responses reflecting gaze-weighted modulation of previously accumulated evidence.

A major strength of the current paper is the design of the task, nicely allowing the researchers to examine evidence accumulation across time despite using a technique with poor temporal resolution. The dissociation between currently presented and previously accumulated evidence in different brain regions in GLM1 (before gaze-weighting), as presented in Figure 5, is already compelling. The result that regions such as preSMA respond positively to |AV| (absolute difference in accumulated value) is particularly interesting, as it would seem that the 'decision conflict' account of this region's activity might predict the exact opposite result. Additionally, the behaviour has been well modelled at the end of the paper when examining temporal weighting functions across the multiple samples.

In response to reviewer comments, the authors have explicitly tested for the effects of gaze-weighting over and above any main effect of value, and convincingly shown that these effects are both present in the main regions of interest - namely |SV| and gaze-weighted |SV| in the vmPFC, alongside |AV| and |AV_gaze| in the pre-SMA. This provides clear evidence in support of the notion of gaze-weighting of value signals in these regions.

Reviewer #2 (Public review):

Summary:

In this paper the authors seek to disentangle brain areas that encode the subjective value of individual stimuli/items (input regions) from those that accumulate those values into decision variables (integrators) for value-based choice. The authors used a novel task in which stimulus presentation was slowed down to ensure that such a dissociation was possible using fMRI despite its relatively low temporal resolution. In addition, the authors leveraged the fact that gaze increases item value, providing a means of distinguishing brain regions that encode decision variables from those that encode other quantities such as conflict or time-on-task. The authors adopt a region-of-interest approach based on an extensive previous literature and found that the ventral striatum and vmPFC correlated with the item values and not their accumulation whereas the pre-SMA, IPS and dlPFC correlated more strongly with their accumulation. Further analysis revealed that the pre-SMA was the only one of the three integrator regions to also exhibit gaze modulation.

The study uses a highly innovative design and addresses an important and timely topic. The manuscript is well-written and engaging, while the data analysis appears highly rigorous.

Weaknesses:

With 23 subjects the study has relatively low statistical power for fMRI although the within-subjects design and relatively high trial count reduces these concerns.

Author response:

The following is the authors’ response to the original reviews.

Public Reviews:

Reviewer #1 (Public review):

Summary:

This study builds upon a major theoretical account of value-based choice, the 'attentional drift diffusion model' (aDDM), and examines whether and how this might be implemented in the human brain using functional magnetic resonance imaging (fMRI). The aDDM states that the process of internal evidence accumulation across time should be weighted by the decision maker's gaze, with more weight being assigned to the currently fixated item. The present study aims to test whether there are (a) regions of the brain where signals related to the currently presented value are affected by the participant's gaze; (b) regions of the brain where previously accumulated information is weighted by gaze.

To examine this, the authors developed a novel paradigm that allowed them to dissociate currently and previously presented evidence, at a timescale amenable to measuring neural responses with fMRI. They asked participants to choose between bundles or 'lotteries' of food items, which they revealed sequentially and slowly to the participant across time. This allowed modelling of the haemodynamic response to each new observation in the lottery, separately for previously accumulated and currently presented evidence.

Using this approach, they find that regions of the brain supporting valuation (vmPFC and ventral striatum) have responses reflecting gaze-weighted valuation of the currently presented item, whereas regions previously associated with evidence accumulation (preSMA and IPS) have responses reflecting gaze-weighted modulation of previously accumulated evidence.

Strengths:

A major strength of the current paper is the design of the task, nicely allowing the researchers to examine evidence accumulation across time despite using a technique with poor temporal resolution. The dissociation between currently presented and previously accumulated evidence in different brain regions in GLM1 (before gaze-weighting), as presented in Figure 5, is already compelling. The result that regions such as preSMA respond positively to |AV| (absolute difference in accumulated value) is particularly interesting, as it would seem that the 'decision conflict' account of this region's activity might predict the exact opposite result. Additionally, the behaviour has been well modelled at the end of the paper when examining temporal weighting functions across the multiple samples.

Weaknesses:

The results relating to gaze-weighting in the fMRI signal could do with some further explication to become more complete. A major concern with GLM2, which looks at the same effects as GLM1 but now with gaze-weighting, is that these gaze-weighted regressors may be (at least partially) correlated with their non-gaze-weighted counterparts (e.g., SVgaze will correlate with SV). But the non-gaze-weighted regressors have been excluded from this model. In other words, the authors are not testing for effects of gaze-weighting of value signals *over and above* the base effects of value in this model. In my mind, this means that the GLM2 results could simply be a replication of the findings from GLM1 at present. GLM3 is potentially a stronger test, as it includes the value signals and the interaction with gaze in the same model. But here, while the link to the currently attended item is quite clear (and a replication of Lim et al, 2011), the link to previously accumulated evidence is a bit contorted, depending upon the interpretation of a behavioural regression to interpret the fMRI evidence. The results from GLM3 are also, by the authors' own admission, marginal in places.

We have addressed this comment with new GLMs. The new GLM1 includes both non-gaze-weighted and gaze-weighted regressors and finds that the vmPFC and striatum reflect gaze-weighted sampled value, while the preSMA reflects gaze-weighted accumulated value. We have now dropped the old GLM3 and added two other GLMs, one that explicitly interacts accumulated value with accumulated dwell, and the other that considers only partial gaze discounting. These analyses all support the preSMA as encoding gaze-weighted accumulated value.

Reviewer #2 (Public review):

Summary:

In this paper, the authors seek to disentangle brain areas that encode the subjective value of individual stimuli/items (input regions) from those that accumulate those values into decision variables (integrators) for value-based choice. The authors used a novel task in which stimulus presentation was slowed down to ensure that such a dissociation was possible using fMRI despite its relatively low temporal resolution. In addition, the authors leveraged the fact that gaze increases item value, providing a means of distinguishing brain regions that encode decision variables from those that encode other quantities such as conflict or time-on-task. The authors adopt a region-of-interest approach based on an extensive previous literature and found that the ventral striatum and vmPFC correlated with the item values and not their accumulation, whereas the pre-SMA, IPS, and dlPFC correlated more strongly with their accumulation. Further analysis revealed that the preSMA was the only one of the three integrator regions to also exhibit gaze modulation.

Strengths:

The study uses a highly innovative design and addresses an important and timely topic. The manuscript is well-written and engaging, while the data analysis appears highly rigorous.

Weaknesses:

With 23 subjects, the study has relatively low statistical power for fMRI.

We believe several features of our study design and analytic approach mitigate concerns regarding statistical power.

First, our paradigm leveraged a within-subjects design with high total sample counts. Each participant completed approximately 60 choice trials across three 15-minute runs, with an average of 6.37 samples per trial. This yielded roughly 380 observations per participant, providing substantial statistical power at the individual level before aggregating across subjects. This within-subject power is particularly important for detecting parametric effects, as our regressors of interest (|∆SV| and |∆AV|) varied continuously across and within trials.

Second, rather than conducting an exploratory whole-brain analysis that would require larger sample sizes to correct for multiple comparisons, we employed a targeted ROI approach based on well-established regions from prior literature (e.g., Bartra et al., 2013; Hare et al., 2011). This ROI-driven approach substantially increases statistical power by reducing the search space and leverages theoretical predictions about where effects should occur. Our novel contribution, that gaze modulation of accumulated evidence signals is reflected in preSMA activity, builds naturally on established findings. However, we acknowledge that a larger sample size would provide greater confidence in the null effects and would enable more detailed individual differences analyses.

We have added a brief acknowledgement of the sample size limitation to the Discussion section of the main text:

“While our sample size of 20 subjects is modest by current neuroimaging standards, the within-subject statistical power from our extended decision paradigm (~380 observations per subject), combined with hypothesis-driven ROI analyses and multiple comparisons correction, provides confidence in our core findings. Nevertheless, replication with larger samples would be valuable, particularly for more fully characterizing null effects and marginal findings.”

Recommendations for the authors:

Editor Comments:

Reviewer 1 in particular makes a number of suggestions for additional analyses that would help to strengthen the evidence supporting your conclusions.

We thank the editor and the reviewers for the helpful suggestions for improving our manuscript. We discuss our efforts to address each point below.

Reviewer #1 (Recommendations for the authors):

(1) To address my concerns about GLM2, the first thing to do might be to simply show the correlation between the regressors used across the three different models (e.g., as a figure in the methods). Although the authors have done a good job to ensure that AV and SV are decorrelated when including them both in the same model, they haven't shown us whether the regressors used in, for example, GLM2 are correlated/similar to the regressors used in GLM1. This is important information for interpretation.

Thank you for raising concerns about the overlap between different models. We agree that additional information regarding the correlation among sample-level regressors would aid readers in understanding the differences among the analyses. We now include this information in Figure 7 in the Methods section, as requested. While |SV| was uncorrelated with gaze-weighted |SV| (|SVGaze|; Pearson’s r = 0.002, p = 0.848), lagged |AV| was significantly correlated with lagged, gaze-weighted |AV| (lagged |AVGaze|; r = 0.365, p < 2.2 × 10⁻¹⁶).

(2) The acid test for gaze-modulation of value signals would be to show that the gaze-modulated signals explain the fMRI results over and above the non-gaze-modulated signals. This could simply mean including SVgaze and SV (and equivalent terms for AV) within the same GLM. Following from point (1), the authors may point out that these terms are highly correlated - yes, but the GLM will then test for the effects of SVgaze *over and above* the effects of SV. (In fact, although I'd normally caution against orthogonalisation - it would here be totally legitimate to orthogonalise SVgaze w.r.t. SV).

We appreciate the reviewer’s suggestions for more robust tests of the presence of gaze-weighted signals. For reasons highlighted in our response above, we were initially hesitant to include both types of regressors in the same model due to their significant correlation. However, we now report the results of this analysis in the main text as the new GLM 1. This model incorporates both gaze-weighted and non-gaze-weighted terms. For each contrast we used the same procedures as reported in the main text (family-wise error corrected at p<0.05 and cluster-forming thresholds at p<0.005).

In the vmPFC, we found significant effects of both |∆SV| (peak voxel: x = -14, y = 44, z = -12; t = 3.90, p = 0.0190) and |∆SVGaze| (peak voxel: x = 4, y = 38, z = -4; t = 5.21, p = 0.004), but no effects of |∆AV| or |∆AVGaze|. The striatum also showed a significant correlation with |∆SVGaze| (peak voxel: x = 22, y = 20, z = -10; t = 5.10, p = 0.014), but no other regressors.

In the pre-SMA, we found a significantly positive relationship with both |∆AV| (peak voxel: x = 4, y = 14, z = 50; t = 4.75, p < 0.001) and |∆AVGaze| (peak voxel: x = 4, y = 18, z = 50; t = 2.98, p = 0.032). In contrast, the dlPFC (x = 40, y = 34, z = 26; t = 6.83, p < 0.001) and IPS (x = 42, y = -50, z = 42; t = 5.16, p = 0.010) were only correlated with |∆AV|. No other significant contrasts emerged.

These results provide direct support for the presence of gaze-modulated value signals in the brain, which we now describe in the main text Results section.

(3) With regards to GLM3, it would help to provide a bit more detail on what the time series looks like for the gaze regressor in this model - is it the entire timeseries of gaze (which presumably shifts back/forth between options multiple times within each trial) which is being convolved with the HRF? This seems different from how gaze is being calculated in GLM2, where it is amalgamated into an 'average gaze difference' within a sample between left/right options, if I understand the text correctly?

We apologize for the lack of details regarding how we operationalized the gaze regressors in our analyses. You are correct that the gaze regressor was calculated differently in GLM2 and GLM3.

However, in response to the reviewer’s points above (Major Point 2) and below (Major Point 4, Minor Point 1), we have decided to drop the old GLM3 from the paper while incorporating a revised GLM1 (combining old GLM1 and GLM2) and two new GLMs (see responses to Major Point 4 and Minor Point 1) to provide clearer evidence for gaze modulation of accumulated value in the brain.

(4) Also, is there not a reason why it isn't more appropriate to interact AV with *previously deployed gaze difference* (accumulated across previous samples) in this model, rather than the current gaze location? The latter seems to rely upon the indirect linkage via the behavioural modelling result, which seems to weaken the claim.

We thank the reviewer for this suggestion. We agree that our original GLM3 approach was limited because it interacted AV with current binary gaze location, which relies on the indirect behavioral relationship we established (i.e., that current gaze is negatively correlated with accumulated past gaze).

The original GLM2 (which is now incorporated into the new GLM1) implemented something similar to what the reviewer is suggesting as it used gaze-weighted values accumulated across all previous samples. Specifically, in GLM2, the gaze-weighted accumulated value (AVgaze) was calculated as the sum of all previous sampled values, each weighted by the proportion of gaze allocated to each option during that sampling period.

However, to more directly test whether accumulated evidence signals are modulated by accumulated gaze allocation we have now run an additional analysis (GLM2). In this analysis we have revised the old GLM3 to include additional regressors: ∆SV, lagged ∆AV, current gaze location, accumulated dwell advantage, ∆SV × current gaze location, and lagged ∆AV × accumulated dwell advantage.

The two new regressors were defined as follows:

Accumulated dwell advantage: For each sample t, accumulated dwell advantage represents the cumulative difference in gaze allocation up to sample t-1, calculated as (total dwell left – total dwell right) / (total dwell left + total dwell right). This is a continuous measure from -1 (all previous gaze to right) to +1 (all previous gaze to left).

∆AV × accumulated dwell advantage: The interaction between accumulated values and accumulated dwell advantage, which directly tests whether brain regions encoding accumulated value are modulated by the history of gaze allocation.
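The two regressors defined above can be sketched in code. This is an illustrative reconstruction with made-up per-sample dwell times and values, not the paper's actual analysis code; all variable names (e.g., `dwell_left`, `delta_sv`) are our own.

```python
# Illustrative reconstruction of the two GLM2 regressors from hypothetical
# per-sample dwell times and signed sampled values (delta_sv = V_left - V_right).

def dwell_advantage(dwell_left, dwell_right):
    """Accumulated dwell advantage at each sample t, from samples 1..t-1.

    Ranges from -1 (all previous gaze right) to +1 (all previous gaze left);
    defined here as 0 for the first sample, which has no gaze history.
    """
    adv = [0.0]
    for t in range(1, len(dwell_left)):
        total_l = sum(dwell_left[:t])
        total_r = sum(dwell_right[:t])
        adv.append((total_l - total_r) / (total_l + total_r))
    return adv

dwell_left = [1.2, 0.4, 0.8]     # seconds on left option, per sample (made up)
dwell_right = [0.3, 1.1, 0.7]    # seconds on right option, per sample
delta_sv = [4.0, -2.0, 1.0]      # V_left - V_right, per sample

adv = dwell_advantage(dwell_left, dwell_right)

# Lagged signed accumulated value (0 at the first sample).
lagged_av = [0.0]
for t in range(1, len(delta_sv)):
    lagged_av.append(lagged_av[-1] + delta_sv[t - 1])

# Interaction regressor: positive when the gaze history and the
# accumulated evidence favour the same side.
interaction = [a * v for a, v in zip(adv, lagged_av)]
```

In this toy trial, gaze initially favours the left option while accumulated evidence also favours left, so the interaction term is positive at the second sample.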

This approach is conceptually similar to old GLM2’s gaze-weighting method, but allows us to examine the interaction effect more explicitly as a separate regressor rather than having it embedded within the value calculation.

Here, we found that the pre-SMA showed a positive correlation with the ∆AV × accumulated dwell advantage term (peak voxel: x = 8, y = 10, z = 58; t = 3.10, p = 0.0258). Surprisingly, the striatum also showed a correlation with this term (peak: x = -16, y = 10, z = -6; t = 4.07, p = 0.0176). No other ROIs showed significant relationships.

This analysis provides additional evidence that pre-SMA encodes accumulated value signals that are modulated by accumulated gaze allocation, without relying on indirect relationships between current and past gaze. We now report these results in the main text as GLM2 as follows:

“To more directly test whether accumulated evidence signals were modulated by accumulated gaze allocation throughout a trial, we conducted additional, exploratory analyses. Specifically, we ran a GLM that incorporated the following two terms: accumulated dwell advantage and ∆AV × accumulated dwell advantage, in addition to ∆SV, the current gaze location, and ∆SV × current gaze location.

We calculated accumulated dwell advantage as follows: For each sample t, accumulated dwell advantage is the cumulative difference in gaze allocation up to sample t-1, calculated as (total dwell left – total dwell right) / (total dwell left + total dwell right). This is a continuous measure from -1 (all previous gaze to right) to +1 (all previous gaze to left).

We also included the interaction between accumulated dwell advantage and ∆AV (i.e., signed accumulated evidence). This interaction term is positive when gaze is primarily to the left and left has more value or when gaze is primarily to the right and right has more value. This interaction term directly tests whether brain regions encoding accumulated evidence are modulated by the history of gaze allocation. This approach allows us to examine the interaction effect more explicitly as a separate regressor rather than having it embedded within the value calculation itself.

This GLM revealed a positive correlation between pre-SMA activity and the ∆AV × accumulated dwell advantage term (peak voxel: x = 8, y = 10, z = 58; t = 3.01, p = 0.026). Surprisingly, the striatum also showed a correlation with this term (peak voxel: x = -16, y = 10, z = -6; t = 4.07, p = 0.018). Additionally, activity in the dlPFC was positively correlated with ∆SV (peak voxel: x = -36, y = 34, z = 22; t = 3.96, p = 0.016). No other ROIs showed significant relations.

This analysis provides additional evidence that the pre-SMA encodes accumulated value signals that are modulated by the history of gaze allocation.”

Minor

(1) "In Trial A, the subject looks left 30% of the time and right 70% of the time. In Trial B, the subject looks left 70% of the time and right 30% of the time. In Trial A, the net input value ("drift rate") would be |0.3 ∙ 7 − 0.7 ∙ 3| = 0. In Trial B, the drift rate would be |0.7 ∙ 7 − 0.3 ∙ 3| = 4." I may be missing something, but isn't this consistent with an aDDM with theta=0, rather than theta=0.3-0.5 as is typically found?

The reviewer raises an important point about our assumptions regarding attentional discounting. We agree that our approach could be problematic as it may assume stronger discounting than has been observed in the literature.
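The reviewer's arithmetic can be made concrete with a minimal sketch: weighting values purely by gaze proportion corresponds to an aDDM with full discounting (θ = 0), while a typical partial-discounting θ yields a nonzero drift in Trial A. The `drift` helper and values are illustrative, not the paper's code.

```python
# Illustrative sketch of the reviewer's point: pure gaze-proportion
# weighting is equivalent to an aDDM with theta = 0 (full discounting);
# partial discounting (theta ~ 0.3-0.5) gives a different drift.

def drift(gaze_left, v_left, v_right, theta):
    """aDDM-style net input: the non-fixated value is discounted by theta."""
    gaze_right = 1.0 - gaze_left
    return (gaze_left * (v_left - theta * v_right)
            - gaze_right * (v_right - theta * v_left))

trial_a = abs(drift(0.3, 7, 3, theta=0.0))          # reviewer's Trial A: ~0
trial_b = abs(drift(0.7, 7, 3, theta=0.0))          # reviewer's Trial B: ~4
trial_a_partial = abs(drift(0.3, 7, 3, theta=0.3))  # nonzero once theta > 0
```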

To address this concern, we calculated drift on a sample-by-sample basis before aggregating to the trial level. Following Smith, Krajbich, and Webb (2019), for each individual sample within a trial, we computed:

β = (GLeft × VLeft) – (GRight × VRight)

γ = (GRight × VLeft) – (GLeft × VRight),

where GLeft and GRight represent the proportion of time spent fixating left versus right within that specific sample, and VLeft and VRight are the instantaneous values of the left and right options. We then averaged these sample-level β and γ values across all samples within each trial to obtain trial-level regressors. This approach preserves the fine-grained temporal dynamics of gaze-dependent value accumulation that would be lost by calculating gaze proportions only at the trial level.

Using this sample-level method in a mixed-effects logistic regression predicting choice (left vs. right), we estimated subject-specific values of θ = γ/β. Across our sample (N=20), we found mean θ = 0.77 (SD = 0.21, range = 0.55–1.25). These estimates are somewhat higher than the typical aDDM findings of attentional bias (θ = 0.3–0.5). This may reflect the drawn-out nature of this task relative to prior aDDM tasks.
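A minimal sketch of this sample-level regressor construction follows, using the β and γ definitions above. The `beta_gamma` helper and example values are hypothetical; the subject-level θ = γ/β in the paper comes from the coefficients of a mixed-effects logistic regression, which is not reproduced here.

```python
# Hypothetical sketch of the sample-level beta/gamma construction
# (following Smith, Krajbich, & Webb, 2019, as described above).

def beta_gamma(samples):
    """Trial-level beta and gamma, averaged over samples.

    Each sample is (g_left, v_left, v_right), where g_left is the
    proportion of dwell on the left option within that sample.
    """
    betas, gammas = [], []
    for g_left, v_left, v_right in samples:
        g_right = 1.0 - g_left
        betas.append(g_left * v_left - g_right * v_right)
        gammas.append(g_right * v_left - g_left * v_right)
    n = len(samples)
    return sum(betas) / n, sum(gammas) / n

# Example trial with two samples (gaze proportions and values made up).
trial = [(0.7, 7.0, 3.0), (0.4, 5.0, 6.0)]
beta, gamma = beta_gamma(trial)
```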

Next, we ran a new GLM that incorporated these θ estimates in the sampled value estimates. For this GLM3, we computed θ-weighted sampled value (|∆TWSV|) as:

TWSV = (GLeft × (VLeft − θVRight)) − (GRight × (VRight − θVLeft)).

Similar to GLM1, we computed an accumulated value signal based on the lagged sum of previous samples’ |∆TWSV| (i.e., |∆TWAV|).
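The θ-weighted regressors can be sketched as follows, assuming hypothetical gaze proportions and values; the `tw_sv` helper name is our own, and we take the lagged accumulated signal as the absolute value of the running sum of signed samples, matching the original GLM's construction.

```python
# Sketch of the theta-weighted regressors: the non-fixated option is
# discounted by theta rather than fully ignored, and accumulated value
# lags the running sum of signed theta-weighted samples by one.

def tw_sv(g_left, v_left, v_right, theta):
    """Signed theta-weighted sampled value for one sample."""
    g_right = 1.0 - g_left
    return (g_left * (v_left - theta * v_right)
            - g_right * (v_right - theta * v_left))

theta = 0.77                       # mean subject-level estimate reported above
samples = [(0.8, 6.0, 2.0), (0.3, 4.0, 5.0), (0.5, 7.0, 1.0)]  # made up

sv_signed = [tw_sv(g, vl, vr, theta) for g, vl, vr in samples]

# |delta_TWAV| regressor: absolute lagged sum of previous signed samples.
av = [0.0]
running = 0.0
for t in range(1, len(sv_signed)):
    running += sv_signed[t - 1]
    av.append(abs(running))
```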

We found significant positive effects of |∆TWSV| in the vmPFC (peak voxel: x = -14, y = 44, z = -12; t = 3.57, p = 0.0270) and IPS (peak voxel: x = 30, y = -28, z = 40; t = 4.58, p = 0.0198), but in no other ROI.

In contrast, we found significant positive relationships between |∆TWAV| and activity in the preSMA (peak voxel: x = 0, y = 22, z = 52; t = 4.68, p = 0.0014), dlPFC (peak voxel: x = 40, y = 32, z = 26; t = 4.32, p = 0.0040), and IPS (peak voxel: x = 44, y = -48, z = 42; t = 6.26, p < 0.0001). Notably, we also observed a significant relationship between |∆TWAV| and activity in the vmPFC (x = 8, y = 38, z = 18; t = 3.89, p = 0.0410). No other significant contrasts emerged.

We now report this additional analysis as GLM3 in the main text, as follows:

“In our first set of analyses, we implicitly assumed complete discounting of non-fixated information, in contrast with previous studies that have generally found only partial discounting (Krajbich et al., 2010; Sepulveda et al., 2020; Smith & Krajbich, 2019; Westbrook et al., 2020). To verify that our results are robust to inter-subject variability in attentional discounting, we estimated subject-level attentional discounting parameters and then re-estimated our original GLM with new, recalculated gaze-weighted value regressors.

Following Smith, Krajbich, and Webb (2019), for each individual sample within a trial, we computed:

β = (GLeft × VLeft) – (GRight × VRight)

γ = (GRight × VLeft) – (GLeft × VRight),

where GLeft and GRight represent the proportion of time spent gazing left versus right within that specific sample, and VLeft and VRight are the instantaneous values of the left and right options. We then averaged these sample-level β and γ values across all samples within each trial to obtain trial-level regressors. We then ran a mixed-effects logistic regression predicting choice (left vs. right) as a function of β and γ and then calculated subject-specific values of θ = γ/β. Across our sample (N=20), we found mean θ = 0.77 (SD = 0.21, range = 0.55–1.25).

Next, for the GLM, we computed θ-weighted sampled-value (|∆SVθ|) as:

SVθ = (GLeft × (VLeft − θVRight)) – (GRight × (VRight − θVLeft))

Similar to the original GLM, we computed an accumulated value signal, |∆AVθ|, based on the lagged sum of previous samples’ |∆SVθ|.

We found significant positive effects of |∆SVθ| in the vmPFC (peak voxel: x = -14, y = 44, z = -12; t = 3.57, p = 0.027) and IPS (peak voxel: x = 30, y = -28, z = 40; t = 4.58, p = 0.020), but in no other ROI.

In contrast, we found significant positive relationships between |∆AVθ| and activity in the preSMA (peak voxel: x = 0, y = 22, z = 52; t = 4.68, p = 0.001), dlPFC (peak voxel: x = 40, y = 32, z = 26; t = 4.32, p = 0.004), and IPS (peak voxel: x = 44, y = -48, z = 42; t = 6.26, p < 0.0001). Notably, we also observed a significant relationship between |∆AVθ| and activity in the vmPFC (x = 8, y = 38, z = 18; t = 3.89, p = 0.041). No other significant contrasts emerged.

In summary, these analyses provide additional evidence that the vmPFC encodes gaze-weighted sampled value signals and the pre-SMA encodes gaze-weighted accumulated value signals, though other correlations also emerged.”

(2) The reporting of statistical results in the fMRI could be sharpened - e.g. in the figure legends, don't just say "Voxels thresholded at p < .05.", but make clear whether you mean FWE whole-brain corrected (I think you do from the methods) or whether this is uncorrected for display; similarly, for the peak voxels, report the associated Z statistic at that voxel rather than just "negative beta".

We agree that it is important to include additional details regarding how we reported the statistical results. We now clarify our procedures in the main text:

“We report results using FWE-corrected statistical significance of p < 0.05 and a cluster significance threshold of p < 0.005.”

We now also report the T statistics for peak voxels.

(3) A couple of the citations are slightly wrong - e.g., Kolling et al 2012 shouldn't be cited as arguing for decision conflict, as in fact it argues strongly against this account and in favour of a foraging account of ACC activity. Similarly, Hunt et al 2018 doesn't provide support for decision conflict; instead, it shows signals in ACC show evidence accumulation for left/right actions over time (although not whether these accumulator signals are gaze-weighted, in the same way as the present study).

We thank the reviewer for pointing out these mistakes in our citations. We have revised the references throughout.

Reviewer #2 (Recommendations for the authors):

(1) In some places, the introduction would benefit from fleshing out certain points. For example, it is stated “For instance, decisions that are less predictable also tend to take more time (Konovalov & Krajbich, 2019) and can be influenced by attention manipulations (Parnamets et al., 2015; Tavares et al., 2017; Gwinn et al., 2019; Bhatnagar & Orquin, 2022). The quantitative relations between these measures argue for an evidence-accumulation process.” It is not clear why the relations between them argue for an EA process, and the reader would benefit from some further explanation.

We thank the reviewer for this helpful suggestion. We agree that the original text did not sufficiently explain why these relationships support evidence-accumulation models. We have revised the introduction to better articulate the mechanistic basis for this claim.

This revision clarifies these points in the main text:

“Decisions like this are thought to rely on a bounded, evidence-accumulation process that depends on factors such as the value of the sampled information and shifts in attention. According to this framework, when two options are similar in value, evidence accumulates more slowly towards the decision threshold, resulting in longer response times (RT) and more opportunity for shifts in attention to influence the choice outcome. In contrast, when one option is clearly superior, evidence accumulates more rapidly and the decision is made quickly with less of a relation between gaze and choice. This choice process produces reliable, quantitative patterns in choice, RT, and eye-tracking data (Ashby et al., 2016; Callaway et al., 2021; Gluth et al., 2018; Krajbich et al., 2010; Smith & Krajbich, 2018). For instance, decisions with similar values are more random (i.e., less predictable), tend to take more time (Konovalov & Krajbich, 2019), and can be experimentally manipulated by diverting attention towards one option more than the other (Bhatnagar & Orquin, 2022; Gwinn et al., 2019; Pärnamets et al., 2015; Pleskac et al., 2022; Tavares et al., 2017). Critically, these behavioral measures do not simply correlate; rather, they exhibit precise quantitative relationships consistent with evidence accumulation models (Konovalov & Krajbich, 2019).”

(2) Some of the study hypotheses also need to be clarified. What are the hypotheses regarding how SV and AV should translate to BOLD in an input vs integrator region? Larger SV/AV = larger BOLD? What predictions would be made for a time-on-task or conflict region? Are the predictions the same or different? Clarifying this will help the reader to understand to what extent the gaze manipulation is pivotal in identifying integrator regions.

We thank the reviewer for this excellent suggestion. We agree that it is useful to clearly articulate our hypotheses about BOLD signal predictions for different aspects of the model, and why gaze manipulation is critical for distinguishing between them. We have now expanded the introduction to clarify these predictions.

For input regions, we predicted a straightforward positive relationship: larger sampled value (|ΔSV|) should produce larger BOLD activity. Input regions encode the momentary evidence being sampled (i.e., the relative value of currently presented stimuli). Consistent with prior work (Bartra et al., 2013), we expected such activity in the vmPFC and ventral striatum.

Critically, we also predicted that these sampled value signals should be modulated by gaze location. The attentional drift-diffusion model (aDDM; Krajbich et al., 2010) posits that attended items receive full value weight while unattended items are discounted. Consistent with prior work (Lim et al., 2011), we expected stronger vmPFC/striatum activity when the higher-value item is fixated compared to when the lower-value item is fixated.
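For concreteness, the aDDM drift rate (Krajbich et al., 2010) discounts the unattended option's value by a parameter θ ∈ [0, 1]. A minimal sketch, with purely illustrative parameter values:

```python
# Sketch of the aDDM drift rate (Krajbich et al., 2010): the unattended
# item's value is discounted by theta, so fixating the higher-value item
# yields a stronger drift toward choosing it. The values of d and theta
# here are illustrative, not fitted estimates.

def addm_drift(v_left: float, v_right: float, fixating_left: bool,
               d: float = 0.1, theta: float = 0.3) -> float:
    """Signed drift toward the left option under the current fixation."""
    if fixating_left:
        return d * (v_left - theta * v_right)
    return d * (theta * v_left - v_right)

# With v_left > v_right, gaze at the higher-value (left) item produces a
# larger drift toward it than gaze at the lower-value item does:
drift_high = addm_drift(4.0, 2.0, fixating_left=True)   # 0.1*(4 - 0.3*2) = 0.34
drift_low = addm_drift(4.0, 2.0, fixating_left=False)   # 0.1*(0.3*4 - 2) = -0.08
```

With θ < 1, the same pair of values yields a stronger drift toward the higher-value item while it is fixated, matching the prediction that activity should be stronger when the higher-value item is fixated.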

For integrator regions, we predicted an analogous positive relationship: larger accumulated value (|ΔAV|) should produce larger BOLD activity. Accumulator regions encode the summed evidence over the course of the decision. Consistent with prior work (Hare et al., 2011; Gluth et al., 2021; Pisauro et al., 2017), we expected such activity in the pre-SMA, dlPFC, and IPS.

As with sampled value, we predicted that integrator activity should reflect gaze-weighted accumulated value. Just as inputs are modulated by current gaze, the accumulated evidence should be weighted by the history of gaze allocation over the entire trial.

Conflict-based models make qualitatively different predictions. Regions implementing conflict monitoring should show increased activity when options are similar in value, regardless of time.

The conflict account predicts that BOLD activity should scale with the inverse value difference: smaller |ΔV| → higher conflict → higher BOLD (Shenhav et al., 2014, 2016). In simple choice tasks, high conflict and high accumulated value are both associated with long RTs (Pisauro et al., 2017), leading to ambiguity about how to interpret purported neural correlates of accumulated value. Our task avoids this ambiguity: we analyze the effect of accumulated value at each point in time, not just at the time of decision. In this case, conflict should be inversely correlated with accumulated value. Moreover, the conflict account makes no predictions about how BOLD activity should be modulated by gaze allocation for a given set of values.

A more serious concern is the potential link to putative time-on-task BOLD activity. Accumulated value inevitably increases with time, leading to a correlation between the two variables (Grinband et al., 2011; Holroyd et al., 2018; Mumford et al., 2024). This is where the gaze data become particularly important. Time-on-task regions should show no relation with gaze allocation. After accounting for non-gaze-weighted accumulated value, only accumulator regions, and not time-on-task regions, should show a relation with gaze-weighted accumulated value. The results of the revised GLMs provide exactly such evidence.

We have edited the manuscript to make clear to readers why our gaze manipulation was not merely exploratory but rather a theoretically motivated test to distinguish between competing models of decision-related neural activity.

We have clarified our study hypotheses in the Introduction as follows:

“We hypothesized that we would find (1) a positive correlation between gaze-weighted |SV| and activity in the reward network (the ventromedial prefrontal cortex (vmPFC) and ventral striatum), and (2) a positive correlation between gaze-weighted |AV| and activity in the pre-supplementary motor area (pre-SMA) (Aquino et al., 2023), dorsolateral prefrontal cortex (dlPFC), and intraparietal sulcus (IPS).”

We have also added clarifying text about conflict and time-on-task to the Discussion as follows: “Conflict-based models make qualitatively different predictions. Regions implementing conflict monitoring should show increased activity when options are similar in value, regardless of time. The conflict account predicts that BOLD activity should scale with the inverse value difference: smaller |ΔV| → higher conflict → higher BOLD (Shenhav et al., 2014, 2016). In simple choice tasks, high conflict and high accumulated value are both associated with long response times (Pisauro et al., 2017), leading to ambiguity about how to interpret purported neural correlates of accumulated value. In our task we avoided this ambiguity by analyzing the effect of accumulated value at each point in time, not just at the moment of decision. Under this approach, conflict should be inversely correlated with accumulated value (as higher accumulated evidence indicates less similarity between options). Moreover, the conflict account makes no predictions about how BOLD activity should be modulated by gaze allocation for a given set of option values.

A more serious concern is the potential confound with time-on-task BOLD activity. Accumulated value inevitably increases with time within a trial, leading to a correlation between the two variables (Grinband et al., 2011; Holroyd et al., 2018; Mumford et al., 2024). This is where the gaze data were particularly important. Time-on-task regions should show no relation with gaze allocation patterns. After accounting for non-gaze-weighted accumulated value, only accumulator regions, and not time-on-task regions, should show a relationship with gaze-weighted accumulated value. The results of our analyses provide exactly such evidence: pre-SMA activity was positively correlated with gaze-weighted accumulated value, even when accounting for previous gaze history and individual differences in attention discounting.”

(3) The authors allude to there being a correlation between SV and AV on this task, but the correlation is never reported. Please report the correlation with and without the removal of T-1.

We appreciate the reviewer pointing out this omission. We now report the correlations between SV and both the lagged and non-lagged versions of AV in the Methods section (Fig. 7). SV was significantly correlated with the full calculation of AV (Pearson’s r = 0.27). In contrast, the correlation with lagged AV, while still statistically significant, was substantially weaker (Pearson’s r = 0.06).
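For intuition about why lagging AV shrinks this correlation, a toy simulation (with our own illustrative distributions, not the task's values) shows that the full AV correlates with SV both through a shared trial-level value component and through containing the current sample itself, whereas the lagged AV retains only the former:

```python
import numpy as np

# Toy illustration of why removing the current sample (lagging AV) lowers
# its correlation with SV. Samples within a trial share a trial-level value
# component; the full AV additionally contains the current sample itself.
# Distributions and parameter values are illustrative, not taken from the task.
rng = np.random.default_rng(1)
sv, av_full, av_lag = [], [], []
for _ in range(2000):
    mu = rng.normal(0.0, 0.5)                  # trial-level value difference
    x = mu + rng.normal(0.0, 1.0, size=6)      # sampled values in this trial
    for t in range(1, 6):
        sv.append(x[t])                        # current sampled value
        av_full.append(x[: t + 1].sum())       # AV including the current sample
        av_lag.append(x[:t].sum())             # lagged AV, current sample removed
r_full = float(np.corrcoef(sv, av_full)[0, 1])
r_lag = float(np.corrcoef(sv, av_lag)[0, 1])   # smaller, but still positive
```

The residual positive correlation with the lagged AV mirrors what we observe empirically: it arises from the shared trial-level values rather than from any overlap in samples.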

(4) When examining relationships between SV, AV, and choice probability, the authors note that a larger coefficient for SV compared to AV is an inevitable consequence of an SSM choice process. Please explain why this is the case.

The reviewer is correct in observing that this point was not made sufficiently clear in the main text. We have now expanded the explanation in the behavioral results section.

The key insight is that in sequential sampling models, choices occur when accumulated evidence reaches a decision threshold. Importantly, the perceived value of each sample consists of the true underlying value plus random noise. The final sample (SV) is what pushes the accumulated evidence over the threshold, which creates a selection bias: decisions tend to occur when the noise component of SV happens to be large and aligned with the eventual choice. This means that the perceived final SV systematically overestimates the true SV, biasing upward the regression coefficient for the effect of SV on choice. In contrast, AV represents the sum of all previous sampled evidence, samples that we know did not lead to a choice. These samples are thus more likely to have had a small or misaligned noise component, meaning that the perceived AV systematically underestimates the true AV. This biases downward the regression coefficient for the effect of AV on choice.

On net, we expect that even when sampled evidence is weighted equally over time in the true decision process, regression analyses will inevitably show larger coefficients for the effects of SV than for those of AV. This is a statistical artifact of the threshold-crossing mechanism, not a reflection of differential weighting. We have incorporated this explanation into the revised manuscript to make clear why this pattern is an expected consequence of the SSM framework:

“The larger coefficient for ∆SV compared to ∆AV is an inevitable consequence of an SSM choice process. In SSMs, a choice occurs when accumulated evidence reaches a threshold. Critically, perceived value for any given sample consists of the true underlying value plus random noise. The final sample (∆SV) is what pushes the accumulated evidence over the threshold, which creates a selection effect: decisions tend to be made when the noise component of ∆SV is relatively large and aligned with the ultimate choice, causing the perceived final ∆SV to systematically overestimate the true ∆SV. As a result, the regression coefficient for the effect of final ∆SV on choice is overestimated. In contrast, ∆AV represents the sum of all previous evidence, which includes samples that were insufficient to trigger a choice and thus more likely to have noise components that favored the non-chosen option. This means that the perceived ∆AV systematically underestimates the true ∆AV. As a result, the regression coefficient for the effect of ∆AV on choice is underestimated. This creates an inherent asymmetry between ∆SV and ∆AV: even when the true decision process weights evidence equally over time, regression analyses will show larger coefficients for ∆SV than ∆AV. For any data generated by an SSM, regressing choice probability on final ∆SV and total ∆AV would produce a larger coefficient for ∆SV due to this threshold-crossing selection effect.”
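This selection effect is easy to demonstrate by simulation. The sketch below (illustrative parameters, not fitted to our task) accumulates equally weighted noisy samples to a symmetric bound and shows that, conditional on the choice, the internal noise on the decisive final sample is biased toward the chosen option:

```python
import numpy as np

# Toy illustration of the threshold-crossing selection effect described above.
# Each sample's perceived value is its true value plus internal noise, and
# evidence accumulates until it crosses a symmetric bound. Conditional on the
# choice, the noise on the final (bound-crossing) sample is biased toward the
# chosen option, so the perceived final sample overestimates its true value.
# All parameters are illustrative, not fitted to the task.
rng = np.random.default_rng(0)
bound, aligned_noise = 3.0, []
for _ in range(5000):
    total = 0.0
    while True:
        true_value = rng.normal(0.0, 1.0)    # true sampled evidence
        noise = rng.normal(0.0, 1.0)         # internal perception noise
        total += true_value + noise          # equal weight on every sample
        if abs(total) >= bound:
            choice_sign = 1.0 if total > 0 else -1.0
            # noise on the decisive sample, signed by the choice direction
            aligned_noise.append(choice_sign * noise)
            break
mean_aligned = float(np.mean(aligned_noise))  # reliably positive
```

Because the perception noise is symmetric and unbiased unconditionally, any positive mean here arises purely from conditioning on the threshold crossing, which is the mechanism the quoted passage identifies as inflating the ∆SV coefficient relative to ∆AV.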

(5) It is not clear to me why the authors single out the pre-SMA only in the abstract when IPS and dlPFC also show stronger correlations with AV and exhibit gaze modulation in the authors' final non-linear analysis. Further explanation is required in the Discussion and I would also suggest amending the Abstract because the 'Most importantly' claim will not be meaningful for the reader.

We appreciate the reviewer’s point. In the revised manuscript, we have included several new GLMs, including the new GLM1, which tests for effects of gaze-weighted AV above and beyond the effect of non-gaze-weighted AV. Only the pre-SMA shows this effect. We have now clarified this in the Abstract as follows:

“Finally, we found gaze-modulated accumulated-value signals, above and beyond the non-gaze-modulated signals, in the pre-supplementary motor area (pre-SMA), providing novel evidence that visual attention has lasting effects on decision variables and suggesting that activity in the pre-SMA reflects accumulated evidence.”

(6) Some discussion of statistical power would be warranted given that a sample of 23 is now considered small by current fMRI standards.

We appreciate the reviewer raising this important issue. We acknowledge that our sample size of 23 subjects (with only 20 having usable eye-tracking data) is on the small side by current fMRI standards. However, we believe several features of our study design and analytic approach mitigate concerns regarding statistical power.

First, our paradigm leveraged a within-subjects design with many observations per participant. Each participant completed approximately 60 choice trials across three 15-minute runs, with an average of 6.37 samples per trial. This yielded roughly 380 observations per participant, providing substantial statistical power at the individual level before aggregating across subjects. This within-subject power is particularly important for detecting parametric effects, as our regressors of interest (|∆SV| and |∆AV|) varied continuously within and across trials.

Second, rather than conducting an exploratory whole-brain analysis that would require a larger sample to correct for multiple comparisons, we employed a targeted ROI approach based on well-established regions from the prior literature (e.g., Bartra et al., 2013; Hare et al., 2011). This ROI-driven approach substantially increases statistical power by reducing the search space, and it leverages theoretical predictions about where effects should occur. Our novel contribution, that gaze modulation of accumulated-evidence signals is reflected in pre-SMA activity, builds naturally on established findings.

However, we acknowledge that a larger sample size would provide greater confidence in the null effects and would enable more detailed individual differences analyses.

We have added a brief acknowledgement of the sample size limitation to the Discussion section of the main text:

“While our sample size of 20 subjects is modest by current neuroimaging standards, the within-subject statistical power from our extended decision paradigm (~380 observations per subject), combined with hypothesis-driven ROI analyses and multiple comparisons correction, provides confidence in our core findings. Nevertheless, replication with larger samples would be valuable, particularly for more fully characterizing null effects and marginal findings.”
