Integrated externally and internally generated task predictions jointly guide cognitive control in prefrontal cortex
Abstract
Cognitive control proactively configures information processing to suit expected task demands. Predictions of forthcoming demand can be driven by explicit external cues or be generated internally, based on past experience (cognitive history). However, it is not known whether and how the brain reconciles these two sources of information to guide control. Pairing a probabilistic task-switching paradigm with computational modeling, we found that external and internally generated predictions jointly guide task preparation, with a bias for internal predictions. Using model-based neuroimaging, we then show that the two sources of task prediction are integrated in dorsolateral prefrontal cortex, and jointly inform a representation of the likelihood of a change in task demand, encoded in frontoparietal cortex. Upon task-stimulus onset, dorsomedial prefrontal cortex encoded the need for reactive task-set adjustment. These data reveal how the human brain integrates external cues and cognitive history to prepare for an upcoming task.
https://doi.org/10.7554/eLife.39497.001
Introduction
‘Cognitive control’ describes a collection of neurocognitive mechanisms that allow us to use internal goals and ongoing context to strategically bias the manner in which we process information (Miller and Cohen, 2001; Egner, 2017). For instance, depending on current goals, humans can flexibly switch or update ‘task-sets’ that allow them to shift between different aspects of a stimulus to which they attend and respond (Monsell, 2003). Here, a task-set involves the mental representation of task-relevant stimuli, responses, and their corresponding stimulus-response mappings, allowing for an appropriate action to be executed in response to a stimulus (cf. Monsell, 2003; Kiesel et al., 2010). Cognitive control thus grants the organism considerable behavioral flexibility, but it also incurs costs, in that controlled processing is slow and effortful (Norman and Shallice, 1986): to wit, it takes longer to perform a trial when a shift in the task-set is required than when staying with the same set –– known as ‘switch costs’ –– which is thought to reflect additional cognitive processing, including the updating of the task-set and the resolution of proactive interference (i.e., the previously activated task-set’s interference with the retrieval of the currently required task-set) (Rogers and Monsell, 1995; Badre and Wagner, 2006).
A central question about cognitive control concerns its regulation –– that is, how does the brain determine when and how much control should be applied (Botvinick et al., 2001)? The broad suggestion is that people predict forthcoming task demands and adjust processing accordingly (e.g., Shenhav et al., 2013; Egner, 2014; Jiang et al., 2014; Abrahamse et al., 2016; Waskom et al., 2017). Importantly, such expectations about task demands can be driven by two sources: explicit predictions provided by external cues (Rogers and Monsell, 1995; Dreisbach et al., 2002; Badre and Wagner, 2006) and internally generated, trial history-based predictions, which are typically implicit (Dreisbach and Haider, 2006; Mayr, 2006; Bugg and Crump, 2012; Egner, 2014; Chiu and Egner, 2017).
Previous behavioral studies have shown that the two types of predictions appear to drive control simultaneously. In particular, trial history-based predictions impact cognitive control even in cases where these predictions are redundant, as in the presence of 100% valid external cues for selecting the correct control strategy (e.g., Alpay et al., 2009; Correa et al., 2009; Kemper et al., 2016). However, it is not presently known whether and how the brain reconciles explicit external and (typically implicit) internal predictions. Here, we sought to characterize the computational and neural mechanisms that mediate the joint influence of external, cue-based predictions and internal, cognitive history-based predictions (i.e., predictions based on a participant’s history of cognitive processing on prior trials of the task) of future task demands in driving the engagement of cognitive control. Specifically, we sought to characterize how internal and external predictions jointly affect proactive task-set updating. Note that while we make the assumption that expectations regarding forthcoming task demand can mitigate the costs of switching tasks, we do not aim here to distinguish between reduced switch costs due to improved task-set reconfiguration (Rogers and Monsell, 1995) versus improved resolution of mnemonic interference, or ‘task-set inertia’ (Allport et al., 1994; Badre and Wagner, 2006). Rather, we consider both of these processes integral components of successful task-set updating (Qiao et al., 2017).
We combined computational modeling of behavior on a probabilistic variant of the classic cued task switching paradigm (Dreisbach et al., 2002) with functional magnetic resonance imaging (fMRI) in healthy humans. On each trial, participants performed one of two perceptual decision tasks on an array of moving, colored dots (Figure 1). Crucially, prior to the presentation of the trial’s task cue and stimulus, an explicit probabilistic pre-cue informed participants of the likelihood that the forthcoming trial would require performing one task (color categorization) or the other (motion categorization) (Figure 1). This pre-cue provided an explicit cue-induced task prediction that could be used to guide preparatory task-set updating, and be contrasted with trial history-based, internally generated predictions about the forthcoming task.

Example trial in the experimental task.
Each trial started with a pie chart, whose proportions of black/white area predicted the probability of encountering a black vs. white frame surrounding the forthcoming cloud of moving, colored dots. The frame color cued the task to be performed (color vs. motion categorization).
While the benefit of the explicit cue was represented by its predictive value, trial history was uninformative about the probability of task switching, as the task sequence was randomized. There was thus no objective benefit to generating trial history-based predictions. However, based on prior studies, we nevertheless anticipated that participants would form (likely implicit) internal expectations for forthcoming trials based on trial-history (e.g., Huettel et al., 2002), and this design ensured that trial-history and cue-based predictions were independent of each other. This allowed us to quantify the influence of each type of prediction in order to adjudicate between different possible models of control guidance, including a ‘rational’ model that is driven purely by the informative explicit cue.
We then leveraged the concurrently acquired fMRI data to trace neural representations of task and control demand predictions. Here, the first major question is whether externally and internally driven task predictions drive behavior in parallel or are integrated in a joint neural representation of task prediction. Second, it has been proposed that control processes, like the switching of a task-set, are guided by predictions of control demand (Shenhav et al., 2013; Egner, 2014; Jiang et al., 2014; Abrahamse et al., 2016; Waskom et al., 2017). Based on this proposal and the theory of dual mechanisms of cognitive control (Braver, 2012), we sought to characterize neural representations of proactive switch demand (the likelihood of having to switch tasks), which is determined by the relationship between the predicted forthcoming task and the task that was performed on the previous trial. Finally, our protocol also allowed us to assess the neural substrates of reactive cognitive control (specifically, reactive switch demand), based on the mismatch between task predictions and actual requirements at the time of task-stimulus onset.
To preview the results, task-switching behavior was jointly driven by internally generated and cue-induced task predictions and, strikingly, the impact of the former was stronger than that of the latter. Moreover, at the time of pre-cue onset, the fMRI data revealed an integrated representation of the joint external and internal predictions in left dorsolateral prefrontal cortex (dlPFC). This prediction informed a representation of proactive switch demand in the frontoparietal control network, and, at the time of task stimulus presentation, the prediction error associated with these joint predictions (i.e., reactive switch demand) was encoded in the dorsomedial prefrontal cortex (dmPFC), including the anterior cingulate cortex (ACC). Collectively, these data suggest that experientially acquired and explicitly cued expectations of control demand are reconciled in dlPFC and dmPFC to jointly guide the implementation of cognitive control.
Results
Behavioral data – Effects of external cues and cognitive history
Participants (N = 22) performed a cued task switching protocol involving two perceptual decision-making tasks (Figure 1) that required reporting either the predominant color or motion direction of a noisy dot cloud stimulus. Which task to perform was indicated by the color of a simultaneously presented frame that surrounded the dot cloud. Crucially, the task stimulus was preceded by a predictive pie chart cue (i.e., the pre-cue) that accurately indicated the probability that the forthcoming task would be the color (or motion) task (five probability levels: 0.2, 0.4, 0.5, 0.6 and 0.8), and thus, whether the forthcoming trial would likely involve the same task as the previous trial (i.e., a task-repeat trial) or the other task (i.e., a task-switch trial). The task sequence itself was pseudo-random, with an equal number of switch and repeat trials occurring within each run. Participants were not told anything about the statistical properties of the task sequence, allowing us to study how subjects mix external and internal predictions without explicit knowledge of the validity of the internal prediction.
The varying benefits of cue-induced and history-based predictions in our design allowed us to adjudicate between two competing hypotheses. Based on prior behavioral literature (Alpay et al., 2009; Correa et al., 2009; Kemper et al., 2016), cue-based and trial history-based predictions could jointly contribute to behavior (the ‘joint-guidance hypothesis’). Alternatively, a rival model assumes that control strategy is driven by a ‘rational’ arbitration between internally generated and external predictions, based on the expected benefit of each prediction as represented by its respective predictive value (or certainty, cf. Daw et al., 2005). ‘Rational’ is in quotation marks here because this model does not take into account the potential (and unknown) costs of employing these two types of control predictions, which may be another important factor driving the application of control (Kool et al., 2010; Shenhav et al., 2013). Given that trial history was not informative of the upcoming task (i.e., it had no predictive value), this alternative model in effect corresponds to control being guided exclusively by the external cue, which has predictive value. We here refer to this as the ‘max-benefit hypothesis’.
These hypotheses differ in terms of how the pre-cue and trial history should drive task-set updating. Therefore, we started by analyzing how behavior was influenced by three factors: the relationship between the previous and current trial task (i.e., the task switch effect), the pre-cue probabilistic task prediction, and possible internally generated task predictions based on trial-history up to 4 trials back. We subsequently present formal modeling and model comparisons to quantify more precisely what type(s) of task prediction best accounted for behavior.
Participants performed with high accuracy (color task: 0.87 ± 0.02 [mean ± SEM]; motion task: 0.88 ± 0.02), which did not differ between tasks (t21 = 0.30, p=0.75). To test whether the previous-trial’s task (i.e., trial i-1) influenced behavior on the current trial, we conducted repeated-measures ANOVAs (previous task × current task) on accuracy and response time (RT) data. Replicating the classic task switch cost, there was a significant interaction between previous- and current-trial tasks in both accuracy (F1,21=47.0, p<0.001; Figure 2A) and RT (F1,21=95.5, p<0.001; Figure 2B), driven by more accurate (task-repeat accuracy: 0.90 ± 0.01; task-switch accuracy: 0.85 ± 0.02) and faster responses (task-repeat RT: 0.87 ± 0.01 s; task-switch RT: 0.94 ± 0.02 s) when the task repeated than when it switched. Additionally, motion trials were faster than color trials (F1,21=21.3, p<0.001; Figure 2B; motion RT: 0.84 ± 0.02 s; color RT: 0.97 ± 0.02 s), which is consistent with previous studies using similar color/motion discrimination tasks and dot cloud stimuli (Jiang et al., 2016; Waskom et al., 2017). No other effects reached statistical significance.
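For readers who wish to reproduce this style of analysis, a minimal sketch (simulated data; not the authors' analysis code) of the previous task × current task repeated-measures ANOVA on RT:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(22):
    for prev in ("motion", "color"):
        for cur in ("motion", "color"):
            # simulated cell mean: ~70 ms switch cost plus subject noise
            rt = 0.87 + 0.07 * (prev != cur) + 0.02 * rng.standard_normal()
            rows.append({"subj": subj, "prev": prev, "cur": cur, "rt": rt})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="rt", subject="subj", within=["prev", "cur"]).fit()
print(res.anova_table)  # the prev x cur interaction is the switch cost
```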

Behavioral results.
(A, B) Group mean (±SEM) of accuracy (A) and RT (B), plotted as a function of task on the previous trial (i-1) and current trial. Results for trials i-2 and i-3 are plotted similarly in (E, F) and (G, H), respectively. (C, D) Group mean (±SEM) of accuracy (C) and RT (D), plotted as a function of (i) the pre-cue’s prediction of encountering the actual task and (ii) the task on the current trial.
To examine the effect of the probabilistic pre-cue on behavior, we performed repeated-measures ANOVAs (5 task prediction levels × current task) on accuracy and RT data (Figure 2C,D, see Table 1 for summary statistics). No significant effects were detected for accuracy. Responses were again faster for motion judgments than color judgments (F1,21=24.3, p<0.001; Figure 2D). Most importantly, there was a significant effect of the explicit pre-cue, as reflected by a main effect of task prediction on RT (F4,84=4.3, p<0.005; Figure 2D), with response speed scaling with predictive pre-cue information. No interaction between pre-cue information and current task was observed (F4,84 < 1), indicating that the effects of predictive cues on RT were similar in the motion and color tasks.
Descriptive statistics (group mean ± SEM) of behavioral data.
https://doi.org/10.7554/eLife.39497.004
Task prediction | 0.2 | 0.4 | 0.5 | 0.6 | 0.8 |
---|---|---|---|---|---|
Color trial accuracy | 0.88 ± 0.02 | 0.87 ± 0.02 | 0.87 ± 0.02 | 0.88 ± 0.02 | 0.90 ± 0.03 |
Color trial RT (s) | 1.00 ± 0.02 | 0.97 ± 0.02 | 0.98 ± 0.02 | 0.97 ± 0.02 | 0.96 ± 0.02 |
Motion trial accuracy | 0.86 ± 0.03 | 0.86 ± 0.03 | 0.88 ± 0.02 | 0.88 ± 0.02 | 0.88 ± 0.02 |
Motion trial RT (s) | 0.85 ± 0.03 | 0.85 ± 0.02 | 0.86 ± 0.02 | 0.84 ± 0.02 | 0.84 ± 0.02 |
To assess trial-history effects beyond the immediately preceding trial, we tested whether performance was influenced by task-sets that had occurred in recent prior trials (from trials i-2 to i-4) by conducting repeated-measures ANOVAs (task at prior trial × current task) on accuracy and RT. The interaction between the prior trial task-sets and the current task was significant in both accuracy (F1,21=6.2, p=0.02; Figure 2E) and RT (F1,21=73.2, p<0.001; Figure 2F) for trial i-2. For trial i-3, the interaction was not significant in accuracy (F1,21<1, n.s.; Figure 2G), but was significant in RT (F1,21=8.4, p=0.008; Figure 2H). Trial i-4 did not show a modulation effect in either accuracy or RT (both F1,21<1, n.s.). Since the modulation effect decayed as the distance from previous trial to current trial increased (Figure 2A–B and E–H), trials beyond i-4 were not tested. These results provide strong evidence that trial history-based task predictions, stemming from learning over the last three trials, impact behavior, biasing participants towards expecting task repetitions.
Behavioral data – Model comparison
To formally compare how well the joint-guidance and max-benefit hypotheses explain the behavioral data, we constructed a quantitative model for each hypothesis and compared the two using trial-wise RTs. Accuracy was not modeled due to its insensitivity to external task prediction induced by the pre-cue (Figure 2C). We first defined three variables to represent task switching, cue-induced predictions, and internally generated predictions, respectively, with the latter two being continuous variables, ranging from 0 to 1, that represent task-set weighting (i.e., the relative activation of color vs. motion task-sets). Without loss of generality, a value greater than 0.5 indicates a prediction favoring the color task, and a value less than 0.5 a prediction favoring the motion task. Once the task cue was presented, the required task became deterministic; motion and color tasks were then represented by 0 and 1, respectively. All model variables used in this study are summarized in Table 2.
Summary of model variables used in this study.
Abbreviations: T = task; cur = current trial; prev = previous trial; p = prediction; int = internal.
Variable | Meaning | Definition/range |
---|---|---|
Tcur | Task required on the current trial | 0 if motion task; 1 if color task. |
Tprev | Task required on the previous trial | Same as above |
i | Trial index | |
Pcue | Probability of encountering a color task indicated by the pre-cue | 0.2, 0.4, 0.5, 0.6 or 0.8, depending on pre-cue |
α | Learning rate | [0, 1] |
Pint | Internally generated task prediction | (1 - α)Pint(i-1) + αTprev |
Pjoint | Joint task prediction | (1 - β)Pint + βPcue |
PEcue | Prediction error of Pcue | |Tcur - Pcue| |
PEint | Prediction error of Pint | |Tcur - Pint| |
PEprev | Task switch effect | |Tcur - Tprev| |
PEjoint | Prediction error of Pjoint | |Tcur - Pjoint| |
b_cue | Modulation of PEcue on RT | Regression coefficient of PEcue |
b_int | Modulation of PEint on RT | Regression coefficient of PEint |
β | Scaled reliance on the pre-cue | b_cue / (b_cue + b_int) |
β̃ | Randomly sampled reliance on the pre-cue | Sampled from [0, 1] |
 | Proactive switch demand | |Tprev - Pjoint| |
 | Confidence of joint task prediction | |Pjoint - 0.5| |
 | Proactive interference effect | |Pint - Pcue| |
To capture the task switch effect, we defined Tprev as the task required on the previous trial. A task switch/repetition can thus be defined by comparing the current task to Tprev. We then denote Pcue as the probability of encountering a color task trial based on the pre-cue. To formally model the internally generated, trial history-based task prediction, we employed a reinforcement learning model:

Pint(i) = (1 - α) Pint(i-1) + α Tprev    (Equation 1)

where Pint(i) encodes the internally generated prediction of the task on trial i, and α represents the learning rate (Figure 3A), a free parameter ranging from 0 to 1 that denotes how much this prediction relies on the previous trial (Tprev) relative to older trials (integrated in Pint(i-1)). Pint(0) was set to 0.5 to reflect an unbiased initial task-set belief. After each trial, Pint was updated based on Tprev according to Equation 1 and was then used as the internal prediction for the next trial.
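A minimal sketch of this update rule in Python (our illustration; function and variable names are ours, not taken from the authors' code):

```python
import numpy as np

def internal_prediction(tasks, alpha):
    """Trial-wise Pint given a task sequence (0 = motion, 1 = color)."""
    p_int = np.empty(len(tasks))
    p = 0.5                                  # Pint(0): unbiased initial belief
    for i, task in enumerate(tasks):
        p_int[i] = p                         # prediction in play on trial i
        p = (1 - alpha) * p + alpha * task   # Equation 1, applied after trial i
    return p_int

print(internal_prediction(np.array([1, 1, 0, 1, 0, 0]), alpha=0.52))
```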

Model-based behavioral results.
(A) Weights of older trials on determining Pint, plotted as a function of different learning rates. (B) Illustration of model comparison. Given a trial sequence of tasks and the time course of a model variable (depicted as a string of nodes, with the height and brightness of the nodes coding the task-set), its corresponding time course of (unsigned) PE (with the height and the color of the nodes coding the magnitude of the PE) was calculated. The max-benefit model consisted of PEprev and PEcue; and the joint-guidance model consisted of PEint and PEcue. (C) Distribution of individual learning rates. (D) Distribution of the reliance on pre-cue relative to the self-generated task prediction. (E, F) Gain in RT when the pre-cue was informative relative to when the pre-cue was non-informative, plotted as a function of the reliance on pre-cue. (E) and (F) show results for motion and color trials, respectively.
To link these variables to RTs, the unsigned prediction error (discrepancy between the task-set weighting and actual task, denoted as PE) was calculated for each trial and each variable (i.e., PEprev, PEcue, and PEint; Figure 3B), where a larger PE indicates a greater need to adjust one’s task-set to the actual task demand. Based on the observation that trials with larger PE of the forthcoming task have slower RTs (Waskom et al., 2017), we assumed that RTs scale positively with PE. The two rival hypotheses were then modeled as general linear models (GLMs) consisting of the variables defined above. In the max-benefit model, PEcue was included to represent the effect of external cues. To maximize the utility of the informative pre-cue, the PEint that represented the non-informative trial history was not included in this model. Finally, PEprev was used to account for the classic task switch effect. The joint-guidance model consisted of PEcue and PEint to represent the modulations of cue-induced and trial history-based task predictions, respectively. The classic task switch effect in this model was accounted for by PEint, because the most recent task has already been factored into Pint. Our model comparison explored the full model space by also including an alternative model with all three variables (PEprev, PEint, and PEcue), in order to capture any additional effect from the previous trial (e.g., task-set inertia) above and beyond its contribution to Pint. We also included three control GLMs representing each of the three variables by themselves. Note that the model with only PEprev represents the classic task switch effect. All models also included a constant regressor, in order to model the portion of RT data that do not vary as a function of the present experimental manipulations. To account for mean RT differences in color and motion trials, RTs from the two tasks were fit separately.
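As a concrete illustration of how these regressors are constructed and fit, here is a sketch under simplifying assumptions (ordinary least squares on simulated RTs; regressor and coefficient names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 450
tasks = rng.integers(0, 2, n)                    # 0 = motion, 1 = color
p_cue = rng.choice([0.2, 0.4, 0.5, 0.6, 0.8], n)

p_int = np.empty(n); p = 0.5                     # Equation 1, alpha = 0.52
for i, t in enumerate(tasks):
    p_int[i] = p
    p = (1 - 0.52) * p + 0.52 * t

t_prev = np.r_[0.5, tasks[:-1]]
pe_cue, pe_int, pe_prev = (np.abs(tasks - x) for x in (p_cue, p_int, t_prev))

# simulated RTs in which both PEs carry some weight
rt = 0.9 + 0.10 * pe_int + 0.03 * pe_cue + 0.05 * rng.standard_normal(n)

# joint-guidance GLM: RT ~ b_int*PEint + b_cue*PEcue + constant, per task
for task in (0, 1):
    m = tasks == task
    X = np.column_stack([pe_int[m], pe_cue[m], np.ones(m.sum())])
    b_int, b_cue, _ = np.linalg.lstsq(X, rt[m], rcond=None)[0]
    print(f"task {task}: b_int = {b_int:.3f}, b_cue = {b_cue:.3f}")
```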
Model comparison was conducted using cross-validation to prevent over-fitting and to control for different numbers of free parameters used in the candidate models (Chiu et al., 2017) (see Materials and methods: Modeling and model comparison). The performance of the different models was then submitted to Bayesian model comparison (Stephan et al., 2009), which calculated protected model exceedance probabilities (i.e., the likelihood of a given model providing the best explanation of the behavioral data) for each candidate model. The joint-guidance model clearly outperformed all other models, with a protected exceedance probability of 0.997, indicating that behavior was best explained by a collective contribution to proactive task-set updating from cue-induced and internally generated task predictions. The joint-guidance model was hence used for all subsequent behavioral and neuroimaging analyses.
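The cross-validation logic can be sketched as follows (a simplification under our own assumptions about the fold structure, namely three folds of three runs; see Materials and methods and Chiu et al., 2017 for the actual procedure). The per-model, per-subject cross-validated errors would then be submitted to Bayesian model selection:

```python
import numpy as np

def cv_error(X, y, run, runs_per_fold=3):
    """Mean squared RT prediction error across held-out folds of runs."""
    fold = run // runs_per_fold
    errs = []
    for f in np.unique(fold):
        test = fold == f
        b = np.linalg.lstsq(X[~test], y[~test], rcond=None)[0]
        errs.append(np.mean((y[test] - X[test] @ b) ** 2))
    return float(np.mean(errs))

# toy demo: 9 runs x 50 trials, one-regressor model plus constant
rng = np.random.default_rng(2)
run = np.repeat(np.arange(9), 50)
x = rng.random(run.size)
y = 0.9 + 0.1 * x + 0.05 * rng.standard_normal(run.size)
print(cv_error(np.column_stack([x, np.ones(run.size)]), y, run))
```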
Behavioral data – Quantifying respective contributions of cue-based and trial history-based task predictions
We next sought to more closely characterize how participants combined internally generated and external contributions to task predictions. We began by asking to what degree individual participants relied on an extended trial-history in generating a task prediction, as captured by the RL model’s learning rate. The learning rates estimated from individual participants displayed substantial inter-subject variance (range: 0.02–0.93, Figure 3C). The mean learning rate was 0.52, indicating that, on average, participants weighted the i-1 trial about as much as the prior trial history in determining the internally generated task prediction.
To quantify participants’ relative weighting of trial history-based versus explicit cue-based predictions, we calculated the scaled reliance on the pre-cue (denoted as β, range: 0 to 1) for each participant as β = b_cue / (b_cue + b_int), where b_cue and b_int are the coefficients of PEcue and PEint, respectively, after fitting the joint-guidance model to RTs. Thus, a higher β indicates stronger reliance on the pre-cue and hence weaker dependence on internally generated task prediction; a β of 0.5 indicates equal reliance on Pcue and Pint. Strikingly, we found that even though trial history was not predictive of the forthcoming task, its effect on behavior was on average about three times as strong as that of the cue-induced task prediction (group mean β: 0.26; range: 0–0.61; one-sample t-test against 0.5: t21 = 5.42, p<0.001; Figure 3D). Notably, five participants showed either no (β = 0, four subjects) or very little (β = 0.02, one subject) reliance on cue-induced task prediction. Even after excluding these participants (to rule out the possibility that the differential reliance on trial history was due to a failure to understand the associations between the pictorial pre-cue and the task-set prediction), the mean β of the remaining 17 participants remained significantly lower than 0.5 (t16 = 3.96, p=0.001), again indicating stronger reliance on the internally generated task prediction. To test the robustness of the β estimates, we computed β separately using the first and last three runs. Across subjects, β estimates were significantly correlated between these two task phases (r = 0.63, p=0.002), suggesting reliable β estimation within participants. Also, there was no significant difference in β estimates between the first and last three runs (t21 = 1.32, p=0.20), suggesting that the reliance on Pcue relative to Pint remained unchanged throughout the nine runs of the fMRI session.
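Expressed in code, using the coefficient labels b_cue and b_int introduced above (our notation for the fitted PEcue and PEint weights):

```python
def scaled_reliance_on_cue(b_cue, b_int):
    """beta = b_cue / (b_cue + b_int); assumes positive fitted weights."""
    return b_cue / (b_cue + b_int)

print(scaled_reliance_on_cue(0.03, 0.10))  # ~0.23: reliance tilted toward Pint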
We have cast the joint-guidance approach to control predictions as sub-optimal, due to the fact that trial history was not predictive of task transitions. To corroborate this assumption, we quantitatively assessed whether relying more on the internally generated, trial history-based predictions than on explicit, cue-induced task predictions incurs a performance cost. We first estimated the acceleration in responding due to utilizing the pre-cue for each participant and each task. Specifically, the 0.5 (uninformative) prediction level condition was used as a baseline. Then, for each pre-cue/task (motion task vs. color task) combination, we calculated the respective probabilistic expectation of acceleration in RT relative to the 0.5 prediction level (i.e., the probability of encountering this pre-cue/task combination × the RT difference between this combination and the baseline; see the sketch below). Across participants, the mean estimated gain in RT was positively correlated with β estimates in both the motion (r = 0.74, p<0.001; Figure 3E) and color (r = 0.46, p=0.03; Figure 3F) tasks, indicating a clear behavioral benefit of relying on the external cue. This analysis underlines the sub-optimal nature of relying on internally generated, trial history-based task prediction in the present context. We speculate that this seemingly irrational reliance on internally generated task prediction may be attributable to a relatively lower cost (e.g., due to high automaticity) of using internally generated control predictions compared to using cue-based predictions (cf. Shenhav et al., 2013; Shenhav et al., 2017; see Discussion).
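For concreteness, a sketch of this gain computation for one task, using illustrative per-level RTs loosely based on Table 1 (the specific values and the uniform level probabilities are our assumptions):

```python
import numpy as np

levels   = np.array([0.2, 0.4, 0.6, 0.8])      # informative pre-cue levels
p_level  = np.full(levels.size, 0.2)           # each level on ~20% of trials
rt_level = np.array([1.00, 0.97, 0.97, 0.96])  # mean RT (s) at each level
rt_base  = 0.98                                # RT at the 0.5 (baseline) level

gain = float(np.sum(p_level * (rt_base - rt_level)))
print(gain)  # > 0 means faster responding when the pre-cue is informative
```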
In sum, the behavioral and modeling results clearly demonstrate that task demand predictions were jointly informed by internally generated and externally provided information. Moreover, in spite of being sub-optimal in terms of potential performance benefits, task performance depended more on trial history-based task predictions than on the explicit informative pre-cues. To characterize the brain mechanisms by which internally generated and cue-induced task predictions guide cognitive control, we next turned to interrogating the concurrently acquired fMRI data.
fMRI data – Analysis strategy
The joint-guidance model holds that cognitive control is guided by both internally generated and externally cued task predictions:

Pjoint = (1 - β) Pint + β Pcue
We here sought to characterize how this joint influence is instantiated at the neural level. The initial major question we sought to answer was whether cue-based and history-based expectations influence control in parallel or whether these predictions are in fact integrated in a single brain region. Moreover, we sought to characterize two additional key computations that are required for translating Pjoint into successful task-set updating (Figure 4A): first, after the onset of the pre-cue, the task-set needs to be proactively shifted from Tprev to Pjoint in anticipation of the predicted task demand. The demand for preparatory task-set updating (proactive switch demand) can thus be quantified as |Tprev - Pjoint|. Second, following the presentation of the actual task cue and stimulus, the task-set weighting (if not perfectly corresponding with the cued task) needs to be updated reactively from Pjoint to the actual task demand. The reactive switch demand can thus be quantified as the prediction error of Pjoint, or PEjoint. Hence, we conducted fMRI analyses to locate brain regions carrying significant information about trial-by-trial variations in these three key variables: the joint task prediction (Pjoint), the proactive switch demand (|Tprev - Pjoint|), and the reactive switch demand (PEjoint).
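These three quantities follow directly from the behavioral model variables; a sketch in our notation:

```python
import numpy as np

def fmri_model_variables(tasks, p_cue, p_int, beta):
    """Return Pjoint, proactive switch demand, and PEjoint per trial."""
    t_prev  = np.r_[0.5, tasks[:-1]]            # previous-trial task
    p_joint = (1 - beta) * p_int + beta * p_cue
    proactive = np.abs(t_prev - p_joint)        # |Tprev - Pjoint|
    reactive  = np.abs(tasks - p_joint)         # PEjoint
    return p_joint, proactive, reactive

tasks = np.array([1, 1, 0, 1])
out = fmri_model_variables(tasks, np.array([0.8, 0.4, 0.5, 0.6]),
                           np.array([0.5, 0.76, 0.88, 0.42]), beta=0.26)
print(out)
```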

Neural representation of the joint task prediction.
(A) Illustration of how Pjoint is translated into proactive and reactive switch demand in relation to previous and forthcoming task demand. (B) Left: An MFG region showing significantly above-chance encoding of the joint task prediction. Right: Individual ROI-mean encoding strength (in z-score). (C) The dlPFC ROI (in red) defined by the representation of any linear combination of Pcue and Pint. (D) Histogram showing the encoding strength of pseudo-Pjoint based on randomly sampled β parameters using fMRI data in (B). (E) Encoding strength shown in (D), plotted as a function of the distance from Pjoint.

Neural encoding of proactive task switch demand.
(A) T-statistics maps of brain regions showing significantly above-chance encoding of proactive task switch demand at the onset of the pre-cue. (B) Individual ROI-mean encoding strength of proactive task switch demand.
Given that the motion and color task-sets contain multiple dimensions of task information (e.g., whether the goal was to identify motion direction or color, which color/motion direction was mapped onto which key, frame color, etc.), their neural representations may differ with respect to both mean activity levels and multivariate activity patterns in local voxel clusters. Therefore, in examining the neural representations of the key variables above, we employed searchlight multivoxel pattern analysis (MVPA; see Methods: MVPA procedure) that relies on both multi-voxel activity patterns and univariate activity levels. As a validation, we replicated the classic task-switch effect using this approach (Figure 4—figure supplement 1A).
fMRI data – Encoding of joint task predictions at pre-cue onset
We started by probing for a possible integrated neural representation of the joint externally and internally guided task prediction. Because Pjoint is a weighted sum of Pint and Pcue, it is inherently correlated with both variables. To ensure that we identify regions that are specifically representing the integrated prediction only, we filtered out searchlights that showed significant encoding of either cue-induced or internally generated predictions (Figure 4—figure supplement 1B–D). This analysis produced a map of the spatial distribution of the representation strength of Pjoint, exclusively revealing a left dorsolateral prefrontal cortex (dlPFC) region centered on the middle frontal gyrus (MFG) (Figure 4B). To rule out the possibility that the MVPA encoding of Pjoint in dlPFC merely reflected a univariate task effect (e.g., the color task evoking stronger mean activity due to being more difficult than the motion task), we performed a univariate control analysis. Specifically, within each fold of 3 runs, the searchlight-means of trial-wise t-maps were correlated with the corresponding trial-wise Pjoint estimates to obtain the z-score of their linear correlation. The mean z-scores averaged across the 3 folds were then used as the estimate of the univariate encoding strength of Pjoint. This approach ensured maximal similarity with the MVPA. Importantly, this whole-brain univariate analysis did not reveal any regional encoding of Pjoint after correcting for multiple comparisons (voxel-wise threshold p<0.001 and cluster size >62 searchlights). Moreover, an ROI-based analysis focusing on the dlPFC region shown in Figure 4B showed that the mean univariate encoding strength of Pjoint did not significantly differ from 0 (z-score = −0.006 ± 0.03, one sample t-test: t21 = −0.20, p=0.85). Thus, the dlPFC results were not driven by univariate effects of task difficulty.
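A sketch of our reconstruction of this univariate control analysis, assuming that the 'z-score of the linear correlation' refers to Fisher z-transformed Pearson correlations averaged across folds:

```python
import numpy as np

def univariate_encoding(searchlight_mean_t, p_joint, fold):
    """Mean Fisher-z correlation between mean t-values and Pjoint, per fold."""
    zs = []
    for f in np.unique(fold):
        m = fold == f
        r = np.corrcoef(searchlight_mean_t[m], p_joint[m])[0, 1]
        zs.append(np.arctanh(r))               # Fisher z-transform
    return float(np.mean(zs))

# toy demo: 3 folds of 150 trials each
rng = np.random.default_rng(3)
fold = np.repeat([0, 1, 2], 150)
print(univariate_encoding(rng.standard_normal(450), rng.random(450), fold))
```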
By definition, Pjoint is also inherently correlated with any linear combination of Pint and Pcue. Therefore, the left dlPFC region identified above may in principle encode a different mixture of Pint and Pcue than Pjoint. To rule out this possibility, we conducted a permutation test using randomly sampled βs for each subject (denoted as β̃). To ensure this analysis was not biased towards obtaining selective Pjoint effects, it was based on a search space in a left dlPFC area generated from an F-statistic map that measured the effect of any linear combination of Pcue and Pint. The search space was defined as left dlPFC searchlights with an uncorrected p value less than 0.01 in the resulting F-map (244 searchlights in total, Figure 4C). Then, β̃ was randomly sampled for each participant and entered into the same MVPA procedure, in order to gauge the encoding strength of its corresponding pseudo-Pjoint. Group-level analysis was then conducted to determine the largest cluster size showing significant (p<0.001 uncorrected) encoding of the pseudo-Pjoint. This procedure was repeated 100,000 times, resulting in a null distribution of largest cluster sizes. The largest cluster size obtained using the behaviorally derived βs was significantly greater than this null distribution (p=0.02), suggesting that Pjoint accounted for dlPFC fMRI activity patterns better than other mixtures of Pint and Pcue.
As a post-hoc analysis, we further examined the selectivity of Pjoint encoding, relative to other randomly sampled β̃s, within the dlPFC cluster shown in Figure 4B. To this end, we compared the encoding strength of Pjoint to the encoding strength of arbitrarily mixed values of Pcue and Pint. Similar to the previous analysis, MVPA was conducted to measure the encoding strength of each pseudo-Pjoint. This procedure was repeated 100,000 times in order to ensure a robust estimation of the underlying null distribution. The group mean encoding strength (i.e., how well the fMRI data in the searchlights fit the model variable in a cross-validation procedure) of Pjoint was significantly stronger than the chance level derived from this random sampling procedure (p=0.02, Figure 4D).
Moreover, we examined whether the behaviorally derived βs represent a local maximum in the space of all possible βs. A local maximum implies that, given a set of β̃s, the encoding strength of the corresponding pseudo-Pjoint should steadily decay as β̃ becomes more distinct from β (measured here by the Euclidean distance across subjects). On the contrary, if there were other maxima, this decay would not be present, because distant β̃s lying close to those other maxima would also achieve high encoding strength. Supporting the idea of β representing a local maximum, we observed a significant negative correlation between the β̃s' distance from β and the encoding strength of the corresponding pseudo-Pjoint (r = -0.37, p<0.001; Figure 4E). Thus, these results offer strong evidence for a specific role of this dlPFC region in integrating joint predictions of forthcoming task demand, and against the alternative possibility that explicit external and (likely implicit) internally generated predictions drive task-set updating independently, without being integrated. We also probed for the neural encoding of the relative strength, or confidence, of the task prediction (the inverse of prediction uncertainty, |Pjoint - 0.5|), which was represented in frontal and parietal regions (Figure 4—figure supplement 2).
fMRI data – Encoding of proactive switch demand at pre-cue onset
While Pjoint provides predictions about the forthcoming task, the degree to which accommodating this prediction requires proactive task-set updating depends on its relationship to the previous trial (|Tprev - Pjoint|). We here searched for brain regions that encoded this distance between the expected and prior task-set, and thus, the relative need to engage in preparatory task-set updating, or proactive switch demand. Regions encoding this demand are likely responsible for the actual reconfiguration of the task-set. We found encoding of proactive switch demand to be supported by a wide network of regions centered in frontal and parietal cortex (Figure 5A). Areas encoding switch demand included right frontopolar cortex (FPC, BA 10), left inferior frontal gyrus (IFG, pars opercularis), left precentral gyrus, bilateral supplementary motor area (SMA), left inferior parietal lobule/intraparietal sulcus (IPL/IPS), bilateral superior parietal lobule (SPL), precuneus, bilateral insula and the bilateral putamen of the dorsal striatum. This network extended well into visual cortex, including bilateral lingual gyrus and bilateral middle occipital gyrus, suggesting that participants also used task predictions to adjust visual processing of forthcoming information.
A core set of these regions (lateral PFC, SMA, and lateral posterior parietal cortex, Figure 5B) are responsive to multiple and changing task demands (Cole et al., 2013; Ruge et al., 2013), are functionally connected to each other (Yeo et al., 2011), and have been conceptualized as a frontoparietal cognitive control network (e.g., Duncan, 2013), while the dorsal striatum has long been proposed to contribute to updating of working memory content (Frank et al., 2001). A subset of these regions was also found to represent the confidence of task predictions (Figure 4—figure supplement 2). The current results suggest that the frontoparietal control network is not only involved in exerting control during task execution but also in the anticipatory updating of task-set representations driven by joint internally and externally generated predictions about forthcoming tasks.
fMRI data – Proactive interference
An alternative interpretation regarding model variable Pint (and associated findings) is that they may reflect, or be confounded by, proactive interference from previously activated task-sets. In the current modeling framework, at the time of pre-cue onset the degree of proactive interference that would be exerted from prior trials’ task-sets can be quantified as the discrepancy between the internal and external predictions, or |Pint-Pcue|. Using this metric to test whether there are any regions that encoded proactive interference, we performed the equivalent analyses on |Pint-Pcue| that we had previously performed for Pint and Pcue. This failed to reveal any above-chance encoding of |Pint-Pcue| in whole-brain MVPA, which suggests that proactive interference is unlikely to have played a major role in contributing to our data. In fact, this observation seems to fit with prior studies of proactive interference effects. For instance, using computational modeling and behavioral data, Badre and Wagner, 2006 showed that proactive interference effects decreased exponentially as a function of cue-task stimulus interval over a time range from 250 ms to 1150 ms. As the intervals in the current study were substantially longer (ranging from 1750 ms to 2500 ms), we speculate that any proactive interference effects may have decayed too much to be a major contributor in modulating proactive task switch preparations in this protocol.
fMRI data – Encoding of reactive switch demand at task-stimulus onset
We next sought to characterize the neural substrates of reactive switch demand, that is, the need for additional task-set reweighting once the task cue and stimulus are presented, as represented by the prediction error of the joint task prediction (PEjoint). To control for the influence of the actual task demand, PEjoint encoding strength was computed separately for motion and color trials, and the statistical analysis was then performed on data collapsed across the two tasks. This analysis revealed encoding of PEjoint in a set of regions consisting of left dmPFC (including the ACC), bilateral precentral and postcentral gyrus, precuneus, right SPL, left inferior occipital gyrus, and IFG (Figure 6A).

Neural representation of the joint task prediction error (PEjoint).
(A) T-statistics maps of brain regions showing significantly above-chance encoding of PEjoint. (B) Encoding strength (z-score) of PEjoint in the dmPFC/ACC cluster, plotted as a function of task. Each line represents one subject. (C,D) Histogram showing the encoding strength of pseudo-PEjoint based on randomly sampled β parameters and fMRI data in the dmPFC/ACC cluster in motion (C) and color (D) trials. (E) Encoding strength shown in (C), plotted as a function of the distance from PEjoint. (F) Encoding strength shown in (D), plotted as a function of the distance from PEjoint.
We also tested the encoding strength of Pjoint on both color and motion trials at the time of task-stimulus onset and did not find any brain areas passing the correction for multiple comparisons. In conjunction with the encoding of Pjoint at the onset of the pre-cue, this result corroborates the expectation that the representations of Pjoint and PEjoint should be temporally separated. We next compared the degree to which PEjoint was selectively represented based on behaviorally derived βs among the regions showing strong encoding of PEjoint. Compared to randomly sampled β̃s, the dmPFC (Figure 6B), right precentral and postcentral gyrus, and precuneus showed significantly above-chance encoding strength for PEjoint, with the dmPFC exhibiting the strongest effect. The effects remained above chance in the dmPFC when tested separately using motion (p<0.001; Figure 6C) and color trials (p=0.03; Figure 6D). Furthermore, for a given set of β̃s, the encoding strength of the corresponding pseudo-PEjoint decreased as a function of its Euclidean distance from the behaviorally derived βs for both motion (r = -0.51, p<0.001; Figure 6E) and color trials (r = -0.29, p<0.001; Figure 6F). Thus, we obtained strong evidence for an involvement of the dmPFC/ACC in the reactive updating of task demand predictions based on the joint-guidance of internally generated and externally provided cue information.
Finally, we sought to relate the current data set to findings from a recent study that traced neural substrates of (unsigned) PE of internally generated task predictions in a similar task-switching paradigm, but where internal predictions were driven by varying the likelihood of each task being cued over blocks of trials (Waskom et al., 2017). The corresponding analysis in the current data set is to search for regions that encode PEint at task-stimulus onset. In close correspondence to the results of Waskom et al., 2017, we observed robust encoding of PEint in the frontoparietal control network and the adjacent parietal portion of the dorsal attention network (Figure 6—figure supplement 1) (Yeo et al., 2011). These data show that the updating of internally generated task predictions derived from a non-predictive trial sequence (in the current study) recruits the same neural substrates as updating of task predictions in response to predictive trial sequences (Waskom et al., 2017).
Discussion
The human brain is capable of anticipating forthcoming task demand and exerting proactive control to reconfigure information processing in line with those predictions (Shenhav et al., 2013; Egner, 2014; Jiang et al., 2014; Abrahamse et al., 2016; Waskom et al., 2017). In the present study, we sought to characterize how this regulation of anticipatory control is implemented in the context of concurrent external, cue-based, and internally generated, cognitive history-based predictions of forthcoming control demand in the form of task-set updating. By directly manipulating and formally modeling cue-based and cognitive history-based task predictions, we demonstrated that behavior was driven by joint predictions from external and internal sources, and that these predictions were integrated in (left) dlPFC prior to task stimulus onset. The discrepancy between the joint prediction and the previous-trial task, which signals the demand of proactively shifting task-set, was represented in frontoparietal regions belonging to the frontoparietal control or multiple-demand network, along with the insula, putamen and visual areas. Moreover, upon task stimulus presentation, reactive updating of task-set based on the joint task-demand prediction error was found to be encoded most prominently in the dmPFC.
We conducted a rigorous test of the reliance on internally generated predictions by making them non-informative. Despite the lack of validity in predicting the forthcoming task, trial history-based, internally generated task predictions exerted a strong modulatory influence on RTs. This modulation was non-trivial, being about three times as strong as the modulation by the informative cue-induced task prediction at the group level. Notably, the finding that behavioral modulation by previous trials could be traced up to three trials back (Figure 2E–H) indicates that the modulation by internally generated predictions extends well beyond the scope of the classic task switch effect, which typically concerns only the immediately preceding trial. Formal model comparison also confirmed that the reliance on internally generated task prediction cannot be explained by the classic task switch effect alone.
While these results are consistent with some prior behavioral studies that documented effects of internal, trial history-based predictions even in the presence of 100% informative external cues (Alpay et al., 2009; Correa et al., 2009; Kemper et al., 2016), they are nevertheless surprising, because the strong reliance on the non-informative internally generated task prediction seems to contradict the premise of using such predictions in the first place, namely, to optimize the engagement of cognitive control. However, we argue that these results indicate that the process of applying proactive cognitive control based on concurrent internal and cue-based control demand predictions is itself the result of optimization based on a cost/benefit analysis, as proposed by an influential recent model of control regulation, the expected value of control (EVC) model (Shenhav et al., 2013; Shenhav et al., 2017). This model considers not only the potential benefits of applying top-down control but also the assumed inherent costs (or effort) of doing so (cf. Kool et al., 2010), proposing that the engagement of cognitive control is driven by a cost/benefit analysis that optimizes the predicted gain of applying control relative to its cost. When applied to the conundrum of the apparent overreliance on internal, trial history-based versus external, cue-based control predictions, the EVC model would predict that this situation could come about if applying internal control predictions incurred a smaller cognitive effort cost than employing external control predictions.
The implied lower costs of internally generated relative to cue-induced predictions may originate from two, not mutually exclusive, sources. First, the cue-induced prediction requires learning and retrieving the associations between an external graphical pre-cue and its corresponding task-set prediction, which may incur a higher cost than internally generated prediction. Second, it is reasonable to assume that internally generated predictions that anticipate the near future to resemble the recent past represent a default cognitive strategy, grounded in evolutionary adaptation to an environment with high temporal auto-correlation of sensory inputs (Dong and Atick, 1995). Accordingly, it has been shown that ongoing perception (Fischer and Whitney, 2014) and decision-making (Cheadle et al., 2014) are strongly reliant on recent experience. The present results suggest that the same is true for cognitive control processes: the recent past is (implicitly) employed as a powerful predictor of the immediate future (see Egner, 2014; Jiang et al., 2015b; Waskom et al., 2017). If such self-generated predictions of task demand represent a default mode of control regulation, their generation is likely cheap; importantly, it may also require substantial cognitive effort to override such predictions, which would be another means of rendering the use of the explicit external cue more costly.
An alternative account could be that the reliance on Pint originated from Pcue. Specifically, if the learning of Pcue for each pre-cue is achieved by learning from previous trials sharing the same pre-cue, the ‘informativeness’ of the pre-cue-specific trial history may render the whole trial history informative, hence promoting the reliance on Pint. As the learning of Pcue asymptotes, Pcue would become disentangled from Pint. This account leads to the key prediction that the reliance on Pint (i.e., 1 - β) should be higher at the beginning of the task than later on, as the gradual detachment of Pcue from trial history would decrease the reliance on trial history over time. However, the lack of difference in β estimates between the first and last three runs of the fMRI session does not support this notion. One possibility is that this type of process may have occurred during the stair-casing procedure, and hence did not end up affecting the fMRI session.
The internally generated predictions we observed here took the form of increased expectations of task repetition with increasing run length of a given task (also known as the ‘hot-hand fallacy’; Gilovich et al., 1985). This contrasts with another intuitive possibility, whereby participants could have predicted a change in task with increasing run length of task repetitions (known as the ‘gambler’s fallacy’; Jarvik, 1951). The fact that we observed the former rather than the latter is in line with our assumption that the internal predictions we measure here are likely implicit in nature, as previous work has demonstrated a dissociation between explicit outcome predictions following the gambler’s fallacy and implicit predictions, as expressed in behavior, following the hot-hand fallacy (Perruchet, 1985; Jiménez and Méndez, 2013; Perruchet, 2015). Note also that participants’ tendency to follow either fallacy should depend on their beliefs as to whether events are generated randomly (in gambling) or non-randomly (the hot hand). Thus, the fact that internally generated predictions followed the hot-hand pattern further corroborates that participants (likely implicitly) assumed the trial history to be non-random.
The EVC model also predicts that the relative reliance on internally vs. externally generated predictions, which are the outcome of a cost-benefit analysis, will change as a function of the cost and/or the benefit of engaging the more informative externally generated predictions. Further studies are encouraged to explore other manipulations of benefit and cost (e.g., monetary reward) and test how such manipulations shift the reliance on the informative but costly explicit predictions.
The fMRI data shed light on how the two types of task predictions jointly guide cognitive control in the brain. First, rather than distinct anatomical sources exerting parallel influences on behavior, we detected an integrated representation of task predictions at the onset of the pre-cue in left dlPFC. This is in line with this region’s well-established role in representing task rules and strategies (for review, see Sakai, 2008; cf. Waskom et al., 2014; Waskom et al., 2017). Additionally, based on this joint prediction, the anticipated switch demand, that is, the amount of required task-set updating, was represented in the frontoparietal control network, insula, and dorsal striatum. These regions have long been implicated in task-set regulation, as revealed initially by studies comparing neural activity on task-switch versus task-repeat cues or trials (reviewed in Ruge et al., 2013), and more recently by studies using multivariate pattern analysis to decode the currently active task-set from frontoparietal cortex (Woolgar et al., 2011; Waskom et al., 2014; Loose et al., 2017; Qiao et al., 2017). The present results move beyond these findings by showing that, beyond merely encoding the currently active task rule, the frontoparietal control network is engaged in predicting control demands, representing the degree to which the current task-set has to be updated to suit the forthcoming task (i.e., switch likelihood).
After the onset of the task cue and task stimulus, encoding of the prediction error associated with that joint prediction was found in the dmPFC/ACC, which is consistent with a large literature demonstrating ACC and dmPFC encoding of prediction error in a variety of different contexts (Holroyd and Coles, 2002; Ito et al., 2003; Matsumoto et al., 2007; Alexander and Brown, 2011; Ullsperger et al., 2014). The representation of joint prediction error also reflects the degree to which the task-set has to be updated reactively; thus, this finding is also consistent with the theory that the ACC monitors performance and signals the need for adjustments in cognitive control (Botvinick et al., 2001; Ridderinkhof et al., 2004). Finally, we also documented that the neural substrates of prediction error from internally generated predictions in the present study matched those of a recent study where those self-generated predictions were related to probabilistically predictable task sequences (Waskom et al., 2017). This finding indicates that internal predictions derived from a non-predictive trial sequence are encoded in much the same fashion as those derived from a predictable sequence.
Our findings also exhibit a clear temporal segregation between the frontoparietal encoding of the proactive switch demand at the onset of pre-cue (Figure 5A) and the dmPFC/ACC encoding of reactive switch demand when the cue and task stimulus were presented (Figure 6A). This temporal and functional differentiation maps closely onto the ‘dual mechanisms of cognitive control’ framework (Braver, 2012), which distinguishes between proactive, anticipatory, and reactive, compensatory application of control. The present results strongly suggest that the lateral frontoparietal control network guides proactive control implementation, whereas the dmPFC/ACC detects the prediction error of control demand prediction, and signals the need of (or perhaps implements) reactive control to match the actual (rather than anticipated) cognitive control demand. In support of this contention, in a recent study that explicitly distinguished between proactive and reactive control in a conflict task, a left IFG area overlapping the IFG area in Figure 5 was also implicated in using proactive control demand predictions (Jiang et al., 2015a). Furthermore, disrupting function in this area using transcranial magnetic stimulation selectively diminished the effects of learned control predictions on behavior (Muhle-Karbe et al., 2018).
Finally, the current findings suggest a novel extension of the EVC theory, which originally addressed a cost/benefit analysis of engaging control after predictions of control demand have been formed (Shenhav et al., 2013). The present results suggest that a cost/benefit optimization process also applies to the preceding stage, where different inputs to the control demand prediction process are reconciled. This demand prediction reconciliation process would not only precede the putative EVC calculation but should also directly influence it, in a unidirectional, hierarchical fashion. More generally, in light of recent theoretical proposals concerning the hierarchical architecture of cognitive control (Koechlin and Summerfield, 2007; Badre and Nee, 2018), an overarching optimization procedure may simultaneously weigh the costs and benefits of multiple cognitive control processes across different levels of the hierarchy in order to guide goal-directed behavior.
In conclusion, we combined a probabilistic cued task-switching paradigm with computational modeling and neuroimaging to show that concurrent externally and internally derived predictions of cognitive control demand are reconciled into a joint prediction. Behavior was dominated by internal, trial history-based predictions, likely due to the lower cost of generating, or the higher cost of overriding, these predictions. The integrated prediction was encoded in dlPFC and guided proactive cognitive control over task-set updating, which was represented in the frontoparietal control network. Subsequently, when actual control demand deviated from predicted demand, the dmPFC/ACC encoded the prediction error of the joint prediction, guiding reactive task-set updating. In this manner, the present findings reveal that flexible human behavior depends on multiple regulatory processes that govern cognitive control.
Materials and methods
Subjects
Twenty-eight volunteers gave informed written consent, in accordance with institutional guidelines. All subjects had normal or corrected-to-normal vision. Data from six subjects were excluded from further analysis due to low (<50%) accuracy in at least one of the cells of the experimental design (see below). The final sample consisted of 22 subjects (15 females; 22–35 years old, mean age = 27 years). This study was approved by the Duke University Health System Institutional Review Board.
Experimental procedures
Visual stimuli were presented on a back-projection screen viewed via a mirror attached to the scanner head coil. Tasks and response collection were programmed using Psychtoolbox 3 (Brainard, 1997) in Matlab (MathWorks, Inc.). All visual stimuli were presented in the center of the screen on a grey background. Each trial started with the presentation of a pie chart (radius ≈ 2.2° of visual angle) for 0.5 s. The relative areas of the black vs. white regions indicated the probability of a black vs. white frame surrounding the imperative dot cloud later in the trial (see below). Five probability levels were used in this study: 20%, 40%, 50%, 60% and 80% (applied to both black and white colors). The probabilities of seeing a black vs. white frame always summed to 1, as the black and white areas together always occupied the whole pie chart. To make the perceptual appearance of the predictive cue different across trials, the pie chart rotated by a random angle on each trial. Following the pie chart, a fixation crosshair was presented for an exponentially jittered duration between 1.75 and 2.5 s (step size = 0.25 s). The fixation crosshair was followed by a cloud of 60 colored (either purple or green, radius ≈ 0.15°) moving (speed randomly drawn from a uniform distribution from 13°/s to 15°/s) dots. The dot cloud spanned approximately 6° of visual angle both vertically and horizontally and lasted for 1.5 s. For each trial, the colors and motion directions of the dots were defined by their respective noise levels (ranging from 0.1 to 0.9, determined by a staircase procedure described below), a dominant motion direction (left or right) and a dominant color (green or purple). For example, a combination of a color noise level of 0.8 and a motion noise level of 0.4 means that: (1) 20% (i.e., 1 − 0.8) of all dots were randomly selected to have the dominant color; (2) the remaining 80% of dots were randomly colored green or purple with equal probability; (3) 60% (i.e., 1 − 0.4) of all dots were randomly chosen to move in the dominant direction; (4) the remaining 40% of dots had random motion directions; and (5) the dots with the dominant color were selected independently from the dots with the dominant motion direction.
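For concreteness, the following MATLAB sketch illustrates how dot colors and directions could be assigned under this scheme, using the example noise levels from the text (0.8 for color, 0.4 for motion). This is an illustrative reconstruction, not the published Psychtoolbox task code; all variable names are hypothetical.

```matlab
% Illustrative sketch of the independent color/motion coherence assignment.
nDots = 60;
colorNoise = 0.8;                      % fraction of color-random dots
motionNoise = 0.4;                     % fraction of direction-random dots
domColor = 'green';                    % dominant color on this trial
domDir = 180;                          % dominant direction (180 deg = leftward)

% Colors: exactly (1 - colorNoise) of dots get the dominant color; the rest
% are green or purple with equal probability.
colors = repmat({'purple'}, nDots, 1);
colors(rand(nDots, 1) < 0.5) = {'green'};
colorIdx = randperm(nDots, round((1 - colorNoise) * nDots));
colors(colorIdx) = {domColor};

% Motion: dominant-direction dots are selected independently of the colors.
dirs = rand(nDots, 1) * 360;           % random directions in degrees
motionIdx = randperm(nDots, round((1 - motionNoise) * nDots));
dirs(motionIdx) = domDir;
```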
The dot cloud was surrounded by a frame whose color (either black or white) was predicted by the preceding pie chart. Depending on the color of the frame, participants had to judge the dominant color (green vs. purple) or motion direction (left vs. right) of the dot cloud via two MR-compatible button boxes (one for each hand). The association between frame color and task, and the response mappings (e.g., left hand: middle/index finger = green/purple; and right hand: index/middle finger = left/right), were counterbalanced across subjects. If no response was detected by the offset of the dot cloud, a warning message ('Please respond faster!') was shown on the screen for 2 s. Finally, a second fixation cross (representing the inter-trial interval) was presented for an exponentially jittered duration between 3.5 and 5 s (step size = 0.5 s). If applicable, the duration of the warning message was deducted from this duration. The main task consisted of 9 runs of 50 trials each. Other factors were counterbalanced within each run: (a) each of the five probability levels appeared in 10 trials, and (b) each frame color, dominant color, and dominant motion direction appeared in 25 trials. Importantly, the task sequences were pseudo-randomly generated so that the tasks performed on previous trials had no predictive power regarding the task to be performed on the current trial.
Prior to fMRI scanning, participants performed a practice session of 20 trials (ITI = 2 s for all trials) to ensure that they comprehended the task instructions. The practice session was followed by a staircase procedure (4 runs of 50 trials each) that adaptively and separately adjusted the noise levels for color and motion to achieve an accuracy of ~87.5% for both color and motion trials (cf. Waskom et al., 2017). The trial structure and counterbalancing were identical to the main task. The noise levels for color and motion both started at 0.5 and were re-evaluated at every fifth color trial and every fifth motion trial, respectively (checkpoints), based on two rules: (1) if at most one error was made since the last checkpoint, the noise level for the checkpoint's corresponding target feature increased by 0.025; and (2) if the noise level for a feature had not changed at any of the past 4 checkpoints, that noise level decreased by 0.1.
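A minimal MATLAB sketch of these two staircase rules for a single feature is given below. The error simulation and the bounds on the noise level are illustrative assumptions, not part of the published procedure.

```matlab
% Minimal sketch of the staircase rules for one feature (e.g., color).
rng(1);
noise = 0.5;                               % starting noise level
nCheckpoints = 40;                         % e.g., 200 staircase trials / 5
changedAt = false(nCheckpoints, 1);        % did noise change at each checkpoint?
for c = 1:nCheckpoints
    errs = sum(rand(5, 1) > 0.875);        % stand-in: errors in the last 5 trials
    if errs <= 1                           % rule 1: at most one error -> harder
        noise = noise + 0.025;
        changedAt(c) = true;
    elseif c > 4 && ~any(changedAt(c-4:c-1))   % rule 2: no change at the past
        noise = noise - 0.1;               % 4 checkpoints -> easier
        changedAt(c) = true;
    end
    noise = min(max(noise, 0.1), 0.9);     % keep within the stated 0.1-0.9 range
end
```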
Behavioral analysis
Error trials and outlier trials (RTs outside subject mean ± 3 SDs) were removed from further analyses. Two repeated-measures ANOVAs were conducted on both accuracy and RT data. The first ANOVA concerned the effect of the task on the previous trial (previous task: color or motion × current task: color or motion). The second ANOVA focused on the effect of the predictive cues (task prediction: 20%, 40%, 50%, 60% or 80% × current task: color or motion). Similar repeated-measures ANOVAs were also conducted with trials i−2 and i−3 in place of the previous trial, in order to test the influence of earlier trials on behavior. Note that a 3-way ANOVA (previous task × task prediction × current task) was not performed due to low trial counts in the unexpected-task conditions (e.g., only 9 trials in the condition of a motion trial following a color trial and carrying an invalid cue indicating an 80% chance of a color trial).
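As an illustration, the first (previous task × current task) ANOVA on RT could be set up in MATLAB roughly as follows. This is a sketch with synthetic condition means, not the published analysis code; it assumes the Statistics and Machine Learning Toolbox (fitrm/ranova).

```matlab
% Sketch of the 2 x 2 repeated-measures ANOVA on mean RT per condition.
rng(2);
rtMeans = 600 + 50 * rand(22, 4);          % subjects x conditions (cc, cm, mc, mm)
t = array2table(rtMeans, 'VariableNames', {'cc', 'cm', 'mc', 'mm'});
within = table(categorical({'c'; 'c'; 'm'; 'm'}), ...   % previous task
               categorical({'c'; 'm'; 'c'; 'm'}), ...   % current task
               'VariableNames', {'prevTask', 'currTask'});
rm = fitrm(t, 'cc-mm ~ 1', 'WithinDesign', within);
ranovaTbl = ranova(rm, 'WithinModel', 'prevTask*currTask');
disp(ranovaTbl)
```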
Modeling and model comparison
Rival models in the model comparison were GLMs containing a subset of the trial-wise estimates of PEprev, PEcue, and PEint. Based on the observation that larger PEs slow responses (Waskom et al., 2017), a nonnegativity constraint was applied to the coefficients (Chiu et al., 2017). Each model also included a constant regressor. To compare models, behavioral data were divided into 3 folds, each consisting of data from 3 runs. Two folds were used as training data to fit the GLMs to RTs. To account for the main effect of task on RT, fitting was performed separately for color- and motion-task trials. The resulting coefficients were then applied to the same GLMs to predict trial-wise RTs in the remaining fold (test data). This procedure was repeated until each fold had served as test data once. Model performance was measured as the product of the number of trials and the log of the average squared trial-wise prediction error of RTs across all 3 test folds, computed for each model and each subject.
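The following MATLAB sketch illustrates this cross-validation scheme for one model (here, regressors PEcue and PEint). It uses synthetic data and simplifies two details: the per-task split is omitted, and the constant regressor is handled by demeaning so that lsqnonneg can enforce nonnegativity on the PE coefficients only.

```matlab
% Sketch of the 3-fold cross-validated model comparison (synthetic data).
rng(3);
nTrials = 450;
PEcue = rand(nTrials, 1);  PEint = rand(nTrials, 1);
RT = 600 + 30 * PEcue + 80 * PEint + 20 * randn(nTrials, 1);
folds = repelem(1:3, nTrials / 3)';        % 3 chronological folds of 3 runs each
X = [PEcue, PEint];
sqErr = [];
for k = 1:3
    tr = folds ~= k;  te = folds == k;
    b = lsqnonneg(X(tr, :), RT(tr) - mean(RT(tr)));   % nonnegative coefficients
    pred = X(te, :) * b + mean(RT(tr));               % predicted test-fold RTs
    sqErr = [sqErr; (RT(te) - pred) .^ 2];            %#ok<AGROW>
end
score = nTrials * log(mean(sqErr));        % lower score = better out-of-sample fit
```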
MRI acquisition and preprocessing
Images were acquired parallel to the AC–PC line on a 3T GE scanner. Structural images were acquired using a T1-weighted SPGR axial scan sequence (146 slices, slice thickness = 1 mm, TR = 8.124 ms, FoV = 256 mm × 256 mm, in-plane resolution = 1 mm × 1 mm). Functional images were acquired using a T2*-weighted single-shot gradient EPI sequence of 42 contiguous axial slices (slice thickness = 3 mm, TR = 2 s, TE = 28 ms, flip angle = 90°, FoV = 192 mm × 192 mm, in-plane resolution = 3 mm × 3 mm). Preprocessing was done using SPM8 (http://www.fil.ion.ucl.ac.uk/spm/). After discarding the first five scans of each run, the remaining images underwent spatial realignment, slice-time correction, and spatial normalization, resulting in normalized functional images in their native resolution. Normalized images were then smoothed using a Gaussian kernel with 5 mm full width at half maximum to increase the signal-to-noise ratio (Xue et al., 2010). Single-trial fMRI activity levels at the onset of the pre-cue were estimated separately following Mumford et al. (2012), by regressing the fMRI signals against a GLM consisting of the HRF-convolved onset of that trial's pre-cue, the onsets of all other pre-cues, and the onsets of all task stimuli. Regressors of no interest, including the estimated motion parameters and the mean white matter (WM) and cerebrospinal fluid BOLD signals, were also included in the GLM. Single-trial fMRI activity levels at the onset of the task stimulus were calculated analogously. For each trial, the mean t-value within the WM mask was subtracted from the resulting t-map, in order to reduce non-neural noise in t estimates across individual t-maps, such as noise introduced by the different GLMs having partially overlapping regressors of varying collinearity. The adjusted t-maps were then used in the MVPA.
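As a small illustration of the final adjustment step, the WM-mean subtraction amounts to the following sketch (synthetic volumes; the mask coordinates are stand-ins, and in practice the t-map comes from the trial-wise GLM).

```matlab
% Minimal sketch of the WM-mean adjustment applied to each single-trial t-map.
rng(4);
tMap = randn(64, 64, 42);                 % one trial's whole-brain t-map
wmMask = false(64, 64, 42);
wmMask(25:35, 25:35, 18:24) = true;       % stand-in white-matter mask
tMapAdj = tMap - mean(tMap(wmMask));      % subtract the mean WM t-value
```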
MVPA procedures
We conducted searchlight-based (r = 2 voxels) MVPAs to quantify the representational strength of a given variable (e.g., Pcue). MVPAs were conducted within a grey matter (GM) mask that was generated by dilating GM voxels (GM value > 0.01) in the segmented T1 template by 1 voxel (Jiang et al., 2015a). For each searchlight, data from the 9 runs and the trial-wise variable time course were chronologically divided into 3 folds (3 runs per fold), on which a 3-fold cross-validation was performed. In the training folds, trial-wise fMRI activity levels from all masked GM voxels within the searchlight were fit to the variable time course, resulting in one weight for each voxel. The fitting took the form of a ridge regression (Xue et al., 2010) to control for over-fitting. These weights were then applied to the fMRI data in the test fold to produce a predicted variable time course (Figure 7). A high linear correlation between the predicted and actual variable time courses indicated that the variable was represented in the neural data; no correlation (i.e., a correlation coefficient of 0) would indicate no representation of the variable of interest. Since each fold served as test data once, three correlation coefficients were obtained. We Fisher-transformed these 3 correlation coefficients and used their mean to quantify the representational strength for the searchlight. After searchlight analyses were performed across the whole brain (using each GM voxel as a searchlight center once), a representational strength map was generated, with each searchlight's center voxel encoding the degree to which that searchlight represented the variable in question.
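The following MATLAB sketch illustrates this train/test procedure for a single searchlight, using synthetic data. The ridge penalty lambda is an assumed free parameter here (the published analysis may have set it differently), and leaving the intercept unpenalized is a design choice of the sketch.

```matlab
% Sketch of the cross-validated encoding analysis for one searchlight.
rng(5);
nTrials = 450;  nVox = 33;                % ~33 voxels in an r = 2 searchlight
y = rand(nTrials, 1);                     % model-derived variable (e.g., Pjoint)
A = y * randn(1, nVox) + randn(nTrials, nVox);   % voxel activity carrying signal
folds = repelem(1:3, nTrials / 3)';
lambda = 1;  z = zeros(3, 1);
P = eye(nVox + 1);  P(1, 1) = 0;          % do not penalize the intercept
for k = 1:3
    tr = folds ~= k;  te = folds == k;
    Xtr = [ones(nnz(tr), 1), A(tr, :)];
    w = (Xtr' * Xtr + lambda * P) \ (Xtr' * y(tr));  % ridge solution
    pred = [ones(nnz(te), 1), A(te, :)] * w;         % predicted variable time course
    z(k) = atanh(corr(pred, y(te)));      % Fisher-transformed correlation
end
strength = mean(z);                       % representational strength
```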

Illustration of the MVPA procedure.
Training data were used to estimate the weights that fit (via either linear ridge regression or logistic regression) voxel-wise estimated fMRI activity (A1) to model estimates (y1) in a trial-by-trial manner. The weights were then applied to the test fMRI data (A2) to predict the variable time course in the test data (ŷ2). The goodness of prediction, or representational strength, is measured by the correlation between the predicted time course (ŷ2) and the actual model estimates (y2).
For group-level analyses, a one-sample t-test against 0 was conducted on each searchlight's center voxel of the individual representational strength maps using AFNI's 3dttest++ program. To correct for multiple comparisons, group-level results were corrected using a non-parametric approach (https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dClustSim.html) that randomly permuted the data 10,000 times in order to generate a robust null distribution of the statistical map. With the voxel-wise threshold set at p<0.001 and the family-wise error rate set at 0.05, this approach yielded cluster size thresholds ranging from 16 to 23 searchlights, depending on the specific analysis.
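For intuition, a sign-flipping permutation null for the group test is sketched below, reduced to a single searchlight. This is a simplified stand-in: the actual correction operates on cluster extent across the whole map via 3dttest++/3dClustSim.

```matlab
% Conceptual sketch of a sign-flipping permutation null (one searchlight).
rng(6);
nSubj = 22;
strength = 0.05 + 0.1 * randn(nSubj, 1);  % per-subject representational strengths
[~, ~, ~, stats] = ttest(strength);       % observed one-sample t statistic
nPerm = 10000;  nullT = zeros(nPerm, 1);
for i = 1:nPerm
    flipped = strength .* sign(randn(nSubj, 1));     % random sign flips
    nullT(i) = mean(flipped) / (std(flipped) / sqrt(nSubj));
end
pPerm = mean(abs(nullT) >= abs(stats.tstat));        % permutation p-value
```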
Data availability
Statistical maps for all whole-brain fMRI analyses have been uploaded to https://neurovault.org/collections/3732/. Original behavioral and fMRI data are available at https://openneuro.org/datasets/ds001493. The MATLAB source code for the task and key analyses has been made available on GitHub (https://github.com/JiefengJiang/eLife2018; copy archived at https://github.com/elifesciences-publications/eLife2018).
References
- Grounding cognitive control in associative learning. Psychological Bulletin 142:693–728. https://doi.org/10.1037/bul0000047
- Medial prefrontal cortex as an action-outcome predictor. Nature Neuroscience 14:1338–1344. https://doi.org/10.1038/nn.2921
- Book: Shifting intentional set: exploring the dynamic control of tasks. In: Umilta C, Moscovitch M, editors. Conscious and Nonconscious Information Processing: Attention and Performance XV. Cambridge, MA: MIT Press. pp. 421–452.
- Precueing imminent conflict does not override sequence-dependent interference adaptation. Psychological Research Psychologische Forschung 73:803–816. https://doi.org/10.1007/s00426-008-0196-9
- Frontal cortex and the hierarchical control of behavior. Trends in Cognitive Sciences 22:170–188. https://doi.org/10.1016/j.tics.2017.11.005
- Conflict monitoring and cognitive control. Psychological Review 108:624–652. https://doi.org/10.1037/0033-295X.108.3.624
- The variable nature of cognitive control: a dual mechanisms framework. Trends in Cognitive Sciences 16:106–113. https://doi.org/10.1016/j.tics.2011.12.010
- The caudate nucleus mediates learning of stimulus-control state associations. The Journal of Neuroscience 37:1028–1038. https://doi.org/10.1523/JNEUROSCI.0778-16.2016
- Cueing cognitive flexibility: item-specific learning of switch readiness. Journal of Experimental Psychology: Human Perception and Performance 43:1950–1960. https://doi.org/10.1037/xhp0000420
- Multi-task connectivity reveals flexible hubs for adaptive task control. Nature Neuroscience 16:1348–1355. https://doi.org/10.1038/nn.3470
- Anticipating conflict facilitates controlled stimulus-response selection. Journal of Cognitive Neuroscience 21:1461–1472. https://doi.org/10.1162/jocn.2009.21136
- Statistics of natural time-varying images. Network: Computation in Neural Systems 6:345–358. https://doi.org/10.1088/0954-898X_6_3_003
- Preparatory processes in the task-switching paradigm: evidence from the use of probability cues. Journal of Experimental Psychology: Learning, Memory, and Cognition 28:468–483. https://doi.org/10.1037/0278-7393.28.3.468
- Preparatory adjustment of cognitive control in the task switching paradigm. Psychonomic Bulletin & Review 13:334–338. https://doi.org/10.3758/BF03193853
- Serial dependence in visual perception. Nature Neuroscience 17:738–743. https://doi.org/10.1038/nn.3689
- Interactions between frontal cortex and basal ganglia in working memory: a computational model. Cognitive, Affective, & Behavioral Neuroscience 1:137–160. https://doi.org/10.3758/CABN.1.2.137
- The hot hand in basketball: on the misperception of random sequences. Cognitive Psychology 17:295–314. https://doi.org/10.1016/0010-0285(85)90010-6
- Probability learning and a negative recency effect in the serial anticipation of alternative symbols. Journal of Experimental Psychology 41:291–297. https://doi.org/10.1037/h0056878
- Bayesian modeling of flexible cognitive control. Neuroscience & Biobehavioral Reviews 46:30–43. https://doi.org/10.1016/j.neubiorev.2014.06.001
- Visual prediction error spreads across object features in human visual cortex. The Journal of Neuroscience 36:12746–12763. https://doi.org/10.1523/JNEUROSCI.1546-16.2016
- It is not what you expect: dissociating conflict adaptation from expectancies in a Stroop task. Journal of Experimental Psychology: Human Perception and Performance 39:271–284. https://doi.org/10.1037/a0027734
- The neural basis of task switching changes with skill acquisition. Frontiers in Human Neuroscience 8:339. https://doi.org/10.3389/fnhum.2014.00339
- Control and interference in task switching – a review. Psychological Bulletin 136:849–874. https://doi.org/10.1037/a0019842
- An information theoretical approach to prefrontal executive function. Trends in Cognitive Sciences 11:229–235. https://doi.org/10.1016/j.tics.2007.04.005
- Decision making and the avoidance of cognitive demand. Journal of Experimental Psychology: General 139:665–682. https://doi.org/10.1037/a0020198
- Switch-independent task representations in frontal and parietal cortex. The Journal of Neuroscience 37:8033–8042. https://doi.org/10.1523/JNEUROSCI.3656-16.2017
- Medial prefrontal cell activity signaling prediction errors of action values. Nature Neuroscience 10:647–656. https://doi.org/10.1038/nn1890
- What matters in the cued task-switching paradigm: tasks or cues? Psychonomic Bulletin & Review 13:794–799. https://doi.org/10.3758/BF03193999
- An integrative theory of prefrontal cortex function. Annual Review of Neuroscience 24:167–202. https://doi.org/10.1146/annurev.neuro.24.1.167
- Task switching. Trends in Cognitive Sciences 7:134–140. https://doi.org/10.1016/S1364-6613(03)00028-7
- Causal evidence for learning-dependent frontal lobe contributions to cognitive control. The Journal of Neuroscience 38:962–973. https://doi.org/10.1523/JNEUROSCI.1467-17.2017
- Book: Attention to action: willed and automatic control of behavior. In: Schwartz GE, Shapiro D, editors. Consciousness and Self-Regulation. New York: Plenum Press. https://doi.org/10.1007/978-1-4757-0629-1_1
- A pitfall for the expectancy theory of human eyelid conditioning. The Pavlovian Journal of Biological Science 20:163–170.
- Dissociating conscious expectancies from automatic link formation in associative learning: a review on the so-called Perruchet effect. Journal of Experimental Psychology: Animal Learning and Cognition 41:105–127. https://doi.org/10.1037/xan0000060
- Dynamic trial-by-trial recoding of task-set representations in the frontoparietal cortex mediates behavioral flexibility. The Journal of Neuroscience 37:11037–11050. https://doi.org/10.1523/JNEUROSCI.0935-17.2017
- Costs of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General 124:207–231. https://doi.org/10.1037/0096-3445.124.2.207
- Task set and prefrontal cortex. Annual Review of Neuroscience 31:219–245. https://doi.org/10.1146/annurev.neuro.31.060407.125642
- Toward a rational and mechanistic account of mental effort. Annual Review of Neuroscience 40:99–124. https://doi.org/10.1146/annurev-neuro-072116-031526
- Bayesian model selection for group studies. NeuroImage 46:1004–1017. https://doi.org/10.1016/j.neuroimage.2009.03.025
- Neurophysiology of performance monitoring and adaptive behavior. Physiological Reviews 94:35–79. https://doi.org/10.1152/physrev.00041.2012
- Frontoparietal representations of task context support the flexible control of goal-directed cognition. Journal of Neuroscience 34:10743–10755. https://doi.org/10.1523/JNEUROSCI.5282-13.2014
- Adaptive coding of task-relevant information in human frontoparietal cortex. Journal of Neuroscience 31:14592–14599. https://doi.org/10.1523/JNEUROSCI.2616-11.2011
- The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology 106:1125–1165. https://doi.org/10.1152/jn.00338.2011
Decision letter
- David Badre, Reviewing Editor; Brown University, United States
- Michael J Frank, Senior and Reviewing Editor; Brown University, United States
In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.
[Editors’ note: a previous version of this study was rejected after peer review, but the authors submitted for reconsideration. The first decision letter after peer review is shown below.]
Thank you for submitting your work entitled "Integrated External and Internally Generated Task Predictions Jointly Guide Cognitive Control in Prefrontal Cortex" for consideration by eLife. Your article has been reviewed by a Senior Editor, a Reviewing Editor, and three reviewers. The following individuals involved in the review of your submission have agreed to reveal their identity: Carlo Reverberi (Reviewer #2).
Our decision has been reached after consultation between the reviewers. Based on these discussions and the individual reviews below, we regret to inform you that this submission will not be considered further for publication in eLife.
This paper takes a novel and sophisticated approach to an important problem. Understanding how internal predictions are integrated with external cues in order to manage task sets and control behavior is of high importance. The reviewers and editors all recognized this importance, and were impressed by the sophisticated approach to modeling behavior and the brain.
The "internal prediction" side of this problem is central to the theoretical impact of this study. However, as the reviews and the subsequent discussion of them made evident, it was unclear what this internal prediction reflects in this study. There appear to be several alternatives, with fairly different implications. For example, rather than an internal prediction, the internal effect could reflect some type of proactive interference (PI). Reviewer 2 points out that a more model-based internal prediction might have made qualitatively different predictions than PI that were not tested here. If true, this would complicate the interpretation of Pjoint, particularly if it did make different predictions. Reviewer 3 further raised a separate point that learning might occur over joint external/internal prediction, rather than keeping them separate.
After considerable deliberation, we decided that there is substantial work to be done to treat a number of alternative accounts of what is happening here and to specify the nature of internal prediction in this task. Some could be done with additional modeling; others might require additional data collection. At eLife, we only invite a resubmission in cases where a single revision is likely to conclusively address any major points. The amount to do here and the uncertain outcome of that process is more substantial than would be typical for this type of revision. So, we have decided to reject the paper in its current form. However, in light of the strengths we note above, if you were to undertake the extensive additional analysis and/or data collection required to better characterize these data, we would be willing to consider the paper at eLife as a new submission. Though, we must note, that this invitation does not guarantee that the new submission would be reviewed or accepted.
Reviewer #1:
The present study examined how internally and externally cued task sets are integrated, using quantitative analysis of behavior and fMRI data. Externally cued task sets consisted of visually presented probabilistic cues. Internally cued task sets were modeled by how much recent trial history affected present task performance. Each source (internal and external) can be formalized as a prediction of the forthcoming task. Violations of these predictions (i.e. prediction errors; PEs) were hypothesized to reduce performance. The authors found that both internal and external PEs slowed performance (or conversely, that accurate predictions sped performance). Based on behavior, individual reliance on both sources was estimated and related to fMRI signals. Joint task prediction was related to signals in the DLPFC, preparatory task-set updating was related to frontal-parietal cortices, and PEs were related to activation in the dmPFC. Collectively, these data indicate separable contributions of multiple control regions to flexible behavior.
There is a lot to like about this study, particularly regarding the formal quantification of internal and external predictions/PEs. I do feel as though there is some opacity with some of the methodological choices, which may breed misunderstanding. Indeed, it is possible that my most substantive concern stems from such misunderstandings. So, perhaps additional clarification is all that is needed.
Essential revisions:
If I understand correctly, Pjoint reflects the prediction of the color task, which was the harder of the two tasks. So, if activation correlated positively with Pjoint, that could reflect preparation for the color task or preparation for cognitive load (e.g., much like a univariate analysis of a color-word Stroop task would look if contrasting color vs word). It's quite difficult to discern among these possibilities and more general task-set prediction with the MVPA procedure employed by the authors. As I understand it, the MVPA procedure is very similar to a traditional univariate analysis, at least initially, with the key differences being that (A) a neighborhood of voxels is regressed onto a predictor rather than a single voxel, and (B) ridge regression is used to induce regularization. Given that the data are smoothed, I suspect that the use of a searchlight rather than a single voxel simply induces more smoothing (i.e. effectively mimicked by a larger smoothing kernel during preprocessing), so then the only real difference is the use of ridge regression vs OLS. Then, where the methods differ substantively is that inference from a univariate method would interpret the sign of the resulting β values (e.g., positive is activation), whereas the sign is effectively removed in the MVPA procedure in the cross-validation step. In this case, cross-validation can succeed if the betas are positive in both the training and test data (i.e. more activity in the DLPFC in preparation for the color task/harder task), negative in both the training and test data (i.e. less activity in the DLPFC in preparation for the color task/harder task), or a mixture of the two. The mixture would be what would indicate an abstract task prediction. However, given the 30 or so years we've been contrasting harder tasks vs easier tasks and seeing the DLPFC engaged more for the former, I'm worried that what is being depicted is preparation for the harder task rather than an abstract task set. There is still something very interesting about that prediction being formulated by both internal and external sources, but the interpretation is quite different (e.g., representation vs processing).
Note that if the authors had used MVPA to predict the task itself (i.e. classification) this would not be an issue. Given the association between MVPA and classification, I'm worried that many readers will make this mistake. I think that this can be investigated fairly easily by either (A) doing a simple univariate analysis using Pjoint as a parametric modulator, or (B) examine the βs produced by the ridge regression procedure. If the DLPFC region is positive on these metrics, then it would seem it largely reflects preparation for the harder task.
Even if this holds true, all is not lost! I believe the data depicted in Figure 4—figure supplement 2 is what we really want here. Those data indicate the absolute deviation from chance (i.e. deviation from no prediction). I think that's really what a task set would reflect. So, perhaps it is as simple as swapping in Figure 4—figure supplement 2 for Figure 4 and changing the story from the DLPFC to rostrolateral PFC and the IPS.
Reviewer #2:
The manuscript contrasts two types of anticipatory control (i.e., control based on a prediction of future states of the environment): one "external" based on explicit cues, another "internal" based on observed history.
Notwithstanding that no information about the next task is present in the task history, subjects seem to rely on it, even more than on the informative cues.
The manuscript is interesting and well written. Nevertheless, I am not fully convinced on the interpretation that the authors offer on one of the primary measures.
If I correctly understood Figure 3 and the description of the model, Pint for the color task increases monotonically with the length of the most recent color-task series. Thus, to be clear, for these three series:
a) […] c-m-c-c-m-m-m
b) […] c-m-c-c-m-m-c
c) […] m-m-c-m-c-c-c
Pint-color is c > b > a
In other words, subjects would strongly expect another color task in c) while they would be highly surprised to get a color task in a).
The task sequence, however, is actually (pseudo)random:
- There is no dependency between history and next trial
- The proportion of the two tasks is 50/50
- (I guess that) the distribution of the length of same-task series is roughly exponential with a mode of 1 and a median of 2 (?).
Given that, the subjects may rather fast realize that the probability of a long same-task series is a priori very low compared to a short one. The usual (invalid) psychological reaction to this situation (see e.g. studies on random sequences perception/prediction) is to expect/predict alternation.
Thus, overall, I would expect that for subjects the prediction of the probability of the color task would not monotonically increase with the length of the color sequence. For example, I would guess that for
i) […] c
ii) […] c c
iii) […] c c c
iv) […] c c c c c
subjects would explicitly predict that another c would be more likely in i) or ii) rather than in iii) or iv).
Notice that here I am assuming that sequences like iv or iii were rare in the task. Things would change if this were not the case.
In this experiment, no explicit measures of future task prediction have been collected from subjects, so we cannot know for sure what subjects’ expectations were.
Overall, given the way Pint is computed, I would instead consider it a measure of the strength of proactive interference from past trials. This would be consistent with the observation that the interference is stronger with the most extended same-task series.
This view would produce a significant shift in the interpretation of the results.
On another point:
What incentive did the subjects have to perform any control adaptation in advance? Given the task, a "rational" option available to a subject would be to wait for the task phase in which all information is available. For example, in Wisniewski et al., 2015 we used monetary incentives + adaptive timing to keep subjects motivated to use advance information.
Reviewer #3:
In the present study by Jiang et al., differing progenitors of task-related demands on cognitive control are assessed behaviorally and neurally. Specifically, explicit demands guided by external cues and implicit demands driven by internal history are contrasted for their predictive impact on cognitive control. A probabilistic task-switching paradigm was utilized to dissociate these two sources of information. Additionally, prediction-error (PE) variables and a Q-learning-inspired computational model were used to more acutely probe the behavioral and neural data. Behaviorally, the findings include support of a 'joint guidance' hypothesis, which states that cognitive control is jointly informed by external and internal information. Neurally, prediction error derived from joint guidance was found to be integrated in a dorsolateral prefrontal cortex (dlPFC) region. Lastly, the demands of proactive task switching and reactive task updating were found to be encoded in the frontoparietal network (FPN) and dorsomedial prefrontal cortex (dmPFC), respectively. The foremost merit of this study is in addressing the varying sources of information that impact cognitive control in a manner that reveals them as distinct neurally. Thus, cognitive control is proposed to contain multiple processes.
Essential revisions:
1) One key assumption in this study is that the task allows external and internal sources of prediction to be separable. Moreover, the task is designed for trial history to be controlled for, such that this history is uninformative (e.g., "set to zero.", Introduction). However, given that there were pre-scan training trials, and many trials included in the scanning (test) sessions ("9 runs at 50 trials each", Discussion section), learning effects might be present that make external and internal information less dissociable. That is to say, even though the trials are randomized, trial history is informative because the probabilistic value of each cue is being stored in a way that constitutes 'internal history'. Therefore, even though it is noted that this task makes trial-history (internal info) and cueing (external info) independent and is exclusively informative on cueing, an alternative perspective is that the task is biased for internal history. After the initial training trials, cue-based predictions may be entangled with internal history, given that probabilities associated with each cue have been learned (approximately). Behavioral and computationally modeled findings reported here might support the latter. Firstly, trial history assessed by i-1 (and i-2 to i-3) trial reaction time biased behavior (subsection "Behavioral data – Effects of external cues and cognitive history"). Secondly, trial history has a three-fold larger effect on behavior than cueing (subsection "Behavioral data – Model comparison."). Lastly, prediction error for internal history was encoded by networks overlapping with a previous study that had "predictive trial sequences" (Results section). One suggestion is to examine if (and how) the variables derived from these assumptions change over time. More specifically, if such variables differ from run to run. For example, does the internal history prediction variable (Pint and related PEint) increase or decrease from the first 50 trials to the last 50 trials?
2) Relatedly, given that the intention was to set internal history to zero in terms of predictability, calling this source of information (during modeling and analyses) a type of 'prediction' is a bit confusing. That is to say, if internal history is truly set to zero, how can it be used as an independent factor in analyses and termed a source of prediction?
3) At various points, further context and/or justification for analytic choices would benefit the claims made herein.
3a. Firstly, what justifies using the following combinations that comprise the two guidance models: (1) prediction error related to the previous trial (PEprev) and PE of cueing (PEcue) amount to the "max benefit hypothesis", and (2) Pcue and prediction error related to internal trial-history (PEint) amount to "joint guidance" (subsection “Behavioral data – Model comparison”). Even though control models and a model with all three factors were also compared, what is the conceptual backing and/or prior literature supporting this choice?
3b. Next, why is reaction time used for the bulk of the modeling work (and development of variables) as opposed to accuracy?
3c. Why is it assumed that 'optimal performance' is equal to fastest performance? In interpreting results in terms of cost/benefit analyses (e.g., in the expected value of control framework), it is presumed herein that speedy responses are equivalent to optimal responses (subsection “Behavioral data – Quantifying respective contributions of cue-based and trial history-based task predictions.”, and Discussion section). Reaction time gain was found to be positively correlated with using cue-based information, therefore the stronger impact of history-based information on behavior was surprising (subsection “Behavioral data – Quantifying respective contributions of cue-based and trial history-based task predictions.”, and Discussion section), and this was explained in terms of history having a lower cost because it is more automatic. However, it is not clear why internal history would be more automatic and how accuracy factors into this line of reasoning on performance benefits. Mean accuracy was quite high (table 1, Materials and methods section), thus it is possible that the cost to reaction time is outweighed by the benefit to accuracy, and that the observed trial-history-bias supports this.
[Editors’ note: what now follows is the decision letter after the authors submitted for further consideration.]
Thank you for resubmitting your work entitled "Integrated External and Internally Generated Task Predictions Jointly Guide Cognitive Control in Prefrontal Cortex" for further consideration at eLife. Your revised article has been favorably evaluated by Michael Frank (Senior Editor), David Badre (Reviewing Editor), and three reviewers.
The reviewers feel that their major concerns were addressed by your revision. They and the editors are in agreement that the manuscript is now acceptable for publication at eLife. However, you will note that two of the reviewers raised some additional suggestions of ways the manuscript could be clarified. Though these are not essential revisions, we return this to you once more to give you the opportunity to make changes based on these suggestions and questions. In particular, we encourage you to carefully consider the suggestions that would help make the task paradigm and analysis logic clearer (the first comment from both reviewers). The comments for further clarification from the reviewers are copied below.
Reviewer #2:
1) Introduction: This was brought up in the first round of review, and adequately addressed in the added analyses. However, concerns over the language used here bear repeating, as this was a conceptual obstacle in following the logic of the present paradigm. 'Trial history' is a broad concept that likely has components, and the single component of "trial sequence" is fixed at zero with randomization. This appears to be the point of the chosen paradigm: e.g., that the potential confound of "sequence" is controlled for by randomization, so any internally driven predictions are based on the subject's choice to do so (sub-optimally, as is probed in the subsection "Behavioral data – Quantifying respective contributions of cue-based and trial history-based task predictions"). That is to say, internally driven factors become independently "discoverable" (via computational modeling) after randomization. The "fixed to zero" description confuses this. Perhaps re-wording this sentence to be less all-encompassing (e.g., it currently reads that all of "trial history", as a singular concept, is fixed at zero) would allay potential confusion on part of the average reader.
1a) Note that the utility of the paradigm became clearer as I read the Materials and methods section and Results section, but only became very clear once reading the text. It should be clarified before results are even reported, hence the suggestion to adjust (or otherwise qualify) the phrasing of "fixed at zero".
1b) Note that this also has implications for adjusting the commentary introducing the "rational/max-benefit hypothesis" in subsection "Behavioral data – Effects of external cues and cognitive history". If a reader doesn't understand that randomization is a beneficial manipulation that allows for computational modeling of the internal factor, then it becomes confusing to suggest that on one hand this paradigm allows us to adjudicate between the impact of external and internal factors (or their joint influence/a comparison), but on the other hand, the internal factor (in its entirety, as is currently suggested) is set to zero, thus the max-benefit hypothesis amounts to explicitly externally-driven processing. The typical reader might wonder: How could the internal factor be part of the adjudication if it's set to zero? Conversely, if there is some discoverable aspect of the internal factor, how could it be discounted from the rational hypothesis (in principle, not mathematically)?
Reviewer #3:
The reply of the authors clarified my concerns. I have only a few further comments.
It is now clearer what the authors meant for internal prediction. The use of the word "prediction" both for internal and external prediction misguided me to think that both predictions would be explicit, i.e., the subject would be aware of the prediction. The authors argue that this is not the case: while the cue-based prediction is explicit, the internal prediction is implicit and likely unconscious. Even more: the authors suggest the possibility of a dissociation between an explicit internal prediction vs. an implicit internal prediction.
For the sake of clarity, the authors may emphasize the qualitative distinction between the two types of predictions also in Introduction and Materials and methods section. Otherwise, the reader might realize it only in discussion. This detail seems important for a correct interpretation of the findings and the paradigm.
Besides, the hot-hand and gambler's fallacies both depart from a correct interpretation of chance, but the violations go in opposing directions. The major driver of the difference between the two is the subject's belief about the situation. If she assumes that the generation of events is random (as in a casino), then she will fall for the gambler's fallacy; if she assumes that the generation is not random (as for a basketball player), then she may fall for the hot-hand fallacy. What do the subjects believe in your task? You did not provide any information on the way sequences are generated, so in principle participants may hypothesize either of the two. Given the task context, it is likely that the subjects assume random generation. Thus, they would show the gambler's fallacy if asked for an explicit prediction. The effect does not emerge because the subjects' behavior is dominated by the implicit effect discussed above. If the authors think so, then the mention of the hot hand may be removed.
The fact that subjects relied more on internal predictions might follow from the fact that there was little or no incentive to perform the task as fast and as accurately as possible. Given that there is no incentive, subjects may rationally decide to rely more on the cheaper prediction from the sequence rather than on the more cognitively expensive prediction from the cue. The balance between the two types of prediction might therefore be specific to this task context and should be generalized with caution.
https://doi.org/10.7554/eLife.39497.020
Author response
[Editors’ note: the author responses to the first round of peer review follow.]
Reviewer #1:
[…] 1) If I understand correctly, Pjoint reflects the prediction of the color task, which was the harder of the two tasks. So, if activation correlated positively with Pjoint, that could reflect preparation for the color task or preparation for cognitive load (e.g. much like a univariate analysis of a color-word Stroop task would look if contrasting color vs word). It's quite difficult to discern among these possibilities and more general task-set prediction with the MVPA procedure employed by the authors. As I understand it, the MVPA procedure is very similar to a traditional univariate analysis, at least initially, with the key differences being that (A) a neighborhood of voxels is regressed onto a predictor rather than a single voxel, and (B) ridge regression is used to induce regularization. Given that the data are smoothed, I suspect that the use of a searchlight rather than a single voxel simply induces more smoothing (i.e. effectively mimicked by a larger smoothing kernel during preprocessing), so then the only real difference is the use of ridge regression vs OLS. Then, where the methods differ substantively is that inference from a univariate method would interpret the sign of the resulting β values (e.g. positive is activation), whereas the sign is effectively removed in the MVPA procedure in the cross-validation step. In this case, cross-validation can succeed if the betas are positive in both the training and test data (i.e. more activity in the DLPFC in preparation for the color task/harder task), negative in both the training and test data (i.e. less activity in the DLPFC in preparation for the color task/harder task), or a mixture of the two. The mixture would be what would indicate an abstract task prediction. However, given the 30 or so years we've been contrasting harder tasks vs easier tasks and seeing the DLPFC engaged more for the former, I'm worried that what is being depicted is preparation for the harder task rather than an abstract task set. There is still something very interesting about that prediction being formulated by both internal and external sources, but the interpretation is quite different (e.g. representation vs processing).
Note that if the authors had used MVPA to predict the task itself (i.e. classification) this would not be an issue. Given the association between MVPA and classification, I'm worried that many readers will make this mistake. I think that this can be investigated fairly easily by either (A) doing a simple univariate analysis using Pjoint as a parametric modulator, or (B) examine the βs produced by the ridge regression procedure. If the DLPFC region is positive on these metrics, then it would seem it largely reflects preparation for the harder task.
We thank the reviewer for raising this important point, which allowed us to rule out a potential alternative interpretation for the dlPFC findings. We followed the reviewer’s first suggestion, i.e., to perform a univariate control analysis, as this approach allows for whole-brain analysis. To ensure that the results of the univariate analyses are maximally comparable to the MVPA results, we kept the identical (MVPA) procedure, with the only change that we used the (univariate) searchlight-mean for each trial-level t-map to form a time course of the t-values, which was then correlated to the time course of Pjoint within each of the 3 folds. The resulting correlation coefficients were then transformed into z values. The mean z values across the 3 folds were taken to reflect the univariate encoding strength. This approach corresponds to the reviewer’s suggestion in that: (1) similar to the parametric modulator, the correlation between the t-value time course and trial-level Pjoint captures how brain activity co-varies with model estimates; and (2) the use of the searchlight mean mimicked stronger smoothing while not having the smoothing kernel exceed the searchlight size. Using the same multiple-comparison correction procedure as in the MVPA (voxel-wise threshold P < 0.001 and cluster size > 62 searchlights using AFNI’s 3dttest++), no brain areas showed significant positive or negative correlations between searchlight-mean t-values and Pjoint. When using the dlPFC region shown in Figure 4B as an ROI, the ROI mean univariate encoding strength (z-score = -0.006 ± 0.03) did not significantly differ from 0. This outcome indicates that the contribution of univariate activity to the MVPA results appears to be rather limited. This finding also rules out the possibility that the multivariate encoding of Pjoint in the dlPFC could be solely attributed to increased univariate activity when the color task was predicted.
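In sketch form, the univariate encoding analysis for a single searchlight amounts to the following (synthetic data; this mirrors the 3-fold structure of the MVPA described above, and is an illustration rather than the analysis code itself):

```matlab
% Sketch of the univariate control analysis for one searchlight.
rng(7);
nTrials = 450;
Pjoint = rand(nTrials, 1);                % trial-wise model estimates
tMean = randn(nTrials, 1);                % searchlight-mean t-value per trial
folds = repelem(1:3, nTrials / 3)';
z = zeros(3, 1);
for k = 1:3
    sel = folds == k;
    z(k) = atanh(corr(tMean(sel), Pjoint(sel)));     % Fisher-transformed r
end
univariateStrength = mean(z);             % compared against 0 at the group level
```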
These new results are reported in the revised manuscript in the following manner:
“To rule out the possibility that the MVPA encoding of Pjoint in dlPFC merely reflected a univariate task effect (e.g., the color task evoking stronger mean activity due to being more difficult than the motion task), we performed a univariate control analysis. […] Moreover, an ROI-based analysis focusing on the dlPFC region in Figure 4B showed that the mean univariate encoding strength of Pjoint did not significantly differ from 0 (z-score = -0.006 ± 0.03, one sample t-test: t21 = -0.20, P = 0.85). Thus, the dlPFC results were not driven by univariate effects of task difficulty.” (subsection “fMRI data – Encoding of joint task predictions at pre-cue onset”)
Following reviewer 1’s comment 11, we also performed the same univariate analyses on Pcue (Multiple comparisons correction results: voxel-wise threshold: P < 0.001, cluster size > 68 searchlights) and Pint (Multiple comparisons correction results: voxel-wise threshold: P < 0.001, cluster size > 64 searchlights). These analyses revealed significant negative correlations between Pcue and searchlight-mean t-values in bilateral posterior temporal gyrus (MT+) and medial prefrontal cortex. Critically, these regions do not overlap with any of the MVPA results (Figure 4—figure supplement 1B), suggesting that the MVPA results cannot be accounted for by aggregate univariate effects. These results were integrated into the revised Figure 4—figure supplement 1 and its figure legend.
In sum, by following the reviewer’s suggested analysis approach, we successfully ruled out the possibility that our main dlPFC finding reflects a univariate task difficulty effect. We thank the reviewer for encouraging these additional analyses, as they provide further support for the inferences drawn.
Following the reviewer’s guidance, we have now also uploaded all t-maps to NeuroVault (https://neurovault.org/collections/3732/) for interested readers.
2) Even if this holds true, all is not lost! I believe the data depicted in Figure 4—figure supplement 2 is what we really want here. Those data indicate the absolute deviation from chance (i.e. deviation from no prediction). I think that's really what a task set would reflect. So, perhaps it is as simple as swapping in Figure 4—figure supplement 2 for Figure 4 and changing the story from the DLPFC to rostrolateral PFC and the IPS.
We agree with the reviewer that the deviation from neutral prediction shown in Figure 4—figure supplement 2 provides important information about the proactive task switch. Our interpretation is that it reflects the strength or confidence of the task-set prediction. However, unlike Pjoint, this deviation does not indicate the direction of the task prediction (i.e., whether the motion or color task is more likely to be required next), so we still think that Pjoint is the more direct reflection of task-set prediction.
Reviewer #2:
[…] Overall, given the way Pint is computed, I would instead consider it a measure of the strength of proactive interference from past trials. This would be consistent with the observation that the interference is stronger with the most extended same-task series. This view would produce a significant shift in the interpretation of the results.
The reviewer raises a number of very pertinent points. First, regarding the speculation on subjects' expectations, our behavioral findings/modeling suggest that, as the reviewer notes, expectations for task A increase with increasing length of runs of task A trials. This runs counter to intuition and also to some studies of explicit predictions, which tend to follow the "gambler's fallacy" (Jarvik, 1951) that the reviewer describes. However, importantly, this is not the case for implicit predictions, which tend to show the opposite pattern (the "hot-hand fallacy"; Gilovich et al., 1985). This dissociation between explicit and implicit predictions (as measured by behavior) has been demonstrated very convincingly in seminal work by Pierre Perruchet (1985), and is now known as the Perruchet effect (for a recent review, see Perruchet, 2015). Importantly, this dissociation has also been demonstrated for predictions of task demands in the realm of cognitive control (Jimenez and Mendez, 2013). So, the fact that behavior in our task seems to follow the hot-hand rather than gambler's fallacy is in fact in line with our conceptualization of internal predictions as being implicit. We now point this out explicitly in the revised manuscript in the following manner:
“The internally generated predictions we observed here took the form of increased expectations of task repetition with increasing run length of a given task (also known as the “hot-hand fallacy”; Gilovich et al., 1985). This contrasts with another intuitive possibility, whereby participants could have predicted a change in task with increasing run length of task repetitions (known as the “gambler’s fallacy”; Jarvik, 1951). The fact that we observed the former rather than the latter is in line with our assumption that the internal predictions we measure here are implicit in nature, as previous work has demonstrated a dissociation between explicit outcome predictions following the gambler’s fallacy and implicit predictions, as expressed in behavior, following the hot-hand fallacy (Perruchet, 1985; Jimenez and Mendez, 2013; Perruchet, 2015).” (Discussion section)
Second, the reviewer wonders whether our model variable Pint (and associated findings) may be better thought of as reflecting proactive interference from previously activated task-sets. This idea has intuitive appeal, but we believe that Pint in fact differs from proactive interference in subtle but important ways: specifically, while the Pint variable simply captures the (inferred) readiness for a particular task based on the previous task sequence, a proactive interference measure at the time of pre-cue onset should instead correspond to the difference between Pint and the (accurate) prediction supplied by the explicit cue value, because Pint only “interferes” with the upcoming task to the extent that it differs from that task. Thus, on conceptual grounds alone, we would argue that we should not shift our interpretation from an internal prediction to a proactive interference account.
However, it is of course nevertheless conceivable that proactive interference contributes to our task and the neural data we report. To formally assess this possibility, we conducted additional fMRI analyses and added a section to the revised manuscript in the following manner:
“fMRI data – proactive interference. An alternative interpretation regarding model variable Pint (and associated findings) is that they may reflect, or be confounded by, proactive interference from previously activated task-sets. […] As the intervals in the current study were substantially longer (ranging from 1750 ms to 2500 ms), we speculate that any proactive interference effects may have decayed too much to be a major contributor in modulating proactive task switch preparations in this protocol.” (Subsection “fMRI data – proactive interference”)
Relatedly, please note that the neural encoding of |Pint-Pcue| is not necessarily predicted by the joint-guidance model, because that model computes the joint task prediction as a weighted sum of Pint and Pcue. Finally, as with all other t-maps, we have now uploaded the t-map for this control analysis to NeuroVault (https://neurovault.org/collections/3732/) for the interested readers.
On another point:
What incentive did the subjects have to perform any control adaptation in advance? Given the task, a "rational" option available to a subject would be to wait for the task phase in which all information is available. For example, in Wisniewski et al., 2015 we used monetary incentives + adaptive timing to keep subjects motivated to use advance information.
To keep participants engaged, we used a calibration procedure to make the task challenging. Moreover, participants with low accuracy (possibly due to low motivation) were excluded from behavioral and fMRI analyses. Finally, the assumption that participants performed advance task-set preparation is of course borne out by our behavioral results, which showed a significant main effect of pre-cue prediction on response time.
The above said, we appreciate the reviewer’s point that motivation is likely to be an important determinant of engaging in proactive task switch preparation. We now allude to this issue in a revised line of the Discussion section:
“Further studies are encouraged to explore other manipulations of benefit and cost (e.g., monetary reward) and test how such manipulations shift the reliance on informative but costly explicit predictions.”
Reviewer #3:
[…] 1) One key assumption in this study is that the task allows external and internal sources of prediction to be separable. Moreover, the task is designed for trial history to be controlled for, such that this history is uninformative (e.g., "set to zero.", Introduction). However, given that there were pre-scan training trials, and many trials included in the scanning (test) sessions ("9 runs at 50 trials each", Discussion section), learning effects might be present that make external and internal information less dissociable. That is to say, even though the trials are randomized, trial history is informative because the probabilistic value of each cue is being stored in a way that constitutes 'internal history'. Therefore, even though it is noted that this task makes trial-history (internal info) and cueing (external info) independent and is exclusively informative on cueing, an alternative perspective is that the task is biased for internal history. After the initial training trials, cue-based predictions may be entangled with internal history, given that probabilities associated with each cue have been learned (approximately). Behavioral and computationally modeled findings reported here might support the latter. Firstly, trial history assessed by i-1 (and i-2 to i-3) trial reaction time biased behavior (subsection “Behavioral data – 130 Effects of external cues and cognitive history”). Secondly, trial history has a three-fold larger effect on behavior than cueing (subsection “Behavioral data – Model comparison.”). Lastly, prediction error for internal history was encoded by networks overlapping with a previous study that had "predictive trial sequences" (Results section). One suggestion is to examine if (and how) the variables derived from these assumptions change over time. More specifically, if such variables differ from run to run. For example, does the internal history prediction variable (Pint and related PEint) increase or decrease from the first 50 trials to the last 50 trials?
The reviewer raises an interesting point, namely that the reliance on external task prediction (Pcue) and the reliance on trial history-based task prediction (Pint) may be entangled at the beginning of the experiment. In other words, if the learning of Pcue for each pre-cue is achieved by learning from previous trials sharing the same pre-cue, the ‘informativeness’ of the pre-cue-specific trial history may render the whole trial history informative, hence promoting reliance on Pint. This account of learning in our task leads to the key prediction that the reliance on Pint (i.e., 1-β) should be higher at the beginning of the task than in later phases, as the gradual detachment of Pcue from trial history should decrease the reliance on trial history over time.
To test this possibility, we followed the reviewer’s suggestion and estimated the βs from the first and last run separately, using the same procedure as in the model-based behavioral analysis. Specifically, for each of the two runs, a grid search was conducted on the joint-guidance model to find the learning rate for Pint (and its corresponding trial-wise PEint) that maximized the model’s ability to explain the variance in trial-wise RTs. Once the learning rate was determined, the β estimate could be obtained as the scaled reliance on Pcue relative to the total reliance on Pcue and Pint. We compared β estimates obtained from the first and last run (50 trials per run) and found no difference (paired t-test, t21=0.25, p = 0.80).
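For concreteness, the estimation logic can be sketched as follows. This is a schematic Python reconstruction under simplifying assumptions, not the actual analysis code: the function and variable names are ours, tasks are coded as binary, and a simple least-squares RT regression stands in for the full analysis pipeline.

```python
import numpy as np

def estimate_beta(rts, tasks, p_cue, alphas=np.linspace(0.01, 0.99, 50)):
    """Schematic run-wise beta estimation (illustrative reconstruction).

    rts   : trial-wise RTs on correct trials
    tasks : 0/1 code of the task performed on each trial
    p_cue : cue-signaled probability of task 1 on each trial
    """
    best_r2, best_b = -np.inf, None
    total_ss = np.sum((rts - rts.mean()) ** 2)
    for alpha in alphas:                       # grid search over learning rate
        p_int = np.empty(len(tasks))
        p = 0.5                                # uninformative starting belief
        for i, task in enumerate(tasks):
            p_int[i] = p
            p += alpha * (task - p)            # delta rule: PEint drives the update
        pe_int = np.abs(tasks - p_int)         # trial-wise internal PE
        pe_cue = np.abs(tasks - p_cue)         # trial-wise cue-based PE
        X = np.column_stack([np.ones(len(rts)), pe_cue, pe_int])
        b = np.linalg.lstsq(X, rts, rcond=None)[0]
        resid = rts - X @ b
        r2 = 1.0 - resid @ resid / total_ss    # variance in RT explained
        if r2 > best_r2:                       # keep the best-fitting learning rate
            best_r2, best_b = r2, b
    # beta: scaled reliance on Pcue relative to total reliance on Pcue and Pint
    return best_b[1] / (best_b[1] + best_b[2])
```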
One potential caveat to this approach is that the relatively low run-wise trial counts may affect the reliability of the β estimates; indeed, we found no within-subject consistency of β when correlating the first- and last-run β estimates across subjects (r = 0.06, p = 0.80). We therefore re-ran this analysis using the first and last 3 runs, and obtained robust within-subject reliability (r = 0.63, p = 0.002). The β estimates still did not differ between the first and last 3 runs of the main task, however (t21=1.32, p = 0.20). If cue-specific learning were the source of the reliance on trial history, a further prediction would be that reliance on Pcue should be stronger than reliance on Pint, especially early in the task. However, regardless of whether we used only the first run or the first 3 runs, β estimates at the beginning of the main task were significantly lower than 0.5 (one-sample t-tests: t21=6.28 and t21=6.04, both Ps < 0.001), indicating stronger reliance on Pint than on Pcue.
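These reliability and early-versus-late checks then reduce to a few standard tests across subjects. The following minimal sketch uses hypothetical per-subject β values generated for illustration only; the real estimates come from the procedure sketched above.

```python
import numpy as np
from scipy.stats import pearsonr, ttest_rel, ttest_1samp

rng = np.random.default_rng(0)
n = 22                                         # sample size (df = 21 in the paper)
# Hypothetical per-subject beta estimates from the first and last 3 runs
beta_first = np.clip(rng.normal(0.35, 0.10, n), 0.0, 1.0)
beta_last = np.clip(beta_first + rng.normal(0.0, 0.08, n), 0.0, 1.0)

r, p_r = pearsonr(beta_first, beta_last)       # within-subject reliability
t, p_t = ttest_rel(beta_first, beta_last)      # early vs. late difference
t0, p_0 = ttest_1samp(beta_first, 0.5)         # beta < 0.5 => stronger Pint reliance
```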
We speculate that there are two possible explanations for the data pattern we observe: (1) the disentanglement of Pcue occurred prior to the main task, as the stair-casing procedure that preceded the main task consisted of 200 trials; (2) the entanglement of Pcue and Pint through trial history may be too weak to be detected in this task. In either case, these analyses do not support the idea that our results reflect an entanglement of internal and external sources of predictions. Nevertheless, we agree with the reviewer that this type of inter-dependence between external and internal sources of predictions is an important possibility in this type of task design, and we now discuss this issue (and the associated analyses) in the revised paper in the following manner:
“To test the robustness of the β estimates, we computed β separately using the first and last 3 runs. Across subjects, β estimates were significantly correlated between these two task phases (r = 0.63, p = 0.002), suggesting reliable β estimation within participants. Also, there was no significant difference in β estimates between the first and last 3 runs (t21=1.32, p = 0.20), suggesting that the reliance on Pcue relative to Pint remained unchanged throughout the 9 runs in the fMRI session.” (subsection “Behavioral data – Quantifying respective contributions of cue-based and trial history-based task predictions”)
“An alternative account could be that the reliance on Pint originated from Pcue. Specifically, if the learning of Pcue for each pre-cue is achieved by learning from previous trials sharing the same pre-cue, the ‘informativeness’ of the pre-cue specific trial history may render the whole trial history informative, hence promoting the reliance on Pint. As the learning of Pcue asymptotes, Pcue would become disentangled from Pint. This account leads to the key prediction that the reliance on Pint (i.e., 1-β) should be higher in the beginning of the task than later on, as the gradual detachment of Pcue from trial history would decrease the reliance on trial history over time. However, the lack of difference in β estimates between the first and last 3 runs of the fMRI session does not support this notion. One possibility is that this type of process may have occurred during the stair-casing procedure, and hence did not end up affecting the fMRI session.” (Discussion section)
2) Relatedly, given that the intention was to set internal history to zero in terms of predictability, calling this source of information (during modeling and analyses) a type of 'prediction' is a bit confusing. That is to say, if internal history is truly set to zero, how can it be used as an independent factor in analyses and termed a source of prediction?
It is a common finding that people intuit non-existent patterns in random sequences (Huettel et al., 2002), and it is in this sense that we call the trial sequence a source of predictions. Thus, even though trial history was objectively non-predictive, participants could still subjectively use it to make (invalid) predictions about the forthcoming task. Importantly, regardless of the validity of the prediction, applying it to proactively shift task-set was expected to have behavioral consequences, such that if the prediction (coincidentally) matched the forthcoming task, behavioral responses would be faster (cf. Waskom et al., 2017). Our behavioral findings clearly support this notion, as we show that participants’ behavior is in fact strongly affected by the non-predictive task sequence.
To clarify this issue further, we first added the following to the revised manuscript to highlight that other studies have also found that trial history modulates cognitive control even when it provides no additional predictive power:
“Previous behavioral studies showed that the two types of predictions appear to drive control simultaneously. In particular, trial history based predictions impact cognitive control even in cases where these predictions are redundant, as in the presence of 100% valid external cues for selecting the correct control strategy (e.g., Alpay et al., 2009; Correa et al., 2009; Kemper et al., 2016).” (Introduction)
Second, we also added a discussion on the nature of these predictions, which also relates to a point raised by reviewer 2 (comment 1):
“The internally generated predictions we observed here took the form of increased expectations of task repetition with increasing run length of a given task (also known as the “hot-hand fallacy”; Gilovich et al. (1985)). This contrasts with another intuitive possibility, whereby participants could have predicted a change in task with increasing run length of task repetitions (known as the “gambler’s fallacy”; Jarvik (1951)). The fact that we observed the former rather than the latter is in line with our assumption that the internal predictions we measure here are implicit in nature, as previous work has demonstrated a dissociation between explicit outcome predictions following the gambler’s fallacy and implicit predictions, as expressed in behavior, following the hot-hand fallacy (Perruchet, 1985; Jimenez and Mendez, 2013; Perruchet, 2015).” (Discussion section)
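To make this contrast concrete, a toy delta-rule learner (purely illustrative; the 0.3 learning rate is arbitrary) predicts task repetition with growing confidence as run length increases, i.e., the hot-hand pattern, whereas a gambler's-fallacy learner would do the opposite:

```python
alpha, p_repeat = 0.3, 0.5                 # arbitrary learning rate; neutral prior
for run_length in range(1, 6):
    p_repeat += alpha * (1.0 - p_repeat)   # the same task occurs yet again
    print(f"run length {run_length}: P(repeat) = {p_repeat:.2f}")
# Prints 0.65, 0.76, 0.83, ... -- repetition expectation grows with run length
# (hot hand); a gambler's-fallacy learner would instead lower P(repeat).
```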
3) At various points, further context and/or justification for analytic choices would benefit the claims made herein.
3a. Firstly, what justifies using the following combinations that comprise the two guidance models: (1) prediction error related to the previous trial (PEprev) and PE of cueing (PEcue) amount to the "max benefit hypothesis", and (2) Pcue and prediction error related to internal trial-history (PE-int) amount to "joint guidance" (subsection “Behavioral data – Model comparison”). Even though control models and a model with all three factors were also compared, what is the conceptual backing and/or prior literature supporting this choice?
We thank the reviewer for drawing attention to the need for further justification of the considered models. We have added the following to clarify the choice of the two models:
“In the max-benefit model, PEcue was included to represent the effect of external cues. To maximize the utility of the informative pre-cue, the PEint that represented the non-informative trial history was not included in this model. Finally, PEprev was used to account for the classic task switch effect. The joint-guidance model consisted of PEcue and PEint to represent the modulations of cue-induced and trial history-based task predictions, respectively. The classic task switch effect in this model was accounted for by PEint, because the most recent task has already been factored into Pint.” (subsection “Behavioral data – Model comparison”)
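In code form, the difference between the two models reduces to which prediction-error regressors enter the trial-wise RT regression. The sketch below is our schematic reconstruction, not the actual analysis scripts; `pe_cue`, `pe_int`, and `pe_prev` are hypothetical arrays holding the trial-wise prediction errors described above.

```python
import numpy as np

def model_design_matrices(pe_cue, pe_int, pe_prev):
    """Illustrative design matrices for the two candidate RT models.

    Max-benefit:    RT ~ 1 + PEcue + PEprev  (ignores uninformative history;
                    PEprev carries the classic switch cost)
    Joint-guidance: RT ~ 1 + PEcue + PEint   (PEint subsumes the switch cost,
                    as the most recent task is already folded into Pint)
    """
    ones = np.ones(len(pe_cue))
    X_max_benefit = np.column_stack([ones, pe_cue, pe_prev])
    X_joint_guidance = np.column_stack([ones, pe_cue, pe_int])
    return X_max_benefit, X_joint_guidance
```

The two regressions can then be compared with any standard goodness-of-fit metric; the key point is that they differ only in whether the uninformative trial history (via PEint) is allowed to influence RTs.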
3b. Next, why is reaction time used for the bulk of the modeling work (and development of variables) as opposed to accuracy?
First, RT data, being continuous in nature, tend to provide a more powerful and sensitive measure of processing costs, and the task-switching literature has in fact focused on RT switch costs to a much greater extent than on accuracy switch costs (for reviews, see Monsell, 2003; Kiesel et al., 2010; Vandierendonck et al., 2010). Second, in the ANOVA we observed a significant main effect of pre-cue prediction in the RT data but not in accuracy. We speculate that one reason may be that trial-wise accuracy is binary and is thus too coarse a measure to map onto a continuous variable like PE. Additionally, on incorrect trials other factors may have caused the error (e.g., a trial with low PE may be answered incorrectly due to mind wandering). Therefore, in the subsequent model-based analysis, we used RTs on correct trials to infer the external and internal predictions of task-set. In the revised manuscript, we added the following:
“To formally compare how well the joint-guidance and max-benefit hypotheses explain the behavioral data, we constructed a quantitative model for each hypothesis and compared the two using trial-wise RTs. Accuracy was not modeled due to its insensitivity to external task prediction induced by the pre-cue (Figure 2C).” (subsection “Behavioral data – Model comparison”)
3c. Why is it assumed that 'optimal performance' is equal to fastest performance? In interpreting results in terms of cost/benefit analyses (e.g., in the expected value of control framework), it is presumed herein that speedy responses are equivalent to optimal responses (subsection “Behavioral data – Quantifying respective contributions of cue-based and trial history-based task predictions”, and Discussion section). Reaction time gain was found to be positively correlated with using cue-based information; therefore, the stronger impact of history-based information on behavior was surprising (subsection “Behavioral data – Quantifying respective contributions of cue-based and trial history-based task predictions”, and Discussion section), and this was explained in terms of history having a lower cost because it is more automatic. However, it is not clear why internal history would be more automatic and how accuracy factors into this line of reasoning on performance benefits. Mean accuracy was quite high (Table 1, Materials and methods section), thus it is possible that the cost to reaction time is outweighed by the benefit to accuracy, and that the observed trial-history bias supports this.
In the instructions, participants were told to respond as fast and as accurately as possible, such that “optimal performance” corresponds to fast, correct trials. We agree with the reviewer that, in principle, accuracy may interact with RT to produce a speed-accuracy trade-off, such that accuracy improves as RT increases. However, this was not the case in the current study, because (1) there was no main effect of external prediction on accuracy (Figure 2C) and (2) conditions that produced slower responses were also associated with lower accuracy (Figure 2A and 2E). In other words, the better accuracy in conditions with faster RTs supports the notion that faster performance is here equivalent to better performance.
In any event, since the term “automatic” may have stronger connotations than we sought to convey, we have now removed the claim that the internal control predictions may be more automatic than external control predictions, and we have made some other minor tweaks to this aspect of the Discussion section. The related text now reads:
“[…] the EVC model would predict that this situation could come about if applying internal control predictions incurred a smaller cognitive effort cost than employing external control predictions. The implied lower costs for internally generated prediction than cue-induced prediction may originate from two, not mutually exclusive sources. […] If such self-generated predictions of task demand represent a default mode of control regulation, their generation is likely cheap and, importantly, it may require substantial cognitive effort to override such predictions, which would be another means of rendering the use of the explicit external cue more costly.”
[Editors' note: the author responses to the re-review follow.]
Reviewer #2:
1) Introduction: This was brought up in the first round of review, and adequately addressed in the added analyses. However, concerns over the language used here bear repeating, as it was a conceptual obstacle in following the logic of the present paradigm. 'Trial history' is a broad concept that likely has components, and the single component of "trial sequence" is fixed at zero with randomization. This appears to be the point of the chosen paradigm: e.g., that the potential confound of "sequence" is controlled for by randomization, so any internally driven predictions are based on the subject's choice to do so (sub-optimally, as is probed in the subsection “Behavioral data – Quantifying respective contributions of cue-based and trial history-based task predictions”). That is to say, internally driven factors become independently "discoverable" (via computational modeling) after randomization. The "fixed to zero" description confuses this. Perhaps re-wording this sentence to be less all-encompassing (e.g., it currently reads that all of "trial history", as a singular concept, is fixed at zero) would allay potential confusion on the part of the average reader.
1a) Note that the utility of the paradigm became clearer as I read the Materials and methods section and Results section, but only became fully clear once reading the text. It should be clarified before results are even reported, hence the suggestion to adjust (or otherwise qualify) the phrasing of "fixed at zero".
We apologize for the confusion. We have now removed the phrase “fixed at zero” to avoid giving readers the wrong impression that the prediction is a constant. The revised text now reads:
“While the benefit of the explicit cue was represented by its predictive value, trial history was uninformative about the probability of task switching, as the task sequence was randomized. There was thus no objective benefit to generating trial history-based predictions. However, based on prior studies, we nevertheless anticipated…” (Introduction)
1b) Note that this also has implications for adjusting the commentary introducing the "rational/max-benefit hypothesis" in subsection “Behavioral data – Effects of external cues and cognitive history”. If a reader doesn't understand that randomization is a beneficial manipulation that allows for computational modeling of the internal factor, then it becomes confusing to suggest that on one hand this paradigm allows us to adjudicate between the impact of external and internal factors (or their joint influence/a comparison), but on the other hand, the internal factor (in its entirety, as is currently suggested) is set to zero, thus the max-benefit hypothesis amounts to explicitly externally-driven processing. The typical reader might wonder: How could the internal factor be part of the adjudication if it's set to zero? Conversely, if there is some discoverable aspect of the internal factor, how could it be discounted from the rational hypothesis (in principle, not mathematically)?
Following the reviewer’s suggestion, we revised the relevant text in the following manner:
“Given that trial-history was not informative of the upcoming task (i.e., it had no predictive value), this alternative model in effect corresponds to control being guided exclusively by the external cue, which has predictive value. We here refer to this as the ‘max-benefit hypothesis’.” (Results section)
Reviewer #3:
The reply of the authors clarified my concerns. I have only a few further comments.
It is now clearer what the authors meant by internal prediction. The use of the word "prediction" for both internal and external predictions misled me into thinking that both would be explicit, i.e., that the subject would be aware of the prediction. The authors argue that this is not the case: while the cue-based prediction is explicit, the internal prediction is implicit and likely unconscious. Even more: the authors suggest the possibility of a dissociation between an explicit and an implicit internal prediction.
For the sake of clarity, the authors may wish to emphasize the qualitative distinction between the two types of predictions also in the Introduction and Materials and methods sections. Otherwise, the reader might realize it only in the Discussion. This detail seems important for a correct interpretation of the findings and the paradigm.
In line with the reviewer’s suggestion, we have now made some minor revisions and additions throughout the paper in order to (a) clarify the likely difference in nature between internal (more likely implicit) and external (explicit) predictions, and (b) note that the implicit nature of the internal predictions is an assumption (rather than an established fact). Specifically, we made sure to avoid calling the predictions “implicit”, instead calling them only “internal” while occasionally adding “(likely implicit)” to stress that prior literature suggests these predictions are probably implicit in nature. We made the following changes:
First, please note that we already raised this distinction in the Introduction:
“Importantly, such expectations about task demands can be driven by two sources: explicit predictions provided by external cues (Rogers and Monsell, 1995; Dreisbach et al., 2002; Badre and Wagner, 2006) and internally generated, trial history-based predictions, which are typically implicit (Dreisbach and Haider, 2006; Mayr, 2006; Bugg and Crump, 2012; Egner, 2014; Chiu and Egner, 2017).”
Second, in the subsequent section, we now write:
“However, it is not presently known whether and how the brain reconciles explicit external and (typically implicit) internal predictions.”
Further down in the Introduction, we now write:
“However, based on prior studies, we nevertheless anticipated that participants would form (likely implicit) internal expectations for forthcoming trials based on trial-history (e.g., Huettel et al., 2002), and this design ensured that trial-history and cue-based predictions were independent of each other.” (Introduction)
In the Results section, we amended a sentence to state:
“Thus, these results offer strong evidence for a specific role of this dlPFC region in integrating joint predictions of forthcoming task demand, and against the alternative possibility that explicit external and (likely implicit) internal predictions might drive task-set updating independently, without being integrated.”
Finally, in the Materials and methods section, we added the following description:
“Importantly, the sequences of tasks were pseudo-randomly produced so that the tasks performed on previous trials had no predictive power on the task to be performed on the current trial.”
Besides, the hot-hand and gambler’s fallacies both depart from a correct interpretation of chance, but the violations go in opposing directions. The major driver of the difference between the two is the subject’s belief about the situation. If she assumes that the events are generated randomly (as in a casino), then she will fall prey to the gambler’s fallacy; if she assumes that the generation is not random (as for a basketball player), then she may fall prey to the hot-hand fallacy. What do the subjects believe in your task? You did not provide any information on the way the sequences were generated, so in principle participants may hypothesize either of the two. Given the task context, it is likely that the subjects assume random generation. Thus, they would show the gambler’s fallacy if asked for an explicit prediction. The effect does not emerge because the subjects’ behavior is dominated by the implicit effect discussed above. If the authors think so, then the mention of the hot hand may be removed.
We thank the reviewer for pointing this out. We believe that reference to the gambler’s and hot-hand fallacies may help readers who are familiar with these concepts appreciate an important aspect of our results. We therefore opted to keep the mention of the hot hand in the Discussion section. However, we now also add the reviewer’s astute point that the type of fallacy a participant may be subject to will likely depend on their beliefs about the randomness underlying the events in question.
Accordingly, we have added the following to the Discussion section:
“The fact that we observed the former rather than the latter is in line with our assumption that the internal predictions we measure here are likely implicit in nature, as previous work has demonstrated a dissociation between explicit outcome predictions following the gambler’s fallacy and implicit predictions, as expressed in behavior, following the hot-hand fallacy (Perruchet, 1985; Jimenez and Mendez, 2013; Perruchet, 2015). Note also that participants’ tendency to follow either fallacy should depend on their beliefs as to whether events are generated randomly (in gambling) or non-randomly (the hot hand). Thus, the fact that internally generated predictions followed the hot-hand pattern further corroborates that participants (likely implicitly) assumed the trial history to be non-random.”
The fact that subjects relied more on internal predictions might follow from there being little or no incentive to perform the task as fast and accurately as possible. In fact, given that there is no incentive, subjects may rationally decide to rely more on the cheaper prediction from the sequence than on the more cognitively expensive prediction from the cue. Given that, the balance between the two types of prediction might be specific to this task context and should be generalized with caution.
We agree with the reviewer that the balance between internally and externally generated predictions may change depending on other factors, such as motivation. In the revised manuscript, we added the following to the Discussion section:
“The EVC model also predicts that the relative reliance on internally vs. externally generated predictions, which are the outcome of a cost-benefit analysis, will change as a function of the cost and/or the benefit of engaging the more informative externally generated predictions. Further studies are encouraged to explore other manipulations of benefit and cost (e.g., monetary reward) and test how such manipulations shift the reliance on the informative but costly explicit predictions.”
https://doi.org/10.7554/eLife.39497.021
Article and author information
Funding
National Institute on Aging (F32 AG056080)
- Jiefeng Jiang
National Institute on Aging (R21 AG058111)
- Anthony D Wagner
National Institute of Mental Health (R01 MH097965)
- Tobias Egner
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank Anthony Sali for assistance with data collection and Michael L Waskom for helpful comments on a previous version of this manuscript. This project was supported in part by National Institute on Aging awards F32 AG056080 (JJ) and R21 AG058111 (ADW) and National Institute of Mental Health award R01 MH097965 (TE).
Ethics
Human subjects: Twenty-eight volunteers gave informed written consent, in accordance with institutional guidelines. This study was approved by the Duke University Health System Institutional Review Board.
Senior and Reviewing Editor
- Michael J Frank, Brown University, United States
Reviewing Editor
- David Badre, Brown University, United States
Publication history
- Received: June 23, 2018
- Accepted: August 14, 2018
- Accepted Manuscript published: August 16, 2018 (version 1)
- Version of Record published: September 6, 2018 (version 2)
- Version of Record updated: October 19, 2018 (version 3)
Copyright
© 2018, Jiang et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.