Response-based outcome predictions and confidence regulate feedback processing and learning

  1. Romy Frömer (corresponding author)
  2. Matthew R Nassar
  3. Rasmus Bruckner
  4. Birgit Stürmer
  5. Werner Sommer
  6. Nick Yeung
  1. Humboldt-Universität zu Berlin, Germany
  2. Brown University, United States
  3. Freie Universität Berlin, Germany
  4. Max Planck School of Cognition, Germany
  5. International Max Planck Research School LIFE, Germany
  6. International Psychoanalytic University, Germany
  7. University of Oxford, United Kingdom
5 figures, 7 tables and 11 additional files


Figure 1

Interactions between performance monitoring and feedback processing.

(A) Illustration of dynamic updating of predicted outcomes based on response information. Pre-response, the agent aims to hit the bullseye and selects the action he believes will achieve this goal. Post-response, the agent realizes that he made a mistake and predicts that he will miss the target entirely, being reasonably confident in this prediction. In line with his prediction, and thus unsurprisingly, the dart hits the floor. (B) Illustration of key concepts. Left: The feedback received is plotted against the prediction. Performance and prediction can vary independently in their accuracy. Perfect performance (zero deviation from the target, dark blue line) can occur for accurate or inaccurate predictions, and any performance, including errors, can be predicted perfectly (predicted error identical to performance, orange line). When predictions and feedback diverge, outcomes (feedback) can be better (closer to the target, area highlighted with coarse light red shading) or worse (farther from the target, area highlighted with coarse light blue shading) than predicted. The more they diverge, the less precise the predictions are. Right: The precision of the prediction is plotted against confidence in that prediction. If confidence closely tracks the precision of the predictions, that is, if agents know when their predictions are probably right and when they are not, confidence calibration is high (green). If confidence is independent of the precision of the predictions, confidence calibration is low. (C) Illustration of theoretical hypotheses. Left: We expect the correspondence between predictions and feedback to be stronger when confidence is high and weaker when confidence is low. Right: We expect that agents with better confidence calibration learn better. (D) Trial schema. Participants learned to produce a time interval by pressing a button with their left index finger following a tone.
Following each response, they indicated in sequence on a visual analog scale their estimate of their accuracy (anchors: ‘much too short’ = ‘viel zu kurz’ to ‘much too long’ = ‘viel zu lang’) and their confidence in that estimate (anchors: ‘not certain’ = ‘nicht sicher’ to ‘fully certain’ = ‘völlig sicher’) by moving an arrow slider. Finally, feedback was provided on a visual analog scale for 150 ms. The current error was displayed as a red square on the feedback scale relative to the target interval, indicated by a tick mark at the center (Target, t), with undershoots shown to the left of the center and overshoots to the right, scaled relative to the feedback anchors of −/+1 s (Scale, s; cf. E). Participants were told neither Target nor Scale and instead had to learn them from the feedback. (E) Bayesian Learner with Performance Monitoring. The learner selects an Intended Response (i) based on the current estimate of the Target. The Intended Response and independent Response Noise produce the Executed Response (r). The Efference Copy (c) of this response varies in its precision as a function of Efference Copy Noise. It is used to generate a Prediction, expressed as the deviation from the estimate of the Target scaled by the estimate of the Scale. The Efference Copy Noise is estimated and expressed as Confidence (co), approximating the precision of the Prediction. Learners vary in their Confidence Calibration (cc), that is, the fidelity with which Confidence reflects the precision of their Predictions; higher Confidence Calibration (arrows: green > yellow > magenta) leads to a more reliable translation from Efference Copy precision to Confidence. Feedback is provided according to the Executed Response and depends on the Target and Scale, which are unknown to the learner. Target and Scale are inferred based on Feedback (f), Response Noise, Prediction, and Confidence. Variables that are observable to the learner are displayed in solid boxes; variables that are only partially observable are displayed in dashed boxes.
(F) Target and Scale error (absolute deviation of the current estimates from the true values) for the Bayesian Learner with Performance Monitoring (green, optimal calibration), a feedback-only Bayesian Learner (solid black), and a Bayesian Learner with Outcome Prediction (dashed black).
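The generative process in (E) can be sketched as a short simulation. This is a minimal illustration under placeholder assumptions — the noise magnitudes, the true Target and Scale values, and the variable names below are not the authors' fitted parameters:

```python
import numpy as np

def simulate_trial(target, scale, target_est, scale_est,
                   response_noise=0.05, efference_noise=0.03, rng=None):
    """One trial of the generative sketch in panel E (illustrative noise levels)."""
    rng = rng or np.random.default_rng()
    intended = target_est                                # i: aim at current Target estimate
    executed = intended + rng.normal(0, response_noise)  # r: motor noise added
    copy = executed + rng.normal(0, efference_noise)     # c: noisy efference copy
    # Prediction: deviation of the copy from the estimated Target,
    # expressed on the feedback scale via the estimated Scale
    prediction = (copy - target_est) / scale_est
    # Feedback depends on the true (unobserved) Target and Scale
    feedback = (executed - target) / scale
    return prediction, feedback

rng = np.random.default_rng(0)
trials = [simulate_trial(1.2, 1.0, 1.0, 1.0, rng=rng) for _ in range(1000)]
preds, fbs = map(np.array, zip(*trials))
print(round(float(fbs.mean() - preds.mean()), 2))  # → -0.2
```

With a biased initial Target estimate (1.0 vs. a true Target of 1.2), predictions center on zero while feedback is systematically offset — the kind of systematic early-trial bias the model produces before Target and Scale are learned.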

Figure 2 with 3 supplements
Relationships between outcome predictions and actual outcomes in the model and observed data (top vs. bottom).

(A) Model prediction for the relationship between Prediction and actual outcome (Feedback) as a function of Confidence. The relationship between predicted and actual outcomes is stronger for higher confidence. Note that systematic errors in the model’s initial estimates of target (overestimated) and scale (underestimated) give rise to systematically late responses, as well as underestimation of predicted outcomes in early trials, visible as a plume of data points extending above the main cloud of simulated data. (B) The model-predicted effect of Confidence Calibration on learning. Better Confidence Calibration leads to better learning. (C) Observed relationship between predicted and actual outcomes. Each data point corresponds to one trial of one participant; all trials of all participants are plotted together. Regression lines are local linear models visualizing the relationship between predicted and actual error separately for high, medium, and low confidence. At the edges of the plot, the marginal distributions of actual and predicted errors are depicted by confidence level. (D) Change in error magnitude across trials as a function of confidence calibration. Lines represent LMM-predicted error magnitude for low, medium, and high confidence calibration, respectively. Shaded error bars represent corresponding SEMs. Note that the combination of linear and quadratic effects approximates the shape of the learning curves better than a linear effect alone, but predicts an exaggerated uptick in errors toward the end (Figure 2—figure supplement 3). Inset: Average Error Magnitude for every participant plotted as a function of Confidence Calibration level. The vast majority of participants show positive confidence calibration. The regression line represents a local linear model fit and the error bar represents the standard error of the mean.
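Confidence calibration — how closely confidence tracks the precision of one's own predictions — can be scored per participant. The sketch below uses the sign-flipped correlation between confidence and the absolute prediction error; this is one plausible operationalization for illustration, not necessarily the paper's exact metric:

```python
import numpy as np

def confidence_calibration(confidence, prediction, feedback):
    """Correlation-based calibration score: positive when confidence is
    high exactly on trials where the prediction lands near the feedback."""
    spe = np.abs(np.asarray(feedback) - np.asarray(prediction))
    return -np.corrcoef(confidence, spe)[0, 1]

# toy data: 500 trials where outcomes scatter around predictions
rng = np.random.default_rng(1)
pred = rng.normal(0, 1, 500)
fb = pred + rng.normal(0, 0.3, 500)
spe = np.abs(fb - pred)
conf_good = -spe + rng.normal(0, 0.02, 500)   # confidence tracks precision
conf_flat = rng.normal(0, 1, 500)             # confidence unrelated to precision

print(confidence_calibration(conf_good, pred, fb) > 0.8)       # → True
print(abs(confidence_calibration(conf_flat, pred, fb)) < 0.2)  # → True
```

The well-calibrated toy agent scores near 1, the uninformative one near 0, mirroring the high- vs. low-calibration cases in panel B.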

Figure 2—figure supplement 1
Model comparison.

Shown are the relationship between predicted and actual outcomes as a function of Confidence (top) and the Target Error Magnitude as a function of Confidence Calibration (bottom) for the model that learns from feedback only (left), the model that learns from feedback and outcome predictions (center), and our Bayesian Learner with Performance Monitoring (right). While the feedback-only model learns the target adequately, it is not able to predict the outcomes of its actions. The feedback-and-prediction model learns faster and is able to predict outcomes; however, it cannot distinguish between accurate and inaccurate predictions. Finally, the Bayesian Learner with Performance Monitoring predicts outcomes and distinguishes between accurate and inaccurate predictions. Whether these abilities are advantageous depends on the fidelity of confidence as a read-out of the precision of the predictions, that is, on confidence calibration. The well-calibrated Bayesian Learner with Performance Monitoring outperforms both alternative models.

Figure 2—figure supplement 2
Predictions and Confidence improve as learning progresses.

Plotted are actual errors as a function of predicted errors and confidence terciles. Regression lines represent local linear models. In block 1, many large actual errors are inaccurately predicted to be zero or small. These prediction errors decrease over time. Across blocks, confidence dissociates increasingly well between accurate and inaccurate predictions.

Figure 2—figure supplement 3
Running average log error magnitude across trials.

Running average performance, averaged across participants within terciles of confidence calibration. Shaded error bars represent the standard error of the mean.

Figure 3

Changes in objective and subjective feedback.

(A) Dissociable information provided by feedback. An example of a prediction (hatched box) and subsequent feedback (red box) is shown overlaid on the rating/feedback scale. We derived three error signals that make dissociable predictions across combinations of predicted and actual outcomes. The solid blue line indicates Error Magnitude (the distance from the outcome to the goal). As smaller errors reflect greater reward, we computed the Reward Prediction Error (RPE) as the signed difference between the negative error magnitude and the negative predicted error magnitude (solid orange line, distance from prediction to goal). The Sensory Prediction Error (SPE, dashed line) was quantified as the absolute discrepancy between feedback and prediction. Values of Error Magnitude (left), RPE (middle), and SPE (right) are plotted for all combinations of prediction (x-axis) and outcome (y-axis) locations. (B) Predictions and confidence are associated with reduced error signals. Average Error Magnitude (left), Reward Prediction Error (center), and Sensory Prediction Error (right) are shown for each block and confidence tercile. Average prediction errors are smaller than average error magnitudes (dashed circles), particularly for higher confidence.
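The three error signals in (A) follow directly from the caption's definitions. A minimal sketch, placing the goal at position `target` on the feedback scale (an illustrative convention):

```python
def error_signals(prediction, feedback, target=0.0):
    """Error signals from Figure 3A; returns (error_magnitude, rpe, spe)."""
    error_magnitude = abs(feedback - target)    # distance from outcome to goal
    predicted_error = abs(prediction - target)  # distance from prediction to goal
    # smaller errors mean more reward, so reward is the negative error magnitude:
    rpe = -error_magnitude - (-predicted_error)  # better than predicted -> positive
    spe = abs(feedback - prediction)             # unsigned surprise about outcome location
    return error_magnitude, rpe, spe

# outcome closer to the goal than predicted: positive RPE
print(error_signals(prediction=-0.5, feedback=-0.25))  # → (0.25, 0.25, 0.25)
# perfect prediction of a hit, but the outcome misses: negative RPE
print(error_signals(prediction=0.0, feedback=0.5))     # → (0.5, -0.5, 0.5)
```

Note how the two examples dissociate the signals: SPE is large in both, while RPE flips sign depending on whether the outcome is better or worse than predicted.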

Figure 4

Multiple prediction errors in feedback processing.

(A–C) FRN amplitude is sensitive to predicted error magnitude. (A) FRN grand mean; the shaded area marks the time interval for peak-to-peak detection of the FRN. Negative peaks between 200 and 300 ms post-feedback were quantified relative to positive peaks in the preceding 100 ms time window. (B) Expected change in FRN amplitude as a function of RPE (color) for two predictions (black curves represent schematized predictive distributions around the reported prediction for a given confidence), one too early (top: high confidence in a low-reward prediction) and one too late (bottom: low confidence in a higher-reward prediction). Vertical black arrows mark a sample outcome (deviation from the target; abscissa) resulting in different RPEs/expected changes in FRN amplitude for the two predictions, indicated by shades. Blue shades indicate negative RPEs/larger FRN, red shades indicate positive RPEs/smaller FRN, and gray denotes zero. Note that these are mirrored at the goal for any prediction, and that the likelihood of the actual outcome given the prediction (y-axis) does not affect the RPE. In the absence of a prediction, or for a predicted error of zero, FRN amplitude should increase with the deviation from the target (abscissa). (C) LMM-estimated effects of RPE on peak-to-peak FRN amplitude, visualized with the effects package; shaded error bars represent 95% confidence intervals. (D–I) P3a amplitude is sensitive to SPE and Confidence. (D) Grand mean ERP with the time window for quantification of the P3a, 330–430 ms, highlighted. (E) Hypothetical internal representation of predictions. Curves represent schematized predictive distributions around the reported prediction (zero on the abscissa). Confidence is represented by the width of the distributions. (F) Predicted effects of SPE (x-axis) and Confidence (y-axis) on surprise, as estimated with Shannon information (darker shades signify larger surprise), for varying Confidence and SPE (center). The margins visualize the predicted main effects of Confidence (left) and SPE (bottom). (G) P3a LMM fixed-effect topographies for SPE and Confidence. (H–I) LMM-estimated effects on P3a amplitude, visualized with the effects package; shaded areas in (H) (SPE) and (I) (Confidence) represent 95% confidence intervals.
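The peak-to-peak FRN quantification described in (A) can be sketched as follows. The sampling rate, the toy ERP shape, and the choice to anchor the preceding 100 ms window at the negative-peak latency are assumptions of this illustration, not details taken from the paper:

```python
import numpy as np

def peak_to_peak_frn(erp, srate=500, t0=0.0):
    """Peak-to-peak FRN: most negative value 200-300 ms after feedback,
    measured against the most positive value in the 100 ms before that peak.

    `erp` is a 1-D voltage array; `t0` is feedback onset in seconds.
    """
    def idx(t):
        return int(round((t0 + t) * srate))
    neg_seg = erp[idx(0.200):idx(0.300)]
    neg_i = idx(0.200) + int(np.argmin(neg_seg))     # latency of the negative peak
    pos_seg = erp[neg_i - int(0.100 * srate):neg_i]  # the 100 ms preceding it
    return float(erp[neg_i] - np.max(pos_seg))       # negative-going difference

# toy ERP: positive deflection at ~180 ms, negative deflection at ~250 ms
t = np.arange(0, 0.5, 1 / 500)
erp = 5 * np.exp(-((t - 0.18) ** 2) / 0.0004) - 8 * np.exp(-((t - 0.25) ** 2) / 0.0004)
print(round(peak_to_peak_frn(erp), 1))  # → -13.0
```

The toy waveform's 5 µV positive and −8 µV negative peaks yield the expected −13 µV peak-to-peak value.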
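The surprise map in (F) treats the reported prediction as the center of a Gaussian predictive distribution whose width shrinks as confidence grows, with surprise computed as Shannon information, −log p(outcome). A minimal sketch; the linear mapping from confidence to distribution width is an illustrative placeholder:

```python
import math

def surprise(spe, confidence, base_sd=1.0):
    """Shannon information of an outcome under a Gaussian predictive
    distribution centered on the prediction (cf. Figure 4E/F).

    `spe` is the outcome's distance from the prediction; `confidence`
    in [0, 1) narrows the distribution (assumed linear mapping).
    """
    sd = base_sd * (1.0 - 0.9 * confidence)  # confident -> narrow distribution
    p = math.exp(-0.5 * (spe / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    return -math.log(p)                      # information in nats

# a large SPE is more surprising under high confidence than low confidence
print(surprise(spe=2.0, confidence=0.9) > surprise(spe=2.0, confidence=0.1))  # → True
# a perfectly predicted outcome carries less information when confidence is high
print(surprise(spe=0.0, confidence=0.9) < surprise(spe=0.0, confidence=0.1))  # → True
```

This reproduces the two predicted main effects in (F): surprise grows with SPE, and the SPE effect is amplified by confidence.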

Figure 5 with 1 supplement
Performance-relevant information converges in the P3b.

(A) Grand average ERP waveform at Pz with the time window for quantification, 416–516 ms, highlighted. (B) Effect topographies as predicted by LMMs for RPE, Error Magnitude, SPE, and the RPE by Confidence by Block interaction. (C–F) LMM-estimated effects on P3b amplitude, visualized with the effects package in R; shaded areas represent 95% confidence intervals. (C) RPE. Note the interaction effects with Block and Confidence (D) that modulate the main effect. (D) Three-way interaction of RPE, Confidence, and Block. Asterisks denote significant RPE slopes within cells. (E) P3b amplitude as a function of SPE. (F) P3b amplitude as a function of Error Magnitude.

Figure 5—figure supplement 1
P3b to feedback modulates error-related adjustments on the subsequent trial.

Top: Model-predicted values (y-hats) illustrating the interaction of previous error magnitude and P3b amplitude on improvement in the current trial, for each block. Bottom: Distribution of errors in each block.


Table 1
Relations between actual performance outcome (signed error magnitude), predicted outcome, confidence in predictions and their modulations due to learning across blocks of trials.
Signed error magnitude
Fixed effects | Estimate | SE | 95% CI | t | p
Predicted Outcome | 523.99 | 29.66 | 465.86 – 582.12 | 17.67 | 7.438e-70
Confidence | −27.07 | 11.05 | −48.73 – −5.42 | −2.45 | 1.428e-02
Predicted Outcome: Block | −149.70 | 21.90 | −192.62 – −106.78 | −6.84 | 8.145e-12
Predicted Outcome: Confidence | 322.56 | 27.31 | 269.03 – 376.09 | 11.81 | 3.477e-32
Block: Confidence | −25.52 | 9.15 | −43.46 – −7.58 | −2.79 | 5.297e-03
Predicted Outcome: Block: Confidence | 90.68 | 33.65 | 24.73 – 156.64 | 2.69 | 7.043e-03
Random effects: Predicted Outcome = 22357.33
Model parameters: Deviance = 137632.185
  1. Formula: Signed error magnitude ~ Predicted Outcome * Block * Confidence + (Confidence + Predicted Outcome + Block | participant); Note: ‘:’ indicates interactions between predictors.

Table 2
Relations of confidence with the precision of prediction and the precision of performance and changes across blocks.
Confidence
Fixed effects | Estimate | SE | 95% CI | t | p
Sensory Prediction Error (SPE) | −0.44 | 0.04 | −0.52 – −0.36 | −10.84 | 2.289e-27
Error Magnitude (EM) | −0.27 | | | 3.73 | 1.910e-04
Block: SPE | −0.08 | 0.04 | −0.15 – −0.00 | −1.99 | 4.642e-02
Block: EM | 0.15 | 0.05 | 0.05 – | |
Random effects: Error Magnitude = 0.06; Error Magnitude: Block = 0.04
Model parameters: Deviance = 7280.284
  1. Formula: Confidence ~ (SPE + Error Magnitude) * Block + (SPE + Error Magnitude * Block | participant); Note: ‘:’ indicates interactions between predictors.

Table 3
Confidence calibration modulation of learning effects on performance.
log Error Magnitude
Fixed effects | Estimate | SE | 95% CI | t | p
(Intercept) | −5.30 | | | 80.74 | 0.000e+00
Confidence Calibration | 0.58 | 0.58 | −0.57 – 1.72 | 0.99 | 3.228e-01
Trial (linear) | −0.59 | 0.07 | −0.72 – −0.45 | −8.82 | 1.197e-18
Trial (quadratic) | −0.20 | | | 6.80 | 1.018e-11
Trial (linear): Confidence Calibration | −0.86 | 0.32 | −1.48 – −0.24 | −2.72 | 6.467e-03
Random effects: Trial (linear) = 0.03
Model parameters: log-Likelihood = −15106.705
  1. Formula: log Error Magnitude ~ Confidence Calibration * Trial(linear) + Trial(quadratic) + (Trial(linear) | participant); Note: ‘:’ indicates interactions between predictors.

Table 4
LMM statistics of learning effects on FRN.
Peak-to-Peak FRN amplitude
Fixed effects | Estimate | SE | 95% CI | t | p
Intercept | −12.67 | 0.49 | −13.62 – −11.71 | −26.03 | 2.322e-149
Reward prediction error | 1.43 | 0.41 | 0.62 – 2.24 | 3.47 | 5.302e-04
Sensory prediction error | −0.67 | 0.42 | −1.49 – 0.15 | −1.61 | 1.078e-01
Error magnitude | 0.51 | 0.55 | −0.57 – 1.58 | 0.92 | 3.553e-01
Random effects: Residuals = 27.69; Error magnitude = 2.24
Model parameters: N participants = 40; log-Likelihood = −29908.910
  1. Formula: FRN ~ Confidence + RPE + SPE + EM + Block + (EM + Block | participant).

Table 5
LMM statistics of learning effects on P3a.
P3a Amplitude
Fixed effects | Estimate | SE | 95% CI | t | p
Block | −0.91 | 0.07 | −1.05 – −0.77 | −12.93 | 3.201e-38
Sensory prediction error | 2.06 | 0.48 | 1.11 – 3.00 | 4.27 | 1.969e-05
Reward prediction error | −0.75 | 0.38 | −1.49 – −0.01 | −1.98 | 4.794e-02
Error magnitude | −1.95 | 0.44 | −2.81 – −1.09 | −4.43 | 9.512e-06
  1. Formula: P3a ~ Confidence + Block + SPE + RPE + EM + (SPE | participant).

Table 6
LMM statistics of learning effects on P3b.
P3b Amplitude
Fixed effects | Estimate | SE | 95% CI | t | p
Block | −0.48 | 0.09 | −0.66 – −0.30 | −5.20 | 2.037e-07
Reward prediction error | −1.12 | 0.46 | −2.03 – −0.22 | −2.43 | 1.493e-02
Sensory prediction error | 1.75 | 0.47 | 0.84 – 2.66 | 3.76 | 1.691e-04
Error magnitude | −2.35 | 0.46 | −3.24 – −1.45 | −5.14 | 2.743e-07
Confidence: Reward prediction error | −0.51 | 0.55 | −1.60 – 0.57 | −0.92 | 3.556e-01
Block: Confidence | 0.07 | 0.18 | −0.28 – 0.43 | 0.41 | 6.823e-01
Block: Reward prediction error | −0.52 | 0.44 | −1.39 – 0.34 | −1.19 | 2.359e-01
Block: Sensory prediction error | −0.98 | 0.46 | −1.88 – −0.07 | −2.12 | 3.405e-02
Block: Confidence: Reward prediction error | 2.22 | 0.72 | 0.81 – 3.64 | 3.08 | 2.057e-03
Random effects: Sensory Prediction Error = 2.16; Reward prediction error = 1.67
Model parameters: log-Likelihood = −29197.980; Deviance = 58395.960
  1. Formula: P3b ~ Block * (Confidence * RPE + SPE) + Error Magnitude + (SPE + RPE + Confidence | participant); Note: ‘:’ indicates interactions.

Table 7
LMM statistics of confidence weighted predicted error discounting on P3b.
P3b Amplitude
Fixed effects | Estimate | SE | 95% CI | t | p
Predicted error magnitude | −0.83 | 0.46 | −1.74 – 0.07 | −1.80 | 7.133e-02
Block | −0.32 | 0.11 | −0.52 – −0.11 | −2.98 | 2.860e-03
Error magnitude | −1.06 | 0.49 | −2.03 – −0.09 | −2.13 | 3.277e-02
Sensory prediction error | 1.49 | 0.40 | 0.71 – 2.28 | 3.72 | 1.992e-04
Confidence: Predicted error magnitude | −0.98 | 0.69 | −2.34 – 0.38 | −1.41 | 1.582e-01
Confidence: Block | −0.50 | 0.20 | −0.90 – −0.11 | −2.50 | 1.249e-02
Predicted error magnitude: Block | −1.12 | 0.56 | −2.22 – −0.02 | −2.00 | 4.540e-02
Random effects: Error magnitude = 3.43
Model parameters: log-Likelihood = −29201.951
  1. Formula: P3b ~ Block * (Confidence * Predicted Error Magnitude + SPE) + Error Magnitude + (Error Magnitude + Confidence | participant); Note: ‘:’ indicates interactions.

Additional files

Supplementary file 1

Follow-up on prediction and performance precision effects on confidence.
Supplementary file 2

Control analysis for confidence calibration effect on learning.
Supplementary file 3

Follow-up on block and confidence effects on relative error signals.
Supplementary file 4

Follow-up on confidence by block interaction on RPE benefit over error magnitude.
Supplementary file 5

Block and confidence effects on error signals.
Supplementary file 6

Follow-up on block and confidence effects on error signals.
Supplementary file 7

Block and confidence effects on error signals.
Supplementary file 8

Follow-up analyses on confidence-weighted predicted error magnitude effects on P3b.
Supplementary file 9

Trial-to-trial improvements by block and previous error and modulations by previous P3b.
Supplementary file 10

Follow-up on trial-to-trial improvements by block and previous error and modulations by previous P3b.
Transparent reporting form


Cite this article: Romy Frömer, Matthew R Nassar, Rasmus Bruckner, Birgit Stürmer, Werner Sommer, Nick Yeung. Response-based outcome predictions and confidence regulate feedback processing and learning. eLife 10:e62825.