For perceptual inference, human observers do not estimate sensory uncertainty instantaneously from the current sensory signal alone; instead, they combine past and current sensory inputs, consistent with a Bayesian learner.
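Such a Bayesian learner can be illustrated with a minimal sketch: a scalar precision-weighted update in which each new noisy observation is combined with the belief accumulated from past inputs, so that estimated uncertainty shrinks as evidence accrues. The function name, parameter values, and observation sequence below are hypothetical, chosen only to make the idea concrete.

```python
# Minimal sketch (hypothetical parameters) of a Bayesian learner that
# estimates a stimulus feature and its uncertainty by combining past
# and current sensory inputs via a precision-weighted (Kalman-style) update.

def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Combine a prior belief with one new noisy observation."""
    k = prior_var / (prior_var + obs_var)   # gain: weight given to new input
    post_mean = prior_mean + k * (obs - prior_mean)
    post_var = (1 - k) * prior_var          # uncertainty shrinks with evidence
    return post_mean, post_var

# Example: belief about a stimulus feature refined over three observations,
# starting from a broad prior (mean 0.0, variance 1.0).
mean, var = 0.0, 1.0
for obs in [0.8, 1.1, 0.9]:
    mean, var = bayes_update(mean, var, obs, obs_var=0.5)
```

After each observation the posterior variance decreases, capturing the key point of the summary: the observer's uncertainty estimate depends on the history of inputs, not on the current signal alone.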
Confidence-dependent reinforcement learning remains active, producing trial-to-trial choice updating even in well-learned perceptual decisions without explicit reward biases, across species and sensory modalities.
Temporal uncertainty interferes with the timely onset of evidence accumulation in perceptual decision making, prompting the brain to rely instead on statistical regularities in the temporal structure of the environment.
Neural correlates of somatosensory target detection are restricted to secondary somatosensory cortex, whereas activity in insular, cingulate, and motor regions reflects stimulus uncertainty and overt reports.
When rhesus monkeys plan reaching movements of which they are not fully confident, a particular brain area represents both the chosen action and alternative movements, perhaps as an aid to error correction or learning.