TY - JOUR
TI - Reinforcement biases subsequent perceptual decisions when confidence is low, a widespread behavioral phenomenon
AU - Lak, Armin
AU - Hueske, Emily
AU - Hirokawa, Junya
AU - Masset, Paul
AU - Ott, Torben
AU - Urai, Anne E
AU - Donner, Tobias H
AU - Carandini, Matteo
AU - Tonegawa, Susumu
AU - Uchida, Naoshige
AU - Kepecs, Adam
A2 - Salinas, Emilio
A2 - Frank, Michael J
A2 - Salinas, Emilio
A2 - Brody, Carlos D
A2 - Ding, Long
VL - 9
PY - 2020
DA - 2020/04/14
SP - e49834
C1 - eLife 2020;9:e49834
DO - 10.7554/eLife.49834
UR - https://doi.org/10.7554/eLife.49834
AB - Learning from successes and failures often improves the quality of subsequent decisions. Past outcomes, however, should not influence purely perceptual decisions after task acquisition is complete since these are designed so that only sensory evidence determines the correct choice. Yet, numerous studies report that outcomes can bias perceptual decisions, causing spurious changes in choice behavior without improving accuracy. Here we show that the effects of reward on perceptual decisions are principled: past rewards bias future choices specifically when previous choice was difficult and hence decision confidence was low. We identified this phenomenon in six datasets from four laboratories, across mice, rats, and humans, and sensory modalities from olfaction and audition to vision. We show that this choice-updating strategy can be explained by reinforcement learning models incorporating statistical decision confidence into their teaching signals. Thus, reinforcement learning mechanisms are continually engaged to produce systematic adjustments of choices even in well-learned perceptual decisions in order to optimize behavior in an uncertain world.
KW - reinforcement learning
KW - uncertainty
KW - reward
KW - sensory decision
JF - eLife
SN - 2050-084X
PB - eLife Sciences Publications, Ltd
ER -