Combining GABA (MRS) and fMRI measurements in the human brain uncovers distinct suppression mechanisms that optimize perceptual decisions through learning and experience-dependent plasticity in the visual cortex.
fMRI evidence for off-task replay predicts subsequent replanning behavior in humans, suggesting that learning from simulated experience during replay helps update previously learned policies, as in reinforcement learning.
The human brain is capable of implementing inverse reinforcement learning, in which an observer infers the hidden reward structure of a decision problem solely by observing another individual's actions.
Mice and humans learned to distinguish an arbitrary tactile sequence from other stimuli that differed only in their temporal patterning over hundreds of milliseconds, showing that sequence learning generalizes across species and sensory modalities.
A domain-general structure learning mechanism, supported by the anterior insula, moves beyond explicit category labels and dyadic similarity as the sole inputs to social group representations, and predicts ally-choice behavior.