MRI methods are promising techniques for investigating the human subcortical auditory system, and the publicly available data, atlases, and tools make researching human audition simpler and more reliable.
Combining ultra-high-field imaging with physiological and saliva measures establishes that interactions between the locus coeruleus, hippocampus and amygdala vary across emotional memory stages, putatively reflecting distinct cognitive states.
Inter-individual human brain alignment that uses macro-anatomical priors in addition to cortical curvature improves micro-anatomical correspondence between auditory areas.
The combination of intraneural microstimulation and 7T fMRI makes it possible to bridge the gap between first-order mechanoreceptive afferent input codes and their spatial, dynamic and perceptual representations in human cortex.
Pronounced cerebellar activation during the unexpected omission of a potentially harmful event suggests that the cerebellum should be added to the neural network that processes the prediction errors underlying emotional associative learning.
Ultra-high-field neuroimaging dissociates the ventral medial geniculate body (vMGB) of the primary auditory pathway from other MGB subregions and reveals that top-down modulation of the vMGB is relevant for speech recognition.
Functional MRI measurements of orientation reflect coarse-scale biases that are wholly determined by second-order interactions between the stimulus aperture and the underlying orientation.