Brain responses in humans demonstrate that the analysis of crowded acoustic scenes relies on a mechanism that infers the predictability of sensory information and up-regulates the processing of reliable signals.
Experiments with realistic acoustic stimuli have revealed that humans distinguish salient sounds from background noise by integrating spectral and temporal information.
Faced with multiple sound sources, humans perceive all of a target sound's features better when one of those features changes in time with a visual stimulus.
A non-invasive cognitive assistant for blind people endows objects in the environment with voices, allowing users to explore the scene, localize objects, and navigate through a building with minimal training.
In normal-hearing listeners, cocktail-party listening performance is associated with the ability to focus attention on a target stimulus in the presence of distractors.
Everyday soundscapes dynamically engage attention, drawing it either towards target sounds or towards salient ambient events; both forms of attention recruit the same fronto-parietal network but compete in a push-pull fashion for limited neural resources.