Separate ways

Studying middle-aged and older individuals over time sheds light on the relationship between a filtering mechanism in the brain and listening behaviour.

Image credit: Tune & Obleser, Designed by Freepik (CC BY 4.0)

Humans are social animals. Communicating with other humans is vital for our social wellbeing, and having strong connections with others has been associated with healthier aging. For most humans, speech is an integral part of communication, but speech comprehension can be challenging in everyday social settings: imagine trying to follow a conversation in a crowded restaurant or decipher an announcement in a busy train station. Noisy environments are particularly difficult for older individuals to navigate, since age-related hearing loss can impair the ability to detect and distinguish speech sounds. Some aging individuals cope better with this problem than others, but why, and how listening success changes over a lifetime, remains poorly understood.

One of the mechanisms involved in the segregation of speech from other sounds depends on the brain applying a ‘neural filter’ to auditory signals. The brain does this by aligning the activity of neurons in a part of the brain that deals with sounds, the auditory cortex, with fluctuations in the speech signal of interest. This neural ‘speech tracking’ can help the brain better encode the speech signals that a person is listening to.

Tune and Obleser wanted to know whether the accuracy with which individuals can implement this filtering strategy represents a marker of listening success. Further, the researchers asked whether differences in the strength of neural filtering observed between aging listeners could predict how their listening ability would develop, and whether these neural changes were connected with changes in people’s behaviours.

To answer these questions, Tune and Obleser used data collected twice, two years apart, from a group of healthy middle-aged and older listeners. They then built mathematical models using these data to investigate how differences between individuals in the brain and in behaviours relate to each other. The researchers found that, at both timepoints, individuals with stronger neural filtering were better at distinguishing speech and listened more successfully. However, neural filtering strength measured at the first timepoint was not a good predictor of how well individuals would be able to listen two years later. Indeed, changes at the brain and the behavioural level occurred independently of each other.

Tune and Obleser’s findings will be relevant to neuroscientists, as well as to psychologists and audiologists who aim to understand individual differences in listening success. The results suggest that neural filtering guided by attention to speech is an important readout of an individual’s attention state. However, the results also caution against explaining listening performance based solely on neural factors, given that listening behaviours and neural filtering follow independent trajectories.