Spoken language is colored by fluctuations in pitch and rhythm. Rather than speaking in a flat monotone, we allow our sentences to rise and fall. We vary the length of syllables, drawing out some and shortening others. These fluctuations, known as prosody, add emotion to speech and do the work of punctuation. In written language, we use a comma or a period to signal a boundary between phrases. In speech, we use changes in pitch – how low or high a voice sounds – or in the length of syllables.
Having more than one type of cue that can signal emotion or transitions between sentences has a number of advantages. It means that people can understand each other even when factors such as background noise obscure one set of cues. It also means that people with impaired sound perception can still understand speech. Those with a condition called congenital amusia, for example, struggle to perceive pitch, but they can compensate for this difficulty by placing greater emphasis on other aspects of speech.
Jasmin et al. showed how the brain achieves this by comparing the brain activity of people with and without amusia. Participants read written sentences in which a comma marked a boundary between two phrases. They then heard two spoken sentences and had to choose the one that matched the written sentence. The spoken sentences used changes in pitch and/or syllable duration to signal the position of the comma. This provided listeners with the information needed to distinguish between "after John runs the race, ..." and "after John runs, the race...", for example.
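To make these cues concrete, here is a minimal sketch, with invented word-level values that are not the study's stimuli or analysis, of how pre-boundary lengthening and a pitch change can together point to where the comma falls.

```python
import numpy as np

# Hypothetical per-word prosodic features for "after John runs the race ..."
# (the numbers are invented for illustration only).
words = ["after", "John", "runs", "the", "race"]
duration_ms = np.array([180, 220, 410, 120, 300])    # word lengths
mean_pitch_hz = np.array([190, 185, 230, 160, 150])  # average pitch per word

def guess_boundary(durations, pitches):
    """Guess which word a phrase boundary follows.

    Two classic boundary cues: the word before a boundary is lengthened,
    and pitch changes sharply across the boundary. Here the two cues are
    simply standardised and summed; a real model would be fitted to data.
    """
    lengthening = (durations - durations.mean()) / durations.std()
    # Pitch change from each word to the next (last word padded with itself).
    pitch_jump = np.abs(np.diff(pitches, append=pitches[-1]))
    pitch_cue = (pitch_jump - pitch_jump.mean()) / (pitch_jump.std() + 1e-9)
    score = lengthening + pitch_cue
    return int(np.argmax(score))

idx = guess_boundary(duration_ms, mean_pitch_hz)
# Prints 'runs', i.e. the parse "after John runs, the race..."
print(f"Boundary most likely after: '{words[idx]}'")
```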
When two brain regions communicate, they tend to increase their activity at around the same time. The brain regions are then said to show functional connectivity. Jasmin et al. found that compared to healthy volunteers, people with amusia showed less functional connectivity between left-hemisphere brain regions that process language and right-hemisphere regions that process pitch. In other words, because pitch is a less reliable source of information for people with amusia, they recruit pitch-related brain regions less when processing speech.
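A toy calculation illustrates what functional connectivity measures in this sense: correlate the activity time courses of two regions, and treat a higher correlation as stronger connectivity. The signals below are simulated purely for illustration; this is not the analysis pipeline used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated activity time courses (e.g. fMRI signal) for two brain regions.
# A shared component makes the two regions rise and fall together.
n_timepoints = 200
shared = rng.standard_normal(n_timepoints)
left_language_region = shared + 0.8 * rng.standard_normal(n_timepoints)
right_pitch_region = shared + 0.8 * rng.standard_normal(n_timepoints)

# Functional connectivity is commonly estimated as the Pearson correlation
# between the two time courses.
connectivity = np.corrcoef(left_language_region, right_pitch_region)[0, 1]
print(f"Estimated functional connectivity: {connectivity:.2f}")

# Weakening the shared component (as if one region is recruited less)
# lowers the correlation, i.e. weaker functional connectivity.
right_pitch_region_weak = 0.2 * shared + 0.8 * rng.standard_normal(n_timepoints)
weak_connectivity = np.corrcoef(left_language_region, right_pitch_region_weak)[0, 1]
print(f"With a weaker shared signal: {weak_connectivity:.2f}")
```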
These results add to our understanding of how brains compensate for impaired perception. This may be useful for understanding the neural basis of compensation in other clinical conditions. It could also help us design bespoke hearing aids or other communication devices, such as computer programs that convert text into speech. Such programs could tailor the pitch and rhythm characteristics of the speech they produce to suit the perception of individual users.
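As a rough, hypothetical sketch of what such tailoring could involve, the code below flattens or preserves pitch variation and exaggerates duration contrasts depending on an assumed user profile; the function, parameters, and numbers are all invented for illustration and do not correspond to any existing text-to-speech interface.

```python
import numpy as np

def tailor_prosody(pitch_hz, durations_ms, pitch_sensitivity, timing_sensitivity):
    """Scale prosodic cues to suit a listener's perceptual profile.

    Sensitivities are hypothetical values between 0 (impaired) and 1 (typical).
    If pitch perception is weak, pitch variation is flattened and duration
    contrasts are exaggerated instead.
    """
    pitch = np.asarray(pitch_hz, dtype=float)
    durs = np.asarray(durations_ms, dtype=float)

    # Compress pitch excursions around the mean pitch when pitch is less useful.
    pitch_scale = 0.5 + 0.5 * pitch_sensitivity
    tailored_pitch = pitch.mean() + pitch_scale * (pitch - pitch.mean())

    # Exaggerate duration contrasts when the pitch cue is weak.
    duration_scale = 1.0 + 0.5 * (1.0 - pitch_sensitivity) * timing_sensitivity
    tailored_durs = durs.mean() + duration_scale * (durs - durs.mean())

    return tailored_pitch, tailored_durs

# Example: a listener with poor pitch perception but typical timing perception.
pitch_contour = [190, 185, 230, 160, 150]   # Hz, per word (invented values)
word_durations = [180, 220, 410, 120, 300]  # ms, per word (invented values)
p, d = tailor_prosody(pitch_contour, word_durations,
                      pitch_sensitivity=0.2, timing_sensitivity=1.0)
print("Tailored pitch (Hz):    ", np.round(p))
print("Tailored durations (ms):", np.round(d))
```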