Abstract
Emotional responsiveness in neonates, particularly their ability to discern vocal emotions, plays an evolutionarily adaptive role in human communication and behavior. The developmental trajectory of emotional sensitivity in neonates is crucial for understanding the foundations of early social-emotional functioning. However, the precise onset of this sensitivity and its relationship with gestational age (GA) remain subjects of investigation. In a study involving 120 healthy neonates categorized into six groups based on their GA (ranging from 35 to 40 weeks), we explored their emotional responses to vocal stimuli. These stimuli encompassed disyllables with happy and neutral prosodies, alongside acoustically matched nonvocal control sounds. The assessments occurred during natural sleep states using the odd-ball paradigm and event-related potentials. The results reveal a distinct developmental change at 37 weeks GA, marking the point at which neonates exhibit heightened perceptual acuity for emotional vocal expressions. This newfound ability is substantiated by the presence of the mismatch response, akin to an initial form of adult mismatch negativity, elicited in response to positive emotional vocal prosody. Notably, the specificity of this perceptual shift is evident in the absence of such discrimination for the acoustically matched control sounds. Neonates born before 37 weeks GA do not display this level of discrimination ability. This developmental change has important implications for our understanding of early social-emotional development, highlighting the role of gestational age in shaping early perceptual abilities. Moreover, while these findings introduce the potential for a valuable screening tool for conditions like autism, characterized by atypical social-emotional functions, it is important to note that the current data are not yet robust enough to fully support this application. This study makes a substantial contribution to the broader field of developmental neuroscience and holds promise for future research on early intervention in neurodevelopmental disorders.
Significance statement
This study illuminates a key developmental change, pinpointing the emergence of heightened emotional perceptual acuity at 37 weeks of gestational age. Employing rigorous methods, we reveal that neonates at this stage exhibit remarkable discrimination abilities for emotional vocal prosody, a vital turning point in early social-emotional functioning. These findings emphasize the pivotal role of gestational age in shaping neonatal perception and provide a potential pathway for early screening of neurodevelopmental disorders, particularly autism. This insight holds profound implications for understanding the foundations of early social-emotional development in humans, offering a potential tool for early intervention in neurodevelopmental disorders, thereby enhancing child health and well-being.
Introduction
Emotions represent a fundamental aspect of human social interaction, serving as a compelling subject of inquiry within the disciplines of neuroscience, psychology, and psychiatry. Over the course of evolution, the human brain has developed a heightened sensitivity to the emotional expressions of others (Lindquist et al., 2012). Remarkably, even before the full maturation of their visual system, human infants are able to discern vocal emotions (Blasi et al., 2011; Soderstrom et al., 2017; Vaish & Striano, 2004). Prosodic elements of speech, including pitch, intensity, and rhythm, function as universal and non-linguistic channels for emotional communication (Latinus & Belin, 2011). Numerous studies have established that infants, including those who have not yet acquired language, exhibit differentiated responses to emotional prosody conveying happiness, fear, anger, and sadness within the age range of 2 to 12 months (e.g., Caron et al., 1988; Fernald, 1993; Graham et al., 2013; Grossmann et al., 2010; Singh et al., 2002; Walker-Andrews & Grolnick, 1983; Zhao et al., 2021).
More specifically, during the very early stages of postnatal life, often termed the neonatal period (encompassing infants under four weeks of age), compelling evidence points to the presence of emotion-specific responses to emotional cues conveyed through vocal prosody. These responses have been identified through various measurement methods, including assessments of eye-opening scores (Mastropieri & Turkewitz, 1999), event-related potentials (Cheng et al., 2012), and near-infrared spectroscopy (Zhang et al., 2019). However, prior research has primarily focused on the perception and discrimination of emotions among traditionally defined term neonates, a group that includes infants born within a five-week span (37 to 41 weeks) of gestational age (GA), treating them as a homogenous cohort. This raises a crucial question: when does emotional sensitivity begin to manifest in newborns? Does it exist in preterm neonates (GA < 37 weeks)? And does it vary among neonates born at early term (GA = 37-38 weeks) and full term (GA = 39-40 weeks), as defined by the refined ‘term’ classification (Spong, 2013)? Surprisingly, to date, no study has explored emotion processing in neonates with varying GAs. The discovery of this developmental milestone not only advances our understanding of the cognitive mechanisms underlying human social-emotional functioning but also provides valuable insights for early diagnosis of neurodevelopmental disorders, such as autism (Jones et al., 2014; Molnar-Szakacs et al., 2021).
The principal objective of this study is to investigate emotional responses in neonates across a range of GAs, spanning from 35 to 40 weeks, and to determine whether their heightened sensitivity to emotional voices is influenced by GA. To achieve this, we utilized the odd-ball paradigm in conjunction with an event-related potential (ERP) component known as mismatch negativity (MMN) to probe the neurobiological encoding of emotional voices in the neonatal brain. MMN is an auditory ERP component that demonstrates a negative shift in response to deviant sounds when compared to standard sounds (Näätänen et al., 2007). Importantly, it can be elicited without requiring the subject’s attention, making it particularly suitable for recording in young infants (Cheour et al., 1998, 2002). It is worth noting that in neonates, this ERP component often manifests as a positive response rather than the traditional MMN (e.g., Cheng et al., 2012; Chládková et al., 2021; Kostilainen et al., 2020; Richard et al., 2022; Thiede et al., 2019; Virtala et al., 2022; Winkler et al., 2003, 2009), leading many researchers to refer to it as the mismatch response (MMR) in the neonatal brain.
In our study, we exposed neonates to speech samples characterized by positive (i.e., happy) and neutral prosodies. Our selection of positive emotions over negative ones (e.g., fear, sadness, or anger) was guided not only by ethical considerations but also by previous research indicating an early preference for positive emotions in neonates (Farroni et al., 2007; Mastropieri & Turkewitz, 1999; Zhang et al., 2019; with the exception of Cheng et al., 2012). Additionally, to eliminate the possibility of neonates distinguishing emotional voices solely based on their low-level acoustic features, we included another set of control sounds. These nonvocal stimuli were meticulously matched with their vocal prosodic counterparts in terms of mean intensity and fundamental frequency (Cheng et al., 2012). Consequently, our primary objective is to pinpoint the developmental stage (i.e., the GA group) at which the discrimination between happy and neutral stimuli becomes apparent for emotional voices while remaining absent for acoustically matched control sounds.
Results
The MMR was extracted using ERP difference waves, computed by subtracting the ERP evoked by the standard stimulus (neutral sound) from the ERP evoked by the deviant stimulus (happy sound) (Näätänen et al., 2007). Brain electrical activity was recorded from the F3, F4, C3, C4, P3, and P4 sites following the international 10/20 system. However, this study primarily focused on data from electrodes F3 and F4, as the neonatal MMR exhibits a frontal distribution (Cheng et al., 2012; Cheour et al., 2002). Figure 1 displays MMR waveforms recorded from all six electrodes.
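For illustration, the deviant-minus-standard subtraction and the frontal mean-amplitude measure used in the analyses below can be expressed as a short computation. This is a minimal sketch assuming baseline-corrected epochs stored as NumPy arrays; the array names, indices, and shapes are illustrative rather than the pipeline actually used in the study.

```python
import numpy as np

def mmr_difference_wave(deviant_epochs, standard_epochs, f3_idx, f4_idx,
                        times, window=(0.150, 0.400)):
    """Deviant-minus-standard difference wave and its frontal mean amplitude.

    deviant_epochs, standard_epochs : arrays, shape (n_trials, n_channels, n_times)
    times : sample times in seconds, relative to sound onset
    """
    deviant_erp = deviant_epochs.mean(axis=0)            # average across trials
    standard_erp = standard_epochs.mean(axis=0)
    diff = deviant_erp - standard_erp                    # the MMR difference wave
    frontal = diff[[f3_idx, f4_idx], :].mean(axis=0)     # average of F3 and F4
    in_window = (times >= window[0]) & (times <= window[1])
    return frontal, frontal[in_window].mean()            # wave plus 150-400 ms mean
```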
Initially, we conducted a three-way repeated measures ANOVA on the mean MMR amplitudes (time window: 150 ms to 400 ms after sound onset) with condition (vocal/nonvocal) and hemisphere (left/right frontal, i.e., F3/F4) as within-subjects factors and neonatal group (GA = 35, 36, 37, 38, 39, and 40 weeks) as the between-subjects factor. However, neither the main effect nor the interaction effects involving the hemisphere factor were statistically significant (for detailed statistics, please refer to the supplemental file subtitled “Result of the three-way ANOVA”). Consequently, we removed the hemisphere factor and averaged the MMR waveforms recorded at the F3 and F4 electrodes.
Subsequently, we performed a two-way repeated measures ANOVA with condition and group as the two factors. The main effect of condition was significant, F(1,114) = 38.827, p < 0.001. Specifically, vocal stimuli elicited larger MMRs (mean ± standard deviation: 3.839 ± 4.855 μV) than nonvocal stimuli (0.496 ± 4.779 μV). The main effect of group was also significant, F(5,114) = 3.228, p = 0.009. In general, MMR amplitudes were smaller in the GA35 (0.590 ± 4.579 μV) and GA36 (0.141 ± 4.807 μV) groups than in the GA37 (2.801 ± 5.585 μV), GA38 (2.760 ± 4.382 μV), GA39 (3.401 ± 4.871 μV), and GA40 (3.311 ± 5.491 μV) groups. However, no pairwise comparisons remained significant after Bonferroni adjustment for multiple comparisons.
The interaction between condition and group was significant, F(5,114) = 3.127, p = 0.011 (as shown in Figure 2). Simple effects analysis revealed that MMR amplitudes were larger in the vocal than in the nonvocal condition in the GA37 (F(1,114) = 15.254, p < 0.001; vocal = 5.367 ± 5.165 μV, nonvocal = 0.235 ± 4.847 μV), GA38 (F(1,114) = 16.072, p < 0.001; vocal = 5.394 ± 3.145 μV, nonvocal = 0.126 ± 3.861 μV), GA39 (F(1,114) = 8.393, p = 0.005; vocal = 5.305 ± 4.011 μV, nonvocal = 1.498 ± 4.998 μV), and GA40 groups (F(1,114) = 14.482, p < 0.001; vocal = 5.811 ± 5.298 μV, nonvocal = 0.811 ± 4.546 μV). However, MMR amplitudes did not differ between the two conditions in the GA35 (F(1,114) = 0.026, p = 0.873, ηp² = 0.001; vocal = 0.695 ± 4.031 μV, nonvocal = 0.485 ± 5.173 μV) and GA36 groups (F(1,114) = 0.236, p = 0.628; vocal = 0.460 ± 4.104 μV, nonvocal = -0.179 ± 5.511 μV).
Further analysis revealed that vocal stimuli evoked varying MMR amplitudes across groups, F(5,114) = 6.768, p < 0.001. Specifically, the MMRs evoked by vocal stimuli were smaller in the GA35 group compared to the GA37 (p = 0.014), GA38 (p = 0.013), GA39 (p = 0.017), and GA40 groups (p = 0.005). Similarly, the MMRs evoked by vocal stimuli were smaller in the GA36 group compared to the GA37 (p = 0.008), GA38 (p = 0.008), GA39 (p = 0.009), and GA40 groups (p = 0.003). However, nonvocal stimuli did not elicit significantly different MMR amplitudes across groups, F(5,114) = 0.300, p = 0.912.
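The condition-by-group analyses reported above correspond to a standard mixed-design ANOVA with simple-effects follow-ups. As a sketch of how this could be reproduced outside SPSS, the pingouin package (≥ 0.5) could be used as below; the file name and column names refer to a hypothetical long-format table of mean MMR amplitudes, not to a released analysis script.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format table: one row per neonate and condition, with columns
# 'subject', 'ga_group' (35-40 weeks), 'condition' ('vocal'/'nonvocal'), and
# 'mmr' (mean amplitude, 150-400 ms, averaged over F3/F4, in microvolts).
df = pd.read_csv("mmr_mean_amplitudes.csv")

# Two-way mixed ANOVA: condition within subjects, GA group between subjects;
# effect sizes are returned as partial eta-squared.
aov = pg.mixed_anova(data=df, dv="mmr", within="condition",
                     subject="subject", between="ga_group", effsize="np2")
print(aov)

# Simple effects of condition within each GA group.
for ga, grp in df.groupby("ga_group"):
    print(ga)
    print(pg.pairwise_tests(data=grp, dv="mmr", within="condition", subject="subject"))
```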
Discussion
The current study elucidates a pivotal developmental change in neonatal emotional responsiveness by investigating their ability to perceive vocal emotions. The findings illuminate a distinct turning point at 37 weeks of gestational age (GA), representing the onset of heightened perceptual acuity for emotional vocal expressions. This change is particularly evident in the robust MMR to positive emotional vocal prosody. Significantly, the absence of this discrimination ability when acoustically matched control sounds were presented underscores the specificity of this developmental shift towards emotional voice processing. Our identification of the 37-week GA mark aligns with previous research, which indicated emotional sensitivity in term neonates born at or after 37 weeks of gestation (Cheng et al., 2012; Farroni et al., 2007; Mastropieri & Turkewitz, 1999; Zhang et al., 2019) and in preterm neonates (GA < 37 weeks) tested at term age (Kostilainen et al., 2020). Notably, our findings also reveal that neonates born before 37 weeks GA do not exhibit these emotional discrimination abilities.
In the final trimester of pregnancy, the human brain undergoes a period of rapid and continuous changes in neural structure and cognitive functions (Bayer et al., 1993; Clancy et al., 2007). Although there is no direct evidence to support the notion that preterm neonates can decode vocal emotions, they have displayed an aptitude for processing speech stimuli with social contexts. For example, neonates born at or after 29 weeks GA have shown a preference for infant-directed speech, characterized by a high pitch, exaggerated pitch modifications, and a slow rate (Butler et al., 2014). This preference has been associated with increased visual attention, heightened alertness (Eckerman et al., 1994), reduced heart rate (White-Traut et al., 1997), and enhanced speech differentiation in premature babies (Richard et al., 2022). Additionally, neonates born at or after 30 weeks GA have demonstrated an age-related increase in sensitivity to maternal voices (D Chorna et al., 2018; A. P. F. Key et al., 2012), leading to beneficial effects on cognitive and neurobehavioral development (Caskey et al., 2011; Picciolini et al., 2014; see Provenzi et al., 2018 for a review). These effects encompass improved feeding behaviors, heightened responsiveness (Katz, 1971; Krueger et al., 2010), enhanced weight gain (Zimmerman et al., 2013), and activated auditory cortical plasticity (Webb et al., 2015). Both infant-directed speech and maternal voices feature extensive pitch modulation, and the preference for these emotionally prosodic-like voices during the preterm stage may prepare the developing brain to discriminate vocal emotions at 37 weeks GA, as demonstrated in this study.
Traditionally, 37 weeks of gestation served as the benchmark for fetal maturity, and term infants born within the 37 to 41 weeks GA range were generally considered healthy, forming a homogenous group. Recent insights, however, have unveiled variations in physical and cognitive maturation within this 5-week span of full-term pregnancy. Research indicates that neonates born at 37-38 weeks GA face increased risks of neonatal mortality and pediatric respiratory, neurologic, and endocrine morbidities compared to those born at 39-41 weeks GA (Cahen-Peretz et al., 2022; Clark et al., 2009; Edwards et al., 2013; Ghartey et al., 2012; Paz Levy et al., 2017; Sengupta et al., 2013; Tita et al., 2009). Furthermore, a dose-response relationship inversely linking GA to the risk of developmental delay has been identified in infants from preterm to full-term births (Rose et al., 2013; Schonhaut et al., 2015). Early birth (34-38 weeks GA) has also been found to have a detrimental impact on child development and academic achievement during school age (Bentley et al., 2016; Chan et al., 2016; Dong et al., 2012; Hedges et al., 2021; Murray et al., 2017; Nielsen et al., 2019; Noble et al., 2012). Consequently, the definition of a full-term pregnancy has been narrowed to a two-week window starting at 39 weeks (Spong, 2013), with nonmedically indicated deliveries between 37 and 38 weeks of gestation discouraged (ACOG Committee Opinion, 2019). While accumulating evidence underscores the adverse outcomes associated with birth at the traditional 37-week threshold, our findings contribute to the limited body of research suggesting that neonatal social-emotional functioning may reach a developmental milestone at 37 weeks GA.
When interpreting the current findings, it is important to consider that the nonvocal control sounds utilized in this study may not have adequately eliminated all low-level acoustic properties that could aid neonatal discrimination. Specifically, while the nonvocal control counterparts retained the fundamental frequency (f0) of the emotional prosodic voices, they did not replicate the burst of energy associated with consonants. Consequently, it cannot be ruled out that neonates utilized consonant characteristics to discriminate the emotional prosodies conveyed by the disyllables. Additionally, the nonvocal sounds were generated using a simple filtering method, so certain vocal-like components persisted in these control sounds. These limitations of the control sound materials should be addressed in future replications and extensions of this work.
Furthermore, there is a compelling need for future investigations to expand upon the present findings by incorporating a broader array of emotional stimuli. Non-speech emotional vocalizations, such as laughter, crying, or retching, as well as natural emotional auditory cues like thunder, flowing water, hissing snakes, and bird calls, offer a rich spectrum of emotional materials that have been shown to engage the perceptual faculties of neonates and infants (Blasi et al., 2011; Erlich et al., 2013). This multifaceted approach could illuminate whether the developmental milestone observed at 37 weeks GA is specific to the processing of emotional prosodic speech and vocal expressions, or whether it extends to a broader range of both artificial and natural emotional auditory cues. Moreover, the use of non-speech emotional stimuli would help resolve the nature-nurture debate concerning the onset of emotional sensitivity at 37 weeks GA. The current finding of discrimination cannot be attributed solely to innate maturation, given that the auditory system becomes functional at the end of the second trimester of pregnancy, allowing exposure to spoken language in utero to influence the development of speech perception (DeCasper & Spence, 1986; Moon et al., 2013; Partanen et al., 2013). The ability to discriminate prosodic emotions starting at 37 weeks GA could therefore stem from additional in utero exposure to speech. Future work is needed to investigate prenatal learning definitively by utilizing emotional sounds that are rarely encountered in the prenatal environment. Finally, the inclusion of non-speech emotional materials may offer insights into the potential right lateralization of emotional processing in the neonatal brain. While prior studies (including some cited herein) have identified right lateralization for emotional processing in full-term neonates (Cheng et al., 2012; Zhang et al., 2019; see Bisiacchi & Cainelli, 2022 for a comprehensive review), the introduction of non-speech materials can help disentangle the confounding effects of left lateralization, which is associated with language processing and has been identified in both preterm (Mahmoudzadeh et al., 2013) and full-term neonates (Kotilahti et al., 2010; May et al., 2018; Peña et al., 2003; Sato et al., 2012; Vannasing et al., 2016; Wu et al., 2022).
A more comprehensive understanding of the developmental trajectory of emotional sensitivity has the potential to revolutionize decision-making in the final weeks of pregnancy and the identification of newborns at risk of emotional and neurodevelopmental disorders, particularly autism. Individuals with autism often exhibit atypical perceptual and neural processing of emotional information, including emotional prosodic voices (Kuhl et al., 2005; Lindström et al., 2018; Van Lancker et al., 1989; Wang et al., 2007; for comprehensive reviews, see Frühholz & Staib, 2017; Yeung, 2022). While previous studies have indicated that social-emotional behavioral indicators typically begin to demonstrate predictive power for autism from the second year of life (Gliga et al., 2014; Jones et al., 2014), brain functional indicators of emotional processing during infancy, especially within the first year of life, have already shown their predictive value (Ayoub et al., 2022; Clairmont et al., 2021; Molnar-Szakacs et al., 2021). For instance, infants subsequently diagnosed with autism displayed a smaller amplitude and shorter duration of the negative central (Nc) component at six months of age when viewing smiling faces compared to toys, a pattern not observed in infants who were not subsequently diagnosed (Jones et al., 2016). Additionally, while the Nc and P400 components were able to distinguish between smiling, fearful, and neutral facial expressions in typically developing 9-to-10-month-old infants, these EEG indicators failed to differentiate emotional faces in infants at high risk for autism (Di Lorenzo et al., 2021; A. P. Key et al., 2015). Moreover, it has been observed that infants at high risk for autism exhibit diminished activation in the fusiform gyrus and hippocampus compared to healthy controls when exposed to sad cries between the ages of 4 and 7 months (Blasi et al., 2015). The fusiform gyrus, a region crucial for face perception and memory, and the hippocampus, which plays a significant role in general learning and memory processes, are both implicated in this phenomenon (Lisman et al., 2017; Rossion et al., 2024). Consequently, the hippocampus-fusiform network, essential for the development of social cognitive skills, may serve as a predictive indicator for the onset of autism. Building upon these existing studies, our research suggests that the neonatal MMR in response to emotional voices could potentially serve as an early screening indicator for autism. However, we acknowledge that the current data are not yet robust enough to fully support this recommendation. We advocate for future longitudinal studies with more rigorous experimental materials and designs to further explore the predictive role of this neurophysiological indicator, which could ultimately facilitate early diagnosis and intervention for social-emotional disorders.
In summary, this study highlights a pivotal developmental change – the emergence of heightened perceptual acuity for emotional vocal expressions at 37 weeks GA. It is important to note that neonates’ perceptual sensitivity at this stage is unlikely to be associated with a deep conceptual understanding of emotions. Nevertheless, this unique discrimination ability in early life may serve as a foundational building block for the later development of emotional and social cognition. Overall, this work deepens our understanding of neonatal social and emotional development and suggests a potential avenue for supporting early diagnosis of neurodevelopmental disorders, where early detection is critical for effective intervention.
Materials and methods
Subjects
The research was approved by the Ethical Committee of Peking University First Hospital and registered with the Chinese Clinical Trial Registry (ChiCTR2300069898). Initially, we planned to include 120 healthy neonates (60 boys) in the data analysis. These participants were categorized into six groups based on their GA, specifically 35, 36, 37, 38, 39, and 40 weeks, with each group comprising twenty subjects. For instance, the GA35 group comprised neonates with GA ranging from 35 weeks plus 0 days to 35 weeks plus 6 days. To obtain these 120 valid datasets, we ultimately recruited 198 neonates; the remainder were excluded because of non-cooperation of newborns (n = 75) or technical issues (n = 3). Specifically, 11, 12, 11, 14, 13, and 14 neonates were excluded from data analysis in the GA35, GA36, GA37, GA38, GA39, and GA40 groups, respectively, due to crying or irritable movements during EEG device preparation and EEG recording.
The mothers of these neonates were monolingual and nurtured their babies in a native-language environment. All neonates participated in the experiment within the first 24 hours after birth, with a mean ± standard deviation of 17.8 ± 0.4 hours for the 120 valid datasets.
Prior to data collection, written consent was obtained from the parents or legal guardians of all participating neonates for access to clinical information and EEG data collection for scientific purposes. While sample sizes were not statistically predetermined, including twenty subjects per GA group represented the maximum feasible number within a two-year period at Peking University First Hospital.
All subjects met the following inclusion criteria: 1) normal birth weight for their GA; 2) absence of clinical symptoms at the time of EEG recording; 3) no previous sedation or medication prior to EEG recording; and 4) normal hearing results in an evoked otoacoustic emissions test (ILO88 Dpi, Otodynamics Ltd, Hatfield, UK). Additionally, subjects did not exhibit any of the following neurological or metabolic disorders: 1) hypoxic-ischemic encephalopathy, 2) intraventricular hemorrhage or white matter damage detected by cranial ultrasound, 3) congenital malformation, 4) central nervous system infection, 5) metabolic disorder, 6) clinical evidence of seizures, and 7) signs of asphyxia.
Stimuli
A total of 85 possible combinations of consonants and vowels, which are standard in Chinese (Lee & Zee, 2003) and common to most human languages (e.g., ‘dada’ and ‘keke’), were recorded by a native Chinese-speaking adult woman speaking the Peking dialect. Each disyllable was recorded with four repetitions, two using a happy prosody and two with a neutral prosody, resulting in a total of 340 disyllables (85 × 4). Twenty Chinese undergraduate students (10 men, mean age 20.1 ± 1.2 years) performed a discrimination task, distinguishing between happy and neutral stimuli, and rated the affective content of these stimuli.
In the affective rating task, participants assessed the intensity of happiness (on a 9-point scale ranging from 1, the least happy, to 9, the happiest) and the valence (on a 9-point scale ranging from 1, the most negative, through 5, neutral, to 9, the most positive) of the 340 stimuli. This study selected five pairs of happy and neutral disyllables that shared the same consonant-monophthong combinations and achieved 100% discrimination accuracy in the discrimination task (i.e., ‘dada’, ‘dudu’, ‘gege’, ‘keke’, and ‘tutu’ in Chinese Pinyin). Paired-samples t-tests demonstrated that the happy disyllables were rated as significantly happier (t(4) = 24.70, p < 0.001; happy intensity: 7.49 ± 0.20 versus 3.53 ± 0.19) and more positive in valence (t(4) = 18.55, p < 0.001; valence: 7.11 ± 0.12 versus 4.91 ± 0.24) than their neutral counterparts. These ten disyllables were then standardized to have the same mean intensity and a duration of 400 ms using Adobe Audition (v.2022; Adobe Systems Inc., San Jose, CA).
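For reference, the item-level paired-samples t-test described above (five disyllable pairs, hence df = 4) can be run with SciPy. The rating values below are placeholders for illustration only, not the actual item means.

```python
from scipy import stats

# Placeholder item-level mean ratings for the five selected disyllable pairs
# (illustrative values only; the actual item means are not reproduced here).
happy_intensity = [7.3, 7.5, 7.6, 7.4, 7.6]
neutral_intensity = [3.4, 3.6, 3.5, 3.7, 3.5]

t, p = stats.ttest_rel(happy_intensity, neutral_intensity)  # paired test, df = 4
print(f"t(4) = {t:.2f}, p = {p:.4g}")
```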
To ensure that neonates were discriminating based on prosodic cues containing emotional content rather than low-level acoustic properties, we employed a method similar to Cheng et al. (2012) and created a separate set of nonvocal control sounds. We hypothesized that the fundamental frequency (f0) alone does not convey emotional content in voices and that neonates require multiple other prosodic cues embedded in higher-frequency components to discern emotions, as suggested by Cheng et al. (2012) and Zhang et al. (2014). As a result, ten nonvocal sounds were generated to match the f0 contours and temporal envelopes of their corresponding vocal sounds. This matching process was carried out using Matlab (v.2021b; MathWorks, Inc., Natick, MA). Specifically, we initially applied a zero-phase filter with a bandpass of mean f0 ± 150 Hz to obtain f0-matched versions of the prosodic voices. Subsequently, a normalization procedure was implemented to ensure that the intensity of each pair of vocal and nonvocal sounds was equal. Oscillograms and spectrograms of the auditory stimuli utilized in this study are presented in Figure 3, generated using Praat (v.6.3.17, www.praat.org). All auditory stimuli, along with their pronunciations and rating scores, are available in the supplemental material labeled “experimental sounds”.
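The control sounds were constructed in Matlab; a rough Python equivalent of the two steps described above (zero-phase band-pass filtering around the mean f0, followed by pairwise intensity matching) might look as follows. The function name, file names, and the fourth-order Butterworth design are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
import soundfile as sf
from scipy.signal import butter, filtfilt

def make_nonvocal_control(in_wav, out_wav, mean_f0, half_width=150.0):
    """Band-pass a prosodic voice around its mean f0 (zero-phase) and match its
    RMS intensity to the original, approximating the control-sound construction."""
    x, fs = sf.read(in_wav)
    if x.ndim > 1:                       # average channels if the file is stereo
        x = x.mean(axis=1)
    low = max(mean_f0 - half_width, 1.0)
    high = mean_f0 + half_width
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    y = filtfilt(b, a, x)                # filtfilt applies the filter with zero phase
    y *= np.sqrt(np.mean(x ** 2)) / np.sqrt(np.mean(y ** 2))  # equalise RMS intensity
    sf.write(out_wav, y, fs)

# e.g., make_nonvocal_control("dada_happy.wav", "dada_happy_control.wav", mean_f0=250.0)
```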
To optimize the diversity of our material and increase the generalizability of our results, we utilized ten sets of sounds. Each set included both positive and neutral prosodic voices, along with their respective nonvocal counterparts. These auditory materials were distributed randomly and evenly within each neonatal GA group, ensuring that each set was presented twice (to two individuals) in each GA group.
Procedure
The sound stimuli were presented in two blocks: the vocal and nonvocal conditions, utilizing the odd-ball paradigm. The standard stimulus was either a vocal or nonvocal neutral sound, while the deviant stimulus was either a vocal or nonvocal happy sound. Each block consisted of 240 standard stimuli (80%) and 60 deviant stimuli (20%). The standard and deviant stimuli were presented randomly, ensuring that each deviant stimulus was followed by at least two standard stimuli. Each sound had a duration of 400 ms, and the inter-trial interval was silent, with varying durations ranging from 500 to 700 ms. Each block lasted for 5 minutes, and the order of the vocal and nonvocal blocks was counterbalanced across participants. A 5-minute break separated the two blocks, resulting in a total EEG recording duration of 15 minutes.
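As an illustration of this trial structure, a constrained-random sequence satisfying the 80/20 ratio and the rule that every deviant is followed by at least two standards could be generated as sketched below. The study's actual randomization procedure is not specified beyond these constraints, so the packaging-and-shuffling strategy here is only one plausible realization.

```python
import random

def oddball_sequence(n_standard=240, n_deviant=60, min_gap=2, seed=None):
    """Pseudo-random oddball sequence ('S' = standard, 'D' = deviant) in which
    every deviant is followed by at least `min_gap` standards. Each deviant is
    packaged with the standards that must follow it, and the resulting blocks
    are shuffled together with the left-over standards."""
    rng = random.Random(seed)
    blocks = [["D"] + ["S"] * min_gap] * n_deviant
    blocks += [["S"]] * (n_standard - n_deviant * min_gap)
    rng.shuffle(blocks)
    return [trial for block in blocks for trial in block]

def jittered_itis(n_trials, lo=0.5, hi=0.7, seed=None):
    """Silent inter-trial intervals drawn uniformly from 500-700 ms (in seconds)."""
    rng = random.Random(seed)
    return [rng.uniform(lo, hi) for _ in range(n_trials)]

sequence = oddball_sequence(seed=1)      # 300 trials: 240 standards, 60 deviants
itis = jittered_itis(len(sequence), seed=1)
```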
The experiment took place in the neonatal ward of Peking University First Hospital. Neonates were transported to a designated testing room for EEG recording as soon as their condition stabilized after birth. In this room, they were separated from their mothers to minimize any natural exposure to speech or speech stimuli other than those utilized in the experiment. Auditory stimuli were presented through a pair of loudspeakers positioned approximately 30 cm away from the neonates’ left and right ears, at a sound pressure level of 55 to 60 dB, with an average background noise intensity level of 30 dB. EEG recording was conducted while the neonates were in a natural sleep state (Cheour et al., 2002; Wu et al., 2022).
Data recording and analysis
We recorded brain electrical activity using an electrical amplifier (NeuSen.W32, Neuracle, Changzhou, China) at a sampling frequency of 1000 Hz. Initially, the data were recorded online with reference to the left mastoid and subsequently re-referenced offline to the average of the left and right mastoids. The ground electrode was positioned on the forehead. For the recording of vertical eye movements, an electrooculogram (EOG) electrode was positioned beneath the left eye, while another was placed at the left external canthus for recording horizontal eye movements. Throughout the recording process, electrode impedances were meticulously maintained below 10 kΩ.
We eliminated ocular artifacts from the EEG data using a regression procedure implemented in NeuroScan software (Scan 4.3, NeuroScan, Herndon, VA). Subsequently, we employed Matlab (v.2021b; MathWorks, Inc., Natick, MA) for data processing and result presentation. The EOG-corrected EEG data were then filtered offline with a half-amplitude cutoff range of 0.01–30 Hz and segmented from 200 ms before sound onset until 1000 ms after sound onset. Epochs were baseline-corrected relative to the mean voltage during the 200 ms preceding sound onset. Any epochs containing artifacts with peak deflections exceeding ±200 μV were rejected (see also Biro et al., 2021; Di Lorenzo et al., 2021; Kumaravel et al., 2022), followed by averaging for each experimental condition. The number of valid epochs did not differ significantly across neonatal groups (please refer to the supplemental file subtitled “Epoch number”). The time window for the MMR component was pre-defined as 150 ms to 400 ms after sound onset, based on prior knowledge (Cheour et al., 2002), and utilized throughout the data analysis.
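For readers who wish to approximate this pipeline with open-source tools, the re-referencing, filtering, epoching, artifact rejection, and condition averaging steps could be sketched with MNE-Python as below. The original analysis used NeuroScan and Matlab, so the file name, event codes, and channel labels here are illustrative, and MNE's `reject` criterion is peak-to-peak, which only approximates the ±200 μV peak-deflection rule used in the study.

```python
import mne

# Illustrative file name, event codes, and channel labels; not the study's files.
raw = mne.io.read_raw_fif("neonate_raw.fif", preload=True)
raw.set_eeg_reference(["M1", "M2"])          # average of left and right mastoids
raw.filter(l_freq=0.01, h_freq=30.0)         # 0.01-30 Hz band-pass

events = mne.find_events(raw)                # assumes a stimulus trigger channel
event_id = {"standard/neutral": 1, "deviant/happy": 2}
epochs = mne.Epochs(raw, events, event_id,
                    tmin=-0.2, tmax=1.0,     # -200 to 1000 ms around sound onset
                    baseline=(-0.2, 0.0),    # 200 ms pre-stimulus baseline
                    reject=dict(eeg=200e-6), # peak-to-peak criterion approximating
                    preload=True)            # the ±200 µV rejection rule

standard_erp = epochs["standard/neutral"].average()
deviant_erp = epochs["deviant/happy"].average()
mmr = mne.combine_evoked([deviant_erp, standard_erp], weights=[1, -1])
```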
We performed statistical analyses using SPSS Statistics (v. 20.0; IBM, Somers, USA). Descriptive data are reported as mean ± standard deviation. The significance level was set at 0.05. We applied the Greenhouse-Geisser correction for ANOVA tests when deemed appropriate. Post-hoc tests for significant main effects were conducted using the Bonferroni method. Significant interactions were explored through simple effects models. We reported partial eta-squared as a measure of effect size in ANOVA tests.
Acknowledgements
This study was funded by the National High Level Hospital Clinical Research Funding (High Quality Clinical Research Project of Peking University First Hospital, 2022CR68), the National Natural Science Foundation of China (32271102; 31920103009), the Major Project of National Social Science Foundation (20&ZD153), Shenzhen-Hong Kong Institute of Brain Science (2024SHIBS0004), and the National Key Research and Development Program of China (2021YFC2700700).
Data availability
The experimental materials are accessible as supplementary files for download. EEG epochs from all 120 datasets can be downloaded from https://osf.io/a3xzy/. The data and materials from this study are available for academic purposes free of charge, on the condition that this article is properly cited.
References
- ACOG committee opinion no. 765: Avoidance of nonmedically indicated early-term deliveries and associated neonatal morbidities. Obstetrics & Gynecology 133:e156–e163. https://doi.org/10.1097/AOG.0000000000003076
- Neuroimaging techniques as descriptive and diagnostic tools for infants at risk for autism spectrum disorder: A systematic review. Brain Sciences 12. https://doi.org/10.3390/brainsci12050602
- Timetables of neurogenesis in the human brain based on experimentally determined patterns in the rat. Neurotoxicology 14:83–144.
- Planned birth before 39 weeks and child development: A population-based study. Pediatrics 138. https://doi.org/10.1542/peds.2016-2002
- Frontal EEG asymmetry in infants observing separation and comforting events: The role of infants’ attachment relationship. Developmental Cognitive Neuroscience 48. https://doi.org/10.1016/j.dcn.2021.100941
- Structural and functional brain asymmetries in the early phases of life: A scoping review. Brain Structure and Function 227:479–496. https://doi.org/10.1007/s00429-021-02256-1
- Atypical processing of voice sounds in infants at risk for autism spectrum disorder. Cortex 71:122–133. https://doi.org/10.1016/j.cortex.2015.06.015
- Early specialization for voice and emotion processing in the infant brain. Current Biology 21:1220–1224. https://doi.org/10.1016/j.cub.2011.06.009
- Preference for infant-directed speech in preterm infants. Infant Behavior and Development 37:505–511. https://doi.org/10.1016/j.infbeh.2014.06.007
- Long-term respiratory outcomes in early-term born offspring: A systematic review and meta-analysis. American Journal of Obstetrics and Gynecology MFM 4. https://doi.org/10.1016/j.ajogmf.2022.100570
- Infant discrimination of naturalistic emotional expressions: The role of face and voice. Child Development 59:604–616.
- Importance of parent talk on the development of preterm infant vocalizations. Pediatrics 128:910–916. https://doi.org/10.1542/peds.2011-0609
- Long-term cognitive and school outcomes of late-preterm and early-term births: A systematic review. Child: Care, Health and Development 42:297–312. https://doi.org/10.1111/cch.12320
- Voice and emotion processing in the human neonatal brain. Journal of Cognitive Neuroscience 24:1411–1419. https://doi.org/10.1162/jocn_a_00214
- Development of language-specific phoneme representations in the infant brain. Nature Neuroscience 1:351–353. https://doi.org/10.1038/1561
- Speech sounds learned by sleeping newborns. Nature 415:599–600. https://doi.org/10.1038/415599b
- Newborns’ neural processing of native vowels reveals directional asymmetries. Developmental Cognitive Neuroscience 52. https://doi.org/10.1016/j.dcn.2021.101023
- The value of brain imaging and electrophysiological testing for early screening of autism spectrum disorder: A systematic review. Frontiers in Neuroscience 15. https://doi.org/10.3389/fnins.2021.812946
- Extrapolating brain development from experimental species to humans. Neurotoxicology 28:931–937. https://doi.org/10.1016/j.neuro.2007.01.014
- Neonatal and maternal outcomes associated with elective term delivery. American Journal of Obstetrics and Gynecology 200:156–156. https://doi.org/10.1016/j.ajog.2008.08.068
- Feasibility of event-related potential (ERP) biomarker use to study effects of mother’s voice exposure on speech sound differentiation of preterm infants. Developmental Neuropsychology 43:123–134. https://doi.org/10.1080/87565641.2018.1433671
- Prenatal maternal speech influences newborns’ perception of speech sounds. Infant Behavior and Development 9:133–150. https://doi.org/10.1016/0163-6383(86)90025-1
- Is it fear? Similar brain responses to fearful and neutral faces in infants with a heightened likelihood for autism spectrum disorder. Journal of Autism and Developmental Disorders 51:961–972. https://doi.org/10.1007/s10803-020-04560-x
- A systematic review and meta-analysis of long-term development of early term infants. Neonatology 102:212–221. https://doi.org/10.1159/000338099
- Premature newborns as social partners before term age. Infant Behavior and Development 17:55–70. https://doi.org/10.1016/0163-6383(94)90022-1
- Respiratory distress of the term newborn infant. Paediatric Respiratory Reviews 14:29–36. https://doi.org/10.1016/j.prrv.2012.02.002
- The perception of facial expressions in newborns. The European Journal of Developmental Psychology 4:2–13. https://doi.org/10.1080/17405620601046832
- Approval and disapproval: Infant responsiveness to vocal affect in familiar and unfamiliar languages. Child Development 64:657–674.
- Neurocircuitry of impaired affective sound processing: A clinical disorders perspective. Neuroscience and Biobehavioral Reviews 83:516–524. https://doi.org/10.1016/j.neubiorev.2017.09.009
- Neonatal respiratory morbidity in the early term delivery. American Journal of Obstetrics and Gynecology 207:292–292. https://doi.org/10.1016/j.ajog.2012.07.022
- From early markers to neuro-developmental mechanisms of autism. Developmental Review 34:189–207. https://doi.org/10.1016/j.dr.2014.05.003
- What sleeping babies hear: A functional MRI study of interparental conflict and infants’ emotion processing. Psychological Science 24:782–789. https://doi.org/10.1177/0956797612458803
- The developmental origins of voice processing in the human brain. Neuron 65:852–858. https://doi.org/10.1016/j.neuron.2010.03.001
- Gestational age at term and educational outcomes at age nine. Pediatrics 148. https://doi.org/10.1542/peds.2020-021287
- Developmental pathways to autism: A review of prospective studies of infants at risk. Neuroscience and Biobehavioral Reviews 39:1–33. https://doi.org/10.1016/j.neubiorev.2013.12.001
- Reduced engagement with social stimuli in 6-month-old infants with later autism spectrum disorder: A longitudinal prospective study of infants at high familial risk. Journal of Neurodevelopmental Disorders 8. https://doi.org/10.1186/s11689-016-9139-8
- Auditory stimulation and developmental behavior of the premature infant. Nursing Research 20.
- Influence of gestational age and postnatal age on speech sound processing in NICU infants. Psychophysiology 49:720–731. https://doi.org/10.1111/j.1469-8986.2011.01353.x
- Positive affect processing and joint attention in infants at high risk for autism: An exploratory study. Journal of Autism and Developmental Disorders 45:4051–4062. https://doi.org/10.1007/s10803-014-2191-x
- Neural processing of changes in phonetic and emotional speech sounds and tones in preterm infants at term age. International Journal of Psychophysiology 148:111–118. https://doi.org/10.1016/j.ijpsycho.2019.10.009
- Hemodynamic responses to speech and music in newborn infants. Human Brain Mapping 31:595–603. https://doi.org/10.1002/hbm.20890
- Maternal voice and short-term outcomes in preterm infants. Developmental Psychobiology 52:205–212. https://doi.org/10.1002/dev.20426
- Links between social and linguistic processing of speech in preschool children with autism: Behavioral and electrophysiological measures. Developmental Science 8:F1–F12. https://doi.org/10.1111/j.1467-7687.2004.00384.x
- NEAR: An artifact removal pipeline for human newborn EEG data. Developmental Cognitive Neuroscience 54. https://doi.org/10.1016/j.dcn.2022.101068
- Human voice perception. Current Biology 21:R143–145. https://doi.org/10.1016/j.cub.2010.12.033
- Standard Chinese (Beijing). Journal of the International Phonetic Association 33:109–112. https://doi.org/10.1017/S0025100303001208
- The brain basis of emotion: A meta-analytic review. The Behavioral and Brain Sciences 35:121–143. https://doi.org/10.1017/S0140525X11000446
- Atypical perceptual and neural processing of emotional prosodic changes in children with autism spectrum disorders. Clinical Neurophysiology 129:2411–2420. https://doi.org/10.1016/j.clinph.2018.08.018
- Viewpoints: How the hippocampus contributes to memory, navigation and cognition. Nature Neuroscience 20:1434–1447. https://doi.org/10.1038/nn.4661
- Syllabic discrimination in premature human infants prior to complete formation of cortical layers. Proceedings of the National Academy of Sciences of the United States of America 110:4846–4851. https://doi.org/10.1073/pnas.1212220110
- Prenatal experience and neonatal responsiveness to vocal expressions of emotion. Developmental Psychobiology 35:204–214. https://doi.org/10.1002/(sici)1098-2302(199911)35:3<204::aid-dev5>3.0.co;2-v
- The specificity of the neural response to speech at birth. Developmental Science 21. https://doi.org/10.1111/desc.12564
- Neuroimaging markers of risk and pathways to resilience in autism spectrum disorder. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging 6:200–210. https://doi.org/10.1016/j.bpsc.2020.06.017
- Language experienced in utero affects vowel perception after birth: A two-country study. Acta Paediatrica 102:156–160. https://doi.org/10.1111/apa.12098
- Long term cognitive outcomes of early term (37-38 weeks) and late preterm (34-36 weeks) births: A systematic review. Wellcome Open Research 2. https://doi.org/10.12688/wellcomeopenres.12783.1
- The mismatch negativity (MMN) in basic research of central auditory processing: A review. Clinical Neurophysiology 118:2544–2590. https://doi.org/10.1016/j.clinph.2007.04.026
- Long-term cognition and behavior in children born at early term gestation: A systematic review. Acta Obstetricia Et Gynecologica Scandinavica 98:1227–1234. https://doi.org/10.1111/aogs.13644
- Academic achievement varies with gestational age among children born at term. Pediatrics 130:e257–264. https://doi.org/10.1542/peds.2011-2157
- Learning-induced neural plasticity of speech processing before birth. Proceedings of the National Academy of Sciences of the United States of America 110:15145–50. https://doi.org/10.1073/pnas.1302159110
- Evidence that children born at early term (37-38 6/7 weeks) are at increased risk for diabetes and obesity-related disorders. American Journal of Obstetrics and Gynecology 217:588–588. https://doi.org/10.1016/j.ajog.2017.07.015
- Sounds and silence: An optical topography study of language recognition at birth. Proceedings of the National Academy of Sciences of the United States of America 100:11702–11705. https://doi.org/10.1073/pnas.1934290100
- Early exposure to maternal voice: Effects on preterm infants development. Early Human Development 90:287–292. https://doi.org/10.1016/j.earlhumdev.2014.03.003
- Do mothers sound good? A systematic review of the effects of maternal voice exposure on preterm infants’ development. Neuroscience and Biobehavioral Reviews 88:42–50. https://doi.org/10.1016/j.neubiorev.2018.03.009
- Randomized trial to increase speech sound differentiation in infants born preterm. The Journal of Pediatrics 241:103–108. https://doi.org/10.1016/j.jpeds.2021.10.035
- Developmental scores at 1 year with increasing gestational age, 37-41 weeks. Pediatrics 131:e1475–1481. https://doi.org/10.1542/peds.2012-3215
- The anterior fusiform gyrus: The ghost in the cortical face machine. Neuroscience & Biobehavioral Reviews 158. https://doi.org/10.1016/j.neubiorev.2024.105535
- Cerebral hemodynamics in newborn infants exposed to speech sounds: A whole-head optical topography study. Human Brain Mapping 33:2092–2103. https://doi.org/10.1002/hbm.21350
- Gestational age and developmental risk in moderately and late preterm and early term infants. Pediatrics 135:e835–841. https://doi.org/10.1542/peds.2014-1957
- Adverse neonatal outcomes associated with early-term birth. JAMA Pediatrics 167:1053–1059. https://doi.org/10.1001/jamapediatrics.2013.2581
- Infants’ listening preferences: Baby talk or happy talk? Infancy 3:365–394. https://doi.org/10.1207/S15327078IN0303_5
- Do infants discriminate non-linguistic vocal expressions of positive emotions? Cognition and Emotion 31:298–311. https://doi.org/10.1080/02699931.2015.1108904
- Defining “term” pregnancy: Recommendations from the defining “term” pregnancy workgroup. JAMA 309:2445–2446. https://doi.org/10.1001/jama.2013.6235
- An extensive pattern of atypical neural speech-sound discrimination in newborns at risk of dyslexia. Clinical Neurophysiology 130:634–646. https://doi.org/10.1016/j.clinph.2019.01.019
- Timing of elective repeat cesarean delivery at term and neonatal outcomes. The New England Journal of Medicine 360:111–120. https://doi.org/10.1056/NEJMoa0803267
- Is visual reference necessary? Contributions of facial versus vocal cues in 12-month-olds’ social referencing behavior. Developmental Science 7:261–269. https://doi.org/10.1111/j.1467-7687.2004.00344.x
- Recognition of emotional-prosodic meanings in speech by autistic, schizophrenic, and normal children. Developmental Neuropsychology 5:207–226. https://doi.org/10.1080/87565648909540433
- Distinct hemispheric specializations for native and non-native languages in one-day-old newborns identified by fNIRS. Neuropsychologia 84:63–69. https://doi.org/10.1016/j.neuropsychologia.2016.01.038
- Infancy and early childhood maturation of neural auditory change detection and its associations to familial dyslexia risk. Clinical Neurophysiology 137:159–176. https://doi.org/10.1016/j.clinph.2022.03.005
- Discrimination of vocal expressions by young infants. Infant Behavior and Development 6:491–498. https://doi.org/10.1016/S0163-6383(83)90331-4
- Reading affect in the face and voice: Neural correlates of interpreting communicative intent in children and adolescents with autism spectrum disorders. Archives of General Psychiatry 64:698–708. https://doi.org/10.1001/archpsyc.64.6.698
- Mother’s voice and heartbeat sounds elicit auditory plasticity in the human brain before full gestation. Proceedings of the National Academy of Sciences of the United States of America 112:3152–3157. https://doi.org/10.1073/pnas.1414924112
- Responses of preterm infants to unimodal and multimodal sensory intervention. Pediatric Nursing 23:169–175.
- Newborn infants detect the beat in music. Proceedings of the National Academy of Sciences of the United States of America 106:2468–2471. https://doi.org/10.1073/pnas.0809035106
- Newborn infants can organize the auditory world. Proceedings of the National Academy of Sciences of the United States of America 100:11812–11815. https://doi.org/10.1073/pnas.2031891100
- Rapid learning of a phonemic discrimination in the first hours of life. Nature Human Behaviour 6. https://doi.org/10.1038/s41562-022-01355-1
- A systematic review and meta-analysis of facial emotion recognition in autism spectrum disorder: The specificity of deficits and the role of task characteristics. Neuroscience and Biobehavioral Reviews 133. https://doi.org/10.1016/j.neubiorev.2021.104518
- Near-infrared spectroscopy reveals neural perception of vocal emotions in human neonates. Human Brain Mapping 40. https://doi.org/10.1002/hbm.24534
- Discrimination of fearful and angry emotional voices in sleeping human neonates: A study of the mismatch brain responses. Frontiers in Behavioral Neuroscience 8. https://doi.org/10.3389/fnbeh.2014.00422
- Development of the neural processing of vocal emotion during the first year of life. Child Neuropsychology 27:333–350. https://doi.org/10.1080/09297049.2020.1853090
- Weight gain velocity in very low-birth-weight infants: Effects of exposure to biological maternal sounds. American Journal of Perinatology 30:863–870. https://doi.org/10.1055/s-0033-1333669
Copyright
© 2024, Hou et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.