Towards a more informative representation of the fetal-neonatal brain connectome using variational autoencoder
Abstract
Recent advances in functional magnetic resonance imaging (fMRI) have helped elucidate previously inaccessible trajectories of early-life prenatal and neonatal brain development. To date, the interpretation of fetal-neonatal fMRI data has relied on linear analytic models, as is typical for adult neuroimaging data. However, unlike the adult brain, the fetal and newborn brain develops extraordinarily rapidly, far outpacing any other period of brain development across the lifespan. Consequently, conventional linear computational models may not adequately capture these accelerated and complex neurodevelopmental trajectories during this critical period along the prenatal-neonatal continuum. To obtain a nuanced understanding of fetal-neonatal brain development, including non-linear growth, we developed, for the first time, quantitative, systems-wide representations of brain activity in a large sample (>500) of fetuses, preterm, and full-term neonates using an unsupervised deep generative model called a variational autoencoder (VAE), a model previously shown to be superior to linear models in representing complex resting-state fMRI (rsfMRI) data in healthy adults. Here, we demonstrated that non-linear brain features, i.e., latent variables, derived with the VAE pretrained on rsfMRI of human adults carried important individual neural signatures, leading to improved representation of prenatal-neonatal brain maturational patterns and more accurate and stable age prediction in the neonate cohort compared to linear models. Using the VAE decoder, we also revealed distinct functional brain networks spanning the sensory and default mode networks. Using the VAE, we can reliably capture and quantify complex, non-linear fetal-neonatal functional neural connectivity, laying a critical foundation for detailed mapping of healthy and aberrant functional brain signatures that have their origins in fetal life.
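For illustration, the overall approach described in the abstract (encoding rsfMRI frames into VAE latent variables and predicting age from those latents) can be sketched in a few lines of Python. This is a minimal, hypothetical sketch, not the authors' released pipeline: the encoder interface (vae.encode), the loader (load_pretrained_vae), the per-subject data objects, and the ridge-regression age predictor are all illustrative assumptions; the actual pretrained model and code are in the rsfMRI-VAE repository listed under Data availability.

```python
# Hypothetical sketch of the latent-feature age-prediction idea from the abstract.
# Names such as vae.encode and load_pretrained_vae are illustrative assumptions,
# not the published rsfMRI-VAE API.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict


def latent_features(vae: torch.nn.Module, frames: torch.Tensor) -> np.ndarray:
    """Encode one scan's rsfMRI frames (time x vertices) and summarize it by the
    temporal mean and standard deviation of each latent variable."""
    with torch.no_grad():
        z_mean, _ = vae.encode(frames)  # assumed encoder interface returning (mu, logvar)
    z = z_mean.cpu().numpy()
    return np.concatenate([z.mean(axis=0), z.std(axis=0)])


# Illustrative usage (placeholders, not the released pipeline):
# vae = load_pretrained_vae()                                    # hypothetical loader
# X = np.stack([latent_features(vae, s) for s in subject_scans])  # one row per subject
# y = np.array(postmenstrual_age_weeks)                           # age at scan, in weeks
# y_hat = cross_val_predict(Ridge(alpha=1.0), X, y, cv=10)
# print("Mean absolute error (weeks):", np.abs(y_hat - y).mean())
```

A simple cross-validated ridge regression is used here only as a stand-in predictor; the paper's own age-prediction models and evaluation procedure may differ.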
Data availability
Data from the Children's National cohort (or DBI dataset) are accessible here: https://doi.org/10.5061/dryad.cvdncjt6n. The Developing Human Connectome Project dataset (dHCP dataset) is available here: http://www.developingconnectome.org. The source code, model, and documentation for the VAE described in this paper are publicly available at https://github.com/libilab/rsfMRI-VAE.
- Towards A More Informative Representation of the Fetal-Neonatal Brain Connectome using Variational Autoencoder. Dryad Digital Repository, doi:10.5061/dryad.cvdncjt6n.
Article and author information
Author details
Funding
National Heart, Lung, and Blood Institute (R01 HL116585-01)
- Catherine Limperopoulos
Canadian Institutes of Health Research (MOP-81116)
- Catherine Limperopoulos
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: All experiments were conducted under the regulations and guidelines approved by the Institutional Review Board (IRB) of Children's National (Study ID: Pro00013618); written informed consent was obtained from each pregnant woman who participated in the study.
Copyright
© 2023, Kim et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 681 views
- 125 downloads
- 7 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Further reading
- Neuroscience
Recent studies suggest that calcitonin gene-related peptide (CGRP) neurons in the parabrachial nucleus (PBN) represent aversive information and signal a general alarm to the forebrain. If CGRP neurons serve as a true general alarm, their activation would modulate both passive and active defensive behaviors depending on the magnitude and context of the threat. However, most prior research has focused on the role of CGRP neurons in passive freezing responses, with limited exploration of their involvement in active defensive behaviors. To address this, we examined the role of CGRP neurons in active defensive behavior using a predator-like robot programmed to chase mice. Our electrophysiological results revealed that CGRP neurons encode the intensity of aversive stimuli through variations in firing durations and amplitudes. Optogenetic activation of CGRP neurons during robot chasing elevated flight responses in both conditioning and retention tests, presumably by amplifying the perception of the threat as more imminent and dangerous. In contrast, animals with inactivated CGRP neurons exhibited reduced flight responses, even when the robot was programmed to appear highly threatening during conditioning. These findings expand the understanding of CGRP neurons in the PBN as a critical alarm system, capable of dynamically regulating active defensive behaviors by amplifying threat perception, ensuring adaptive responses to varying levels of danger.
- Neuroscience
Movie-watching is a central aspect of our lives and an important paradigm for understanding the brain mechanisms behind cognition as it occurs in daily life. Contemporary views of ongoing thought argue that the ability to make sense of events in the ‘here and now’ depends on the neural processing of incoming sensory information by auditory and visual cortex, which are kept in check by systems in association cortex. However, we currently lack an understanding of how patterns of ongoing thoughts map onto the different brain systems when we watch a film, partly because methods of sampling experience disrupt the dynamics of brain activity and the experience of movie-watching. Our study established a novel method for mapping thought patterns onto the brain activity that occurs at different moments of a film, which does not disrupt the time course of brain activity or the movie-watching experience. We found that at moments when experience sampling highlighted engagement with multi-sensory features of the film or thoughts with episodic features, regions of sensory cortex were more active and subsequent memory for events in the movie was better; by contrast, periods of intrusive distraction emerged when activity in regions of association cortex within the frontoparietal system was reduced. These results highlight the critical role sensory systems play in the multi-modal experience of movie-watching and provide evidence for the role of association cortex in reducing distraction when we watch films.