Cerebral chemoarchitecture shares organizational traits with brain structure and function
Abstract
Chemoarchitecture, the heterogeneous distribution of neurotransmitter transporter and receptor molecules, is a relevant component of structure-function relationships in the human brain. Here, we studied the organization of the receptome, a measure of interareal chemoarchitectural similarity derived from positron emission tomography (PET) imaging studies of 19 different neurotransmitter transporters and receptors. Nonlinear dimensionality reduction revealed three main spatial gradients of cortical chemoarchitectural similarity: a centro-temporal gradient, an occipito-frontal gradient, and a temporo-occipital gradient. In subcortical nuclei, chemoarchitectural similarity distinguished functional communities and delineated a striato-thalamic axis. Overall, the cortical receptome shared key organizational traits with functional and structural brain anatomy, with node-level correspondence to functional, microstructural, and diffusion MRI-based measures decreasing along a primary-to-transmodal axis. Relative to primary and paralimbic regions, unimodal and heteromodal regions showed higher receptomic diversification, possibly supporting functional flexibility.
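As a rough illustration of the pipeline summarized above, the sketch below builds a receptor-similarity ("receptome") matrix from parcellated PET maps and extracts its main spatial gradients with diffusion map embedding via BrainSpace. It is a minimal sketch rather than the authors' exact code: the `receptor_density` array is random placeholder data, and the parcellation size, kernel, and number of components are assumptions. In practice, the parcellated maps of the 19 transporters and receptors would come from the hansen_receptors repository listed under Data availability.

```python
# Minimal sketch, not the authors' exact pipeline: build a receptor-similarity
# ("receptome") matrix from parcellated PET maps and extract its main spatial
# gradients with diffusion map embedding (BrainSpace).
import numpy as np
from scipy.stats import zscore
from brainspace.gradient import GradientMaps

# Placeholder input: regions x receptors matrix of parcellated tracer densities.
# In practice these would be the 19 transporter/receptor maps from the
# hansen_receptors repository; the dimensions here are arbitrary.
rng = np.random.default_rng(0)
receptor_density = rng.normal(size=(200, 19))

# z-score each receptor map across regions, then correlate regions pairwise
z = zscore(receptor_density, axis=0)
receptome = np.corrcoef(z)                      # regions x regions similarity

# Nonlinear dimensionality reduction: diffusion map embedding on a
# normalized-angle affinity, keeping the first three gradients
gm = GradientMaps(n_components=3, approach="dm",
                  kernel="normalized_angle", random_state=0)
gm.fit(receptome)
gradients = gm.gradients_                       # regions x 3 gradient loadings
print(gradients.shape)
```

The normalized-angle kernel and diffusion map approach follow common connectome-gradient practice and BrainSpace defaults; the component count and row-sparsification settings are analysis choices, not values taken from the paper.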
Data availability
All data and software used in this study are openly accessible. PET data are available at https://github.com/netneurolab/hansen_receptors. Functional connectivity (FC), structural connectivity (SC), and microstructure profile covariance (MPC) data are available at https://portal.conp.ca/dataset?id=projects/mica-mics. ENIGMA data are available through the ENIGMA Toolbox (https://github.com/MICA-MNI/ENIGMA). Meta-analytical functional activation data are available through Neurosynth (https://neurosynth.org/analyses/topics/v5-topics-50). The code used to perform the analyses can be found at https://github.com/CNG-LAB/cngopen/receptor_similarity. A brief illustrative sketch of combining these resources is given after the dataset list below.
- Mapping neurotransmitter systems to the structural and functional organization of the human neocortex (GitHub). https://doi.org/10.1101/2021.10.28.466336
- MICA-MICs: a dataset for Microstructure-Informed Connectomics (CONP). https://n2t.net/ark:/70798/d72xnk2wd397j190qv
- The ENIGMA Toolbox: multiscale neural contextualization of multisite neuroimaging datasets (GitHub). https://doi.org/10.1038/s41592-021-01186-4
- Large-scale automated synthesis of human functional neuroimaging data (Neurosynth). https://doi.org/10.1038/nmeth.1635
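As referenced in the data availability statement, the following is a hedged, hypothetical sketch of how these open resources could be combined: it loads a cortico-cortical functional connectivity matrix from the ENIGMA Toolbox (Desikan-Killiany parcellation, 68 regions) and computes node-level correspondence with a receptome as row-wise Spearman correlations. The receptome here is random placeholder data standing in for a matrix built in the same parcellation; this illustrates the type of node-level correspondence analysis mentioned in the Abstract, not the study's actual code.

```python
# Hedged illustration (not the study's code): node-level correspondence between a
# receptome and functional connectivity, computed as row-wise Spearman correlations.
import numpy as np
from scipy.stats import spearmanr
from enigmatoolbox.datasets import load_fc

# Openly available cortico-cortical FC (Desikan-Killiany, 68 regions)
fc_ctx, fc_ctx_labels, _, _ = load_fc()

# Placeholder receptome in the same (assumed) parcellation
rng = np.random.default_rng(0)
receptome = np.corrcoef(rng.normal(size=(68, 19)))

# Correlate matching rows of the two matrices, excluding the diagonal,
# to obtain one receptome-FC correspondence value per region
n = fc_ctx.shape[0]
node_corr = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    node_corr[i], _ = spearmanr(receptome[i, mask], fc_ctx[i, mask])
print(node_corr.round(2))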
Article and author information
Author details
Funding
Max-Planck-Institut für Kognitions- und Neurowissenschaften (Open Access funding)
- Sofie Louise Valk
FRQ-S
- Boris C Bernhardt
Tier-2 Canada Research Chairs program
- Boris C Bernhardt
Human Brain Project
- Simon B Eickhoff
Max Planck Gesellschaft (Otto Hahn award)
- Sofie Louise Valk
Helmholtz International Lab grant agreement (InterLabs-0015)
- Boris C Bernhardt
- Simon B Eickhoff
- Sofie Louise Valk
Canada First Research Excellence Fund (CFREF Competition 2, 2015-2016)
- Boris C Bernhardt
- Simon B Eickhoff
- Sofie Louise Valk
European Union's Horizon 2020 (No. 826421, "TheVirtualBrain-Cloud")
- Juergen Dukart
Helmholtz International BigBrain Analytics & Learning Laboratory (HIBALL)
- Justine Y Hansen
- Boris C Bernhardt
- Simon B Eickhoff
- Sofie Louise Valk
Natural Sciences and Engineering Research Council of Canada
- Justine Y Hansen
- Boris C Bernhardt
- Bratislav Misic
Canadian Institutes of Health Research
- Boris C Bernhardt
- Bratislav Misic
Brain Canada Foundation Future Leaders Fund
- Boris C Bernhardt
- Bratislav Misic
Canada Research Chairs
- Bratislav Misic
Michael J. Fox Foundation for Parkinson's Research
- Bratislav Misic
SickKids Foundation (NI17-039)
- Boris C Bernhardt
Azrieli Center for Autism Research (ACAR-TACC)
- Boris C Bernhardt
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: The current research complies with all relevant ethical regulations as set by the Independent Research Ethics Committee at the Medical Faculty of the Heinrich Heine University Düsseldorf (study number 2018-317). The data were obtained from open-access resources, and ethics approvals for the individual datasets are available in the original publications of each data source.
Reviewing Editor
- Birte U Forstmann, University of Amsterdam, Netherlands
Version history
- Preprint posted: August 26, 2022
- Received: September 30, 2022
- Accepted: July 12, 2023
- Accepted Manuscript published: July 13, 2023 (version 1)
- Version of Record published: July 26, 2023 (version 2)
Copyright
© 2023, Hänisch et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- Page views: 861
- Downloads: 183
- Citations: 2
Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.