The structural connectome constrains fast brain dynamics
Abstract
Brain activity during rest displays complex, rapidly evolving patterns in space and time. Structural connections comprising the human connectome are hypothesized to impose constraints on the dynamics of this activity. Here, we use magnetoencephalography (MEG) to quantify the extent to which fast neural dynamics in the human brain are constrained by structural connections inferred from diffusion MRI tractography. We characterize the spatio-temporal unfolding of whole-brain activity at the millisecond scale from source-reconstructed MEG data, estimating the probability that any two brain regions will significantly deviate from baseline activity in consecutive time epochs. We find that the structural connectome relates to, and likely affects, the rapid spreading of neuronal avalanches, evidenced by a significant association between these transition probabilities and structural connectivity strengths (r = 0.37, p < 0.0001). This finding opens new avenues to study the relationship between brain structure and neural dynamics.
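To make the pipeline summarized above concrete, the following is a minimal sketch, not the authors' released Matlab code, of how region-to-region transition probabilities across consecutive time bins could be estimated from thresholded source-reconstructed signals and compared with a structural connectome. The z-score threshold, the region count, and all variable names are illustrative assumptions, and the random arrays stand in for real MEG and tractography data.

```python
import numpy as np
from scipy.stats import pearsonr, zscore

def transition_matrix(source_ts, threshold=3.0):
    """source_ts: (n_regions, n_samples) source-reconstructed time series."""
    n_regions, n_samples = source_ts.shape
    # a region is "active" when its z-scored signal exceeds the threshold
    active = np.abs(zscore(source_ts, axis=1)) > threshold
    in_avalanche = active.any(axis=0)          # time bins with at least one active region
    counts = np.zeros((n_regions, n_regions))
    starts = np.zeros(n_regions)
    for t in range(n_samples - 1):
        if in_avalanche[t] and in_avalanche[t + 1]:
            src = np.flatnonzero(active[:, t])
            dst = np.flatnonzero(active[:, t + 1])
            counts[np.ix_(src, dst)] += 1      # region j active right after region i
            starts[src] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        tm = counts / starts[:, None]          # P(j active at t+1 | i active at t)
    return np.nan_to_num(tm)

# Toy data standing in for source-reconstructed MEG and a tractography-based connectome
rng = np.random.default_rng(1)
meg = rng.standard_normal((66, 10_000))        # 66 regions x 10,000 samples
sc = np.abs(rng.standard_normal((66, 66)))     # structural connectivity strengths
tm = transition_matrix(meg)
mask = ~np.eye(66, dtype=bool)                 # compare off-diagonal entries only
r, p = pearsonr(tm[mask], sc[mask])
print(f"r = {r:.2f}, p = {p:.3g}")
```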
Data availability
The MEG data and the reconstructed avalanches are available upon request to the corresponding author (Pierpaolo Sorrentino), conditional on appropriate ethics approval at the local site. Data sharing was not covered by the original ethical approval, so the data cannot be shared directly; if data are requested, the corresponding author will apply to the local ethics committee for an amendment and, conditional on approval, the data will be made available. The Matlab code is available at https://github.com/pierpaolosorrentino/Transition-Matrices-
Article and author information
Funding
No external funding was received for this work.
Ethics
Human subjects: All participants gave written informed consent. The study complied with the Declaration of Helsinki and was approved by the local Ethics Committee (Prot.n.93C.E./Reg. n.14-17OSS).
Copyright
© 2021, Sorrentino et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 2,412 views
- 331 downloads
- 64 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Further reading
Computational and Systems Biology, Neuroscience
Fiber photometry has become a popular technique to measure neural activity in vivo, but common analysis strategies can reduce the detection of effects because they condense within-trial signals into summary measures and discard trial-level information by averaging across trials. We propose a novel photometry statistical framework based on functional linear mixed modeling, which enables hypothesis testing of variable effects at every trial time-point, and uses trial-level signals without averaging. This makes it possible to compare the timing and magnitude of signals across conditions while accounting for between-animal differences. Our framework produces a series of plots that illustrate covariate effect estimates and statistical significance at each trial time-point. By exploiting signal autocorrelation, our methodology yields joint 95% confidence intervals that account for inspecting effects across the entire trial and improve the detection of event-related signal changes over common multiple comparisons correction strategies. We reanalyze data from a recent study proposing a theory for the role of mesolimbic dopamine in reward learning, and show the capability of our framework to reveal significant effects obscured by standard analysis approaches. For example, our method identifies two dopamine components with distinct temporal dynamics in response to reward delivery. In simulation experiments, our methodology yields improved statistical power over common analysis approaches. Finally, we provide an open-source package and analysis guide for applying our framework.
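As a purely illustrative companion to the summary above, the sketch below fits a mixed-effects model at every trial time-point, so trial-level signals enter the analysis without averaging and animal identity is a grouping factor. It is not the authors' package: the simulated data, variable names, and the simple pointwise p-values (with no autocorrelation-based joint 95% confidence intervals) are assumptions made only for this example.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_animals, n_trials, n_time = 4, 30, 40
animal = np.repeat(np.arange(n_animals), n_trials)              # animal id per trial
reward = rng.integers(0, 2, size=n_animals * n_trials)          # per-trial covariate
signal = rng.standard_normal((n_animals * n_trials, n_time))    # simulated dF/F traces
signal[:, 15:25] += 0.8 * reward[:, None]                       # injected reward effect

effect = np.zeros(n_time)
pval = np.zeros(n_time)
for t in range(n_time):                                         # one model per time-point
    df = pd.DataFrame({"dff": signal[:, t], "reward": reward, "animal": animal})
    fit = smf.mixedlm("dff ~ reward", df, groups=df["animal"]).fit()
    effect[t] = fit.params["reward"]
    pval[t] = fit.pvalues["reward"]

print("pointwise p < 0.05 at time-points:", np.flatnonzero(pval < 0.05))
```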
Computational and Systems Biology
The principle of efficient coding posits that sensory cortical networks are designed to encode maximal sensory information with minimal metabolic cost. Despite the major influence of efficient coding in neuroscience, it has remained unclear whether fundamental empirical properties of neural network activity can be explained solely based on this normative principle. Here, we derive the structural, coding, and biophysical properties of excitatory-inhibitory recurrent networks of spiking neurons that emerge directly from imposing that the network minimizes an instantaneous loss function and a time-averaged performance measure enacting efficient coding. We assumed that the network encodes a number of independent stimulus features varying with a time scale equal to the membrane time constant of excitatory and inhibitory neurons. The optimal network has biologically plausible biophysical features, including realistic integrate-and-fire spiking dynamics, spike-triggered adaptation, and a non-specific excitatory external input. The excitatory-inhibitory recurrent connectivity between neurons with similar stimulus tuning implements feature-specific competition, similar to that recently found in visual cortex. Networks with unstructured connectivity cannot reach comparable levels of coding efficiency. The optimal ratio of excitatory vs inhibitory neurons and the ratio of mean inhibitory-to-inhibitory vs excitatory-to-inhibitory connectivity are comparable to those of cortical sensory networks. The efficient network solution exhibits an instantaneous balance between excitation and inhibition. The network can perform efficient coding even when external stimuli vary over multiple time scales. Together, these results suggest that key properties of biological neural networks may be accounted for by efficient coding.
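For readers who want a starting point, here is a generic sketch of an excitatory-inhibitory network of leaky integrate-and-fire neurons with like-to-like (feature-tuned) recurrent connectivity, the class of model discussed above. It is not the efficient-coding-optimized network derived in that work: every parameter value, the tuning kernel, and the simple threshold-and-reset dynamics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_e, n_i = 80, 20                               # excitatory and inhibitory neurons
n = n_e + n_i
dt, tau = 1e-4, 20e-3                           # time step and membrane time constant (s)
v_thr, v_reset = 1.0, 0.0                       # spike threshold and reset potential

theta = rng.uniform(0.0, np.pi, n)              # preferred stimulus feature per neuron
# like-to-like coupling: stronger weights between similarly tuned neurons
kernel = np.exp(-((theta[:, None] - theta[None, :]) ** 2) / 0.1)
W = 0.01 * kernel
W[:, n_e:] *= -4.0                              # inputs from inhibitory neurons are negative
np.fill_diagonal(W, 0.0)

# constant background drive plus an input tuned to a stimulus feature near 0.7 rad
ext = 1.0 + 0.4 * np.cos(2.0 * (theta - 0.7))
v = rng.uniform(0.0, 1.0, n)                    # initial membrane potentials
spikes = np.zeros(n)
spike_count = np.zeros(n)

for _ in range(int(1.0 / dt)):                  # simulate one second
    v += dt / tau * (ext - v) + W @ spikes      # leak toward drive + recurrent kicks
    spikes = (v >= v_thr).astype(float)
    spike_count += spikes
    v[spikes > 0] = v_reset                     # reset neurons that spiked

print(f"mean E rate: {spike_count[:n_e].mean():.1f} Hz, "
      f"mean I rate: {spike_count[n_e:].mean():.1f} Hz")
```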