Virtual Brain Inference (VBI), a flexible and integrative toolkit for efficient probabilistic inference on whole-brain models
eLife Assessment
This paper presents a valuable software package, named "Virtual Brain Inference" (VBI), that enables faster and more efficient inference of parameters in dynamical system models of whole-brain activity, grounded in artificial neural networks for Bayesian statistical inference. The authors have provided convincing evidence, across several case studies, for the utility and validity of the methods using simulated data from several commonly used models, but more thorough benchmarking could be used to demonstrate the practical utility of the toolkit. This work will be of interest to computational neuroscientists interested in modelling large-scale brain dynamics.
https://doi.org/10.7554/eLife.106194.4.sa0

Valuable: Findings that have theoretical or practical implications for a subfield

Convincing: Appropriate and validated methodology in line with current state-of-the-art
Abstract
Network neuroscience has proven essential for understanding the principles and mechanisms underlying complex brain (dys)function and cognition. In this context, whole-brain network modeling—also known as virtual brain modeling—combines computational models of brain dynamics (placed at each network node) with individual brain imaging data (to coordinate and connect the nodes), advancing our understanding of the complex dynamics of the brain and its neurobiological underpinnings. However, there remains a critical need for automated model inversion tools to estimate control (bifurcation) parameters at large scales associated with neuroimaging modalities, given their varying spatio-temporal resolutions. This study aims to address this gap by introducing a flexible and integrative toolkit for efficient Bayesian inference on virtual brain models, called Virtual Brain Inference (VBI). This open-source toolkit provides fast simulations, a taxonomy of feature extraction, efficient data storage and loading, and probabilistic machine learning algorithms, enabling biophysically interpretable inference from non-invasive and invasive recordings. Through in-silico testing, we demonstrate the accuracy and reliability of inference for commonly used whole-brain network models and their associated neuroimaging data. VBI shows potential to improve hypothesis evaluation in network neuroscience through uncertainty quantification and contribute to advances in precision medicine by enhancing the predictive power of virtual brain models.
Introduction
Understanding the complex dynamics of the brain and their neurobiological underpinnings, with the potential to advance precision medicine (Falcon et al., 2016; Tan et al., 2016; Vogel et al., 2023; Williams and Whitfield Gabrieli, 2025), is a central goal in neuroscience. Modeling these dynamics provides crucial insights into causality and mechanisms underlying both normal brain function and various neurological disorders (Breakspear, 2017; Wang et al., 2023b; Ross and Bassett, 2024). By integrating the average activity of large populations of neurons (e.g. neural mass models; Wilson and Cowan, 1972; Jirsa and Haken, 1996; Deco et al., 2008; Jirsa et al., 2014; Montbrió et al., 2015; Cook et al., 2022) with information provided by structural imaging modalities (i.e. connectome; Honey et al., 2009; Sporns et al., 2005; Schirner et al., 2015; Bazinet et al., 2023), whole-brain network modeling has proven to be a powerful, tractable approach for simulating brain activities and emergent dynamics as recorded by functional imaging modalities (such as (s)EEG, MEG, and fMRI; Sanz-Leon et al., 2015; Schirner et al., 2022; Amunts et al., 2022; D'Angelo and Jirsa, 2022; Patow et al., 2024; Hashemi et al., 2025).
Whole-brain models have been well established in network neuroscience (Sporns, 2016; Bassett and Sporns, 2017) for understanding brain structure and function (Ghosh et al., 2008; Honey et al., 2010; Park and Friston, 2013; Melozzi et al., 2019; Suárez et al., 2020; Feng et al., 2024; Tanner et al., 2024) and investigating the mechanisms underlying brain dynamics at rest (Deco et al., 2011; Wang et al., 2019; Ziaeemehr et al., 2020; Kong et al., 2021), normal aging (Lavanga et al., 2023; Zhang et al., 2024), and also altered states such as anesthesia and loss of consciousness (Barttfeld et al., 2015; Hashemi et al., 2017; Luppi et al., 2023; Perl et al., 2023b). This class of computational models, also known as virtual brain models (Jirsa et al., 2010; Sanz Leon et al., 2013; Sanz-Leon et al., 2015; Schirner et al., 2022; Jirsa et al., 2023; Wang et al., 2024), has shown remarkable capability in delineating the pathophysiological causes of a wide range of brain diseases, such as epilepsy (Jirsa et al., 2017; Proix et al., 2017; Wang et al., 2023b), multiple sclerosis (Wang et al., 2024; Mazzara et al., 2025), Alzheimer's disease (Yalçınkaya et al., 2023; Perl et al., 2023a), Parkinson's disease (Jung et al., 2022; Angiolelli et al., 2025), neuropsychiatric disorders (Deco and Kringelbach, 2014; Iravani et al., 2021), stroke (Allegra Mascaro et al., 2020; Idesis et al., 2022), and focal lesions (Rabuffo et al., 2025). In particular, they enable the personalized simulation of both normal and abnormal brain activities, along with their associated imaging recordings, thereby stratifying between healthy and diseased states (Liu et al., 2016; Patow et al., 2023; Perl et al., 2023a) and potentially informing targeted interventions and treatment strategies (Jirsa et al., 2017; Proix et al., 2018; Wang et al., 2023b; Jirsa et al., 2023; Hashemi et al., 2025). While a few tools are available for forward simulations at the whole-brain level, for example, the brain network simulator The Virtual Brain (TVB; Sanz Leon et al., 2013), there is a lack of tools for addressing the inverse problem, that is, finding the set of control (generative) parameters that best explains the observed data. This study aims to bridge this gap by addressing the inverse problem in large-scale brain networks, a crucial step toward making these models operable for clinical applications.
Accurately and reliably estimating the parameters of whole-brain models remains a formidable challenge, mainly due to the high dimensionality and nonlinearity inherent in brain activity data, as well as the non-trivial effects of noise and network inputs. A large number of previous studies in whole-brain modeling have relied on optimization techniques to identify a single optimal value from an objective function, scoring the model's performance against observed data (Wang et al., 2019; Kong et al., 2021; Cabral et al., 2022; Liu et al., 2023). This approach often involves minimizing metrics such as the Kolmogorov-Smirnov distance or maximizing the Pearson correlation between observed and generated data features, such as functional connectivity (FC), functional connectivity dynamics (FCD), and/or power spectral density (PSD). Although fast, such a parametric approach results in only point estimates and fails to capture the relationship between parameters and their associated uncertainty. This limits the generalizability of findings and hinders identifiability analysis, which explores the uniqueness of solutions. Furthermore, optimization algorithms can easily get stuck in local extrema, requiring multi-start strategies to address potential parameter degeneracies. These additional steps, while necessary, ultimately increase the computational cost. Critically, the estimation heavily depends on the form of the objective function defined for optimization (Svensson et al., 2012; Hashemi et al., 2018). These limitations can be overcome by employing Bayesian inference, which naturally quantifies the uncertainty in the estimation and the statistical dependencies between parameters, leading to more robust and generalizable models. Bayesian inference is a principled method for updating prior beliefs with information provided by data through the likelihood function, resulting in a posterior probability distribution that encodes all the information necessary for inferences and predictions. This approach has proven essential for understanding the intricate relationships between brain structure and function (Hashemi et al., 2021; Lavanga et al., 2023; Rabuffo et al., 2025), as well as for revealing the pathophysiological causes underlying brain disorders (Hashemi et al., 2023; Yalçınkaya et al., 2023; Wang et al., 2024; Hashemi et al., 2025; Hashemi et al., 2024).
In this context, simulation-based inference (SBI; Cranmer et al., 2020; Gonçalves et al., 2020; Hashemi et al., 2023; Hashemi et al., 2024) has gained prominence as an efficient methodology for conducting Bayesian inference in complex models where traditional inference techniques become inapplicable. SBI leverages computational simulations to generate synthetic data and employs advanced probabilistic machine learning methods to infer the joint distribution over parameters that best explain the observed data, along with the associated uncertainty. This approach is particularly well-suited for Bayesian inference on whole-brain models, which often exhibit complex dynamics that are difficult to retrieve from neuroimaging data with conventional estimation techniques. Crucially, SBI circumvents the need for explicit likelihood evaluation and the Markovian (sequential) property required in sampling. Markov chain Monte Carlo (MCMC; Gelman et al., 1995) is a gold-standard nonparametric technique for sampling from a probability distribution and is asymptotically exact. However, for Bayesian inference on whole-brain models given high-dimensional data, the likelihood function becomes intractable, rendering MCMC sampling computationally prohibitive. SBI offers significant advantages, such as parallel simulation while leveraging amortized learning, making it effective for personalized inference from large datasets (Hashemi et al., 2024). Amortization in artificial neural networks refers to the idea of reusing learned computations across multiple tasks or inputs (Gershman and Goodman, 2014). Amortization in Bayesian inference refers to the process of training a shared inference network (e.g. a neural network), with an intensive upfront computational cost, to perform fast inference across many different observations. Instead of re-running inference for each new observation, the trained model can rapidly return posterior estimates, significantly reducing computational cost at test time. Following an initial computational cost during simulation and training to learn all posterior distributions, subsequent evaluation of new hypotheses can be conducted efficiently, without additional computational overhead for further simulations (Hashemi et al., 2023). Importantly, SBI sidesteps the convergence issues caused by complex geometries that are often encountered when using gradient-based MCMC methods (Betancourt and Girolami, 2013; Betancourt et al., 2014; Hashemi et al., 2020). It also substantially outperforms approximate Bayesian computation (ABC) methods, which rely on a threshold to accept or reject samples (Sisson et al., 2007; Beaumont et al., 2009; Gonçalves et al., 2020). Such a likelihood-free approach provides us with generic inference on complex systems, as long as we can provide three modules:
A prior distribution, describing the possible range of parameters from which random samples can be easily drawn, that is, $\theta \sim p(\theta)$.
A simulator in computer code that takes parameters as input and generates data as output, that is, $x \sim p(x \mid \theta)$.
A set of low-dimensional data features, which are informative of the parameters that we aim to infer.
These elements provide a training data set $\mathcal{D} = \{(\theta_i, x_i)\}_{i=1}^{N}$ with a budget of $N$ simulations. Then, using a class of deep neural density estimators, such as masked autoregressive flows (MAFs; Papamakarios and Pavlakou, 2017) or neural spline flows (NSFs; Durkan et al., 2019), we can approximate the posterior distribution of parameters given a set of observed data, that is, $p(\theta \mid x_{\mathrm{obs}})$. Therefore, a versatile toolkit should be flexible and integrative, adeptly incorporating these modules to enable efficient Bayesian inference over complex models.
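To make this concrete, the following minimal sketch wires the three modules together using the open-source sbi package (API as of sbi ~0.22), which implements the MAF/NSF posterior estimators described here; the toy simulator, prior ranges, and simulation budget are placeholders rather than VBI's actual models.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# 1) Prior: a box-uniform distribution over two control parameters.
prior = BoxUniform(low=torch.tensor([0.0, 0.0]), high=torch.tensor([2.0, 1.0]))

# 2) Simulator: maps parameters to low-dimensional data features
#    (placeholder for a whole-brain simulation plus feature extraction).
def simulator(theta: torch.Tensor) -> torch.Tensor:
    return theta + 0.1 * torch.randn_like(theta)

# 3) Training set with a budget of N simulations.
theta = prior.sample((10_000,))
x = simulator(theta)

# Train a masked autoregressive flow to approximate p(theta | x).
inference = SNPE(prior=prior, density_estimator="maf")
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()

# Amortized inference: sampling for any new observation is now cheap.
x_obs = torch.tensor([1.2, 0.4])
samples = posterior.sample((1_000,), x=x_obs)
```

Because the trained network is amortized, the expensive steps (simulation and training) happen once, and the final `sample` call can be repeated for each new observation at negligible cost.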
To address the need for widely applicable, reliable, and efficient parameter estimation from different (source-localized) neuroimaging modalities, we introduce Virtual Brain Inference (VBI), a flexible and integrative toolkit for probabilistic inference at the whole-brain level. This open-source toolkit offers fast simulation through just-in-time (JIT) compilation of various brain models in different programming languages (Python/C++) and devices (CPUs/GPUs). It supports space-efficient storage of simulated data (HDF5/NPZ/PT), provides a memory-efficient loader for batched data, and facilitates the extraction of low-dimensional data features (FC/FCD/PSD). Additionally, it enables the training of deep neural density estimators (MAFs/NSFs), making it a versatile tool for inference on neural sources corresponding to (s)EEG, MEG, and fMRI recordings. VBI leverages high-performance computing, significantly enhancing computational efficiency through parallel processing of large-scale datasets, which would be impractical with current alternative methods. Although SBI has been used for low-dimensional parameter spaces (Gonçalves et al., 2020; Wang et al., 2024; Baldy et al., 2024), we demonstrate that it can scale to whole-brain models with high-dimensional unknown parameters, as long as informative data features are provided. VBI is now accessible on the cloud platform EBRAINS (https://ebrains.eu), enabling users to explore more realistic brain dynamics underlying brain (dys)functioning using Bayesian inference.
In the following sections, we will describe the architecture and workflow of the VBI toolkit and demonstrate its validation through a series of case studies using in silico data. We explore various whole-brain models corresponding to different types of brain recordings: whole-brain network models of Wilson-Cowan (Wilson and Cowan, 1972), Jansen-Rit (Jansen and Rit, 1995; David and Friston, 2003), and Stuart-Landau (Selivanov et al., 2012) for simulating neural activity associated with EEG/MEG signals; the Epileptor (Jirsa et al., 2014) related to stereoelectro-EEG (sEEG) recordings; and the Montbrió (Montbrió et al., 2015) and Wong-Wang (Wong and Wang, 2006; Deco et al., 2013) models mapped to fMRI BOLD signals. Although these models represent source signals and could be applied to other modalities (e.g. Stuart-Landau representing generic oscillatory dynamics), we focused on their capabilities to perform optimally in specific contexts. For instance, some are better suited for encephalographic signals (e.g. EEG/MEG) due to their ability to preserve spectral properties, while others have been used for fMRI data, emphasizing their ability to capture dynamic features such as bistability and time-varying functional connectivity.
VBI workflow
Figure 1 illustrates an overview of our approach in VBI, which combines virtual brain models and SBI to make probabilistic predictions on brain dynamics from (source-localized) neuroimaging recordings. The inputs to the pipeline include the structural imaging data (for building the connectome), functional imaging data such as (s)EEG/MEG and fMRI as the target for fitting, and prior information as a plausible range over control parameters for generating random simulations. The main computational costs involve model simulations and data feature extraction. The output of the pipeline is the joint posterior distribution of control parameters (such as excitability, synaptic weights, or effective external input) that best explains the observed data. Since the approach is amortized (i.e. it learns across all combinations in the parameter space), it can be readily applied to any new data from a specific subject.
The workflow of Virtual Brain Inference (VBI).
This probabilistic approach is designed to estimate the posterior distribution of control parameters in virtual brain models from whole-brain recordings. (A) The process begins with constructing a personalized connectome using diffusion tensor imaging and a brain parcellation atlas, such as Desikan-Killiany (Desikan et al., 2006), Automated Anatomical Labeling (Tzourio-Mazoyer et al., 2002), or VEP (Wang et al., 2021). (B) The personalized virtual brain model is then assembled. Neural mass models describing the averaged activity of neural populations, in the generic form of $\dot{\mathbf{x}}(t) = f(\mathbf{x}(t), \theta, I(t))$, are placed at each brain region and interconnected via the structural connectivity matrix. Initially, the control parameters are randomly drawn from a simple prior distribution. (C) Next, the VBI operates as a simulator that uses these samples to generate time series data associated with neuroimaging recordings. (D) We extract a set of summary statistics from the low-dimensional features of the simulations (FC, FCD, PSD) for training. (E) Subsequently, a class of deep neural density estimators is trained on pairs of random parameters and their corresponding data features to learn the joint posterior distribution of the model parameters. (F) Finally, the amortized network allows us to quickly approximate the posterior distribution for new (empirical) data features, enabling us to make probabilistic predictions that are consistent with the observed data.
In the first step, non-invasive brain imaging data, such as T1-weighted MRI and diffusion-weighted MRI (DW-MRI), are collected for a specific subject (Figure 1A). T1-weighted MRI images are processed to obtain the brain parcellation, while DW-MRI images are used for tractography. Using the estimated fiber tracts and the defined brain regions from the parcellation, the connectome (i.e. the complete set of links between brain regions) is constructed by counting the fibers connecting all regions. The structural connectivity (SC) matrix, with entries representing the connection strength between brain regions, forms the structural component of the virtual brain, which constrains the generation of brain dynamics and functional data at arbitrary brain locations (e.g. cortical and subcortical structures).
Subsequently, each brain network node is equipped with a computational model of average neuronal activity, known as a neural mass model (see Figure 1B and Materials and methods). These can be represented in the generic form of a dynamical model as $\dot{\mathbf{x}}(t) = f(\mathbf{x}(t), \theta, I(t))$, with $\mathbf{x}(t)$ the system variables (such as membrane potential and firing rate), $\theta$ the control parameters (such as excitability), and $I(t)$ the input current (such as stimulation). This integration of mathematical mean-field modeling (neural mass models) with anatomical information (connectome) allows us to efficiently analyze functional neuroimaging modalities at the whole-brain level.
To quantify the posterior distribution of control parameters given a set of observations, $p(\theta \mid x_{\mathrm{obs}})$, we first need to define a plausible range for the control parameters based on background knowledge, that is, a simple base distribution $p(\theta)$ known as a prior. We draw random samples $\theta_i \sim p(\theta)$ and provide them as input to the VBI simulator (implemented by the Simulation module) to generate simulated time series associated with neuroimaging recordings, as shown in Figure 1C. Subsequently, we extract low-dimensional data features (implemented by the Features module), as shown in Figure 1D for FC/FCD/PSD, to prepare the training dataset $\mathcal{D} = \{(\theta_i, x_i)\}_{i=1}^{N}$, with a budget of $N$ simulations. Then, we use a class of deep neural density estimators, such as MAF or NSF models, as schematically shown in Figure 1E, to learn the amortized posterior $p(\theta \mid x)$. Finally, we can readily sample from $p(\theta \mid x_{\mathrm{obs}})$, which determines the probability distribution in parameter space that best explains the observed data.
Figure 2 depicts the structure of the VBI toolkit, which consists of three main modules. The first module, referred to as the Simulation module, is designed for fast simulation of whole-brain models, such as Wilson-Cowan (Wilson-Cowan model), Jansen-Rit (Jansen-Rit model), Stuart-Landau (Stuart-Landau oscillator), Epileptor (Epileptor model), Montbrió (Montbrió model), and Wong-Wang (Wong-Wang model). These whole-brain models are implemented across various numerical computing libraries such as CuPy (GPU-accelerated computing with Python), C++ (a high-performance systems programming language), Numba (a JIT compiler for accelerating Python code), and PyTorch (an open-source machine learning library for creating deep neural networks).
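To illustrate the backend flexibility (this is a simplified sketch, not VBI's actual integrator), the code below writes a stochastic Euler (Euler-Maruyama) network step once against the NumPy array API, so that the identical code runs on GPU whenever CuPy is installed:

```python
import numpy as np

try:
    import cupy as cp   # GPU backend, if available
    xp = cp
except ImportError:
    xp = np             # fall back to CPU

def euler_maruyama(f, x0, theta, dt, n_steps, sigma):
    """Integrate dx = f(x, theta) dt + sigma dW for all network nodes."""
    x = xp.asarray(x0, dtype=xp.float64)
    trace = xp.empty((n_steps,) + x.shape)
    for t in range(n_steps):
        noise = sigma * (dt ** 0.5) * xp.random.standard_normal(x.shape)
        x = x + dt * f(x, theta) + noise
        trace[t] = x
    return trace

# Toy drift: local relaxation plus global coupling through the SC matrix.
SC = xp.asarray(np.random.rand(88, 88))
drift = lambda x, g: -x + g * (SC @ x)
ts = euler_maruyama(drift, x0=np.zeros(88), theta=0.5,
                    dt=0.01, n_steps=1000, sigma=0.01)
```

The same array-API trick underlies many CPU/GPU-portable scientific codes; compiled backends (C++, Numba) trade this convenience for extra speed on the inner loop.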
Flowchart of the VBI Structure.
This toolkit consists of three main modules: (1) The Simulation module, implementing various whole-brain models, such as Wilson-Cowan (WCo), Jansen-Rit (JR), Stuart-Landau (SL), Epileptor (EPi), Montbrió (MPR), and Wong-Wang (WW), across different numerical computing libraries (C++, CuPy, PyTorch, Numba). (2) The Features module, offering an extensive toolbox for extracting low-dimensional data features, such as spectral, temporal, connectivity, statistical, and information-theoretic features. (3) The Inference module, providing neural density estimators (such as MAF and NSF) to approximate the posterior of parameters.
The second module, Features, provides a versatile tool for extracting low-dimensional features from simulated time series (see Comprehensive feature extraction). The features include, but are not limited to, spectral, temporal, connectivity, statistical, and information-theoretic features, and the associated summary statistics. The third module focuses on Inference, that is, training the deep neural density estimators, such as MAF and NSF (see Simulation-based inference), to learn the joint posterior distribution of control parameters. See Figure 2—figure supplement 1 and Figure 2—figure supplement 2 for benchmarks comparing CPU/GPU and MAF/NSF performances, and Figure 2—figure supplement 3 for the estimation of the global coupling parameter across different whole-brain network models, evaluated under multiple configurations.
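As a flavor of what the Features module computes, here is a minimal NumPy/SciPy sketch of three features used throughout this paper (static FC, sliding-window FCD, and Welch PSD), reduced to a single summary-statistics vector; VBI's own implementation offers many more features with parallel multiprocessing, and the window sizes here are illustrative.

```python
import numpy as np
from scipy.signal import welch

def extract_features(ts, fs, win=30, step=5):
    """ts: simulated time series of shape (n_regions, n_samples)."""
    # Static functional connectivity: region-by-region correlation.
    fc = np.corrcoef(ts)
    iu = np.triu_indices_from(fc, k=1)

    # Functional connectivity dynamics: correlations between the FC
    # patterns of successive sliding windows.
    starts = range(0, ts.shape[1] - win, step)
    fc_stream = np.array([np.corrcoef(ts[:, t:t + win])[iu] for t in starts])
    fcd = np.corrcoef(fc_stream)

    # Power spectral density per region (Welch's method).
    freqs, psd = welch(ts, fs=fs, nperseg=min(256, ts.shape[1]))

    # Concatenate low-dimensional summary statistics for training.
    return np.concatenate([fc[iu],
                           fcd[np.triu_indices_from(fcd, k=1)],
                           psd.mean(axis=0)])
```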
Results
In the following, we demonstrate the capability of VBI for inference on state-of-the-art whole-brain network models using in silico testing, where the ground truth is known. We apply this approach to simulate neural activity and associated measurements, including (s)EEG/MEG and fMRI, while also providing diagnostics for the accuracy and reliability of the estimation. Note that for (s)EEG/MEG neuroimaging, we perform inference at the regional level rather than at the sensor level, whereas for fMRI, the simulated neural activity is mapped to BOLD signals using the Balloon-Windkessel model (see The Balloon-Windkessel model). The results presented are based on synthetic data generated using a set of predefined parameters, referred to as the ground truth, randomly selected within biologically plausible ranges and incorporating a certain level of heterogeneity.
Whole-brain network of Wilson-Cowan model
We first demonstrate inference on the whole-brain network model of Wilson-Cowan (see Equation 1), which is capable of generating a wide range of oscillatory dynamics depending on the control parameters. Specifically, we estimate the bifurcation parameters $P_i$, representing the external input to each excitatory population, and the global excitatory coupling parameter $g_e$. Figure 3A and B present the observed and predicted EEG-like signals, represented by the activity of excitatory populations across regions, and Figure 3C and D show the corresponding power spectral density (PSD), as data features. Figure 3E and F illustrate the inferred posterior distributions for parameters $P_i$ and $g_e$, respectively, given the observed data features. For training, we conducted 250 k random simulations from uniform priors over $g_e$ and $P_i$. After approximately 2 hr of training using MAF density estimators, posterior sampling was completed within a few seconds. Due to the large number of simulations and the informativeness of the data features, we achieved accurate estimations of the high-dimensional and heterogeneous control parameters. Ground-truth values (shown in green) are well recovered, leading to close agreement between observed and predicted PSDs of the signals. Finally, Figure 3G reports the posterior shrinkage and z-score metrics used to evaluate the quality of the parameter estimation. The results indicate that the inferred posteriors are both precise and well-centered around the ground-truth values, as reflected by high shrinkage and low z-scores. See Figure 3—figure supplement 1 for estimation over other configurations. Moreover, Figure 3—figure supplement 2 and Figure 3—figure supplement 3 show the estimations obtained when ignoring the spatial information in the data features, indicating the higher accuracy of NSF, though with substantially more computational cost for training compared to MAF.
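The posterior shrinkage and z-score diagnostics reported in Figure 3G (and defined later by Equation 13 and Equation 14) are simple functions of the prior variance, posterior variance, posterior mean, and ground truth; a minimal sketch, assuming uniform priors whose variance is known in closed form:

```python
import numpy as np

def diagnostics(posterior_samples, prior_low, prior_high, theta_true):
    """Posterior shrinkage and z-score per parameter.

    posterior_samples: array of shape (n_samples, n_params).
    prior_low, prior_high, theta_true: arrays of shape (n_params,)."""
    prior_var = (prior_high - prior_low) ** 2 / 12.0    # variance of a uniform prior
    post_mean = posterior_samples.mean(axis=0)
    post_var = posterior_samples.var(axis=0)
    shrinkage = 1.0 - post_var / prior_var              # close to 1: posterior is tight
    z_score = np.abs(post_mean - theta_true) / np.sqrt(post_var)  # close to 0: centered
    return shrinkage, z_score
```

High shrinkage with low z-score, as in Figure 3G, indicates posteriors that are both concentrated and centered on the ground truth.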
Bayesian inference on heterogeneous control parameters in the whole-brain network of the Wilson-Cowan model.
The set of inferred parameters is $\theta = \{g_e, P_1, P_2, \ldots, P_N\}$, with the global scaling parameter $g_e$ and the average external input current $P_i$ to excitatory populations per region, given $N$ parcelled regions. Summary statistics of power spectrum density (PSD) were used for training of MAF density estimators, with a budget of 260 k simulations. (A) and (B) illustrate the observed and predicted neural activities, respectively. (C) and (D) show the observed and predicted PSDs as the data features. (E) and (F) display the posterior distribution of $P_i$ per region, and global coupling $g_e$, respectively. The ground truth and prior are represented by a vertical green line and a blue distribution, respectively. (G) shows the inference evaluation using posterior shrinkage and z-score.
Whole-brain network of Jansen-Rit model
Next, we demonstrate inference on heterogeneous control parameters in the whole-brain network of Jansen-Rit (see Equation 2), commonly used for modeling EEG/MEG data, for example in dementia and Alzheimer's disease (Triebkorn et al., 2022; Stefanovski et al., 2019). Figure 4A and B show the observed and predicted EEG signals, given by $y_{1,i} - y_{2,i}$ at each region, while Figure 4C and D illustrate the observed and predicted features, such as PSD, respectively. Figure 4E and F show the estimated posterior distributions of the synaptic connections $C_i$ and the global coupling parameter $G$, respectively, given the set of unknown parameters $\theta = \{G, C_1, C_2, \ldots, C_N\}$. Here we conducted 50 k random simulations with samples drawn from uniform priors over $G$ and $C_i$. After approximately 45 min of training (MAF density estimator), the posterior sampling took only a few seconds. With such a sufficient number of simulations and informative data features, VBI shows accurate estimation of high-dimensional heterogeneous parameters (given the ground truth, shown in green), leading to a strong correspondence between the observed and predicted PSD of EEG/MEG data. Figure 4G displays the shrinkage and z-score as the evaluation metrics, indicating an ideal Bayesian estimation for the $C_i$ parameters, but not for the coupling parameter $G$. This occurred because the network input did not induce a significant change in the intrinsic frequency of activities at the regional level, resulting in diffuse uncertainty in its estimation for this model.
Bayesian inference on heterogeneous control parameters in the whole-brain network of the Jansen-Rit model.
The set of inferred parameters is $\theta = \{G, C_1, C_2, \ldots, C_N\}$, with the global scaling parameter $G$ and the average number of synapses $C_i$ between neural populations per region, given $N$ parcelled regions. Summary statistics of power spectrum density (PSD) were used for training, with a budget of 50 k simulations. (A) and (B) illustrate the observed and predicted neural activities, respectively. (C) and (D) show the observed and predicted data features, such as PSD. (E) and (F) display the posterior distribution of $C_i$ per region, and global coupling $G$, respectively. The ground truth and prior are represented by vertical green lines and a blue distribution, respectively. (G) shows the inference evaluation using the shrinkage and z-score of the estimated posterior distributions.
Note that relying only on the alpha peak while excluding other summary statistics, such as total power (i.e., the area under the PSD curve), leads to poor estimation of synaptic connections across brain regions (see Figure 4—figure supplement 1). This results in less accurate predictions of the PSD, with more dispersion in their amplitudes. This example demonstrates that VBI provides a valuable tool for hypothesis evaluation, offering improved insight into data features through uncertainty quantification and their impact on predictions.
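For reference, a sketch of the two PSD summaries contrasted here, the alpha-band peak and the total power (area under the PSD curve), computed per region with SciPy (band limits and segment length are illustrative choices):

```python
import numpy as np
from scipy.signal import welch

def psd_summaries(ts, fs):
    """Alpha peak and total power per region; ts: (n_regions, n_samples)."""
    freqs, psd = welch(ts, fs=fs, nperseg=min(1024, ts.shape[-1]))
    alpha = (freqs >= 8) & (freqs <= 12)
    peak_idx = psd[:, alpha].argmax(axis=1)
    peak_freq = freqs[alpha][peak_idx]           # alpha-peak location
    peak_amp = psd[:, alpha].max(axis=1)         # alpha-peak amplitude
    total_power = np.trapz(psd, freqs, axis=1)   # area under the PSD curve
    return peak_freq, peak_amp, total_power
```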
Whole-brain network of Stuart-Landau oscillators
To demonstrate efficient inference on the whole-brain time delay from EEG/MEG data, we used a whole-brain network model of coupled generic oscillators (see Equation 3). This model could establish a causal link between empirical spectral changes and the slower conduction velocities observed in multiple sclerosis patients, resulting from immune system attacks on the myelin sheath (Wang et al., 2024; Mazzara et al., 2025). The parameter set to estimate is $\theta = \{G, v\}$, consisting of the global scaling parameter $G$ and the averaged velocity of signal transmission $v$. The training was performed using a budget of only 2 k simulations, which was sufficient due to the low dimensionality of the parameter space. Figure 5A illustrates the comparison between observed (in green) and predicted neural activities (in red). Figure 5B shows a close agreement between observed and predicted PSD signals, as the data feature used for training. Figure 5C and D provide visualizations of the posterior distributions for the averaged velocity $v$ and the global coupling $G$. In these panels, we can see a large shrinkage in the posterior (in red) from the uniform prior (in blue), centered around the true values (vertical green lines). Importantly, Figure 5E, presenting the joint posterior distribution, indicates a high correlation between parameters $G$ and $v$. This illustrates the advantage of Bayesian estimation in identifying statistical relationships between parameters, which helps to detect degeneracy among them. This is crucial for causal hypothesis evaluation and guiding conclusions in clinical settings. Finally, Figure 5F illustrates the sensitivity analysis (based on the eigenvalues of the posterior distribution), highlighting the relative impact of changes in $v$ and $G$ on the model's posterior estimates.
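A sensitivity analysis of this kind can be sketched by eigen-decomposing the posterior covariance estimated from samples: small-eigenvalue (stiff) directions are tightly constrained by the data, while large-eigenvalue (sloppy) directions carry most of the uncertainty. This is a generic recipe consistent with, but not necessarily identical to, the implementation behind Figure 5F.

```python
import numpy as np

def posterior_sensitivity(samples):
    """samples: posterior draws of shape (n_samples, n_params), e.g. columns (v, G)."""
    cov = np.cov(samples, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    # Small-eigenvalue (stiff) directions are tightly constrained by the data;
    # large-eigenvalue (sloppy) directions carry most of the posterior uncertainty.
    stiff = eigvecs[:, 0]
    sloppy = eigvecs[:, -1]
    corr = np.corrcoef(samples, rowvar=False)[0, 1]
    return eigvals, stiff, sloppy, corr
```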
Bayesian inference on the global scaling parameter $G$ and the averaged velocity of signal transmission $v$ using the whole-brain network model of Stuart-Landau oscillators.
The set of estimated parameters is $\theta = \{G, v\}$, and the summary statistics of PSD signals with a budget of 2 k simulations were used for training. (A) illustrates exemplary observed and predicted neural activities (in green and red, respectively). (B) shows the observed and predicted PSD signals (in green and red, respectively). (C) and (D) display the posterior distribution of averaged velocity $v$ and global coupling $G$, respectively. The true values and prior are shown as vertical green lines and a blue distribution, respectively. (E) shows the joint posterior distribution, indicating a high correlation between posterior samples. (F) illustrates the sensitivity analysis based on the eigenvalues of the posterior distribution.
Whole-brain network of Epileptor model
Next, we demonstrate the inference on a whole-brain model of epilepsy spread, known as the Virtual Epileptic Patient (VEP; Jirsa et al., 2017; Hashemi et al., 2020), used to delineate the epileptogenic and propagation zone networks from invasive sEEG recordings (see Equation 4). Here, we used a large value for the system time constant $\tau$ to generate slow-fast dynamics in pathological areas, corresponding to the seizure envelope at each brain region. Figure 6 demonstrates the inference of the parameter set $\theta = \{G, \eta_1, \eta_2, \ldots, \eta_N\}$, with the global scaling parameter $G$ and the spatial map of epileptogenicity $\eta_i$, given $N$ parcelled regions. Figure 6A and B show the observed and predicted envelope, respectively, at each brain region. Here, the whole-brain regions are classified into two epileptogenic zones (in red, corresponding to high excitability), three propagation zones (in yellow, corresponding to excitability close to bifurcation), and the rest as healthy regions (in green, corresponding to low excitability). Figure 6C and D illustrate the observed and predicted data features as the total power per region, calculated as the area under the curve. Additionally, the seizure onset at each region was used as a data feature for training the MAF density estimator. From these panels, we observe accurate recovery of seizure envelopes in pathological regions. Figure 6E and F show the posterior distributions of the heterogeneous $\eta_i$ and the global coupling parameter $G$, respectively, indicating 100% accurate recovery of the true values (in green). Figure 6G confirms the reliability and accuracy of the estimates through shrinkage and z-score diagnostics. With our efficient implementation, generating 10 k whole-brain simulations took less than a minute (using 10 CPU cores). The training took approximately 13 min to converge, while posterior sampling required only a few seconds. See Figure 6—figure supplement 1 for a similar analysis with a slower time-scale separation. These results demonstrate an ideal and fast Bayesian estimation, despite the stiffness of equations in each region and the high dimensionality of the parameters. See Figure 6—figure supplement 2 and Figure 6—figure supplement 3 showing the accuracy and reliability of estimation under different levels of additive and dynamical noise. Note that for the VEP model, the total integration time is less than 100 ms, and due to the model's stable behavior and a large integration time step, the simulation cost is significantly lower compared to other whole-brain models.
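The two data features used for the VEP model are easy to compute from the simulated envelopes; a minimal sketch (the onset threshold here is an illustrative choice):

```python
import numpy as np

def envelope_features(env, dt, threshold=0.1):
    """env: seizure envelopes of shape (n_regions, n_samples).

    Returns the total power (area under the envelope) and the seizure
    onset time per region; regions never crossing the threshold get -1."""
    total_power = np.trapz(env, dx=dt, axis=1)
    onset = np.full(env.shape[0], -1.0)
    for i, e in enumerate(env):
        above = np.flatnonzero(e > threshold)
        if above.size:
            onset[i] = above[0] * dt
    return total_power, onset
```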
Bayesian inference on the spatial map of epileptogenicity across brain regions in the VEP model.
The set of inferred parameters is $\theta = \{G, \eta_1, \eta_2, \ldots, \eta_N\}$, with the global scaling parameter $G$ and the spatial map of epileptogenicity $\eta_i$, given $N$ parcelled regions. (A) The observed seizure envelope generated by the Epileptor model, given two regions as epileptogenic zones (in red) and three regions as propagation zones (in yellow), while the rest are healthy (in green). (B) The predicted seizure envelope, obtained by training the MAF model on a dataset containing 10 k simulations, using only the total power and seizure onset per region as the data features. (C) and (D) show the observed and predicted data features, respectively. (E) and (F) show the posterior distributions of the heterogeneous control parameters $\eta_i$ and the global coupling parameter $G$, respectively. (G) The posterior z-scores versus posterior shrinkages for the estimated parameters.
Whole-brain network of Montbrió model
Targeting fMRI data, we demonstrate the inference on whole-brain dynamics using the Montbrió model (see Equation 5). Figure 7 demonstrates the inference on heterogeneous control parameters of the Montbrió model, operating in a bistable regime. Figure 7A and B show the observed and predicted BOLD time series, respectively, while Figure 7C and D illustrate the observed and predicted data features, such as the static and dynamical functional connectivity matrices (FC and FCD, respectively). Figure 7E and F show the estimated posterior distributions of the excitability $\eta_i$ per brain region, and the global coupling parameter $G$. Figure 7G displays the reliability and accuracy of estimation through the evaluation of posterior shrinkage and z-score (see Equation 13 and Equation 14). See Figure 7—figure supplement 1 for estimation over different configurations of the ground-truth values in this model.
Bayesian inference on heterogeneous control parameters in the whole-brain dynamics using the Montbrió model.
The set of inferred parameters is $\theta = \{G, \eta_1, \eta_2, \ldots, \eta_N\}$, with the global scaling parameter $G$ and the excitability $\eta_i$ per region, given $N$ parcelled regions. VBI provides accurate and reliable posterior estimation using both spatio-temporal and functional data features for training, with a budget of 500 k simulations. (A) and (B) illustrate the observed and predicted BOLD signals, respectively. (C) and (D) show the observed (upper triangular) and predicted (lower triangular) data features (FC and FCD), respectively. (E) and (F) display the posterior distribution of excitability parameters $\eta_i$ per region, and global coupling $G$, respectively. The true values and prior are shown as vertical green lines and a blue distribution, respectively. (G) shows the inference evaluation by the shrinkage and z-score of the posterior distributions.
Due to the large number of simulations for training and the informativeness of the data features (both spatio-temporal and functional), the results indicate that we achieved accurate parameter estimation and, consequently, a close agreement between the observed and predicted features of BOLD data. This required 500 k simulations for training, given uniform priors over $G$ and $\eta_i$. After approximately 10 hr of training (MAF density estimator), posterior sampling took only 1 min. Our results indicate that training the MAF model was two to four times faster than the NSF model. This showcase demonstrates the capability of VBI in inferring heterogeneous excitability, given the bistable brain dynamics, for fMRI studies. Note that removing the spatio-temporal features and considering only FC/FCD as the data features (see Figure 7—figure supplement 2) leads to poor estimation of the excitability parameter across brain regions (see Figure 7—figure supplement 3). Interestingly, accurate estimation of only the global coupling parameter $G$ from FC/FCD alone requires around 100 simulations (see Figure 7—figure supplement 4 and Figure 7—figure supplement 5). See Figure 7—figure supplement 6 and Figure 7—figure supplement 7 showing the accuracy and reliability of estimation under different levels of additive and dynamical noise.
Whole-brain network of Wong-Wang model
Finally, in Figure 8, we show the inference on the so-called parameterized dynamic mean-field (pDMF) model, that is, a whole-brain network model of the reduced Wong-Wang equation (see Equation 6 and Equation 7), comprising 10 control parameters: the global scaling of connections $G$ and nine linear coefficients. These coefficients reparameterize the regional parameters, namely the recurrent connection strength $w_i$, the external input current $I_i$, and the noise amplitude $\sigma_i$ for each region (in total, 264 regional parameters were reduced to 9 dimensions; see Equation 7). Here, we used summary statistics of both spatio-temporal and functional data features extracted from simulated BOLD data to train the MAF density estimator, with a budget of 200 k simulations. The training took around 160 min to converge, whereas posterior sampling took only a few seconds.
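A sketch of this linear reparameterization, following the form of Equation 7 (as in Kong et al., 2021), in which each regional parameter is a linear combination of two anatomical maps plus an intercept; the coefficient names are illustrative:

```python
import numpy as np

def pdmf_parameters(coeffs, myelin, gradient):
    """Map the 9 linear coefficients to 3 regional parameter vectors.

    coeffs: (a_w, b_w, c_w, a_I, b_I, c_I, a_s, b_s, c_s), names illustrative.
    myelin, gradient: anatomical maps of shape (n_regions,)."""
    a_w, b_w, c_w, a_I, b_I, c_I, a_s, b_s, c_s = coeffs
    w = a_w * myelin + b_w * gradient + c_w        # recurrent connection strength
    I = a_I * myelin + b_I * gradient + c_I        # external input current
    sigma = a_s * myelin + b_s * gradient + c_s    # noise amplitude
    return w, I, sigma   # 3 x n_regions regional values from only 9 coefficients
```

With 88 regions, this maps 3 x 88 = 264 regional parameters onto 9 coefficients, which together with $G$ gives the 10 control parameters inferred here.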
Bayesian inference on the parametric mean-field model of Wong-Wang (also known as the pDMF model), with linear coefficients reparameterizing the recurrent connection strength $w_i$, the external input current $I_i$, and the noise amplitude $\sigma_i$ for each region.
Summary statistics of spatio-temporal and functional data features were used for training, with a budget of 200 k simulations. (A) The diagonal panels display the ground-truth values (in green), the uniform prior (in blue), and the estimated posterior distributions (in red). The upper diagonal panels illustrate the joint posterior distributions between parameters, along with their correlations (shown in the upper left corners) and ground-truth values (green stars). High-probability areas are color-coded in yellow, while low-probability areas are shown in black. (B) The observed and predicted BOLD time series (in green and red, respectively). (C) The observed and predicted data features, such as FC/FCD matrices. (D) The inference evaluation by calculating the shrinkage and z-score of the estimated posterior distributions.
The diagonal panels in Figure 8A show the estimated posterior distributions (in red), along with the prior (in blue) and true values (in green). The upper diagonal panels illustrate the joint posterior distributions between parameters (i.e. the statistical dependencies between parameters). Figure 8B illustrates the observed and predicted BOLD time series, generated by the true and estimated parameters (in green and red, respectively). From Figure 8C, we can see a close agreement between the observed and predicted data features (FC/FCD matrices). Note that due to the stochastic nature of the generative process, we do not expect an exact element-wise correspondence between these features, but rather a match in their summary statistics, such as the mean, variance, and higher order moments (see Figure 8—figure supplement 1). Figure 8D shows the posterior z-score versus shrinkage, indicating less accurate estimation for the coefficients that are not informed by anatomical data, such as the T1w/T2w myelin map and the first FC principal gradient (see Equation 7), compared to the others. This showcase demonstrates the advantage of Bayesian inference over optimization in assessing the accuracy and reliability of parameter estimation, whether or not parameters are informed by anatomical data.
Note that in the whole-brain network of the Wong-Wang model, the global scaling parameter $G$ and the synaptic coupling $J$ exhibit structural non-identifiability, meaning their combined effects on the system cannot be uniquely disentangled (see Figure 8—figure supplement 2 and Figure 8—figure supplement 3). This is evident in the parameter estimations corresponding to selected observations, where the posterior distributions appear diffuse. The joint posterior plots reveal a nonlinear dependency (banana shape) between $G$ and $J$, arising from their product in the neural mass equation (see Equation 6). Such a nonlinear relationship between parameters poses challenges for deriving causal conclusions, as often occurs in other neural mass models. This demonstrates how Bayesian inference exposes such non-identifiability directly, facilitating causal hypothesis testing without requiring a separate identifiability analysis.
Discussion
This study introduces VBI, a flexible and integrative toolkit designed to facilitate probabilistic inference on complex whole-brain dynamics using connectome-based models (forward problem) and SBI (inverse problem). The toolkit leverages high-performance programming languages (C++) and just-in-time compilation for Python (such as Numba), alongside the computational power of parallel processors (GPUs), to significantly enhance the speed and efficiency of simulations. Additionally, VBI integrates popular feature extraction libraries with parallel multiprocessing to efficiently convert simulated time series into low-dimensional summary statistics. Moreover, VBI incorporates state-of-the-art deep neural density estimators (such as MAF and NSF generative models) to estimate the posterior density of control parameters within whole-brain models given low-dimensional data features. Our results demonstrated the versatility and efficacy of the VBI toolkit across commonly used whole-brain network models, such as the Wilson-Cowan, Jansen-Rit, Stuart-Landau, Epileptor, Montbrió, and Wong-Wang equations placed at each region. The ability to perform parallel and rapid simulations, coupled with a taxonomy of feature extraction, allows for detailed and accurate parameter estimation from associated neuroimaging modalities such as (s)EEG/MEG/fMRI. This is crucial for advancing our understanding of brain dynamics and the underlying mechanisms of various brain disorders. Overall, VBI represents a substantial improvement over alternative methods, offering a robust framework for both simulation and parameter estimation, contributing to the advancement of network neuroscience and, potentially, precision medicine.
The alternatives for parameter estimation include optimization techniques (Hashemi et al., 2018; Wang et al., 2019; Kong et al., 2021; Cabral et al., 2022; Liu et al., 2023), approximate Bayesian computation (ABC), and MCMC sampling (Jha et al., 2022). Optimization techniques are sensitive to the choice of the objective function (e.g. minimizing distance error or maximizing correlation) and do not provide estimates of uncertainty. Although multiple runs and thresholding can be used to address these issues, such methods often fall short in revealing relationships between parameters, such as identifying degeneracy, which is crucial for reliable causal inference. Alternatively, ABC compares observed and simulated data using a distance measure based on summary statistics (Beaumont et al., 2002; Beaumont, 2010; Sisson et al., 2018). It is known that ABC methods suffer from the curse of dimensionality, and their performance also depends critically on the tolerance level in the accept/reject setting (Gonçalves et al., 2020; Cranmer et al., 2020). Self-tuning variants of MCMC sampling have also been used for model inversion at the whole-brain level (Hashemi et al., 2020; Hashemi et al., 2021). Although MCMC is unbiased and exact in the limit of infinite samples, it can be computationally prohibitive, and sophisticated reparameterization methods are often required to facilitate convergence at the whole-brain level (Jha et al., 2022; Baldy et al., 2023). This becomes more challenging for gradient-based MCMC algorithms, due to the bistability and stiffness of neural mass models. Tailored to Bayes' rule, SBI sidesteps these issues by relying on expressive deep neural density estimators (such as MAF and NSF) trained on low-dimensional data features to efficiently approximate the posterior distribution of model parameters. Taking spiking neurons as generative models, this approach has demonstrated superior performance compared to alternative methods, as it does not require the model or data features to be differentiable (Gonçalves et al., 2020; Baldy et al., 2024).
In previous studies, we demonstrated the effectiveness of SBI on virtual brain models of neurological (Hashemi et al., 2023; Wang et al., 2024; Mazzara et al., 2025) and neurodegenerative diseases (Hashemi et al., 2024; Yalçınkaya et al., 2023; Hashemi et al., 2025), as well as focal intervention (Rabuffo et al., 2025) and healthy aging (Lavanga et al., 2023). In this work, we extended this probabilistic methodology to encompass a broader range of whole-brain network models, highlighting its flexibility and scalability in leveraging diverse computational resources, from CPUs/GPUs to high-performance computing facilities.
Our results indicated that the VBI toolkit effectively estimates posterior distributions of control parameters in whole-brain network modeling, offering a deeper understanding of the mechanisms underlying brain activity. For example, using the Montbrió and Wong-Wang models, we achieved a close match between observed and predicted FC/FCD matrices derived from BOLD time series (Figures 7 and 8). Additionally, the Jansen-Rit and Stuart-Landau models provided accurate inferences of PSD from neural activity (Figures 4 and 5), while the Epileptor model precisely captured the spread of seizure envelopes (Figure 6). These results underscore the toolkit’s capability to manage complex, high-dimensional data with precision. Uncertainty quantification using VBI can illuminate and combine the informativeness of data features (e.g., FC/FCD) and reveal the causal drivers behind interventions (Lavanga et al., 2023; Hashemi et al., 2024; Rabuffo et al., 2025). This adaptability ensures that VBI can be applied across various (source-localized) neuroimaging modalities, accommodating different computational capabilities and research needs.
Note that there is no specific rule for determining the optimal number of simulations required for training; in general, a larger number of simulations, depending on the available computational resources, tends to improve the quality of posterior estimation. When using synthetic data, we can monitor the z-score and posterior shrinkage to assess the accuracy and reliability of the inferred parameters. The required budget also critically depends on the parameter dimensionality. For instance, in estimating only the global coupling parameter, a maximum of 300 simulations was used, demonstrating accurate estimation across models and different configurations (see Figure 2—figure supplement 3), except for the Jansen-Rit model, where coupling did not induce a significant change in the intrinsic frequency of regional activity. Importantly, the choice of data features is critical, and factors that compromise accurate feature calculation can cause the method to fail. For instance, high noise levels in observations or dynamical noise can compromise the accurate calculation of data features, undermining the inference process (see Figure 6—figure supplement 2, Figure 6—figure supplement 3, Figure 7—figure supplement 6, Figure 7—figure supplement 7). Identifying the set of low-dimensional data features that are relevant to the control parameters for each case study is another challenge in effectively applying SBI. Nevertheless, the uncertainty of the posterior informs us about the predictive power of these features. Statistical moments of time series could be effective candidates for most models; however, this poses a formidable challenge for inference from empirical data, as certain moments, such as the mean and variance, may be lost during preprocessing steps. Hyperparameter and noise estimation can also be challenging for SBI.
Various sequential methods, such as SNPE (Greenberg et al., 2019), SNLE (Lueckmann et al., 2019), and SNRE (Durkan et al., 2020), have been proposed to reduce the computational costs of SBI by iteratively refining the fit to specific targets. These approaches aim for more precise parameter estimation by progressively adjusting the model based on each new data set or subset, potentially enhancing the accuracy of the fit at reduced computational effort. The choice of method depends on the specific characteristics and requirements of the problem being addressed (Lueckmann et al., 2021). Our previous study indicates that for inferring whole-brain dynamics of epilepsy spread, the SNPE method outperforms alternative approaches (Hashemi et al., 2023). Nevertheless, sequential methods can become unstable, with simulators potentially diverging and causing probability mass to leak into regions that lack prior support (Hashemi et al., 2023). In this study, we used single-round training to benefit from an amortization strategy. This approach brings the costs of simulation and network training upfront, enabling inference on new data to be performed rapidly (within seconds). This strategy facilitates personalized inference at the subject level, as the generative model is tailored by the SC matrix, thereby allowing for rapid hypothesis evaluation specific to each subject (e.g. in delineating the epileptogenic and propagation zones). Note that model comparison across different configurations or model structures, as is well established in dynamic causal modeling (Penny et al., 2004; Penny, 2012; Baldy et al., 2025), has yet to be explored in this context.
Deep learning algorithms are increasingly gaining traction in the context of whole-brain modeling. The VBI toolkit leverages a class of deep generative models, called Normalizing Flows (NFs; Rezende and Mohamed, 2015; Kobyzev et al., 2019), to model probability distributions given samples drawn from those distributions. Using NFs, a base probability distribution (e.g. a standard normal) is transformed into a complex (potentially multi-modal) distribution through a sequence of invertible transformations. Variational autoencoders (VAEs; Kingma et al., 2014; Doersch, 2016) are a class of deep generative models that encode data into a latent space and then decode it back to reconstruct the original data. Recently, Sip et al., 2023 introduced a method using VAEs for nonlinear dynamical system identification, enabling the inference of neural mass models and region- and subject-specific parameters from functional data. VAEs have also been employed for dimensionality reduction of whole-brain functional connectivity (Perl et al., 2023b), and to investigate various pathologies and their severity by analyzing the evolution of trajectories within a low-dimensional latent space (Perl et al., 2023a). Additionally, Generative Adversarial Networks (GANs; Goodfellow et al., 2014; Creswell et al., 2018) have demonstrated remarkable success in mapping latent space to data space by learning a manifold induced from a base density (Liu et al., 2021). This method merits further exploration within the context of whole-brain dynamics. To fully harness the potential of deep generative models in large-scale brain network modeling, integrating VAEs and GANs into the VBI framework would be beneficial. This would elucidate their strengths and limitations within this context and guide future advancements in the field.
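The change-of-variables mechanism underlying NFs can be illustrated in a few lines of PyTorch; here the invertible transform is fixed rather than learned, purely to show how a base normal is mapped to a non-Gaussian density with an exactly evaluable log-probability (MAF and NSF stack many learnable transforms of this kind):

```python
import torch
from torch import distributions as D

# Base density: a standard normal in 2-D.
base = D.Independent(D.Normal(torch.zeros(2), torch.ones(2)), 1)

# A fixed invertible transform (affine, then tanh); trained flows
# parameterize such transforms with neural networks.
transform = D.transforms.ComposeTransform([
    D.transforms.AffineTransform(loc=torch.tensor([1.0, -1.0]),
                                 scale=torch.tensor([0.5, 2.0])),
    D.transforms.TanhTransform(),
])
flow = D.TransformedDistribution(base, transform)

samples = flow.sample((1000,))     # draw from the transformed density
log_prob = flow.log_prob(samples)  # exact density via the Jacobian correction
```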
In summary, VBI offers fast simulations, a taxonomy of feature extraction, and deep generative models, making it a versatile tool for model-based inference from different neuroimaging modalities, helping researchers to explore brain (dys)functioning in greater depth. This advancement not only enhances our theoretical understanding but also holds promise for practical applications in diagnosing and treating neurological conditions.
Materials and methods
The virtual brain models
To build a virtual brain model (see Figure 1), the process begins with parcellating the brain into regions using anatomical data, typically derived from T1-MRI scans. Each region, represented as a node in the network, is then equipped with a neural mass model to simulate the collective behavior of neurons within that area. These nodes are interconnected using a structural connectivity (SC) matrix, typically obtained from diffusion-weighted magnetic resonance imaging (DW-MRI). The connectome was built with the TVB-specific reconstruction pipeline using generally available neuroimaging software (Schirner et al., 2015). The entire network of interconnected nodes is then simulated using neuroinformatic tools, such as The Virtual Brain (TVB; Sanz-Leon et al., 2015), generating neural activities at the source level. However, neural sources are not directly observable in real-world experiments, and a projection needs to be established to transform the simulated neural activity into empirically measurable quantities, such as (s)EEG, MEG, and fMRI. This approach offers insights into both normal brain function and neurological disorders (Hashemi et al., 2025). In the following, we describe commonly used whole-brain network models at the source level, which can be mapped to different types of neuroimaging recordings. Note that each model represents one of many possible variants used in the literature, and the choice of model often depends on the specific research question, the spatial and temporal resolution of the available data, and the desired level of biological or mathematical detail.
Wilson-Cowan model
The Wilson-Cowan model (Wilson and Cowan, 1972) is a seminal neural mass model that describes the dynamics of connected excitatory and inhibitory neural populations at the cortical microcolumn level. It has been widely used to understand the collective behavior of neurons and simulate neural activities recorded by methods such as local field potentials (LFPs) and EEG. The model effectively captures phenomena such as oscillations, wave propagation, pattern formation in neural tissue, and responses to external stimuli, offering insights into various brain (dys)functions, particularly in Parkinson's disease (Duchet et al., 2021; Sermon et al., 2023).
The Wilson-Cowan model describes the temporal evolution of the mean firing rates of excitatory ($E$) and inhibitory ($I$) populations using nonlinear differential equations. Each population's activity is governed by a balance of self-excitation, cross-inhibition, external inputs, and network interactions through long-range coupling. The nonlinearity arises from a sigmoidal transfer function $\mathcal{S}$, which maps the total synaptic input to the firing rate, capturing saturation effects and thresholds in neural response. In the whole-brain network extension, each neural population at node $i$ receives input from other nodes via a weighted connectivity matrix, allowing the study of large-scale brain dynamics and spatial pattern formation (Wilson and Cowan, 1972; Wilson and Cowan, 1973; Daffertshofer and van Wijk, 2011):

$$\tau_e \frac{dE_i}{dt} = -E_i + (k_e - r_e E_i)\, \mathcal{S}_e\Big(\alpha_e \big(c_{ee} E_i - c_{ei} I_i + P_i - \theta_e + g_e \sum_{j=1}^{N} \mathrm{SC}_{ij} E_j\big)\Big) + \sigma \xi_i(t),$$

$$\tau_i \frac{dI_i}{dt} = -I_i + (k_i - r_i I_i)\, \mathcal{S}_i\Big(\alpha_i \big(c_{ie} E_i - c_{ii} I_i + Q_i - \theta_i + g_i \sum_{j=1}^{N} \mathrm{SC}_{ij} E_j\big)\Big) + \sigma \xi_i(t), \quad (1)$$

with the sigmoidal response function $\mathcal{S}_k(x) = c_k / (1 + e^{-a_k (x - b_k)})$ for $k \in \{e, i\}$, which incorporates both local dynamics and global interactions, modulated by coupling strengths and synaptic weights. Here, $\mathrm{SC}_{ij}$ is an element of the (non)symmetric structural connectivity matrix and is nonzero if there is a connection between regions $i$ and $j$. The nominal parameter values and the prior range for the target parameters are summarized in Table 1.
Parameter descriptions for capturing whole-brain dynamics using the Wilson-Cowan neural mass model.
| Parameter | Description | Value | Prior |
|---|---|---|---|
| $c_{ee}$ | Excitatory to excitatory synaptic strength | 16.0 | |
| $c_{ei}$ | Inhibitory to excitatory synaptic strength | 12.0 | |
| $c_{ie}$ | Excitatory to inhibitory synaptic strength | 15.0 | |
| $c_{ii}$ | Inhibitory to inhibitory synaptic strength | 3.0 | |
| $\tau_e$ | Time constant of excitatory population | 8.0 | |
| $\tau_i$ | Time constant of inhibitory population | 8.0 | |
| $a_e$ | Sigmoid slope for excitatory population | 1.3 | |
| $a_i$ | Sigmoid slope for inhibitory population | 2.0 | |
| $b_e$ | Sigmoid threshold for excitatory population | 4.0 | |
| $b_i$ | Sigmoid threshold for inhibitory population | 3.7 | |
| $c_e$ | Maximum output of sigmoid for excitatory population | 1.0 | |
| $c_i$ | Maximum output of sigmoid for inhibitory population | 1.0 | |
| $\theta_e$ | Firing threshold for excitatory population | 0.0 | |
| $\theta_i$ | Firing threshold for inhibitory population | 0.0 | |
| $r_e$ | Refractoriness of excitatory population | 1.0 | |
| $r_i$ | Refractoriness of inhibitory population | 1.0 | |
| $k_e$ | Scaling constant for excitatory output | 0.994 | |
| $k_i$ | Scaling constant for inhibitory output | 0.999 | |
| $\alpha_e$ | Gain of excitatory population | 1.0 | |
| $\alpha_i$ | Gain of inhibitory population | 1.0 | |
| $P$ | External input to excitatory population | 0.0 | |
| $Q$ | External input to inhibitory population | 0.0 | |
| $g_e$ | Global coupling strength (excitatory) | 0.0 | |
| $g_i$ | Global coupling strength (inhibitory) | 0.0 | |
| $\sigma$ | Standard deviation of Gaussian noise | 0.005 | |
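To make the numerical scheme concrete, below is a minimal NumPy sketch of Euler-Maruyama integration of the coupled Wilson-Cowan equations above, using the nominal values from Table 1; the toy connectome, integration step, and coupling value are illustrative assumptions and do not reflect VBI's optimized GPU implementation.

```python
import numpy as np

def wc_sigmoid(x, a, b, c):
    # S(x) = c / (1 + exp(-a * (x - b)))
    return c / (1.0 + np.exp(-a * (x - b)))

def simulate_wc_network(SC, T=50.0, dt=0.01, g_e=0.5, g_i=0.0, sigma=0.005, seed=0):
    """Euler-Maruyama integration of the coupled Wilson-Cowan equations (Table 1 values)."""
    rng = np.random.default_rng(seed)
    n = SC.shape[0]
    c_ee, c_ei, c_ie, c_ii = 16.0, 12.0, 15.0, 3.0
    tau_e, tau_i = 8.0, 8.0
    a_e, b_e, c_e = 1.3, 4.0, 1.0
    a_i, b_i, c_i = 2.0, 3.7, 1.0
    k_e, k_i, r_e, r_i = 0.994, 0.999, 1.0, 1.0
    alpha_e, alpha_i, P, Q = 1.0, 1.0, 0.0, 0.0  # firing thresholds theta are 0 and omitted
    E = rng.uniform(0.1, 0.2, n)
    I = rng.uniform(0.1, 0.2, n)
    n_steps = int(T / dt)
    E_t = np.empty((n_steps, n))
    for t in range(n_steps):
        input_e = alpha_e * (c_ee * E - c_ei * I + g_e * SC @ E + P)
        input_i = alpha_i * (c_ie * E - c_ii * I + g_i * SC @ I + Q)
        dE = (-E + (k_e - r_e * E) * wc_sigmoid(input_e, a_e, b_e, c_e)) / tau_e
        dI = (-I + (k_i - r_i * I) * wc_sigmoid(input_i, a_i, b_i, c_i)) / tau_i
        E = E + dt * dE + sigma * np.sqrt(dt) * rng.standard_normal(n)
        I = I + dt * dI + sigma * np.sqrt(dt) * rng.standard_normal(n)
        E_t[t] = E
    return E_t

# usage on a random 4-node connectome (rows normalized)
rng = np.random.default_rng(1)
SC = rng.uniform(0, 1, (4, 4))
np.fill_diagonal(SC, 0)
traces = simulate_wc_network(SC / SC.sum(axis=1, keepdims=True))
```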
Jansen-Rit model
The Jansen-Rit neural mass model has been widely used to simulate physiological signals from various recording methods like intracranial LFPs and scalp MEG/EEG recordings. For example, it has been shown to recreate responses similar to evoked-related potentials after a series of impulse stimulations (David and Friston, 2003; David et al., 2006), generating high-alpha and low-beta oscillations, with added recurrent inhibitory connections and spike-rate modulation (Moran et al., 2007), and also seizure patterns similar to those seen in temporal lobe epilepsy (Wendling et al., 2001). This biologically motivated model comprises three main populations of neurons: excitatory pyramidal neurons, inhibitory interneurons, and excitatory interneurons. These populations interact with each other through synaptic connections, forming a feedback loop that produces oscillatory activity governed by a set of nonlinear ordinary differential equations (Jansen and Rit, 1995; David and Friston, 2003; Daffertshofer and van Wijk, 2022):

$$\dot{y}_{0,i} = y_{3,i}, \qquad \dot{y}_{3,i} = A a\, S(y_{1,i} - y_{2,i}) - 2a\, y_{3,i} - a^2 y_{0,i},$$
$$\dot{y}_{1,i} = y_{4,i}, \qquad \dot{y}_{4,i} = A a \Big(P(t) + C_2\, S(C_1 y_{0,i}) + G \sum_{j=1}^{N}\mathrm{SC}_{ij}\, S(y_{1,j} - y_{2,j})\Big) - 2a\, y_{4,i} - a^2 y_{1,i},$$
$$\dot{y}_{2,i} = y_{5,i}, \qquad \dot{y}_{5,i} = B b\, C_4\, S(C_3 y_{0,i}) - 2b\, y_{5,i} - b^2 y_{2,i},$$

with the sigmoid function $S(v) = \frac{2 e_0}{1 + \exp(r (v_0 - v))}$, where $y_{0,i}$, $y_{1,i}$, and $y_{2,i}$ denote the average membrane potentials of pyramidal cells, excitatory interneurons, and inhibitory interneurons, respectively. Their corresponding time derivatives, $y_{3,i}$, $y_{4,i}$, and $y_{5,i}$, represent the rates of change of these membrane potentials. $P(t)$ represents an external input current. The sigmoid function $S(v)$ maps the average membrane potential of neurons to their mean action potential firing rate. SC is a normalized structural connectivity matrix. The model's output at region $i$ corresponds to the membrane potential of pyramidal cells and is given by $y_{1,i} - y_{2,i}$. The nominal parameter values and the prior range for the target parameters are summarized in Table 2.
Parameter descriptions for capturing whole-brain dynamics using Jansen-Rit neural mass model.
EP: excitatory populations, IP: inhibitory populations, PSP: post synaptic potential, PSPA: post synaptic potential amplitude.
| Parameters | Description | Value | Prior |
|---|---|---|---|
| $A$ | Excitatory PSPA | 3.25 mV | |
| $B$ | Inhibitory PSPA | 22 mV | |
| $1/a$ | Time constant of excitatory PSP | 10 ms | |
| $1/b$ | Time constant of inhibitory PSP | 20 ms | |
| $C_1$, $C_2$ | Average numbers of synapses between EP | $C$, $0.8C$ | |
| $C_3$, $C_4$ | Average numbers of synapses between IP | $0.25C$ | |
| $2e_0$ | Maximum firing rate | 5 Hz | |
| $v_0$ | Potential at half of maximum firing rate | 6 mV | |
| $r$ | Slope of sigmoid function at $v_0$ | 0.56 mV⁻¹ | |
| $C$ | Average numbers of synapses between neural populations | 135 | |
| $G$ | Scaling the strength of network connections | 1.5 | |
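As a quick numerical sanity check of the sigmoid introduced above, the snippet below evaluates $S(v)$ at the half-activation potential and in saturation; taking $e_0 = 2.5\ \mathrm{s^{-1}}$ is an assumption consistent with the 5 Hz maximum firing rate listed in Table 2.

```python
import numpy as np

def jr_sigmoid(v, e0=2.5, v0=6.0, r=0.56):
    # S(v) = 2*e0 / (1 + exp(r*(v0 - v))): mean firing rate (Hz) at membrane potential v (mV)
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

print(jr_sigmoid(6.0))   # at v = v0 the rate is e0 = 2.5 Hz (half of the maximum)
print(jr_sigmoid(60.0))  # saturates near 2*e0 = 5 Hz
```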
Stuart-Landau oscillator
The Stuart-Landau oscillator (Selivanov et al., 2012) is a generic mathematical model used to describe oscillatory phenomena, particularly those near a Hopf bifurcation, and it is often employed to study the nonlinear dynamics of neural activity (Deco et al., 2017; Petkoski and Jirsa, 2019; Cabral et al., 2022; Wang et al., 2024). One approach uses this model to capture slow hemodynamic changes in the BOLD signal (Deco et al., 2017), while others apply it to model fast neuronal dynamics, which can be linked directly to EEG/MEG data (Petkoski and Jirsa, 2019; Cabral et al., 2022; Wang et al., 2024). Note that this is a phenomenological framework, and the two applications operate on completely different time scales.
In the network, each brain region, characterized by an autonomous Stuart-Landau oscillator, can exhibit either damped or limit-cycle oscillations depending on the bifurcation parameter $a$. If $a < 0$, the system shows damped oscillations, similar to a pendulum under friction. In this regime, the system, when subjected to perturbation, relaxes back to its stable fixed point through damped oscillations with an angular frequency $\omega$. The rate of amplitude damping is determined by $|a|$. Conversely, if $a > 0$, the system supports limit-cycle solutions, allowing for self-sustained oscillations even in the absence of external noise. At the critical value $a = 0$, the system undergoes a Hopf bifurcation, that is, small changes in parameters can lead to large variations in the system's behavior.
Using whole-brain network modeling of EEG/MEG data (Sorrentino et al., 2024; Cabral et al., 2022), the oscillators are interconnected via white-matter pathways, with coupling strengths specified by subject-specific DTI fiber counts, that is, the elements of the SC matrix. This adjacency matrix is then scaled by a global coupling parameter $G$. Note that the coupling between regions accounts for finite conduction times, with time-delays $\tau_{jk}$ often estimated by dividing the Euclidean distances $d_{jk}$ between nodes by an average conduction velocity $v$. In the absence of personalized time-delays (Lemaréchal et al., 2022; Sorrentino et al., 2022), we can use the distance as a proxy, assuming a constant propagation velocity. The distance itself can be defined as either the length of the tracts or the Euclidean distance. Taking this into account, the activity of each region is given by a set of complex differential equations:

$$\frac{dz_j}{dt} = z_j\left(a + i\omega_j - |z_j|^2\right) + G\sum_{k=1}^{N} \mathrm{SC}_{jk}\left(z_k(t - \tau_{jk}) - z_j(t)\right) + \sigma\,\eta_j(t),$$

where $z_j$ is a complex variable and $x_j = \mathrm{Re}(z_j)$ is the corresponding time series. In this particular realization, each region has a natural frequency of 40 Hz ($\omega_j = 2\pi \times 40$ rad/s), motivated by empirical studies demonstrating the emergence of gamma oscillations from the balance of excitation and inhibition, playing a role in local circuit computations (Funk and Epstein, 2004).

In this study, for the sake of simplicity, a common cortico-cortical conduction velocity $v$ is estimated, that is, the time-delays follow from the distances as $\tau_{jk} = d_{jk}/v$. We also consider $a = -5$, capturing the highly variable amplitude envelope of gamma oscillations as reported in experimental recordings (Buzsáki and Wang, 2012; Cabral et al., 2022). This choice also best reflects the slowest decay time constants of inhibitory receptors, approximately 1 s (Schnitzler and Gross, 2005). A Gaussian noise term (here denoted by $\eta_j$) with an intensity of $\sigma = 10^{-4}$ is added to each oscillator to mimic stochastic fluctuations. The nominal parameter values and the prior range for the target parameters are summarized in Table 3.
Parameter descriptions for capturing whole-brain dynamics using Stuart-Landau oscillator.
| Parameters | Description | Value | Prior |
|---|---|---|---|
| $a$ | Bifurcation parameter | −5 | |
| $\omega$ | Natural angular frequency | $2\pi \times 40$ rad/s | |
| $\sigma$ | Noise factor | 10⁻⁴ | |
| $v$ | Average conduction velocity | 6.0 m/s | |
| $G$ | Global coupling parameter | 350 | |
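The main implementation subtlety in this model is the conduction delays, which require keeping a history of past states. Below is a minimal NumPy sketch using a circular history buffer with Euler-Maruyama integration; the toy connectome, the distance matrix (in meters), and the row normalization of SC are illustrative assumptions, since the appropriate magnitude of $G$ depends on how SC is scaled.

```python
import numpy as np

def simulate_sl_network(SC, dist, G=350.0, v=6.0, a=-5.0, f=40.0,
                        sigma=1e-4, dt=1e-4, T=1.0, seed=0):
    """Delay-coupled Stuart-Landau oscillators with delays tau_jk = dist_jk / v."""
    rng = np.random.default_rng(seed)
    n = SC.shape[0]
    omega = 2.0 * np.pi * f
    delays = np.rint(dist / v / dt).astype(int)  # delays in integration steps
    buf_len = delays.max() + 1
    # circular buffer of past complex states, initialized near the fixed point
    buf = 0.01 * (rng.standard_normal((buf_len, n)) + 1j * rng.standard_normal((buf_len, n)))
    cols = np.arange(n)
    x = np.empty((int(T / dt), n))
    for t in range(x.shape[0]):
        z = buf[t % buf_len]
        z_delayed = buf[(t - delays) % buf_len, cols[None, :]]  # entry [j, k] = z_k(t - tau_jk)
        coupling = G * np.sum(SC * (z_delayed - z[:, None]), axis=1)
        dz = z * (a + 1j * omega - np.abs(z) ** 2) + coupling
        noise = sigma * np.sqrt(dt) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        buf[(t + 1) % buf_len] = z + dt * dz + noise
        x[t] = z.real  # real part as the simulated signal
    return x

# usage: 3 regions with random weights and inter-regional distances of 2-15 cm
rng = np.random.default_rng(1)
SC = rng.uniform(0, 1, (3, 3)); np.fill_diagonal(SC, 0)
dist = rng.uniform(0.02, 0.15, (3, 3)); dist = (dist + dist.T) / 2
ts = simulate_sl_network(SC / SC.sum(axis=1, keepdims=True), dist)
```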
Epileptor model
In personalized whole-brain network modeling of epilepsy spread (Jirsa et al., 2017), the dynamics of each brain region are governed by the Epileptor model (Jirsa et al., 2014). The Epileptor model provides a comprehensive description of epileptic seizures, encompassing the complete taxonomy of system bifurcations to simultaneously reproduce the dynamics of seizure onset, progression, and termination (Saggio et al., 2020). The full Epileptor model comprises five state variables that couple two oscillatory dynamical systems operating on three different time scales (Jirsa et al., 2014). Then, motivated by Synergetic theory (Haken, 1977; Jirsa and Haken, 1997) and under time-scale separation (Proix et al., 2014), the fast variables rapidly collapse on the slow manifold (McIntosh and Jirsa, 2019), whose dynamics is governed by the slow variable. This adiabatic approximation yields the 2D reduction of the whole-brain model of epilepsy spread, also known as the VEP, as follows:

$$\frac{dx_i}{dt} = 1 - x_i^3 - 2x_i^2 - z_i + I_1,$$
$$\frac{dz_i}{dt} = \frac{1}{\tau}\Big(4(x_i - \eta_i) - z_i - G\sum_{j=1}^{N}\mathrm{SC}_{ij}\,(x_j - x_i)\Big),$$

where $x_i$ and $z_i$ indicate the fast and slow variables corresponding to the $i$-th brain region, respectively, and the set of unknowns $\{\eta_i\}_{i=1}^{N}$ is the spatial map of epileptogenicity to be estimated. SC is a normalized structural connectivity matrix. In real-world epilepsy applications (Hashemi et al., 2023; Hashemi et al., 2021; Wang et al., 2023b), we compute the envelope function from sEEG data to perform inference. The nominal parameter values and the prior range for the target parameters are summarized in Table 4.
Parameter descriptions for capturing whole-brain dynamics using 2D Epileptor neural mass model.
| Parameter | Description | Value | Prior |
|---|---|---|---|
| $I_1$ | Input electric current | 3.1 | |
| $\tau$ | System time constant | 90 ms | |
| $\eta$ | Spatial map of epileptogenicity | −3.65 | |
| $G$ | Global scaling factor on network connections | 1.0 | |
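The practical appeal of the 2D reduction is that seizure propagation can be simulated cheaply across a whole network. Below is a minimal deterministic Euler sketch; the two-node connectome and the choice of one epileptogenic region are illustrative assumptions. For the nominal parameters, a decoupled node switches from a stable resting state to self-sustained seizure-like oscillations roughly when its $\eta_i$ exceeds about −2.05, which is why the spatial map of epileptogenicity is the key unknown.

```python
import numpy as np

def simulate_vep(SC, eta, G=1.0, I1=3.1, tau=90.0, dt=0.05, T=4000.0):
    """Euler integration of the 2D Epileptor (VEP) network."""
    n = SC.shape[0]
    x = -2.5 * np.ones(n)  # start near the healthy down-state
    z = 3.5 * np.ones(n)
    x_t = np.empty((int(T / dt), n))
    for t in range(x_t.shape[0]):
        dx = 1.0 - x**3 - 2.0 * x**2 - z + I1
        dz = (4.0 * (x - eta) - z - G * (SC @ x - SC.sum(axis=1) * x)) / tau
        x = x + dt * dx
        z = z + dt * dz
        x_t[t] = x
    return x_t

# two coupled regions: region 0 epileptogenic, region 1 at the nominal healthy value
SC = np.array([[0.0, 1.0], [1.0, 0.0]])
xs = simulate_vep(SC, eta=np.array([-1.6, -3.65]))
```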
Montbrió model
The exact macroscopic dynamics of a specific brain region (represented as a node in the network) can be analytically derived in the thermodynamic limit of infinitely many all-to-all coupled spiking neurons (Montbrió et al., 2015) or in the θ-neuron representation (Byrne et al., 2020). By assuming a Lorentzian distribution over the excitabilities of a large ensemble of quadratic integrate-and-fire neurons, with synaptic weight $J$ and half-width $\Delta$ centered at $\bar{\eta}$, the macroscopic dynamics has been derived in terms of the collective firing activity $r$ and the mean membrane potential $v$ (Montbrió et al., 2015). Then, by coupling the brain regions via an additive current (e.g. in the average membrane potential equations), the dynamics of the whole-brain network can be described as follows (Rabuffo et al., 2025):

$$\tau\,\dot{r}_i = 2 r_i v_i + \frac{\Delta}{\pi\tau},$$
$$\tau\,\dot{v}_i = v_i^2 - (\pi\tau r_i)^2 + J\tau r_i + \bar{\eta} + I_i + G\sum_{j=1}^{N}\mathrm{SC}_{ij}\, r_j + \xi_i(t),$$

where $v_i$ and $r_i$ are the average membrane potential and firing rate, respectively, at the $i$-th brain region, and the parameter $G$ is the network scaling parameter that modulates the overall impact of brain connectivity on the state dynamics. $\mathrm{SC}_{ij}$ denotes the connection weight between the $i$-th and $j$-th regions, and the dynamical noise $\xi_i(t)$ follows a Gaussian distribution with mean zero and variance $\sigma^2$.

The model parameters are tuned so that each decoupled node is in a bistable regime, exhibiting a down-state stable fixed point (low firing rate) and an up-state stable focus (high firing rate) in the phase space (Montbrió et al., 2015; Baldy et al., 2024). Bistability is a fundamental property of regional brain dynamics that ensures switching behavior in the data (e.g. to generate FCD), which has been recognized as representative of realistic dynamics observed empirically (Rabuffo et al., 2021; Breyton et al., 2023; Fousek et al., 2024; Rabuffo et al., 2025). The solution of the coupled system yields a neuroelectric dataset describing the evolution of the variables $(r_i(t), v_i(t))$ in each brain region, providing measures of macroscopic activity. The surrogate BOLD activity for each region is then derived by filtering this activity through the Balloon-Windkessel model (Friston et al., 2000). The input current $I_i$ represents the stimulation to selected brain regions, which increases the basin of attraction of the up-state in comparison to the down-state, while the fixed points move farther apart (Rabuffo et al., 2021; Breyton et al., 2023; Hashemi et al., 2024; Rabuffo et al., 2025). The nominal parameter values and the prior range for the target parameters are summarized in Table 5.
Parameter descriptions for capturing whole-brain dynamics using Montbrió model.
| Parameter | Description | Nominal value | Prior |
|---|---|---|---|
| $\tau$ | Characteristic time constant | 1 ms | |
| $J$ | Synaptic weight | 14.5 ms⁻¹ | |
| $\Delta$ | Spread of the heterogeneous noise distribution | 0.7 ms⁻¹ | |
| $I$ | Input current representing stimulation | 0.0 | |
| $\sigma^2$ | Gaussian noise variance | 0.037 | |
| $\bar{\eta}$ | Excitability | −4.6 | |
| $G$ | Scaling the strength of network connections | 0.56 | |
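Bistability of the decoupled node can be checked directly by integrating a single region from two different initial conditions, which should relax to the down-state fixed point and the up-state focus, respectively. The sketch below is deterministic (noise omitted) and uses the nominal values from Table 5; the initial conditions are illustrative assumptions.

```python
import numpy as np

def montbrio_node(r0, v0, eta=-4.6, J=14.5, delta=0.7, tau=1.0, I=0.0,
                  dt=0.01, T=100.0):
    """Deterministic Euler integration of a single decoupled Montbrio node."""
    r, v = r0, v0
    for _ in range(int(T / dt)):
        dr = (delta / (np.pi * tau) + 2.0 * r * v) / tau
        dv = (v**2 - (np.pi * tau * r) ** 2 + J * tau * r + eta + I) / tau
        r, v = r + dt * dr, v + dt * dv
    return r, v

# two initial conditions should relax to two distinct stable states
print(montbrio_node(0.02, -2.2))   # expected: low firing rate (down-state)
print(montbrio_node(0.60, -0.40))  # expected: high firing rate (up-state focus)
```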
Wong-Wang model
Another commonly used whole-brain model for the simulation of neural activity is the so-called parameterized dynamic mean-field (pDMF) model (Hansen et al., 2015; Kong et al., 2021; Deco et al., 2013). At each region, it comprises a simplified system of two nonlinear coupled differential equations, motivated by the attractor network model that integrates sensory information over time to make perceptual decisions, known as the Wong-Wang model (Wong and Wang, 2006). This biophysically realistic cortical network model of decision making has since been simplified further into a single-population model (Deco et al., 2013), which has been widely used to understand the mechanisms underpinning brain resting-state dynamics (Kong et al., 2021; Deco et al., 2021; Zhang et al., 2024). The pDMF model has also been used to study whole-brain dynamics in various brain disorders, including Alzheimer's disease (Monteverdi et al., 2023), schizophrenia (Klein et al., 2021), and stroke (Klein et al., 2021). The pDMF model equations are given as:

$$\frac{dS_i}{dt} = -\frac{S_i}{\tau_s} + \gamma\,(1 - S_i)\,H(x_i) + \sigma\,\nu_i(t),$$
$$H(x_i) = \frac{a x_i - b}{1 - \exp\big(-d\,(a x_i - b)\big)},$$
$$x_i = w J S_i + G J \sum_{j=1}^{N}\mathrm{SC}_{ij} S_j + I_0,$$

where $H(x_i)$, $S_i$, and $x_i$ denote the population firing rate, the average synaptic gating variable, and the total input current at the $i$-th brain region, respectively. $\nu_i(t)$ is uncorrelated standard Gaussian noise and the noise amplitude is controlled by $\sigma$. The nominal parameter values and the prior range for the target parameters are summarized in Table 6.
Parameter descriptions for capturing whole-brain dynamics using the Wong-Wang model.
| Parameter | Description | Value | Prior |
|---|---|---|---|
| $a$ | Max feeding rate of $H(x)$ | 270 n/C | |
| $b$ | Half saturation of $H(x)$ | 108 Hz | |
| $d$ | Controls the steepness of the curve of $H(x)$ | 0.154 s | |
| $\gamma$ | Kinetic parameter | 0.641/1000 | |
| $\tau_s$ | Synaptic time constant | 100 ms | |
| $J$ | Synaptic coupling | 0.2609 nA | |
| $w$ | Local excitatory recurrence | 0.6 | |
| $I_0$ | Overall effective external input | 0.3 nA | |
| $G$ | Scaling the strength of network connections | 6.28 | |
| $\sigma$ | Noise amplitude | 0.005 | |
According to recent studies (Kong et al., 2021; Zhang et al., 2024), we can parameterize the set of regional parameters $w_i$, $I_{0,i}$, and $\sigma_i$ as linear combinations of group-level T1w/T2w myelin maps (Glasser and Van Essen, 2011) and the first principal gradient of functional connectivity:

$$w_i = a_w + b_w\,\mathrm{Myl}_i + c_w\,\mathrm{Grad}_i,$$
$$I_{0,i} = a_I + b_I\,\mathrm{Myl}_i + c_I\,\mathrm{Grad}_i,$$
$$\sigma_i = a_\sigma + b_\sigma\,\mathrm{Myl}_i + c_\sigma\,\mathrm{Grad}_i,$$

where $\mathrm{Myl}_i$ and $\mathrm{Grad}_i$ are the average values of the T1w/T2w myelin map and the first FC principal gradient, respectively, within the $i$-th brain region. Therefore, the set of unknown parameters to estimate includes $G$ and the linear coefficients $\{a_w, b_w, c_w, a_I, b_I, c_I, a_\sigma, b_\sigma, c_\sigma\}$, hence 10 parameters in total.
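The parameterization above reduces three regional maps to nine coefficients. A minimal sketch of this reduction is given below; the map values and coefficients are hypothetical placeholders.

```python
import numpy as np

def pdmf_regional_params(coef, myelin, gradient):
    """Regional w_i, I0_i, sigma_i as linear combinations of two anatomical maps.
    coef = [a_w, b_w, c_w, a_I, b_I, c_I, a_s, b_s, c_s]; together with the
    global coupling G, these are the 10 free parameters of the pDMF model."""
    a_w, b_w, c_w, a_I, b_I, c_I, a_s, b_s, c_s = coef
    w = a_w + b_w * myelin + c_w * gradient
    I0 = a_I + b_I * myelin + c_I * gradient
    sigma = a_s + b_s * myelin + c_s * gradient
    return w, I0, sigma

# hypothetical regional averages for 3 regions (z-scored maps)
myelin = np.array([0.2, -0.1, 0.4])
gradient = np.array([-0.3, 0.1, 0.2])
w, I0, sigma = pdmf_regional_params(
    [0.6, 0.10, -0.05, 0.30, 0.02, 0.01, 0.005, 0.001, 0.0], myelin, gradient)
```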
The Balloon-Windkessel model
The Balloon-Windkessel model is a biophysical framework that links neural activity to the BOLD signals detected in fMRI. This is not a neuronal model but rather a representation of neurovascular coupling, describing how neural activity influences hemodynamic responses. The model is characterized by two state variables: venous blood volume ($v$) and deoxyhemoglobin content ($q$). The system's input is blood flow ($f$), and the output is the BOLD signal ($y$):

$$y(t) = V_0\Big(k_1 (1 - q) + k_2 \big(1 - \tfrac{q}{v}\big) + k_3 (1 - v)\Big),$$
$$k_1 = 4.3\,\vartheta_0 E_0\,\mathrm{TE}, \qquad k_2 = \varepsilon_0\, r_0 E_0\,\mathrm{TE}, \qquad k_3 = 1 - \varepsilon_0,$$

where $V_0$ represents the resting blood volume fraction, $E_0$ is the oxygen extraction fraction at rest, $\varepsilon_0$ is the ratio of intra- to extravascular signals, $r_0$ is the slope of the relationship between the intravascular relaxation rate and oxygen saturation, $\vartheta_0$ is the frequency offset at the surface of a fully deoxygenated vessel at 1.5 T, and TE is the echo time. The dynamics of venous blood volume $v$ and deoxyhemoglobin content $q$ are governed by the Balloon model's hemodynamic state equations:

$$\tau_0\,\frac{dv}{dt} = f(t) - v^{1/\alpha},$$
$$\tau_0\,\frac{dq}{dt} = f(t)\,\frac{1 - (1 - E_0)^{1/f}}{E_0} - v^{1/\alpha}\,\frac{q}{v},$$

where $\tau_0$ is the transit time of blood flow, $\alpha$ reflects the resistance of the venous vessel (stiffness), and $f(t)$ denotes blood inflow at time $t$, given by

$$\frac{df}{dt} = s,$$

where $s$ is an exponentially decaying vasodilatory signal defined by

$$\frac{ds}{dt} = \varepsilon\, u(t) - \frac{s}{\tau_s} - \frac{f - 1}{\tau_f},$$

where $\varepsilon$ represents the efficacy of neuronal activity $u(t)$ (i.e. integrated synaptic activity) in generating a signal increase, $\tau_s$ is the time constant for signal decay, and $\tau_f$ is the time constant for autoregulatory blood flow feedback (Friston et al., 2000). For parameter values, see Table 7, taken from Friston et al., 2000; Stephan et al., 2007; Stephan et al., 2008. The resulting time series is downsampled to match the TR value in seconds.
Parameter descriptions for the Balloon-Windkessel model to map neural activity to the BOLD signals detected in fMRI.
| Parameter | Description | Value |
|---|---|---|
| $\tau_s$ | Rate constant of vasodilatory signal decay in seconds | 1.5 |
| $\tau_f$ | Time of flow-dependent elimination in seconds | 4.5 |
| $\alpha$ | Grubb's vessel stiffness exponent | 0.2 |
| $\tau_0$ | Hemodynamic transit time in seconds | 1.0 |
| $\varepsilon$ | Efficacy of synaptic activity to induce signal | 0.1 |
| $r_0$ | Slope of intravascular relaxation rate in Hertz | 25.0 |
| $\vartheta_0$ | Frequency offset at outer surface of magnetized vessels | 40.3 |
| $\varepsilon_0$ | Ratio of intra- and extravascular BOLD signal at rest | 1.43 |
| $V_0$ | Resting blood volume fraction | 0.02 |
| $E_0$ | Resting oxygen extraction fraction | 0.8 |
| TE | Echo time, 1.5 T scanner | 0.04 |
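A minimal single-region sketch of the full forward mapping from neural activity to the BOLD signal is given below, assuming simple Euler integration and the Table 7 values; the boxcar input is an illustrative placeholder for simulated neural activity.

```python
import numpy as np

def bold_from_neural(u, dt, TE=0.04, tau_s=1.5, tau_f=4.5, alpha=0.2, tau_0=1.0,
                     eps=0.1, r_0=25.0, theta_0=40.3, eps_0=1.43, V_0=0.02, E_0=0.8):
    """Euler integration of the Balloon-Windkessel equations for one region;
    u is the neural activity sampled at resolution dt (in seconds)."""
    s, f, v, q = 0.0, 1.0, 1.0, 1.0
    k1 = 4.3 * theta_0 * E_0 * TE
    k2 = eps_0 * r_0 * E_0 * TE
    k3 = 1.0 - eps_0
    bold = np.empty(len(u))
    for t, ut in enumerate(u):
        ds = eps * ut - s / tau_s - (f - 1.0) / tau_f   # vasodilatory signal
        df = s                                          # blood inflow
        dv = (f - v ** (1.0 / alpha)) / tau_0           # venous volume
        dq = (f * (1.0 - (1.0 - E_0) ** (1.0 / f)) / E_0
              - v ** (1.0 / alpha) * q / v) / tau_0     # deoxyhemoglobin
        s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
        bold[t] = V_0 * (k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v))
    return bold  # downsample afterwards to the scanner TR

# usage: 5 s of activity within a 60 s window, sampled at dt = 10 ms
dt = 0.01
u = np.zeros(6000); u[500:1000] = 1.0
y = bold_from_neural(u, dt)
```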
Simulation-based inference
In the Bayesian framework (van de Schoot et al., 2021), parameter estimation involves quantifying and propagating uncertainty through probability distributions placed on the parameters (prior information before seeing data), which are updated with information provided by the data (likelihood function). The formidable challenge to conducting efficient Bayesian inference is evaluating the likelihood function $p(x \mid \theta)$. This typically involves an intractable integration over all possible trajectories in the latent space: $p(x \mid \theta) = \int p(x, z \mid \theta)\,dz$, where $p(x, z \mid \theta)$ is the joint probability density of the data $x$ and latent variables $z$, given parameters $\theta$. For whole-brain network models with high-dimensional and nonlinear latent spaces, the computational cost can be prohibitive, making likelihood-based inference with MCMC sampling challenging to converge (Hashemi et al., 2020; Hashemi et al., 2021; Jha et al., 2022).
SBI (Cranmer et al., 2020; Gonçalves et al., 2020; Hashemi et al., 2023), or likelihood-free inference (Papamakarios et al., 2019; Greenberg et al., 2019; Brehmer et al., 2020), addresses issues with explicit likelihood evaluation in complex systems, where it often becomes intractable. The task of density estimation, one of the most fundamental problems in statistics, is to infer an underlying probability distribution based on a set of independently and identically distributed data points drawn from that distribution. Traditional density estimators, such as histograms and kernel density estimators, typically perform well only in low-dimensional settings. Recently, neural network-based approaches have been proposed for conditional density estimation, showing promising results in Bayesian inference problems involving high-dimensional data (Papamakarios and Murray, 2016; Papamakarios and Pavlakou, 2017; Greenberg et al., 2019; Gonçalves et al., 2020; Lueckmann et al., 2021; Liu et al., 2021; Hashemi et al., 2023).
Given a prior distribution $p(\theta)$ placed on the parameters, random simulations are generated (with samples $\theta_i \sim p(\theta)$ from the prior), resulting in pairs $\{(\theta_i, x_i)\}_{i=1}^{N}$, where $x_i \sim p(x \mid \theta_i)$ is the simulated data given $\theta_i$. By training a deep neural density estimator $q_\phi$ (such as NFs; Papamakarios and Pavlakou, 2017; Durkan et al., 2019), we can approximate the posterior $p(\theta \mid x)$ with $q_\phi(\theta \mid x)$ by minimizing the loss function:

$$\mathcal{L}(\phi) = -\sum_{i=1}^{N} \log q_\phi(\theta_i \mid x_i),$$

over network parameters $\phi$. Once the parameters of the neural network are optimized, for observed data $x_{\mathrm{obs}}$ we can readily estimate the target posterior $p(\theta \mid x_{\mathrm{obs}}) \approx q_\phi(\theta \mid x_{\mathrm{obs}})$. This allows for rapid approximation and sampling from the posterior distribution for any new observed data through a forward pass in the trained network (Hashemi et al., 2023; Hashemi et al., 2024).
This approach uses a class of generative machine learning models known as NFs (Rezende and Mohamed, 2015; Kobyzev et al., 2019) to transform a simple base distribution into any complex target through a sequence of invertible mappings. Here, generative modeling is an unsupervised machine learning method for modeling a probability distribution given samples drawn from that distribution. The state-of-the-art NFs, such as MAFs (Papamakarios and Pavlakou, 2017) and NSFs (Durkan et al., 2019), enable fast and exact density estimation and sampling from high-dimensional distributions. These models learn mappings between input data and probability densities, capturing complex dependencies and multi-modal distributions (Kobyzev et al., 2019; Kobyzev et al., 2020).
In our work, we integrate the implementation of these models from the open-source sbi tool, leveraging both MAF and NSF architectures. The MAF model comprises five flow transforms, each with two blocks and 50 hidden units, tanh nonlinearity, and batch normalization after each layer. The NSF model consists of five flow transforms, two residual blocks of 50 hidden units each, ReLU nonlinearity, and 10 spline bins. We apply these generative models to virtual brain simulations conducted with random parameters to approximate the full posterior distribution of parameters from low-dimensional data features. Note that we employ a single round of SBI to benefit from the amortization strategy, rather than using a sequential approach that is designed to achieve a better fit, but only for a specific dataset (Hashemi et al., 2023; Hashemi et al., 2024).
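The end-to-end workflow is compactly expressed with the sbi package (Tejero-Cantero et al., 2020). The sketch below substitutes a trivial two-parameter toy simulator for the whole-brain simulations and data features; the prior ranges, the observation, and the budget of 10,000 simulations are illustrative assumptions.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# toy stand-in for a whole-brain simulator returning low-dimensional features
def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)

# hypothetical 2-parameter example, e.g. a global coupling and an excitability
prior = BoxUniform(low=torch.tensor([0.0, -6.0]), high=torch.tensor([2.0, -2.0]))

theta = prior.sample((10_000,))          # draws from the prior
x = simulator(theta)                     # simulated data features
inference = SNPE(prior=prior, density_estimator="maf")  # or "nsf"
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

# amortized: sampling for any new observation is a forward pass, no retraining
x_obs = torch.tensor([1.0, -4.0])
samples = posterior.sample((5_000,), x=x_obs)
```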
Sensitivity analysis
Sensitivity analysis is a crucial step for identifying which model parameters influence the model's behavior in response to changes in input (Hashemi et al., 2018; Hashemi et al., 2023). A local sensitivity can be quantified by computing the curvature of the objective function through the Hessian matrix (Bates and Watts, 1980; Hashemi et al., 2018). Using SBI, after estimating the posterior for a specific observation, we can perform sensitivity analysis by computing the eigenvalues and corresponding eigenvectors of the following matrix (Tejero-Cantero et al., 2020; Deistler et al., 2021):

$$M = \mathbb{E}_{\theta \sim p(\theta \mid x_{\mathrm{obs}})}\Big[\nabla_\theta \log p(\theta \mid x_{\mathrm{obs}})\,\nabla_\theta \log p(\theta \mid x_{\mathrm{obs}})^{\top}\Big],$$

followed by the eigendecomposition $M = Q \Lambda Q^{-1}$. A large eigenvalue in the so-called active subspaces (Constantine, 2015) indicates that the gradient of the posterior is large in the corresponding direction, suggesting that the system output is sensitive to changes along that eigenvector.
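Continuing from the trained posterior in the previous sketch, this analysis is available through the ActiveSubspace utility of the sbi package (assuming its current interface); anchoring the posterior at the observation and using the posterior log-probability as the property of interest reproduces the eigendecomposition described above.

```python
from sbi.analysis import ActiveSubspace

# `posterior` and `x_obs` come from the simulation-based inference sketch above
sensitivity = ActiveSubspace(posterior.set_default_x(x_obs))
eigen_values, eigen_vectors = sensitivity.find_directions(
    posterior_log_prob_as_property=True
)
# large eigenvalues mark directions in parameter space along which
# the posterior log-probability changes most strongly
print(eigen_values)
```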
Evaluation of posterior fit
To assess the reliability of Bayesian inference using synthetic data, we evaluate the posterior z-scores (denoted by $z$) against the posterior shrinkage (denoted by $s$), as defined by Betancourt et al., 2014:

$$z = \left|\frac{\mu_{\mathrm{post}} - \theta^{*}}{\sigma_{\mathrm{post}}}\right|, \qquad s = 1 - \frac{\sigma^2_{\mathrm{post}}}{\sigma^2_{\mathrm{prior}}},$$

where $\mu_{\mathrm{post}}$ and $\theta^{*}$ are the posterior mean and the true value, respectively, $\sigma_{\mathrm{prior}}$ is the standard deviation of the prior, and $\sigma_{\mathrm{post}}$ is the standard deviation of the posterior.
The z-score quantifies how far the posterior mean of a parameter lies from a reference value (e.g. the true value), scaled by the posterior standard deviation. The shrinkage quantifies how much the posterior distribution has contracted relative to the initial prior distribution after learning from data. A small z-score indicates that the posterior estimate is close to the true value, reflecting accurate inference. A large shrinkage value suggests that the posterior is sharply concentrated, indicating that the parameter is well identified. According to these definitions, an ideal Bayesian inference is characterized by z-scores close to zero and posterior shrinkage values close to one, reflecting both accuracy and reliability in the inferred parameters.
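These two diagnostics are straightforward to compute from posterior samples, as in the sketch below; the prior standard deviation of a uniform prior is (high − low)/√12, and the true values are known by construction in synthetic-data experiments.

```python
import numpy as np

def posterior_diagnostics(posterior_samples, prior_std, theta_true):
    """Posterior z-score and shrinkage per parameter (Betancourt et al., 2014)."""
    post_mean = posterior_samples.mean(axis=0)
    post_std = posterior_samples.std(axis=0)
    z = np.abs((post_mean - theta_true) / post_std)
    shrinkage = 1.0 - (post_std / prior_std) ** 2
    return z, shrinkage

# usage with the samples from the SBI sketch (uniform prior widths 2.0 and 4.0)
# z, s = posterior_diagnostics(samples.numpy(),
#                              prior_std=np.array([2.0, 4.0]) / np.sqrt(12),
#                              theta_true=np.array([1.0, -4.0]))
```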
Flexible simulator and model building
A key feature of the VBI pipeline is its modularity and flexibility in integrating various simulators (see Figure 2). The Simulation module of the VBI pipeline is designed to be easily interchangeable, allowing researchers to replace it with other simulators, such as TVB (Sanz-Leon et al., 2015), Neurolib (Cakan et al., 2023), Brian (Stimberg et al., 2019), and Brainpy (Wang et al., 2023a). This adaptability supports a wide range of simulation needs and computational environments, making VBI a versatile tool for inference in systems neuroscience. In particular, the Simulation module offers a comprehensive implementation of commonly used whole-brain models. This is a customized version of the implementation from the open-source TVB simulator. While VBI does not encompass all the features of the original TVB, it is mainly designed to leverage the computational power of GPU devices and to significantly reduce RAM requirements (see Figure 2—figure supplement 1). This optimization ensures that high-performance clusters can be fully utilized, enabling parallel and scalable simulations, as is often required to perform scalable SBI.
Comprehensive feature extraction
VBI offers a comprehensive toolbox for feature extraction across various datasets. The Features module includes, but is not limited to: (1) Statistical features, including the mean (average of elements), variance (spread of the elements around the mean), kurtosis (tailedness of the distribution of elements), and skewness (asymmetry of the distribution of elements), which can be applied to any matrix. (2) Spectral features, such as low-dimensional summary statistics of the power spectral density (PSD). (3) Temporal features, including zero crossings, area under the curve, average power, and envelope. (4) Connectivity features, including functional connectivity (FC), which represents the statistical dependencies or correlations between activity patterns of different brain regions, and functional connectivity dynamics (FCD), which captures the temporal variations and transitions in these connectivity patterns over time. These calculations are performed for the whole brain and/or subnetworks (e.g., the limbic system, resting-state networks). However, since these matrices are still high-dimensional, we use standard dimensionality reduction techniques, such as principal component analysis (PCA) on the FC/FCD matrices, to extract their associated low-dimensional summary statistics. (5) Information-theoretic features, such as mutual information and transfer entropy. Following Hashemi et al., 2024, we use the term spatio-temporal data features to refer to both statistical features and temporal features derived from time series. In contrast, we refer to the connectivity features extracted from FC/FCD matrices as functional data features. Note that here, 'spatial' does not necessarily refer to the actual spatial characteristics of the data, such as traveling waves in neural fields, but rather to differences across brain regions.
The Features module uses parallel multiprocessing to speed up feature calculation. Additionally, it provides flexibility for users to add their own custom feature calculations with minimal effort and expertise, or to adjust the parameters of existing features based on the type of input time series. The feature extraction module is designed to be interchangeable with existing feature extraction libraries such as tsfel (Barandas et al., 2020), pyspi (Cliff et al., 2023), hctsa (Fulcher and Jones, 2017), and scikit-learn (Pedregosa et al., 2011). Note that some lightweight libraries such as catch22 (Lubba et al., 2019) are directly accessible from the VBI feature extraction module.
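As a rough illustration of the functional data features, the sketch below computes a vectorized static FC, a sliding-window FCD matrix with its variance, and PCA-based summaries of the windowed FC patterns; the window length, step, and number of components are illustrative assumptions rather than VBI defaults.

```python
import numpy as np

def fc_features(bold, n_pc=3, win=30, step=5):
    """Low-dimensional functional data features from a (time x regions) array."""
    n_t, n_n = bold.shape
    iu = np.triu_indices(n_n, k=1)
    fc = np.corrcoef(bold.T)[iu]  # static FC, upper triangle vectorized
    # FCD: correlations between FC patterns of sliding windows
    W = np.array([np.corrcoef(bold[t:t + win].T)[iu]
                  for t in range(0, n_t - win, step)])
    fcd = np.corrcoef(W)
    fcd_var = np.var(fcd[np.triu_indices(len(W), k=1)])
    # leading principal axes of the windowed FC patterns via SVD
    pcs = np.linalg.svd(W - W.mean(axis=0), full_matrices=False)[2][:n_pc]
    return fc, fcd_var, pcs

# usage on surrogate BOLD data for 10 regions and 300 time points
bold = np.random.default_rng(0).standard_normal((300, 10))
fc, fcd_var, pcs = fc_features(bold)
```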
Data availability
No new data were created or analyzed in this study. All code is available on GitHub (https://github.com/ins-amu/vbi; copy archived at Ziaeemehr, 2025).
References
- The virtual parkinsonian patient. NPJ Systems Biology and Applications 11:40. https://doi.org/10.1038/s41540-025-00516-y
- Inference on the macroscopic dynamics of spiking neurons. Neural Computation 36:2030–2072. https://doi.org/10.1162/neco_a_01701
- Dynamic causal modelling in probabilistic programming languages. Journal of The Royal Society Interface 22:20240880. https://doi.org/10.1098/rsif.2024.0880
- Relative curvature measures of nonlinearity. Journal of the Royal Statistical Society Series B 42:1–16. https://doi.org/10.1111/j.2517-6161.1980.tb01094.x
- Towards a biologically annotated brain connectome. Nature Reviews Neuroscience 24:747–760. https://doi.org/10.1038/s41583-023-00752-3
- Adaptive approximate Bayesian computation. Biometrika 96:983–990. https://doi.org/10.1093/biomet/asp052
- Approximate Bayesian computation in evolution and ecology. Annual Review of Ecology, Evolution, and Systematics 41:379–406. https://doi.org/10.1146/annurev-ecolsys-102209-144621
- Dynamic models of large-scale brain activity. Nature Neuroscience 20:340–352. https://doi.org/10.1038/nn.4497
- Mechanisms of gamma oscillations. Annual Review of Neuroscience 35:203–225. https://doi.org/10.1146/annurev-neuro-062111-150444
- Next-generation neural mass and field modeling. Journal of Neurophysiology 123:726–742. https://doi.org/10.1152/jn.00406.2019
- Neurolib: a simulation framework for whole-brain neural mass modeling. Cognitive Computation 15:1132–1152. https://doi.org/10.1007/s12559-021-09931-9
- Unifying pairwise interactions in complex dynamics. Nature Computational Science 3:883–893. https://doi.org/10.1038/s43588-023-00519-x
- Book: Active Subspaces: Emerging Ideas for Dimension Reduction in Parameter Studies. SIAM.
- Neural field models: a mathematical overview and unifying framework. Mathematical Neuroscience and Applications 2:7284. https://doi.org/10.46298/mna.7284
- The frontier of simulation-based inference. PNAS 117:30055–30062. https://doi.org/10.1073/pnas.1912789117
- Generative adversarial networks: an overview. IEEE Signal Processing Magazine 35:53–65. https://doi.org/10.1109/MSP.2017.2765202
- On the influence of amplitude on the connectivity between phases. Frontiers in Neuroinformatics 5:6. https://doi.org/10.3389/fninf.2011.00006
- On the influence of input triggering on the dynamics of the Jansen-Rit oscillators network. Frontiers in Neuroinformatics 5:6. https://doi.org/10.48550/arXiv.2202.06634
- The quest for multiscale brain modeling. Trends in Neurosciences 45:777–790. https://doi.org/10.1016/j.tins.2022.06.007
- A neural mass model for MEG/EEG. NeuroImage 20:1743–1755. https://doi.org/10.1016/j.neuroimage.2003.07.015
- The dynamic brain: from spiking neurons to neural masses and cortical fields. PLOS Computational Biology 4:e1000092. https://doi.org/10.1371/journal.pcbi.1000092
- Emerging concepts for the dynamical organization of resting-state activity in the brain. Nature Reviews Neuroscience 12:43–56. https://doi.org/10.1038/nrn2961
- Conference: Neural spline flows. Proceedings of the 33rd International Conference on Neural Information Processing Systems. pp. 7511–7522.
- Conference: On contrastive learning for likelihood-free inference. International Conference on Machine Learning. pp. 2771–2781.
- A new neuroinformatics approach to personalized medicine in neurology: The Virtual Brain. Current Opinion in Neurology 29:429–436. https://doi.org/10.1097/WCO.0000000000000344
- Symmetry breaking organizes the brain's resting state manifold. Scientific Reports 14:83542. https://doi.org/10.1038/s41598-024-83542-w
- Natural rhythm: evidence for occult 40Hz gamma oscillation in resting motor cortex. Neuroscience Letters 371:181–184. https://doi.org/10.1016/j.neulet.2004.08.066
- Conference: Amortized inference in probabilistic reasoning. Proceedings of the Annual Meeting of the Cognitive Science Society. pp. 2434–2449.
- Noise during rest enables the exploration of the brain's dynamic repertoire. PLOS Computational Biology 4:e1000196. https://doi.org/10.1371/journal.pcbi.1000196
- Mapping human cortical areas in vivo based on myelin content as revealed by T1- and T2-weighted MRI. The Journal of Neuroscience 31:11597–11616. https://doi.org/10.1523/JNEUROSCI.2180-11.2011
- Conference: Generative adversarial nets. Advances in Neural Information Processing Systems.
- Conference: Automatic posterior transformation for likelihood-free inference. International Conference on Machine Learning. pp. 2404–2414.
- Principles and operation of virtual brain twins. IEEE Reviews in Biomedical Engineering 1:1–29. https://doi.org/10.1109/RBME.2025.3562951
- Field theory of electromagnetic brain activity. Physical Review Letters 77:960–963. https://doi.org/10.1103/PhysRevLett.77.960
- Towards the virtual brain: network modeling of the intact and the damaged brain. Archives Italiennes de Biologie 148:189–205.
- Personalised virtual brain models in epilepsy. The Lancet Neurology 22:443–454. https://doi.org/10.1016/S1474-4422(23)00008-X
- Whole-brain dynamical modelling for classification of Parkinson's disease. Brain Communications 5:fcac331. https://doi.org/10.1093/braincomms/fcac331
- Conference: Normalizing flows for probabilistic modeling and inference. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- Normalizing flows: an introduction and review of current methods. IEEE Transactions on Pattern Analysis and Machine Intelligence 43:3964–3979. https://doi.org/10.1109/TPAMI.2020.2992934
- Classification of Alzheimer's disease using whole brain hierarchical network. IEEE/ACM Transactions on Computational Biology and Bioinformatics 15:624–632. https://doi.org/10.1109/TCBB.2016.2635144
- catch22: canonical time-series characteristics: selected through highly comparative time-series analysis. Data Mining and Knowledge Discovery 33:1821–1852. https://doi.org/10.1007/s10618-019-00647-x
- Conference: Likelihood-free inference with emulator networks. Symposium on Advances in Approximate Bayesian Inference. pp. 32–53.
- Conference: Benchmarking simulation-based inference. International Conference on Artificial Intelligence and Statistics. pp. 343–351.
- The hidden repertoire of brain dynamics and dysfunction. Network Neuroscience 3:994–1008. https://doi.org/10.1162/netn_a_00107
- Macroscopic description for networks of spiking neurons. Physical Review X 5:021028. https://doi.org/10.1103/PhysRevX.5.021028
- Virtual brain simulations reveal network-specific parameters in neurodegenerative dementias. Frontiers in Aging Neuroscience 15:1204134. https://doi.org/10.3389/fnagi.2023.1204134
- Conference: Sequential neural likelihood: fast likelihood-free inference with autoregressive flows. The 22nd International Conference on Artificial Intelligence and Statistics. pp. 837–848.
- Whole-brain modeling of the differential influences of amyloid-beta and tau in Alzheimer's disease. Alzheimer's Research & Therapy 15:013499. https://doi.org/10.1186/s13195-023-01349-9
- Whole-brain modelling: an essential tool for understanding brain dynamics. Nature Reviews Methods Primers 4:360. https://doi.org/10.1038/s43586-024-00336-0
- Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12:2825–2830.
- Comparing dynamic causal models. NeuroImage 22:1157–1172. https://doi.org/10.1016/j.neuroimage.2004.03.026
- Transmission time delays organize the brain network synchronization. Philosophical Transactions of the Royal Society A 377:20180132. https://doi.org/10.1098/rsta.2018.0132
- Permittivity coupling across brain regions determines seizure recruitment in partial epilepsy. The Journal of Neuroscience 34:15009–15021. https://doi.org/10.1523/JNEUROSCI.1570-14.2014
- Conference: Variational inference with normalizing flows. ICML'15: Proceedings of the 32nd International Conference on International Conference on Machine Learning. pp. 1530–1538.
- Causation in neuroscience: keeping mechanism meaningful. Nature Reviews Neuroscience 25:81–90. https://doi.org/10.1038/s41583-023-00778-7
- The Virtual Brain: a simulator of primate brain network dynamics. Frontiers in Neuroinformatics 7:10. https://doi.org/10.3389/fninf.2013.00010
- Normal and pathological oscillatory communication in the brain. Nature Reviews Neuroscience 6:285–296. https://doi.org/10.1038/nrn1650
- Book: Handbook of Approximate Bayesian Computation. Chapman & Hall/CRC Handbooks of Modern Statistical Methods. CRC Press.
- Whole-brain propagation delays in multiple sclerosis, a combined tractography-magnetoencephalography study. The Journal of Neuroscience 42:8807–8816. https://doi.org/10.1523/JNEUROSCI.0938-22.2022
- The human connectome: A structural description of the human brain. PLOS Computational Biology 1:e42. https://doi.org/10.1371/journal.pcbi.0010042
- Comparing hemodynamic models with DCM. NeuroImage 38:387–401. https://doi.org/10.1016/j.neuroimage.2007.07.040
- Linking structure and function in macroscale brain networks. Trends in Cognitive Sciences 24:302–315. https://doi.org/10.1016/j.tics.2020.01.008
- Toward precision medicine in neurological diseases. Annals of Translational Medicine 4:20160326. https://doi.org/10.21037/atm.2016.03.26
- sbi: A toolkit for simulation-based inference. Journal of Open Source Software 5:2505. https://doi.org/10.21105/joss.02505
- Brain simulation augments machine-learning-based classification of dementia. Alzheimer's & Dementia: Translational Research & Clinical Interventions 8:12303. https://doi.org/10.1002/trc2.12303
- Bayesian statistics and modelling. Nature Reviews Methods Primers 1:1–26. https://doi.org/10.1038/s43586-020-00001-2
- Connectome-based modelling of neurodegenerative diseases: towards precision medicine and mechanistic insight. Nature Reviews Neuroscience 24:620–639. https://doi.org/10.1038/s41583-023-00731-8
- VEP atlas: An anatomic and functional human brain atlas dedicated to epilepsy patients. Journal of Neuroscience Methods 348:108983. https://doi.org/10.1016/j.jneumeth.2020.108983
- Delineating epileptogenic networks using brain imaging data and personalized modeling in drug-resistant epilepsy. Science Translational Medicine 15:abp8982. https://doi.org/10.1126/scitranslmed.abp8982
- Virtual brain twins: from basic neuroscience to clinical use. National Science Review 11:079. https://doi.org/10.1093/nsr/nwae079
- Interpretation of interdependencies in epileptic signals using a macroscopic physiological model of the EEG. Clinical Neurophysiology 112:1201–1218. https://doi.org/10.1016/S1388-2457(01)00547-8
- Neuroimaging for precision medicine in psychiatry. Neuropsychopharmacology 50:246–257. https://doi.org/10.1038/s41386-024-01917-z
- A recurrent network mechanism of time integration in perceptual decisions. The Journal of Neuroscience 26:1314–1328. https://doi.org/10.1523/JNEUROSCI.3733-05.2006
Article and author information
Author details
Funding
European Commission (EBRAINS 2.0 Project)
https://doi.org/10.3030/101147319
- Viktor Jirsa
- Meysam Hashemi

European Commission (Virtual Brain Twin Project)
https://doi.org/10.3030/101137289
- Viktor Jirsa

European Commission (EnvironMENTAL)
https://doi.org/10.3030/101057429
- Viktor Jirsa

France 2030 program (ANR-22-PESN-0012)
- Spase Petkoski
- Viktor Jirsa
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
This project/research has received funding from the European Union’s Horizon Europe Programme under the Specific Grant Agreement No. 101147319 (EBRAINS 2.0 Project) to MH and VJ, No. 101137289 (Virtual Brain Twin Project) to VJ, No. 101057429 (project environMENTAL) to VJ, and government grant managed by the Agence Nationale de la Recherche reference ANR-22-PESN-0012 (France 2030 program) to SP and VJ. We acknowledge the use of Fenix Infrastructure resources, which are partially funded from the European Union’s Horizon 2020 research and innovation programme through the ICEI project under the grant agreement No. 800858. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Version history
- Preprint posted:
- Sent for peer review:
- Reviewed Preprint version 1:
- Reviewed Preprint version 2:
- Reviewed Preprint version 3:
- Version of Record published:
Cite all versions
You can cite all versions using the DOI https://doi.org/10.7554/eLife.106194. This DOI represents all versions, and will always resolve to the latest one.
Copyright
© 2025, Ziaeemehr et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.