Abstract
The activity of healthy neuronal networks is tightly regulated, and a shift towards hyperexcitability can cause various problems, such as epilepsies, memory deficits, and motor disorders. Numerous cellular, synaptic, and intrinsic mechanisms of hyperexcitability and compensatory mechanisms to restore healthy activity have been proposed. However, quantifying multiple compensatory mechanisms and their dependence on specific pathophysiological mechanisms has proven challenging, even in computational models. We use simulation-based inference to quantify the interactions of compensatory mechanisms in a spiking neuronal network model. Various parameters of the model can compensate for changes in other parameters to maintain baseline activity, and we rank them by their compensatory potential. Furthermore, specific causes of hyperexcitability - interneuron loss, excitatory recurrent synapses, and principal cell depolarization - have distinct compensatory mechanisms that can restore normal excitability. Our results show that spiking neuronal network simulators could provide the quantitative foundation for targeting pathophysiological network mechanisms with precise interventions.
1 Introduction
Neuronal networks operate within narrow ranges of average activity. A shift outside that range is a key aspect of many brain disorders. Epilepsies are an extreme case where hyperactivity of a neuronal network spreads across the brain, causing seizures (Devinsky et al., 2018). Alzheimer’s disease shows local hyperexcitability at the early stage, increased bursting, and elevated intracellular calcium levels (Anastacio et al., 2022; Mittag et al., 2023). Although this kind of hyperexcitability rarely causes seizures, there are similarities between local patterns of hyperactivity in epilepsy and Alzheimer’s disease (Kamondi et al., 2024). In schizophrenia, the imbalance of excitation and inhibition shifts cortical networks toward hyperactivity (Liu et al., 2021). While hyperexcitability has wide-ranging effects on brain health, its causes are also diverse and complex.
From a theoretical perspective, anything that increases the average firing probability of a principal cell is a step toward hyperexcitability. This includes changes in the kinetics, number, and location of ion channels, synaptic changes, or cell loss. In epilepsies, changes at all levels of the network have been found. At the cellular level, death of inhibitory interneurons is a key feature of epileptic animal models (Huusko et al., 2015), which has prompted the development of interneuron transplantation as a novel treatment strategy (Hunt & Baraban, 2015). At the synaptic level, granule cells of the healthy dentate gyrus lack recurrent excitatory connections but have pathological excitatory recurrent connections in epileptic tissue, a phenomenon called mossy fiber sprouting (Scharfman et al., 2003). Ion channels have been most consistently associated with genetic but also acquired epilepsies (Beck & Yaari, 2008; Lerche et al., 2013), coining the term “channelopathy,” a disease relating to ion channels (Wolfart & Laker, 2015). Ion channels are also well known from fundamental neuroscience to exhibit nonuniqueness (also known as degeneracy), meaning that widely different ion channel combinations can perform the same function (Marom & Marder, 2023). Nonuniqueness has intriguing implications for epilepsy (Stöber et al., 2023). For the causes of epilepsies, nonuniqueness implies that there are multiple different ways to acquire clinically identical symptoms. For the cures of epilepsies, it means that there are various ways to recover healthy function. Thus far, these implications have been primarily theoretical, and it has been impossible to unify nonuniqueness at the ion channel level with synaptic and cell-type nonuniqueness. In a spiking neuronal network model, we quantify nonuniqueness at the intrinsic, synaptic, and cellular levels to identify mechanisms that can compensate for specific pathophysiology.
To find compensatory mechanisms, we use simulation-based inference (SBI), a broad category of tools used to solve the inverse problem of a simulator (Cranmer et al., 2020). While a simulator generates output given a set of parameters, SBI estimates the probability of sets of parameters to produce a given output. It can, therefore, be used to find multiple parameter sets that create the same output dynamic (Gonçalves et al., 2020), effectively quantifying nonuniqueness. For a long time, approximate Bayesian computation (ABC) was the method of choice for simulation-based inference. However, ABC is sample-inefficient, which restricts it to small simulators. Several new approaches to SBI have been developed (Cranmer et al., 2020), one of which is neural posterior estimation (NPE, Papamakarios & Murray (2018)), which works on mechanistic models of neurons and identifies compensatory mechanisms in the classic stomatogastric ganglion model system (Gonçalves et al., 2020). Recently, NPE has been used to find nonunique parameters that generate diverse output dynamics (Gao et al., 2024). Here, we use NPE to quantify nonuniqueness and identify compensatory mechanisms that avoid hyperexcitability in a spiking microcircuit simulator with two interneuron populations.
We specifically investigate three well-known pathophysiological issues: interneuron (IN) loss, excitatory recurrent sprouting, and principal cell (PC) depolarization. We find that various parameters related to these three conditions have compensatory potential. Which parameters are most effective depends on the specific pathophysiology. IN loss is best compensated by increasing IN → PC connectivity. Sprouting is compensated by a set of PC intrinsic properties and PC → PC synaptic weight. Intrinsic depolarization is compensated best by increasing the PC spiking threshold. Our results show that identifying the specific pathophysiology could be used to rank compensatory mechanisms and thereby find precise interventions.
2 Materials and Methods
2.1 The Spiking Neuronal Network Simulator
To quantify the compensatory mechanisms of hyperexcitability with simulation-based inference, we implemented a phenomenological spiking neuronal network simulator. We used adaptive exponential leaky integrate-and-fire neurons (AdEx) (Brette & Gerstner, 2005) to model one excitatory and two inhibitory populations connected with conductance-based Tsodyks-Markram synapses to model short-term plasticity (Tsodyks et al., 1998). The voltage changed according to Equation 1. The adaptation current w changed according to Equation 2. At spike time, variables were reset, as in Equation 3.1 and Equation 3.2. See Table 1 for descriptions and values of all intrinsic parameters.
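For concreteness, the subthreshold dynamics and reset rule of a single AdEx neuron can be integrated with forward Euler. This is a minimal NumPy sketch (the paper itself used Brian2), and the parameter values are illustrative placeholders, not the values of Table 1:

```python
import numpy as np

def simulate_adex(I, dt=1e-4, C=200e-12, gL=10e-9, EL=-70e-3,
                  VT=-50e-3, delta_T=2e-3, tau_w=300e-3, a=2e-9,
                  b=60e-12, Vr=-58e-3, V_spike=0.0):
    """Forward-Euler integration of one AdEx neuron.

    I: array of input currents (A), one entry per time step.
    Returns the voltage trace (V) and the spike times (s).
    """
    n = len(I)
    V = np.full(n, EL)
    w = np.zeros(n)
    spikes = []
    for t in range(1, n):
        # Subthreshold voltage and adaptation dynamics (Equations 1 & 2)
        dV = (-gL * (V[t-1] - EL)
              + gL * delta_T * np.exp((V[t-1] - VT) / delta_T)
              - w[t-1] + I[t-1]) / C
        dw = (a * (V[t-1] - EL) - w[t-1]) / tau_w
        V[t] = V[t-1] + dt * dV
        w[t] = w[t-1] + dt * dw
        # Spike detection and reset (Equations 3.1 & 3.2):
        # a spike is triggered when V exceeds 0 mV.
        if V[t] > V_spike:
            V[t] = Vr
            w[t] += b
            spikes.append(t * dt)
    return V, np.array(spikes)

# A constant 500 pA step drives repetitive, adapting firing over 1 s.
V, spikes = simulate_adex(np.full(10000, 500e-12))
```

Because of the exponential term, the voltage diverges rapidly once it passes VT, so the exact value of the 0 mV detection threshold has little effect on spike timing.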

The intrinsic parameters of the AdEx model populations.
Square brackets indicate the lower and upper bound of the uniform prior distribution. If a cell contains a single number, that parameter is held constant. ΔT, slope factor; τw, adaptation time constant; a, subthreshold adaptation; b, spike-triggered adaptation; Vr, spike-triggered reset voltage; C, capacitance; gL, leak conductance; EL, resting membrane potential; VT, spike threshold. Note that VT determines the point of exponential rise but is not the hard threshold for triggering the spike reset. A spike is triggered when the voltage is larger than 0 mV, see Equation 3.1; n, number of neurons in the population.
A neuron received its input through Itotal (Equation 4). gPC, gNIN & gAIN are the conductances from the presynaptic principal cells (PC), non-adapting interneurons (NIN) and adapting interneurons (AIN), respectively. gINP is the excitatory conductance from a homogeneous Poisson spike train. gGAP is a constant conductance that exists only between some NINs to simulate symmetric gap junction connections. The conductances and the synaptic resources changed as in Equation 5.1, Equation 5.2, Equation 5.3, Equation 5.4 and Equation 5.5 (Tsodyks et al., 1998). Cells were randomly connected with a probability p, excluding self-connectivity if a population was connected to itself. A presynaptic spike changed the synaptic resources according to Equation 6.1, Equation 6.2 and Equation 6.3. If τfacil was set to 0, Equation 5.3 was deleted from the model, and u was kept constant at USE. Table 2 shows the synaptic parameters for all connections.
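For intuition, the short-term dynamics produced by these equations can be sketched for a single Tsodyks-Markram synapse in NumPy. The parameter values below are illustrative, not the Table 2 values:

```python
import numpy as np

def tm_synapse(spike_times, A_SE=4e-9, tau_in=5e-3, tau_rec=500e-3,
               U_SE=0.5, tau_facil=0.0, dt=1e-4, T=1.0):
    """Conductance of one Tsodyks-Markram synapse driven by a
    presynaptic spike train (times in seconds)."""
    n = int(T / dt)
    g = np.zeros(n)          # synaptic conductance
    x, u = 1.0, U_SE         # available resources, release fraction
    spike_idx = {int(round(s / dt)) for s in spike_times}
    for t in range(1, n):
        # Continuous dynamics: conductance decay, resource recovery,
        # and (if tau_facil > 0) decay of facilitation.
        g[t] = g[t-1] * np.exp(-dt / tau_in)
        x += dt * (1.0 - x) / tau_rec
        if tau_facil > 0:
            u += dt * (U_SE - u) / tau_facil
        if t in spike_idx:
            # A presynaptic spike releases a fraction u of the
            # remaining resources x (u stays at U_SE if tau_facil = 0).
            if tau_facil > 0:
                u += U_SE * (1.0 - u)
            g[t] += A_SE * u * x
            x -= u * x
    return g

# A 20 Hz train depresses: each successive spike releases less.
g = tm_synapse(np.arange(0.05, 0.95, 0.05))
```

With τrec much longer than the interspike interval, the resource variable x cannot recover between spikes, which produces the short-term depression visible as shrinking conductance peaks.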

The parameters of the Tsodyks-Markram synaptic connections.
ASE, maximum synaptic conductance; τin, time constant of synaptic decay; τrec, synaptic resource recovery time constant; USE, fraction of synaptic resources activated; τfacil, decay of facilitation time constant; p pairwise connection probability in %. NIN GJ NIN, is a gap junction connection, modeled as a fixed conductance between connected NINs.
The model consists of three different populations: principal cells (PCs), adapting interneurons (AINs), and non-adapting interneurons (NINs). Their parameters are largely taken from Naud et al. (2008). The AdEx model has nine parameters, and we additionally varied the number of neurons in the interneuron populations (see Table 1 for all parameters). We kept ΔT, τw, a, b, and Vr constant to keep the parameter space tractable for simulation-based inference (SBI) and to keep qualitative changes of firing patterns to a minimum. That left C, gL, EL and VT subject to SBI. This totals twelve free neuronal parameters plus the cell numbers of the two interneuron populations.
The neuronal populations and their homogeneous Poisson input were connected through 10 types of synapses and 1 gap junction. The homogeneous Poisson input innervated the PCs and the NINs. All parameters of both inputs were fixed. For the other chemical synapses we fixed the parameters τin, τrec, USE and τfacil. The connection probability (in %) and the strength were varied during SBI. The chemical connections in the network were PC → PC, PC → AIN, PC → NIN, NIN → PC, NIN → NIN, AIN → PC, AIN → NIN, and AIN → AIN. Gap junctions connected NIN with other NIN. Therefore, eighteen connectivity parameters were varied during SBI.
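The random wiring rule described above can be illustrated in a few lines of NumPy; the population size and probability below are examples, not values prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_connectivity(n_pre, n_post, p, recurrent=False):
    """Boolean connectivity matrix with pairwise probability p.
    If a population connects to itself, self-connections are removed."""
    conn = rng.random((n_pre, n_post)) < p
    if recurrent:
        np.fill_diagonal(conn, False)
    return conn

# Example: 500 PCs recurrently connected with p = 1.5 %.
conn = random_connectivity(500, 500, 0.015, recurrent=True)
```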
2.2 Summary Statistics
We performed inference given seven summary statistics, all of which were calculated on the PCs. The mean rate was calculated from all PCs, including silent ones. To calculate the mean interspike interval (ISI) entropy, we calculated the histogram of the ISIs in bins of 1 ms (using scipy.stats.entropy from the SciPy library). Theta, gamma, and fast power were calculated from the power spectral density (PSD; scipy.signal.periodogram). Theta power was calculated by summing the PSD between 8 Hz and 12 Hz, gamma power between 30 Hz and 100 Hz, and fast power between 100 Hz and 150 Hz. The correlation was calculated between the convolved spike trains of 50 randomly chosen PCs and then averaged. For the average pairwise Pearson correlation, spike trains were convolved with a Gaussian kernel of 10 ms width. Pairs where one of the neurons had zero spikes were excluded from the average. The ISI coefficient of variation (CV) for each cell was calculated by dividing the standard deviation of the ISIs by their mean. Then, the CVs of all cells were averaged, excluding cells with undefined CVs (fewer than two spikes or zero standard deviation). All target outcomes for SBI are shown in Table 3.
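Several of these statistics reduce to standard SciPy calls. The following sketch shows hedged, self-contained versions of the band power, ISI entropy, and ISI CV computations on hypothetical spike trains (the exact binning and kernel details of the paper's pipeline may differ):

```python
import numpy as np
from scipy.signal import periodogram
from scipy.stats import entropy

def band_power(signal, fs, lo, hi):
    """Sum of the power spectral density between lo and hi (Hz)."""
    f, psd = periodogram(signal, fs=fs)
    return psd[(f >= lo) & (f <= hi)].sum()

def isi_entropy(spike_times):
    """Entropy of the ISI histogram in 1 ms bins."""
    isi = np.diff(np.sort(spike_times))
    edges = np.arange(0.0, isi.max() + 2e-3, 1e-3)
    counts, _ = np.histogram(isi, bins=edges)
    return entropy(counts)

def isi_cv(spike_times):
    """ISI coefficient of variation: standard deviation over mean."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

# A perfectly regular 10 Hz train has a CV near 0 ...
regular = np.arange(0, 1, 0.1)
# ... while an irregular (Poisson-like) train has a CV near 1.
irregular = np.sort(np.random.default_rng(0).uniform(0, 1, 200))
```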

The quantified outcomes of the simulator.
ISI, interspike interval; CV, coefficient of variation. We implemented the model with Brian2 (Stimberg et al., 2019). All simulations ran for 1 s with a temporal resolution of 0.1 ms.
2.3 Simulation-Based Inference and Quality Checks
For SBI we used neural posterior estimation (NPE; Gonçalves et al. (2020); Papamakarios & Murray (2018)) with the SNPE_C inference implementation from the sbi Python package 0.22.0 (Tejero-Cantero et al., 2020). Initially, 2620000 parameter samples were drawn from the prior and simulated, and the parameter-outcome pairs were used to train a neural density estimator with default parameters (num_atoms=10, training_batch_size=50, learning_rate=0.0005, validation_fraction=0.1, stop_after_epochs=20, max_num_epochs=2147483647, clip_max_norm=5.0). For sequential inference, we used truncated sequential NPE (Deistler et al., 2022). The posterior estimate was truncated at the 0.0001 quantile in four sequential rounds, with 20000 simulations in each round. 1000000 samples were used to estimate the support. Undefined summary statistics (ISI entropy, correlation or ISI CV) were set to 0. To calculate marginal pairwise correlations, 200000 samples were drawn from the baseline posterior distribution, and we used spearmanr from scipy.stats. To calculate the conditional correlation coefficients, we used the sbi package function conditional_corrcoeff, conditioning other parameters on the maximum-a-posteriori (MAP) estimate. The MAP estimate was calculated with the posterior.map method.
2.4 Specific Conditional Baseline Distributions
We introduced specific conditions on the posterior distribution to investigate the mechanisms that can compensate for interneuron loss, excitatory synaptic sprouting, and intrinsic hyperexcitability. For conditional sampling, we used the sbi.inference.MCMCPosterior implementation in the sbi Python package. To investigate interneuron loss, we compared the distributions:
Equation 7.2 produces healthy dynamics despite interneuron loss, and Equation 7.1 is the healthy control, which produces healthy dynamics with a normal number of interneurons. 55 and 54 are the MAP estimates for the interneuron counts of the marginal baseline posterior.
To investigate excitatory synaptic sprouting, we compared the distributions:
Equation 8.2 produces healthy dynamics despite excitatory sprouting. The connection probability of 3% is double the baseline MAP estimate of 1.5%. For comparison, we chose a connection probability one order of magnitude smaller for Equation 8.1. The connection strength of 4.35 nS is the MAP estimate of the marginal baseline posterior.
To investigate intrinsic changes in principal cell excitability, we compare the distributions:
These conditions represent a minor increase in the input resistance and resting membrane potential of principal cells. The MAP estimate for baseline output of the two parameters was
To compare conditional distributions of parameters, 100000 samples were drawn, and we used SciPy’s two-sample implementation of the Kolmogorov-Smirnov test (ks_2samp).
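The distribution comparison can be sketched as follows, with synthetic Gaussian samples standing in for the conditional posterior samples of a single parameter (the values are hypothetical placeholders):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical posterior samples of one parameter (e.g. a connection
# strength) under the normal and the pathophysiological condition.
normal = rng.normal(loc=4.35, scale=0.5, size=100_000)
patho = rng.normal(loc=5.10, scale=0.5, size=100_000)

# The KS statistic is the largest distance between the two empirical
# CDFs; a large value means the condition shifted this parameter.
stat, pvalue = ks_2samp(normal, patho)
```

With samples this large, almost any difference is statistically significant, so it is the size of the KS statistic, not the p-value, that is useful for ranking compensatory potential.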
2.5 Conditional Baseline versus Hyperexcitable Distributions
In Figure 4, the compared distributions for IN Loss are: p(θ|xBL, AINn = 15, NINn = 15) versus p(θ|xHE, AINn = 15, NINn = 15). For Sprouting: p(θ|xBL, PC → PCP = 3%,
2.6 Compensatory Effects on Simulator Output
In Figure 5, 100 samples were drawn from each pathophysiological conditional distribution: p(θ|xHE,
2.7 Software
Prior samples were drawn and simulated on the Okinawa Institute of Science High Performance computing resources running CentOS 8 with version 4 of the Linux kernel. The brian2 version was 2.5.0.3 (Stimberg et al., 2019). The Python version was 3.9.18.
The neural density estimator training, sequential NPE and conditional sampling were run on a Windows 10 computer with brian2 2.7.1. The Python version was 3.10.14.
Plotting was done with matplotlib 3.9.1 (Hunter, 2007) or seaborn 0.13.2 (Waskom, 2021). pandas 2.2.3 was used for data wrangling (McKinney, 2010). statsmodels 0.14.4 was used for the ANOVA tests (Seabold & Perktold, 2010). scipy 1.13.1 was used for Kolmogorov-Smirnov tests and Spearman correlation coefficients (Virtanen et al., 2020).
3 Results
To study the mechanisms of hyperexcitability, we implemented a phenomenological spiking neuronal microcircuit simulator (Figure 1 A, Methods Section 2.1). We drew 2620000 samples from the 32-dimensional prior parameter space (Table 1 & Table 2) and simulated them. Out of those simulations, 1462613 had zero PC spikes. Figure 1 B shows the outcome variables from the simulations with at least one action potential, showing that the simulator can generate a wide variety of outcomes, including the baseline and the hyperexcitable values targeted with SBI shown as crosses (Table 3). We used all 2620000 samples to train a neural density estimator, with the mean ISI entropy, correlation and ISI CV set to zero if there were zero PC spikes. We then used simulation-based calibration (Talts et al., 2018) to test the reliability of the posterior estimate q(θ|x) for all x within the support of the likelihood. Simulation-based calibration showed that the posterior estimate was unreliable for several parameters (not shown). Therefore, we did not rely on the amortized posterior estimate but went on to perform truncated sequential NPE (Deistler et al., 2022) to get more precise estimates for the specific outcomes in Table 3. Figure 1 C shows simulation results from the maximum-a-posteriori (MAP) estimated parameters from the sequential posterior estimate, given baseline and hyperexcitable outcomes. To ensure that the posteriors were accurate, we also performed posterior predictive checks. Figure 1 D shows the simulation results from 1000 parameter sets sampled from the posterior. The simulation results are distinct for the two conditions, and the target value for each outcome variable is well within the support for the baseline condition. For the hyperexcitable condition, however, the gamma and fast power outcomes are not within the interquartile range. This indicates that the model might be unable to decrease gamma and fast power while increasing excitability.
However, because the mean rate and synchrony are increased, the posterior is still a good estimate of a hyperexcitable condition. Before exploring the hyperexcitable posterior, we investigated compensatory mechanisms that maintain the baseline condition by examining the pairwise correlations in the posterior sample.
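The logic of simulation-based calibration can be demonstrated on a toy Gaussian model where the exact posterior is known in closed form: if the posterior estimate is correct, the rank of the true parameter among posterior samples is uniformly distributed. This sketch is independent of the network simulator itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def sbc_ranks(n_trials=1000, n_post=100, sigma=1.0):
    """Toy SBC: theta ~ N(0, 1), x ~ N(theta, sigma^2). The exact
    posterior is N(x / (1 + sigma^2), sigma^2 / (1 + sigma^2)); a
    calibrated posterior yields uniformly distributed ranks."""
    ranks = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        theta = rng.normal()
        x = rng.normal(theta, sigma)
        post_mean = x / (1 + sigma**2)
        post_sd = np.sqrt(sigma**2 / (1 + sigma**2))
        samples = rng.normal(post_mean, post_sd, size=n_post)
        ranks[i] = (samples < theta).sum()
    return ranks

# Ranks pile up at the extremes if the posterior is too narrow and
# in the middle if it is too wide; here they are roughly uniform.
ranks = sbc_ranks()
```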

Sequential NPE finds distributions of parameters that produce different levels of excitability.
A) An illustration of the spiking neuronal network simulator. See methods Section 2.1 for details. B) Simulator outcome from 1122586 simulations. Simulations with zero PC spikes or undefined CV are not shown. The parameters were drawn from the prior distribution (see Table 1 & Table 2). The cross shows the two target outcomes the NPE was conditioned on. C) Shows the result of simulating the MAP parameters from p(θ |xBaseline) and p(θ |xHyperexcitable). The top shows the voltage trace of a single PC, and the bottom shows spike raster plots for 50 of 500 PCs. D) Outcomes from simulating 1000 parameter sets that were drawn from each of the two distributions. The black x marks the value that was targeted with NPE. See Table 3. In the baseline condition, the targeted outcome is well within support of the outcome distribution for all parameters. In the hyperexcitable condition, gamma and fast power targets are not within the interquartile range.
To identify compensatory mechanisms, we calculated the pairwise correlation coefficients between parameter samples from the baseline posterior estimate. These correlations identify compensatory mechanisms because they indicate that baseline activity can be maintained despite changes in one parameter if the other parameter changes accordingly. Since our model has 32 free parameters, 496 unique pairwise correlations exist. For each pair, we calculated the Spearman correlation coefficient from 200000 marginal posterior samples. Out of all pairs, only 19 have correlation coefficients larger than 0.1 or smaller than −0.1 (Figure 2 A). The largest negative correlation (−0.94) is between the PC connection probability (PC → PCp) and the connection strength

Various parameter pairs can compensate each other to maintain baseline excitability.
200000 samples were drawn from the baseline posterior estimator, and pairwise Spearman correlations were calculated for each of the 496 unique pairwise comparisons. A) shows 2D histograms of the 19 pairs with correlation coefficients >0.1 or <−0.1. The maximum pixel values read from top left to bottom right are: 813, 588, 525, 1718, 411, 863, 3328, 2923, 704, 757, 673, 2371, 327, 294, 3337, 278, 262, 301 and 291. The minimum pixel value is always 0, except for the second to last pair (
The largest positive correlation is between two intrinsic properties of
Out of the 19 meaningfully (correlation coefficient >0.1 or <−0.1) correlated parameter pairs, 12 involve intrinsic PC properties. None of the intrinsic interneuron parameters feature in meaningful correlations, but both interneuron numbers are correlated with their respective population’s connection probability to principal cells (Figure 2 A).
However, these are marginal correlations, which are known to be weaker than conditional correlations. Marginal distributions are not constrained by any other parameters. In other words, a parameter’s marginal posterior distribution gives the probability for the parameter value to achieve the targeted outcome while all other parameters are unknown. Because the unknown parameters are the broadest possible estimate of the parameter space, the marginal distributions are broad. However, the posterior is a better, more narrow estimate of the likely parameters, and we can use it to constrain the distribution of parameters. These constrained distributions are called conditional distributions because the other parameters are conditioned on particular values. When calculating conditional distributions of two parameters and correlating their samples, the correlation is called a “conditional correlation”.
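The difference between marginal and conditional correlations can be illustrated with a toy three-parameter "posterior" in which two parameters trade off against each other but both also covary with a third. All names here are generic placeholders, not parameters of the network model:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

# a and b compensate each other (via u), but both also follow c.
u = rng.normal(size=200_000)
c = rng.normal(size=200_000)
a, b = u + c, -u + c

# Marginal correlation: c varies freely and masks the trade-off.
marginal, _ = spearmanr(a, b)

# Conditional correlation: fixing c near one value (standing in for
# the MAP estimate) reveals the strong compensatory trade-off.
keep = np.abs(c) < 0.05
conditional, _ = spearmanr(a[keep], b[keep])
```

Here the marginal correlation between a and b is near zero, while conditioning on c exposes a nearly perfect negative correlation, which is the compensatory structure of interest.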
We therefore used the MAP estimate of the posterior distribution to calculate the conditional correlations (Figure 2 C). Most pairs that were meaningfully correlated with marginal sampling were also meaningfully correlated with conditional sampling. However, many pairs are only correlated for conditional sampling (Figure Supp 2.1). Stronger correlations have also been shown previously for conditioning on posterior samples, rather than the MAP (Gonçalves et al., 2020). Figure Supp 2.2 & 2.3 show the full marginal and conditional posterior distribution, respectively.
These correlations identify parameters that can compensate for changes in other parameters to maintain the baseline output. However, there is a more specific way of investigating compensatory mechanisms. By comparing two conditioned distributions, p(θ|xBL, θi = k) and p(θ|xBL, θi = j), we can find the compensatory mechanisms that maintain xBL despite the condition. For example, if θi = k is a normal number of interneurons and θi = j is the number of interneurons expected in sclerotic tissue, we can find the parameter changes that can maintain healthy output despite the interneuron loss. We used this technique to investigate IN loss, recurrent excitation, and PC depolarization.
To find specific compensatory mechanisms, we sampled from the baseline posterior estimator with additional conditions on parameters representing known pathophysiological parameters (Figure 3). For interneuron loss, we conditioned the cell numbers in both interneuron populations (Normal: Equation 7.1 versus IN Loss: Equation 7.2). For recurrent excitation (called sprouting), we conditioned the connection probability PC → PCp (Normal: Equation 8.1 versus Sprouting: Equation 8.2). And for PC depolarization we conditioned PCEL and

Specific pathophysiological conditions have distinct compensatory parameters that maintain baseline outcomes.
We compared samples from the baseline posterior estimator conditioned on normal and pathophysiological parameter values. To compare distributions, we calculated the Kolmogorov-Smirnov (KS) test statistic. A), B) and C) show the five parameters with the largest test statistic in each condition. A) compares 55 AINs and 54 NINs (Normal, Equation 7.1) to 15 neurons in each population (IN Loss, Equation 7.2). B) compares PC → PCp of 0.15% (Normal, Equation 8.1) to 3% (Sprouting, Equation 8.2). C) compares
For example, when the number of INs is constrained, increasing the AIN → PCp connection probability maintains baseline excitability (Figure 3 A, leftmost). Figure 3 D shows the KS test statistic for each parameter in each condition. As expected from the pairwise correlations in Figure 2, the PC → PC connection strength has the largest KS statistic in the sprouting conditional. For IN Loss and Depolarization (Figure 3 A & C), on the other hand, the compensatory potential of the PC → PC connection is low. This shows that the compensatory potential of some parameters depends on the specific condition; the same holds for IN loss and the intrinsic parameters. During IN loss, the connection probabilities from INs → PCs stand out as being particularly effective. In contrast, the threshold potential is uniquely positioned to maintain baseline excitability despite intrinsic depolarization.
For comparison, we also show the absolute correlation coefficients from Figure 2 for the specific pathophysiological parameters (Figure 3 E & F). PC → PC strength, AIN → PC strength, and
So far, we focused on the posterior estimator for baseline excitability. Next, we investigated the compensatory mechanisms that can move the network from hyperexcitable to baseline (Figure 4). The principle of Figure 4 is similar to Figure 3, with one critical difference: instead of comparing normal and pathophysiological parameter conditions at baseline (Figure 3), we compare baseline and hyperexcitable posterior estimates at the pathophysiological condition (Figure 4). Figure 4 A, B & C show the parameters with the largest KS statistic. The KS statistics for the parameters are overall different from Figure 3. For example, three intrinsic PC properties are among the five parameters with the largest difference for IN loss. Furthermore, the IN → PC connections are no longer among the five largest. Excitatory sprouting and intrinsic depolarization are also different. However, the excitatory connection strength is still the parameter with the strongest compensatory potential in the sprouting condition. During intrinsic depolarization, the PC threshold is still among the largest five, but the PC → PC connection probability is now first. This shows that the compensatory potential of some parameters depends on the specifics of the compensated condition, while others are critical across conditions. Finally, we also analyzed the correlation of parameters with specific outcome parameters.

Specific pathophysiological conditions have distinct compensatory parameters that can change the network from hyperexcitable to baseline.
We compared samples from the baseline posterior estimator to those from the hyperexcitable posterior estimator conditioned on pathophysiological parameter values. To compare distributions, we calculated the Kolmogorov-Smirnov (KS) test statistic. A), B) and C) show the five parameters with the largest test statistic. Each compares baseline and the hyperexcitable estimator for a specific pathophysiological condition. In A) the condition is that the IN numbers are set to 15. In B) the condition is that PC → PCp is set to 3%. In C) the condition is that
We have used posterior density estimators to quantify compensatory mechanisms. However, the ultimate goal of a compensatory mechanism, in the clinical sense, is to bring the system from the pathophysiological state back to the healthy state. In the simulator, we can directly test the effect of parameter changes on the output and, furthermore, whether the effect depends on the specific pathophysiological cause. To this end, we used the hyperexcitable posterior density estimator and drew samples given each pathophysiological condition: IN Loss, Sprouting, and Depolarization. To test the effect of a parameter on output, a single parameter was then systematically varied and simulated. If the pathophysiological condition influences the effect of a parameter on the output, we expect a change in the difference between conditions at some parameter point. An ANOVA tests whether the interaction between the parameter change and the three pathophysiological conditions is significant. A statistically significant interaction indicates that the effect of a parameter on the outcome depends on the condition (or vice versa) and is thus specific to the pathophysiology. The results clearly show that the condition is important for the compensatory potential of many parameters (Figure 5). As expected, IN loss affects the influence of AIN → PC connectivity, and sprouting strongly affects

The pathophysiological condition changes the effect of parameters on simulated outcomes.
The hyperexcitable posterior estimator was sampled with additional pathophysiological conditions. Then, each parameter was varied to cover 50 points in the parameter’s prior range, and the resulting parameter sets were simulated. Each of the 50 points on the x-axis contains 100 samples, and the error bars show the 95% confidence interval. The asterisks indicate where the p-value of the interaction between parameter and condition is
4 Discussion
The ideal compensatory mechanisms depend on the underlying pathological causes. Using simulation-based inference, we quantify the ability of parameters to compensate for different causes of hyperexcitability. For example, we find that the threshold potential for exponential voltage rise (VT) is the best parameter to compensate for intrinsic depolarization. However, when compensating for interneuron loss or recurrent excitatory sprouting, VT compensation is less effective. The exponential voltage rise in neurons is mediated by voltage-activated sodium channels, which are a crucial target of antiepileptic drugs (Kaplan et al., 2016). Our findings, therefore, predict that commonly used sodium channel blockers could be less effective in patients whose seizures are not primarily caused by principal cell hyperexcitability. This could explain why sodium channel blockers do not control seizures in some patients.
Two other interesting parameters are the interneuron to principal cell connection probabilities (AIN → PCp and NIN → PCp). Increasing the connection probability of either can compensate for intrinsic depolarization but also for the loss of interneurons. On one hand, fewer interneurons mean less overall inhibition, and it makes sense that this can be compensated for by increasing connectivity. On the other hand, fewer interneurons mean that the effect of the connectivity parameter should decrease, because the same increase in connection probability creates fewer overall synapses. We predict that even in sclerotic tissue with severe interneuron loss, increasing the number of inhibitory synapses has great compensatory potential. In vivo, thousands of interneurons remain even after interneuron loss in the pilocarpine or traumatic brain injury rat model of epilepsy (Huusko et al., 2015). The remaining interneurons could restore healthy activity by maintaining more synapses on principal cells. There are currently no epilepsy treatments that target inhibitory synapses directly, and increasing the number of inhibitory synapses without affecting excitatory synapses could prove difficult. A promising direction could be to use astrocyte-secreted factors such as neurocan, which specifically control inhibitory synaptogenesis (Irala et al., 2024).
While synaptogenesis has not yet been targeted for epilepsy treatments, interneuron transplantation to replace lost interneurons has been explored and proven effective in rodents (Hunt & Baraban, 2015). We find that the interneuron cell numbers, compared to synaptic and intrinsic parameters, do not compensate well for excitatory sprouting or intrinsic depolarization. Before using interneuron transplants, it could, therefore, be advisable to determine whether interneuron loss affects the epileptogenic region. This is especially important since some epileptogenic regions are non-sclerotic, having cell numbers similar to healthy controls (Blümcke et al., 2013). In these regions, seizures could be caused by other factors, and interneuron transplantation could be less effective.
Our results indicate that the pathophysiological mechanisms could be relevant to choosing optimal treatments, but measuring the number of interneurons, the number of recurrent excitatory synapses, or intrinsic excitability is currently impossible in living patients. It is therefore essential to develop methods to estimate these difficult-to-measure mechanistic features from measurable properties. Simulation-based inference can be used to estimate such mechanistic biomarkers. For example, Doorn et al. (2024) measured the activity of patient-derived neuronal cultures with multi-electrode arrays and used the posterior estimates of recurrent spiking neuronal network parameters, conditioned on the activity, as estimates of pathological mechanisms. This method shows promise, but a gap remains between the activity in epileptic networks and the activity in cell cultures. The activity of cell cultures is dominated by the details of the culturing technique and the patient's genetics, which makes cell cultures particularly useful for epilepsies with strong genetic components. In contrast, the activity of an epileptogenic network in vivo is more complex and depends not only on the patient's genetics but also on various protective or harmful environmental and developmental factors. Therefore, intracranial field recordings, as they are performed in pharmacoresistant epilepsies, could be necessary. Such recordings, combined with phenomenological neural mass models that simulate the average activity of coupled brain regions rather than the spiking of single neurons, have been used to estimate the epileptogenicity index (Makhalova et al., 2022). While this index does not describe a specific pathophysiological mechanism, it could be clinically meaningful. A clinical trial is ongoing to determine its efficacy in guiding surgical decisions on the resection of epileptogenic brain tissue (Jirsa et al., 2023).
If neural mass models can provide meaningful estimates of epileptogenicity, spiking neuronal network models could provide estimates of pathophysiological mechanisms such as interneuron number, which we have found might be helpful to guide personalized treatment.
Comparing conditional distributions has, to our knowledge, not been used to identify compensatory mechanisms. It is thus worth comparing it to the more common way of identifying compensation from the posterior correlations between parameters (Gonçalves et al., 2020). If we sample from the posterior p(θ|x), correlations between θi and θj indicate that if either of the parameters were changed, we could keep the probability of generating x-type output constant by changing the other parameter. This identifies broadly applicable compensatory mechanisms that work across the entire parameter range.
On the other hand, comparing p(θi|x, θj = 5) to p(θi|x, θj = 10) shows how θi could change to maintain x-type dynamics despite θj being perturbed from 5 to 10. The conditional approach also tests compensation in only one direction (the unconditioned parameter compensates for the conditioned one), whereas the correlations find compensation in both directions. Overall, correlations discover broadly applicable mechanisms, while conditionals identify specific mechanisms. Therefore, the conditional distributions are likely more useful for precision treatments than the less specific correlations.
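The difference between the two views can be illustrated on synthetic posterior samples. In this sketch, a negatively correlated Gaussian stands in for p(θi, θj | x); it is a toy distribution, not the network model's actual posterior, and the values 5 and 10 are the conditioning values used in the example above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for posterior samples from p(theta_i, theta_j | x):
# negative correlation means theta_i can compensate for theta_j.
mean = [7.5, 7.5]
cov = [[4.0, -3.0],
       [-3.0, 4.0]]
theta = rng.multivariate_normal(mean, cov, size=100_000)
theta_i, theta_j = theta[:, 0], theta[:, 1]

# Correlation view: a single number summarizing compensation
# across the entire parameter range, in both directions.
corr = np.corrcoef(theta_i, theta_j)[0, 1]

# Conditional view: approximate p(theta_i | x, theta_j = 5) and
# p(theta_i | x, theta_j = 10) by binning samples near each value.
near_5 = theta_i[np.abs(theta_j - 5.0) < 0.25]
near_10 = theta_i[np.abs(theta_j - 10.0) < 0.25]

# With negative correlation, perturbing theta_j from 5 to 10 must be
# met by a decrease in theta_i to keep x-type dynamics likely.
print(f"corr = {corr:.2f}")
print(f"E[theta_i | theta_j ~ 5]  = {near_5.mean():.2f}")
print(f"E[theta_i | theta_j ~ 10] = {near_10.mean():.2f}")
```

The shift in the conditional mean of θi between the two conditioning values is the specific, directed compensation that the single correlation coefficient cannot express.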
In summary, we show that the pathophysiological mechanisms underlying hyperexcitability change the efficacy of compensatory mechanisms to restore normal excitability in a spiking neuronal network model. These findings suggest that a biologically detailed simulator, combined with knowledge of the underlying mechanisms, could be used to predict the outcome of treatment options such as voltage-gated sodium channel blockers or interneuron transplants. Simulation-based prediction of treatment outcomes would be a major advance for the treatment of brain disorders and for precision medicine.
Code and Data Availability
The Python code to run the simulator, train the NPE, infer posteriors, analyze the data and plot the figures is available on GitHub (https://github.com/danielmk/hyperexcitability_sbi). The simulated data and the trained NPEs will be publicly available in a data repository.
Acknowledgements
We are grateful for the help and support provided by the Scientific Computing and Data Analysis section of Core Facilities at OIST. We thank Milena Menezes Carvalho, Joanna Komorowska-Müller and Gastón Sivori for helpful comments on the manuscript. This work was supported by JSPS KAKENHI grant no. JP23H05476 to T.F.
References
- Neuronal hyperexcitability in Alzheimer's disease: what are the drivers behind this aberrant phenotype? Translational Psychiatry 12. https://doi.org/10.1038/s41398-022-02024-7
- Plasticity of intrinsic neuronal properties in CNS disorders. Nature Reviews Neuroscience 9:357–369. https://doi.org/10.1038/nrn2371
- International consensus classification of hippocampal sclerosis in temporal lobe epilepsy: A Task Force report from the ILAE Commission on Diagnostic Methods. Epilepsia 54:1315–1329. https://doi.org/10.1111/epi.12220
- Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal Activity. Journal of Neurophysiology 94:3637–3642. https://doi.org/10.1152/jn.00686.2005
- The frontier of simulation-based inference. Proceedings of the National Academy of Sciences 117:30055–30062. https://doi.org/10.1073/pnas.1912789117
- Truncated proposals for scalable and hassle-free simulation-based inference. arXiv. https://arxiv.org/abs/2210.04815
- Epilepsy. Nature Reviews Disease Primers 4. https://doi.org/10.1038/nrdp.2018.24
- Automated inference of disease mechanisms in patient-hiPSC-derived neuronal networks. bioRxiv. https://doi.org/10.1101/2024.05.23.595522
- Deep inverse modeling reveals dynamic-dependent invariances in neural circuit mechanisms. bioRxiv. https://doi.org/10.1101/2024.08.21.608969
- Training deep neural density estimators to identify mechanistic models of neural dynamics. eLife 9:1–46. https://doi.org/10.7554/eLife.56261
- Interneuron transplantation as a treatment for epilepsy. Cold Spring Harbor Perspectives in Medicine 5. https://doi.org/10.1101/cshperspect.a022376
- Matplotlib: A 2D graphics environment. Computing in Science & Engineering 9:90–95. https://doi.org/10.1109/MCSE.2007.55
- Loss of hippocampal interneurons and epileptogenesis: a comparison of two animal models of acquired epilepsy. Brain Structure and Function 220:153–191. https://doi.org/10.1007/s00429-013-0644-1
- Astrocyte-secreted neurocan controls inhibitory synapse formation and function. Neuron 112:1657–1675. https://doi.org/10.1016/j.neuron.2024.03.007
- Personalised virtual brain models in epilepsy. The Lancet Neurology 22:443–454. https://doi.org/10.1016/S1474-4422(23)00008-X
- Epilepsy and epileptiform activity in late-onset Alzheimer disease: clinical and pathophysiological advances, gaps and conundrums. Nature Reviews Neurology 20:162–182. https://doi.org/10.1038/s41582-024-00932-4
- Role of sodium channels in epilepsy. Cold Spring Harbor Perspectives in Medicine 6. https://doi.org/10.1101/cshperspect.a022814
- Ion channels in genetic and acquired forms of epilepsy. Journal of Physiology 591:753–764. https://doi.org/10.1113/jphysiol.2012.240606
- A Selective Review of the Excitatory-Inhibitory Imbalance in Schizophrenia: Underlying Biology, Genetics, Microcircuits, and Symptoms. Frontiers in Cell and Developmental Biology 9. https://doi.org/10.3389/fcell.2021.664535
- Virtual epileptic patient brain modeling: Relationships with seizure onset and surgical outcome. Epilepsia 63:1942–1955. https://doi.org/10.1111/epi.17310
- A biophysical perspective on the resilience of neuronal excitability across timescales. Nature Reviews Neuroscience 24:640–652. https://doi.org/10.1038/s41583-023-00730-9
- Data Structures for Statistical Computing in Python. In: van der Walt S., Millman J. (eds.), Proceedings of the 9th Python in Science Conference.
- Modelling the contributions to hyperexcitability in a mouse model of Alzheimer's disease. The Journal of Physiology 601:3403–3437. https://doi.org/10.1113/JP283401
- Firing patterns in the adaptive exponential integrate-and-fire model. Biological Cybernetics 99:335–347. https://doi.org/10.1007/s00422-008-0264-7
- Fast ε-free Inference of Simulation Models with Bayesian Conditional Density Estimation. arXiv. https://arxiv.org/abs/1605.06376
- Electrophysiological evidence of monosynaptic excitatory transmission between granule cells after seizure-induced mossy fiber sprouting. Journal of Neurophysiology 90:2536–2547. https://doi.org/10.1152/jn.00251.2003
- statsmodels: Econometric and statistical modeling with python. In: Proceedings of the 9th Python in Science Conference.
- Brian 2, an intuitive and efficient neural simulator. eLife 8:e47314. https://doi.org/10.7554/eLife.47314
- Degeneracy in epilepsy: multiple routes to hyperexcitable brain circuits and their repair. Communications Biology 6. https://doi.org/10.1038/s42003-023-04823-0
- Validating Bayesian Inference Algorithms with Simulation-Based Calibration. arXiv. https://arxiv.org/abs/1804.06788
- sbi: A toolkit for simulation-based inference. Journal of Open Source Software 5:2505. https://doi.org/10.21105/joss.02505
- Neural Networks with Dynamic Synapses. Neural Computation 10:821–835. https://doi.org/10.1162/089976698300017502
- SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods 17:261–272. https://doi.org/10.1038/s41592-019-0686-2
- seaborn: statistical data visualization. Journal of Open Source Software 6:3021. https://doi.org/10.21105/joss.03021
- Homeostasis or channelopathy? Acquired cell type-specific ion channel changes in temporal lobe epilepsy and their antiepileptic potential. Frontiers in Physiology 6. https://doi.org/10.3389/fphys.2015.00168
Cite all versions
You can cite all versions using the DOI https://doi.org/10.7554/eLife.107045. This DOI represents all versions, and will always resolve to the latest one.
Copyright
© 2025, Daniel Müller-Komorowska & Tomoki Fukai
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.