1 Introduction

Neuronal networks operate within narrow ranges of average activity. A shift outside that range is a key aspect of many brain disorders. Epilepsies are an extreme case in which hyperactivity of a neuronal network spreads across the brain, causing seizures (Devinsky et al., 2018). Alzheimer’s disease shows local hyperexcitability in its early stages, together with increased bursting and elevated intracellular calcium levels (Anastacio et al., 2022; Mittag et al., 2023). Although this kind of hyperexcitability rarely causes seizures, there are similarities between local patterns of hyperactivity in epilepsy and Alzheimer’s disease (Kamondi et al., 2024). In schizophrenia, an imbalance of excitation and inhibition shifts cortical networks toward hyperactivity (Liu et al., 2021). While hyperexcitability has wide-ranging effects on brain health, its causes are equally diverse and complex.

From a theoretical perspective, anything that increases the average probability that a principal cell fires is a step toward hyperexcitability. This includes changes in the kinetics, number, and location of ion channels, synaptic changes, or cell loss. In epilepsies, changes at all levels of the network have been found. At the cellular level, death of inhibitory interneurons is a key feature of epileptic animal models (Huusko et al., 2015), which has prompted the development of interneuron transplantation as a novel treatment strategy (Hunt & Baraban, 2015). At the synaptic level, granule cells of the healthy dentate gyrus lack recurrent excitatory connections but form pathological excitatory recurrent connections in epileptic tissue, a phenomenon called mossy fiber sprouting (Scharfman et al., 2003). Ion channels have been most consistently associated with genetic but also acquired epilepsies (Beck & Yaari, 2008; Lerche et al., 2013), giving rise to the term “channelopathy” for a disease of ion channels (Wolfart & Laker, 2015). Ion channels are also well known from fundamental neuroscience to exhibit nonuniqueness (also known as degeneracy), meaning that widely different ion channel combinations can perform the same function (Marom & Marder, 2023). Nonuniqueness has intriguing implications for epilepsy (Stöber et al., 2023). For the causes of epilepsies, nonuniqueness implies that there are multiple different ways to acquire clinically identical symptoms. For the cures of epilepsies, it means that there are various ways to recover healthy function. Thus far, these implications have remained primarily theoretical, and it has been impossible to unify nonuniqueness at the ion channel level with synaptic and cell-type nonuniqueness. In a spiking neuronal network model, we quantify nonuniqueness at the intrinsic, synaptic, and cellular levels to identify mechanisms that can compensate for specific pathophysiologies.

To find compensatory mechanisms, we use simulation-based inference (SBI), a broad category of tools for solving the inverse problem of a simulator (Cranmer et al., 2020). While a simulator generates output given a set of parameters, SBI estimates the probability that sets of parameters produce a given output. It can, therefore, be used to find multiple parameter sets that create the same output dynamics (Gonçalves et al., 2020), effectively quantifying nonuniqueness. For a long time, approximate Bayesian computation (ABC) was the method of choice for simulation-based inference. However, ABC is sample-inefficient, which restricts it to small simulators. Several new approaches to SBI have been developed (Cranmer et al., 2020), one of which is neural posterior estimation (NPE; Papamakarios & Murray (2018)), which works on mechanistic models of neurons and has identified compensatory mechanisms in the classic stomatogastric ganglion model system (Gonçalves et al., 2020). Recently, NPE has been used to find nonunique parameters that generate diverse output dynamics (Gao et al., 2024). Here, we use NPE to quantify nonuniqueness and identify compensatory mechanisms that avoid hyperexcitability in a spiking microcircuit simulator with two interneuron populations.
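The inverse-problem idea can be illustrated with rejection ABC, the sample-inefficient predecessor mentioned above. The two-parameter toy simulator, its parameter names, and the tolerance below are invented for illustration and are not the network model of this paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy simulator: maps a 2-parameter set to a scalar
# "mean rate" outcome, with a little observation noise.
def simulate(theta):
    gain, threshold = theta
    return gain * np.exp(-threshold) + rng.normal(0.0, 0.05)

# Rejection ABC: draw parameters from the prior, simulate, and keep only
# draws whose outcome lands close to the observed target. The kept
# samples approximate the posterior p(theta | x_target).
def rejection_abc(x_target, n_draws=50_000, eps=0.05):
    theta = rng.uniform([0.0, 0.0], [2.0, 2.0], size=(n_draws, 2))  # uniform prior
    x = np.array([simulate(t) for t in theta])
    return theta[np.abs(x - x_target) < eps]

posterior_samples = rejection_abc(x_target=0.5)
# Many distinct (gain, threshold) pairs reproduce the same outcome:
# the accepted set traces a curve, not a point -- nonuniqueness.
print(len(posterior_samples), posterior_samples.mean(axis=0))
```

Note how most of the 50,000 simulations are discarded; NPE instead trains a density estimator on all parameter-outcome pairs, which is what makes it viable for larger simulators.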

We specifically investigate three well-known pathophysiological issues: interneuron (IN) loss, excitatory recurrent sprouting, and principal cell (PC) depolarization. We find that various parameters related to these three conditions have compensatory potential. Which parameters are most effective depends on the specific pathophysiology. IN loss is best compensated by increasing IN → PC connectivity. Sprouting is compensated by a set of PC intrinsic properties and PC → PC synaptic weight. Intrinsic depolarization is compensated best by increasing the PC spiking threshold. Our results show that identifying the specific pathophysiology could be used to rank compensatory mechanisms and thereby find precise interventions.

2 Materials and Methods

2.1 The Spiking Neuronal Network Simulator

To quantify the compensatory mechanisms of hyperexcitability with simulation-based inference, we implemented a phenomenological spiking neuronal network simulator. We used adaptive exponential leaky integrate-and-fire (AdEx) neurons (Brette & Gerstner, 2005) for one excitatory and two inhibitory populations, connected with conductance-based Tsodyks-Markram synapses to model short-term plasticity (Tsodyks et al., 1998). The voltage changed according to Equation 1. The adaptation current w changed according to Equation 2. At spike time, variables were reset, as in Equation 3.1 and Equation 3.2. See Table 1 for descriptions and values of all intrinsic parameters.

The intrinsic parameters of the AdEx model populations.

Square brackets indicate the lower and upper bounds of the uniform prior distribution. If a cell contains a single number, that parameter is held constant. ΔT, slope factor; τw, adaptation time constant; a, subthreshold adaptation; b, spike-triggered adaptation; Vr, spike-triggered reset voltage; C, capacitance; gL, leak conductance; EL, resting membrane potential; VT, spike threshold. Note that VT determines the point of exponential rise but is not the hard threshold for triggering the spike reset. A spike is triggered when the voltage is larger than 0 mV, see Equation 3.1; n, number of neurons in the population.
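The AdEx dynamics referenced in Equations 1 to 3 can be sketched as a single-neuron forward-Euler loop. The parameter values below are generic AdEx defaults from Brette & Gerstner (2005), not the prior bounds of Table 1, and the hard 0 mV spike trigger follows the note above:

```python
import numpy as np

# Forward-Euler sketch of one AdEx neuron (Brette & Gerstner, 2005).
# Illustrative default values, not the values in Table 1 (SI units).
C, gL, EL, VT, dT = 281e-12, 30e-9, -70.6e-3, -50.4e-3, 2e-3    # F, S, V, V, V
tau_w, a, b, Vr = 144e-3, 4e-9, 80.5e-12, -70.6e-3              # s, S, A, V
dt, T, I = 0.1e-3, 1.0, 0.8e-9                                  # 0.1 ms step, 1 s, step current

def run_adex():
    V, w, spikes = EL, 0.0, []
    for step in range(int(T / dt)):
        # Equation 1: leak + exponential spike-initiation term + adaptation
        dV = (-gL * (V - EL) + gL * dT * np.exp((V - VT) / dT) - w + I) / C
        # Equation 2: subthreshold adaptation current
        dw = (a * (V - EL) - w) / tau_w
        V, w = V + dt * dV, w + dt * dw
        if V > 0.0:                    # hard spike trigger at 0 mV (see text)
            spikes.append(step * dt)
            V, w = Vr, w + b           # Equations 3.1 and 3.2: reset
    return spikes

spikes = run_adex()
```

With these values the neuron fires an adapting spike train: the spike-triggered increments b accumulate in w and slow the rate over the 1 s simulation.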

A neuron received its input through Itotal (Equation 4). gPC, gNIN & gAIN are the conductances from the presynaptic principal cells (PC), non-adapting interneurons (NIN) and adapting interneurons (AIN), respectively. gINP is the excitatory conductance from a homogeneous Poisson spike train. gGAP is a constant conductance that exists only between some NINs to simulate symmetric gap junction connections. The conductances and the synaptic resources changed as in Equation 5.1, Equation 5.2, Equation 5.3, Equation 5.4 and Equation 5.5 (Tsodyks et al., 1998). Cells were randomly connected with a probability p, excluding self-connectivity if a population was connected to itself. A presynaptic spike changed the synaptic resources according to Equation 6.1, Equation 6.2 and Equation 6.3. If τfacil was set to 0, Equation 5.3 was deleted from the model, and u was kept constant at USE. Table 2 shows the synaptic parameters for all connections.

The parameters of the Tsodyks-Markram synaptic connections.

ASE, maximum synaptic conductance; τin, time constant of synaptic decay; τrec, synaptic resource recovery time constant; USE, fraction of synaptic resources activated; τfacil, facilitation decay time constant; p, pairwise connection probability in %. NIN GJ NIN is a gap junction connection, modeled as a fixed conductance between connected NINs.
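The Tsodyks-Markram update scheme can be sketched in event-driven form: between spikes the variables relax in closed form, and each presynaptic spike releases a fraction u of the available resources x (cf. Equations 5 and 6). The parameter values are illustrative, not those of Table 2:

```python
import numpy as np

# Event-driven sketch of a depressing Tsodyks-Markram synapse
# (Tsodyks et al., 1998). Illustrative values, not those of Table 2.
A_SE, tau_in, tau_rec, U_SE, tau_facil = 1.0e-9, 5e-3, 500e-3, 0.3, 0.0

def run_synapse(spike_times):
    x, u, g, t_last, g_peaks = 1.0, U_SE, 0.0, 0.0, []
    for t in spike_times:
        dt = t - t_last
        # inter-spike relaxation (closed-form exponential updates):
        x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover to 1
        g = g * np.exp(-dt / tau_in)                  # conductance decays
        if tau_facil > 0.0:
            u = U_SE + (u - U_SE) * np.exp(-dt / tau_facil)
        else:
            u = U_SE          # tau_facil = 0: u held constant (see text)
        # spike update: release a fraction u of the available resources
        g = g + A_SE * u * x
        x = x * (1.0 - u)
        t_last = t
        g_peaks.append(g)
    return g_peaks

# A 20 Hz train depresses: each release leaves fewer recovered resources.
peaks = run_synapse(np.arange(0.0, 0.5, 0.05))
```

Setting tau_facil above zero instead lets u grow transiently with each spike, turning the same scheme into a facilitating synapse.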

The model consists of three different populations: principal cells (PCs), adapting interneurons (AINs), and non-adapting interneurons (NINs). Their parameters are largely taken from Naud et al. (2008). The AdEx model has nine parameters, and we additionally varied the number of neurons in the interneuron populations (see Table 1 for all parameters). We held ΔT, τw, a, b, and Vr constant to keep the parameter space tractable for simulation-based inference (SBI) and to minimize qualitative changes of firing patterns. That left C, gL, EL, and VT subject to SBI. This totals twelve free neuronal parameters plus the cell numbers of the two interneuron populations.

The neuronal populations and their homogeneous Poisson input were connected through 10 types of synapses and 1 gap junction. The homogeneous Poisson input innervated the PCs and the NINs. All parameters of both inputs were fixed. For the other chemical synapses, we fixed the parameters τin, τrec, USE, and τfacil. The connection probability (in %) and the strength were varied during SBI. The chemical connections in the network were PC → PC, PC → AIN, PC → NIN, NIN → PC, NIN → NIN, AIN → PC, AIN → NIN, and AIN → AIN. Gap junctions connected NINs with other NINs. Therefore, eighteen connectivity parameters were varied during SBI.

2.2 Summary Statistics

We performed inference given seven summary statistics, all of which were calculated on the PCs. The mean rate was calculated from all PCs, including silent ones. To calculate the mean interspike interval (ISI) entropy, we calculated the histogram of the ISIs in bins of 1 ms (using scipy.stats.entropy from the SciPy library). Theta, gamma, and fast power were calculated from the power spectral density (PSD; scipy.signal.periodogram). Theta power was calculated by summing the PSD between 8 Hz and 12 Hz, gamma power between 30 Hz and 100 Hz, and fast power between 100 Hz and 150 Hz. The correlation was calculated between the convolved spike trains of 50 randomly chosen PCs and then averaged. For the average pairwise Pearson correlation, spike trains were convolved with a Gaussian kernel of 10 ms width. Pairs where one of the neurons had zero spikes were excluded from the average. The ISI coefficient of variation (CV) for each cell was calculated by dividing the standard deviation of the ISIs by their mean. Then, CVs of all cells were averaged, excluding cells with undefined CVs (fewer than two spikes or zero standard deviation). All target outcomes for SBI are shown in Table 3.
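Assuming spike times as plain arrays, a sketch of some of these summary statistics (mean rate, ISI entropy, and band powers) might look as follows. Binning all spikes into one population train before the periodogram is our simplification for illustration, not necessarily the paper's exact procedure:

```python
import numpy as np
from scipy.stats import entropy
from scipy.signal import periodogram

# Sketch of PC summary statistics on a list of spike-time arrays.
# Bin width and band edges follow the text; the spike trains are synthetic.
def summary_stats(spike_trains, t_max=1.0, fs=10_000.0):
    # mean rate over ALL cells, silent ones included
    mean_rate = np.mean([len(s) / t_max for s in spike_trains])
    # mean ISI entropy from 1 ms histograms (scipy.stats.entropy)
    ents = []
    for s in spike_trains:
        isi = np.diff(s)
        if len(isi) > 0:
            counts, _ = np.histogram(isi, bins=np.arange(0.0, t_max + 1e-3, 1e-3))
            ents.append(entropy(counts))
    isi_entropy = np.mean(ents) if ents else 0.0
    # band powers from the PSD of the binned population spike train
    binned = np.zeros(int(t_max * fs))
    for s in spike_trains:
        np.add.at(binned, (np.asarray(s) * fs).astype(int), 1)
    f, psd = periodogram(binned, fs=fs)
    theta = psd[(f >= 8) & (f <= 12)].sum()
    gamma = psd[(f >= 30) & (f <= 100)].sum()
    fast = psd[(f >= 100) & (f <= 150)].sum()
    return mean_rate, isi_entropy, theta, gamma, fast

rng = np.random.default_rng(1)
trains = [np.sort(rng.uniform(0, 1.0, size=rng.integers(0, 40))) for _ in range(50)]
stats = summary_stats(trains)
```

The pairwise correlation and CV statistics follow the same pattern and are omitted here for brevity.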

The quantified outcomes of the simulator.

ISI, interspike interval; CV, coefficient of variation. We implemented the model with Brian2 (Stimberg et al., 2019). All simulations ran for 1 s with a temporal resolution of 0.1 ms.

2.3 Simulation-Based Inference and Quality Checks

For SBI, we used neural posterior estimation (NPE; Gonçalves et al. (2020); Papamakarios & Murray (2018)) with the SNPE_C inference implementation from the sbi Python package 0.22.0 (Tejero-Cantero et al., 2020). Initially, 2620000 parameter samples were drawn from the prior and simulated, and the parameter-outcome pairs were used to train a neural density estimator with default parameters (num_atoms=10, training_batch_size=50, learning_rate=0.0005, validation_fraction=0.1, stop_after_epochs=20, max_num_epochs=2147483647, clip_max_norm=5.0). For sequential inference, we used truncated sequential NPE (Deistler et al., 2022). The posterior estimate was truncated at the 0.0001 quantile in four sequential rounds, with 20000 simulations in each round. 1000000 samples were used to estimate the support. Undefined summary statistics (ISI entropy, correlation, or ISI CV) were set to 0. To calculate marginal pairwise correlations, 200000 samples were drawn from the baseline posterior distribution, and we used spearmanr from scipy.stats. To calculate the conditional correlation coefficients, we used the sbi package function conditional_corrcoeff, conditioning the other parameters on the maximum-a-posteriori (MAP) estimate. The MAP estimate was calculated with the posterior.map method.

2.4 Specific Conditional Baseline Distributions

We introduced specific conditions on the posterior distribution to investigate the mechanisms that can compensate for interneuron loss, excitatory synaptic sprouting, and intrinsic hyperexcitability. For conditional sampling, we used the sbi.inference.MCMCPosterior implementation in the sbi Python package. To investigate interneuron loss, we compared the distributions:

Equation 7.2 produces healthy dynamics despite interneuron loss, and Equation 7.1 is the healthy control, which produces healthy dynamics with a normal number of interneurons. 55 and 54 are the MAP estimates for the interneuron counts of the marginal baseline posterior.

To investigate excitatory synaptic sprouting, we compared the distributions:

Equation 8.2 produces healthy dynamics despite excitatory sprouting. The connection probability 3% is double the baseline MAP estimate of 1.5%. For comparison, we chose a connection probability an order of magnitude smaller for Equation 8.1. The connection strength 4.35 nS is the MAP estimate of the marginal baseline posterior.

To investigate intrinsic changes in principal cell excitability, we compare the distributions:

These represent a minor increase in the input resistance and resting membrane potential of principal cells, relative to the MAP estimates of the two parameters for baseline output.

To compare conditional distributions of parameters, 100000 samples were drawn, and we used scipy’s two-sample implementation of the Kolmogorov-Smirnov test (ks_2samp).
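The role of the KS statistic can be illustrated on synthetic stand-ins for two conditional posteriors; the normal samples below are invented, not drawn from the actual estimator:

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins for two conditional posterior samples of one parameter.
rng = np.random.default_rng(2)
normal_cond = rng.normal(0.0, 1.0, size=100_000)   # e.g. p(theta_j | x_BL, normal INs)
patho_cond = rng.normal(1.5, 1.0, size=100_000)    # e.g. p(theta_j | x_BL, IN loss)

# KS statistic: 0 for identical distributions, toward 1 for disjoint ones.
stat, p = ks_2samp(normal_cond, patho_cond)
# A large statistic marks theta_j as a candidate compensatory parameter:
# its distribution must shift to keep the output at baseline.
print(f"KS statistic = {stat:.3f}")
```

Because the statistic is a distance between empirical CDFs, it ranks parameters on a common 0-to-1 scale regardless of their units.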

2.5 Conditional Baseline versus Hyperexcitable Distributions

In Figure 4, the compared distributions for IN Loss are: p(θ|xBL, AINn = 15, NINn = 15) versus p(θ|xHE, AINn = 15, NINn = 15). For Sprouting: p(θ|xBL, PC → PCp = 3%, ) versus p(θ|xHE, PC → PCp = 3%, ). For Depolarized: p(θ|xBL, ) versus p(θ|xHE, ).

2.6 Compensatory Effects on Simulator Output

In Figure 5, 100 samples were drawn from each pathophysiological conditional distribution: p(θ|xHE, ), p(θ|xHE, PC → PCp = 3%, ) and p(θ|xHE, AINn = 15, NINn = 15). For each parameter and each sample, we varied the parameter linearly over fifty values spanning the prior range.

2.7 Software

Prior samples were drawn and simulated on the Okinawa Institute of Science and Technology high-performance computing resources running CentOS 8 with a version 4 Linux kernel. The brian2 version was 2.5.0.3 (Stimberg et al., 2019). The Python version was 3.9.18.

The neural density estimator training, sequential NPE and conditional sampling were run on a Windows 10 computer with brian2 2.7.1. The Python version was 3.10.14.

Plotting was done with matplotlib 3.9.1 (Hunter, 2007) or seaborn 0.13.2 (Waskom, 2021). pandas 2.2.3 was used for data wrangling (McKinney, 2010). statsmodels 0.14.4 was used for the ANOVA tests (Seabold & Perktold, 2010). scipy 1.13.1 was used for the Kolmogorov-Smirnov tests and Spearman correlation coefficients (Virtanen et al., 2020).

3 Results

To study the mechanisms of hyperexcitability, we implemented a phenomenological spiking neuronal microcircuit simulator (Figure 1 A, Methods Section 2.1). We drew 2620000 samples from the 32-dimensional prior parameter space (Table 1 & Table 2) and simulated them. Out of those simulations, 1462613 had zero PC spikes. Figure 1 B shows the outcome variables from the simulations with at least one action potential, demonstrating that the simulator can generate a wide variety of outcomes, including the baseline and hyperexcitable values targeted with SBI, shown as crosses (Table 3). We used all 2620000 samples to train a neural density estimator, with the mean ISI entropy, correlation, and ISI CV set to zero if there were zero PC spikes. We then used simulation-based calibration (Talts et al., 2018) to test the reliability of the posterior estimate q(θ|x) for all x within the support of the likelihood. Simulation-based calibration showed that the posterior estimate was unreliable for several parameters (not shown). Therefore, we did not rely on the amortized posterior estimate but went on to perform truncated sequential NPE (Deistler et al., 2022) to get more precise estimates for the specific outcomes in Table 3. Figure 1 C shows simulation results from the maximum-a-posteriori (MAP) estimated parameters of the sequential posterior estimate, given baseline and hyperexcitable outcomes. To ensure that the posteriors were accurate, we also performed posterior-predictive checks. Figure 1 D shows the simulation results from 1000 parameter sets sampled from the posterior. The simulation results are distinct for the two conditions, and the target value for each outcome variable is well within the support for the baseline condition. For the hyperexcitable condition, however, the gamma and fast power outcomes are not within the interquartile range. This indicates that the model might be unable to decrease gamma and fast power while increasing excitability.
However, because the mean rate and synchrony are increased, the posterior is still a good estimate of a hyperexcitable condition. Before exploring the hyperexcitable posterior, we investigated compensatory mechanisms that maintain the baseline condition by investigating the pairwise correlations in the posterior sample.

Sequential NPE finds distributions of parameters that produce different levels of excitability.

A) An illustration of the spiking neuronal network simulator. See methods Section 2.1 for details. B) Simulator outcome from 1122586 simulations. Simulations with zero PC spikes or undefined CV are not shown. The parameters were drawn from the prior distribution (see Table 1 & Table 2). The cross shows the two target outcomes the NPE was conditioned on. C) Shows the result of simulating the MAP parameters from p(θ |xBaseline) and p(θ |xHyperexcitable). The top shows the voltage trace of a single PC, and the bottom shows spike raster plots for 50 of 500 PCs. D) Outcomes from simulating 1000 parameter sets that were drawn from each of the two distributions. The black x marks the value that was targeted with NPE. See Table 3. In the baseline condition, the targeted outcome is well within support of the outcome distribution for all parameters. In the hyperexcitable condition, gamma and fast power targets are not within the interquartile range.

To identify compensatory mechanisms, we calculated the pairwise correlation coefficients between parameter samples from the baseline posterior estimate. These correlations identify compensatory mechanisms because they indicate that baseline activity can be maintained despite changes in one parameter if the other parameter changes accordingly. Since our model has 32 free parameters, 496 unique pairwise correlations exist. For each pair, we calculated the Spearman correlation coefficient from 200000 marginal posterior samples. Out of all pairs, only 19 have correlation coefficients larger than 0.1 or smaller than −0.1 (Figure 2 A). The largest negative correlation (−0.94) is between the PC → PC connection probability (PC → PCp) and the corresponding connection strength. Both parameters are narrowly tuned and must be extremely small to maintain healthy dynamics (Figure 2 A). The MAP estimate (an estimate of the posterior distribution’s peak) for the connection probability is 0.015, and for the connection strength, it is 4.35 nS. Probability and strength of connections from both interneuron types to PCs are also negatively correlated, albeit with smaller correlation coefficients (Figure 2 B), and they are not as narrowly constrained (Figure 2 A).

Various parameter pairs can compensate each other to maintain baseline excitability.

200000 samples were drawn from the baseline posterior estimator, and pairwise Spearman correlations were calculated for each of the 496 unique pairwise comparisons. A) shows 2D histograms of the 19 pairs with correlation coefficients >0.1 or <−0.1. The maximum pixel values, read from top left to bottom right, are: 813, 588, 525, 1718, 411, 863, 3328, 2923, 704, 757, 673, 2371, 327, 294, 3337, 278, 262, 301 and 291. The minimum pixel value is always 0, except for the second-to-last pair, where the minimum is 3. B) Shows the correlation coefficients of all parameter pairs as a diverging heatmap. The marginal correlation coefficients for the pairs shown in A, from top left to bottom right, are: −0.31, −0.21, −0.13, 0.15, 0.13, −0.32, 0.19, −0.25, −0.13, −0.14, 0.54, 0.14, −0.22, −0.12, −0.94, −0.19, −0.16, 0.11, −0.32. C) Shows the correlation coefficients of samples drawn for each parameter pair while conditioning all other parameters on the MAP estimate. The correlation coefficients, in the order of A from top left to bottom right, are: −0.43, 0.06, −0.41, 0.49, −0.01, −0.59, 0.42, 0.35, −0.47, −0.39, 0.59, 0.54, −0.47, −0.35, −0.49, −0.45, −0.47, −0.23, −0.51.

The largest positive correlation (0.54) is between two intrinsic PC properties, EL and VT. These are the equilibrium potential of the leak current and the threshold VT, which sets the exponential non-linearity of the AdEx model. If PCs rest at a higher potential, similar activity levels can be achieved by raising the threshold, and vice versa. The capacitance is negatively correlated with the other three intrinsic properties (Figure 2 B). Increasing the capacitance reduces the overall excitability of the cell but also changes the filtering properties and, thereby, the frequency response. This might explain why PCC and PCEL are negatively correlated. If increasing PCC only decreased the mean rate, the intuitive compensatory mechanism would be to increase excitability by raising PCEL, which would imply a positive correlation. The negative correlation is, therefore, likely compensating for one of the summary statistics that is not related to the principal cell rate.

Out of the 19 meaningfully correlated parameter pairs (correlation coefficient >0.1 or <−0.1), 12 involve intrinsic PC properties. None of the intrinsic interneuron parameters feature in meaningful correlations, but both interneuron numbers are correlated with their respective population’s connection probability to principal cells (Figure 2 A).

However, these are marginal correlations, which are known to be weak. Marginal distributions are not constrained by any other parameters. In other words, a parameter’s marginal posterior distribution gives the probability of a parameter value achieving the targeted outcome while all other parameters are unknown. Because the unknown parameters are the broadest possible estimate of the parameter space, the marginal distributions are broad. However, the posterior is a better, narrower estimate of the likely parameters, and we can use it to constrain the distribution of parameters. These constrained distributions are called conditional distributions because the other parameters are conditioned on particular values. When calculating the conditional distribution of two parameters and correlating their samples, the resulting correlation is called a “conditional correlation”.
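The difference between marginal and conditional correlations can be illustrated with a toy posterior. The additive three-parameter setup below is an invented example, not the network simulator: the output depends on theta1 + theta2 + theta3, so the posterior concentrates near a constant sum, and the broad, unconstrained theta3 dilutes the marginal correlation between theta1 and theta2:

```python
import numpy as np
from scipy.stats import spearmanr

# Toy posterior: samples concentrate near theta1 + theta2 + theta3 = 1.
rng = np.random.default_rng(3)
n = 200_000
theta1 = rng.normal(0.0, 1.0, n)
theta3 = rng.normal(0.0, 3.0, n)                 # broad: barely constrained
theta2 = 1.0 - theta1 - theta3 + rng.normal(0.0, 0.1, n)

# Marginal correlation of theta1 and theta2: diluted by the unknown theta3.
r_marg, _ = spearmanr(theta1, theta2)

# Conditional correlation: keep only samples where theta3 sits near one value
# (a crude stand-in for conditioning the posterior on the MAP estimate).
near_map = np.abs(theta3) < 0.05
r_cond, _ = spearmanr(theta1[near_map], theta2[near_map])
```

Here r_marg is moderate (around −0.3) while r_cond approaches −1: once theta3 is pinned down, theta1 and theta2 must compensate each other almost exactly, mirroring the stronger conditional correlations in Figure 2 C.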

We therefore used the MAP estimate of the posterior distribution to calculate the conditional correlations (Figure 2 C). Most pairs that were meaningfully correlated with marginal sampling were also meaningfully correlated with conditional sampling. However, many pairs are only correlated for conditional sampling (Figure Supp 2.1). Stronger correlations have also been shown previously for conditioning on posterior samples, rather than the MAP (Gonçalves et al., 2020). Figure Supp 2.2 & 2.3 show the full marginal and conditional posterior distribution, respectively.

These correlations identify parameters that can compensate for changes in other parameters to maintain the baseline output. However, there is a more specific way of investigating compensatory mechanisms. By comparing two conditioned distributions, p(θ|xBL, θi = k) and p(θ|xBL, θi = j), we can find the compensatory mechanisms that maintain xBL despite the condition. For example, if θi = k is a normal number of interneurons and θi = j is the number of interneurons expected in sclerotic tissue, we can find the parameter changes that can maintain healthy output despite the interneuron loss. We used this technique to investigate IN loss, recurrent excitation, and PC depolarization.

To find specific compensatory mechanisms, we sampled from the baseline posterior estimator with additional conditions on parameters representing known pathophysiologies (Figure 3). For interneuron loss, we conditioned the cell numbers in both interneuron populations (Normal: Equation 7.1 versus IN Loss: Equation 7.2). For recurrent excitation (called sprouting), we conditioned the connection probability PC → PCp (Normal: Equation 8.1 versus Sprouting: Equation 8.2). And for PC depolarization, we conditioned PCEL and PCgL (Normal: Equation 9.1 versus Depolarized: Equation 9.2). While conditioning on these parameters, we sampled all other parameters and compared the normal with the pathophysiological samples. The difference between the distributions of a parameter reveals compensatory mechanisms because it shows how the parameter can change to maintain baseline activity despite the pathophysiology. We used the Kolmogorov-Smirnov (KS) test statistic to quantify the difference between distributions, where 0 indicates identical distributions and 1 indicates non-overlapping distributions. Figure 3 A, B and C show the five parameters with the largest KS test statistic, sorted from largest to smallest, in each pathophysiological condition.

Specific pathophysiological conditions have distinct compensatory parameters that maintain baseline outcomes.

We compared samples from the baseline posterior estimator conditioned on normal and pathophysiological parameter values. To compare distributions, we calculated the Kolmogorov-Smirnov (KS) test statistic. A), B) and C) show the five parameters with the largest test statistic in each condition. A) compares 55 AINs and 54 NINs (Normal, Equation 7.1) to 15 neurons in each population (IN Loss, Equation 7.2). B) compares a PC → PCp of 0.15% (Normal, Equation 8.1) to 3% (Sprouting, Equation 8.2). C) compares the normal PC intrinsic parameters (Normal, Equation 9.1) with the depolarized ones (Depolarized, Equation 9.2). D) KS test statistic for all parameters and conditions. KS test results for all parameters and conditions are in Table Supp 3.1. E) & F) show the absolute correlation coefficients from Figure 2. IN loss is the average absolute correlation of NINn and AINn with the other parameters. Sprouting is the absolute correlation of PC → PCp with the other parameters. Intrinsic is the average absolute correlation of the two conditioned intrinsic parameters with the other parameters.

For example, when the number of INs is constrained, increasing the AIN → PCp connection probability maintains baseline excitability (Figure 3 A, leftmost). Figure 3 D shows the KS test statistic for each parameter in each condition. As expected from the pairwise correlations in Figure 2, the PC → PC connection strength has the largest KS statistic in the sprouting conditional. For IN Loss and Depolarization (Figure 3 A & C), on the other hand, the compensatory potential of the PC → PC connection is low. This shows that the compensatory potential of some parameters depends on the specific condition. The same holds for the IN-loss and intrinsic conditions. During IN loss, the connection probabilities from INs → PCs stand out as particularly effective. In contrast, the threshold potential is uniquely positioned to maintain baseline excitability despite intrinsic depolarization.

For comparison, we also show the absolute correlation coefficients from Figure 2 for the specific pathophysiological parameters (Figure 3 E & F). PC → PC strength and AIN → PC strength show similar trends for all three conditions. However, the overall profile changes markedly. This highlights an important difference between quantifying compensatory potential from pairwise correlations as opposed to changes between different conditionals. The pairwise correlations are calculated across the entire range of possible values, whereas conditionals compare specific points. Therefore, conditionals quantify the compensatory potential in response to a specific perturbation from point A to point B, which gives them an advantage if the perturbation is known. Pairwise correlations quantify whether there is a correlation across the entire prior range (or, in the case of conditional correlation coefficients, the range within the support of the posterior estimate). Overall, pairwise correlations and conditionals represent distinct methods of quantifying compensation, and many parameters expectedly differ between them, but some parameters, such as the synaptic strengths, show similar trends.

So far, we focused on the posterior estimator for baseline excitability. Next, we investigated the compensatory mechanisms that can move the network from hyperexcitable to baseline (Figure 4). The principle of Figure 4 is similar to Figure 3, with one critical difference: instead of comparing normal and pathophysiological parameter conditions at baseline (Figure 3), we compare the baseline and hyperexcitable posterior estimates at the pathophysiological condition (Figure 4). Figure 4 A, B & C show the parameters with the largest KS statistic. The KS statistics are overall different from Figure 3. For example, three intrinsic PC properties are among the five parameters with the largest difference for IN loss. Furthermore, the IN → PC connections are no longer among the five largest. Excitatory sprouting and intrinsic depolarization also differ. However, the excitatory connection strength is still the parameter with the strongest compensatory potential in the sprouting condition. During intrinsic depolarization, the PC threshold is still among the largest five, but the PC → PC connection probability is now first. This shows that the compensatory potential of some parameters depends on the specifics of the compensated conditions, while others are critical across conditions. Finally, we also analyzed the correlation of parameters with specific outcome parameters.

Specific pathophysiological conditions have distinct compensatory parameters that can change the network from hyperexcitable to baseline.

We compared samples from the baseline posterior estimator to those from the hyperexcitable posterior estimator conditioned on pathophysiological parameter values. To compare distributions, we calculated the Kolmogorov-Smirnov (KS) test statistic. A), B) and C) show the five parameters with the largest test statistic. Each compares the baseline and the hyperexcitable estimator for a specific pathophysiological condition. In A), the condition is that the IN numbers are set to 15. In B), the condition is that PC → PCp is set to 3%. In C), the condition is the depolarized PC intrinsic parameters. D) shows the KS test statistic calculated between the baseline and hyperexcitable posterior for each pathophysiological condition. KS test results for all parameters and conditions are in Table Supp 4.1.

We have used posterior density estimators to quantify compensatory mechanisms. However, the ultimate goal of a compensatory mechanism, in the clinical sense, is to bring the system from the pathophysiological state back to the healthy state. In the simulator, we can directly test the effect of parameter changes on the output and, furthermore, whether the effect depends on the specific pathophysiological cause. To this end, we used the hyperexcitable posterior density estimator and drew samples given each pathophysiological condition: IN Loss, Sprouting, and Depolarization. To test the effect of a parameter on the output, a single parameter was then systematically varied and simulated. If the pathophysiological condition influences the effect of a parameter on the output, we expect a change in the difference between conditions at some parameter point. An ANOVA shows whether the interaction between the parameter change and any of the three pathophysiological conditions is significant. A statistically significant interaction indicates that the effect of a parameter on the outcome depends on the condition (or vice versa) and is thus specific to the pathophysiology. The results clearly show that the condition is important for the compensatory potential of many parameters (Figure 5). As expected, IN loss affects the influence of AIN → PC connectivity, and sprouting strongly affects the PC → PC connection strength. The effects of several parameters on a variety of outcomes are also strongly condition-dependent. Overall, this shows that the compensatory effect of many parameters depends on the specific pathophysiological condition.

The pathophysiological condition changes the effect of parameters on simulated outcomes.

The hyperexcitable posterior estimator was sampled with additional pathophysiological conditions. Each parameter was then varied to cover 50 points across its prior range, and the resulting parameter sets were simulated. Each of the 50 points on the x-axis contains 100 samples, and the error bars show the 95% confidence interval. Asterisks indicate where the interaction between parameter and condition is statistically significant. Figure Supp 5.1 shows the effects of additional parameters, and Table Supp 5.1 contains the p-values of the interactions.

4 Discussion

The ideal compensatory mechanisms depend on the underlying pathological causes. Using simulation-based inference, we quantify the ability of parameters to compensate for different causes of hyperexcitability. For example, we find that the threshold potential for exponential voltage rise (VT) is the best parameter to compensate for intrinsic depolarization. However, when compensating for interneuron loss or recurrent excitatory sprouting, VT compensation is less effective. The exponential voltage rise in neurons is mediated by voltage-activated sodium channels, which are a crucial target of antiepileptic drugs (Kaplan et al., 2016). Our findings, therefore, predict that in patients whose seizures are not primarily caused by principal cell hyperexcitability, commonly used sodium channel blockers could be less effective. This could explain why sodium channel blockers do not control seizures in some patients.

Two other interesting parameters are the interneuron to principal cell connection probabilities (AIN → PCp and NIN → PCp). Increasing the connection probability of either can compensate for intrinsic depolarization but also for the loss of interneurons. On one hand, fewer interneurons mean less overall inhibition, so it makes sense that this can be compensated for by increasing connectivity. On the other hand, fewer interneurons mean that the effect of the connectivity parameter should decrease, because the same increase in connection probability creates fewer synapses overall. We predict that even in sclerotic tissue with severe interneuron loss, increasing the number of inhibitory synapses has great compensatory potential. In vivo, thousands of interneurons remain even after interneuron loss in the pilocarpine or traumatic brain injury rat models of epilepsy (Huusko et al., 2015). The remaining interneurons could restore healthy activity by maintaining more synapses on principal cells. There are currently no epilepsy treatments that target inhibitory synapses directly, and increasing the number of inhibitory synapses without affecting excitatory synapses could prove difficult. A promising direction could be to use astrocyte-secreted factors such as neurocan, which specifically control inhibitory synaptogenesis (Irala et al., 2024).

While synaptogenesis has not yet been targeted for epilepsy treatments, interneuron transplantation to replace lost interneurons has been explored and proven effective in rodents (Hunt & Baraban, 2015). We find that the interneuron cell numbers, compared to synaptic and intrinsic parameters, do not compensate well for excitatory sprouting or intrinsic depolarization. Before using interneuron transplants, it could, therefore, be advisable to determine whether interneuron loss affects the epileptogenic region. This is especially important since some epileptogenic regions are non-sclerotic, having cell numbers similar to healthy controls (Blümcke et al., 2013). In these regions, seizures could be caused by other factors, and interneuron transplantation could be less effective.

Our results indicate that the pathophysiological mechanisms could be relevant to choosing optimal treatments, but measuring the number of interneurons, the number of recurrent excitatory synapses, or intrinsic excitability is currently impossible in living patients. It is, therefore, essential to develop methods to estimate these difficult-to-measure mechanistic features from measurable properties. Simulation-based inference can be used to estimate such mechanistic biomarkers. For example, Doorn et al. (2024) measured the activity of patient-derived neuronal cultures with multi-electrode arrays and used the posterior estimates of recurrent spiking neuronal network parameters, conditioned on that activity, as estimates of pathological mechanisms. This method shows some promise, but a gap remains between the activity in epileptic networks and the activity in cell cultures. The activity of cell cultures is dominated by the details of the culturing technique and the patient's genetics, which makes them particularly useful for epilepsies with strong genetic components. In contrast, the activity of an epileptogenic network in vivo is more complex and depends not only on the patient's genetics but also on various protective or harmful environmental and developmental factors. Therefore, intracranial field recordings, as they are performed in pharmacoresistant epilepsies, could be necessary. Such recordings, combined with phenomenological neural mass models, which simulate the average activity of coupled brain regions rather than the spiking of single neurons, have been used to estimate the epileptogenicity index (Makhalova et al., 2022). While this index does not describe a specific pathophysiological mechanism, it could be clinically meaningful. A clinical trial is ongoing to determine its efficacy in guiding surgical decisions on the resection of epileptogenic brain tissue (Jirsa et al., 2023).
If neural mass models can provide meaningful estimates of epileptogenicity, spiking neuronal network models could provide estimates of pathophysiological mechanisms such as interneuron number, which we have found might be helpful to guide personalized treatment.

Comparing conditional distributions has, to our knowledge, not been used to identify compensatory mechanisms. It is thus worth comparing it to the more common way of identifying compensation from the posterior correlations between parameters (Gonçalves et al., 2020). If we sample from the posterior p(θ|x), correlations between θi and θj indicate that if either of the parameters were changed, we could keep the probability of generating x-type output constant by changing the other parameter. This identifies broadly applicable compensatory mechanisms that work across the entire parameter range.

On the other hand, comparing p(θi|x, θj = 5) to p(θi|x, θj = 10) shows how θi could change to maintain x-type dynamics despite θj being perturbed from 5 to 10. The conditional also tests only the compensation of one parameter for another (the unconditioned parameter compensates for the conditioned one), whereas correlations find compensation in both directions. Overall, correlations discover broadly applicable mechanisms, while conditionals identify specific ones. The conditional distributions are therefore likely more useful for precision treatments than the unspecific correlations.
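The two views can be illustrated on synthetic posterior samples. The compensatory (negative) correlation below is built in by construction, and slicing samples near a conditioning value is a crude stand-in for conditioning a trained density estimator; the numbers 5 and 10 simply mirror the example in the text.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)

# Toy joint "posterior" over two parameters, standing in for samples from
# p(theta | x): theta_i compensates for theta_j along a line.
n = 20000
theta_j = rng.uniform(0.0, 10.0, size=n)
theta_i = 10.0 - theta_j + rng.normal(0.0, 0.5, size=n)

# Global view: a strong negative correlation says theta_i can offset changes
# in theta_j anywhere in the range.
corr = np.corrcoef(theta_i, theta_j)[0, 1]
print("posterior correlation:", corr)

# Local view: compare p(theta_i | x, theta_j ~= 5) with
# p(theta_i | x, theta_j ~= 10) by slicing samples near each value.
slice_5 = theta_i[np.abs(theta_j - 5.0) < 0.25]
slice_10 = theta_i[np.abs(theta_j - 10.0) < 0.25]
stat = ks_2samp(slice_5, slice_10).statistic
print("KS between conditional slices:", stat)
```

A large KS statistic between the two slices says exactly what the text describes: to keep x-type dynamics after θj is perturbed from 5 to 10, θi must shift to a different range.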

In summary, we show that the pathophysiological mechanisms underlying hyperexcitability change the efficacy of compensatory mechanisms to restore normal excitability in a spiking neuronal network model. These findings suggest that such mechanistic insights, combined with a biologically detailed simulator, could be used to predict the outcome of treatment options, such as voltage-gated sodium channel blockers or interneuron transplants. Simulation-based prediction of treatment outcomes would be a major advance for treating brain disorders and for precision medicine.

Code and Data Availability

The Python code to run the simulator, train the NPE, infer posteriors, analyze the data and plot the figures is available on GitHub (https://github.com/danielmk/hyperexcitability_sbi). The simulated data and the trained NPEs will be publicly available in a data repository.

Acknowledgements

We are grateful for the help and support provided by the Scientific Computing and Data Analysis section of Core Facilities at OIST. We thank Milena Menezes Carvalho, Joanna Komorowska-Müller and Gastón Sivori for helpful comments on the manuscript. This work was supported by JSPS KAKENHI grant no. JP23H05476 to T.F.

Additional files

Supplemental figures