Generation of biophysical neuron model parameters from recorded electrophysiological responses

  1. Jimin Kim
  2. Minxian Peng
  3. Shuqi Chen
  4. Qiang Liu  Is a corresponding author
  5. Eli Shlizerman  Is a corresponding author
  1. Department of Electrical and Computer Engineering, University of Washington, United States
  2. Department of Neuroscience, City University of Hong Kong, Hong Kong
  3. Department of Applied Mathematics, University of Washington, United States

eLife Assessment

This study is a valuable contribution to the field of neuronal modeling by way of providing a method for rapidly obtaining neuronal physiology parameters from electrophysiological recordings. The method is solid as the generated models reproduce both ground-truth simulated data and empirical data, and there is now a quantitative comparison with other approaches.

https://doi.org/10.7554/eLife.95607.4.sa0

Abstract

Recent advances in connectomics, biophysics, and neuronal electrophysiology warrant modeling of neurons with further details in both network interaction and cellular dynamics. Such models may be referred to as ElectroPhysiome, as they incorporate the connectome and individual neuron electrophysiology to simulate neuronal activities. The nervous system of Caenorhabditis elegans is considered a viable framework for such ElectroPhysiome studies due to advances in connectomics of its somatic nervous system and electrophysiological recordings of neuron responses. In order to achieve a simulated ElectroPhysiome, the set of parameters involved in modeling individual neurons needs to be estimated from electrophysiological recordings. Here, we address this challenge by developing a deep generative estimation method called ElectroPhysiomeGAN (EP-GAN), which, once trained, can instantly generate parameters associated with the Hodgkin–Huxley neuron model (HH-model) for multiple neurons with graded potential response. The method combines generative adversarial network (GAN) architecture with recurrent neural network encoder and can generate an extensive number of parameters (>170) given the neuron’s membrane potential responses and steady-state current profiles. We validate our method by estimating HH-model parameters for 200 simulated neurons with graded membrane potential followed by nine experimentally recorded neurons (where six of them are newly recorded) in the nervous system of C. elegans. Comparison of EP-GAN with existing estimation methods shows EP-GAN's advantage in the accuracy of estimated parameters and inference speed for both small and large numbers of parameters being inferred. In addition, the architecture of EP-GAN permits input with arbitrary clamping protocols, allowing inference of parameters even when partial membrane potential and steady-state currents profiles are given as inputs. EP-GAN is designed to leverage the generative capability of GAN to align with the dynamical structure of the HH-model and thus is able to achieve such performance.

Introduction

Models of the nervous system aim to achieve biologically detailed simulations of large-scale neuronal activity through the incorporation of both structural connectomes (connectivity maps) and individual neural dynamics. The nervous system of Caenorhabditis elegans is considered a framework for such a model as the connectome of its somatic nervous system for multiple types of interaction is mapped (White et al., 1986; Varshney et al., 2011; Cook et al., 2019). In addition to the connectome, advances in electrophysiological methodology allow the recording of whole-cell responses of individual neurons. These advances provide biophysically relevant details of individual neuro-dynamical properties and warrant a type of model for the C. elegans nervous system incorporating both the connectomes and individual biophysical processes of neurons. Such a model could be referred to as ElectroPhysiome, as it incorporates a layer of individual neural dynamics on top of the layer of inter-cellular interactions facilitated by the connectome.

The development of nervous system models that are further biophysically descriptive for each neuron, that is, modeling neurons using the Hodgkin–Huxley type equations (HH-model), requires fitting a large number of parameters associated with ion channels found in the system. For a typical single neuron, these parameters could be tuned via local optimizations of individual ion channel parameters estimated separately to fit their respective in vivo channel recordings such as activation/inactivation curves (Hodgkin and Huxley, 1952; Willms, 2002; Willms et al., 1999; Nicoletti et al., 2019; Liu et al., 2018; Jiang et al., 2022). Such a method requires multiple experiments to collect each channel data, and when such experiments are infeasible, the parameters are often estimated through hand-tuning. In the context of developing the ElectroPhysiome of C. elegans, the method would have to model approximately 300 neurons each including an order of hundreds of parameters associated with up to 15 to 20 ionic current terms (with some of them having unknown ion channel composition), which would require large experimental studies (Nicoletti et al., 2019). Furthermore, the fitted model may not be the unique solution as different HH-parameters can produce similar neuron activity (Marder and Goaillard, 2006; Marder and Taylor, 2011; Prinz et al., 2003; Prinz et al., 2004). As these limitations also apply for general neuron modeling tasks beyond C. elegans neurons, there has been an increasing search for alternative fitting methods requiring less experimental data and manual interventions.

A promising direction in associating model parameters with neurons has been the simultaneous estimation of all parameters of an individual neuron given only electrophysiological responses of cells, such as membrane potential responses and steady-state current profiles. Such an approach requires significantly less experimental data per neuron and offers more flexibility with respect to trainable parameters. The primary aim of this approach is to model macroscopic cell behaviors in an automated fashion. Indeed, several methods adopting the approach have been introduced. Buhry et al., 2012 and Laredo et al., 2022 utilized the differential evolution (DE) method to simultaneously estimate the parameters of a 3-channel HH-model given the whole-cell membrane potential responses recording (Buhry et al., 2012; Laredo et al., 2022). Naudin et al., 2022b further developed the DE approach and introduced the multi-objective differential evolution (DEMO) method to estimate 22 HH-parameters of three non-spiking neurons in C. elegans given their whole-cell membrane potential responses and steady-state current profiles (Naudin et al., 2022b). The study was a significant step toward modeling whole-cell behaviors of C. elegans neurons in a systematic manner. From a statistical standpoint, Wang et al., 2022 used the Markov Chain–Monte Carlo method to obtain the posterior distribution of channel parameters for HH-models featuring three and eight ion channels (two and nine parameters, respectively) given the simulated membrane potential responses data (Wang et al., 2022). From an analytic standpoint, Valle et al. 2022 suggested an iterative gradient descent-based method that directly manipulates the HH-model to infer three conductance parameters and three exponents of activation functions given the measurements of membrane potential responses (Valle and Madureira, 2022). Recent advances in machine learning gave rise to deep learning-based methods which infer steady-state activation functions and posterior distributions of three-channel HH-model parameters inferred by an artificial neural network model given the membrane potential responses data (Gonçalves et al., 2020; Estienne, 2021).

While these methods suggest that simultaneous parameter estimation from macroscopic cell data is indeed possible through a variety of techniques, it is largely unclear whether they can be extended to fit more detailed HH-models featuring a large number of channels and parameters (e.g., C. elegans neurons) (Nicoletti et al., 2019). Furthermore, for most of the above methods, the algorithms require an independent (from scratch) optimization process for fitting each individual neuron, making it difficult to scale up the task toward a large number of neurons.

Here, we propose a new machine learning approach that aims to address these aspects for the class of non-spiking neurons, which constitute the majority of neurons in C. elegans nervous system (Goodman et al., 1998). Specifically, we develop a deep generative neural network model (GAN) combined with a recurrent neural network (RNN) Encoder called ElectroPhysiomeGAN (EP-GAN), which directly maps electrophysiological recordings of a neuron, for example, membrane potential responses and steady-state current profiles, to HH-model parameters of arbitrary dimensions (Figure 1). EP-GAN can be trained with simulation data informed by a generic HH-model encompassing a large set of arbitrary ionic current terms and thus can generalize its modeling capability to multiple neurons. Unlike typical GAN architecture trained solely with adversarial losses, we propose to implement an additional regression loss for reconstructing the given membrane potential responses and current profiles from generated parameters, thus improving the accuracy of the generative model. In addition, due to the RNN component of EP-GAN, the approach supports input data with missing features such as incomplete membrane potential responses and current profiles.

Estimation of HH-model parameters from membrane potential and steady-state current profiles.

Given the membrane potential responses (V) and steady-state current profiles (IV) of a neuron, the task is to predict biophysical parameters of the Hodgkin–Huxley-type neuron model (left). We use the Encoder-Generator approach to predict the parameters (right).

We validate our method to estimate HH-model parameters of 200 simulated non-spiking neurons followed by applying it to three previously recorded non-spiking neurons of C. elegans, namely RIM, AFD, and AIY. Studies have shown that membrane potential responses of these neurons can be well modeled with typical HH-model formulations with 22 parameters (Naudin et al., 2022b; Naudin et al., 2021). We show that when trained with a more detailed HH-model consisting of 16 ionic current terms resulting in 175 trainable parameters, EP-GAN can predict parameters reproducing their membrane potential responses with higher accuracy in the reconstruction of membrane potential with significantly faster inference speed than existing algorithms such as differential evolution and genetic algorithms. Through ablation studies on input data, we show that EP-GAN retains its prediction capability when provided with incomplete membrane potential responses and steady-state current profiles. We also perform ablation studies on EP-GAN architecture components to elucidate each component’s contributions toward the accuracy of the predicted parameters. To further test EP-GAN, we estimate HH-model parameters for six newly recorded non-spiking C. elegans neurons: AWB, AWC, URX, RIS, DVC, and HSN, whose membrane potential responses were not previously modeled.

Our results suggest that EP-GAN can learn a translation from electrophysiologically recorded responses and propose projections of them to parameter space. EP-GAN method is currently limited to non-spiking neurons in C. elegans as it was designed and trained with the HH-model describing the ion channels of these neurons. EP-GAN applications can potentially be extended toward resolving neuron parameters in other organisms since non-spiking neurons are found within animals across different species (Koch et al., 2006; Roberts and Bush, 1981; Davis and Stretton, 1989a; Davis and Stretton, 1989b; Burrows et al., 1988; Laurent and Burrows, 1989a; Laurent and Burrows, 1989b; Naudin, 2022a).

Results

We evaluate EP-GAN with respect to four existing evolutionary algorithms introduced for general parameter estimation: NSGA2, DEMO, GDE3, and NSDE. Specifically, NSGA2 is a variant of the Genetic Algorithm (GA) that uses a non-dominated sorting survival strategy and is a commonly used benchmark algorithm for multi-objective optimization problems that include HH-model fitting (Deb et al., 2000; Hay et al., 2011; van Geit et al., 2008). DEMO, GDE3, and NSDE are variants of Differential Evolution (DE) algorithms that combine DE mutation with Pareto-based ranking and crowding distance sorting applied in NSGA2’s survival strategy (Robič and Filipič, 2005; Kukkonen and Lampinen, 2005; Angira and Babu, 2005). These methods have been proposed as more effective methods than direct DE for the estimation of HH-model parameters (Naudin et al., 2022b; Rumbell et al., 2016; Octeau et al., 2019; Buhry et al., 2009). In particular, DEMO has been successfully applied to estimate HH-model parameters for non-spiking neurons in C. elegans (Naudin et al., 2022b). All four methods support multi-objective optimization over the large parameter space, allowing them to have similar setups as EP-GAN. All four methods were implemented in Python where DEMO uses the algorithm proposed in Naudin et al., 2022b whereas NSGA2, GDE3, and NSDE were implemented using Pymoo package (Blank and Deb, 2020).

For the HH-model to be estimated, we use the formulation introduced in Nicoletti et al., 2019; Nicoletti et al., 2024. The model features 16 ion channels that were found in C. elegans and other organisms expressing homologous channels and is considered the most detailed neuron model for the organism (see ‘Ionic currents modeling’ for the mathematical description of channels). The model has a total of 203 parameters, of which we identify 175 of them have the approximate ranges with lower and upper bounds that can be inferred from the literature (Liu et al., 2018; Naudin et al., 2021; Izhikevich, 2007). We thus target these 175 parameters as trainable parameters for all methods. For a detailed list of all 203 parameters included in the HH-model and 175 parameters used for training, see Nicoletti et al., 2024 and the included table predicted parameters in Supplementary file 1.

Predictions on simulated neurons

We first validate EP-GAN by training and testing using simulated neurons. Each simulated neuron training sample consists of two inputs: (i) simulated membrane potential traces concatenated with associated external stimuli traces and (ii) steady-state currents across 18 voltage points. For each neuron, the output is the set of 175 HH-parameters to be inferred. Each membrane potential trace is simulated for 15 s for a given stimulus according to the current-clamp protocol where the stimulation is applied for 5 s at [5, 10] s and no stimulation is applied at t=[0,5) (pre-activation) and t=(10,15] (post-activation). These time intervals are chosen to ensure sufficient stabilization periods before and after stimulation. For the membrane potential input, the responses during t=[4,11] interval are used consistent with the time interval used by experimental recordings. Similarly, steady-state currents are computed across 18 voltage states according to voltage-clamp protocol (see Table 4 for detailed current/voltage clamp protocols used for simulated neurons). The output HH-parameters are of the simulated neurons chosen randomly from lower and upper bounds as previously described. For training EP-GAN, we simulate a total of 32,000 (32k) neurons where EP-GAN achieves both good predictive performance and training time. Specifically, 32k is a training data size in which membrane potential errors from the test set (N=200) are within the average root mean square error (RMSE) recording error (4.8 mV, averaged over pre-, mid-, post-activation periods) obtained from experimental neurons with multiple membrane potential recording data. For more details on generating simulated neuron training samples, see ‘Generating training data’.

To initially test EP-GAN performance, we evaluate EP-GAN predicted parameters for 200 simulated neurons outside of the training set (test set). To emulate C. elegans’ neuronal diversity, neurons in the test set are divided into three response types: (i) transient outward rectifier type, (ii) outward rectifier type, and (iii) bistable type – that are currently found in non-spiking neurons of C. elegans according to their steady-state responses (Figure 2; Liu et al., 2018; Naudin et al., 2022b). For a given neuron being evaluated, EP-GAN predicted HH-parameters are obtained as follows: for each training epoch, EP-GAN generates a set of HH-parameters for the neuron and at the end of the training, the parameter set which achieved the lowest RMSE of membrane potential responses averaged across three time intervals – pre-activation [4, 5) s, mid-activation [5, 10] s, and post-activation (10, 11] s – with respect to ground truths is reported (detailed descriptions of its calculation provided in Appendix 1 andFigure 5A). In the case of multiple N-neurons being evaluated, the same procedure is followed except EP-GAN generates N-parameter sets in parallel at each epoch. Such multi-neuron inference is possible due to EP-GAN being a neural network, where parallel processing of inputs can be done with minimal impact on inference speed. Using these procedures, EP-GAN predicted HH-parameters result in mean membrane potential RMSE error of 2.37 mV for the test set (see Figure 2 for representative samples and Appendix 1—table 1 for the detailed breakdown of the errors). These errors are within the mean recording RMSE error of 4.8 mV obtained from experimental neurons, and their distributions were skewed unimodal type, where the majority of the errors fall within 4 mV (Figure 2—figure supplement 1).

Figure 2 with 1 supplement see all
ElectroPhysiomeGAN (EP-GAN) (32k) predictions on simulated neurons.

(A) EP-GAN predicted membrane potential traces and steady-state currents (red) overlaid on top of groundtruth counterparts (black) for transient outward rectifier neuron type. (B) Outward rectifier neuron type. (C) Bistable neuron type.

Predictions on experimental neurons

We apply EP-GAN trained and tested on simulated data to predict HH-parameters for nine experimentally recorded non-spiking neurons found in C. elegans: RIM, AFD, AIY, AWB, AWC, URX, RIS, DVC, and HSN. Among these neurons, AWB, AWC, URX, RIS, DVC, and HSN are novel recording data and were not previously modeled, whereas RIM, AIY, and AFD neurons are publicly available and their modeling descriptions were elaborated by previous works (Naudin et al., 2022b; Naudin et al., 2021; Naudin et al., 2022c). Similar to the prediction scenario on simulated neurons, we categorize experimental neurons into different response types according to their steady-state current responses. In particular, we classify (RIM, DVC, HSN) as transient outward rectifier type, (AIY, URX, RIS) as outward rectifier type, and (AFD, AWB, AWC) as bistable type. For all experimental neurons, simulation protocols outlined in Table 4 are used to generate membrane potential and steady-state responses of predicted parameters.

We compare the performance of EP-GAN with four existing parameter inference methods: NSGA2, DEMO, GDE3, and NSDE. Unlike EP-GAN, which can optimize multiple neurons in parallel, these are evolutionary methods where the optimization is done from scratch for each neuron. We therefore evaluate their respective performances relative to EP-GAN by normalizing the total number of simulated neuron samples during the entire optimization task. Specifically, for all methods, we set the maximum number of neuron samples used during optimization to equal sizes. For example if EP-GAN is trained with 32k neuron samples to predict nine neurons, NSGA2, DEMO, GDE3, and NSDE are each allocated up to 3.5k samples during the search phase of HH-parameters for each neuron, thus adding up to a total of 32k samples for all nine neurons. To test how the performance of each method scales with the amount of samples during optimization, we evaluate each method with both 32k and 64k total neuron samples.

The parameter selection process for NSGA2, DEMO, GDE3, and NSDE is as follows: during the search phase for each neuron, the parameter set candidates (i.e., population) are recorded at each iteration. At the end of the search phase, the steady-state current profile of each parameter set candidate is evaluated with respect to the experimentally known bifurcation structure (i.e., dI/dV) of the neuron being inferred (e.g., bistable type). Upon evaluation, only the parameter sets satisfying the dI/dV bound constraints (98% confidence interval) are kept. Such an initial selection process is similar to the ones employed by previous methods utilizing evolutionary algorithms (Naudin et al., 2022b). The dI/dV bound constraints are also used for generating EP-GAN training data (see ‘Generating training data’ for more detail). The final parameter set is then chosen by selecting the one with the lowest RMSE membrane potential responses error averaged across pre-, mid-, and post-activation periods identical to that of the EP-GAN parameter selection process. For DE methods (DEMO, GDE3, and NSDE), we follow the same configurations used in the literature to set their optimization scheme (Naudin et al., 2022b). Specifically, we set the crossover parameter CR and scale factor F to 0.3 and 1.5, respectively. For all four methods, NP (population size) is set to 600 with a total of 6 and 12 iterations (i.e., 3.6k and 6.2k samples per neuron for 32k and 64k total neuron samples, respectively). For all methods, loss functions identical to the ones used for EP-GAN training (mean absolute errors, see ‘Materials and methods’ for detail) are used to calculate the errors for membrane potential responses and steady-state currents for multi-objective optimization.

Small HH-model scenarios (47 parameters)

We first test EP-GAN and existing methods with a ‘smaller’ version of the HH-model of 47 parameters where the individual channel parameters (n=129) are frozen to default values given by Nicoletti et al., 2024. The considered parameters consist of 16 conductance parameters of each channel (gCh), 4 reversal potentials for calcium, potassium, sodium, and leak channels (VCa,VK,VNa,VL), 1 cell capacitance C, and 26 initial conditions for membrane potential V0 and channel activation variables (m0,h0). Such a parameter set is commonly targeted when fitting HH-models for individual neurons (Nicoletti et al., 2019; Nicoletti et al., 2024). For all methods, we test both 32k and 64k total sample sizes for the inference of nine experimental neurons. Figure 3 illustrates that EP-GAN can reconstruct membrane potential responses close to ground truth responses. Indeed, upon inspecting the RMSE error for membrane potential responses (averaged over pre-, mid-, and post-activation), EP-GAN (32k) median error of 2.5 mV over nine neurons is 50% lower than that of NSGA2 (64k) 4.3 mV followed by DEMO (64k) 4.8 mV, NSDE (64k) 5.5 mV, and GDE3 (64k) 6.0 mV (Table 1, Appendix 1—table 2, Figure 5B).

Figure 3 with 1 supplement see all
ElectroPhysiomeGAN (EP-GAN) (32k) prediction on experimental neurons (small HH-model).

(A) EP-GAN predicted membrane potential traces and steady-state currents (red) overlaid on top of groundtruth counterparts (black) for transient outward rectifier neuron type (RIM, DVC, HSN). (B) Outward rectifier neuron type (AIY, URX, RIS). (C) Bistable neuron type (AFD, AWB, AWC).

Table 1
Small HH-model scenarios root mean square error (RMSE) errors for predicted membrane potential responses and steady-state currents.

Each method is tested with 32k or 64k total sample sizes, where the top row shows membrane potential responses RMSE errors averaged across pre-activation, mid-activation, post-activation periods, and the bottom row shows steady-state currents RMSE errors across 18 voltage values. The lowest membrane potential responses RMSE error is marked bold for each neuron.

MethodSample sizeMedian errorRIMDVCHSNAIYURXRISAFDAWBAWC
GDE332k16.9 mV20.958.18.913.016.925.46.332.54.7
5.9 pA5.85.819.84.82.45.919.17.211.9
64k6.0 mV11.524.113.76.05.17.14.15.63.6
7.9 pA7.54.66.87.218.77.942.117.646.7
NSDE32k7.1 mV38.78.320.25.77.111.05.56.36.6
13.6 pA3.121.59.713.614.25.724.79.614.5
64k5.5 mV15.48.720.213.55.25.54.94.94.0
9.7 pA2.65.99.73.411.73.264.412.419.0
DEMO32k6.7 mV35.914.15.813.29.06.75.04.93.8
11.5 pA6.513.814.65.25.49.723.811.518.7
64k4.8 mV12.310.55.810.24.84.73.14.42.9
14.6 pA4.46.514.64.115.210.341.023.117.8
NSGA232k7.5 mV12.415.42.69.86.16.47.59.46.6
10 pA4.07.414.34.616.65.014.411.910.0
64k4.3 mV10.519.05.213.33.53.24.24.33.1
8.4 pA8.41.85.22.321.26.526.612.649.4
EP-GAN
(ours)
32k2.5 mV3.42.41.62.53.21.74.92.52.0
13.8 pA4.013.810.310.738.916.848.09.628.9
64k2.4 mV3.42.41.62.42.91.43.42.52.1
13.1 pA3.213.92.59.816.513.149.711.627.9

Among all nine neurons being inferred, EP-GAN (32k) showed the best accuracy for HSN with 1.6 mV and the lowest accuracy for AFD with 4.9 mV. With EP-GAN (64k), the median error further decreased (2.5mV2.4mV), where URX and RIS errors improved by 0.3 mV and AFD error improved by 1.5 mV over their 32k counterparts. Interestingly, we note that the high accuracy of EP-GAN in predicting membrane potential is not necessarily complemented with steady-state currents (Table 1, Figure 5B). EP-GAN’s overall steady-state current errors are generally higher than those of existing methods. A possible reason could be that the majority of these errors stem from lower and upper voltage ranges where the recording variations are high, thus potentially causing a conflict with membrane potential responses optimization. Such a competitive nature between membrane potential vs. steady-state current optimizations has indeed been reported in previous works (Naudin et al., 2022b).

Large HH-model scenarios (175 parameters)

We expand the domain of parameters being inferred by testing with respect to all 175 HH-model’s trainable parameters including the 47 parameters from the previous scenario + 129 channel parameters. The inclusion of channel parameters allows methods to further fine-tune the HH-model. The minimum and maximum values for channel parameters are set to ±50% from their default values (see ‘Generating training data’ for more detail). Such a task introduces further challenges as the methods need to estimate ×3 more parameters compared to the small HH-model scenario. From Figures 4 and 5B, Table 2, and Appendix 1—table 3, we see that while EP-GAN (32k) median membrane potential error increases slightly from 2.5 mV to 2.7 mV, its performance gaps over existing methods widen with 60% lower error than NSGA2 (64k) 7.5 mV, followed by NSDE (64k) 8.6 mV and DEMO (64k), GDE3 (64k) 10.5 mV. EP-GAN trained for large HH-model also slightly improved overall steady-state current error (13.8pA12.4pA) alongside membrane potential errors for RIM (3.4mV3.2mV) and URX (3.2mV3.0mV), indicating different selectivity for individual neurons for small vs. large HH-model. Further increasing the sample size to 64k improved median errors of both membrane potential (2.7mV2.6mV) and steady-states responses (12.4pA8.6pA). Taken together, these results show EP-GAN’s predicting capabilities for HH-parameters with higher membrane potential responses accuracy and its ability to generalize to a larger parameter space with minimal performance loss.

Figure 4 with 1 supplement see all
ElectroPhysiomeGAN (EP-GAN) (32k) prediction on experimental neurons (large HH-model).

(A) EP-GAN predicted membrane potential traces and steady-state currents (red) overlaid on top of groundtruth counterparts (black) for transient outward rectifier neuron type (RIM, DVC, HSN). (B) Outward rectifier neuron type (AIY, URX, RIS). (C) Bistable neuron type (AFD, AWB, AWC).

Bar plot showing the average root mean square error (RMSE) errors for membrane potential responses (pre-, mid-, post-activation periods, mean error) and steady-state currents for nine experimental neurons.

(A) Membrane potential responses (left) and steady-state currents (right) diagrams showing the time and voltage intervals of which the RMSE errors are computed. (B) Bar plots showing RMSE errors (n = 9, 95% confidence level using t-test) for membrane potential responses and steady-state currents for small HH-model prediction scenarios (top) and large HH-model prediction scenarios (bottom). All methods use a 32k sample size for both scenarios.

Table 2
Large HH-model scenarios root mean square error (RMSE) errors for predicted membrane potential responses and steady-state currents.

Each method is tested with 32k or 64k total sample sizes, where the top row shows membrane potential responses RMSE errors averaged across pre-activation, mid-activation, post-activation periods, and the bottom row shows steady-state currents RMSE errors across 18 voltage values. The lowest membrane potential responses RMSE error is marked bold for each neuron.

MethodSample sizeMedian errorRIMDVCHSNAIYURXRISAFDAWBAWC
GDE332k12.8 mV14.012.512.819.49.015.229.410.89.2
9.6 pA3.223.512.810.26.36.07.918.19.6
64k10.5 mV14.011.010.511.712.49.05.09.54.7
4.9 pA3.26.57.43.84.84.016.914.84.9
NSDE32k16.1 mV31.519.08.712.18.623.89.827.116.1
7.2 pA7.88.09.66.85.14.218.27.26.2
64k8.6 mV33.98.612.45.98.68.016.612.54.6
8.1 pA4.029.68.14.75.14.39.115.014.9
DEMO32k16.6 mV28.017.816.610.222.918.16.013.15.1
11.9 pA7.611.921.47.211.16.818.211.930.9
64k10.5 mV10.529.511.010.26.818.46.013.14.4
8.0 pA7.42.621.37.28.07.051.111.99.8
NSGA232k13.4 mV13.416.129.211.38.613.58.211.213.6
7.6 pA6.67.65.88.75.12.724.110.97.6
64k7.5 mV10.616.022.57.54.613.45.46.96.6
4.9 pA4.93.23.71.29.53.124.713.76.9
EP-GAN
(ours)
32k2.7 mV3.22.53.02.73.01.84.52.62.1
12.4 pA3.212.417.410.536.621.943.29.38.4
64k2.6 mV3.22.92.52.63.41.84.42.62.2
8.6 pA3.68.63.34.937.99.231.85.110.2

Ablation studies

EP-GAN architecture also allows its membrane potential inputs to have arbitrary current-clamp protocol due to its RNN encoder component. To test the robustness of EP-GAN when incomplete input data is given, we provide the model with membrane potential responses and steady-state current inputs with missing data points. For each membrane potential responses and current profile, the data is reduced by 25%, 50%, and 75% each. For membrane potential responses data, the ablation is done on stimulus space where a 50% reduction corresponds to removing the upper half of the membrane potential response traces each associated with a stimulus. For the steady-state current profile, we remove the first n-data points where they are instead extrapolated using linear interpolation with existing data points.

Our results show that EP-GAN largely preserves accuracy even when both membrane potential and steady-state current inputs are masked (Figure 6, Figure 6—figure supplement 1 [predicted steady-state currents], Table 3). In particular, EP-GAN preserves median membrane potential error (3.3mV) up to 50% remaining in its inputs but becomes less accurate when up to 25% input remains (3.3mV5.4mV). Surprisingly, AFD neuron membrane potential error is improved when only 50% of input data is considered (5.2mV4.5mV). These results could be attributed to the random input masking employed during EP-GAN training (see ‘Materials and methods’ for detail), which allows EP-GAN to make robust predictions even when conditioned with varying degrees of masked inputs.

Figure 6 with 1 supplement see all
Input data ablation on ElectroPhysiomeGAN (EP-GAN) (32k, Large HH-model).

Left: reconstructed membrane potential responses for RIM, AIY, and AFD when given with incomplete membrane potential responses data. Percentages in parentheses represent the remaining portion of input membrane potential responses trajectories. Right: reconstructed membrane potential responses for RIM, AIY, and AFD when given with incomplete steady-state current input.

Table 3
Ablation studies.

Top: membrane potential responses and steady-state current errors achieved for EP-GAN (32k, Large HH-model) when provided with incomplete input data. Bottom: membrane potential responses and steady-state current errors achieved for EP-GAN (32k, Large HH-model) upon using only adversarial loss (Adv) and using adversarial + current reconstruction loss (Adv + steady state) and all three loss components (Adv + steady state + membrane potential).

Input Data AblationSample sizeMedian ErrorRIMAIYAFD
EP-GAN (25% membrane potential)32k5.4 mV
14.9 pA
3.8
2.8
5.4
14.9
8.9
34.4
EP-GAN (75% steady-state)32k3.5 mV
15.6 pA
3.3
3.7
3.5
15.6
5.1
68.9
EP-GAN (50% steady-state)32k3.5 mV
15.5 pA
3.3
3.7
3.5
15.5
5.1
68.9
EP-GAN (25% steady-state)32k3.5 mV
15.6 pA
3.3
3.7
3.5
15.6
5.1
68.9
EP-GAN (75% membrane potential)32k3.4 mV
11.3 pA
3.4
3.6
2.7
11.3
4.9
44.0
EP-GAN (full)32k3.3 mV
10.5 pA
3.3
3.2
2.7
10.5
5.2
39.5
EP-GAN (50% membrane potential)32k3.3 mV
13.5 pA
3.3
3.4
2.9
13.5
4.5
43.2
Loss ablationSample sizeMedian ErrorRIMAIYAFD
EP-GAN (Adv)32k14.4 mV
20.3 pA
14.4
3.1
6.1
20.3
24.5
75.4
EP-GAN (Adv + steady state)32k6.0 mV
19.1 pA
5.7
2.8
6.0
19.1
3.9
23.1
EP-GAN (Adv + steady state + membrane potential)32k3.3 mV
10.5 pA
3.3
3.2
2.7
10.5
5.2
39.5

We also perform ablation studies on EP-GAN architecture by removing each loss component of the Generator module, allowing us to evaluate their relative contributions to accuracy. For all loss ablation scenarios, simulation protocols outlined in Table 4 are used to generate membrane potential and steady-state responses of predicted parameters. From Table 3 bottom, we see that removing the membrane potential loss term (V) results in a loss in performance for RIM and AIY but increases in accuracy for AFD. The result is consistent with input data ablation scenarios indicating AFD’s higher dependence on steady-state responses for good EP-GAN prediction. Upon removing the steady-state current reconstruction loss term (IV) in addition to the membrane potential reconstruction loss, we see a further reduction in overall performance. These results highlight the significance of the reconstruction losses in aligning the Generator to produce the desired outputs (Table 3).

Table 4
Simulation protocols for simulated and experimental neurons.
NeuronDuration (s)Current-clamp (min:step:max)Stimulation period (s)Voltage-clamp (min:step:max)
Simulated15–15 pA:5 pA:35 pA5–10–120 mV:10 mV:50 mV
RIM15–15 pA:5 pA:35 pA5–10–120 mV:10 mV:50 mV
DVC15–2 pA:1 pA:8 pA5–10–120 mV:10 mV:50 mV
HSN15–2 pA:1 pA:8 pA5–10–120 mV:10 mV:50 mV
AIY15–15 pA:5 pA:35 pA5–10–120 mV:10 mV:50 mV
URX15–4 pA:2 pA:16 pA5–10–120 mV:10 mV:50 mV
RIS15–4 pA:2 pA:16 pA5–10–120 mV:10 mV:50 mV
AFD15–15 pA:5 pA:35 pA5–10–120 mV:10 mV:50 mV
AWB15–4 pA:2 pA:16 pA5–10–120 mV:10 mV:50 mV
AWC15–4 pA:2 pA:16 pA5–10–120 mV:10 mV:50 mV

Parameter inference time

We also evaluate EP-GAN for its scalability by assessing its overall inference time and computational cost and comparing these to existing methods. Indeed, for estimation tasks involving many neurons, it is essential that the method is scalable so that the predictions are done within a reasonable time. In particular, for EP-GAN, the total time T needed for modeling N neurons including data generation and training time can be written as

T(N)TDatageneration+TTrain+TInference

whereas for existing methods, the T(N) follows the form

T(N)NTInference

which increases linearly as N increases. Since TInference for EP-GAN is nearly instantaneous, it has a strong advantage in parameter prediction tasks involving multiple neurons. As an example, given a hypothetical task of modeling all 279 somatic neurons in the C. elegans nervous system, it would take DEMO, GDE3, NSDE, or NSGA2 more than 44 days (assuming 7.2k samples per neuron and our available computing environment) whereas, for EP-GAN, the process would be done within a day under a similar training setup. For a larger number of neurons, the computational requirement of existing methods would grow linearly while EP-GAN would require constant time to complete the inference. Such scalability largely benefits from EP-GAN learning a parameter estimation strategy that is applicable for multiple neuron classes and its neural network structure being an inherently parallel architecture, allowing it to take multiple neuron profiles and output corresponding parameters in a single forward pass (Zou et al., 2009).

Discussion

In this work, we introduce a novel deep generative method and system called EP-GAN, for estimating HH-model parameters given the recordings of neurons with graded membrane potential (non-spiking). The proposed system encompasses the RNN encoder layer to process the neural recordings information such as membrane potential responses and steady-state current profiles and the Generator layer to generate a large number of HH-model parameters (N>175). The system can be trained entirely on simulation data informed by an arbitrary HH-model. When applied to neurons in C. elegans, EP-GAN generates parameters of HH-model which membrane potential responses are closer to ground truth responses than the existing methods such as differential evolution and genetic algorithms. The advantage of EP-GAN is in the accuracy and inference speed achieved through fewer training samples than existing methods and is generic such that it does not depend on the number of neurons for which inference is to be performed (Nicoletti et al., 2019; Naudin et al., 2022b; Nicoletti et al., 2024). In addition, the method largely preserves performance when provided with input data with partial information such as missing membrane potential responses (up to 50% missing) or steady-state current traces (up to 75% missing).

While EP-GAN is a step forward toward the ElectroPhysiome model of C. elegans, its inability to support neurons with spiking membrane potential responses remains a limitation. The reason stems from the fact that neurons with spiking membrane potential responses are rare during the generation of training data of 16 ionic channels HH-model without the spike-associated neuron channels. The relative sparsity of spiking responses makes their translation strategies to parameter space difficult to learn. A similar limitation is present with bistable membrane potential responses, for example, AFD, AWB, and AWC, although to a lesser extent. While the limitations for these profiles can be partially remedied through more training samples of their neuron types, their relative sparseness in the training data tends to cause lower quality of predicted parameters. Indeed, previous studies of C. elegans nervous system found that the majority of neurons exhibit graded membrane potential response instead of spiking (Nicoletti et al., 2019; Goodman et al., 1998). Furthermore, the limitation could lie within the current architecture of EP-GAN as it processes data directly without a component that discerns and processes spiking membrane potential responses. Improving the sampling strategy for training data alongside the enhancement of the network architecture could address these limitations in the future.

As discussed in ‘Materials and methods’, it is worth noting that EP-GAN does not necessarily recover the ground truth parameters that are associated with the input membrane potential responses and steady-state current profiles. This is mainly due to the fact that there may exist multiple parameter regimes for the HH-model that support the given inputs (Willms, 2002; Marder and Goaillard, 2006; Marder and Taylor, 2011; Prinz et al., 2003; Prinz et al., 2004; Valle and Madureira, 2022; Raba et al., 2013; Naudin, 2023). The parameters generated by a single forward pass of EP-GAN (i.e., a single flow of information from the input to the output) could thus be interpreted as a one-time sampling from such a regime, and a perturbation to inputs may result in a different set of parameters. Such sensitivity to perturbation could be adjusted by supplementing the training samples or inputs with additional recording data (e.g., multiple recording data per neuron).

EP-GAN allows additional modifications to accommodate different configurations of the problem. For instance, an update to the HH-model would only require retraining of the network without changes to its architecture. Indeed, the neuronal genome of C. elegans indicates additional voltage-gated channels that could be further incorporated into the HH-models introduced in Nicoletti et al., 2019; Nicoletti et al., 2024 to improve its modeling accuracy of membrane potential dynamics (Hobert, 2018). Extending the inputs to include additional data, for example, channel activation profiles, can also be done in a straightforward manner by concatenating them to the input vectors of the Encoder network. Extending EP-GAN prediction capabilities to new neuron types can also be done by incorporating additional constraints during training data generation.

Despite its primary focus on C. elegans neurons, we believe EP-GAN and its future extensions could be viable for modeling a variety of neurons in other organisms. Indeed, there are increasing advances in resolving connectomes of more complex organisms and techniques for recording large-scale neural activities (Brooks et al., 2022; Winding et al., 2023; Oh et al., 2014; Sofroniew et al., 2016). As neurons in these organisms can be described by a generic HH-model or similar differential equation model, EP-GAN is expected to be applicable and contribute to the development of biologically detailed nervous system models of neurons in these organisms.

Materials and methods

We divide the materials and methods section into three parts. In the first part, we describe the detailed architecture of the EP-GAN including its sub-modules with the simulation protocol used during training. In the second part, we describe the mathematical description for the ionic currents in the HH-model used for neuron modeling. In the third part, we describe the dataset and experimental protocol of novel neuron recordings of AWB, AWC, URX, RIS, DVC, and HSN from C. elegans nervous system.

Architecture of EP-GAN

Deep generative model for parameter prediction

Request a detailed protocol

EP-GAN receives neuronal recording data such as membrane potential responses and steady-state current profiles and generates a set of parameters that are associated with them in terms of simulating the inferred HH-model and comparing the simulated result with the inputs (Figures 1 and 7). We choose a deep generative model approach, specifically Generative Adversarial Network (GAN) as a base architecture of EP-GAN. The key advantage of GAN is in its ability to generate artificial data that closely resembles real data. The generative nature of GAN is advantageous for addressing the one-to-many nature of our problem, where there exist multiple parameter solutions for a given neuron recording. Indeed, several computational works attempting to solve an inverse HH-model noted the ill-posed nature of the parameter solutions (Willms, 2002; Marder and Goaillard, 2006; Marder and Taylor, 2011; Prinz et al., 2003; Prinz et al., 2004; Valle and Madureira, 2022; Raba et al., 2013; Naudin, 2023). Our approach is therefore leveraging GAN to learn a domain of parameter sets compatible with neuron recordings instead of mapping directly onto a single solution. GAN consists of two separate networks, Generator and Discriminator. The goal of the Generator is to generate outputs that are indistinguishable from real data, whereas the Discriminator’s goal is to distinguish outputs that are generated by the Generator against real data. Throughout training, the Generator and the Discriminator engage in a zero-sum game until they both converge to optimal states (i.e. Nash equilibrium) (Goodfellow et al., 2020). The particular architecture we use is Wasserstein GAN with gradient penalty (WGAN-GP), a variant of GAN architecture offering more stable training and faster convergence (Arjovsky et al., 2017).

Architecture of ElectroPhysiomeGAN (EP-GAN).

The architecture consists of an Encoder, Generator, and Discriminator. Encoder compresses the membrane potential responses into a 1D vector (i.e., latent space) that is then concatenated with 1D steady-state current profile to be used as an input to both Generator and Discriminator. Generator translates the latent space vector into a vector of parameters p~ and the Discriminator outputs a scalar measuring the similarity between generated parameters (p~) and ground truths (p). The Generator is trained with adversarial loss supplemented by reconstruction losses for both membrane potential responses and steady-state current profiles. The Discriminator is trained with Discriminator adversarial loss only. Generator and Discriminator follow the architecture of Wasserstein GAN with gradient penalty (WGAN-GP) for more stable learning. During training, random masking is applied to input membrane potential responses where its masking rate gradually decreases as the training continues.

Encoder module

Request a detailed protocol

In addition to Generator and Discriminator, we implement an Encoder module that pre-processes the input membrane potential responses for Generator and Discriminator (Figure 7, left). Specifically, the encoder serves two roles: (i) compression of membrane potential responses traces along the stimulus space, thus reducing its dimension from two-dimensional to one-dimensional, and (ii) translation of membrane potential responses traces into a latent space which encodes a meaningful internal representation for the Discriminator and Generator. The Encoder module uses Gated Recurrent Unit (GRU) architecture, a variant of RNN to perform this task (Cho et al., 2014). Each input sequence to a GRU cell at step k corresponds to the entire membrane potential response of size 350 (i.e., 350 time points, representing t=[4s,11s]) concatenated with the associated stimulus trace of equal size of 350. Since GRU is agnostic to the number of steps in an input sequence, such input structure allows EP-GAN to process a set of membrane potential traces with arbitrary current-clamp protocol. The output of the Encoder is a latent space vector of size 1024 encoding membrane potential responses information. The latent space vector is then concatenated with a 1D vector representing steady-state current profile which is used as an input to both Generator and Discriminator. During training, we randomly mask membrane potential traces to assist in better generalization in prediction (Chang et al., 2022; Liu et al., 2024). Specifically, we initially set the masking rate to 75% (i.e., 75% of membrane potential traces are randomly masked) and linearly decrease to 0% toward the end of training (Figure 7).

Discriminator module

Request a detailed protocol

The goal of the Discriminator is given the input membrane potential responses and current profiles, to distinguish generated parameters from real ground truth parameters. The Discriminator receives as input the latent space vector from the Encoder concatenated with a generated or ground truth parameter vector and outputs a scalar representing the relative distance between two parameter sets (Figure 7, Equation 1). Such a quantity is called Wasserstein distance or Wasserstein loss and differs from a vanilla GAN Discriminator, which only outputs between 0 and 1. Wasserstein loss is known to remedy several common issues that arise from a vanilla GAN such as vanishing gradient and mode collapse, leading to more stable training (Arjovsky et al., 2017). To further improve the training of the WGAN architecture, we supplement Wasserstein loss with a gradient penalty term, which ensures that the gradients of the Discriminator’s output with respect to the input have unit norms (Gulrajani et al., 2017). This condition is called Lipschitz continuity and prevents Discriminator outputs from having large variations when there are only small variations in the inputs (Gulrajani et al., 2017; Virmaux and Scaman, 2018). Combined together, the Discriminator is trained with the following loss:

(1) JD=E[D(p~)]E[D(p)]+λE[(p^D(p^)21)2]

where E[D(p~)] and E[D(p)] are the mean values of Discriminator outputs with respect to generated samples p~ and real samples p, respectively, and λE[(||p^D(p^)||21)2] is the gradient penalty term modulated by λ where p^=tp~+(1t)p is the interpolation between generated and real samples with 0t1.

Generator module

Request a detailed protocol

Being an adversary network of Discriminator, the goal of the Generator is to fool the Discriminator by generating parameters that are indistinguishable from the real parameters. The Generator receives as input the concatenated vector from the Encoder and outputs a parameter vector (Figure 7). The module consists of four fully connected layers with layer normalization applied after the first two layers for improved model convergence (Ba et al., 2016). Each parameter in the output vector is scaled between –1 and 1, which is then scaled back to the parameters’ original scales. The module is trained using three loss terms: (i) Generator adversarial loss, (ii) membrane potential responses reconstruction loss, and (iii) steady-state current reconstruction loss as follows:

(2) JG=E[D(p~)]+JV+JIV
(3) JV=i=1n|VgroundtruthVreconstructed|
(4) JIV=i=1n|IVgroundtruthIVreconstructed|

where E[D(p~)] is Generator adversarial loss, which is the reciprocal of the mean Discriminator outputs with respect to generated samples, and JV and JIV are L1 regression loss for reconstructed membrane potential responses and steady-state current profiles, respectively. It is important to note that JV and JIV are part of Generator’s computation graph and thus force Generator to optimize them on top of adversarial loss (Figure 8). The composite loss function of Generator makes EP-GAN a ‘model-informed’ GAN as the HH-model itself becomes part of the training process. Such networks have been shown to be more data-efficient during training as they do not rely solely on training data to learn an effective strategy (Karniadakis et al., 2021; Raissi et al., 2019). The mathematical description of membrane potential responses and steady-state current reconstruction from generated parameter set p~ is as follows:

(5) Vreconstructed(t)=1(dVdt(V,t,p~)),IVreconstructed(V)=IV(V,p~)
Description of membrane potential responses and steady-state current reconstruction losses for the Generator.

Generated parameter vector p~ is used to evaluate membrane potential responses derivatives dV/dt at n time points sampled with fixed interval given the ground truth V at those time points. The evaluated membrane potential responses derivatives are then used to reconstruct V using the forward integration operation 1. The reconstructed V is then compared with ground truth V to evaluate the membrane potential responses reconstruction loss JV. Steady-state current reconstruction is computed in a similar way via evaluating the currents at defined voltage points V given generated parameters p~ as inputs.

Here, dVdt(V,t,p~) is the right-hand-side function of the HH-model, which computes the membrane potential responses derivative at time t given the membrane potential responses V and parameter set p~, and IV(V,p~) is the function that evaluates the neuron’s steady-state current I given the voltage states V and parameter set p~. Membrane potential responses are reconstructed by first evaluating their derivatives with respect to ground truth membrane potential responses and generated parameters p~ at regularly sampled time points. This is followed by the forward integration operation 1 similar to Euler’s method to approximate V at sampled time points given the initial condition Vinit:

(6) Vt+1=Vt+hV(t),Vt=0=Vinit

where h is the time interval between sampled derivatives. Vinit can be selected at any time point within the ground truth membrane potential state (e.g., pre-activation, mid-activation, post-activation) to reconstruct different membrane potential features. Notably, all computation steps consisting of the forward integration process are expected to be differentiable. This is necessary to incorporate the forward integration process as part of the generator network that requires full differentiability and thus is trainable via the back-propagation algorithm (Rumelhart et al., 1986). Computationally, we achieve this by manually implementing the forward integration process with discrete array operations that support auto-differentiation and vectorization (e.g., PyTorch Tensors) instead of simulating the membrane potential with ODE solvers (Paszke et al., 2019). The variable h can also be adjusted to increase the accuracy of the membrane potential responses reconstruction in exchange for the increased computational cost. The current profile is reconstructed by directly evaluating IV(V,p) ,which uses the generated parameter set p~ over the range of voltage values. We show in Table 3 that reconstruction losses are essential for the accuracy of predicted parameters.

Generating training data

Request a detailed protocol

For a successful training of a neural network model, the training data must be of a sufficient number of samples, denoised, and diverse. To ensure these conditions are met with a simulated dataset, we employ a three-step process for generating training data (Figure 9).

Training data generation.

In Step 1, each parameter is initially sampled from biologically plausible ranges using both skewed Gaussian (channel conductance) and uniform sampling. A parameter set consists of 175 parameters spanning 16 known ion channels in C. elegans and similar organisms. In Steps 2 and 3, steady-state currents and membrane potential responses are evaluated for each parameter set to ensure they satisfy the predefined constraints such as bifurcation structure represented by dI/dV bounds and minimum-maximum membrane potential across current-clamp protocols. Only parameter sets that meet both constraints are included in the training set.

Step 1

Request a detailed protocol

Randomly generate parameter sets by sampling each parameter within a predefined distribution. This distribution is the skewed normal distribution for channel conductance parameters and uniform distributions for other parameters. The range is determined according to the biologically feasible ranges reported in the literature (see table predicted parameters in Supplementary file 1 for an explicit range used for each parameter) (Liu et al., 2018; Naudin et al., 2021; Izhikevich, 2007). In particular, search ranges for channel parameters are set to baseline ±50% where baseline parameters are default parameters given by Nicoletti et al., 2024.

Step 2

Request a detailed protocol

Simulate steady-state current traces for each sampled parameter set followed by imposing bifurcation structure constraints on each of them. This is done by calculating the first derivative of the currents dI/dV with respect to voltage states and ensuring they are within the 98% confidence intervals of experimentally obtained dI/dV bounds. Specifically, we evaluate each parameter set with respect to the three neuron types that are found within C. elegans non-spiking neurons – Type 1: transient outward rectifier (RIM, DVC, HSN); type 2: outward rectifier type (AIY, URX, RIS); and type 3: bistable type (AFD, AWB, AWC). During data generation, approximately the same number of neurons are generated for each type to ensure balance between all neuron types. The step can be further extended with new neuron types by incorporating additional dI/dV bounds.

Step 3

Request a detailed protocol

Impose (minimum, maximum) constraints on the membrane potential response across the current-clamp protocol. These values are set to (–100 mV, 150 mV) respectively for (–15 pA, 35 pA) current-clamp range. Parameter sets that do not satisfy the steady-state currents (Step 2) and membrane potential responses constraints are then removed from the training set. These constraints serve two purposes: (i) remove parameter sets that result in non-realistic membrane potential responses/steady-state current profiles from the training set and (ii) serve as an initial data augmentation process for EP-GAN training. The constraints can also be extended or adjusted if deemed necessary for the improvement of the training process. Once constraints are applied, Gaussian noise is added to the membrane potential responses training data to mimic the measurement noises in experimental membrane potential responses recording data.

Ionic currents modeling

The general equation describing the membrane potential dynamics of a single-compartment neuron is

(7) $C_m \frac{dV}{dt} = I_{\mathrm{ion}} + I_{\mathrm{Ext}}$

where $I_{\mathrm{ion}}$ and $I_{\mathrm{Ext}}$ represent the ionic and external currents applied to the neuron, respectively. The HH-model we consider has a total of 16 ionic current terms: 11 voltage/calcium-gated potassium currents ($I_{K^+}$), three voltage-gated calcium currents ($I_{Ca^{2+}}$), a leakage current ($I_{\mathrm{Leak}}$), and a sodium leakage current ($I_{Na^+}$) (Table 5). Consequently, the ionic current term $I_{\mathrm{ion}}$ of the considered HH-model can be written as

(8) $I_{\mathrm{ion}} = I_{K^+} + I_{Ca^{2+}} + I_{\mathrm{Leak}} + I_{Na^+}$
Table 5
List of ion channels included in the estimated HH-models.

Ion selectivity for each channel and the number of ElectroPhysiomeGAN (EP-GAN) trained parameters (not including reversal potentials) for both small and large HH-models are listed.

Ion channel | Ion selectivity | # parameters (small HH-model) | # parameters (large HH-model)
SHL1 | K+ | 4 | 22
SHK1 | K+ | 3 | 14
EGL2 | K+ | 2 | 8
IRK1/3 | K+ | 2 | 10
UNC103 | K+ | 3 | 15
KQT1 | K+ | 3 | 17
EXP2 | K+ | 3 | 15
SLO1 | K+ | 2 | 2
SLO1-CaV | K+ | 3 | 3
SLO2 | K+ | 2 | 2
SLO2-CaV | K+ | 3 | 3
EGL19 | Ca2+ | 3 | 23
UNC2 | Ca2+ | 3 | 18
CCA1 | Ca2+ | 3 | 15
Leak | Leak | 1 | 1
NCA | Na+ | 1 | 1

Each ionic current $I^i_{\mathrm{ion}}$ is modeled according to the HH-type formulation as follows:

(9) $I^i_{\mathrm{ion}} = g_i\, m_i^p\, h_i^p\, (V - E_{\mathrm{rev}})$

where $g_i$ is the maximum conductance and $E_{\mathrm{rev}}$ is the reversal potential of the channel. $m_i^p$ and $h_i^p$ represent the activation and inactivation variables of the channel, which obey the following equations:

(10) $\frac{dm_i}{dt} = \frac{m_{i,\infty} - m_i}{\tau_{i,m}}$
(11) $\frac{dh_i}{dt} = \frac{h_{i,\infty} - h_i}{\tau_{i,h}}$

where $m_{i,\infty}$ and $h_{i,\infty}$ represent the steady-state activation and inactivation values, and $\tau_{i,m}$ and $\tau_{i,h}$ represent the activation and inactivation time constants. Note that each ionic channel may have a different mathematical description and parameters for the variables $m_{i,\infty}$, $h_{i,\infty}$, $\tau_{i,m}$, and $\tau_{i,h}$ according to its dynamics and dependencies. While the majority of these variables depend on membrane potential, some depend on intracellular calcium (e.g., SLO1/2) and some are constant (e.g., $\tau_{i,h}$ of SHK1).
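To make the formulation concrete, here is a hypothetical single-channel evaluation in Python; the Boltzmann form of `m_inf` and every number below are assumptions standing in for the channel-specific expressions of Nicoletti et al., 2024:

```python
import numpy as np

def gating_step(x, x_inf, tau, dt):
    """One forward-Euler update of an activation/inactivation variable (Eqs. 10-11)."""
    return x + dt * (x_inf - x) / tau

def channel_current(g, m, h, p, V, E_rev):
    """HH-type ionic current (Eq. 9): I = g * m^p * h^p * (V - E_rev)."""
    return g * m**p * h**p * (V - E_rev)

# One illustrative step for a single voltage-gated channel.
m_inf = lambda V: 1.0 / (1.0 + np.exp(-(V + 30.0) / 10.0))  # assumed Boltzmann form
m = gating_step(x=0.1, x_inf=m_inf(-20.0), tau=5.0, dt=0.01)
I = channel_current(g=1.8, m=m, h=1.0, p=1, V=-20.0, E_rev=-80.0)
```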

Leakage and sodium leakage currents are modeled with the following equations:

(12) $I_{\mathrm{Leak}} = g_{\mathrm{Leak}}(V - E_{\mathrm{Leak}})$
(13) $I_{Na^+} = g_{\mathrm{NCA}}(V - E_{\mathrm{NCA}})$

where $g_{\mathrm{Leak}}$ and $g_{\mathrm{NCA}}$ represent the conductance values of the leakage and sodium leakage currents, respectively. For the exact mathematical equations describing each individual channel, refer to the supplementary materials of Nicoletti et al., 2024.

Experimental protocol

C. elegans culture and strains

All animals used in this study were maintained at room temperature (22°C–23°C) on nematode growth medium (NGM) plates seeded with Escherichia coli OP50 bacteria as a food source (Brenner, 1974). The strains used in this study were CX7893 kyIs405 (AWB), CX3695 kyIs140 (AWC), ZG611 iaIs19 (URX), EG1285 lin-15B(n765);oxIs12 (RIS), UL2650 leEx2650 (DVC), and CX4857 kyIs179 (HSN).

Electrophysiology

Electrophysiological recordings were performed on young adult hermaphrodites (∼3 days old) at room temperature as previously described (Liu et al., 2018). The gluing and dissection were performed under an Olympus SZX16 stereomicroscope equipped with a 1× Plan Apochromat objective and widefield 10× eyepieces. Briefly, an adult animal was immobilized on a Sylgard-coated (Sylgard 184, Dow Corning) glass coverslip in a small drop of DPBS (D8537; Sigma) by applying a cyanoacrylate adhesive (Vetbond tissue adhesive; 3M) along one side of the body. A puncture in the cuticle away from the incision site was made to relieve hydrostatic pressure. A small longitudinal incision was then made using a diamond dissecting blade (type M-DL 72029 L; EMS) along the glue line adjacent to the neuron of interest. The cuticle flap was folded back and glued to the coverslip with GLUture Topical Adhesive (Abbott Laboratories), exposing the neuron to be recorded. The coverslip with the dissected preparation was then placed into a custom-made open recording chamber (∼1.5 ml volume) and treated with 1 mg/ml collagenase (type IV; Sigma) for ∼10 s by hand pipetting. The recording chamber was subsequently perfused with ∼10 ml of the standard extracellular solution using a custom-made gravity-feed perfusion system.

All electrophysiological recordings were performed with the bath at room temperature under an upright microscope (Axio Examiner; Carl Zeiss, Inc) equipped with a 40× water immersion lens and 16× eyepieces. Neurons of interest were identified by fluorescent markers and their anatomical positions. Preparations were then switched to the differential interference contrast (DIC) setting for patch-clamp recording. Electrodes with resistance ($R_E$) of 15–25 MΩ were made from borosilicate glass pipettes (BF100-58-10; Sutter Instruments) using a laser pipette puller (P-2000; Sutter Instruments) and fire-polished with a microforge (MF-830; Narishige). We used a motorized micromanipulator (PatchStar Micromanipulator; Scientifica) to control the electrodes, which were back-filled with standard intracellular solution. The standard pipette solution was (all concentrations in mM) [K-gluconate 115; KCl 15; KOH 10; MgCl2 5; CaCl2 0.1; Na2ATP 5; NaGTP 0.5; Na-cGMP 0.5; cAMP 0.5; BAPTA 1; HEPES 10; sucrose 50], with pH adjusted with KOH to 7.2, osmolarity 320–330 mOsm. The standard extracellular solution was [NaCl 140; NaOH 5; KCl 5; CaCl2 2; MgCl2 5; sucrose 15; HEPES 15; dextrose 25], with pH adjusted with NaOH to 7.3, osmolarity 330–340 mOsm. Liquid junction potentials were calculated and corrected before recording. Whole-cell current-clamp and voltage-clamp experiments were conducted on an EPC-10 amplifier (EPC-10 USB; Heka) using PatchMaster software (Heka). Two-component capacitive compensation was optimized at rest, and series resistance was compensated to 50%. Analog data were filtered at 2 kHz and digitized at 10 kHz. Current-injection and voltage-clamp steps were applied through the recording electrode.

Data and code availability

The membrane potential and steady-state current recordings for the nine experimental neurons can be found in Mendeley Data: https://doi.org/10.17632/g5kcjp7jsk.1.

Generated HH-model parameters of all nine experimental neurons considered in the study (small and large HH-models, using EP-GAN (32k)), as well as the minimum and maximum values for each parameter used during inference, are deposited in Supplementary file 1.

Pre-trained PyTorch EP-GAN (64k) models for both small and large HH-models, and a Jupyter Notebook testing the models on the nine experimental C. elegans neurons studied in the paper, are available at the GitHub repository (https://github.com/shlizee/epgan; copy archived at Shlizerman, 2025).

Appendix 1

Error calculation

The overall error between predicted membrane potential traces and their ground truth counterparts is computed using the RMSE formula:

$$V_{\mathrm{error}} = \sqrt{\frac{\sum_{t=1}^{T}\left(\sum_{k=1}^{N_{\mathrm{stim}}}\left(V^{\mathrm{pred}}_{k,t} - V^{\mathrm{gt}}_{k,t}\right)^2 / N_{\mathrm{stim}}\right)}{N_T}}$$

$V^{\mathrm{pred}}_{k,t}$ and $V^{\mathrm{gt}}_{k,t}$ denote the membrane potential at trace $k$ and time $t$, reconstructed from the predicted HH-parameters and from the ground truth variables, respectively. $N_{\mathrm{stim}}$ is the total number of stimulus values in the current-clamp protocol, and $N_T$ is the total number of time points at which the voltage traces are defined.

The error is computed for each of three intervals: pre-activation [4–5 s), mid-activation [5–10 s], and post-activation (10–11 s], which are then averaged to compute the overall error.
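A NumPy sketch of this computation; the 10 kHz sampling rate matches the recording protocol but is an assumption for the simulated traces:

```python
import numpy as np

def interval_rmse(V_pred, V_gt, t_slice):
    """RMSE over one interval; V_pred, V_gt have shape (N_stim, N_T)."""
    diff = V_pred[:, t_slice] - V_gt[:, t_slice]
    return np.sqrt(np.mean(diff ** 2))  # mean over stimuli and time points

def overall_error(V_pred, V_gt, fs=10_000):
    """Average of pre-, mid-, and post-activation RMSEs (fs in samples/s)."""
    intervals = [slice(4 * fs, 5 * fs),            # pre-activation  [4-5 s)
                 slice(5 * fs, 10 * fs + 1),       # mid-activation  [5-10 s]
                 slice(10 * fs + 1, 11 * fs + 1)]  # post-activation (10-11 s]
    return np.mean([interval_rmse(V_pred, V_gt, s) for s in intervals])
```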

Appendix 1—table 1
Root mean square errors (RMSE) for membrane potential responses (top row) and steady-state currents (bottom row) for the test neurons (n=200) in the prediction on simulated neurons scenario.

Membrane potential errors are ordered as pre-activation [4–5 s), mid-activation [5–10 s], and post-activation (10–11 s].

Method | Simulation # | Test neurons
EP-GAN | 32k | 1.33 mV | 3.63 mV | 2.15 mV
       |     | 8.41 pA

Numerical simulation of HH-model

For efficient simulation of the HH-model membrane potential dynamics, we use the Julia ODE solver KenCarp47 algorithm, supplemented by high-performance computing packages such as NumPy and SciPy (Rackauckas and Nie, 2017; Harris et al., 2020; Virtanen et al., 2020). Both relative and absolute tolerances of the ODE solver were set to 1e-8 to ensure the accuracy of the simulations.
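For readers without a Julia setup, a rough Python analogue using SciPy's implicit Radau solver (not the KenCarp47 pipeline used here) with the same tolerances might look like the following; the right-hand side is a placeholder leak model:

```python
import numpy as np
from scipy.integrate import solve_ivp

def hh_rhs(t, y, p):
    """Placeholder right-hand side: a leak-only membrane. The real model
    integrates V together with the gating variables of all 16 channels."""
    V = y[0]
    return [-p["g_leak"] * (V - p["E_leak"]) / p["C_m"]]

p = {"g_leak": 0.3, "E_leak": -60.0, "C_m": 1.0}
sol = solve_ivp(hh_rhs, t_span=(0.0, 11.0), y0=[-60.0], args=(p,),
                method="Radau", rtol=1e-8, atol=1e-8)  # the paper's tolerances
```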

Appendix 1—table 2
Small HH-model root mean square errors (RMSE) (sample size = 32k) for membrane potential responses and steady-state currents in the prediction on small HH-model scenario.

For each neuron, the three membrane potential columns give the RMSE for the pre-activation [4–5 s), mid-activation [5–10 s], and post-activation (10–11 s] intervals; the currents column gives the RMSE for steady-state currents across 18 voltage points. (n/a: value missing in source.)

Method | Neuron | Pre (mV) | Mid (mV) | Post (mV) | Currents (pA)
GDE3 | RIM | 5.63 | 53.09 | 3.92 | 5.78
GDE3 | DVC | 47.39 | 79.42 | 47.34 | 5.78
GDE3 | HSN | 2.8 | 14.76 | 8.98 | 19.84
GDE3 | AIY | 9.75 | 17.41 | 11.75 | 4.82
GDE3 | URX | 14.89 | 25.19 | 10.61 | 2.42
GDE3 | RIS | 28.39 | 19.52 | 28.18 | 5.94
GDE3 | AFD | 0.66 | 17.37 | 0.93 | 19.13
GDE3 | AWB | 36.72 | 26.7 | 34.21 | 7.22
GDE3 | AWC | 0.56 | 12.42 | 1.2 | 11.92
NSDE | RIM | 33.29 | 51.33 | 31.54 | 3.14
NSDE | DVC | 0.86 | 19.88 | 4.29 | 21.47
NSDE | HSN | 9.76 | 41.66 | 9.22 | 9.7
NSDE | AIY | 0.46 | 11.84 | 4.69 | 13.63
NSDE | URX | 1.35 | 15.19 | 4.61 | 14.2
NSDE | RIS | 5.86 | 21.26 | 5.96 | 5.68
NSDE | AFD | 1.74 | 13.16 | 1.49 | 24.73
NSDE | AWB | 0.5 | 15.88 | 2.39 | 9.55
NSDE | AWC | 0.74 | 17.47 | n/a | 14.49
DEMO | RIM | 32.73 | 43.2 | 31.84 | 6.52
DEMO | DVC | 7.92 | 27.06 | 7.24 | 13.77
DEMO | HSN | 1.18 | 14.5 | 1.59 | 14.63
DEMO | AIY | 7.26 | 26.04 | 6.27 | 5.16
DEMO | URX | 1.7 | 22.14 | 3.09 | 5.39
DEMO | RIS | 1.29 | 15.55 | 3.26 | 9.68
DEMO | AFD | 0.85 | 13.06 | 1.08 | 23.84
DEMO | AWB | 0.05 | 14.41 | 0.21 | 11.47
DEMO | AWC | 0.8 | 9.44 | 1.23 | 18.7
NSGA2 | RIM | 3.1 | 26.28 | 7.88 | 3.97
NSGA2 | DVC | 16.15 | 21.63 | 8.39 | 7.39
NSGA2 | HSN | 0.48 | 6.67 | 0.64 | 14.29
NSGA2 | AIY | 3.58 | 22.2 | 3.71 | 4.59
NSGA2 | URX | 2.36 | 10.37 | 5.46 | 16.55
NSGA2 | RIS | 2.88 | 13.94 | 2.36 | 5.01
NSGA2 | AFD | 2.24 | 17.99 | 2.37 | 14.38
NSGA2 | AWB | 0.52 | 19.49 | 8.32 | 11.86
NSGA2 | AWC | 1.9 | 14.82 | 2.98 | 9.98
EP-GAN | RIM | 0.33 | 8.23 | 1.52 | 4.03
EP-GAN | DVC | 0.19 | 5.74 | 1.22 | 13.8
EP-GAN | HSN | 0.18 | 4.02 | 0.49 | 10.29
EP-GAN | AIY | 0.66 | 6.41 | 0.55 | 10.7
EP-GAN | URX | 1.66 | 5.07 | 2.82 | 16.84
EP-GAN | RIS | 0.74 | 3.77 | 0.59 | 13.8
EP-GAN | AFD | 3.15 | 8.66 | 2.87 | 47.97
EP-GAN | AWB | 0.04 | 7.23 | 0.33 | 9.64
EP-GAN | AWC | 0.66 | 4.71 | 0.72 | 28.86
Appendix 1—table 3
Large HH-model root mean square errors (RMSE) (sample size = 32k) for membrane potential responses and steady-state currents in the prediction on large HH-model scenario.

For each neuron, the three membrane potential columns give the RMSE for the pre-activation [4–5 s), mid-activation [5–10 s], and post-activation (10–11 s] intervals; the currents column gives the RMSE for steady-state currents across 18 voltage points.

Method | Neuron | Pre (mV) | Mid (mV) | Post (mV) | Currents (pA)
GDE3 | RIM | 2.31 | 30.01 | 9.71 | 3.15
GDE3 | DVC | 3.08 | 27.0 | 7.54 | 23.53
GDE3 | HSN | 3.4 | 31.0 | 4.09 | 12.75
GDE3 | AIY | 16.88 | 25.85 | 15.51 | 10.24
GDE3 | URX | 3.79 | 20.27 | 2.97 | 6.32
GDE3 | RIS | 6.54 | 32.36 | 6.61 | 5.99
GDE3 | AFD | 33.71 | 20.5 | 33.86 | 7.85
GDE3 | AWB | 3.8 | 24.56 | 4.14 | 18.08
GDE3 | AWC | 8.05 | 11.58 | 8.04 | 9.55
NSDE | RIM | 27.97 | 40.68 | 25.98 | 7.75
NSDE | DVC | 13.33 | 30.91 | 12.46 | 8.03
NSDE | HSN | 5.03 | 15.31 | 5.73 | 9.58
NSDE | AIY | 6.81 | 22.46 | 7.05 | 6.82
NSDE | URX | 0.19 | 21.14 | 4.45 | 5.06
NSDE | RIS | 24.54 | 22.72 | 24.22 | 4.17
NSDE | AFD | 24.96 | 2.09 | 18.19 | 14.73
NSDE | AWB | 31.41 | 21.6 | 28.39 | 7.24
NSDE | AWC | 12.25 | 22.77 | 13.38 | 6.18
DEMO | RIM | 8.91 | 63.94 | 11.11 | 7.57
DEMO | DVC | 16.01 | 21.32 | 15.96 | 11.89
DEMO | HSN | 11.8 | 26.0 | 12.11 | 21.4
DEMO | AIY | 1.46 | 25.9 | 3.21 | 7.24
DEMO | URX | 23.15 | 18.16 | 27.24 | 11.14
DEMO | RIS | 19.32 | 15.3 | 19.8 | 6.81
DEMO | AFD | 2.29 | 11.92 | 3.79 | 18.17
DEMO | AWB | 7.0 | 24.03 | 8.3 | 11.9
DEMO | AWC | 1.03 | 10.97 | 3.22 | 30.85
NSGA2 | RIM | 3.58 | 32.89 | 3.63 | 6.57
NSGA2 | DVC | 7.55 | 33.6 | 7.06 | 7.63
NSGA2 | HSN | 31.5 | 24.36 | 31.81 | 5.75
NSGA2 | AIY | 0.38 | 14.99 | 18.39 | 8.68
NSGA2 | URX | 3.61 | 20.4 | 1.87 | 5.13
NSGA2 | RIS | 11.87 | 17.09 | 11.45 | 2.74
NSGA2 | AFD | 0.8 | 22.68 | 1.14 | 24.07
NSGA2 | AWB | 1.14 | 30.38 | 2.11 | 10.9
NSGA2 | AWC | 5.12 | 32.47 | 3.17 | 7.55
EP-GAN | RIM | 0.24 | 7.78 | 1.65 | 3.21
EP-GAN | DVC | 0.30 | 5.90 | 1.39 | 12.35
EP-GAN | HSN | 0.46 | 6.62 | 2.02 | 17.44
EP-GAN | AIY | 0.43 | 6.57 | 1.12 | 10.52
EP-GAN | URX | 0.74 | 4.58 | 3.78 | 36.64
EP-GAN | RIS | 0.27 | 3.42 | 1.78 | 21.86
EP-GAN | AFD | 1.68 | 9.99 | 1.86 | 43.20
EP-GAN | AWB | 0.37 | 7.24 | 0.31 | 9.27
EP-GAN | AWC | 0.57 | 4.93 | 0.84 | 8.35

Data availability

Membrane potential dynamics and steady-state current responses of all nine experimental neurons considered in the study are deposited in Mendeley Data. Generated HH-model parameters of all nine experimental neurons (small and large HH-models), minimum and maximum values for each parameter used during inference, and parameter values predicted by EP-GAN are deposited in Supplementary file 1. Pre-trained EP-GAN models (64k sample size) for estimating small and large HH-models are deposited in GitHub.

The following data sets were generated
    1. Peng M
    2. Chen S
    3. Liu Q
    (2025) Mendeley Data
    ElectroPhysiomeGAN: Generation of Biophysical Neuron Model Parameters from Recorded Electrophysiological Responses.
    https://doi.org/10.17632/g5kcjp7jsk.1

References

  1. Conference
    1. Angira R
    2. Babu B
    (2005)
    Non-dominated sorting differential evolution (NSDE): an extension of differential evolution for multi-objective optimization
    IICAI. pp. 1428–1443.
  2. Conference
    1. Arjovsky M
    2. Chintala S
    3. Bottou L
    (2017)
    Wasserstein generative adversarial networks
    International conference on machine learning. pp. 214–223.
  3. Conference
    1. Buhry L
    2. Giremus A
    3. Grivel E
    4. Saïghi S
    5. Renaud S
    (2009)
    New variants of the differential evolution algorithm: application for neuroscientists
    17th european signal processing conference. pp. 2352–2356.
  4. Conference
    1. Chang H
    2. Zhang H
    3. Jiang L
    4. Liu C
    5. Freeman WT
    (2022) MaskGIT: Masked Generative Image Transformer
    IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 11315–11325.
    https://doi.org/10.1109/CVPR52688.2022.01103
  5. Conference
    1. Deb K
    2. Agrawal S
    3. Pratap A
    4. Meyarivan T
    (2000)
    A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II
    Parallel problem solving from nature PPSN VI: 6th international conference paris, france, september 18-20, 2000 proceedings 6. pp. 849–858.
  6. Conference
    1. Estienne L
    (2021) Towards an Hybrid Hodgkin-Huxley Action Potential Generation Model
    2021 XIX Workshop on Information Processing and Control (RPIC). pp. 1–6.
    https://doi.org/10.1109/RPIC53795.2021.9648523
  7. Conference
    1. Gulrajani I
    2. Ahmed F
    3. Arjovsky M
    4. Dumoulin V
    5. Courville AC
    (2017)
    Improved training of wasserstein gans
    Advances in neural information processing systems.
  8. Conference
    1. Laredo JJL
    2. Naudin L
    3. Corson N
    4. Fernandes CM
    (2022)
    A methodology for determining ion channels from membrane potential neuronal recordings
    Applications of evolutionary computation: 25th european conference, EvoApplications 2022, held as part of EvoStar 2022, madrid, spain, april 20-22, 2022, proceedings. pp. 15–29.
  9. Book
    1. Roberts A
    2. Bush BM
    (1981)
    Neurones without Impulses: Their Significance for Vertebrate and Invertebrate Nervous Systems
    Cambridge University Press.
  10. Conference
    1. Robič T
    2. Filipič B
    (2005)
    Differential evolution for multiobjective optimization
    Evolutionary multi-criterion optimization: third international conference, EMO 2005, guanajuato, mexico, march 9-11, 2005. Proceedings 3. pp. 520–533.
  11. Journal
    1. White JG
    2. Southgate E
    3. Thomson JN
    4. Brenner S
    (1986) The structure of the nervous system of the nematode Caenorhabditis elegans
    Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 314:1–340.
    https://doi.org/10.1098/rstb.1986.0056
  12. Book
    1. Zou J
    2. Han Y
    3. So SS
    (2009) Artificial Neural Networks
    Artificial Neural Networks: Methods and Applications 1:14–22.
    https://doi.org/10.1007/978-1-60327-101-1_2

Article and author information

Author details

  1. Jimin Kim

    Department of Electrical and Computer Engineering, University of Washington, Seattle, United States
    Contribution
    Conceptualization, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing – review and editing
    Competing interests
    No competing interests declared
ORCID: 0000-0002-5597-5142
  2. Minxian Peng

    Department of Neuroscience, City University of Hong Kong, Hong Kong, Hong Kong
    Contribution
    Data curation
    Competing interests
    No competing interests declared
ORCID: 0000-0003-0419-3543
  3. Shuqi Chen

    Department of Neuroscience, City University of Hong Kong, Hong Kong, Hong Kong
    Contribution
    Data curation
    Competing interests
    No competing interests declared
ORCID: 0009-0003-9744-6964
  4. Qiang Liu

    Department of Neuroscience, City University of Hong Kong, Hong Kong, Hong Kong
    Contribution
    Data curation, Supervision, Funding acquisition, Writing – review and editing
    For correspondence
    qiangliuemail@gmail.com
    Competing interests
    No competing interests declared
ORCID: 0000-0002-9232-1420
  5. Eli Shlizerman

    1. Department of Electrical and Computer Engineering, University of Washington, Seattle, United States
    2. Department of Applied Mathematics, University of Washington, Seattle, United States
    Contribution
    Conceptualization, Resources, Supervision, Funding acquisition, Methodology, Project administration, Writing – review and editing
    For correspondence
    shlizee@uw.edu
    Competing interests
    No competing interests declared
ORCID: 0000-0002-3136-4531

Funding

National Science Foundation (CRCNS IIS-2113003)

  • Jimin Kim
  • Eli Shlizerman

National Science Foundation (CRCNS IIS-2113120)

  • Qiang Liu

The Kavli Foundation

  • Qiang Liu

City University of Hong Kong

  • Qiang Liu

CityU New Research Initiatives (9610587)

  • Qiang Liu

Hong Kong Research Grants Council RGC (CityU 21103522)

  • Qiang Liu

Hong Kong Research Grants Council RGC (CityU 11104123)

  • Qiang Liu

Hong Kong Research Grants Council RGC (CityU 11100524)

  • Qiang Liu

NIH Office of Research Infrastructure Programs (P40 OD010440)

  • Qiang Liu

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

This work was supported in part by National Science Foundation grant CRCNS IIS-2113003 (JK, ES), Washington Research Fund (ES), CRCNS IIS-2113120 (QL), Kavli NSI Pilot Grant (QL), CityU New Research Initiatives/Infrastructure Support from Central APRC 9610587 (QL), the General Research Fund (GRF) and Early Career Scheme (ECS) Award from Hong Kong Research Grants Council RGC (CityU 21103522, CityU 11104123, CityU 11100524) (QL), and Chan Zuckerberg Initiative (to Cori Bargmann). The authors also acknowledge the partial support by the Departments of Electrical and Computer Engineering (JK, ES), Applied Mathematics (ES), the Center of Computational Neuroscience (ES), and the eScience Institute (ES, JK) at the University of Washington. In addition, we thank Cori Bargmann and Ian Hope for C. elegans strains. Some strains were provided by the CGC, which is funded by the NIH Office of Research Infrastructure Programs (P40 OD010440). We thank Saba Heravi for discussions regarding parameter inference for electrophysiological recordings.


Cite all versions

You can cite all versions using the DOI https://doi.org/10.7554/eLife.95607. This DOI represents all versions, and will always resolve to the latest one.

Copyright

© 2024, Kim et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

  1. Jimin Kim
  2. Minxian Peng
  3. Shuqi Chen
  4. Qiang Liu
  5. Eli Shlizerman
(2025)
Generation of biophysical neuron model parameters from recorded electrophysiological responses
eLife 13:RP95607.
https://doi.org/10.7554/eLife.95607.4
