A differentiable Gillespie algorithm for simulating chemical kinetics, parameter estimation, and designing synthetic biological circuits

  1. Krishna Rijal  Is a corresponding author
  2. Pankaj Mehta
  1. Department of Physics, Boston University, United States

eLife Assessment

This important study introduces a fully differentiable variant of the Gillespie algorithm as an approximate stochastic simulation scheme for complex chemical reaction networks, allowing kinetic parameters to be inferred from empirical measurements of network outputs using gradient descent. The concept and algorithm design are convincing and innovative. While the proofs of concept are promising, some questions are left open about implications for more complex systems that cannot be addressed by existing methods. This work has the potential to be of significant interest to a broad audience of quantitative and synthetic biologists.

https://doi.org/10.7554/eLife.103877.3.sa0

Abstract

The Gillespie algorithm is commonly used to simulate and analyze complex chemical reaction networks. Here, we leverage recent breakthroughs in deep learning to develop a fully differentiable variant of the Gillespie algorithm. The differentiable Gillespie algorithm (DGA) approximates discontinuous operations in the exact Gillespie algorithm using smooth functions, allowing for the calculation of gradients using backpropagation. The DGA can be used to quickly and accurately learn kinetic parameters using gradient descent and design biochemical networks with desired properties. As an illustration, we apply the DGA to study stochastic models of gene promoters. We show that the DGA can be used to: (1) successfully learn kinetic parameters from experimental measurements of mRNA expression levels from two distinct Escherichia coli promoters and (2) design nonequilibrium promoter architectures with desired input–output relationships. These examples illustrate the utility of the DGA for analyzing stochastic chemical kinetics, including a wide variety of problems of interest to synthetic and systems biology.

Introduction

Randomness is a defining feature of our world. Stock market fluctuations, the movement of particles in fluids, and even the change of allele frequencies in organismal populations can all be described using the language of stochastic processes. For this reason, disciplines as diverse as physics, biology, ecology, evolution, finance, and engineering have all developed tools to mathematically model stochastic processes (Van Kampen, 1992; Gardiner, 2009; Rolski et al., 2009; Wong and Hajek, 2012). In the context of biology, an especially fruitful area of research has been the study of stochastic gene expression in single cells (McAdams and Arkin, 1997; Elowitz et al., 2002; Raj and van Oudenaarden, 2008; Sanchez and Golding, 2013). The small number of molecules involved in gene expression make stochasticity an inherent feature of protein production and numerous mathematical and computational techniques have been developed to model gene expression and relate mathematical models to experimental observations (Paulsson, 2005; Wilkinson, 2018).

One prominent computational algorithm for understanding stochasticity in gene expression is the Gillespie algorithm, with its Direct Stochastic Simulation Algorithm variant being the most commonly used method (Doob, 1945; Gillespie, 1977). The Gillespie algorithm is an extremely efficient computational technique used to simulate the time evolution of a system in which events occur randomly and discretely (Gillespie, 1977). Beyond gene expression, the Gillespie algorithm is widely employed across numerous disciplines to model stochastic systems characterized by discrete, randomly occurring events including epidemiology (Pineda-Krch, 2008), ecology (Parker and Kamenev, 2009; Dobramysl et al., 2018), neuroscience (Benayoun et al., 2010; Rijal et al., 2024), and chemical kinetics (Gillespie, 1976; Gillespie, 2007).

Here, we revisit the Gillespie algorithm in light of the recent progress in deep learning and differentiable programming by presenting a ‘fully differentiable’ variant of the Gillespie algorithm we dub the differentiable Gillespie algorithm (DGA). The DGA modifies the traditional Gillespie algorithm to take advantage of powerful automatic differentiation libraries, for example, PyTorch (Paszke et al., 2019), Jax (Bradbury et al., 2018), and Julia (Bezanson et al., 2017), and of gradient-based optimization. The DGA allows us to quickly fit kinetic parameters to data and design discrete stochastic systems with a desired behavior. Our work is similar in spirit to other recent work that seeks to harness the power of differentiable programming to accelerate scientific simulations (Liao et al., 2019; Schoenholz and Cubuk, 2020; Wei et al., 2019; Degrave et al., 2019; Arya et al., 2022; Arya et al., 2023; Bezgin et al., 2023). The DGA’s use of differentiable programming tools also complements more specialized numerical methods designed for performing parameter sensitivity analysis on Gillespie simulations such as finite-difference methods (Anderson, 2012; Srivastava et al., 2013; Thanh et al., 2018), the likelihood ratio method (Glynn, 1990; McGill et al., 2012; Núñez and Vlachos, 2015), and pathwise derivative methods (Sheppard et al., 2012).

One of the difficulties in formulating a differentiable version of the Gillespie algorithm is that the stochastic systems it treats are inherently discrete. For this reason, there is no obvious way to take derivatives with respect to kinetic parameters without making approximations. As shown in Figure 1, in the traditional Gillespie algorithm, both the selection of the index for the next reaction and the updates of chemical species are discontinuous functions of the kinetic parameters. To circumvent these difficulties, the DGA modifies the traditional Gillespie algorithm by approximating discrete operations with continuous, differentiable functions, smoothing out abrupt transitions to facilitate gradient computation via automatic differentiation (Figure 1). This significant modification preserves the core characteristics of the original algorithm while enabling integration with modern deep learning techniques.

Comparison between the exact Gillespie algorithm and the differentiable Gillespie algorithm (DGA) for simulating chemical kinetics.

(a) Example of kinetics with N=3 reactions with rates ri(i=1,2,3). (b) Illustration of the DGA’s approximations: replacing the non-differentiable Heaviside and Kronecker delta functions with smooth sigmoid and Gaussian functions, respectively. (c) Flow chart comparing exact and differentiable Gillespie simulations.

One natural setting for exploring the efficacy of the DGA is recent experimental and theoretical works exploring stochastic gene expression. Here, we focus on a set of beautiful experiments that explore the effect of promoter architecture on steady-state gene expression (Jones et al., 2014). An especially appealing aspect of Jones et al., 2014 is that the authors independently measured the kinetic parameters for these promoter architectures using orthogonal experiments. This allows us to directly compare the predictions of DGA to ground truth measurements of kinetic parameters. We then extend our considerations to more complex promoter architectures (Lammers et al., 2023) and illustrate how the DGA can be used to design circuits with a desired input–output relation.

Results

A differentiable approximation to the Gillespie algorithm

Before discussing the DGA, we start by briefly reviewing how the traditional Gillespie algorithm simulates discrete stochastic processes. For concreteness, in our exposition, we focus on the chemical system shown in Figure 1 consisting of three species, A, B, and C, whose abundances are described by a state vector x=(x1,x2,x3). These chemical species can undergo N=3 chemical reactions, characterized by rate constants ri(x), where i=1,…,N, and a stoichiometric matrix Siα whose ith row encodes how the abundance xα of species α changes due to reaction i. Note that in what follows, we will often suppress the dependence of the rates ri(x) on x and simply write ri.

In order to simulate such a system, it is helpful to discretize time into small intervals of size Δt≪1. The probability that a reaction i with rate ri occurs during such an interval is simply riΔt. By construction, we choose Δt to be small enough that riΔt≪1, so that the probability that a reaction occurs in any interval Δt is extremely small and well described by a Poisson process. This means that naively simulating such a process is extremely inefficient because, in most intervals, no reactions will occur.

Gillespie algorithm

The Gillespie algorithm circumvents this problem by: (1) exploiting the fact that the reactions are independent so that the rate at which any reaction occurs is also described by an independent Poisson process with rate R=Σiri and (2) noting that the waiting time distribution p(τ) of a Poisson process with rate R is the exponential distribution p(τ) = R exp(−Rτ). The basic steps of the Gillespie algorithm are illustrated in Figure 1.

The simulation begins with the initialization of time and state variables:

t=0,x=x0,

where t is the simulation time. One then samples the waiting time distribution p(τ) for a reaction to occur to determine when the next reaction occurs. This is done by drawing a random number u from a uniform distribution over [0,1] and updating

(1) $t \rightarrow t - R^{-1}\ln(u)$.

Note that this time update is a fully differentiable function of the rates ri.

In order to determine which of the reactions i occurs after a time τ, we note that the probability that reaction i occurs is simply given by qi=ri/R. Thus, we can simply draw another random number u and choose i equal to the smallest integer satisfying

(2) $\sum_{i'=1}^{i} r_{i'}/R > u.$

The species abundances x are then updated using the stoichiometric matrix

(3) $x_\alpha \rightarrow x_\alpha + S_{i\alpha}.$

Unlike the time update, both the choice of the reaction i and the abundance updates are not differentiable since the choice of the reaction i is a discontinuous function of the parameters ri.
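
For concreteness, the exact updates described above can be condensed into a few lines of Python. The sketch below is illustrative rather than the implementation used in this paper; the function and variable names are ours, and rates(x) and S must be supplied for the reaction network of interest.

import numpy as np

def exact_gillespie(x0, rates, S, t_max, rng=np.random.default_rng()):
    # One trajectory of the exact Gillespie algorithm.
    # rates(x) returns the propensity vector r_i(x); S[i, alpha] is the change
    # in species alpha caused by reaction i (the stoichiometric matrix).
    t, x = 0.0, np.array(x0, dtype=float)
    while t < t_max:
        r = rates(x)
        R = r.sum()
        if R == 0:
            break                                                # no further reactions possible
        t += -np.log(rng.random()) / R                           # waiting time update (Equation 1)
        i = np.searchsorted(np.cumsum(r) / R, rng.random())      # reaction choice (Equation 2)
        x += S[i]                                                # abundance update (Equation 3)
    return x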

Approximating updates in the Gillespie with differentiable functions

In order to make use of modern deep learning techniques and modern automatic differentiation packages, it is necessary to modify the Gillespie algorithm in such a way as to make the choice of reaction index (Equation 2) and abundance updates (Equation 3) differentiable functions of the kinetic parameters. To do so, we rewrite Equation 2 using a sum of Heaviside step functions Θ(y) (recall Θ(y)=0 if y<0 and Θ(y)=1 if y>0):

(4) $i = 1 + \sum_{i'=1}^{N-1} \Theta\left(u - \sum_{j=1}^{i'} r_j/R\right)$

This formulation of index selection makes clear the source of non-differentiability. The derivative of i with respect to the ri does not exist at the transition points where the Heaviside function jumps (see Figure 1b).

This suggests a natural modification of the Gillespie algorithm to make it differentiable – replacing the Heaviside function Θ(y) by a sigmoid function of the form

(5) $\sigma(y) = \frac{1}{1 + e^{-y/a}}$

where we have introduced a ‘hyper-parameter’ a that controls the steepness of the sigmoid and plays an analogous role to temperature in a Fermi function in statistical mechanics. A larger value of 1/a results in a steeper slope for the sigmoid function, thereby more closely approximating the true Heaviside function, which is recovered in the limit a→0 (see Figure 1b). With this replacement, the index selection equation becomes

(6) $i = 1 + \sum_{i'=1}^{N-1} \sigma\left(u - \sum_{j=1}^{i'} r_j/R\right)$

Note that in making this approximation, our index is no longer an integer, but instead can take on all real values between 1 and N. However, by making a sufficiently small, Equation 6 still serves as a good approximation to the discrete jumps in Equation 4. In general, a is a hyperparameter that is chosen to be as small as possible while still ensuring that the gradient of i with respect to the kinetic parameters ri can be calculated numerically with high accuracy. For a detailed discussion, please see Appendix 1—figure 1 and Appendix 1.

Since the index i is no longer an integer but a real number, we must also modify the abundance update in Equation 3 to make it fully differentiable. To do this, we start by rewriting Equation 3 using the Kronecker delta δij (where δij=1 if i=j and δij=0 if i≠j) as

(7) $x_\alpha \rightarrow x_\alpha + \sum_{i'=1}^{N} \delta_{ii'} S_{i'\alpha}$

Since i is no longer an integer, we can approximate the Kronecker delta δii′ by a Gaussian function, to arrive at the approximate update equation

(8) $x_\alpha \rightarrow x_\alpha + \sum_{i'=1}^{N} e^{-\frac{1}{b}(i - i')^2} S_{i'\alpha}.$

The hyperparameter b is generally chosen to be as small as possible while still ensuring numerical stability of gradients (Appendix 1—figure 1). Note that by using an abundance update of the form Equation 8, the species abundances x are now real numbers. This is in stark contrast with the exact Gillespie algorithm where the abundance update (Equation 7) ensures that the xα are all integers.
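
As a concrete illustration of Equations 6 and 8, the following PyTorch sketch shows one way to implement the two smoothed operations; it is a minimal example written for this text (the function names are ours), not the authors' implementation.

import torch

def soft_reaction_index(props, u, a):
    # Differentiable analogue of Equation 6: a 'soft' reaction index in (1, N).
    cum = torch.cumsum(props, dim=0) / props.sum()        # partial sums of r_j / R
    return 1.0 + torch.sigmoid((u - cum[:-1]) / a).sum()  # smoothed Heaviside terms

def soft_abundance_update(x, soft_idx, S, b):
    # Differentiable analogue of Equation 8: Gaussian-weighted stoichiometric update.
    idx = torch.arange(1, S.shape[0] + 1, dtype=x.dtype)  # reaction indices 1..N
    weights = torch.exp(-(soft_idx - idx) ** 2 / b)       # smoothed Kronecker delta
    return x + weights @ S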

Combining the DGA with gradient-based optimization

The goal of making Gillespie simulations differentiable is to enable the computation of the gradient of a loss function, L(θ), with respect to the kinetics parameters θ. A loss function quantifies the difference between simulated and desired values for quantities of interest. For example, when employing the DGA in the context of fitting noisy gene expression models, a natural choice for L(θ) is the difference between the simulated and experimentally measured moments of mRNA/protein expression (or alternatively, the Kullback–Leibler divergence between the experimental and simulated mRNA/protein expression distributions if full distributions can be measured). When using the DGA to design gene circuits, the loss function can be any function that characterizes the difference between the simulated and desired values of the input–output relation.

The goal of the optimization using the DGA is to find parameters θ that minimize the loss. The basic workflow of a DGA-based optimization is shown in Figure 2. One starts with an initial guess for the parameters θ0. One then uses the DGA to simulate the system and calculate the gradient of the loss function, ∇θL(θ). One then updates the parameters, moving in the direction of the gradient using gradient descent or more advanced methods such as ADAM (Kingma and Ba, 2014; Mehta et al., 2019), which uses adaptive estimates of the first and second moments of the gradients to speed up convergence to a local minimum of the loss function.
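
Schematically, such an optimization loop takes only a few lines of PyTorch. The sketch below is illustrative: simulate_and_loss is a hypothetical stand-in for a DGA forward pass that returns a scalar loss built from differentiable simulation outputs, and the initial parameter values are arbitrary.

import torch

# Optimize log-parameters so that the rates stay positive during gradient descent.
log_theta = torch.log(torch.tensor([0.5, 10.0, 1.0]))   # e.g. (kon, r, gamma); values are illustrative
log_theta.requires_grad_(True)
optimizer = torch.optim.Adam([log_theta], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    loss = simulate_and_loss(torch.exp(log_theta))  # hypothetical differentiable DGA forward pass
    loss.backward()                                 # gradients flow through the smoothed simulation
    optimizer.step()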

Flow chart of the parameter optimization process using the differentiable Gillespie algorithm (DGA).

The process begins by initializing the parameters θ=θ0. Simulations are then run using the DGA to obtain statistics {Si(θ)} like moments. These statistics are used to compute the loss L({Si}), and the gradient of the loss, ∇L, is obtained. Finally, parameters are updated using the ADAM optimizer, and the process iterates to minimize the loss.

The price of differentiability

A summary of the DGA is shown in Figure 1. Unsurprisingly, differentiability comes at a price. The foremost of these is that unlike the Gillespie algorithm, the DGA is no longer exact. The DGA replaces the exact discrete stochastic system by an approximate differentiable stochastic system. This is done by allowing both the reaction index and the species abundances to be continuous numbers. Though in theory, the errors introduced by these approximations can be made arbitrarily small by choosing the hyper-parameters a and b small enough (see Figure 1), in practice, gradients become numerically unstable when a and b are sufficiently small (see Appendix 1 and Appendix 1—figure 1).

In what follows, we focus almost exclusively on ‘bulk’, steady-state properties of the stochastic system of interest. We find the DGA works well in this setting. However, we note that the effect of the approximations introduced by the DGA may be more pronounced in more complex settings such as the calculation of rare events, modeling of tail-driven processes, or dealing with non-stationary time series.

In order to better understand the DGA in the context of stochastic gene expression, we benchmarked the DGA on a simple two-state promoter model inspired by experiments in Escherichia coli (Jones et al., 2014). This simple model had several advantages that make it well suited for exploring the performance of DGA. These include the ability to analytically calculate mRNA expression distributions and independent experimental measurements of kinetic parameters.

Two-state promoter model

Gene expression is tightly regulated at the transcriptional level to ensure that genes are expressed at the right time, place, and in the right amount (Phillips et al., 2012). Transcriptional regulation involves various mechanisms, including the binding of transcription factors to specific DNA sequences, the modification of chromatin structure, and the influence of non-coding RNAs, which collectively control the initiation and rate of transcription (Phillips et al., 2012; Sanchez et al., 2013; Phillips et al., 2019). By orchestrating these regulatory mechanisms, cells can respond to internal signals and external environmental changes, maintaining homeostasis and enabling proper development and function.

Here, we focus on a classic two-state promoter model of gene regulation (Jones et al., 2014). Two-state promoter systems are commonly studied because they provide a simplified yet powerful model for understanding gene regulation dynamics. These systems, characterized by promoters toggling between active and inactive states, offer insights into how genes are turned on or off in response to various stimuli (see Figure 3a). The two-state gene regulation circuit involves the promoter region, where RNA polymerase (RNAP) binds to initiate transcription and synthesize mRNA molecules at a rate r. A repressor protein can also bind to the operator site at a rate konR and unbind at a rate koffR. When the repressor is bound to the operator, it prevents RNAP from accessing the promoter, effectively turning off transcription. mRNA is also degraded at a rate γ. An appealing feature of this model is that both the mean mRNA expression and the Fano factor can be calculated analytically, and there exist beautiful quantitative measurements of both these quantities (Figure 3b). For this reason, we use this two-state promoter model to benchmark the efficacy of the DGA below.

Two-state gene regulation architecture.

(a) Schematic of gene regulatory circuit for transcriptional repression. RNA polymerase (RNAP) binds to the promoter region to initiate transcription at a rate r, leading to the synthesis of mRNA molecules (red curvy lines). mRNA is degraded at a rate γ. A repressor protein can bind to the operator site, with association and dissociation rates konR and koffR, respectively. (b) Experimental data from Phillips et al., 2019, showing the relationship between the mean mRNA level and the Fano factor for two different promoter constructs: lacUD5 (green squares) and 5DL1 (red squares).

Characterizing errors due to approximations in the DGA

We begin by using the DGA to perform forward simulations of the two-state promoter system described above and comparing the results to simulations performed with the exact Gillespie algorithm (see Appendix 2 for simulation details). Figure 4a compares the probability distribution function (PDF) for the steady-state mRNA levels obtained from the DGA (in red) and the exact Gillespie simulation (in blue). The close overlap of these distributions demonstrates that the DGA can accurately replicate the results of the exact Gillespie simulation. This is also shown by the very close match of the first four moments mn of the mRNA count between the exact Gillespie and the DGA in Figure 4b, though the DGA systematically overestimates these moments. As observed in Figure 4a, the DGA also fails to accurately capture the tails of the underlying PDF. This discrepancy arises because such rare events are produced by repeated occurrences of low-probability reactions, precisely the regime where the sigmoid approximation used in the DGA most strongly impacts the reaction selection process and, consequently, the final simulation results.

Accuracy of the differentiable Gillespie algorithm (DGA) in simulating the two-state promoter architecture in Figure 3a.

Comparison between the DGA and exact simulations for (a) steady-state mRNA distribution, (b) moments of the steady-state mRNA distribution, and (d) the probability for the promoter to be in the ‘ON’ or ‘OFF’ state. (c) Ratio of the Jensen–Shannon divergence JSD(pDGA||pexactss) between the differentiable Gillespie probability distribution function (PDF) pDGA and the exact steady-state PDF pexactss, and the Shannon entropy H(pexactss) of the exact steady-state PDF. In all of the plots, 2000 trajectories are used. The simulation time used in panels (a), (b), and (d) is marked by blue ‘x’. Parameter values: konR=0.5, koffR=1.0, r=10, γ=1, 1/a=200, and 1/b=20.

Next, we compare the accuracy of the DGA in simulating mRNA abundance distributions across a range of simulation times (see Figure 4c). The accuracy is quantified by the ratio of the Jensen–Shannon divergence JSD(pDGA||pexactss) between the differentiable Gillespie PDF pDGA and the exact steady-state PDF pexactss, and the entropy H(pexactss) of the exact steady-state PDF. For probability distributions P and Q over the same discrete space X, the JSD and H are defined as:

(9) $\mathrm{JSD}(P \| Q) = \frac{1}{2} D_{KL}(P \| M) + \frac{1}{2} D_{KL}(Q \| M), \qquad H(P) = -\sum_{x \in X} P(x)\log P(x)$

where $M = \frac{1}{2}(P + Q)$ and DKL denotes the Kullback–Leibler divergence

(10) $D_{KL}(P \| Q) = \sum_{x \in X} P(x) \log\frac{P(x)}{Q(x)}$

The ratio JSD/H normalizes the divergence by the entropy, enabling meaningful comparison across systems. As expected, the JSD/H ratio decreases with increasing simulation time, indicating convergence toward the steady-state distribution of the exact Gillespie simulation. By ‘steady-state distribution’, we mean the long-term probability distribution of states that the exact Gillespie algorithm approaches after a simulation time of 10^4. The saturation of the JSD/H ratio at approximately 0.003 for long simulation times is due to the finite values of 1/a and 1/b. In percentage terms, this ratio represents a 0.3% divergence, meaning that the DGA’s approximation introduces only a 0.3% deviation from the exact distribution, relative to the total uncertainty (entropy) in the exact system.
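
For reference, the normalized divergence in Equations 9 and 10 can be computed from two empirical probability mass functions with a short helper like the one below (an illustrative sketch; the function name and the small regularizer eps are ours).

import numpy as np

def jsd_over_entropy(p_dga, p_exact, eps=1e-12):
    # Jensen-Shannon divergence between two PMFs on the same support (Equations 9-10),
    # normalized by the Shannon entropy of the exact distribution.
    p = (p_dga + eps) / (p_dga + eps).sum()
    q = (p_exact + eps) / (p_exact + eps).sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    jsd = 0.5 * kl(p, m) + 0.5 * kl(q, m)
    entropy = -np.sum(q * np.log(q))
    return jsd / entropy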

Finally, the bar plot in Figure 4d shows simulation results for the probability of the promoter being in the ‘OFF’ and ‘ON’ states as predicted by the DGA (in red) and the exact Gillespie simulation (in blue). The differentiable Gillespie overestimates the probability of being in the ‘OFF’ state and underestimates the probability of being in the ‘ON’ state. Nonetheless, given the discrete nature of this system, the DGA does a reasonable job of matching the results of the exact simulations.

As we will see below, despite these errors the DGA is able to accurately capture gradient information and hence works remarkably well at gradient-based optimization of loss functions.

Parameter estimation using the DGA

In many applications, one often wants to estimate kinetic parameters from experimental measurements of a stochastic system (Tian et al., 2007; Munsky et al., 2009; Komorowski et al., 2009; Villaverde et al., 2019). For example, in the context of gene expression, biologists are often interested in understanding biophysical parameters such as the rate at which promoters switch between states or a transcription factor unbinds from DNA. However, estimating kinetic parameters in stochastic systems poses numerous challenges because the vast majority of methods for parameter estimation are designed with deterministic systems in mind. Moreover, it is often difficult to analytically calculate likelihood functions, making statistical inference challenging. One attractive method for addressing these difficulties is to combine differentiable Gillespie simulations with gradient-based optimization methods. By choosing kinetic parameters that minimize the difference between simulations and experiments as measured by a loss function, one can quickly and efficiently estimate kinetic parameters and error bars.

Loss function for parameter estimation

To use the DGA for parameter estimation, we start by defining a loss function L(θ) that measures the discrepancy between simulations and experiments. In the context of the two-state promoter model (Figure 3), a natural choice of loss function is the square error between the simulated and experimentally measured mean and standard deviations of the steady-state mRNA distributions:

(11) $L(\theta) = (\hat{m} - m)^2 + (\hat{\sigma}_m - \sigma_m)^2,$

where m^ and σ^m denote the mean and standard deviation obtained from DGA simulations, and m and σm are the experimentally measured values of the same quantities. Having specified the loss function and parameters, we then use the gradient-based optimization to minimize the loss and find the optimal parameters θ^ (see Figure 2). Note that in general the solution to the optimization problem need not be unique (see below).
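
In code, this loss is a one-liner once the DGA returns a differentiable tensor of steady-state mRNA counts; the sketch below is illustrative and assumes such a tensor (mrna_samples) is available.

import torch

def moment_loss(mrna_samples, m_exp, sigma_exp):
    # Equation 11: squared error between simulated and measured moments.
    # mrna_samples: differentiable tensor of steady-state mRNA counts from a batch of DGA runs.
    m_hat = mrna_samples.mean()
    sigma_hat = mrna_samples.std()
    return (m_hat - m_exp) ** 2 + (sigma_hat - sigma_exp) ** 2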

Confidence intervals and visualizing loss landscapes

Given a set of learned parameters θ^ that minimize L(θ), one would also ideally like to assign a confidence interval (CI) to this estimate that reflects how constrained these parameters are. One natural way to achieve this is by examining the curvature of the loss function as the parameter θi varies around its minimum value, θimin. Motivated by this, we define the 95% CIs for parameter θi by:

(12) $\mathrm{CI}_{\theta_i} = \left[\theta_i^{\min} - \delta,\ \theta_i^{\min} + 1.96\,\delta_{\theta_i}\right]$

where

(13) $\delta_{\theta_i} = \left(\frac{\partial^2 L}{\partial \theta_i^2}\right)^{-1}\Bigg|_{\theta_i = \theta_i^{\min}}$

and the left width δ is defined implicitly by $L(\theta_i^{\min} - \delta) = L(\theta_i^{\min} + 1.96\,\delta_{\theta_i})$. A detailed explanation of how to numerically estimate the CIs is given in Appendix 3.

One shortcoming of Equation 13 is that it treats each parameter in isolation and ignores correlations between parameters. On a technical level, this is reflected in the observation that the CIs only know about the diagonal elements of the full Hessian ∂²L/∂θi∂θj. This shortcoming is especially glaring when there are many sets of parameters that all optimize the loss function (Einav et al., 2018; Razo-Mejia et al., 2018). As discussed below, this is often the case in many stochastic systems, including the two-state promoter architecture in Figure 3. For this reason, it is often useful to make two-dimensional plots of the loss function L(θ). To do so, for each pair of parameters, we simply sample the parameters around their optimal values and forward simulate to calculate the loss function L(θ). We then use these simulations to create two-dimensional heat maps of the loss function. This allows us to identify ‘soft directions’ in parameter space, where the loss function L(θ) changes slowly, indicating weak sensitivity to specific parameter combinations.
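
A simple way to produce such heat maps is to scan a grid of values for one pair of parameters while holding the rest at their learned values and re-running the forward simulation at each grid point. The sketch below illustrates this under the assumption that a callable loss_fn(theta), wrapping the DGA forward pass, and a NumPy array theta_opt of learned parameters are available; all names are ours.

import numpy as np

def loss_landscape_2d(loss_fn, theta_opt, i, j, span=2.0, n=25):
    # Evaluate the loss on an n x n grid around the learned optimum for parameters i and j,
    # keeping all other parameters fixed at their learned values.
    grid_i = np.linspace(theta_opt[i] / span, theta_opt[i] * span, n)
    grid_j = np.linspace(theta_opt[j] / span, theta_opt[j] * span, n)
    landscape = np.zeros((n, n))
    for ii, ti in enumerate(grid_i):
        for jj, tj in enumerate(grid_j):
            theta = theta_opt.copy()
            theta[i], theta[j] = ti, tj
            landscape[ii, jj] = loss_fn(theta)   # one forward DGA simulation per grid point
    return grid_i, grid_j, landscape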

Parameter estimation on synthetic data

Before proceeding to experiments, we start by benchmarking the DGA’s ability to perform parameter estimation on synthetic data generated using the two-state promoter model shown in Figure 3. This model nominally has four independent kinetic parameters: the rate at which repressors bind the promoter, konR; the rate at which the repressor unbinds from the promoter, koffR; the rate at which mRNA is produced, r; and the rate at which mRNA degrades, γ. Since we are only concerned with steady-state properties of the mRNA distribution, we choose to measure time in units of the off rate and set koffR=1 in everything that follows. In Appendix 4, we make use of exact analytical results for m and σm to show that the solution to the optimization problem specified by loss function in Equation 11 is degenerate – there are many combinations of the three parameters {konR,r,γ} that all optimize L(θ). On the other hand, if one fixes the mRNA degradation rate γ, this degeneracy is lifted and there is a unique solution to the optimization problem for the two parameters {konR,r}. We discuss both these cases below.

Generating synthetic data

To generate synthetic data, we randomly sample the three parameters: konR, r, and γ within the range [0.1,10], while keeping koffR fixed at 1. In total, we generate 20 different sets of random parameters. We then perform exact Gillespie simulations for each set of parameters. Using these simulations, we obtain the mean m and standard deviation σm of the mRNA levels, which are then used as input to the loss function in Equation 11. We then use the DGA to estimate the parameters using the procedure outlined above and compare the resulting predictions with ground truth values for simulations.

Estimating parameters in the non-degenerate case

We begin by considering the case where the mRNA degradation rate γ is known and the goal is to estimate the two other parameters: the repressor binding rate konR and the mRNA production rate r. As discussed above, in this case, the loss function in Equation 11 has a unique minimum, considerably simplifying the inference task. Figure 5a shows a scatter plot of the learned and the true parameter values for a wide variety of choices of γ. As can be seen, there is very good agreement between the true parameters and learned parameters. Figure 5c shows that even when the true and learned parameters differ, the DGA can predict the mean m and standard deviation σm of the steady-state mRNA distribution almost perfectly (see Appendix 5 for discussion of how error bars were estimated). To better understand this, we selected a set of learned parameters: konR=0.87, r=3.83, and γ=2.43. We then plotted the loss function in the neighborhood of these parameters (Figure 5b). As can be seen, the loss function around the true parameters is quite flat and the learned parameters live at the edge of this flat region. The flatness of the loss function reflects the fact that the mean and standard deviation of the mRNA distribution depend weakly on the kinetic parameters.

Gradient-based learning via differentiable Gillespie algorithm (DGA) is applied to the synthetic data for the gene expression model in Figure 3a.

The parameter koffR is fixed at 1, with 1/a=200 and 1/b=20 for a simulation time of 10. (a) Scatter plot of true versus inferred parameters (k^onR and r^) with γ held constant. Error bars are 95% confidence intervals (CIs). Panel (b) plots the logarithm of the loss function near a learned parameter set (shown in red circles in (a)), showing insensitivity regions. Panel (c) compares true and predicted mRNA mean and standard deviation with 95% CIs.

Estimating parameters for the degenerate case

We now estimate parameters for the two-state promoter model when all three parameters konR, r, and γ are unknown. As discussed above, in this case, there are many sets of parameters that all minimize the loss function in Equation 11. Figure 6a shows a comparison between the learned and true parameters, along with a heat map of the loss function for one set of synthetic parameters (Figure 6b). As can be seen in the plots, though the true parameters and learned parameter values differ significantly, they do so along ‘sloppy’ directions where the loss function is flat. Consistent with this intuition, we performed simulations comparing the mean m^ and standard deviation σ^m of the steady-state mRNA levels using the true and learned parameters and found near-perfect agreement across all of the synthetic data (Figure 6c).

Gradient-based learning via differentiable Gillespie algorithm (DGA) is applied to the synthetic data for the gene expression model in Figure 3a.

The parameter koffR is fixed at 1, with 1/a=200 and 1/b=20 for a simulation time of 10. (a) Scatter plot of true versus inferred parameters (k^onR, r^, and γ^). Error bars are 95% confidence intervals (CIs). Panel (b) plots the logarithm of the loss function near a learned parameter set (shown in red circles in (a)), showing insensitivity regions. Panel (c) compares true and predicted mRNA mean and standard deviation with 95% CIs.

Parameter estimation on experimental data

In the previous section, we demonstrated that our DGA can effectively obtain parameters for synthetic data. However, real experimental data often contains noise and variability, which can complicate the parameter estimation process. To test the DGA in this more difficult setting, we reanalyze experiments by Jones et al., 2014 which measured how mRNA expression changes in a system well described by the two-state gene expression model in Figure 3. In these experiments, two constitutive promoters lacUD5 and 5DL1 (with different transcription rates r) were placed under the control of a LacI repressor through the insertion of a LacI binding site. By systematically varying LacI concentrations, the authors were able to adjust the repressor binding rate konR. mRNA fluorescence in situ hybridization was employed to measure mRNA expression, providing data on both mean expression levels m and the variability as quantified by the Fano factor f=σm2/m for both promoters (see Figure 3b).

Given a set of measurements of the mean and Fano factor {mi,fim} for a promoter (lacUD5 and 5DL1), we construct a loss function of the form:

(14) $L = \sum_{i=1}^{N}\left(\hat{m}_i - m_i\right)^2 + \sum_{i=1}^{N}\left(\hat{\sigma}_i^{m} - \sqrt{f_i^{m} m_i}\right)^2,$

where i runs over data points (each with a different lac repressor concentration) and mi^ and σ^im are the mean and standard deviation obtained from a sample of DGA simulations. This loss function is chosen because, at its minimum, mi^=mi and σ^im=√(fim mi) for all i. As above, we set koffR=1, and focus on estimating the other three parameters {r,γ,konR}. When performing our gradient-based optimization, we assume that the transcription rate r and the mRNA degradation rate γ are the same for all data points i, while allowing konR to vary across data points i. This reflects the fact that konR is a function of the lac repressor concentration which, by design, is varied across data points (see Appendix 6 for details on how this optimization is implemented and calculation of error bars).
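
One minimal way to encode these assumptions in PyTorch is to make r and γ single shared learnable scalars and konR a learnable vector with one entry per repressor concentration, all parameterized in log space to enforce positivity. The sketch below is illustrative; simulate_moments is a hypothetical stand-in for a differentiable DGA forward pass, n_points denotes the number of measured LacI concentrations, and means_exp and fanos_exp are tensors of the measured values.

import torch

n_points = 10                                            # assumed number of data points
log_r     = torch.zeros(1, requires_grad=True)           # shared transcription rate (log)
log_gamma = torch.zeros(1, requires_grad=True)           # shared degradation rate (log)
log_kon   = torch.zeros(n_points, requires_grad=True)    # one binding rate per concentration (log)

def data_loss(means_exp, fanos_exp):
    # Equation 14: match simulated moments to the measured mean and Fano factor.
    loss = 0.0
    for i in range(n_points):
        m_hat, s_hat = simulate_moments(log_kon[i].exp(), log_r.exp(), log_gamma.exp())  # hypothetical DGA call
        loss = loss + (m_hat - means_exp[i]) ** 2 \
                    + (s_hat - torch.sqrt(fanos_exp[i] * means_exp[i])) ** 2
    return loss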

The results of this procedure are summarized in Figure 7. We find that for the lacUD5 promoter r^=90.25, γ^=6.20 and that k^onR varies from a minimum value of 0.18 to a maximum value of 99.0. For the 5DL1 promoters r^=87.48 and γ^=9.80 and k^onR varies between 3.64 and 99.0. Recall that we have normalized all rates to the repressor unbinding rate koffR=1. These values indicate that mRNA transcription occurs much faster compared to the unbinding of the repressor, suggesting that once the promoter is in an active state, it produces mRNA rapidly. The relatively high mRNA degradation rates indicate a mechanism for fine-tuning gene expression levels, ensuring that mRNA does not persist too long in the cell, which could otherwise lead to prolonged expression even after promoter deactivation.

Fitting of experimental data from Jones et al., 2014 using the differentiable Gillespie algorithm (DGA).

(a) Comparison between theoretical predictions from the DGA (solid curves) and experimental values (square markers with error bars) of the mean and Fano factor of the steady-state mRNA levels for two different promoters, lacUD5 and 5DL1. Solid curves are generated by using the DGA to estimate r^, γ^, and {k^onR} and using these as input to exact analytical formulas. (b) Comparison between the inferred values of r^/γ^ from the DGA and experimentally measured values of this parameter from Jones et al., 2014. (c) Inferred k^onR values as a function of the mean mRNA level.

As expected, the repressor binding rates decrease with the mean mRNA level (see Figure 7c). The broad range of repressor binding rates shows that the system can adjust its sensitivity to repressor concentration, allowing for both tight repression and rapid activation depending on the cellular context.

Figure 7a shows a comparison between the predictions of the DGA (solid curves) and the experimental data (squares) for the mean mRNA levels and the Fano factor. The theoretical curves are obtained by using analytical expressions for the mean and Fano factor from Gillespie, 2007 with parameters estimated from the DGA. We find that for the lacUD5 and the 5DL1 promoters, the mean percentage errors for predictions of the Fano factor are 25% and 28%, respectively (see Appendix 6).

An appealing feature of Jones et al., 2014 is that the authors performed independent experiments to directly measure the normalized transcription rate r/γ (namely the ratio of the transcription rate and the mRNA degradation rate). This allows us to compare the DGA predictions for these parameters to ground truth measurements of kinetic parameters. In Figure 7b, the predictions of the DGA agree remarkably well for both the lacUD5 and 5DL1 promoters.

Designing gene regulatory circuits with desired behaviors

Another interesting application of the DGA is to design stochastic chemical or biological networks that exhibit a particular behavior. In many cases, this design problem can be reformulated as identifying choices of parameters that give rise to a desired behavior. Here, we show that the DGA is ideally suited for such a task. We focus on designing the input–output relation of a four-state promoter model of gene regulation (Lammers et al., 2023). We have chosen this more complex promoter architecture because, unlike the two-state promoter model analyzed above, it allows for nonequilibrium currents. In making this choice, we are inspired by numerous recent works that have investigated how cells can tune kinetic parameters to operate out of equilibrium in order to achieve increased sharpness/sensitivity (Nicholson and Gingrich, 2023; Lammers et al., 2023; Zoller et al., 2022; Wong and Gunawardena, 2020; Dixit et al., 2024).

Model of nonequilibrium promoter

We focus on designing the steady-state input–output relationship of the four-state promoter model of gene regulation shown in Figure 8a; Lammers et al., 2023. The locus can be in either an ‘ON’ state where mRNA is transcribed at a rate r or an ‘OFF’ state where the locus is closed and there is no transcription. In addition, a transcription factor (assumed to be an activator) with concentration [c] can bind to the locus with a concentration-dependent rate [c]kb in the ‘OFF’ state and a rate [c]ηbakb in the ‘ON’ state. The activator can also unbind at a rate ku in the ‘OFF’ state and a rate ηuaku in the ‘ON’ state. The average mRNA production rate (averaged over many samples) in this model is given by

(15) $\bar{r} = r(\pi_2 + \pi_3)$

where πs (s=2,3) is the steady-state probability of finding the system in each of the ‘ON’ states (see Figure 8a).

Design of the four-state promoter architecture using the differentiable Gillespie algorithm (DGA).

(a) Schematic of the four-state promoter model. (b) Target input–output relationships (solid curves) and learned input–output relationships (blue dots) between activator concentration [c] and average mRNA production rate. (c) Parameters learned by the DGA for the two responses in (b). (d) The sharpness of the response, [c] dr¯/d[c], and the energy dissipated per unit time for the two responses in (b). (e) Logarithm of the loss function for the learned parameter set for Response-2, revealing directions (or curves) of insensitivity in the model’s parameter space. The red circles are the learned parameter values.

Such promoter architectures are often studied in the context of protein gradient-based development (Lammers et al., 2023; Estrada et al., 2016; Owen and Horowitz, 2023). One well-known example of such a gradient is the dorsal protein gradient in Drosophila, which plays a crucial role in determining the spatial boundaries of gene expression domains during early embryonic development. In this context, the sharpness of the response as a function of activator concentration is a critical aspect. High sharpness ensures that the transition between different gene expression domains occurs over a very narrow region, leading to well-defined and precise boundaries. Inspired by this, our objective is to determine the parameters such that the variation in r¯ as a function of the activator concentration [c] follows a desired response. We consider the two target responses (shown in Figure 8b) of differing sharpness, which, following Lammers et al., 2023, we quantify as max([c] dr¯/d[c]). For simplicity, we use sixth-degree polynomials to model the input–output functions, with the x-axis plotted on a logarithmic scale. We note that our results do not depend on this choice and any other functional form works equally well.

Loss function

In order to use the DGA to learn a desired input–output relation, we must specify a loss function that quantifies the discrepancy between the desired and actual responses of the promoter network. To construct such a loss function, we begin by discretizing the activator concentration into N=10 logarithmically spaced points, [c]i, where i=1,2,…,N. For each [c]i, we denote the corresponding desired average mRNA production rate by r¯i (see Equation 15). After discretization, the loss function is simply the square error between the desired response, r¯i, and the current response, r¯^(θ)i, of the circuit

(16) $L = \sum_{i=1}^{N}\left(\hat{\bar{r}}(\theta)_i - \bar{r}_i\right)^2,$

where r¯^(θ)i denotes the predicted average mRNA production rate obtained from the DGA simulations given the current parameters θ. To compute r¯^(θ)i for a concentration [c]i, we perform n=600 DGA simulations (indexed by capital letters A=1,…,n) and use these simulations to calculate the fraction of time spent in transcriptionally active states (states s=2 and s=3 in Figure 8a). If we denote the fraction of time spent in state s in simulation A by wsA, then we can calculate the probability πs of being in state s by

(17) $\pi_s = \frac{1}{n}\sum_{A=1}^{n} w_s^A$

and use Equation 15 to calculate r¯^(θ)i.

As before, we optimize this loss using gradient descent (see Figure 2). We assume that the transcription rate r is known (this just corresponds to an overall scaling of mRNA numbers). Since we are concerned only with steady-state properties, we fix the activator binding rate to a constant value, kb=0.02. This is equivalent to measuring time in units of 1/kb. We then use gradient descent to optimize the remaining seven parameters governing transitions between promoter states.
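
As a concrete illustration of Equations 15 and 17, the average production rate can be computed from the per-simulation occupancies with a few lines of PyTorch; the sketch below is ours and assumes an occupancy tensor of shape (n, 4) whose entry [A, s] is the (differentiable) fraction of time simulation A spends in promoter state s.

import torch

def mean_production_rate(occupancy, r):
    # Equation 17: average the per-simulation occupancy fractions over the n DGA runs.
    pi = occupancy.mean(dim=0)
    # Equation 15: transcription only occurs in the two 'ON' states (s = 2 and s = 3).
    return r * (pi[2] + pi[3])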

Assessing circuits found by the DGA

Figure 8b shows a comparison between the desired and learned input–output relations. There is good agreement between the learned and desired responses, showing that the DGA is able to design dose–response curves with different sensitivities and maximal values. Figure 8c shows the learned parameters for both response curves. Notably, the degree of activation resulting from transcription factor binding, denoted by ηab, is substantially higher for the sharper response (Response-2). In contrast, the influence on transcription factor binding due to activation, represented by ηba, is reduced for the sharper response curve. Additionally, the unbinding rate ku is observed to be lower for the sharper response. However, it is essential to approach these findings with caution, as the parameters are highly interdependent. These interdependencies can be visualized by plotting the loss function around the optimized parameter values. Figure 8e shows two-dimensional heat maps of the loss function for Response-2. There are seven free parameters, resulting in a total of 21 possible 2D slices of the loss function within the seven-dimensional loss landscape.

The most striking feature of these plots is the central role played by the parameters ηab and ηua which must both be high, suggesting that the sharpness in Response-2 may result from creating a high-flux nonequilibrium cycle through the four promoter states (see Figure 8a). This observation is consistent with recent works suggesting that creating such nonequilibrium kinetics represents a general design principle for engineering sharp responses (Lammers et al., 2023; Zoller et al., 2022; Wong and Gunawardena, 2020; Dixit et al., 2024). To better understand if this is indeed what is happening, we quantified the energy dissipation per unit time (power consumption), Φ, in the nonequilibrium circuit. The energetic cost of operating biochemical networks can be quantified using ideas from nonequilibrium thermodynamics using a generalized Ohm’s law of the form (Lammers et al., 2023; Qian, 2007; Mehta and Schwab, 2012; Lan et al., 2012; Lang et al., 2014; Mehta et al., 2016)

(18) $\Phi = J\,\Delta\mu$

where we have defined a nonequilibrium drive

(19) $\Delta\mu = \ln\left(\frac{\eta_{ab}\,\eta_{ua}}{\eta_{ib}\,\eta_{ba}}\right)$

and the nonequilibrium flux

(20) $J = \pi_0 k_b [c] - \pi_1 k_u,$

where π0 and π1 are the probabilities of finding the system in state 0 and 1, respectively. Figure 8d shows a comparison between energy consumption and sharpness of the two learned circuits. Consistent with the results of Lammers et al., 2023, we find that the sharper response curve is achieved by consuming more energy.
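
For completeness, the dissipation of a learned circuit can be evaluated directly from the steady-state occupancies and the learned rates via Equations 18–20. The sketch below is illustrative; the argument names mirror the symbols in the text, and the grouping of the η factors follows Equation 19 as written above.

import numpy as np

def dissipation(pi, kb, ku, c, eta_ab, eta_ua, eta_ib, eta_ba):
    # Equation 19: nonequilibrium drive around the four-state cycle.
    delta_mu = np.log((eta_ab * eta_ua) / (eta_ib * eta_ba))
    # Equation 20: net probability flux between states 0 and 1.
    J = pi[0] * kb * c - pi[1] * ku
    # Equation 18: power consumption (generalized Ohm's law).
    return J * delta_mu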

Discussion

In this paper, we introduced a fully differentiable variant of the Gillespie algorithm, the DGA. By integrating differentiable components into the traditional Gillespie algorithm, the DGA facilitates the use of gradient-based optimization techniques, such as gradient descent, for parameter estimation and network design. The ability to smoothly approximate the discrete operations of the traditional Gillespie algorithm with continuous functions facilitates the computation of gradients via both forward- and reverse-mode automatic differentiation, foundational techniques in machine learning, and has the potential to significantly expand the utility of stochastic simulations. Our work demonstrates the efficacy of the DGA through various applications, including parameter learning and the design of simple gene regulatory networks.

We benchmarked the DGA’s ability to accurately replicate the results of the exact Gillespie algorithm through simulations on a two-state promoter architecture. We found the DGA could accurately approximate the moments of the steady-state distribution and other major qualitative features. Unsurprisingly, it was less accurate at capturing information about the tails of distributions. We then demonstrated that the DGA could be used for accurate parameter estimation on both simulated and real experimental data. This capability to infer kinetic parameters from noisy experimental data underscores the robustness of the DGA, making it a potentially powerful computational tool for real-world applications in quantitative biology. Furthermore, we showcased the DGA’s application in designing biological networks. Specifically, for a complex four-state promoter architecture, we learned parameters that enable the gene regulation network to produce desired input–output relationships. This demonstrates how the DGA can be used to rapidly design complex biological systems with specific behaviors. We expect computational design of synthetic circuits with differentiable simulations to become an increasingly important tool in synthetic biology.

There remains much work still to be done. In this paper, we focused almost entirely on properties of the steady states. However, a powerful aspect of the traditional Gillespie algorithm is that it can be used to simulate dynamical trajectories. How to adapt the DGA to utilize dynamical data remains an extremely important open question. In addition, it will be interesting to see if the DGA can be adapted to understand the kinetics of rare events. It will also be interesting to compare the DGA with other recently developed approximation methods such as those based on tensor networks (Strand et al., 2022; Nicholson and Gingrich, 2023). Beyond gene regulatory networks, extending the DGA to handle larger and more diverse datasets will be crucial for applications in epidemiology, evolution, ecology, and neuroscience. On a technical level, this may be facilitated by developing more sophisticated smoothing functions and adaptive algorithms to improve numerical stability and convergence.

The DGA could also be extended to stochastic spatial systems by incorporating reaction–diffusion master equations or lattice-based models. Its differentiability may enable efficient optimization of spatially heterogeneous reaction parameters. However, such extensions may need to address computational scalability and stability in high-dimensional spaces, especially in processes such as diffusion-driven pattern formation or spatial gene regulation.

Materials and methods

A detailed explanation of how the DGA is implemented using PyTorch is given in the Appendix. In addition, all code for the DGA is available at our GitHub repository: https://github.com/Emergent-Behaviors-in-Biology/Differentiable-Gillespie-Algorithm (copy archived at Rijal, 2025).

Appendix 1

Balancing accuracy and stability: hyperparameter tradeoffs

In this section, we explore the tradeoffs involved in tuning the hyperparameters a and b in the DGA. These hyperparameters are crucial for balancing the accuracy and numerical stability of the DGA in approximating the exact Gillespie algorithm.

The hyperparameter 1/a controls the steepness of the sigmoid function used to approximate the Heaviside step function in reaction selection. Similarly, 1/b determines the sharpness of the Gaussian function used to approximate the Kronecker delta function in abundance updates. A larger value of 1/a or 1/b results in a steeper sigmoid or Gaussian function, thus more closely approximating the discrete functions in the exact Gillespie algorithm.

Accuracy of the forward DGA simulations

To assess the impact of these hyperparameters on the accuracy of the DGA, we measure the ratio of the Jensen–Shannon divergence between the DGA-generated PDF pDGA and the exact PDF pexact, normalized by the entropy H(pexact) of the exact steady-state PDF (see Equation 9). This ratio, JSD(pDGA||pexact)/H(pexact), provides a measure of how closely the DGA approximates the exact Gillespie algorithm.

Appendix 1—figure 1 shows the ratio JSD/H as a function of the sharpness parameters 1/a and 1/b. In panel (a), 1/b is fixed at 20, and 1/a is varied. In panel (b), 1/a is fixed at 200, and 1/b is varied. Some key insights can be drawn from these plots.

First, as 1/a or 1/b increases, the ratio JSD/H decreases, indicating that the DGA’s approximation becomes more accurate. This is because steeper sigmoid and Gaussian functions better mimic the discrete steps of the exact Gillespie algorithm. Interestingly, while the ratio decreases for both parameters, it plateaus at high values of 1/a, whereas it rises again at high values of 1/b. The plateau occurs because the sigmoid function used for reaction selection becomes so steep that it effectively becomes a step function, beyond which further steepening has negligible impact. As 1/b becomes very large, the width of the Gaussian function used for abundance updates becomes extremely narrow. In such a scenario, it becomes increasingly improbable for the chosen reaction index to fall within this narrow width, especially because the reaction indices are not exact integers but are instead near-integer continuous values. Therefore, as 1/b becomes very large, the discrepancy between the DGA-generated probabilities and the exact probabilities widens, causing the ratio JSD/H to increase.

Appendix 1—figure 1
In panels (a) and (b), we plot the ratio of the Jensen–Shannon divergence JSD(pDGA||pexactss) between the differentiable Gillespie PDF pDGA and the exact steady-state PDF pexactss, and the Shannon entropy H(pexactss) of the exact steady-state PDF, as a function of the two sharpness parameters 1/a and 1/b.

In panel (a), 1/b=20; in panel (b), 1/a=200. The simulation time is set to 10. In panels (c) and (d), for these same values, we show the gradient ∂L/∂r of the loss function L with respect to the parameter r near the true parameter values. In all the plots, the values of the rates are: konR=0.5, koffR=1, r=10, and γ=1. 5000 trajectories are used to obtain the PDFs, while 200 trajectories are used to obtain the gradients.

Stability of the backpropagation of DGA simulations

Numerical stability for gradient computation is crucial. Therefore, it is important to examine how the gradient behaves as a function of the hyperparameters. Panels (c) and (d) of Appendix 1—figure 1 provide insights into this behavior. The plots show the gradient ∂L/∂r of the loss function L with respect to the parameter r near the true parameter values. The gradients are computed for the loss function in Equation 11 with m and σm equal to 8 and 2.5, respectively, for the parameter values konR=0.5, koffR=1, r=10, and γ=1. With these parameter values, the true mean and standard deviation are equal to 6.67 and 3.94, respectively.

As 1/a or 1/b increases, the gradients become more accurate, but their numerical stability can be compromised. This is evidenced by the increased variability and erratic behavior in the gradients at very high sharpness values (see Appendix 1—figure 1). Hence, very large values of 1/a or 1/b lead to oscillations and convergence issues, highlighting the need for a balance between accuracy and stability.

The tradeoff between accuracy and numerical stability is evident. This necessitates careful tuning of a and b to ensure stable and efficient optimization. In practice, this involves selecting values that provide sufficient approximation quality without compromising the stability of the gradients.

Appendix 2

Implementation of the DGA in PyTorch

This section explains the implementation of the DGA used to simulate the two-state promoter model. The implementation can be adapted to any model where the stoichiometric matrix and propensities are known. The model involves promoter state switching and mRNA production/degradation (Figure 3a), and the algorithm is designed to handle these sub-processes effectively.

Stoichiometric matrix:

The stoichiometric matrix is a key component in modeling state changes due to reactions. Each row represents a reaction, and each column corresponds to a state variable (promoter state and mRNA level). The matrix for the two-state promoter model includes:

  • Reaction 1: Promoter state transitions from the OFF state (–1) to the ON state (+1).

  • Reaction 2: Production of mRNA.

  • Reaction 3: Promoter state transitions from the ON state (+1) to the OFF state (–1).

  • Reaction 4: Degradation of mRNA.

The stoichiometric matrix S for this model is:

$S = \begin{pmatrix} 2 & 0 \\ 0 & 1 \\ -2 & 0 \\ 0 & -1 \end{pmatrix}$

In this matrix, the rows correspond to the reactions listed above, and the columns represent the promoter state and mRNA level, respectively.

Propensity calculations:

The levels vector is a two-dimensional vector where the first element represents the promoter state (–1 or +1) and the second element represents the mRNA number m. Propensities are the rates at which reactions occur, given the current state of the system. In our PyTorch implementation, the propensities are calculated using the following expressions:

propensities = torch.stack([
    # Promoter state switching from -1 to +1
    kon * torch.sigmoid(-c * levels[0]),
    # mRNA production
    r * torch.sigmoid(-c * levels[0]),
    # Promoter state switching from +1 to -1
    torch.sigmoid(c * levels[0]),
    # mRNA degradation
    g * levels[1]
])

Each propensity corresponds to a different reaction:

  • Promoter state switching from –1 to +1: The value of kon * torch.sigmoid(-c * levels[0]) is around konR when levels[0] (the promoter state) is around –1 and close to zero when levels[0] is around +1. The sigmoid function ensures a smooth transition, allowing differentiability and preventing abrupt changes in rates. The constant c controls the sharpness of the sigmoid function.

  • mRNA production: The rate is proportional to r and modulated by the same sigmoid function, torch.sigmoid(-c * levels[0]), such that the rate is equal to r only when the promoter is in the –1 state.

  • Promoter state switching from +1 to –1: The rate is set to 1 and modulated by the sigmoid function, torch.sigmoid(c * levels[0]), such that the rate is equal to 1 only when the promoter is in the +1 state.

  • mRNA degradation: The rate is proportional to the current mRNA level, g * levels[1], reflecting the natural decay of mRNA at a total rate mγ.

Using the sigmoid function in the propensity calculations is crucial for ensuring smooth and differentiable transitions between states. This smoothness is essential for gradient-based optimization methods, which rely on continuous and differentiable functions to compute gradients effectively. Without the sigmoid function, the propensity rates could change abruptly, leading to numerical instability and difficulties in optimizing the model parameters.

Reaction selection function: We define a function reaction_selection that selects the next reaction to occur based on the transition points and a random number drawn uniformly from [0,1]. The function essentially implements Equation 6. The transition points are first calculated from the cumulative sum of the reaction rates normalized by the total rate.

Gillespie simulation function: The main gillespie_simulation function combines the functions described in this section, each with a specific role, to perform the simulation step by step, as shown in Figure 1. It iterates over the simulations, updating the system’s state and the reaction propensities at each step.
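The following single-trajectory sketch shows how these pieces might fit together (the released code batches many simulations; here we assume the exponential waiting-time draw of the exact algorithm is kept unchanged, since it is already differentiable, and t_max stands for the simulation time):

def gillespie_simulation(kon, r, g, t_max, c=20.0):
    levels = torch.tensor([-1.0, 0.0])   # (promoter state, mRNA number)
    t = torch.tensor(0.0)
    while t < t_max:
        propensities = torch.stack([
            kon * torch.sigmoid(-c * levels[0]),  # promoter -1 -> +1
            r * torch.sigmoid(-c * levels[0]),    # mRNA production
            torch.sigmoid(c * levels[0]),         # promoter +1 -> -1
            g * levels[1]                         # mRNA degradation
        ])
        total_rate = torch.sum(propensities)
        # Exponential waiting time, as in the exact Gillespie algorithm.
        t = t - torch.log(torch.rand(1)) / total_rate
        # Smooth reaction choice (above) and smooth state update (below).
        q = reaction_selection(propensities, torch.rand(1))
        levels = levels + state_jump(q, S)
    return levels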

State jump function: We define another function state_jump that calculates the state change vector when a reaction occurs. It uses a Gaussian function to smoothly transition between states based on the selected reaction index and the stoichiometry matrix (see Equation 8).
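A sketch of this smoothing, assuming Equation 8 weights the rows of the stoichiometric matrix by a Gaussian of sharpness b1 centred on the soft reaction index q (names are again illustrative):

def state_jump(q, S, b1=20.0):
    # Gaussian weights centred on the (soft) reaction index q; for large b1
    # essentially only the row of S for the selected reaction contributes.
    idx = torch.arange(S.shape[0], dtype=S.dtype)
    weights = torch.exp(-b1 * (q - idx) ** 2)
    return weights @ S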

Appendix 3

Estimating confidence intervals for parameters

In this section, we describe the methodology used to estimate the confidence intervals for the parameters using polynomial fitting and numerical techniques. This approach uses the results of multiple simulations to determine the range within which the parameter θi is likely to lie, based on the curvature of the loss function around its minimum value. Specifically, we perform the following steps:

  1. Parameter initialization: Set up and initialize the necessary parameters. This includes defining the number of evaluation points, the number of simulations, the simulation time, and the hyperparameters a1, b1, and c.

  2. Range generation: For each set of learned parameters $\hat{\theta}$, generate a range of values for the parameter of interest $\theta_i$ while keeping the other parameters fixed at their learned values. Let the learned value of $\theta_i$ be $\hat{\theta}_i$. The evaluation range is then approximately $[0.2\,\hat{\theta}_i,\ 2\,\hat{\theta}_i]$, depending on the flatness of the loss landscape around its minimum.

  3. Simulation and loss calculation: For each value in this range, perform forward DGA simulations to calculate the mean and standard deviation of the results, from which the loss function is computed and stored.

  4. Polynomial fitting: Fit a polynomial of degree 6 to the computed loss values across the range of θi. Compute the first and second derivatives of the fitted polynomial to identify the minimum and evaluate the curvature.

  5. Identifying minimum: Identify the valid minimum $\theta_i^{\min}$ of the loss function by solving for the roots of the first derivative and retaining only those roots at which the second derivative is positive.

  6. Curvature and standard deviation calculation: Calculate the curvature of the loss landscape at the minimum from the second derivative of the fitted polynomial evaluated there. An estimate of the standard deviation $\delta\theta_i$ of the parameter $\theta_i$ is then given by:

(C1) $\delta\theta_i = \left.\left(\dfrac{\partial^2 L}{\partial \theta_i^2}\right)^{-1}\right|_{\theta_i = \theta_i^{\min}}$

  7. Confidence interval calculation: The loss function is typically asymmetric around its minimum, with the left side usually steeper than the right (see Appendix 3—figure 1). The right error bar is taken to be $1.96\,\delta\theta_i$, so the right endpoint is $\theta_i^{\min} + 1.96\,\delta\theta_i$. For the left error bar, we find the offset $\delta_-$ such that $L(\theta_i^{\min} - \delta_-) = L(\theta_i^{\min} + 1.96\,\delta\theta_i)$. The balanced 95% CI for the parameter $\theta_i$ is therefore:

(C2) $\mathrm{CI}_{\theta_i} = \left[\theta_i^{\min} - \delta_-,\ \theta_i^{\min} + 1.96\,\delta\theta_i\right]$

This methodology allows for a robust estimation of the confidence intervals, providing insights into the reliability and precision of the parameter estimates.
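To make steps 4–6 concrete, here is a minimal NumPy sketch of the polynomial fit and curvature estimate (theta_vals and loss_vals are placeholders for the parameter grid from step 2 and the losses from step 3):

import numpy as np

# Fit a degree-6 polynomial to the loss values (step 4).
coeffs = np.polyfit(theta_vals, loss_vals, deg=6)
dL = np.polyder(coeffs)        # first derivative of the fitted polynomial
d2L = np.polyder(coeffs, 2)    # second derivative

# Keep real roots of the first derivative with positive curvature (step 5).
roots = np.roots(dL)
real_roots = roots[np.isreal(roots)].real
minima = [t for t in real_roots if np.polyval(d2L, t) > 0]
theta_min = min(minima, key=lambda t: np.polyval(coeffs, t))

# Curvature-based spread estimate, following Equation C1 (step 6).
delta_theta = 1.0 / np.polyval(d2L, theta_min)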

Appendix 3—figure 1. Error bar estimation for an asymmetric loss function.

Appendix 4

Demonstrating parameter degeneracy in the two-state promoter model

We will now demonstrate the existence of degeneracy in the two-state promoter architecture. Setting $k^R_{\mathrm{off}} = 1$ in the analytical expressions for the mean mRNA number $m$ and the Fano factor $f$ from Jones et al., 2014, we have:

(D1) $m = \dfrac{1}{k^R_{\mathrm{on}} + 1}\,\dfrac{r}{\gamma}, \qquad f = 1 + \dfrac{k^R_{\mathrm{on}}}{k^R_{\mathrm{on}} + 1}\,\dfrac{r}{k^R_{\mathrm{on}} + 1 + \gamma}$

We want to solve Equation D1 for r and γ. By isolating r in the expression for m, we obtain:

(D2) $r = m\gamma\,(k^R_{\mathrm{on}} + 1)$

Next, we substitute the expression for $r$ from Equation D2 into the expression for $f$ in Equation D1:

$f = 1 + \dfrac{k^R_{\mathrm{on}}}{k^R_{\mathrm{on}}+1}\,\dfrac{m\gamma\,(k^R_{\mathrm{on}}+1)}{k^R_{\mathrm{on}}+1+\gamma} = 1 + \dfrac{k^R_{\mathrm{on}}\,m\gamma}{k^R_{\mathrm{on}}+1+\gamma} \;\Longrightarrow\; (f-1)\,(k^R_{\mathrm{on}}+1+\gamma) = k^R_{\mathrm{on}}\,m\gamma$

Expanding and isolating terms involving γ:

$(f-1)\,k^R_{\mathrm{on}} + (f-1) + (f-1)\,\gamma = k^R_{\mathrm{on}}\,m\gamma$

Rearranging to solve for γ:

$(f-1) + (f-1)\,k^R_{\mathrm{on}} = \gamma\,(k^R_{\mathrm{on}}\,m - f + 1)$

Thus, we obtain:

(D3) $\gamma = \dfrac{(f-1)\,(k^R_{\mathrm{on}}+1)}{k^R_{\mathrm{on}}\,m - f + 1}$

We substitute Equation D3 back into Equation D2 to obtain $r$:

(D4) $r = \dfrac{m\,(f-1)\,(k^R_{\mathrm{on}}+1)^2}{k^R_{\mathrm{on}}\,m - f + 1}$

Equations D3 and D4 indicate that for any given konR, there exists a corresponding pair of values for r and γ that satisfy the equations. Therefore, the solutions are not unique, demonstrating a high degree of degeneracy in the parameter space. This degeneracy arises because multiple combinations of {r,γ,konR} can produce the same observable m and f. The lack of unique solutions is a common issue in parameter estimation for complex systems, where different parameter sets can lead to similar system behaviors.
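As a quick numerical illustration of this degeneracy (a sketch; the target values m = 20 and f = 3 are arbitrary choices for illustration):

def degenerate_params(kon_R, m, f):
    # Equations D3 and D4: gamma and r that reproduce a given (m, f) for any kon_R.
    gamma = (f - 1) * (kon_R + 1) / (kon_R * m - f + 1)
    r = m * (f - 1) * (kon_R + 1) ** 2 / (kon_R * m - f + 1)
    return r, gamma

def observables(kon_R, r, gamma):
    # Mean and Fano factor of the two-state model with koff_R = 1 (Equation D1).
    m = r / (gamma * (kon_R + 1))
    f = 1 + (kon_R / (kon_R + 1)) * r / (kon_R + 1 + gamma)
    return m, f

for kon_R in [0.5, 2.0, 10.0]:
    r, gamma = degenerate_params(kon_R, m=20.0, f=3.0)
    print(kon_R, observables(kon_R, r, gamma))   # each prints m ≈ 20.0, f ≈ 3.0

Three very different values of konR therefore reproduce exactly the same mean and Fano factor.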

Resolving degeneracy with known γ

To resolve the degeneracy, we can fix the value of γ. If the rate γ is known, we can solve Equation D1 for the remaining parameters. We start by rearranging the expression for the mean in Equation D1:

(D5) $k^R_{\mathrm{on}} = \dfrac{r}{\gamma m} - 1$

Substituting Equation D5 into the expression for $f$ in Equation D1, we have

$f = 1 + \dfrac{r - \gamma m}{r}\,\dfrac{r}{\frac{r}{\gamma m} + \gamma} \;\Longrightarrow\; (f-1)\,(r + \gamma^2 m) = (r - \gamma m)\,\gamma m \;\Longrightarrow\; r\,(\gamma m - f + 1) = \gamma^2 m\,(m + f - 1),$

which gives

(D6) $r = \dfrac{m\gamma^2\,(m + f - 1)}{\gamma m - f + 1}$

Substituting Equation D6 into Equation D5, we obtain:

(D7) $k^R_{\mathrm{on}} = \dfrac{\gamma\,(m + f - 1)}{\gamma m - f + 1} - 1$

Equations D6 and D7 provide unique solutions for r and konR given a known value of γ. By fixing γ, the degeneracy is resolved, and the remaining parameters can be accurately determined.
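In code, the inversion of Equations D6 and D7 is a one-liner (a sketch with illustrative numbers):

def rates_from_observables(m, f, gamma):
    # Equations D6 and D7: unique r and kon_R once gamma is known.
    r = m * gamma**2 * (m + f - 1) / (gamma * m - f + 1)
    kon_R = gamma * (m + f - 1) / (gamma * m - f + 1) - 1
    return r, kon_R

r, kon_R = rates_from_observables(m=20.0, f=3.0, gamma=1.0)   # r ≈ 24.4, kon_R ≈ 0.22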

Appendix 5

Estimating confidence intervals for m and σ

To assess the reliability of the statistics predicted through DGA-based optimization, we calculate error bars using the following procedure. For the non-degenerate situation, we use the learned parameter values $\hat{k}^R_{\mathrm{on}}$ and $\hat{r}$, and perform forward DGA simulations with many different random seeds. This generates multiple samples of the mean mRNA level $m$ and the standard deviation $\sigma_m$. These samples provide us with the variability due to the stochastic nature of the simulations. The 95% CIs are then determined using their standard deviations, denoted $\delta m$ and $\delta\sigma_m$. For the mean mRNA level, the CI is calculated as:

(E1) $\mathrm{CI}_{m} = \left[\hat{m} - 1.96\,\delta m,\ \hat{m} + 1.96\,\delta m\right]$

For the standard deviation of mRNA levels, the CI is calculated as:

(E2) $\mathrm{CI}_{\sigma_m} = \left[\hat{\sigma}_m - 1.96\,\delta\sigma_m,\ \hat{\sigma}_m + 1.96\,\delta\sigma_m\right]$
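In code this amounts to a few lines (a sketch; samples_m is a placeholder for the array of mean mRNA levels collected across random seeds, and the analogous computation applies to the standard deviation):

import numpy as np

m_hat = np.mean(samples_m)       # estimate of the mean mRNA level across seeds
delta_m = np.std(samples_m)      # spread due to simulation stochasticity
ci_m = (m_hat - 1.96 * delta_m, m_hat + 1.96 * delta_m)   # Equation E1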

Appendix 6

DGA-based optimization for experimental data and estimation of errors

Optimization procedure

Given a set of measurements of the mean mRNA expression levels $m_i$ and the Fano factors $f^m_i$ for the promoters (lacUD5 and 5DL1), we construct a loss function as follows:

(F1) $L = \sum_{i=1}^{N}\left(\hat{m}_i - m_i\right)^2 + \sum_{i=1}^{N}\left(\hat{\sigma}^m_i - \sqrt{f^m_i\, m_i}\right)^2$

where $i$ runs over data points (each with a different lac repressor concentration), and $\hat{m}_i$ and $\hat{\sigma}^m_i$ are the mean and standard deviation obtained from a sample of DGA simulations. This loss function is chosen because, at its minimum, $\hat{m}_i = m_i$ and $\hat{\sigma}^m_i = \sqrt{f^m_i\, m_i}$ for all $i$.

For the optimization, we set $k^R_{\mathrm{off}} = 1$ and focus on estimating the parameters $\{r, \gamma, k^R_{\mathrm{on}}\}$. During the gradient-based optimization, the transcription rate $r$ and the mRNA degradation rate $\gamma$ are assumed to be the same for all data points $i$, while $k^R_{\mathrm{on}}$ is allowed to vary across data points. This reflects the fact that $k^R_{\mathrm{on}}$ is a function of the lac repressor concentration, which is varied across data points.

Instead of $k^R_{\mathrm{on}}$, we actually learn a transformed parameter $p_{\mathrm{off}} = 1/(1 + k^R_{\mathrm{on}})$, which is the probability for the promoter to be in the OFF state. This choice is based on our observation that the gradient of the loss function with respect to $p_{\mathrm{off}}$ is numerically more stable than the gradient with respect to $k^R_{\mathrm{on}}$.

The parameters are initialized randomly as follows:

  • r is initialized to a random number in [0, 100].

  • γ is initialized to a random number in [0, 10].

  • The poff values, which depend on the index i of the data points, are initially set as linearly spaced points within the range [0.03, 0.97].

The hyperparameters used for the simulations are as follows:

  • Number of simulations: 200

  • Simulation time: 0.2

  • Steepness of the sigmoid function: a1=200.0

  • Sharpness of the Gaussian function: b1=20.0

  • Steepness of the sigmoid in propensities: c=20.0

The parameters are iteratively updated to minimize the loss function. During each iteration of the optimization, the following steps are performed:

  1. Forward simulation: The DGA is used to simulate the system, generating predictions for the mean mRNA levels and their standard deviations.

  2. Loss calculation: The loss function is computed based on the differences between the simulated and experimentally measured values of the mean mRNA levels and standard deviations (see Equation F1).

  3. Gradient calculation: The gradients of the loss function with respect to the parameters r, γ, and konR are calculated using backpropagation.

  4. Parameter update: The ADAM optimizer updates the parameters in the direction that reduces the loss function. ADAM adjusts the learning rates based on the history of gradients and their moments. The learning rate used is 0.1.

The values of the parameters and the loss value are saved after each iteration. The parameter values corresponding to the minimum loss after convergence are picked at the end.
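Putting these steps together, the optimization loop might look roughly as follows (a sketch, not the released implementation: m_exp and f_exp are tensors holding the experimental means and Fano factors, dga_forward is a placeholder for the batched forward simulation of step 1, and n_steps is the number of optimization iterations):

import torch

n_points = len(m_exp)

# Learnable parameters, initialized as described above.
r = (100.0 * torch.rand(1)).requires_grad_()
gamma = (10.0 * torch.rand(1)).requires_grad_()
p_off = torch.linspace(0.03, 0.97, n_points).requires_grad_()

optimizer = torch.optim.Adam([r, gamma, p_off], lr=0.1)

for step in range(n_steps):
    optimizer.zero_grad()
    kon_R = 1.0 / p_off - 1.0              # recover konR from the learned poff
    # Step 1: forward DGA simulations (placeholder for the batched simulation).
    m_hat, sigma_hat = dga_forward(kon_R, r, gamma)
    # Step 2: loss of Equation F1.
    loss = ((m_hat - m_exp) ** 2).sum() + ((sigma_hat - torch.sqrt(f_exp * m_exp)) ** 2).sum()
    loss.backward()                        # step 3: gradients via backpropagation
    optimizer.step()                       # step 4: ADAM update

In practice the updates may need to be constrained so that the rates stay positive and p_off remains in (0, 1); this sketch omits such safeguards.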

Goodness of fit

To quantitatively assess the goodness of the fit, we define the mean percentage error (MPE) as follows:

(F2) $\mathrm{MPE} = \dfrac{100\%}{N}\sum_{i=1}^{N}\dfrac{\left|f^m_i - \hat{f}^m_i\right|}{f^m_i},$

where $\hat{f}^m_i$ is the predicted Fano factor for the $i$th data point. This metric provides a measure of the average discrepancy between the predicted and experimental Fano factors, expressed as a percentage of the experimental values.
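For completeness, a one-function sketch of this metric (f_exp and f_pred are placeholders for arrays of experimental and predicted Fano factors):

import numpy as np

def mean_percentage_error(f_exp, f_pred):
    # Equation F2: average relative discrepancy, expressed as a percentage.
    return 100.0 * np.mean(np.abs(f_exp - f_pred) / f_exp)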

Error bars

The error bars on the ratio $\hat{r}/\hat{\gamma}$ are obtained by applying standard error propagation to the standard deviations $\delta r$ and $\delta\gamma$ of the individual estimates $\hat{r}$ and $\hat{\gamma}$. The propagated error is given by:

(F3) $\delta_{r/\gamma} = \dfrac{\hat{r}}{\hat{\gamma}}\sqrt{\left(\dfrac{\delta r}{\hat{r}}\right)^2 + \left(\dfrac{\delta\gamma}{\hat{\gamma}}\right)^2},$

where $\delta r$ and $\delta\gamma$ are the standard deviations of $\hat{r}$ and $\hat{\gamma}$, respectively. These standard deviations are obtained from the curvature of the loss function, as discussed earlier.
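A corresponding sketch of the propagation formula:

def ratio_error(r_hat, gamma_hat, delta_r, delta_gamma):
    # Equation F3: propagated standard deviation of the ratio r/gamma.
    return (r_hat / gamma_hat) * ((delta_r / r_hat) ** 2 + (delta_gamma / gamma_hat) ** 2) ** 0.5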

Data availability

All code for the Differentiable Gillespie Algorithm is freely available on GitHub at https://github.com/Emergent-Behaviors-in-Biology/Differentiable-Gillespie-Algorithm (copy archived at Rijal, 2025). The study uses previously published experimental datasets from Jones et al., 2014. This dataset is publicly available through Science at https://doi.org/10.1126/science.1255301. No additional proprietary or restricted datasets were used in this study.


Article and author information

Author details

  1. Krishna Rijal

    Department of Physics, Boston University, Boston, United States
    Contribution
    Conceptualization, Data curation, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing – original draft
    For correspondence
    krishnarijal331@gmail.com
    Competing interests
    No competing interests declared
ORCID iD: 0000-0001-7236-7387
  2. Pankaj Mehta

    Department of Physics, Boston University, Boston, United States
    Contribution
    Conceptualization, Supervision, Funding acquisition, Validation, Investigation, Methodology, Project administration, Writing – review and editing
    Competing interests
    No competing interests declared
ORCID iD: 0000-0003-1290-5897

Funding

National Institute of General Medical Sciences (R35GM119461)

  • Pankaj Mehta

Chan Zuckerberg Initiative (Investigator grant)

  • Pankaj Mehta

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

This work was supported by NIH NIGMS R35GM119461 to PM and a Chan Zuckerberg Initiative Investigator grant to PM. The authors also acknowledge support from the Shared Computing Cluster (SCC) administered by Boston University Research Computing Services. We thank the Mehta and Kondev groups for useful discussions.


Cite all versions

You can cite all versions using the DOI https://doi.org/10.7554/eLife.103877. This DOI represents all versions, and will always resolve to the latest one.

Copyright

© 2025, Rijal and Mehta

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


