A mechanism for hunchback promoters to readout morphogenetic positional information in less than a minute
Abstract
Cell fate decisions in the fly embryo are rapid: hunchback genes decide in minutes whether nuclei follow the anterior/posterior developmental blueprint by reading out positional information in the Bicoid morphogen. This developmental system is a prototype of regulatory decision processes that combine speed and accuracy. Traditional arguments based on fixed-time sampling of Bicoid concentration indicate that an accurate readout is impossible within the experimental times. This raises the general issue of how speed-accuracy tradeoffs are achieved. Here, we compare fixed-time to on-the-fly decisions, based on comparing the likelihoods of anterior/posterior locations. We found that these more efficient schemes complete reliable cell fate decisions within the short embryological timescales. We discuss the influence of promoter architectures on decision times and error rates, present concrete examples that rapidly readout the morphogen, and predictions for new experiments. Lastly, we suggest a simple mechanism for RNA production and degradation that approximates the log-likelihood function.
Introduction
From development to chemotaxis and immune response, living organisms make precise decisions based on limited information cues and intrinsically noisy molecular processes, such as the readout of ligand concentrations by specialized genes or receptors (Houchmandzadeh et al., 2002; Perry et al., 2012; Takeda et al., 2012; Marcelletti and Katz, 1992; Bowsher and Swain, 2014). Selective pressure in biological decision-making is often strong, for reasons that range from predator evasion to growth maximization or fast immune clearance. In development, early embryogenesis of insects and amphibians unfolds outside of the mother, which arguably imposes selective pressure for speed to limit the risks of predation and infection by parasitoids (O'Farrell, 2015). In Drosophila embryos, the first 13 cycles of DNA replication and mitosis occur without cytokinesis, resulting in a multinucleated syncytium containing about 6000 nuclei (O'Farrell et al., 2004). Speed is witnessed both by the rapid and synchronous cleavage divisions observed over the cycles, and the successive fast decisions on the choice of differentiation blueprints, which are made in less than 3 min (Lucas et al., 2018).
In the early fly embryo, the map of the future body structures is set by the segmentation gene hierarchy (Nüsslein-Volhard et al., 1984; Houchmandzadeh et al., 2002; Jaeger, 2011). The definition of the positional map starts by the emergence of two (anterior and posterior) regions of distinct hunchback (hb) expression, which are driven by the readout of the maternal Bicoid (Bcd) morphogen gradient (Houchmandzadeh et al., 2002, Figure 1a). hunchback spatial profiles are sharp and the variance in hunchback expression of nuclei at similar positions along the AP axis is small (Desponds et al., 2016; Lucas et al., 2018). Taken together, these observations imply that the short-time readout is accurate and has a low error. Accuracy ensures spatial resolution and the correct positioning of future organs and body structures, while low errors ensure reproducibility and homogeneity among spatially close nuclei. The amount of positional information available at the transcriptional locus is close to the minimal amount necessary to achieve the required precision (Gregor et al., 2007b; Porcher et al., 2010; Garcia et al., 2013; Petkova et al., 2019). Furthermore, part of the morphogenetic information is not accessible for reading by downstream mechanisms (Tikhonov et al., 2015), as information is channeled and lost through subsequent cascades of gene activity. In spite of that, by the end of nuclear cycle 14 the positional information encoded in the gap gene readouts is sufficient to correctly predict the position of each nucleus within 2% of the egg length (Petkova et al., 2019). Adding to the time constraints, mitosis resets the binding of transcription factors (TF) during the phase of synchronous divisions (Lucas et al., 2018), suggesting that the decision about the nuclei’s position is made by using information accessible within one nuclear cycle. Experiments additionally show that during the nuclear cycles 10–13 the positional information encoded by the Bicoid gradient is read out by hunchback promoters precisely and within 3 min (Lucas et al., 2018).
Effective speed-accuracy tradeoffs are not specific to developmental processes, but are shared by a large number of sensing processes (Rinberg et al., 2006; Heitz and Schall, 2012; Chittka et al., 2009). This generality has triggered interest in quantitative limits and mechanisms for accuracy. Berg and Purcell derived the seminal tradeoff between integration time and readout accuracy for a receptor evaluating the concentration of a ligand (Berg and Purcell, 1977) based on its average binding occupancy. Later studies showed that this limit takes more complex forms when rebinding events of detached ligands (Bialek and Setayeshgar, 2005; Kaizu et al., 2014) or spatial gradients (Endres and Wingreen, 2008) are accounted for. The accuracy of the averaging method in Berg and Purcell, 1977 can be improved by computing the maximum likelihood estimate of the time series of receptor occupancy for a given model (Endres and Wingreen, 2009; Mora and Wingreen, 2010). However, none of these approaches can result in a precise anterior-posterior (AP) decision for the hunchback promoter in the short time of the early nuclear cycles, which has led to the conclusion that there is not enough time to apply the fixed-time Berg-Purcell strategy with the desired accuracy (Gregor et al., 2007a). Additional mechanisms to increase precision (including internuclear diffusion) do yield a speed-up (Erdmann et al., 2009; Aquino et al., 2016), yet they are not sufficient to meet the 3-min challenge. The issue of the embryological speed-accuracy tradeoff remains open.
The approaches described above are fixed-time strategies of decision-making, that is, they assume that decisions are made at a pre-defined deterministic time T that is set long enough to achieve the desired level of error and accuracy. As a matter of fact, fixing the decision time is not optimal because the amount of available information depends on the specific statistical realization of the noisy signal that is being sensed. The time of decision should vary accordingly and therefore depend on the realization. This intuition was formalized by A. Wald by his Sequential Probability Ratio Test (SPRT) (Wald, 1945a). SPRT achieves optimality in the sense that it ensures the fastest decision strategy for a given level of expected error. The adaption of the method to biological sensing posits that the cell discriminates between two hypothetical concentrations by accumulating information through binding events, and by computing on the fly the ratio of the likelihoods (or appropriate proxies) for the two concentrations to be discriminated (Siggia and Vergassola, 2013). When the ratio ‘strongly’ favors one of the two hypotheses, a decision is triggered. The strength of the evidence required for decision-making depends on the desired level of error. For a given level of precision, the average decision time can be substantially shorter than for traditional averaging algorithms (Siggia and Vergassola, 2013). SPRT has also been proposed as an efficient model of decision-making in the fields of social interactions and neuroscience (Gold and Shadlen, 2007; Marshall et al., 2009) and its connections with non-equilibrium thermodynamics are discussed in Roldán et al., 2015.
Wald’s approach is particularly appealing for biological concentration readouts since many of them, including the anterior-posterior decision faced by the hunchback promoter, appear to be binary decisions. Our first goal here is to specifically consider the paradigmatic example of the hunchback promoter and elucidate the degree of speed-up that can be achieved by decisions on the fly. Second, we investigate specific implementations of the decision strategy in the form of possible hunchback promoter architectures. We specifically ask how cooperative TF binding affects the sensing limits. Our results have implications beyond fly development and are generally relevant to regulatory processes. We identify promoter architectures that, by approximating Wald’s strategy, do satisfy several key experimental constraints and reach the experimentally observed level of accuracy of hunchback expression within the (apparently) very stringent time limits of the early nuclear cycles.
Results
Methodological setup
The decision process of the anterior vs posterior hunchback expression
The problem faced by nuclei in their decision of anterior vs posterior developmental fate is sketched in Figure 1a. By decision we mean that nuclei commit to a cell fate through a process that is mainly irreversible leading to one of two classes of cell states that correspond to either the anterior or the posterior regions of the embryo, based on positional information acquired through gene activity. We limit our investigation of promoter architectures to the six experimentally identified Bicoid-binding sites (Figure 1b). We do not consider the known Hunchback-binding sites because before nuclear cycle 13 there is little time to produce sufficient concentrations of zygotic proteins for a significant feedback effect and the measured maternal hunchback profile has not been shown to alter anterior-posterior decision-making. Following the observation that Bcd readout is the leading factor in nuclei fate determination (Ochoa-Espinosa et al., 2009), we also neglect the role of other maternal gradients, for example Caudal, Zelda or Capicua (Jiménez et al., 2000; Sokolowski et al., 2012; Tran et al., 2018; Lucas et al., 2018), since the readout of these morphogens can only contribute additional information and decrease the decision time. We focus on the proximal promoter since no active enhancers have been identified for the hunchback locus in nuclear cycles 11–13 (Perry et al., 2011). Our results can be generalized to enhancers (Hannon et al., 2017), the addition of which only further improves the speed-accuracy efficacy, as we explicitly show for a simple model of Bicoid activated enhancers in the section ’Joint dynamics of Bicoid enhancer and promoter’. Since our goal is to show that accurate decisions can be made rapidly, we focus on the worst case decision-making scenario: the positional information (Wolpert et al., 2015) is gathered through a readout of the Bicoid concentration only, and the decision is assumed to be made independently in each nucleus. Having additional information available and/or coupling among nuclei can only strengthen our conclusion.
The profile of the average concentration of the maternal morphogen Bicoid is well represented by an exponential function that decreases from the anterior toward the posterior of the embryo: L(x) = L0 exp(−(x − x0)/λ), where x is the position along the anterior-posterior axis measured in terms of percentage egg-length (EL), and x0 is the position of half maximum hb expression, corresponding to the Bcd concentration L0. The decay length λ corresponds to 20% EL (Gregor et al., 2007a). Nuclei convert the graded Bicoid gradient into a sharp border of hunchback expression (Figure 1a), with high and low expression of the hunchback promoter at the left and the right of the border, respectively (Driever and Nüsslein-Volhard, 1988; Struhl et al., 1992; Crauk and Dostatni, 2005; Gregor et al., 2007b; Houchmandzadeh et al., 2002; Porcher et al., 2010; Garcia et al., 2013; Lucas et al., 2013; Tran et al., 2018; Lucas et al., 2018). We define the border region of width δx symmetrically around x0 by the dashed lines in Figure 1a. δx is related to the positional resolution (Erdmann et al., 2009; Tran et al., 2018) of the anterior-posterior decision: it is the minimal distance, measured in percentages of egg-length, between two nuclei’s positions at which the nuclei can distinguish the Bcd concentrations. Although this value is not known exactly, a lower bound is estimated as δx ≈ 2% EL (Gregor et al., 2007a), which corresponds to the width of one nucleus.
We denote the Bcd concentration at the anterior (respectively posterior) boundary of the border region by L1 (respectively L2) (see Figure 1a). At each position x, nuclei compare the probability that the local concentration is greater than L1 (anterior) or smaller than L2 (posterior). By using current best estimates of the parameters (see Appendix 1), a classic fixed-time-decision integration process and an integration time of 270 s (the duration of the interphase in nuclear cycle 11 [Tran et al., 2018]), we compute in Figure 1c the probability of error per nucleus for each position in the embryo (see Appendix 2 for details). As expected, errors occur overwhelmingly in the vicinity of the border region, where the decision is the hardest (Figure 1c). For nuclei located within the border region, both anterior and posterior decisions are correct since the nuclei lie close to both regions. It follows that, although the error rate can formally be computed in this region, the errors do not describe positional resolution mistakes and do not contribute to the total error (white zone in Figure 1c).
In view of Figure 1c and to simplify further analysis, we shall focus on the boundaries of the border region: each nucleus discriminates between hypothesis 1 – the Bcd concentration is L1 – and hypothesis 2 – the Bcd concentration is L2. To achieve a positional resolution of 2% EL, nuclei need to be able to discriminate between differences in Bcd concentrations on the order of 10%. In addition to the variation in Bcd concentration estimates that is due to biological precision, a concentration estimated using many trials follows a statistical distribution. The central limit theorem suggests that this distribution is approximately Gaussian. Under this assumption, the probability that the Bcd concentration estimate deviates from the actual concentration by more than the prescribed 10% positional resolution is 32% (see the subsection 'How many nuclei make a mistake?’ for variations on this value and arguments on the error rate). In Figure 1d, we show that the time required under a fixed-time-decision strategy for a promoter with six binding sites to estimate the Bcd concentration within the 32% Gaussian error rate (Gregor et al., 2007a) close to the boundary is much longer than 270 s (see Appendix 2 for details of the calculation). The activation rule for the promoter architecture in the figure is that all binding sites need to be bound for transcription initiation.
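To keep the numbers above concrete, the short Python sketch below recomputes the two quantities quoted in this subsection from the assumed gradient parameters (a 20% EL decay length and a 2% EL nuclear spacing) and from the one-standard-deviation Gaussian criterion. It is only an illustration of the arithmetic, not part of the original analysis.

```python
import numpy as np
from scipy.stats import norm

lam = 20.0   # Bcd gradient decay length, in % egg length (EL)
dx = 2.0     # positional resolution (roughly one nuclear width), in % EL

# Relative Bcd concentration difference between two nuclei separated by dx
# on an exponential gradient: L(x) / L(x + dx) = exp(dx / lam).
rel_diff = np.exp(dx / lam) - 1.0
print(f"relative concentration difference: {rel_diff:.1%}")   # ~10.5%

# Probability that a Gaussian estimate deviates from its mean by more than
# one standard deviation -- the 32% error level adopted in the text.
print(f"P(|z| > 1) = {2 * (1 - norm.cdf(1)):.0%}")
```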
Identifying fast decision promoter architectures
The promoter model
We model the hb promoter as six Bcd binding sites (Schröder et al., 1988; Driever et al., 1989; Struhl et al., 1989; Ochoa-Espinosa et al., 2005) that determine the activity of the gene (Figure 2a). Bcd molecules bind to and unbind from each of the sites with rates μi and νi, which are allowed to be different for each site. For simplicity, the gene can only take two states : either it is silenced (OFF) and mRNA is not produced, or the gene is activated (ON) and mRNA is produced at full speed. While models that involve different levels of polymerase loading are biologically relevant and interesting, the simplified model allows us to gain more intuition and follows the worst-case scenario logic that we discussed in the previous subsection ’The decision process of the anterior vs posterior hunchback expression’. The same remark applies for the wide variety of promoter architectures considered in previous works (Estrada et al., 2016; Tran et al., 2018). In particular, we assume that only the number of sites that are bound matters for gene activation (and not the specific identity of the sites). Such architectures are again a subset of the range of architectures considered in Estrada et al., 2016; Tran et al., 2018.
The dynamics of our model is a Markov chain with seven states with probability Pi corresponding to the number of sites occupied: from all sites unoccupied (probability P0) to all six sites bound by Bcd molecules (probability P6). The minimum number k of bound Bicoid sites required to activate the gene divides this chain into the two disjoint and complementary subsets of active states (Pk, …, P6, for which the gene is activated) and inactive states (P0, …, Pk−1, for which the gene is silenced), as illustrated in Figure 2b and d.
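The promoter model can be made concrete with a minimal Gillespie-style simulation of the occupancy chain. The sketch below uses illustrative, arbitrarily chosen rates (not the embryo's measured parameters) and a k-or-more activation rule to generate the alternating OFF/ON dwell times that carry the concentration information.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_promoter(L, mu, nu, k=3, t_max=3000.0):
    """Gillespie sketch of the (N+1)-state occupancy chain (N = len(mu) sites).
    With i sites bound, one more Bcd molecule binds at rate mu[i]*L and one
    unbinds at rate nu[i-1]; the gene is ON when at least k sites are bound.
    Returns the alternating OFF and ON dwell times."""
    i, t, t_switch = 0, 0.0, 0.0
    on = i >= k
    off_times, on_times = [], []
    while t < t_max:
        up = mu[i] * L if i < len(mu) else 0.0     # binding propensity
        down = nu[i - 1] if i > 0 else 0.0         # unbinding propensity
        t += rng.exponential(1.0 / (up + down))
        i += 1 if rng.random() < up / (up + down) else -1
        if (i >= k) != on:                         # the macro-state just switched
            (on_times if on else off_times).append(t - t_switch)
            t_switch, on = t, not on
    return off_times, on_times

off_times, on_times = simulate_promoter(L=1.0, mu=[0.1] * 6, nu=[0.1] * 6, k=3)
print(np.mean(off_times), np.mean(on_times))
```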
As Bicoid ligands bind and unbind the promoter (Figure 2b), the gene is successively activated and silenced (Figure 2c). This binding/unbinding dynamics results in a series of OFF and ON activation times that constitute all the information about the Bcd concentration available to downstream processes to make a decision. We note that the idea of translating the statistics of binding-unbinding times into a decision remains the same as in the Berg-Purcell approach, where only the activation times are translated into a decision (and not the deactivation times). The promoter architecture determines the relationship between Bcd concentration and the statistics of the ON-OFF activation time series, which makes it a key feature of the positional information decision process. Following Siggia and Vergassola, 2013, we model the decision process as a Sequential Probability Ratio Test (SPRT) based on the time series of gene activation. At each point in time, SPRT computes the likelihood P of the observed time series under both hypotheses, L1 and L2, and takes their ratio: R(t) = P(observed time series | L1) / P(observed time series | L2). The logarithm of R(t) undergoes stochastic changes until it reaches one of the two decision threshold boundaries, K or −K (symmetric boundaries are used here for simplicity) (Figure 2d). The decision threshold boundaries K are set by the error rate e for making the wrong decision between the hypothetical concentrations: K = log((1 − e)/e) (see Siggia and Vergassola, 2013 and Appendix 3). The choice of K or e depends on the level of reproducibility desired for the decision process. We set e = 0.32, corresponding to the widely used error rate of being more than one standard deviation away from the mean of the estimate for the concentration, assumed to be unbiased in a Gaussian model (see the subsection ’The decision process of the anterior vs posterior hunchback expression’). The statistics of the fluctuations in likelihood space are controlled by the values of the Bcd concentrations: when the Bcd concentration is low, small numbers of Bicoid ligands bind to the promoter (Figure 2e) and the hb gene spends little time in the active expression state (Figure 2f), which leads to a negative drift in the process and favors the lower one of the two possible concentrations (Figure 2g).
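For the simplest case of a single binding site, where only the OFF (waiting-to-bind) times depend on the Bcd concentration, the SPRT scheme described above reduces to a few lines. The rates, concentrations and error level below are assumed placeholder values, not the embryo's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sprt_decision(L_true, L1, L2, k_on=0.06, nu=0.5, err=0.32):
    """Minimal SPRT sketch for one binding site: OFF times ~ Exp(k_on * L) carry
    the concentration information; ON times ~ Exp(nu) are concentration
    independent and drop out of the log-likelihood ratio."""
    K = np.log((1 - err) / err)            # decision thresholds at +K and -K
    logR, t = 0.0, 0.0
    while abs(logR) < K:
        t_off = rng.exponential(1.0 / (k_on * L_true))
        t_on = rng.exponential(1.0 / nu)
        t += t_off + t_on
        # log P(t_off | L1) - log P(t_off | L2) for exponential waiting times
        logR += np.log(L1 / L2) - k_on * (L1 - L2) * t_off
    return ("anterior (L1)" if logR >= K else "posterior (L2)"), t

print(sprt_decision(L_true=1.1, L1=1.1, L2=0.9))
```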
Mean decision time: connecting drift-diffusion and Wald’s approaches
In this section, we develop new methods to determine the statistics of gene switches between the OFF and ON expression states. Namely, by relating Wald’s approach (Wald, 1945a) to drift-diffusion, we establish the equality between the drift and diffusion coefficients in decision-making space for difficult decision problems, that is, when the discrimination is hard, and we elucidate the reason underlying this equality. That allows us to effectively determine long-term properties of the likelihood log-ratio and compute mean decision times for complex architectures.
A gene architecture consists of N binding sites and is represented by N + 1 Markov states corresponding to the number of bound TFs, and the rates at which they bind or unbind (Figure 3a). For a given architecture, the dynamics of binding/unbinding events and the rules for activation define the two probability distributions P(t_off|L) and P(t_on|L) for the duration of the OFF and ON times, respectively (Figure 3b). The two series are denoted {t_off,1, …, t_off,n(t)} and {t_on,1, …, t_on,m(t)}, where n(t) and m(t) are the number of switching events in time t from OFF to ON and vice versa. For those cases where the two concentrations L1 and L2 are close and the discrimination problem is difficult (which is the case of the Drosophila embryo), an accurate decision requires sampling over many activation and deactivation events to achieve discrimination. The logarithm of the ratio R(t) can then be approximated by a drift–diffusion equation: d log R/dt = V + √(2D) η(t), where V is the constant drift, that is, the bias in favor of one of the two hypotheses, D is the diffusion constant and η(t) a standard Gaussian white noise with zero mean and delta-correlated in time (Wald, 1945a; Siggia and Vergassola, 2013). The decision time for the case of symmetric boundaries ±K is given by the mean first-passage time for this biased random walk in the log-likelihood space (Redner, 2001; Siggia and Vergassola, 2013):

$$\langle T_{\mathrm{decide}} \rangle = \frac{K}{V}\,\tanh\!\left(\frac{K V}{2 D}\right). \qquad (1)$$
Note that in this approximation all the details of the promoter architecture are subsumed into the specific forms of the drift V and the diffusion D.
Drift. We assume for simplicity that the time series of OFF and ON times are independent variables (see Appendix 8 for the case when this assumption is relaxed). This assumption is in particular always true when gene activation only depends on the number of bound Bicoid molecules. Under these assumptions, we can apply Wald’s equality (Wald, 1945b; Durrett, 2010) to the log-likelihood ratio log R(t). Wald considered the sum of a random number M of independent and identically distributed (i.i.d.) variables x_i. The equality that he derived states that if M is independent of the outcome of variables with higher indices (i.e. M is a stopping time), then the average of the sum is the product ⟨M⟩⟨x⟩.
Wald’s equality applies to our log-likelihood sum: log R(t) is the sum of the log-likelihood ratios of the M individual ON and OFF times, where M is the number of ON and OFF times before a given (large) time t. We conclude that the drift of the log-likelihood ratio, V = ⟨log R(t)⟩/t, is inversely proportional to ⟨t_on⟩ + ⟨t_off⟩, where ⟨t_on⟩ is the mean of the distribution of ON times P(t_on|L) and ⟨t_off⟩ is the mean of the distribution of OFF times P(t_off|L). The term ⟨t_on⟩ + ⟨t_off⟩ determines the average speed at which the system completes an activation/deactivation cycle, while the average log-likelihood gain per cycle describes how much deterministic bias the system acquires on average per activation/deactivation cycle. The latter can be re-expressed in terms of the Kullback-Leibler divergence between the distributions of the OFF and ON times calculated for the actual concentration L and each one of the two hypotheses, L1 and L2:

$$V = \frac{D_{KL}\!\left[P(t_{\mathrm{off}}|L)\,\|\,P(t_{\mathrm{off}}|L_2)\right] - D_{KL}\!\left[P(t_{\mathrm{off}}|L)\,\|\,P(t_{\mathrm{off}}|L_1)\right] + D_{KL}\!\left[P(t_{\mathrm{on}}|L)\,\|\,P(t_{\mathrm{on}}|L_2)\right] - D_{KL}\!\left[P(t_{\mathrm{on}}|L)\,\|\,P(t_{\mathrm{on}}|L_1)\right]}{\langle t_{\mathrm{off}}\rangle + \langle t_{\mathrm{on}}\rangle}. \qquad (2)$$
Equation 2 quantifies the intuition that the drift favors the hypothetical concentration with the time distribution which is the closest to that of the real concentration L (Figure 3b).
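As a minimal illustration of Equation 2, the snippet below evaluates the drift for a single binding site whose OFF times are exponential with a rate proportional to the concentration; the ON times are concentration-independent in this case, so their Kullback-Leibler terms cancel. The rates are assumed for illustration only.

```python
import numpy as np

def kl_exp(a, b):
    """KL divergence between exponential waiting-time densities with rates a, b."""
    return np.log(a / b) + b / a - 1.0

def drift_single_site(L, L1, L2, k_on, nu):
    """Drift V (Equation 2) for one binding site: OFF times ~ Exp(k_on * L),
    ON times ~ Exp(nu). Only the OFF-time KL terms contribute."""
    mean_cycle = 1.0 / (k_on * L) + 1.0 / nu             # <t_off> + <t_on>
    return (kl_exp(k_on * L, k_on * L2) - kl_exp(k_on * L, k_on * L1)) / mean_cycle

# A nucleus sitting at the anterior edge of the border region (true L = L1):
V = drift_single_site(L=1.1, L1=1.1, L2=0.9, k_on=0.06, nu=0.5)
print(V)   # positive: evidence accumulates, on average, in favor of hypothesis 1
```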
Diffusivity : Why it is more involved to calculate and how we circumvent it. While the drift has the closed simple form in Equation 2, the diffusion term is not immediately expressed as an integral. The qualitative reason is as follows. Computing the likelihood of the two hypotheses requires computing a sum where the addends are stochastic (ratios of likelihoods) and the number of terms is also stochastic (the number of switching events). These two random variables are correlated: if the number of switching events is large, then the times are short and the likelihood is probably higher for large concentrations. While the drift is linear in the above sum (so that the average of the sum can be treated as shown above), the diffusivity depends on the square of the sum. The diffusivity involves then the correlation of times and ratios (Carballo-Pacheco et al., 2019), which is harder to obtain as it depends a priori on the details of the binding site model (see the subsection ’Equality between drift and diffusivity’ and the subsection ’When are correlations between the times of events leading to decision important?’ of Appendix 3 for details).
We circumvent the calculation of the diffusivity by noting that the same methods used to derive Equation (1) also yield the probability of first absorption at one of the two boundaries, say +K (see the subsection ’Equality between drift and diffusivity’ of Appendix 3):

$$P_{+K} = \frac{1}{1 + e^{-K V / D}}. \qquad (3)$$
By imposing P_{+K} = 1 − e, we obtain K = (D/V) log((1 − e)/e), and the comparison with Wald’s expression K = log((1 − e)/e) leads to the equality V = D.
The above equality is expected to hold for difficult decisions only. Indeed, drift-diffusion is based on the continuity of the log-likelihood process and Wald’s arguments assume the absence of substantial jumps in the log-likelihood over a cycle. In other words, the two approaches overlap if the hypotheses to be discriminated are close. For very distinct hypotheses (easy discrimination problems), the two approaches may differ from the actual discrete process of decision and among themselves. We expect then that V = D holds only for hypotheses that are close enough, which is verified by explicit examples (see the subsection ’Equality between drift and diffusivity’ of Appendix 3). The same Appendix subsection also verifies V = D by expanding the general expressions of V and D for close hypotheses. The origin of the equality is discussed below.
Using V = D, we can reduce the general formula Equation (1) to

$$\langle T_{\mathrm{decide}} \rangle = \frac{K}{V}\,(1 - 2e) = \frac{(1 - 2e)}{V}\,\log\!\left(\frac{1-e}{e}\right), \qquad (4)$$
which is formula 4.8 in Wald, 1945a and it is the expression that we shall be using (unless stated otherwise) in the remainder of the paper.
The additional consequence of the equality V = D is that the argument of the hyperbolic tangent in Equation (1) is K/2. It follows that for any problem where the error e ≪ 1, the argument of the hyperbolic tangent is large and the decision time is weakly dependent on deviations from V = D that occur when the two hypotheses differ substantially. A concrete illustration is provided in the subsection ’The first passage time to decision’ of Appendix 3.
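The following small numerical check illustrates how Equation 4 follows from Equation 1 once V = D is imposed; the drift value is an arbitrary placeholder.

```python
import numpy as np

def mean_decision_time_eq1(V, D, err):
    """Equation 1: mean first-passage time of the drift-diffusion between +/-K."""
    K = np.log((1 - err) / err)
    return (K / V) * np.tanh(K * V / (2 * D))

def mean_decision_time_eq4(V, err):
    """Equation 4 (Wald): the same expression once the equality V = D is used."""
    K = np.log((1 - err) / err)
    return K * (1 - 2 * err) / V

V = 0.003   # illustrative drift of the log-likelihood (1/s), e.g. from Equation 2
print(mean_decision_time_eq1(V, V, err=0.32))   # identical when D = V
print(mean_decision_time_eq4(V, err=0.32))
```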
Single binding site example. As an example of the above equations, we consider the simplest possible architecture with a single binding site (N = 1), where the gene activation and de-activation processes are Markovian. In this case, the de-activation rate ν is independent of the TF concentration, while the activation (OFF) times are exponentially distributed, P(t_off|L) = μ exp(−μ t_off), with a rate μ proportional to the TF concentration L. We can explicitly calculate the drift (Equation 2) and expand it for close hypothetical concentrations L1 and L2 with relative difference δL/L = (L1 − L2)/L, at leading order in δL/L. Inserting the resulting expression into Equation 4, we conclude that the mean decision time,

$$\bar{T} \simeq \frac{2\,(1-2e)}{(\delta L/L)^2}\,\log\!\left(\frac{1-e}{e}\right)\left(\frac{1}{\mu} + \frac{1}{\nu}\right),$$

decreases with increasing relative TF concentration difference δL/L and gives a very good approximation of the complete formula (see Appendix 3—figure 2a–c for different parameter values).
Equations 2 and 4 greatly reduce the complexity of evaluating the performance of architectures, especially when the number of binding sites is large. Alternatively, computing the correlation of times and log-likelihoods would become increasingly demanding as the size of the gene architecture transfer matrices increases. As an illustration, Figure 3 compares the performance of different activation strategies: the 2-or-more rule (k = 2), which requires at least two Bcd-binding sites to be occupied for hb promoter activation (Figure 3a–d in blue), and the 4-or-more rule (k = 4) (Figure 3a–d in red), for fixed binding and unbinding parameters. Figure 3c shows that stronger drifts lead to faster decisions. The full decision time probability distribution is computed from the explicit formula for its Laplace transform (Siggia and Vergassola, 2013; Figure 3d). With the rates chosen for Figure 3, the k = 2 rule leads to an ON time distribution that varies strongly with the concentration, making it easier to discriminate between similar concentrations: it results in a stronger average drift that leads to a faster decision than the k = 4 rule (Figure 3d).
What is the origin of the equality V = D? The special feature of the SPRT random process is that it pertains to a log-likelihood. This is at the core of the equality that we found above. First, note that the equality is dimensionally correct because log-likelihoods have no physical dimensions, so that both V and D have units of inverse time. Second, and more important, log-likelihoods are built by Bayesian updating, which constrains their possible variations. In particular, given the current likelihoods of the observations at time t for the two concentrations L1 and L2 and the respective probabilities P1 and P2 of the two hypotheses, it must be true that the expected values of these probabilities after a certain time dt remain the same if the expectation is taken with respect to the current ones (see, e.g. Reddy et al., 2016). In formulae, this implies that the average variation of the probability P1 over a given time dt, that is

$$\langle dP_1 \rangle = P_1 \langle dP_1 \rangle_1 + P_2 \langle dP_1 \rangle_2,$$
should vanish (see the subsection 'Equality between drift and diffusivity’ of Appendix 3 for a derivation). Here, ⟨dP1⟩1 is the expected variation of P1 under the assumption that hypothesis 1 is true and ⟨dP1⟩2 is the same quantity but under the assumption that hypothesis 2 is true. We notice now that P1 = e^x/(1 + e^x), where x = log(P1/P2) is the log-likelihood ratio, and that the drift-diffusion of the log-likelihood implies that ⟨dx⟩1 = V dt, ⟨dx⟩2 = −V dt and ⟨(dx)²⟩1,2 = 2D dt. By using that ⟨dP1⟩i = (∂P1/∂x)⟨dx⟩i + (1/2)(∂²P1/∂x²)⟨(dx)²⟩i and that ∂P1/∂x = P1P2 and ∂²P1/∂x² = P1P2(P2 − P1), we finally obtain that

$$\langle dP_1 \rangle = P_1 P_2\,(P_1 - P_2)\,(V - D)\,dt,$$
and imposing ⟨dP1⟩ = 0 yields the equality V = D. Note that the above derivation holds only for close hypotheses, otherwise the velocity and the diffusivity under the two hypotheses do not coincide.
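The identity underlying this argument is short enough to verify symbolically. The sketch below (using sympy) confirms that the belief-averaged variation of P1 equals P1 P2 (P1 − P2)(V − D) dt, so that it vanishes exactly when V = D.

```python
import sympy as sp

x, V, D = sp.symbols('x V D', real=True)
P1 = sp.exp(x) / (1 + sp.exp(x))      # posterior probability of hypothesis 1
P2 = 1 - P1
dP1dx, d2P1dx2 = sp.diff(P1, x), sp.diff(P1, x, 2)

# Expected change of P1 per unit time under each hypothesis (Ito expansion,
# with <dx>_1 = V dt, <dx>_2 = -V dt and <dx^2>_{1,2} = 2 D dt).
dP1_h1 = dP1dx * V + d2P1dx2 * D
dP1_h2 = -dP1dx * V + d2P1dx2 * D

# Average over the current belief (P1, P2): must vanish for a Bayesian posterior.
avg = sp.simplify(P1 * dP1_h1 + P2 * dP1_h2)
print(sp.simplify(avg - P1 * P2 * (P1 - P2) * (V - D)))   # prints 0
```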
Additional embryological constraints on promoter architectures
In addition to the requirements imposed by their performance in the decision process (green dashed line in Figure 4a), promoter architectures are constrained by experimental observations and properties that limit the space of viable promoter candidates for the fly embryo. A discussion about their possible function and their relation to downstream decoding processes is deferred to the final section.
First, we require that the average probability for a nucleus to be active in the boundary region is equal to 0.5, as it is experimentally observed (Lucas et al., 2013; Figure 1a). This requirement mainly impacts and constrains the ratio between the binding rates μi and the unbinding rates νi.
Second, there is no experimental evidence for active search mechanisms of Bicoid molecules for their targets. It follows that, even in the best-case scenario of a Bcd ligand in the vicinity of the promoter always binding to the target, the binding rate is equal to the diffusion-limited arrival rate (Appendix 1). As a result, the binding rates are limited by diffusion arrivals and the number of available binding sites: μi ≤ (6 − i + 1) kD L (black dashed line in Figure 4b), where kD is the diffusion-limited arrival rate constant per binding site (Appendix 1) and L is the concentration of Bicoid. This sets the timescale for binding events. In Appendix 1, we explore the different measured values and estimates of parameters defining the diffusion limit and their influence on the decision time (see Appendix 1—table 1 for all the predictions).
Third, as shown in Figure 1, the hunchback response is sharp, as quantified by fitting a Hill function to the expression level vs position along the egg length. Specifically, the hunchback expression (in arbitrary units) is well approximated as a function of the Bicoid concentration by the Hill function L^H/(L^H + L0^H), where L0 is the Bcd concentration at the half-maximum hb expression point and H is the Hill coefficient (Figure 1a). Experimentally, the measured Hill coefficient for mRNA expression from the WT hb promoter is about 7 (Lucas et al., 2018; Tran et al., 2018). Recent work (Tran et al., 2018) suggests that these high values might not be achieved by Bicoid-binding sites only. Given current parameter estimates and an equilibrium binding model, Tran et al., 2018 show that a Hill coefficient of 7 is not achievable within the duration of an early nuclear cycle (a few minutes). That points at the contribution of other mechanisms to pattern steepness. For these reasons (and because we limit ourselves to a model with six equilibrium Bcd-binding sites), we shall explore the space of possible equilibrium promoter architectures while limiting the steepness of our profiles to Hill coefficients achievable with six equilibrium binding sites (H ≤ 6).
Numerical procedure for identifying fast decision-making architectures
Using Equations 2 and 4, we explore possible hb promoter architectures and activation rules to find the ones that minimize the time required for an accurate decision, given the constraints listed in the paragraph ‘Additional embryological constraints on promoter architectures’. We optimize over all possible binding rates μ1, …, μ6 (μ1 is the binding rate of the first Bcd ligand and μ6 the binding rate of the last Bcd ligand when five Bcd ligands are already bound to the promoter) and unbinding rates ν1, …, ν6 (ν1 is the unbinding rate of a single Bcd ligand bound to the promoter and ν6 is the unbinding rate when all six Bcd-binding sites are occupied). We also explore different activation rules by varying the minimal number of Bcd ligands k required for activation in the k-or-more activation rule (Estrada et al., 2016; Tran et al., 2018). We use the most recent estimates of biological constants for the hb promoter and Bcd diffusion (see Appendix 1) and set the error rate at the border to 32% (Gregor et al., 2007b; Petkova et al., 2019). Reasons for this choice were given in the subsection ‘The decision process of the anterior vs posterior hunchback expression’ and will be revisited in the subsection ‘How many nuclei make a mistake?’, where we shall introduce some embryological considerations on the number of nuclei involved in the decision process and determine the error probability accordingly. The optimization procedure that minimizes the average decision time for different values of k and H is implemented using a mixed strategy of multiple random starting points and steepest gradient descent (Figure 4a).
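To convey the flavor of this search, the sketch below runs the same strategy (multiple random restarts followed by local minimization, with the occupancy and diffusion-limit constraints enforced as soft penalties) on the single-binding-site case, for which the decision time is fully analytic. The diffusion cap, concentrations and error level are assumed placeholders; the actual search optimizes all six binding and six unbinding rates of the full model.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder problem: one binding site, hypotheses L1/L2, true concentration L1.
err, L1, L2, L0 = 0.32, 1.1, 0.9, 1.0
K = np.log((1 - err) / err)
k_max = 0.1          # assumed diffusion-limited binding-rate constant (per site)

def objective(params):
    k_on, nu = np.exp(params)        # optimize in log-space to keep rates positive
    # KL between Exp(k_on*L1) and Exp(k_on*L2) does not depend on k_on itself.
    kl = np.log(L1 / L2) + L2 / L1 - 1.0
    V = kl / (1.0 / (k_on * L1) + 1.0 / nu)          # Equation 2
    T = K * (1 - 2 * err) / V                        # Equation 4
    p_on = k_on * L0 / (k_on * L0 + nu)              # occupancy at the boundary
    penalty = 1e6 * (p_on - 0.5) ** 2 + 1e6 * max(0.0, k_on - k_max) ** 2
    return T + penalty

rng = np.random.default_rng(2)
best = None
for _ in range(20):                                  # multiple random restarts
    x0 = np.log(rng.uniform(1e-3, 1.0, size=2))
    res = minimize(objective, x0, method='Nelder-Mead')
    if best is None or res.fun < best.fun:
        best = res
print(np.exp(best.x), best.fun)   # binding pushed to the cap, nu set by p_on = 0.5
```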
Logic and properties of the identified fast decision architectures
The main conclusion we reach using the methodology presented in the 'Methodological setup' section is that there exist promoter architectures that reach the required precision within a few minutes and satisfy all the additional embryological constraints discussed previously (Figure 4a). The fastest promoters (blue crosses in Figure 4a) reach a decision within the time of nuclear cycle 11 (green line in Figure 4a) for a wide range of activation rules. Even imposing steep readouts (high Hill coefficients) allows us to identify relatively fast promoters, although imposing the nuclear cycle time limit pushes the activation rule to smaller k. Interestingly, we find that the fastest architectures identified perform well over a range of high enough concentrations (Appendix 4—figure 1). The optimal architectures differ mainly by the distribution of their unbinding rates (Figure 4b). We shall now discuss their properties, namely the binding times of Bicoid molecules to the DNA binding sites and the dependence of the promoter activity on various features, such as activation rules and the number of binding sites. Together, these results elucidate the logic underlying the process of fast decision-making.
How many nuclei make a mistake?
The precision of a stochastic readout process is defined by two parameters: the resolution of the readout, δx, and the probability of error, which sets the reproducibility of the readout. In Figure 4, we have used the statistical Gaussian error level (32%) to obtain our results. However, the error level sets a crucial quantity for a developing organism and it is important to connect it with the embryological process, namely how many nuclei across the embryo will fail to properly decide whether they are positioned in the anterior or in the posterior part of the embryo. To make this connection, for a given average decision time t we integrate the error probability along the AP axis to obtain the error per nucleus. The expected number of nuclei that fail to correctly identify their position is then obtained by summing this error per nucleus over the nuclei present in each nuclear cycle c, where we have neglected the loss due to yolk nuclei remaining in the bulk and arresting their divisions after cycle 10 (Foe and Alberts, 1983). Assuming a 270 s readout time – the total interphase time of nuclear cycle 11 (Tran et al., 2018) – for the fastest architecture identified above and an error rate of 32% at the boundary, we find that only a negligible number of nuclei across the whole embryo are expected to fail, that is, an essentially fail-proof mechanism. This number can be compared with the more than 30 nuclei in the embryo that make an error in a read-out in a Berg-Purcell-like fixed-time scheme (integrated blue area in Figure 1c).
Conversely, for a given architecture, reducing the error level drastically increases the mean first-passage time to decision: the mean time for decision as a function of the error rate for the fastest architecture identified above is shown in Appendix 2—figure 1. The decision can be made in about a minute for the 32% error rate but requires on average 10 min for much more stringent error rates (Appendix 2—figure 1). Note that, because the mean first-passage time depends simply on the inverse of the drift (Equation 4), the relative performance of two architectures is the same for any error rate, so that the fastest architectures identified in Figure 4a are valid for all error levels.
Just like for the fixed-time strategy (Figure 1c and d), nuclei located in the mid-embryo region are more likely to make mistakes and take longer on average to trigger a decision (Appendix 4—figure 2).
Residence times among the various states
As shown in Tran et al., 2018, high Hill coefficients in the hunchback response are associated with frequent visits of the extreme expression states where available binding sites are either all empty (state 0) or all occupied (state 6). Figure 4c provides a concrete illustration by showing the distribution of residence times for the promoter architectures that yield the fastest decision times for the k = 3 activation rule with no constraint on the Hill coefficient (blue bars) and with increasingly stringent constraints on the Hill coefficient (red and black bars). When there are no constraints on the slope of the hunchback response, the most frequently occupied states are close to the ON-OFF transition (2 and 3 occupied binding sites in Figure 4c) to allow for fast back and forth switching between the active and inactive states of the gene, thereby gathering information more rapidly by reducing ⟨t_off⟩ + ⟨t_on⟩ (see formulae 2 and 4).
We notice that for higher Hill coefficients the system transits quickly through the central states (in particular the states with 3 and 4 occupied Bcd sites; Figure 4c, red and black bars). As expected for high Hill coefficients, such dynamics requires high cooperativity. Cooperativity helps the recruitment of extra transcription factors once one or two of them are already bound and thus speeds up the transitioning through the states with 2, 3 and 4 occupied binding sites. An even higher level of cooperativity is required to make TF-DNA binding more stable when 5 or 6 of them are bound, reducing the OFF rates ν5 or ν6 (Figure 4b).
The (short) binding times of Bicoid on DNA
The distribution of times spent bound to DNA by individual Bicoid molecules, obtained from Monte Carlo simulations using rates from the fastest architecture identified above, is shown in Figure 4d. We find a non-exponential distribution of bound times, with an average bound time of about 7.1 s and a median around 0.5 s. Our predicted median bound time is of the same order of magnitude as the bound times observed in recent experiments by Mir et al., 2017; Mir et al., 2018, who found bound times to DNA that are short (mean ∼0.629 s and median ∼0.436 s based on exponential fits) yet quite distinguishable from background. These results were considered surprising because it seemed unclear how such short binding events could be consistent with the processing of ON and OFF gene switching events. Our results show that such short binding times may actually be instrumental in achieving the tradeoff between accuracy and speed, and rationalize how longer activation events are still achieved despite the fast binding and unbinding. High-cooperativity architectures lead to non-exponential bound times to DNA (Figure 4d), for which the typical bound time (median) is short but the tail of the distribution includes slower dynamics that can explain longer activation events (the mean is much larger than the median). This result suggests that cells can use the bursty nature of promoter architectures to better discriminate between TF concentrations.
In Mir et al., 2017, the raw distribution comprises both non-specific and specific binding and cannot be directly compared to simulation results. Instead, we use the larger of the two exponents fitted for the boundary region (Mir et al., 2017), which should correspond to specific binding. The agreement between the distributions in Figure 4d is overall good, and we ascribe discrepancies to the fact that Mir et al., 2017 fit two exponential distributions, assuming the observed times were the convolution of exponential specific and non-specific binding times. Yet the true specific binding time distribution is likely not exponential, for example due to binding sites having different binding affinities. We show in Appendix 5—figure 1 that the two distributions are very similar and hard to distinguish once they are mixed with the non-specific part of the distribution.
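The qualitative point that a heavy-tailed mixture of binding events produces a median far below the mean can be illustrated with a two-component exponential mixture; the weights and time scales below are assumed for illustration and are not fits to the data of Mir et al., 2017.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed illustration: mostly fast binding events plus a rare slow component,
# as produced by cooperative promoter architectures.
fast, slow, p_slow = 0.4, 10.0, 0.07     # mean dwell times (s) and tail weight
n = 100_000
is_slow = rng.random(n) < p_slow
dwell = np.where(is_slow, rng.exponential(slow, n), rng.exponential(fast, n))

print(f"median = {np.median(dwell):.2f} s, mean = {dwell.mean():.2f} s")
# short typical events (median ~0.3 s) coexist with a mean several times larger
```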
Activation rules
In the parameter range of the early fly embryo, the fastest decision-making architectures share the one-or-more (k = 1) activation rule: the promoter switches rapidly between the ON and OFF expression states and the extra binding sites are used for increasing the size of the target rather than for building a more complex signal. Architectures with the k = 2 and k = 3 activation rules can also make decisions in less than 270 s and satisfy all the required biological constraints. Generally speaking, our analysis predicts that fast decisions require a small number of Bicoid-binding sites (three or fewer) to be occupied for the gene to be active. The advantage of the k = 2 and k = 3 activation rules is that the ON and OFF times are on average longer than for k = 1, which makes the downstream processing of the readout easier. We do not find any architecture satisfying all the conditions for the higher activation thresholds (k ≥ 4), although we cannot exclude that there could be some architectures outside of the subset that we managed to sample, especially for the k = 4 activation rule, for which we did identify some promoter structures that come close to the time constraint.
Activation rules with higher k can give higher information per cycle through the switching-ON (activation) times, yet they do not seem to lead to faster decisions because of the much longer duration of the cycles. To gain insight into how the tradeoff between fast cycles and information affects the efficiency of activation rules, we consider architectures with only two binding sites, which lend themselves to analytical understanding (Figure 5a and b). Both of these architectures are out of equilibrium and require energy consumption (as opposed to the two equilibrium architectures of Figure 5c and d).
When is 1-or-more faster than all-or-nothing activation?
A first model has the promoter consisting of two binding sites with the all-or-nothing rule (Figure 5a). We consider the mathematically simpler, although biologically more demanding, situation where TFs cannot unbind independently from the intermediate states – once one TF binds, all the binding sites need to be occupied before the promoter is freed by the unbinding, with rate ν, of the entire complex of TFs. This situation can be formulated in terms of a non-equilibrium cycle model, depicted for two binding sites in Figure 5a. The activation time is a convolution of the exponential waiting times of the two binding events, with rates μ1 and μ2. In the simple case, when the two binding rates are similar (μ1 ≈ μ2), the OFF times follow a Gamma distribution and the drift and diffusion can be computed analytically (see Appendix 4). When the two binding rates are not similar, the drift and diffusion must be obtained by numerical integration (see Appendix 4).
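For the equal-binding-rate case just described, the drift can be written in closed form because the Kullback-Leibler divergence between Gamma-distributed OFF times is analytic; the snippet below implements it with assumed illustrative rates.

```python
import numpy as np

def drift_all_or_nothing(L, L1, L2, k_on, nu):
    """Drift per unit time for the irreversible two-site, all-or-nothing cycle
    with equal binding rates: t_off ~ Gamma(2, rate k_on*L), while the
    concentration-independent t_on ~ Exp(nu) does not contribute."""
    a, b1, b2 = k_on * L, k_on * L1, k_on * L2
    # KL between Gamma(2) waiting times with rates a and b: 2*(ln(a/b) + b/a - 1)
    kl = lambda b: 2.0 * (np.log(a / b) + b / a - 1.0)
    mean_cycle = 2.0 / a + 1.0 / nu                  # <t_off> + <t_on>
    return (kl(b2) - kl(b1)) / mean_cycle

print(drift_all_or_nothing(L=1.1, L1=1.1, L2=0.9, k_on=0.06, nu=0.5))
```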
In the first model described above (Figure 5a), deactivation times are independent of the concentration and do not contribute to the information gained per cycle and, as a result, to the drift V. To explore the effect of deactivation time statistics on decision times, we consider a cycle model where the gene is activated by the binding of the first TF (the 1-or-more rule) and deactivation occurs by complete unbinding of the TF complex (Figure 5b). The resulting activation times are exponentially distributed and contribute to drift and diffusion as in the simple two-state promoter model (Figure 5d). The deactivation times are a convolution of the concentration-dependent second binding and the concentration-independent unbinding of the complex, and their probability distribution is P(t_on) = μ2 ν/(μ2 − ν) (exp(−ν t_on) − exp(−μ2 t_on)). Drift and diffusion can be obtained analytically (Appendix 4). The concentration-dependent deactivation times prove informative for reducing the mean decision time at low TF concentrations but increase the decision time at high TF concentrations compared to the simplest irreversible binding model. In the limit where the unbinding time of the complex (1/ν) is much larger than the second binding time (1/μ2), no information is gained from deactivation times; in this limit, the model reduces to a one binding site exponential model and the two architectures (Figure 5b and d) have the same asymptotic performance.
Within the irreversible schemes of Figure 5a and Figure 5b, the average time of one activation/deactivation cycle is the same for the all-or-nothing and 1-or-more activation schemes. The difference between the schemes comes from the information gained in the drift term V, which begs the question: is it more efficient to deconvolve the second binding event from the first one within the all-or-nothing activation scheme, or from the deactivation event in the 1-or-more activation scheme?
In general, the convolution of two concentration-dependent events is less informative than two equivalent independent events, and more informative than a single binding event. For small concentrations L, activation events are much longer than deactivation events. In the 1-or-more scheme, the OFF times and, at these concentrations, also the deactivation times are dominated by concentration-dependent binding steps, so the two binding events can effectively be read independently. This regime of parameters favors the 1-or-more rule (Figure 5e). However, when the concentration L is very large, the two binding events happen very fast and, for μ2 ≫ ν, it is hard in the 1-or-more scheme to disentangle the binding and the unbinding events. The information gained from the second binding event goes to 0 as μ2/ν → ∞ and the one-or-more activation scheme (Figure 5b) effectively becomes equivalent to a single binding site promoter (Figure 5d), making the all-or-nothing activation scheme (Figure 5a) more informative (Figure 5e). The fastest decision time architecture thus convolves the second binding event with the fastest of the other reactions (Figure 5e), with the transition between the two activation schemes occurring when the other reactions have exactly equal rates (the equal-rate line in Figure 5e) (see Appendix 6 for a derivation).
How the number of binding sites affects decisions
The above results have been obtained with six binding sites. Motivated by the possibility of building synthetic promoters (Park et al., 2019) or by the existence of yet undiscovered binding sites, we investigate here the role of the number of binding sites. Our results suggest that the main effect of additional binding sites in the fly embryo is to increase the size of the target (and possibly to allow for higher cooperativity and Hill coefficients). To better understand the influence of the number of binding sites on performance at the diffusion limit, we compare a model with one binding site (Figure 5d) to a reversible model with two binding sites where the gene is activated only when both binding sites are bound (all-or-nothing; Figure 5c). Just like for the six binding site architectures, we describe this two binding site reversible model using the transition matrix of the Markov chain and calculate the mean activation time.
For fixed values of the real concentration L, the two hypothetical concentrations L1 and L2, the error e and the second off-rate ν2, we optimize the remaining parameters μ1, μ2 and ν1 for the shortest average decision time.
For high gene deactivation rates ν2, the fastest decision time is achieved by a promoter with one binding site (Figure 5f): once one ligand has bound, the promoter never goes back to being completely unbound (ν1 = 0 in Figure 5c) but toggles between one and two bound TF (Figure 5d with μ = μ2 and ν = ν2). For lower values of the gene deactivation rate ν2, there is a sharp transition to a minimal solution using both binding sites. In the all-or-nothing activation scheme used here, the distribution of deactivation times is ligand independent and the concentration is measured only through the distribution of activation times, which is the convolution of the distributions of times spent in the 0 and 1 states before activation in the two-binding-site model. For very small deactivation rates, it is more informative to 'measure’ the ligand concentration by accumulating two binding events every time the gene has to go through the slow step of deactivating (Figure 5c). However, for large deactivation rates little time is 'lost’ in the uninformative expressing state and there is no need to try and deconvolve the binding events; it is better to use direct, independent activation/deactivation statistics from a single binding site promoter (Figure 5d; see Appendix 7 for a more detailed calculation).
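The mean activation time of the reversible two-site, all-or-nothing model reduces to a standard first-passage calculation on the inactive states of the Markov chain; the sketch below sets it up with assumed rates.

```python
import numpy as np

def mean_activation_time(mu1, mu2, nu1):
    """Mean OFF time for the reversible two-site, all-or-nothing model
    (states 0, 1, 2 = number of bound Bcd molecules; gene ON only in state 2).
    Solves the first-passage relations on the inactive states {0, 1}."""
    # Generator restricted to the inactive states; state 2 is absorbing.
    Q = np.array([[-mu1,          mu1],
                  [ nu1, -(nu1 + mu2)]])
    # Mean first-passage times T satisfy Q T = -1 (a vector of ones).
    T = np.linalg.solve(Q, -np.ones(2))
    return T[0]          # starting from the fully unbound state

print(mean_activation_time(mu1=0.1, mu2=0.05, nu1=0.2))   # seconds, illustrative
```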
The role of weak binding sites
An important observation about the strength of the binding sites that emerges from our search is that the binding rates are often below the diffusion limit (see black dashed line in Figure 4b): some of the ligands reach the receptor and could potentially bind, but the decision time decreases if they do not. In other words, binding sites are 'weak’ and, since this is also a feature of many experimentally studied promoters (Gertz et al., 2009), the purpose of this section is to investigate the rationale for this observation by using the models described in Figure 5.
Naively, it would seem that increasing the binding rate can only increase the quality of the readout. This statement is only true in certain parameter regimes, and weaker binding sites can be advantageous for a fast and precise readout. To provide concrete examples, we fix the deactivation rate ν and the first binding rate μ1 in the 1-or-more irreversible binding model of Figure 5b and we look for the second binding rate μ2 that leads to the fastest decision. We consider a situation where the two binding sites are not interchangeable and binding must happen in a specific order. In this case, the diffusion limit sets an upper bound on μ2 when the first binding is strong and happens at the diffusion limit. We optimize the mean decision time over μ2 (see Appendix 9—figure 1 for an example) and find a range of parameters where the fastest-decision value of μ2 is not as fast as the parameter range allows (Figure 5g). We note that this weaker binding site that results in fast decision times can only exist within a promoter structure that features cooperativity. In this specific case, the first binding site needs to be occupied for the second one to be available. If the two binding sites are independent, then the diffusion limit applies to each binding rate separately and the fastest solution always has the fastest possible binding rates.
Predictions for Bicoid-binding sites mutants
In addition to results for wild-type embryos, our approach also yields predictions that could be tested experimentally by using synthetic hb promoters with a variable number of Bicoid-binding sites (Figure 6a). For any of the fast decision-making architectures identified and activation rules chosen, we can compute the effects of reducing the number of binding sites. Specifically, our predictions in Figure 6b can be compared to FISH or fluorescent live imaging measurements of the fraction of active loci at a given position along the anterior-posterior axis. Bcd-binding site mutants of the WT promoter have been measured by immunostaining in cycle 14 (Park et al., 2019), although mRNA experiments in earlier cell cycles of well-characterized mutants are needed to provide a more quantitative comparison.
An important consideration for the comparison to experimental data is that there is a priori no reason for the hb promoter to have an optimal architecture. Indeed, we find many architectures that satisfy all the experimental constraints and are not the fastest decision-making but 'good enough’ hb promoters. A relevant question then is whether or not similarity in performance is associated with similarity in the microscopic architecture. This point is addressed in Figure 6c, where we compare the fraction of active loci along the AP axis using several constraint-conforming architectures and different activation rules. The plot shows that solutions corresponding to the same activation rule are clustered together and quite distinguishable from the rest. This result suggests that the precise values of the binding and unbinding constants are not important for satisfying the constraints, that many solutions are possible, and that FISH or MS2 imaging experiments can be used to distinguish between different activation rules. The fraction of active loci in the boundary region is an even simpler variable that can differentiate between activation rules (Figure 6d). Lastly, we make a prediction for the displacement of the anterior-posterior boundary in mutants, showing that a reduced number of Bcd sites results in a strong anterior displacement of the hb expression border compared to six binding sites, regardless of the activation rule (Figure 6e). Error bars in Figure 6e, which correspond to different close-to-fastest architectures, confirm that these share similar properties and that different activation rules are distinguishable.
Joint dynamics of Bicoid enhancer and promoter
The Bicoid transcription factor has been shown to target more than a thousand enhancer loci in the Drosophila embryo, with a wide range of concentration sensitivities (Driever and Nüsslein-Volhard, 1988; Struhl et al., 1989; Hannon et al., 2017). Enhancers are of special interest because they can be located far away from the promoter (Ribeiro et al., 2010; Krivega and Dean, 2012) and perform a statistically independent sample of the concentration that is later combined with that of the promoter. Evidence suggests that promoter-based conformational changes can be stable over long times (Fukaya et al., 2016), which mimics information storage during a process of signal integration. To explore these effects, we consider a simple model of enhancer dynamics where a Bicoid-specific enhancer switches between two states, ON and OFF, independently of the promoter dynamics. We assume a simple rule for the gene activity: the gene is transcribed when both the promoter and the enhancer are ON. As an example, we consider an enhancer made of one binding site, so that the ON rate of the enhancer is limited by the diffusion-limited arrival rate. We perform a parameter search for a given promoter activation rule (see Appendix 12), while still assuming that about half the nuclei are active at the boundary and requiring a Hill coefficient greater than 4, looking for architectures yielding the shortest decision time for an error rate of 32% on the 10% relative concentration difference discrimination problem. We find that the enhancer improves the performance of the readout, noticeably reducing the time to decision. We find that adding extra binding sites to the promoter increases the computing power of the enhancer-promoter system and can reduce the time to decision to about 60% of that of the best architectures without enhancers.
Estimating the log-likelihood function with RNA concentrations
To illustrate how a biochemical network can approximate the calculation of the log-likelihood, we consider the case of the fastest architecture identified in the paragraph ’Numerical procedure for identifying fast decision-making architectures’. In Figure 7a, we show the contributions of the OFF times (blue) and the ON times (red) to the log-likelihood for this architecture. We notice that the behavior of the log-likelihood contributions at long times is simply linear in time, with a positive rate for ON times and a negative rate for OFF times. Conversely, short ON times contribute negatively to the log-likelihood while short OFF times contribute positively to the log-likelihood (Figure 7a). This observation suggests a simple model of RNA production with delay to approximate the computation of the log-likelihood. We consider a model of RNA production with five parameters (Figure 7b; details of the model are given in Appendix 11). We assume that when the promoter is in the ON state, polymerase is loaded and RNA is transcribed at a constant full rate, while when the promoter is in the OFF state, RNA is produced at a lower basal rate. In order to approximate the linear behavior of the log-likelihood function at long times, we posit the existence of an enzyme actively degrading hunchback RNA (Chanfreau, 2017) and include in our model of RNA production the delay and inertia associated with promoter dynamics and polymerase loading. RNA levels fluctuate and trigger decisions when the concentration becomes greater than an upper threshold (anterior decision) or lower than a lower threshold (posterior decision). Since many forms of switch have already been presented in the literature (Goldbeter and Koshland, 1981; Tyson et al., 2003; Ozbudak et al., 2004; Siggia and Vergassola, 2013; Sandefur et al., 2013), we shall concentrate on the log-likelihood calculation and refer to previous references for the implementation of a decision when reaching a threshold.
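A minimal stochastic sketch of this idea, for a single-binding-site promoter, is given below: production at a full rate when the promoter is ON and at a basal rate when it is OFF, combined with a constant (saturated-enzyme) degradation, makes the net mRNA accumulation drift upward during ON periods and downward during OFF periods, mimicking the long-time linear behavior of the log-likelihood. All parameter values are assumed placeholders, and the full production rate is calibrated only so that the net accumulation vanishes at the boundary concentration.

```python
import numpy as np

rng = np.random.default_rng(5)

def rna_readout(L, L0=1.0, k_on=0.06, nu=0.5, alpha0=0.1, delta=0.5,
                theta=60.0, t_max=2400.0):
    """RNA level as a proxy for the log-likelihood (illustrative parameters).
    R is measured relative to a baseline, so it can be negative in this sketch."""
    # Calibrate the ON production rate so the net accumulation vanishes at L0.
    alpha = delta + (delta - alpha0) * nu / (k_on * L0)
    R, t, on = 0.0, 0.0, False
    while t < t_max:
        dt = rng.exponential(1.0 / nu if on else 1.0 / (k_on * L))
        R += ((alpha if on else alpha0) - delta) * dt     # net production
        t += dt
        on = not on
        if abs(R) >= theta:
            return ("anterior" if R > 0 else "posterior"), t
    return "undecided", t

# With these illustrative numbers the calls are usually, though not always, correct.
print(rna_readout(L=1.3), rna_readout(L=0.7))
```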
We look for parameters that satisfy both the speed and the accuracy requirements for a decision between two points located 2% of the egg length apart across the mid-embryo boundary. For the fastest architecture identified, we find parameters that satisfy the error-rate requirement with a mean decision time of less than 3 min (see Appendix 11—figure 1). We check that this model produces a profile of RNA that is consistent with the observed high Hill coefficient (Figure 7c). Interestingly, we find that for this particular set of parameters the RNA profile Hill coefficient is increased by the delayed transcription dynamics and the active degradation, from the value set by promoter activation (blue line in Figure 7c) up to 5.2 (red line in Figure 7c; details of the calculation are given in Appendix 11). This result could shed new light on the fundamental limits to Hill coefficients in the context of cooperative TF binding (Estrada et al., 2016; Tran et al., 2018) and provide a possible mechanism to explain how mRNA profiles can reach higher steepness than the corresponding TF activities do. We also looked for parameter sets that approximate the log-likelihood for the other optimal architecture identified above and find several candidates that come close to the requirements of speed and accuracy (Appendix 11—figure 1). Together, these results show that implementing the log-likelihood calculation using a molecular circuit at the hb promoter is possible; they do not show that this is what happens in the embryo itself.
Discussion
The issue of precision in the Bicoid readout by the hunchback promoter has a long history (Nüsslein-Volhard and Wieschaus, 1980; Tautz, 1988). Recent interest was sparked by the argument that the amount of information at the hunchback locus available during one nuclear cycle is too small for the observed 2% EL distance between neighboring nuclei that are able to make reproducible distinct decisions (Gregor et al., 2007b). By using updated estimates of the biophysical parameters (Porcher et al., 2010; Tran et al., 2018), and the Berg-Purcell error estimation, we confirm that establishing a boundary with 2% variability between neighbouring nuclei would take at least about 13.4 min – roughly the non-transient expression time in nuclear cycle 14 (Lucas et al., 2018; Tran et al., 2018) (Appendix 1). This holds for a single Bicoid-binding site. An intuitive way to achieve a speed up is to increase the number of binding sites: multiple occupancy time traces are thereby made available, which provides a priori more information on the Bicoid concentration.
Possible advantages of multiple sites are not so easy to exploit, though. First, the various sites are close and their respective bindings are correlated (Kaizu et al., 2014), so that their respective occupancy time traces are not independent. That reduces the gain in the amount of information. Second, if the activation of gene expression requires the joint binding of multiple sites, the transition to the active configuration takes time. The overall process may therefore be slowed down with respect to a single binding site model, in spite of the additional information. Third, and most importantly, information is conveyed downstream via the expression level of the gene, which is again a single time trace. This channeling of the multiple sites’ occupancy traces into the single time trace of gene expression makes gene activation a real information bottleneck for concentration readout. All these factors can combine and even lead to an increase in the decision time. To wit, an all-or-nothing equilibrium activation model with six identical binding sites functioning at the diffusion limit and no cooperativity takes about 38 min to achieve the same above accuracy. In sum, the binding site kinetics and the gene activation rules are essential to harness the potential advantage of multiple binding sites.
Our work addresses the question of which multisite promoter architectures minimize the effects of the activation bottleneck. Specifically, we have shown that decision schemes based on continuous updating and variable decision times significantly improve speed while maintaining the desired high readout accuracy. This should be contrasted with previously considered fixed-time integration strategies. In the case of the hunchback promoter in the fly embryo, the continuous update schemes achieve the 2% EL positional resolution in less than 1 min, always outperforming fixed-time integration strategies for the same promoter architecture (see Appendix 1—table 1). While 1 min is even shorter than what is required for the fly embryo, this margin in speed allows additional constraints to be accommodated, viz. a steep spatial boundary and biophysical constraints on kinetic parameters. Our approach ultimately yields many promoter architectures that are consistent with experimental observables in fly embryos, and results in decision times that are compatible with a precise readout even for the fast nuclear cycle 11 (Lucas et al., 2018; Tran et al., 2018).
Several arguments have been brought forward to suggest that the duration of a nuclear cycle is the limiting time period for the readout of the Bicoid concentration gradient. The first one concerns the reset of gene activation and transcription factor binding during mitosis: any information that was stored in the form of Bicoid already bound to the gene is lost. The second argument is that the hunchback response integrated over a single nuclear cycle is already extremely precise. However, neither of these arguments implies that the hunchback decision is made at a fixed time (corresponding to mitosis), so that strategies involving variable decision times are quite legitimate and consistent with all the known phenomenology.
We have performed our calculations in a worst-case scenario. First, we did not consider averaging of the readout between neighbouring nuclei. While both protein (Gregor et al., 2007a) and mRNA concentrations (Little et al., 2013) are definitely averaged, and it has been shown theoretically that averaging can both increase and decrease readout variability between nuclei (Erdmann et al., 2009), we do not take advantage of this option. The fact that we achieve less than 3 min in nuclear cycle 11 demonstrates that averaging is a priori dispensable. Second, we demand that the hunchback promoter results in a readout that gives the positional resolution observed in nuclear cycle 14, in the time that the hunchback expression profile is established in nuclear cycle 11. The reason for this choice is twofold. On the one hand, we meant to show that such a task is possible, which makes less constrained set-ups feasible as well. On the other hand, the hunchback expression border established in nuclear cycle 11 does not move significantly in later nuclear cycles in the WT embryo, suggesting that the positional resolution in nuclear cycle 11 is already sufficient to reach the precision of later nuclear cycles. The positional resolution that can be observed in nuclear cycle 11 at the gene expression level is coarser than 2% EL (Tran et al., 2018), but this is also due to the smaller nuclear density at that stage.
Two main factors generally affect the efficiency of decisions: how information is transmitted and how available information is decoded and exploited. Decoding depends on the representation of available information. Our calculations have considered the issue of how to convey information across the bottleneck of gene activation, under the constraint of a given Hill coefficient. The latter is our empirical way of taking into account the constraints imposed by the decoding process. High Hill coefficients are a very convenient way to package and represent positional information: decoding reduces to the detection of a sharp transition, an edge in the limit of very high coefficients. The interpretation of the Hill coefficient as a decoding constraint is consistent with our results that an increase in the coefficient slows down the decision time. The resulting picture is that promoter architecture results from a balance between the constraints imposed by a quick and accurate readout and those stemming from the ease of its decoding. The very possibility of a balance is allowed by the main conclusion demonstrated here that promoter structures can go significantly below the time limit imposed by the duration of the early nuclear cycles. That leaves room for accommodating other features without jeopardising the readout timescale. While the constraint of a fixed Hill coefficient is an effective way to take into account constraints on decoding, it will be of interest to explore in future work if and how one can go beyond this empirical approach. That will require developing a joint description for transmission and decoding via an explicit modeling of the mechanisms downstream of the activation bottleneck.
Recent work has shed light on the role of out of equilibrium architectures on steepness of response (Estrada et al., 2016) and gradient establishment (Tran et al., 2018; Park et al., 2019). Here, we showed that equilibrium architectures perform very well and achieve short decision times, and that out of equilibrium architectures do not seem to significantly improve the performance of promoters, except for making some switches between gene states a bit faster. Non-equilibrium effects can, however, increase the Hill coefficient of the response without adding extra binding sites, which is useful for the downstream readout of positional information that we formulated above as decoding.
We also showed how short bound times of Bicoid molecules to the DNA (Mir et al., 2017; Mir et al., 2018) are translated into accurate and fast decisions. Our fast decision-making architectures also display short DNA-bound times. However, the constraint of high cooperativity means that the distribution of bound times to the DNA is non-exponential and the rare long binding times that occur during the bursty binding process are exploited during the read-out. The combination of high cooperativity and high temporal variance due to bursty dynamics is a possible recipe for an accurate readout.
At the technical level, we developed new methods to compute the mean decision time of complex gene architectures within the framework of variable-time decision-making (SPRT). This allowed us to establish the equality between the drift and the diffusion of the log-likelihood for two close hypotheses. Its underlying reason is the martingale property: the conditional expectation of the probabilities of the two hypotheses, given all prior history, is equal to their present value. The methodology developed here will be useful for the broad range of decision processes where SPRT is relevant, including neuroscience (Gold and Shadlen, 2007; Bitzer et al., 2014) and synthetic biology (O'Brien and Murugan, 2019; Pittayakanchit et al., 2018).
We made predictions about how promoter architectures with different activation schemes can be compared in synthetic embryos with different numbers of Bcd binding sites. Furthermore, experiments that change the composition of the syncytial medium would influence the diffusion constant and thereby test the assumption of diffusion-limited activation. Our model predicts that these changes would result in modifications of hunchback activation profiles: higher or lower diffusion rates slide the hunchback profile towards the anterior or the posterior end of the embryo, respectively, similarly to an increase or a decrease of the number of Bicoid binding sites. Any of the above experiments would greatly advance our understanding of the molecular control of spatial patterning in the Drosophila embryo and, more generally, of regulatory processes.
Finally, we showed how a simple model of RNA production with delay and active degradation can approximate the seemingly complex log-likelihood calculation. The specific implementation that we described is tentative and aimed at simplicity, yet it illustrates several points: the log-likelihood can be approximated while preserving a high Hill coefficient and the observed characteristics of transcription dynamics, and while meeting the speed and accuracy requirements. Future experiments could identify candidates for the enzyme responsible for the active degradation of RNA, or image the formation and dissolution of clusters using super-resolution imaging methods. This mechanism is also very close to the multifactor clusters observed recently in early Drosophila in conjunction with active transcription sites (Mir et al., 2018). We suggest this mechanism as an example of an implementation of the log-likelihood calculation, but note that the added complexity of the computation could arise at different levels of the expression machinery, including upstream of promoter activation through enhancer dynamics and chromatin arrangement.
Appendix 1
Biological parameters in the embryo
To build a model of the promoter, we combine parameters from recent experimental work.
The embryo at nuclear cycle n is modelled as having a number of nuclei/cells that doubles at each cycle. For simplicity, we assume they are equidistributed on the periphery of the embryo and across the embryo’s length, and we do not take into account the geometry of the embryo because its curvature is small (the embryo is about 500 µm long along the anterior-posterior axis and only about 100 µm along the dorso-ventral axis). We also neglect the few nuclei forming pole cells and remaining in the bulk (Foe and Alberts, 1983).
The Bicoid concentration profile in the embryo is approximately exponential, c(x) = c_0 e^{-(x - x_0)/\lambda}, where x is the position along the anterior-posterior (AP) embryo axis measured in % of the egg length (EL), \lambda is the decay length, also measured in % EL and roughly equal to 20% EL (Gregor et al., 2007b), x_0 is the position of half-maximum hunchback expression (of the order of 250 µm, varying slightly with the cell cycle and usually close to 45% EL for the WT hunchback promoter (Porcher et al., 2010)), and c_0 is the concentration of free Bicoid molecules at the AP boundary (Abu-Arish et al., 2010), which also corresponds to the point of largest hunchback expression slope (Tran et al., 2018).
To compute the diffusion limited arrival rate at the locus, we use the following parameters: diffusivity (Porcher et al., 2010; Abu-Arish et al., 2010), concentration of free Bicoid molecules at the AP boundary , size of the binding target (Gregor et al., 2007a), which leads to an effective at the boundary. This value is an upper bound, assuming that every encounter between a transcription factor and a binding site results in successful binding. We note in Main Text Figure 4b that most of the ON rates are close to the diffusion limit. We conclude that in this parameter regime, the most efficient strategy is to have ON events that are as fast as possible. The only reason to reduce them is to achieve the required Hill coefficient. That can be done by either adjusting the ON or the OFF rates.
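The diffusion-limited arrival rate can be estimated with the standard Berg-Purcell expression k_on ≈ 4 D a c for a small target of linear size a; the numerical values in the sketch below are illustrative placeholders within the ranges discussed in this appendix, not the exact values used in our calculations.

```python
# Sketch of the diffusion-limited ON rate estimate, k_on ~ 4*D*a*c (Berg-Purcell).
# The parameter values are assumptions chosen within the ranges discussed in the text.
D = 7.0      # Bicoid diffusivity, um^2/s (illustrative)
a = 3e-3     # linear size of the binding target, um (~10 bp of DNA; illustrative)
c = 15.0     # free Bicoid concentration at the boundary, molecules/um^3 (illustrative)

k_on = 4.0 * D * a * c   # arrivals per second at the boundary
print(f"diffusion-limited ON rate ~ {k_on:.2f} 1/s")
```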
The above estimate may be inaccurate for various reasons and we ought to explore the sensitivity of the results to those uncertainties. Appendix 1—table 1 recapitulates the time-performance of different strategies for different choices of the parameters. A first source of uncertainty is the value of the diffusivity, which is estimated to vary between 4 and (Porcher et al., 2010; Abu-Arish et al., 2010). We consider then two possible values for the diffusivity from Porcher et al., 2010: and the aforementioned value. A second source of uncertainty is that the actual Bicoid concentration at the boundary could vary by up to a factor of two depending on estimates of the concentration and its decay length (Gregor et al., 2007b; Tran et al., 2018; Abu-Arish et al., 2010). We consider then two possible values for the concentration: the aforementioned molecules per , and molecules per . Finally, we assumed above that the size of the target is the full Bicoid operator site, which we took to be about ten base pairs following Gregor et al., 2007a. However, assuming that the TF must reach a specific position on the promoter-binding site could reduce the size of the target to a single base pair, that is, by a factor of 10. In terms of parameters, we consider then two possible values for a: either , or . All in all, taking into account the various sources of uncertainty, the effective arrival rate can range over a broad interval.
We consider four possible decision strategies. The first one is a single binding site making a fixed-time decision. This computation is made using the original Berg-Purcell formula (Berg and Purcell, 1977). In the Berg-Purcell strategy, the concentration of the ligand is inferred based on the total time that the receptor or the binding site has spent occupied by ligands. Due to averaging, the relative error of the concentration readout is inversely proportional to the number of independent measurements of the concentration that can be made within the total fixed time T, that is, to the arrival rate of new Bicoid molecules at the binding site, multiplied by the probability to find the binding site empty (in our case, at the boundary, the probability is roughly one half). Since the rate of arrival of new Bicoid molecules to the binding site is , the relative error of the concentration readout is given by
where is the estimate of the concentration, T is the integration time and is the probability that the binding site is full.
In the second strategy, there are two sets of six binding sites being read independently. In that case, the information from each binding site can be accessed individually and their contributions averaged to give a more precise estimate. This calculation again can be made using the original Berg-Purcell formula (Berg and Purcell, 1977) for several receptors, dividing the relative error by the square root of the total number of binding sites in Equation 8:
where N is the total number of binding sites (in our case, N = 12, i.e. two independent sets of six sites).
For the third possibility, we consider a decision made at a fixed time using the fastest architecture identified (Main Text Figure 4) without constraint on the slope (activation rule ). We compute the decision time using the drift-diffusion approximation with fixed time (see Appendix 2).
Finally, we consider the fastest architecture identified with a random decision time and the Sequential Probability Ratio Test (SPRT) strategy.
The result of the above calculations is that for a single receptor estimating Bcd concentration with 10% precision within a fixed-time Berg-Purcell type calculation (see Appendix 2), decisions take between 6 min for the fastest binding rates and ∼4 hr for the slowest estimates. Conversely, by using the on-the-fly SPRT decision-making process and the one-or-more scheme at equilibrium, the time needed to make decisions with 10% precision and an error rate of 32% at the boundary is for the fastest rates and min for the slowest rates. For all sets of parameters, the on-the-fly SPRT decision-making process gives a ∼3.5-fold faster decision time than the Berg-Purcell estimate and a >10-fold faster decision making time than the one-binding-site Berg-Purcell estimate. For the fastest rates, a decision with an error rate of less than 5% can be achieved in about 5 min within the SPRT scheme.
Appendix 2
Error rate and decision time for the fixed-time decision strategy
In this section, we describe how we compute the decision time for a fixed-time strategy (or 'Berg-Purcell type decision’) and a complex promoter architecture.
The classic Berg-Purcell calculation is based on the idea of averaging the time spent by the ligand bound to a receptor (or, in our case, a binding site). The original calculation assumed that the waiting times between binding and unbinding events are exponential and that the bound times are not informative about the concentration. Neither of these assumptions holds in the case of the hunchback promoter. Indeed, in the context of a complex promoter architecture, the waiting times that are available to the nucleus or cell downstream are the gene’s ON and OFF switching times. They are not exponentially distributed, and, depending on the activation rule, the OFF times can be just as informative about the concentration as the ON times. For these two reasons, we need to adapt the Berg-Purcell idea to compute the mean decision time.
To that purpose, we consider a decision with a given error rate e, fix a decision time T, and choose the concentration that has the highest likelihood between the two options L1 and L2. In other words, if the log-likelihood ratio of L1 over L2 at time T is positive, the nucleus chooses L1, while if it is negative, it chooses L2. For instance, if the actual concentration is L1, the probability of error at time T is the probability that the log-likelihood ratio is negative at that time.
To calculate the above error, we use the drift-diffusion approximation for the log-likelihood ratio: we compute the drift V and diffusivity D from Main Text Equations 1-4 and approximate the distribution of the log-likelihood ratio at time T by a normal distribution with the corresponding mean and standard deviation. We compute the error rate e for the fixed-time decision process based on this Gaussian approximation. Finally, to find the mean decision time for a given error rate, we perform a quick one-dimensional search over T, which yields the value of the fixed decision time appropriate for the prescribed error e.
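A minimal numerical sketch of this procedure is given below. It assumes the Gaussian approximation with mean V T and variance 2 D T for the log-likelihood ratio; the values of V and D are illustrative placeholders standing in for the quantities computed from Main Text Equations 1-4.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Sketch of the fixed-time ('Berg-Purcell type') decision-time calculation.
# The log-likelihood ratio at time T is approximated by a Gaussian with mean V*T
# and variance 2*D*T; an error occurs when it has the wrong sign at time T.
V = 0.01   # drift of the log-likelihood per second (illustrative placeholder)
D = 0.01   # diffusivity of the log-likelihood (illustrative placeholder)

def error_rate(T):
    # P(log-likelihood ratio < 0 at time T | the favoured hypothesis is true)
    return norm.cdf(-V * T / np.sqrt(2.0 * D * T))

target_error = 0.32
# one-dimensional search over T for the prescribed error rate
T_fix = brentq(lambda T: error_rate(T) - target_error, 1e-6, 1e7)
print(f"fixed decision time for e = {target_error}: {T_fix/60:.2f} min")
```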
Appendix 3
The mean first-passage time for the decision-making process
To investigate the role of promoter architectures in decision-making, we apply the SPRT approach (Sequential Probability Ratio Test) (Wald, 1945a; Siggia and Vergassola, 2013). In the simplest formulation of this approach, a decision is made between two hypotheses about the concentration of a surrounding TF: either the TF is at concentration L1 or it is at concentration L2. The decision is based on the binding and unbinding events of the TF to the promoter. At each point in time, the error of committing to a given concentration (scenario) is estimated by computing the ratio of the probability of one hypothesis (the surrounding concentration is L1) over the probability of the other (the concentration is L2), given the record of events observed so far.
Technically, the time-dependent likelihood ratio undergoes a random walk as TFs bind and unbind to the promoter, activating and deactivating the gene. The process is terminated and a decision is made when the likelihood ratio crosses the desired error level, which is expressed in terms of absorbing boundaries at +K (the concentration is L1) and −K (the concentration is L2) (see Main Text Figure 2). The boundaries can be expressed in terms of the error e: for symmetric errors, K = ln((1 − e)/e) (Siggia and Vergassola, 2013). The mean time for decision can be computed for each discrimination task as the mean first-passage time of a random walk (in the limit of long decision times). In Appendix 2—figure 1, we show the mean decision time for different values of e.
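The sketch below evaluates these quantities numerically under the assumption of symmetric errors, using the Wald boundary K = ln((1 − e)/e) and the drift-diffusion expression for the mean first-passage time; the drift and diffusivity values are illustrative placeholders.

```python
import numpy as np

# SPRT sketch with symmetric absorbing boundaries at +K and -K.
e = 0.32                       # tolerated error rate at the boundary
K = np.log((1.0 - e) / e)      # Wald boundary for symmetric errors
V = 0.01                       # drift of the log-likelihood (illustrative placeholder)
D = 0.01                       # diffusivity of the log-likelihood (illustrative placeholder)

# mean first-passage time of the drift-diffusion process to +/-K
T_mean = (K / V) * np.tanh(K * V / (2.0 * D))
print(f"K = {K:.3f}, mean SPRT decision time = {T_mean/60:.2f} min")
```

For the same illustrative drift and diffusivity, this on-the-fly time is substantially shorter than the fixed-time value computed in the Appendix 2 sketch, as expected.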
Assuming the gene has two levels of activation ON and OFF, the information available to downstream mechanisms is a series of ON times and OFF times of gene activity. The probability of a concentration hypothesis is then . If successive ON and OFF times are independent then the probabilities factorize. The log ratio is then written as a function of the ON (respectively OFF) time probability distribution (respectively )
where is the number of ON to OFF switching events and the number of OFF to ON switching events (Siggia and Vergassola, 2013). To understand the dynamics of the log ratio, it is necessary to compute the distributions and based on the promoter dynamics, which is the focus of the following subsection.
From binding to gene activation
In this subsection, we describe how the high-dimensional complete state of the promoter is mapped onto the low-dimensional activity of the gene. We use a formalism similar to that of Desponds et al., 2016; Tran et al., 2018.
The promoter features N binding sites: its complete state at time t is described by an N-dimensional vector whose i-th component is 0 or 1 depending on whether binding site i is empty or full. We assume the gene has two levels of activity: either it is OFF and no mRNA is produced, or it is ON and mRNA is produced at a fixed rate. Activity is then described by a simple Boolean variable equal to 0/1, corresponding to the gene being OFF/ON.
We also assume that the activity of the gene depends only on the total number of transcription factors bound to the promoter so that there is an integer such that (where is the indicator function). From the point of view of gene activity, we are only interested in the dynamics of . We make another simplifying assumption: the probabilities of increasing or decreasing only depend on the value of and not on which binding sites specifically are bound or unbound, that is itself has a Markovian dynamics in .
At a given time t, we describe the stochastic state of the promoter as a vector of probabilities over the number of bound transcription factors and the gene activity. The Markovian dynamics of B translate into a transition matrix M for this probability vector. M describes the dynamics of the promoter from the point of view of gene activity. If transcription factors do not dimerize or form complexes, then the entries of M connecting occupancies that differ by more than one vanish, since the probability of two of them binding or unbinding decreases as the square of time for short times.
Let us now relate the statistics of the switching times for the gene activity a to the transition matrix M. Starting from a distribution of OFF states we compute the cumulative distribution function of the waiting time before switching to the state
where Trans is the transpose, is a vector of 1 for states corresponding to active genes and 0 for states corresponding to inactive genes, and is a modified version of M where all ON states act as 'sinks’ (i.e. all the transition rates from ON states have been set to 0). The expression in Equation 12 computes the cumulative probability of transitions to an ON state before time t, that is .
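A small numerical sketch of this construction is given below for a hypothetical three-state OFF block (zero, one or two TFs bound, with the gene ON once both sites are bound): the transition matrix with the ON state turned into an absorbing sink is exponentiated to obtain the cumulative distribution of OFF times. The rates are illustrative, not those of the hunchback promoter.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of Equation 12: cumulative distribution of OFF times from a modified
# rate matrix in which the ON state acts as an absorbing sink.
# States: 0 and 1 TF bound (gene OFF), 2 TFs bound (gene ON). Rates are illustrative.
kon1, kon2, koff1 = 0.5, 0.5, 0.2   # per-second rates (assumptions)

# column j holds the outgoing rates from state j (generator acting on probability vectors)
M_sink = np.array([
    [-kon1,  koff1,           0.0],
    [ kon1, -(kon2 + koff1),  0.0],
    [ 0.0,   kon2,            0.0],   # ON state is a sink: no transitions out of it
])

p0 = np.array([1.0, 0.0, 0.0])        # start the OFF period with an empty promoter
ones_on = np.array([0.0, 0.0, 1.0])   # indicator of ON (absorbing) states

def cdf_off(t):
    # probability that the gene has switched ON before time t
    return ones_on @ expm(M_sink * t) @ p0

for t in [1.0, 5.0, 20.0]:
    print(f"P(T_OFF < {t:4.1f} s) = {cdf_off(t):.3f}")
```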
For simplicity, we restrict ourselves to promoter structures where there is only one point of entry into the ON or the OFF states (i.e. any switching event from ON to OFF or vice versa will end with the promoter having a specific number of binding sites). Under this hypothesis, the probability vector in Equation 12 is uniquely determined and independent of the specific dynamics of the previous ON or OFF times. We relax these constraints on the promoter structure in Appendix 8. We denote by (respectively ) the average of the OFF (respectively ON) times for the real concentration L.
Random walk of the log ratio
When the two hypothesized concentrations are very similar to each other compared to the concentration scale L set by the actual concentration value, the discrimination is hard and requires a large number of binding and unbinding events. In this limit, the random walk can be approximated by a drift-diffusion process with drift V and diffusion D, dS/dt = V + \sqrt{2D}\,\eta(t), where S is the log-likelihood ratio and \eta(t) is a Gaussian white noise. The decision time for the case of symmetric boundaries is a random variable T with mean given by Equation 1 in the main text (Siggia and Vergassola, 2013), and it depends on the details of the biochemical sensing of the TF concentration and promoter architecture via the drift and diffusivity. Computing V and D in Equation 13 is enough to derive the mean first-passage time to the decision and even its full distribution, by computing its Laplace transform as a solution of the drift-diffusion equation using standard techniques of first-passage for Gaussian processes (Siggia and Vergassola, 2013).
Drift
For many switching events and , the sums in Equation 11 can be replaced by continuous averages over binding time distributions. Since and differ at most by one, we can also replace the two values by a unique value J, which is the number of ON-OFF cycles. The times are distributed according to the ON and OFF probability distributions for the real concentration L. At a given large time t, the number J of terms in the sum and the log-ratios appearing in Equation 11 are a priori correlated but Wald’s equality (Wald, 1945a) ensures that the average of the sum is the product of the two averages. Since the expected number of cycles grows linearly in time as , we conclude that the drift term in Equation 13 is given by:
where is the Kullback-Leibler divergence between the two distributions P and Q.
In sum, the drift is determined by how informative the distributions of waiting times in the ON and OFF states are about the concentration difference, with an average rate equal to the inverse of the cycle time. The Kullback-Leibler divergence appearing in Equation 14 is intuitive: it represents the distance between the real concentration and the hypothetical concentration in the space of probabilistic models of switching times. The larger that distance, the easier it becomes to tell the difference between the two distributions through random sampling.
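As a numerical illustration of Equation 14, the sketch below computes the drift as a Kullback-Leibler divergence divided by the mean cycle time, for the simple case of a single binding site with exponential waiting times in which only the OFF times depend on the concentration and the true concentration is taken to be L1; all values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of Equation 14 for a single binding site with exponential waiting times.
# OFF times depend on the concentration (rate mu*L), ON times do not (rate nu),
# so only the OFF-time distributions contribute to the drift.
mu, nu = 0.06, 0.1     # binding rate per concentration and unbinding rate (illustrative)
L1, L2 = 1.05, 0.95    # hypothesized concentrations; the true concentration is taken to be L1

def p_off(t, L):
    return mu * L * np.exp(-mu * L * t)

# Kullback-Leibler divergence D_KL(P^L1 || P^L2), computed by numerical integration
kl_off, _ = quad(lambda t: p_off(t, L1) * np.log(p_off(t, L1) / p_off(t, L2)), 0, np.inf)

tau_cycle = 1.0 / (mu * L1) + 1.0 / nu   # mean OFF + mean ON time at the true concentration
V = kl_off / tau_cycle                   # drift of the log-likelihood, Equation 14
print(f"D_KL = {kl_off:.4e}, cycle time = {tau_cycle:.1f} s, drift V = {V:.2e} per s")
```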
Expansion of the drift for small concentration differences
When the two candidate concentrations are similar, , the quantities computed in the previous subsection can be expanded at first order in the differences in concentrations and . Starting from Equation 14, we expand the drift V at first and second orders:
where means that the same operations and integrations are performed for PON.
Due to conservation of probability, the integral of the first and second L-derivatives of vanish. The first-order expansion of V vanishes and we have at first non-vanishing order in
If for instance then the drift is positive, favouring the concentration L1, closer to the real concentration.
Equality between drift and diffusivity
In this subsection, we present different ways of proving the equality between V and D in SPRT when the two hypotheses are close by connecting different approaches. We show that this equality is a general property of random walks in Bayesian belief space. We check that these results are true in the controlled case of one binding site in Appendix 3—figure 1.
First approach: the exit points of the decision process
As proved by Wald in the original paper where he introduced sequential probability ratio tests (Wald, 1944), the nature of the test (the ratio between the likelihoods of two hypotheses) imposes a specific relationship between the error and the boundaries that define the regions of decision. In our case, we assume the same probability of calling L1 when L2 is true and calling L2 when L1 is true, leading to the definition of only one error level e and two symmetric boundaries K and −K. Specifically, Wald shows that K = ln((1 − e)/e) (see also Siggia and Vergassola, 2013).
When the two hypotheses are close enough that the variations of the log-likelihood can be approximated by a Gaussian process, one can compute the exit probabilities in terms of the drift V and diffusivity D. The equation for the probability of absorption at the upper boundary K is
with the boundary condition and . The corresponding solution is
as shown in the supplementary material of Siggia and Vergassola, 2013. Setting in Equation 18, we find that
We note that this probability of absorption is also by definition of the error (assuming for instance that ) leading to
which in turn gives
And so we find that in the limit of close hypotheses, .
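The algebra behind this argument can be summarized in a short worked sketch, assuming the drift-diffusion convention used here (variance of the log-likelihood growing as 2Dt):

P_+ \;=\; \frac{1 - e^{-VK/D}}{1 - e^{-2VK/D}} \;=\; \frac{1}{1 + e^{-VK/D}}, \qquad 1 - e \;=\; \frac{1}{1 + e^{-K}} \quad \text{for } K = \ln\frac{1-e}{e}.

Equating the two expressions for the probability of a correct decision gives VK/D = K, that is, V = D.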
Second approach: expansion for small concentration differences
Let us consider the SPRT process between two hypotheses L1 and L2 and assume for simplicity that . In this version of the proof, we consider that the difference is small compared to L (i.e ) and expand the expressions for drift and diffusion in increasing orders of to find that they match. We have shown in the subsection ’Expansion of the drift for small concentration differences’ of Appendix 3 that the first non-vanishing term in the expansion of the drift is of order 2 in and is given by Equation 16. An integral formula for the second moment of the log-likelihood is given in Equation A15 of Carballo-Pacheco et al., 2019 from which we get that the diffusivity of the log ratio is the sum of four terms. The first term is given by
is proportional to and so is of order . The second term is given by
The prefactor V is of order . Expanding the first two terms in the brackets gives the same type of terms as in Equation 15. Because of probability conservation we find that these terms are of the same order as V (i.e ). The last two terms have prefactors s and t respectively which break the argument for vanishing first order terms in Equation 15 and these terms are of order . Putting the pieces together, we find that is of order . The third term is given by
Because ON and OFF times are independent, cross terms in are products of two and are of subleading order . Using the symmetry between and , we expand one of the square terms in
The fourth term is which is of order . So we find that, at leading order, which has the same expansion as V (see Equation 16). And so we recover that for small concentration differences, .
In the case of one binding site with ON rate and OFF rate ν we check that the formula from Carballo-Pacheco et al., 2019 gives the exact expression computed in Siggia and Vergassola, 2013
Replacing L2 with L in Equation 26 and expanding in we have
We identify the drift computed for the one binding site example in the paragraph ’Mean decision time : connecting drift-diffusion and Wald’s approaches’ of the main text and find that at first order drift and diffusivity are equal.
Third approach: general properties of Bayesian random walks
To understand the origin of the equality between drift and diffusion, we go back to the definition of SPRT as the update of the Bayesian beliefs in two competing hypotheses. We no longer condition the process on one hypothesis being true, but rather consider the probability that each one is true given the value of the log ratio at a certain time point. A very general property of posterior distributions updated with evidence is that their average does not change. This martingale property of belief update can be shown in the following way: assume that the nucleus receives extra evidence with a certain probability. Before that evidence comes, the probability that hypothesis 1 is true has a certain distribution. Then the probability that hypothesis 1 is true varies as:
where we used Bayes rule. The integral of over sums up to one and we are left with
In the context of two hypothetical concentrations, we consider the normalized likelihood of the two hypotheses at time t: and , where is the likelihood that hypothesis i is true based on the evidence accumulated up to time t. The martingale property translates as
where are averages taken assuming hypothesis is true and is the variation of Q through time (or accumulated evidence). We note that and , where is the log-likelihood ratio at a given time. We express the variations of the probabilities in terms of the log-likelihood so that we can connect it to the drift-diffusion parameters. The drift and diffusivity that depend a priori on the real concentration are the average and variance of the variation of , respectively. We have , , and (where is the total change over time ). When the two concentrations are close, the statistics of the acquisition of evidence are symmetric for L1 and L2 and we get that and . As explained in the paragraph 'Mean decision time : connecting drift-diffusion and Wald’s approaches’ of the main text, computing the derivatives of with respect to is straightforward. Plugging these relationships into Equation 30 and gathering terms in V and D we find that
from which we derive that .
The first passage time to decision
The first exit time of a drift-diffusion process starting from 0 with two symmetric boundaries at +K and −K has mean \langle T \rangle = (K/V)\tanh(KV/2D) (see for instance Redner, 2001). Plugging in V = D, we recover Equation 4 of the main text, \langle T \rangle = (K/V)\tanh(K/2).
We check that this approximation is correct for small concentration differences in Appendix 3—figure 2a, b and c. As mentioned in the main text, because the hyperbolic tangent is very flat for high values of its argument, we find that the mean decision time depends weakly on corrections to the V = D equality when the error rate e is small (which is equivalent to K being large). We check that this result is true in Appendix 3—figure 2d. We note that from K = ln((1 − e)/e) we have tanh(K/2) = 1 − 2e, and so we recover the second part of Equation 4 from the main text.
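For completeness, here is the short worked check behind that last step, assuming K = ln((1 − e)/e) and V = D as above:

\tanh\frac{K}{2} \;=\; \frac{e^{K} - 1}{e^{K} + 1} \;=\; \frac{(1-e)/e - 1}{(1-e)/e + 1} \;=\; 1 - 2e, \qquad \langle T \rangle \;=\; \frac{K}{V}\tanh\frac{K}{2} \;=\; \frac{1-2e}{V}\,\ln\frac{1-e}{e}.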
In one of the original papers about SPRT (Wald, 1945a), Wald derives a similar formula for the total number of events before the decision:
where Jexit is the number of ON-OFF events when decision happens. Combining Equation 34 with Wald’s equality (Wald, 1945b; Durrett, 2010) that states that , we recover Equation 33.
When are correlations between the times of events leading to decision important?
To compute the drift or the diffusivity, one must compute the variation of the log-likelihood up to a given time T. The fact that the total time T is fixed introduces correlations between the durations of the different ON-OFF events. Can these correlations be ignored, leading to an effective drift or diffusion per cycle? They can be ignored only when the two concentrations are close, as we check in Appendix 3—figure 3.
Following the arguments in Carballo-Pacheco et al., 2019, let us consider the following generating function for the cumulants of the log-likelihood difference at a given time T assuming hypothesis i is true ( or 2) :
where is the log-likelihood of hypothesis , n is the number of ON-OFF cycles and and are respectively the ON and OFF waiting times. This is not exactly what appears in Carballo-Pacheco et al., 2019, but it is sufficient to grasp the consequences of the correlations introduced by a global constraint on the duration of the process, that is, the function. Note that such constraint breaks the statistical independence of the various cycles, and introduces dependencies even though the and factorize. We rewrite the Heaviside function using the Laplace transform form and the product of the probabilities as , where and are the log-likelihoods for hypothesis . Putting the pieces together we obtain
where
One can remark that the expression is factorized, that is, f is raised to the power n, and even f itself can be interpreted as 'expectation value' of (and same with ) over the 'distribution' , that is, the constraint introduces an exponential factor that distorts the weight of the various durations. These elements are consistent with the idea of a random variable per cycle that can be averaged to compute the drift and diffusivity. However, the idea of ignoring correlations between duration of events is not valid because is a function of . Indeed, summing the series over n gives and the asymptotic behavior is then determined by the first singularity encountered as is moved to the left for the inverse Laplace transform. For each value of , this determines a value of , that is (see Carballo-Pacheco et al., 2019 for the complete calculation).
What the above says more qualitatively is that the variables per cycle are correlated because of the global constraint and there is not a single-cycle effective probability distribution that can account for that because of the dependence of on . In other words, different moments involve different configurations and distortions of the weights for the durations of the cycles. The only exception is when vanishes, which is the case for two close hypotheses. Indeed, from , one obtains and it also holds . We checked explicitly in the subsection 'Equality between drift and diffusivity' of Appendix 3 that in that limit in the formula from Carballo-Pacheco et al., 2019 the first two terms are negligible and the diffusivity reduces to the variance of the log-likelihoods only.
Indeed, when the two hypothetical concentrations are close and the correlations can be ignored, a diffusivity per cycle can be defined naturally as
where is the average of the squared log-likelihood variation (see Equation 24). We check in Appendix 3—figure 3 that this formula is a good approximation for small concentration differences in the context of one binding site. In that context we have
where ν is the OFF rate and is the ON rate. We find that this approximation is better than the approximation only when the cycle times depend weakly on the concentration (i.e, for one binding site, when ν is small, see Appendix 3—figure 3).
Appendix 4
The mean first-passage time for different architectures
Activation by multiple TF - irreversible binding all-or-nothing models
We first consider the all-or-nothing two binding site cycle model depicted in Main Text Figure 5a.
In this model, ON events end when the complex formed by two copies of the TF unbinds. It follows that ON times are exponentially distributed and independent of the concentration . Because they are independent of the concentration, they will not contribute to the log ratios. The mean ON time is given by .
The activation time (OFF time) requires two copies of the TF to bind and is the sum of the times it takes for each of them to bind. From a probabilistic point of view, this sum becomes a convolution and the OFF times are given by
The average OFF time is .
We can now compute the drift (the ON times do not contribute to information)
The calculations above are only valid when the two binding rates are different . Let us now assume that . The ON time distribution is unchanged but the OFF time distribution is now
where we have identified , the standard Gamma distribution with exponent 2 and parameter . The expression of the drift simplifies as:
When the two binding rates μ1 and μ2 are similar but not exactly equal the activation time distribution can be approximated by the Gamma distribution . It is then convenient to use Equation 43 with as a parameter.
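The convolution above is easy to check numerically. The sketch below samples the OFF time as the sum of two exponential binding steps, written here with rates μ1·L and μ2·L (the scaling of the rates with the concentration L is an assumption made for illustration), and compares the sampled mean with 1/(μ1 L) + 1/(μ2 L), including the equal-rate Gamma limit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: OFF time = time for two successive irreversible bindings,
# i.e. the sum of two exponential waiting times with rates mu1*L and mu2*L.
mu1, mu2, L = 0.05, 0.12, 1.0     # illustrative values
n = 200_000

t_off = rng.exponential(1.0 / (mu1 * L), n) + rng.exponential(1.0 / (mu2 * L), n)
print(f"sampled mean OFF time  : {t_off.mean():8.2f} s")
print(f"1/(mu1*L) + 1/(mu2*L)  : {1/(mu1*L) + 1/(mu2*L):8.2f} s")

# Equal-rate limit: the convolution reduces to a Gamma distribution with shape 2
mu = 0.08
t_gamma = rng.gamma(shape=2.0, scale=1.0 / (mu * L), size=n)
print(f"equal-rate (Gamma) mean: {t_gamma.mean():8.2f} s  (expected {2/(mu*L):.2f} s)")
```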
Activation by multiple TF – irreversible binding with 1-or-more activation
We now consider the non-equilibrium two binding site 1-or-more activation model depicted in Main Text Figure 5b. The OFF times are exponentially distributed . They will contribute as in the one binding-site exponential model.
The ON times are now a convolution of a concentration-dependent exponential step with rate and a concentration-independent unbinding with rate ν. Their distribution is
The expression is valid when the two rates are distinct and, as previously, reduces to a Gamma distribution with exponent two in the case of equality.
The corresponding drift is
Appendix 5
Appendix 6
All-or-nothing vs 1-or-more activation
In this section, we give a more detailed derivation of the result given in the subsection ’Activation rules’ of the main text. Specifically, we compare two activation rules for the same promoter architecture: two binding sites with binding rates μ1 and μ2. We consider that the two binding sites bind TFs independently (there are two free targets for the first binding). The two copies of the TF unbind together with unbinding rate ν. It is a cycle model. We consider the 1-or-more activation rule (Main Text Figure 5b) and the all-or-nothing activation rule (Main Text Figure 5a). We will prove that the fastest activation scheme is the all-or-nothing scheme when unbinding is slow compared to the first binding, and the 1-or-more scheme when unbinding is fast. This result corresponds to the following intuitive idea: given a choice to associate the second binding with either the first binding or the unbinding for the readout by the cell, it is more informative to convolve it with the fastest of the rates, to maximize the information extracted from it. This is in general true if the time per cycle is the same for the different architectures considered.
The total time it takes to go through a cycle is the same for the two architectures: , so the difference in performance will come from the amount of information extracted per cycle. We are interested in the limit of similar concentrations, that is, and , . Since and the decision time is inversely proportional to V, it is enough to compare (the drift in the all-or-nothing scheme) with (the drift in the 1-or-more scheme) to determine the fastest architecture.
To compute , we start from Equation 41, plugging in , and . We expand for :
where is the Riemann zeta function. Eventually we have
While the expansion of the 1-or-more drift does not take a simple form in general, it can be computed for the specific parameter values considered here. Expanding Equation 45 to second order then gives
We can now use the above equality and the following arguments to reach the conclusion stated at the beginning of the section. is independent of ν because no information is gained about the concentrations from the step involving unbinding. Conversely, is an increasing function of ν: as ν decreases, the unbinding part takes over in the convolution for the ON times. Since the difference between the two convolutions can only decrease as ν decreases, it becomes harder and harder to differentiate the two ON time distributions, their Kullback-Leibler divergence becomes smaller, and the drift term is reduced.
In sum, for , ; is independent of ν whilst increases with ν. We conclude that for , and the 1-or-more scheme is preferred. Conversely, for , and the all-or-nothing scheme is preferred.
Appendix 7
Comparing one and two binding-site architectures
In this section we compare the performance of a one binding-site architecture (Main Text Figure 5d) to that of an equilibrium all-or-nothing two binding-site architecture (Main Text Figure 5c). The motivation is to provide detailed explanations for the results given in subsection 'How the number of binding sites affects decisions' of the main text. To rank their performance, we compare, as in the previous appendix 6, the drift for the two architectures.
To do the comparison, we optimize the two binding site architecture rates μ1, μ2 and ν1 for a fixed value of the error rate e, the real concentration L, the hypothetical concentrations L1 and L2 and the second unbinding rate . The fixed second unbinding rate sets a time scale for the process. For concreteness, we suppose that the real concentration and decision is hard , .
Decisions can be trivially sped up by increasing all the rates indefinitely to very high values. To avoid this, in the optimization process we set an upper bound on the rates. We choose this upper bound to be smaller than the largest values of the second unbinding rate considered and larger than the smallest values considered. From a biological point of view, this upper bound for the ON rates can correspond to the diffusion-limited arrival rate. For the OFF rates, it can correspond to a minimum bound time required to trigger a downstream mechanism (for instance, the activation of the gene). We check in Appendix 7—figure 2 that curves and effects similar to those discussed below are obtained if the upper bounds are modified.
For all values of parameters, we find that the optimal ON rates are maximal (). We observed and discussed in subsection 'How the number of binding sites affects decision' of the main text that for certain values of the parameters (ν and L), the optimal first unbinding rate ν1 reaches the upper bound while for smaller values the optimal first unbinding rate remains at 0 (Main Text Figure 5f inset). The transition between the two regions is sharp. Setting ν1 to 0 is equivalent to having an effective one binding site model because the promoter never goes back to the 0 state. For that reason, we want to compare the performance of a two binding site model with parameters μ1, μ2, ν1 and to that of a one binding site model with parameters μ2 and .
Since we are in the limit of small concentration differences, the speed of decision is set by V and . We denote the mean drifts of the one and the two binding-site models as and , respectively. Similarly, and are the mean times per cycle of the two models.
The one binding-site architecture activates with rate and deactivates with rate . Both waiting times for ON and OFF expression states are exponential. Following results in the paragraph 'Mean decision time : connecting drift-diffusion and Wald’s approaches' of the main text, we have and . We conclude that :
In the two binding site architecture, the ON times are exponentially distributed with rate . The OFF times are more complex as they can result from several cycles from a promoter with a binding site occupied to an empty promoter before finally switching to two full binding sites and gene activation. We compute using the modified transition matrix of the Markov chain with the ON states acting as sinks as described in Appendix 3. The OFF time distribution is given by the derivative of Equation 12.
In the two binding site model, the transition matrix is a 3 by 3 matrix and can be diagonalized analytically to compute explicitly the exponential in Equation 12 . We set since it is always the case in the identified optimal architectures for the two binding site model (where is the specific value of L for which upper bounds on ν1 and are equal, in Main Text Figure 5f). We compute the distribution of the OFF times explicitly and find
where .
Computing the mean of the distribution in Equation 50 , we find that the average time for a cycle in the two binding site architecture is
We can now numerically integrate . We use this simplified formula to compare the performance of the two architectures and recover the results of Main Text Figure 5f . We show an example for a specific value of L, varying in Appendix 7—figure 1.
Appendix 8
Out of equilibrium architectures and averaging over several steps
In some architectures where several copies of the TF can bind or unbind at once, there can be correlations between successive ON and OFF times. This depends on the structure of the promoter Markov chain, which is made of OFF and ON states. There can be correlations between the OFF and ON times if the chain can enter the ON-state subgraph through different ON states, coming from different OFF states (or the symmetric situation with ON and OFF states exchanged). In that case, the time that the chain spends in the ON state depends on the entry state, and therefore on the OFF state from which the chain entered the ON subgraph, which correlates the ON time with the previous OFF time. If the structure is particularly complex, these correlations can span several ON-OFF cycles. We do however assume that the chain is ergodic, so that the correlations vanish at long times.
To deal with this situation, a solution is to average the contribution to the information of ON/OFF events over several ON/OFF times, until the correlation with the initial times is lost. The event in the log ratio becomes a series of ON and OFF times and their joint probability (as they no longer factorize). The next series of events can be considered approximately (or exactly in certain cases) independent of the previous series and the rest of the theory can be applied to this sum of independent variables.
We did not explore these architectures in detail as they are extremely complex, do not seem to increase the performance of the promoter for the readout, and most likely the type of information about the concentration that is hidden in the correlation between ON and OFF would be very hard to decode for downstream mechanisms.
Appendix 9
Appendix 10
Parameters for Main Text Figure 6
Parameters for Main Text Figure 5 panel a, panel b blue full line, panel c red and panel d red are those identified as optimal for the first combination of constraints considered. Parameters for Main Text Figure 5 panel b green dashed line, panel c blue and panel d blue are those identified as optimal for the second combination of constraints. Other lines in panel b correspond to architectures that are close to optimality, with parameters in the same range as the ones given above.
Appendix 11
Approximating the log-likelihood function with RNA levels
In this section, we give details of the calculations linked to the model of RNA production presented in the section 'Estimating the log-likelihood function with RNA concentrations' of the main text. In this model, we assume that when the promoter enters the ON state, it takes some time before polymerase is loaded. This time can be associated with the formation of polymerase clusters at the transcription sites (Cho et al., 2016a; Cho et al., 2016b) or of more complex multifactor transient hubs recently observed in Drosophila embryos (Park et al., 2019). Once the promoter returns to the OFF state, our model also includes a delay during which the gene continues loading polymerase. This delay can be associated with the dissolution of the clusters and hubs mentioned above or simply with the inertia of polymerase loading. Under these assumptions, the contribution of an ON time t to the RNA level is
The contribution of an OFF time s to RNA level is given by
We have that the RNA level at time T is
where the are the ON times up to time T and the OFF times up to time T.
We look for sets of parameters that give a positive drift when and negative drift when . For such architectures, the ON and OFF production levels will roughly balance each other at the boundary. To ensure a positive RNA level at the boundary, we redefine the ON rate: so that the basal rate is now a factor proportional to time in both ON time and OFF time RNA contributions. In that sense, the basal rate is a systematic drift of the RNA level that is proportional to time and can be removed from the equations by defining moving boundaries and . With these redefined rates we now look for architectures such that the drift at the anterior boundary is positive:
and the drift at the posterior boundary is negative:
Specifically, we look for architectures that satisfy these conditions and for which the ratio of the drift over the variance contributions to the log-likelihood is as large as possible, to avoid dynamics that are dominated by noise. Finally, we look for a set of threshold concentrations that gives the most favorable speed-accuracy tradeoff. We show the results of our non-exhaustive search in Appendix 11—figure 1. For one of the architectures that fall below the error level of 32% and the decision time threshold of 3 min, we check the RNA expression profile (Main Text Figure 7). For RNA degradation, we use a rate that reduces to a constant close to the boundary, yielding the desired linear-in-time degradation, but that is proportional to the RNA concentration for small values of the concentration. For this function, we compute the average number of RNA molecules in the cell after 3 min of dynamics (Main Text Figure 7) and find that the RNA profile has a Hill coefficient of 5.2.
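To make the mechanism concrete, a minimal stochastic sketch of this class of models is given below. It is not the parameter set identified in our search: the promoter switching, production, delay and degradation parameters are hypothetical placeholders, and the strongly concentration-dependent ON-switching rate is used as a stand-in for the cooperative multi-site promoter.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch of the RNA-level proxy for the log-likelihood described in this appendix.
# All parameter names and values are hypothetical placeholders, not fitted parameters.
k_on0   = 0.06    # promoter ON-switching rate at the boundary concentration (1/s)
hill    = 5       # ON-switching rate taken ~ L**hill (stand-in for the cooperative promoter)
k_off   = 0.10    # promoter OFF-switching rate (1/s)
r_on, r_basal = 8.0, 1.0   # RNA production rates when producing / basal (molecules/s)
deg     = 3.6     # constant degradation rate (saturated-enzyme assumption, molecules/s)
tau_load, tau_inert = 3.0, 3.0   # polymerase loading delay and post-OFF inertia (s)
dt, T   = 0.1, 300.0

def rna_trace(L):
    """Net RNA accumulated after time T for a nucleus at relative concentration L."""
    state, t_switch, rna = 0, 0.0, 0.0
    for step in range(int(T / dt)):
        t = step * dt
        rate = k_on0 * L**hill if state == 0 else k_off
        if rng.random() < rate * dt:             # telegraph switching of the promoter
            state, t_switch = 1 - state, t
        # production is on if the promoter has been ON longer than tau_load,
        # or if it switched OFF less than tau_inert ago (loading inertia)
        producing = (state == 1 and t - t_switch > tau_load) or \
                    (state == 0 and t - t_switch < tau_inert)
        rna += ((r_on if producing else r_basal) - deg) * dt
    return rna

for L, label in [(1.1, "anterior side "), (0.9, "posterior side")]:
    levels = np.array([rna_trace(L) for _ in range(20)])
    print(f"{label}: net RNA after {T/60:.0f} min = {levels.mean():+7.1f} +/- {levels.std():.1f}")
# The net RNA level drifts upward on the anterior side and downward on the posterior
# side, mimicking the sign of the log-likelihood; thresholds on this level would then
# implement the anterior/posterior decision.
```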
Appendix 12
Exploring the effect of joint enhancer and promoter dynamics
In this section, we explore the effect of the computational power of an enhancer when added to that of the promoter. Bicoid is known to target many gene enhancers (Driever and Nüsslein-Volhard, 1988; Struhl et al., 1989), following complex concentration-dependent patterns (Hannon et al., 2017). We choose one of the many possible models of enhancer dynamics as an example. For simplicity, we assume a two-state enhancer that is made of one Bicoid-binding site, with dynamics independent of the local promoter dynamics. We assume that the enhancer turns ON and OFF with constant rates following Markovian dynamics (i.e., the waiting times are exponentially distributed). We take a simple rule for gene activity: the gene is ON when both the promoter (which is still a six binding site promoter as in the previous models) and the enhancer are in the ON states.
From a mathematical point of view, the complete system can be viewed as a 14-state Markov chain with states (i, j), where i = 0, …, 6 is the number of Bicoid molecules bound to the promoter and j = 0, 1 is the number bound to the enhancer. We use this Markov chain to compute the waiting-time statistics of the ON and OFF states of the hunchback gene activity. An added difficulty is that the OFF state can now be entered through two transitions: the promoter turning OFF, or the enhancer turning OFF. To account for this degeneracy, we compute the OFF-time statistics separately for each entry transition. We then compute the complete probability distribution as a weighted average of the two transition-specific distributions, where the weights are the probabilities of each transition happening first. We check that this method predicts the correct waiting times (see Appendix 12—figure 1). We note that this method overlooks the correlation between successive ON and OFF times. A perfect Bayesian machine could take advantage of these correlations to improve its estimate of the concentration. Here we assume that the cell cannot.
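Rather than the analytical weighted-average construction described above, the sketch below simply simulates the joint enhancer-promoter chain with the Gillespie algorithm and records the empirical ON and OFF dwell times of the gene; the activation rule used here (fully bound promoter) and all rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Gillespie sketch of the joint enhancer-promoter chain: promoter occupancy n = 0..6
# (Markovian in n), enhancer state j = 0/1, i.e. 14 states in total.
# The gene is taken to be ON when the promoter is fully bound AND the enhancer is ON.
kb, ku = 1.0, 0.2            # per-site binding / unbinding rates of the promoter (illustrative)
ke_on, ke_off = 0.05, 0.02   # enhancer switching rates (illustrative)

def simulate(t_end=20000.0):
    n, j, t = 0, 0, 0.0
    active, t_last = False, 0.0
    on_times, off_times = [], []
    while t < t_end:
        rates = np.array([kb * (6 - n), ku * n, ke_on * (1 - j), ke_off * j])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        r = rng.choice(4, p=rates / total)
        if r == 0: n += 1
        elif r == 1: n -= 1
        elif r == 2: j = 1
        else: j = 0
        now_active = (n == 6 and j == 1)
        if now_active != active:                 # record the dwell time that just ended
            (off_times if not active else on_times).append(t - t_last)
            active, t_last = now_active, t
    return np.array(on_times), np.array(off_times)

on_t, off_t = simulate()
print(f"mean gene ON  time: {on_t.mean():6.2f} s  ({len(on_t)} events)")
print(f"mean gene OFF time: {off_t.mean():6.2f} s  ({len(off_t)} events)")
print(f"active fraction   : {on_t.sum() / (on_t.sum() + off_t.sum()):.2f}")
```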
Once the waiting times for the ON and OFF states of the gene are computed, the drift and diffusion of the SPRT process are obtained by integration, and the first-passage time to decision between anterior and posterior is given by Equation 32. Using that scheme, we optimize the parameters of the enhancer and promoter for the fastest decision with a given error between the anterior and posterior boundary regions. As an example, we picked one particular promoter activation rule, keeping the constraints described in the main text. We still impose that half the nuclei at the boundary be active on average (i.e., the product of the activities of the enhancer and the promoter at the boundary is equal to one half). We also limit the binding rate to be diffusion limited. We find that enhancer dynamics improve the performance of the cell: the optimal scheme performs the decision faster on average than the promoter alone. In these optimal schemes, the enhancer is mostly ON due to slower binding dynamics than the promoter (only one binding site versus six). Assuming the enhancer target is six times larger (or equivalently that the enhancer also has six binding sites), but that the enhancer dynamics can still be approximated by exponential waiting times, we find that in the optimal scheme the enhancer is ON for a reasonable fraction of the time (about 75%) and that the performance is improved further.
The presence of an enhancer thus seems to give the promoter additional flexibility to achieve good performance while maintaining a high Hill coefficient and half of the genes active at the boundary.
A complete study of enhancer dynamics is beyond the scope of this paper, but it would include other models, such as genes that activate when either the enhancer or the promoter is ON, or, more generally, dynamics in which the enhancer state influences the transition rates of the promoter.
Data availability
All data analyzed during this study were previously published in the literature and references are included in the paper.
References
- Know the Single-Receptor sensing limit? Think again. Journal of Statistical Physics 162:1353–1364. https://doi.org/10.1007/s10955-015-1412-9
- Physics of chemoreception. Biophysical Journal 20:193–219. https://doi.org/10.1016/S0006-3495(77)85544-6
- Physical limits to biochemical signaling. PNAS 102:10040–10045. https://doi.org/10.1073/pnas.0504321102
- Perceptual decision making: drift-diffusion model is equivalent to a Bayesian model. Frontiers in Human Neuroscience 8:102. https://doi.org/10.3389/fnhum.2014.00102
- Environmental sensing, information transfer, and cellular decision-making. Current Opinion in Biotechnology 28:149–155. https://doi.org/10.1016/j.copbio.2014.04.010
- Impact of RNA modifications and RNA-modifying enzymes on eukaryotic ribonucleases. The Enzymes, RNA Modification 41:299–329. https://doi.org/10.1016/bs.enz.2017.03.008
- Speed-accuracy tradeoffs in animal decision making. Trends in Ecology & Evolution 24:400–407. https://doi.org/10.1016/j.tree.2009.02.010
- Precision of readout at the hunchback gene: analyzing short transcription time traces in living fly embryos. PLOS Computational Biology 12:e1005256. https://doi.org/10.1371/journal.pcbi.1005256
- Probability: Theory and Examples (book). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511779398
- Maximum likelihood and the single receptor. Physical Review Letters 103:158101. https://doi.org/10.1103/PhysRevLett.103.158101
- Role of spatial averaging in the precision of gene expression patterns. Physical Review Letters 103:258101. https://doi.org/10.1103/PhysRevLett.103.258101
- Studies of nuclear and cytoplasmic behaviour during the five mitotic cycles that precede gastrulation in Drosophila embryogenesis. Journal of Cell Science 61:31.
- The neural basis of decision making. Annual Review of Neuroscience 30:535–574. https://doi.org/10.1146/annurev.neuro.29.051605.113038
- The gap gene network. Cellular and Molecular Life Sciences 68:243–274. https://doi.org/10.1007/s00018-010-0536-y
- The Berg-Purcell limit revisited. Biophysical Journal 106:976–985. https://doi.org/10.1016/j.bpj.2013.12.030
- Enhancer and promoter interactions - long distance calls. Current Opinion in Genetics & Development 22:79–85. https://doi.org/10.1016/j.gde.2011.11.001
- Live imaging of bicoid-dependent transcription in Drosophila embryos. Current Biology 23:2135–2139. https://doi.org/10.1016/j.cub.2013.08.053
- 3 minutes to precisely measure morphogen concentration. PLOS Genetics 14:e1007676. https://doi.org/10.1371/journal.pgen.1007676
- On optimal decision-making in brains and social insect colonies. Journal of the Royal Society Interface 6:1065–1074. https://doi.org/10.1098/rsif.2008.0511
- Dense Bicoid hubs accentuate binding along the morphogen gradient. Genes & Development 31:1784–1794. https://doi.org/10.1101/gad.305078.117
- Limits of sensing temporal concentration changes by single cells. Physical Review Letters 104:248101. https://doi.org/10.1103/PhysRevLett.104.248101
- Mutations affecting the pattern of the larval cuticle in Drosophila melanogaster: I. Zygotic loci on the second chromosome. Wilhelm Roux's Archives of Developmental Biology 193:267–282. https://doi.org/10.1007/BF00848156
- Temporal pattern recognition through analog molecular computation. ACS Synthetic Biology 8:826–832. https://doi.org/10.1021/acssynbio.8b00503
- Embryonic cleavage cycles: how is a mouse like a fly? Current Biology 14:R35–R45. https://doi.org/10.1016/j.cub.2003.12.022
- Growing an embryo from a single cell: a hurdle in animal life. Cold Spring Harbor Perspectives in Biology 7:a019042. https://doi.org/10.1101/cshperspect.a019042
- Precision of hunchback expression in the Drosophila embryo. Current Biology 22:2247–2252. https://doi.org/10.1016/j.cub.2012.09.051
- Infomax strategies for an optimal balance between exploration and exploitation. Journal of Statistical Physics 163:1454–1476. https://doi.org/10.1007/s10955-016-1521-0
- A Guide to First-Passage Processes (book). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9780511606014
- Decision making in the arrow of time. Physical Review Letters 115:250602. https://doi.org/10.1103/PhysRevLett.115.250602
- Network representations and methods for the analysis of chemical and biochemical pathways. Molecular BioSystems 9:2189–2200. https://doi.org/10.1039/c3mb70052f
- Mutual repression enhances the steepness and precision of gene expression boundaries. PLOS Computational Biology 8:e1002654. https://doi.org/10.1371/journal.pcbi.1002654
- Only accessible information is useful: insights from gradient-mediated patterning. Royal Society Open Science 2:150486. https://doi.org/10.1098/rsos.150486
- Precision in a rush: trade-offs between reproducibility and steepness of the hunchback expression pattern. PLOS Computational Biology 14:e1006513. https://doi.org/10.1371/journal.pcbi.1006513
- Sniffers, buzzers, toggles and blinkers: dynamics of regulatory and signaling pathways in the cell. Current Opinion in Cell Biology 15:221–231. https://doi.org/10.1016/S0955-0674(03)00017-6
- On cumulative sums of random variables. The Annals of Mathematical Statistics 15:283–296. https://doi.org/10.1214/aoms/1177731235
- Sequential tests of statistical hypotheses. The Annals of Mathematical Statistics 16:117–186. https://doi.org/10.1214/aoms/1177731118
- Some generalizations of the theory of cumulative sums of random variables. The Annals of Mathematical Statistics 16:287–293. https://doi.org/10.1214/aoms/1177731092
Article and author information
Author details
Funding
National Science Foundation (PoLS Grant 1411313)
- Massimo Vergassola
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We are grateful to Gautam Reddy and Huy Tran for multiple relevant discussions. This work was supported by the National Science Foundation, Program PoLS, Grant 1411313. JD was partially supported by the NSF-Simons Center for Quantitative Biology (Simons Foundation SFARI/597491-RWC and the National Science Foundation 17764421). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Copyright
© 2020, Desponds et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.