Geometric models for robust encoding of dynamical information into embryonic patterns
Abstract
During development, cells gradually assume specialized fates via changes of transcriptional dynamics, sometimes even within the same developmental stage. For anterior-posterior (AP) patterning in metazoans, it has been suggested that the gradual transition from a dynamic genetic regime to a static one is encoded by different transcriptional modules. In that case, the static regime has an essential role in pattern formation in addition to its maintenance function. In this work, we introduce a geometric approach to study such transitions. We exhibit two types of genetic regime transitions arising through local or global bifurcations, respectively. We find that the global bifurcation type is more generic, more robust, and better preserves dynamical information. This could parsimoniously explain common features of metazoan segmentation, such as changes of periods leading to waves of gene expression, ‘speed/frequency-gradient’ dynamics, and changes of wave patterns. Geometric approaches appear as possible alternatives to gene regulatory networks to understand development.
Introduction
Development from one zygote to a viable animal is a complex process (Wolpert et al., 2006), comprising multiple dynamical subprocesses, including cell movements, tissue morphogenesis, dynamical gene expressions, and cellular differentiations. Eventually, cell identities are fixed by various mechanisms, such as multistable gene regulatory networks and epigenetic markers. Little is known about how this transition from a dynamic/initiation phase to a static/maintenance one is mediated. Are there general characteristics that should be matched between dynamic and static phases to mediate a robust transition?
In dynamical systems theory, a transition between different regimes is called a ‘bifurcation’, defined as a qualitative change in the dynamics of a system driven by a so-called ‘control parameter’ (Strogatz, 2015). Bifurcations are of many types but can be systematically classified. For instance, generic families of potentials driving the dynamics have been identified as different ‘catastrophes’ (Poston and Stewart, 2012). While such mathematical descriptions are highly technical, they are reminiscent of the theory of epigenetic landscapes pushed forward by Waddington, 1957. It is thus natural to ask if such classifications can be done for development. Could dynamical systems theory help us in this pursuit, and in studying development in general? Here, the main issue is to find a way to frame the problem to derive general results.
In recent years, numerous experimental studies have revealed that quantitative changes of gene expressions during development often followed standard predictions from dynamical systems theory (Huang et al., 2007). The Waddington landscape’s analogy (Jaeger and Monk, 2014) has led to many insights in cell differentiation (Graf and Enver, 2009), and recent data on cell reprogramming quantitatively validated the associated ‘landscape picture’ (Pusuluri et al., 2018). Geometric models of development have been developed in particular cases, precisely predicting the general phenotypes of wild-type and mutants (e.g. the development of Caenorhabditis elegans vulva [Corson and Siggia, 2012] and Drosophila bristle patterns [Corson et al., 2017]).
The Clock-and-Wavefront model (Cooke and Zeeman, 1976), accounting for the observed dynamical somite (vertebrae precursors) formation, was inspired by catastrophe theory. The model predicted that a retracting wavefront translates the periodic expression of a genetic clock into a spatial pattern via ‘catastrophic’ transitions demarcating the positions of the somites (Figure 1A). Identification of the predicted clock in 1997 (Palmeirim et al., 1997) has since led to many subsequent theoretical and experimental works, including observation of similar clocks in multiple arthropods (El-Sherif et al., 2012; Sarrazin et al., 2012). Cooke and Zeeman, 1976 originally assumed that the clock is an external process, blind to the subsequent segmentation process it directs. However, it has been very clear from the early experiments in Palmeirim et al., 1997 that cellular oscillators increase their period prior to segmentation, leading to traveling waves of various signalling pathways such as Notch (Giudicelli et al., 2007; Morelli et al., 2009; Figure 1A). Importantly, Notch waves eventually stabilize into a pattern of delta ligand stripes (Giudicelli and Lewis, 2004; Jiang et al., 2000), with a functional continuity between the dynamic and the static regime. Indeed, it has been shown that the dynamical phase of the clock is encoded into static rostrocaudal identities (Oginuma et al., 2010). This suggests that the observed oscillation is not a simple external pacemaker for segment formation: rather, clocks, associated waves and the resulting stripes combine into an emergent process leading to proper fate encoding. Segmentation thus possibly appears as the canonical example of transition from a dynamical gene expression regime to a static functional one.
Two broad scenarios have been proposed to model this process (see Figure 1). In the first scenario, the period of the individual oscillators diverges to infinity as they become more anterior (or equivalently, the frequency of the clock approaches zero), automatically giving rise to a fixed pattern (Figure 1B–F). This model corresponds to Julian Lewis’ model for the c-hairy1 expression pattern in somitogenesis (appendix of [Palmeirim et al., 1997]), and it is possible to experimentally quantify the period divergence within this model (Giudicelli et al., 2007). This also corresponds to the implicit scenario of many theoretical models assuming that the frequency of the clock approaches zero as cells get more anterior, such as the models in Ares et al., 2012; Morelli and Jülicher, 2007, possibly with a sharp discontinuity suppressing period divergence (Jörg et al., 2015; Soroldoni et al., 2014). Those models are appealing owing to their simplicity, since all behaviour is encoded in a dynamical frequency gradient (possibly mediated by FGF [Dubrulle and Pourquié, 2004]). However, it is unclear what happens from a dynamical systems theory standpoint (a noteworthy exception being the proposal that the gradient emerges through a singularity in phase similar to Burgers’ equation [Murray et al., 2013]). In particular, the pattern in this scenario literally corresponds to a frozen clock, such that there is an infinite number of local steady states corresponding to the frozen phases of the oscillators.
A second scenario grounded in dynamical systems theory has been proposed (François and Siggia, 2012). In this scenario, a genetic network transits from an oscillatory state to an ensemble of (stable) epigenetic states (in Waddington’s sense) fixing the pattern. Possible examples include the initial reaction-diffusion based model by Meinhardt, 1986, or the cell-autonomous model under morphogen control evolved in François et al., 2007; Figure 1G. Based on geometric arguments, if bifurcations are local, the most generic model of this transition is expected to present two steps as explained in François and Siggia, 2012. As a steep control parameter (possibly controlled by a morphogen such as FGF) decreases, the oscillation dies out through a Hopf bifurcation, leading to a single transient intermediate state. Then, for even lower values of the morphogen, one or several new (stable) states appear (technically through saddle-node bifurcations, see Figure 1—figure supplement 1). If the system translates rapidly enough from the oscillatory regime to the multistable regime, a pattern can be fixed (Figure 1H–K). Contrary to the previous scenario where the period of the clock goes to infinity, a Hopf bifurcation is associated with a finite period when the clock stops. The pattern of gene expression itself is laid down via multiple expression states discretizing the phase of the clock (Figure 1—figure supplement 1). Importantly, a finite number of states are observed, for example anterior and posterior fates within one somite (as first pointed out by Meinhardt, 1982).
In this paper, we revisit those ideas with explicit modeling to characterize the behavior of systems transitioning from a dynamical regime (such as an oscillation) to a static multistable regime. We introduce two new assumptions: 1. the two different phases of developmental expression (dynamic and static) can be separated into two independent sets of transcriptional modules acting on several genes simultaneously, and 2. the system smoothly switches from one set to the other. This proposal is motivated by the recent suggestion in insects that different sets of enhancers control waves of gap genes at different phases of embryonic growth (El-Sherif and Levine, 2016). Such assumptions explain the so-called ‘speed-gradient’ model suggested to explain the gene expression wave dynamics observed during AP patterning in the beetle Tribolium (Zhu et al., 2017) (see Figure 1—figure supplement 2) and (with some additional assumptions) the more subtle gene expression dynamics observed during AP patterning in Rudolf et al., 2020; El-Sherif and Levine, 2016. Using both gene-network and geometric formalisms, we characterize the types of bifurcations found in systems transitioning from a dynamic to a static regime. Surprisingly, we find that if the transition is smooth enough, global bifurcations appear. This situation is different from the standard scenario (Hopf and saddle-nodes) that we nevertheless recover if the transition is more nonlinear. This is a generic result that is better studied and understood using geometric models. We further show that the transition through a global bifurcation is more robust than the sequence of Hopf and saddle-node bifurcations with respect to several perturbations that we simulate. Finally, we find that this model can explain many features of metazoan segmentation, such as ‘speed-gradient’ mechanisms or changes of spatial wave profiles due to relaxation-like oscillations, and we discuss biological evidence and implications.
This geometric approach thus offers a plausible scenario underlying embryonic patterning with many associated experimental signatures.
Model
In the following, we consider a class of developmental models based on the combination of (at least) two different transcriptional modules. Biologically, those two modules correspond to two sequential developmental phases. The main assumptions are that those transcriptional modules are globally regulated for multiple genes at the same time (which could be done for instance through chromatin global regulations) and that there is a continuous transition from one to the other. Here, we focus on metazoan segmentation and regionalization, but the formalism might be applicable to other patterning processes where both an initiation phase and a maintenance phase have been described.
We use ordinary differential equations to model our system. Calling $P$ a vector encoding the state of all proteins in any given cell (typically $P$ corresponds to concentrations of proteins), a generic single-cell equation describing all models presented in the following is:

$$\frac{dP}{dt} = \theta_{D}(g)\,D(P) + \theta_{S}(g)\,S(P) + C(P) + \eta(g, P) \qquad (1)$$
In Equation 1, variable $g$ encodes an external control parameter of the developmental transition. For example, $g$ could be an external morphogen concentration driving patterning, but more complex situations with feedback are possible, where $g$ could also be part of the system (e.g. the phase difference between oscillators [Beaupeux and François, 2016; Sonnen et al., 2018]). For simplicity, we rescale variables so that $g$ is constrained between 0 and 1. The terms $D\left(P\right)$ and $S\left(P\right)$ correspond to different sets of modules, their influence on the dynamics being weighted by functions ${\theta}_{D}\left(g\right)$ and ${\theta}_{S}\left(g\right)$, respectively. The term $\eta \left(g,P\right)$ encodes the noise. Finally, $C\left(P\right)$ represents dynamical terms that are independent of the transcriptional modules, such as protein degradation.
We focus here on the simplest two-module case, where $S\left(P\right)$ encodes a multistable system (i.e. with multiple fixed points at steady state) and $D\left(P\right)$ a dynamic system (i.e. oscillatory). In this situation we will assume ${\theta}_{S}\left(0\right)=1$, ${\theta}_{S}\left(1\right)=0$, ${\theta}_{D}\left(0\right)=0$, and ${\theta}_{D}\left(1\right)=1$, meaning that for $g=1$ the network is in a pure dynamic phase, while for $g=0$ the network is multistable. Details on the specific forms of $D\left(P\right)$, $S\left(P\right)$, ${\theta}_{D}\left(g\right)$ and ${\theta}_{S}\left(g\right)$ are given in the following and in the Appendix. We study two types of models: gene-network-like models where $D\left(P\right)$ and $S\left(P\right)$ explicitly model biochemical interactions between genes (such as transcriptional repression), and geometric models where $D\left(P\right)$ and $S\left(P\right)$ directly encode flows in an abstract 2D phase space, similarly to Corson and Siggia, 2017.
We model an embryo as a line of cells, corresponding to the anteroposterior axis. The dynamics within each cell (position $x$) is described by Equation 1. The only difference between cells is that the dynamics of $g$ is a prescribed function of $x$; for example, we assume that there is a function $g\left(x,t\right)$ describing the dynamics of a morphogen. We focus on the transition between the two regimes as $g$ continuously changes from 1 to 0 in different cells as a function of time. We will typically consider a sliding morphogen gradient moving along the anteroposterior axis with speed $v$, described by $H\left(s\left(x-vt\right)\right)$ where the function $H$ encodes the shape of the morphogen, and parameter $s$ is a measure of the gradient’s spatial steepness.
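As a concrete illustration, the sliding gradient and the deterministic part of Equation 1 can be sketched as follows. The logistic shape chosen for $H$ and the explicit Euler update are assumptions for illustration only; the paper specifies only that $H$ encodes the gradient's shape and $s$ its steepness.

```python
import numpy as np

def morphogen(x, t, v=1.0, s=5.0):
    """Sliding gradient g = H(s*(x - v*t)) with a logistic H (an assumed
    shape): g is ~1 ahead of the front and decays to ~0 behind it."""
    return 1.0 / (1.0 + np.exp(-s * (x - v * t)))

def step(P, g, D, S, C, dt):
    """One Euler step of the deterministic part of Equation 1, using the
    linear weights theta_D(g) = g and theta_S(g) = 1 - g (Model 2)."""
    return P + dt * (g * D(P) + (1.0 - g) * S(P) + C(P))
```

At a fixed position $x$, calling `morphogen(x, t)` for increasing $t$ traces the decrease of $g$ from 1 to 0 as the front regresses past that cell.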
We also include noise in the system with the help of an additive Gaussian white noise. For gene networks, we follow an approach similar to the τ-leaping algorithm (Gillespie, 2001), where the variance of the noise corresponds to the sum of the production and the degradation terms (approximating independent Poisson noises). A multiplicative noise intensity term $\sqrt{1/\mathrm{\Omega}}$ is introduced, where Ω can be interpreted as the typical concentration of the proteins in the system, so that a larger Ω corresponds to lower noise. In addition, we add diffusion coupling the cells in the stochastic gene network models. For the geometric model, the variance of the noise is held independent of the position $x$. A more detailed description of the noise and diffusion terms is provided in the Appendix.
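A minimal sketch of this τ-leaping-inspired noise term, assuming a simple Euler–Maruyama update with scalar production and degradation propensities; the exact propensities used in the paper are given in its Appendix.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_step(P, production, degradation, dt, omega=1000.0):
    """Chemical-Langevin-style update: the Gaussian noise variance is
    the sum of production and degradation propensities, scaled by
    1/Omega, so larger Omega (more molecules) means lower noise."""
    drift = production - degradation
    noise = rng.normal(0.0, 1.0, size=np.shape(P))
    return P + dt * drift + np.sqrt(dt * (production + degradation) / omega) * noise
```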
All source codes and data used for this paper are available at: https://github.com/laurentjutrasdube/DualRegime_Geometry_for_Embryonic_Patterning (copy archived at https://github.com/elifesciencespublications/DualRegime_Geometry_for_Embryonic_Patterning).
Relation to existing biological networks
The model described above aims at being generic, but one can relate it to existing developmental networks. A two-module dichotomy was initially proposed in the Drosophila context, where two enhancers (early and late) were observed for Krüppel and knirps (El-Sherif and Levine, 2016). An extension of this model has been proposed in Zhu et al., 2017 for a gap gene cascade under control of the maternal gene Caudal, which would thus play the role of $g$ (a detailed model is reproduced in the Supplement, see Figure 1—figure supplement 2). The dynamic module corresponds to a genetic cascade comprising hunchback, Krüppel, mille-pattes, and giant. Each gene in this sequence activates the next one, and later genes repress earlier ones. For the static module, each gene self-activates, and hunchback and Krüppel repress one another. The situation is less clear for vertebrates, given the plethora of oscillating genes and possible redundancy in three different pathways, specifically Notch, Wnt and FGF (Dequéant et al., 2006). The two-module system we consider assumes both the dynamic and static regimes are realized by the same set of genetic components (genes and/or signalling pathways). Notch signaling is an ideal candidate to be a component of both a dynamic and a static regime that might mediate vertebrate somitogenesis, since Notch is implicated in both the core segmentation clock of vertebrates (e.g. via Lfng [Dale et al., 2003], or the hes/her family in zebrafish [Lewis, 2003]) and oscillation stabilization (Jiang et al., 2000; Oginuma et al., 2010). Importantly, several genes of the Notch signalling pathways (e.g. DeltaC in zebrafish) are first expressed in an oscillatory manner then stabilize in striped patterns, as expected in our model (Giudicelli and Lewis, 2004; Wright et al., 2011).
There are also multiple genetic interactions between members of the Notch pathway, in particular again Delta genes (Schwendinger-Schreck et al., 2014), with different roles and changes of regulations in the dynamic vs static phase (see e.g. [Wright et al., 2011]). The oscillation itself could be mediated through one or several negative feedback loops in this pathway (Lewis, 2003), and stabilization could be realized through one of the multiple Notch signaling interactions (possibly via cell coupling, similarly to what is observed in other systems [Corson and Siggia, 2017]).
Results
A model for the transition between two genetic modules: Hopf vs. SNIC
In Zhu et al., 2017, it was suggested that the transition from a ‘wave-like’ behaviour to a static pattern during Tribolium segmentation was mediated by a smooth transition from one set of modules (corresponding to the oscillatory phase) toward another (corresponding to the fixed pattern). This explained the ‘speed-gradient’ mechanism where the typical timescale of the dynamical system depends strongly on an external gradient (in this case, the concentration of the transcription factor Caudal). In the Appendix, we further study the associated bifurcation, and observe that new fixed points corresponding to the stabilization of gap gene expressions appear on the dynamical trajectory of those gap genes (Figure 1—figure supplement 2F). In simple words, the gap gene expression pattern slowly ‘freezes’ without any clear discontinuity in its behaviour from the dynamic to the static phase, which is reminiscent of the ‘infinite-period’ scenario displayed on Figure 1.
We first aim to generalize this observed property. A simple way to generate waves of gene expressions (as in the gap-gene system described above) is to consider an oscillatory process, so that each wave of the oscillation corresponds to a wave of gap genes. We are not saying here that the gap-gene system is an oscillator, but rather that its dynamics can be embedded within a larger oscillator (Verd et al., 2018). The other advantage of considering oscillators is that we can better leverage dynamical systems theory to identify and study the bifurcations. Furthermore, it allows for a better connection with oscillatory segmentation processes in vertebrates and arthropods.
We thus start with an idealized gene regulatory network with three genes under the control of two regulatory modules (Figure 2). In the dynamic phase $D\left(P\right)$, we assume that the three genes are oscillating with a repressilator dynamics (Elowitz and Leibler, 2000), so that the system keeps a reference dynamical attractor and an associated period. In the static phase $S\left(P\right)$, we assume that the module encodes a tristable system via mutual repression (Figure 2A).
We study the dynamics in a simulated embryo under the control of a regressing front of $g$ (Figure 2B). Transition from the dynamic module to the static module is expected to form a pattern by translating the phase of the oscillator into different fates, implementing a clock and wavefront process similar in spirit to the one in François et al., 2007. We compare two versions of this model, presenting the two different behaviors that we found. In Model 1 (Figure 2C–H), the weights of the two modules are nonlinear in $g$: ${\theta}_{D}\left(g\right)={g}^{2}$ and ${\theta}_{S}\left(g\right)={\left(1-g\right)}^{2}$ (Figure 2C). In Model 2 (Figure 2I–N), the weights of the two modules are linear in $g$: ${\theta}_{D}\left(g\right)=g$ and ${\theta}_{S}\left(g\right)=1-g$ (Figure 2I). We note that the initial and final attractors of both models are identical. Importantly, only the transition from one set of modules (and thus one type of dynamics) to the other is different. This two-module system thus offers a convenient way to compare the performance of different modes of developmental transition while keeping the same ‘boundary conditions’ (i.e. the same initial and final attractors).
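The two weighting schemes can be compared in a minimal single-cell simulation. The Hill-function forms and parameters below are illustrative stand-ins for the repressilator-like dynamic module and the tristable static module; the paper's exact equations are given in its Appendix.

```python
import numpy as np

def hill_rep(x, K=0.5, n=6):
    """Repressive Hill function (illustrative parameters)."""
    return K**n / (K**n + x**n)

def hill_act(x, K=0.5, n=6):
    """Activating Hill function (illustrative parameters)."""
    return x**n / (K**n + x**n)

def dynamic(P):
    """Repressilator-like module: A -| B -| C -| A."""
    A, B, C = P
    return np.array([hill_rep(C), hill_rep(A), hill_rep(B)])

def static(P):
    """Tristable module: self-activation plus mutual repression."""
    A, B, C = P
    return np.array([hill_act(A) * hill_rep(B) * hill_rep(C),
                     hill_act(B) * hill_rep(A) * hill_rep(C),
                     hill_act(C) * hill_rep(A) * hill_rep(B)])

def rhs(P, g, model):
    """Equation 1 without noise; -P is the degradation term C(P)."""
    if model == 1:                       # nonlinear weights (Hopf route)
        wD, wS = g**2, (1.0 - g)**2
    else:                                # linear weights (SNIC route)
        wD, wS = g, 1.0 - g
    return wD * dynamic(P) + wS * static(P) - P

def simulate(g, model, P0, T=200.0, dt=0.01):
    """Euler-integrate a single cell at fixed g; returns the trajectory."""
    P = np.array(P0, float)
    traj = [P.copy()]
    for _ in range(int(T / dt)):
        P = P + dt * rhs(P, g, model)
        traj.append(P.copy())
    return np.array(traj)
```

With these (assumed) parameters, the system oscillates at $g=1$ and settles into one of three mutually exclusive high-expression states at $g=0$, reproducing the qualitative boundary conditions shared by Models 1 and 2.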
Figure 2E and Figure 2K show the kymographs for both models without noise, with behaviors of individual cells in Figure 2D and Figure 2J. While the final patterns of both models are the same (Figure 2F and Figure 2L), giving rise to a repeated sequence of three different fates, it is visually clear that the pattern formed with Model 2 is more precise and sharper along the entire dynamical trajectory than the one formed with Model 1, which goes through a ‘blurry’ transitory phase (compare midrange values of $g$ on Figure 2E and Figure 2K).
To better understand this result, we plot the bifurcation diagram of both models as a function of $g$ in Figure 2G and Figure 2M. As $g$ decreases, Model 1 is the standard case of a local Hopf bifurcation (Strogatz, 2015), which happens at $g=0.72$. Three simultaneous saddle-node bifurcations appear for lower values of $g$, corresponding to the appearance of the fixed points defining the three regions of the pattern. The behaviour of Model 2 is very different: the fixed points form on the dynamical trajectory, via three simultaneous saddle-node on invariant circle (SNIC) bifurcations (Strogatz, 2015). Both models display waves corresponding to the slowing down of the oscillators, leading to a static regime. In Model 1, the oscillation disappears with a finite period because of the Hopf bifurcation (Figure 2H). In Model 2, the period diverges because of the SNIC (Figure 2N), suggesting an explicit mechanism for the infinite-period scenario of Figure 1.
To further quantify the differences of performance between the two models, we introduce noise (encoded with variable Ω, see the Model section and the Appendix) and diffusion (Figure 3A–D). We also define a mutual information metric measuring how precisely the phase of the oscillator is read to form the final pattern (Figure 3E, see the Appendix for details), consistent with the experimental observation in vertebrate segmentation that oscillatory phases and pattern are functionally connected (Oginuma et al., 2010). Intuitively, this metric quantifies in a continuous way the number of fates encoded by the system at steady state. Ideal mutual information for the three mutually exclusive genes of Models 1 and 2 gives $\log_2(3) \approx 1.6$ bits of mutual information, meaning that the pattern deterministically encodes the phase of the cycle into three static fates with equal weights. While addition of noise decreases this mutual information as expected (Figure 3E), Model 2 (black curves) always outperforms Model 1 (red curves). For a reasonable level of noise corresponding to a few thousands of proteins in the system, Model 2 can encode $2^{1.3} \approx 2.5$ fates, close to the optimum of 3. Furthermore, for a given diffusion constant, Model 1 requires a noise level ten times smaller than Model 2 to encode the same amount of mutual information, which indicates much better noise resistance for Model 2.
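A plug-in estimate of this phase-to-fate mutual information can be sketched as follows; the paper's exact estimator (see its Appendix) may differ in binning and normalization, so treat this as an illustrative implementation.

```python
import numpy as np

def mutual_information(phases, fates, n_phase_bins=12):
    """Plug-in estimate of I(phase; fate) in bits from paired samples.
    `phases` lie in [0, 2*pi); `fates` are integer fate labels."""
    phase_bins = np.floor(np.asarray(phases) / (2 * np.pi) * n_phase_bins).astype(int)
    fates = np.asarray(fates)
    mi = 0.0
    for pb in np.unique(phase_bins):
        for f in np.unique(fates):
            p_joint = np.mean((phase_bins == pb) & (fates == f))
            if p_joint > 0:
                p_pb = np.mean(phase_bins == pb)
                p_f = np.mean(fates == f)
                mi += p_joint * np.log2(p_joint / (p_pb * p_f))
    return mi
```

If each third of the cycle maps deterministically onto one of three equally frequent fates, this estimator returns $\log_2(3) \approx 1.6$ bits, the ideal value quoted above; noise in the phase-to-fate map lowers it toward 0.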
Those observations suggest that the appearance of stable fixed points through SNIC bifurcations rather than through Hopf bifurcations generates a more robust pattern. The superiority of Model 2 can be rationalized in the following way: when there is a Hopf bifurcation, only one fixed point exists for a range of $g$ values, so that all trajectories are attracted towards it. This corresponds to the ‘blurred’ zone in the kymographs of Figure 2 and Figure 3. In the presence of noise, the effect is to partially erase the memory of the phase of the oscillation when only one fixed point is present for the dynamics. Conversely, a SNIC bifurcation directly translates the phase of the oscillation into fixed points, without any erasure of phase memory, ensuring higher information transfer from the dynamic to the static phase, and therefore more precise patterning.
The Hopf bifurcation of Model 1 occurs when the weights of the dynamic and static modules become small compared to the degradation term, which generates an ‘intermediate regime’ with one single fixed point after the oscillations of the dynamic module and before the multistability of the static module. The specific form of the weights is not what determines the bifurcation, but rather the presence or absence of an intermediate regime. We confirm this observation with similar 3-gene models that use Hill functions for the weights ${\theta}_{D}$ and ${\theta}_{S}$ (Figure 2—figure supplement 1 and Figure 3—figure supplement 1). Interestingly, we get both Hopf and SNIC bifurcations with the same shape for the two weights; the Hopf is obtained by shifting the weight of the dynamic term toward larger values of the control parameter. This effectively generates the required intermediate regime where both weights are small compared to the degradation term.
Gene-free models present a similar geometry of transition
Hopf and saddle-node bifurcations are ‘local’ bifurcations, in the sense that changes of the flow in phase space are confined to an arbitrarily small region of phase space as the bifurcation is approached. They do not in principle require complex changes of the flow or fine-tuning of the parameters to happen. As such, they are the most standard cases in many natural phenomena and in most theoretical studies. Conversely, SNIC bifurcations are ‘global’ bifurcations (Strogatz, 2015; Ermentrout, 2008): they are associated with changes of the flow in large regions of phase space (e.g. when a limit cycle disappears with a nonzero amplitude) and usually require some special symmetries or parameter adjustments to occur (e.g. to ensure that a saddle-node collides with a cycle).
It is therefore a surprise that SNIC bifurcations spontaneously appear in the models considered here. To better understand how this is possible and whether this is a generic phenomenon, we follow ideas first proposed by Corson and Siggia, 2012 and consider geometric (or gene-free) systems. We aim to see if: 1. SNIC bifurcations are generically observed, and 2. a model undergoing a SNIC bifurcation is in general more robust to perturbations than a model undergoing a Hopf bifurcation, with initial and final attractors being held fixed. We thus build 2D geometric versions of the system (variables $y$ and $z$). The dynamic module $D\left(P\right)$ is defined by a non-harmonic oscillator on the unit circle, while the static module $S\left(P\right)$ is defined by two stable fixed points, at $y=\pm 1$, $z=0$ (see Figure 4A, and the Appendix for the equations). As previously, we build a linear interpolation between the two systems as a function of $g$ and explore the consequence on the bifurcations (Figure 4B–H). Since the flow in the system is 2D, we can also easily visualize it (Figure 4I and Figure 4—video 2).
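A gene-free model of this kind can be sketched with an illustrative choice of modules: a limit-cycle flow attracting to the unit circle for $D$, and a bistable flow with stable fixed points at $(y, z) = (\pm 1, 0)$ for $S$, linearly interpolated by $g$. These functional forms are assumptions for illustration; the paper's exact equations are in its Appendix.

```python
import numpy as np

def dynamic(y, z, omega=1.0):
    """Oscillator attracting to the unit circle with rotation rate omega
    (an illustrative normal form, not the paper's exact module)."""
    r2 = y**2 + z**2
    return np.array([(1.0 - r2) * y - omega * z,
                     (1.0 - r2) * z + omega * y])

def static(y, z):
    """Bistable flow with stable fixed points at (+1, 0) and (-1, 0)."""
    return np.array([y - y**3, -z])

def flow(y, z, g):
    """Linear interpolation between the two modules, as in the main text."""
    return g * dynamic(y, z) + (1.0 - g) * static(y, z)
```

Integrating `flow` for intermediate values of $g$ shows the cycle deforming and slowing near the two future fixed points before they appear, the hallmark of the SNIC route described in the text.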
In brief, this geometric approach confirms all the observations made on the gene network model of the previous section, and further clarifies the origin of the SNIC bifurcation. Because of the smooth transition between modules, the entire flow in 2D needs to interpolate from a cycle to a bistable situation. When both modules have close to equal weights (around $g=0.5$), the flow and associated cycle concentrate around two future fixed points. This appears in retrospect as the most natural way to interpolate between the two situations since both types of attractor (stable limit cycle, and multiple stable fixed points) are effectively present at the same time around $g=0.5$. For this reason, the oscillations are also more similar to relaxation oscillations, rapidly jumping between two values corresponding to the future fixed points. When $g$ is further lowered, the weight of the static module dominates and ‘tears apart’ the cycle, forming two fixed points.
This situation is so generic that in fact, to obtain a Hopf bifurcation, we need to mathematically reinforce the fixed point at $y=0$ for intermediate values of $g$. More precisely, three things are required to get a Hopf bifurcation. First, we need to add an extra term, the ‘intermediate module’, characterized by a single fixed point at $y=0$ and $z=0$. Without this module, we consistently get a SNIC bifurcation, for all forms of coupling tested, including linear and nonlinear couplings (Figure 4—figure supplement 1A–D). Second, we need to set the weight of the intermediate module to 0 for $g=1$ and $g=0$, so that we get the oscillations of the dynamic module at the beginning of the simulation and the bistability of the static module at the end of the simulation. We achieve this by setting the weight of the intermediate term to $g\left(1g\right)$, which is of second order in $g$. Third, we need to make the weights of the dynamic and static modules smaller than the weight of the intermediate module for intermediate values of the control parameter, that is for $g$ around 0.5. This is achieved by using cubic weights for the dynamic and static modules (Figure 4—figure supplement 1G and Figure 4—figure supplement 2). Weights of the fourth order in $g$ also lead to a Hopf bifurcation (Figure 4—figure supplement 1H), while linear weights lead to a SNIC bifurcation (Figure 4—figure supplement 1E). Interestingly, quadratic weights lead to simultaneous supercritical Hopf and pitchfork bifurcations (Figure 4—figure supplement 1F). The nonlinearity of the coupling helps reinforce the relative weight of the intermediate module for values of $g$ around 0.5, but the exact shape of the nonlinearity is not crucial. Furthermore, our mutual information metric confirms that the pattern is more robustly encoded when the system goes through a SNIC bifurcation rather than through a sequence of Hopf and pitchfork bifurcations (Figure 4—figure supplement 5F).
Detailed simulations of the gene-free model with the intermediate module and with cubic weights for the dynamic and static modules are shown on Figure 4—figure supplement 2. As expected for a Hopf bifurcation, the flow first concentrates on the central fixed point at $y=0$, before reemerging in a bistable pattern for lower $g$ (Figure 4—figure supplement 2I and Figure 4—video 1). There are technically two types of Hopf bifurcations, depending on the stability of the limit cycle. A Hopf bifurcation is supercritical (resp. subcritical) if a stable (resp. unstable) limit cycle becomes a stable (resp. unstable) fixed point. All Hopf bifurcations discussed previously are supercritical. However, by slightly changing the dynamic module of the gene-free model (and including the intermediate module as well as cubic weights for the dynamic and static modules) we can also obtain a subcritical Hopf bifurcation (Figure 4—figure supplement 3). During this bifurcation, an unstable limit cycle forms around the origin. This unstable limit cycle coexists with the stable limit cycle of the dynamic module for some range of $g$ values before they annihilate each other during a so-called ‘saddle-node of limit cycles’ bifurcation. Finally, the two fixed points of the static module form during two simultaneous saddle-node (of fixed points) bifurcations (Figure 4—figure supplement 3I and Figure 4—video 3). Again, our mutual information metric confirms that the pattern is more precise when the system goes through a SNIC bifurcation (Model 2), rather than through supercritical or subcritical Hopf bifurcations (Models 1 and 3, resp.) (Figure 4—figure supplement 5E). Taken together, these results suggest that, keeping the static and dynamic attractors fixed, patterning is both more generic and more robustly encoded through a SNIC bifurcation than through a Hopf bifurcation, at least in simple low-dimensional models.
Robustness and asymmetry in the fixed points
A concern with the results of the previous section might be that those mathematical models are in fact fine-tuned and too symmetrical, so that, in particular, when the transition occurs both new fixed points appear for the same value of the control parameter. Furthermore, real biological networks have no reason to be perfectly symmetrical (although evolution itself might select for more symmetrical dynamics if needed). We thus relax our hypotheses to study a system where parameters and trajectories are not symmetrical (Figures 5 and 6).
Going back first to the gene network model, we induce an asymmetry between the fixed points by changing thresholds of repression in the static phase (Figure 5A). The bifurcation diagrams of Figure 5B–C indicate that the asymmetry of the fixed points indeed breaks the simultaneity of appearance of all fixed points in both scenarios. We nevertheless notice that for those changes of parameters, all bifurcations still happen in a very narrow range of $g$ for the SNIC model.
Asymmetry of the fixed points might therefore destroy the advantage of SNIC vs Hopf by creating a transient zone where one of the fixed points always dominates. We thus perform a comparison between Models 1 and 2 with the same asymmetric static enhancers (Figure 5, see also Figure 5—figure supplements 1 and 2, and the Appendix for details). To compare the two cases, we consider different timescales of the morphogen gradient. The reasoning is that the slower the decay of $g$, the more time the system spends in a region of parameter space without all three final fixed points, allowing the system to relax and ‘lose’ phase information. Conversely, a faster decay of $g$ means that less time is spent in a region with few fixed points, and therefore the patterns are expected to be more robust.
We first decrease the thresholds of repression of gene A by both genes B and C (Figure 5A). Results of these simulations are shown in Figure 5: Model 2 with a SNIC bifurcation still outperforms Model 1 with Hopf and saddle-node bifurcations. In particular, it is again visually clear on kymographs that Model 2 produces a robust and well-defined pattern at any time point of the simulations, while Model 1 gives rise to a much ‘fuzzier’ pattern before the transition. Model 1 produces a robust static pattern only for a steep gradient (allowing the system to move quickly through the ‘fuzzy’ phase) and a weak asymmetry in the static module (Figure 5E). It is brittle to any change of the dynamics of $g$ (Figure 5H) or to stronger asymmetry in the static module (Figure 5—figure supplement 1E,H). Conversely, Model 2 is robust to different shapes of the morphogen (Figure 5F,I). Only for a strong asymmetry does the system lose one fixed point (Figure 5—figure supplement 1I), but even in this case transitions through a SNIC bifurcation appear superior to transitions through a Hopf bifurcation.
The fragility of the Hopf bifurcation to asymmetries in the parameters can be understood as follows. In the asymmetric versions of Model 1, one of the fixed points of the static term forms at the Hopf bifurcation, well before the two other fixed points form. It is therefore the only attractor available for a large range of $g$ values. However, in Model 2 the same asymmetry only favors one of the fixed points for a small range of $g$ values, generating a robust pattern. Again, we can use the mutual information metric defined above to quantify the robustness of the pattern and confirm the superiority of Model 2 (Figure 5—figure supplement 2J). We also confirmed these results for the case of random modifications of the repression thresholds of all interactions in the static term (Figure 5—figure supplement 2).
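A mutual information metric of this kind can be illustrated with a generic histogram-based estimator between initial oscillator phases and final fates (a sketch only; the exact estimator and binning used in the paper may differ, and the variable names here are ours):

```python
import numpy as np

# Generic mutual information (in bits) between a continuous phase in
# [0, 2*pi) and a discrete fate, estimated from the joint histogram.
def mutual_information(phases, fates, n_bins=8):
    edges = np.linspace(0.0, 2.0 * np.pi, n_bins + 1)[1:-1]  # inner bin edges
    phase_bins = np.digitize(phases, edges)                  # 0 .. n_bins-1
    joint = np.zeros((n_bins, int(np.max(fates)) + 1))
    for p, f in zip(phase_bins, fates):
        joint[p, f] += 1.0
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal over phase bins
    py = joint.sum(axis=0, keepdims=True)   # marginal over fates
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Robust encoding: fate fully determined by phase -> close to 1 bit.
phases = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, 4000)
fates_good = (phases > np.pi).astype(int)
# Lost phase information: fates random -> near-zero mutual information.
fates_bad = np.random.default_rng(1).integers(0, 2, 4000)
print(mutual_information(phases, fates_good), mutual_information(phases, fates_bad))
```

A deterministic phase-to-fate map gives close to 1 bit here, while random fates give near-zero mutual information, which is the sense in which the metric quantifies how well phase information is preserved in the final pattern.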
The asymmetry introduced in Figure 5 changes the shapes of the basins of attraction and the positions of the fixed points. The geometric model allows us to change those features independently. The most generic way to introduce an asymmetry in the system is to fix the positions of the fixed points of the static regime and change only the positions of the basins of attraction (the reason is that the future fates depend on the position of the separatrix between different regimes [Corson and Siggia, 2012]). To replicate this situation in the 2D gene-free models, we move the unstable fixed point of the static term along the $y$ axis. Results of this procedure are shown on Figure 6 and confirm our results on the gene-network-based models: Model 2 bifurcates via a SNIC and is always more robust than Model 1. When we change the positions of the fixed points in the static regime to move them away from the limit cycle (still in an asymmetric way), interestingly both Models 1 and 2 now bifurcate via SNICs (Figure 6—figure supplement 1). Furthermore, we see that for Model 1, the amplitude of the limit cycle decreases before the bifurcation, while for Model 2, the amplitude increases (Figure 6—figure supplement 1E).
We conclude from all those numerical perturbations that even with asymmetric basins of attraction and asymmetric parameters, transitions based on SNIC bifurcations are both more generic and more robust than the ones based on Hopf bifurcations, at least in simple low-dimensional models.
Spatial wave profiles: Hopf vs SNIC
Our theoretical study suggests that SNIC-based transitions are both more robust and more generic than Hopf/saddle-node ones. We thus now examine ways to distinguish between Hopf- and SNIC-based transitions experimentally. The natural method would be to modify the value of the control parameter of the bifurcations (in our case parameter $g$) and check how the attractors of the system change. A recent experimental example can be found in the auditory hair-bundle context, where it has been suggested that Hopf bifurcations can be distinguished from SNIC bifurcations by tuning the external driving force (Salvi et al., 2016). However, the situation in development is different from any sensory system since the actual control parameters are not known and are likely combinations of various signaling systems (such as FGF or Wnt). There could also be many compensatory developmental mechanisms, making it difficult to fully take control of the system, as is well known to developmental geneticists.
In the absence of a direct control, we have to rely on indirect measurements. The most obvious choice is to use the anteroposterior position along the embryo (or along the PSM in the case of somite formation) as a proxy for the control parameter. In situs are informative of what happens locally in insects (El-Sherif et al., 2012; Zhu et al., 2017). In vertebrates, it is possible to monitor segmentation oscillators in real time in embryos (Aulehla et al., 2008; Webb et al., 2016; Delaune et al., 2012) and in tissue cultures (Lauschke et al., 2013; Matsuda et al., 2020; Diaz-Cuadros et al., 2020), so that many properties of the cycles can be inferred.
A first relevant metric (used in Salvi et al., 2016) is the shape of the limit cycle as a function of the control parameter. There is a clear contrast between Hopf and SNIC bifurcations on our simulated kymographs: for Hopf bifurcations, as mentioned above, the transition zone is ‘blurred’ because the damped oscillations relax toward the unique fixed point, while for SNIC bifurcations the oscillations localize close to the fixed points, and as a consequence the pattern is gradually reinforced (Figures 2, 3, 4, 5; damped oscillations are especially visible in the geometric model). To better visualize the ‘blurry’ transition zone characteristic of Hopf bifurcations in the gene-free model (resp. in the 3-gene model), we compute the distribution of the values of the geometric variables (resp. of the gene concentrations) for different values of parameter $g$ (Figure 7—figure supplement 2). In models with Hopf bifurcations, the distribution becomes very narrow for intermediate values of $g$, which is a direct consequence of the damped oscillations past the Hopf bifurcation. This suggests that measuring experimentally the distribution of gene concentrations across developmental time at fixed anteroposterior positions can help distinguish SNIC from Hopf bifurcations during actual pattern formation processes. The experimental picture, both from in situs and live imaging, does not support a narrowing of the distribution: it shows clear increases of amplitude that seem more consistent with a gradual reinforcement of the pattern, similar to a SNIC (see e.g. Delaune et al., 2012).
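The narrowing of the distribution past a Hopf bifurcation can be illustrated with a damped spiral, the generic behaviour past a supercritical Hopf bifurcation (a sketch with illustrative parameter values, not one of our specific models):

```python
import numpy as np

# Past a supercritical Hopf bifurcation, the normal form dz/dt = (mu + i*omega) z
# with mu < 0 has a single stable fixed point at the origin: trajectories
# spiral inward. Sampling y(t) = Re z(t) over time then gives a distribution
# that narrows around 0 -- the 'blurry' transition zone described in the text.
mu, omega, dt = -0.5, 2.0 * np.pi, 0.001
t = np.arange(0.0, 10.0, dt)
z = np.exp((mu + 1j * omega) * t)   # analytic solution from z(0) = 1
y_early = z.real[t < 2.0]           # early times: large oscillations
y_late = z.real[t > 8.0]            # late times: damped, near fixed point
print(np.std(y_early), np.std(y_late))  # spread collapses at late times
```

A SNIC scenario would instead keep the sampled values spread around the emerging fixed points, so the distribution stays broad or bimodal rather than collapsing.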
Another metric that could help identify the type of bifurcation is the time evolution of the period (resp. of the frequency), which presents a discontinuity for both types of Hopf bifurcations, but continuously goes to infinity (resp. zero) for SNIC bifurcations. The local spatial wavelength of the pattern provides a continuous measurement related to the local period (as first used and derived in Giudicelli et al., 2007). Calling $S$ the somite size (or the wavelength of anterior/posterior markers within somites after clock stopping), $T\left(x\right)$ and $S\left(x\right)$ the respective period and local wavelength of the pattern at position $x$ (with $x=0$ being the tail and $x=1$ the front), and assuming the cells move towards the anterior with constant speed, we have

$$S\left(x\right)=S\,\frac{T\left(x\right)}{T\left(x\right)-T\left(0\right)}$$
In the posterior, $T\left(x\right)=T\left(0\right)$, so that $S\left(x\right)$ is infinite: all cells are synchronized locally. As cells move towards the anterior, $T\left(x\right)$ increases and $S\left(x\right)$ decreases but stays bigger than $S$. If there is a Hopf bifurcation, one expects that the clock will stop with a finite final period $T\left(1\right)$, corresponding to a critical wavelength of the wave $S\left(1\right)=S\frac{T\left(1\right)}{T\left(1\right)-T\left(0\right)}$, which is strictly greater than $S$. There is therefore in principle a discontinuity between the wavelength of the propagating wave in the PSM and the wavelength of the static pattern. Conversely, if $T\left(x\right)$ goes continuously to infinity, $S\left(x\right)$ decays continuously to converge towards $S$ (as measured in Giudicelli et al., 2007). This is a simple and intuitive experimental outcome: in this situation the wavelength of the propagating wave in the PSM decreases until it exactly matches the pattern of anterior markers. Therefore, a signature of the finite vs infinite period bifurcation might be visible by comparing the local wavelength of propagating waves in the PSM to the wavelength of anterior markers in the pattern. Practically, monitoring the distance between the peaks of the oscillations does not give a clear difference between Hopf and SNIC, the reason being that such a discrete measurement can smooth out an abrupt change in the period, and that past the Hopf bifurcation, damped oscillations actually give rise to a ‘ghost’ frequency which can decrease quite significantly (see Figure 2—figure supplement 2). Therefore, a measurement of the local wavelength might be less conclusive than a measurement of the amplitude. Nevertheless, we notice that a SNIC scenario gives rise to a combination of an increase of the amplitude with a continuous shrinking of the wavelength, meaning that many refining waves (corresponding to broad variations of periods) will simultaneously propagate.
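The two scenarios can be checked numerically from the wavelength–period relation (illustrative values; $S$ and $T(0)$ are set to 1):

```python
import numpy as np

# Local wavelength S(x) = S * T(x) / (T(x) - T(0)), with x=0 the tail and
# x=1 the front, assuming constant cell speed towards the anterior.
def wavelength(T_x, T_0, S=1.0):
    return S * T_x / (T_x - T_0)

T0 = 1.0

# Hopf-like scenario: the clock stops at a finite final period T(1), so the
# last propagating wavelength S(1) stays strictly greater than S.
T1_hopf = 2.0
print(wavelength(T1_hopf, T0))   # 2.0, i.e. a discontinuity with S = 1.0

# SNIC-like scenario: T(x) diverges, so S(x) converges continuously to S.
for T1 in (10.0, 100.0, 1000.0):
    print(wavelength(T1, T0))    # approaches S = 1.0 from above
```

With a finite stopping period the propagating wavelength never reaches $S$, while a diverging period lets it match $S$ continuously, which is the experimental signature discussed above.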
Finally, SNIC bifurcations are also accompanied by characteristic changes of the shape of spatial wave profiles that might be observable in experiments. We further compare a SNIC-based model to a phase model where an infinite-period behaviour is explicitly assumed, namely the model of a collection of coupled oscillators from Morelli et al., 2009. A kymograph of the spatiotemporal profile of the frequency imposed on the oscillators is shown in Figure 7A, the dynamics of the resulting pattern formation process is shown on the kymograph of Figure 7B, and the final pattern on Figure 7C. The most striking difference is observed in the shape of the spatial wave profile as it moves towards the region where the pattern stabilizes. In the infinite-period scenario of Morelli et al., 2009, the phase profile is by construction symmetric (albeit stretched in the posterior compared to the anterior, see Figure 7E). In the SNIC scenario, we see a clear asymmetry in the peaks of the wave profile (as shown schematically in Figure 7D). Indeed, following the peaks of oscillations spatially from anterior to posterior (left to right), we see that the transition from positive to negative values of $y$ occurs in two steps. During the first step, the system stays close to $y=1$ for some time, with $y$ values decreasing slowly, while in the second step the system goes rapidly towards negative $y$ values and quickly reaches $y=-1$ (Figure 7F). This phenomenon, which gives rise to a ‘sawtooth’ pattern of propagating waves, is observed in all our versions of Model 2 (and is notably absent from all our versions of Model 1, see Figure 7—figure supplement 1). Those spatial asymmetries are likely due to the asymmetries of the limit cycles of relaxation oscillators, where a system jumps between two (or more) steady states in an asymmetric way (which can also be observed in systems close to criticality, see Tufcea and François, 2015).
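The slow-drift/fast-jump asymmetry invoked here is the standard behaviour of relaxation oscillators, which can be sketched with the van der Pol oscillator in Liénard form (a textbook example, not one of our models; the parameter values are illustrative):

```python
import numpy as np

# Van der Pol oscillator in Lienard form: for large mu the system drifts
# slowly along a branch of quasi-steady states (|x| between 1 and 2), then
# jumps rapidly to the other branch -- the origin of sawtooth-like profiles.
mu, dt = 10.0, 0.001
x, y = 2.0, 0.0
xs = []
for i in range(int(100.0 / dt)):          # simple explicit Euler integration
    dx = mu * (x - x**3 / 3.0 - y)        # fast variable
    dy = x / mu                           # slow variable
    x, y = x + dt * dx, y + dt * dy
    if i * dt > 20.0:                     # discard the initial transient
        xs.append(x)
xs = np.array(xs)

# Most of the cycle is spent on the slow branches |x| >= 1; the jumps
# between branches are comparatively instantaneous.
print(np.mean(np.abs(xs) > 1.0))
```

The fraction of time spent near the slow branches is close to 1, so a spatial readout of such an oscillator shows long plateaus separated by abrupt transitions, qualitatively like the sawtooth wave profile described above.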
Our model thus offers a simple explanation of wave asymmetry, solving the long-standing problem of the asymmetry of AP vs PA transitions, which is possibly crucial for segment polarity, as first suggested by Meinhardt, 1982.
Discussion
In this work, we have explored the dynamical properties of generic two-module systems, where one module corresponds to a dynamic phase of genetic expression and the other corresponds to a static phase controlling embryonic patterning. The unexpected result is that those models typically present global bifurcations where new fixed points appear on the trajectories in phase space. Such SNIC bifurcations come from the smooth interpolation between a flow defining an oscillator in phase space and a landscape characterized by several fixed points. The oscillating attractor then gets continuously deformed until it breaks into several fixed points, leading to the SNIC. This interpolation is a direct consequence of the assumed two-module control, as shown on multiple examples above. Importantly, the overall developmental sequence in this context is emergent, since the dynamics close to the bifurcation cannot be understood from the static or dynamic module alone. SNIC bifurcations also provide robustness to various perturbations (since fixed points appearing on cycles better preserve information on the oscillatory phase). Below we detail how this model also recapitulates many observed features of metazoan segmentation.
Experimental evidence
In the absence of a direct handle on the control parameter, comparison of SNIC and Hopf bifurcations in simulated systems reveals three possible signatures of SNIC bifurcations in the propagating anteroposterior waves of genetic expression: (1) oscillations gradually reinforcing the pattern (instead of damped oscillations for Hopf); (2) relaxation-like oscillations leading to some asymmetry in the wave pattern; and (3) a continuous shrinking of the spatial wavelength during the dynamic phase to match the static pattern, with many well-defined propagating waves.
For prediction 1, decades of in situs of oscillating genes and of experimental monitoring of Notch signalling pathways argue against damped oscillations in somitogenesis. Weak waves gradually refine into well-defined stripes in the anterior PSM in most species (see for example the comparison made in Gomez et al., 2008), suggesting an increase of amplitude in oscillations. Consistent with this, visualising segmentation clock oscillations in live vertebrate embryos suggests an increase of oscillation amplitude as cells get more anterior in the PSM (Delaune et al., 2012; Lauschke et al., 2013). A similar observation was also made for gap genes during short-germ segmentation in Tribolium (Zhu et al., 2017).
For prediction 2, in somitogenesis, there is an asymmetry in the wave pattern before stabilization: the transition from the high to the low phase within one somite is shallower than the transition between those two phases from one somite to the other (i.e. posterior of one somite to anterior of the next), as detailed in Shih et al., 2015. A possible readout is the downstream pattern of Cadherin controlling segment boundary formation, which presents a sawtooth profile (McMillen et al., 2016), consistent with the observed asymmetry in the wave pattern. It has been proposed that the main reason for the clock control was precisely to generate such a periodic sawtooth pattern, and a SNIC bifurcation offers a plausible mechanism. In insects, it has been proposed that one of the roles of traveling waves of segmentation genes (e.g. shifts of eve gene expression) is indeed to provide segment polarity (Rothschild et al., 2016; Clark, 2017; Clark and Akam, 2016), which is consistent with this proposal, although precise quantification of wave asymmetry has not been done in this context to our knowledge. In situs of gap genes in short-germ insects show asymmetry between the anterior and posterior borders of the propagating gene expression waves as well (Zhu et al., 2017).
For prediction 3, the slowdown of oscillations during vertebrate segmentation appears quite variable between species. The mouse segmentation clock seems at first more consistent with a relatively sudden Hopf scenario: there is roughly a 2π phase shift between the posterior and the front, consistent with only a moderate change of the period/wavelength (Lauschke et al., 2013). Other species have several propagating waves within the PSM, seemingly more consistent with a broader period regime. In zebrafish, in vivo real-time imaging suggests a wavelength of propagating waves in the anterior PSM of roughly twice the segment length, meaning that the period of the clock at the front is at least twice the period in the tail bud (Shih et al., 2015). However, this measurement likely overestimates the wavelength and thus underestimates the period at the front, since it is based on a peak-to-peak measurement between the last two waves. Thus, the period of the clock might further increase in the anterior, consistent with a period divergence and with the measurements of Giudicelli et al., 2007 on in situs, which suggested that the period indeed diverges. Importantly, as said above, while the period increases the amplitude increases too (Shih et al., 2015), which is more consistent with a SNIC bifurcation. In situs in snake PSM clearly show a decrease of the wavelength that seems to continuously match the pattern in the anterior, possibly more consistent with an infinite-period bifurcation as well (Gomez et al., 2008).
A direct experimental measure of the wavelength of the pattern for the last oscillations can be found with Mesp2 (Takahashi et al., 2000; Saga and Takeda, 2001). The consensus on Mesp2 is that it is expressed in the last few waves of oscillations, and that its pattern continuously regresses to reach exactly the wavelength of the anterior pattern. This is consistent with an infinite-period bifurcation. In Oginuma et al., 2010, it is argued that Mesp helps set the segment boundary in mouse by precisely reading the Notch wave, explaining both its biological role and why it is a convenient marker of the wavelength. We notice that the model used to explain the experiments in Oginuma et al., 2010 is Julian Lewis’ model from the appendix of Palmeirim et al., 1997, thus presenting an infinite-period divergence.
One cannot completely exclude from all those experimental observations a Hopf scenario where waves would not be very damped (or would be amplified for some reason) past the bifurcation, while the period of damped oscillations would become long until the saddle-node bifurcation is reached. A way to experimentally falsify this scenario is to induce a smoother transition and see if damped oscillations appear. Interestingly, both in vertebrates and in short-germ insects, changes of Wnt signalling indeed give rise to smoother transitions from the dynamic to the static regime (suggesting that Wnt could be related to the control parameter $g$). In mouse, beta-catenin gain-of-function mutants give rise to a considerably extended PSM toward the anterior (Aulehla et al., 2008), with up to five waves of oscillating Lfng (compared to only one in WT). Thus, this extended PSM qualitatively looks much more like a zebrafish or a snake PSM, with reinforcement of Lfng in situs from anterior to posterior (suggesting a refinement of the pattern and not damped oscillations, see Figure 4g,h of Aulehla et al., 2008). In the anterior PSM of such mutants, the wavelength of the oscillations decreases to match exactly the Mesp2 wavelength (Aulehla et al., 2008, see e.g. Figure 3), and the very last waves move extremely slowly towards the anterior (Alexander Aulehla, private communication), consistent with a divergence of the timescale. In Tribolium, axin RNAi similarly changes the wave pattern: smoother and more spatially extended propagation of gap gene expression is observed in those mutants, with reduced wavelengths compared to the WT (see Figure 4 of Zhu et al., 2017). Those phenotypes are exactly what is expected from an infinite-period bifurcation, again consistent with the SNIC scenario.
Lastly, damped oscillations would also be absent from Hopf scenarios where the bifurcation that destroys the limit cycle occurs at a value of the control parameter that is almost exactly the value at which the system becomes multistable. This happens for example when the supercritical Hopf bifurcation that kills the limit cycle happens at the same value of parameter $g$ as the pitchfork bifurcation that generates bistability (Figure 4—figure supplement 1F). A similar phenomenon occurs in the model presented on Figure 4—figure supplement 4: the two simultaneous subcritical Hopf bifurcations that generate the bistability and the saddle-node of limit cycles that destroys the stable limit cycle happen in a very narrow range of $g$ values. We cannot fully exclude those cases from the experimental data, but we notice they are intrinsically less generic, since multiple bifurcations have to occur at the same time, which happens in our models only because of specific parameter choices, symmetries in the equations, or specific choices of nonlinearity for the weights of the modules.
Model predictions
The most straightforward prediction of the model proposed here is the presence of several global transcriptional modules between strongly interacting genes, directly controlling the smooth changes of developmental timescale (in a similar way to the ‘speed-gradient’ model in Zhu et al., 2017). Many developmental genes are regulated by multiple ‘shadow’ enhancers (Cannavò et al., 2016). A smooth transition between different enhancers has even been observed for gap genes in Drosophila (El-Sherif and Levine, 2016). Global regulation of transcriptional modules could be biologically achieved through ‘super enhancers’ regulating many genes at the same time (Hnisz et al., 2017). A nontrivial prediction of our model is that the intrinsic timescale of the system is a function of the relative balance of transcriptional activities of the modules. The transcriptional control described here naturally allows for infinite-period bifurcations, an implicit mechanism in several models of metazoan segmentation. This is to be contrasted with classical models of negative feedback oscillators such as the Goodwin model, where the timescale is entirely controlled by degradation and is independent of transcription and translation rates (Forger, 2011), and with models of delayed oscillators, where the timescale is essentially controlled by transcriptional delays (Lewis, 2003).
Our model is controlled by an external parameter $g$. The most natural hypothesis would be that $g$ corresponds to an actual morphogen gradient. As said above, Wnt is a natural candidate, but feedbacks clearly exist with Caudal in Tribolium (Zhu et al., 2017) and FGF in vertebrates. However, in the spirit of the initial wavefront proposal by Cooke and Zeeman, $g$ could also be, in some contexts, a temporal variable, for example an effective timer. Recent works on somitogenesis have suggested that the segmentation front could also be coupled to the slowing down of two oscillators (Lauschke et al., 2013), possibly one corresponding to Notch/FGF and the other to Wnt (Sonnen et al., 2018), so that the oscillation could feed back on itself to define $g$. One could also imagine that $g$ is related to more biophysical variables (density, elasticity) (Hubaud et al., 2017). It is important to point out that in our framework the nature of the bifurcation does not depend on the nature of $g$. While it might be difficult to experimentally disentangle feedbacks between the bifurcations and the control parameter from actual properties of the bifurcations themselves, our predictions on the nature of the bifurcations would not change.
An assumption of this two-module framework is that the same genes interact to control the system in both the dynamic and static regimes, giving rise to a smooth dynamical transition during development. This is consistent with what is observed for gap gene dynamics in short-germ insects (Zhu et al., 2017). For vertebrate segmentation, we do not yet know mechanistically how both regimes are controlled, but the Notch signalling pathway is implicated in all steps of somitogenesis and in particular is known to gate information from the oscillatory to the segmented regime (Oginuma et al., 2010). An opposite view would be that the transition from the dynamic to the static regime is de facto sudden (even if it appears smooth for other reasons). Such a scenario could be realized in different ways. For instance, different enhancers could regulate completely different sets of genes in the dynamic vs static regimes. The ‘static’ genes would then interact with the ‘dynamic’ genes only briefly during development, ensuring transmission of positional information between the static and dynamic regimes in a very localized region of time and space. In somitogenesis, as said above, specific genes are indeed expressed at the so-called ‘front’ (such as Mesp2) and could act like gating processes transferring the information from the clock to an independent patterning system. In this case, we would be back to a sequential point of view where different regimes of development live in different regions of phase space, and the local bifurcation scenario would then be more plausible.
Evolution and developmental plasticity
Evolutionary simulations of patterning have favored a scenario based on Hopf and saddle-node bifurcations (François et al., 2007). Those simulations did not include multiple enhancers like here, and all transcriptional regulations had essentially to evolve from scratch, possibly suggesting an ‘Ur-segmentation clock’ based on Hopf and saddle-node bifurcations. This scenario is not excluded by our model: in particular, there would be no difference between Hopf and SNIC bifurcations under the control of steep gradients of $g$, which shortcut the region where only one fixed point is present. So Hopf/saddle-node bifurcations under the control of a steep gradient are likely the easiest solution found by (simulated) evolution. However, adding a more complex, enhancer-based evolutionary grammar might allow for a more combinatorial use of dynamical attractors, and associated robustness to external perturbations (such as the shape of the $g$ gradient). A SNIC bifurcation might then plausibly evolve from the modularization (West-Eberhard, 2003) of a system based on Hopf and saddle-node bifurcations (both in vivo and in silico). First, since the SNIC bifurcation is the generic scenario that we observe in our framework, it should be easy to discover by evolution once a multi-enhancer system combining an oscillator and a bistable system has evolved. Second, SNIC-controlled segmentation is plastic in the sense that changes of dynamics of the control parameter would change the transient phenotypes (such as the number of oscillating waves) but would still generate a pattern. Modularization leading to developmental plasticity has been suggested to be an important engine for evo-devo (West-Eberhard, 2003), since it allows for intraspecific variability without impinging on the most important phenotypes (here, segmentation).
Importantly, such plasticity is indeed observed experimentally both for short-germ and vertebrate segmentation. For instance, in Tribolium one can considerably modify Caudal gradient dynamics and still see proper patterning (Zhu et al., 2017). This plasticity could thus explain how and why there is so much quantitative variability in segmentation mechanisms, such as short vs intermediate germ-band segmentation (as suggested in Zhu et al., 2017). In somitogenesis, there is a lot of interspecies quantitative variability in the numbers of waves and rescaled periods (Gomez et al., 2008), while the qualitative dynamics itself appears very conserved (see, for example, Krol et al., 2011). In other words, a two-module mechanism makes the dynamics both more robust – a generic bifurcation scenario gives precise phase encoding – and more evolvable – one can vary many features of the system (e.g. basins of attraction, dynamics of the control parameter) and still get proper patterning, again a hallmark of developmental plasticity.
In brief, we have discussed a two-module model of the smooth transition in development from a dynamical regime to a static one. This model explains timescale divergence, as well as robustness to changes of morphogen dynamics (Zhu et al., 2017) and to noise. It provides a possible explanation for smooth robust transitions in metazoan segmentation, with a nontrivial (global) bifurcation. Further experimental and theoretical studies are required to assess the importance of smooth transitions for encoding dynamic information into spatial patterns of genetic expression.
Appendix 1
1 A two-enhancer model reproduces dynamical features of Tribolium segmentation
In Zhu et al., 2017, two of us proposed a model of Tribolium segmentation relying on the interplay of two sets of enhancers: a static set $S(P)$ and a dynamic set $D(P)$, where the role of parameter $g$ is played by the morphogen Caudal (cad) (El-Sherif et al., 2012; Figure 1—figure supplement 2). $S(P)$ encodes a multistable system and $D(P)$ a sequential cascade of genetic expression of gap genes ($hb$, $Kr$, $mlpt$, $gt$). This system was found to implement a ‘speed gradient’ model, where the speed of traveling waves of gap genes from posterior to anterior depended on the level of cad concentration (Figure 1—figure supplement 2B–D). This led to robust patterning of the embryo (Figure 1—figure supplement 2E), but the mathematical origin of the speed gradient was not explained.
To better understand the underlying dynamics of the system, we consider the time courses of multiple cells at different positions and thus with different final fates. Figure 1—figure supplement 2F shows the projection of the cells’ dynamics on a 2D plane corresponding to the first two genes expressed in the cascade ($Kr$ and $hb$), as well as a typical flow for different values of $cad$ while keeping the other genes ($mlpt$, $gt$ and $X$) at zero. Importantly, for $cad=0.13$, we see the appearance of a new fixed point (green disk on Figure 1—figure supplement 2F).
We make four observations:
The flow of the system is canalized. The trajectories of the cells stay very close to one another in phase space.
As cad is lowered, the new fixed point appears very close to the common trajectory of all cells (Figure 1—figure supplement 2F, top row), and clearly separates the trajectories of cells ending up at different fates (Figure 1—figure supplement 2F, bottom row).
When cad further decreases, the new fixed point moves into the high-$hb$, low-$Kr$ region, corresponding to the eventual fate of Cell 1.
When the new fixed point appears, the flow of cells past this fixed point is slowed down (Figure 1—figure supplement 2G).
These four observations offer a concise explanation of the ‘speed gradient’ model: as the system gets closer to the bifurcation happening at $cad=0.13$, it slows down because the future fixed point appears on the trajectory. Intuitively, this is because a fixed point corresponds to a frozen state, and thus to an infinite timescale (static). As cad varies, the system has to interpolate between a finite timescale (dynamic) and this infinite timescale, and it thus makes sense a priori that the timescale of the system diverges in between. This mechanism is close in principle to the critical timing proposed in Tufcea and François, 2015.
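This divergence of timescales can be illustrated with the one-dimensional saddle-node normal form, a minimal Python sketch (not the Tribolium model itself): as the control parameter μ approaches the bifurcation at μ = 0, trajectories spend longer and longer traversing the ‘ghost’ of the future fixed point.

```python
import numpy as np

def transit_time(mu, x0=-3.0, x1=3.0, dt=1e-4):
    """Time for the saddle-node normal form dx/dt = mu + x**2 (mu > 0,
    so no fixed point exists yet) to flow from x0 to x1, via Euler steps."""
    x, t = x0, 0.0
    while x < x1:
        x += dt * (mu + x * x)
        t += dt
    return t

# The closer mu is to the bifurcation at mu = 0, the longer the transit
# through the region where the fixed point is about to appear:
times = [transit_time(mu) for mu in (1.0, 0.1, 0.01)]
assert times[0] < times[1] < times[2]
assert times[2] > 5 * times[0]
```

Analytically the transit time scales as $1/\sqrt{\mu}$, which is the infinite-period signature discussed in the main text.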
2 List of the functions used for the dynamics of each model
2.1 Gene network models
In the gene network models, biochemical interactions between genes are modeled explicitly. Ordinary differential equations (ODEs) represent the dynamics of the concentrations of the proteins that are encoded by the genes in the network. The deterministic part of the dynamics is composed of a protein production term and a protein degradation term. The production rate of a given protein can be altered by the interactions between the genes. Hill functions are used to model repression and activation of the production of a given protein by the genes. When multiple genes affect the concentration of a protein, the Hill functions corresponding to each interaction are multiplied. In the simulations, we set the maximal production rates of all proteins to 1, and similarly set the degradation rates of all proteins to 1. In Equation 1 of the main text, $C(P)$ encodes the degradation term, and ${\mathrm{\Theta}}_{S}(g)\text{}S(P)+{\mathrm{\Theta}}_{D}(g)\text{}D(P)$ represents the production term.
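As an illustration, the production and degradation terms just described can be sketched in Python. The Hill threshold and coefficient below are placeholder values, and the choice of which genes repress gene $A$ is arbitrary for the example (the actual interactions and parameters are those of Appendix 1—table 1):

```python
import numpy as np

def hill_repression(x, K, n=4):
    """Decreasing Hill function: close to 1 when repressor x << K,
    close to 0 when x >> K. K and n are illustrative placeholder values."""
    return K**n / (K**n + x**n)

def production_A(P, K_rep=0.4):
    """Sketch of one production term: when several genes repress gene A,
    their Hill functions are multiplied. Maximal production rate is 1."""
    A, B, C = P
    return hill_repression(B, K_rep) * hill_repression(C, K_rep)

def dA_dt(P):
    """Production minus linear degradation (degradation rate set to 1)."""
    return production_A(P) - P[0]
```

With no repressors present, production is maximal (`production_A([0, 0, 0])` is 1), and at `P = [1, 0, 0]` production exactly balances degradation.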
2.1.1 3-gene models
The proteins associated with the three genes are named arbitrarily $A$, $B$ and $C$:
Appendix 1—table 1 lists the values of the parameters used in the repression interactions of all versions of the 3-gene models: the symmetric version used to generate the results of Figure 2, Figure 2—figure supplements 1 and 2, Figure 3, Figure 3—figure supplement 1 and Figure 7—figure supplement 2, the version with a weak asymmetry used in Figure 5, the version with a strong asymmetry used in Figure 5—figure supplement 1 and the version with a randomized asymmetry used in Figure 5—figure supplement 2. In the latter version, we randomly picked the values of the repression interactions of the static term $S(P)$ from a Gaussian distribution with mean 0.4 and standard deviation 0.04. Appendix 1—table 2 lists the weights ${\mathrm{\Theta}}_{D}(g)$ and ${\mathrm{\Theta}}_{S}(g)$ used for all 3-gene models: Models 1 and 2 used to generate the results of Figure 2, Figure 2—figure supplement 1, Figure 3, Figure 5, Figure 5—figure supplements 1 and 2 and Figure 7—figure supplement 2, as well as Models 1 and 2 with Hill functions for the weights ${\mathrm{\Theta}}_{D}(g)$ and ${\mathrm{\Theta}}_{S}(g)$, used in Figure 2—figure supplement 1, Figure 3—figure supplement 1 and Figure 7—figure supplement 2.
2.1.2 Model of Tribolium segmentation
In the model of Figure 1—figure supplement 2, the interactions between hunchback (hb), Krüppel (Kr), millepattes (mlpt), giant (gt) and an unidentified gene X are modeled (see the supplement of Zhu et al., 2017). Note that the role of parameter g is played by $caudal$ ($cad$) in the model for Tribolium segmentation.
2.2 Gene-free models
In the gene-free models, ODEs encode flows in an abstract 2D phase space. The two geometric variables are named arbitrarily y and z:
where parameter ${y}_{0}$ (resp. ${y}_{1}$ and ${y}_{2}$) controls the position of the unstable fixed point (resp. the stable fixed points) of the static term along the y axis. Parameter ${y}_{0}$ is set to 0 in the symmetric version of the model used in Figure 4, Figure 4—figure supplements 1, 2, 3, 4 and 5, Figure 4—videos 1, 2, 3, 4, Figure 6, Figure 7, Figure 7—figure supplement 1 and Figure 7—video 1, as well as in the asymmetric versions of Figure 6—figure supplement 1 and Figure 7—video 1. In Figure 6, parameter ${y}_{0}$ is set to 0.05 and 0.1 to model different levels of asymmetry in the basins of attraction. In Figure 7—figure supplement 1, parameter ${y}_{0}$ is set to 0.02 for Model 1 and 0.05 for Model 2 to obtain a similar level of asymmetry in the final pattern generated by the two models. We set ${y}_{1}=1$ and ${y}_{2}=1$ for all versions of the gene-free models, except for the asymmetric versions used to generate the results of Figure 6—figure supplement 1 and Figure 7—video 1. In the former, the stable fixed points of the static module are placed outside the region delimited by the limit cycle of the dynamic module by setting ${y}_{1}=2$ and ${y}_{2}=2.5$, while in the latter, we set ${y}_{1}=1.75$ and ${y}_{2}=1$.
To obtain a supercritical Hopf bifurcation with the gene-free model, we followed an approach similar to the one used for the 3-gene model. We reasoned that the sum of the weights of the dynamic and static modules should become smaller than a degradation-like term for values of g around 0.5. For this reason, an ‘intermediate term’ $I(P)={[z\mathit{\hspace{1em}}y]}^{T}$ is introduced in the ODE. The intermediate term is weighted by the function ${\mathrm{\Theta}}_{I}(g)$. Equation 1 of the main text thus becomes:
Recall that in a given cell, only the dynamic module should be present at the beginning of the simulation, when $g=1$. Similarly, only the static module should be present at the end of the simulation, when $g=0$. Therefore, we set the weight of the intermediate module equal to $g\text{}(1-g)$, which is zero at both $g=1$ and $g=0$. Since this weight is of order two in g, we make the weights of the dynamic and static modules of order three in g to ensure that they become smaller than the weight of the intermediate term for g around 0.5. To obtain subcritical Hopf bifurcations with the gene-free model, we used a slightly different dynamic module:
Appendix 1—table 3 lists the weights used for all gene-free models: Model 1 used to generate the results of Figure 4—figure supplements 1, 2 and 5, Figure 4—video 1, Figure 6, Figure 6—figure supplement 1, Figure 7—figure supplements 1 and 2, and Figure 7—video 1, Model 2 used to generate the results of Figure 4, Figure 4—figure supplements 1 and 5, Figure 4—video 2, Figure 6, Figure 6—figure supplement 1, Figure 7, Figure 7—figure supplements 1 and 2, and Figure 7—video 1, Model 3 used to generate the results of Figure 4—figure supplements 3 and 5, Figure 4—video 3 and Figure 7—figure supplement 2, and Model 4 used to generate the results of Figure 4—figure supplements 4 and 5, Figure 4—video 4 and Figure 7—figure supplement 2.
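To make the gene-free construction concrete, the following Python sketch combines an illustrative limit-cycle (Hopf-like normal form) dynamic module with a bistable static module under an Equation-1-style superposition. The functional forms and the simple linear weights $g$ and $1-g$ are stand-ins for illustration, not the exact modules and weights of Appendix 1—table 3:

```python
import numpy as np

def dynamic_module(P):
    """Illustrative limit-cycle flow (Hopf-like normal form): attracts
    trajectories to the unit circle and rotates them at constant speed."""
    y, z = P
    r2 = y * y + z * z
    return np.array([y * (1 - r2) - z, z * (1 - r2) + y])

def static_module(P, y0=0.0):
    """Illustrative bistable flow along y: stable fixed points at y = +1
    and y = -1, unstable fixed point at y = y0; z relaxes to 0."""
    y, z = P
    return np.array([-(y - y0) * (y - 1) * (y + 1), -z])

def flow(P, g):
    """Superposition of the two modules, weighted by the control
    parameter g (dynamic regime at g = 1, static regime at g = 0)."""
    return g * dynamic_module(P) + (1 - g) * static_module(P)
```

At $g=1$ the point $(1, 0)$ sits on the limit cycle and the flow is purely rotational; at $g=0$ the same point is a stable fixed point of the static module.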
2.3 Infinite-period scenarios of Figure 1 and Figure 7
The infinite-period scenario of Figure 1B–F is a simplified version of the model of the appendix of Palmeirim et al., 1997. The dynamics of the phase of the oscillators are modeled directly using the following ODE:
The infinite-period scenario of Figure 7A–E is the 1D model of coupled oscillators from Morelli et al., 2009. In brief, the dynamics of the phase of the oscillators are described by the following ODE:
where $\epsilon$ represents the coupling strength between a cell and its two nearest neighbors, a is the average cell diameter (cd), and τ is the time delay in the coupling. The spatiotemporal profile of the frequency of the oscillators $\omega (x,t)$ is given by the following formula:
where ${\omega}_{\mathrm{\infty}}$ represents the characteristic intrinsic frequency of the oscillators, v is the speed at which the spatial frequency profile moves along the posterior direction, and σ controls the spatial steepness of the frequency profile. Appendix 1—table 4 lists the parameter values used to generate the results of Figure 7A–E. See Morelli et al., 2009 for more details.
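A delayed nearest-neighbour phase model of this kind can be sketched in Python with a history buffer for the delayed phases. The sigmoid frequency profile below is an illustrative stand-in for $\omega(x,t)$ (same ingredients: ${\omega}_{\mathrm{\infty}}$, speed v, steepness σ), not the paper's exact formula, and all parameter values are placeholders:

```python
import numpy as np

def simulate_phases(n_cells=50, n_steps=2000, dt=0.01,
                    eps=0.5, tau=0.2, w_inf=2.0, v=0.1, sigma=0.2, a=1.0):
    """Euler integration of delayed nearest-neighbour phase coupling:
    dphi_i/dt = w(x_i, t) + (eps/2) * sum_j sin(phi_j(t - tau) - phi_i(t)).
    Phases at the last `delay` steps are kept in a rolling history buffer
    (flat zero history is assumed before t = 0)."""
    delay = int(round(tau / dt))
    x = np.arange(n_cells) * a
    hist = np.zeros((delay + 1, n_cells))  # hist[-1] = current phases
    for step in range(n_steps):
        t = step * dt
        phi = hist[-1]
        phi_del = hist[0]  # phases delay steps in the past
        # assumed moving sigmoid frequency profile, high in the posterior
        w = w_inf / (1.0 + np.exp(-(x - v * t) / sigma))
        coupling = np.zeros(n_cells)
        coupling[1:] += np.sin(phi_del[:-1] - phi[1:])   # left neighbour
        coupling[:-1] += np.sin(phi_del[1:] - phi[:-1])  # right neighbour
        dphi = w + 0.5 * eps * coupling
        hist = np.vstack([hist[1:], phi + dt * dphi])
    return hist[-1]
```

Edge cells only receive one neighbour contribution, which is one simple boundary choice among several possible ones.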
2.4 Hopf scenario of Figure 1
The Hopf scenario of Figure 1G–K is the cellautonomous model evolved in silico in François et al., 2007. The model describes the dynamics of two proteins, the effector protein $E$ and the repressor protein $R$, under the control of morphogen g via ODEs with time delays:
A subscript on a closing parenthesis indicates the time at which the expression inside the parentheses is evaluated. Expressions without such a subscripted parenthesis are evaluated at time t. The values of all parameters are given in Appendix 1—tables 5 and 6.
2.5 Van der Pol oscillator of Figure 7
The schematic of an ‘asymmetric wave’ profile shown in Figure 7D is generated with the following ODEs describing a Van der Pol oscillator:
where parameter μ is set to 2.5. The schematic of a ‘symmetric wave’ profile shown in Figure 7D is a sine function.
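A Van der Pol oscillator in one common two-variable convention (the paper's exact variable naming may differ) can be integrated with a plain Euler scheme; the asymmetry of the resulting relaxation-type waveform is what Figure 7D illustrates:

```python
import numpy as np

def van_der_pol(mu=2.5, y0=0.1, z0=0.0, dt=1e-3, n_steps=50_000):
    """Euler integration of dy/dt = z, dz/dt = mu*(1 - y**2)*z - y.
    Returns the time series of y, which settles onto a limit cycle
    of amplitude close to 2 (a classic Van der Pol property)."""
    y, z = y0, z0
    ys = np.empty(n_steps)
    for i in range(n_steps):
        # both right-hand sides use the values from the previous step
        y, z = y + dt * z, z + dt * (mu * (1 - y * y) * z - y)
        ys[i] = y
    return ys

ys = van_der_pol()
```

For μ = 2.5 the rising and falling parts of each cycle have clearly different durations, unlike the sine used for the symmetric profile.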
3 Spatiotemporal profile of the control parameter for each model
For all models except the model for Tribolium segmentation and the infinite-period scenario of Figure 7A–E, the following function is used to describe the spatiotemporal profile of the input g, which is treated either as the concentration of a morphogen in the gene network models, or as an abstract control parameter in the gene-free models:
where parameter s controls the steepness of the gradient and v represents the speed at which the gradient moves along the anteroposterior axis. Parameter ${x}_{\text{osc}}$ allows a few oscillations to take place inside the first simulated cell before g starts decreasing. Note that the position vector x is normalized in all our simulations, such that positions are constrained between 0 and 1. Appendix 1—table 7 lists the values of the parameters used for the gradients of all models (except the model for Tribolium segmentation): the gradients of the infinite-period scenario and of the Hopf scenario used to generate the results of Figure 1B–F and Figure 1G–K, respectively, the shallow gradient used in the 3-gene models of Figure 2, Figure 2—figure supplements 1 and 2, Figure 3, Figure 3—figure supplement 1, Figure 5, Figure 5—figure supplements 1 and 2, and Figure 7—figure supplement 2, the steep gradient used in the 3-gene models of Figure 5 and Figure 5—figure supplements 1 and 2, and the gradients used in the gene-free models of Figure 4, Figure 4—figure supplements 2, 3, 4 and 5, Figure 6, Figure 6—figure supplement 1, Figure 7—figure supplements 1 and 2 and Figure 7—video 1.
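One plausible realization of such a moving gradient is a sigmoid front that regresses at speed v; the exact formula used in the paper is not reproduced here, so the form and offsets below are assumptions for illustration only:

```python
import numpy as np

def gradient(x, t, s=10.0, v=0.1, x_osc=0.2):
    """Assumed sigmoid realization of the input g(x, t): g is close to 1
    posterior to a front that moves at speed v along the axis, with
    steepness s. The offset x_osc keeps g = 1 in the first cell for a
    few oscillations before the front reaches it."""
    return 1.0 / (1.0 + np.exp(-s * (x - (v * t - x_osc))))
```

With these placeholder values, a cell at x = 0.5 starts with g close to 1 and ends with g close to 0 once the front has passed, reproducing the qualitative behavior described above.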
In the model for Tribolium segmentation, the role of input g is played by the maternal gene $cad$. The dynamics of $cad$ is modelled with a Hill function:
where the time dependencies of parameters ${x}^{*}(t)$ and $n(t)$ encode respectively the regression of the morphogen gradient along the anteroposterior axis, and the gradual increase in the steepness of the morphogen gradient:
4 Integration schemes
4.1 Euler algorithm for deterministic simulations
Equation 1 of the main text can be integrated via the Euler algorithm to obtain a time series representing the deterministic dynamics of vector $P$:
The Euler algorithm, which is equivalent to approximating the temporal derivative of $P$ by a first-order finite difference, was used to perform deterministic simulations of all versions of the 3-gene models (Figure 2, Figure 2—figure supplements 1 and 2, Figure 5, Figure 5—figure supplements 1 and 2 and Figure 7—figure supplement 2). A similar version of this algorithm that includes the intermediate term was used for deterministic simulations of the gene-free models (Figure 4, Figure 4—figure supplements 2, 3 and 4, Figure 6, Figure 6—figure supplement 1, Figure 7, Figure 7—figure supplements 1 and 2 and Figure 7—video 1). The Euler algorithm was also used to perform simulations of the infinite-period and Hopf scenarios (Figure 1 and Figure 7). On the other hand, deterministic simulations of the model for Tribolium segmentation were carried out via the lsoda integrator from the scipy library in Python (Figure 1—figure supplement 2).
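The Euler update of Equation 1 can be sketched generically in Python, with the modules and weights passed in as caller-supplied functions (placeholders for the paper's actual $D$, $S$, ${\mathrm{\Theta}}_{D}$ and ${\mathrm{\Theta}}_{S}$), and linear degradation $C(P)=P$:

```python
import numpy as np

def euler_integrate(P0, g_of_t, D, S, theta_D, theta_S,
                    dt=0.01, n_steps=1000):
    """First-order (Euler) integration of Equation 1:
    P <- P + dt * (Theta_D(g) * D(P) + Theta_S(g) * S(P) - P).
    g_of_t gives the control parameter at each time; D, S, theta_D and
    theta_S are placeholder callables standing in for the paper's modules."""
    P = np.asarray(P0, dtype=float)
    traj = np.empty((n_steps, P.size))
    for i in range(n_steps):
        g = g_of_t(i * dt)
        P = P + dt * (theta_D(g) * D(P) + theta_S(g) * S(P) - P)
        traj[i] = P
    return traj
```

For example, with constant unit production the scheme relaxes every component toward 1, as expected from $dP/dt = 1 - P$.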
4.2 Langevin equation for stochastic simulations of the 3-gene models
The stochastic nature of chemical reactions, due at least partly to the finite number of molecules involved in these reactions, introduces fluctuations in protein concentrations in single cells. To generate the results of Figure 3 and Figure 3—figure supplement 1, noise was introduced in the 3-gene models in a chemically realistic and mathematically rigorous way by following the method of Gillespie, 2001. In the generic formulation of the present problem, there are $N$ molecular species ${S}_{i}$, $i=1,\mathrm{\dots},N$, that can interact through $M$ different reactions ${R}_{j}$, $j=1,\mathrm{\dots},M$. Let ${X}_{i}(t)$ represent the number of ${S}_{i}$ molecules at time t. Then, the vector $X(t)\equiv [{X}_{1}(t)\phantom{\rule{1em}{0ex}}...\phantom{\rule{1em}{0ex}}{X}_{N}(t)]$ represents the state of the whole system of $N$ molecular species at time t. For each reaction ${R}_{j}$, a propensity function ${a}_{j}$ is defined such that if the system is in state $X$ at time t, then ${a}_{j}(X)\text{}dt$ is the probability that one ${R}_{j}$ reaction will occur in the next infinitesimal time interval $dt$, i.e. between t and $t+dt$. For each reaction ${R}_{j}$, a state-change vector ${\nu}_{j}$ is defined such that its ith component ${\nu}_{ji}$ represents the change in the number of ${S}_{i}$ molecules produced by one ${R}_{j}$ reaction. Once the $M$ propensity functions and state-change vectors are defined, the time evolution of the state vector $X(t)$ is found via the $N$ deterministic reaction rate equations:
The numerical integration of these rate equations can be performed via the Euler algorithm:
The stochastic form of this simulation algorithm is given by the chemical Langevin equation:
where ${N}_{1}(t)\text{},...\text{},{N}_{M}(t)$ are $M$ independent Gaussian random variables with mean and variance equal to 0 and 1, respectively, and that are not correlated in time. In the 3-gene models, the role of vector $X$ is played by $P$. Note that rescaling the numbers of proteins ${X}_{i}$ by constant factors corresponds to multiplying both sides of Equations 18, 19, 20 by that constant factor (as long as the state-change vectors ${\nu}_{j}$ are also rescaled). Therefore, Equations 18, 19, 20 are still valid when simulating protein concentrations scaled from 0 to 1 instead of absolute numbers of proteins. Furthermore, the reactions of the 3-gene models are encoded in the protein production and degradation terms. The propensities of the protein production and degradation terms are respectively ${\mathrm{\Theta}}_{D}(g)\text{}D(P)+{\mathrm{\Theta}}_{S}(g)\text{}S(P)$ and $P$. Equation 19 thus becomes Equation 17, and Equation 20 can be rewritten as the following expression:
where ${N}^{\text{prod}}(t)=[{N}_{1}^{\text{prod}}(t),\text{}{N}_{2}^{\text{prod}}(t),\text{}{N}_{3}^{\text{prod}}(t)]$ and ${N}^{\text{deg}}(t)=[{N}_{1}^{\text{deg}}(t),\text{}{N}_{2}^{\text{deg}}(t),\text{}{N}_{3}^{\text{deg}}(t)]$ are 2 vectors, each containing 3 independent Gaussian random variables with mean 0 and variance 1. This equation can be simplified by leveraging the fact that the sum of Gaussian random variables with mean 0 and different variances is equal to a single Gaussian random variable with mean 0 and a variance equal to the sum of the variances:
where $N(t)=[{N}_{1}(t),\text{}{N}_{2}(t),\text{}{N}_{3}(t)]$ is a vector containing 3 independent Gaussian random variables with mean 0 and variance 1. Note that a different independent random variable is used for each protein, since the production term of each protein is due to a different combination of repression interactions. To control the level of noise, a parameter $\mathrm{\Omega}$ is introduced in the previous equation such that increasing $\mathrm{\Omega}$ decreases the level of noise:
Since noise arises at least partly from the stochastic nature of single reactions between finite numbers of proteins, increasing the concentration of proteins is expected to buffer the intrinsic chemical noise. The following derivation shows that parameter $\mathrm{\Omega}$ can be interpreted as the typical concentration of proteins in the system, so that increasing the protein concentration corresponds to increasing the value of parameter $\mathrm{\Omega}$. First, let’s write the stochastic integration algorithm for protein $A$ with the maximal production rate ${\rho}_{A}$ and the degradation rate ${\delta}_{A}$ made explicit:
where a ‘+’ superscript on a protein concentration indicates that this variable is evaluated at time $t+dt$, and the absence of a superscript indicates that it is evaluated at time t. Multiplying both sides of the equation by $\mathrm{\Omega}$ leads to the following expression:
Now, let’s rescale all quantities that have the units of protein concentration by a factor of $\mathrm{\Omega}$. To achieve this, we define the rescaled variables ${A}^{\ast}=\mathrm{\Omega}\,A$, ${B}^{\ast}=\mathrm{\Omega}\,B$ and ${C}^{\ast}=\mathrm{\Omega}\,C$, as well as the rescaled parameters ${\rho}_{{A}^{\ast}}=\mathrm{\Omega}\,{\rho}_{A}$, ${K}_{D}^{{B}^{\ast}\dashv {A}^{\ast}}=\mathrm{\Omega}\,{K}_{D}^{B\dashv A}$, ${K}_{S}^{{B}^{\ast}\dashv {A}^{\ast}}=\mathrm{\Omega}\,{K}_{S}^{B\dashv A}$ and ${K}_{S}^{{C}^{\ast}\dashv {A}^{\ast}}=\mathrm{\Omega}\,{K}_{S}^{C\dashv A}$:
A similar procedure can be followed for proteins $B$ and $C$. Therefore, multiplying the stochastic term of the Langevin equation for all proteins by $1/\sqrt{\mathrm{\Omega}}$ is equivalent to rescaling all variables and parameters that have the units of a protein concentration by a factor of $\mathrm{\Omega}$. Since we set the maximal production rates and the degradation rates of all proteins to 1, the typical concentration of proteins $A$, $B$ and $C$ is normalized to 1. Rescaling all protein concentrations and all parameters with units of protein concentration by a factor of $\mathrm{\Omega}$ thus corresponds to setting the typical concentration of proteins to $\mathrm{\Omega}$. In conclusion, parameter $\mathrm{\Omega}$ of Equation 23 indeed corresponds to the typical concentration of proteins.
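The resulting update rule can be sketched in Python as a single Euler–Maruyama step. This is an illustration of the scheme described above (drift = production − degradation, one Gaussian noise term whose variance is the sum of the two propensities, scaled down by Ω); in the actual 3-gene models the production propensity is the weighted sum of the modules:

```python
import numpy as np

def langevin_step(P, prod, dt, omega, rng):
    """One Euler-Maruyama step of the chemical Langevin equation with
    production propensity `prod` and linear degradation propensity P.
    The two noise sources are combined into a single Gaussian whose
    variance is the sum of the propensities, divided by omega (the
    typical protein number)."""
    drift = prod - P
    var = np.maximum(prod + P, 0.0)  # propensities should be non-negative
    noise = rng.standard_normal(P.shape)
    return P + dt * drift + np.sqrt(dt * var / omega) * noise
```

At large Ω the noise term vanishes and the step reduces to the deterministic Euler update, consistent with the interpretation of Ω as the typical protein concentration.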
4.3 Cell-to-cell coupling in the 3-gene models
A strategy that a cell can use to fight the intrinsic noise in protein concentrations is to evaluate the protein expression state of its neighbors and change its own protein expression state accordingly. In the stochastic simulations of the 3-gene models, cell-to-cell communication is modelled via a diffusion term included in the differential equations describing the dynamics of the set of protein concentrations. The higher the concentration of a given protein in a given simulated cell, the more this protein diffuses to neighboring simulated cells. Diffusion thus models the process of adjusting the protein concentration of a given cell according to the protein concentrations of surrounding cells. The dynamics of vector $P$ in the 3-gene models is therefore given by the following differential equation:
where the diffusion constant $D$ controls the strength of cell-to-cell coupling. The complete stochastic simulation algorithm for the 3-gene models thus becomes:
for $i=1,2,3$. Note that diffusion is not included in the stochastic term, since diffusion of proteins is not a reaction in itself. In the simulations, the second spatial derivative is approximated by a secondorder central finite difference with reflective boundaries.
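The discrete Laplacian with reflective boundaries mentioned above can be sketched in Python for a 1D row of cells (applied independently to each protein); mirroring the edge cells into ghost cells is one standard way to impose zero flux at the boundaries:

```python
import numpy as np

def laplacian_reflective(P):
    """Second-order central finite difference of a 1D profile (one value
    per cell), with reflective (zero-flux) boundaries implemented by
    mirroring the edge cells into ghost cells."""
    padded = np.concatenate([P[:1], P, P[-1:]])
    return padded[2:] - 2 * padded[1:-1] + padded[:-2]
```

With this boundary choice the Laplacian of a uniform profile is zero and the total amount of protein is conserved (the Laplacian sums to zero), as expected for pure diffusion.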
4.4 Stochastic simulations of the gene-free models
Since the gene-free models simulate the dynamics of abstract variables that do not explicitly represent protein concentrations, the variance of the noise is held independent of the state of the system. The stochastic algorithm used to generate the results of Figure 4—figure supplement 5 is therefore the following:
where $i=1,2$, and $N(t)=[{N}_{1}(t),{N}_{2}(t)]$ is a vector containing 2 independent Gaussian random variables with mean 0 and variance 1. Parameter $\mathrm{\Omega}$ is still included to control the level of noise, but it cannot be interpreted as the typical concentration of proteins in the system since the gene-free models do not explicitly simulate protein interactions.
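The contrast with the chemical Langevin scheme is that the noise here is additive (state-independent). A minimal Python sketch of one such step, with the deterministic flow passed in as a precomputed value:

```python
import numpy as np

def genefree_step(P, flow_value, dt, omega, rng):
    """Euler-Maruyama step with state-independent (additive) noise:
    the variance no longer tracks reaction propensities, because y and z
    are abstract variables rather than protein numbers. flow_value is the
    deterministic right-hand side evaluated at the current state."""
    return P + dt * flow_value + np.sqrt(dt / omega) * rng.standard_normal(2)
```

As before, larger Ω means weaker noise, but here Ω is purely a noise-level knob with no concentration interpretation.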
5 Mathematical formula for the mutual information
In deterministic simulations, the initial phase of the genetic oscillation inside a given cell determines in which part of the pattern this cell will end up. This is not necessarily the case in stochastic simulations. To quantify the robustness to noise of a given model for specific values of parameter $\mathrm{\Omega}$ (and of the diffusion constant $D$ in the case of the 3-gene models), we need a metric that measures how accurately the initial phase of the genetic oscillations inside a cell predicts the region of the pattern in which this cell will end up. The specific metric used in Figure 3, Figure 3—figure supplement 1, Figure 4—figure supplement 5 and Figure 5—figure supplement 2 is the mutual information between the initial phase of the oscillator and the final protein expression state of the simulated cells. The mutual information $I(x,y)$ between two discrete variables x and y is given by the following expression:
where $X$ and $Y$ are the sets of possible values for x and y, respectively. Intuitively, the mutual information between two variables quantifies the amount of information obtained about the value of the first variable by knowing the value of the second variable (and vice versa). If the logarithm is in base 2, the units of the mutual information are bits. To measure how precisely the phase of the oscillator is read to form the final pattern, variable x is set to the phase of the oscillation in protein expression at the beginning of the simulation, ${\varphi}_{i}$, and variable y is set to the protein expression state at the end of the simulation, ${P}_{f}$:

$$I({\varphi}_{i},{P}_{f})=\sum_{{P}_{f}}\sum_{{\varphi}_{i}}p({\varphi}_{i},{P}_{f})\,\mathrm{log}\left(\frac{p({\varphi}_{i},{P}_{f})}{p({\varphi}_{i})\,p({P}_{f})}\right)\qquad(31)$$
$$\phantom{I({\varphi}_{i},{P}_{f})}=\sum_{{P}_{f}}\sum_{{\varphi}_{i}}p({P}_{f}|{\varphi}_{i})\,p({\varphi}_{i})\,\mathrm{log}\left(\frac{p({P}_{f}|{\varphi}_{i})}{p({P}_{f})}\right)\qquad(32)$$
$$\phantom{I({\varphi}_{i},{P}_{f})}=\sum_{{P}_{f}}\sum_{{\varphi}_{i}}p({P}_{f}|{\varphi}_{i})\,p({\varphi}_{i})\,\mathrm{log}\left(\frac{p({P}_{f}|{\varphi}_{i})}{\sum_{{\varphi}_{i}}p({P}_{f}|{\varphi}_{i})\,p({\varphi}_{i})}\right)\qquad(33)$$
To get these equalities, the fact that $p(x,y)=p(x|y)\,p(y)$ for any two variables x and y was used to get rid of the joint probability $p({\varphi}_{i},{P}_{f})$, which is not straightforward to evaluate directly. Similarly, the fact that $p(y)=\sum _{x\in X}p(x,y)=\sum _{x\in X}p(y|x)\,p(x)$ for any two variables x and y was used to get rid of $p({P}_{f})$, which is less easy to compute than $p({\varphi}_{i})$. Indeed, ${\varphi}_{i}$ is sampled uniformly in the simulations of the 3-gene and gene-free models, since the speed of regression of the input g is constant throughout the simulations. In the 3-gene models, the different phases ${\varphi}_{i}$ are defined as the different states of protein expression along the oscillation cycle generated by the dynamic module ($g=1$). A uniform sample of ${\varphi}_{i}$ is obtained by sampling this oscillation cycle at constant time intervals for a total time length of one period. In the gene-free models, the different phases ${\varphi}_{i}$ are defined as the different sets of (y, z) values along the oscillation cycle generated by the dynamic module ($g=1$). Since the oscillations are on the unit circle (centered at the origin) and have a constant speed along the cycle, sampling uniformly the angles from the positive y axis (starting at 0 and stopping at $2\pi $) generates a uniform sample of ${\varphi}_{i}$.
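The final expression above (Equation 33) computes the mutual information from the conditional distribution $p({P}_{f}|{\varphi}_{i})$ and the uniform marginal $p({\varphi}_{i})$, which maps directly to a few lines of Python; the array layout below is an assumption for illustration:

```python
import numpy as np

def mutual_information(p_cond, p_phi):
    """Mutual information I(phi_i; P_f) in bits, following Equation 33:
    p_cond[k, f] = p(P_f = f | phi_i = k) (rows sum to 1) and
    p_phi[k] = p(phi_i = k) is the marginal of the initial phase."""
    p_joint = p_cond * p_phi[:, None]  # p(phi_i, P_f)
    p_Pf = p_joint.sum(axis=0)         # p(P_f) = sum over phases
    with np.errstate(divide="ignore", invalid="ignore"):
        # terms with zero joint probability contribute 0 to the sum
        ratio = np.where(p_joint > 0, p_cond / p_Pf, 1.0)
    return float(np.sum(p_joint * np.log2(ratio)))
```

A perfectly deterministic phase-to-fate mapping of 4 equiprobable phases onto 4 fates gives 2 bits, while an uninformative mapping gives 0 bits.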
6 Description of the source code
All code is written in Python 3 (except for two Mathematica notebooks). Commented Jupyter notebooks can be found on GitHub at the following address: https://github.com/laurentjutrasdube/Dual-Regime_Geometry_for_Embryonic_Patterning. This repository also contains folders with the source data files, as well as the source code used to generate the data files.
3gene_det.ipynb This notebook performs deterministic simulations of the symmetric 3-gene Models 1 and 2, and deterministic simulations of the 3-gene Models 1 and 2 with Hill functions for the weights of the dynamic and static modules. It also performs a bifurcation analysis of these models using the data found in the XPPAUTO_data folder, which also contains the .ode files used to generate the data with the XPP AUTO software (Ermentrout, 2008). Figure 2, Figure 2—figure supplements 1 and 2, and Figure 7—figure supplement 2 show the results obtained with this notebook.
3gene_stoch.ipynb This notebook performs stochastic simulations of the symmetric 3-gene Models 1 and 2, and stochastic simulations of the 3-gene Models 1 and 2 with Hill functions for the weights of the dynamic and static modules. It also generates plots of the mutual information using the data found in the Mutual_info_data folder, which also contains the Python codes used to generate the data. Figure 3 and Figure 3—figure supplement 1 show the results obtained with this notebook.
3gene_asym.ipynb This notebook performs deterministic simulations of the asymmetric 3-gene Models 1 and 2. It also performs a bifurcation analysis of these models and generates plots of the mutual information using the data found in the XPPAUTO_data and Mutual_info_data folders, respectively. Figure 5 and Figure 5—figure supplements 1 and 2 show the results obtained with this notebook.
Genefree_det.ipynb This notebook performs deterministic simulations of the symmetric gene-free Models 1, 2, 3 and 4. It also performs a bifurcation analysis of these models and generates flow plots using the data found in the XPPAUTO_data and Mathematica_data folders, respectively. Figure 4, Figure 4—figure supplements 1, 2, 3 and 4, Figure 4—videos 1, 2, 3 and 4, and Figure 7—figure supplement 2 show the results obtained with this notebook.
Genefree_stoch.ipynb This notebook performs stochastic simulations of the symmetric gene-free Models 1, 2, 3 and 4. It also generates the mutual information plots using the data found in the Mutual_info_data folder. Figure 4—figure supplement 5 shows the results obtained with this notebook.
Genefree_asym.ipynb This notebook performs deterministic simulations of the asymmetric gene-free Models 1 and 2. It also performs a bifurcation analysis of these models using the data found in the XPPAUTO_data folder. Moreover, it generates plots of the flow and of the spatial wave profiles. Figure 6, Figure 6—figure supplement 1, Figure 7 and Figure 7—figure supplement 1 show the results obtained with this notebook.
Hopf_scenario_Fig1.ipynb This notebook performs deterministic simulations of the gene network model evolved in silico in François et al., 2007. Results are shown in Figure 1. It also performs a bifurcation analysis of this model, shown in Figure 1—figure supplement 1.
Infiniteperiod_scenario_Fig1.ipynb This notebook performs deterministic simulations of the infinite-period model of Figure 1, which is a simplified version of the model in the appendix of Palmeirim et al., 1997.
Infiniteperiod_scenario_Fig7.ipynb This notebook performs deterministic simulations of the infinite-period model of Figure 7, which is adapted from Morelli et al., 2009.
Tribolium_model.ipynb This notebook performs deterministic simulations of the model for Tribolium segmentation from Zhu et al., 2017. It also generates flow plots and computes the speed of the cells in phase space. Figure 1—figure supplement 2 shows the results obtained with this notebook.
Data availability
https://github.com/laurentjutrasdube/Dual-Regime_Geometry_for_Embryonic_Patterning (copy archived at https://github.com/elifesciences-publications/Dual-Regime_Geometry_for_Embryonic_Patterning).

GitHub ID laurentjutrasdube/Dual-Regime_Geometry_for_Embryonic_Patterning. Geometric models for robust encoding of dynamical information into embryonic patterns.
References

Collective modes of coupled phase oscillators with delayed coupling. Physical Review Letters 108:204101. https://doi.org/10.1103/PhysRevLett.108.204101

A clock and wavefront model for control of the number of repeated structures during animal morphogenesis. Journal of Theoretical Biology 58:455–476. https://doi.org/10.1016/S0022-5193(76)80131-2

Deriving structure from evolution: metazoan segmentation. Molecular Systems Biology 3:154. https://doi.org/10.1038/msb4100192

Phenotypic models of evolution and development: geometry as destiny. Current Opinion in Genetics & Development 22:627–633. https://doi.org/10.1016/j.gde.2012.09.001

Approximate accelerated stochastic simulation of chemically reacting systems. The Journal of Chemical Physics 115:1716–1733. https://doi.org/10.1063/1.1378322

The vertebrate segmentation clock. Current Opinion in Genetics & Development 14:407–414. https://doi.org/10.1016/j.gde.2004.06.014

Bifurcation dynamics in lineage-commitment in bipotent progenitor cells. Developmental Biology 305:695–713. https://doi.org/10.1016/j.ydbio.2007.02.036

Bio-attractors: dynamical systems theory and the evolution of regulatory processes. The Journal of Physiology 592:2267–2281. https://doi.org/10.1113/jphysiol.2014.272385

Continuum theory of gene expression waves during vertebrate segmentation. New Journal of Physics 17:093042. https://doi.org/10.1088/1367-2630/17/9/093042

Evolutionary plasticity of segmentation clock networks. Development 138:2783–2792. https://doi.org/10.1242/dev.063834

Book: Models of segmentation. In: Bellairs R, Ede DA, Lash JW, editors. Somites in Developing Embryos. Boston: Springer. pp. 179–189. https://doi.org/10.1007/978-1-4899-2013-3

Precision of genetic oscillators and clocks. Physical Review Letters 98:228101. https://doi.org/10.1103/PhysRevLett.98.228101

Modelling Delta-Notch perturbations during zebrafish somitogenesis. Developmental Biology 373:407–421. https://doi.org/10.1016/j.ydbio.2012.10.014

Book: Catastrophe Theory and Its Applications. London: Dover Publications. https://doi.org/10.1002/bs.3830230411

The making of the somite: molecular events in vertebrate segmentation. Nature Reviews Genetics 2:835–845. https://doi.org/10.1038/35098552

Identification of bifurcations from observations of noisy biological oscillators. Biophysical Journal 111:798–812. https://doi.org/10.1016/j.bpj.2016.07.027

Mesp2 initiates somite segmentation through the Notch signalling pathway. Nature Genetics 25:390–396. https://doi.org/10.1038/78062

Critical timing without a timer for embryonic development. Biophysical Journal 109:1724–1734. https://doi.org/10.1016/j.bpj.2015.08.024

Book: Developmental Plasticity and Evolution. Oxford University Press. https://doi.org/10.1038/hdy.2015.14
Article and author information
Author details
Funding
Simons Foundation (MMLS)
 Laurent Jutras-Dubé
 Paul François
Natural Sciences and Engineering Research Council of Canada (CREATE in Complex Dynamics)
 Laurent Jutras-Dubé
Fonds de Recherche du Québec  Nature et Technologies (B2)
 Laurent Jutras-Dubé
Deutsche Forschungsgemeinschaft (EL 870/2-1)
 Ezzat El-Sherif
Natural Sciences and Engineering Research Council of Canada (Discovery Grant)
 Paul François
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank members of the François and El-Sherif groups for insightful discussions, as well as referees for their insightful comments. This work was supported by a CREATE: Complex Dynamics fellowship and a FRQNT B2 scholarship to L Jutras-Dubé, a DFG grant EL 870/2–1 to E El-Sherif, a NSERC Discovery Grant and a Simons Investigator in Mathematical Modelling of Living Systems to P François.
Version history
 Received: February 5, 2020
 Accepted: August 7, 2020
 Accepted Manuscript published: August 10, 2020 (version 1)
 Version of Record published: September 3, 2020 (version 2)
 Version of Record updated: May 5, 2022 (version 3)
Copyright
© 2020, Jutras-Dubé et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.