High-throughput mathematical analysis identifies Turing networks for patterning with equally diffusing signals
Abstract
The Turing reaction-diffusion model explains how identical cells can self-organize to form spatial patterns. It has been suggested that extracellular signaling molecules with different diffusion coefficients underlie this model, but the contribution of cell-autonomous signaling components is largely unknown. We developed an automated mathematical analysis to derive a catalog of realistic Turing networks. This analysis reveals that in the presence of cell-autonomous factors, networks can form a pattern with equally diffusing signals and even for any combination of diffusion coefficients. We provide software (available at http://www.RDNets.com) to explore these networks and to constrain topologies with qualitative and quantitative experimental data. We use the software to examine the self-organizing networks that control embryonic axis specification and digit patterning. Finally, we demonstrate how existing synthetic circuits can be extended with additional feedbacks to form Turing reaction-diffusion systems. Our study offers a new theoretical framework to understand multicellular pattern formation and enables the widespread use of mathematical biology to engineer synthetic patterning systems.
https://doi.org/10.7554/eLife.14022.001

eLife digest
Developing embryos initially consist of identical cells that specialize over time to create the different parts of the adult animal. More than sixty years ago, Alan Turing proposed that this spontaneous breaking of uniformity could be controlled by two molecules that interact with each other and move by diffusion at different rates between cells. In such “reaction-diffusion” systems, the interactions between the molecules cause repeating peaks in their concentrations in different locations, which could influence how different parts of the embryo develop. However, how these hypothetical molecules relate to the genes that control embryonic development has remained largely unknown.
Marcon et al. have now developed a computational method to identify the conditions that enable periodic patterns to form spontaneously in realistic reaction-diffusion systems with mobile signaling molecules and immobile factors such as membrane-localized receptors. By computationally screening millions of biologically relevant networks, Marcon et al. found that a key requirement of classical Turing models – that the mobile signaling molecules must diffuse at different rates – does not need to be met for patterns to form. Instead, some networks can form patterns with signals that diffuse at equal rates, while others can form patterns with any combination of diffusion rates.
The computational method developed by Marcon et al. can be used to interpret the mechanisms that allow patterns to form in biological systems, such as those that control embryonic development. It can also be used to develop synthetic networks that regulate genes for the formation of tissues in particular spatial patterns.
https://doi.org/10.7554/eLife.14022.002

Introduction
How cells self-organize to form ordered structures is a central question in developmental biology (Hiscock and Megason, 2015), and identifying self-organizing mechanisms promises to provide new tools for synthetic biology and regenerative medicine (Chen and Weiss, 2005; Guye and Weiss, 2008; Isalan et al., 2008; Bansagi et al., 2011; Chau et al., 2012; Mishra et al., 2014; Schaerli et al., 2014; Wroblewska et al., 2015). More than six decades ago, Alan Turing proposed a theoretical model in which interactions between diffusible substances can break the initial symmetry of cell fields to form periodic patterns (Turing, 1952). Subsequent work from Gierer and Meinhardt postulated that such self-organizing processes require differential diffusivity between a short-range self-enhancing activator and a feedback-induced long-range inhibitor (Gierer and Meinhardt, 1972). Numerous studies have proposed models based on these concepts to explain pattern formation during development, including skin appendage specification (Sick et al., 2006; Harris et al., 2005), lung branching (Menshykau et al., 2012; Hagiwara et al., 2015), tooth development (Salazar-Ciudad and Jernvall, 2010), rugae formation (Economou et al., 2012), and digit patterning (Sheth et al., 2012; Raspopovic et al., 2014). However, the evidence in support of specific activator-inhibitor pairs has been limited, and few studies have provided experimental support for the differential diffusivity of activators and inhibitors (Kondo and Miura, 2010; Marcon and Sharpe, 2012; Müller et al., 2012).
Pattern formation processes are regulated by the interactions between secreted signaling molecules and their receptors that activate complex cell-autonomous signaling events. However, since most reaction-diffusion models have been reduced to abstract networks of two diffusible reactants, the influence of immobile cell-autonomous factors on reaction-diffusion patterning is largely unknown. Previous theoretical studies on selected network topologies have challenged the differential diffusivity requirements and indicated that in the presence of an immobile substance, patterns can form for a wider range of reaction and diffusion parameters (Othmer and Scriven, 1969; Pearson and Horsthemke, 1989; Pearson, 1992; Pearson and Bruno, 1992; Rauch and Millonas, 2004; Levine and Rappel, 2005; Miura et al., 2009; Raspopovic et al., 2014; Korvasova et al., 2015). These and other studies (Meinhardt, 2004; Werner et al., 2015) suggest that extending models beyond abstract two-node systems can reveal different pattern formation requirements and may uncover new biologically relevant network designs. However, due to the complex mathematical analysis required to identify and understand such systems, extending reaction-diffusion models to more realistic signaling networks has been challenging, and the main assumption in the field has remained that complex models should reduce to simple systems that require an effective differential diffusivity.
Here, we developed the freely available and user-friendly software RDNets (available at http://www.RDNets.com) to perform a high-throughput mathematical analysis of complex reaction-diffusion networks with non-diffusible components. In comparison to previous numerical studies, this method guarantees completeness, reproducibility, and detailed mechanistic insights into the principles underlying pattern formation. We used RDNets to build a comprehensive catalog of minimal three-node and four-node reaction-diffusion networks that include interactions between diffusible signals and cell-autonomous factors. Our results show that reaction-diffusion systems have three types of requirements for the diffusible signals depending on the network topology: Type I networks require differential diffusivity, Type II networks allow equal diffusivities, and Type III networks allow for unconstrained diffusivity. Overall, 70% of the networks identified by our analysis are of Type II and Type III and thus do not require differential diffusivity to form a spatial pattern. This reveals that realistic reaction-diffusion systems are based on mechanisms that are fundamentally different from the concepts of short-range activation and long-range inhibition based on differential diffusivity (Gierer and Meinhardt, 1972) that have been predominant in previous models of pattern formation. Our software can be used to explore these new networks and is a unique tool to understand in vivo reaction-diffusion systems and to engineer synthetic circuits with spatial patterning capabilities.
Results
Understanding how complex gene regulatory networks control cellular behavior is a challenging problem in biology; even small networks can contain regulatory feedbacks that make system behavior difficult to predict (Le Novère, 2015). Mathematical biology has helped to identify network motifs that underlie basic behaviors such as oscillations, bistability or noise reduction (Kepler and Elston, 2001; Shen-Orr et al., 2002; Mangan and Alon, 2003), but this approach has been difficult to scale up to more complex networks and behaviors. Previous studies have overcome this obstacle by using numerical simulations to screen for topologies that implement a certain behavior (Salazar-Ciudad et al., 2000; Ma et al., 2009; Cotterell and Sharpe, 2010). However, such simulations demand large computational power, their coverage is incomplete, and they do not have the explanatory power of analytical approaches. The ideal tool to analyze the behavior of gene networks should retain the explanatory power of mathematical approaches and yet be able to comprehensively screen for network topologies and the underlying mechanistic principles.
We have developed the web-based software RDNets (http://www.RDNets.com) to derive a comprehensive catalog of minimal three-node and four-node reaction-diffusion networks and their pattern-forming conditions. Our analysis reveals that networks have different diffusivity requirements depending on the topology. RDNets can constrain candidate topologies with qualitative and quantitative experimental data, making it a convenient tool for users who aim to study developmental patterning networks or to design synthetic reaction-diffusion circuits.
Automated mathematical analysis of reaction-diffusion networks
We developed an automated linear stability analysis (Murray, 2003) to derive the pattern-forming conditions of networks with N nodes (Figure 1a, Materials and methods). Linear stability analysis determines whether a system can form a pattern by testing i) if the concentrations of the reactants are stable at steady state, and ii) if diffusion-driven instabilities arise with small perturbations. Because of its mathematical complexity, this type of analysis has been the exclusive domain of mathematicians and systems biologists (Koch and Meinhardt, 1994; Satnoianu et al., 2000; Murray, 2003; Miura and Maini, 2004), and its application beyond two-reactant models has required dedicated theoretical studies for selected networks (Othmer and Scriven, 1971; White and Gilligan, 1998; Klika et al., 2012; Korvasova et al., 2015). To generalize the analysis to networks with more than two nodes, we utilized a modern computer algebra system and developed the software pipeline RDNets that automates the algebraic calculations. Within this framework, secreted molecules such as ligands and extracellular inhibitors are represented by diffusible nodes, and cell-autonomous components such as receptors and kinases are represented by non-diffusible nodes. Our software analyzes networks with k interactions between the nodes; these interactions are represented by first-order kinetic rates, where a positive rate corresponds to an activation and a negative rate to an inhibition.
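As a concrete illustration of these two conditions, the sketch below checks them numerically for a classical two-node activator-inhibitor system. The Jacobian and diffusion values are hypothetical, chosen only for illustration; they are not parameters from this study.

```python
import numpy as np

# Hypothetical linearized two-node activator-inhibitor system:
# node 1 self-activates and promotes node 2; node 2 inhibits node 1.
J = np.array([[1.0, -1.0],
              [2.0, -1.5]])
D = np.diag([1.0, 10.0])  # assumed inhibitor/activator diffusion ratio d = 10

# Condition i): the homogeneous steady state is stable (spatial mode q = 0)
assert np.linalg.eigvals(J).real.max() < 0

# Condition ii): some spatial mode destabilizes the steady state
q2 = np.linspace(0.0, 5.0, 500)  # squared wavenumbers to scan
growth = [np.linalg.eigvals(J - s * D).real.max() for s in q2]
print(max(growth) > 0)   # True: a diffusion-driven (Turing) instability exists

# With equal diffusivities this Type I topology cannot form a pattern:
growth_eq = [np.linalg.eigvals(J - s * np.eye(2)).real.max() for s in q2]
print(max(growth_eq) < 0)  # True: no instability without differential diffusion
```

Scanning the maximal eigenvalue growth rate over squared wavenumbers q2 is the numerical counterpart of the dispersion-relation analysis performed symbolically by the pipeline.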
The software pipeline comprises six steps to identify patterning networks:
1. Construction of a list of possible networks of size k.
2. Selection of strongly connected networks without isolated nodes or nodes that solely act as readouts.
3. Deletion of symmetric networks, such that isomorphic networks are considered only once.
4. Selection of networks that are stable in the absence of diffusion (i.e. homogeneous steady state stability).
5. Selection of networks that are unstable in the presence of diffusion (i.e. instability to spatial perturbations).
6. Analysis of the possible reaction-diffusion topologies associated with the networks and derivation of the resulting in-phase and out-of-phase patterns.
Steps 4 and 5 represent the core part of the automated linear stability analysis and involve the majority of analytical computations. In Step 6, our software screens the possible reaction-diffusion topologies associated with a network. A reaction-diffusion network of size k defines only a set of k regulatory links between nodes but does not make any assumption on whether these are activating or inhibiting interactions. In the following, we refer to the possible combinations of activating and inhibiting interactions as 'network topologies'.
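Steps 4 and 5 can be reproduced with a general-purpose computer algebra system. The sketch below, written with SymPy (the paper's own pipeline may use a different system), derives the closed-form stability and instability conditions for a generic two-node network with first-order kinetics:

```python
import sympy as sp

f_u, f_v, g_u, g_v = sp.symbols('f_u f_v g_u g_v', real=True)
d, q2 = sp.symbols('d q2', positive=True)  # diffusion ratio, squared wavenumber

J = sp.Matrix([[f_u, f_v], [g_u, g_v]])

# Step 4 -- stability without diffusion (Routh-Hurwitz for a 2x2 system):
# requires trace(J) < 0 and det(J) > 0.
trace_J, det_J = J.trace(), J.det()

# Step 5 -- instability with diffusion D = diag(1, d): a pattern can grow
# if h(q2) = det(J - q2*D) becomes negative for some wavenumber q2 > 0.
h = sp.expand((J - q2 * sp.diag(1, d)).det())

# Minimizing h over q2 gives the critical mode and the instability threshold:
q2_crit = sp.solve(sp.diff(h, q2), q2)[0]  # equals (d*f_u + g_v)/(2*d)
h_min = sp.simplify(h.subs(q2, q2_crit))   # equals det(J) - (d*f_u + g_v)**2/(4*d)
```

For larger networks the same recipe applies to the full characteristic polynomial of J - q2*D, which is where automation becomes essential.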
High-throughput mathematical screen for minimal three-node and four-node reaction-diffusion networks
We used our software RDNets to systematically explore the effect of cell-autonomous factors in reaction-diffusion models for the generation of self-organizing patterns. We studied two types of networks: a) three-node networks with two diffusible nodes and one non-diffusible node, representing the interaction between two secreted molecules and one signaling pathway, and b) four-node networks with two diffusible nodes and two non-diffusible nodes, representing the interaction between multiple ligands and signaling pathways. Table 1 shows the number of networks identified at each step of our automated mathematical analysis (see Figure 1—figure supplements 1–4 for the complete catalog of the identified reaction-diffusion networks). Our analysis revealed that in the presence of cell-autonomous factors there are three types of networks with different constraints on the diffusible signals:
Type I networks, which require differential diffusivity of the signals;
Type II networks, which can form a pattern with equally diffusing signals;
Type III networks, which can form a pattern for any combination of the diffusion coefficients in D, where D is the list of diffusion coefficients that are non-zero.
We found that 70% of the identified networks with non-diffusible nodes are of Type II and Type III (Figure 1b), showing that in the presence of cell-autonomous factors the differential diffusivity requirement is unexpectedly rare. Type III networks have never been characterized before and surprisingly have patterning conditions that are independent of specific diffusion rates. We found that Type III networks are not only numerous but also extremely robust to changes in parameter values compared to Type I and Type II networks (Figure 1b, Materials and methods). Using numerical simulations, we systematically confirmed our mathematical analysis and determined that a network can form all possible combinations of in-phase or out-of-phase periodic patterns depending on the network topology (Figure 1c, Appendix 1). Together, our results show that realistic reaction-diffusion networks are intrinsically robust, do not require differential diffusivity, and have patterning capabilities identical to classical two-node reaction-diffusion models. Importantly, the novel class of Type III networks that we discovered suggests a new mechanism of pattern formation that is independent of short-range activation and long-range inhibition based on differential diffusivity.
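The kind of numerical confirmation described above can be reproduced with a simple finite-difference integration. The sketch below simulates a classical two-node Turing system (Schnakenberg kinetics, used here as an illustrative stand-in for the networks screened in the paper) on a periodic 1D domain and shows small random noise self-organizing into a periodic pattern:

```python
import numpy as np

# Illustrative two-node Turing system with Schnakenberg kinetics
# (hypothetical parameters; d is the inhibitor/activator diffusion ratio).
a, b, d = 0.1, 0.9, 20.0
L, n = 50.0, 128
dx = L / n
dt = 0.002  # satisfies the explicit-Euler diffusion limit dt < dx**2 / (2*d)
rng = np.random.default_rng(0)

u = (a + b) + 0.01 * rng.standard_normal(n)        # perturbed steady state
v = b / (a + b) ** 2 + 0.01 * rng.standard_normal(n)

def lap(f):
    """1D Laplacian with periodic boundary conditions."""
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

for _ in range(50000):  # integrate to t = 100
    f = a - u + u * u * v
    g = b - u * u * v
    u = u + dt * (f + lap(u))
    v = v + dt * (g + d * lap(v))

# The initial noise (amplitude 0.01) has grown into a periodic pattern:
print(np.ptp(u) > 0.1)  # True: peaks and troughs much larger than the noise
```

Linear stability analysis predicts which wavenumbers grow; the simulation shows the nonlinear kinetics saturating that growth into a stable periodic pattern.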
The network topology defines Type I, Type II and Type III networks
To obtain insight into the organizing principles underlying the three types of networks identified by our high-throughput analysis, we developed a novel graph-theoretical formalism to express the pattern-forming conditions in terms of network feedbacks rather than reaction parameters (see Materials and methods and Appendix 2). This analysis determines which feedback cycles contribute to the stability and the instability conditions (Figure 2a,b) and defines the topological features that underlie Type I, Type II, and Type III networks. In agreement with previous studies (Murray, 2003), our analysis confirmed that two-node networks can only simultaneously satisfy the stability and instability conditions when the diffusion ratio d between the inhibitor and the activator is greater than one (Figure 2, left column). This observation has been linked with the widespread belief that reaction-diffusion systems require differential diffusivity to implement short-range autoactivation and long-range inhibition. Our analysis instead suggests that the differential diffusivity requirement arises from the opposite nature of the stability and instability conditions, which require that the destabilizing feedback must be both higher and lower than the stabilizing feedback. Since the diffusion term only appears in the destabilizing condition, it assumes the role of a unique pivot that can satisfy both conditions simultaneously when d > 1. Our results indicate that the presence of non-diffusible nodes allows feedbacks that do not appear in the instability conditions to act as an additional pivot to satisfy both conditions simultaneously by increasing stability. This is the case for most Type II networks (Figure 2, middle column) that contain additional negative feedbacks that allow for equal diffusivities (Klika et al., 2012; Korvasova et al., 2015).
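In the standard notation for a two-node system (e.g. Murray, 2003), with Jacobian entries $f_u, f_v, g_u, g_v$ at the steady state and diffusion coefficients $D_1 = 1$, $D_2 = d$, these two conditions read:

```latex
\text{stability:}\quad f_u + g_v < 0, \qquad f_u g_v - f_v g_u > 0,
\\[4pt]
\text{instability:}\quad d\, f_u + g_v > 2\sqrt{d\,(f_u g_v - f_v g_u)}\; (>0).
```

The instability condition forces $f_u > 0$ (self-activation), so the stability condition requires $g_v < -f_u < 0$; combining $d\,f_u + g_v > 0$ with $-g_v > f_u$ then yields $d > -g_v/f_u > 1$, which is the differential diffusivity requirement of two-node networks.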
Importantly, our analysis also reveals that non-diffusible nodes can implement positive feedbacks that can drive the network unstable independently of stabilizing feedbacks and for any diffusion ratio d. This is the case for Type III networks (Figure 2, right column), where the stability and instability conditions are uncoupled and can be simultaneously satisfied for large parameter sets. This is possible because immobile factors can act as 'capacitors' that retain and amplify perturbations independently of the reactants' diffusion coefficients (see Appendix 3 for details). Such systems represent a fundamentally new pattern formation mechanism that has not been described previously.
Together, our results show that models based on 'short-range autoactivation and long-range inhibition' implemented by differential diffusivity are only a special case of a general trade-off between stabilizing and destabilizing feedbacks required for pattern formation. The virtually indistinguishable simulations of Type I networks with differential diffusivity and Type II networks with equal diffusivities reveal that the final appearance of the periodic patterns does not reflect a difference in the range of activators and inhibitors but only a difference in their amplitude (Figure 2c, see Appendix 3 for details). Indeed, in other Type II and Type III networks the relationship between the amplitude of activators and inhibitors can even be inverted, such that the perceived range of the activator appears larger than the perceived range of the inhibitor. Therefore, in contrast to previous studies (Kondo and Miura, 2010), we propose that long-range lateral inhibition is not required to limit the expansion of the activator (Appendix 3).
Qualitative and quantitative constraints for candidate networks
To demonstrate the functionality and applicability of RDNets, we analyzed two known self-organizing developmental patterning networks, the Nodal/Lefty reaction-diffusion system and the BMP/Sox9/Wnt network. In the following, we show how quantitative and qualitative experimental data from these developmental systems can be used to constrain the high-throughput analysis and to characterize the possible underlying patterning topologies.
It has been proposed that Nodal and Lefty implement an activator-inhibitor system that patterns the germ layers and the left-right axis in vertebrates (Chen and Schier, 2001; Shiratori and Hamada, 2006; Shen, 2007; Meinhardt, 2009; Schier, 2009; Kondo and Miura, 2010; Rogers and Schier, 2011; Korvasova et al., 2015) (Figure 3a). In agreement with this hypothesis, the self-enhancing activator Nodal has been shown to diffuse 7.5 times slower than the feedback-induced inhibitor Lefty in living zebrafish embryos (Müller et al., 2012). The Nodal/Lefty system has been modeled as a two-component activator-inhibitor system (Nakamura et al., 2006; Müller et al., 2012), but the influence of cell-autonomous factors including receptors and the well-characterized intracellular signal transduction cascade via phosphorylated Smad2/3 (Schier, 2009) has not been studied. We used our software to screen for networks that extend the two-node Nodal/Lefty system with a non-diffusible node corresponding to active Nodal signaling (Figure 3b). The screen was constrained with known qualitative regulatory interactions: a positive feedback loop between Nodal and its signaling, and a promotion of Lefty by Nodal signaling (Figure 3b). Moreover, we constrained the two negative self-regulations on Nodal and Lefty, which represent their clearance from the diffusible pool, with the previously measured clearance rate constants (Müller et al., 2012). Finally, we selected only reaction-diffusion networks that produced in-phase patterns of Nodal and Lefty, which recapitulate their overlapping expression domains (Schier, 2009). With these constraints, our mathematical analysis identified just two possible minimal networks: in one network Lefty inhibits Nodal signaling indirectly at the receptor level, and in the other network Lefty inhibits Nodal directly (Figure 3b).
These predictions are in agreement with the two possible mechanisms by which Lefty has been proposed to inhibit Nodal activity: by binding to the Nodal receptor or by directly sequestering Nodal (Chen and Shen, 2004). However, the role and significance of these two alternative mechanisms for Nodal/Lefty-mediated patterning has remained unclear (Cheng et al., 2004; Middleton et al., 2013). Our mathematical analysis predicts that the first mechanism (Lefty blocks the receptor complex) determines a Type II network, whereas the second mechanism (Lefty blocks Nodal directly) determines a Type III network. Importantly, both models suggest that the Nodal/Lefty system may form patterns without differential diffusivity of activator and inhibitor. Using the clearance rate constants of Nodal and Lefty as quantitative constraints, our mathematical analysis predicts a possible minimum diffusion ratio d = 0.55 for the Type II network, whereas the Type III network allows for any combination of diffusion coefficients (Figure 3d). The robustness analysis of the networks shows that for unconstrained values of d, the Type III network is more robust to parameter changes (Figure 3c). However, when we fix the diffusion ratio to the experimentally quantified value (Müller et al., 2012) (d = 7.5), the Type II network becomes more robust than the Type III network (Figure 3d). This shows that, while Nodal and Lefty do not necessarily need to have different diffusivities to form a pattern, the combination of differential diffusivity and clearance rate constants increases the robustness of the Type II system.
As a second example, we used RDNets to analyze the BMP/Sox9/Wnt (BSW) self-organizing network that underlies digit patterning (Sheth et al., 2012; Raspopovic et al., 2014). The expression patterns and the signaling activity of the network components have been well characterized, showing that Sox9 forms periodic expression peaks that are out of phase with BMP expression and Wnt activity (Figure 4a). A three-node reaction-diffusion network with two diffusible nodes for the secreted signals BMP and Wnt and a non-diffusible node for the transcription factor Sox9 has previously been derived based on the known regulatory interactions (Figure 4b). It was shown that this network recapitulates the out-of-phase pattern between BMP/Wnt and Sox9 and forms a pattern with extremely low differential diffusivity requirements (d = 1.25). Our comprehensive mathematical analysis reveals that this three-node system is just another topology of the reaction-diffusion network that we analyzed for the extended Nodal/Lefty system; it is therefore a Type II network that can potentially form a pattern even when BMP and Wnt have equal diffusion coefficients. In previous studies, this observation was missed because the clearance rates of BMP and Wnt had been assumed to be identical (Raspopovic et al., 2014). However, as we showed in the previous example for Nodal and Lefty, if BMP is cleared faster than Wnt, the diffusion ratio can be equal to or lower than one, d ≤ 1. The three-node BSW model recapitulates the out-of-phase pattern between BMP/Wnt and Sox9, but due to its high abstraction level it does not explain the opposite BMP expression and BMP activity patterns observed in the experimental data (Figure 4a).
We therefore used RDNets to screen for more complex models with five nodes that represent all components of the network: two diffusible nodes for BMP (B) and Wnt (W) and three non-diffusible nodes, one for the canonical BMP pathway through pSmad1/5/8 (Sm), one for the intracellular Wnt signaling cascade (β-catenin, β), and one for Sox9 (S). We selected only networks that formed in-phase and out-of-phase patterns reflecting the experimental data (Figure 4a). Previous studies (Raspopovic et al., 2014) showed that Sox9 is promoted by BMP signaling through pSmad1/5/8 and is inhibited by Wnt through β-catenin. Similar to the Nodal/Lefty example, we constrained the mathematical screen by incorporating these known regulatory interactions. Unexpectedly, the screen revealed that if β-catenin directly inhibited Sox9, no network could recapitulate the out-of-phase patterns between BMP expression and BMP signaling. By performing a more general screen that left this interaction unconstrained, we found that the opposite BMP expression and signaling patterns can be obtained when β-catenin indirectly inhibits Sox9 through pSmad1/5/8. RDNets also predicts that the most robust networks include the following additional interactions: i) a negative feedback from Sox9 to Wnt, ii) a negative feedback from pSmad1/5/8 to BMP, and iii) either a positive feedback from β-catenin to Sox9 or a negative feedback from β-catenin on Wnt (Figure 4b, gray arrows). Interestingly, the majority of networks identified by our screen were of Type III, suggesting that the proportion of Type III networks increases when more non-diffusible nodes are added.
Designing robust synthetic reaction-diffusion circuits
Although reaction-diffusion mechanisms have a simple network design, they exhibit unique self-organizing capabilities making them appealing for synthetic engineering (Diambra et al., 2015). So far, the synthetic implementation of reaction-diffusion systems has been impeded by the small pattern-forming parameter space of simple two-node models, their requirement for differential diffusivity (Carvalho et al., 2014), and a general gap between abstract models and real sender-receiver reaction-diffusion circuits (Marcon and Sharpe, 2012; Barcena Menendez et al., 2015).
RDNets provides a comprehensive catalog of reaction-diffusion networks that do not require differential diffusivity of the signaling molecules, which enables bioengineers to explore new mechanisms to form periodic spatial patterns in a robust manner. We demonstrate the utility of RDNets by proposing an extension to an existing synthetic circuit for cell-cell communication in yeast (Chen and Weiss, 2005). The original synthetic circuit introduced a diffusible plant hormone, cytokinin isopentenyladenine (IP), and its receptor AtCRE1 into yeast (Figure 5a). This circuit was used to implement a sender-receiver and a quorum sensing mechanism based on a positive feedback loop between IP signaling and IP (Figure 5a). We used RDNets to identify possible signaling networks that can extend this positive feedback with additional interactions to form a reaction-diffusion pattern. Since at least two diffusible nodes are required to form a pattern (Murray, 2003), we screened minimal four-node networks that include the engineered positive feedback and candidate interactions with another diffusible node. In order to look for realistic and easily implementable signaling circuits, we explored only networks with interactions between diffusible nodes through non-diffusible factors representing intracellular signaling cascades. We also imposed self-regulations on diffusible nodes to be exclusively inhibitory, representing decay. With these constraints, our high-throughput analysis identified 16 minimal reaction-diffusion networks (5 Type I, 3 Type II, 8 Type III), of which the Type II and Type III networks were most robust to parameter changes (Figure 5—figure supplement 1). In the following, we demonstrate how the conditions derived by RDNets can be used to engineer the simplest and most robust Type II network (Figure 5a, right).
In addition to the positive feedback loop, this network contains three additional negative feedbacks: two are self-regulations that correspond to decay, and one is a negative feedback between the newly introduced diffusible node and the non-diffusible node representing the receptor. This network suggests that a simple extension to the circuit developed in Chen and Weiss (2005) could be obtained by a) destabilizing the signaling hormone and the receptor to increase their turnover (c1, c2), and b) introducing another hormone that signals through the same receptor and implements a negative feedback loop to its own expression or activity (c3, Figure 5a, right).
In addition to revealing possible topologies, our automated analysis provides the mathematical formulae of pattern-forming conditions. This feature, together with the specification of quantitative constraints, can be used to calculate pattern-forming parameter ranges. To determine the strength of the newly introduced negative feedbacks required for pattern formation, we constrained the positive cycle strength with the first-order kinetic rate quantified in Chen and Weiss (2005) by fitting measurements of the signaling activity of IP (Figure 5c). Moreover, we assumed that the diffusion and decay rates are similar. With these constraints, RDNets determined that the newly introduced negative feedback has to be stronger than the positive feedback and decay rates (Figure 5c, right). This could be implemented using the more responsive IP-signaling promoter (TRSSRE) developed in Chen and Weiss (2005) to drive the expression of the inhibitor. This specific synthetic network represents only one possibility. We find other Type III networks to be even more robust to parameter changes, but they appear to require the design of more complex synthetic circuits (Appendix 4). Once a synthetic network is designed, RDNets can also be used to automatically derive kinetic models that can simulate the reaction-diffusion network (Figure 5d,e, Appendix 5). Numerical simulations can be used to investigate the qualitative aspect of the pattern and its spatial periodicity. In the long term, all these features open new avenues for designing synthetic reaction-diffusion circuits that could be coupled with gene expression to enable complex applications, such as fabrication of spatially patterned three-dimensional biomaterials and tissue engineering in mammalian cells (Chen and Weiss, 2005; Carvalho et al., 2014).
Discussion
We developed the web-based software RDNets, which exploits a modern computer algebra system to identify new reaction-diffusion networks that reflect realistic signaling systems with diffusible and cell-autonomous factors. Our approach is a new example of high-throughput mathematical analysis, which has several benefits over previous numerical approaches (Ma et al., 2009). First, RDNets can be run from most web browsers and does not demand large computational power. Second, our mathematical analysis yields closed-form solutions and is complete, in contrast to numerical simulations that can necessarily only sample from a small region of the entire parameter space. Third, RDNets derives the conditions for pattern formation and therefore provides mechanistic explanatory power to the users. In addition, it helps to identify reaction-diffusion topologies that are in agreement with qualitative and quantitative experimental constraints, which makes it an unprecedented tool for users who aim to study developmental patterning networks or to design synthetic reaction-diffusion circuits.
Motivated by theoretical studies showing that non-diffusible factors can influence pattern-forming conditions, we used our software to systematically explore the effect of non-diffusible reactants in reaction-diffusion models. Our analytical approach is both comprehensive and informative and reveals that, depending on the network topology, reaction-diffusion systems can belong to three classes: Type I systems that require differential diffusivity, Type II systems that can form patterns with equal diffusivity, and Type III systems that form patterns independently of specific diffusion rates. In particular, the novel class of Type III networks has not been described before and challenges models of short-range activation and long-range inhibition based on differential diffusivity that have dominated the field of developmental and theoretical biology for decades (see Appendix 3 for details).
We used RDNets to obtain new mechanistic insight into two developmental patterning systems. By using quantitative data to constrain possible patterning networks, we found a Type II and a Type III topology that extend the Nodal/Lefty activator-inhibitor system with realistic cell-autonomous signaling. In such extended networks, Nodal and Lefty do not necessarily need to have different diffusivities to form a pattern. However, our results suggest that differential diffusivity can contribute to a more robust patterning system in Type II networks with indirect Nodal signaling inhibition. We propose that the high general robustness of the Type III network might have played a role in the evolution of the Nodal/Lefty reaction-diffusion system in the first place, and that the indirect Nodal signaling inhibition of Type II networks might have been fine-tuned during evolution (Figure 3—figure supplement 1). Similarly, we extended the three-node BSW digit patterning model with additional previously characterized cell-autonomous factors and constrained a five-node model with qualitative data. Our analysis identified realistic network topologies that accurately reflect the previously puzzling opposite pattern of BMP ligands and BMP signaling and predicts novel interactions between network components.
Finally, we used RDNets to design a novel synthetic patterning circuit based on a previously engineered positive feedback module. Identifying a comprehensive catalog of gene networks that can perform a certain behavior has been shown to be a successful strategy to uncover the design space of stripe-forming networks (Cotterell and Sharpe, 2010), which can be directly useful to synthetic biology. In particular, this approach permitted a whole family of network mechanisms to be synthetically constructed in bacteria, all capable of forming a gene expression stripe in a bacterial lawn (Schaerli et al., 2014). Similarly, our software provides a comprehensive catalog of reaction-diffusion networks and enables bioengineers to explore new mechanisms to form periodic spatial patterns in a robust manner. These networks explicitly include non-diffusible factors that mediate signaling and are easy to relate to sender-receiver synthetic toolkits (Barcena Menendez et al., 2015). In addition, we found that the majority of realistic reaction-diffusion networks eliminate the differential diffusivity requirement, which is difficult to implement synthetically (Carvalho et al., 2014; Barcena Menendez et al., 2015). The possibility of using qualitative and quantitative constraints to screen for reaction-diffusion networks makes RDNets a unique tool to customize patterning systems from initial starting networks. Moreover, the pattern-forming conditions derived by the software can be used to estimate parameter ranges and network robustness. Particularly promising is our finding that each network is associated with a set of topologies that exhaustively determine all the in-phase and out-of-phase relations between periodic patterns (Figure 5d). It is therefore possible to design network topologies that promote the colocalized expression of any desired combination of factors.
This will enable novel applications in tissue engineering, where the colocalized expression of differentiating factors can be used to induce specific tissues (Kaplan et al., 2005). Coupled with the three-dimensional pattern-forming capabilities of reaction-diffusion mechanisms (Figure 5e), this could be used to devise new strategies for engineering scaffolds or tissues with complex architecture.
In summary, our analysis defines new concepts of reaction-diffusion-mediated patterning that are directly relevant for developmental and synthetic biology. We demonstrate three applications of our software RDNets to understand developmental mechanisms and to design synthetic patterning systems, but this approach can be extended to numerous other patterning processes (Economou et al., 2012; Menshykau et al., 2012; Hagiwara et al., 2015). We therefore expect that RDNets will contribute to the widespread use of mathematical biology and that a similar approach could be applied to other dynamical processes such as oscillations and traveling waves (Bement et al., 2015).
Materials and methods
Details of the automated mathematical analysis
We analyzed reaction-diffusion networks represented by a reaction matrix $J$ and a diffusion matrix $D$ of size $N \times N$, where $N$ is equal to the number of nodes. The matrix $J$ corresponds to the Jacobian of the reaction-diffusion system and contains partial derivatives that describe the relative influence of one node on another. Elements of the reaction matrix represent the first-order kinetic rates of the regulatory interactions in the network, where a positive rate corresponds to an activation and a negative rate to an inhibition. The matrix $D$ contains the diffusion rates of the reactants along its principal diagonal and is zero otherwise.
Our analysis aims to identify minimal reaction-diffusion networks, defined as the networks with the minimal number of edges $k$ that can form a reaction-diffusion pattern. In the case of 2-node networks, it has been described that minimal reaction-diffusion networks must have $2 \times 2 = 4$ edges (Murray, 2003), and therefore only a completely connected network is allowed. This completely connected 2-node network allows for only two reaction-diffusion topologies: the 'activator-inhibitor system' that forms in-phase periodic patterns, and the 'substrate-depleted model' that forms out-of-phase periodic patterns. Our automated approach takes the following inputs through a graphical user interface: the number of network nodes $N$, constraints on $J$ and $D$ including reaction or diffusion rates set to zero, and the number of regulatory interactions $k$. This last parameter defines the number of edges that each network should have, with an upper bound of $N \times N$ edges representing a completely connected network (Figure 1a). This parameter also defines the number of possible networks that are analyzed by the software, which is calculated according to

$\binom{N \times N}{k} = \frac{(N \times N)!}{k!\,(N \times N - k)!}$

This number represents the possible subsets of size $k$ that can be taken from $J$ and corresponds to the number of possible networks of size $k$.
An important part of the automated high-throughput mathematical analysis is the derivation of the characteristic polynomial, a mathematical expression that determines the stability of the reaction-diffusion system, which is calculated from the determinant of a matrix that combines $J$ and $D$, the 'wave number' $q$, and the eigenvalue $\lambda$. For 3-node networks, the characteristic polynomial has the form

$\lambda^3 + a_1 \lambda^2 + a_2 \lambda + a_3 = 0$

where $\lambda$ is the eigenvalue associated with the reaction-diffusion system, and the coefficients $a_1, a_2$ and $a_3$ are polynomials formulated in terms of the elements of $J$, $D$ and $q$ (see Appendix 1). The eigenvalue $\lambda$ determines the stability of the network: negative real solutions of $\lambda$ represent a system that is stable around its steady state, while a positive real solution of $\lambda$ represents an unstable system. The variable $q$ that appears in the coefficients $a_1, a_2$ and $a_3$ is the wave number introduced by the linear stability analysis; it enters the coefficients through the term $q^2 D$. For values $q>0$, this parameter defines the periodicity of the reaction-diffusion pattern. Step 4 of our pipeline entails finding the ranges of the reaction parameters in $J$ and diffusion parameters in $D$ for each network for which all solutions $\lambda$ are negative when $q=0$. Similarly, Step 5 requires finding parameter intervals for which at least one solution $\lambda$ has a positive real part when $q>0$. For characteristic polynomials of degree higher than 2, this is usually done by using the Routh-Hurwitz stability criterion (Murray, 2003), a mathematical theorem that provides the necessary and sufficient conditions for all roots to be negative in terms of the polynomial coefficients $a_1, a_2, \dots, a_n$. However, as the number of network nodes $N$ increases, finding these parameter intervals becomes challenging and tedious because the coefficients $a_1, a_2, \dots, a_n$ are also complex polynomials of high degree in $q$. We used a computer algebra system to automatically derive and analyze the Routh-Hurwitz criterion in terms of the coefficients $a_1, a_2, \dots, a_n$. Finally, Step 6 requires evaluating which of the $2^k$ possible topologies that exist for a given network are compatible with the pattern-forming conditions derived in Step 5 (see Appendix 1 for details).
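To illustrate the logic of Steps 4 and 5, the dispersion relation of a classical two-node activator-inhibitor system can be probed numerically. This is a minimal sketch with illustrative parameter values of our own choosing, not values from the paper, and a numerical stand-in for the symbolic analysis performed by RDNets:

```python
import numpy as np

def max_growth_rate(J, D, q):
    """Largest real part among the eigenvalues of the stability matrix J - q^2 D."""
    M = J - (q ** 2) * D
    return max(ev.real for ev in np.linalg.eigvals(M))

# Linearized two-node activator-inhibitor system (illustrative rates):
# node u activates itself and v; node v inhibits u and decays.
J = np.array([[0.5, -1.0],
              [1.0, -0.6]])
D = np.diag([0.01, 1.0])  # differential diffusivity: the inhibitor diffuses much faster

# Diffusion-driven (Turing) instability:
# stable without diffusion (q = 0), unstable for some wave number q > 0.
stable_without_diffusion = max_growth_rate(J, D, 0.0) < 0
qs = np.linspace(0.01, 20, 2000)
unstable_with_diffusion = max(max_growth_rate(J, D, q) for q in qs) > 0
print(stable_without_diffusion, unstable_with_diffusion)
```

The wave number at which the growth rate peaks sets the periodicity of the emerging pattern, matching the role of $q$ described above.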
The complete analysis of minimal networks is limited by the existence of analytical solutions. According to the Abel-Ruffini theorem, there is no general algebraic solution for systems with more than four nodes. However, in practice many five-node networks can be solved if the constraints specified in the input of RDNets lead to a simplification of the coefficients of the characteristic polynomial, as is the case for the five-node digit patterning network (Figure 4). Analytical approaches also become challenging when further diffusible nodes are added and when minimal models are extended with additional interactions.
Robustness calculation
We analyzed the robustness of the networks by calculating the volume of the parameter space that respects the pattern-forming conditions in relation to the unit-length multidimensional space of all possible parameter values. This robustness measure corresponds to the probability of randomly picking pattern-forming parameters. The pattern-forming parameter volume is calculated with a multiple integral of the pattern-forming conditions over all the parameters of the reaction-diffusion networks, in the form

$V = \int_{l_1}^{l_2} \cdots \int_{l_1}^{l_2} \int_{dl_1}^{dl_2} \cdots \int_{dl_1}^{dl_2} f(k_1, \dots, k_{N \times N}, d_1, \dots, d_n)\, \mathrm{d}k_1 \cdots \mathrm{d}k_{N \times N}\, \mathrm{d}d_1 \cdots \mathrm{d}d_n$

where $k_1 \dots k_{N \times N}$ are the reaction parameters and $d_1 \dots d_n$ the diffusion parameters, $f(k_1 \dots k_{N \times N}, d_1 \dots d_n)$ is an indicator of the pattern-forming conditions of the network, and $(l_1, l_2)$ and $(dl_1, dl_2)$ are the limits of the reaction and diffusion variables, set respectively to (−0.5, 0.5) and (0, 1) to represent a multidimensional parameter space of unit side length.
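In practice, such a volume can also be approximated by Monte Carlo sampling. The sketch below estimates the robustness of the classical two-node system using the textbook pattern-forming conditions (Murray, 2003); the parameter ranges follow the limits above, but the implementation is our illustration, not the integration performed by RDNets:

```python
import numpy as np

def turing_conditions_2node(j11, j12, j21, j22, d1, d2):
    """Classic two-node Turing conditions (Murray, 2003): stable without
    diffusion, with a diffusion-driven instability for some q > 0."""
    det = j11 * j22 - j12 * j21
    stable = (j11 + j22 < 0) and (det > 0)
    h = d2 * j11 + d1 * j22           # term that can drive the instability
    unstable = (h > 0) and (h ** 2 > 4 * d1 * d2 * det)
    return stable and unstable

rng = np.random.default_rng(0)
n_samples = 50_000
hits = 0
for _ in range(n_samples):
    k = rng.uniform(-0.5, 0.5, size=4)   # reaction parameters in (-0.5, 0.5)
    d = rng.uniform(0.0, 1.0, size=2)    # diffusion parameters in (0, 1)
    if turing_conditions_2node(*k, *d):
        hits += 1
robustness = hits / n_samples            # fraction of pattern-forming parameter sets
print(robustness)
```

The estimate converges to the integral above as the number of samples grows; the exact value depends on the network under study.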
Graph-theoretical formalism
To investigate the topological basis of Type I, Type II, and Type III networks, we developed a new theoretical framework based on graph theory that can be used to rewrite the pattern-forming conditions in terms of network feedbacks rather than their reaction rates. Further details of this theory are provided in Appendix 2.
Graphical user interface of RDNets and specification of qualitative and quantitative constraints
Our web-based software RDNets was written in Mathematica (Wolfram Research Inc., Champaign, Illinois) and is available at http://www.RDNets.com. RDNets requires only the installation of the freely available Wolfram CDF Player plugin (http://www.wolfram.com/cdfplayer/). A simple graphical interface can be used to specify inputs and constraints and to run the high-throughput mathematical analysis. Constraints can be specified by clicking on the nodes or edges of the networks, or by providing specific values for the corresponding parameters (see the User Guide available at http://www.RDNets.com). These constraints are automatically translated into mathematical formulae that are coupled with the symbolic linear stability analysis performed by the computer algebra system. The graphical user interface can be used to explore and simulate the list of reaction-diffusion topologies given as output of the linear stability analysis. Additional constraints can be progressively added to the analysis to refresh the output and to narrow down the list of candidate topologies (see the User Guide available at http://www.RDNets.com).
Appendix 1: Details of the automated linear stability analysis
We consider a reaction-diffusion system of the form

$\frac{\partial \mathbf{c}}{\partial t} = \mathbf{f}(\mathbf{c}) + \mathbf{D} \nabla^2 \mathbf{c}$ (1)

$(\overline{\mathbf{n}} \cdot \nabla)\,\mathbf{c} = 0 \text{ on } \partial \mathbf{\Omega}$ (2)

where $\mathbf{c}$ is a vector of $N \ge 2$ reactant concentrations, $\mathbf{f}$ represents the reaction kinetics, and $\mathbf{D}$ is a diagonal matrix of diffusion coefficients greater than or equal to zero. Equation (2) represents zero-flux boundary conditions, where $\partial \mathbf{\Omega}$ is the closed boundary of the spatial domain $\mathbf{\Omega}$ and $\overline{\mathbf{n}}$ is the outward normal to $\partial \mathbf{\Omega}$. We restrict our analysis to zero-flux boundary conditions because we are interested in deriving the analytical conditions required to form a self-organizing spatial pattern in the absence of pre-existing asymmetries or external inputs.
To derive the pattern formation conditions of a reaction-diffusion system of size $N$, we use a computer algebra system to automatically build a list of $N$ reactants ($l$), a reaction Jacobian matrix $\mathbf{J}$ of size $N \times N$ with entries $\mathbf{J}_{ij} = \partial f_i / \partial c_j$ evaluated at steady state, and a diagonal diffusion matrix $\mathbf{D} = \mathrm{diag}(d_1, \dots, d_N)$ of size $N \times N$, where up to the six-node case we name the reactants $r_1, r_2, r_3, r_4, r_5$, and $r_6$ as $u, v, w, z, h$, and $r$.
Following classical linear stability analysis (Murray, 2003), we derive the stability matrix $\mathbf{F}^{\text{RD}}$ as

$\mathbf{F}^{\text{RD}} = C \circ \mathbf{J} - q^2 \mathbf{D}$ (3)

where $q$ is the wavenumber, $\circ$ denotes the element-wise product, and $C$ is a newly introduced $N \times N$ connectivity matrix whose elements can only be 0 or 1 and which is used to systematically select subsets of the Jacobian matrix $\mathbf{J}$.
Our software RDNets also takes as input the parameter $k$, which represents the number of edges that will be considered between network nodes. In other words, the number $k$ defines $N \times N - k$ elements of $\mathbf{J}$ that will be set to zero. Finally, the software can also take as input a series of constraints on the elements of $\mathbf{J}$ and $\mathbf{D}$ from qualitative and quantitative experimental data.
Step 1. Possible networks of size $k$
We first generate a list of possible networks with $k$ edges. This is done by systematically selecting all possible combinations of $N \times N - k$ elements of $\mathbf{J}$ that can be set to zero. The number of combinations is calculated as the binomial coefficient $\binom{N \times N}{k}$. For each of these combinations we derive a matrix $C$ with $N \times N - k$ elements set to 0 and the remaining elements set to 1. Appendix 1—figure 1 shows an example of a $C$ matrix and its corresponding network in the case of $N=3$ and $k=6$.
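The enumeration in this step can be sketched as follows (our illustration, not RDNets code):

```python
from itertools import combinations
from math import comb

import numpy as np

def connectivity_matrices(N, k):
    """Yield every N x N binary connectivity matrix C with exactly k edges,
    by choosing which N*N - k entries of the Jacobian are set to zero."""
    cells = [(i, j) for i in range(N) for j in range(N)]
    for zero_cells in combinations(cells, N * N - k):
        C = np.ones((N, N), dtype=int)
        for (i, j) in zero_cells:
            C[i, j] = 0
        yield C

mats = list(connectivity_matrices(3, 6))
# The binomial coefficient counts the candidate networks: C(9, 6) = 84
print(len(mats), comb(9, 6))
```

Each generated matrix $C$ is then passed to the strong-connectivity and symmetry filters of Steps 2 and 3.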
Importantly, we define the minimal size $k$ of a reaction-diffusion network of $N$ nodes as the minimal number of edges $k$ for which at least one of the possible networks can satisfy the requirements for Turing instabilities. In the case of $N=2$, it is well known that all the elements of $\mathbf{J}$, that is $2 \times 2 = 4$, have to be different from zero, and therefore the minimal network size is $k=4$ (Murray, 2003). In other words, all the regulatory edges of 2-node networks have to be present to be able to form a pattern, and therefore there is just one possible network. By analyzing 2-node, 3-node, 4-node, and 5-node networks, we empirically identified the minimal network sizes shown in Appendix 1—Table 1.
Step 2. Selecting only strongly connected networks
From the matrices $C$, we construct a list of directed graphs that represent each network. These graphs can be constructed by deriving the adjacency matrices from $C$ as the transpose $C^T$. Finally, we use a built-in function of the computer algebra system Mathematica to select the matrices $C$ that correspond to strongly connected graphs. This filter guarantees that networks with isolated nodes or nodes that solely act as readouts are discarded (Appendix 1—figure 2).
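Outside a computer algebra system, the same filter can be implemented with a double reachability test: a directed graph is strongly connected if and only if every node can be reached from an arbitrary start node in both the graph and its reverse. A minimal sketch, assuming the $C$-matrix convention described above:

```python
def is_strongly_connected(C):
    """True if the directed graph encoded by connectivity matrix C is strongly
    connected. C[i][j] = 1 is read as an edge from node j to node i, so the
    adjacency matrix is the transpose of C (as in Step 2). Diagonal loops are
    irrelevant for connectivity."""
    n = len(C)

    def all_reachable_from_0(adj):
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for v in range(n):
                if adj[u][v] and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n

    adjacency = [[C[j][i] for j in range(n)] for i in range(n)]  # C transposed
    reverse = [[adjacency[j][i] for j in range(n)] for i in range(n)]
    return all_reachable_from_0(adjacency) and all_reachable_from_0(reverse)

cycle = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]      # 3-node cycle: strongly connected
readout = [[0, 0, 0], [1, 0, 0], [1, 0, 0]]    # nodes 2 and 3 are pure readouts
print(is_strongly_connected(cycle), is_strongly_connected(readout))
```

Networks failing this test, such as the readout example, are discarded before the stability analysis.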
Step 3. Deletion of symmetric networks
The third step of our automated analysis removes all symmetric networks. Symmetric networks are defined as networks whose graphs are isomorphic. To find isomorphic networks, we extended the built-in isomorphism test function of Mathematica to take into account all reaction or diffusion constraints that may introduce additional differences. An example of a symmetric network and its isomorphisms is given in Appendix 1—figure 3.
Step 4. Selecting stable networks
This step is a central part of the linear stability analysis and required us to develop a program that derives the characteristic polynomial of a reaction-diffusion system in a symbolic manner. This was done by calculating the determinant of a matrix $A$ defined as

$A = \mathbf{F}^{\text{RD}} - \lambda I$ (4)

where $\lambda$ is the eigenvalue and $I$ the identity matrix. Setting this determinant to zero gives the characteristic polynomial

$\det(A) = \lambda^N + a_1 \lambda^{N-1} + \dots + a_{N-1} \lambda + a_N = 0$ (5)

The coefficients of the characteristic polynomial $(a_1, \dots, a_N)$ are also polynomials in terms of $\mathbf{J}$, $\mathbf{D}$ and $q$. A diffusion-driven instability requires that the solutions of the characteristic polynomial are all negative when $q=0$ and that there is at least one real positive solution when $q > 0$:

$\mathrm{Re}(\lambda) < 0 \text{ for all } \lambda \text{ when } q=0, \quad \exists\, \lambda > 0 \text{ for some } q > 0$ (6)
As the number of nodes $N$ increases, finding the analytical solution of Equation (5) becomes unfeasible. As an alternative, the Routh-Hurwitz stability criterion can be used to derive the necessary and sufficient conditions to satisfy Equation (6) by deriving Routh-Hurwitz terms that are obtained by combining the coefficients $(a_1, \dots, a_N)$. Our software RDNets automatically derives the Routh-Hurwitz terms $\Delta_i$ for the general case of degree $N$. For example, in the case of $N=4$ the Routh-Hurwitz terms are

$\Delta_1 = a_1, \quad \Delta_2 = a_1 a_2 - a_3, \quad \Delta_3 = a_3 \Delta_2 - a_1^2 a_4, \quad \Delta_4 = a_4 \Delta_3$ (8)

The Routh-Hurwitz stability criterion states that all eigenvalues have a negative real part if and only if $\Delta_i > 0$ for $i = 1, \dots, N$. As mentioned above, the coefficients $a_1$ to $a_4$ are polynomials in terms of $\mathbf{J}$, $\mathbf{D}$ and $q$, which quickly become complex as $N$ increases: with $N=4$, each coefficient $a_k$ of the general characteristic polynomial $\lambda^4 + a_1 \lambda^3 + a_2 \lambda^2 + a_3 \lambda + a_4 = 0$ is, up to sign, the sum of all $k \times k$ principal minors of the matrices $\mathbf{J}$ and $\mathbf{D}$ combined in $\mathbf{F}^{\text{RD}}$. It is evident that even in the case with $N=4$ the coefficients have complex formulae. Moreover, when substituted into Equation (8), they give rise to further complex Routh-Hurwitz terms of higher order in $q$. The stability conditions are derived when $q=0$, which simplifies the coefficients. However, even in this case the manual derivation of the conditions that make all Routh-Hurwitz terms positive is challenging. We therefore automated these tedious algebraic calculations using the computer algebra system in Mathematica.
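For a quartic characteristic polynomial, the textbook Hurwitz determinants can be evaluated numerically once the coefficients are known. A minimal sketch (our illustration; RDNets carries out the equivalent computation symbolically):

```python
def routh_hurwitz_stable_quartic(a1, a2, a3, a4):
    """Routh-Hurwitz test for l^4 + a1*l^3 + a2*l^2 + a3*l + a4 = 0:
    all roots have negative real parts iff every determinant below is positive."""
    d1 = a1
    d2 = a1 * a2 - a3
    d3 = a3 * d2 - a1 ** 2 * a4
    d4 = a4 * d3
    return all(d > 0 for d in (d1, d2, d3, d4))

# (l + 1)^4: all roots at -1, hence stable
print(routh_hurwitz_stable_quartic(4, 6, 4, 1))      # True
# (l - 1)(l + 2)(l + 3)(l + 4): one positive root, hence unstable
print(routh_hurwitz_stable_quartic(8, 17, -2, -24))  # False
```

In the full analysis the coefficients are functions of $q$, so the same test is applied symbolically at $q=0$ (stability) and for $q>0$ (instability).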
Step 5. Selecting unstable networks with diffusion
In the previous section we showed that the conditions for the stability of a reaction network can be derived by imposing that all the Routh-Hurwitz determinants are positive at $q=0$. Conversely, the requirements for the existence of diffusion-driven instabilities are obtained by imposing that at least one Routh-Hurwitz determinant turns negative when diffusion is considered: $\exists\, i$ such that $\Delta_i(q) < 0$ for some $q > 0$.
In addition, the Routh-Hurwitz theorem allows us to determine, by checking which of the determinants turn negative, the conditions that lead to stationary Turing patterns or oscillatory patterns. This can be done by considering the necessary and sufficient conditions for the absence of diffusion-driven instabilities presented in previous studies (Kellog, 1972; Cross, 1978; Othmer, 1982). A violation of any of these conditions is necessary and sufficient for the existence of a single real eigenvalue crossing into the right half plane and thus for the formation of a stable Turing pattern. This occurs if and only if $\Delta_N(q) < 0$ for $q > 0$, which can be rewritten as the simpler equivalent condition $a_N(q) < 0$ and $a_k(q) > 0$ for $q > 0$ and for all $k \ne N$, by taking into account the identities between the coefficients $a_k$ and the $k$th elementary symmetric functions of the eigenvalues (Horn and Johnson, 1990). Since in this work we are interested in the analysis of stable Turing patterns, we can use these conditions to select for networks that produce a stable pattern and filter out those that produce oscillatory patterns.
As shown in the previous section, these conditions generally lead to complex formulae. Non-diffusible factors are associated with vanishing entries in the diagonal of $\mathbf{D}$, which reduces the complexity of the equations. Nevertheless, the negativity conditions of the Routh-Hurwitz determinants (see for example $\Delta_3$ in Equation [8]) remain tedious to derive. However, the algebraic calculations are automated with the computer algebra system in RDNets.
The last step of the analysis consists of finding when the stability conditions and the instability conditions for the existence of Turing patterns are fulfilled simultaneously. Importantly, the computer algebra system makes it possible to handle the combinatorial explosion of algebraic cases systematically in moderate computational time.
Step 6. Analysis of possible network topologies
The networks derived in Step 5 represent connectivities between nodes (matrix $C$) associated with coefficients of the characteristic polynomial that can satisfy the pattern-forming conditions. These conditions are written in terms of $J$ and $D$ but do not make any explicit assumption on whether elements of the reaction matrix $J$ must have a positive (representing activation) or a negative (representing inhibition) value. We define a network topology as a set of signs associated with the reaction rates of $J$ whose corresponding elements in $C$ are set to one. Given a network of size $k$, there are $2^k$ possible topologies that encode all the possible combinations of positive and negative signs for a given matrix $C$. Reaction-diffusion topologies are topologies that can satisfy the pattern-forming conditions. Our high-throughput analysis of minimal 3-node and 4-node signaling networks reveals that every unconstrained network is associated with a set of topologies that exhaustively determine all possible in-phase and out-of-phase periodic patterns between network nodes (Appendix 1—figure 4).
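Enumerating the $2^k$ sign combinations for a given connectivity matrix is straightforward; the matrix `C` below is a hypothetical example chosen only to illustrate the counting:

```python
from itertools import product

import numpy as np

def topologies(C):
    """Yield all 2^k signed matrices (+1 activation, -1 inhibition)
    obtained by assigning a sign to each of the k edges of C."""
    n = len(C)
    edges = [(i, j) for i in range(n) for j in range(n) if C[i][j]]
    for signs in product((1, -1), repeat=len(edges)):
        T = np.zeros((n, n), dtype=int)
        for (i, j), s in zip(edges, signs):
            T[i, j] = s
        yield T

# Hypothetical 3-node network with k = 6 edges (a cycle plus three self-loops)
C = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
tops = list(topologies(C))
print(len(tops))  # 2^6 = 64 candidate topologies
```

Each signed matrix is then tested against the pattern-forming conditions to decide whether it is a reaction-diffusion topology.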
Optional step: Pattern phase prediction
RDNets can analyze the list of network topologies to select for a specific phase of the periodic patterns. This feature requires performing both analytical and numerical computations that derive the relative signs of the eigenvector associated with a network topology. First, a random set of parameters that satisfies the pattern-forming conditions is obtained for each network topology using a built-in function of the computer algebra system. The parameters are substituted into the stability matrix (Equation [3]) to leave $q$ as the only free parameter, and the solutions $\lambda$ of the characteristic polynomial (Equation [5]) are calculated. The solution $\lambda_r$ without an imaginary part is selected, and its dispersion relation is analyzed with a gradient ascent numerical method to identify the value $q_{max}$ for which the eigenvalue $\lambda$ is maximal.
Finally, $q_{max}$ is substituted for $q$ in Equation (3), and the eigenvectors of the matrix are calculated (one eigenvector for each solution $\lambda$). Each component of the eigenvector associated with $\lambda_r$ represents a reactant, and the relative signs between the components represent the phases of the patterns. Reactants with eigenvector components of the same sign will be in-phase, while reactants with eigenvector components of opposite sign will be out-of-phase.
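A minimal numerical sketch of this phase prediction, with illustrative two-node activator-inhibitor parameters of our own choosing and a simple grid search over $q$ standing in for the gradient ascent:

```python
import numpy as np

def patterns_in_phase(J, D, qs):
    """Find q_max maximising the leading real eigenvalue of J - q^2 D, then
    compare the signs of the components of the associated eigenvector."""
    best_q, best_growth = qs[0], -np.inf
    for q in qs:
        growth = max(ev.real for ev in np.linalg.eigvals(J - q ** 2 * D))
        if growth > best_growth:
            best_q, best_growth = q, growth
    M = J - best_q ** 2 * D
    evals, evecs = np.linalg.eig(M)
    v = evecs[:, np.argmax(evals.real)].real   # eigenvector of the leading eigenvalue
    return v[0] * v[1] > 0                     # same sign -> in-phase patterns

# Illustrative activator-inhibitor rates (not taken from the paper)
J = np.array([[0.5, -1.0], [1.0, -0.6]])
D = np.diag([0.01, 1.0])
print(patterns_in_phase(J, D, np.linspace(0.1, 20, 400)))
```

The overall sign of an eigenvector is arbitrary, so only the relative signs between components are meaningful, which is exactly what the product test checks.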
Appendix 2: Graph-theoretical formalism to analyze network topologies
Our high-throughput mathematical analysis reveals that different network topologies determine pattern-forming requirements with three different types of diffusion constraints: Type I requires differential diffusivity, Type II allows for equal diffusivity, and Type III represents conditions independent of specific diffusion rates. To investigate the topological basis of these constraints, we developed a new framework based on graph theory to analyze the pattern-forming conditions in terms of network feedbacks rather than elements of the reaction matrix $J$. This can be done by rewriting the coefficients of the characteristic polynomial in terms of cycles of the graph associated with the reaction matrix. These coefficients can be used to derive the Routh-Hurwitz determinants and thus the stability and instability conditions in terms of cycles.
Analyzing patternforming conditions in terms of network feedbacks
As mentioned in Appendix 1, a diffusion-driven instability occurs when the characteristic polynomial in Equation (5) has only solutions with negative real parts when $q=0$ and at least one positive real solution when $q > 0$. Given the Routh-Hurwitz conditions, the necessary and sufficient conditions to respect these requirements can be formulated in terms of the coefficients $(a_1, \dots, a_N)$ of the characteristic polynomial. In the following, we derive a new graph-theoretical formalism to express the coefficients $a_k$ in terms of cycles of the graph associated with the reaction matrix $\mathbf{J}$.
Let $\gamma_k = \{i_1, \dots, i_k\}$ be a sequence of $k$ distinct integers such that $1 \leqslant i_1 < i_2 < \dots < i_k \leqslant N$, and let $S_k^N$ be the set of all the different $\gamma_k$ sequences of $k$ elements in $\{1, \dots, N\}$. $\mathbf{F}^{\text{RD}}(\gamma_k)$ denotes the $k$-by-$k$ principal submatrix of $\mathbf{F}^{\text{RD}}$ given by the coefficients with row and column indices equal to $\gamma_k = \{i_1, \dots, i_k\}$. There are $N!/((N-k)!\,k!)$ different $k$-by-$k$ principal submatrices $\mathbf{F}^{\text{RD}}(\gamma_k)$, which are in a one-to-one correspondence with all the different sequences $\gamma_k$ in $S_k^N$. The sum of the determinants of all the different $k$-by-$k$ principal submatrices is denoted by

$E_k = \sum_{\gamma_k \in S_k^N} \det\left[\mathbf{F}^{\text{RD}}(\gamma_k)\right]$ (10)
The following identity, which can be verified using the Laplace expansion of the determinant (Horn and Johnson, 1990), expresses the coefficients of the characteristic polynomial in Equation (5) in terms of $E_k$:

$a_k = (-1)^k E_k$ (11)
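This identity can be checked numerically for a random matrix by comparing the coefficients returned by `numpy.poly`, which correspond to $\det(\lambda I - M)$ in monic form, with sums of principal minors (our illustration):

```python
from itertools import combinations

import numpy as np

def principal_minor_sum(M, k):
    """E_k: sum of the determinants of all k x k principal submatrices of M."""
    n = M.shape[0]
    return sum(np.linalg.det(M[np.ix_(idx, idx)])
               for idx in combinations(range(n), k))

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))

# numpy.poly returns the coefficients of det(lambda*I - M):
# lambda^N + c1*lambda^(N-1) + ... + cN, so coeffs[k] plays the role of a_k
coeffs = np.poly(M)
for k in range(1, 5):
    assert np.isclose(coeffs[k], (-1) ** k * principal_minor_sum(M, k))
print("a_k = (-1)^k E_k verified for a random 4 x 4 matrix")
```

The $k=N$ case recovers the familiar fact that the constant term of the characteristic polynomial is $(-1)^N \det(M)$.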
The new formulation of the coefficients $a_k$ of the characteristic polynomial can be expanded to partially uncouple the contribution of the diffusion and reaction terms:

$a_k = (-1)^k \sum_{\gamma_k \in S_k^N} \det\left[\mathbf{J}(\gamma_k) - q^2 \mathbf{D}(\gamma_k)\right]$ (12)

Uncoupling of the diffusion and reaction contributions is achieved by expanding the minors of order $k$ in Equation (12) and grouping the resulting terms according to the number of entries from the diffusion matrix. In this way, each minor $\det[\mathbf{F}^{\text{RD}}(\gamma_k)]$ is expressed as a sum of products of all possible minors $\det[\mathbf{J}(\gamma_m)]$ of order $m \leqslant k$ given by the sequences $\gamma_m = \{i_1, \dots, i_m\}$, multiplied by the $k-m$ coefficients $(d_{j_1} \cdot d_{j_2} \cdots d_{j_{k-m}})$ given by the complementary set $\overline{\gamma}_m = \{j_1, \dots, j_{k-m}\}$ in $\mathbf{D}$. The subsets $\overline{\gamma}_m$ and $\gamma_m$ are complementary sets in $\gamma_k$ in the sense that $\gamma_m \cap \overline{\gamma}_m = \varnothing$ and $\gamma_m \cup \overline{\gamma}_m = \gamma_k$. Then, after some tedious algebra, the following expression is reached:

$a_k = (-1)^k \sum_{\gamma_k \in S_k^N} \left[\det[\mathbf{J}(\gamma_k)] + \sum_{m=1}^{k-1} (-q^2)^{k-m} \sum_{\gamma_m \subset \gamma_k} \det[\mathbf{J}(\gamma_m)] \prod_{j \in \overline{\gamma}_m} d_j + (-q^2)^k \prod_{j \in \gamma_k} d_j\right]$ (13)
The first and third summands in Equation (13) stem exclusively from the reaction and diffusion terms, respectively. The second is formed by minors of the reaction matrix weighted by the complementary coefficients of the diffusion matrix. Next, we show how determinants can be formulated in terms of cycles of the reaction graph associated with a matrix. This allows us to reformulate Equation (13) in terms of feedbacks of the reaction-diffusion network.
The definition of the reaction and interaction graphs closely follows the definition of the Coates graph of a general square matrix (Brualdi and Cvetkovic, 2008). The reaction graph $G_R[\mathbf{J}]$ is a labeled, weighted, directed graph associated with the linearization of the reaction-diffusion system. In a system with $N$ interacting species, $G_R[\mathbf{J}]$ is a graph with $N$ nodes that has a directed edge from node $j$ to node $i$ if $\mathbf{J}_{ij} \ne 0$. The weight assigned to the edge is the coefficient $\mathbf{J}_{ij}$. Note that according to this definition, each entry $\mathbf{J}_{ii}$ in the diagonal of the reaction matrix is associated with an edge that has $i$ as both its initial and terminal node. These edges are called loops and account for decay terms and autocatalysis in the reaction. The interaction graph $G_I[\mathbf{F}^{\text{RD}}]$ is the equivalent of the reaction graph including the diffusion term in $\mathbf{F}^{\text{RD}}$. If the diffusion matrix is diagonal, both graphs are topologically identical and the only difference lies in the weight of the loops. The following $4 \times 4$ matrix $\mathbf{A}$ is used as an example to illustrate these definitions:
According to the definitions given previously, ${G}_{R}[\mathbf{\mathbf{A}}]$ is the $4$node graph shown in Appendix 2—figure 1.
The definitions from graph theory introduced next will be necessary to develop the framework for the analysis of the stability of a reaction-diffusion system. The in-degree and the out-degree of a node are the number of edges that have this node as the terminal or initial node, respectively. A loop, defined as an edge that originates and ends at the same node, contributes 1 to both the in-degree and the out-degree of that node. As an example, in the graph of Appendix 2—figure 1, node $3$ has in-degree equal to $2$, for it has an incoming edge from node 1 and the loop. The out-degree of this node is $3$, because there are two edges going to nodes $1$ and $4$ plus the loop contribution (Appendix 2—figure 2).
A cycle of length $m$ is a subset of $m$ distinct nodes and $m$ distinct edges, with an edge that joins $i_k$ to $i_{k+1}$ for $k=1, \dots, m-1$ and an edge from $i_m$ back to $i_1$. By this definition, loops are also cycles of length one. The weight of a cycle $w(c)$ is the product of the weights of the edges that form the cycle. Cycles are classified as positive or negative according to the sign of their weight. The graph $G_R[\mathbf{A}]$ used as an example has, aside from the four loops associated with the diagonal terms in $\mathbf{A}$, a negative cycle of length 2 and a negative cycle of length 3: $C_2$ is negative with weight $w(C_2) = e \cdot c$, whereas $C_3$ has weight $w(C_3) = e \cdot f \cdot d$ and is also negative (Appendix 2—figure 2).
A subgraph of the reaction graph is a directed graph formed by a subset of its edges and whose set of nodes is a subset ${\gamma}_{k}=\left\{{i}_{1},\dots,{i}_{k}\right\}$ of those in ${G}_{R}$, with ${\gamma}_{k}\subset \left\{1,\dots,N\right\}$. The induced subgraph of ${\gamma}_{k}$, referred to as the I-subgraph ${I}_{{\gamma}_{k}}$, is the subgraph of ${G}_{R}[\mathbf{A}]$ formed by the subset of nodes ${\gamma}_{k}$ and all the edges that join nodes within this set. The induced subgraph ${I}_{{\gamma}_{k}}$ is identical to the graph ${\text{G}}_{\text{R}}[\mathbf{A}({\gamma}_{k})]$ obtained by applying the definition of the reaction graph to the principal submatrix $\mathbf{A}({\gamma}_{k})$, so that all the definitions and properties of the reaction graph carry over to its I-subgraphs. As an example, consider the $3 \times 3$ principal submatrix $\mathbf{A}({\gamma}_{3})$ induced by the sequence ${\gamma}_{3}=\{1,2,4\}$:
The I-subgraph associated with ${\gamma}_{3}$ (Appendix 2—figure 3) is obtained by applying the definition of the reaction graph to the matrix $\mathbf{A}({\gamma}_{3})$ or, equivalently, by erasing from the complete graph ${G}_{R}[\mathbf{A}]$ the nodes that do not belong to ${\gamma}_{3}$ and the edges that do not start and finish at nodes of ${\gamma}_{3}$. Note that the induced subgraphs of the interaction graph ${G}_{I}[{\mathbf{F}}^{\mathbf{RD}}({\gamma}_{k})]$ obtained by considering all possible sequences ${\gamma}_{k}$ are in a one-to-one correspondence with ${\mathbf{F}}^{\text{RD}}({\gamma}_{k})$, the $k \times k$ principal submatrices appearing in the expansion of the $k$th coefficient of the characteristic polynomial in Equations (10) and (11). Likewise, all the terms ${\mathbf{J}}^{\mathbf{R}}({\gamma}_{k})$ in Equation (13) of the coefficient ${a}_{k}$ correspond to one and only one I-subgraph of the reaction graph ${G}_{I}[{\mathbf{J}}^{\mathbf{R}}]$. Hence, the graph definitions introduced previously provide a method to associate a graph with each of the terms in the algebraic stability conditions.
A spanning subgraph is a subgraph that includes all the nodes in ${G}_{R}$, but not necessarily all edges. A linear spanning subgraph $\mathrm{\ell}$, also referred to as an L-subgraph, is a spanning subgraph of ${G}_{R}$ in which each node has in-degree 1 and out-degree 1. This definition implies that an L-subgraph is composed of a set of disjoint cycles and isolated loops, where the cycles are disjoint in the sense that each node belongs to one and only one cycle. The three different L-subgraphs contained in ${G}_{R}[\mathbf{A}]$ are depicted in Appendix 2—figure 4.
The number of cycles in an L-subgraph is denoted by $s(\mathrm{\ell})$. The weight of an L-subgraph is simply defined as the product of the weights of the cycles contained in it:
For the linear spanning subgraphs depicted in Appendix 2—figure 4, the number of cycles and weights are:
The notion of linear spanning subgraphs can be naturally extended to the I-subgraphs of ${G}_{R}[\mathbf{A}]$. The L-subgraphs contained in ${I}_{{\gamma}_{k}}$ are all the different subgraphs of order $k$ formed by a set of disjoint cycles spanning the $k$ nodes of the induced subgraph.
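Because each node of an L-subgraph has in-degree 1 and out-degree 1, L-subgraphs can be enumerated by scanning permutations. A minimal sketch (Python, again using a hypothetical $3 \times 3$ matrix rather than the paper's example $\mathbf{A}$):

```python
from itertools import permutations

# Hypothetical reaction matrix (NOT the example matrix A of the figures)
J = [[-1.0, 0.0, 2.0],
     [1.0, -1.0, 0.0],
     [0.0, -1.0, -1.0]]

def l_subgraphs(A):
    """L-subgraphs of G_R[A]: every node has in-degree 1 and out-degree 1,
    so each L-subgraph corresponds to a permutation p in which all entries
    A[k][p[k]] are nonzero (the edge into node k leaves node p[k]).
    Returns (permutation, weight, number of cycles s)."""
    n = len(A)
    result = []
    for p in permutations(range(n)):
        if any(A[k][p[k]] == 0 for k in range(n)):
            continue                      # a required edge is missing
        w = 1.0
        for k in range(n):
            w *= A[k][p[k]]
        seen, s = set(), 0                # count the disjoint cycles of p
        for start in range(n):
            if start not in seen:
                s += 1
                k = start
                while k not in seen:
                    seen.add(k)
                    k = p[k]
        result.append((p, w, s))
    return result
```

For this matrix there are only two L-subgraphs: the three isolated loops ($s = 3$) and one spanning 3-cycle ($s = 1$).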
The L-subgraphs contained in the reaction graph of a matrix are the factors that determine the stability of the system. It has been shown that the graph methodology provides a way to assign an I-subgraph to each of the terms appearing in the algebraic equations that determine the stability of the system. In particular, these equations are expressed in terms of principal minors $det[{\mathbf{J}}^{\mathbf{R}}({\gamma}_{k})]$ of the reaction matrix. Next, it will be shown that the value of each of these minors is determined only by the weights of the L-subgraphs contained in the associated I-subgraph ${I}_{{\gamma}_{k}}$. This fact explains the intimate relationship between the dynamics of a reaction-diffusion system and the cyclical structure of the reaction network.
The expression of the determinant of an $N \times N$ matrix $\mathbf{A}$ as a linear combination of the weights of the L-subgraphs in ${G}_{R}[\mathbf{A}]$ is known as the Coates formula:
where the sum runs over all the L-subgraphs in ${G}_{R}[\mathbf{A}]$. A sketch of the proof of the Coates formula is given following the work of Chen, 1997 (p. 143); a more formal proof can be found in Brualdi and Cvetkovic, 2008 (p. 94). The first part of the proof shows that there is a one-to-one correspondence between the nonvanishing terms in the determinant of a matrix and the linear spanning subgraphs in its associated graph. The second part of the proof shows that the sign of the contribution of each nonvanishing term is also dictated by the structure of the linear spanning subgraphs. The classical definition of the determinant of an $N \times N$ matrix is:
where the sum is over all the $N!$ permutations $p=\{1,\dots,N\}\to \{{i}_{1},\dots,{i}_{N}\}$. The signature of the permutation is given by ${\epsilon}_{{i}_{1},\dots,{i}_{N}}$ and is equal to $+1$ if $p$ is an even permutation and equal to $-1$ if it is odd. All nonvanishing terms in Equation (19) are a product ${a}_{1{i}_{1}}\cdot \dots \cdot {a}_{N{i}_{N}}$ of $N$ coefficients. Each index appears twice, once as a row index and once as a column index, so that each row and column contribute to the product with exactly one coefficient. Hence, the subgraph in ${G}_{R}[\mathbf{A}]$ defined by the entries ${a}_{1{i}_{1}},\dots,{a}_{N{i}_{N}}$ has $N$ edges with exactly one edge coming into every node and one edge coming out of every node. Therefore, the subgraph associated with every term in Equation (19) is by definition a linear spanning subgraph of ${G}_{R}[\mathbf{A}]$.
Conversely, every linear spanning subgraph $\mathrm{\ell}$ in ${G}_{R}[\mathbf{A}]$ has $N$ nodes with in-degree and out-degree equal to one. The $N$ edges in $\mathrm{\ell}$ are associated with $N$ coefficients in $\mathbf{A}$, the edge directed to node $j$ being the only one in the $j$th row, and the edge coming out of node $j$ being the only one in the $j$th column. Arranging the indexes in increasing row order, the weight of $\mathrm{\ell}$ becomes $w(\mathrm{\ell})={a}_{1{i}_{1}}\cdot \dots \cdot {a}_{N{i}_{N}}$, showing that each linear spanning subgraph in ${G}_{R}[\mathbf{A}]$ corresponds to one and only one of the permutations $p$ in the definition of the determinant. In this way, a one-to-one correspondence has been established between the L-subgraphs in ${G}_{R}[\mathbf{A}]$ and the nonvanishing permutation terms in $det[\mathbf{A}]$. Explicit calculation of the determinant of the example matrix illustrates the first part of the proof, as $det[\mathbf{A}]$ is a linear combination of the weights of the L-subgraphs ${\mathrm{\ell}}_{1}$, ${\mathrm{\ell}}_{2}$, ${\mathrm{\ell}}_{3}$ represented in Appendix 2—figure 4:
The one-to-one correspondence provides a convenient way to label a particular L-subgraph by its associated permutation. The permutation $p=\{{i}_{1},\dots,{i}_{N}\}$ defines univocally the L-subgraph $\mathrm{\ell}(p)$ as the subgraph of ${G}_{R}[\mathbf{A}]$ obtained by selecting the edge from node ${i}_{1}$ to node 1, from node ${i}_{2}$ to node 2 and, in general, from node ${i}_{k}$ to node $k$ for $k=1,\dots,N$. In this way, the sign of the contribution of an L-subgraph to the determinant can be derived by considering the signature of its associated permutation. The L-subgraphs ${\mathrm{\ell}}_{1}$ and ${\mathrm{\ell}}_{3}$ of the example correspond to the even permutations ${p}_{1}=\{1,2,3,4\}$ and ${p}_{3}=\{4,2,1,3\}$. Thus, the corresponding terms in the determinant must have positive sign, as confirmed by examining the explicit expression in Equation (20). Conversely, ${\mathrm{\ell}}_{2}$ is associated with the odd permutation ${p}_{2}=\{3,2,1,4\}$, and consequently the sign of the corresponding term is negative. In the same way that a permutation univocally defines an L-subgraph, a cyclic permutation of $k$ integers univocally defines a cycle passing through $k$ nodes in ${G}_{R}[\mathbf{A}]$.
The second part of the proof shows how the signature of a permutation ${\epsilon}_{p}$ is related to the structure of the associated L-subgraph; more precisely, to the number of cycles $s(\mathrm{\ell})$ contained in it. The parity of a permutation is determined by the number of transpositions into which it can be decomposed. The decomposition is generally not unique, but the parity of this number is invariant, so that permutations are classified as even or odd accordingly. The signature of a transposition is defined as $-1$, and by extension the signature of a permutation is given by the product of the signatures of its factors. Hence, the signature of even permutations is $+1$, whereas the signature of odd permutations is $-1$. The theory of symmetric groups of finite degree establishes that for any permutation $p$ there is a unique decomposition of $p$ as a product of $s(p)$ cyclic permutations (Clarke, 1974):
In turn, any cyclic permutation of $i$ objects can be written as the product of $i-1$ transpositions. Hence, any permutation of $N$ objects can be factorized as $s(p)$ cyclic permutations of $i,j,k,\dots$ objects, with $i+j+k+\dots=N$. The signature of the permutation, given by the product of the signatures of the cyclic factors, is then ${\epsilon}_{p}={(-1)}^{i-1}{(-1)}^{j-1}{(-1)}^{k-1}\dots={(-1)}^{N-s(p)}$. Rearranging, the following identity is obtained:
It has been shown that a permutation $p$ corresponds to an L-subgraph $\mathrm{\ell}$, and that the cyclic permutations in $p$ correspond to the cycles in $\mathrm{\ell}$. Thus, replacing the permutation $p$ with $\mathrm{\ell}$ and the number of cyclic permutations in $p$ with the number of cycles $s(\mathrm{\ell})$ completes the proof of the Coates formula.
The expression for $det[\mathbf{A}]$ in Equation (20) can now be derived strictly from the structure of the associated graph. Indeed, the contributions of ${\mathrm{\ell}}_{1}$ and ${\mathrm{\ell}}_{3}$ are positive because they contain an even number of cycles (four loops in the former case, one loop and one cycle of length 3 in the latter), whereas ${\mathrm{\ell}}_{2}$ contains an odd number of cycles (two loops and a cycle of length 2) and, accordingly, its contribution is negative.
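The Coates formula can also be checked numerically. The sketch below (Python; the matrix is a hypothetical example, since the entries of $\mathbf{A}$ are only given in the figure) sums $(-1)^{N-s(\mathrm{\ell})}w(\mathrm{\ell})$ over all L-subgraphs and compares the result with an independently computed determinant:

```python
from itertools import permutations

def det_coates(A):
    """det(A) as a signed sum over linear spanning subgraphs:
    det(A) = sum over L-subgraphs of (-1)**(N - s) * w, where w is
    the product of edge weights and s the number of cycles."""
    n = len(A)
    total = 0.0
    for p in permutations(range(n)):
        if any(A[k][p[k]] == 0 for k in range(n)):
            continue                  # not an L-subgraph: an edge is missing
        w = 1.0
        for k in range(n):
            w *= A[k][p[k]]
        seen, s = set(), 0            # count the disjoint cycles of p
        for start in range(n):
            if start not in seen:
                s += 1
                k = start
                while k not in seen:
                    seen.add(k)
                    k = p[k]
        total += (-1) ** (n - s) * w
    return total

def det_laplace(A):
    """Reference determinant via Laplace expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det_laplace([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

# hypothetical 4 x 4 example matrix
M = [[-2.0, 1.0, 0.0, 3.0],
     [1.0, -1.0, 2.0, 0.0],
     [0.0, 1.0, -3.0, 1.0],
     [1.0, 0.0, 1.0, -2.0]]
```

The permutations skipped in `det_coates` are exactly those whose product of coefficients vanishes, i.e. the terms of Equation (19) that contribute nothing to the determinant.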
The Coates formula leads naturally to the definition of the weight of an induced subgraph as the signed sum of the weights of the L-subgraphs contained in it. According to this definition, the weight of the I-subgraph ${I}_{{\gamma}_{k}}$ is equal to the determinant of the principal submatrix $\mathbf{A}({\gamma}_{k})$:
This definition is the last element required to reformulate the stability conditions for a reaction-diffusion system from a graph-theoretical point of view. The coefficient of order $k$ in the characteristic polynomial was expressed in Equations (10) and (11) as a sum over all the principal minors of order $k$ in ${\mathbf{F}}^{\mathbf{RD}}$. A method to associate a graph ${G}_{I}[{\mathbf{F}}^{\mathbf{RD}}]$ with ${\mathbf{F}}^{\mathbf{RD}}$ has been established. In particular, each $k \times k$ principal submatrix ${\mathbf{F}}^{\mathbf{RD}}({\gamma}_{k})$ corresponds to an induced subgraph ${I}_{{\gamma}_{k}}$ of order $k$ in the interaction graph. Furthermore, the associated principal minor $det[{\mathbf{F}}^{\mathbf{RD}}({\gamma}_{k})]$ is given by the weight $w({I}_{{\gamma}_{k}})$ of the associated I-subgraph in ${G}_{I}$. Substituting this identity restates the algebraic expression of ${a}_{k}$ given in Equation (12) as a sum of weights of the induced subgraphs:
where the summation goes over all the I-subgraphs of order $k$ in the interaction graph. Expanding the weight of the I-subgraphs in terms of the L-subgraphs according to Equation (23) leads to
Likewise, substitution of the weights of the I-subgraphs of the reaction graph in Equation (13) leads to the graphical counterpart of the uncoupled expressions of ${a}_{k}$:
Examination of these results provides an important insight into the relationship between the feedback structure of the reaction scheme and the stability of the associated reaction-diffusion system. More precisely, the graph-based expressions reveal that every cycle in the network has a defined role in the dynamics of the system. This makes it possible to break down the complete network into smaller functional motifs, thus providing a powerful tool to analyze general networks independently of their complexity or specific parameter values. In the following, we show how this analysis can be applied to the three examples shown in Figure 2.
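The backbone of this reformulation, namely that the $k$th coefficient of the characteristic polynomial is (up to sign) the sum of all $k \times k$ principal minors, can be verified numerically. The sketch below uses the textbook sign convention for $\det(\lambda \mathbf{I} - \mathbf{M})$, which is not necessarily the convention of Equations (10–12), and a random matrix standing in for ${\mathbf{F}}^{\mathbf{RD}}$:

```python
import numpy as np
from itertools import combinations

def principal_minor_sum(M, k):
    """Sum of all k x k principal minors det(M[gamma, gamma])."""
    n = M.shape[0]
    return sum(np.linalg.det(M[np.ix_(g, g)])
               for g in combinations(range(n), k))

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))     # stand-in for F^RD at a fixed wavenumber

# Characteristic polynomial det(lambda*I - M) = lambda^4 + c1*lambda^3
# + ... + c4. np.poly returns [1, c1, c2, c3, c4], and the identity is
# c_k = (-1)**k * (sum of the k x k principal minors of M).
coeffs = np.poly(M)
```

This one-to-one bookkeeping between coefficients and principal minors is what allows each $a_k$ to be redistributed over I-subgraphs and, through the Coates formula, over cycles.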
Cycle analysis of the networks shown in Figure 2
In the following, we show how the graph-theoretical formalism can be used to analyze the three networks presented in Figure 2. For each network (Appendix 2—figures 5, 6, 7), we present the coefficients of the characteristic polynomial, stability conditions, and instability conditions in terms of cycles. Finally, we highlight the feedback that provides instability and the trade-off between stabilizing and destabilizing feedbacks that underlies the minimum diffusion ratio $d$.
Non-diffusible destabilizing cycles and noise amplification
Numerical simulations of the networks identified by our analysis reveal that when a destabilizing cycle comprises only non-diffusible reactants, the reaction-diffusion system does not form a periodic pattern but rather amplifies the noise in the initial conditions. This behavior stems from the specific dispersion relation associated with these systems, which approaches a positive asymptote as the wavenumber $q$ increases and therefore amplifies fluctuations of any wavelength in the initial conditions. Thus, although these types of reaction-diffusion systems fulfill the Turing instability conditions, they do not amplify a preferential wavelength. This type of behavior has been previously described in reaction-diffusion systems composed of two diffusible reactants and one immobile reactant that activates itself (White and Gilligan, 1998; Klika et al., 2012). Similar behaviors can be observed for indirect non-diffusible autoactivation, implemented for example by two immobile nodes that mutually activate or repress each other. In RDNets, these networks are filtered out by default, but they can be re-included in the analysis by selecting the option 'Noise Amplifying Nets'.
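This behavior can be reproduced with a small dispersion-relation scan. The network below is a hypothetical illustration (two equally diffusing reactants stabilized by negative feedbacks, plus an immobile self-activating node), not one of the paper's catalogued topologies: the largest eigenvalue of $\mathbf{J} - q^2\mathbf{D}$ is negative at $q = 0$ but approaches the immobile node's positive self-activation as $q \to \infty$, so arbitrarily fine-grained fluctuations are amplified and no wavelength is selected:

```python
import numpy as np

# Hypothetical 3-node system: nodes 1 and 2 diffuse, node 3 is immobile
# and activates itself (J33 = +0.5); negative feedbacks through the
# diffusible nodes keep the homogeneous steady state stable.
J = np.array([[-2.0, 0.0, 1.0],
              [0.0, -2.0, 1.0],
              [-1.0, -1.0, 0.5]])
D = np.diag([1.0, 1.0, 0.0])     # the destabilizing node does not diffuse

def growth_rate(q2):
    """Largest real part of the eigenvalues of J - q^2 D
    (the dispersion relation evaluated at squared wavenumber q2)."""
    return np.linalg.eigvals(J - q2 * D).real.max()

sigma_0 = growth_rate(0.0)    # negative: stable without spatial perturbation
sigma_hi = growth_rate(1e4)   # approaches J33 = 0.5: no high-q cutoff
```

Because the growth rate never falls back below zero at large $q$, the linear theory predicts amplification of the finest fluctuations present in the initial conditions rather than a periodic pattern.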
Appendix 3: Mechanism of self-organizing pattern formation
In this section, we use numerical simulations to investigate the mechanism that underlies pattern formation in Type II and Type III networks and relate our findings to the classical interpretations of reaction-diffusion patterning proposed by Turing and by Meinhardt and Gierer. We use two types of numerical simulations throughout the discussion: i) simulations on continuous one-dimensional domains, and ii) simulations on a pair of cells inspired by the original simulation proposed by Turing. The simulations on continuous one-dimensional domains were performed using linear models with cubic saturation terms derived as described in Appendix 6. The simulations on a pair of cells were performed using linear models without saturation terms, as originally done by Turing (Turing, 1952).
Previous proposals of self-organizing pattern formation
The role of differential diffusivity in models based on local autoactivation and lateral inhibition
Self-organizing pattern formation has previously been described as the combination of two processes, local autoactivation and lateral inhibition (LALI) (Oster, 1988; Meinhardt and Gierer, 2000; Maini, 2004; Newman and Bhat, 2009; Green and Sharpe, 2015), implemented by a poorly diffusive self-enhancing activator and a long-range inhibitor whose main role is to limit the expansion of activation peaks. Lateral inhibition can be implemented in two alternative ways (Koch and Meinhardt, 1994): 1) in the activator-inhibitor model, lateral inhibition is implemented by a rapidly diffusing inhibitor that is promoted by the activator and that limits the expansion of activation peaks; 2) in the substrate-depletion model, it is implemented by the consumption of a rapidly diffusing substrate that is required for the self-enhancement of the activator. The schematic representation of the activator-inhibitor model shown in Appendix 3—figure 1 illustrates how local autoactivation and lateral inhibition are assumed to underlie pattern formation.
Two implicit assumptions underlie the classical interpretation of the activator-inhibitor model discussed above: i) local autoactivation and lateral inhibition happen in chronological order: first the activator promotes itself, and then the inhibitor diffuses into the neighboring regions to limit the propagation of the activator; ii) the differential diffusivity between activator and inhibitor is a necessary condition to stabilize neighboring regions and to prevent 'an overall autocatalytic explosion' (Meinhardt and Gierer, 2000).
Our study reveals that in the presence of immobile reactants, reaction-diffusion systems can form periodic patterns even when all diffusible reactants have the same diffusivity. In the following, we show that the LALI concept cannot be easily applied to these systems. Our analysis challenges the classical interpretation of activator-inhibitor models and demonstrates that the role of the differential diffusivity is not to stabilize neighboring regions by preventing the formation of activator peaks, but rather to destabilize the system upon spatial perturbations, as originally proposed by Turing (Turing, 1952). Finally, we show how reaction-diffusion networks with two diffusing reactants and one immobile reactant can become unstable to spatial perturbations even without differential diffusivity of the mobile reactants.
Relationship of Type II networks to LALI models
The Type II network presented in Figure 2 can form a self-organizing pattern even when all mobile reactants have equal diffusivities. Our analysis based on a graph-theoretical formalism demonstrates that this can be achieved by reaction terms that increase the stability of the homogeneous steady state. In particular, the combination of the conditions for homogeneous steady state stability and instability to spatial perturbations shows that the minimum diffusion ratio $d$ required to form a pattern is defined by the ratio between two stabilizing feedbacks ${\text{c}}_{1}$ and ${\text{c}}_{2}$:
If the stabilizing feedback ${\text{c}}_{1}$ is stronger than ${\text{c}}_{2}$, the system can form self-organizing patterns even with equal diffusivity ($d=1$). These two stabilizing feedbacks correspond to the negative self-regulatory loops of the diffusible nodes v and w. Such negative self-regulatory loops have often been interpreted as decay terms. From the LALI perspective, this suggests the possibility that although the diffusion coefficients of v and w can be equal, their ranges (i.e. the ratio between diffusion and decay) must be different, such that the decay of the destabilizing node v is greater than the decay of the stabilizing node w:
Within the LALI framework, the network topology suggests that u and v, which mutually activate each other, could behave like the short-range autoactivator, while w could implement the long-range inhibitor. Although v and w have the same diffusivities, w may still behave like a classical long-range inhibitor due to its longer half-life, which would allow it to limit the expansion of the activator. This idea also appears to be consistent with the simulated periodic patterns of v and w, which are qualitatively similar to the peaks of an activator and an inhibitor, respectively (Figure 2c and Appendix 6—Table 4). We note, however, that opposite periodic patterns of v and w can be obtained with the same diffusivities and half-lives but using different kinetic parameters to implement the cycle ${\text{c}}_{3}$ that connects u and w (Appendix 3—figure 2). The LALI concept is therefore not an appropriate framework to describe pattern formation for these networks. Indeed, the LALI concept requires the subjective definition of a subset of nodes that behaves as the activator and a subset of nodes that behaves as the inhibitor, with the final aim of identifying different effective ranges (Miura, 2007; Korvasova et al., 2015). Such effective ranges need to be defined ad hoc for each network and, in cases with equally diffusing reactants, appear to be mere reformulations of the reaction kinetics that do not contribute to the identification of the general principles that underlie pattern formation.
Other Type II networks are also difficult to relate to classical LALI activator-inhibitor models. For example, the network shown in Appendix 3—figure 3, which has no self-regulatory negative loop on v, allows for a minimum diffusion ratio $d$ that depends on a trade-off between the destabilizing cycle ${\text{c}}_{4}$ and the stabilizing cycles ${\text{c}}_{1}$, ${\text{c}}_{2}$ and ${\text{c}}_{3}$:
In this case, the two stabilizing terms that define the minimum diffusion ratio $d$ are ${\text{c}}_{2}^{2}$ and ${\text{c}}_{3}$, which correspond to the squared strength of the self-regulatory negative loop of w and to the strength of the negative feedback between v and u, respectively. These terms cannot be related in a straightforward manner to the ranges of the two diffusible nodes. Within the LALI framework, the network topology suggests that u and v could behave like the short-range autoactivator, while w could behave like the long-range inhibitor. However, numerical simulations show that the system can form periodic patterns that are opposite to those of classical activator-inhibitor models (Appendix 3—figure 3). The LALI framework therefore has little explanatory power to describe the pattern formation processes for Type II networks.
Relationship of Type III networks to LALI models
The LALI concept of local autoactivation and long-range lateral inhibition is also difficult to relate to the mechanism that underlies Type III networks. These networks satisfy the instability to spatial perturbations due to their topology and have pattern-forming conditions that only depend on the requirements for homogeneous steady state stability. For example, the Type III network shown in Figure 2 can form a pattern for any diffusion ratio $d$:
These stabilizing terms cannot be related in a straightforward manner to a short activation range and a long inhibition range. In addition, numerical simulations show that depending on the diffusion ratio $d$, which can vary freely, periodic patterns qualitatively opposite to the patterns expected in an activator-inhibitor system can be formed (Figure 2c right and Appendix 3—figure 4a). Moreover, as in the case of Type II networks, the qualitative aspect of the periodic patterns changes not only with the diffusion ratio $d$ but also with the reaction kinetics, e.g. in the different Type III topology shown in Appendix 3—figure 4b.
These observations suggest that the final aspect of self-organizing periodic patterns is not necessarily related to the effective ranges of activators and inhibitors and does not reflect a mechanism that prevents the expansion of activator peaks. We propose instead that the periodic patterns are simply associated with the final amplitude at which the growth of each reactant reaches a dynamic equilibrium, which is determined by both reaction and diffusion terms.
In summary, applying the LALI concept based on differential diffusivity to investigate more complex networks provides little insight into the mechanism that underlies pattern formation. As an alternative, the graph-theoretical formalism presented in this study can be systematically applied to all networks and helps to break down Turing systems into different parts by identifying destabilizing and stabilizing feedbacks. Our analysis highlights that the main role of diffusion is to destabilize the system to spatial perturbations by helping the destabilizing feedbacks to overcome the stabilizing feedbacks. In the next section, we show that these findings are consistent with the original reaction-diffusion example described by Turing, where the differential diffusivity is a necessary condition to destabilize the system. This contrasts with interpretations of reaction-diffusion patterning based on LALI models with local autoactivation and lateral inhibition, where the differential diffusivity requirement has been described as a necessary condition to stabilize activator peaks that would otherwise be unstable (Appendix 3—figure 1).
The role of differential diffusion in Turing’s model
The original reaction-diffusion example proposed by Turing
In his seminal paper, Turing presented a simple example to demonstrate how diffusion can destabilize a reaction-diffusion system by amplifying small fluctuations (Turing, 1952). In this example, he considered two diffusible morphogens $X$ and $Y$ that react according to the equations in Appendix 3—figure 5. This system is in equilibrium when $\text{X}=1$ and $\text{Y}=1$. Turing described a numerical simulation consisting of a pair of cells that initially have roughly the same amount of X and Y ($\text{X}=1.06,\text{Y}=1.02$ in the first cell and $\text{X}=0.94,\text{Y}=0.98$ in the second cell). The simulation was performed by separately calculating the concentration changes resulting from either reaction or diffusion terms (Appendix 3—figure 5).
When no diffusion is considered (${d}_{X}={d}_{Y}=0$), this system returns to the equilibrium because of the feedbacks that implement a self-regulatory stable system (Appendix 3—figure 6a–b). To reach equilibrium, however, the system transiently increases or decreases the absolute concentrations of X and Y depending on the initial perturbations. The transient increase and decrease of X and Y depend on the autoactivation of X and the autoinhibition of Y, respectively. These two self-regulatory loops are both key feedbacks that can destabilize the system in the presence of diffusion. In previous interpretations based on LALI models (see above), the autoactivation of the activator (X) was described as an important ingredient to destabilize the system, but the importance of the autoinhibition of the inhibitor Y was generally overlooked (Koch and Meinhardt, 1994; Marcon and Sharpe, 2012; Economou and Green, 2014). The simulation in Appendix 3—figure 6b shows that when the relative difference between the activator and the inhibitor (${\mathrm{\Delta}}_{c}$) is large enough, the autoactivation of X can bring the concentrations above the equilibrium steady state while the autoinhibition of Y can bring the concentrations below the equilibrium steady state. Due to the intrinsic stability of the network, however, the relative difference between X and Y concentrations (${\mathrm{\Delta}}_{c}$) is reduced over time, and deviations are only temporary.
Turing observed that diffusion, which normally acts as an equilibrating force, could increase ${\mathrm{\Delta}}_{c}$ under certain conditions and further deviate the system from equilibrium. If both X and Y diffuse equally (${d}_{X}={d}_{Y}=1$), the system quickly returns to its steady state, and diffusion works as an equilibrating force to redistribute X and Y towards homogeneity by moving more activator than inhibitor from the first to the second cell (Appendix 3—figure 6c). However, if Y diffuses more than X (${d}_{X}=0.5$ and ${d}_{Y}=4.5$), the larger flow of Y from the first to the second cell helps to maintain a larger relative difference ${\mathrm{\Delta}}_{c}$ in both cells. Importantly, the same flow of Y drives the deviation from the equilibrium state in both cells simultaneously: the first cell deviates above equilibrium while the second cell deviates below. This challenges the classic LALI interpretation of the activator-inhibitor model shown in Appendix 3—figure 1: First, there is no chronological order between the formation of an activator peak and lateral inhibition in the surrounding areas, since they happen simultaneously. Second, these two processes are a direct consequence of the differential diffusivity, whose main role is not to prevent the expansion of activator peaks, but rather to maintain a larger relative difference ${\mathrm{\Delta}}_{c}$ between activator and inhibitor to promote a simultaneous deviation above and below the equilibrium state.
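This two-cell experiment is easy to reproduce. The sketch below integrates the kinetics of Turing's example with explicit Euler steps and the exchange rates quoted in the text; the functional forms $f = 5X - 6Y + 1$ and $g = 6X - 7Y + 1$ are taken from Turing, 1952, and should be treated as an assumption here, since this paper reproduces them only in Appendix 3—figure 5:

```python
def simulate(dX, dY, t_end=1.0, dt=1e-4):
    """Two-cell Euler integration of Turing's example kinetics
    (f = 5X - 6Y + 1, g = 6X - 7Y + 1, equilibrium at X = Y = 1)
    with cell-to-cell exchange rates dX and dY."""
    X = [1.06, 0.94]                       # initial values from the text
    Y = [1.02, 0.98]
    for _ in range(int(t_end / dt)):
        fluxX = dX * (X[1] - X[0])         # net transport into cell 1
        fluxY = dY * (Y[1] - Y[0])
        fX0 = 5 * X[0] - 6 * Y[0] + 1 + fluxX
        fX1 = 5 * X[1] - 6 * Y[1] + 1 - fluxX
        fY0 = 6 * X[0] - 7 * Y[0] + 1 + fluxY
        fY1 = 6 * X[1] - 7 * Y[1] + 1 - fluxY
        X = [X[0] + dt * fX0, X[1] + dt * fX1]
        Y = [Y[0] + dt * fY0, Y[1] + dt * fY1]
    return X, Y
```

With no diffusion or with $d_X = d_Y$, both cells relax back towards $X = Y = 1$; with $d_X = 0.5$ and $d_Y = 4.5$, the inter-cell difference grows exponentially, with one cell deviating above and the other below equilibrium at the same time.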
In summary, the example presented by Turing shows that the role of differential diffusivity is to destabilize the reaction-diffusion system so that it simultaneously deviates above and below equilibrium. This suggests that the classical interpretation of the activator-inhibitor model within the LALI framework (Appendix 3—figure 1) is inaccurate: in particular, the long-range inhibitor does not limit the expansion of forming activator peaks but rather promotes the simultaneous formation of activation and inhibition peaks. Indeed, irrespective of the strength of initial perturbations, if X and Y have equal diffusivities (or if they do not diffuse), the system will always return to its equilibrium state and will never lead to 'an overall autocatalytic explosion' (Meinhardt and Gierer, 2000). An indiscriminate expansion of activator peaks can be obtained only when X diffuses and Y is immobile, or in systems implemented by just one diffusible self-enhancing activator. However, these systems represent different scenarios that do not provide an explanation for the role of differential diffusivity in classical reaction-diffusion systems.
A reinterpretation of the substrate-depletion model within the Turing framework
In addition to the activator-inhibitor system, the substrate-depletion model is another classical two-component reaction-diffusion system; it forms out-of-phase periodic patterns of activator and substrate, as opposed to the in-phase patterns of activator-inhibitor systems (Appendix 3—figure 7). Applying the LALI concepts of local autoactivation and lateral inhibition to explain pattern formation in the substrate-depletion model is non-trivial (Appendix 3—figure 7d–g), since it is unclear how the higher diffusivity of the substrate could limit the expansion of a forming activator peak. A different explanation that is not directly based on differential diffusivity was proposed by assuming that activator peaks stop expanding when the substrate is consumed below a certain threshold (Koch and Meinhardt, 1994).
In contrast, the original interpretation by Turing can easily explain the mechanism that underlies pattern formation in the substrate-depletion model. In this system, the interactions between X and Y are opposite to the interactions in the activator-inhibitor model; therefore, the system deviates from equilibrium when the activator X is high and the substrate Y is low in the same cell, and vice versa. Nevertheless, differential diffusivity plays the same role as in the activator-inhibitor model by causing a larger flow of the substrate Y from one cell to the other, which simultaneously destabilizes the system in two opposite directions with respect to the equilibrium state (Appendix 3—figure 7a–c).
Simultaneous destabilization in two opposite directions by differential diffusivity
The simultaneous destabilization in opposite directions by differential diffusivity described in the section above is also supported by numerical simulations on more than two cells, started with small random deviations from the equilibrium steady state as initial conditions. These simulations do not show the chronological series of events depicted by classical interpretations of the LALI framework based on local autoactivation and lateral inhibition (Appendix 3—figure 1) but rather show the simultaneous formation of activation and inhibition peaks across the whole spatial domain (Appendix 3—figure 8).
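The simultaneous emergence of peaks can be seen in a minimal one-dimensional simulation. The parameters below are illustrative choices for a linear activator-inhibitor model with cubic saturation (they are not the fitted models of Appendix 6): starting from small random fluctuations around the homogeneous steady state, the peaks of both components rise everywhere at the same time and in phase, rather than through a sequence of local activation followed by lateral inhibition:

```python
import numpy as np

# Illustrative activator-inhibitor kinetics with cubic saturation
# (hypothetical parameters, not those of Appendix 6):
#   du/dt = Du u'' + u - v - u^3
#   dv/dt = Dv v'' + 2u - 1.5v - v^3
Du, Dv = 1.0, 40.0
N, dx, dt = 128, 1.0, 0.005      # periodic 1D domain, explicit Euler
rng = np.random.default_rng(0)
u = 0.01 * rng.standard_normal(N)   # small random fluctuations around 0
v = 0.01 * rng.standard_normal(N)

def lap(f):
    """Periodic 1D Laplacian via finite differences."""
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx**2

for _ in range(20000):           # integrate to t = 100
    u, v = (u + dt * (Du * lap(u) + u - v - u**3),
            v + dt * (Dv * lap(v) + 2 * u - 1.5 * v - v**3))
```

Inspecting intermediate states of this loop shows all activator and inhibitor peaks growing together across the domain, consistent with Appendix 3—figure 8.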
The only situation in which patterning dynamics in agreement with the LALI interpretation can be observed is when large localized perturbations of the activator are used as initial conditions (Appendix 3—figure 9a). However, these simulations reflect not only the dynamics of the Turing network but also the dilution and propagation of a localized perturbation, which stimulates the formation of a dissipative soliton (Purwins et al., 2005). This is not the best scenario to investigate the mechanisms that drive pattern formation, because the localized perturbation hides the underlying mechanisms that break the symmetry of an initially homogeneous state. The original simulation proposed by Turing (Appendix 3—figure 6) and the one-dimensional simulation shown in Appendix 3—figure 8 suggest that symmetry breaking is not the result of LALI but instead the result of a simultaneous deviation from the equilibrium state in opposite directions due to differential diffusivity. In agreement with this interpretation, if the magnitude of the localized perturbation is reduced, the simultaneous appearance of inhibitor and activator peaks is recovered (Appendix 3—figure 9b). The LALI model therefore does not accurately describe the underlying Turing mechanism that can drive the self-organization of periodic patterns in the absence of initial asymmetries.
Mechanism of pattern formation with equally diffusing signals
In the following, we discuss how reaction-diffusion models that contain immobile factors can become unstable to spatial perturbations even when all diffusible reactants have the same diffusivity.
Instability with equally diffusing signals due to immobile reactants
The original example presented by Turing can be modified into a Type III network by introducing an additional reactant Z that participates in a mutual activation with the reactant X and is repressed by Y:
In this network, the autocatalysis of X is no longer direct as in Appendix 3—figure 6a, but is instead implemented through the mutual activation with Z (Appendix 3—figure 12a). Similar to the original example of Turing, the system is in equilibrium when $\text{X}=1,\text{Y}=1$ and $\text{Z}=1$. The LALI model does not provide a satisfactory explanation for the pattern formation mechanism in this system, since with equal diffusivities the inhibitor does not spread faster into the lateral domains of the activator peak (Appendix 3—figure 10 and Appendix 3—figure 11). In the following, we therefore analyze in detail how this modified system is destabilized by amplifying small fluctuations.
We performed numerical simulations of a pair of cells starting from a slight perturbation of the equilibrium state: $\text{X}=1.04,\text{Y}=1.02,\text{Z}=1.06$ in the first cell and $\text{X}=0.96,\text{Y}=0.98,\text{Z}=0.94$ in the second cell. When no diffusion is considered (${d}_{X}={d}_{Y}={d}_{Z}=0$), the system returns to equilibrium owing to its intrinsic stability (Appendix 3—figure 12b). As in the original example of Turing (see above), the equilibrium is reached by transiently increasing or decreasing the absolute concentrations of X, Y, and Z depending on the initial perturbations, but a reduction of the relative difference between X and Y concentrations (${\mathrm{\Delta}}_{c}$) brings the system back to equilibrium (Appendix 3—figure 12b). Similarly, if all reactants diffuse equally (${d}_{X}={d}_{Y}={d}_{Z}=1$), the system quickly returns to its equilibrium, since diffusion helps to redistribute X, Y, and Z towards homogeneity (Appendix 3—figure 12c).
However, if X and Y diffuse equally (${d}_{X}={d}_{Y}=1$) and Z is immobile (${d}_{Z}=0$), the relative difference between X and Y (${\mathrm{\Delta}}_{c}$) progressively increases, and both cells keep deviating from equilibrium (Appendix 3—figure 12d). In the original example presented by Turing (see above), the deviation from equilibrium was guaranteed by differential diffusivity, which implemented a larger flow of Y from the first to the second cell. In contrast, when X and Y have the same diffusivity, the flow of X is larger due to the higher concentration gradient of X between the two cells. This leads to a progressive decrease in ${\mathrm{\Delta}}_{c}$ and therefore to a return to equilibrium (Appendix 3—figure 6c). In the simulation of the three-node network shown in Appendix 3—figure 12d, X and Y have the same diffusivity, and consequently a larger flow of X is also observed. However, in this case the immobile reactant Z, together with the mobile reactant Y, can compensate for the larger flow of X. This is possible because Z is not redistributed by diffusion and, together with a small flow of Y, can promote and inhibit X in the first and second cell, respectively (see first equation in Equation (32) and dashed blue line in Appendix 3—figure 12d). In addition, the small flow of Y from the first to the second cell allows Z to deviate further from equilibrium because of its positive feedback with X (see third equation in Equation [32]), which reinforces the effect of Z over time and maintains the divergence of the entire system from equilibrium. Immobile reactants therefore act as 'capacitors' that integrate the effect of diffusing reactants to destabilize the reaction-diffusion system and to drive self-organizing pattern formation.
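The 'capacitor' role of the immobile reactant in this two-cell experiment can be sketched with a hypothetical linear three-node network wired as in Appendix 3—figure 12 (X and Z mutually activating, Y activated by X, Z repressed by Y); the rate constants below are illustrative stand-ins, not those of Equation (32):

```python
import numpy as np

# Hypothetical linear rates around the equilibrium (shifted to the origin):
# X is inhibited by Y and activated by Z, Y is activated by X,
# Z is activated by X and repressed by Y (and has no self-regulation).
J = np.array([[-1.0, -1.0, 1.0],
              [ 2.0, -2.0, 0.0],
              [ 1.0, -3.0, 0.0]])

def two_cells(d, dt=0.01, steps=40000):
    """Two diffusively coupled cells, each with linearized kinetics u' = J u,
    started from a small antisymmetric perturbation of the equilibrium."""
    u1 = np.array([0.01, 0.005, 0.015])
    u2 = -u1.copy()
    d = np.asarray(d, dtype=float)
    for _ in range(steps):
        flux = d * (u2 - u1)  # per-reactant diffusive exchange between the cells
        u1, u2 = u1 + dt * (J @ u1 + flux), u2 + dt * (J @ u2 - flux)
    return np.abs(u1 - u2).max()

print(two_cells([0, 0, 0]))   # no diffusion: intrinsically stable, decays
print(two_cells([3, 3, 3]))   # everything diffuses equally: decays
print(two_cells([3, 3, 0]))   # X and Y diffuse equally, Z immobile: diverges
```

The system decays back to equilibrium without diffusion and with equal diffusion of all three reactants (which merely shifts every eigenvalue by the same negative amount), but diverges when Z is immobile, even though X and Y diffuse at exactly the same rate.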
Inspecting the example shown in Appendix 3—figure 12 from a LALI perspective, it could be speculated that the effect of the immobile reactant in Type III networks is simply equivalent to a reduction of the effective range of the auto-activation, satisfying the differential diffusivity requirement independently of the activator diffusion rate. This simplification, however, neglects the important role of the specific reaction terms associated with the immobile reactant. Indeed, in the example shown in Appendix 3—figure 12d, the immobile reactant Z does not just reduce the range of activator peaks; rather, its dynamics, and in particular its ability to diverge quickly from equilibrium, are an integral part of the mechanism that allows the peaks to form in the first place. These dynamics depend not only on the mutual activation between X and Z but also on the inhibition of Z by Y, whose diffusion stimulates Z to diverge from equilibrium. Interpreting these dynamics as an effective change in the spatial range of the activator provides little insight into the general mechanisms that underlie pattern formation. Indeed, in other Type III networks, e.g. more complex networks with more than one immobile reactant, the identification of similar effective ranges requires the definition of alternative ad hoc approximations, which ultimately amount to a reformulation of the reaction kinetics of the network.
In conclusion, Type III networks can form a pattern even when the diffusing signals have the same range, due to the capacitor effect of immobile reactants and their associated reaction terms. Remarkably, this can happen even when the activator diffuses more than the inhibitor (Appendix 3—figure 13). This is an intrinsic property of Type III topologies, where the immobile reactant acts as a buffer that amplifies any small advantage or disadvantage of the activator over the inhibitor in diverging quickly from the equilibrium state. In agreement with this finding, if the fast divergence of the immobile reactant is limited by a negative self-regulation, most networks are of Type II, meaning that their ability to compensate for the range of the mobile reactants depends on the relative speed at which each reactant grows. For example, in the Type II network shown in Appendix 3—figure 3, this can happen only when the growth of the activator is slowed down sufficiently by the inhibitor through the negative cycle ${\text{c}}_{3}$.
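The claim that the instability survives any combination of mobile diffusivities can be checked directly by scanning the dispersion relation of such a network; the Jacobian below is a hypothetical Type III-like example, not the system analyzed in Appendix 3—figure 13:

```python
import numpy as np

# Hypothetical Jacobian of a Type III-like network: X-Z mutual activation,
# Z repressed by Y, Z without self-regulation (illustrative rates).
J = np.array([[-1.0, -1.0, 1.0],
              [ 2.0, -2.0, 0.0],
              [ 1.0, -3.0, 0.0]])

def max_growth(d_x, d_y, d_z):
    """Largest real part of the eigenvalues of J - q^2 D over a range of
    wavenumbers q (the dispersion relation of the linearized system)."""
    D = np.diag([float(d_x), float(d_y), float(d_z)])
    qs = np.linspace(0.0, 10.0, 400)
    return max(np.linalg.eigvals(J - q**2 * D).real.max() for q in qs)

print(np.linalg.eigvals(J).real.max() < 0)  # stable without diffusion: True
print(max_growth(1, 1, 0) > 0)    # X and Y equally diffusive: unstable (True)
print(max_growth(10, 1, 0) > 0)   # X diffusing 10x faster: still unstable (True)
print(max_growth(1, 1, 1) < 0)    # Z mobile too: uniform eigenvalue shift, stable (True)
```

In this particular wiring, the determinant condition that triggers the instability does not involve the diffusivity of X at all, so the 'activator' can diffuse arbitrarily faster than the inhibitor; making Z mobile at the same rate as X and Y, in contrast, reduces diffusion to a uniform leftward shift of the eigenvalues and can never destabilize the system.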
Summary: The role of immobile reactants in driving self-organizing patterns
The examples presented in this Appendix highlight that in classical two-reactant Turing models the differential diffusivity destabilizes the equilibrium state by maintaining an imbalance between reactants, which drives a further deviation from equilibrium. Importantly, the reaction terms of the Turing system guarantee that the deviation happens simultaneously above and below the equilibrium state. In the activator-inhibitor model, for example, the differential diffusivity not only gives an advantage to the self-enhancement of the activator but simultaneously to the auto-inhibition of the inhibitor. Therefore, in accordance with a recent proposal (Klika et al., 2012), we suggest that the negative self-regulation of the inhibitor has a more important role than previously assumed.
In agreement with these observations, one-dimensional simulations like the one shown in Appendix 3—figure 8 reveal that the periodic patterns of Turing systems form with a simultaneous appearance of activation and inhibition peaks, and that the patterning dynamics do not follow the sequence of events described by the LALI mechanism based on local auto-activation and lateral inhibition. The periodic patterns therefore do not reflect a longer range of the inhibitor that limits the auto-activation. Instead, we propose that the periodic patterns of both the activator and the inhibitor reflect only one range, usually referred to as the wavelength, which is determined by the differential diffusion but also by the reaction terms. According to this view, the periodic patterns formed in the activator-inhibitor model reflect different amplitudes of activator and inhibitor levels, rather than a difference in their ranges, which depend on both differential diffusivity and reaction terms.
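The idea that both reactants share a single wavelength set jointly by diffusion and reaction can be made concrete by computing the dispersion relation of a two-reactant system; here we use Turing's linear example with illustrative diffusivities:

```python
import numpy as np

# Turing's linear reaction kinetics and illustrative diffusivities.
J = np.array([[5.0, -6.0],
              [6.0, -7.0]])
D = np.diag([1.0, 9.0])

q = np.linspace(0.0, 6.0, 601)
growth = np.array([np.linalg.eigvals(J - qi**2 * D).real.max() for qi in q])

q_star = q[growth.argmax()]       # fastest-growing wavenumber
wavelength = 2 * np.pi / q_star   # the single wavelength shared by both reactants
print(growth[0] < 0, growth[-1] < 0, growth.max() > 0)  # finite unstable band
print(round(wavelength, 2))       # about 6.28 for these illustrative matrices
```

Both reactants are patterned at the wavelength 2π/q* of the fastest-growing mode; changing either the reaction rates in J or the diffusivities in D moves q*, consistent with the view that the wavelength is a joint property of reaction and diffusion rather than of diffusion alone.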
We therefore propose that the role of immobile reactants in Type II and Type III networks is not to implement an effective difference in the ranges of local auto-activation and long-range inhibition, but rather to help the system diverge from equilibrium, which is normally achieved by differential diffusivity in classical two-component Turing systems. Immobile factors can help to destabilize the system because they are not subjected to the equilibrating effect of diffusion; they therefore act as 'capacitors' that integrate the effect of diffusing reactants and destabilize the reaction-diffusion system by quickly amplifying perturbations.
Appendix 4: Synthetic reaction-diffusion circuit design
The networks presented in Figure 5—figure supplement 1 are all alternative implementations of synthetic reaction-diffusion systems, obtained by adding negative feedbacks to an existing synthetic circuit that implements a positive feedback. In contrast to classical activator-inhibitor models, these networks show that many realistic reaction-diffusion systems do not require differential diffusivity. In addition, given the explicit representation of cell-autonomous factors, these networks also suggest at which level of the signaling pathways the new feedbacks should be introduced. On the one hand, these predictions help to bridge the gap between theoretical models and real systems; on the other hand, they present engineers with new challenges for the implementation of specific synthetic network designs. The high-throughput results of RDNets can be used to choose the network design that best fits the available synthetic toolkits.
In the following, we provide an example of an alternative implementation of the reaction-diffusion system presented in Figure 5 with a network that has a more complex synthetic design but that requires almost no parameter optimization. In particular, we analyze one of the highly robust Type III networks identified by RDNets (highlighted in Figure 5—figure supplement 1). This network requires the addition of three negative feedback loops: one corresponding to the decay of the ligand involved in the positive feedback (${c}_{1}$), one implementing a negative feedback between the ligand and its signaling (${c}_{3}$), and another feedback between an additional ligand and its signaling (${c}_{2}$, Appendix 4—figure 1b). Strikingly, this Type III network has very simple pattern-forming requirements (Appendix 4—figure 1c): to guarantee stability, ${c}_{2}$ has to be greater than the other negative feedbacks, and the strength of the positive feedback ${c}_{4}$ has to be small. The network does not have special requirements for the instability, which is intrinsically guaranteed by the Type III network topology. A comparison between the original network that implemented the positive feedback (Appendix 4—figure 1a) and this candidate topology (Appendix 4—figure 1b) reveals that two of the new feedbacks could be implemented simply by increasing the turnover rate of IP (${c}_{1}$) and by implementing a negative feedback between IP signaling and its own expression or activity (${c}_{3}$). The third feedback (${c}_{2}$), however, implies a more complex synthetic design: in this case, another ligand that signals through an independent receptor should be introduced. Moreover, the signaling of the new ligand should implement a negative feedback on its own expression or activity that is mediated by components of the signaling pathway of IP (z).
This last requirement is more challenging to implement, since it requires designing a signaling pathway component of IP that can simultaneously promote IP and inhibit the new ligand. The sharing of this component is an intrinsic requirement of the model and guarantees the coupling between the two signaling pathways. The simple addition of a downstream target of IP that independently inhibits the new ligand does not represent an equivalent implementation. A possible common element that could fulfill this design is a factor that mediates the secretion of IP but also the production of an inhibitor of the new ligand. This factor, which in the model corresponds to z, would have to be promoted by both signaling pathways.
In conclusion, although the practical implementation of this system appears more challenging, it is also extremely appealing: once the network topology has been developed, the system would robustly guarantee the formation of a spatial pattern, with the only requirement being that the destabilizing feedback is not too strong.
Appendix 5: Derivation and simulations of models with sigmoidal kinetics
RDNets can automatically provide reaction and diffusion parameters that satisfy the pattern-forming requirements of network topologies. These parameters are used to develop and simulate partial differential equations whose partial derivatives have a linear part equal to the corresponding element in the Jacobian matrix analyzed by RDNets. In other words, the linear part of the partial derivative of any reaction regulation term (linear or nonlinear) can be easily related to the reaction rates analyzed by RDNets. RDNets provides built-in functionalities to simulate linear models with simple cubic saturation terms like the ones derived in Raspopovic et al. (2014) and in Miura and Maini (2004). However, the predictions of our analysis are not limited to linear models but are also particularly relevant for partial differential equations with nonlinear terms, such as the sigmoidal kinetics that have often been used in gene network modeling (Mjolsness et al., 1991). In the following, we present a strategy to derive models with sigmoid regulation functions that can be built on top of the parameter rates identified by RDNets.
Since the automated linear stability analysis performed by RDNets is a systematic exploration of Jacobian matrices $J$ and diffusion matrices $D$, we define an approach to derive nonlinear dynamical models from the Jacobian $J$. This corresponds to the reverse of what has been done in previous studies, where the Jacobian matrix $J$ and the pattern-forming conditions were derived by analyzing dynamical nonlinear models (Diambra et al., 2015; Koch and Meinhardt, 1994).
A general sigmoid regulation function $s(x,k)$ that describes a change in concentration promoted by an input $x$ is defined as
where $k$ is a parameter that defines the nonlinearity or steepness of the sigmoid (Appendix 5—figure 1) and $x$ is the input concentration of a reactant.
The function $s(x,k)$ is equal to $0.5$ when $x$ is equal to zero:
We rewrite $s(x,k)$, such that a differential equation with these sigmoid regulation terms would have steady state at $x=0$:
Next, we observe that the partial derivative of $s(x,k)$ evaluated at the steady state is equal to
We therefore modify $s(x,k)$, such that the resulting partial differential equation at steady state has derivative $k$:
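The displayed equations of this derivation are not reproduced in the text above; assuming the standard logistic form for the sigmoid, a consistent reconstruction of the chain of definitions is:

```latex
% Assumed logistic sigmoid with steepness k; its value at x = 0 is 1/2
s(x,k) = \frac{1}{1 + e^{-kx}}, \qquad s(0,k) = \frac{1}{2}
% Shifted so that the steady state of the resulting equation is at x = 0
s(x,k) - \frac{1}{2} = \frac{1}{1 + e^{-kx}} - \frac{1}{2}
% Its derivative at the steady state is k/4:
\frac{\partial}{\partial x}\left(\frac{1}{1 + e^{-kx}}\right)\bigg|_{x=0} = \frac{k}{4}
% Rescaled so that the derivative at the steady state equals k,
% matching the corresponding entry of the Jacobian:
\hat{s}(x,k) = \frac{4}{1 + e^{-kx}} - 2, \qquad
\hat{s}(0,k) = 0, \qquad
\frac{\partial \hat{s}}{\partial x}\bigg|_{x=0} = k
```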
Since the elements of the Jacobian represent the partial derivatives evaluated at steady state, the sigmoid regulation functions in Equation (34) have steepness constants that directly relate to the reaction rates of the Jacobian, but also have an intrinsic saturation as in real biological systems. A reaction-diffusion system in which all the regulation functions have sigmoid kinetics will form a pattern as predicted by the linear stability analysis. As an example, we write the equations of the synthetic network presented in Figure 5 with sigmoid regulation terms and perform a 1D simulation (Appendix 5—figure 2 and Equation [35]).
where $({\alpha}_{2},{\alpha}_{5},{\alpha}_{4},{\alpha}_{12},{\alpha}_{13})$ are additional terms that can be used to change the saturation of the sigmoids. In this example they are set to one.
Note that in Equation (35) the terms with ${k}_{9}$ and ${k}_{16}$ are not sigmoid regulation terms, since they represent first-order decays. Similar to linear models, numerical simulations confirm that this system can form a stable spatial pattern. We also observe that changing the saturation of the different regulatory sigmoids has interesting effects on the final appearance of the pattern. These numerical simulations are left for future theoretical analysis, as they are beyond the focus of this study, which deals with realistic reaction-diffusion topologies and the analytical conditions that lead to pattern formation.
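A quick numerical check of these steepness and saturation properties, assuming the rescaled logistic form ŝ(x,k) = 4/(1 + e^(−kx)) − 2 for the sigmoid regulation function:

```python
import numpy as np

def s_hat(x, k):
    """Rescaled logistic sigmoid (assumed form): value 0 and slope k at the
    steady state x = 0, saturating at -2 and +2 for large |x|."""
    return 4.0 / (1.0 + np.exp(-k * x)) - 2.0

for k in (0.5, 1.0, 3.0):
    # central-difference estimate of the slope at the steady state
    slope = (s_hat(1e-6, k) - s_hat(-1e-6, k)) / 2e-6
    print(k, round(slope, 4), s_hat(0.0, k))  # the slope matches the rate k
```

The slope at the steady state equals the steepness constant k, which is how the sigmoid terms inherit the reaction rates of the Jacobian while saturating away from equilibrium.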
Appendix 6: Model definitions and parameters used for the simulations
The simulations in the main text were performed by deriving systems of partial differential equations with linear reaction terms and negative cubic saturation terms from the networks, similar to previous approaches (Raspopovic et al., 2014; Miura and Maini, 2004). Systems of partial differential equations can be derived from the Jacobian matrix $\mathbf{\mathbf{J}}$ and the diffusion matrix $\mathbf{\mathbf{D}}$ as shown in the following example (Appendix 6—figure 1):
where the matrix $\mathbf{\mathbf{S}}$ contains cubic saturation terms only for the nodes that are part of the destabilizing positive feedback (green arrows) and is zero otherwise.
Systems derived in this manner have a homogeneous steady state at zero, in the example above $({u}_{0},{v}_{0},{w}_{0})=(0,0,0)$, and thus form self-organizing periodic patterns with concentration peaks of positive and negative values. Negative values do not represent negative concentrations (which are impossible), but rather a relative negative deviation from the homogeneous steady state, which was set to 0 for convenience.
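A minimal sketch of this construction for a generic three-node network; the Jacobian, diffusivities, and saturation strengths below are hypothetical stand-ins for the ones shown in Appendix 6—figure 1, with cubic saturation applied only to the two nodes of the destabilizing positive feedback (u and w):

```python
import numpy as np

# Hypothetical stable reaction Jacobian whose u-w positive feedback
# (J[0,2], J[2,0] > 0) destabilizes the homogeneous state once w is
# immobile, even though u and v diffuse equally.
J = np.array([[-1.0, -1.0, 1.0],
              [ 2.0, -2.0, 0.0],
              [ 1.0, -3.0, 0.0]])
d = np.array([1.0, 1.0, 0.0])  # diffusion coefficients (w immobile)
s = np.array([1.0, 0.0, 1.0])  # cubic saturation only on the feedback nodes u, w

n, h, dt = 100, 0.2, 0.005     # periodic 1D grid of length n*h = 20
rng = np.random.default_rng(1)
u = 1e-3 * rng.uniform(-1, 1, (3, n))  # random deviations around the zero steady state

for _ in range(50000):
    lap = (np.roll(u, 1, axis=1) - 2 * u + np.roll(u, -1, axis=1)) / h**2
    u = u + dt * (J @ u + d[:, None] * lap - s[:, None] * u**3)

# a bounded periodic pattern with peaks above and below the steady state
print(np.abs(u[0]).max(), u[0].max() > 0, u[0].min() < 0)
```

As in the main-text simulations, the cubic terms leave the linearization (and hence the Turing conditions) untouched but bound the growing modes, so the pattern settles with peaks of positive and negative deviation around the zero steady state.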
The simulations presented in this study were executed using random initial conditions around the homogeneous steady state, uniformly distributed in the interval (−0.001, 0.001).
Parameters for the simulations in Figure 1c
The simulations in Figure 1c were performed on a unit-length spatial domain $0\le x\le 1$ with the network described above. The amplitude of the concentration profiles was rescaled to facilitate visualization. The parameters are given in Appendix 6–Tables 1 and 2 for the simulations on the left and on the right of Figure 1c, respectively.
Parameters for the simulations in Figure 2c
One-dimensional simulations were performed on a unit-length spatial domain: $0\le x\le 1$. Two-dimensional simulations were performed on a square domain with side length $l=5$: $0\le x\le 5$ and $0\le y\le 5$. The networks and parameters of the simulations are shown in Appendix 6–Tables 3, 4, and 5 and in Appendix 6—figures 2, 3, and 4.
Parameters for the simulation in Figure 4b
The one-dimensional simulation was performed on a domain with length 5: $0\le x\le 5$. The network and parameters used for the simulation are shown in Appendix 6—figure 5 and Appendix 6–Table 6. A system of partial differential equations that forms a pattern was obtained by defining cubic saturation terms for $\beta $, Sm, S, and W, which are all part of the destabilizing positive feedback highlighted in green.
References
 1

2
Sender-receiver systems and applying information theory for quantitative synthetic biology. Current Opinion in Biotechnology 31:101–107. https://doi.org/10.1016/j.copbio.2014.08.005
 3
 4

5
Genetically encoded sender-receiver system in 3D mammalian cell culture. ACS Synthetic Biology 3:264–272. https://doi.org/10.1021/sb400053b
 6
 7
 8

9
Two modes by which Lefty proteins inhibit Nodal signaling. Current Biology 14:618–624. https://doi.org/10.1016/j.cub.2004.02.042
 10
 11

12
Graph theoretic approach to the stability analysis of steady state chemical reaction networks. The Journal of Chemical Physics 60:1481–1492. https://doi.org/10.1063/1.1681221
 13

14
Three types of matrix stability. Linear Algebra and Its Applications 20:253–263. https://doi.org/10.1016/0024-3795(78)90021-6

15
Cooperativity to increase Turing pattern space for synthetic biology. ACS Synthetic Biology 4:177–186. https://doi.org/10.1021/sb500233u
 16

17
Modelling from the experimental developmental biologists viewpoint. Seminars in Cell & Developmental Biology 35:58–65. https://doi.org/10.1016/j.semcdb.2014.07.006
 18
 19

20
Customized signaling with reconfigurable protein scaffolds. Nature Biotechnology 26:526–528. https://doi.org/10.1038/nbt0508-526
 21

22
Molecular evidence for an activator-inhibitor mechanism in development of embryonic feather branching. Proceedings of the National Academy of Sciences of the United States of America 102:11734–11739. https://doi.org/10.1073/pnas.0500781102
 23
 24
 25

26
It takes a village to grow a tissue. Nature Biotechnology 23:1237–1239. https://doi.org/10.1038/nbt1005-1237

27
On complex eigenvalues of M and P matrices. Numerische Mathematik 19:170–175. https://doi.org/10.1007/BF01402527
 28

29
The influence of receptor-mediated interactions on reaction-diffusion mechanisms of cellular self-organisation. Bulletin of Mathematical Biology 74:935–957. https://doi.org/10.1007/s11538-011-9699-4

30
Biological pattern formation: From basic mechanisms to complex structures. Reviews of Modern Physics 66:1481–1507. https://doi.org/10.1103/RevModPhys.66.1481
 31

32
Investigating the Turing conditions for diffusion-driven instability in the presence of a binding immobile substrate. Journal of Theoretical Biology 367:286–295. https://doi.org/10.1016/j.jtbi.2014.11.024

33
Quantitative and logic modelling of molecular and gene networks. Nature Reviews Genetics 16:146–158. https://doi.org/10.1038/nrg3885

34
Membrane-bound Turing patterns. Physical Review E 72:061912. https://doi.org/10.1103/PhysRevE.72.061912
 35

36
The impact of Turing’s work on pattern formation in biology. Mathematics Today 40:140–141.

37
Structure and function of the feed-forward loop network motif. Proceedings of the National Academy of Sciences of the United States of America 100:11980–11985. https://doi.org/10.1073/pnas.2133841100

38
Turing patterns in development: What about the horse part? Current Opinion in Genetics & Development 22:578–584. https://doi.org/10.1016/j.gde.2012.11.013
 39

40
Out-of-phase oscillations and traveling waves with unusual properties: The use of three-component systems in biology. Physica D: Nonlinear Phenomena 199:264–277. https://doi.org/10.1016/j.physd.2004.08.018

41
Models for the generation and interpretation of gradients. Cold Spring Harbor Perspectives in Biology 1:a001362. https://doi.org/10.1101/cshperspect.a001362

42
Branch mode selection during early lung development. PLoS Computational Biology 8:e1002377. https://doi.org/10.1371/journal.pcbi.1002377

43
Wave pinning and spatial patterning in a mathematical model of Antivin/Lefty-Nodal signalling. Journal of Mathematical Biology 67:1393–1424. https://doi.org/10.1007/s00285-012-0592-z

44
A load driver device for engineering modularity in biological networks. Nature Biotechnology 32:1268–1275. https://doi.org/10.1038/nbt.3044

45
Speed of pattern appearance in reaction-diffusion models: Implications in the pattern formation of limb bud mesenchyme cells. Bulletin of Mathematical Biology 66:627–649. https://doi.org/10.1016/j.bulm.2003.09.009

46
Modulation of activator diffusion by extracellular matrix in Turing system. RIMS Kokyuroku Bessatsu B3:165–176.

47
The cyst-branch difference in developing chick lung results from a different morphogen diffusion coefficient. Mechanisms of Development 126:160–172. https://doi.org/10.1016/j.mod.2008.11.006

48
A connectionist model of development. Journal of Theoretical Biology 152:429–453. https://doi.org/10.1016/S0022-5193(05)80391-1
 49
 50
 51

52
Dynamical patterning modules: A "pattern language" for development and evolution of multicellular form. The International Journal of Developmental Biology 53:693–705. https://doi.org/10.1387/ijdb.072481sn

53
Lateral inhibition models of developmental processes. Mathematical Biosciences 90:265–286. https://doi.org/10.1016/0025-5564(88)90070-3

54
Interactions of reaction and diffusion in open systems. Industrial & Engineering Chemistry Fundamentals 8:302–313. https://doi.org/10.1021/i160030a020

55
Instability and dynamic pattern in cellular networks. Journal of Theoretical Biology 32:507–537. https://doi.org/10.1016/0022-5193(71)90154-8

56
Synchronized and differentiated modes of cellular dynamics. In: Dynamics of Synergetic Systems, pp. 191–204. Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-67592-8_16

57
Turing instabilities with nearly equal diffusion coefficients. The Journal of Chemical Physics 90:1588–1599. https://doi.org/10.1063/1.456051

58
Pattern formation in a (2 + 1)-species activator-inhibitor-immobilizer system. Physica A: Statistical Mechanics and Its Applications 188:178–189. https://doi.org/10.1016/0378-4371(92)90264-Q
 59

60
Dissipative solitons in reaction-diffusion systems. In: Dissipative Solitons, pp. 267–308. Springer. https://doi.org/10.1007/10928028_11
 61

62
The role of transmembrane signal transduction in Turing-type cellular pattern formation. Journal of Theoretical Biology 226:401–407. https://doi.org/10.1016/j.jtbi.2003.09.018

63
Morphogen gradients: From generation to interpretation. Annual Review of Cell and Developmental Biology 27:377–407. https://doi.org/10.1146/annurev-cellbio-092910-154148

64
Gene networks capable of pattern formation: From induction to reaction-diffusion. Journal of Theoretical Biology 205:587–603. https://doi.org/10.1006/jtbi.2000.2092
 65

66
Turing instabilities in general systems. Journal of Mathematical Biology 41:493–512. https://doi.org/10.1007/s002850000056

67
A unified design space of synthetic stripe-forming networks. Nature Communications 5:4905. https://doi.org/10.1038/ncomms5905

68
Nodal morphogens. Cold Spring Harbor Perspectives in Biology 1:a003459. https://doi.org/10.1101/cshperspect.a003459

69
Nodal signaling: Developmental roles and regulation. Development 134:1023–1034. https://doi.org/10.1242/dev.000166
 70
 71

72
The left-right axis in the mouse: From origin to morphology. Development 133:2095–2104. https://doi.org/10.1242/dev.02384
 73

74
The chemical basis of morphogenesis. Philosophical Transactions of the Royal Society B: Biological Sciences 237:37–72. https://doi.org/10.1098/rstb.1952.0012

75
Scaling and regeneration of self-organized patterns. Physical Review Letters 114:138101. https://doi.org/10.1103/PhysRevLett.114.138101

76
Spatial heterogeneity in three species, plant-parasite-hyperparasite, systems. Philosophical Transactions of the Royal Society B: Biological Sciences 353:543–557. https://doi.org/10.1098/rstb.1998.0226

77
Mammalian synthetic circuits with RNA binding proteins for RNA-only delivery. Nature Biotechnology 33:839–841. https://doi.org/10.1038/nbt.3301
Decision letter

Naama Barkai, Reviewing Editor; Weizmann Institute of Science, Israel
In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.
Thank you for submitting your work entitled "High-throughput mathematical analysis identifies signaling networks for robust reaction-diffusion patterning" for consideration by eLife. Your article has been reviewed by two peer reviewers, and the evaluation has been overseen by Naama Barkai as the Reviewing Editor and Senior Editor.
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
The following individuals involved in review of your submission have agreed to reveal their identity: Hans Othmar.
Summary:
Determining the existence/nonexistence of Turing instabilities is difficult for anything but the simplest network topologies. Consequently, a tool/resource/method to perform this calculation for more complex networks would be a significant and novel contribution to the field.
This paper outlines such a method. The method is clearly described (including a well-written supplemental) and rigorous. The use of examples to illustrate the approach is useful and convincing (particularly the synthetic biology example, in which the method seems very useful).
The main finding of the paper is that Turing instabilities can readily form for a large range of parameters in realistic biological circuits, including with different diffusivities of the components than commonly believed; this is a novel conclusion and is different from expectations from simpler networks.
Essential revisions:
1) The previous results and references mentioned by reviewer #2 should be incorporated into the analysis. This would probably require quite a substantial effort, but it is probably necessary;
2) Several misinterpretations are discussed by Reviewer #1 (in particular the presence of LALI in Type II/III networks, but also the other points). Please correct and explain.
Reviewer #1:
1) I do not agree that the type II/type III networks do not show local activation, long-range inhibition (LALI)
The authors find new networks that generate Turing instabilities without differential diffusivities (but see #2 below for a minor point). In several places, they state as a key result that these networks are "fundamentally different from the concepts of short-range activation and long-range inhibition". We disagree. We think that LALI is present, but differential diffusivity is no longer required for LALI.
Type II network:
Viewed from the perspective of the variable u: u inhibits itself over a long range via w, and activates itself over a short range via v.
Mathematically, the condition for instability (rewritten) is:
D_w / D_v > c3 / c4
(with c3, c4 defined to be positive)
Reinterpreting the parameters c3, c4 in terms of the more traditional molecular half-lives: c3 = 1/t_w, c4 = 1/t_v
Then the instability condition is:
D_w t_w > D_v t_v
Now, if we write the lengthscale of a linearly degraded and diffusing molecule as
L_w = sqrt(D_w t_w)
Then we get:
L_w > L_v, i.e. LALI.
Type III network
Viewed from the perspective of the variable u, u activates itself over a short range by v, and over a long range by the combined action of w and v. The effective range of w and v together is expected to be larger than v alone (if w diffuses); thus we expect LALI for any (positive) diffusivities.
To make this a little more concrete, assume that v and w reach pseudo-steady state relative to u (i.e. move to some region of parameter space where this holds).
Then the Fourier transform of u, u(q), should obey (keeping only the linear terms):
\partial u(q) / \partial t = u(q) [A / (D_v q^2 + c_v) - B / ((D_v q^2 + c_v)(D_w q^2 + c_w))]
If we look at the second term, expanding to lowest order in q^2, we get a term like:
u(q). B / (const + (D_v + D_w)q^2)
In this case the A-term is the local activation by v; the B-term is the long-range inhibition via w and v, which by definition has an effective diffusivity larger than D_v.
Now these arguments won't work for all regions of parameter space, but we expect the same qualitative logic to hold true: LALI (albeit with no direct differential diffusivity).
2) Turing instabilities require differential diffusivity
This has been proved in the general case; see the Satnoianu 2005 reference. I think the point to make is that having an immobile component and a diffusing component automatically satisfies differential diffusivity. This is more of a mathematical, rather than a biological, point.
Satnoianu 2005:
Satnoianu, R. A., and P. van den Driessche. "Some remarks on matrix stability with application to Turing instability." Linear Algebra and its Applications 398 (2005): 69–74.
3) The graph theory is interesting, but its main point is obscure in the writing
I found the graph theory section very interesting. The main conclusion as I see it: it provides an elegant/pictorial way to calculate the a_k (Routh-Hurwitz) coefficients. These can then be used to derive the instability conditions.
However, from the writing it wasn't clear that this was the main result from the graph theory – when reading the main text and the supplemental, I thought that the graph theory was used to derive instability conditions directly, without recourse to the Routh parameters. I would recommend saying more plainly/directly what the graph theory does.
Relatedly, the requirement for a cycle with positive weight ("the instability cycle") is perhaps related to the condition for Turing instability outlined in Satnoianu 2000 (referred to in the main text). In this case, you might be able to state a graph-theory-only condition: a necessary condition for instability is the existence of a positive-weight cycle.
4) Clarify some 'robustness' terms in the main text
The term 'robust' was used in various places, in some of which it was clear and in some of which it wasn't ("more robust" should be something like "more robust to parameter changes").
5) There are examples of a 3-node network with only 1 diffusing component that has a Turing instability. See example 3.1 in Satnoianu 2000. Only one diffusive species is required!
(What is really nice, however, is that I could get other examples using RDNets.com)
Reviewer #2:
This paper describes a computational approach and associated software aimed at automating the process of finding kinetic networks that support Turing instabilities. While this is a worthwhile effort, there are several issues concerning this paper.
First and foremost is that the authors are apparently unaware of a result proved long ago that gives necessary conditions on the kinetic network alone for the absence of Turing instabilities. Thus the violation of any of these conditions can give rise to a Turing instability. The result appears in a paper by Othmer in 1980, entitled 'Synchronized and differentiated modes of cellular dynamics', which appeared in Dynamics of Synergetic Systems, H. Haken, ed. The theorem goes as follows (σ(A) denotes the spectrum of A).
Let D be diagonal with D_j ≥ 0. In order that σ(K − μD) ⊂ LHP for all such D and all μ ∈ [0, ∞), it is necessary that
• σ(K) ⊂ LHP
• σ(K[i_1, i_2, …, i_p]) ⊂ LHP for all p-th order submatrices of K, where 1 ≤ p ≤ n − 1.
The result could alter the search for Turing instabilities described in Appendix 1 quite dramatically, since one could first categorize the networks that have a subnetwork that is unstable when severed from the full network at the steady state of the full system, and only then determine the pattern of diffusion coefficients that produce different types of instabilities. This eliminates the need to examine all RH determinants over the entire range of wave numbers. Notice that the theorem does not require that the fully isolated subnetwork be unstable on its own, though this is allowable, but rather that the Jacobian of the terms affecting the subnetwork have one or more eigenvalues in the RHP.
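The theorem lends itself to a mechanical check. The sketch below (a hypothetical illustration, not the pipeline from Appendix 1) tests the spectrum of every principal submatrix of a kinetic Jacobian K; a violation flags K as a candidate for diffusion-driven instability.

```python
import itertools
import numpy as np

def in_lhp(M):
    # True if every eigenvalue of M has a strictly negative real part
    return np.max(np.real(np.linalg.eigvals(M))) < 0

def violates_necessary_conditions(K):
    # Othmer's necessary conditions for the ABSENCE of Turing instabilities:
    # K itself and every p-th order principal submatrix (1 <= p <= n-1)
    # must have their spectra in the left half-plane.
    n = K.shape[0]
    for p in range(1, n):
        for idx in itertools.combinations(range(n), p):
            if not in_lhp(K[np.ix_(idx, idx)]):
                return True
    return not in_lhp(K)

# Hypothetical activator-inhibitor Jacobian: stable without diffusion
# (eigenvalues -1 and -2), but the 1x1 submatrix [1] lies in the RHP
K = np.array([[1.0, -2.0],
              [3.0, -4.0]])
assert in_lhp(K)                         # stable homogeneous steady state
assert violates_necessary_conditions(K)  # so diffusion can destabilize it
```

Only networks that pass `in_lhp(K)` but fail a submatrix test are Turing candidates, which is the pre-screening the reviewer suggests.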
A second remark is that much of Appendix 1 and most of Appendix 2 contains material readily available in the literature, and could be eliminated. It would be better to replace this material with a precise formulation of the problem the authors are addressing. For example, they implicitly assume that all diffusible species must satisfy homogeneous Neumann boundary conditions, but this means that the results don't apply to common situations in pattern formation, such as the production of a morphogen at the boundary of the domain.
https://doi.org/10.7554/eLife.14022.054
Author response
Summary:
Determining the existence/nonexistence of Turing instabilities is difficult for anything but the simplest network topologies. Consequently, a tool/resource/method to perform this calculation for more complex networks would be a significant and novel contribution to the field.
This paper outlines such a method. The method is clearly described (including a well-written supplemental) and rigorous. The use of examples to illustrate the approach is useful and convincing (particularly the synthetic biology example, in which the method seems very useful).
The main finding of the paper is that Turing instabilities can readily form for a large range of parameters in realistic biological circuits, including with different diffusivities of the components than commonly believed; this is a novel conclusion and is different from expectations from simpler networks.
We thank the editor for the positive assessment of our work and for highlighting the scientific impact of our findings. To emphasize the novel conclusions regarding the components’ diffusivities pointed out by the editor, we have changed the title to “High-throughput mathematical analysis identifies Turing networks for patterning with equally diffusing signals” and improved the Abstract in a similar direction.
Essential revisions:
1) The previous results and references mentioned by reviewer #2 should be incorporated into the analysis. This would probably require quite a substantial effort, but it is probably necessary;
As outlined in more detail below, the approach that we used to search for Turing patterning systems already took the theoretical results mentioned by reviewer #2 into account. We have now clarified the details of our approach in Appendix 1.
2) Several misinterpretations are discussed by Reviewer #1 (in particular the presence of LALI in Type II/III networks, but also the other points). Please correct and explain.
We thank reviewer #1 for the insightful comments. Indeed, the relation of our findings with “local activation, long range inhibition” (LALI) models was not sufficiently detailed in the previous version of our manuscript. We agree that a thorough discussion of the underlying pattern forming mechanisms is of central interest to a broad readership. We have therefore significantly extended our analysis and now provide an additional section comprising 18 pages of simulations and discussion (Appendix 3) to address the role of LALI in Type II/III networks.
Reviewer #1:
1) I do not agree that the Type II/Type III networks do not show local activation, long-range inhibition (LALI)
The authors find new networks that generate Turing instabilities without differential diffusivities (but see #2 below for a minor point). In several places, they state as a key result that these networks are "fundamentally different from the concepts of short-range activation and long-range inhibition". We disagree. We think that LALI is present, but differential diffusivity is no longer required for LALI.
Type II network:
Viewed from the perspective of the variable u, u inhibits itself over a long range via w, and activates itself over a short range via v.
Mathematically, the condition for instability (rewritten) is:
D_w / D_v > c_3 / c_4
(with c_3, c_4 defined to be positive)
Reinterpreting the parameters c_3, c_4 via the more traditional molecular half-lives, c_3 = 1/t_w and c_4 = 1/t_v, the instability condition becomes:
D_w t_w > D_v t_v
Now, if we write the length scale of a linearly degraded and diffusing molecule as
L_w = sqrt(D_w t_w)
then we get:
L_w > L_v, i.e. LALI.
Type III network
Viewed from the perspective of the variable u, u activates itself over a short range by v, and over a long range by the combined action of w and v. The effective range of w and v together is expected to be larger than v alone (if w diffuses); thus we expect LALI for any (positive) diffusivities.
To make this a little more concrete, assume that v and w reach pseudo-steady state relative to u (i.e. move to some region of parameter space).
Then the Fourier transform of u, u(q), should obey (linear terms only):
∂u(q)/∂t = u(q) · [A/(D_v q^2 + c_v) − B/((D_v q^2 + c_v)(D_w q^2 + c_w))]
If we look at the second term and expand to lowest order in q^2, we get a term like:
u(q) · B/(const + (D_v + D_w) q^2)
In this case the A-term is the local activation by v; the B-term is the long-range inhibition via w and v, which by definition has an effective diffusivity larger than D_v.
Now these arguments won't work for all regions of parameter space, but we expect the same qualitative logic to hold true: LALI (albeit with no direct differential diffusivity).
We are grateful to the reviewer for these excellent remarks and the valuable input. We agree with the reviewer that it is important to relate our findings to the LALI mechanism originally proposed by Meinhardt and Gierer. We therefore wrote a new section (Appendix 3) to investigate with additional analyses and numerical simulations whether the dynamics of Type II/III networks fit with the classical description of pattern formation by LALI. This analysis indicates that the mechanism underlying pattern formation is different from the concept of short-range activation and long-range inhibition based on differential diffusivity. We have therefore corrected the relevant sentences in our manuscript and now state that our analysis identified systems that are different from models of “short-range activation and long-range inhibition based on differential diffusivity”.
1) We value the reviewer’s elegant interpretation of the Type II network shown in Figure 2c. As the reviewer points out, the final aspect of the pattern together with the larger value of D_w/c_2 compared to D_v/c_1 could be interpreted as a longer range of the inhibitor w compared to the activator v. However, we find that by just altering the magnitude of the rates associated with cycle c_3 (without modifying the overall strength of c_3 or D_w/c_2 and D_v/c_1), the final aspect of the periodic patterns can change, such that v – surprisingly – appears to have a longer range than w (Appendix 3—figure 2). Moreover, the definition of ranges in terms of ratios between diffusion and decay cannot be generalized to other Type II networks that lack negative self-regulatory feedbacks of the diffusible molecules (e.g. Appendix 3—figure 3) or where the minimum diffusion ratio d is defined by more complicated reaction terms (e.g. equation 31 in Appendix 3).
2) Similar to the case discussed above, the final aspect of the patterns in Type III networks can also change depending on the reaction kinetics and diffusion ratios, showing that the relationship between the range of v and w within the same network can vary (Appendix 3—figure 4). Moreover, for other Type III networks, alternative ad hoc approximations of the effective ranges would need to be defined in order to interpret the systems within the LALI framework. In cases with equally diffusing reactants, such effective ranges appear to be just reformulations of the reaction kinetics that do not contribute to the identification of the general principles that underlie pattern formation.
3) To develop a general understanding of the pattern formation process in Type II/III networks, we have extended our work with simulations on a pair of cells inspired by Turing’s original analysis of pattern formation dynamics. Turing’s original simulation highlights that the role of differential diffusivity is not to limit the expansion of the activator but to destabilize the equilibrium state by maintaining an imbalance between reactants that drives a continuous deviation from equilibrium. Importantly, the reaction terms of a Turing system guarantee that the deviation happens simultaneously above and below the equilibrium state. In agreement with these observations, one-dimensional simulations like the one shown in Appendix 6—figure 8 reveal that the periodic patterns of Turing systems are formed with a simultaneous appearance of activation and inhibition peaks. The periodic patterns therefore do not reflect a longer range of the inhibitor to limit the autoactivation.
We propose instead that the role of immobile reactants in Type II and Type III networks is not to implement an effective difference in the ranges of local autoactivation and long-range inhibition, but rather to help the system to diverge from equilibrium, which in classic two-component Turing systems is achieved by differential diffusivity. We find that immobile factors help to destabilize the system since they are not subjected to the equilibrating effect of diffusion and can therefore fulfill a role as “capacitors” that integrate the effect of diffusing reactants to destabilize the reaction-diffusion system by quickly amplifying perturbations (Appendix 3—figure 12).
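A linearized two-cell sketch (with hypothetical parameters, not the simulations of Appendix 3) illustrates this divergence from equilibrium: with an immobile self-activating species u and a single diffusing inhibitor v, the homogeneous state is stable, while the antisymmetric mode (the difference between the two cells) is governed by K − 2D and grows.

```python
import numpy as np

# Hypothetical linearized kinetics: u (first row) is immobile and
# self-activating; v diffuses between the two cells and inhibits u
K = np.array([[1.0, -2.0],
              [3.0, -4.0]])
D = np.diag([0.0, 2.0])  # only v exchanges between the cells

def max_growth(M):
    # Largest real part among the eigenvalues of M
    return np.max(np.real(np.linalg.eigvals(M)))

# Homogeneous mode (u_1 = u_2, v_1 = v_2): governed by K alone -> decays
assert max_growth(K) < 0
# Antisymmetric mode (u_1 - u_2): governed by K - 2D -> grows, so the
# two cells diverge from equilibrium in opposite directions
assert max_growth(K - 2 * D) > 0
```

The immobile node is never damped by the exchange term, which is the "capacitor" role described above.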
2) Turing instabilities require differential diffusivity
This has been proved in the general case; see the Satnoianu 2005 reference. I think the point to make is that having an immobile component and a diffusing component automatically satisfies differential diffusivity. This is more of a mathematical, rather than a biological, point.
Satnoianu 2005:
Satnoianu, R. A., and P. van den Driessche. "Some remarks on matrix stability with application to Turing instability." Linear Algebra and its Applications 398 (2005): 69–74.
We have made the appropriate changes in the title and the Abstract of our manuscript to highlight that our analysis refers to the lack of differential diffusivity of the mobile signalling molecules.
3) The graph theory is interesting, but its main point is obscure in the writing
I found the graph theory section very interesting. The main conclusion as I see it: it provides an elegant/pictorial way to calculate the a_k (Routh-Hurwitz) coefficients. These can then be used to derive the instability conditions.
However, from the writing it wasn't clear that this was the main result from the graph theory – when reading the main text and the supplemental, I thought that the graph theory was used to derive instability conditions directly, without recourse to the Routh parameters. I would recommend saying more plainly/directly what the graph theory does.
Our graph-theoretical formalism provides a mechanistic understanding of the requirements for stability and diffusion-driven instability in a network. Importantly, and in contrast with purely algebraic or numeric approaches, this is independent of the network complexity or the number of nodes, because the graph formalism allows us to break down the network into cycles that have a clear and defined role in the dynamics of pattern formation. This provides an intuitive understanding of the requirements for the generation of Turing patterns in terms of feedback, cycles, and network topologies.
We now explain in more detail at the beginning of Appendix 2 that the graph-theoretical formalism allows us to rewrite the coefficients of the characteristic polynomial and therefore to derive the Routh-Hurwitz determinants in terms of cycles.
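As a concrete illustration of what the formalism rewrites (a generic sketch, not the authors' implementation), the coefficients a_k of the characteristic polynomial are signed sums of the principal minors of the Jacobian, and each principal minor in turn decomposes into products of disjoint network cycles:

```python
import itertools
import numpy as np

def char_poly_coefficients(J):
    # Coefficients a_1..a_n of det(lambda*I - J) =
    # lambda^n + a_1*lambda^(n-1) + ... + a_n, computed as
    # a_k = (-1)^k * (sum of all k-th order principal minors of J)
    n = J.shape[0]
    return [(-1) ** k * sum(np.linalg.det(J[np.ix_(idx, idx)])
                            for idx in itertools.combinations(range(n), k))
            for k in range(1, n + 1)]

# Generic 2x2 check: a_1 = -trace(J), a_2 = det(J)
J = np.array([[1.0, -2.0],
              [3.0, -4.0]])
a = char_poly_coefficients(J)
assert np.isclose(a[0], -np.trace(J))
assert np.isclose(a[1], np.linalg.det(J))
```

The Routh-Hurwitz determinants are then built from these a_k in the usual way; the graph formalism expresses the same quantities cycle by cycle.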
Relatedly, the requirement for a cycle with positive weight ("the instability cycle") is perhaps related to the condition for Turing instability outlined in Satnoianu 2000 (referred to in the main text). In this case, you might be able to state a graph-theory-only condition: a necessary condition for instability is the existence of a positive-weight cycle.
The reviewer is correct. The condition in Satnoianu 2000 is related to the requirement for a positive cycle. However, in our work we did not intend to recast in graph-theoretical form results from reaction network theory that have already been proven in another way. This is the reason why we did not pursue this line of inquiry, but the development of such graph-theoretical counterparts has been reviewed elsewhere (e.g. Radde et al. 2010).
Radde N, Bar NS, Banaji M (2010). Graphical methods for analysing feedback in biological networks – a survey. International Journal of Systems Science 41: 35–46.
4) Clarify some 'robustness' terms in the main text
The term 'robust' was used in various places, in some of which it was clear and in some of which it wasn't (e.g. line 212, "more robust" should be something like "more robust to parameter changes").
We have now clarified the term “robust” according to the reviewer’s suggestion where appropriate.
5) There are examples of a 3-node network with only 1 diffusing component that has a Turing instability. See example 3.1 in Satnoianu 2000. Only one diffusive species is required!
(What is really nice, however, is that I could get other examples using RDNets.com)
We thank the reviewer for pointing out the value of our software. There are indeed networks that satisfy Turing instability conditions with only one diffusing component. We observe, however, that in such networks the destabilizing cycle encompasses only non-diffusible nodes, which causes the network to amplify any of the fluctuations in the initial conditions without a preferred wavelength. Similar behaviors have also been observed when there is a positive self-regulatory feedback on the non-diffusible node (Klika et al. 2012; White and Gilligan 1998). RDNets offers the possibility to filter out such “noise amplifiers”.
Klika V, Baker RE, Headon D, Gaffney EA (2012). The influence of receptor-mediated interactions on reaction-diffusion mechanisms of cellular self-organisation. Bull Math Biol 74: 935–957.
White KAJ, Gilligan CA (1998). Spatial heterogeneity in three species, plant–parasite–hyperparasite, systems. Philos Trans R Soc Lond B Biol Sci 353: 543–557.
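The "noise amplifier" behavior can be made concrete with a dispersion relation for a hypothetical two-node network in which the destabilizing self-activation sits entirely on the non-diffusible node: the growth rate never falls back below zero at large wavenumbers, so no wavelength is selected.

```python
import numpy as np

K = np.array([[0.5, -1.0],   # hypothetical kinetics: the immobile node
              [2.0, -3.0]])  # carries the destabilizing self-activation
D = np.diag([0.0, 1.0])      # only the second node diffuses

def growth_rate(q):
    # Largest real part of the eigenvalues of K - q^2 * D at wavenumber q
    return np.max(np.real(np.linalg.eigvals(K - q**2 * D)))

assert growth_rate(0.0) < 0   # stable without diffusion
assert growth_rate(5.0) > 0   # unstable at large q...
assert growth_rate(50.0) > 0  # ...and it stays unstable as q grows:
# the growth rate saturates near K[0,0], so arbitrarily fine-grained
# fluctuations are amplified and no preferred wavelength emerges
```

A genuine Turing system would instead show a band of unstable wavenumbers with the growth rate dropping below zero again at large q; filtering on this distinction is what the "noise amplifier" option in RDNets is described as doing.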
Reviewer #2:
This paper describes a computational approach and associated software aimed at automating the process of finding kinetic networks that support Turing instabilities. While this is a worthwhile effort, there are several issues concerning this paper.
First and foremost is that the authors are apparently unaware of a result proved long ago that gives necessary conditions on the kinetic network alone for the absence of Turing instabilities. Thus the violation of any of these conditions can give rise to a Turing instability. The result appears in a paper by Othmer in 1980, entitled 'Synchronized and differentiated modes of cellular dynamics', which appeared in Dynamics of Synergetic Systems, H. Haken, ed. The theorem goes as follows (σ(A) denotes the spectrum of A).
Let D be diagonal with D_j ≥ 0. In order that σ(K − μD) ⊂ LHP for all such D and all μ ∈ [0, ∞), it is necessary that
• σ(K) ⊂ LHP
• σ(K[i_1, i_2, …, i_p]) ⊂ LHP for all p-th order submatrices of K, where 1 ≤ p ≤ n − 1.
The result could alter the search for Turing instabilities described in Appendix 1 quite dramatically, since one could first categorize the networks that have a subnetwork that is unstable when severed from the full network at the steady state of the full system, and only then determine the pattern of diffusion coefficients that produce different types of instabilities. This eliminates the need to examine all RH determinants over the entire range of wave numbers. Notice that the theorem does not require that the fully isolated subnetwork be unstable on its own, though this is allowable, but rather that the Jacobian of the terms affecting the subnetwork have one or more eigenvalues in the RHP.
We thank the reviewer for pointing out that the existence of an unstable subnetwork is sufficient for the existence of Turing patterns. In our analysis, we had already taken the implications of this theorem into account. We used the equivalent result from Cross 1978 (as explained in Othmer 1980) and another theorem proven by Kellogg (theorem 4, page 174 in Kellogg 1972; reviewed in Hershkowitz 1992) to characterize the conditions that determine stable Turing patterns. We now clarify this point in Appendix 1 (see “Step 5. Selecting unstable networks with diffusion”).
Cross GW (1978). Three types of matrix stability. Linear Algebra and its Applications 20: 253–263.
Kellogg RB (1972). On complex eigenvalues of M and P matrices. Numerische Mathematik 19: 170–175.
Hershkowitz D (1992). Recent directions in matrix stability. Linear Algebra and its Applications 171: 161–186.
A second remark is that much of Appendix 1 and most of Appendix 2 contains material readily available in the literature, and could be eliminated. It would be better to replace this material with a precise formulation of the problem the authors are addressing. For example, they implicitly assume that all diffusible species must satisfy homogeneous Neumann boundary conditions, but this means that the results don't apply to common situations in pattern formation, such as the production of a morphogen at the boundary of the domain.
Considering the broad readership of eLife, Appendix 1 and Appendix 2 are vital for an understanding of our findings by non-experts and additionally serve to define the notation of our analysis. Following the suggestion of the reviewer, we have now improved Appendix 1 by adding a precise formulation of the problem we are addressing, including the definition of boundary conditions. We restrict our analysis to zero-flux boundary conditions because we are interested in deriving the analytical conditions required to form self-organizing spatial patterns in the absence of pre-existing asymmetries or external inputs.
https://doi.org/10.7554/eLife.14022.055
Article and author information
Author details
Funding
European Molecular Biology Organization (EMBO Long-Term Fellowship ALTF 433-2014)
 Luciano Marcon
Schweizerischer Nationalfonds zur Förderung der Wissenschaftlichen Forschung (Sinergia grant CRSII3_141918)
 Xavier Diego
Ministerio de Economía y Competitividad (Plan Nacional grant BFU2015-68725-P)
 James Sharpe
Institució Catalana de Recerca i Estudis Avançats
 James Sharpe
Severo Ochoa (SEV-2012-0208)
 James Sharpe
European Research Council (ERC Starting Grant 637840 (QUANTPATTERN))
 Patrick Müller
Human Frontier Science Program (Career Development Award CDA-00031/2013-C)
 Patrick Müller
Max-Planck-Gesellschaft (Max Planck Society)
 Patrick Müller
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
This work was supported by EMBO and Marie Curie Long-Term Fellowships to LM; a Sinergia grant (Swiss National Science Foundation) to XD; a Plan Nacional grant from MINECO and support from ICREA and Severo Ochoa to JS; and an ERC Starting Grant, an HFSP Career Development Award, and support from the Max Planck Society to PM.
Reviewing Editor
 Naama Barkai, Weizmann Institute of Science, Israel
Publication history
 Received: December 23, 2015
 Accepted: April 7, 2016
 Accepted Manuscript published: April 8, 2016 (version 1)
 Accepted Manuscript updated: April 12, 2016 (version 2)
 Version of Record published: June 27, 2016 (version 3)
Copyright
© 2016, Marcon et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.