Allosteric conformational ensembles have unlimited capacity for integrating information
Abstract
Integration of binding information by macromolecular entities is fundamental to cellular functionality. Recent work has shown that such integration cannot be explained by pairwise cooperativities, in which binding is modulated by binding at another site. Higher-order cooperativities (HOCs), in which binding is collectively modulated by multiple other binding events, appear to be necessary but an appropriate mechanism has been lacking. We show here that HOCs arise through allostery, in which effective cooperativity emerges indirectly from an ensemble of dynamically interchanging conformations. Conformational ensembles play important roles in many cellular processes but their integrative capabilities remain poorly understood. We show that sufficiently complex ensembles can implement any form of information integration achievable without energy expenditure, including all patterns of HOCs. Our results provide a rigorous biophysical foundation for analysing the integration of binding information through allostery. We discuss the implications for eukaryotic gene regulation, where complex conformational dynamics accompanies widespread information integration.
Introduction
Cells receive information in different ways, of which molecular binding is the most diverse and widespread. Binding events influence downstream biological functions. In the biophysical treatment that we present here, biological functions, such as the output of a gene or the oxygen-carrying capacity of haemoglobin, are quantified as averages over the probabilities of microscopic states. We will be concerned with how binding events collectively determine these probability distributions and will refer to this process as the integration of binding information.
The most proximal form of such integration is pairwise cooperativity, in which binding at one site modulates binding at another site. This can arise through direct interaction, where one binding event creates a molecular surface, which either stabilises or destabilises the other binding event. This situation is illustrated in Figure 1A, which shows the binding of ligand to sites on a target molecule. (In considering the target of binding, we use ‘molecule’ for simplicity to denote any molecular entity, from a single polypeptide to a macromolecular aggregate such as an oligomer or complex with multiple components.) We use the notation ${K}_{i,S}$ for the association constant—on-rate divided by off-rate, with dimensions of (concentration)^{−1}—where $i$ denotes the binding site and $S$ denotes the set of sites which are already bound. This notation was introduced in previous work (Estrada et al., 2016) and is explained further in the Materials and methods. It allows binding to be analysed while keeping track of the context in which binding occurs, which is essential for making sense of how binding information is integrated.
Oxygen binding to haemoglobin is a classical example of integration of binding information, for which Linus Pauling gave the first biophysical definition of cooperativity (Pauling, 1935). At a time when the mechanistic details of haemoglobin were largely unknown, Pauling assumed that cooperativity arose from direct interactions between the four haem groups. He defined the pairwise cooperativity for binding to site $i$, given that site $j$ is already bound, as the fold change in the association constant compared to when site $j$ is not bound. In other words, the pairwise cooperativity is given by ${K}_{i,\{j\}}/{K}_{i,\mathrm{\varnothing}}$, where $\mathrm{\varnothing}$ denotes the empty set. (Pauling considered non-pairwise effects but deemed them unnecessary to account for the available data.) It is conventional to say that the cooperativity is ‘positive’ if this ratio is greater than 1 and ‘negative’ if this ratio is less than 1; the sites are said to be ‘independent’ if the cooperativity is exactly 1, in which case binding to site $j$ has no influence on binding to site $i$. This terminology reflects the underlying free energy (Equation 1). Association constants and cooperativities may be thought of as an alternative way of describing the free-energy landscape, as we will explain in more detail in the Results. Figure 1A depicts the situation in which there is negative cooperativity for binding to site 1 and positive cooperativity for binding to site 3, given that site 2 is bound.
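As a concrete illustration of Pauling's definition, the following sketch computes a pairwise cooperativity from two association constants and the corresponding free-energy difference. The numerical values are hypothetical, chosen for illustration, and are not taken from haemoglobin data.

```python
import math

kBT = 1.0  # work in units of k_B*T for illustration

# Hypothetical association constants (dimensions of 1/concentration)
K_i_empty = 2.0    # K_{i,∅}: site i binds with no other site bound
K_i_given_j = 6.0  # K_{i,{j}}: site i binds with site j already bound

omega = K_i_given_j / K_i_empty  # Pauling's pairwise cooperativity
label = "positive" if omega > 1 else "negative" if omega < 1 else "independent"

# The cooperativity corresponds to a free-energy change of -kBT*ln(omega)
# for the coupled binding step; positive cooperativity lowers it.
ddG = -kBT * math.log(omega)
print(omega, label)  # 3.0 positive
```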
Studies of feedback inhibition in metabolic pathways revealed that information to modulate binding could also be conveyed over long distances on a target molecule, beyond the reach of direct interactions (Changeux, 1961; Gerhart, 2014; Figure 1B). Monod and Jacob coined the term ‘allostery’ for this form of indirect cooperativity (Monod and Jacob, 1961). Monod, Wyman and Changeux (MWC) and, independently, Koshland, Némethy and Filmer (KNF) put forward equilibrium thermodynamic models, which showed how effective cooperativity could arise from the interplay between ligand binding and conformational change (Koshland et al., 1966; Monod et al., 1965). In the two-conformation MWC model (Figure 2B), there is no ‘intrinsic’ cooperativity—the binding sites are independent in each conformation—and ‘effective’ cooperativity arises as an emergent property of the dynamically interchanging ensemble of conformations.
In these studies, the effective cooperativity between sites was not quantitatively determined. Instead, the presence of cooperativity was inferred from the shape of the binding function, which is the average fraction of bound sites, or fractional saturation, as a function of ligand concentration (Figure 2A). The famous MWC formula is an expression for this binding function (Monod et al., 1965). If the sites are effectively independent, the binding function has a hyperbolic shape, similar to that of a Michaelis–Menten curve. A sigmoidal curve, which flattens first and then rises more steeply, indicates positive cooperativity, while a curve which rises steeply first and then flattens indicates negative cooperativity. Surprisingly, despite decades of study, the effective cooperativity of allostery is still largely assessed in this way, through the shape of the binding function, which is sometimes quantified in terms of a sensitivity or Hill coefficient. However, the shape of the binding function, and any associated Hill coefficient, are measures which aggregate over conformations and binding states, and they give little insight into how binding information is being integrated. To put it another way, the underlying free-energy landscape cannot be inferred from the shape of the binding function: as we will see below, different free-energy landscapes can give rise to indistinguishable binding functions. One of the contributions of this paper is to show how effective cooperativities can be quantified, thereby providing a set of parameters which collectively describe the allosteric free-energy landscape and placing allosteric information integration on a similar biophysical foundation to that provided by Pauling for direct interactions between two sites.
The MWC and KNF models are phenomenological: effective cooperativity arises as an emergent property of a conformational ensemble. This leaves open the question of how information is propagated between distant binding sites across a single molecule. This question was particularly relevant to haemoglobin, for which it had become clear that the haem groups were sufficiently far apart that direct interactions were implausible. Perutz’s X-ray crystallography studies of haemoglobin revealed a pathway of structural transitions during cooperative oxygen binding which linked one conformation to another (Figure 2C), thereby relating the single-molecule viewpoint to the ensemble viewpoint (Perutz, 1970). These pioneering studies provided important justification for key aspects of the MWC model, which has endured as one of the most successful mathematical models in biology (Changeux, 2013; Marzen et al., 2013).
Allostery was initially thought to be limited to certain symmetric protein oligomers like haemoglobin and to involve only a few, usually two, conformations. But Cooper and Dryden's theoretical demonstration that information could be conveyed by fluctuations around a dominant conformation anticipated the emergence of a more dynamical perspective (Cooper and Dryden, 1984; Henzler-Wildman and Kern, 2007). At the single-molecule level, it has been found that binding information can be conveyed over long distances by complex atomic networks, of which Perutz’s linear pathway (Figure 2C) is only a simple example (Schueler-Furman and Wodak, 2016; Kornev and Taylor, 2015; Knoverek et al., 2019; Wodak et al., 2019). These atomic networks may in turn underpin complex ensembles of conformations in many kinds of target molecules and allosteric regulation is now seen to be common to most cellular processes (Nussinov et al., 2013; Changeux and Christopoulos, 2016; Motlagh et al., 2014; Lorimer et al., 2018; Wodak et al., 2019; Ganser et al., 2019). The unexpected finding of widespread intrinsic disorder in proteins has been particularly influential in prompting a reassessment of the classical structure-function relationship, with conformations which may only be fleetingly present providing plasticity of binding to many partners (Wrabl et al., 2011; Wright and Dyson, 2015; Berlow et al., 2018).
However, while ensembles have grown greatly in complexity from MWC’s two conformations and new theoretical frameworks for studying them have been introduced (Wodak et al., 2019), the quantitative analysis of information integration has barely changed beyond pairwise cooperativity. In the present paper, we will be particularly concerned with higher-order cooperativities (HOCs) in which multiple binding events collectively modulate another binding site (Figure 1C). Such higher-order effects can be quantified by association constants, ${K}_{i,S}$, where the set $S$ has more than one bound site. The size of $S$, denoted by $\mathrm{\#}(S)$, is the order of cooperativity, so that pairwise cooperativity may be considered as HOC of order 1. For the example in Figure 1C, the ratio, ${K}_{5,\{2,4,6\}}/{K}_{5,\mathrm{\varnothing}}$, defines the non-dimensional HOC of order 3 for binding to site 5, given that sites 2, 4 and 6 are already bound. The notation used here is essential to express such higher-order concepts.
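The ${K}_{i,S}$ bookkeeping can be sketched as a simple lookup table. The following minimal illustration, with hypothetical association constants, computes the order-3 HOC of the Figure 1C example; only the notation, not the numbers, follows the text.

```python
# Hypothetical association constants indexed by (site, bound set);
# the values are invented for illustration.
K = {
    (5, frozenset()): 1.5,            # K_{5,∅}
    (5, frozenset({2, 4, 6})): 12.0,  # K_{5,{2,4,6}}
}

def hoc(K, i, S):
    """Non-dimensional HOC for binding to site i given bound set S,
    together with its order #(S)."""
    S = frozenset(S)
    return K[(i, S)] / K[(i, frozenset())], len(S)

value, order = hoc(K, 5, {2, 4, 6})
print(value, order)  # 8.0 3
```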
Higher-order effects have been discussed in previous studies (Dodd et al., 2004; Peeters et al., 2013; Martini, 2017; Gruber and Horovitz, 2018) and treated systematically in the mutant-cycle strategy developed in Horovitz and Fersht, 1990 and recently reviewed (Carter, 2017). The latter approach relies on perturbing residues or modules to unravel networks of energetic couplings within a macromolecule. It focusses on the single-molecule scale in contrast to the ensemble scale of the present paper (Figure 2). Mutant-cycle studies have confirmed the presence of substantial higher-order interactions underlying information propagation in proteins (Jain and Ranganathan, 2004; Sadovsky and Yifrach, 2007; Carter et al., 2017). The two approaches may be seen as different ways of analysing the free-energy landscape, as we explain in the Results.
HOCs were introduced in Estrada et al., 2016, where it was shown that experimental data on the sharpness of gene expression could not be accounted for purely in terms of pairwise cooperativities (Park et al., 2019a). In this context, the target molecule is the chromatin structure containing the relevant transcription factor (TF) binding sites and the analogue of the binding function is the steady-state probability of RNA polymerase being recruited, considered as a function of TF concentration (Estrada et al., 2016; Park et al., 2019a). The Hunchback gene considered in Estrada et al., 2016, Park et al., 2019a, which is thought to have six binding sites for the TF Bicoid, requires HOCs up to order 5 to account for the data, under the assumption that the regulatory machinery is operating without energy expenditure at thermodynamic equilibrium. An important problem emerging from this previous work, and one of the starting points for the present paper, is to identify a molecular mechanism capable of implementing such HOCs.
In the present paper, we show that allosteric conformational ensembles can implement any pattern of effective HOCs. Accordingly, they can implement any form of information integration that is achievable at thermodynamic equilibrium. We work at the ensemble level (Figure 2B) using a graph-based representation of Markov processes developed previously (below). We introduce a systematic method of ‘coarse graining’, which is likely to be broadly useful for other studies. This allows us to define the effective HOCs arising from any allosteric ensemble, no matter how complex. These effective HOCs provide a quantitative language in which the integrative capabilities of any ensemble can be specified. We show, in particular, that allosteric ensembles can account for the experimental data on Hunchback mentioned above, which was the problem that prompted the present study. It is straightforward to determine the binding function from the effective HOCs, and we derive a generalised MWC formula for an arbitrary ensemble, which recovers the functional perspective. Our results subsume and generalise previous findings and clarify issues which have been present since the concept of allostery was introduced. Our graph-based approach further enables general theorems to be rigorously proved for any ensemble (below), in contrast to calculation of specific models which has been the norm up to now.
Our analysis raises questions about how effective HOCs are implemented at the level of single molecules, similar to those answered by Perutz for haemoglobin and the MWC model (Figure 2C). This important problem lies outside the scope of the present paper and requires different methods (Wodak et al., 2019), such as the mutant-cycle approach mentioned above (Carter, 2017). Our analysis is also restricted to ensembles which are at thermodynamic equilibrium without expenditure of energy, as is generally assumed in studies of allostery. Energy expenditure may be present in maintaining a conformational ensemble, for example, through post-translational modification, but the significance of this has not been widely appreciated in the literature. Thermodynamic equilibrium sets fundamental physical limits on information processing in the form of ‘Hopfield barriers’ (Estrada et al., 2016; Biddle et al., 2019; Wong and Gunawardena, 2020). Energy expenditure can bypass these barriers and substantially enhance equilibrium capabilities. However, the study of non-equilibrium systems is more challenging and we must defer analysis of this interesting problem to subsequent work (Discussion).
The integration of binding information through cooperativities leads to the integration of biological functions. Haemoglobin offers a vivid example of how allostery implements this relationship. This one target molecule integrates two distinct functions, of taking up oxygen in the lungs and delivering oxygen to the tissues, by having two distinct conformations, each adapted to one of the functions, and dynamically interchanging between them. In the lungs, with a higher oxygen partial pressure, binding cooperativity causes the relaxed conformation to be dominant in the molecular population, which thereby takes up oxygen; in the tissues, with a lower oxygen pressure, binding cooperativity causes the tense conformation to be dominant in the population, which thereby gives up oxygen. Evolution may have used this integrative strategy more widely than just to transport oxygen, and we review in the Discussion some of the evidence for an analogy between functional integration by haemoglobin and by gene regulation.
Results
Construction of the allostery graph
Our approach uses the linear framework for timescale separation (Gunawardena, 2012), details of which are provided in the 'Materials and methods' along with further references. We briefly outline the approach here.
In the linear framework, a suitable biochemical system is described by a finite directed graph with labelled edges. In our context, graph vertices represent microstates of the target molecule and graph edges represent transitions between microstates, for which the edge labels are the instantaneous transition rates. A linear framework graph specifies a finite-state, continuous-time Markov process, and any reasonable such Markov process can be described by such a graph. We will be concerned with the probabilities of microstates at steady state. These probabilities can be interpreted in two ways, which reflect the ensemble and single-molecule viewpoints of Figure 2. From the ensemble perspective, the probability is the proportion of target molecules which are in the specified microstate, once the molecular population has reached steady state, considered in the limit of an infinite population. From the single-molecule perspective, the probability is the proportion of time spent in the specified microstate, in the limit of infinite time. The equivalence of these definitions comes from the ergodic theorem for Markov processes (Stroock, 2014). These different interpretations may be helpful when dealing with different biological contexts: a population of haemoglobin molecules may be considered from the ensemble viewpoint, while an individual gene may be considered from the single-molecule viewpoint. As far as the determination of probabilities is concerned, the two viewpoints are equivalent.
The graph representation may also be seen as a discrete approximation of a continuous energy landscape, as in Figure 3, in which the target molecule is moving deterministically on a high-dimensional landscape in response to a potential, while being buffeted stochastically through interactions with the surrounding thermal bath (Frauenfelder et al., 1991). In mathematics, this approximation goes back to the work of Wentzell and Freidlin on large deviation theory for stochastic differential equations in the low noise limit (Ventsel' and Freidlin, 1970; Freidlin and Wentzell, 2012). It has been exploited more recently to sample energy landscapes in chemical physics (Wales, 2006) and in the form of Markov State Models arising from molecular dynamics simulations (Noé and Fischer, 2008; Sengupta and Strodel, 2018). In this approximation, the vertices correspond to the minima of the free energy up to some energy cutoff, the edges correspond to appropriate limiting barrier crossings and the labels correspond to transition rates over the barrier.
The linear framework graph, or the accompanying Markov process, describes the time-dependent behaviour of the system. Our concern in the present paper is with systems which have reached a steady state of thermodynamic equilibrium, so that detailed balance, or microscopic reversibility, is satisfied. The assumption of thermodynamic equilibrium has been standard since allostery was introduced (Koshland et al., 1966; Monod et al., 1965) but has significant implications, as pointed out in the Introduction, and we will return to this issue in the Discussion. At thermodynamic equilibrium, we can dispense with dynamical information and work with what we call ‘equilibrium graphs’ (Figure 3). These are also directed graphs with labelled edges but the edge labels no longer contain dynamical information in the form of rates but rather ratios of forward to reverse rates. These ratios are determined by the minima of the free-energy landscape, with the equilibrium label on the edge from vertex $i$ to vertex $j$ being given by the formula in Figure 3. Free energy is often expressed relative to a reference level, as we will do below, so it will be convenient to write the equilibrium label from $i$ to $j$ as
$$\ell(i\to j)=\exp\left(\frac{\mathrm{\Delta}{\mathrm{\Phi}}_{i}-\mathrm{\Delta}{\mathrm{\Phi}}_{j}}{{k}_{B}T}\right)\qquad(1)$$

where $\mathrm{\Delta}{\mathrm{\Phi}}_{u}$ is the relative free energy of vertex $u$, ${k}_{B}$ is Boltzmann’s constant and $T$ is the absolute temperature (Figure 3). Note that if the edge in question involves components from outside the graph itself, such as a ligand which binds to $i$ to yield $j$, then the chemical potential of the ligand will contribute to the free energy. This contribution will manifest itself in the presence of a ligand concentration term in the edge label, as seen in Figure 4. The equilibrium edge labels are the only parameters needed at thermodynamic equilibrium and the free energies of the vertices can be recovered from them, up to an additive constant. From now on, in the main text, when we say ‘graph’, we will mean ‘equilibrium graph’.
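A minimal numerical sketch of Equation 1, working in units of ${k}_{B}T$ with hypothetical vertex free energies: the reverse label is the reciprocal of the forward label, and the product of labels around any cycle is 1, as detailed balance requires.

```python
import math

kBT = 1.0  # free energies below are in units of k_B*T (illustrative)

def edge_label(dPhi_i, dPhi_j):
    """Equilibrium label on the edge i -> j (Equation 1): the ratio of
    forward to reverse rates, fixed by the relative free energies."""
    return math.exp((dPhi_i - dPhi_j) / kBT)

a, b, c = 0.7, -0.2, 1.1  # hypothetical relative free energies

# Reverse label is the reciprocal of the forward label...
assert math.isclose(edge_label(a, b) * edge_label(b, a), 1.0)
# ...and the label product around a cycle is 1 (detailed balance).
cycle = edge_label(a, b) * edge_label(b, c) * edge_label(c, a)
assert math.isclose(cycle, 1.0)
```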
We explain such graphs using our main example. Figure 4 shows the graph, $A$, for an allosteric ensemble, with multiple conformations ${c}_{1},\mathrm{\cdots},{c}_{N}$ and multiple sites, $1,\mathrm{\cdots},n$, for binding of a single ligand ($n=3$ in the example). The graph vertices represent abstract conformations with patterns of ligand binding, denoted $({c}_{k},S)$, where the index $k$ designates the conformation with $1\le k\le N$, and $S\subseteq \{1,\mathrm{\cdots},n\}$ is the subset of bound sites. Directed edges represent transitions arising either from binding without change of conformation (‘vertical’ edges), $({c}_{k},S)\to ({c}_{k},S\cup \{i\})$ where $i\notin S$, which occur for all conformations ${c}_{k}$, or from conformational change without binding (‘horizontal’ edges), $({c}_{k},S)\to ({c}_{j},S)$ where $k\ne j$, which occur for all binding subsets $S$. Edges are shown in only one direction for clarity—when binding or unbinding is present, we use the direction of binding—but edges are always reversible, in accordance with thermodynamic equilibrium. Ignoring labels and thinking only in terms of vertices and edges, or ‘structure’, $A$ has a product form: the vertical subgraphs, ${A}^{{c}_{k}}$, consisting of those vertices with conformation ${c}_{k}$ and all edges between them, all have the same structure and the horizontal subgraphs, ${A}_{S}$, consisting of those vertices with binding subset $S$ and all edges between them, also all have the same structure (Figure 4). Structurally speaking, we can think of $A$ as the graph product (Ahsendorf et al., 2014) of the vertical subgraph ${A}^{{c}_{1}}$ and the horizontal subgraph ${A}_{\mathrm{\varnothing}}$ (Figure 4).
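The product structure can be made concrete by enumerating vertices and edges. The following sketch does this for the $N=2$, $n=3$ case of Figure 4, structure only, with edge labels omitted; the counts it checks follow directly from the product form.

```python
from itertools import combinations, product

N, n = 2, 3  # 2 conformations, 3 binding sites (as in Figure 4)
sites = range(1, n + 1)
subsets = [frozenset(c) for r in range(n + 1)
           for c in combinations(sites, r)]
confs = [f"c{k}" for k in range(1, N + 1)]

# Vertices (c_k, S): the product of conformations and binding subsets.
vertices = list(product(confs, subsets))

# 'Vertical' edges: binding without change of conformation.
vertical = [((c, S), (c, S | {i}))
            for c in confs for S in subsets for i in sites if i not in S]
# 'Horizontal' edges: conformational change without binding.
horizontal = [((c1, S), (c2, S))
              for S in subsets for c1, c2 in combinations(confs, 2)]

print(len(vertices), len(vertical), len(horizontal))  # 16 24 8
```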
In an allostery graph, ‘conformation’ is meant abstractly as any state for which binding association constants can be defined. It does not imply any particular atomic configuration of a target molecule nor make any commitments as to how the pattern of binding changes.
The product-form structure of the allostery graph reflects the ‘conformational selection’ viewpoint of MWC, in which conformations exist prior to ligand binding, rather than the ‘induced fit’ viewpoint of KNF, in which binding can induce new conformations. Considerable evidence now exists for conformational selection, in the form of transient, rarely populated conformations which exist prior to binding (Tzeng and Kalodimos, 2011). Induced fit may be incorporated within our graph-based approach by treating new conformations as always present but at extremely low probability. One of the original justifications for induced fit was that it enabled negative cooperativities, in contrast to conformational selection (Koshland and Hamadani, 2002), but we will show below that induced fit is not necessary for this and that negative HOCs arise naturally in our approach. Accordingly, the product-form structure of our allostery graphs is both convenient and powerful.
The edge labels are the non-dimensional ratios of the forward transition rate to the reverse transition rate; accordingly, the label for the reverse edge is the reciprocal of the label for the forward edge (Materials and methods). Labels may include the influence of components outside the graph, such as a binding ligand. For instance, the label for the binding edge $({c}_{k},S)\to ({c}_{k},S\cup \{i\})$ is $x{K}_{{c}_{k},i,S}$, where $x$ is the ligand concentration and ${K}_{{c}_{k},i,S}$ is the association constant (Figure 1A), with dimensions of (concentration)^{−1}, as described in the Introduction. Horizontal edge labels are not individually annotated and need only be specified for the horizontal subgraph of empty conformations, ${A}_{\mathrm{\varnothing}}$, since all other labels are determined by detailed balance (Materials and methods).
The graph structure allows HOCs between binding events to be calculated, as suggested in the Introduction. We will define this first for the ‘intrinsic’ HOCs which arise in a given conformation and explain in the next section how ‘effective’ HOCs are defined for the ensemble. In conformation ${c}_{k}$, the intrinsic HOC for binding to site $i$, given that the sites in $S$ are already bound, denoted ${\omega}_{{c}_{k},i,S}$, is defined by normalising the corresponding association constant to that for binding to site $i$ when nothing else is bound (Estrada et al., 2016),

$${\omega}_{{c}_{k},i,S}=\frac{{K}_{{c}_{k},i,S}}{{K}_{{c}_{k},i,\mathrm{\varnothing}}}\qquad(2)$$
HOCs are non-dimensional quantities. If $S$ has only a single site, say $S=\{j\}$, then the intrinsic HOC of order 1, ${\omega}_{{c}_{k},i,\{j\}}$, is the classical pairwise cooperativity between sites $i$ and $j$. There is positive or negative intrinsic HOC if ${\omega}_{{c}_{k},i,S}>1$ or ${\omega}_{{c}_{k},i,S}<1$, respectively, and independence if ${\omega}_{{c}_{k},i,S}=1$ (Figure 1A).
For any graph $G$, the steady-state probabilities of the vertices can be calculated from the edge labels. For each vertex, $v$, in $G$, the probability, ${\text{Pr}}_{v}(G)$, is proportional to the quantity, ${\mu}_{v}(G)$, obtained by multiplying the edge labels along any directed path of edges from a fixed reference vertex to $v$. It is a consequence of detailed balance that ${\mu}_{v}(G)$ does not depend on the choice of path in $G$. This implies algebraic relationships among the edge labels. These can be fully determined from $G$ and independent sets of parameters can be chosen (Materials and methods). For the allostery graph, a convenient choice vertically is those association constants ${K}_{{c}_{k},i,S}$ with $i$ less than all the sites in $S$, denoted $i<S$; horizontal choices are discussed in the Materials and methods but are not needed for the main text.
Since probabilities must add up to 1, it follows that

$${\text{Pr}}_{v}(G)=\frac{{\mu}_{v}(G)}{\sum_{u}{\mu}_{u}(G)}\qquad(3)$$

where the sum runs over all vertices $u$ of $G$.
Equation 3 yields the same result as equilibrium statistical mechanics, with the denominator being the partition function for the thermodynamic grand canonical ensemble. Equilibrium statistical mechanics typically focusses only on vertices and uses their free energies as the fundamental parameters. Directed graphs of the form considered here were previously used in Hill, 1966 and Schnakenberg, 1976 to study systems away from thermodynamic equilibrium, where the graph edges become essential to represent entropy production (Wong and Gunawardena, 2020). We find that the graph remains just as useful at thermodynamic equilibrium because binding and unbinding are the fundamental mechanisms through which information is integrated and these mechanisms must be represented by graph edges. Indeed, as the next section shows, graphs are invaluable for formulating higherorder concepts.
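A worked sketch of this calculation for a single conformation with two binding sites, using hypothetical association constants that satisfy detailed balance: ${\mu}_{v}$ is computed as a path product of edge labels from the empty reference vertex and normalised as in Equation 3, with the denominator playing the role of the partition function.

```python
import math
from itertools import combinations

x = 0.5  # hypothetical ligand concentration
# Hypothetical association constants K_{i,S} for n = 2 sites.
K = {(1, frozenset()): 2.0, (2, frozenset()): 1.0,
     (1, frozenset({2})): 4.0, (2, frozenset({1})): 2.0}
# Detailed balance requires K_{1,∅}K_{2,{1}} = K_{2,∅}K_{1,{2}}.
assert K[(1, frozenset())] * K[(2, frozenset({1}))] == \
       K[(2, frozenset())] * K[(1, frozenset({2}))]

def mu(S):
    """Product of edge labels x*K_{i,bound} along a path from the
    reference vertex ∅ to S; detailed balance makes it path-independent."""
    val, bound = 1.0, frozenset()
    for i in sorted(S):
        val *= x * K[(i, bound)]
        bound = bound | {i}
    return val

states = [frozenset(c) for r in range(3) for c in combinations([1, 2], r)]
Z = sum(mu(S) for S in states)          # denominator of Equation 3
prob = {S: mu(S) / Z for S in states}   # steady-state probabilities
assert math.isclose(sum(prob.values()), 1.0)
```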
Our specification of an allostery graph allows for arbitrary conformational complexity and arbitrary interacting ligands (we consider only one ligand here for simplicity), with the independent association constants in each conformation being arbitrary and with arbitrary changes in these parameters between conformations. Moreover, the abstract nature of ‘conformation’, as described above, permits substantial generality. Allostery graphs can be formulated to encompass the two conformations of MWC (Marzen et al., 2013), nested models (Robert et al., 1987), the fluctuations of Cooper and Dryden, 1984 and more recent views of dynamical allostery (Tzeng and Kalodimos, 2011), the multiple domains of the Ensemble Allosteric Model developed by Hilser and colleagues (Hilser et al., 2012) and applied also to intrinsically disordered proteins (Motlagh et al., 2012), other ensemble models (LeVine and Weinstein, 2015; Tsai and Nussinov, 2014) and Markov State Models arising from molecular dynamics simulations (Noé and Fischer, 2008).
Relationships between higherorder measures
As mentioned in the Introduction, a systematic approach to higher-order effects using mutant-cycle analysis was developed in Horovitz and Fersht, 1990 and Horovitz and Fersht, 1992 and widely used subsequently (Carter, 2017). The HOCs presented above were introduced in our previous work (Estrada et al., 2016), and the present paper is concerned not with HOCs per se, but with effective HOCs that arise from an allosteric ensemble, as will be described below. Nevertheless, it may still be helpful to explain the relationship between our HOCs and the higher-order couplings arising from mutant-cycle analysis. We are grateful to an anonymous reviewer for making this point to us. The material which follows may be of particular interest to those familiar with the relevant literature but is not required for the main results of the paper.
Both HOCs and higher-order couplings can be seen as different ways of analysing the underlying free-energy landscape. Both approaches make essential use of directed graphs to organise this landscape. Figure 5A shows the labelled equilibrium graph for ligand binding to three sites in a single conformation, while Figure 5B shows a directed graph of the kind used in Horovitz and Fersht, 1990 for defining higher-order couplings for perturbations to three sites. The latter graphs are sometimes called ‘boxes’ (Horovitz and Fersht, 1990). We use ‘sites’ here for either individual residues or the modules described in Carter, 2017. Perturbations are typically mutations, such as replacement of an asparagine residue by alanine. The choice of replacement can make a difference to the results, but this is not usually depicted in graph representations like Figure 5B. The directed edges have rather different interpretations in the two examples in Figure 5: for the equilibrium graph in Figure 5A, a directed edge represents the biochemical process of ligand binding; for the coupling graph in Figure 5B, a directed edge represents an experimental perturbation. In both cases, the vertices have an associated free energy, denoted $\mathrm{\Delta}{\mathrm{\Phi}}_{S}$, where $S\subseteq \{1,\mathrm{\cdots},n\}$ is either the subset of bound sites in the equilibrium graph (Figure 5A) or the subset of perturbed sites in the coupling graph (Figure 5B). The $\mathrm{\Delta}$ notation is conventionally used in the literature to signify a free-energy difference (Equation 1) or free energy relative to a chosen zero level. A frequent choice of zero is the free energy of empty binding or of the unperturbed state, in which case $\mathrm{\Delta}{\mathrm{\Phi}}_{\mathrm{\varnothing}}=0$, but we have not assumed this here.
Note that the free energies of the equilibrium graph have a contribution from the ligand, which manifests itself in the dependence of the edge labels on the ligand concentration, $x$, while the free energies of the coupling graph do not. Despite this difference, the free energies provide in both cases the fundamental independent thermodynamic parameters, of which there are ${2}^{n}-1$ for $n$ sites, in terms of which both HOCs and higher-order couplings can be rigorously defined.
The definition is easiest for HOCs. Equation 1 tells us that the edge label, $x{K}_{i,S}$, is given by

$$x{K}_{i,S}=\exp\left(\frac{\mathrm{\Delta}{\mathrm{\Phi}}_{S}-\mathrm{\Delta}{\mathrm{\Phi}}_{S\cup \{i\}}}{{k}_{B}T}\right)\qquad(4)$$
We omit the single conformation from subscripts for clarity. It follows from Equation 2 that HOCs can be written in terms of free energies as follows:

$${\omega}_{i,S}=\frac{{K}_{i,S}}{{K}_{i,\mathrm{\varnothing}}}=\exp\left(\frac{\mathrm{\Delta}{\mathrm{\Phi}}_{S}+\mathrm{\Delta}{\mathrm{\Phi}}_{\{i\}}-\mathrm{\Delta}{\mathrm{\Phi}}_{S\cup \{i\}}-\mathrm{\Delta}{\mathrm{\Phi}}_{\mathrm{\varnothing}}}{{k}_{B}T}\right)\qquad(5)$$
HOCs are non-dimensional quantities associated to graph edges. As noted above, there are algebraic relationships among them arising from detailed balance at thermodynamic equilibrium. An independent set of parameters is formed by restricting to those for which $i<S$, of which there are ${2}^{n}-n-1$. Taken together with the $n$ ‘bare’ association constants for initial ligand binding, ${K}_{i,\mathrm{\varnothing}}$, they form a complete set of ${2}^{n}-1$ independent parameters for the free-energy landscape. It follows from Equations 4 and 5 that these parameters can be used to recover the fundamental free energies, so that the two sets of parameters are mathematically equivalent.
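The parameter counting can be checked directly. The sketch below enumerates, for an arbitrary $n$ (here $n=4$, chosen only for illustration), the $n$ bare constants and the independent HOCs (those with $i<S$), and confirms that together they number $2^n-1$.

```python
from itertools import combinations

n = 4  # number of binding sites (illustrative)
sites = range(1, n + 1)
subsets = [set(c) for r in range(n + 1) for c in combinations(sites, r)]

# The n 'bare' association constants K_{i,∅}.
bare = [(i, frozenset()) for i in sites]
# Independent HOCs: pairs (i, S) with S nonempty and i < every site in S.
independent_hocs = [(i, frozenset(S)) for S in subsets if S
                    for i in sites if i < min(S)]

assert len(bare) == n
assert len(independent_hocs) == 2**n - n - 1
assert len(bare) + len(independent_hocs) == 2**n - 1
```

The bijection behind the count: each pair $(i,S)$ with $i<S$ corresponds to the subset $S\cup \{i\}$ of size at least 2, with $i$ its least element, and there are $2^n-n-1$ such subsets.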
Mutant-cycle studies often refer to both Horovitz and Fersht, 1990 and Horovitz and Fersht, 1992, which present apparently different measures of higher-order coupling. The second of these papers introduces what we will refer to as the ‘residual free energy’ of a vertex and denote $\mathrm{\Delta}{\varphi}_{S}$. This is the free energy remaining at vertex $S$ after accounting for the contributions from all proper subsets of $S$. The residual free energy may be concisely defined recursively, starting from $\mathrm{\Delta}{\varphi}_{\mathrm{\varnothing}}=\mathrm{\Delta}{\mathrm{\Phi}}_{\mathrm{\varnothing}}$, by
We see from Equation 6 that $\mathrm{\Delta}{\varphi}_{\{i\}}=\mathrm{\Delta}{\mathrm{\Phi}}_{\{i\}}-\mathrm{\Delta}{\mathrm{\Phi}}_{\mathrm{\varnothing}}$ and that $\mathrm{\Delta}{\varphi}_{\{i,j\}}=\mathrm{\Delta}{\mathrm{\Phi}}_{\{i,j\}}-(\mathrm{\Delta}{\mathrm{\Phi}}_{\{i\}}+\mathrm{\Delta}{\mathrm{\Phi}}_{\{j\}})+\mathrm{\Delta}{\mathrm{\Phi}}_{\mathrm{\varnothing}}$. $\mathrm{\Delta}{\varphi}_{S}$ may be calculated directly from $\mathrm{\Delta}{\mathrm{\Phi}}_{X}$ but, as the previous example suggests, overlapping contributions of the actual free energies must be cancelled out (Horovitz and Fersht, 1992, Equation 4),
To see why Equation 7 is a consequence of Equation 6, note first that Equation 7 gives the correct result for $S=\mathrm{\varnothing}$. It may then be recursively checked by assuming it holds for $X\subset S$ and substituting into Equation 6 to check that it holds for $S$. Each subset $Y\subset S$ contributes a term $\pm \mathrm{\Delta}{\mathrm{\Phi}}_{Y}$ arising from $\mathrm{\Delta}{\varphi}_{X}$ for each $X$ that satisfies $Y\subseteq X\subset S$. The sign of $\mathrm{\Delta}{\mathrm{\Phi}}_{Y}$ coming from Equation 7 is ${(-1)}^{\mathrm{\#}(X)-\mathrm{\#}(Y)}$. These terms almost completely cancel each other out because, letting $p=\mathrm{\#}(S)-\mathrm{\#}(Y)$,
Taking into account the additional sign coming from Equation 6, we recover Equation 7 for $S$. This proves recursively that Equation 7 is the solution of Equation 6 in terms of free energies.
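The equivalence of the recursion and the closed form can be checked numerically. The following sketch draws a random free-energy landscape $\mathrm{\Delta}{\mathrm{\Phi}}_{S}$ (in units of ${k}_{B}T$; all names are illustrative) and confirms that the inclusion-exclusion formula of Equation 7 agrees with the recursion of Equation 6 on every subset:

```python
import itertools, random

random.seed(0)
sites = (1, 2, 3, 4)

def subsets(s):
    s = tuple(sorted(s))
    for k in range(len(s) + 1):
        yield from itertools.combinations(s, k)

# Random free-energy landscape: one value Delta-Phi_S per subset S (units of kT).
Phi = {S: random.uniform(-2.0, 2.0) for S in subsets(sites)}

# Equation 6 (recursion): the residual free energy of S is what remains of Phi_S
# after subtracting the residuals of all proper subsets of S.
phi_rec = {}
for S in subsets(sites):                      # subsets() yields in size order
    phi_rec[S] = Phi[S] - sum(phi_rec[X] for X in subsets(S) if X != S)

# Equation 7 (closed form): inclusion-exclusion over the subsets of S.
phi_cf = {S: sum((-1) ** (len(S) - len(Y)) * Phi[Y] for Y in subsets(S))
          for S in subsets(sites)}

max_err = max(abs(phi_rec[S] - phi_cf[S]) for S in subsets(sites))
```

The iteration order matters only in that each proper subset is processed before its superset, which `subsets()` guarantees by yielding subsets in order of size.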
We can go further to show how $\mathrm{\Delta}{\varphi}_{S}$ is expressed in terms of HOCs. For this, we must assume that $q=\mathrm{\#}(S)>1$. When $q=1$, ligand binding contributes to $\mathrm{\Delta}{\varphi}_{S}$, but when $q>1$ that is no longer the case, as we will see. Choose any site $i\in S$. The summation in Equation 7 involves ${2}^{q}$ terms $\mathrm{\Delta}{\mathrm{\Phi}}_{Y}$. It can be reorganised into a sum of ${2}^{q-1}$ terms of the form $\pm (\mathrm{\Delta}{\mathrm{\Phi}}_{Z\cup \{i\}}-\mathrm{\Delta}{\mathrm{\Phi}}_{Z})$, where $Z\subseteq S\backslash \{i\}$. The sign of these terms is given by the sign of $\mathrm{\Delta}{\mathrm{\Phi}}_{Z\cup \{i\}}$ coming from Equation 7 and is therefore ${(-1)}^{\mathrm{\#}(S)-\mathrm{\#}(Z)-1}$. It is easy to see that, because $q>1$, there must be equal numbers of +1 and −1 signs. It follows from Equation 4 that
where the double exponent just means that the right-hand side is a ratio in which those terms for which $\mathrm{\#}(S)-\mathrm{\#}(Z)$ is odd go in the numerator and those terms for which $\mathrm{\#}(S)-\mathrm{\#}(Z)$ is even go in the denominator. Using Equation 2, we can rewrite ${K}_{i,Z}$ as ${K}_{i,\mathrm{\varnothing}}{\omega}_{i,Z}$. Since there are equal numbers of each sign, we can cancel each occurrence of $x{K}_{i,\mathrm{\varnothing}}$ between numerator and denominator to yield a formula for residual free energies in terms of HOCs when $\mathrm{\#}(S)>1$:
The choice of $i\in S$ in Equation 8 is arbitrary. As an illustration of Equation 8, recalling from Equation 5 that ${\omega}_{i,\mathrm{\varnothing}}=1$, we see that
Equations 8 and 9 show how the residual free energy is built up from binding at any given site to the hierarchy of subsets of the remaining sites.
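This relationship can also be checked numerically. The sketch below assumes the sign convention that $\mathrm{ln}(x{K}_{i,S})=-(\mathrm{\Delta}{\mathrm{\Phi}}_{S\cup \{i\}}-\mathrm{\Delta}{\mathrm{\Phi}}_{S})$ in units of ${k}_{B}T$ (our reading of Equation 1; the opposite convention flips the signs in the alternating sum), and verifies that, for any $\mathrm{\#}(S)>1$ and any choice of $i\in S$, the residual free energy of Equation 7 equals an alternating sum of log-HOCs from which $x$ and ${K}_{i,\mathrm{\varnothing}}$ have cancelled:

```python
import itertools, math, random

random.seed(1)
sites = (1, 2, 3)
x = 0.7                                       # ligand concentration (arbitrary units)

def subsets(s):
    s = tuple(sorted(s))
    for k in range(len(s) + 1):
        yield from itertools.combinations(s, k)

# Random free-energy landscape in units of kT.
Phi = {S: random.uniform(-2.0, 2.0) for S in subsets(sites)}

def K(i, S):
    """Association constant from the landscape, assuming ln(x K_{i,S}) = -(Phi_{S+i} - Phi_S)."""
    Si = tuple(sorted(set(S) | {i}))
    return math.exp(-(Phi[Si] - Phi[S])) / x

def omega(i, S):
    """Higher-order cooperativity (Equation 2): K_{i,S} / K_{i,empty}."""
    return K(i, S) / K(i, ())

def phi(S):
    """Residual free energy via inclusion-exclusion (Equation 7)."""
    return sum((-1) ** (len(S) - len(Y)) * Phi[Y] for Y in subsets(S))

# For #S > 1, phi(S) is an alternating sum of log-HOCs over subsets of S\{i},
# for ANY choice of i in S (cf. Equation 8); x and K_{i,empty} have cancelled.
errs = [abs(phi(S) - sum((-1) ** (len(S) - len(Z)) * math.log(omega(i, Z))
                         for Z in subsets(tuple(j for j in S if j != i))))
        for S in subsets(sites) if len(S) > 1 for i in S]
max_err = max(errs)
```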
Residual free energies can be thought of as a measure of collective synergy between sites (Horovitz and Fersht, 1992). They are associated to graph vertices and constitute ${2}^{n}-1$ independent parameters, with no algebraic relationships between them. It follows from Equations 6 and 7 that they are mathematically equivalent to the fundamental free energies. Residual free energies have also been independently described for other purposes in Equation 4 of Martini, 2017.
The higher-order couplings introduced in Horovitz and Fersht, 1990 appear at first sight to be quite different from the residual free energies introduced in Horovitz and Fersht, 1992. The couplings are described by examples for low orders, as are typically encountered in practice (Horovitz and Fersht, 1990). We provide a general definition here by introducing a slightly more complex version. A coupling is associated to a pair, consisting of, first, a vertex, $Z\subseteq \{1,\mathrm{\cdots},n\}$, and, second, an ordered sequence of distinct sites, $({i}_{1},\mathrm{\cdots},{i}_{k})$, none of which are in $Z$, so that $Z\cap \{{i}_{1},\mathrm{\cdots},{i}_{k}\}=\mathrm{\varnothing}$. The vertex $Z$ should be thought of as an ‘offset’ within the coupling graph and the sites, ${i}_{1},\mathrm{\cdots},{i}_{k}$, as specifying an ordered sequence of perturbations undertaken around $Z$. Higher-order couplings are conventionally used in the literature only for $Z=\mathrm{\varnothing}$, but this more complex version is needed for the definition in Equation 11 below. Associated to such a pair $Z,({i}_{1},\mathrm{\cdots},{i}_{k})$ is a $k$th order coupling, which we will denote by ${\mathrm{\Delta}}^{k}{\gamma}_{Z,({i}_{1},\mathrm{\cdots},{i}_{k})}$. We start by defining the first-order coupling, ${\mathrm{\Delta}}^{1}{\gamma}_{Z,({i}_{1})}$, for any $Z$ satisfying the restriction above, in terms of the free energy,
With that in hand, we can define for $k\ge 2$, again for any $Z$ satisfying the restriction
where it is clear that $Z\cup \{{i}_{k}\}$ must be disjoint from $\{{i}_{1},\mathrm{\cdots},{i}_{k-1}\}$, so that the right-hand side of Equation 11 is recursively well defined. Unravelling Equations 11 and 10, we see that
which corresponds when $Z=\mathrm{\varnothing}$ to Equation 1 of Horovitz and Fersht, 1990. With some more work, it can be seen that Equation 11 reproduces the $k=3$ and $k=4$ examples in Horovitz and Fersht, 1990. Equation 12 expresses the intuition behind higher-order coupling, that it measures the effect of a perturbation relative to the unperturbed state, hierarchically for a sequence of perturbations.
It can be seen quite easily from Equations 5 and 12 that
We note from Equation 13 that ‘order’ is counted differently between HOCs and conventional higher-order couplings: when $Z=\mathrm{\varnothing}$, Equation 13 relates a higher-order coupling with $k=2$ to a HOC of order 1. Substituting Equation 13 into Equation 11 and continuing the recursion, we find that
at which point the similarity with Equation 9 becomes evident and the pattern emerges. It can be shown by direct substitution in Equation 11 that the following general formula holds, which expresses higher-order couplings in terms of HOCs for any $k\ge 2$:
Comparing Equation 14 with Equation 8 we see that, despite their very different definitions in Equations 11 and 6, conventional higher-order couplings are the same as residual free energies. Indeed, for $k\ge 1$,
Equation 15 may seem strange because a higher-order coupling is defined in terms of an ordered sequence of perturbations, $({i}_{1},\mathrm{\cdots},{i}_{k})$, while a residual free energy depends only on the subset of sites, $\{{i}_{1},\mathrm{\cdots},{i}_{k}\}$, irrespective of the order of the sites. It is a consequence of detailed balance at thermodynamic equilibrium that the order in which the perturbations are undertaken does not matter. For example, it is clear from Equation 12 that ${\mathrm{\Delta}}^{2}{\gamma}_{\mathrm{\varnothing},({i}_{1},{i}_{2})}={\mathrm{\Delta}}^{2}{\gamma}_{\mathrm{\varnothing},({i}_{2},{i}_{1})}$. More generally, if $\rho$ is any permutation of the perturbed sites, so that $\rho$ is a bijective function, $\rho :\{{i}_{1},\mathrm{\cdots},{i}_{k}\}\to \{{i}_{1},\mathrm{\cdots},{i}_{k}\}$, then it can be shown that
Note that Equation 16 follows from Equation 15 when $Z=\mathrm{\varnothing}$. This property of invariance under permutation is referred to as ‘symmetry’ in Horovitz and Fersht, 1990 and is similar to the algebraic relations which give rise to the independent HOCs, ${\omega}_{i,S}$ with $i<S$, as described previously.
The equality between the higher-order couplings introduced in Horovitz and Fersht, 1990 and the residual free energies introduced in Horovitz and Fersht, 1992, as described in Equation 15, is presumably well known to those in the field. It seems to be implicitly assumed in Horovitz and Fersht, 1992, but we have not found a clear statement of it in the literature. It would be difficult to formulate one in the absence of a general definition of higher-order coupling, as we have given in Equation 11. The formulas above may therefore be of some value in offering a rigorous treatment.
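Equations 15 and 16 can likewise be verified numerically, assuming Equation 10 defines the first-order coupling as the free-energy difference ${\mathrm{\Delta}}^{1}{\gamma}_{Z,({i}_{1})}=\mathrm{\Delta}{\mathrm{\Phi}}_{Z\cup \{{i}_{1}\}}-\mathrm{\Delta}{\mathrm{\Phi}}_{Z}$ (our reading). The sketch below implements the recursion of Equation 11 and checks both permutation invariance and the identity with residual free energies:

```python
import itertools, random

random.seed(2)
sites = (1, 2, 3, 4)

def subsets(s):
    s = tuple(sorted(s))
    for k in range(len(s) + 1):
        yield from itertools.combinations(s, k)

# Random free-energy landscape over subsets of perturbed sites.
Phi = {S: random.uniform(-2.0, 2.0) for S in subsets(sites)}

def union(Z, extra):
    return tuple(sorted(set(Z) | set(extra)))

def gamma(Z, seq):
    """k-th order coupling for offset Z and ordered perturbations seq, per the
    recursion of Equation 11, with the first order taken to be the
    free-energy difference of Equation 10 (our assumed reading)."""
    if len(seq) == 1:
        return Phi[union(Z, seq)] - Phi[Z]
    return gamma(union(Z, seq[-1:]), seq[:-1]) - gamma(Z, seq[:-1])

def phi(S):
    """Residual free energy via inclusion-exclusion (Equation 7)."""
    return sum((-1) ** (len(S) - len(Y)) * Phi[Y] for Y in subsets(S))

# Equation 16: the coupling does not depend on the order of the perturbations.
vals = [gamma((), p) for p in itertools.permutations((1, 2, 3))]
perm_spread = max(vals) - min(vals)

# Equation 15: couplings with empty offset coincide with residual free energies.
eq15_err = max(abs(gamma((), S) - phi(S)) for S in subsets(sites) if S)
```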
Each of the measures we have discussed, HOCs, residual free energies and higher-order couplings, offers a different way of analysing the free-energy landscape using the graphs in Figure 5. HOCs are associated to graph edges; residual free energies are associated to graph vertices; and higher-order couplings are associated to sequences of sites, at least when symmetries are ignored. As we have seen above, the three measures are mathematically equivalent. However, they are useful for different purposes. HOCs tell us about the integration of binding information; residual free energies capture the collective synergy between sets of sites; and higher-order couplings show how these same synergies can be extracted from a sequence of experimental perturbations. One advantage of HOCs is that they are non-dimensional quantities in terms of which it is straightforward to calculate the other measures. By doing so, we were able to show rigorously that higher-order couplings are also residual free energies (Equation 15).
Having explained how various higher-order measures are related to each other, we return to the question of how effective cooperativity arises from allosteric ensembles with multiple conformations. For this problem, HOCs are much easier to use than either residual free energies or higher-order couplings. With Equations 8 and 14 now available, effective residual free energies or effective higher-order couplings may be calculated from the effective HOCs that we construct below, but we will not exploit this capability in the present paper.
Coarse graining yields effective HOCs
As MWC showed, even if there is no intrinsic cooperativity in any conformation, an effective cooperativity can arise from the ensemble. This is usually detected in the shape of the binding function (Figure 2A). Here, we introduce a method of coarse graining through which effective cooperativities can be rigorously defined. We illustrate this for the allostery graph, $A$, and explain the general coarse-graining method in the Materials and methods. For allostery, the idea is to treat the horizontal subgraphs, ${A}_{S}$, as the vertices of a new coarse-grained graph, ${A}^{\varphi}$ (Figure 4, bottom right). There is an edge between two vertices in ${A}^{\varphi}$, if, and only if, there is an edge in $A$ between the corresponding horizontal subgraphs. It is not hard to see that ${A}^{\varphi}$ is identical in structure to any of the vertical subgraphs ${A}^{{c}_{k}}$. We can think of ${A}^{\varphi}$ as if it represents a single effective conformation to which ligand is binding, and we can index each vertex of ${A}^{\varphi}$ by the corresponding subset of bound sites, $S$. The key point, as explained in detail in the Materials and methods, is that it is possible to assign labels to the edges in ${A}^{\varphi}$ so that
with ${A}^{\varphi}$ being at thermodynamic equilibrium under these label assignments. According to Equation 17, the probability of being in a coarse-grained vertex of ${A}^{\varphi}$ is identical to the overall probability of being in any of the corresponding vertices of $A$. This is exactly the property a coarse graining should satisfy at steady state. It is not difficult to see why a procedure like this should work. The coarse-graining formula in Equation 17 tells us the expected probability distribution on the coarse-grained graph, ${A}^{\varphi}$. Equation 3 can then be used to back out the equilibrium labels on the edges of ${A}^{\varphi}$ which give rise to this probability distribution. We provide a more direct way of achieving the same result in Equation 40. This assignment of labels to ${A}^{\varphi}$ is the only way to ensure Equation 17 at equilibrium, so that the coarse graining is both systematic and unique. The Materials and methods gives a more careful treatment for coarse graining any linear framework graph, which may not itself be at thermodynamic equilibrium.
Our coarse-graining procedure offers a general method for calculating how effective behaviour emerges, at thermodynamic equilibrium, from a more detailed underlying mechanism. This procedure is likely to be broadly useful for other studies. We note that it applies only to the steady state. It does not provide a coarse graining of the underlying dynamics, which is a much harder problem.
Because ${A}^{\varphi}$ resembles the graph for ligand binding at a single conformation, we can calculate HOCs for ${A}^{\varphi}$—equivalently, effective HOCs for $A$—just as we did above, by normalising the effective association constants. Once the dust of calculation has settled (Materials and methods), we find that $A$ has effective association constants and effective HOCs:
The quantity ${\mu}_{S}({A}^{{c}_{k}})$ is calculated by multiplying labels over paths, as above, within the vertical subgraph ${A}^{{c}_{k}}$. The terms within angle brackets, of the form $\langle X({c}_{k})\rangle $, where $X({c}_{k})$ is some function over conformations ${c}_{k}$, denote averages over the steady-state probability distribution of the horizontal subgraph: $\langle X({c}_{k})\rangle ={\sum}_{1\le k\le N}X({c}_{k}){\text{Pr}}_{{c}_{k}}({A}_{\mathrm{\varnothing}})$. The right-hand formula in Equation 18 for the effective HOCs has a suggestive structure: it is an average of a product divided by the product of the averages. The effective parameters in Equation 18 provide a biophysical language in which the integrative capabilities of any ensemble can be rigorously specified.
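The coarse-graining calculation behind Equation 18 can be made concrete for a small ensemble. The sketch below assumes independent sites within each conformation, so that each ${\mu}_{S}({A}^{{c}_{k}})$ is a simple product, and checks three claims made in the text: the effective label $x{K}_{i,S}^{\varphi}$ is a ratio of coarse-grained path products, the effective HOCs are independent of $x$, and ${\omega}_{i,\mathrm{\varnothing}}^{\varphi}=1$. All names and numerical values are illustrative:

```python
import random

random.seed(3)
sites = (1, 2, 3)
N = 4                                         # number of conformations (illustrative)
lam = [random.uniform(0.1, 10.0) for _ in range(N)]   # proportional to Pr_{c_k}(A_empty)
K = [{i: random.uniform(0.1, 10.0) for i in sites} for _ in range(N)]

def mu(k, S, x):
    """Path product mu_S(A^{c_k}) for independent sites in each conformation."""
    out = 1.0
    for i in S:
        out *= x * K[k][i]
    return out

def avg(f):
    """Average over the steady-state distribution of the horizontal subgraph A_empty."""
    return sum(f(k) * lam[k] for k in range(N)) / sum(lam)

def K_eff(i, S, x):
    """Effective association constant (left-hand formula of Equation 18)."""
    return avg(lambda k: K[k][i] * mu(k, S, x)) / avg(lambda k: mu(k, S, x))

def omega_eff(i, S, x):
    """Effective HOC: effective constant normalised by the bare effective constant."""
    return K_eff(i, S, x) / K_eff(i, (), x)

x = 1.3
# Coarse graining: x * K_eff equals the ratio of coarse-grained path products.
cg_err = abs(x * K_eff(1, (2,), x) -
             avg(lambda k: mu(k, (1, 2), x)) / avg(lambda k: mu(k, (2,), x)))
# Effective HOCs do not depend on ligand concentration, and omega_{i,empty} = 1.
x_indep = abs(omega_eff(1, (2, 3), 0.2) - omega_eff(1, (2, 3), 5.0))
bare = abs(omega_eff(2, (), x) - 1.0)
```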
Effective HOCs for MWC-like ensembles
The functional viewpoint is readily recovered from the ensemble. A generalised MWC formula can be given in terms of effective HOCs, from which the classical two-conformation MWC formula is easily derived (Materials and methods). Some expected properties of effective HOCs are also easily checked (Materials and methods). First, ${\omega}_{i,S}^{\varphi}$ is independent of ligand concentration, $x$. Second, there is no effective HOC for binding to an empty conformation, so that ${\omega}_{i,\mathrm{\varnothing}}^{\varphi}=1$. Third, if there is only one conformation, ${c}_{1}$, then the effective HOC reduces to the intrinsic HOC, so that ${\omega}_{i,S}^{\varphi}={\omega}_{{c}_{1},i,S}$.
More illuminating are the effective HOCs for the MWC model. We consider any conformational ensemble which is MWC-like: there is no intrinsic HOC in any conformation, so that ${\omega}_{{c}_{k},i,S}=1$ and ${K}_{{c}_{k},i,S}={K}_{{c}_{k},i,\mathrm{\varnothing}}$; and the bare association constants are identical at all sites, so that we can set ${K}_{{c}_{k},i,\mathrm{\varnothing}}={K}_{{c}_{k}}$. There may, however, be any number of conformations, not just the two conformations of the classical MWC model. It then follows that ${\omega}_{i,S}^{\varphi}$ depends only on the size of $S$, so that we can write ${\omega}_{i,S}^{\varphi}$ as ${\omega}_{s}^{\varphi}$, where $s=\mathrm{\#}(S)$ is the order of cooperativity. Equation 18 then simplifies to (Materials and methods)
We see that, although there is no intrinsic HOC in any conformation, effective HOC of each order arises from the moments of ${K}_{{c}_{k}}$ over the probability distribution on ${A}_{\mathrm{\varnothing}}$. In particular, Equation 19 shows that the effective pairwise cooperativity is ${\omega}_{1}^{\varphi}=\langle {({K}_{{c}_{k}})}^{2}\rangle /{\langle {K}_{{c}_{k}}\rangle }^{2}$.
In studies of G-protein-coupled receptor (GPCR) allostery, Ehlert relates ‘empirical’ to ‘ultimate’ levels of explanation by a procedure similar to our coarse graining, but with only two conformations, and calculates a ‘cooperativity constant’ which is the same as ${\omega}_{1}^{\varphi}$ (Ehlert, 2016). Gruber and Horovitz calculate ‘successive ligand binding constants’ for the two-conformation MWC model which are the same as effective association constants, ${K}_{s}^{\varphi}$ (Gruber and Horovitz, 2018) (Materials and methods). To our knowledge, these are the only other calculations of effective allosteric quantities. We note that Equation 19 applies to all HOCs, not just pairwise, and to any MWC-like ensemble, not just those with two conformations.
The classical MWC model yields only positive cooperativity (Koshland and Hamadani, 2002; Monod et al., 1965), as measured in the functional perspective (Figure 2A). We find that MWC-like ensembles yield positive effective HOCs of all orders. Strikingly, these effective HOCs increase with increasing order of cooperativity: provided ${K}_{{c}_{k}}$ is not constant over conformations (Materials and methods),
This shows that ensembles with independent and identical sites, including the two-conformation MWC model, can effectively implement high orders and high levels of positive cooperativity. Equation 20 is very informative, and we return to it in the Discussion.
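For an MWC-like ensemble, Equation 19 takes a moment form; consistent with the pairwise case quoted above, our reading is ${\omega}_{s}^{\varphi}=\langle {({K}_{{c}_{k}})}^{s+1}\rangle /(\langle {({K}_{{c}_{k}})}^{s}\rangle \langle {K}_{{c}_{k}}\rangle )$. The sketch below, with illustrative weights and constants, checks the consequences of this reading numerically: positive effective cooperativity and the strict increase with order asserted in Equation 20:

```python
import random

random.seed(4)
N = 5
lam = [random.uniform(0.1, 10.0) for _ in range(N)]   # Pr_{c_k}(A_empty), up to normalisation
K = [random.uniform(0.1, 10.0) for _ in range(N)]     # one bare constant per conformation

def moment(p):
    """<K^p>: p-th moment of the bare constant over the conformational distribution."""
    return sum(l * k ** p for l, k in zip(lam, K)) / sum(lam)

def omega(s):
    """Effective HOC of order s for an MWC-like ensemble, in moment form
    (our reading of Equation 19): omega_s = <K^{s+1}> / (<K^s> <K>)."""
    return moment(s + 1) / (moment(s) * moment(1))

omega1 = omega(1)                             # pairwise case: <K^2>/<K>^2 >= 1
increasing = all(omega(s) < omega(s + 1) for s in range(1, 5))
```

The increase with order follows from the Cauchy-Schwarz inequality applied to successive moments, which is strict whenever ${K}_{{c}_{k}}$ varies across conformations.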
It is often suggested that negative cooperativity requires a different kind of ensemble to those considered here, such as one allowing KNF-style induced fit (Koshland and Hamadani, 2002). However, if two sites are independent but not identical, so that ${K}_{{c}_{k},1,\mathrm{\varnothing}}\ne {K}_{{c}_{k},2,\mathrm{\varnothing}}$, then, with just two conformations, the effective pairwise cooperativity can become negative. Indeed, ${\omega}_{1,\{2\}}^{\varphi}<1$, if, and only if, the values of the association constants are not in the same relative order in the two conformations (Materials and methods). Negative effective cooperativity can arise from non-identical sites and does not need a special kind of ensemble.
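This criterion can be checked directly. For two independent sites, Equation 18 gives the effective pairwise HOC as an average of a product over the product of averages; the sketch below (with invented numbers) shows that reversing the relative order of the two constants between two conformations flips the effective cooperativity from positive to negative:

```python
def omega12(lam, K1, K2):
    """Effective pairwise HOC for two independent, possibly non-identical sites:
    <K1*K2> / (<K1><K2>), averaging over conformational weights lam (from Equation 18)."""
    Z = sum(lam)
    avg = lambda v: sum(l * u for l, u in zip(lam, v)) / Z
    return avg([p * q for p, q in zip(K1, K2)]) / (avg(K1) * avg(K2))

lam = [1.0, 1.0]                              # two equally weighted conformations
# Same relative order of the two constants in both conformations: positive cooperativity.
w_pos = omega12(lam, K1=[1.0, 10.0], K2=[2.0, 20.0])
# Opposite relative order: negative effective cooperativity, with no special ensemble.
w_neg = omega12(lam, K1=[1.0, 10.0], K2=[20.0, 2.0])
```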
Integrative flexibility of ensembles
Equation 18 shows that effective HOCs of any order can arise for a conformational ensemble but does not reveal what values they can attain. Can they vary arbitrarily? The question can be rigorously posed as follows. Suppose that we are considering $n$ binding sites and that numbers ${\beta}_{i}>0$, for $1\le i\le n$, and ${\alpha}_{i,S}>0$, for $i<S$, are chosen at will. Does there exist a conformational ensemble such that the bare effective association constants satisfy ${K}_{i,\mathrm{\varnothing}}^{\varphi}={\beta}_{i}$, and the independent effective HOCs satisfy ${\omega}_{i,S}^{\varphi}={\alpha}_{i,S}$?
To address this question, we assume that there is no intrinsic HOC, so as not to introduce cryptically what we want to generate. It follows that the sites cannot be identical, for otherwise Equation 20 shows that integrative flexibility is impossible. Accordingly, the bare association constants, ${K}_{{c}_{k},i,\mathrm{\varnothing}}$ for $1\le i\le n$, can be treated as $n$ free parameters in each conformation ${c}_{k}$. If there are $N$ conformations in the ensemble, then there are $N-1$ free parameters coming from the horizontal edges (Materials and methods). Dimensional considerations imply that the effective HOCs cannot take arbitrary values if $n(N-1)<{2}^{n}-1$. Conversely, we prove the following flexibility theorem: any pattern of values can be realised by an allosteric ensemble with no intrinsic cooperativity, to any required degree of accuracy, provided there are enough conformations with the right probability distribution and the right patterns of bare association constants.
To see why this is possible, we outline the argument here and give rigorous details in Theorem 1 in the Materials and methods. Other arguments may of course be possible and the details presented here should not be thought of as the only way for the results to hold. We will use an allostery graph $A$ whose conformations are indexed by subsets $T\subseteq \{1,\mathrm{\cdots},n\}$ and denoted ${c}_{T}$. Both binding subsets and conformations will then be indexed by subsets of $\{1,\mathrm{\cdots},n\}$. To avoid confusion, we will use $S$ to label binding subsets and $T$ to label conformations, so that a vertex of $A$ will be $({c}_{T},S)$. The allostery graph for the case $n=2$ is shown in Figure 6. We will focus on the horizontal subgraph of empty conformations, ${A}_{\mathrm{\varnothing}}$, because that is what is needed for calculating effective HOCs using Equation 18. We will take the reference vertex of ${A}_{\mathrm{\varnothing}}$ to be ${c}_{\mathrm{\varnothing}}$. Recall from what was explained previously that the product of the equilibrium labels along any path in ${A}_{\mathrm{\varnothing}}$ from the reference vertex to the vertex ${c}_{T}$ is the quantity ${\mu}_{{c}_{T}}({A}_{\mathrm{\varnothing}})$, in terms of which the steady-state probabilities of ${A}_{\mathrm{\varnothing}}$ are given by Equation 3. Let ${\lambda}_{T}={\mu}_{{c}_{T}}({A}_{\mathrm{\varnothing}})$. These quantities are ${2}^{n}-1$ free parameters whose values we are going to assign. They are more convenient for our purposes than an independent set of equilibrium labels for ${A}_{\mathrm{\varnothing}}$. By Equation 3,
The other free parameters that we need are $n$ quantities, ${\kappa}_{1},\mathrm{\cdots},{\kappa}_{n}>0$, to which we will subsequently assign values, in terms of which we will define the intrinsic association constants. We will assume that the sites are independent in each conformation, so that all intrinsic HOCs of $A$ are 1. It follows that ${K}_{{c}_{T},i,S}={K}_{{c}_{T},i,\mathrm{\varnothing}}$. We then set ${K}_{{c}_{T},i,\mathrm{\varnothing}}={\kappa}_{i}$ if $i\in T$, and ${K}_{{c}_{T},i,\mathrm{\varnothing}}=\epsilon {\kappa}_{i}$ if $i\notin T$. Here, ε is a small positive quantity which can be chosen to determine the degree of accuracy to which the ${\beta}_{i}$ and ${\alpha}_{i,S}$ are approximated. In the calculations which follow, we will only be interested in terms which do not involve ε as a factor. Because the sites are independent in each conformation, it follows that, in the vertical subgraph, ${A}^{{c}_{T}}$, at any conformation ${c}_{T}$, ${\mu}_{S}({A}^{{c}_{T}})=({\prod}_{i\in S}{\kappa}_{i}){x}^{\mathrm{\#}(S)}$, whenever $S\subseteq T$. However, if $S\nsubseteq T$, then ${\mu}_{S}({A}^{{c}_{T}})$ acquires factors of ε and so ${\mu}_{S}({A}^{{c}_{T}})\approx 0$, where ≈ means simply that the related quantities become equal as ε becomes very small. In this case, for our purposes, ${\mu}_{S}({A}^{{c}_{T}})$ is negligible whenever $S\nsubseteq T$. Figure 6 illustrates how this plays out in the allostery graph for $n=2$.
To calculate the effective association constants, the left-hand formula in Equation 18 shows that we must evaluate the averages $\langle {K}_{{c}_{T},i,S}\cdot {\mu}_{S}({A}^{{c}_{T}})\rangle $ and $\langle {\mu}_{S}({A}^{{c}_{T}})\rangle $. Using Equation 21,
The only terms in the sum which do not involve ε as a factor are those $T$ for which $S\subseteq T$. Furthermore, the definition of ${\mu}_{S}({A}^{{c}_{T}})$ given above shows that these terms do not depend on $T$. Similarly, using Equation 21 again,
and the only terms in the sum which do not involve ε as a factor are those for which $S\cup \{i\}\subseteq T$. These terms also do not depend on $T$. It follows from Equation 18 that
where we have ignored all terms involving ε as a factor.
Equation 22 tells us two things. First, that the effective association constants are approximately proportional to the corresponding κ’s. Hence, if the proportionality constants, which depend only on the ${\lambda}_{T}$, are determined, we can choose the ${\kappa}_{i}$ so as to make the bare effective association constants ${K}_{i,\mathrm{\varnothing}}^{\varphi}$ approximately equal to ${\beta}_{i}$. Second, Equation 22 tells us that the effective HOCs, ${\omega}_{i,S}^{\varphi}$, are independent of the ${\kappa}_{i}$ and depend only on the ${\lambda}_{T}$,
It remains for us to assign values to the ${\lambda}_{T}$ so that the effective HOCs become approximately equal to the α’s.
To do this, assume that, for the conformation ${c}_{T}$, the subset $T$ is written as $T=\{{i}_{1},\mathrm{\cdots},{i}_{k}\}$, where the indices are in increasing order, ${i}_{1}<{i}_{2}<\cdots <{i}_{k}$. Because of this ordering, the quantities ${\alpha}_{{i}_{j},\{{i}_{j+1},\mathrm{\cdots},{i}_{k}\}}$ are given to us by hypothesis. Hence, we can define
Here, δ is another small positive quantity, similar to ε, which can be chosen to set the degree of accuracy to which the β’s and α’s are approximated. As with ε, we will treat as negligible terms in which δ is a factor. Figure 6 illustrates Equation 24 for the case $n=2$.
It can be seen from Equation 24 that ${\sum}_{X\subseteq T}{\lambda}_{T}={\lambda}_{X}(1+U)$, where $U$ has a factor of δ and is therefore negligible as δ becomes very small, $U\approx 0$. It then follows from Equation 23 that
where we have used $U$ as a generic symbol for quantities which are negligible as δ becomes very small. By Equation 24, ${\lambda}_{S\cup \{i\}}={\alpha}_{i,S}\delta {\lambda}_{S}$, so that
This establishes part of what is required. For the other part, we can return to Equation 22 and set
from which it follows from Equation 22 that
Equations 26 and 27 show that the effective association constants and effective HOCs can take arbitrary positive values to any desired degree of accuracy, as determined by ε and δ. This establishes the flexibility theorem. The Materials and methods provides a more careful treatment in Theorem 1, which rigorously establishes the approximation as ε and δ become very small.
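The construction just outlined can be run numerically. The sketch below is a minimal implementation for $n=3$, with illustrative target values for the ${\beta}_{i}$ and ${\alpha}_{i,S}$ and with ${\alpha}_{i,\mathrm{\varnothing}}=1$; the names `lam`, `kappa` and the tolerances are ours. It builds the ${\lambda}_{T}$ as in Equation 24, chooses the ${\kappa}_{i}$ so that the bare effective constants land on the ${\beta}_{i}$ (in the spirit of Equation 25), and checks that the coarse-grained effective parameters approximate the targets:

```python
import itertools
from math import prod

sites = (1, 2, 3)
eps, delta = 1e-12, 1e-3                      # the small quantities of the construction

# Arbitrary targets: bare effective constants beta_i and the independent
# HOCs alpha_{i,S} with i < S (values chosen purely for illustration).
beta = {1: 2.0, 2: 0.5, 3: 7.0}
alpha = {(1, (2,)): 3.0, (1, (3,)): 0.25, (2, (3,)): 4.0, (1, (2, 3)): 0.6}

def subsets(s):
    s = tuple(sorted(s))
    for k in range(len(s) + 1):
        yield from itertools.combinations(s, k)

def a(i, S):                                  # alpha_{i,S}, with alpha_{i,empty} = 1
    return alpha.get((i, tuple(sorted(S))), 1.0)

# Equation 24: lambda_T built from the target HOCs along the ordered sites of T.
lam = {T: prod(delta * a(i, T[j + 1:]) for j, i in enumerate(T))
       for T in subsets(sites)}

# kappa_i chosen so the bare effective constants land on beta_i.
kappa = {i: beta[i] * sum(lam.values()) /
            sum(v for T, v in lam.items() if i in T) for i in sites}

def Kc(T, i):                                 # intrinsic constants: kappa_i in T, eps*kappa_i outside
    return kappa[i] if i in T else eps * kappa[i]

def mu_eff(S):                                # coarse-grained path product (x factors cancel in ratios)
    return sum(lam[T] * prod(Kc(T, i) for i in S) for T in lam)

def K_eff(i, S):
    return mu_eff(tuple(sorted(set(S) | {i}))) / mu_eff(S)

def omega_eff(i, S):
    return K_eff(i, S) / K_eff(i, ())

beta_err = max(abs(K_eff(i, ()) - beta[i]) / beta[i] for i in sites)
alpha_err = max(abs(omega_eff(i, S) - v) / v for (i, S), v in alpha.items())
```

With these values of ε and δ, both relative errors fall well below 2%, and they shrink further as ε and δ are reduced, as the flexibility theorem asserts.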
Figures 7 and 8 together illustrate the flexibility theorem. Figure 7A shows three arbitrarily chosen patterns of effective parameters for a target molecule with four ligand binding sites. Figure 7B shows the corresponding overall binding functions (black curves) together with the individual site-specific binding functions (coloured curves). As a matter of thermodynamics, the overall binding function is always an increasing function of ligand concentration. In contrast, the site-specific binding functions may increase or decrease depending on the combinations of positive and negative effective HOCs in Figure 7A, and thereby show more clearly the complexity arising from those different combinations. The implementation of the effective parameters by an allosteric ensemble, as specified by the flexibility theorem, is illustrated in Figure 8. Figure 8A shows the allosteric ensemble for $n=4$ sites as a product graph with 16 binding patterns and 16 conformations. Figure 8B shows the intrinsic association constants in each conformation coming from the proof of the flexibility theorem, to an accuracy of 0.01. Figure 8C confirms that this allosteric ensemble exactly reproduces the overall binding functions in Figure 7B.
In respect of the dimensional argument made previously, the allostery graph used in the proof above has ${2}^{n}-1$ free parameters for ${A}_{\mathrm{\varnothing}}$ and the ${\kappa}_{1},\mathrm{\cdots},{\kappa}_{n}$ are a further $n$ free parameters, giving ${2}^{n}-1+n$ free parameters in total. This is more than the minimal required number of ${2}^{n}-1$ but not by much. It remains an interesting open question whether a conformational ensemble can be constructed, perhaps with more free parameters, which gives the effective HOCs exactly, rather than only approximately. One consequence of the definitions of ${K}_{{c}_{T},i,\mathrm{\varnothing}}$ and of ${\lambda}_{T}$ in Equation 24 is that the parameters of the allosteric ensemble become exponentially small, as is evident for the examples in Figure 8B. Another interesting question is whether alternative constructions can be found which do not exhibit such a broad range of parameter values. Irrespective of these questions, the proof given above confirms that there is no fundamental biophysical limitation to achieving any pattern of values to any desired degree of accuracy. Accordingly, a central result of the present paper is that sufficiently complex allosteric ensembles can implement any form of information integration that is achievable without energy expenditure.
Allosteric ensembles for Hill functions
As mentioned in the Introduction, the starting point for the present paper was to account for experimental data on gene expression. Studies in Drosophila have shown that the Hunchback gene, in response to the maternal TF Bicoid, is sharply expressed in a way that is well fitted, after appropriate normalisation, to a Hill function, ${\mathscr{H}}_{h}(x)={x}^{h}/(1+{x}^{h})$. This sharp expression underlies the initial patterning of anterior-posterior stripes in the early Drosophila embryo. Estimated values for the Hill coefficient, $h$, vary depending on the experimental construct and time of measurement but are typically in the range $4\le h\le 8$ during early nuclear cycle 14 (Tran et al., 2018). The relevant promoter is believed to have $n=6$ Bicoid binding sites, and the mechanistic basis for the sharpness is the subject of considerable interest. We showed in previous work that, if the promoter was assumed to have six Bicoid binding sites and to be operating at thermodynamic equilibrium, then the highest Hill coefficient that could be achieved, $h=6$ at the so-called Hopfield barrier, required HOCs for Bicoid binding of order up to 5 (Estrada et al., 2016). In particular, pairwise cooperativities, which had previously been invoked to account for the sharpness (Gregor et al., 2007), are not sufficient to explain the data. Left open by this previous work was a molecular mechanism which could create the high-order HOCs required for Hill functions. We have seen above that allosteric ensembles can create any pattern of HOCs, so it is natural to ask if there are allosteric ensembles which yield good approximations to Hill functions.
We implemented a numerical optimisation algorithm to find binding functions which approximated Hill functions (Materials and methods). Hill functions are naturally normalised so that ${\mathscr{H}}_{h}(1)=0.5$, so we followed the procedure introduced previously (Estrada et al., 2016) of normalising concentration to its value at half-maximum: if the normalised binding function is denoted $f(x)$, then $f(1)=0.5$. Figure 9 shows results for an allosteric ensemble with four conformations for ligand binding to six sites. The ensemble has no intrinsic cooperativity in any conformation, so that ${K}_{{c}_{k},i,S}={K}_{{c}_{k},i,\mathrm{\varnothing}}$ for any binding subset $S\subseteq \{1,\mathrm{\cdots},6\}$, while the bare association constants, ${K}_{{c}_{k},i,\mathrm{\varnothing}}$, differ between the conformations (Figure 9B). This gives $4\times 6=24$ free parameters together with an additional three free parameters for the independent equilibrium labels on the horizontal subgraph ${A}_{\mathrm{\varnothing}}$ (Figure 9A). We limited the parameter ranges so that the ${K}_{{c}_{k},i,\mathrm{\varnothing}}$ were in the range $[{10}^{-4},{10}^{4}]$ and the equilibrium labels of ${A}_{\mathrm{\varnothing}}$ were in the range $[{10}^{-6},{10}^{6}]$. With these settings, it was not difficult to find normalised binding functions which are very well fitted by the Hill function, ${\mathscr{H}}_{h}(x)$, for Hill coefficients $h=4$, 5 and 6 (Figure 9D).
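As an illustration of the quantities being optimised, the sketch below computes the overall binding function of a small MWC-like ensemble from its partition function, locates the concentration of half-maximal occupancy used for normalisation, and extracts an apparent Hill coefficient from the slope at half-maximum. The two-conformation parameters are invented for illustration and are not the fitted ensembles of Figure 9:

```python
import math

# Illustrative MWC-like ensemble: n identical sites, no intrinsic
# cooperativity, two conformations with different bare constants.
n = 6
lam = [1.0, 1e-4]                             # conformational weights
K = [0.05, 20.0]                              # bare association constants

def frac_bound(x):
    """Average fraction of bound sites from the partition function
    Z(x) = sum_k lam_k (1 + x K_k)^n, via x Z'(x) / (n Z(x))."""
    Z = sum(l * (1 + x * k) ** n for l, k in zip(lam, K))
    dZ = sum(l * n * k * (1 + x * k) ** (n - 1) for l, k in zip(lam, K))
    return x * dZ / (n * Z)

def half_max():
    """Concentration of half-maximal occupancy, used to normalise x (bisection in log-space)."""
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if frac_bound(mid) < 0.5 else (lo, mid)
    return math.sqrt(lo * hi)

x50 = half_max()

def logit(x):
    f = frac_bound(x)
    return math.log(f / (1 - f))

# Apparent Hill coefficient: slope of log(f/(1-f)) against log(x) at half-maximum.
h_app = (logit(x50 * 1.01) - logit(x50 / 1.01)) / (2 * math.log(1.01))
```

At thermodynamic equilibrium the apparent Hill coefficient cannot exceed the number of sites, so `h_app` lies strictly between 1 and 6 for this cooperative ensemble.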
We were able to find multiple sets of parameters which yielded excellent fits; Figure 9 shows two representative examples for each Hill coefficient. It is evident that very different numerical ensembles (Figure 9B) can give almost identical binding functions (Figure 9D). This reinforces the point made in the Introduction that the binding function, or some associated measure of its shape, such as a Hill coefficient, is an aggregate measure which gives little insight into how binding information is being integrated. For this, the patterns of effective parameters provide more detailed information. As can be seen from Figure 9C, effective HOCs of all orders up to 5 are needed for each Hill function, as suggested previously (Estrada et al., 2016), with predominantly positive effective HOCs, ${\omega}_{i,S}^{\varphi}>1$, and varying amounts of independence, ${\omega}_{i,S}^{\varphi}=1$.
It is interesting to ask what role the size of the ensemble plays in approximating Hill functions. We cannot give a definitive answer but can make some observations. We were able to approximate ${\mathscr{H}}_{6}$ with a two-conformation ensemble with six sites but only with much wider parametric ranges. It was also more difficult in terms of optimisation time to find a good fit, and we did not find multiple fits. This suggests that the larger the ensemble, the easier it is to approximate Hill functions with limited parameter ranges. It is also conceivable that the size of the ensemble may have to increase with the number of binding sites to retain control over the parametric ranges. We must leave such issues to subsequent work. While our results are numerical, and therefore limited to the ensemble we have analysed, it seems clear that allosteric ensembles provide a molecular mechanism that can closely approximate Hill functions with the required high orders of effective cooperativity, thereby providing a solution to our original question. Since Hill functions are widely used to fit data, the potential for an underlying allosteric mechanism may be broadly useful.
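To illustrate how a conformational ensemble can generate a sharp, Hill-like binding function, the following sketch computes the fraction of sites bound for a hypothetical two-conformation ensemble with six independent sites per conformation (no intrinsic cooperativity) and estimates an apparent Hill coefficient from the 10%–90% saturation range. All parameter values here are illustrative choices for the sketch, not the optimised parameters of Figure 9.

```python
import math

# Hypothetical two-conformation ensemble: n = 6 identical, independent sites
# per conformation; K_LOW and K_HIGH are the bare association constants and
# L_EMPTY is the equilibrium label weighting the high-affinity conformation
# when no sites are bound. All values are illustrative.
N_SITES = 6
K_LOW, K_HIGH = 0.01, 100.0
L_EMPTY = 1e-6

def binding_function(x):
    """Average fraction of sites bound, f(x) = x * Psi'(x) / (n * Psi(x))."""
    psi = (1 + K_LOW * x) ** N_SITES + L_EMPTY * (1 + K_HIGH * x) ** N_SITES
    dpsi = (N_SITES * K_LOW * (1 + K_LOW * x) ** (N_SITES - 1)
            + L_EMPTY * N_SITES * K_HIGH * (1 + K_HIGH * x) ** (N_SITES - 1))
    return x * dpsi / (N_SITES * psi)

def concentration_at(level, lo=1e-9, hi=1e9):
    """Bisect (on a log scale) for the concentration where f(x) = level."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if binding_function(mid) < level:
            lo = mid
        else:
            hi = mid
    return mid

# Apparent Hill coefficient from the 10%-90% saturation range.
x10, x90 = concentration_at(0.1), concentration_at(0.9)
hill = math.log(81) / math.log(x90 / x10)
```

Even this minimal ensemble, with no intrinsic cooperativity in either conformation, yields an apparent Hill coefficient well above 1, purely through conformational interchange.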
Discussion
Jacques Monod famously described allostery as ‘the second secret of life’ (Ullmann, 2011). It is only relatively recently, however, that the prescience of his remark has been appreciated and the wealth of conformational ensembles present in most cellular processes has been revealed (Changeux and Christopoulos, 2016; Motlagh et al., 2014; Nussinov et al., 2013).
The present paper seeks to expand the existing allosteric perspective by providing a biophysical foundation for information integration by conformational ensembles. Equation 48 and Equation 49 in the Materials and methods (Equation 18 above) provide for the first time a rigorous definition of effective, higher-order quantities—the association constants, ${K}_{i,S}^{\varphi}$, and cooperativities, ${\omega}_{i,S}^{\varphi}$—arising from any ensemble. Since our methods are equivalent to those of equilibrium statistical mechanics (Materials and methods), these definitions correctly aggregate the free-energy contributions which emerge in the ensemble from ligand binding to a conformation, intrinsic cooperativity within a conformation and conformational change. As noted above, our results encompass recent work on effective properties of the classical, two-conformation MWC ensemble—for pairwise cooperativity (Ehlert, 2016) and higher-order association constants (Gruber and Horovitz, 2018)—but they hold more generally for ensembles of arbitrary complexity with any number of conformations, including those with intrinsic cooperativities.
The effective quantities introduced here provide a language in which the integrative capabilities of an ensemble can be rigorously expressed. To begin with, the overall binding function can be determined in terms of the effective quantities through a generalised MWC formula (Materials and methods), thereby recovering the functional viewpoint (Figure 2A) from the ensemble viewpoint (Figure 2B). This generalised MWC formula reduces to the usual MWC formula for the classical two-conformation MWC model (Equation 55). We also clarify issues which had been difficult to understand in the absence of a quantitative definition of effective quantities. We find that the classical MWC model exhibits effective HOCs of any order and that these are always positive. In other words, binding always encourages further binding. Moreover, these effective HOCs increase strictly with increasing order (Equation 20), so that the more sites which are bound, the greater the encouragement to further binding. We see that HOC has always been present, even for oxygen binding to haemoglobin, albeit unrecognised for lack of an appropriate quantitative definition. Equation 20 confirms in a more precise way the longstanding realisation from the functional perspective that the MWC model exhibits only positive cooperativity; at the same time it succinctly expresses the rigidity and limitations of this model.
It is often stated in the allostery literature that negative cooperativity requires induced fit, in which binding induces conformations which are not present prior to binding. This view goes back to Koshland, who pointed to the emergence of negative cooperativity in the KNF model of allostery, which allows induced fit, and contrasted that to the positive cooperativity of the MWC model, which assumes conformational selection (Koshland and Hamadani, 2002). Our language of effective quantities permits a more discriminating analysis. It confirms, as just pointed out, that the classical MWC model exhibits only positive effective HOCs but also shows that induced fit is not required for negative effective HOC, which can arise just as readily from conformational selection (Materials and methods). What is required is not a different kind of ensemble but, rather, binding sites that are not identical.
Our main result, on the flexibility of conformational ensembles, shows that positive and negative HOCs of any value can occur in any pattern whatsoever, provided that the conformational ensemble is sufficiently complex, with enough conformations (Figure 8). Since the effective quantities provide a complete parameterisation of an ensemble at thermodynamic equilibrium, we see that conformational ensembles can implement any form of information integration that is achievable without external sources of energy. In particular, allosteric ensembles can be found whose binding functions closely approximate Hill functions (Figure 9), thereby answering the question which prompted this study, as to how such functions might arise in gene regulation.
Eukaryotic gene regulation is one of the most complex forms of cellular information processing (Wong and Gunawardena, 2020). Information from the binding of multiple TFs at many sites, often widely distributed across the genome in distal enhancer sequences, must be integrated to determine whether, and in what manner, a gene is expressed. The results of the present paper offer a way to think further about how such integration takes place (Tsai and Nussinov, 2011). We focus on gene regulation, but our results may also be useful for analysing other mechanisms of information integration, such as GPCRs (Thal et al., 2018).
As pointed out in the Introduction, haemoglobin solves the problem of integrating two quite different physiological functions—picking up oxygen in the lungs and delivering oxygen to the tissues—by having two conformations, each adapted to one of these functions, and dynamically interconverting between them (Figure 10A). The effective cooperativity of oxygen binding ensures that the appropriate conformation dominates the ensemble in the distinct contexts of the lungs, where oxygen is abundant, and the tissues, where oxygen is scarce, so that oxygen is transferred from the former to the latter.
Genes have to be regulated to achieve yet more elaborate forms of integration, with the same gene being expressed differently in different contexts. Such pleiotropy is particularly evident in developmental genes (Bolt and Duboule, 2020) but usually occurs in distinct cells within the developing organism. The same gene is present in these cells, but it may be difficult to know whether the corresponding regulatory machineries are also the same. More directly suitable examples for the present discussion arise in individual cells exposed to distinct stimuli (Molina et al., 2013; Kalo et al., 2015; Lin et al., 2015), which may be particularly the case for neurons or cells of the immune system (Marco et al., 2020; Smale et al., 2013).
Depending on the input pattern of TFs present in a given cellular context (Figure 10B, left), a gene may be expressed in a certain way, as a distribution of splice isoforms, each with an overall level of mRNA expression and a pattern of stochastic bursting (Lammers et al., 2020; Figure 10B, right). A different input pattern of TFs may elicit a different mRNA output. Our results suggest that one way in which these different input-output relationships could be integrated in the workings of a single gene is through allostery of the overall regulatory machinery. An allosteric analogy in gene regulation was previously made by Mirny, 2010, building upon observations of indirect cooperativity between TFs that were mediated by nucleosomes (Miller and Widom, 2003). In the allosteric analogy, TF binding to DNA takes place in one of two conformations—nucleosome present or absent—which dynamically interchange, leading to the classical MWC model. Here, we build upon Mirny’s idea to suggest that not only indirect cooperativity but also, more broadly, information integration may be accounted for by the conformational dynamics of the gene regulatory machinery. The latter comprises not just individual nucleosomes but whatever other molecular entities are implicated in conveying information from TF binding sites to RNA polymerase and the transcriptional machinery (Figure 10B, centre), as discussed below. If this hypothesis is correct, then the flexibility result tells us that the overall regulatory conformational ensemble must exhibit sufficient complexity to implement the integration of binding information.
Studies of individual regulatory components have revealed many levels of conformational complexity. DNA itself exhibits conformational changes in respect of TF binding (Kim et al., 2013). Nucleosomes are moved or evicted to alter chromatin conformation and DNA accessibility (Mirny, 2010; Voss and Hager, 2014). TFs, in particular, show high levels of intrinsic disorder compared to other classes of proteins (Liu et al., 2006), especially in their activation domains, and these disordered regions exhibit dynamic multivalent interactions characteristic of higher-order effects (Chong et al., 2018; Clark et al., 2018). Hub TFs like p53 exhibit high levels of conformational flexibility in the context of specific DNA binding (Demir et al., 2017). Transcriptional coregulators, which do not directly bind DNA but are recruited there by TFs, exhibit substantial conformational complexity: CBP/p300 has multiple intrinsically disordered regions which facilitate higher-order cooperative interactions (Dyson and Wright, 2016), while the Mediator complex exhibits quite remarkable conformational changes upon binding to TFs (Allen and Taatjes, 2015). Transcription initiation subcomplexes such as TFIID, which help assemble the transcriptional machinery, show conformational plasticity (Nogales et al., 2017), while the C-terminal domain of RNA Pol II, which is repetitive and intrinsically disordered, shows surprising local structural heterogeneity (Portz et al., 2017). The significance of RNA conformational dynamics during transcription is becoming clearer (Ganser et al., 2019). Finally, transcription may also be regulated within larger-scale entities, such as transcription factories (Edelman and Fraser, 2012), phase-separated condensates (Sabari et al., 2018) and topological domains (Benabdallah and Bickmore, 2015).
The role of such entities remains a matter of debate (Mir et al., 2019), but they may play a significant role in conveying information over long genomic distances between distal enhancers and target promoters (Furlong and Levine, 2018). From the perspective taken here, in view of their size and extent, they may exhibit conformational dynamics on longer timescales.
These various findings have emerged largely independently of each other. They indicate the presence of many conformations of components of the gene regulatory machinery, with these components dynamically interchanging on varying timescales. The collective effect of these coupled dynamics is difficult to predict but we can hazard some guesses. It has been suggested, for example, that multiprotein complexes like Mediator couple the conformational repertoires of their component proteins into complex allosteric networks for processing information (Lewis, 2010). From an ensemble viewpoint, if component $X$ has $m$ conformations and component $Y$ has $n$ conformations, we might naively expect that the coupling of $X$ and $Y$ in a complex yields roughly $mn$ conformations. Following this multiplicative logic for the many components involved in eukaryotic gene regulation, from DNA itself to condensates and domains, suggests that the gene regulatory machinery has enormous conformational capacity with a deep hierarchy of timescales.
In the analogy to haemoglobin, it is the conformational dynamics which implements the transfer of information from upstream TF inputs to downstream gene output. In any given cellular context, as determined by the input pattern of TFs, we may expect one, or perhaps a few, overall regulatory conformations which are well-adapted to generate the required mRNA output and these conformations will be the most frequently observed. The ensemble may exhibit complex patterns of positive and negative effective HOCs among the input TFs which will characterise the required output. In the light of our flexibility theorem, the occurrence of such HOCs, which appear to be necessary to account for data on gene regulation (Park et al., 2019a), may be seen as evidence for conformational complexity. When the cellular context changes, different conformations, adapted to produce the output required in the new context, may be present most often—although careful inspection may show them to have been more fleetingly present previously, as would be expected under conformational selection. More broadly, the complexity of the regulatory conformational ensemble and its dynamics reflects the complexity of functional integration which the gene has to undertake.
Furlong and Levine have suggested a ‘hub and condensate’ model for the overall gene regulatory machinery, which brings together aspects of earlier models to account for how remote enhancers communicate with target promoters (Furlong and Levine, 2018). The allosteric perspective taken here emphasises the significance of conformational dynamics for the functional integration undertaken by such ‘hubs’.
Testing these ideas on the scale of the regulatory machinery presents a daunting challenge, but recent developments point the way towards approaching them, including advances in cryo-EM (Lewis and Costa, 2020), single-molecule microscopy (Li et al., 2019; Bacic et al., 2020), NMR (Shi et al., 2020), synthetic biology (Park et al., 2019b) and the measurement of higher-order quantities (Gruber and Horovitz, 2018). Before experiments can be designed, an appropriate conceptual picture needs to be in place, and that is what we have tried to formulate here. We now know a great deal about the molecular components involved in gene regulation, but the question of how these components collectively give rise to function has been harder to grasp. The allosteric analogy to haemoglobin, upon which we have built here, suggests a potential way to fill this gap.
In extending the haemoglobin analogy, we have sidestepped the issue of energy expenditure. This is not relevant for haemoglobin, but it can hardly be avoided in considering eukaryotic gene regulation, where reorganisation of chromatin and nucleosomes requires energy-dissipating motor proteins and post-translational modifications driven by chemical potential differences are found on all components of the regulatory machinery (Wong and Gunawardena, 2020). What impact such energy expenditure has on ensemble functional integration is a very interesting question. In a separate study that was stimulated by the present paper, we have confirmed that, if a conformational ensemble is maintained in steady state away from thermodynamic equilibrium, then it can exhibit greater functional capabilities than at equilibrium. We hope to report on these findings subsequently. The results presented here offer a rigorous starting point for thinking about how regulatory ensembles integrate binding information at thermodynamic equilibrium. If, indeed, regulatory energy expenditure is essential for gene expression function, as studies increasingly suggest (Park et al., 2019a; Grah et al., 2020; Wolff et al., 2021), new methods, both theoretical and experimental, will be required to understand its functional significance.
Materials and methods
The linear framework
Background and references
The graphs described in the main text, like those in Figure 4, are ‘equilibrium graphs’, which are convenient for describing systems at thermodynamic equilibrium. Equilibrium graphs are derived from linear framework graphs. The distinction between them is that the latter specifies a dynamics, while the former specifies an equilibrium steady state. We first explain the latter and then describe the former. Throughout this section we will use ‘graph’ to mean ‘linear framework graph’ and ‘equilibrium graph’ to mean the kind of graph used in the main text.
The linear framework was introduced in Gunawardena, 2012, developed in Mirzaev and Gunawardena, 2013, Mirzaev and Bortz, 2015, applied to various biological problems in Ahsendorf et al., 2014, Dasgupta et al., 2014, Estrada et al., 2016, Wong et al., 2018a, Wong et al., 2018b, Yordanov and Stelling, 2018, Biddle et al., 2019, Yordanov and Stelling, 2020 and reviewed in Gunawardena, 2014, Wong and Gunawardena, 2020. Technical details and proofs of the ideas described here can be found in Gunawardena, 2012, Mirzaev and Gunawardena, 2013, as well as in the Supplementary Information of Estrada et al., 2016, Wong et al., 2018b, Biddle et al., 2019.
The framework uses finite, directed graphs with labelled edges and no self-loops to analyse biochemical systems under timescale separation. In a typical timescale separation, the vertices represent ‘fast’ components or states, which are assumed to reach steady state; the edges represent reactions or transitions; and the edge labels represent rates with dimensions of (time)^{−1}. The labels may include contributions from ‘slow’ components, which are not represented by vertices but which interact with them, such as binding ligands in the case of allostery.
Linear framework graphs and dynamics
Graphs will always be connected, so that they cannot be separated into subgraphs between which there are no edges. The set of vertices of a graph $G$ will be denoted by $\nu (G)$. For a general graph, the vertices will be indexed by numbers $1,\mathrm{\cdots},N\in \nu (G)$ and vertex 1 will be taken to be the reference vertex. Particular kinds of graphs, such as the allostery graphs discussed in the paper, may use a different indexing. An edge from vertex $i$ to vertex $j$ will be denoted $i\to j$ and the label on that edge by $\mathrm{\ell}(i\to j)$. A subscript, as in $i{\to}_{G}j$, may be used to specify which graph is under discussion. When discussing graphs, we use the word ‘structure’ to refer to properties that depend on vertices and edges only, ignoring the labels.
A graph gives rise to a dynamical system by assuming that each edge is a chemical reaction under mass-action kinetics with the label as the rate constant. Since each edge has only a single source vertex, the corresponding dynamics is linear and can be represented by a linear differential equation in matrix form,

$\frac{du}{dt}=\mathcal{L}(G)\cdot u$. (Equation 28)
Here, $G$ is the graph, $u$ is a vector of component concentrations and $\mathcal{L}(G)$ is the Laplacian matrix of $G$. Since material is only moved between vertices, there is a conservation law, ${\sum}_{i}{u}_{i}(t)={u}_{tot}$. By setting ${u}_{tot}=1$, $u$ can be treated as a vector of probabilities. In such a stochastic setting, Equation 28 is the master equation (Kolmogorov forward equation) of the underlying Markov process. This is a general representation: given any wellbehaved Markov process on a finite state space, there is a graph, whose vertices are the states, for which Equation 28 is the master equation.
The linear dynamics in Equation 28 gives the linear framework its name and is common to all applications. The treatment of the external components, which appear in the edge labels and which introduce nonlinearities, depends on the application. For the case of allostery treated here, we make the same assumptions as in thermodynamics for the grand canonical ensemble, with each ligand being present in a reservoir from which binding and unbinding to graph vertices does not change its free concentration. In this case, the edge labels are effectively constant. The same assumptions are implicitly used in other studies of allostery.
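As a minimal sketch of these dynamics, the following code integrates Equation 28 by explicit Euler steps for an illustrative reversible three-vertex graph whose labels are chosen to satisfy the cycle condition, so that the steady state obeys detailed balance. The graph and rate values are invented for the example.

```python
# Reversible 3-cycle with mass-action dynamics du/dt = L(G)u.
# Labels are chosen so that the label ratios around the cycle multiply to 1:
# (2/1) * (3/1) * (1/6) = 1, so detailed balance holds at steady state.
edges = {  # (source, target): rate label, dimensions of 1/time
    (0, 1): 2.0, (1, 0): 1.0,
    (1, 2): 3.0, (2, 1): 1.0,
    (2, 0): 1.0, (0, 2): 6.0,
}
N = 3

def laplacian_step(u, dt):
    """One explicit Euler step of the Laplacian dynamics on the graph."""
    du = [0.0] * N
    for (i, j), rate in edges.items():
        flux = rate * u[i]   # mass-action flux along edge i -> j
        du[i] -= flux
        du[j] += flux
    return [u[k] + dt * du[k] for k in range(N)]

u = [1.0, 0.0, 0.0]          # u_tot = 1, so u is a probability vector
for _ in range(50_000):
    u = laplacian_step(u, 1e-3)

# Label ratios along paths from the reference vertex give mu = (1, 2, 6),
# so the steady-state probabilities should approach (1/9, 2/9, 6/9).
```

The conservation law is visible in the code: every flux leaves one vertex and enters another, so the total probability is preserved at each step.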
Steady states and thermodynamic equilibrium
The dynamics in Equation 28 always tends to a steady state, at which $du/dt=0$, and, under the fundamental timescale separation, it is assumed to have reached a steady state. If the graph is strongly connected, it has a unique steady state up to a scalar multiple, so that $dim\mathrm{ker}\mathcal{L}(G)=1$. Strong connectivity means that, given any two distinct vertices, $i$ and $j$, there is a path of directed edges from $i$ to $j$, $i={i}_{1}\to {i}_{2}\to \mathrm{\cdots}\to {i}_{k-1}\to {i}_{k}=j$. Under strong connectivity, a representative steady state for the dynamics, $\rho (G)\in \mathrm{ker}\mathcal{L}(G)$, may be calculated in terms of the edge labels by the Matrix Tree Theorem. We omit the corresponding expression as it is not needed here, but it can be found in any of the references given above. This expression holds whether or not the steady state is one of thermodynamic equilibrium. However, at thermodynamic equilibrium, the description of the steady state simplifies considerably because detailed balance holds. This means that the graph is reversible, so that, if $i\to j$, then also $j\to i$, and each pair of such edges is independently in flux balance, so that

$\mathrm{\ell}(i\to j)\,{\rho}_{i}(G)=\mathrm{\ell}(j\to i)\,{\rho}_{j}(G)$. (Equation 29)
This ‘microscopic reversibility’ is a fundamental property of thermodynamic equilibrium. Note that a reversible, connected graph is necessarily strongly connected.
Take any path of reversible edges from the reference vertex 1 to some vertex $i$, $1={i}_{1}\rightleftharpoons {i}_{2}\rightleftharpoons \mathrm{\cdots}\rightleftharpoons {i}_{k-1}\rightleftharpoons {i}_{k}=i$, and let ${\mu}_{i}(G)$ be the product of the label ratios along the path:

${\mu}_{i}(G)=\left(\frac{\mathrm{\ell}({i}_{1}\to {i}_{2})}{\mathrm{\ell}({i}_{2}\to {i}_{1})}\right)\left(\frac{\mathrm{\ell}({i}_{2}\to {i}_{3})}{\mathrm{\ell}({i}_{3}\to {i}_{2})}\right)\mathrm{\cdots}\left(\frac{\mathrm{\ell}({i}_{k-1}\to {i}_{k})}{\mathrm{\ell}({i}_{k}\to {i}_{k-1})}\right)$. (Equation 30)
It is straightforward to see from Equation 29 that ${\mu}_{i}(G)$ does not depend on the chosen path and that ${\rho}_{i}(G)={\mu}_{i}(G){\rho}_{1}(G)$. The vector $\mu (G)$ is therefore a scalar multiple of $\rho (G)$ and so also a steady state for the dynamics. The detailed balance formula in Equation 29 also holds for μ in place of ρ. At thermodynamic equilibrium, the only parameters needed to describe steady states are label ratios.
Equilibrium graphs and independent parameters
This observation about label ratios leads to the concept of an equilibrium graph. Suppose that $G$ is a linear framework graph which can reach thermodynamic equilibrium and is therefore reversible (above). $G$ gives rise to an equilibrium graph, $\mathcal{E}(G)$, as follows. The vertices and edges of $\mathcal{E}(G)$ are the same as those of $G$, but the edge labels in $\mathcal{E}(G)$, which we will refer to as ‘equilibrium edge labels’ and denote ${\ell}_{\mathrm{eq}}(i\to j)$, are the label ratios in $G$. In other words,

${\ell}_{\mathrm{eq}}(i\to j)=\frac{\mathrm{\ell}(i\to j)}{\mathrm{\ell}(j\to i)}$. (Equation 31)
Scheme 1 illustrates the relationship between the linear framework graph and the corresponding equilibrium graph. Note that the equilibrium edge labels of $\mathcal{E}(G)$ are nondimensional and that ${\ell}_{\mathrm{eq}}(j\to i)={\ell}_{\mathrm{eq}}(i\to j)^{-1}$. The equilibrium edge labels are the essential parameters for describing a state of thermodynamic equilibrium.
These parameters are not independent because Equation 29 implies algebraic relationships among them. Indeed, Equation 29 is equivalent to the following ‘cycle condition’, which we formulate for $\mathcal{E}(G)$: given any cycle of edges, ${i}_{1}\to {i}_{2}\to \mathrm{\cdots}\to {i}_{k-1}\to {i}_{1}$, the product of the equilibrium edge labels along the cycle is always 1:

${\ell}_{\mathrm{eq}}({i}_{1}\to {i}_{2})\,{\ell}_{\mathrm{eq}}({i}_{2}\to {i}_{3})\mathrm{\cdots}{\ell}_{\mathrm{eq}}({i}_{k-1}\to {i}_{1})=1$. (Equation 32)
This cycle condition is equivalent to the detailed balance condition in Equation 29 and either condition is equivalent to $G$ being at thermodynamic equilibrium.
There is a systematic procedure for choosing a set of equilibrium edge label parameters which are both independent, so that there are no algebraic relationships among them, and also complete, so that all other equilibrium edge labels can be algebraically calculated from them. Recall that a spanning tree of $G$ is a connected subgraph, $T$, which contains each vertex of $G$ (spanning) and which has no cycles when edge directions are ignored (tree). Any strongly connected graph has a spanning tree and the number of edges in such a tree is one less than the number of vertices in the graph. Since $G$ and $\mathcal{E}(G)$ have the same vertices and edges, they have identical spanning trees. The equilibrium edge labels ${\ell}_{\mathrm{e}\mathrm{q}}(i{\to}_{T}j)$, taken over all edges $i\to j$ of $T$, form a complete and independent set of parameters at thermodynamic equilibrium. In particular, if $G$ has $N$ vertices, there are $N-1$ independent parameters at thermodynamic equilibrium.
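The spanning-tree parameterisation can be sketched numerically: given equilibrium labels on a spanning tree, path products from the reference vertex determine $\mu$ at every vertex, and any off-tree equilibrium label is then forced by the cycle condition. The three-vertex graph and label values below are illustrative.

```python
from collections import deque

# Spanning tree of an illustrative 3-vertex equilibrium graph: the two tree
# edges carry the N - 1 = 2 independent equilibrium labels.
tree_labels = {(0, 1): 2.0, (1, 2): 3.0}
N = 3

def mu_from_tree(tree, n, ref=0):
    """Path products of equilibrium labels from the reference vertex."""
    adj = {}
    for (i, j), l in tree.items():
        adj.setdefault(i, []).append((j, l))        # forward edge, label l
        adj.setdefault(j, []).append((i, 1.0 / l))  # reverse edge, label 1/l
    mu = {ref: 1.0}
    queue = deque([ref])
    while queue:                                    # BFS over the tree
        i = queue.popleft()
        for j, l in adj.get(i, []):
            if j not in mu:
                mu[j] = mu[i] * l
                queue.append(j)
    return [mu[k] for k in range(n)]

mu = mu_from_tree(tree_labels, N)
# An off-tree edge 0 -> 2 is not a free parameter: the cycle condition
# forces its equilibrium label to equal mu[2] / mu[0].
off_tree_label = mu[2] / mu[0]
```

This makes completeness concrete: once the tree labels are fixed, every other equilibrium label is determined algebraically.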
In the main text, we defined an equilibrium allostery graph, $A$ (Figure 4), without specifying a corresponding linear framework graph, $G$, for which $\mathcal{E}(G)=A$. Because label ratios are used in an equilibrium graph, there is no unique linear framework graph corresponding to it. However, some choice of transition rates, $\mathrm{\ell}(i{\to}_{G}j)$ and $\mathrm{\ell}(j{\to}_{G}i)$, can always be made such that their ratio is ${\ell}_{\mathrm{eq}}(i{\to}_{\mathcal{E}(G)}j)$. Hence, some linear framework graph $G$ can always be defined such that $\mathcal{E}(G)=A$. In some of the constructions below, we will work with the linear framework graph, $G$, rather than with the equilibrium graph $A$ and will then show that the construction does not depend on the choice of $G$.
Steady-state probabilities and equilibrium statistical mechanics
The steady-state probability of vertex $i$, ${\text{Pr}}_{i}(G)$, can be calculated from the steady state of the dynamics by normalising, so that

${\text{Pr}}_{i}(G)=\frac{{\rho}_{i}(G)}{{\rho}_{1}(G)+\mathrm{\cdots}+{\rho}_{N}(G)}=\frac{{\mu}_{i}(G)}{{\mu}_{1}(G)+\mathrm{\cdots}+{\mu}_{N}(G)}$, (Equation 33)
where the first formula holds for any strongly connected graph and the second formula also holds if the graph is at thermodynamic equilibrium. In the latter case, Equation 29 holds and $\mu (G)$ can be defined by Equation 30. The second formula in Equation 33 corresponds to Equation 3. If the graph is at thermodynamic equilibrium, the equilibrium edge labels may be interpreted thermodynamically, as illustrated in Figure 3 and discussed in the main text (Equation 1):
If Equation 34 is used to expand the second formula in Equation 33, it gives the specification of equilibrium statistical mechanics for the grand canonical ensemble, with the denominator being the partition function.
It will be helpful to let $\mathrm{\Pi}(G)$ and $\mathrm{\Psi}(G)$ denote the corresponding denominators in Equation 33, so that $\mathrm{\Pi}(G)={\rho}_{1}(G)+\mathrm{\cdots}+{\rho}_{N}(G)$ for any strongly connected graph and $\mathrm{\Psi}(G)={\mu}_{1}(G)+\mathrm{\cdots}+{\mu}_{N}(G)$ for a graph which is at thermodynamic equilibrium. We will refer to $\mathrm{\Pi}(G)$ and $\mathrm{\Psi}(G)$ as partition functions. It follows from Equation 33 that

${\text{Pr}}_{i}(G)={\rho}_{i}(G)/\mathrm{\Pi}(G)$ or ${\text{Pr}}_{i}(G)={\mu}_{i}(G)/\mathrm{\Psi}(G)$, (Equation 35)

depending on the context.
The allostery graph
Structure and labels
An allostery graph, $A$, is an equilibrium graph which describes the interplay between conformational change and ligand binding, as illustrated in Figure 4. Its vertices are indexed by $({c}_{k},S)$, where ${c}_{k}$ specifies a conformation with $1\le k\le N$ and $S\subseteq \{1,\mathrm{\cdots},n\}$ specifies a subset of sites bound by a ligand whose concentration is $x$. There is no difficulty in allowing multiple ligands and overlapping binding sites, but to keep the formalism simple, we describe here the case of a single ligand and distinct binding sites.
Recall from the main text that $A$ has vertical subgraphs, ${A}^{{c}_{k}}$, consisting of vertices $({c}_{k},R)$ for all binding subsets, $R$, together with all edges between them, with the vertices indexed by binding subsets, $R$, and with $R=\mathrm{\varnothing}$ being the reference vertex. $A$ has horizontal subgraphs, ${A}_{S}$, consisting of vertices $({c}_{i},S)$ for all conformations ${c}_{i}$, together with all edges between them, with the vertices labelled by conformations ${c}_{i}$, and with ${c}_{1}$ being the reference vertex. The product structure of $A$ is revealed by all vertical subgraphs having the same structure as each other and all horizontal subgraphs having the same structure as each other (Figure 4).
As for the labels, the vertical binding edges have equilibrium labels,

${\ell}_{\mathrm{eq}}(({c}_{k},S)\to ({c}_{k},S\cup \{i\}))={K}_{{c}_{k},i,S}\,x$, (Equation 36)
where $x$ is the concentration of the ligand and ${K}_{{c}_{k},i,S}$ is the association constant for binding to site $i$ when the ligand is already bound at the sites in $S$. The horizontal edges, which represent transitions between conformations, have equilibrium labels, ${\ell}_{\mathrm{e}\mathrm{q}}(({c}_{k},S){\to}_{A}({c}_{l},S))$, which are not individually annotated. However, it is only necessary to specify these equilibrium labels for a single horizontal subgraph, of which the subgraph of empty conformations, ${A}_{\mathrm{\varnothing}}$, is particularly convenient. To see this, let us calculate the quantity ${\mu}_{({c}_{k},S)}(A)$ using Equation 30. Taking the reference vertex in $A$ to be $({c}_{1},\mathrm{\varnothing})$, we can always find a path to any given vertex $({c}_{k},S)$ of $A$ by first moving horizontally within ${A}_{\mathrm{\varnothing}}$ from $({c}_{1},\mathrm{\varnothing})$ to $({c}_{k},\mathrm{\varnothing})$ and then moving vertically within ${A}^{{c}_{k}}$ from $({c}_{k},\mathrm{\varnothing})$ to $({c}_{k},S)$. According to Equation 30, the steady state is given by the product of the equilibrium labels along this path, so that

${\mu}_{({c}_{k},S)}(A)={\mu}_{{c}_{k}}({A}_{\mathrm{\varnothing}})\,{\mu}_{S}({A}^{{c}_{k}})$. (Equation 37)
Now consider any horizontal edge in $A$, $({c}_{k},S)\to ({c}_{l},S)$. Since $A$ is at thermodynamic equilibrium, it follows from Equation 29, using μ in place of ρ, and Equation 37, that

${\ell}_{\mathrm{eq}}(({c}_{k},S)\to ({c}_{l},S))=\frac{{\mu}_{({c}_{l},S)}(A)}{{\mu}_{({c}_{k},S)}(A)}=\frac{{\mu}_{{c}_{l}}({A}_{\mathrm{\varnothing}})\,{\mu}_{S}({A}^{{c}_{l}})}{{\mu}_{{c}_{k}}({A}_{\mathrm{\varnothing}})\,{\mu}_{S}({A}^{{c}_{k}})}$.
Applying Equation 29 to ${A}_{\mathrm{\varnothing}}$, with μ in place of ρ, we see that

${\ell}_{\mathrm{eq}}(({c}_{k},\mathrm{\varnothing})\to ({c}_{l},\mathrm{\varnothing}))=\frac{{\mu}_{{c}_{l}}({A}_{\mathrm{\varnothing}})}{{\mu}_{{c}_{k}}({A}_{\mathrm{\varnothing}})}$.
Hence, it follows that

${\ell}_{\mathrm{eq}}(({c}_{k},S)\to ({c}_{l},S))={\ell}_{\mathrm{eq}}(({c}_{k},\mathrm{\varnothing})\to ({c}_{l},\mathrm{\varnothing}))\,\frac{{\mu}_{S}({A}^{{c}_{l}})}{{\mu}_{S}({A}^{{c}_{k}})}$. (Equation 38)
Accordingly, all the labels in $A$ are determined by the vertical labels in Equation 36, from which ${\mu}_{S}({A}^{{c}_{k}})$ and ${\mu}_{S}({A}^{{c}_{l}})$ are determined, and the horizontal labels in the subgraph of empty conformations, ${A}_{\mathrm{\varnothing}}$. As can be seen from Scheme 2, Equation 38 amounts to exploiting the equilibrium cycle condition in Equation 32.
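This label structure can be sketched numerically for a hypothetical allostery graph with two conformations, two sites and no intrinsic cooperativity: the vertical labels and the labels on ${A}_{\mathrm{\varnothing}}$ determine every horizontal label, and any mixed cycle of vertical and horizontal edges multiplies to 1, as the cycle condition requires. All parameter values are invented for the example.

```python
# Illustrative allostery graph: two conformations (1 and 2), two sites, a
# single ligand at concentration x, and no intrinsic cooperativity, so
# mu_S(A^c) is a product of bare association constants times x.
x = 0.5                                   # ligand concentration (arbitrary units)
K = {1: {1: 0.2, 2: 5.0},                 # K[conformation][site]
     2: {1: 4.0, 2: 0.5}}
label_empty = {(1, 2): 10.0}              # equilibrium label on A_empty, c1 -> c2

def mu_vertical(c, S):
    """mu_S(A^c): product of the vertical equilibrium labels K * x over S."""
    prod = 1.0
    for i in S:
        prod *= K[c][i] * x
    return prod

def horizontal_label(ck, cl, S):
    """Horizontal label over binding subset S, following the Equation 38 pattern."""
    return label_empty[(ck, cl)] * mu_vertical(cl, S) / mu_vertical(ck, S)

# The square cycle (c1,{}) -> (c1,{1}) -> (c2,{1}) -> (c2,{}) -> (c1,{})
# mixes vertical and horizontal edges; its labels must multiply to 1.
cycle = (K[1][1] * x) * horizontal_label(1, 2, {1}) \
        * (1.0 / (K[2][1] * x)) * (1.0 / label_empty[(1, 2)])
```

The check on `cycle` is exactly the equilibrium cycle condition on one face of the product graph, confirming that the derived horizontal labels are consistent with detailed balance.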
Independent parameters
We can choose any spanning tree in the horizontal subgraph of empty conformations, ${A}_{\mathrm{\varnothing}}$. As explained above, the equilibrium labels on the edges of this tree define a complete set of $N-1$ independent parameters for ${A}_{\mathrm{\varnothing}}$. As for the vertical subgraphs, ${A}^{{c}_{k}}$, which all have the same structure, consider the subgraph of ${A}^{{c}_{k}}$ consisting of all edges, together with the corresponding source and target vertices, of the form, $({c}_{k},S)\to ({c}_{k},S\cup \{i\})$, where $\mathrm{\varnothing}\subseteq S\subset \{1,\mathrm{\cdots},n\}$ and $i$ is less than all the sites in $S$ ($i<S$). It is not difficult to see that this subgraph is a spanning tree of ${A}^{{c}_{k}}$ (Estrada et al., 2016, SI, §3.2). Accordingly, the association constants, ${K}_{{c}_{k},i,S}$ from Equation 36, with $i<S$, form a complete set of independent parameters for ${A}^{{c}_{k}}$. Because of the product structure of $A$, adjoining the spanning trees in ${A}^{{c}_{k}}$, for each conformation ${c}_{k}$ with $1\le k\le N$, to the spanning tree in ${A}_{\mathrm{\varnothing}}$, yields a spanning tree in $A$. Hence, the independent parameters for ${A}^{{c}_{k}}$ together with the $N-1$ independent parameters for ${A}_{\mathrm{\varnothing}}$ are also collectively independent as parameters for $A$. It follows from the description of labels above that these parameters are also complete for $A$, so that any equilibrium label in $A$ can be expressed in terms of them.
A general method of coarse graining
Coarse graining a linear framework graph and Equation 17
We will describe the coarse-graining procedure for an arbitrary reversible linear framework graph, $G$, and then explain how this can be adapted to an equilibrium graph, as described for the allostery graph $A$ in the main text.
We will say that a graph $G$ is in-uniform if, given any vertex $j\in \nu (G)$, then for all edges $i\to j$, $\mathrm{\ell}(i\to j)$ does not depend on the source vertex $i$.
Lemma 1
Suppose that $G$ is reversible and in-uniform. Then, $G$ is at thermodynamic equilibrium and the vector θ given by ${\theta}_{j}=\mathrm{\ell}(i\to j)$, which is well-defined by hypothesis, is a basis element in $\mathrm{ker}\,\mathcal{L}(G)$ and a steady state for the dynamics.
Proof: If ${i}_{1}\rightleftharpoons {i}_{2}\rightleftharpoons \mathrm{\cdots}\rightleftharpoons {i}_{k-1}\rightleftharpoons {i}_{k}$ is any path of reversible edges in $G$, then the product of the label ratios along the path satisfies
because the intermediate terms cancel out by the in-uniform hypothesis. If the path is a cycle, so that ${i}_{k}={i}_{1}$, then, again because of the in-uniform hypothesis, the right-hand side of Equation 39 is 1. Hence, $G$ satisfies the cycle condition in Equation 32 and is therefore at thermodynamic equilibrium. For the last statement, assume that ${i}_{1}$ is the reference vertex 1 and that ${i}_{k}=j$, for any vertex $j$. Using Equation 30, we see that ${\mu}_{j}(G)={\theta}_{j}/{\theta}_{1}$. Hence, θ is the scalar multiple by ${\theta}_{1}$ of the basis element given by the ${\mu}_{j}(G)$, and the last statement follows.
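Lemma 1 can be illustrated numerically. A sketch (not from the paper's code), assuming the column-based Laplacian convention in which $\mathcal{L}(G)_{jk}=\ell(k\to j)$ for $j\ne k$:

```python
import numpy as np

# In-uniform graph: every edge into vertex j carries the same label
# theta[j]; theta is then a kernel vector of the Laplacian L(G).
theta = np.array([2.0, 5.0, 0.5, 1.0])   # arbitrary positive labels
N = len(theta)

L = np.zeros((N, N))
for j in range(N):
    for k in range(N):
        if j != k:
            L[j, k] = theta[j]           # label of edge k -> j
# diagonal: minus the total rate leaving each vertex
np.fill_diagonal(L, -(theta.sum() - theta))

assert np.allclose(L @ theta, 0.0)       # theta is a steady state
```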
Now let $G$ be an arbitrary reversible graph, which need not satisfy detailed balance. Let ${G}_{1},\mathrm{\cdots},{G}_{m}$ be any partition of the vertices of $G$, so that ${G}_{i}\subseteq \nu (G)$, ${G}_{1}\cup \mathrm{\cdots}\cup {G}_{m}=\nu (G)$ and ${G}_{i}\cap {G}_{j}=\mathrm{\varnothing}$ when $i\ne j$. Let $\mathcal{C}(G)$ be the labelled directed graph with $\nu (\mathcal{C}(G))=\{1,\mathrm{\cdots},m\}$ and let $u{\to}_{\mathcal{C}(G)}v$ if, and only if, there exists $i\in {G}_{u}$ and $j\in {G}_{v}$ such that $i{\to}_{G}j$. Finally, let the edge labels of $\mathcal{C}(G)$ be given by
The quantity $Q$ in Equation 40 is chosen arbitrarily so that the dimension of $\mathrm{\ell}(u\to v)$ is (time)^{−1}, as required for an edge label. This is necessary because, by the Matrix Tree Theorem, the dimension of ${\rho}_{i}(G)$ is (time)^{1−N}, where $N$ is the number of vertices in $G$. However, $Q$ plays no role in the analysis which follows because the coarse graining applies only to the steady state of $\mathcal{C}(G)$, not its transient dynamics, and, as we will see, $\mathcal{C}(G)$ is always at thermodynamic equilibrium, so that $Q$ disappears when equilibrium edge labels are considered.
Note that $\mathcal{C}(G)$ inherits reversibility from $G$ and that $\mathcal{C}(G)$ is in-uniform. Hence, by Lemma 1, $\mathcal{C}(G)$ is at thermodynamic equilibrium and
where λ is a scalar that does not depend on $v\in \nu (\mathcal{C}(G))$. Since ${G}_{1},\mathrm{\cdots},{G}_{m}$ is a partition of the vertices of $G$, it follows from Equation 41 that
Equations 35 and 41 then show that both λ and $Q$ cancel in the ratio for the steady-state probabilities, so that
Equation 42 is the coarsegraining equation, as given in Equation 17.
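The coarse-graining equation can be checked numerically on a small example. A sketch with arbitrary numbers, assuming detailed balance is imposed by choosing label ratios proportional to prescribed μ-values:

```python
import numpy as np

# A 4-state reversible graph at equilibrium: label ratios satisfy
# l(i->j)/l(j->i) = mu_j/mu_i, with arbitrary timescales per edge pair.
mu = np.array([1.0, 3.0, 0.5, 2.0])      # equilibrium mu-values, mu_1 = 1
rng = np.random.default_rng(0)
N = len(mu)
L = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1, N):
        base = rng.uniform(0.5, 2.0)
        L[j, i] = base * mu[j]           # l(i -> j)
        L[i, j] = base * mu[i]           # l(j -> i)
L -= np.diag(L.sum(axis=0))

# steady state from the kernel of the Laplacian
w, v = np.linalg.eig(L)
p = np.real(v[:, np.argmin(np.abs(w))])
p = p / p.sum()
assert np.allclose(p, mu / mu.sum())     # Pr_i(G) proportional to mu_i

# coarse grain with the partition {1,2}, {3,4}: by Equation 42 the
# coarse probabilities are sums of the fine-grained ones
coarse = np.array([p[0] + p[1], p[2] + p[3]])
assert np.allclose(coarse, np.array([mu[:2].sum(), mu[2:].sum()]) / mu.sum())
```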
Coarse graining an equilibrium graph
The coarse-graining procedure described above can be applied to any reversible graph, which need not be at thermodynamic equilibrium. However, the coarse graining described in the paper was for an equilibrium graph. It is not difficult to see that the construction above can be undertaken consistently for any equilibrium graph. It is helpful to first establish a more general observation. The choice of edge labels for $\mathcal{C}(G)$, as given in Equation 40, is not the only one for which Equation 42 holds, as the appearance of the factor $Q$ indicates. However, the label ratios in $\mathcal{C}(G)$ are uniquely determined by the labels of $G$.
Suppose that $G$ is a reversible graph with a vertex partition ${G}_{1},\mathrm{\cdots},{G}_{m}$, as above. $G$ need not be at thermodynamic equilibrium. Suppose that $C$ is a graph which is isomorphic to $\mathcal{C}(G)$ as a directed graph (‘structurally isomorphic’), in the sense that it has identical vertices and edges but may have different edge labels. (Technically speaking, an ‘isomorphism’ allows for the vertices of $C$ to have an alternative indexing to those of $\mathcal{C}(G)$ as long as the two indexings can be interconverted so as to preserve the edges. For simplicity of exposition, we assume that the indexing is, in fact, identical. No loss of generality arises from doing this.)
Lemma 2
Suppose that $C$ is at thermodynamic equilibrium and the coarse-graining equation (Equation 42) holds for $C$, so that ${\mathrm{Pr}}_{u}(C)={\sum}_{i\in {G}_{u}}{\mathrm{Pr}}_{i}(G)$. If $u{\mathrm{\rightleftharpoons}}_{C}v$ is any reversible edge, then its equilibrium label depends only on $G$,
and $C$ and $\mathcal{C}(G)$ are isomorphic as equilibrium graphs, so that identical edges have identical equilibrium labels.
Proof: It follows from Equation 35 that ${\text{Pr}}_{i}(G)={\rho}_{i}(G)/\mathrm{\Pi}(G)$ and, since $C$ is at thermodynamic equilibrium, ${\text{Pr}}_{u}(C)={\mu}_{u}(C)/\mathrm{\Psi}(C)$. Using the coarsegraining equation for ${\text{Pr}}_{u}(C)$, we see that
Since $C$ is at thermodynamic equilibrium, Equation 29, with μ in place of ρ, implies that
Substituting with Equation 43, the partition functions cancel out to give the formula above. Since $\mathcal{C}(G)$ satisfies the same assumptions as $C$, it has the same equilibrium labels. Hence, $C$ and $\mathcal{C}(G)$ must be isomorphic as equilibrium graphs.
Corollary 1
Suppose that $A$ is an equilibrium graph and that $G$ is any graph for which $\mathcal{E}(G)=A$, as described above. If any coarse graining of $G$ is undertaken to yield the coarse-grained graph $\mathcal{C}(G)$, which must be at thermodynamic equilibrium, then
and $\mathcal{E}(\mathcal{C}(G))$ depends only on $A$ and not on the choice of $G$.
Proof: $A$ acquires from $G$ the same coarse graining, with the partition ${A}_{1},\mathrm{\cdots},{A}_{m}$ of $\nu (A)$, where ${A}_{i}={G}_{i}$ for $1\le i\le m$. By hypothesis, $G$ is at thermodynamic equilibrium, so that ${\rho}_{i}(G)=\lambda {\mu}_{i}(G)$ for some scalar multiple λ. Also, since $\mathcal{E}(G)=A$, ${\mu}_{i}(G)={\mu}_{i}(A)$. Substituting in the formula in Lemma 2 yields the formula above. The equilibrium labels of $\mathcal{C}(G)$ therefore depend only on the equilibrium labels of $A$, as required.
It follows from Corollary 1 that coarse graining can be carried out on an equilibrium graph, $A$, by choosing any graph $G$ for which $\mathcal{E}(G)=A$ and carrying out the coarse-graining procedure described above on $G$. This justifies the coarse-graining construction described in the main text.
Coarse graining the allostery graph
Proof of Equation 18
As described in the main text and Figure 4, the coarse-grained allostery graph, ${A}^{\varphi}=\mathcal{C}(A)$, is defined using the partition of $A$ by its horizontal subgraphs, ${A}_{S}$, where $S$ runs through all binding subsets, $S\subseteq \{1,\cdots ,n\}$. ${A}^{\varphi}$ has the same structure of vertices and edges as any of the binding subgraphs, ${A}^{{c}_{k}}$, and is indexed in the same way by the binding subsets, $S$. Scheme 3 shows an example, which illustrates the calculations undertaken in this section.
Consider the reversible edge in ${A}^{\varphi}$, $S\rightleftharpoons S\cup \{i\}$, where $i\notin S$. This reversible edge effectively arises from the binding and unbinding of ligand at site $i$. According to Equation 36, its effective association constant, ${K}_{i,S}^{\varphi}$, should satisfy
Since $A$ is at thermodynamic equilibrium, we can make use of the formula in Corollary 1 to rewrite this as
Equations 30 and 36 tell us that ${\mu}_{({c}_{k},S\cup \{i\})}(A)=x{K}_{{c}_{k},i,S}{\mu}_{({c}_{k},S)}(A)$, so that, after rearranging,
We can now appeal to Equations 35 and 37 to rewrite the term in brackets on the right as
At this point, it will be helpful to introduce the following notation. If $G$ is any equilibrium graph and $u:\nu (G)\to \mathbf{R}$ is any real-valued function defined on the vertices of $G$, let $\langle u\rangle$ denote the average of $u$ over the steady-state probability distribution of $G$,
With this notation in hand, we can rewrite the denominator in Equation 46 as $\langle {\mu}_{S}({A}^{{c}_{k}})\rangle$, where, from now on, averages will be taken over the steady-state probability distribution of the horizontal subgraph of empty conformations, ${A}_{\mathrm{\varnothing}}$ (Scheme 3, bottom). Inserting this expression back into Equation 45 and rearranging, we obtain a formula for the effective association constant as a ratio of averages,
which gives the first formula in Equation 18. The ‘dot’ in Equation 48 signifies a product to make the formula easier to read. Scheme 3 demonstrates this calculation. Recall from the main text that HOCs are defined by normalising to the empty binding subset, so that ${\omega}_{i,S}^{\varphi}={K}_{i,S}^{\varphi}/{K}_{i,\mathrm{\varnothing}}^{\varphi}$. Furthermore, since the reference vertex of the vertical subgraphs, ${A}^{{c}_{k}}$, is taken to be the empty binding subset, ${\mu}_{\mathrm{\varnothing}}({A}^{{c}_{k}})=1$. It follows that the effective HOCs are given by
which gives the second formula in Equation 18.
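Equation 48 can be verified against direct coarse graining on a small example. A sketch with hypothetical constants for a two-conformation, two-site ensemble with independent sites in each conformation:

```python
import numpy as np

K = {1: {1: 10.0, 2: 0.1},    # bare constants K_{c_1,i}
     2: {1: 0.2,  2: 5.0}}    # bare constants K_{c_2,i}
lam = {1: 1.0, 2: 3.0}        # mu_{c_k}(A_empty), with lambda_1 = 1
x = 0.7                       # ligand concentration
conformations = (1, 2)
microstates = [frozenset(s) for s in [(), (1,), (2,), (1, 2)]]

def mu(c, S):
    # path formula: lambda_k times a factor x*K for each bound site
    out = lam[c]
    for i in S:
        out *= x * K[c][i]
    return out

Z = sum(mu(c, S) for c in conformations for S in microstates)

def Pr(S):
    # coarse-grained probability of binding microstate S (Equation 42)
    return sum(mu(c, S) for c in conformations) / Z

def K_eff(i, S):
    # effective association constant by direct coarse graining
    return Pr(S | {i}) / (x * Pr(S))

def K_eff_avg(i, S):
    # Equation 48: ratio of averages over Pr(A_empty); the lambda
    # weights are folded into mu(c, S) and the normalisations cancel
    num = sum(K[c][i] * mu(c, S) for c in conformations)
    den = sum(mu(c, S) for c in conformations)
    return num / den

for i, S in [(1, frozenset()), (2, frozenset()),
             (1, frozenset({2})), (2, frozenset({1}))]:
    assert np.isclose(K_eff(i, S), K_eff_avg(i, S))

# effective pairwise cooperativity emerges, though no conformation has any
omega = K_eff_avg(1, frozenset({2})) / K_eff_avg(1, frozenset())
assert not np.isclose(omega, 1.0)
```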
Elementary properties of effective HOCs
The main text describes three elementary properties of effective HOCs which follow from Equation 49. The only quantity in Equation 49 which involves the ligand concentration, $x$, is ${\mu}_{S}({A}^{{c}_{k}})$. It follows from Equation 30 that this quantity is a monomial in $x$ of the form $a{x}^{p}$, where $a$ does not involve $x$ and $p=\mathrm{\#}(S)$. In particular, ${x}^{p}$ does not depend on the conformation ${c}_{k}$. It follows that ${x}^{p}$ can be extracted from the averages in Equation 49 and cancelled between the numerator and denominator. Hence, ${\omega}_{i,S}^{\varphi}$ is independent of $x$. If $S=\mathrm{\varnothing}$, then ${\mu}_{S}({A}^{{c}_{k}})=1$ for all $1\le k\le N$ and it follows from Equation 49 that ${\omega}_{i,\mathrm{\varnothing}}^{\varphi}=1$. Finally, if there is only one conformation, ${c}_{1}$, the averages in Equation 49 collapse and ${\mu}_{S}({A}^{{c}_{1}})$ cancels above and below, so that ${\omega}_{i,S}^{\varphi}={\omega}_{{c}_{1},i,S}$, as required.
Generalised MWC formula
The original MWC formula calculates the binding curve, or fractional saturation, of the two-conformation model as a function of ligand concentration $x$ (Monod et al., 1965). Here, we do the same for an arbitrary allostery graph, $A$. Let $s=\mathrm{\#}(S)$. The fractional saturation of $A$ is given by the average binding,
normalised to the number of binding sites, $n$. By the coarse-graining formula in Equation 42, we can rewrite the fractional saturation as
The probability, ${\text{Pr}}_{S}({A}^{\varphi})$, can be calculated using Equation 33, which requires the quantities ${\mu}_{S}({A}^{\varphi})$. These can in turn be calculated by the path formula in Equation 30. We can choose the path in ${A}^{\varphi}$ to use the independent parameters introduced above. Let $S=\{{i}_{1},\mathrm{\cdots},{i}_{s}\}$, where ${i}_{1}<\mathrm{\cdots}<{i}_{s}$. Making use of Equation 44, we see that
Equation 51 can be rewritten in terms of the nondimensional effective HOCs, but it is simpler for our purposes to use instead the effective association constants, ${K}_{i,S}^{\varphi}$. The dependence on $x$ in Equation 51 shows that average binding is given by the logarithmic derivative of the partition function, $\mathrm{\Psi}({A}^{\varphi})$, so the fractional saturation can be written as
With this in mind, Equation 51 shows that the partition function can be written as a polynomial in $x$,
Finally, the ${K}_{i,S}^{\varphi}$ can be determined as averages over the horizontal subgraph of empty conformations using Equation 48. In this way, the fractional saturation in Equation 52 is ultimately determined by the independent parameters of $A$, giving rise thereby to a generalised MWC formula that is valid for any allostery graph. We explain below how the classical MWC formula is recovered using this procedure.
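Equation 52 can be checked numerically by comparing the direct average of bound sites with the logarithmic derivative of the partition function. A sketch with arbitrary positive coefficients standing in for the products of effective association constants in Equation 51:

```python
import numpy as np
from itertools import combinations

n = 3
rng = np.random.default_rng(1)
subsets = [frozenset(c) for r in range(n + 1)
           for c in combinations(range(1, n + 1), r)]
# mu_S(A^phi) = a_S * x^|S|, with arbitrary positive a_S and a_empty = 1
a = {S: (1.0 if not S else rng.uniform(0.1, 10.0)) for S in subsets}

def Psi(x):
    # partition function of A^phi: a polynomial in x
    return sum(a[S] * x**len(S) for S in subsets)

def saturation(x):
    # direct average of the number of bound sites, normalised by n
    return sum(len(S) * a[S] * x**len(S) for S in subsets) / (n * Psi(x))

# Equation 52: saturation = (x / n) * d(log Psi)/dx
x, h = 0.8, 1e-6
log_deriv = (np.log(Psi(x + h)) - np.log(Psi(x - h))) / (2 * h)
assert np.isclose(saturation(x), x * log_deriv / n, atol=1e-6)
```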
Effective HOCs for MWClike models
Proof of Equation 19 and related work
Let $A$ be an allostery graph with ligand binding to $n$ sites which are independent and identical in each conformation. Because of independence, ${\omega}_{{c}_{k},i,S}=1$, so that ${K}_{{c}_{k},i,S}={K}_{{c}_{k},i,\mathrm{\varnothing}}$ does not depend on $S$; because the sites are identical, ${K}_{{c}_{k},i,S}$ does not depend on $i$. Hence, we may write ${K}_{{c}_{k},i,S}={K}_{{c}_{k}}$ and the labels on the binding edges of the vertical subgraph ${A}^{{c}_{k}}$ are all given by ${K}_{{c}_{k}}$. It follows from Equation 30 that ${\mu}_{S}({A}^{{c}_{k}})={({K}_{{c}_{k}})}^{s}$, where $s=\mathrm{\#}(S)$. Equation 49 then tells us that ${\omega}_{i,S}^{\varphi}$ also depends only on $s$, so that we can write it as ${\omega}_{s}^{\varphi}$, and Equation 49 simplifies to
which gives Equation 19.
If we consider the effective association constant instead of the effective HOC, then, with the same assumptions as above, Equation 48 tells us that
Suppose that only two conformations, $R$ and $T$, are present. Let ${\ell}_{\mathrm{eq}}({c}_{R}\to {c}_{T})=L$ and write ${K}_{{c}_{T}}$ and ${K}_{{c}_{R}}$ as ${K}_{T}$ and ${K}_{R}$, respectively. Then, for any random variable on conformations, ${X}_{{c}_{k}}$, the average is given by $\langle {X}_{{c}_{k}}\rangle =({X}_{{c}_{R}}+{X}_{{c}_{T}}L)/(1+L)$. Hence,
which is the formula for the (s + 1)th ‘intrinsic binding constant’ given by Gruber and Horovitz, 2018, Equation (2.10). In their analysis, the word ‘intrinsic’ corresponds to our ‘effective’.
We can use Equation 54 to work out what the generalised MWC formula derived above yields for the classical MWC model. Substituting Equation 54 in Equation 51, the intermediate terms in the product cancel out to leave,
in which the righthand side depends only on $s=\mathrm{\#}(S)$. Collecting together subsets of the same size, the partition function of ${A}^{\varphi}$ may be written as
It then follows from Equation 52 that the fractional saturation is given by
If we set $\alpha =x{K}_{R}$ and $c\alpha =x{K}_{T}$, this gives, for the fractional saturation,
which recovers the classical MWC formula in the notation of Monod et al., 1965, Equation 2.
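This recovery can be confirmed numerically by enumerating the two-conformation ensemble directly. A sketch with hypothetical parameter values:

```python
import math

# Check that direct enumeration of the two-conformation ensemble
# reproduces the classical MWC formula (Monod et al., 1965, Equation 2).
n, L, KR, KT, x = 4, 2.5, 1.0, 8.0, 0.3
alpha, c = x * KR, KT / KR

Z = bound = 0.0
for weight, K in ((1.0, KR), (L, KT)):       # R states, then T states
    for s in range(n + 1):
        w = weight * math.comb(n, s) * (x * K) ** s
        Z += w
        bound += s * w
Y_direct = bound / (n * Z)

Y_mwc = (alpha * (1 + alpha) ** (n - 1)
         + L * c * alpha * (1 + c * alpha) ** (n - 1)) \
        / ((1 + alpha) ** n + L * (1 + c * alpha) ** n)
assert math.isclose(Y_direct, Y_mwc)
```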
Proof of Equation 20
The following result is likely to be known in other contexts, but we have not been able to find mention of it.
Lemma 3
Suppose that $X$ is a positive random variable, $X>0$, over a finite probability distribution. If $s\ge 1$, the following moment inequality holds,
with equality if, and only if, $X$ is constant over the distribution.
Proof: Suppose that the states of the probability space are indexed by $1\le i\le m$ and that ${p}_{i}$ denotes the probability of state $i$. Then,
The quantity ${\alpha}_{s}=\langle {X}^{s+1}\rangle \langle {X}^{s-1}\rangle -{\langle {X}^{s}\rangle}^{2}$ can then be written as
Collecting together terms in ${p}_{i}{p}_{j}$, we can rewrite this as
Note that the terms corresponding to $i=j$ yield $({X}_{i}^{s+1}{X}_{i}^{s-1}-{X}_{i}^{s}{X}_{i}^{s}){p}_{i}^{2}=0$ and so do not contribute to Equation 57. Choose any pair $1\le i\le m$ and $i<j\le m$ and let ${X}_{j}=\mu {X}_{i}$. Then, the coefficient of ${p}_{i}{p}_{j}$ in Equation 57 becomes
Now, $1-2\mu +{\mu}^{2}={(\mu -1)}^{2}\ge 0$ for $\mu \in \mathbf{R}$, with equality if, and only if, $\mu =1$. Since $X>0$ by hypothesis, $\mu >0$, so the coefficient of ${p}_{i}{p}_{j}$ is positive unless $\mu =1$. Hence, ${\alpha}_{s}>0$ unless ${X}_{i}={X}_{j}$ whenever $1\le i\le m$ and $i<j\le m$, which means that $X$ is constant over the distribution. Of course, if $X$ is constant, then clearly ${\alpha}_{s}=0$ for all $s\ge 1$. The result follows.
Corollary 2
If $A$ is an MWC-like allostery graph, its effective HOCs satisfy
with equality at any stage if, and only if, ${K}_{{c}_{k}}$ is constant over ${A}_{\mathrm{\varnothing}}$.
Proof: It follows from Equation 53 that we can rewrite the effective HOCs recursively as
Since ${\omega}_{0}^{\varphi}=1$, the result follows by recursively applying Lemma 3 to $X={K}_{{c}_{k}}>0$. Equation 58 gives Equation 20.
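Lemma 3 and the chain of inequalities in Corollary 2 can be spot-checked by Monte Carlo sampling. A sketch, taking ${\omega}_{s}^{\varphi}=\langle {K}^{s+1}\rangle /(\langle {K}^{s}\rangle \langle K\rangle )$ as in Equation 53:

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(100):
    m = int(rng.integers(2, 6))
    p = rng.dirichlet(np.ones(m))          # distribution over conformations
    X = rng.uniform(0.1, 10.0, size=m)     # positive K_{c_k} values
    avg = lambda f: float(np.dot(f, p))
    # Lemma 3: <X^{s+1}><X^{s-1}> >= <X^s>^2
    for s in range(1, 5):
        assert avg(X**(s + 1)) * avg(X**(s - 1)) >= avg(X**s)**2 * (1 - 1e-12)
    # effective HOCs of the MWC-like graph form a nondecreasing chain
    omega = [avg(X**(s + 1)) / (avg(X**s) * avg(X)) for s in range(5)]
    assert np.isclose(omega[0], 1.0)
    assert all(omega[s] >= omega[s - 1] * (1 - 1e-12) for s in range(1, 5))
```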
Negative effective cooperativity
We consider an allostery graph $A$ with two conformations and two sites, in which binding is independent but not identical, so that the association constants differ between sites. Let ${K}_{{c}_{k},1,\mathrm{\varnothing}}={K}_{{c}_{k},1}$ and ${K}_{{c}_{k},2,\mathrm{\varnothing}}={K}_{{c}_{k},2}$, for $k=1,2$. Since the sites are independent, ${\omega}_{{c}_{k},1,\{2\}}=1$, so that ${K}_{{c}_{k},1,\{2\}}={K}_{{c}_{k},1}$, for $k=1,2$. It follows from Equation 30—see also Scheme 1—that
Let λ be the single equilibrium label in the horizontal subgraph of empty conformations,
It follows from Equations 30 and 33—see also the similar calculation in Scheme 3—that ${\text{Pr}}_{{c}_{1}}({A}_{\mathrm{\varnothing}})=1/(1+\lambda )$ and ${\text{Pr}}_{{c}_{2}}({A}_{\mathrm{\varnothing}})=\lambda /(1+\lambda )$. We know from Equation 49 that
and using the identifications above, we see that
Substituting and simplifying, we find that
The first and last terms are the same in the numerator and denominator, so it follows that ${\omega}_{1,\{2\}}^{\varphi}<1$ if, and only if,
which is to say
The left-hand side factors to give
We see that negative cooperativity arises if, and only if, the sites have opposite patterns of association constants in the two conformations.
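The sign criterion can be checked by random sampling, using the closed form ${\omega}_{1,\{2\}}^{\varphi}=\langle {K}_{{c}_{k},1}{K}_{{c}_{k},2}\rangle /(\langle {K}_{{c}_{k},1}\rangle \langle {K}_{{c}_{k},2}\rangle )$ that follows from Equation 49. A sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

def omega_eff(K11, K12, K21, K22, lam):
    # average over Pr(A_empty) = (1/(1+lam), lam/(1+lam))
    p = np.array([1.0, lam]) / (1.0 + lam)
    avg = lambda a, b: float(a * p[0] + b * p[1])
    return avg(K11 * K12, K21 * K22) / (avg(K11, K21) * avg(K12, K22))

for _ in range(200):
    K11, K12, K21, K22 = rng.uniform(0.1, 10.0, size=4)
    lam = rng.uniform(0.1, 10.0)
    sign = (K11 - K21) * (K12 - K22)
    w = omega_eff(K11, K12, K21, K22, lam)
    if sign < 0:
        assert w < 1.0     # opposite patterns: negative cooperativity
    elif sign > 0:
        assert w > 1.0     # matched patterns: positive cooperativity
```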
Flexibility of allostery
The integrative flexibility theorem
We provide here a complete version of the proof that was sketched in the main text, showing rigorously how the approximation is handled. Some preliminary notation is needed. Recall that if $X$ is a finite set—typically, a subset of $\{1,\mathrm{\cdots},n\}$—then $\mathrm{\#}(X)$ will denote the number of elements in $X$. If $X$ and $Y$ are sets, then $X\backslash Y$ will denote the complement of $Y$ in $X$, $X\backslash Y=\{i\in X,i\notin Y\}$. To control the approximation, we will use the ‘little o’ notation: ${\mathcal{O}}_{u}(1)$ will stand for any quantity which depends on $u$ and for which ${\mathcal{O}}_{u}(1)\to 0$ as $u\to 0$. For instance, $Au+B{u}^{2}$ is ${\mathcal{O}}_{u}(1)$ but $(Au+B{u}^{2})/u$ is ${\mathcal{O}}_{u}(1)$ if, and only if, $A=0$. This notation allows complicated expressions which vanish in the limit as $u\to 0$ to be written concisely. Note that $f(u)\to A$ as $u\to 0$ if, and only if, $f(u)=A+{\mathcal{O}}_{u}(1)$, which is a useful trick for simplifying $f$.
Theorem 1
Suppose $n\ge 1$ and choose ${2}^{n}-1$ arbitrary positive numbers
Given any $\epsilon \mathrm{>}\mathrm{0}$ and $\delta \mathrm{>}\mathrm{0}$, there exists an allosteric conformational ensemble, which has no intrinsic HOC in any conformation, such that
for all corresponding values of $i$ and $S$.
Proof: Recall from the main text that we use an allostery graph $A$ whose conformations are indexed by subsets $T\subseteq \{1,\mathrm{\cdots},n\}$ and denoted ${c}_{T}$, as illustrated in Figure 6. The reference vertex of $A$ is $r=({c}_{\mathrm{\varnothing}},\mathrm{\varnothing})$. For the horizontal subgraph of empty conformations, ${A}_{\mathrm{\varnothing}}$, let ${\lambda}_{T}={\mu}_{{c}_{T}}({A}_{\mathrm{\varnothing}})$. It follows from Equation 30, using μ in place of ρ, that the ${\lambda}_{T}$ determine the equilibrium labels of ${A}_{\mathrm{\varnothing}}$. Keeping in mind that ${\lambda}_{\mathrm{\varnothing}}=1$, the ${\lambda}_{T}$ form a set of ${2}^{n}1$ independent parameters for ${A}_{\mathrm{\varnothing}}$, as explained above. The steadystate probabilities are then given by ${{\textstyle \text{Pr}}}_{{c}_{T}}({A}_{\mathrm{\varnothing}})={\lambda}_{T}/(\sum _{\mathrm{\varnothing}\subseteq X\subseteq \{1,\cdots ,n\}}{\lambda}_{X})$ (Equation 35).
Let ${\kappa}_{1},\mathrm{\cdots},{\kappa}_{n}>0$ be positive quantities whose values we will subsequently choose. We assume that all intrinsic HOCs are one and, for any binding microstate $S\subseteq \{1,\mathrm{\cdots},n\}$, we set
If ${c}_{T}$ is a conformation and $S\subseteq \{1,\mathrm{\cdots},n\}$ is a binding microstate, it follows from Equation 60 that
After coarse graining, we can calculate effective association constants and effective HOCs using the formulas in Equations 48 and 49. Let $S$ be a binding microstate and $i\notin S$. Using Equation 48 and Equations 60 and 61,
Letting $\epsilon \to 0$, we can use the trick described above to rewrite this as
Equation 62 is the more rigorous version of Equation 22. It follows from Equation 62, using the same trick to reorganise the terms which are ${\mathcal{O}}_{\epsilon}(1)$, that the effective HOCs are
Equation 63 is the more rigorous version of Equation 23. We see that the effective HOCs are independent of the quantities ${\kappa}_{i}$ and depend only on the parameters, ${\lambda}_{T}$, of the horizontal subgraph ${A}_{\mathrm{\varnothing}}$.
We can now specify the ${\lambda}_{T}$. If $T=\{{i}_{1},\mathrm{\cdots},{i}_{k}\}$, where ${i}_{1}<{i}_{2}<\mathrm{\cdots}<{i}_{k}$, we set
where each of the α quantities is given by hypothesis. Note that the exponent of δ depends only on the size of $T$ and not on which elements $T$ contains. Equation 64 is illustrated in Figure 6.
It follows from Equation 64 that, given any $X\subseteq \{1,\mathrm{\cdots},n\}$,
Using this, we see that the main term in Equation 63 has the form
It follows from Equation 64 that, when $i<S$, ${\lambda}_{S\cup \{i\}}={\alpha}_{i,S}{\lambda}_{S}\delta $, so using the trick above for reorganising the ${\mathcal{O}}_{\delta}(1)$ terms, we can rewrite Equation 65 as ${\alpha}_{i,S}+{\mathcal{O}}_{\delta}(1)$. Substituting back into Equation 63, we see that, when $i<S$,
Equation 66 is the more rigorous version of Equation 26.
With the choice of ${\lambda}_{T}$ given by Equation 64, we can return to Equation 62 with $S=\mathrm{\varnothing}$ and define
Substituting back into Equation 62 with $S=\mathrm{\varnothing}$, we see that
Equation 67 is the more rigorous version of Equation 27. The result follows from Equations 66, 67.
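The construction in the proof can be sketched computationally for two sites. The code below is illustrative: the ${\lambda}_{T}$ scaling is a hand-picked simplification for this small case, not Equation 64 verbatim, and the ${\kappa}_{i}$ are set to 1 since they cancel in the HOCs. It checks that the effective HOC approaches a hypothetical target α:

```python
import numpy as np
from itertools import combinations

# Conformation c_T binds tightly (1/eps) at sites in T, weakly (eps)
# elsewhere; lambda_T are the equilibrium labels of A_empty, scaled by
# powers of delta so that the effective HOC targets alpha.
n, alpha, eps, delta = 2, 7.0, 1e-6, 1e-4
sites = (1, 2)
subsets = [frozenset(c) for r in range(n + 1) for c in combinations(sites, r)]

K = {T: {i: (1.0 / eps if i in T else eps) for i in sites} for T in subsets}
lam = {frozenset(): 1.0, frozenset({1}): delta, frozenset({2}): delta,
       frozenset({1, 2}): alpha * delta**2}

def K_eff(i, S):
    # Equation 48 at x = 1 (the HOCs are independent of x)
    num = sum(lam[T] * K[T][i] * np.prod([K[T][j] for j in S])
              for T in subsets)
    den = sum(lam[T] * np.prod([K[T][j] for j in S]) for T in subsets)
    return num / den

omega = K_eff(1, frozenset({2})) / K_eff(1, frozenset())
assert np.isclose(omega, alpha, rtol=0.01)   # hits the target as eps, delta -> 0
```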
Construction of Figure 8
We implemented in a Mathematica notebook the proof strategy in Theorem 1 for any number $n$ of sites. The notebook takes as input parameters the ${\beta}_{i}$ and the ${\alpha}_{i,S}$ for $i<S$ in the statement of the theorem, along with specified values for the quantities ε and δ. It produces as output the effective bare association constants, ${K}_{i,\mathrm{\varnothing}}^{\varphi}$, and effective HOCs, ${\omega}_{i,S}^{\varphi}$ for $i<S$, as given by Theorem 1. The values of ε and δ can then be adjusted so that the calculated ${K}_{i,\mathrm{\varnothing}}^{\varphi}$ and ${\omega}_{i,S}^{\varphi}$ are as close as required to the ${\beta}_{i}$ and ${\alpha}_{i,S}$. The notebook is available on request.
Figure 8 shows the results from using this notebook on three examples, chosen by hand to illustrate different patterns of effective bare association constants and effective HOCs. The actual numerical values are listed below.
The colour names used here refer to the colour code for the three examples in Figure 8. The maximum error was calculated as the larger of ${\mathrm{max}}_{i}\left|\frac{{\beta}_{i}-{K}_{i,\mathrm{\varnothing}}^{\varphi}}{{\beta}_{i}}\right|$ and ${\mathrm{max}}_{i,S}\left|\frac{{\alpha}_{i,S}-{\omega}_{i,S}^{\varphi}}{{\alpha}_{i,S}}\right|$. The quantities δ and ε were adjusted to make the maximum error less than 0.01.
The binding curves for each example (Figure 7B) show the dependence on concentration of average binding to site $i$ (coloured curves), which can be written in terms of the coarsegrained graph, ${A}^{\varphi}$, in the form
Here, ${\chi}_{i}(S)$ is the indicator function for $i$ being in $S$,
Since the size of $S$, which was denoted by $s$ above, is given by $s={\sum}_{1\le i\le n}{\chi}_{i}(S)$, we see from Equation 50 that the fractional saturation (Figure 7B, black curves) is the sum of the average bindings over all sites, normalised to the number of sites, $n$.
| | Maroon | | Orange | | Red | |
|---|---|---|---|---|---|---|
| | $\delta ={10}^{-7}$, $\epsilon ={10}^{-12}$ | | $\delta ={10}^{-7}$, $\epsilon ={10}^{-14}$ | | $\delta ={10}^{-7}$, $\epsilon ={10}^{-16}$ | |
| $i$ | ${\beta}_{i}$ | ${K}_{i,\mathrm{\varnothing}}^{\varphi}$ | ${\beta}_{i}$ | ${K}_{i,\mathrm{\varnothing}}^{\varphi}$ | ${\beta}_{i}$ | ${K}_{i,\mathrm{\varnothing}}^{\varphi}$ |
| 1 | 1.5777 | 1.5776 | 0.031353 | 0.031353 | 0.21257 | 0.21257 |
| 2 | 24.013 | 24.014 | 0.011104 | 0.011104 | 0.84301 | 0.84301 |
| 3 | 89.958 | 89.959 | 13.195 | 13.195 | 9.8514 | 9.8514 |
| 4 | 0.015685 | 0.015685 | 52.437 | 52.437 | 27.000 | 27.000 |
| $i,S$ | ${\alpha}_{i,S}$ | ${\omega}_{i,S}^{\varphi}$ | ${\alpha}_{i,S}$ | ${\omega}_{i,S}^{\varphi}$ | ${\alpha}_{i,S}$ | ${\omega}_{i,S}^{\varphi}$ |
| $1,\{2\}$ | 0.084815 | 0.0848456 | 1.0801 | 1.0801 | 50.455 | 50.454 |
| $1,\{3\}$ | 3.7432 | 3.7432 | 34.768 | 34.768 | 0.016359 | 0.016401 |
| $1,\{4\}$ | 0.044245 | 0.044264 | 0.032668 | 0.032669 | 0.60018 | 0.60018 |
| $2,\{3\}$ | 30.240 | 30.239 | 4.0683 | 4.0683 | 7.2944 | 7.2944 |
| $2,\{4\}$ | 0.074064 | 0.074083 | 1.5098 | 1.5098 | 0.010809 | 0.010809 |
| $3,\{4\}$ | 9.2687 | 9.2685 | 0.025183 | 0.025184 | 0.012613 | 0.012613 |
| $1,\{2,3\}$ | 4.0933 | 4.0933 | 0.31238 | 0.31238 | 57.783 | 57.783 |
| $1,\{2,4\}$ | 15.687 | 15.683 | 0.70016 | 0.70016 | 0.025618 | 0.025623 |
| $1,\{3,4\}$ | 0.013335 | 0.013349 | 0.13042 | 0.13056 | 4.4450 | 4.4450 |
| $2,\{3,4\}$ | 0.082851 | 0.082892 | 2.5235 | 2.5235 | 0.13584 | 0.13584 |
| $1,\{2,3,4\}$ | 6.5843 | 6.5825 | 0.017404 | 0.017407 | 0.063587 | 0.063833 |
| Max. error | 0.00105 | | 0.00105 | | 0.00386 | |
Allosteric ensembles for Hill functions
Construction of Figure 9
As described in the main text, we considered an allosteric ensemble with four conformations and six ligand binding sites with no intrinsic cooperativity in any conformation. Accordingly, the bare association constants, ${K}_{{c}_{k},i,\mathrm{\varnothing}}$, constitute 6 free parameters for each conformation ${c}_{k}$, $k=1,\mathrm{\cdots},4$, giving 24 free parameters. A further 3 free parameters arise for the independent equilibrium labels of the horizontal subgraph of empty conformations, ${A}_{\mathrm{\varnothing}}$, giving 27 free parameters in total. The association constants were restricted to lie in the range $[{10}^{-4},{10}^{4}]$ and the equilibrium labels in the range $[{10}^{-6},{10}^{6}]$. To compare the binding function, $f(u)$, to the Hill functions ${\mathscr{H}}_{h}(x)$, the concentration variable, $u$, was normalised to its half-maximal value, ${u}_{0.5}$, for which $f({u}_{0.5})=0.5$ (Estrada et al., 2016). The normalised binding function, $g(x)=f(x{u}_{0.5})$, then satisfies $g(1)=0.5$. We followed a two-step procedure to find binding functions which approximated Hill functions. The algorithm is publicly available on GitHub (github.com/rosamc/allosterypaper2021; copy archived at swh:1:rev:386b23961732962e8ac8390322c9c6e6dfc39168), and we describe it here in general terms. For step 1, we used the measures of position, $\gamma (g)$, and steepness, $\rho (g)$, of a normalised binding function, $g(x)$, introduced previously (Estrada et al., 2016). The steepness of $g(x)$ is the maximum value of its derivative,
and the position of $g$ is the normalised concentration at which that maximum occurs,
The combination of these two measures provides an estimate of the shape of the binding function (Estrada et al., 2016). Starting with a seed for random number generation, we randomly sampled parameter values independently and logarithmically within the ranges specified above to find parameter sets for which $\gamma (g)\in [0.5,1.2]$ and $\rho (g)\in [0.5,1.3]$, which ensures that $g$ is not too far in position-steepness space from the Hill functions (Estrada et al., 2016, Supplementary Information, §6.1). This narrows down the search space substantially. Once such a parameter set has been found, step 2 of the procedure followed a Monte Carlo optimisation as follows. This algorithm was fine-tuned by hand, and full details are available with the source code on GitHub. The error between the selected binding function $g$ and the appropriate Hill function, ${\mathscr{H}}_{h}$, was measured as the average absolute difference between the functions at 1000 logarithmically spaced points between 0.0005 and 5,
where $u={10}^{0.0003003}$. Starting from the initial parameter set, ${\theta}_{0}$, as selected in the first step, we randomly chose each parameter with probability $p$ and, for each chosen parameter, we randomly picked a new value ${v}_{1}$ logarithmically in the range $[m{v}_{0},M{v}_{0}]$, where ${v}_{0}$ is the existing parameter value. If the chosen value fell outside the appropriate parameter range, we took ${v}_{1}$ to be the limit of the range. Having done this for each parameter to generate a new parameter set, ${\theta}_{1}$, we chose ${\theta}_{1}$ for the next step of the iteration if $\delta ({g}_{{\theta}_{1}},{\mathscr{H}}_{h})<\delta ({g}_{{\theta}_{0}},{\mathscr{H}}_{h})$ and, if not, we chose ${\theta}_{1}$ with probability β; otherwise, we retained ${\theta}_{0}$. The algorithm parameters $p$, $m$ and $M$ were adjusted so that $p$ decreased and the range $[m,M]$ narrowed as the error decreased. Iterations were continued to an upper limit of $5\times {10}^{6}$, or until a parameter set was found for which $\delta ({g}_{\theta},{\mathscr{H}}_{h})<0.0002$. Step 1 and iterations of step 2 were undertaken with $\beta =0.25,0.5$ and 0.75 for each of 290 initial seeds for random number generation, and the examples shown in Figure 9 were selected from among those with the least error. For Hill coefficient $h=4$, we had to relax the error bound slightly and the two examples shown in Figure 9 satisfy $0.0003<\delta ({g}_{\theta},{\mathscr{H}}_{h})<0.0004$.
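The step-2 search can be illustrated with a toy stand-in for the binding function (this is a sketch, not the authors' GitHub code; the model ${g}_{\theta}$ and all settings here are hypothetical):

```python
import numpy as np

# Metropolis-style search: perturb the parameter on a log scale, accept
# improvements, and accept worse moves with probability beta.
rng = np.random.default_rng(4)
h = 2                                        # target Hill coefficient
xs = np.logspace(np.log10(0.0005), np.log10(5.0), 1000)
hill = xs**h / (1 + xs**h)                   # H_h, with H_h(1) = 0.5

def err(theta):
    g = xs**theta / (1 + xs**theta)          # toy g, with g(1) = 0.5
    return float(np.mean(np.abs(g - hill)))

theta, beta = 0.5, 0.25                      # initial guess; accept. prob.
best = theta
for _ in range(2000):
    new = theta * np.exp(rng.uniform(-0.1, 0.1))   # log-scale perturbation
    if err(new) < err(theta) or rng.random() < beta:
        theta = new
    if err(theta) < err(best):
        best = theta
assert err(best) < 0.01                      # the search approaches h = 2
```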
Data availability
No data has been generated or acquired for this study, which is purely theoretical.
Article and author information
Funding
National Science Foundation (1462629)
 John W Biddle
 Jeremy Gunawardena
National Institutes of Health (GM122928)
 Rosa MartinezCorral
European Molecular Biology Organization (ALTF 683-2019)
 Rosa MartinezCorral
National Science Foundation (DGE-1144152)
 Felix Wong
James S. McDonnell Foundation
 Felix Wong
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We are indebted to Hernan Garcia and an anonymous reviewer for questions and suggestions which helped to improve this paper. JWB and JG were supported by US National Science Foundation (NSF) Award #1462629. RMC was supported by US National Institutes of Health award #GM122928 and EMBO Fellowship ALTF 683-2019. FW was supported by the James S. McDonnell Foundation and NSF Graduate Research Fellowship #DGE-1144152.
Copyright
© 2021, Biddle et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.