Identification of ligand-specific G protein-coupled receptor states and prediction of downstream efficacy via data-driven modeling

  1. Oliver Fleetwood
  2. Jens Carlsson
  3. Lucie Delemotte (corresponding author)
  1. Science for Life Laboratory, Department of Applied Physics, KTH Royal Institute of Technology, Sweden
  2. Science for Life Laboratory, Department of Cell and Molecular Biology, Uppsala University, Sweden

Decision letter

  1. Toby W Allen
    Reviewing Editor; RMIT University, Australia
  2. José D Faraldo-Gómez
    Senior Editor; National Heart, Lung and Blood Institute, National Institutes of Health, United States

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

This manuscript reports advanced molecular dynamics simulations to describe β2 adrenergic receptor modulation, expanding significantly on existing knowledge. The study has made use of an atomistic string method to measure the effects of agonists, antagonists and inverse agonists and to understand how ligands affect GPCR activity. The authors have presented sufficient analysis to demonstrate statistical significance of the data and have made connections to experimental measurements that are well described in the revised manuscript. Overall, this is a high-level computational study of biological significance that will be impressive to many eLife readers.

Decision letter after peer review:

Thank you for submitting your article "Identification of ligand-specific G protein-coupled receptor states and prediction of efficacy via data-driven modeling" for consideration by eLife. Your article has been reviewed by three peer reviewers, one of whom is a member of our Board of Reviewing Editors, and the evaluation has been overseen by José Faraldo-Gómez as the Senior Editor. The reviewers have opted to remain anonymous.

Based on the reviews received, which are enclosed below, as well as a follow-up discussion amongst reviewers and editors, we regret to inform you that we cannot publish your manuscript in its current form. However, the reviewers have agreed to re-evaluate a revised version of the manuscript that convincingly addresses the concerns raised. While we encourage you to submit a revision, please note that a positive outcome is not guaranteed.

As the editors have judged that your manuscript is of interest, but as described below that additional experiments are required before it is published, we would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). First, because many researchers have temporarily lost access to their labs, we will give authors as much time as they need to submit revised manuscripts. We are also offering, if you choose, to post the manuscript to bioRxiv (if it is not already there) along with this decision letter and a formal designation that the manuscript is "in revision at eLife". Please let us know if you would like to pursue this option. (If your work is more suitable for medRxiv, you will need to post the preprint yourself, as the mechanisms for us to do so are still in development.)

The reviewers have appreciated many aspects of the string method solution for the allosteric modulation of the β2 adrenergic receptor, but were concerned about the reliability of the simulations, their connections to experiments and the absence of discussion on the relationship to past GPCR activation studies. Concern is expressed about the use of Emax as the sole proxy for activated conformation, as well as a lack of connection of free energy calculations to experiments. A critical point to address is the absence of a description of the statistical reliability and error estimates from the string method solutions, as well as the dependence of the converged solutions on the starting conformation. This necessitates at least one demonstration that reliable results emerge with a very different initial structure. A significantly revised manuscript would need to adequately address the above concerns to the satisfaction of the reviewers.

Essential revisions:

The full reviews have been included below. Essential revisions include:

* Proof of independence of string solutions on starting structure.

* Proof of statistical reliability and error estimates.

* Improved connections to experiments that do not rely solely on Emax values.

* Improved relationship to past studies.

Reviewer #1:

This manuscript reports advanced MD simulation to describe the allosteric modulation of the β2 adrenergic receptor, expanding on the authors' recent simulations (Biochemistry 2020, 59:880-891) that explored communication of ligand binding to the G protein-binding site via "microswitches" within the protein. This study uses the same string method to examine the effects of full and biased agonists, antagonists and inverse agonists using previously solved X-ray structures. The study also calls on a series of data-based analysis methods to seek answers to the question of how different ligands communicate their changes and affect GPCR activity. Observation of stabilization of active states for agonists and inactive states for other ligands is an important achievement for MD simulation, even if reproducing known experimental results. Also, characterizing how the distribution of kinetically stabilized (active-like) states is controlled by ligands, and correlations between microswitch expectation values and cAMP response experiments are important. However, the manuscript is highly technical and jargon-based such that many readers will not follow the analysis. The statistical reliability and dependence of the converged string solutions on starting conformation do not appear to have been tested. Finally, the relationship between this and the previous manuscript (Fleetwood et al., 2020), as well as how the results distinguish themselves from existing knowledge in the field, could be made clearer.

String methods are notorious for depending on starting structure as well as the number and choice of collective variables, and may converge slowly on the optimal path, if at all. The authors do not show plots of convergence (how each PMF changes over #iterations), and do not appear to undertake tests for reproducibility. i.e. Starting from very different initial structures to show that the same final pathways emerge, leading to consistent free energy profiles. For example, starting again with an initial structure for an inverse agonist instead of one for an agonist and show the same results are computed. I note that in the Discussion it is said that several simulation replicas were used to reduce error. But to what does this apply (apparently not to the swarms)? Also, collective variables (CVs) are those defined in the previous study (Fleetwood et al., 2020) but what if the set of CVs is changed or the number of CVs is reduced/increased? While I do not suggest redoing with a different set, how does the reader judge the robustness of the CV-reliant results?

The method for free energy calculation uses a transition matrix on a CV grid with stationary solution. In the subsection “Free energy estimation”, the authors cite Pan, Sezer and Roux, 2008 (and Fleetwood et al., 2020), but it is not clear that Pan, Sezer and Roux, 2008, did this, and in fact I think the first to do this from a swarms of trajectories solution was Lev et al., 2017 (see also Flood, et al., 2019), not cited here.

Having computed free energy projections along collective variables, receptor stability in relation to efficacy instead relies on estimation values for variables and not free energies, and the justifications for this could be improved. The authors write that the width and depth of the basins determine the most probable state projected into that space. This sentence is not clear, but its meaning is important, because ultimately the relative values of free energies are discussed. If the 2D maps are valid projections of the full configurational space (albeit with sampling guided/biased by the string), then Boltzmann integrals over each basin should yield a valid equilibrium constant. I presume the concern is that projections along different pairs of CVs can lead to different apparent free energies, because different CVs map out different proportions of phase space, and potentially envelop multiple states of the system, but does this mean a Boltzmann integral over a site is not a true thermodynamic quantity? In the Discussion, the authors return to add that examination of one projection can overlook what is happening in other coordinates/switches, but more discussion/justification is needed. Related, the authors state that they have "accurately captured the relative stability of states", but this is confusing given the relative stability of states from the maps appears to have been discredited.

Instead of using free energies, the authors correlate "expectation values" of their CVs to experiments in the bottom of Figure 2. I assume a plot against free energy was abandoned as it was not working as planned? Ideally one would want to see free energies plotted against experimental efficacy, because estimation values of variables such as TM5…, may correlate to efficacy, but not uniquely map to shifting state equilibrium.

Many analysis tasks were completed by existing packages for which the meanings are not obvious, e.g. Demystifying, Scikit-learn… Materials and methods, even if accepted, deserve a sentence to explain and motivate, e.g. CVs were short inverse inter-residue distances used to train a restricted Boltzmann machine (the principles of this machine could be explained simply). The subsection “Data driven analysis reveals that ligands stabilize unique states” is highly technical and not well explained. While many will know about PCA, fewer will know about MDS and t-SNE. The ability to identify signaling hotspots in Figure 4 is impressive, leading to a model for allosteric communication. But the machine learning approaches used come across as black-box and require better physical interpretation.

Reviewer #2:

This manuscript describes the use of enhanced sampling molecular dynamics to calculate free energy landscapes for the β2 adrenergic receptor. A particular focus is the conformational dynamics and thermodynamics of microswitches that change conformation upon receptor activation. A variety of ligands are investigated to identify shared and divergent features of receptor conformational modulation. Unbiased methods are shown to identify known conformational switches as key regulators of receptor activation.

Overall the manuscript is interesting and clearly written. Figures are very clear and well presented. The subject matter has been very extensively studied in the GPCR family as a whole and in the β2 receptor in particular, and the results presented here largely align with existing understanding of GPCR activation and conformational regulation by ligands. A major concern is the question of whether the results presented here truly enhance our understanding of GPCR activation, or simply confirm things that are already known. Many groups have published detailed analysis of similar questions to those presented here, including the Dror, Nagarajan, and McCammon groups, among others. There is very little discussion of this prior work, which makes it difficult to see how the present manuscript fits within the broader context of GPCR activation molecular dynamics analysis.

A significant technical concern is the reliance on cAMP Emax values as the sole experimental validation. A prospective experimental test of hypotheses generated here would significantly enhance the manuscript. Barring this, a more extensive comparison of computational results with experimental data is essential in my view. The cAMP pathway is highly amplified, which may confound analysis linking Emax values directly to conformational equilibria. Manglik et al., 2015, presents an actual biophysical analysis of conformational equilibria (albeit with fewer ligands) and comparison to this would be helpful. Do relative energy predictions match these experimental observations?

A smaller issue is that the utility of the results presented here appears to be a bit overstated. For example, it is claimed (Introduction) that the results provide insight into how ligands with specific efficacy profiles can be designed. To support this statement, the authors should present examples of compounds they have designed based on their results, together with experimental data confirming these molecules have the intended efficacy profiles.

Reviewer #3:

This is an interesting paper, and I like to attempt to connect analysis of allostery to function. However, I'm extremely concerned about statistical uncertainty - it's not really discussed, and it would be easy to chalk all of the results up to limited sampling. It will be important for the authors to demonstrate this isn't the case.

Basically, my concern is that there's essentially no mention of statistical uncertainty or convergence anywhere in the document. One of the major claims is that the different agonists each populate a distinct substate - this is an incredibly important and interesting observation if true, but could also be explained by saying each simulation wandered in its own space and didn't have time to explore anywhere else. If it were run again, it might wander into a totally different place. I don't have a great feel for how rapidly swarms explore, but I do know the total sampling time of 1.4 µs per ligand sounds awfully small. There are examples in the literature showing that conventional simulations several times this length are not converged as far as the configurations in the ligand binding pocket go (for example, Leioatts et al., Biophysical Journal, 2015, or some of the work from Dror's group).

I have a couple of ideas for how the authors could convince me this isn't just a sampling artifact. The best would probably be to pick one ligand and redo the whole calculation 4 more times, start to finish - looking at the variation between those replicates would be a decent estimate of the uncertainty. Ideally, they'd do this for all of the ligands, but I recognize that's not a reasonable request, which is why I suggest doing it for 1 ligand.

A weaker test would be to break the individual ligand calculations into blocks, somehow - I'm not totally sure how to do it - and show that the blocks are self-similar. If they're all wandering through the whole of the blob in Figure 3, you're more likely to be ok. If each block populates a discrete chunk of the blob (and the overall swarm data doesn't revisit), then there's a big problem.

I'd want to see error estimates on the free energies in Figure 2, plus a redo of the dimension reduction in Figure 3 to see if the ligands still separate more than the replicates of 1 ligand.

On a more minor note, the use of p-values in the subsection “Ligands control efficacy by reshaping microswitches’ probability distributions”, isn't really correct. The point the authors are trying to make is that the computed value doesn't predict the experiment (for good reason), and the correlation coefficients make that point for you.

https://doi.org/10.7554/eLife.60715.sa1

Author response

Reviewer #1:

This manuscript reports advanced MD simulation to describe the allosteric modulation of the β2 adrenergic receptor, expanding on the authors' recent simulations (Biochemistry 2020, 59:880-891) that explored communication of ligand binding to the G-protein binding site via "microswitches" within the protein. This study uses the same string method to examine the effects of full and biased agonists, antagonists and inverse agonists using previously solved X-ray structures. The study also calls on a series of data-based analysis methods to seek answers to the question of how different ligands communicate their changes and affect GPCR activity. Observation of stabilization of active states for agonists and inactive states for other ligands is an important achievement for MD simulation, even if reproducing known experimental results. Also, characterizing how the distribution of kinetically stabilized (active-like) states is controlled by ligands, and correlations between microswitch expectation values and cAMP response experiments are important. However, the manuscript is highly technical and jargon-based such that many readers will not follow the analysis. The statistical reliability and dependence of the converged string solutions on starting conformation do not appear to have been tested. Finally, the relationship between this and the previous manuscript (Fleetwood et al., 2020), as well as how the results distinguish themselves from existing knowledge in the field, could be made clearer.

We thank the reviewer for this assessment. Regarding the high technical level, we have tried to make simplifications and clarifications throughout the revised manuscript. For example, we now give more background on what the software and machine learning tools do. We have also tried to better define the connection between this manuscript and the previous work. We hope the changes appear satisfactory.

String methods are notorious for depending on starting structure as well as the number and choice of collective variables, and may converge slowly on the optimal path, if at all. The authors do not show plots of convergence (how each PMF changes over #iterations), and do not appear to undertake tests for reproducibility. i.e. Starting from very different initial structures to show that the same final pathways emerge, leading to consistent free energy profiles. For example, starting again with an initial structure for an inverse agonist instead of one for an agonist and show the same results are computed. I note that in the Discussion it is said that several simulation replicas were used to reduce error. But to what does this apply (apparently not to the swarms)? Also, collective variables (CVs) are those defined in the previous study (Fleetwood et al., 2020), but what if the set of CVs is changed or the number of CVs is reduced/increased? While I do not suggest redoing with a different set, how does the reader judge the robustness of the CV-reliant results?

In addition to the arguments listed in the cover letter, we point out that the previously published apo and BI-167107-bound string simulations were performed with a different set of Collective Variables (CVs) and initiated from different pathways (Fleetwood et al., 2020) than the new ligand-receptor simulations presented in this manuscript. Despite the differences in simulation set-up, it is evident from Figure 2 that the free energy landscapes for the agonists adrenaline and salmeterol are more similar to BI-167107 than to the other ligands. The same holds for the apo free energy landscape, which is strikingly similar to the inverse agonists and antagonists. If the starting conditions and CVs alone defined the outcome, this would not be the case.

Regarding the convergence of the PMF, we previously performed cross validation over the string iterations for the TM5 bulge microswitch expectation values (Supplementary Table 2 in the original manuscript). The microswitch expectation values were consistent between the first and second halves of the last 224 string iterations, which indicates that these values have converged and were sampled from an equilibrium process. Additionally, samples drawn from an unconverged distribution would lead to a large standard deviation and/or statistical outliers in the free energy landscapes. Thus our statistical error analysis and convergence estimation provide strong evidence of convergence.

The method for free energy calculation uses a transition matrix on a CV grid with stationary solution. In the subsection “Free energy estimation”, the authors cite Pan, Sezer and Roux, 2008 (and Fleetwood et al., 2020), but it is not clear that Pan, Sezer and Roux, 2008, did this, and in fact I think the first to do this from a swarms of trajectories solution was Lev et al., 2017 (see also Flood, et al., 2019), not cited here.

The reviewer is indeed correct here. We have updated the citations accordingly in the manuscript.

Having computed free energy projections along collective variables, receptor stability in relation to efficacy instead relies on estimation values for variables and not free energies, and the justifications for this could be improved. The authors write that the width and depth of the basins determine the most probable state projected into that space. This sentence is not clear, but its meaning is important, because ultimately the relative values of free energies are discussed. If the 2D maps are valid projections of the full configurational space (albeit with sampling guided/biased by the string), then Boltzmann integrals over each basin should yield a valid equilibrium constant. I presume the concern is that projections along different pairs of CVs can lead to different apparent free energies, because different CVs map out different proportions of phase space, and potentially envelop multiple states of the system, but does this mean a Boltzmann integral over a site is not a true thermodynamic quantity? In the Discussion, the authors return to add that examination of one projection can overlook what is happening in other coordinates/switches, but more discussion/justification is needed. Related, the authors state that they have "accurately captured the relative stability of states", but this is confusing given the relative stability of states from the maps appears to have been discredited.

We thank the reviewer for this astute set of comments, which we have enjoyed considering and addressing.

To illustrate our point, we highlight Figure 2 and Author response image 1, in which we can certainly say qualitatively that agonists stabilize the active basin. However, due to the issues with projections pointed out by the reviewer and the fact that these energy landscapes are not ideal two-state systems, an integration over state boundaries requires us to, somewhat arbitrarily, define the state boundaries. This will in turn influence any quantitative comparison to experimental values. The expectation values (Figure 2C-F), on the other hand, require no state boundaries, justifying our choice of this methodology.

Author response image 1

That being said, we followed the reviewer’s advice and computed the relative stability of states for the different microswitches by taking the difference in free energy between the active and inactive regions of the free energy landscape (ΔG ∝ −ln K, where K is the equilibrium constant). See the results in Figure 2—figure supplement 8 of the revised manuscript. The chosen state boundaries are depicted as vertical dashed lines in Figure 2—figure supplement 7. The correlation between ΔG and cAMP signaling is strong for transmembrane microswitches, which also had a strong correlation between cAMP and their expectation values. Therefore, we see little practical difference between the two methods, but would recommend using the expectation value because it does not require defining arbitrary state boundaries.
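For readers who want to see the mechanics of this estimate, the state integration can be sketched as follows. This is an illustrative Python example with a synthetic one-dimensional landscape; the profile, the boundary value, and all variable names are hypothetical and not taken from the manuscript's data:

```python
import numpy as np

kT = 2.49  # kJ/mol at ~300 K

# Synthetic free energy profile G(x) along a microswitch CV (kJ/mol),
# with two basins, one near x = 0.3 and one near x = 0.8
x = np.linspace(0.0, 1.0, 200)
G = 400.0 * (x - 0.3) ** 2 * (x - 0.8) ** 2

# A state boundary must be chosen (somewhat arbitrarily) to split the CV range
boundary = 0.55
p = np.exp(-G / kT)                  # unnormalized Boltzmann weights
Z_inactive = p[x < boundary].sum()   # partition sum over the "inactive" basin
Z_active = p[x >= boundary].sum()    # partition sum over the "active" basin

K = Z_active / Z_inactive            # equilibrium constant between the states
dG = -kT * np.log(K)                 # relative stability of the two states
```

Shifting `boundary` changes the two partition sums and hence `dG`, which is precisely the boundary-dependence discussed above.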

Instead of using free energies, the authors correlate "expectation values" of their CVs to experiments in the bottom of Figure 2. I assume a plot against free energy was abandoned as it was not working as planned? Ideally one would want to see free energies plotted against experimental efficacy, because estimation values of variables such as TM5…, may correlate to efficacy, but not uniquely map to shifting state equilibrium.

Please see our comments in the previous paragraph. We chose expectation values because they do not require us to set state boundaries. This approach provides a single numerical value which, unlike an entire probability distribution, is easy to correlate with an experimental numerical value. With that in mind, merely taking the average while ignoring the characteristics of the free energy landscapes, such as the shape and locations of the basins, leaves out potentially useful information. For this reason, we included the microswitch free energy landscapes as a basis of discussion together with the correlation plots in Figure 2.
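By contrast, the expectation value needs no boundary at all: it is a single Boltzmann-weighted average over the whole landscape. A minimal sketch, again with a synthetic toy profile rather than the manuscript's actual data:

```python
import numpy as np

kT = 2.49  # kJ/mol

# Toy free energy profile along a microswitch CV, e.g. a distance in nm
cv = np.linspace(0.8, 1.4, 100)   # illustrative CV grid
G = 25.0 * (cv - 1.1) ** 2        # harmonic toy landscape centered at 1.1

w = np.exp(-G / kT)
w /= w.sum()                      # normalized Boltzmann probabilities

expectation = (w * cv).sum()      # <CV>: one number, no state boundaries needed
```

The single number `expectation` can then be correlated directly against an experimental observable such as Emax.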

Many analysis tasks were completed by existing packages for which the meanings are not obvious, e.g. Demystifying, Scikit-learn… Materials and methods, even if accepted, deserve a sentence to explain and motivate, e.g. CVs were short inverse inter-residue distances used to train a restricted Boltzmann machine (the principles of this machine could be explained simply). The subsection “Data driven analysis reveals that ligands stabilize unique states” is highly technical and not well explained. While many will know about PCA, fewer will know about MDS and t-SNE. The ability to identify signaling hotspots in Figure 4 is impressive, leading to a model for allosteric communication. But the machine learning approaches used come across as black-box and require better physical interpretation.

We thank the reviewer for pointing this out. It is easy to take one’s own methodological choice for granted! We have made revisions to clarify our rationale and better describe the tools used. For example, we describe what the software demystifying does when we mention it, and try to explain how we find important microswitches by computing the KL divergence in simpler terms.
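As a rough illustration of the KL-divergence ranking described here (this is not the actual Demystifying pipeline; the feature names and distributions below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_divergence(a, b, bins=30, eps=1e-9):
    """Discrete KL divergence between two 1D samples via shared histograms."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    p, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    p, q = p + eps, q + eps          # avoid log(0) in empty bins
    p, q = p / p.sum(), q / q.sum()  # renormalize to probability vectors
    return float(np.sum(p * np.log(p / q)))

# Two synthetic "inter-residue distance" features sampled under two conditions:
# f1 shifts between conditions (a putative microswitch), f2 does not.
agonist = {"f1": rng.normal(1.0, 0.1, 5000), "f2": rng.normal(2.0, 0.2, 5000)}
inverse = {"f1": rng.normal(1.4, 0.1, 5000), "f2": rng.normal(2.0, 0.2, 5000)}

scores = {k: kl_divergence(agonist[k], inverse[k]) for k in agonist}
# Features with high divergence distinguish the conditions: candidate switches
```

In this toy case `f1` receives a much larger score than `f2`, so it would be flagged as the important feature.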

Reviewer #2:

This manuscript describes the use of enhanced sampling molecular dynamics to calculate free energy landscapes for the β2 adrenergic receptor. A particular focus is the conformational dynamics and thermodynamics of microswitches that change conformation upon receptor activation. A variety of ligands are investigated to identify shared and divergent features of receptor conformational modulation. Unbiased methods are shown to identify known conformational switches as key regulators of receptor activation.

Overall the manuscript is interesting and clearly written. Figures are very clear and well presented. The subject matter has been very extensively studied in the GPCR family as a whole and in the β2 receptor in particular, and the results presented here largely align with existing understanding of GPCR activation and conformational regulation by ligands. A major concern is the question of whether the results presented here truly enhance our understanding of GPCR activation, or simply confirm things that are already known. Many groups have published detailed analysis of similar questions to those presented here, including the Dror, Nagarajan, and McCammon groups, among others. There is very little discussion of this prior work, which makes it difficult to see how the present manuscript fits within the broader context of GPCR activation molecular dynamics analysis.

We appreciate that the reviewer found the manuscript well presented. Following the reviewer’s comments, we have extended the discussion of prior work in the manuscript, as also described in the cover letter. Please also find arguments about the novelty of our work in the same place.

A significant technical concern is the reliance on cAMP Emax values as the sole experimental validation. A prospective experimental test of hypotheses generated here would significantly enhance the manuscript. Barring this, a more extensive comparison of computational results with experimental data is essential in my view. The cAMP pathway is highly amplified, which may confound analysis linking Emax values directly to conformational equilibria. Manglik et al., 2015, presents an actual biophysical analysis of conformational equilibria (albeit with fewer ligands) and comparison to this would be helpful. Do relative energy predictions match these experimental observations?

We believe that a comparison to cAMP measurements is relevant and interesting, since it represents a downstream cellular response, and a strong correlation suggests that our approach has predictive power. A comparison to Emax values instead of e.g. EC50 values makes sense, since our simulations have a ligand present at all times, which corresponds to experimental conditions with a very high ligand concentration. We agree that a prospective study would be ideal, but in the absence of experimental validation, we have followed the reviewer’s advice and improved the comparison to experimental data, in particular Manglik et al., 2015, as suggested by the reviewer. For a more detailed description of what has been done in this regard, refer to our comments under “Essential revisions” above.

A smaller issue is that the utility of the results presented here appears to be a bit overstated. For example, it is claimed (Introduction) that the results provide insight into how ligands with specific efficacy profiles can be designed. To support this statement, the authors should present examples of compounds they have designed based on their results, together with experimental data confirming these molecules have the intended efficacy profiles.

In the revised manuscript we are more precise about what our results allow us to do while avoiding speculative claims. We anticipate using these methods in prospective studies going forward, but that is indeed not part of this study.

Reviewer #3:

This is an interesting paper, and I like to attempt to connect analysis of allostery to function. However, I'm extremely concerned about statistical uncertainty - it's not really discussed, and it would be easy to chalk all of the results up to limited sampling. It will be important for the authors to demonstrate this isn't the case.

We are glad that the reviewer found our paper interesting and agree that the statistical validation was not clearly presented and lacked specific tests.

Basically, my concern is that there's essentially no mention of statistical uncertainty or convergence anywhere in the document. One of the major claims is that the different agonists each populate a distinct substate - this is an incredibly important and interesting observation if true, but could also be explained by saying each simulation wandered in its own space and didn't have time to explore anywhere else. If it were run again, it might wander into a totally different place. I don't have a great feel for how rapidly swarms explore, but I do know the total sampling time of 1.4 microseconds per ligand sounds awfully small. There are examples in the literature showing that conventional simulations several times this length are not converged as far as the configurations in the ligand binding pocket go (for example, Leioatts et al., Biophysical Journal, 2015, or some of the work from Dror's group).

The reviewer is right, we had omitted important information, relying on information present in our previous paper. We have revised the manuscript such that this manuscript now stands on its own. Specifically, our assessment of statistical reliability can be found in the cover letter.

Note that the single equilibrated states are by definition converged from a starting structure using an adaptive sampling approach with multiple walkers. Sampling a large region of conformational space would require much more than 1.4 microseconds, but a small region kinetically accessible from a starting structure may be well sampled within this time frame. Indeed, an important methodological aspect of this manuscript is that we could identify ligand-induced pharmacological properties by adaptive sampling of a small portion of conformational space.

I have a couple of ideas for how the authors could convince me this isn't just a sampling artifact. The best would probably be to pick one ligand and redo the whole calculation 4 more times, start to finish - looking at the variation between those replicates would be a decent estimate of the uncertainty. Ideally, they'd do this for all of the ligands, but I recognize that's not a reasonable request, which is why I suggest doing it for 1 ligand.

A weaker test would be to break the individual ligand calculations into blocks, somehow - I'm not totally sure how to do it - and show that the blocks are self-similar. If they're all wandering through the whole of the blob in Figure 3, you're more likely to be ok. If each block populates a discrete chunk of the blob (and the overall swarm data doesn't revisit), then there's a big problem.

We thank the reviewer for the specific suggestions! We indeed followed the reviewer’s second suggestion and performed stratified cross validation with 3 subsets of the walkers. More details can be found under Essential Revisions. We argue that this statistical test is as legitimate as rerunning the simulation for one ligand for two reasons. First, we can perform this test for all ligands, not just one, without wasting computational resources. Second, the method is based on finding a center point in CV space by tracking a number of walkers launched from the same starting structure. Considering the high number of walkers used and that our results pass a cross validation examination, we may conclude that the position of the center point is insensitive to the random seed of the individual MD trajectories. Thus, we expect a new set of walkers launched from the same starting structure to converge to the already sampled state.
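Schematically, the stratified check amounts to splitting the pooled walker data into folds and verifying that per-fold observables agree. A toy sketch with synthetic samples (three folds, matching the three subsets mentioned; everything else, including the distribution parameters, is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-frame CV samples pooled from many walkers of one condition
samples = rng.normal(loc=1.1, scale=0.05, size=3000)

n_folds = 3
folds = np.array_split(rng.permutation(samples), n_folds)

fold_means = [f.mean() for f in folds]
spread = max(fold_means) - min(fold_means)

# If the folds are self-similar (spread small relative to the landscape scale),
# the estimate is insensitive to which walkers were included.
```

A large `spread` would signal the failure mode the reviewer describes: each subset of walkers occupying its own region rather than sampling a common state.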

I'd want to see error estimates on the free energies in Figure 2, plus a redo of the dimension reduction in Figure 3 to see if the ligands still separate more than the replicates of 1 ligand.

See our comments above.

On a more minor note, the use of p-values in the subsection “Ligands control efficacy by reshaping microswitches’ probability distributions”, isn't really correct. The point the authors are trying to make is that the computed value doesn't predict the experiment (for good reason), and the correlation coefficients make that point for you.

We have removed the use of the p-value and only refer to the correlation between cAMP response and microswitch expectation values.

https://doi.org/10.7554/eLife.60715.sa2

Cite this article:

Oliver Fleetwood, Jens Carlsson, Lucie Delemotte (2021) Identification of ligand-specific G protein-coupled receptor states and prediction of downstream efficacy via data-driven modeling. eLife 10:e60715. https://doi.org/10.7554/eLife.60715