Author response:
The following is the authors’ response to the original reviews.
Reviewer #1 (Public Review):
The authors introduce DIPx, a deep learning framework for predicting synergistic drug combinations for cancer treatment using the AstraZeneca-Sanger (AZS) DREAM Challenge dataset. While the approach is innovative, I have the following concerns and comments which hopefully will improve the study's rigor and applicability, making it a more powerful tool in the real clinical world.
We thank the reviewer for recognizing the innovative aspects of DIPx and for sharing valuable comments to further refine and strengthen our study. These comments are addressed in the following point-by-point response.
(1) Test Set 1 comprises combinations already present in the training set, likely leading to an overfitting issue. The model might show inflated performance metrics on this test set due to prior exposure to these combinations, not accurately reflecting its true predictive power on unknown data, which is crucial for discovering new drug synergies. The testing approach reduces the generalizability of the model's findings to new, untested scenarios.
From a clinical perspective, it is useful to test whether a known (previously tested) combination can work for a new patient, which is the purpose of Test Set 1. There is no danger of overfitting here, because the test set is completely independent of the discovery set, so had we only discovered a false positive, the test set would not show more power than expected under the null. Predicting the effectiveness of unknown drug combinations (Test Set 2) is indeed an important and more challenging goal of synergy prediction, but it is statistically a distinct problem. The two test sets were previously designed by the AZS DREAM Challenge [PMID: 31209238].
We have performed cross-validation on the dataset and demonstrated that the result of DIPx for Test Set 1 does not reflect overfitting. Indeed, Figure 2—figure supplement 1 shows the 10-fold cross-validation results for the training set. The median Spearman correlation between the predicted and observed Loewe scores across the 10 folds of cross-validation is 0.48, which is close to the correlation of 0.50 in Test Set 1 (red star). We have added the cross-validation results to the “Validation and Comparisons in the AZS Dataset” section (page 4).
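For illustration, a minimal sketch of this cross-validation check (hypothetical variable names; `X` denotes the PAS feature matrix as a NumPy array and `y` the observed Loewe scores; the actual DIPx code is available on our GitHub page):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def cv_median_spearman(X, y, n_splits=10, seed=0):
    """Median Spearman correlation between predicted and observed Loewe scores across folds."""
    rhos = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        rf = RandomForestRegressor(n_estimators=500, random_state=seed)
        rf.fit(X[train_idx], y[train_idx])
        rhos.append(spearmanr(rf.predict(X[test_idx]), y[test_idx]).correlation)
    return np.median(rhos)  # ~0.48 on the AZS training set, vs 0.50 in Test Set 1
```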
(2) The model struggles with predicting synergies for drug combinations not included in its training data (showing only a Spearman correlation of 0.26 in Test Set 2). This limits its potential for discovering new therapeutic strategies. Utilizing techniques such as transfer learning or expanding the training dataset to encompass a wider range of drug pairs could help to address this issue.
We agree that this is an important limitation for the discovery of new therapeutic strategies. While transfer learning or expanding the training dataset could indeed help address this issue, implementing these approaches would require access to more comprehensive data, which is currently limited due to the scarcity of drug combination datasets. As more drug combination data become available in the future, we plan to expand the training set to better cover a wider range of drug combinations and apply transfer learning to improve prediction accuracy. We have added a discussion on this in the Discussion section.
(3) The use of pan-cancer datasets, while offering broad applicability, may not be optimal for specific cancer subtypes with distinct biological mechanisms. Developing subtype-specific models or adjusting the current model to account for these differences could improve prediction accuracy for individual cancer types.
We agree with the reviewer that the current settings of DIPx might not be optimal for specific cancers due to cancer heterogeneity. However, building subtype-specific models is currently constrained by limited data availability, which in turn restricts their predictive power. In the Discussion section, we mention this as one of DIPx's limitations and suggest future improvements toward cancer-specific models.
(4) Line 127, "Since DIPx uses only molecular data, to make a fair comparison, we trained TAJI using only molecular features and referred to it as TAJI-M.". TAJI was designed to use both monotherapy drug-response and molecular data, and likely won't be able to reach maximum potential if removing monotherapy drug-response from the training model. It would be critical to use the same training datasets and then compare the performances. From Figure 6 of TAJI's paper (Li et al., 2018, PMID: 30054332), the mean Pearson correlation for breast cancer and lung cancer is around 0.5 - 0.6.
It is true that using monotherapy drug responses can enhance the performance of TAIJI, as described in its original paper. In fact, TAIJI builds separate prediction modules for molecular data and monotherapy drug-response data, then combines their results to obtain the final prediction. In our paper we prioritize the exploration of molecular mechanisms in drug combinations while achieving performance comparable to the molecular model of TAIJI. DIPx can be expected to achieve a similar improvement in performance if we integrate the monotherapy drug-response data using the same approach.
My major concerns were listed in the public review. Here are some writing issues:
(5) Some content in the Results section looks like a discussion: i.e, L129, "The extra information from the use of monotherapy data in TAJI is rather small, approximately 10% increase in the overall Spearman correlation, and, of course, we could also use such data in DIPx, so it is more convenient and informative to focus the comparisons on prediction based on molecular data alone."; L257, "As we discuss above, to get synergy, the two drugs in a combination theoretically should not have the same target. However, there is of course no guarantee that two drugs that do not share target genes can produce synergy. ".
We have revised the texts and moved them to the Discussion section.
Reviewer #2 (Public Review):
Trac, Huang, et al. used the AZ Drug Combination Prediction DREAM challenge data to make a new random forest-based model for drug synergy. They make comparisons to the winning method and also show that their model has some predictive capacity for a completely different dataset. They highlight the ability of the model to be interpretable in terms of pathway and target interactions for synergistic effects. While the authors address an important question, more rigor is required to understand the full behavior of the model.
We thank the reviewer for his/her time and effort in carefully reading the manuscript and acknowledging the significance of the study.
Major Points
(1) The authors compare DIPx to TAJI, the winning method of the DREAM challenge. To compare using molecular features alone, they retrain TAJI to create TAJI-M without the monotherapy data inputs. They mention that "of course, we could also use such data in DIPx...", but they never show the behaviour of DIPx with these data. The authors need to demonstrate that this statement holds true or else compare it to the full TAJI.
This is similar to point 4 raised by Reviewer 1 regarding the exclusive use of molecular data in DIPx. In fact, TAIJI uses separate prediction modules for molecular data and drug-response data, which are then combined to obtain the final results. While integrating monotherapy drug data could enhance DIPx's overall performance, for example by simply replacing TAIJI's molecular model with DIPx in the full TAIJI to achieve comparable results, this is not the primary goal of DIPx. Our focus is on exploring the potential molecular mechanisms of drug action. Using only molecular data allows for more convenient and intuitive inference of pathway importance compared to integrating multiple data types.
We have revised the related text and included this discussion in the “Validation and comparisons in the AZS dataset” section of the main text.
(2) It would be neat to see how the DIPx feature importance changes with monotherapy input. For most realistic scenarios in which these models are used, robust monotherapy data do exist.
Indeed, some existing models incorporate monotherapy data into their predictions; for example, a recent study [PMID: 33203866] uses only monotherapy data to predict drug combinations. TAIJI, as discussed in Point 1, uses separate models for monotherapy and molecular data. In general, both data types can be integrated into a single prediction model, allowing for the consideration of feature importance from both. While such an approach can highlight features contributing to predictive performance, the significance of a monotherapy feature does not necessarily indicate the activated pathways of a synergistic drug combination, which is the primary focus of our study. For this reason, we have excluded monotherapy data from DIPx.
(3) In Figure 2, the authors compare DIPx and TAJI-M on various test sets. If I understood correctly, they also bootstrapped the training set with n=100 and reported all the model variants in many of the comparisons. While this is a nice way of showing model robustness, calculating p-values with bootstrapped data does not make sense in my opinion as by increasing the value of n, one can make the p-value arbitrarily small.
The p-value should only be reported for the original models.
The reviewer is correct that we cannot compute the p-value by using an independent two-sample test, because the bootstrap correlation values are based on the same data. However, p-values can still be computed to compare the two prediction models using the bootstrap. Theoretically, the bootstrap can be used to compute a confidence interval for the differential correlation in the test set, and there is a close relationship between p-values and confidence intervals (see Pawitan, 2001, chapter 5; particularly p. 134). Specifically, in this case, we compute the p-value as follows: (1) For each bootstrap sample, (i) compute the Spearman correlation between the predicted and observed scores in the test set for DIPx and TAIJI-M; denote these by r1 and r2; (ii) compute the difference in the Spearman correlations, d = r1 - r2. (2) Repeat the bootstrap n = 100 times. (3) Compute the minimum of these two proportions: the proportion of d < 0 and the proportion of d > 0. (4) The two-sided p-value is 2x the minimum proportion in (3). To overcome the limited bootstrap sample size, we use the normal approximation when computing the proportions in (3). Note that with this method of computing the p-value, larger numbers of bootstrap replicates do not produce more significant results.
We have re-computed the p-values using this method and added this text to the ‘Methods and Materials’ Section.
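For illustration, a minimal sketch of this calculation (assuming `r1` and `r2` are hypothetical arrays holding the bootstrap Spearman correlations of DIPx and TAIJI-M, respectively):

```python
import numpy as np
from scipy.stats import norm

def bootstrap_two_sided_pvalue(r1, r2):
    """Two-sided p-value for the difference in test-set correlations,
    using a normal approximation to the bootstrap distribution of d = r1 - r2."""
    d = np.asarray(r1) - np.asarray(r2)
    z = d.mean() / d.std(ddof=1)
    prop_neg = norm.cdf(-z)  # proportion of d < 0 under the normal approximation
    prop_pos = norm.cdf(z)   # proportion of d > 0
    return 2 * min(prop_neg, prop_pos)
```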
(4) From Figures 2 and 3, it appears DIPx is overfit on the training set, with large gaps in Spearman correlations between Test Set 2/ONeil set and Test Set 1. It also performs much better in cases where it has seen both compounds. Could the authors also compare TAJI on the ONeil dataset to show whether it is equally overfit?
The poor performance on the ONeil dataset is not due to overfitting as such, but more likely due to structural differences between the training and ONeil datasets. (To investigate the overfitting issue, we have conducted a 10-fold cross-validation on the AZS training set. The median correlation between the predicted and observed Loewe scores across the ten folds is 0.48, which is comparable to the correlation of 0.50 in Test Set 1. Therefore, the model does not suffer from overfitting. We have added this cross-validation result in the section “Validation and Comparisons in the AZS Dataset” (page 4).)
We have now obtained TAIJI’s results on the ONeil dataset. TAIJI-M relies on a gene-gene interaction network to integrate the indirect drug targeting effects. This approach limits its applicability to new datasets, as it can only predict synergy scores for drug combinations present in the training dataset. Among the set of drug combinations present in the training set (n = 1102), both DIPx and TAIJI-M perform poorly, with Spearman correlations between predicted and observed synergy scores of 0.09 and 0.05, respectively.
(Additional note: The original version of TAIJI-M uses gene expression, CNV, mutation, and methylation data. However, there is no methylation data in the ONeil dataset, so we retrained TAIJI-M without the methylation features. According to the final report of TAIJI in the challenge (https://www.synapse.org/Synapse:syn5614689/wiki/396206), Guan et al. reported that methylation features do not contribute to prediction performance in the post-challenge analysis. This means that retraining TAIJI-M without the methylation data will not materially affect the comparison between DIPx and TAIJI-M on the ONeil dataset.)
Minor Points:
(5) Pg 4, line 130: Citation needed for 10% contribution of monotherapy.
(6) The general language of this paper is informal at times. I request the authors to refine it a bit.
We thank the reviewer for pointing this out. We have added the appropriate citation for the statement and carefully revised the text to make it more formal.
Reviewer #3 (Public Review):
Summary:
Predicting how two different drugs act together by looking at their specific gene targets and pathways is crucial for understanding the biological significance of drug combinations. Such combinations of drugs can lead to synergistic effects that enhance drug efficacy and decrease resistance. This study incorporates drug-specific pathway activation scores (PASs) to estimate synergy scores as one of the key advancements for synergy prediction. The new algorithm, Drug synergy Interaction Prediction (DIPx), developed in this study, uses gene expression, mutation profiles, and drug synergy data to train the model and predict synergy between two drugs and suggests the best combinations based on their functional relevance on the mechanism of action. Comprehensive validations using two different datasets and comparing them with another best-performing algorithm highlight the potential of its capabilities and broader applications. However, the study would benefit from including experimental validation of some predicted drug combinations to enhance its reliability.
Strengths:
The DIPx algorithm demonstrates the strengths listed below in its approach for personalized drug synergy prediction. One of its strengths lies in its utilization of biologically motivated cancer-specific (driver genes-based) and drug-specific (target genes-based) pathway activation scores (PASs) to predict drug synergy. This approach integrates gene expression, mutation profiles, and drug synergy data to capture information about the functional interactions between drug targets, thereby providing a potential biological explanation for the synergistic effects of combined drugs. Additionally, DIPx's performance was tested using the AstraZeneca-Sanger (AZS) DREAM Challenge dataset, especially in Test Set 1, where the Spearman correlation coefficient between predicted and observed drug synergy was 0.50 (95% CI: 0.47–0.53). This demonstrates the algorithm's effectiveness in handling combinations already in the training set. Furthermore, DIPx's ability to handle novel combinations, as evidenced by its performance in Test Set 2, indicates its potential for extrapolating predictions to new and untested drug combinations. This suggests that the algorithm can adapt to and make accurate predictions for previously unencountered combinations, which is crucial for its practical application in personalized medicine. Overall, DIPx's integration of pathway activation scores and its performance in predicting drug synergy for known and novel combinations underscore its potential as a valuable tool for personalized prediction of drug synergy and exploration of activated pathways related to the effects of combined drugs.
Weaknesses:
While the DIPx algorithm shows promise in predicting drug synergy based on pathway activation scores, it's essential to consider its limitations. One limitation is that the algorithm's performance was less accurate when predicting drug synergy for combinations absent from the training set. This suggests that its predictive capability may be influenced by the availability of training data for specific drug combinations. Additionally, further testing and validation across different datasets (more than the current two datasets) would be necessary to assess the algorithm's generalizability and robustness fully. It's also important to consider potential biases in the training data and ensure that DIPx predictions are validated through empirical studies including experimental testing of predicted combinations. Despite these limitations, DIPx represents a valuable step towards personalized prediction of drug synergy and warrants continued investigation and improvement. The study would also benefit if the algorithm's limitations were described with some examples and future advancement steps were suggested.
We are grateful to the reviewer for the thoughtful and encouraging comments, and for the time and effort to read our manuscript. We have carefully addressed them in our revision.
Reviewer #3 (Recommendations For The Authors):
The authors could consider some of the recommendations below to further improve the DIPx algorithm and its application in personalized drug synergy prediction. Firstly, expanding the training dataset to include a broader range of drug combinations could improve the algorithm's predictive capabilities, especially for novel combinations. This would help address the observed decrease in performance when predicting drug synergy for combinations absent from the training set. This could help assess the robustness of the algorithm and provide a more comprehensive evaluation of its performance for untrained combinations to strengthen its application.
We agree that expanding the training dataset with a broader range of drug combinations would likely improve performance. However, the vast number of possible combinations, along with the associated experimental costs, limits the availability of drug combination data. To increase the size of the training data, we could combine different studies, but data from different studies are often generated using different protocols and experimental settings, introducing biases that complicate the integration. As technology continues to advance, we anticipate that more standardized and comprehensive data will become available in the future, which will help address this issue.
Furthermore, the authors may consider incorporating additional features or data sources, such as drug-specific characteristics, i.e., availability of the drug, to enrich the information utilized by the algorithm. This could potentially improve the accuracy of the predictions and provide a more holistic understanding of the factors contributing to drug synergy.
Indeed, incorporating additional information such as monotherapy data and drug-specific characteristics, as in TAIJI’s approach, could enhance overall prediction performance. As discussed in Point 5 below, the current study is focused on exploring the potential molecular mechanisms of drug combinations, rather than optimizing overall prediction accuracy. However, in its application, it is natural to add the monotherapy or drug-specific information into the algorithm, as done in TAIJI.
Finally, conducting experimental studies to validate the predictions generated by DIPx in laboratory-based cell lines would be essential to confirm its accuracy and reliability. This could involve a few drug IC50 experimental validations of predicted synergistic drug combinations and their associated pathway activations to strengthen the algorithm's clinical relevance. By considering these recommendations, the authors can further refine and advance the DIPx algorithm.
We agree that laboratory-based validation, such as IC50 experiments for predicted synergistic drug combinations and pathway activations, would indeed strengthen the clinical relevance of the algorithm. We hope future studies can build on this work by incorporating this experimental validation.
Below are my specific comments:
Major comments:
(1) The description of all the outputs of the DIPx algorithm is not clearly explained. It is unclear whether it provides only the Loewe score, the confidence score, the PAS score, or all of them. It is necessary to clarify the output of the proposed algorithm to guide the reader on what to expect while using it. The steps from PASs to synergy scores are not well explained.
We apologize for the lack of clarity. Regarding the outputs of DIPx, for any triplet (drug A + drug B, cell line C), DIPx provides both the predicted Loewe score and the corresponding confidence score as the output. PASs are used as the input data for the random forest algorithm, which maps them to the synergy score. We do not provide the details in the manuscript, but refer to the article by Ishwaran et al. (2021). We have revised the first paragraph of the 'A Pathway-Based Drug Synergy Prediction Model' section (page 3) and Figure 1 to improve the presentation of the method.
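For clarity, the step from PASs to the synergy score can be sketched as below. DIPx itself uses the random forest implementation described by Ishwaran et al. (2021); the scikit-learn code here is only a simplified, illustrative stand-in, with hypothetical inputs `pas_train`/`pas_test` (precomputed PAS feature matrices for the (drug A + drug B, cell line) triplets) and `loewe_train` (observed Loewe scores); the computation of the confidence score is omitted.

```python
from sklearn.ensemble import RandomForestRegressor

def predict_synergy(pas_train, loewe_train, pas_test, seed=0):
    """PAS feature matrices (one row per (drug A + drug B, cell line) triplet,
    columns = pathway activation scores) -> predicted Loewe synergy scores."""
    rf = RandomForestRegressor(n_estimators=1000, random_state=seed)
    rf.fit(pas_train, loewe_train)
    predicted_loewe = rf.predict(pas_test)        # predicted Loewe synergy scores
    pathway_importance = rf.feature_importances_  # used to rank pathways contributing to the prediction
    return predicted_loewe, pathway_importance
```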
(2) In Figure 1, the predicted Loewe score for the Capivasertib + Sapitinib combination is not provided. However, Figures 1e and 4a show the pathways with the highest contribution for this combination. What is the predicted Loewe score for the Capivasertib + Sapitinib combination?
Figures 1e and 4a present the pathways with the highest contribution for the combination, which are identified based on the drug-combination data from 12 cell lines, not a single data point.
We have added the median Loewe score (=7.6) across 12 cell lines in the test sets (Test 1 + Test 2) for the Capivasertib + Sapitinib combination in Figure 1e and reported related information for this combination in Supplementary Table S1. Additionally, we revised the 'Inference of the Mechanism of Action Based on PAS' section (page 7) to clarify the pathway importance inference.
(3) In Figure 1d, the combination of doxorubicin + AZ12623380 is predicted to exhibit high Loewe synergy, with a confidence score of 0.33. It is important to provide details of this prediction, including the pathway predictions, and to explain why the model suggested high synergy. Although Figure 4f contains information, it seems to be listed for the observed Loewe score rather than the predicted score provided in Figure 1d. DIPx predicts the doxorubicin + AZ12623380 combination to be synergistic, while in Figure 4, it is labeled as a non-synergistic combination. It is necessary for the authors to clearly indicate which illustration represents the predicted outcome and which hypothesis is based on the observed Loewe score.
In Figure 1d, we reported both the predicted and observed Loewe scores for the experiment (combination = doxorubicin + AZ12623380, cell line = SW900). Although the predicted score is high, a confidence score of 0.33 indicates a low chance that the prediction is truly synergistic. This is indeed confirmed by the non-synergistic observed score of -6, so it does not merit further investigation. This example highlights the value of the confidence score as a supplement to the predicted values.
(4) Figure 3 - The external validation using ONeil requires more rigorous analysis to understand the biological significance of the predictions. It is important to provide pathway activation scores and their potential mechanism of action predicted by the DIPx algorithm when working with a new dataset. Additionally, including the predictions of TAIJI-M on the ONeil dataset would be beneficial for comparing the performance of both algorithms on a new dataset.
We have included an example of potential pathways related to the MK2206 + Erlotinib combination in the ONeil cohort, as inferred by DIPx, in the last paragraph of the 'Inference of the Mechanism of Action Based on PAS' section (page 9). In this example, we identify 'Metabolism by CYP Enzymes' as the most significant pathway associated with this combination, which aligns with previous studies showing that both MK2206 and Erlotinib are metabolized by the CYP enzyme families [PMID: 24387695].
Regarding the prediction of TAIJI-M on the ONeil dataset, a similar request was raised in question 4 from Reviewer 2, which we have carefully addressed above. Briefly, due to differences between the two datasets, we retrained TAIJI-M without methylation data to enable prediction on the ONeil dataset. (As previously reported, methylation data did not significantly contribute to the results of TAIJI, and TAIJI-M can only predict synergy scores for drug combinations present in the training set.) Focusing on this subset of drug combinations, both TAIJI-M and DIPx perform poorly, with Spearman correlations of r = 0.05 and r = 0.09, respectively. The poor performance could be attributed to the limited overlap of drugs between the ONeil dataset and the AZS DREAM Challenge dataset.
(5) TAIJI by Li et al., 2018 reported a high prediction correlation (0.53) in their study, while the modified version of TAIJI, TAJI-M, shows a lower prediction correlation in this study. The authors should clarify why the performance decreased when using the same dataset. Is it because only molecular data was used, excluding the monotherapy drug-response data? There is a spelling error in calling the algorithm - it is reported as TAIJI by Li et al., 2018, whereas this study calls it TAJI - an "I" is missing in TAIJI throughout the manuscript.
Indeed, TAIJI-M has a lower prediction correlation (0.38) compared to the full TAIJI model (0.53), which includes the monotherapy data. Some studies, such as [PMID: 33203866], even use only monotherapy data to predict drug combinations, underscoring the importance of monotherapy data for drug-combination prediction. However, DIPx focuses on exploring the potential molecular mechanisms of drug combinations rather than maximizing overall prediction performance; therefore, we excluded the monotherapy data from the analysis. We have discussed this in the 'Validation and Comparisons in the AZS Dataset' section (page 4).
We thank the reviewer for pointing out the spelling error for TAIJI; this has been corrected throughout the manuscript.
(6) The authors should provide the predicted versus observed Loewe scores for all the combinations as a supplementary file. This would benefit the readers who want to replicate the results in the future. In the same way, including a sample output for the toy dataset on GitHub is required to assess the performance of the DIPx algorithm by a new user.
All predicted and observed drug synergy scores are given in Supplementary Table S2. We have also uploaded a simple example to our GitHub page, along with detailed instructions for users on how to run the method, including generating PAS and training the prediction model. Since we do not have permission to host data from the AZS DREAM Challenge and the ONeil datasets on our GitHub page, users can download these datasets separately and directly apply the provided code.
(7) GitHub can include all the input and output data to reproduce the correlation plots in the manuscript. GitHub could also include the modified version of TAIJI-M and its corresponding input for comparison. The methods section should include how TAIJI was performed.
We have uploaded all the codes and related data to the GitHub page to allow replication of all correlation plots in the manuscript. TAIJI-M represents the molecular model of the full TAIJI model. Both TAIJI-M and TAIJI are documented on the GitHub page of the original study. We have also included a link to the source code for TAIJI-M and TAIJI in the 'Data Availability' section.
(8) Figure 5 - the data associated with this figure needs to be provided as supplementary listing the predicted values of Loewe scores for all the combinations.
We report the associated data including the median of predicted and observed Loewe scores related to Figure 5c in Supplementary Table S2.
Minor comments:
(9) Abbreviations for the pathways are not included.
We have included a list of abbreviations for all relevant pathways in Supplementary Table S5.
(10) Line: 369. What is considered as bias correction? This needs to be explained.
Bias correction refers to adjusting the original estimate of the Spearman correlation between the predicted and observed Loewe scores when there is a systematic difference between the estimates obtained from the bootstrap samples and the original correlation estimate. We have revised the related text on page 13 to improve the explanation.
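For illustration, one common form of this adjustment is the standard bootstrap bias correction, sketched below (assuming `r_orig` is the correlation estimated from the original data and `r_boot` a hypothetical array of bootstrap correlations):

```python
import numpy as np

def bias_corrected(r_orig, r_boot):
    """Adjust the original estimate for the systematic bootstrap bias."""
    bias = np.mean(r_boot) - r_orig  # systematic difference between bootstrap and original estimates
    return r_orig - bias             # equivalently, 2 * r_orig - np.mean(r_boot)
```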
(11) Line 364. Formulae or details for calculating actual predicted synergy (Ps) are missing.
The predicted Loewe score, Ps, is the output of the regression random forest model. For simplicity, we do not describe the details in the manuscript, but refer to the description in the methods article (Ishwaran et al., 2021). We have revised the text accordingly.