Evaluating Study Design Rigor in Preclinical Cardiovascular Research: A Replication Study

  1. Stanford Cardiovascular Institute, Stanford, United States
  2. Saint Mary’s College of California, Moraga, United States
  3. Macalester College, Saint Paul, United States
  4. Warren Alpert Medical School of Brown University, Providence, United States
  5. Central Michigan University, Mt Pleasant, United States

Peer review process

Revised: This Reviewed Preprint has been revised by the authors in response to the previous round of peer review; the eLife assessment and the public reviews have been updated where necessary by the editors and peer reviewers.

Editors

  • Reviewing Editor
    Ilse Daehn
    Icahn School of Medicine at Mount Sinai, New York, United States of America
  • Senior Editor
    Matthias Barton
    University of Zurich, Zurich, Switzerland

Reviewer #1 (Public review):

Summary:

Rigor in the design and application of scientific experiments is an ongoing concern in preclinical (animal) research. Because findings from these studies are often used in the design of clinical (human) studies, it is critical that the results of the preclinical studies are valid and replicable. However, several recent peer-reviewed published papers have shown that some of the results in the cardiovascular research literature may not be valid because their use of key design elements is unacceptably low. The current study is designed to expand on and replicate previous preclinical studies in nine leading scientific research journals. Cardiovascular research articles that were used for examination were obtained from a PubMed search. These articles were carefully examined for four elements that are important in the design of animal experiments: use of both biological sexes, randomization of subjects for experimental groups, blinding of the experimenters, and estimating the proper size of samples for the experimental groups. The findings of the current study indicate that the use of these four design elements in the reported preclinical research is unacceptably low. Therefore, the results replicate previous studies and demonstrate once again that there is an ongoing problem in the experimental design of preclinical cardiovascular research.

Strengths:

This study selected four important design elements for study. The descriptions in the text and figures of this paper clearly demonstrate that the rate of use of all four design elements in the examined research articles was unacceptably low. The current study is important because it replicates previous studies and continues to call attention to serious problems in the design of preclinical studies, a problem that does not seem to lessen over time.

Weaknesses:

Weaknesses from the first review were adequately addressed.

Author response:

The following is the authors’ response to the original reviews.

Public Reviews:

Reviewer #1 (Public Review):

Summary:

Rigor in the design and application of scientific experiments is an ongoing concern in preclinical (animal) research. Because findings from these studies are often used in the design of clinical (human) studies, it is critical that the results of the preclinical studies are valid and replicable. However, several recent peer-reviewed published papers have shown that some of the results in the cardiovascular research literature may not be valid because their use of key design elements is unacceptably low. The current study is designed to expand on and replicate previous preclinical studies in nine leading scientific research journals. Cardiovascular research articles that were used for examination were obtained from a PubMed search. These articles were carefully examined for four elements that are important in the design of animal experiments: use of both biological sexes, randomization of subjects for experimental groups, blinding of the experimenters, and estimating the proper size of samples for the experimental groups. The findings of the current study indicate that the use of these four design elements in the reported preclinical research is unacceptably low. Therefore, the results replicate previous studies and demonstrate once again that there is an ongoing problem in the experimental design of preclinical cardiovascular research.

Strengths:

This study selected four important design elements for study. The descriptions in the text and figures of this paper clearly demonstrate that the rate of use of all four design elements in the examined research articles was unacceptably low. The current study is important because it replicates previous studies and continues to call attention to serious problems in the design of preclinical studies, a problem that does not seem to lessen over time.

Weaknesses:

The current study uses both descriptive and inferential statistics extensively in describing the results. The descriptive statistics are clear and strong, demonstrating the main point of the study, that the use of these design elements is quite low, which may invalidate many of the reported studies. In addition, inferential statistical tests were used to compare the use of the four design elements against each other and to compare some of the journals. The use of inferential statistical tests appears weak because the wrong tests may have been used in some cases. However, the overall descriptive findings are very strong and make the major points of the study.

We sincerely appreciate the reviewer’s comments and detailed feedback and their recognition of the importance of this work in replicating previous studies and calling attention to the problems in preclinical study design. In response to the reviewer’s suggestions, we have recalculated our inferential statistics: we now apply a Holm-Bonferroni correction to p-values and use median-based linear model analyses and nonparametric Kruskal-Wallis tests, which are more appropriate for analyzing this dataset. Our overall trends in results remain the same.
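For illustration, the following is a minimal sketch of how a median-based linear model (quantile regression at q = 0.5, via statsmodels) could be fit to yearly reporting proportions; the yearly values in the data frame are placeholders for illustration, not results from our study.

```python
# Median regression (quantile regression at q = 0.5) of the yearly
# proportion of papers reporting an SDE on publication year.
# The yearly proportions are placeholder values, not study data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "year": range(2011, 2022),
    "proportion": [0.18, 0.20, 0.19, 0.22, 0.21, 0.24,
                   0.23, 0.25, 0.26, 0.24, 0.27],
})

model = smf.quantreg("proportion ~ year", df)
result = model.fit(q=0.5)          # q = 0.5 gives the conditional median
print(result.summary())
```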

Reviewer #2 (Public Review):

Summary

This study replicates a 2017 study in which the authors reviewed papers for four key elements of rigor: inclusion of sex as a biological variable, randomization of subjects, blinding of outcomes, and pre-specified sample size estimation. Here they screened 298 published papers for the four elements. Over a 10-year period, rigor (defined as including any of the 4 elements) failed to improve. They could not detect any differences across the journals they surveyed, nor across models. They focused primarily on cardiovascular disease, which helps focus the research but limits the potential generalizability to a broader range of scientific investigation. There is no reason, however, to believe rigor is any better or worse in other fields, and hence this study is a good 'snapshot' of the progress of improving rigor over time.

Strengths

The authors randomly selected papers from leading journals (e.g., PNAS). Each paper was reviewed by 2 investigators. They pulled papers over a 10-year period, 2011 to 2021, and have a good sample of time over which to look for changes. The analysis followed generally accepted guidelines for a structured review.

Weaknesses

The authors did not use the exact same journals as they did in the 2017 study. This makes comparing the results complicated. Also, they pulled papers from 2011 to 2021, and hence cannot assess the impact of their own prior paper.

The authors write "the proportion of studies including animals of both biological sexes generally increased between 2011 and 2021, though not significantly (R² = 0.0762, F(1,9) = 0.742, p = 0.411, corrected p = 8.2)". This statement is not rigorous because the regression result is not statistically significant. Their data supports neither a claim of an increase nor a decrease over time. A similar problem repeats several times in the remainder of their results presentation.

I think the Introduction and the Discussion are somewhat repetitive and the wording could be reduced.

Impact and Context

Lack of reproducibility remains an enormous problem in science, plaguing both basic and translational investigations. With the increased scrutiny on rigor, and requirements at NIH and other funding agencies for more rigor and transparency, one would expect to find increasing rigor, as evidenced by authors including more of the recommended study design elements (SDEs). This review found no such change, and this is quite disheartening. The data imply that journals (editors and reviewers) will have to increase their scrutiny and the standards applied to preclinical and basic studies. This work could also serve as a call to action for investigators outside of cardiovascular science to reflect on their own practices when planning future projects.

We sincerely appreciate the reviewer’s insights and comments and recognition of our work contributing to the growing body of evidence on the lack of rigor in preclinical cardiovascular research study design. Regarding the weaknesses the reviewer noted: the referenced 2017 publication details a study by Ramirez et al. and was not conducted by our group. Our study aimed to expand upon their findings by using a more recent timeframe and an alternative list of highly respected cardiovascular research journals. We have now better clarified this distinction in the manuscript. We have also addressed our phrasing regarding the lack of statistical significance in the increase of the proportion of studies including animals of both sexes from 2011-2021.

Recommendations for the authors:

Reviewer #1 (Recommendations For The Authors):

Many of the methods in this study were strong or adequate. Although the descriptive statistics appear solid, there are significant problems that need to be addressed in the selection and use of inferential statistics.

(1) One of the design elements that was studied was sample size estimation. This is usually done by a power analysis. The authors should consider what group size for the examined journals is adequate for their statistics to be valid. Or they could report the power of their studies to achieve a given meaningful difference.

We thank the reviewer for this excellent observation. We unfortunately did not conduct an a priori power analysis. Previous research (Gupta et al., 2016) advises against carrying out post-hoc power calculations once a study has been completed. We acknowledge the importance of establishing a sufficient sample size to detect a meaningful effect, and we regret that we did not carry out the appropriate estimation beforehand. We are very appreciative of the reviewer’s suggestions and aim to implement such a study design element in future studies.

Gupta KK, Attri JP, Singh A, Kaur H, Kaur G. Basic concepts for sample size calculation: Critical step for any clinical trials! Saudi J Anaesth. 2016;10(3):328-331. doi:10.4103/1658-354X.174918
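For illustration, below is a minimal sketch of how an a priori sample-size estimate for comparing two reporting proportions could be computed with statsmodels; the 30% vs. 45% proportions, alpha, and target power are illustrative assumptions rather than values from our study.

```python
# A priori sample-size estimate for comparing two proportions
# (e.g., SDE reporting rates in two groups of papers).
# The 30% vs. 45% proportions and 80% power are illustrative assumptions.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p1, p2 = 0.30, 0.45                        # hypothesized reporting proportions
effect = proportion_effectsize(p1, p2)     # Cohen's h effect size

n_per_group = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Papers needed per group: {n_per_group:.0f}")
```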

(2) A Bonferroni correction was used extensively. Because of its use, the corrected p-values often appear much too high. The Bonferroni correction becomes much too conservative for more than 3 or 4 tests. I suggest using a different method for multiple comparisons.

We thank the reviewer for their insightful suggestion. We have updated all p-values throughout the manuscript to reflect a Holm-Bonferroni correction instead.
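As an illustration of this adjustment, here is a minimal sketch of a Holm-Bonferroni (step-down) correction applied with statsmodels; the raw p-values shown are placeholders, not values from the manuscript.

```python
# Holm-Bonferroni adjustment of a family of p-values.
# The raw p-values below are placeholders for illustration only.
from statsmodels.stats.multitest import multipletests

raw_p = [0.011, 0.042, 0.208, 0.411]
reject, p_holm, _, _ = multipletests(raw_p, alpha=0.05, method="holm")

for p, p_adj, r in zip(raw_p, p_holm, reject):
    print(f"raw p = {p:.3f} -> Holm-adjusted p = {p_adj:.3f}, reject H0: {r}")
```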

(3) The use of the chi-square test for categorical data is appropriate. However, the t-test and multiple regression tests are designed for continuous variables. Here, it appears that they were used for the nominal variables (Table 1). For these nominal data, other nonparametric tests should be used.

We thank the reviewer for this valuable insight. We have updated our statistical analysis methods and now use nonparametric Kruskal-Wallis tests, instead of the chi-square test, to analyze differences in SDE reporting across journals. Our reported p-values have been adjusted accordingly.
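For illustration, a minimal sketch of a Kruskal-Wallis test comparing per-paper SDE counts (0-4) across three journals, using SciPy; the counts are illustrative placeholders, not our data.

```python
# Kruskal-Wallis H-test comparing per-paper SDE counts (0-4) across journals.
# The counts below are illustrative placeholders, not study data.
from scipy.stats import kruskal

journal_a = [0, 1, 1, 2, 0, 1, 2]
journal_b = [1, 2, 2, 3, 1, 2, 1]
journal_c = [0, 0, 1, 1, 2, 0, 1]

h_stat, p_value = kruskal(journal_a, journal_b, journal_c)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```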

(4) It is not clear exactly when each test is used. The stats section in Methods should better delineate when each test is used. In addition, it would be helpful to include the test used in the figure legends.

We thank the reviewer for bringing up this important point. We have now updated the methods section to better delineate which tests were used, and also included the specific tests in the figure legends.

(5) You will need to rewrite some sections of the text to reflect the changes in your use of statistics.

We have rewritten the sections of the text to reflect the changes in our use of statistics.

Here are a few comments on the presentation.

(1) Some of the figure legends are almost impossible to read. They are too congested.

We thank the reviewer for pointing this out. We have edited the figure legends to make them more readable. We will also attach a pdf with the graphs to allow for easier formatting.

(2) Also, is it possible to drop some of the panels in Figure 1?

The panels in Figure 1 have been rearranged to make them more readable. We believe that each panel provides a valuable visual summary of our data that will aid readers in understanding our results.

(3) It is not mandatory that the y-axis values on the graphs go up to 100% (Figs 2 and 3). Using a maximum value of 100% clumps the lines visually. I suggest a maximum y-axis value of 50% or 60%. That will spread the lines better visually so differences can be seen more easily.

We thank the reviewer for considering the experience of our paper’s readers. The y-axes of Figures 2 and 3 have been truncated to 50%. The trend lines in each figure now appear more separated, and differences can be seen more easily.
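For reference, a minimal matplotlib sketch of this kind of truncation, plotting yearly reporting percentages with the y-axis capped at 50%; the yearly values are placeholders, not our results.

```python
# Truncating the y-axis to 50% so closely spaced trend lines separate visually.
# The yearly percentages below are placeholder values, not study results.
import matplotlib.pyplot as plt

years = list(range(2011, 2022))
randomization = [18, 20, 19, 22, 21, 24, 23, 25, 26, 24, 27]  # % of papers
blinding = [8, 9, 10, 9, 11, 12, 11, 13, 12, 14, 13]          # % of papers

fig, ax = plt.subplots()
ax.plot(years, randomization, marker="o", label="Randomization")
ax.plot(years, blinding, marker="s", label="Blinding")
ax.set_ylim(0, 50)                     # cap at 50% instead of 100%
ax.set_xlabel("Publication year")
ax.set_ylabel("Papers reporting the element (%)")
ax.legend()
plt.show()
```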

Reviewer #2 (Recommendations For The Authors):

The authors did not use the exact same journals as they did in the 2017 study. This makes comparing the results complicated. Also, they pulled papers from 2011 to 2021, and hence cannot assess the impact of their own prior paper.

We appreciate the reviewer’s concern about maintaining consistency with the paper published by Ramirez et al. in 2017. To clarify, our efforts focused on providing a replication study that expanded upon the original Ramirez publication, with which we have no affiliation. For our study, we used different academic journals than those used by Ramirez et al., as well as a different timeframe. We have updated the language in the manuscript to better clarify the purpose and parameters of our study relative to the previous, unaffiliated study.

The authors write "the proportion of studies including animals of both biological sexes generally increased between 2011 and 2021, though not significantly (R² = 0.0762, F(1,9) = 0.742, p = 0.411, corrected p = 8.2)". This statement is not rigorous because the regression result is not statistically significant. Their data supports neither a claim of an increase nor a decrease over time. A similar problem repeats several times in the remainder of their results presentation.

Thank you for bringing this information to our attention. We agree with the concern regarding the statement, “the proportion of studies including animals of both biological sexes generally increased between 2011 and 2021, though not significantly (R² = 0.0762, F(1,9) = 0.742, p = 0.411, corrected p = 8.2).” We have rephrased the statement. Our updated Holm-Bonferroni corrected p-value is now noted in this more appropriately worded description of our results. Lastly, we have addressed the wording and redundancy seen in both the introduction and discussion and have made both more concise.

I think the Introduction and the Discussion are somewhat repetitive and the wording could be reduced.

We thank the reviewer for bringing this to our attention. We have addressed the redundancy across the Introduction and the Discussion. We have also altered the wording to reflect a more concise explanation of our study.

The 'trends' are not statistically significant. A non-significant trend does not exist and no claim of a 'trend' is justified by the data.

We thank the reviewer for this observation. We have updated the phrasing of ‘trends’ in all areas of the manuscript.
