Today, eLife publishes the final outputs of the Reproducibility Project: Cancer Biology, an eight-year effort to replicate experiments from 53 high-impact papers published between 2010 and 2012. Tim Errington, the Director of Research at the Center for Open Science and project leader, said: “The purpose of the project was to transparently assess the extent to which there are challenges for conducting replications and obtaining similar evidence of published findings in cancer biology research.”
Launched in 2013, the Reproducibility Project: Cancer Biology was a collaboration between the Center for Open Science, a non-profit culture change organisation with a mission to improve the openness, integrity and reproducibility of research, and Science Exchange, the world’s first online R&D marketplace, whose mission is to accelerate scientific discovery. With support from Arnold Ventures (formerly the Laura and John Arnold Foundation), the team conducted a systematic process to select high-impact cancer research papers published between 2010 and 2012. Based on the selection criteria, most of the papers came from high-profile journals such as Nature, Science and Cell. A total of 193 experiments were selected for replication.
The team designed replication experiments of key findings from each paper by reviewing the methodology and requesting information about protocols and availability of reagents. Then, appropriate expertise for conducting the experiments was sourced through the Science Exchange marketplace.
For each paper, the detailed protocols for the replication experiments were written up as a Registered Report and submitted to eLife for peer review, and work on the replication experiments could not begin until the Registered Report had been accepted for publication. The completed replication experiments were then written up as a Replication Study, peer reviewed and published in eLife. Two of the papers published today are capstone summaries of the entire project.
The first paper, 'Challenges for Assessing Replicability in Preclinical Cancer Biology', reports on the challenges confronted when preparing and conducting replications of 193 experiments from 53 papers. None of the experiments were described in sufficient detail to design a replication without seeking clarifications from the original authors. Some authors (26%) were extremely helpful and generous with feedback, while others (32%) were not at all helpful or did not respond to requests. During experimentation, about two-thirds of the experiments required some modification to the protocols because, for example, model systems behaved differently than originally reported. Ultimately, 50 replication experiments from 23 papers were completed, a small proportion of what was planned. Errington noted: “We had challenges at every stage of the research process to design and conduct the replications. It was hard to understand what was originally done, we could not always get access to the original data or reagents to conduct the experiments, and model systems frequently did not behave as originally reported. The limited transparency and incomplete reporting made the efforts to replicate the findings much harder than was necessary.”
The second paper, 'Investigating the Replicability of Preclinical Cancer Biology', reports a meta-analysis of the results of the 50 replication experiments that were completed. Many of these experiments involved measuring more than one effect (e.g., measuring the influence of an intervention on both the tumour burden and overall survival), and the 50 completed experiments included a total of 158 effects. Most of these effects (136) were reported as positive effects in the original papers, with 22 reported as null effects. The meta-analysis also had to take into account that 41 of the effects were reported as images rather than as numerical values in the original papers. Replications provided much weaker evidence for the findings than the original experiments did: for original positive results, replication effect sizes were on average 85% smaller than the original effect sizes.
The team also used a number of binary criteria to assess whether a replication was successful or not. A total of 112 effects could be assessed by five of these criteria, and 18% succeeded on all five, 15% succeeded on four, 13% succeeded on three, 21% succeeded on two, 13% succeeded on one, and 20% failed on all five. Collectively, 46% of the replications were successful on more criteria than they failed and 54% of the replications failed on more criteria than they succeeded.
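The aggregate figures above follow directly from the per-criterion breakdown: a replication was "successful on more criteria than it failed" when it succeeded on at least three of the five criteria. A minimal sketch of that arithmetic, using the reported percentages of the 112 assessable effects (the variable names are illustrative, not from the papers):

```python
# Reported share (%) of the 112 effects, keyed by how many of the
# five binary criteria the replication succeeded on.
pct_by_criteria_met = {5: 18, 4: 15, 3: 13, 2: 21, 1: 13, 0: 20}

# Succeeding on 3, 4 or 5 criteria means more successes than failures.
mostly_successful = sum(p for k, p in pct_by_criteria_met.items() if k >= 3)
mostly_failed = sum(p for k, p in pct_by_criteria_met.items() if k < 3)

print(mostly_successful)  # 46
print(mostly_failed)      # 54
```

Summing the rounded percentages reproduces the 46%/54% split reported in the paper.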
Summarising, Errington noted: “Of the replication experiments we were able to complete, the evidence was much weaker on average than the original findings even though all the replications underwent peer review before conducting the experiments to maximise their quality and rigour. Our findings suggest that there is room to improve replicability in preclinical cancer research.”
Brian Nosek, Executive Director of the Center for Open Science and co-author, added: “Science is making substantial progress in addressing global health challenges. The evidence from this project suggests that we could be doing even better. There is unnecessary friction in the research process that is interfering with advancing knowledge, solutions and treatments. Investing in improving transparency, sharing, and rigour of preclinical research could yield huge returns on investment by removing sources of friction and accelerating science. For example, open sharing of data, materials, and code will make it easier to understand, critique and build upon each other’s work. And, preregistration of experiments and analysis plans will reduce the negative effects of publication bias and distinguish between planned tests and unplanned discoveries.”
These papers identify substantial challenges for cancer research, but they occur amid a reformation in science to address dysfunctional incentives, improve the research culture, increase transparency and sharing, and improve rigour in design and conduct of research. Science is at its best when it confronts itself and identifies ways to improve the quality and credibility of research findings. The Reproducibility Project: Cancer Biology is just one contribution in an ongoing self-examination of research practices and opportunities for improvement.
The previously published Registered Reports, Replication Studies and related commentaries are all available on eLife’s Reproducibility Project: Cancer Biology Collection page, and all data, code and supporting materials are available in the Center for Open Science's Reproducibility Project: Cancer Biology Collection.
For additional summary information about the project and links to key resources, visit http://cos.io/rpcb.
eLife is a non-profit organisation created by funders and led by researchers. Our mission is to accelerate discovery by operating a platform for research communication that encourages and recognises the most responsible behaviours. We review selected preprints in all areas of biology and medicine, while exploring new ways to improve how research is assessed and published. eLife receives financial support and strategic guidance from the Howard Hughes Medical Institute, the Knut and Alice Wallenberg Foundation, the Max Planck Society and Wellcome. Learn more at https://elifesciences.org/about.