Moderator: Benjamin Schwessinger, Postdoctoral Researcher, Australian National University and member of the eLife Early-Career Advisory Group.
Speakers: Lenny Teytelman, Co-founder of protocols.io; Simine Vazire, Professor of Psychology, University of California, Davis; Hilmar Lapp, Director of Informatics, Duke Center for Genomic and Computational Biology and member of the Data Carpentry Steering Committee; and Monica Driscoll, Professor of Molecular Biology and Biochemistry, Rutgers University.
Science works by building on previous findings, yet journal articles often lack the details that other researchers need to replicate a study. A number of initiatives and tools now exist to address these problems: the challenge is to encourage more researchers to use them.
“One thing that ties together all the branches of [reproducibility] is the issue of credibility”, says Simine Vazire, whose field of social and personality psychology has been hit particularly hard by accusations of irreproducible findings. “We want to make sure that the [research] process is transparent and that the findings are solid and reproducible”. These are not new ideas, but we now have the tools and infrastructure to enact them better. In the words of Lenny Teytelman: “we’re now finally at the point where we can use the internet to create an interconnected research output”. In this model, the paper acts as a hub that points to all the data, code and methods that went into the research.
Reproducibility does not have to be done perfectly to be worth doing. “Any attempt to improve the reproducibility of computational science will pay off”, says Hilmar Lapp. Don’t be intimidated by the vast array of tools that are now available to support reproducible ways of working – there are common principles that apply across them all. For computational work, these principles include literate programming, automated testing, and openly archiving data, software and other research products.
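To make one of these principles concrete, automated testing means pairing analysis code with checks that anyone can rerun to confirm the code still behaves as expected. A minimal sketch in Python (the function, data and expected value here are hypothetical illustrations, not from the discussion):

```python
def mean_lifespan(days):
    """Return the mean of a list of lifespan measurements (in days)."""
    if not days:
        raise ValueError("no measurements provided")
    return sum(days) / len(days)

# A simple regression test: rerunning the analysis on the same input
# must always give the same answer, so any silent change in behaviour
# is caught immediately.
assert mean_lifespan([18, 20, 22]) == 20
```

In practice such checks are usually collected into a test suite and run automatically by a tool like pytest, so that every change to the analysis code is verified before results are trusted.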
Even when methods, code and reagents are shared, it can still be difficult to reproduce results. When Monica Driscoll’s lab began collaborating with two other labs on C. elegans lifespan studies – experiments that all the labs considered part of their expertise – they spent a year trying to produce a common experimental protocol that they all agreed upon. Maintaining common standards remains an ongoing process: “the technicians […] still have to have a weekly two-hour conversation to stay on the same page”. After this enormous effort, some results were still not entirely reproducible – but these turned out to be due to previously unstudied biological differences. Driscoll doesn’t believe that this level of effort “could or should” be used to reproduce all science, but thinks there are certain areas – such as preclinical drug trials – that would benefit from a rigorous approach to reproducibility.
“We’re asking new trainees to do things the reproducible way”, says Vazire, “but unless journals start rewarding cautiously written papers […] it’s going to be very hard for those people to compete with others who are willing to make grander claims”. For Lapp, “the elephant in the room here is supercompetition, where what matters more than anything else is having the discovery on your record. Whether that finding holds up and is reproducible almost takes a back seat”. To change this, we need new incentives, new rewards, and better training for researchers, funders, journals and everyone else in the research ecosystem.
“In a lab where I worked as a postdoc we could not reproduce some of our major findings”, explains Benjamin Schwessinger. “It took us two and a half years […] to recover from this and actually to retract the paper […] and publish the right story”. As this shows, working reproducibly isn’t just good for science – it also brings career benefits. In Lapp’s words: “it will be your future self who will be able to reproduce the science that you do now and they will thank you over and over”.
There are also many online resources to help you explore the motivations, tools and best practices behind working reproducibly: for example, the Open Science MOOC and Kirstie Whitaker's how-to guide to reproducible research.