By Michael Eisen, Editor-in-Chief
eLife was launched in 2012 with the goals of creating an open-access venue for the publication of outstanding work in the life sciences, transforming the way that these works are peer reviewed, and utilizing the internet to enhance the way works of science are presented, read and used. In the ensuing seven years, eLife has published a large number of important and influential papers that are free to anyone to read and reuse, and brought together a vibrant community of editors and reviewers who have created a fair, constructive and effective review system that forms the heart of the journal.
As part of our ongoing effort to rethink peer review and the role of journals in it, last year eLife began a trial of a new process that eliminated the accept/reject decision that typically occurs after peer review in academic journals. Instead, if our editors decided to send a paper out for peer review, we agreed in advance to publish it, along with the reviewers’ comments, the authors’ response, and an assessment from the editors of the extent to which issues raised in review were addressed.
This trial was motivated by a desire to end the needless cycles of peer review at multiple journals, to give authors more of a say in when and how their work is published, and to reduce the emphasis on journal titles in the way we evaluate works of science and the authors who carried them out.
The trial is now complete. Analyses of the different stages of the trial are described in a series of three blog posts (‘First results…’, ‘Further results…’, ‘Final results…’). Here I want to look at the broader issue of how well it accomplished its goals and what we have learned. I will also explain how the trial has informed our next steps, which I outline below, towards more effectively capturing the value of peer review.
The trial was offered as an option to authors during a six-week period beginning in June 2018. The authors of 313 papers (including one from my lab) opted in – about a third of papers received during the period. Editors selected 70 of these papers for in-depth peer review.
Under the trial system, once the reviews were received and conveyed to the authors, they could choose to have their paper published at any time. Although they were free to ignore the reviewers’ comments, the overwhelming majority responded thoroughly and rigorously to critical comments and altered their manuscripts accordingly. In most cases, the reviewers were ultimately satisfied by the authors’ arguments, manuscript modifications and/or additional data, and the papers were published in eLife.
However, as originally envisioned, there were several papers for which the reviewers and editors did not feel that their concerns had been fully resolved. In the normal process these papers would have been rejected, their ultimate publication delayed, and there would be no public record of the reviews. In contrast, in the peer-review trial these papers were published in eLife along with the critical reviews, avoiding additional rounds of review at another journal, while making the reviewers’ concerns available to readers.
In comparing the fate of papers submitted under the traditional and trial processes, it appears that editors handling papers as part of this new system were more selective when deciding which papers to send out for full peer review. The most likely explanation is that knowing that any trial paper sent out for review was going to be published by eLife made editors less willing to take risks when inviting full submission.
A major goal of this trial was to deemphasize the use of journal name as the primary indicator of a paper’s scientific merit. But moving the publication decision from after to before peer review had the undesired effects of doubling down on the journal brand and placing increased emphasis and pressure on the initial editorial evaluation.
In the current publishing ecosystem, only positive decisions – papers being accepted at a particular journal – leave a permanent record. We would like to see science move towards a system where reviews are never lost; rather, they become a part of the scientific record along with the paper. But for this to work, we have to ensure that critical reviews are made public in a constructive way.
Unfortunately, the trial did not give us as much of an opportunity to explore this issue as we had hoped, as a very high fraction of papers that were sent out for peer review as part of the trial were ultimately given the most positive rating (“all the issues have been addressed”).
Only three manuscripts received the rating “major issues remain unresolved” from reviewers and editors after the authors had responded to the decision. One of these manuscripts was withdrawn prior to publication, while the other two were published as eLife papers along with critical reviews. The small number of papers published in eLife despite not receiving a positive rating from the editors and reviewers makes it all but impossible to draw any data-driven conclusions about the reactions of authors or readers to such papers.
This trial served a useful purpose in showing that there is support for experimenting with the relationship between journals and peer review among the eLife author, reviewer and editor communities. But I do not think the particular process used in the trial is the right way to address the critical issues and challenges in optimizing the peer-review systems of eLife and other publishers. So I would like to take a step back and ask what we are trying to achieve in the long run, and what steps we can take in the short run to move towards that future.
The peer-review trial was right to aspire to give control of publishing decisions to authors. But inserting an editorial selection process in between authors and publishing is much more about empowering editors than it is about empowering authors.
Fortunately, eLife does not have to solve this problem anymore. It has already been solved by bioRxiv (and other preprint servers), which makes it possible for scientists to share their work when they feel ready to do so. In many corners of biology, papers posted on bioRxiv are treated like publications. For example, many high-profile papers posted to bioRxiv get widespread attention long before they appear in journals, they are cited in peer-reviewed publications, and a large fraction of candidates for faculty positions include bioRxiv papers in their applications.
Publishing a preprint is the ultimate form of author empowerment, and the practice is growing every month. And while not everything about universal preprinting is an unalloyed good, and a lot of details and processes need to be sorted out, having every paper published first on bioRxiv would immediately solve two of the greatest problems in science publishing: limited and delayed access. Instead of 70% of papers being behind paywalls and the average paper unavailable for nine months as it goes through peer review, everything would be immediately and freely available to anyone with an interest in the work.
The rise of bioRxiv will allow eLife to focus even more than we already do on our central and most important function: peer review.
With this increased focus comes the opportunity to think about how we carry out peer review and how to convey the results of review in the most useful way. In the system virtually all journals use today, the most prominent, and in most cases only, visible output of peer review is the title of the journal in which a work is ultimately published. Journal title is used as a proxy for judgments made by peer reviewers about audience, anticipated impact and quality of research and researchers.
The fact that journal titles are used as a proxy for the quality of science and scientists – that they have become the de facto currency of career success – is bad for science. It provides researchers with the wrong incentives, rewarding style over substance, and flashiness over the care and rigor that yields science that withstands the test of time. We also throw away much of the valuable intellectual work done by reviewers – reducing the complex, multidimensional intellectual content of peer review to a binary decision.
Since its launch, eLife has been publishing, along with accepted papers, the decision letters sent to authors following peer review and their responses. This was motivated by a desire for the peer-review process to be more transparent, to let readers see what concerns came up during the review, and how they were addressed by the authors.
These decision letters provide much more useful information than knowing that the paper was accepted for publication by eLife. We have received a lot of feedback from young scientists who find that the posted decision letters and responses provide an immensely useful window into the historically opaque review process, helping them in their emerging roles as both authors and reviewers.
But it does not seem like the posted decision letters are widely used to help readers and potential readers contextualize the paper or understand unresolved issues. This is understandable, as decision letters are written for authors, so that they can improve the manuscript, rather than for potential readers.
It is now clear that if we want people to stop using journal title as the sole proxy for the quality of the papers it contains, we need to do more than just be transparent about peer review. We need to produce something designed to replace journal title – something that helps people find papers that may be of interest to them, understand how reliable the data and conclusions of the paper are, assess the impact of the paper, and gauge the potential future contributions of the authors.
eLife is launching two new initiatives aimed at enhancing the value of the peer reviews we produce to authors, readers and the broader community.
Our first step towards making peer reviews more useful is simple. For papers accepted by eLife from now on, our editors will write a clear and simple explanation of why the paper was selected for publication in the journal. Unlike the information in decision letters written for authors, these explanations will be written for readers, potential readers and others interested in the results of peer review (e.g., hiring committees). Our goal is for evaluations like these to replace journal title as the way we convey the assessment that reviewers and editors have made about a paper, and in doing so enable many additional ways to make peer review more dynamic and effective as a means to curate the literature.
Our second step will be to launch a new service called “Preprint Review” that offers authors who have posted a paper on bioRxiv the opportunity to have it openly reviewed by eLife on bioRxiv while also being considered for publication in the journal.
All Preprint Review submissions will be reviewed and considered for publication in exactly the same way that regular submissions to eLife are. The only difference is that Preprint Review submissions will go directly into the eLife consultative peer-review process, without the initial editorial assessment we use in the existing system.
In parallel to this regular process, we will prepare a public review of every Preprint Review submission, irrespective of whether we decide to publish the paper in eLife. These public reviews will differ from normal reviews, which are written for authors, by focusing on information of value to readers, potential readers and others interested in our assessment. They will be posted alongside the paper on bioRxiv using its new Transparent Review in Preprint (TRiP) service.
We are trialing Preprint Review now and will launch it in the new year.
The overarching idea of these two initiatives is to make the peer review carried out by eLife more valuable to the community by introducing new ways of communicating the outcomes of peer review, while maintaining a vibrant journal at the core of this system. In the long run, our hope is that people will find that these new types of peer-review outputs are more useful guides to the scientific literature than journal titles. We are excited about these new initiatives and hope you will join us in making them succeed.
Questions and comments are welcome. Please annotate publicly on the article or contact us at hello [at] elifesciences [dot] org.