By Mark Patterson, Executive Director at eLife.
It’s not always the case, but sometimes you come back from a conference feeling energised, inspired and ready for action. That was my overriding feeling after returning from the recent ASAPbio meeting, co-organised with the Howard Hughes Medical Institute and Wellcome, and held at the HHMI headquarters in Chevy Chase, Maryland. It felt good right from the start, mainly because of the tone set by Ron Vale in his opening remarks. Ron strongly emphasised the importance of cooperation and collaboration in our efforts to improve the way that research is evaluated and communicated.
ASAPbio is primarily an effort by the life science community to encourage the use of preprints in biology. Ron, together with Jessica Polka, the Director of ASAPbio, is the driving force behind the initiative. The first meeting was held two years ago and focused on preprints; the target for the second meeting was peer review.
There were several themes at this meeting, and one that struck home for me was the way that the functions of a typical peer-reviewed journal are beginning to be disaggregated and performed by different organisations at different stages in the life-cycle of an article. Preprints themselves provide an example of this. As argued by Vale and Hyman, preprints deal with the disclosure function and help to communicate a new finding. Rigorous peer review then provides validation (or at least an evaluation of validity) so that the work can be communicated via a journal publication, thus further strengthening the standing and priority of a finding.
Such a separation of the basic functions that are typically bound up in journal publication can go much further. An early step after submission, for example, is the initial quality control that looks at issues such as image manipulation, data availability and ethical conduct. In the collaboration between bioRxiv and PLOS, announced just before the ASAPbio meeting, manuscripts submitted to PLOS that have undergone basic checks will be sent directly to bioRxiv and posted there. So PLOS is dealing with the basic quality control, and bioRxiv is dealing with disclosure, to use Vale and Hyman’s term. PLOS then continues with peer review and (hopefully) eventual publication of a revised version.
Peer review itself can also be teased apart into different types of assessments such as: the technical rigour of the work; whether the conclusions are supported by the evidence presented; and the likely relevance or importance of the work. A potential outcome of the meeting might be for journals to consider structuring peer review reports in a more consistent way, so that the more technical evaluation is separated from assessment of importance. This could help, for example, in transferring reports from one journal to the next in the event that an article is rejected after peer review.
A more radical suggestion, which generated a fair amount of discussion, came from Erin O’Shea and Bodo Stern of HHMI. One of the key parts of their proposal is that if a paper is sent for peer review, the journal should commit to publishing the work. The reviewers’ task is to provide constructive criticism and suggestions for how the work could be strengthened, and the authors then have the responsibility to respond thoroughly and rigorously to that peer review. Along with the revised article, the review reports and the author response would then be published so that readers can take advantage of these expert opinions. After publication, O’Shea and Stern also suggest that readers could benefit from the addition of some kind of “badging” or “tagging” to indicate the integrity and quality of the research, although precisely how these would work is less certain.
If successful, such ideas could save time and effort in the communication of new findings.
It’s unclear how many journals would support these approaches, but the eLife editors are considering an experiment along these lines.
It seems to me that these and other discussions are hinting at a future for research communication that involves the publication of a series of versions of an article, which steadily improves thanks to various kinds of feedback. This transparent process might begin with the early disclosure of a first version of an article – the “preprint”, although a better term might be “version 1”. As new versions are posted, readers need to know what kinds of quality control steps have taken place. And if there are multiple versions of the article, they all need to be linked together, but they don’t necessarily need to be published in one place. Peer review could focus initially on technical rigour, as is done in publications such as PLOS ONE, PeerJ, Scientific Reports and so on. I also wonder whether an article that has been evaluated for technical rigour and then published might be resubmitted to another journal that specialises in evaluating importance for a particular field. Right now, this would be considered unethical practice, but if there is a clearer separation of the functions of peer review and there is transparency about the different versions of an article, such an approach might be worth considering. An article steadily evolves over time in light of feedback and further work, and such feedback and revision doesn’t necessarily stop with publication in a journal.
None of these ideas are particularly new, of course. F1000Research, which was also presented at the meeting by Rebecca Lawrence, is a publication that brings most of these ideas together in a single platform and has been publishing for about five years. Articles are published as soon as they are submitted (version 1), having gone through some basic checks. Invited peer review takes place completely in the open, with a focus on the technical validity of the work. In response to reviewer comments, authors can update their articles with a new version (which is subject to a further round of peer review). There is no editorial assessment of the likely importance of the work, but potentially these functions could be added later, as discussed by Lawrence and Tracz in their contribution to the ASAPbio commentary articles. The F1000 platform is also being used by funders (Wellcome, Gates and others), which will no doubt boost the general level of comfort with this approach. Two key questions are whether single monolithic platforms are better suited to research communication than a more distributed system of interoperable services, and whether the foundational infrastructure supporting research communication should itself be open (unlike the technology underpinning F1000).
Although many more ideas were discussed at the meeting, one final theme that is important to mention is the role of the early-career community. Prachee Avasthi (recently appointed to the eLife Board of Directors) gave an inspiring talk on the way that preprint journal clubs can be an effective training ground for peer review by early-career researchers. One of the great strengths of the approach is that thousands of journal clubs take place across the world, which makes the idea easy to scale. By focusing on preprints, there is a great incentive for early-career researchers to write up their assessments and share them with the authors.
There are already examples of authors who have found such feedback valuable in the revision of the work for publication in a journal.
By sharing their assessments openly on platforms such as PREreview, early-career researchers can build a portfolio of work, which will also help raise their profile with editors who are desperate to find new reviewers. It seems like a situation where everyone wins, so it’s not surprising that there was so much interest in these ideas.
Like the first ASAPbio meeting, the most recent meeting ended with a strong call to action. Back in 2016, the message was to go and post preprints, and to tell everyone else to do the same. I didn’t attend that meeting, but from the commentary about it at the time there seemed to be strong support for the message, and this has since been reinforced by the enormous progress made in the posting and use of preprints over the intervening years.
At the 2018 meeting there seemed to be several points of agreement and potential action. The biggest point of agreement was support for posting peer review reports. eLife already does this by publishing the consolidated decision letter and author response along with the article, and EMBO Journal, BMJ and others also make review reports and other materials available. Currently, only a tiny minority of journals do this, but coming away from the meeting, it seems that many more journals may now consider similar policies. Unlike the first meeting, however, this action depends on publishers rather than researchers. For the research community, the strongest message I sensed was enthusiasm for early-career researchers to review preprints and share their critiques with the authors. If that takes hold, a massive community of talented reviewers could emerge, habituated in the open sharing of their opinions and ideas – a tantalising prospect for those who hope for a more transparent and collaborative scientific culture.
We welcome comments/questions from researchers as well as other journals.