Peer review process
Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, and public reviews.
Read more about eLife’s peer review process.

Editors
- Reviewing Editor: Peter Rodgers, eLife, Cambridge, United Kingdom
- Senior Editor: Peter Rodgers, eLife, Cambridge, United Kingdom
Reviewer #1 (Public review):
Summary:
The authors performed a multi-funder study to determine if the Matthew effect and early-career setback effect were reproducible across funding programs and processes. The authors extended the analysis of these effects to all applicants and compared the results to the prior studies that only looked at near-hit/near-miss applicants to determine if the effects were generalizable to the whole applicant pool. Further, the authors included new models that also account for researcher behavior and their overall likelihood to reapply for later funding and how this behavior may resolve what appears to be a paradox between the Matthew effect and the early-career setback effect.
Strengths:
Figure 4 shows that the "Post (late) MFCR" is the same for the funded and unfunded groups, indicating that the impact of early-career funding (at least in terms of citation metrics) is transient in researchers' overall careers. This finding should encourage researchers to persevere when needed, since long-term success remains attainable.
The inclusion of collider bias in the models, to account for researchers' behavioral responses, is a key strength of the paper and enhances both the analysis and the nuanced discussion of the results.
Weaknesses:
The discussion of limitations is thorough and points to the need for additional studies. One acknowledged limitation is that the authors only looked at applicants who reapplied for funding at the same funder. Given that the authors had the names and affiliations of the applicants from all of the funders, it would be helpful to understand why they were not able to track applicants across their full data set. Was the limitation technical or a result of the study design? What would have to change to enable this broader analysis?
In Section 4.1, the authors make a statement that the "between MFCR" difference was seen at 5 years, but not at 10 years, and so the authors chose to use the 5-year period for the presentation of their results. It would be helpful to also see the 10-year analysis and have further justification from the authors on why they selected to look at the 5-year period and how their conclusions might or might not change if they consider the longer time period.
The discussion could also note that many funders require novel research directions as a condition of receiving an early-career award. Awardees must establish the new research program and begin publishing, and may initially see a lower citation rate until the impact of the research is more broadly recognized. Are there ways to explore whether these time lags affect the "Between MFCR" more for those who were funded than for those who were not?
Reviewer #2 (Public review):
Summary:
The manuscript evaluates the generalizability of two phenomena of great interest to early-career scientists and scientific policymakers. These phenomena describe how early funding success can promote future funding success (the Matthew Effect) and how initially unsuccessful applicants may later succeed (the early-career setback effect). Given the often-normative aspirations of science-of-science studies, the manuscript represents a much-needed and highly significant effort, as it allows a broader audience to assess whether they should reconsider their behavior or policies.
Strengths:
The evidence provided by the authors for the generalizability of the Matthew Effect is very strong and convincing. The manuscript addresses an important topic of practical concern to early-career scientists and scientific policymakers.
Weaknesses:
If I am correctly interpreting S11 and S12, the statements on the early-career setback effect could be stronger and more direct. The argument in the main text relies on assumptions and simulations to suggest that observations of the early-career setback effect may depend on reapplications. In contrast, S11 and S12 appear to provide more direct evidence against its generalizability, showing that the effect seems to exist in, and be driven by, only one of the six funding agencies considered (FWF). This narrow replication may not be obvious to readers ("the early-career setback effect also replicates, but is not robust across funders").
I would also suggest that the authors provide a more nuanced discussion of the limitations of their Bayesian model. While the model seems appropriate for accounting for major factors, it appears to exclude others, such as the emergence of new scientific fields or the strategic reorientation of funders toward such fields.
Reviewer #3 (Public review):
Summary:
This paper investigates the Matthew effect, where early success in funding peer review can translate into potentially unwarranted later success. It also investigates the previously found "setback" effect for those who narrowly miss out on funding.
Strengths:
The study used data from six funding agencies, which increases the generalisability, and was able to link bibliographic data for around 95% of applicants. The authors nicely illustrate how the previously found "setback" effect for near-miss applicants could be a collider bias, arising because only some applicants chose to reapply later. This is a good explanation for the counter-intuitive effect and is nicely shown in Figure 5.
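To make this mechanism concrete, here is a minimal simulation sketch with invented parameters (it is not the authors' model): later outcomes depend only on latent quality, yet conditioning on reapplication makes near-miss applicants appear to outperform near-hits.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented parameters for illustration only: latent quality on a
# log-normal scale, echoing the paper's simulation setup.
quality = rng.lognormal(mean=0.0, sigma=0.5, size=n)

# Near the payline, funding is decided essentially at random.
funded = rng.random(n) < 0.5

# The collider: funded applicants stay in the sample, while unfunded
# applicants are observed again only if they reapply, which here they
# do only when their quality is high enough.
observed = funded | (quality > np.quantile(quality, 0.6))

# Later outcome depends only on quality; funding has no causal effect.
outcome = quality * rng.lognormal(mean=0.0, sigma=0.2, size=n)

print("near-hit mean outcome :", outcome[observed & funded].mean())
print("near-miss mean outcome:", outcome[observed & ~funded].mean())
# The observed near-miss group scores higher, mimicking a "setback
# benefit" that is purely an artefact of conditioning on reapplication.
```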
Weaknesses:
Most of the methods were clearly presented, but I have a few questions and comments, as outlined below.
In Figure 4(a), why are the "post" means much lower than the "pre" means? This contradicts the expected upward trajectory of researchers' careers. Or is this simply due to less follow-up time? But doesn't the field citation ratio control for follow-up time?
The choice of the log-normal distribution for latent quality was not entirely clear to me. It creates skew rather than a symmetric distribution, which may be reasonable, but log-normal distributions can have a very long tail that might not mimic reality, as I would not expect a small number of researchers to sit far above the crowd. That said, the skew was potentially dampened by the use of percentile scores. Some further reasoning and plots of the priors would help.
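As a quick check of this point, the following sketch (with invented parameters, not the paper's actual priors) shows how the log-normal's heavy right tail disappears once values are converted to percentile ranks:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented parameters: a log-normal latent quality has a long right tail...
quality = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
print("skewness of raw quality     :", stats.skew(quality))  # strongly positive

# ...but percentile ranks are uniform, hence symmetric, so only the
# ordering of researchers survives the transform.
percentile = stats.rankdata(quality) / len(quality)
print("skewness of percentile score:", stats.skew(percentile))  # ~0
```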
Can the authors confirm the results of Figure S9, which show no visible effect of altering the standard deviation of the review parameter or the mean citations? Is this simply because the prior for quality is dominated by the data? Or could it be that the width of the quality distribution does not matter, since it is the relative difference/ranking that counts, so that the beta in equation 6 adjusts to the different quality scale?
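One way to probe this conjecture (a sketch under invented assumptions, not the paper's actual equation 6) is to rescale the quality predictor in a logistic model and watch the slope absorb the change while the fitted probabilities stay put:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000

# Invented data-generating process: funding odds rise with latent quality.
quality = rng.normal(size=n)
funded = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * quality))

for scale in (1.0, 5.0):
    # Near-unregularised fit (large C) on the rescaled predictor.
    model = LogisticRegression(C=1e6).fit(quality.reshape(-1, 1) * scale, funded)
    p = model.predict_proba([[1.0 * scale]])[0][1]
    print(f"scale={scale}: beta={model.coef_[0][0]:.3f}, p(quality=1)={p:.3f}")
# beta shrinks by exactly the scale factor while the predicted probability
# is unchanged, consistent with the width of the quality prior being
# irrelevant once the slope can compensate.
```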
The contrary result for the FWF is not explained (Table S3). Does this funder have different rules around re-applicants, or many other competing funders?
The outlined qualitative research sounds worthwhile. Another potential mechanism (based on anecdote) is that some researchers react irrationally to rejection or acceptance, tending to think that the whole agency likes or hates their work based on one experience. Many researchers do not appreciate that their work was seen by a somewhat random selection of reviewers, and that it is unlikely to be the same reviewers next time.
"A key implication is the importance of encouraging promising, but initially unsuccessful applicants to reapply." Yes, A policy implication is to give people multiple chances to be lucky, perhaps by giving fewer grants to more people, which could be achieved by shortening the funding period (e.g., 4 year fellowships instead of 5 years). Although this will have some costs as applicants would need to spend more time on applications and suffer increased stress of shorter-term contracts. The bridge grants is potentially an ideal half-way house between many short-term and few long-term awards. Giving more grants to fewer people is supported by this analysis showing a diminishing returns in research outputs with more funding, DOI: 10.1371/journal.pone.0065263.
Making more room for re-applicants also made me wonder if there should be an upper cap on funding, potentially for people who have been incredibly successful. Of course, funders generally want to award successful researchers, but people who've won over some limit, for example $50 million, could likely be expected to win funding from other sources such as philanthropy and business. Graded caps could occur by career stage.