Abstract
Research funding plays an important role in academic careers. Previous research showed that researchers who obtain early-career funding are more likely to obtain later-career funding, whereas researchers who do not obtain early-career funding show a higher citation impact when reapplying for later-career funding. We replicate the so-called Matthew effect and early-career setback effect across fourteen different funding programmes from six research funders across Europe and North America. Earlier studies rely on studying applicants close to the funding line, and the inferred effects are limited to this “grey zone”. We study the robustness and generalisability of both effects to the whole population, beyond applicants in the “grey zone”. We find that the Matthew effect replicates, is robust across funders and model specifications, and generalises to the whole population. The early-career setback effect also replicates, but is not robust across funders and model specifications, and does not generalise to the whole population. We suggest that the early-career setback observation is due to a selection effect: unfunded applicants are especially likely to reapply later if they have a high citation impact. To address the Matthew effect, research funders and research organisations could consider stimulating promising rejected applicants to reapply. Another possibility would be to diminish potential deleterious effects of early funding decisions on academic careers.
1 Introduction
The Matthew effect is one of the most widely discussed phenomena in research funding. Merton (1968) proposed the Matthew effect to explain the accumulation of resources and recognition to the most successful, “the accruing of greater increments of recognition for particular scientific contributions to scientists of considerable repute and the withholding of such recognition from scientists who have not yet made their mark.” (Merton, 1968, p. 58). In the years that have passed since then, interest in the phenomenon has not diminished (DiPrete & Eirich, 2006; Lynn et al., 2009; Squazzoni & Gandelli, 2012), particularly due to signs of an increasing concentration of funding among a small share of researchers (Aagaard et al., 2020; Nielsen & Andersen, 2021) alongside a growing share of early-career researchers working in temporary positions (Milojevic et al., 2018).
Despite evidence for an increasing concentration of funding (Aagaard et al., 2020; Ma et al., 2015), publications (Larivière et al., 2010), and citations (Nielsen & Andersen, 2021), the Matthew effect is more challenging to examine empirically. A number of studies have found patterns that show increasing levels of inequality, while aiming to address the key challenge of distinguishing between capability and reputation effects (Azoulay et al., 2014; Farys & Wolbring, 2021; Ghirelli et al., 2023; Huber, 1998; Melin & Danell, 2006; Petersen et al., 2011; Rijt et al., 2014; Sinatra et al., 2016).
Bol et al. (2018) found that researchers in the Netherlands who had early success in obtaining funding had a greater chance of accumulating subsequent funding compared to researchers who missed out on funding. Jacob and Lefgren (2011) showed similar results for the National Institutes of Health (NIH) in the USA, while Ghirelli et al. (2023) presented recent evidence of similar patterns for the European Research Council (ERC) grants in Europe. This calls into question the effectiveness of funding allocation and research culture, as researchers with otherwise strong potential may not be able to continue in academia and build a sustainable research career without funding. On the face of it, the Matthew effect in research funding may seem to be determined by reviewers of applications and funders, reflecting the weight placed on status and past achievements when assessing applications. However, researchers themselves may also affect future funding success by changing their behaviour in response to earlier funding decisions. Grantees could strengthen their efforts due to increased self-confidence or become complacent. Alternatively, applicants who experienced an early-career setback, and failed to obtain early-career funding (Sitkin, 1992), but who reapplied for funding later on, were shown to outperform (in terms of citation impact) applicants who were successful in obtaining early-career funding (Wang et al., 2019). This has been interpreted as a “what does not kill you makes you stronger” effect: the process of further developing a funding application informed by failure and funder feedback strengthens their research programme. Although the Matthew effect and the early-career setback effect can plausibly coexist, they are somewhat paradoxical and call for further research.
We replicate the studies of Bol et al. (2018) and Wang et al. (2019) across 14 different funding programmes from six research funders (see Methods) in North America and Europe, covering 109624 funded and unfunded grant applications (Supplementary Table S1). The use of identical data to replicate both studies provides a consistent basis for the effects compared to the original studies, which involved different datasets and contexts. This allows us to study both effects without concerns about contextual differences. We illustrate both effects in Fig. 1. The inclusion of data from a suite of funding programmes from multiple funders provides a basis for the exploration of heterogeneous effects across multiple settings and may offer a unifying explanation of both effects.

Illustration of the Matthew effect and the early-career setback effect.
An academic, say Alice, first applies for early-career funding in 2015. She receives early-career funding and goes on to reapply for later-career funding in 2020. According to the Matthew effect, Alice’s chances of receiving later-career funding are higher because she received early-career funding. Alice showed a high Mean Field Citation Rate (MFCR) before receiving her early-career funding, and similarly a high MFCR between the early-career and later-career applications. According to the early-career setback effect, had she instead not received funding, her “Between” MFCR would have been higher. In addition to the “Between” MFCR, we also study the MFCR after the early-career application (“Post (early)”) and after the later-career application (“Post (later)”).
The methodology of Bol et al. (2018) relies on a regression discontinuity design (RDD) to study the Matthew effect. This approach compares near-miss applicants (i.e. applicants slightly above the funding line) to near-hit applicants (i.e. applicants slightly below the funding line). The critical assumption here is that near-hit and near-miss applicants are otherwise comparable and that who gets selected for funding is (near) random. With this assumption, the subsequent differences that we observe in terms of outcomes can be causally attributed to the funding decision (Cattaneo & Titiunik, 2022). However, this identifies only a local effect, which applies to applicants close to the threshold.
The approach of Wang et al. (2019) to study the “early-career setback” is very similar but suffers from one particular problem: it does not study the entire population of scholars who experienced an early-career setback, but only the subset of scholars who reapply at a later point in time. In order to address this problem, Wang et al. use various methods: (1) an RDD regression approach with controls; (2) a matching approach; and (3) a “conservative removal” approach. We use here a Bayesian model similar in spirit to their regression approach, and also consider the conservative removal approach.
In most funding programmes included in our study, there is no hard cut-off based on review scores, making it difficult to implement a traditional regression discontinuity design. Instead, we implement a design akin to a fuzzy regression discontinuity design (Cattaneo & Titiunik, 2022) to identify near-hit/miss applicants (see Methods for details). We aim to study the generalisability of the effects based on a hierarchical Bayesian model using a latent “quality” variable (see Fig. 2 and Methods). Quality is a complex multidimensional concept, but we find that this term fits this particular setting best and aligns with interpretations of the quality of funding proposals. In the model, higher quality is assumed to be associated with higher review scores and citation scores. Although we never observe quality directly, the model relies on the observed citations and review scores to infer quality (see Fig. S16). There is considerable uncertainty and variation in how quality translates into review scores and citation scores. To address this, we test a range of assumptions about the strength of the relationship between quality and the observed scores. These tests include extensive sensitivity analyses to study how our results change depending on these assumptions. In the main text we present results under one representative set of assumptions, which assumes relatively weak associations between quality and citation and review scores respectively. The full results of the sensitivity analysis are available in the appendix (Section B).

Illustration of Bayesian model.
The latent quality affects whether a researcher applies for funding, the review score they receive and their citation impact. The funding decision at time t affects the decision to apply, the review score and the citation impact at time t + 1. The effect of time is modelled using a survival analysis approach that also considers right censoring of the observations. Field and affiliation effects in the funding decision are not illustrated here for simplicity, but are included in the model.
2 Results
2.1 Matthew effect
In our pooled dataset, we find that the percentage of applicants who receive later-career funding is higher for applicants who first received early-career funding (26%) than for those who did not (15%) (Fig. 3). Although this comparison gives an idea of the size of the difference between those receiving early-career funding and those who did not, it does not represent a causal effect. We therefore compare near-hit to near-miss applicants, as discussed earlier. For near-hit/miss applicants, we observe a substantially smaller difference of 20% for funded applicants versus 18% for unfunded applicants (Fig. 3). Hence, although funded early-career applicants are 11 percentage points more likely to obtain later-career funding overall, the causal effect amounts to about 3 percentage points; the rest reflects confounding differences.

The Matthew effect replicates and generalises.
In (a) we show that funded applicants are more likely to receive later-career funding than unfunded applicants, both overall (p < 2.5 × 10⁻⁵, n = 40369) and limited to near-hit/miss applications (p = 0.087, n = 1699). In (b) we show that previously funded applicants are more likely to submit later applications than unfunded applicants (θ = 0.26 ± 4.5 × 10⁻³, n = 109624). In contrast, (c) shows that prior funding does not increase review scores; if anything, the effect is even slightly negative (λ = −0.058 ± 1.6 × 10⁻³, n = 109624). The illustrated effects are calculated using the mean inferred quality (⟨q⟩ = 3.51) as a baseline. The insets show the posterior distributions of the estimated effect coefficients.
These results are consistent with a causal effect of early-career funding on later-career funding. One limitation is that they concern local effects: they only hold for applicants close to the threshold. This raises the question of what the effect looks like further away from the funding threshold. Here, the Bayesian model provides useful insights, as it incorporates all applications, while controlling for the latent quality.
There are two separate effects that can contribute to the overall effect of early-career funding on later-career funding: an application effect and a review effect. That is, funded early-career applicants may show a higher probability of reapplying later. If unfunded early-career applicants do not reapply for later-career funding, they will obviously not receive funding. In addition, later-career applications may obtain higher review scores because they received early-career funding. This review effect highlights the paradox between the Matthew effect and the early-career setback effect. If unfunded early-career applicants have higher citation scores, then they may also receive higher review scores, because citations and review scores are correlated in general. The early-career setback effect thus suggests that unfunded early-career applicants may be more likely to receive funding due to higher review scores, in contrast to the Matthew effect, which suggests that unfunded early-career applicants are less likely to receive funding. The Bayesian model uses a latent quality variable and models both the application process and the review process to consider the potential effects of previous funding on the probability of receiving later funding.
Based on our Bayesian model, we find a consistent positive effect of previous funding on later applications (Fig. 3). In a sensitivity analysis, we find that this effect is robust and consistent across various specifications and funders, except for Health Research BC (Fig. S9). For an applicant with the average inferred quality, we find that after 5 years unfunded applicants have a probability of having applied of 0.29 ± 0.006, versus 0.36 ± 0.007 for funded applicants. We find that there is a weakly negative effect of prior funding on review scores (Fig. 3). However, this finding is not robust and varies with the assumed specifications and across funders (see Section B).
Taken together, our results suggest that there is a clear Matthew effect, mainly driven by the application process, not by the review process.
2.2 Early-career setback
Wang et al. (2019) compare the citation impact of publications published between the early- and later-career application (“Between”). Here we also report the citation impact of publications in other periods: before the early-career application (“Pre”), after the early-career application (“Post (early)”) and after the later-career application (“Post (later)”). See Fig. 1 for an illustration of the various periods. For “Pre” and “Post (early)” impact, we report the citation impact of all applicants, unless otherwise stated. The “Between” and “Post (later)” periods implicitly concern only those who reapplied, since these quantities are not defined otherwise.
We find that the Mean Field Citation Rate (MFCR, see Methods) is higher for funded than for non-funded applicants across all periods (Fig. 4). Although this pattern generally also holds across funders, there are some variations and exceptions (Section B). As with the Matthew effect, this observation is confounded by quality: those who are funded are likely to be of higher quality, and therefore to show a higher MFCR. By restricting to near-hit/miss applicants, we aim to control for this unobserved variable. The “Pre” MFCR for near-miss funded applicants is similar to that of near-hit unfunded applicants, corroborating that they have similar characteristics (Fig. 4). This is robustly the case, also across funders. Similarly, the “Post (early)” MFCR is similar for near-miss funded and near-hit unfunded applicants, showing no evidence of any causal effect of early-career funding on MFCR.

The early-career setback effect replicates but does not seem to generalise.
In (a) we show that funded applicants have a higher MFCR in all periods (p < 0.05 across all periods, n = 40369). In (b) we show that among near-hit/miss applicants, the “Between” MFCR is higher for unfunded applicants (p = 0.019, n = 1699), while the MFCR is nearly the same across all other periods (p > 0.24 across all periods, n = 1699). In (c) we show that the Bayesian model suggests a positive effect of funding on later MFCRs (γ = 0.91 ± 0.038, n = 109624). We illustrate this effect using the mean inferred quality as a baseline (⟨q⟩ = 3.51), and we display the distribution of the MFCR of 30 publications (i.e. using the mean over 30 publications). The inset shows the distribution of the estimated effect coefficient.
The “Between” MFCR shows a clear difference for near-hit/miss applicants (Fig. 4), in line with the original finding (Wang et al., 2019). However, this estimate suffers from one problem: it is conditional on having reapplied. Since the probability to reapply may be affected by quality, we may condition on a collider. (A collider is a variable that is causally affected by two other variables (→ X ←), as distinct from a mediating variable (→ X →) or a confounding variable (← X →). In Fig. 2 the application at time t + 1 is an effect of quality and of the funding decision at time t, making the application at time t + 1 a collider, in particular for the causal effect of the funding decision at time t on citation impact at time t + 1. See Huntington-Klein (2021) for a general introduction, and see Klebel and Traag (2024) for a discussion of such issues in the context of science studies.) We illustrate this potential collider bias further on the basis of a simulation (Fig. 5). If collider bias plays a role, unfunded early-career applicants who reapply may show a higher quality, resulting in a possibly higher MFCR, even though this difference does not represent a causal effect. The “Post (early)” MFCR does not suffer from this collider bias and shows no significant differences between funded and unfunded applicants. The “Post (later)” MFCR shows hardly any difference between funded and unfunded applicants, although this estimate does suffer from collider bias. When conditioning on having reapplied, the “Pre” MFCR indeed also tends to be higher for unfunded applicants (see Section B). These results suggest that collider bias may indeed play a role.

Illustration of the potential collider in the early-career setback effect based on a simulation.
We show quality and MFCR values for near-hit (in blue) and near-miss (in red) applicants from a simulation. Simulated applicants who reapply later are opaque, while those who do not are translucent. The marginal distributions of quality and MFCR are similar for funded and unfunded applicants (shown as translucent filled areas in the marginal plots). However, in our simulation, unfunded applicants only reapply when they have a high MFCR, while funded applicants reapply irrespective of their MFCR. The marginal distribution of the MFCR clearly shows that the MFCR for unfunded applicants is higher than for funded applicants who reapply (shown as a solid line in the marginal plots). Due to some association between quality and citations in our simulation, there is also a smaller but visible difference in the marginal distribution for quality between funded and unfunded applicants who reapply. This illustrates that results similar to an early-career setback effect may appear due to a collider bias, and need not represent a causal effect. Moreover, it also shows that the “conservative removal” procedure does not properly address this collider bias, because there are simply not as many funded applicants as unfunded applicants with equally high MFCRs.
As discussed, Wang et al. (2019) use several approaches to try to address this problem. As we showed, funded early-career applicants are more likely to reapply for funding later. Wang et al. (2019) equalise application rates by removing applications from earlier funded applicants with the lowest “Between” MFCR, which they term the “conservative removal” procedure. We also implement this conservative removal procedure. We find differences similar to, although somewhat smaller than, those in the near-hit/miss comparison. However, the conservative removal procedure still has limitations. If unfunded applicants only reapply if they have a high “Between” MFCR, while funded applicants reapply irrespective of their “Between” MFCR, then the conservative removal procedure is unable to remedy this (see also Fig. 5).
Additionally, Wang et al. (2019) use a regression framework to control for potentially relevant confounding factors. Here, we take a slightly different approach, using the Bayesian model with a latent quality variable. Both approaches aim to control for confounding effects. In our Bayesian model, we find the effect of funding on subsequent MFCRs to be positive (Fig. 4). However, this effect is not robust to varying assumptions and becomes negative if we assume citations to be more strongly associated with quality (see Section B).
Taken together, these results suggest that collider bias may indeed play a role, and that the higher “Between” MFCR is the result of such a collider bias.
3 Discussion
Across a large and diverse dataset, we find consistent evidence for a Matthew effect in research funding. Applicants who received early-career funding are more likely to receive later-career funding. Although this difference can be partly explained by differences in quality, our near-hit/miss analysis suggests a causal mechanism, corroborating earlier research (Bol et al., 2018). While this approach makes a strong argument in favour of a causal effect, it only identifies a local effect for applicants near the funding threshold. In contrast, our Bayesian model allows us to generalise this finding more broadly across the population of applicants. Although the interpretation of the Bayesian model depends on what assumptions are reasonable, we find that the Matthew effect is robust to varying assumptions. The observed effect is clear, but moderate. Our results suggest that the Matthew effect is mostly driven by a higher application rate of funded applicants, not by reviewer bias towards funded applicants.
We also replicate the early-career setback effect described by Wang et al. (2019), whereby unfunded early-career applicants who later reapply demonstrate higher citation impact than their funded counterparts, and find support for it in the near-hit/miss analysis. However, unlike the Matthew effect, it does not robustly generalise across funders and model assumptions. We suggest that the early-career setback effect is not a causal effect, but the result of a selection bias: unfunded applicants with a high citation impact may be especially likely to reapply.
Taken together, these results suggest a unified explanation of the Matthew effect and the early-career setback effect. The Matthew effect is driven by a higher application rate of funded applicants, while the early-career setback observation is driven by a higher application rate of applicants with a higher citation impact, in particular if they are unfunded. Why application rates differ is not entirely clear. One possibility is that funded applicants and unfunded applicants with higher citation impact are more likely to remain in academia. Another possibility is that they are more confident in reapplying.
Not all effects were equally apparent across all funders. For instance, the Matthew effect was less evident in CIHR, and Health Research BC showed smaller differences in application rates. The early-career setback was more pronounced for FWF than other funders, with Health Research BC and SSHRC showing hardly any such differences. The effects are thus shaped by context and may depend on the funding system design, the review process, academic culture and career trajectories. How such factors influence the effects is not clear and requires further research.
3.1 Policy implications
Funding early-career researchers is key to many funders, and better understanding the Matthew effect is highly relevant to their mission. Our results offer several implications for research funding policy. First, we suggest that policies designed to mitigate or respond to cumulative advantage may be more fruitful than those based on the idea that early-career rejection itself can strengthen applicants.
A key implication is the importance of encouraging promising, but initially unsuccessful applicants to reapply. Funders and research organisations could consider interventions that give targeted feedback to unfunded near-hit applicants, provide reapplication support through training or mentorship, or fast-track highly ranked, unfunded proposals for reapplication. Some funders have already adopted relevant mechanisms. For example, CIHR provides Bridge Grants and Priority Announcements to support promising, highly ranked applicants who were not funded in strengthening their proposals. This might be one reason we observe a weaker Matthew effect for CIHR.
Furthermore, policies that reduce longer-term career consequences of early-career funding outcomes could help create a more equitable system. Such policies could include instruments that allow for flexible timing of early-career grants, alternative funding routes for researchers whose career paths do not follow traditional trajectories, and institutional support and career retention policies for early-career researchers, in particular for promising applicants who failed to obtain early-career funding.
Introducing randomisation in funding decisions (Woods & Wilsdon, 2022), especially in the grey zone of near-hit/miss applications (Heyard et al., 2022), could also help reduce some potential reputational effects of early-career funding.
3.2 Limitations
Like other studies, ours relies on data from participating funders and does not include the full funding portfolio of individual researchers. This may lead to a partial view of researchers’ longer-term success, especially for those who may have secured support from other funders not included in our dataset. However, the included funders are critical sources of external funding in their respective national systems. These funders are therefore likely to remain relevant for later-career applications, even after an initial funding rejection.
Relatedly, we do not track applicants across funders. Our findings should thus be interpreted in terms of funding trajectories within a given funder, rather than across an entire funding ecosystem. That is, we only study early-career applicants who reapply at the same funder. This may have some implications for our results. For instance, one possibility is that unfunded early-career applicants are less likely to reapply at the same funder. One reason for this could be that unfunded applicants may be more likely to move to another country. In such cases, we may overestimate the Matthew effect because we are less likely to observe later-career funding applications from unfunded early-career applicants. Studying funding across multiple funders and countries would be of great interest, and might be a future possibility using RoRI’s Funder Data Platform. Nevertheless, the harmonised structure of our dataset allows for meaningful comparisons across diverse national and institutional configurations.
We note the use of imputed data for missing values, particularly in review scores and publication metrics (MFCRs). This approach enables the inclusion of a significantly broader and more representative set of applications, avoiding the systematic exclusion of incomplete cases. Our imputation method is grounded in established Bayesian approaches and transparently reported. Dropping cases with missing bibliometric scores instead of imputing them may bias our estimates, because bibliometric scores are more likely to be missing for unfunded applicants.
Our data does not permit the use of an exact cutoff to replicate an exact regression discontinuity design. This necessitates relying on a fuzzy approach, which requires stronger assumptions to make causal claims. One valid concern here is that factors that play a role in funding decisions near the threshold may also affect future funding decisions, thus acting as a potential confounder. In particular, peer review shows some clear biases, for instance on age (Guthrie et al., 2018), which may act as a confounder near the threshold. On the other hand, studies of peer review in funding typically find considerable uncertainty (Cole et al., 1981; Fogelholm et al., 2012; Guthrie et al., 2018; Heyard et al., 2022; Pier et al., 2018), suggesting that decisions near the threshold are likely to be close to random. Indeed, we find that near-hit and near-miss applicants are nearly identical on a number of covariates (Fig. S5). Notably, we observe, if anything, a slight difference in favour of early-career researchers being funded near the threshold, rather than more senior ones. Moreover, our modelling approach addresses potential confounders in the near-hit/miss analysis, corroborating our results. Given the limitations of working with a broader set of funding programmes with fuzzy cutoffs, we believe our approach is the best possible. Nonetheless, the results may potentially still suffer from some confounding, even if limited.
In our Bayesian model we use a latent quality variable. Although quality can encompass a wide variety of relevant considerations, in practice it is inferred on the basis of citation impact and review scores only. In principle, there may be other factors relevant to the consideration of quality, and our model may miss such factors. The fundamental problem is that quality is unobservable (Traag, 2025), and that we cannot ascertain with full certainty what factors are relevant. In our Bayesian analysis, the quality inference shows considerable uncertainty, which is propagated into the uncertainty of other estimates.
As noted in the results, we suggest that collider bias might play a role in the early-career setback effect, as the analysis conditions on reapplication. This may lead to an over-representation of high-performing individuals among the reapplicants. While our results are consistent with the suggestion of such a collider bias, our current data, based on aggregated citation statistics, does not allow studying this mechanism at the level of individual publication trajectories.
3.3 Future research
Although our results confirm a Matthew effect in research funding, they also point towards a need for a deeper analysis to fully understand its underlying mechanisms, and its implications for research funding policy.
Most importantly, we need to further understand the (re)application behaviours of researchers through in-depth studies of why and where they reapply, and the implications of early-career funding success for their chances of remaining within the academic research system. This also calls for further research with individual-centred, longitudinal funding data that allows for a comprehensive analysis of how individuals select between funding opportunities based on initial funding outcomes.
In parallel, we propose that the observed patterns are likely shaped by behavioural mechanisms, such as increased confidence or institutional support following early-career success. Further research with a qualitative dimension (e.g. including interviews or ethnographic work) could offer critical insight into whether and how these mechanisms are at play.
Together, these lines of inquiry would support a more comprehensive understanding of how early funding outcomes shape long-term academic participation and success.
4 Methods
We collected data from six different funders:
Canadian Institutes of Health Research (CIHR);
Social Sciences and Humanities Research Council (SSHRC);
Michael Smith Health Research BC;
Luxembourg National Research Fund (FNR);
Austrian Science Fund (FWF); and
Wellcome Trust.
Each funder collected data internally, matched applicants to Dimensions and extracted information on the basis of a common script that we jointly developed. Overall, 96% of applicants were matched to Dimensions on the basis of names and affiliations. Funded applicants were slightly more likely to be matched (97%) compared to unfunded applicants (95%), presumably because unfunded early-career applicants may be less likely to appear in Dimensions. All information was extracted using the Dimensions API, limiting us to the data and information provided by Dimensions. In particular, whereas Wang et al. (2019) rely on indicators of hit papers (i.e. papers belonging to a top citation percentile), we collect the Mean Field Citation Rate (MFCR) provided by Dimensions based on their Fields of Research classification system (see https://plus.dimensions.ai/support/solutions/articles/23000018848-what-is-the-fcr-how-is-it-calculated- for details). The Field Citation Rate (FCR) for a publication represents the number of citations divided by the average number of citations of papers from that same publication year and field, with the MFCR representing the mean of those FCRs. Ideally, we would have collected citations with a more restrictive citation window (e.g. only considering citations up until the early-career application for the “Pre” MFCR), but this is not possible in the Dimensions API: all MFCRs are thus based on all citations up until the time of data collection. Due to privacy concerns, we limited the data that was shared, meaning that only averages and totals were provided by the funders, and no data on individual publications was shared. We provide more detailed information about the data in Section A.
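To make the indicator concrete, the following minimal sketch shows how an FCR and MFCR would be computed if per-publication citation counts and field-year baselines were available; in our data, Dimensions provides the MFCR directly, so the function names and numbers below are purely illustrative.

```python
import numpy as np

def fcr(citations, field_year_mean):
    """Field Citation Rate: citations of one publication relative to the
    mean citations of publications from the same field and year."""
    return citations / field_year_mean

def mfcr(citation_counts, field_year_means):
    """Mean Field Citation Rate: the mean FCR over a set of publications."""
    fcrs = [fcr(c, m) for c, m in zip(citation_counts, field_year_means)]
    return float(np.mean(fcrs))

# Hypothetical example: three papers with 10, 2 and 30 citations, in
# field-years where the average paper received 5, 4 and 10 citations.
print(mfcr([10, 2, 30], [5.0, 4.0, 10.0]))  # (2.0 + 0.5 + 3.0) / 3 = 1.83...
```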
We used RoRI’s Funder Data Platform (https://funderdataplatform.org) as a secure platform for performing our analyses. The Bayesian models were fitted separately using the compute resources from the Academic Leiden Interdisciplinary Cluster Environment (ALICE) provided by Leiden University.
4.1 Near-hit/miss analysis
Our analysis relies on the identification of near-hit and near-miss applications that are close to the funding threshold. In the ideal setup, the funding threshold is exact, and any application above the threshold is funded, while any application below the threshold is not funded. This is not the case for most of our funding programmes. Therefore, we identify a “funding threshold” based on Bayesian logistic regression (Gelman et al., 2008), where we regress the funding decision on the review scores. We thus identify applications close to the funding threshold, and take both the five unfunded and the five funded applications closest to that threshold, which we use as a comparison baseline.
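A minimal sketch of this selection procedure is given below. It uses a plain logistic regression in place of the Bayesian logistic regression of Gelman et al. (2008) (which additionally handles perfect separation), so the exact fit differs from our implementation; the input scores and funding decisions are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def near_threshold(scores, funded, k=5):
    """Identify the k funded and k unfunded applications closest to the
    inferred funding threshold of a single funding call."""
    X = np.asarray(scores, dtype=float).reshape(-1, 1)
    y = np.asarray(funded, dtype=int)
    fit = LogisticRegression().fit(X, y)
    # The threshold is the score where the predicted funding probability
    # is 0.5, i.e. where intercept + coefficient * score = 0.
    threshold = -fit.intercept_[0] / fit.coef_[0, 0]
    dist = np.abs(X[:, 0] - threshold)
    funded_idx = np.flatnonzero(y == 1)
    unfunded_idx = np.flatnonzero(y == 0)
    near_funded = funded_idx[np.argsort(dist[funded_idx])[:k]]
    near_unfunded = unfunded_idx[np.argsort(dist[unfunded_idx])[:k]]
    return threshold, near_funded, near_unfunded

# Hypothetical call: 20 applications, the top scores mostly funded.
rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, 20)
funded = (scores + rng.normal(0, 0.1, 20) > 0.6).astype(int)
print(near_threshold(scores, funded))
```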
We infer posterior distributions and quantify differences between funded and unfunded applicants as the probability that the posterior difference is larger (or smaller) than 0. Note that this is different from the traditional p-value, which denotes the probability of observing such a large difference under the assumption of a null model of no differences. The probabilities we report reflect the inferred probability that the difference is larger (or smaller) than 0. The posterior distribution is obtained through a sample of 4000 draws. If all the samples are lower (or higher) than 0, we report p < 1/4000 = 2.5 × 10⁻⁵.
We infer the probability p of being funded using

$$k \sim \operatorname{Binomial}(n, p),$$

with $k$ the number of funded applications out of $n$ total applications, and a simple $p \sim \operatorname{Beta}(1, 1)$ prior. For the Matthew effect, we consider all future later-career applications, and consider whether an applicant received any funding, similar to Bol et al. (2018). Restricting to early-career applications from before 2015 shows qualitatively similar results.
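Under the Beta(1,1) prior, the posterior of p is Beta(1 + k, 1 + n − k), so the reported probabilities can be computed directly from posterior draws. A sketch with hypothetical counts:

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_funding_prob(k, n, draws=4000):
    """Posterior draws of the funding probability p: with a Beta(1,1)
    prior and a Binomial(n, p) likelihood, the posterior of p is
    Beta(1 + k, 1 + n - k)."""
    return rng.beta(1 + k, 1 + n - k, size=draws)

# Hypothetical counts: 340 of 1700 funded early-career applicants obtained
# later-career funding, versus 306 of 1700 unfunded applicants.
p_funded = posterior_funding_prob(340, 1700)
p_unfunded = posterior_funding_prob(306, 1700)

# Probability that the difference is larger than 0 (not a classical p-value).
print(np.mean(p_funded - p_unfunded > 0))
```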
For each funder we collected aggregated publication data per applicant rather than individual publication data. In order to correctly infer the average impact, we therefore need to consider the number of publications as well. We infer the average by relying on a Bayesian model

$$\langle c_i \rangle \sim \mathcal{N}\!\left(\mu,\ \frac{\sigma}{\sqrt{n_i}}\right),$$

with $n_i$ the number of publications and $\langle c_i \rangle$ the MFCR of applicant $i$. For the early-career setback effect, we consider later-career applications that happen exactly five years later, and early-career applications from before 2015. This is to ensure that sufficient time for realising impact has passed. This is similar to the approach of Wang et al. (2019), but here we focus on five years instead of ten. Results when considering a 10-year period show no early-career setback effect across any funder.
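A sketch of this inference under the model above: with a flat prior on the group mean, each per-applicant average contributes a precision weight n_i/σ², yielding a conjugate normal posterior. The σ value and counts below are illustrative assumptions, not fitted values.

```python
import numpy as np

def group_mean_mfcr(applicant_mfcrs, n_pubs, sigma=20.0, draws=4000, seed=0):
    """Posterior draws of a group-level mean MFCR when only per-applicant
    averages over n_i publications are shared. Each average is treated as
    normal around the group mean with standard deviation sigma / sqrt(n_i),
    giving a conjugate normal posterior under a flat prior."""
    rng = np.random.default_rng(seed)
    c = np.asarray(applicant_mfcrs, dtype=float)
    w = np.asarray(n_pubs, dtype=float) / sigma**2  # precision weights n_i / sigma^2
    mean = np.sum(w * c) / np.sum(w)
    sd = 1.0 / np.sqrt(np.sum(w))
    return rng.normal(mean, sd, size=draws)

# Hypothetical group: three applicants with average MFCRs of 2.1, 4.0 and
# 3.2 over 12, 30 and 8 publications respectively.
draws = group_mean_mfcr([2.1, 4.0, 3.2], [12, 30, 8])
print(draws.mean(), draws.std())
```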
For the conservative removal we identify per funder the proportion of unfunded early-career applicants who reapplied later. We then remove funded early-career applicants with the lowest “Between” MFCR in order to ensure the proportion of early-career applicants that reapplied is the same for both unfunded and funded early-career applicants.
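A minimal sketch of this removal step, assuming we have the “Between” MFCRs of funded reapplicants and the reapplication counts per group (all numbers hypothetical):

```python
import numpy as np

def conservative_removal(funded_between_mfcrs, n_funded, n_unfunded,
                         n_unfunded_reapplied):
    """Drop the funded reapplicants with the lowest 'Between' MFCR until
    the share of funded applicants who reapplied matches the share among
    unfunded applicants (sketch of Wang et al., 2019)."""
    target_share = n_unfunded_reapplied / n_unfunded
    n_keep = int(round(target_share * n_funded))
    mfcrs = np.sort(np.asarray(funded_between_mfcrs, dtype=float))[::-1]
    return mfcrs[:n_keep]  # keep only the highest-MFCR funded reapplicants

# Hypothetical funder: 50 of 500 unfunded applicants reapplied (10%), while
# 120 of 500 funded applicants did; the 70 funded reapplicants with the
# lowest 'Between' MFCR are removed.
rng = np.random.default_rng(1)
kept = conservative_removal(rng.lognormal(1.0, 0.5, 120), 500, 500, 50)
print(len(kept))  # 50
```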
4.2 Bayesian model
We develop a Bayesian model to jointly study both the Matthew effect and the early-career setback effect (Fig. 2). Our approach is based on a sensitivity analysis with a latent quality variable, where we can vary our assumptions. We assume that quality is lognormally distributed, with some σquality that we can choose, such that the mean is always 1. We can then interpret a quality of 1 as the “average quality”. Hence, our prior assumption for the quality $q_i$ for each applicant $i$ is that it is distributed as

$$q_i \sim \operatorname{Lognormal}\!\left(-\tfrac{1}{2}\sigma_{\text{quality}}^2,\ \sigma_{\text{quality}}\right),$$

where we set σquality = 1.3; this parameterisation fixes the mean of the lognormal at 1. We assume that the quality of an applicant somehow translates into the quality of a research proposal and is also reflected in citations of publications. In particular, we expect that the average (field-normalised) citations $\langle c_{pi} \rangle$ of a set of papers $p$ of individual $i$ are distributed as

$$\langle c_{pi} \rangle \sim \mathcal{N}\!\left(q_i,\ \frac{\sigma_{\text{cit}}}{\sqrt{n_{pi}}}\right),$$

with $n_{pi}$ the number of papers in the set of papers $p$. Although individual citations are most likely not distributed as a normal distribution, by the law of large numbers, the average will converge to a normal distribution. This formulation also has great computational benefits, which is relevant when running this model on all 109624 observed applications.
For the Bayesian model, we normalise all review scores so that they fall in the range [0,1] (see Section A for details). We assume that review scores are related to quality based on the percentile score of the quality in the lognormal distribution that we assume as a prior distribution for the quality. Let $\phi_i$ represent the percentile score of quality $q_i$. We then assume that a review score $r_{ki}$ for a proposal $k$ of applicant $i$ is distributed as (truncated to fall in the range [0,1]):

$$r_{ki} \sim \mathcal{N}\!\left(\phi_i,\ \sigma_{\text{review}}\right).$$
We perform a sensitivity analysis, focussing on the two variables σcit and σreview, similar to Traag and Waltman (2019). We never observe the quality $q_i$, which is therefore always implicitly inferred in the Bayesian model from the observed values of $\langle c_{pi} \rangle$ and $r_{ki}$. Setting values of σcit and σreview that are too low leads to estimation problems, because it induces correlations between citations and review scores that are too strong to be consistent with the data. We report results for σcit = 20 and σreview = 0.4 in the main text and provide results for other parameters in Section B.
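To illustrate the assumed data-generating process, the following forward simulation draws qualities, MFCRs and review scores under the equations above with the reported parameter values; the exact parameterisation is our reconstruction for illustration, not the fitted model itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sigma_quality, sigma_cit, sigma_review = 1.3, 20.0, 0.4
n_applicants, n_papers = 1000, 30

# Latent quality: lognormal with mean fixed at 1.
quality = rng.lognormal(-sigma_quality**2 / 2, sigma_quality, n_applicants)

# Average citations over n_papers papers: approximately normal around the
# latent quality by the law of large numbers.
mfcr = rng.normal(quality, sigma_cit / np.sqrt(n_papers))

# Review scores: [0,1]-truncated normal draws around the percentile score
# of the applicant's quality under the lognormal prior.
prior = stats.lognorm(s=sigma_quality, scale=np.exp(-sigma_quality**2 / 2))
phi = prior.cdf(quality)
a, b = (0.0 - phi) / sigma_review, (1.0 - phi) / sigma_review
review = stats.truncnorm(a, b, loc=phi, scale=sigma_review).rvs(random_state=rng)

# Quality induces only a weak correlation between citations and reviews.
print(np.corrcoef(mfcr, review)[0, 1])
```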
The funding decisions themselves are assumed to be based on the review scores, and follow a straightforward Bernoulli distribution specified in logit form (i.e. logistic regression), which is fitted for each separate funding call s (i.e. each funding call separately per funding programme, funding year, and possibly funding round in the case of multiple rounds per year). Denoting by $f_{ki}$ the funding decision for proposal $k$ of applicant $i$,

$$f_{ki} \sim \operatorname{Bernoulli}\!\left(\operatorname{logit}^{-1}\!\left(\alpha_s + \beta_s r_{ki}\right)\right). \tag{6}$$
In addition, we include affiliation effects and scientific field effects in this regression to control for potential confounding effects of affiliations and scientific fields. To avoid notational complexity, these effects are not included in Eq. 6.
For citations to a set of papers $p$ after obtaining funding, we model the possibility of an effect of previous funding. We define previous funding at application $k$ as

$$F_{ki} = \begin{cases} 1 & \text{if any application } k' < k \text{ of applicant } i \text{ was funded,} \\ 0 & \text{otherwise,} \end{cases}$$

and include its effect additively in the citation model:

$$\langle c_{pi} \rangle \sim \mathcal{N}\!\left(q_i + \gamma F_{ki},\ \frac{\sigma_{\text{cit}}}{\sqrt{n_{pi}}}\right).$$
Similarly, for review scores after obtaining funding we model the effect of funding as

$$r_{ki} \sim \mathcal{N}\!\left(\phi_i + \lambda F_{ki},\ \sigma_{\text{review}}\right),$$

again truncated to the range [0,1].
We assume that applicants apply for funding at some rate $\tau_i$, which depends both on the quality of the applicant and previous funding. We model this rate as a standard survival process and assume that the probability of having a time difference of $t_{ki}$ between two applications $k$ and $k+1$ is distributed exponentially as

$$t_{ki} \sim \operatorname{Exponential}(\tau_i),$$

with the rate $\tau_i$ specified as

$$\tau_i = \exp\!\left(\alpha_a + \beta_a q_i + \theta F_{ki}\right), \tag{10}$$

where $\theta$ captures the effect of previous funding on the application rate.
Similar to Eq. 6 we again also model effects of affiliation and scientific fields, but we do not include these effects in Eq. 10 in order to reduce notational complexity. We also consider right censoring that takes place because we only observe applications up until some year. This last year of observation differs slightly between funders (see Table S2).
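The waiting-time component thus contributes a standard censored-exponential likelihood. A sketch, with the rate taken as a plain number rather than linked to quality and previous funding as in Eq. 10:

```python
import numpy as np

def exp_censored_loglik(t, observed, rate):
    """Log-likelihood of exponentially distributed waiting times between
    applications with right censoring: observed gaps contribute the
    density, censored applicants contribute the survival probability of
    not having reapplied before the observation window closed."""
    t = np.asarray(t, dtype=float)
    observed = np.asarray(observed, dtype=float)
    event = observed * (np.log(rate) - rate * t)  # log density of seen gaps
    censored = (1.0 - observed) * (-rate * t)     # log survival beyond window
    return float(np.sum(event + censored))

# Hypothetical: reapplications observed after 2 and 5 years, plus one
# applicant still unobserved when the window closed after 4 years.
print(exp_censored_loglik([2.0, 5.0, 4.0], [1, 1, 0], rate=0.2))
```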
Finally, we allow for missing data and do not drop cases with missing variables. That is, we impute missing review scores, and missing MFCRs do not inform inference of latent quality and the resulting parameters.
We fit the Bayesian model on the pooled data and on each funder separately, while varying σreview and σcit. Due to the large number of missing review scores for Wellcome Trust, we do not report results for this funder separately, but we do include it in the pooled data.
4.3 Simulation
We simulate the application process to illustrate the potential collider effect in Fig. 5. We model this similarly to the Bayesian model described above. For both the early-career and later-career applications, we simulate citation scores and review scores as in the Bayesian model, where we assume citation scores and review scores to be affected only by quality, not by prior funding. We assume unfunded early-career applicants to reapply only if they have the highest citation scores, while funded applicants reapply independently of their citation scores. We then similarly identify near-hit/miss applicants. We simulate this for 200 applicants across 50 programmes, with 20 applicants being funded. We similarly use σcit = 20, σreview = 0.4 and σquality = 1.3, while taking the citation score to be the average over 30 papers. Finally, we assume that 20% of the funded applicants reapply and 10% of the unfunded applicants reapply.
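The mechanism can be reproduced in a few lines. The following simplified simulation pools applicants rather than simulating 50 separate programmes, and uses hypothetical reapplication rules; it shows how conditioning on reapplication opens a gap even though funding has no causal effect on citations:

```python
import numpy as np

rng = np.random.default_rng(7)
sigma_quality, sigma_cit, n_papers = 1.3, 20.0, 30
n = 2000

# Funded and unfunded applicants drawn from the same quality distribution;
# funding has no causal effect on citations in this simulation.
quality = rng.lognormal(-sigma_quality**2 / 2, sigma_quality, n)
funded = rng.random(n) < 0.5
mfcr = rng.normal(quality, sigma_cit / np.sqrt(n_papers))

# Selection: funded applicants reapply at random (20%), unfunded applicants
# reapply only from the top of the MFCR distribution (top 10%).
reapply = np.where(funded,
                   rng.random(n) < 0.20,
                   mfcr > np.quantile(mfcr[~funded], 0.90))

# Conditioning on reapplication (the collider) opens a spurious gap.
print("all applicants:", mfcr[funded].mean() - mfcr[~funded].mean())
print("reapplicants:  ",
      mfcr[funded & reapply].mean() - mfcr[~funded & reapply].mean())
```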
A Data
We collected data from six different funders, see Table S1 for an overview. Since we were interested in individual career trajectories, we aimed as much as possible to select large programmes that awarded funding to individual applicants. Overall, there is quite some heterogeneity in our data, which requires some specific approaches across funders and funding programmes. For example, for some funders, programmes are separately targeted at early or later-career researchers, while for others, single programmes targeted both early and later-career researchers.

Overview of funders, funding programmes and number of applications per programme.
In particular, there is quite some variety in review scores, which therefore need to be normalised. We do this with two different approaches for two different purposes. Firstly, we normalise review scores to within the range [0,1], with 1 indicating proposals most likely to be funded and 0 indicating proposals least likely to be funded. This normalisation is used in the Bayesian model, which we discuss in more detail in Section 4.2. Secondly, we normalise review scores such that 0 corresponds to the funding threshold: proposals with review scores higher than 0 are more likely to be funded, and those below 0 less likely. Review scores closer to the 0 threshold are more uncertain to be funded. We base this normalisation on Bayesian logistic regression (Gelman et al., 2008), also in order to deal with perfect separation in some cases. Applications in most funding programmes receive review scores that are subsequently discussed in panels to come to a final decision, and for that reason review scores are usually not perfectly predictive.
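A sketch of the first normalisation is given below; the second, threshold-centred normalisation uses the same logistic-regression threshold as the near-hit/miss sketch in Section 4.1. The min-max scaling is an illustrative assumption; programme-specific schemes may require additional handling.

```python
import numpy as np

def normalise_unit(scores, higher_is_better=True):
    """Map raw review scores onto [0,1], with 1 for the proposals most
    likely to be funded. Programmes with inverted scales (lower scores
    more likely to be funded) are handled by the flag."""
    s = np.asarray(scores, dtype=float)
    s = (s - s.min()) / (s.max() - s.min())
    return s if higher_is_better else 1.0 - s

# Hypothetical FWF-style scores, where 1 is best and 5 is worst:
print(normalise_unit([1.0, 2.5, 4.0, 5.0], higher_is_better=False))
```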
We now discuss some particularities for each individual funder; the application numbers and review score ranges per programme are summarised in Tables S1 and S2.

Review scores for different funding programmes. Scores are ordered from least to most likely to be funded. In some cases, higher review scores are less likely to be funded.

Time between successive applications.

Number of funding applications over time.

Distribution of review scores normalised to the [0,1] range. The peaks arise because some funding programmes do not have continuous scores, but round them to steps of 0.5.

Distribution of review scores normalised so that 0 corresponds to the funding threshold.
A.1 CIHR
The Canadian Institutes of Health Research (CIHR) is a federal agency and the main funder of health research in Canada. For CIHR we use two programmes: New Investigator Awards and Project Grants. New Investigator Awards are targeted towards early-career researchers, while Project Grants contain a mix of early- and later-career researchers. For the New Investigator Awards we obtain data from 2001 to 2015, while the Project Grants applications cover 2017–2023.
For the Project Grants, we distinguish early- and later-career applicants on the basis of their reported academic position. Applications are scored on a 0–4.9 scale and normalised within each committee using the percent rank of each application, from 0 to 100%. Applications are then combined across committees and funded from top to bottom, with some exceptions. Applications adjudicated by the Indigenous Health Research committee and equalised applications can still be funded even if they score below the funding threshold. Large grants are likely to remain unfunded even if they fall above the funding threshold, as they have their own restricted funding envelope to ensure that a small group of applications with extremely large budgets does not drain the envelope for the whole competition.
For the New Investigator Awards, the review scores vary from 0 to 4.9. Funding for this competition is allocated in two stages. Most applications are funded based on the score they receive in stage 1. Some applications in a grey zone below the funding threshold are reviewed again in stage 2, and funded from top to bottom based on their new ranking.
A.2 FNR
The Luxembourg National Research Fund (FNR) is the main funder of research in Luxembourg, investing a mix of public and private funds in a diverse research portfolio. We use data from two funding programmes of FNR: ATTRACT and CORE. ATTRACT is explicitly aimed at early-career researchers. CORE is open to researchers at all career stages but requires applicants to indicate in their submission whether they apply under the “CORE Junior” category, enabling us to distinguish early- and later-career applicants. We classify applications based on the declared academic seniority at the time of submission.
All applications undergo peer review by at least three reviewers. Each reviewer provides individual scores according to a programme- and year-specific scoring scheme. These schemes vary substantially over time, particularly within the CORE programme. Between 2012 and 2016, CORE employs a weighted scoring system across five criteria, with maximum total scores ranging from 64 to 80. From 2017 onward, unweighted schemes are used, initially using five criteria (maximum score 20), and later four criteria from 2021 onward (maximum score 16). ATTRACT follows a similar trajectory, adopting unweighted scoring with four criteria in its later years. See Table S2 for an overview.
While our analysis relies on the reviewer scores assigned to each proposal, reviewer scores do not directly determine funding outcomes. In the CORE programme, evaluation panels discuss all eligible proposals and establish a final ranking based on the external reviewer assessments. This ranking forms the basis for the panel’s funding recommendation. In the case of ATTRACT, panels also discuss all proposals but issue a binary recommendation—either recommended or not recommended for funding—rather than producing a ranked list. Final funding decisions in both programmes are made by FNR’s decision-making bodies, which take into account the panel’s recommendations along with the constraints of the annual programme-specific budget envelope.
A.3 FWF
The Austrian Science Fund (FWF) is a federal funding agency and the main funder of research in Austria. We collect applications from three funding programmes over a period from 2009 to 2020: Erwin Schrödinger, FWF START Award and Principal Investigator Projects. The first two are targeted towards early-career researchers, while the latter is aimed at later-career researchers. The review scores for the Erwin Schrödinger and FWF START Award programmes are consistently provided between 1 and 5 throughout the entire period, with review scores of 1 being most likely to be funded and 5 least likely. Proposals are evaluated and scored by between two and five external reviewers (depending on the programme and amount of funding requested). Applications are then compared and discussed by the Scientific Board at one of the five annual meetings and binary yes/no recommendations are made. The board aims for unanimous decisions, but in some cases proposals are forwarded to the FWF Office for further discussion.
A.4 Health Research BC
Michael Smith Health Research British Columbia (Health Research BC) is a Canadian provincial public-funding body that supports health research and services in British Columbia. For Health Research BC we collect applications from two funding programmes: Research Trainee and Scholar, both covering a period from 2001 to 2024. The first is aimed towards early-career researchers, while the latter is targeted towards later-career researchers. Applications are scored by external reviewers based on several weighted metrics and then ranked. Review scores vary from 0 to 4.9 and only those scoring above 3.8 are eligible for funding, with higher scoring applications being more likely to be funded.
A.5 SSHRC
The Social Sciences and Humanities Research Council (SSHRC) is a federal funding body and the main supporter of research in the humanities and social sciences in Canada. For SSHRC we use two programmes: the Standard Research Grants, running from 1992 to 2011, and its successors, the Insight and Insight Development Grants, running since 2012. These programmes do not target early- or later-career researchers specifically. Instead, early and later-career researchers are evaluated separately, respectively designated as “emerging” and “established” scholars. This is recorded for each application, and we use those labels to distinguish between early-career and later-career funding applications. Both programmes are also open to proposals involving multiple co-applicants, so we restricted our sample to applications involving a single applicant. Applications go to one of several multidisciplinary committees for evaluation and ranking. Different scoring schemes were used in different years and varied considerably (see Table S2).
A.6 Wellcome Trust
The Wellcome Trust is a private life and health research funding body based in the United Kingdom, which funds scholars from the UK, Ireland, and low- and middle-income countries.

Comparison of near-hit and near-miss applicants for (a) year the PhD was awarded; (b) year of birth; (c) the number of publications before the early-career application; and (d) the MFCR before the early-career application.
For Wellcome Trust we collected applications from two funding programmes: Sir Henry Wellcome (2007-2021) and Sir Henry Dale (2012-2021). The Sir Henry Wellcome programme is targeted towards early-career researchers, while the Sir Henry Dale programme is targeted towards later-career researchers. Review scores are available only from 2012 onwards and are consistently provided between 0 and 5 throughout the entire period, with review scores of 5 being most likely to be funded and 0 least likely. Applications are assessed by an external advisory committee, which scores and ranks the proposals. Wellcome staff make the final funding decision based on annual budgets and strategic priorities.
B Detailed results
B.1 Matthew effect
The Matthew effect varies across funders: the rate of later application success ranges from roughly 8% to 32% for those who received early-career funding, and from 1% to 23% for those who did not (Table S3). The percentage of applicants receiving later-career funding is higher for those who received early-career funding compared to those who did not, across all funders, except FWF with 16% for funded versus 23% for unfunded applicants. For near-hit/miss applicants, the percentage of applicants who received later-career funding is again higher for those who received early-career funding across all funders, except CIHR with 38% for funded versus 50% for unfunded applicants. However, for most funders the difference between these percentages is not significant (p > 0.20). For FWF the difference is comparable to the pooled difference (19% for funded versus 15% for unfunded, p = 0.07), while for FNR the difference is much larger (46% versus 14%, p = 2.5 × 10⁻⁴), where the ATTRACT programme and a culture of actively encouraging follow-up applications may be at least partly responsible.
Across all σreview and σcit the funding effect on later applications is consistently positive for all funders, except for Health Research BC, for which the effect of funding on later applications is estimated to be close to 0 (Fig. S9).
The findings for the effect of previous funding on review scores are more mixed (Fig. S10). For lower σcit the effect of previous funding on review scores tends to be negative, meaning previously funded applicants are less likely to be funded again. For higher σcit the effect tends to become less negative, and for some funders zero or even positive. Higher σreview also tends to result in more positive effects. For CIHR and FNR the results consistently point to negative effects, while for the other funders, the effects become positive for sufficiently large σcit and σreview. The negative effect for CIHR is consistent with the results of the near-hit/miss analysis. For FNR, the negative effect is not apparent in our near-hit/miss analysis.
B.2 Early-career setback
To assess the early-career setback effect, we consider the MFCR for the full set of applicants, for the near-hit and near-miss applicants and the subset of applicants who reapplied for later-career funding. These differences are reported for the four periods of interest—Pre, Between, Post (early) and Post (later)—in Tables S7, S9 and S10 respectively.
We find that the MFCR is higher for funded than for non-funded applicants across all periods in the full set of pooled data (Table S7). The same MFCR pattern holds true individually for CIHR and SSHRC. Surprisingly, the average “Pre” MFCR for the funded early-career applicants of Wellcome Trust, FWF and FNR is lower than for non-funded applicants, although the difference is not significant for the latter. Consistently, the “Post (later)” MFCR is higher for early-career funded applicants for the pooled data and all funders individually, except for FWF, but that difference is not significant. We find weak evidence of the early-career setback for CIHR and Health Research BC, where the “Between” MFCR is lower for funded than for non-funded applicants, but the differences are minor (0.14 and 0.42 respectively) and not significant. For FWF the “Between” MFCR is more visibly lower for applicants who were awarded early-career funding, and the difference is significant (difference 2.82, p = 2 × 10⁻³).
If we restrict the analysis to the near-hit/miss applicants (Table S9), the “Between” MFCR for the pooled data is lower for funded (3.90) than for unfunded (6.35) applicants, consistent with the early-career setback effect. This is also the case for CIHR, FWF and FNR, although the differences are only significant for FWF, similar to the pattern for all applications. For Health Research BC and SSHRC the “Between” MFCR is higher for funded than for unfunded applicants, but these differences are not significant. For Wellcome Trust we have no near-hit and near-miss applicants who reapplied for later-career funding. As in the full set of applicants, the “Post (later)” MFCR is consistently higher for applicants who received early-career funding. In the pooled data, the applicants with early funding success show a higher “Pre” MFCR, though the difference is not significant, in line with the near-hit and near-miss groups being comparable. This is true also at the individual funder level, where the “Pre” MFCRs are comparable, with no funder showing any significant difference.

Overall probabilities to receive any later-career funding.

Matthew effect across funders.

Probabilities to receive any later-career funding for near-hit and near-miss applicants.
We further restricted the analysis to near-hit/miss applicants who reapplied later. This does not make a difference for the “Between” and “Post (later)” MFCRs, since those are implicitly only defined for applicants who reapplied later (those MFCRs remain mostly unchanged, but because estimates are inferred using Bayesian sampling, there is some level of stochasticity in them, which may result in minor differences across inferences) (Table S10). In this case we see that the “Pre” MFCR is higher for unfunded (4.79) than for funded (4.24) applicants overall, but this difference is not significant. At the individual funder level, the “Pre” MFCR remains higher for the unfunded applicants for FWF and Health Research BC, although the differences are again not significant. Furthermore, implementing the conservative removal approach of Wang et al. (2019) on this sample of near-hit and near-miss applicants who reapply later does not push any of the “Between” MFCR differences past the threshold of significance (see Table S11).

[Figures/tables, captions only: Matthew effect across funders for near-hit and near-miss applicants. Probabilities to receive any later-career funding for near-hit and near-miss applicants who reapplied.]
Figure S15 illustrates the effect of previous funding on later MFCRs as inferred in the Bayesian model. For higher σmfcr we observe a positive effect of previous funding on later MFCRs, and for lower σmfcr a negative effect, while varying σreview seems to have little influence on these estimates.
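A minimal sketch of this sensitivity check is given below, assuming a simplified stand-in for the Bayesian model: a latent per-applicant quality that drives both the review score and the later MFCR, plus an additive effect of early funding. The model structure, the PyMC implementation, the column names and the σ grids are our assumptions, not the authors’ exact specification.

```python
import pymc as pm

def funding_effect(df, sigma_mfcr, sigma_review):
    """Posterior mean of the funding effect on later MFCR, for fixed sigmas."""
    with pm.Model():
        quality = pm.Normal("quality", mu=0.0, sigma=1.0, shape=len(df))
        effect = pm.Normal("effect", mu=0.0, sigma=1.0)
        # Review scores and later MFCRs are both noisy readings of quality;
        # early funding shifts the later MFCR by `effect`.
        pm.Normal("review", mu=quality, sigma=sigma_review,
                  observed=df["score"].to_numpy())
        pm.Normal("mfcr_later",
                  mu=quality + effect * df["funded"].to_numpy().astype(float),
                  sigma=sigma_mfcr,
                  observed=df["mfcr_later"].to_numpy())
        idata = pm.sample(1000, tune=1000, progressbar=False)
    return float(idata.posterior["effect"].mean())

# Sweep the noise scales to mimic the sensitivity analysis of Figure S15.
# `df` holds applicant-level data with hypothetical columns
# score, funded and mfcr_later.
for s_mfcr in (0.5, 1.0, 2.0):
    for s_review in (0.5, 1.0, 2.0):
        est = funding_effect(df, s_mfcr, s_review)
        print(f"sigma_mfcr={s_mfcr}, sigma_review={s_review}: effect ~ {est:.2f}")
```

The intuition matches the reported pattern: when σmfcr is large, the later MFCR carries little information about quality, so more of the funded/unfunded gap is attributed to the funding effect itself.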

[Figures/tables, captions only: Matthew effect across funders for near-hit and near-miss applicants who reapplied. Inferred effect of previous funding decision on the application rate (hue shows the effect, size the precision of the estimates). Inferred effect of previous funding decision on the review score (hue shows the effect, size the precision of the estimates). Number of reapplications for all applications. Mean-field citation rate (MFCR) for all applications. Number of reapplications for near-hit/miss applications. Mean-field citation rate (MFCR) for near-hit/miss applications. Early-career setback across funders. Early-career setback across funders from near-hit/miss applications. Mean-field citation rate (MFCR) for near-hit/miss applicants who reapplied. Early-career setback across funders from near-hit/miss applicants who reapplied. Mean-field citation rate (MFCR) for near-hit/miss applicants who reapplied with conservative removal. Early-career setback across funders from near-hit/miss applicants who reapplied with conservative removal. Inferred effect of previous funding decision on the MFCR (hue shows the effect, size the precision of the estimates).]
B.3 Bayesian model

[Figure, caption only: Scatter plot illustrating the association between the inferred quality in the Bayesian model, the “Pre” MFCR and the review score.]
Acknowledgements
This working paper forms part of the Matthew project of the Research on Research Institute (RoRI).
RoRI’s second phase (2023-2027) is funded by an international consortium of partners, including: Australian Research Council (ARC); Canadian Institutes of Health Research (CIHR); Digital Science; Dutch Research Council (NWO); Gordon and Betty Moore Foundation [Grant number GBMF12312; DOI 10.37807/GBMF12312]; King Baudouin Foundation; La Caixa Foundation; Leiden University; Luxembourg National Research Fund (FNR); Michael Smith Health Research BC; National Research Foundation of South Africa; Novo Nordisk Foundation [Grant number NNF23SA0083996]; Research England (part of UK Research and Innovation); Social Sciences and Humanities Research Council of Canada (SSHRC); Swiss National Science Foundation (SNSF); University College London (UCL); Volkswagen Foundation; and Wellcome Trust [Grant number 228086/Z/23/Z].
Sensitivity analysis of the Bayesian model was performed using the compute resources from the Academic Leiden Interdisciplinary Cluster Environment (ALICE) provided by Leiden University.
Sincere thanks to all our partners for their engagement and support. We would also like to record our gratitude to members of the project steering group for advice and guidance at every stage: Alasdair Cowie-Fraser (Wellcome Trust); Gert V. Balling (NNF); Ralph Reimann (FWF); Stirling Bryan (Health Research BC); Matthew Hogel (CIHR); Alexandra Apavaloae (SSHRC).
Responsibility for the content of RoRI outputs lies with the authors and RoRI CIC. Any views expressed do not necessarily reflect those of our partners, including CIHR and FNR. RoRI is committed to open research as an important enabler of our mission, as set out in our Open Research Policy. Any errors or omissions remain our own.
Additional information
Author contributions
CRediT: Conceptualization: VT; Data curation: VT, EB, PVL, FB, JPA; Formal Analysis: VT, EB, PVL, FB; Funding acquisition: VT; Investigation: VT, EB, PVL, FB; Methodology: VT, EB, PVL, FB; Project administration: VT, EB, PVL, FB, CLB; Resources: CLB; Software: VT; Supervision: VT, CLB; Validation: VT; Visualization: VT; Writing – original draft: VT, JPA, CB; Writing – review & editing: VT, EB, PVL, FB, CLB, JPA, CB
References
- Concentration or dispersal of research funding? Quantitative Science Studies 1:117–149. https://doi.org/10.1162/qss_a_00002
- Matthew: Effect or fable? Manage. Sci. 60:92–109. https://doi.org/10.1287/mnsc.2013.1755
- The Matthew effect in science funding. Proc. Natl. Acad. Sci. U.S.A. 115:4887–4890. https://doi.org/10.1073/pnas.1719557115
- Regression Discontinuity Designs. Annual Review of Economics 14:821–851. https://doi.org/10.1146/annurev-economics-051520-021409
- Chance and consensus in peer review. Science 214:881–886. https://doi.org/10.1126/science.7302566
- Cumulative advantage as a mechanism for inequality: A review of theoretical and empirical developments. Annu. Rev. Sociol. 32:271–297. https://doi.org/10.1146/annurev.soc.32.061604.123127
- Matthew effects in science and the serial diffusion of ideas: Testing old ideas with new methods. Quantitative Science Studies: 1–36. https://doi.org/10.1162/qss_a_00129
- Panel discussion does not improve reliability of peer review for medical research grant proposals. Journal of Clinical Epidemiology 65:47–52. https://doi.org/10.1016/j.jclinepi.2011.05.001
- A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics 2:1360–1383. https://doi.org/10.1214/08-AOAS191
- The Long-Term Causal Effects of Winning an ERC Grant. SSRN. https://doi.org/10.2139/ssrn.4437664
- What do we know about grant peer review in the health sciences? F1000Research 6:1335. https://doi.org/10.12688/f1000research.11917.2
- Rethinking the Funding Line at the Swiss National Science Foundation: Bayesian Ranking and Lottery. Statistics and Public Policy 9:110–121. https://doi.org/10.1080/2330443X.2022.2086190
- Cumulative advantage and success-breeds-success: The value of time pattern analysis. Journal of the American Society for Information Science 49:471–476. https://doi.org/10.1002/(SICI)1097-4571(19980415)49:5<471::AID-ASI8>3.0.CO;2-T
- The Effect: An Introduction to Research Design and Causality. CRC Press.
- The impact of research grant funding on scientific productivity. Journal of Public Economics 95:1168–1177. https://doi.org/10.1016/j.jpubeco.2011.05.005
- Introduction to causality in science studies. SocArXiv. https://doi.org/10.31235/osf.io/4bw9e
- Which scientific elites? On the concentration of research funds, publications and citations. Research Evaluation 19:45–53. https://doi.org/10.3152/095820210X492495
- A Sociological (De)Construction of the Relationship between Status and Quality. American Journal of Sociology 115:755–804. https://doi.org/10.1086/603537
- Anatomy of funded research in science. Proc. Natl. Acad. Sci. U.S.A. 112:14760–14765. https://doi.org/10.1073/pnas.1513651112
- The top eight percent: Development of approved and rejected applicants for a prestigious grant in Sweden. Science and Public Policy 33:702–712. https://doi.org/10.3152/147154306781778579
- The Matthew Effect in Science: The reward and communication systems of science are considered. Science 159:56–63. https://doi.org/10.1126/science.159.3810.56
- Changing demographics of scientific careers: The rise of the temporary workforce. Proc. Natl. Acad. Sci. U.S.A. 115:12616–12623. https://doi.org/10.1073/pnas.1800478115
- Global citation inequality is on the rise. Proc. Natl. Acad. Sci. U.S.A. 118.
- Quantitative and empirical demonstration of the Matthew effect in a study of career longevity. Proc. Natl. Acad. Sci. U.S.A. 108:18–23. https://doi.org/10.1073/pnas.1016733108
- Low agreement among reviewers evaluating the same NIH grant applications. Proceedings of the National Academy of Sciences 115:2952–2957. https://doi.org/10.1073/pnas.1714379115
- Field experiments of success-breeds-success dynamics. Proc. Natl. Acad. Sci. U.S.A. 111:201316836. https://doi.org/10.1073/pnas.1316836111
- Quantifying the evolution of individual scientific impact. Science 354. https://doi.org/10.1126/science.aaf5239
- Learning Through Failure: The Strategy of Small Losses. Research in Organizational Behavior 14:231–266.
- Saint Matthew strikes again: An agent-based model of peer review and the scientific community structure. Journal of Informetrics 6:265–275. https://doi.org/10.1016/j.joi.2011.12.005
- Science of science: citation models and research evaluation. arXiv:2207.11116. https://doi.org/10.48550/arXiv.2207.11116
- Systematic analysis of agreement between metrics and peer review in the UK REF. Palgrave Communications 5:29. https://doi.org/10.1057/s41599-019-0233-x
- Early-career setback and future career impact. Nature Communications 10:4331. https://doi.org/10.1038/s41467-019-12189-3
- Experiments with randomisation in research funding: Scoping and workshop report (RoRI Working Paper No. 4). Research on Research Institute. https://doi.org/10.6084/m9.figshare.16553067.v2
Cite all versions
You can cite all versions using the DOI https://doi.org/10.7554/eLife.109042. This DOI represents all versions, and will always resolve to the latest one.
Copyright
© 2025, Traag et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.