Ecology

Meta-analysis challenges a textbook example of status signalling and demonstrates publication bias

  1. Alfredo Sánchez-Tójar (corresponding author)
  2. Shinichi Nakagawa
  3. Moisès Sánchez-Fortún
  4. Dominic A Martin
  5. Sukanya Ramani
  6. Antje Girndt
  7. Veronika Bókony
  8. Bart Kempenaers
  9. András Liker
  10. David F Westneat
  11. Terry Burke
  12. Julia Schroeder
  1. Max Planck Institute for Ornithology, Germany
  2. Imperial College London, United Kingdom
  3. University of New South Wales, Australia
  4. University of Sheffield, United Kingdom
  5. Bielefeld University, Germany
  6. Hungarian Academy of Sciences, Hungary
  7. University of Pannonia, Hungary
  8. University of Kentucky, United States
Research Article
Cite this article as: eLife 2018;7:e37385 doi: 10.7554/eLife.37385

Abstract

The status signalling hypothesis aims to explain within-species variation in ornamentation by suggesting that some ornaments signal dominance status. Here, we use multilevel meta-analytic models to challenge the textbook example of this hypothesis, the black bib of male house sparrows (Passer domesticus). We conducted a systematic review, and obtained primary data from published and unpublished studies to test whether dominance rank is positively associated with bib size across studies. Contrary to previous studies, the overall effect size (i.e. meta-analytic mean) was small and uncertain. Furthermore, we found several biases in the literature that further question the support available for the status signalling hypothesis. We discuss several explanations including pleiotropic, population- and context-dependent effects. Our findings call for reconsidering this established textbook example in evolutionary and behavioural ecology, and should stimulate renewed interest in understanding within-species variation in ornamental traits.

https://doi.org/10.7554/eLife.37385.001

eLife digest

Many bird species have colourful, intricately patterned plumage. This ornamentation is generally believed to exist to attract partners. In the 1970s, however, scientists proposed an alternative idea, called the ‘status signalling hypothesis’. This suggests that some birds have plumage ornaments that indicate the fighting abilities or dominance status of their bearers, much like the military badges worn by humans. These badges of status might evolve because fights, which commonly determine who gets valuable resources such as food, are a risky business. Individuals would greatly benefit from being able to predict the fighting abilities of any potential competitor and so avoid fights that they will probably lose.

Male house sparrows have a black patch on their throat, known as the bib, that has been considered to be a textbook demonstration of the status signalling hypothesis. However, most of the studies that support this idea studied small numbers of birds and used inconsistent methods. Furthermore, some recent studies have failed to replicate previous findings.

Sánchez-Tójar et al. collected data from several house sparrow populations across the world and systematically scrutinized the published literature to find all of the studies that tested the status signalling hypothesis in house sparrows. This revealed only weak evidence that the bib of male house sparrows signals the fighting abilities of its bearer. Instead, the published literature is a biased subsample; failures to replicate the hypothesis likely remain unpublished.

Currently, failures to replicate previous findings are generally deemed uninteresting, and so are not often published. By demonstrating the need to replicate findings robustly to avoid biasing conclusions, Sánchez-Tójar et al. thus join the call for a change in incentives and scientific culture.

https://doi.org/10.7554/eLife.37385.002

Introduction

Plumage ornamentation is a striking example of colour and pattern diversity in the animal kingdom and has attracted considerable research (Hill, 2002). Most studies have focused on sexual selection as the key mechanism to explain this diversity in ornamentation (Andersson, 1994; Dale et al., 2015). The status signalling hypothesis explains within-species variation in ornaments by suggesting that these ornaments signal individual dominance status or fighting ability (Rohwer, 1975). Aggressive contests are costly in terms of energy use, and risk of injuries and predation (Jakobsson et al., 1995; Kelly and Godin, 2001; Neat et al., 1998; Prenter et al., 2006; Sneddon et al., 1998). These costs could be reduced if individuals can predict the outcome of such contests beforehand using so-called ‘badges of status’ – that is, two potential competitors could decide whether to avoid or engage in aggressive interactions based on the message provided by their opponent’s signals (Rohwer, 1975).

Patches of ornamentation have been suggested to function as badges of status in a wide range of taxa, including insects (Tibbetts and Dale, 2004), reptiles (Whiting et al., 2003) and birds (Senar, 2006). The status signalling hypothesis was originally proposed to explain variation in the size of mountain sheep horns (Beninde, 1937; Geist, 1966), but the hypothesis has become increasingly important in the study of variability in plumage ornamentation in birds (Rohwer, 1975; Senar, 2006). Among the many bird species studied (Santos et al., 2011), the house sparrow (Passer domesticus) has become the classic textbook example of status signalling (Andersson, 1994; Searcy and Nowicki, 2005; Senar, 2006; Davies et al., 2012). The house sparrow is a sexually dimorphic passerine, in which the main difference between the sexes is a prominent black patch on the male’s throat and chest (hereafter ‘bib’). Many studies have suggested that bib size serves as a badge of status, but most studies are based on limited sample sizes, and have used inconsistent methodologies for measuring bib and dominance status (Nakagawa and Cuthill, 2007; Santos et al., 2011).

Meta-analysis is a powerful tool to quantitatively test the overall (across-study) effect size (i.e. the ‘meta-analytic mean’) for a specific hypothesis. Meta-analyses are therefore able to provide more robust conclusions than single studies and are increasingly used in evolutionary ecology (Gurevitch et al., 2018; Nakagawa and Poulin, 2012a; Nakagawa and Santos, 2012b; Senior et al., 2016). Traditional meta-analyses combine summary data across different studies, where design and methodology are study-specific (e.g. effect sizes among studies are typically adjusted for different fixed effects). These differences among studies are expected to increase heterogeneity, and therefore, the uncertainty of the meta-analytic mean (Mengersen et al., 2013). Meta-analysis of primary or raw data is a specific type of meta-analysis in which all studies are analysed in a consistent manner (Mengersen et al., 2013). This type of meta-analysis allows methodology to be standardized so that comparable effect sizes can be obtained across studies, and is therefore considered the gold standard in disciplines such as medicine (Simmonds et al., 2005). Unfortunately, meta-analysis of primary data is still rarely used in evolutionary ecology (but see Barrowman et al., 2003; Richards and Bass, 2005; Krasnov et al., 2009), perhaps because, until recently, the primary data of previously published studies were difficult to obtain (Culina et al., 2018; Schmid et al., 2003).
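Both kinds of meta-analysis rest on the same basic machinery: each study's correlation is converted to Fisher's Zr (Zr = atanh r, with sampling variance 1/(n − 3)) and the estimates are pooled weighted by their precision. The sketch below illustrates only this core step, with made-up numbers and simple fixed-effect pooling rather than the multilevel Bayesian models used for analyses like this one.

```python
import math

def fisher_z(r):
    """Fisher's transformation: maps a correlation r to an
    approximately normal effect size Zr with variance 1/(n - 3)."""
    return math.atanh(r)

def pooled_mean(estimates):
    """Inverse-variance (fixed-effect) pooled mean of (r, n) pairs.
    A deliberate simplification of the multilevel random-effects
    models used in this kind of meta-analysis."""
    num = den = 0.0
    for r, n in estimates:
        w = n - 3.0                  # weight = 1 / sampling variance
        num += w * fisher_z(r)
        den += w
    return num / den

# hypothetical per-group estimates: (correlation, group size)
groups = [(0.5, 10), (0.1, 40), (-0.2, 8)]
print(round(pooled_mean(groups), 3))  # a small pooled Zr despite one r = 0.5
```

Back-transforming with tanh recovers the pooled correlation; larger groups dominate the mean because their Zr estimates carry more weight.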

An important feature of any meta-analysis is to identify the existence of bias in the literature (Nakagawa and Santos, 2012b; Jennions et al., 2013). For example, publication bias occurs whenever particular effect sizes (e.g. larger ones) are more likely to be found in the literature than others (e.g. smaller ones). This tends to be the case when the statistical significance and/or direction of an effect size determines whether results are submitted or accepted for publication (Jennions et al., 2013). Thus, publication bias can strongly affect the estimation of the meta-analytic mean, and distort the interpretation of the hypothesis under test (Rothstein et al., 2005). Several methods have been developed to identify this and other biases (Nakagawa and Santos, 2012b; Jennions et al., 2013); however, such methods are imperfect and dependent on the number of effect sizes available, and should therefore be treated as types of sensitivity analysis (Nakagawa et al., 2017; Nakagawa and Santos, 2012b).
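One such method, Egger's regression, can be sketched in a few lines. This is a hypothetical, minimal version with simulated data: it regresses standardized effects on precision (the variant used later in this paper regresses meta-analytic residuals on precision, but the logic is the same), and an intercept far from zero indicates funnel-plot asymmetry.

```python
import random

def egger_intercept(effects, ses):
    """Classic Egger's test statistic: ordinary least squares of
    standardized effect (effect / SE) on precision (1 / SE);
    the intercept estimates funnel-plot asymmetry."""
    y = [e / s for e, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx

rng = random.Random(7)
ses = [rng.uniform(0.1, 0.5) for _ in range(300)]   # standard errors
effects = [rng.gauss(0.2, s) for s in ses]          # symmetric funnel
# simulated publication bias: only positive estimates get "published"
pub = [(e, s) for e, s in zip(effects, ses) if e > 0]
print(egger_intercept(effects, ses))    # intercept near zero
print(egger_intercept(*zip(*pub)))      # intercept pushed above zero
```

Censoring the negative estimates mimics selective publication: imprecise studies survive only when they happen to find large positive effects, which tilts the funnel and moves the intercept away from zero.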

Here, we meta-analytically assessed the textbook example of the status signalling hypothesis in the house sparrow. Specifically, we combined summary and primary data from published and unpublished studies to test the prediction that dominance rank is positively associated with bib size across studies. We found that the meta-analytic mean was small, uncertain and overlapped zero. Hence, our results challenge the status signalling function of the male house sparrow’s bib. Also, we identified several biases in the published literature. Finally, we discuss potential biological explanations for our results, and provide advice for future studies testing the status signalling hypothesis.

Results

Overall, we obtained the primary data for seven of 13 (54%) published studies, and we provided data for six additional unpublished studies (Table 1; Appendix 1).

Table 1
Studies used in the meta-analyses and meta-regressions testing the across-study relationship between dominance rank and bib size in male house sparrows.

More information is available in the data files provided (Sánchez-Tójar et al., 2018a).

https://doi.org/10.7554/eLife.37385.003
| Study ID | Reference | Population ID | Primary data? | Number of groups* | Total number of males† | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Ritchison, 1985 | Kentucky (captivity) | No | 3 | 35 | |
| 2 | Møller, 1987 | Denmark (wild) | Yes | 3 | 37 | |
| 3 | Andersson and Åhlund, 1991 | Sweden (captivity) | No | 10 | 20 | Estimate originally reported as statistically non-significant. |
| 4 | Solberg and Ringsby, 1997 | Norway (captivity) | Yes | 5 | 44 | |
| 5 | Liker and Barta, 2001 | Hungary (captivity) | Yes | 1 | 10 | |
| 6 | Gonzalez et al., 2002 | Spain (captivity) | No | 8 | 41 | |
| 7 | Hein et al., 2003 | Kentucky (wild) | Yes | 4 | 39 | |
| 8 | Riters et al., 2004 | Wisconsin (captivity) | No | 4 | 20 | |
| 9 | Lindström et al., 2005 | New Jersey (captivity) | No | 4 | 28 | Author shared processed data, but group ID was unavailable, so data were not re-analysed. |
| 10 | Bókony et al., 2006 | Hungary (captivity) | Yes | 2 | 19 | |
| 11 | Buchanan et al., 2010 | Scotland (captivity) | No | 14; 5 | 56; 20 | Groups were tested twice. Post-breeding estimates originally reported as statistically non-significant. |
| 12 | Dolnik and Hoi, 2010 | Austria (captivity) | No | 4; 4 | 31; 31 | Groups were tested twice. Pre-infection estimates originally reported as statistically non-significant. |
| 13 | Rojas Mora et al., 2016 | Switzerland (captivity) | Yes | 14 | 56 | |
| 14 | Lendvai et al.‡ | Hungary (captivity) | Yes | 4 | 46 | Unpublished data part of: Lendvai et al., 2004; Bókony et al., 2012 |
| 15 | Tóth et al.‡ | Hungary (captivity) | Yes | 3 | 35 | Unpublished data part of: Tóth et al., 2009; Bókony et al., 2012 |
| 16 | Bókony et al.‡ | Hungary (captivity) | Yes | 4 | 26 | Unpublished data part of: Bókony et al., 2010; Bókony et al., 2012 |
| 17 | Sánchez-Tójar et al.‡ | Germany (captivity) | Yes | 4 | 95 | Unpublished study conducted in 2014. |
| 18 | Sánchez-Tójar et al.‡ | Lundy Island (wild) | Yes | 7 | 172 | Unpublished study conducted from 2013 to 2016. |
| 19 | Westneat‡ | Kentucky (captivity) | Yes | 10 | 40 | Unpublished study conducted in 2005. |

*For studies with primary data, groups of birds containing fewer than four individuals were not included (see Materials and methods).

†Note: since most studies analysed more than one group of birds, the total number of males differs from group size in most cases (see below).

‡Information for the unpublished datasets is available in Appendix 1—table 5.

Dominance hierarchies

Mean sampling effort was 36 interactions/individual (SD = 24), indicating that, overall, dominance hierarchies were inferred reliably across groups (Sánchez-Tójar et al., 2018b). The mean Elo-rating repeatability was 0.92 (SD = 0.07) and the mean triangle transitivity was 0.63 (SD = 0.28). Thus, the dominance hierarchies observed across groups of house sparrows were generally steep and moderately transitive.
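The Elo-rating approach used to infer these hierarchies can be sketched as follows. This is a minimal, hypothetical implementation with made-up interactions, not the randomization-based procedure used to compute the repeatabilities reported above.

```python
def elo_ratings(interactions, k=100, start=1000):
    """Sequentially update ratings from (winner, loser) interactions."""
    ratings = {}
    for winner, loser in interactions:
        rw = ratings.setdefault(winner, start)
        rl = ratings.setdefault(loser, start)
        # expected probability that the eventual winner wins this bout
        p_win = 1.0 / (1.0 + 10 ** ((rl - rw) / 400.0))
        ratings[winner] = rw + k * (1.0 - p_win)  # upsets move ratings more
        ratings[loser] = rl - k * (1.0 - p_win)
    return ratings

# made-up dyadic interactions: (winner, loser)
bouts = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B"), ("C", "B")]
r = elo_ratings(bouts)
print(sorted(r, key=r.get, reverse=True))  # dominance hierarchy, top first
```

The update size scales with how surprising the outcome is given the current ratings, so an upset shifts both contestants' ratings more than an expected win; ranking individuals by their final rating yields the hierarchy.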

Meta-analytic mean

Our meta-analyses revealed a small overall effect size with large 95% credible intervals that overlapped zero (Table 2; Figure 1). Additionally, the overall heterogeneity (I²overall) was moderate (53%; Table 2). Thus, our results suggest that bib size is, at best, a weak and unreliable signal of dominance status in male house sparrows.
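The I² statistics reported here partition heterogeneity across the random effects of a multilevel model (sensu Nakagawa and Santos, 2012b): each variance component is expressed as a share of the total, which includes a 'typical' sampling variance computed from the estimates' precisions. A sketch with illustrative numbers (not the study's estimates):

```python
def i_squared(random_variances, sampling_variances):
    """Multilevel I²: each random-effect variance as a percentage of
    total variance, with the 'typical' sampling variance computed in
    the Higgins & Thompson (2002) style."""
    k = len(sampling_variances)
    w = [1.0 / v for v in sampling_variances]
    s2m = (k - 1) * sum(w) / (sum(w) ** 2 - sum(x * x for x in w))
    total = sum(random_variances.values()) + s2m
    out = {name: 100.0 * v / total for name, v in random_variances.items()}
    out["overall"] = 100.0 * sum(random_variances.values()) / total
    return out

# illustrative variance components (hypothetical values)
components = {"population_ID": 0.02, "study_ID": 0.03, "effect_ID": 0.05}
sampling = [1.0 / (n - 3) for n in (10, 12, 8, 20, 15)]  # Zr variances
print(i_squared(components, sampling))
```

With these inputs the overall I² comes out near 50%, i.e. roughly half the variance is beyond sampling error, comparable in magnitude to the moderate heterogeneity reported in Table 2.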

Forest plot showing the across-study effect size for the relationship between dominance rank and bib size in male house sparrows.

Both meta 1 and meta 2 include published and unpublished estimates, with meta 2 including two non-reported estimates assumed to be zero (see section ‘Meta-analyses’). We show posterior means and 95% credible intervals from multilevel meta-analyses. Estimates are presented as standardized effect sizes using Fisher’s transformation (Zr). Light, medium and dark grey show small, medium and large effect sizes, respectively (Cohen, 1988). k is the number of estimates.

https://doi.org/10.7554/eLife.37385.004
Table 2
Results of the multilevel meta-analyses on the relationship between dominance rank and bib size in male house sparrows.

Additionally, the results of the Egger’s regression tests are shown. Estimates are presented as standardized effect sizes using Fisher’s transformation (Zr). Both meta 1 and meta 2 include published and unpublished estimates, with meta 2 including two non-reported estimates assumed to be zero (see section ‘Meta-analyses’).

https://doi.org/10.7554/eLife.37385.005
| Meta-analysis | k | Meta-analytic mean [95% CrI] | I²population ID [95% CrI] (%) | I²study ID [95% CrI] (%) | I²overall [95% CrI] (%) | Egger's regression [95% CrI] |
| --- | --- | --- | --- | --- | --- | --- |
| meta 1 | 85 | 0.23 [−0.01, 0.45] | 16 [0, 48] | 21 [0, 51] | 53 [33, 73] | −0.13 [−0.59, 0.27] |
| meta 2 | 87 | 0.20 [−0.01, 0.40] | 15 [0, 46] | 20 [0, 49] | 53 [34, 74] | −0.12 [−0.55, 0.28] |

k = number of estimates; CrI = credible intervals; I² = heterogeneity.

Moderators of the relationship between dominance rank and bib size

None of the three biological moderators studied (season, group composition and type of interactions) explained differences among studies (Table 3). Sampling effort (i.e. the ratio of interactions recorded to individuals observed) was also not an important moderator (Table 3).

Table 3
Results of the multilevel meta-regressions testing the effect of several moderators on the relationship between dominance rank and bib size in male house sparrows.

Estimates are presented as standardized effect sizes using Fisher’s transformation (Zr).

https://doi.org/10.7554/eLife.37385.006
| Meta-regression | Estimate | Mean [95% CrI] |
| --- | --- | --- |
| meta 1 (k = 85) | intercept | 0.17 [−0.11, 0.46] |
| | season | −0.11 [−0.41, 0.21] |
| | group composition | 0.14 [−0.34, 0.59] |
| | type of interactions | 0.33 [−0.17, 0.91] |
| | R²marginal (%) | 23 [2, 48] |
| meta 2 (k = 87) | intercept | 0.15 [−0.10, 0.45] |
| | season | −0.08 [−0.42, 0.22] |
| | group composition | 0.12 [−0.32, 0.62] |
| | type of interactions | 0.27 [−0.17, 0.85] |
| | R²marginal (%) | 20 [0, 45] |
| sampling effort (k = 61) | intercept | 0.24 [−0.15, 0.55] |
| | sampling effort | 0.11 [−0.49, 0.74] |
| | sampling effort² | −0.14 [−0.77, 0.43] |
| | R²marginal (%) | 8 [0, 24] |

k = number of estimates; CrI = credible intervals; R²marginal = percentage of variance explained by the moderators. The factors season (non-breeding: 0, breeding: 1), group composition (mixed-sex: 0, male-only: 1), and type of interactions (all: 0, aggressive-only: 1) were mean-centred, and the covariates ‘sampling effort’ and its squared term were z-transformed.

Detection of publication bias

There was no clear asymmetry in the funnel plots (Figure 2). Also, Egger’s regression tests did not show evidence of funnel plot asymmetry in any of the meta-analyses (Table 2). However, published effect sizes were larger than unpublished ones, and the latter were not different from zero (Table 4; Figure 3). Additionally, we found that the overall effect size decreased over time and approached zero (Table 4; Figure 4).

Funnel plots of the meta-analytic residuals against their precision for the meta-analyses used to test the across-study relationship between dominance rank and bib size in male house sparrows.

Both meta 1 and meta 2 include published (blue) and unpublished (orange) estimates, with meta 2 including two additional non-reported estimates (grey; see section ‘Meta-analyses’). Estimates are presented as standardized effect sizes using Fisher’s transformation (Zr). Precision = square root of the inverse of the variance.

https://doi.org/10.7554/eLife.37385.007
Published effect sizes for the status signalling hypothesis in male house sparrows are larger than unpublished ones.

We show posterior means and 95% credible intervals from a multilevel meta-regression. Estimates are presented as standardized effect sizes using Fisher’s transformation (Zr). Light, medium and dark grey show small, medium and large effect sizes, respectively (Cohen, 1988). k is the number of estimates.

https://doi.org/10.7554/eLife.37385.008
The overall published effect size for the status signalling hypothesis in male house sparrows has decreased over time since first described (k = 53 estimates from 12 publications).

The solid blue line represents the model estimate, and the shading shows the 95% credible intervals of a multilevel meta-regression based on published studies (see section ‘Detection of publication bias’). Estimates are presented as standardized effect sizes using Fisher’s transformation (Zr). Circle area represents the size of the group of birds tested to obtain each estimate, where light blue denotes estimates for which group size is inflated due to birds from different groups being pooled, as opposed to dark blue where group size is accurate.

https://doi.org/10.7554/eLife.37385.009
Table 4
Results of the multilevel meta-regressions testing for time-lag and publication bias in the literature on status signalling in male house sparrows.

Estimates are presented as standardized effect sizes using Fisher’s transformation (Zr). Credible intervals not overlapping zero are highlighted in bold.

https://doi.org/10.7554/eLife.37385.010
| Meta-regression | Estimate | Mean [95% CrI] |
| --- | --- | --- |
| time-lag bias (k = 53) | intercept | **0.26 [0.03, 0.57]** |
| | year of publication | **−0.21 [−0.41, −0.01]** |
| | R²marginal (%) | 29 [0, 66] |
| published vs. unpublished (k = 85) | intercept | −0.09 [−0.37, 0.18] |
| | publishedᵃ | **0.50 [0.19, 0.81]** |
| | R²marginal (%) | 38 [0, 68] |

k = number of estimates; CrI = credible intervals; R²marginal = percentage of variance explained by the moderators; ᵃ relative to unpublished. Year of publication was z-transformed.

Discussion

The male house sparrow’s bib is not the strong across-study predictor of dominance status once believed. In contrast to the medium-to-large effect found in the previous meta-analysis (Nakagawa et al., 2007), our updated meta-analytic mean was small, uncertain and overlapped zero. Thus, the male house sparrow’s bib should not be unambiguously considered or called a badge of status. Furthermore, we found evidence for the existence of bias in the published literature that further undermines the validity of the available support for the status signalling hypothesis. First, the meta-analytic mean of unpublished studies was essentially zero, compared to the medium effect size detected in published studies. Second, we found that the effect size estimated in published studies has been decreasing over time, and recently published effects were on average no longer distinguishable from zero. Our findings call for reconsidering this textbook example in evolutionary and behavioural ecology, and should stimulate renewed attention to hypotheses explaining within-species variation in ornamentation.

The status signalling hypothesis (Rohwer, 1975) has been extensively tested as an explanation for within-species trait variation (e.g. reptiles: Whiting et al., 2003; insects: Tibbetts and Dale, 2004; humans: Dixson and Vasey, 2012), particularly plumage variation (Santos et al., 2011). Soon after the first empirical tests on birds, the black bib of male house sparrows became a textbook example of the status signalling hypothesis (Andersson, 1994; Searcy and Nowicki, 2005; Senar, 2006; Davies et al., 2012), an idea that was later confirmed meta-analytically (Nakagawa et al., 2007). However, the meta-analytic mean of Nakagawa et al. (2007) was likely overestimated because only nine low-powered studies were available (see Button et al., 2013). Here, we updated that meta-analysis with newly published and unpublished data. Our results showed that the overall effect size is much smaller and much more uncertain than previously thought. The status signalling hypothesis is thus no longer a compelling explanation for the evolution of bib size across populations of house sparrows.

Similarly contradictory conclusions have been reported for other model species. An exhaustive review and meta-analysis of plumage coloration in blue tits (Cyanistes caeruleus) revealed that, after dozens of publications studying the function of plumage ornamentation in this species, the only robust conclusion is that females’ plumage differs from that of males (Parker, 2013). Another example is the long-believed effect of leg bands of particular colours on the perceived attractiveness of male zebra finches (Taeniopygia guttata), which has also been refuted both experimentally and meta-analytically (Seguin and Forstmeier, 2012; Wang et al., 2018). Finally, the existence of a badge of status in a non-bird model species, the paper wasp (Polistes dominulus; Tibbetts and Dale, 2004), has also been challenged multiple times (e.g. Cervo et al., 2008; Green and Field, 2011; Green et al., 2013), generating doubts about its generality. Our findings corroborate studies showing that abundant replication is needed before any strong or general conclusion can be drawn (Aarts et al., 2015), and highlight the existence of important impediments (e.g. publication bias) to scientific progress in evolutionary ecology (Forstmeier et al., 2017; Fraser et al., 2018).

Indeed, our results showed that the published literature on status signalling in house sparrows is likely a biased subsample. The main evidence for this is that the mean effect size of unpublished studies was essentially zero and clearly different from the mean effect size based on published studies, which was of medium size. Furthermore, this moderator (i.e. unpublished vs. published) explained a large percentage of the model’s variance. In some of our own unpublished datasets, the relationship between dominance rank and bib size was never formally tested (D.F. Westneat and V. Bókony, personal communication, February, 2018), that is, our unpublished datasets are not all examples of the ‘file drawer problem’ (sensu Rosenthal, 1979). Egger’s regression tests failed to detect any funnel plot asymmetry, even in the meta-analyses based on published effect sizes only (Appendix 2—table 1). However, because unpublished data indeed existed (i.e. those obtained for this study), the detection failure was likely the consequence of the limited number of effect sizes available (i.e. low power) and the moderate level of heterogeneity found in this study (Moreno et al., 2009; Sterne and Egger, 2005).

An additional type of publication bias is time-lag bias, where early studies report larger effect sizes than later studies (Trikalinos and Ioannidis, 2005). We detected evidence for such bias because the correlation between dominance rank and bib size in published studies has decreased over time and approached zero. Year of publication explained a large percentage of the model's variance, and accounting for year of publication resulted in a strong reduction of the mean effect size across published studies (Table 4 vs. Appendix 2—table 1). Time-lag bias has been detected in other ecological studies (Poulin, 2000; Jennions and Møller, 2002b), including a meta-analysis on status signalling across bird species (Santos et al., 2011). In the latter study, a positive overall (across-species) effect size persisted regardless of the time-lag bias, and no strong evidence for other types of biases was found (Santos et al., 2011). However, Santos et al. (2011) did not attempt to analyse unpublished data, so additional evidence is needed to determine the effect that unpublished data may have on the overall validity of the status signalling hypothesis across bird species. If effect sizes based on unpublished data for other species were of similar magnitude to those obtained for house sparrows, the validity of the status signalling hypothesis across species would need reconsideration. The existence of publication bias in ecology has long been recognized (Cassey et al., 2004; Jennions and Møller, 2002b; Palmer, 2000). Publication bias leads to false conclusions if not accounted for (Rothstein et al., 2005), and is, thus, a serious impediment to scientific progress.

In addition to estimating the overall effect size for a hypothesis, meta-analyses are also used to assess heterogeneity among estimates (Higgins and Thompson, 2002; Higgins et al., 2003). Understanding the sources of heterogeneity is an important step towards the correct interpretation of a meta-analytic mean, and can be done using meta-regressions (Nakagawa and Santos, 2012b). Here, we found that the percentage of variance that was not attributable to sampling error (i.e. heterogeneity) was moderate. This value is below the average calculated across ecological and evolutionary meta-analyses (Senior et al., 2016), and indicates that we accounted for large differences among estimates. Our meta-regressions based on biological moderators explained 20–23% of the variance (Table 3). However, none of the biological moderators that we tested strongly influenced the overall effect size, possibly because of limited sample sizes.

The badge of status idea is more complex than typically portrayed (reviewed by Diep and Westneat, 2013). Badges of status are expected to be particularly important in large and unstable groups of individuals, where individual recognition would otherwise be difficult (Rohwer, 1975). While the evolution of badges of status in New and Old World sparrows has been related to sociality (i.e. flocking) during the non-breeding season (Tibbetts and Safran, 2009), additional factors need to be involved if the signal is to function in reducing aggression while retaining honesty (Diep and Westneat, 2013). Our results, however, did not show any evidence for a season-dependent effect, as the moderator ‘season’ (breeding vs. non-breeding) was not a strong predictor in our models. Badges of status are expected to function both within and between sexes (Rohwer, 1975; Senar, 2006). Indeed, we found little evidence that the status signalling function of bib size differed between male-only and mixed-sex flocks. Interestingly, when competing for resources, possessing a badge of status would be beneficial for both males and females. However, male but not female house sparrows have a bib. This sexual dimorphism suggests that the bib’s function is likely more important when competing for resources other than essential, non-sex-specific ones such as food, water, sand baths and roosting sites. Møller (1988; 1989) reported that female house sparrows preferentially choose males with large bibs (but see Kimball, 1996), and bib size has been positively correlated with sexual behaviour (Veiga, 1996; Møller, 1990), which suggests that the bib may play a role in mate choice. Furthermore, the original status signalling hypothesis posits that the main benefit of using badges of status is the avoidance of fights, which should be particularly important when interacting with unfamiliar individuals (Rohwer, 1975; Senar, 2006).
Although we did not have data to test whether unfamiliarity among contestants is an important pre-requisite for the status signalling hypothesis, we found no change in mean effect size when only obviously aggressive interactions were studied. In practice, testing whether the bib is important in mediating aggression among unfamiliar individuals is difficult because the certainty of the estimates of individual dominance increases over time as more contests are recorded, but so does familiarity among contestants.

There are some additional explanations for the small and uncertain effect detected by our meta-analyses. First, different populations might be under different selective pressures regarding status signalling. Indeed, the population-specific heterogeneity (I²population ID) estimated in our meta-analyses was 15–16%, suggesting that population-dependent effects might exist. Second, although none of the moderators had a strong influence on the overall effect size, the study-specific heterogeneity estimated in our meta-analyses (I²study ID = 20–21%) suggests that the uncertainty observed could still be explained by the status signal being context-dependent. However, context-dependence is often invoked post hoc to explain variation among studies, and strong evidence for it is lacking in most cases. Last, most studies testing the status signalling hypothesis in house sparrows are observational (Table 1), and the only two experimental studies conducted so far were inconclusive (Diep, 2012; Gonzalez et al., 2002). Thus, it cannot be ruled out that the weak correlation observed between dominance status and bib size is driven by a third, unknown variable. In this respect, it has been proposed that the association between melanin-based coloration (such as the bib; e.g. Galván et al., 2015; Galván and Alonso-Alvarez, 2017) and aggression is due to pleiotropic effects of the genes involved in regulating the synthesis of melanin (reviewed by Ducrest et al., 2008). Furthermore, bib size has been shown to correlate with testosterone, a hormone often involved in aggressive behaviour (Gonzalez et al., 2001), but this relationship has not been consistently observed (Laucht et al., 2010). Future studies should shift the focus towards understanding the function of bib size in wild populations and considerably increase the number of birds studied per group.
The latter is essential because the statistical power of published tests of the status signalling hypothesis in house sparrows is alarmingly low (power = 8.5% for r = 0.20, Appendix 3) and lower than the average in behavioural ecology (Jennions, 2003).
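The power problem can be made intuitive with a small simulation: at the median group size of six birds, a true correlation of r = 0.20 is almost never statistically detectable. The sketch below uses assumed parameters and a normal-approximation (Fisher z) test; the 8.5% figure quoted above comes from the paper's own power analysis (Appendix 3), not from this sketch.

```python
import math, random

def power_correlation(r, n, reps=20000, seed=42):
    """Monte-Carlo power of a two-sided Fisher-z test (alpha = 0.05)
    for detecting a true correlation r with n pairs."""
    rng = random.Random(seed)
    z_crit = 1.959964
    hits = 0
    for _ in range(reps):
        # simulate n bivariate-normal pairs with correlation r
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        ys = [r * x + math.sqrt(1.0 - r * r) * rng.gauss(0.0, 1.0) for x in xs]
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sxx = sum((a - mx) ** 2 for a in xs)
        syy = sum((b - my) ** 2 for b in ys)
        r_hat = max(-0.999999, min(0.999999, sxy / math.sqrt(sxx * syy)))
        if abs(math.atanh(r_hat)) * math.sqrt(n - 3) > z_crit:
            hits += 1
    return hits / reps

print(power_correlation(0.20, 6))              # a single group of six birds
print(power_correlation(0.20, 200, reps=2000)) # power recovers only at large n
```

At n = 6 the rejection rate barely exceeds the 5% false-positive rate, whereas samples of hundreds of individuals are needed before power becomes respectable.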

Our analyses have several potential limitations. First, although the number of studies included in this meta-analysis is more than double that of the previous meta-analysis (Nakagawa et al., 2007), it is still limited. Also, it is likely (see above) that additional unpublished data are stored in ‘file drawers’ (sensu Rosenthal, 1979). Second, most tests included in this study were still low-powered in terms of group size (median = 6 individuals/estimate, range = 4–41), and the sample size is inflated because some of the published studies pooled individuals from different groups (Figure 4). Third, although our results showed little evidence of an effect of sampling effort on the overall effect size, the quality of the data on dominance and bib size may still be a potential factor explaining differences among studies. Fourth, experiments will normally yield larger effect sizes than observational studies because effects of confounding factors can be reduced (Palmer, 2000). Nonetheless, our systematic review only identified two studies where the status signalling hypothesis was tested experimentally in house sparrows (Gonzalez et al., 2002; Diep, 2012), preventing us from estimating the meta-analytic mean for experimental studies. Note, however, that the results of those experiments were inconclusive, and potentially affected by regression to the mean (Forstmeier et al., 2017).

In conclusion, our results challenge an established textbook example of the status signalling hypothesis, which aims to explain within-species variation in ornament size. In house sparrows, we find no evidence that bib size consistently acts as a badge of status across studies and populations, and thus, bib size can no longer be considered a textbook example of the status signalling hypothesis. Furthermore, our analyses highlight the existence of publication biases in the literature, further undermining the validity of past conclusions. Bias against the publication of small (‘non-significant’) effects hinders scientific progress. We thus join the call for a change in incentives and scientific culture in ecology and evolution (Forstmeier et al., 2017; Ihle et al., 2017; Nakagawa and Parker, 2015; Parker et al., 2016).

Materials and methods

Systematic review

We used several approaches to maximize the identification of relevant studies. First, we included all studies reported in a previous meta-analysis that tested the relationship between dominance rank and bib size in house sparrows (Nakagawa et al., 2007). Second, we conducted a keyword search on Web of Science, PubMed and Scopus from 2006 to June 2017 to find studies published after Nakagawa et al., 2007, using the combination of keywords [‘bib/badge’, ‘sparrow’, ‘dominance/status/fighting’]. Third, we screened all studies on house sparrows used in a meta-analysis that tested the relationship between dominance and plumage ornamentation across species (Santos et al., 2011) to identify additional studies that we may have missed in our keyword search. We screened titles and abstracts of all articles and removed the irrelevant articles before examining the full texts (Supplementary file 1). We followed the preferred reporting items for systematic reviews and meta-analyses (PRISMA; Moher et al., 2009; see ‘Reporting Standards Documents’). We only included articles in which dominance was directly inferred from agonistic dyadic interactions over resources such as food, water, sand baths or roosting sites (Appendix 1—table 1).

Summary data extraction

Some studies had more than one effect size estimate per group of birds studied. When the presence of multiple estimates was due to the use of different statistical analyses on the same data, we chose a single estimate based on the following order of preference: (1) direct reports of effect size per group of birds studied (e.g. correlation coefficient); (2) inferential statistics (e.g. t, F and χ2 statistics) from analyses where group ID was accounted for and no other fixed effects were included; (3) direct reports of effect size where individuals from different groups were pooled together; (4) inferential statistics from models including other fixed effects. When the presence of multiple estimates was due to the use of different methods to estimate bib size and dominance rank on the same data, we chose a single estimate per group of birds or study based on the order of preference shown in Appendix 1—tables 1–3. In each case, the order of preference was determined prior to conducting any statistical analysis, and thus, method selection was blind to the outcome of the analyses (more details in Appendix 1).

Primary data acquisition

We requested primary data (i.e. agonistic dyadic interactions and bib size measures) of all relevant studies identified by our systematic review. Additionally, we asked authors to share, if available, any unpublished data that could be used to test the relationship between dominance rank and bib size in house sparrows. We emailed the corresponding author, but if no reply was received, we tried contacting all the other authors listed. One study (Møller, 1987) provided all primary data in the original publication and, therefore, its author was not contacted. Last, we included our own unpublished data (Appendix 1—table 5).

Most studies recorded data from more than one group of birds (Table 1). For each primary dataset obtained, we inferred the dominance hierarchy of each group of birds from the observed agonistic dyadic interactions (wins and losses) among individuals using the randomized Elo-rating method, which estimates dominance hierarchies more precisely than other methods (Sánchez-Tójar et al., 2018b). We then used the provided measures of individual bib size (e.g. area outlined from pictures) or, if possible, calculated bib area from length and width measures following Møller (1987). Subsequently, we estimated Spearman’s rank correlation (ρ) between individual rank and bib size for each group of birds. For one study (Buchanan et al., 2010), we received the already inferred dominance hierarchies for each group of birds, which we then correlated with bib size to obtain ρ.
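The per-group rank–bib correlation can be sketched as follows. This is a toy illustration rather than the study’s actual pipeline; it assumes the `elo_scores()` interface of the ‘aniDom’ package named above, and all data values are hypothetical:

```r
library(aniDom)

# Toy, hypothetical data: winner and loser IDs for each agonistic interaction
winners <- c("A", "A", "B", "A", "C", "B")
losers  <- c("B", "C", "C", "D", "D", "D")

# Randomized Elo-rating: scores averaged over randomized interaction orders
scores <- elo_scores(winners, losers, randomise = TRUE, n.rands = 1000)
mean.scores <- rowMeans(scores)  # higher score = more dominant

# Hypothetical bib areas (mm^2) for birds A-D
bib <- c(A = 450, B = 430, C = 400, D = 380)

# Spearman's rank correlation between dominance score and bib size
cor(mean.scores, bib[names(mean.scores)], method = "spearman")
```

A positive ρ here would indicate that more dominant birds tend to have larger bibs in that group.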

Effect size coding

Regardless of their source (primary or summary data), we transformed all estimates (e.g. ρ, F statistics) into Pearson’s correlation coefficients (r), and then into standardized effect sizes using Fisher’s transformation (Zr) for among-study comparison, using the equations from Nakagawa et al., 2007 and Lajeunesse, 2013. Because Zr is undefined for r = ±1, r values equal to 1.00 and −1.00 were transformed to 0.975 and −0.975, respectively, before calculating Zr. Zr values of 0.100, 0.310 and 0.549 were considered small, medium and large effect sizes, respectively (equivalent benchmarks from Cohen, 1988). When not reported directly, the number of individuals (n) was estimated from the degrees of freedom. The variance in Zr was calculated as VZr = 1/(n − 3). Estimates (k) based on fewer than four individuals were discarded (k = 33 estimates discarded).
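In code, the transformations described above amount to the following minimal sketch in R, mirroring the equations in the text:

```r
# Fisher's transformation: r -> Zr, and the sampling variance of Zr
r.to.Zr <- function(r) 0.5 * log((1 + r) / (1 - r))
VZr     <- function(n) 1 / (n - 3)

# r values of exactly +/-1 are shrunk to +/-0.975 before transforming
r <- c(0.30, 1.00, -1.00)
r[r ==  1.00] <-  0.975
r[r == -1.00] <- -0.975

round(r.to.Zr(r), 3)  # 0.310  2.185 -2.185
round(VZr(10), 3)     # 0.143 for a group of 10 birds
```

Note that r = 0.30 maps to Zr ≈ 0.310, the ‘medium’ benchmark used above.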

Meta-analyses

We ran two multilevel meta-analyses to test whether dominance rank and bib size were positively correlated across studies. The first meta-analysis (hereafter ‘meta 1’) included published and unpublished (re-)analysed effect sizes (i.e. effect sizes estimated from the studies for which we obtained primary data), plus the remaining published effect sizes obtained from summary data (i.e. effect sizes for which primary data were unavailable).

The second meta-analysis (hereafter ‘meta 2’) tested the robustness of the results of meta 1 to the inclusion of non-reported estimates from studies that reported ‘statistically non-significant’ results without showing either the magnitude or the direction of the estimates (Table 1). Receipt of primary data allowed us to recover some, but not all, of the originally non-reported estimates: two ‘non-significant’ estimates were still missing. Thus, meta 2 was identical to meta 1 but additionally included these two non-reported estimates, which were assumed to be zero (see Booksmythe et al., 2017 for a similar approach). Note that non-significant estimates can be either negative or positive; assuming they were zero may therefore have either under- or overestimated them, which cannot be determined for non-reported estimates. Meta-analyses based on published studies only are shown in Appendix 2.

We investigated inconsistency across studies by estimating heterogeneity (I2) from our meta-analyses following Nakagawa and Santos, 2012b. I2 values around 25%, 50% and 75% are considered low, moderate and high levels of heterogeneity, respectively (Higgins et al., 2003).

Meta-regressions

We tested whether season, group composition and/or the type of interactions recorded had an effect on the meta-analytic mean. For that, we ran two multilevel meta-regressions that included the following moderators (hereafter ‘biological moderators’): (1) ‘season’, referring to whether the study was conducted during the non-breeding (September–February) or the breeding season (March–August); (2) ‘group composition’, referring to whether birds were kept in male-only or in mixed-sex groups; and (3) ‘type of interactions’, referring to whether the dyadic interactions recorded were only aggressive (e.g. threats and pecks) or also included interactions that were not obviously aggressive (e.g. displacements). Because only three of 19 studies were conducted in the wild (k = 12 estimates; Table 1), we did not include a moderator testing for captive versus wild environments. The three biological moderators were mean-centred following Schielzeth, 2010 to aid interpretation.

The ratio of agonistic dyadic interactions recorded to the total number of interacting individuals observed (hereafter ‘sampling effort’) correlates positively and logarithmically with the ability to infer the latent dominance hierarchy: the higher this ratio, the more precisely the latent hierarchy can be inferred (Sánchez-Tójar et al., 2018b). For the subset of studies for which the primary data on agonistic dyadic interactions were available (12 out of 19 studies; Table 1), we ran a multilevel meta-regression including sampling effort and its squared term as z-transformed moderators (Schielzeth, 2010). The squared term was included because of the observed logarithmic relationship between sampling effort and the method’s performance (Sánchez-Tójar et al., 2018b). This meta-regression tested whether sampling effort had an effect on the meta-analytic mean: a positive estimate would indicate that the meta-analytic mean may have been diluted by the inclusion of studies with unreliable estimates of dominance rank, whereas a negative estimate would indicate that effect sizes were larger when based on unreliable estimates of dominance rank, and hence provide evidence for the existence of publication bias.

For all meta-regressions, we estimated the percentage of variance explained by the moderators (R2marginal) following Nakagawa and Schielzeth, 2013.

Random effects

All meta-analyses and meta-regressions included the two random effects ‘population ID’ and ‘study ID’. Population ID was related to the geographical location of the population of birds studied. We used Google Maps to estimate the distance over land (i.e. avoiding large water bodies) among populations, and assigned the same population ID when the distance was below 50 km (13 populations; Table 1). Study ID encompassed those estimates obtained within each specific study (19 studies). Two studies tested the prediction twice for the same groups of birds (Table 1) and, within each population, some individuals may have been sampled more than once. However, we could not include ‘group ID’ and/or ‘individual ID’ as additional random effects due to either limited sample size or because the relevant data were not available.

Detection of publication bias

For the meta-analyses, we assessed publication bias using two methods that are based on the assumption that funnel plots should be symmetrical. First, we visually inspected asymmetry in funnel plots of meta-analytic residuals against the inverse of their precision (defined as the square root of the inverse of VZr) for each meta-analysis. Funnel plots based on meta-analytic residuals (the sum of effect-size-level effects and sampling-variance effects) are more appropriate than those based on effect sizes when multilevel models are used (Nakagawa and Santos, 2012b). Second, we ran Egger’s regressions using the meta-analytic residuals as the response variable, and the precision (see above) as the moderator (Nakagawa and Santos, 2012b) for each meta-analysis. If the credible interval of the intercept of such a regression does not overlap zero, estimates from the opposite direction to the meta-analytic mean might be missing, and hence we consider this evidence of publication bias (Nakagawa and Santos, 2012b). Further, we tested whether published estimates differed from unpublished estimates. For that, we ran a multilevel meta-regression that included population ID and study ID as random effects, and ‘unpublished’ (two levels: yes (0), no (1)) as a moderator. This meta-regression was based on meta 1 (i.e. it did not include the two non-reported estimates). We did not use the trim-and-fill method (Duval and Tweedie, 2000a; Duval and Tweedie, 2000b) because this method has been advised against when significant heterogeneity is present (Moreno et al., 2009; Jennions et al., 2013), as was the case in our meta-analyses (see section 'Results’).
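The core of the Egger’s regression logic can be sketched as follows. This is a simplified, single-level illustration (the analyses above were run as multilevel models), and the column names `resid_Zr` and `precision` are assumptions:

```r
# dat: one row per effect size, with meta-analytic residuals (resid_Zr)
# and their precision (square root of the inverse sampling variance)
egger <- lm(resid_Zr ~ precision, data = dat)

# An intercept whose interval excludes zero suggests funnel asymmetry,
# that is, potential publication bias
confint(egger)["(Intercept)", ]
```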

Finally, we analysed temporal trends in effect sizes that could indicate ‘time-lag bias’. Time-lag bias is common in the literature (Jennions and Møller, 2002b; Poulin, 2000), and occurs when the effect sizes of a specific hypothesis are negatively correlated with publication date (i.e. effect sizes decrease over time; Trikalinos and Ioannidis, 2005). A decrease in effect size over time can have multiple causes. For example, initial effect sizes might be inflated due to low statistical power (‘winner’s curse’) but published more easily and/or earlier due to positive selection of statistically significant results (reviewed by Koricheva et al., 2013). We ran a multilevel meta-regression based on published effect sizes only, where ‘year of publication’ was included as a z-transformed moderator (Nakagawa and Santos, 2012b).

All analyses were run in R v. 3.4.0 (R Core Team, 2017). We inferred individual dominance ranks from agonistic dyadic interactions using the randomized Elo-rating method from the R package ‘aniDom’ v. 0.1.3 (Farine and Sánchez-Tójar, 2017; Sánchez-Tójar et al., 2018b). Additionally, we described the dominance hierarchies observed in the groups of house sparrows for which primary data were available. For that, we estimated the uncertainty of the dominance hierarchies using the R package ‘aniDom’ v. 0.1.3 (Farine and Sánchez-Tójar, 2017; Sánchez-Tójar et al., 2018b) and the triangle transitivity (McDonald and Shizuka, 2013) using the R package ‘compete’ v. 3.1.0 (Curley, 2016). We used the R package ‘MCMCglmm’ v. 2.24 (Hadfield, 2010) to run the multilevel meta-analytic (meta-regression) models (Hadfield and Nakagawa, 2010). For each meta-analysis and meta-regression, we ran three independent MCMC chains for 2 million iterations (thinning = 1,800, burn-in = 200,000) using inverse-Gamma priors (V = 1, nu = 0.002). Model chains were checked for convergence and mixing using the Gelman-Rubin statistic. The auto-correlation within the chains was <0.1 in all cases. For each meta-analysis and meta-regression, we chose the model with the lowest DIC value to extract the posterior mean and its 95% highest posterior density intervals (hereafter 95% credible interval). We report all data exclusion criteria applied and the results of all analyses conducted in our study.
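One such multilevel meta-analytic model can be sketched in MCMCglmm as follows. This is a hedged sketch, not the authors’ exact script: it assumes a data frame `dat` with one row per effect size (`Zr`, its sampling variance `VZr`, and the identifiers `population_ID` and `study_ID`; these column names are assumptions):

```r
library(MCMCglmm)

# Inverse-Gamma priors (V = 1, nu = 0.002) for each variance component
prior <- list(R = list(V = 1, nu = 0.002),
              G = list(G1 = list(V = 1, nu = 0.002),
                       G2 = list(V = 1, nu = 0.002)))

# Multilevel meta-analysis: sampling variances enter via 'mev'
model <- MCMCglmm(Zr ~ 1,
                  random = ~ population_ID + study_ID,
                  mev    = dat$VZr,
                  data   = dat,
                  prior  = prior,
                  nitt = 2e6, thin = 1800, burnin = 2e5,
                  verbose = FALSE)

summary(model)  # posterior mean of the intercept = meta-analytic mean
```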

Data and code availability

We provide all of the R code and data used for our analyses (Sánchez-Tójar et al., 2018a).

Appendix 1

Information about data used in the study

Appendix 1—table 1
Summary of key differences in methodology among all studies (published and unpublished) testing the relationship between dominance rank and bib size in male house sparrows (N = 19 studies).
https://doi.org/10.7554/eLife.37385.015
Variable | Levels | Number of studies | Order of preference*
---------|--------|-------------------|---------------------
Group composition | Males and females | 11 | -
&nbsp; | Males only | 8 | -
Resource competed for | Food only | 12 | -
&nbsp; | Food, water and roosting place | 6 | -
&nbsp; | Females | 1 | -
Type of interactions | Aggressive only | 12 | -
&nbsp; | Aggressive and non-aggressive | 7 | -
Interactions recording protocol | Live observations | 11 | -
&nbsp; | Video | 6 | -
&nbsp; | Live and video observations | 2 | -
Type of bib size measured | Visible | 14 | 1
&nbsp; | Hidden | 2 | 2
&nbsp; | Both | 3 | -
Beak angle during measurement | 90° | 8 | 1
&nbsp; | 180° | 3 | 2
&nbsp; | Both | 1 | -
&nbsp; | Unknown | 7 | -
Season | Non-breeding | 13 | -
&nbsp; | Breeding | 5 | -
&nbsp; | Both | 1 | -
Study location | Captive | 16 | -
&nbsp; | Wild | 2 | -
&nbsp; | Both | 1 | -
  1. *Order of preference used for the analyses (see main text). The order of preference was determined based on how frequently the method was used in previous studies.

Appendix 1—table 2
List of the different methods used to estimate bib size in all studies (published and unpublished) testing the relationship between dominance rank and bib size in male house sparrows (N = 19 studies).

Note that some studies used more than one method to estimate bib size.

https://doi.org/10.7554/eLife.37385.016
Method to estimate bib size | Number of times used | Order of preference‡
----------------------------|----------------------|---------------------
Area* | 8 | 1
Møller, 1987’s equation | 6 | 2
Length and width† | 3 | 2
Length only | 2 | 3
Møller, 1987’s drawings | 1 | 4
Veiga, 1993’s equation | 1 | 5
  1. *Area was measured from pictures (N = 5 studies), by tracing and weighing (N = 2 studies), and by tracing and ranking (N = 1 study).

    †If length and width were available, we estimated bib area using Møller, 1987’s equation.

  2. ‡Order of preference used for the analyses (see main text). The order of preference was determined based on how frequently the method was used in previous studies.

Appendix 1—table 3
List of the different methods used to infer dominance rank from dyadic interactions in published studies that tested the relationship between dominance rank and bib size in male house sparrows (N = 13 published studies, 11 different methods).

Note that some studies used more than one method to estimate dominance rank and that unpublished studies are not included in this summary.

https://doi.org/10.7554/eLife.37385.017
Method to infer dominance rank | Number of times used | Order of preference*
-------------------------------|----------------------|---------------------
Proportion of contests won | 4 | 4
Proportion of initiated contests | 3 | 5
Kendall’s linearity index | 2 | 3
Proportion of contests won per dyad | 2 | 6
Proportion of initiated contests won | 2 | 6
David’s score | 1 | 1
I and SI | 1 | 2
Landau’s linearity index | 1 | 3
Proportion of the received attacks won | 1 | 7
Proportion of birds dominated | 1 | 7
Proportion of contests won per dyad + linear assumption | 1 | 7
  1. *Order of preference used for the analyses (see main text). The order of preference was determined based on both how frequently the method was used in previous studies and by taking into account the (expected) performance of each of the methods. First, higher order of preference was assigned to methods specifically designed for inferring linear dominance hierarchies (i.e. David’s score, I and SI, Landau’s and Kendall’s linearity indices). We used the information available in Sánchez-Tójar et al., 2018b to rank David’s score and I and SI as first and second methods in preference, respectively. Second, we ranked the remaining (proportion-based) methods based on how frequently they were used in previous studies. Importantly, the order of preference was chosen prior to conducting any statistical analysis, and thus, method selection was blind to the outcome of the analyses.

Appendix 1—table 4
Additional comments on some of the published studies included in the meta-analysis.
https://doi.org/10.7554/eLife.37385.018
Reference | Comments
----------|---------
Ritchison, 1985 | According to the original publication, the total number of birds studied was 35, as opposed to the 25 individuals used in the meta-analyses of Nakagawa et al., 2007 and Santos et al., 2011.
Hein et al., 2003 | The total number of birds included in our re-analysis of the primary data is smaller than that presented in the original publication. This is because our re-analysis only included fully identified individuals (e.g. birds missing rings could not be included).
Dolnik and Hoi, 2010 | 32 males were selected for the experiment, but one bird was excluded before the start of the experiment. Thus, n was set to 31 individuals for this study.
Buchanan et al., 2010 | 96 birds were separated in 24 aviaries of four individuals each. The final n of several aviaries was less than four individuals, and therefore, these aviaries were not included in our meta-analyses (see main text, section ‘Materials and Methods’).
Rojas Mora et al., 2016 | According to the primary data, one male did not interact, and thus, n was set to 59 individuals in Appendix 2.
Appendix 1—table 5
Data descriptions for the unpublished data analysed in the meta-analysis.
https://doi.org/10.7554/eLife.37385.019
Study ID* | Data description
----------|-----------------
14 | 88 individuals were separated into four captive mixed-sex groups. Live observations after mild food deprivation were conducted to record agonistic dyadic interactions (i.e. fights) over (mostly) food for around one week in Feb 2003 (total = 1,563 fights). Bib length and width were measured for each male before the dominance observations using a ruler. More information can be found in Lendvai et al., 2004 and Bókony et al., 2012.
15 | 61 individuals were separated into three captive mixed-sex groups. Live observations after mild food deprivation were conducted to record agonistic dyadic interactions (i.e. fights) over (mostly) food between Oct and Dec 2005 (two groups) and 2006 (one group; total = 2,003 fights). Bib area was measured for each male using standardized pictures taken after the dominance observations. More information can be found in Tóth et al., 2009 and Bókony et al., 2012.
16 | 60 individuals were separated into four captive mixed-sex groups. Live and video observations after mild food deprivation were conducted to record agonistic dyadic interactions (i.e. fights) over (mostly) food for around two weeks per group between Oct 2007 and Feb 2008 (total = 6,641 fights). Bib length and width were measured for each male before the dominance observations using a ruler. More information can be found in Bókony et al., 2010 and Bókony et al., 2012.
17 | 96 males were separated into four captive male-only groups. Videos after mild food deprivation were taken to record agonistic dyadic interactions (i.e. fights) over food for 10 days between Oct and Dec 2014 (total = 3,776 fights). Bib area was measured several times for each male (median = 3 times/male, range = 2 to 6) using standardized pictures taken from Oct to Dec 2014, and the mean bib area of each individual was used in the analyses.
18 | 453 individuals (215 females and 238 males) were observed in seven discrete sampling events in a wild population of house sparrows at Lundy Island, UK. Videos were taken to record agonistic dyadic interactions (i.e. fights) over food for 20 days between Nov 2013 and Dec 2016 (total = 11,063 fights). Bib length was measured several times for each male (median = 1 time/male, range = 1 to 6) from Nov 2013 to Dec 2016 using a calliper, and the mean bib area of each individual in each sampling event was used in the analyses.
19 | 128 individuals were separated into 16 captive mixed-sex groups. Live observations after mild food deprivation were conducted to record agonistic dyadic interactions (i.e. supplants and hold-offs) over food between Mar and Apr 2005 (total = 5,496 fights). Bib length and width were measured for each male before the dominance observations using a calliper as in Morrison et al., 2008.
  1. *Study ID corresponding to Table 1 in main text.

Appendix 2

Meta-analyses based on published studies only

Appendix 2—table 1
Results of two multilevel meta-analyses to test the relationship between dominance rank and bib size in male house sparrows based on published studies only.

Published 1 includes published effect sizes obtained from summary data, whereas published 2 includes published re-analysed effect sizes together with the remaining published effect sizes obtained from summary data. Additionally, the results of the Egger’s regressions are shown. Estimates are presented as standardized effect sizes using Fisher’s transformation (Zr). Credible intervals not overlapping zero are highlighted in bold.

https://doi.org/10.7554/eLife.37385.021
Meta-analysis | k | Meta-analytic mean [95% CrI] | I2population ID [95% CrI] (%) | I2study ID [95% CrI] (%) | I2overall [95% CrI] (%) | Egger’s regression [95% CrI]
--------------|---|------------------------------|-------------------------------|--------------------------|-------------------------|------------------------------
Published 1 | 20 | 0.45 [0.26, 0.63] | 17 [0, 51] | 17 [0, 53] | 46 [15, 78] | 0.42 [−0.73, 1.48]
Published 2 | 53 | 0.40 [0.11, 0.67] | 14 [0, 46] | 13 [0, 42] | 46 [17, 72] | −0.25 [−0.73, 0.26]
  1. k = number of estimates; CrI = credible intervals; I2 = heterogeneity.

Appendix 2—figure 1
Forest plot showing the overall effect size of the relationship between dominance rank and bib size in male house sparrows based on published studies only.

Published 1 includes published effect sizes obtained from summary data, whereas published 2 includes published re-analysed effect sizes together with the remaining published effect sizes obtained from summary data. We show posterior means and 95% credible intervals from multilevel meta-analyses. Estimates are presented as standardized effect sizes using Fisher’s transformation (Zr). Light, medium and dark grey show small, medium and large effect sizes, respectively (Cohen, 1988). k is the number of estimates.

https://doi.org/10.7554/eLife.37385.022
Appendix 2—figure 2
Funnel plots of the meta-analytic residuals against their precision for the meta-analyses based on published studies only.

Published 1 includes published effect sizes obtained from summary data, whereas published 2 includes published re-analysed effect sizes together with the remaining published effect sizes obtained from summary data. Estimates are presented as standardized effect sizes using Fisher’s transformation (Zr). Precision = square root of the inverse of the variance.

https://doi.org/10.7554/eLife.37385.023

Appendix 3

Power analysis based on the estimated meta-analytic mean

R code used and explanations:

First, we clear the workspace and load the pwr library.

# clear memory
rm(list = ls())
# package needed
library(pwr)

Furthermore, we created a function to transform Zr values into r values. This is because our meta-analyses were based on Zr values, but the power analysis is based on r values.

# function to convert Zr to r
Zr.to.r <- function(Zr){
  (exp(2 * Zr) - 1) / (exp(2 * Zr) + 1)
}

Power analysis

Next, we estimated the sample size necessary to find an effect size as small as the one estimated by our meta-analysis (Zr = 0.20). We used a significance level of 0.05, and the recommended 80% statistical power (Cohen, 1988).

pwr.r.test(r = Zr.to.r(0.20), sig.level = 0.05, power = 0.8)
##
##    approximate correlation power calculation (arctangh transformation)
##
##            n = 198.3401
##            r = 0.1973753
##     sig.level = 0.05
##         power = 0.8
##    alternative = two.sided

This shows that we would need the dominance rank and bib size of 198 individuals to find a significant r correlation of 0.20 with an 80% statistical power.

Additionally, we estimated the across-study statistical power of the tests on status signalling in house sparrows to compare it to the overall statistical power found in the behavioural ecology literature (Jennions, 2003).

pwr.r.test(n = 10, r = Zr.to.r(0.20), sig.level = 0.05)
##
##    approximate correlation power calculation (arctangh transformation)
##
##            n = 10
##            r = 0.1973753
##     sig.level = 0.05
##         power = 0.08474157
##    alternative = two.sided

This shows that the statistical power of the sparrow literature on status signalling is as low as 8.5%, which is alarming.

References

  1. 1
  2. 2
  3. 3
    Sexual Selection
    1. M Andersson
    (1994)
    New Jersey: Princeton University Press.
  4. 4
  5. 5
    Naturgeschichte Des Rothirshes. Monographie Wildsiiugetiere IV
    1. J Beninde
    (1937)
    Leipzig: P. Schöps.
  6. 6
  7. 7
  8. 8
  9. 9
  10. 10
  11. 11
  12. 12
  13. 13
  14. 14
    Statistical Power Analysis for the Behavioral Sciences (Second edition)
    1. J Cohen
    (1988)
    New Jersey: Taylor & Francis Inc.
  15. 15
  16. 16
    compete: Analyzing Social Hierarchies
    1. JP Curley
    (2016)
    compete: Analyzing Social Hierarchies, https://cran.r-project.org/web/packages/compete/index.html.
  17. 17
  18. 18
    An Introduction to Behavioural Ecology
    1. NB Davies
    2. JR Krebs
    3. SA West
    (2012)
    Oxford: Wiley-Blackwell.
  19. 19
    The Role of Social Interactions on the Development and Honesty of a Signal of Status
    1. SK Diep
    (2012)
    University of Kentucky.
  20. 20
  21. 21
  22. 22
  23. 23
  24. 24
  25. 25
  26. 26
    aniDom: Inferring Dominance Hierarchies and Estimating Uncertainty
    1. DR Farine
    2. A Sánchez-Tójar
    (2017)
    aniDom: Inferring Dominance Hierarchies and Estimating Uncertainty, https://cran.r-project.org/package=aniDom.
  27. 27
  28. 28
  29. 29
  30. 30
  31. 31
  32. 32
  33. 33
  34. 34
  35. 35
  36. 36
  37. 37
    MCMC methods for Multi-Response generalized linear mixed models: themcmcglmmrpackage
    1. JD Hadfield
    (2010)
    Journal of Statistical Software, 33, 10.18637/jss.v033.i02.
  38. 38
  39. 39
  40. 40
  41. 41
  42. 42
  43. 43
  44. 44
  45. 45
  46. 46
  47. 47
  48. 48
    Publication and related biases
    1. MD Jennions
    2. C Lortie
    3. M Rosenberg
    4. H Rothstein
    (2013)
    In: J Koricheva, J Gurevitch, K Mengersen, editors. Handbook of Meta-Analysis in Ecology & Evolution. Princenton: Princeton University Press. pp. 207–236.
    https://doi.org/10.1515/9781400846184-016
  49. 49
  50. 50
  51. 51
    Temporal trends in effect sizes: causes, detection, and implications
    1. J Koricheva
    2. MD Jennions
    3. J Lau
    (2013)
    In: J Koricheva, J Gurevitch, K Mengersen, editors. Handbook of Meta-Analysis in Ecology & Evolution. Princenton: Princeton University Press. pp. 237–254.
    https://doi.org/10.1515/9781400846184-017
  52. 52
  53. 53
    Recovering Missing or Partial Data from Studies: A Survey of Conversions and Imputations for Meta-analysis
    1. MJ Lajeunesse
    (2013)
    In: J Koricheva, J Gurevitch, K Mengersen, editors. Handbook of Meta-Analysis in Ecology & Evolution. Princenton: Princeton University Press. pp. 195–206.
    https://doi.org/10.1515/9781400846184-015
  54. 54
  55. 55
  56. 56
  57. 57
  58. 58
  59. 59
    Meta-analysis of primary data
    1. K Mengersen
    2. J Gurevitch
    3. CH Schmid
    (2013)
    In: J Koricheva, K Mengersen, J Gurevit, editors. Handbook of Meta-Analysis in Ecology & Evolution. Princenton: Princeton University Press. pp. 300–312.
    https://doi.org/10.1515/9781400846184-020
  60. 60
  61. 61
  62. 62
    Badge size in the house sparrow Passer domesticus - Effects of intra- and intersexual selection
    1. AP Møller
    (1988)
    Behavioral Ecology and Sociobiology 22:373–378.
  63. 63
  64. 64
  65. 65
  66. 66
  67. 67
  68. 68
  69. 69
  70. 70
  71. 71
  72. 72
  73. 73
  74. 74
  75. 75
  76. 76
  77. 77
  78. 78
  79. 79
  80. 80
    R: A language and environment for statistical computing
    1. R Core Team
    (2017)
    R Foundation for Statistical Computing, Vienna.
  81. 81
  82. 82
    Plumage variability and social status in captive male house sparrows
    1. G Ritchison
    (1985)
    Kentucky Warbler 61:39–42.
  83. 83
  84. 84
  85. 85
  86. 86
  87. 87
  88. 88
    Supporting information for " Meta-analysis challenges a textbook example of status signalling and demonstrates publication bias"
    1. A Sánchez-Tójar
    2. S Nakagawa
    3. M Sánchez-Fortún
    4. DA Martin
    5. S Ramani
    6. A Girndt
    7. V Bókony
    8. B Kempenaers
    9. A Liker
    10. DF Westneat
    11. T Burke
    12. J Schroeder
    (2018)
    Supporting information for " Meta-analysis challenges a textbook example of status signalling and demonstrates publication bias".
  89. 89
  90. 90
  91. 91
  92. 92
  93. 93
    The evolution of animal communication. Reliability and deception in signaling systems
    1. WA Searcy
    2. S Nowicki
    (2005)
    New Jersey: Princeton University Press.
  94. 94
  95. 95
    Color displays as intrasexual signals of aggression and dominance
    1. JC Senar
    (2006)
    In: G. E Hill, K. J McGraw, editors. Bird Coloration: Function and Evolution. London: Harvard University Press. pp. 87–136.
  96. 96
  97. 97
  98. 98
  99. 99
  100. 100
    Regression methods to detect publication and other bias in meta-analysis
    1. J Sterne
    2. M Egger
    (2005)
    In: H Rothstein, A Sutton, M Borenstein, editors. Publication Bias in Meta-Analysis2. Chichester: John Wiley. pp. 99–110.
    https://doi.org/10.1002/0470870168.ch6
  101. 101
  102. 102
  103. 103
  104. 104
    Assessing the evolution of effect sizes over time
    1. T Trikalinos
    2. JPA Ioannidis
    (2005)
    In: H Rothstein, A Sutton, M Borenstein, editors. Publication Bias in Meta-Analysis. Chichester: John Wiley. pp. 241–259.
    https://doi.org/10.1002/0470870168.ch13
    Evolution and maintenance of social status-signalling badges
    1. MJ Whiting
    2. KA Nagy
    3. PW Bateman
    (2003)
    In: S. F Fox, K McCoy, T. A Baird, editors. Lizard Social Behavior. Baltimore: Johns Hopkins University Press. pp. 47–82.

Decision letter

  1. Diethard Tautz
    Senior and Reviewing Editor; Max-Planck Institute for Evolutionary Biology, Germany
  2. Tim Parker
    Reviewer; Whitman College, United States

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

Thank you for submitting your article "Meta-analysis challenges a textbook example of status signalling and demonstrates publication bias" for consideration by eLife. Your article has been reviewed by two peer reviewers, and the evaluation has been overseen by Diethard Tautz as the Senior and Reviewing Editor.

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

This study updates a previous meta-analysis of the correlation between a plumage ornament (bib size) and dominance status in house sparrows, using meta-analysis of the primary data from a larger sample of published and unpublished studies. House sparrows have been considered an exemplar for the 'badge of status' hypothesis to explain the evolution of male ornaments, and so this is a particularly important analysis. The present analyses find a small mean effect size with 95% credible intervals overlapping zero, indicating there is no association between bib size and dominance status across studies. They also find that the mean effect size among published studies is significantly larger than among unpublished studies, and that the mean effect size declines over time, both suggesting potential publication biases.

These results are quite a striking refutation of a previously well-accepted hypothesis, and provide clear indication that a range of biases in the publication process may lend unwarranted support to many hypotheses circulating in the literature.

The study appears thorough and well implemented. Further, the authors show a commendable degree of transparency regarding their process.

However, there are a number of points that need attention and better description before the paper can be published.

Essential revisions:

1) How did the authors choose which relationships to include in their analyses? It is not adequately clear that the authors took sufficient steps to avoid bias in which effect sizes they chose to include. Thus, we recommend that they include more explanation of how they avoided this bias, or, if the risk of bias is plausible, then a re-analysis designed to avoid bias is required.

2) Please consider the following concerns about assessment of publication bias.

a) To what extent might your re-analysis of raw data have concealed publication bias? Were you using your re-analyzed data for the funnel plots and Egger regression? This was not clear from the methods.

b) On a related note, using meta-analytic residuals to test for publication bias assumes that none of the modeled variables correlate with publication bias. Is that reasonable in this case?

c) If publication bias towards strong effects were at work, we would expect to observe the strongest effects from those studies with greater sampling error (those with lower sampling effort). The observed absence of this result is what we would expect with little or weak publication bias. This should be acknowledged (though see points a) and b) above). In contrast, there is insufficient explanation of why the results in Figure 4 (the change in published effects over time) are strong evidence of publication bias.

3) Please state also the effect of using meta-analysis of the raw data, reanalysed in a consistent way, compared to using calculated effect sizes or summary statistics available in the published studies. While use of raw data does seem like a desirable standard to strive for in meta-analysis, it doesn't seem to make that big of a difference when comparing the results of 'published 1' and 'published 2' in Appendix—Table 6. Please comment whether you see this as a priority in a list of recommendations for best practice in meta-analysis (or best practice in the production of primary studies that can be effectively included in meta-analyses) or do the many other potential sources of bias have stronger effects on the confidence in/conclusions that can be drawn from meta-analytic estimates?

4) It is necessary to add a statement (and explanatory text – compare https://osf.io/hadz3/) to the paper confirming whether, for all questions, you have reported all measures, conditions, data exclusions, and how you determined sample sizes.

5) Please also address the following editorial point:

The Abstract contains a statement about "the validity of the current scientific publishing culture". Similar statements are made in the main text (Introduction, last paragraph, Discussion, first and last paragraphs), but the manuscript never goes into detail about these matters.

Please, therefore, delete the following passage from the Abstract: "raise important concerns about the validity of the current scientific publishing culture". Please also delete the corresponding statements in the last paragraph of the Introduction and the first paragraph of the Discussion. It is fine to keep the statement in the last paragraph of the Discussion.

https://doi.org/10.7554/eLife.37385.028

Author response

Essential revisions:

1) How did the authors choose which relationships to include in their analyses? It is not adequately clear that the authors took sufficient steps to avoid bias in which effect sizes they chose to include. Thus, we recommend that they include more explanation of how they avoided this bias, or, if the risk of bias is plausible, then a re-analysis designed to avoid bias is required.

We thank the reviewers for spotting this lack of transparency in our writing. We have now included explanations about how those orders of preference were determined (see Appendix 1, Appendix—tables 1-3).

To reiterate briefly: we took the necessary steps to avoid bias by deciding the order of preference prior to conducting any analysis. Indeed, we did not run additional analyses based on different orders of preference, and thus our study does not suffer from selective reporting of results.

For the methods shown in Appendix 1—tables 1 and 2, the order of preference was determined by how often the methods were used in previous studies. This decision was based on our attempt to standardize among-study methodology as much as possible.

For the methods shown in Appendix 1—table 3 (i.e. methodology to infer dominance rank), the order of preference was determined by both method performance and frequency of use. We divided the methods into two groups: (i) “linearity-based” (i.e. David’s score, I&SI, Landau’s and Kendall’s indices), and (ii) “proportion-based” methods. The first group contained methods that are either based on finding the hierarchy that best approaches linearity or that consider the strength of the opponents to infer individual success. Linearity-based methods are expected to outperform the proportion-based methods, which are based on simple proportions of contests won/lost per individual. Thus, we gave priority to the linearity-based methods, and ranked them using the results from Sánchez-Tójar et al. (2018; see Figure 5 of that paper). The methods from the second group were then ranked based on how often they were used in previous studies. Finally, in the analyses that involved primary data, the randomized Elo-rating was prioritized due to its higher performance (Sánchez-Tójar et al., 2018) and used to infer the dominance hierarchy of all studies for which primary data were available (i.e. 12 out of 19 studies).
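For readers unfamiliar with Elo-rating, a minimal sketch of its core update rule is given below. The randomized Elo-rating of Sánchez-Tójar et al. (2018) additionally randomizes the interaction order many times and averages the resulting scores; that step is omitted here, and the starting score (1000) and update constant k (100) are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of an Elo-rating update for inferring dominance
# hierarchies from sequences of won/lost interactions.

def elo_update(winner, loser, k=100):
    """Return updated (winner, loser) scores after one interaction."""
    # Expected probability that the current winner would win,
    # given the pre-interaction scores (standard logistic Elo form).
    p_win = 1 / (1 + 10 ** ((loser - winner) / 400))
    change = k * (1 - p_win)
    return winner + change, loser - change

def elo_ranks(contests, start=1000, k=100):
    """Rank individuals from a sequence of (winner_id, loser_id) contests."""
    scores = {}
    for w, l in contests:
        sw = scores.setdefault(w, start)
        sl = scores.setdefault(l, start)
        scores[w], scores[l] = elo_update(sw, sl, k)
    # Higher final score = higher inferred dominance rank
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: A dominates B and C; B dominates C
print(elo_ranks([("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]))  # → ['A', 'B', 'C']
```

Unexpected outcomes (a low-scoring individual beating a high-scoring one) move the scores more than expected ones, which is why Elo-based methods use the sequence of interactions rather than simple win proportions.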

In addition to the explanations added in Appendix 1—tables 1-3, we have also added the following statement at the end of the “Summary data extraction” subsection:

“In each case, the order of preference was determined prior to conducting any statistical analysis, and thus, method selection was blind to the outcome of the analyses (more details in Appendix 1).”

2) Please consider the following concerns about assessment of publication bias.

a) To what extent might your re-analysis of raw data have concealed publication bias? Were you using your re-analyzed data for the funnel plots and Egger regression? This was not clear from the methods.

We thank the reviewers for spotting this lack of clarity in our writing. We indeed explored publication bias for each meta-analysis by running an Egger’s regression and generated a funnel plot for each of the four meta-analyses in our manuscript (Table 2 and Appendix 2—table 6, Figure 2 and Appendix 2—figure 2). The (re-)analysed data were used in all but the meta-analysis called “published 1” (Appendix 2—table 6, Appendix 2—figures 1 and 2), which was based on the published original effect sizes only, and thus free from any potential concealment due to our (re-)analysis. The results of that meta-analysis agreed with those of the other meta-analyses, i.e. publication bias was neither apparent from visually inspecting funnel plots nor from the results of the Egger’s regressions. However, this is likely due to the difficulty of detecting publication bias when the number of effect sizes is limited and heterogeneity is present (see Moreno et al., 2009).

Overall, if (re-)analysing led to an increase in heterogeneity, detecting publication bias via funnel plots and Egger’s regressions could be more difficult. However, mean total heterogeneity (I2overall) did not increase when including (re-)analysed effect sizes (Appendix 2—table 6, “published 1” vs. “published 2”).

Importantly, our (re-)analysis was a necessary step to show and account for the real heterogeneity among effect sizes. We have clarified the methodology by writing “for each meta-analysis” at the end of the following sentences:

“First, we visually inspected asymmetry in funnel plots of meta-analytic residuals against the inverse of their precision (defined as the square root of the inverse of VZr) for each meta-analysis.”

“Second, we ran Egger’s regressions using the meta-analytic residuals as the response variable, and the precision (see above) as the moderator (Nakagawa and Santos, 2012) for each meta-analysis.”
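The logic of an Egger-style regression on meta-analytic residuals can be sketched in a few lines. This toy ordinary-least-squares version ignores the multilevel model and Bayesian credible intervals used in the actual analysis, and the data below are invented for illustration only: an intercept deviating from zero would suggest funnel-plot asymmetry.

```python
import math

def ols(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def egger_intercept(residuals, sampling_variances):
    """Intercept of residuals ~ precision, where precision = 1/sqrt(V_Zr).

    An intercept near zero gives no signal of funnel-plot asymmetry.
    """
    precision = [1 / math.sqrt(v) for v in sampling_variances]
    intercept, _slope = ols(precision, residuals)
    return intercept

# Symmetric toy funnel: residuals mirror around zero at each precision,
# so the intercept is (numerically) zero
print(egger_intercept([0.1, -0.1, 0.05, -0.05], [0.04, 0.04, 0.01, 0.01]))
```

As the text notes, with few effect sizes and moderate heterogeneity such a test has little power, which is why comparing published versus unpublished effect sizes provided a more sensitive check in this study.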

b) On a related note, using meta-analytic residuals to test for publication bias assumes that none of the modeled variables correlate with publication bias. Is that reasonable in this case?

We thank the reviewers for spotting a lack of clarity in our writing. To account for this comment, we added text (see response 2a above). The meta-analytic residuals used were those from the meta-analyses, which were intercept-only models in which no variables other than the random effects were modelled. Models that include moderators are named “meta-regressions”; we used that nomenclature throughout.

c) If publication bias towards strong effects were at work, we would expect to observe the strongest effects from those studies with greater sampling error (those with lower sampling effort). The observed absence of this result is what we would expect with little or weak publication bias. This should be acknowledged (though see points a) and b) above).

We would indeed expect to observe the strongest effects when precision is low, which would lead to the funnel shape observed in Figure 2 and Appendix 2—figure 2. What we would expect in case of strong publication bias is asymmetry in the funnel plots, which our analyses do not seem to support. However, as noted above, detecting publication bias by visual inspection of funnel plots and running Egger’s regressions is difficult when the number of effect sizes is limited and heterogeneity is present (Moreno et al., 2009). Since heterogeneity is typically high in ecological and evolutionary meta-analyses (Senior et al., 2016), it is challenging to conclude whether publication bias may have existed. In our study, however, we were able to circumvent that problem and detect the existence of publication bias by using an alternative approach, i.e. by comparing published vs. unpublished effect sizes, and testing for the existence of time-lag bias.

We have clarified the most likely reason why we did not detect publication bias using funnel plots inspection and Egger’s regression tests by expanding the explanation we already had in the Discussion:

“Egger’s regressions failed to detect any funnel plot asymmetry, even in the meta-analyses based on published effect sizes only (Appendix 2—table 6). However, because unpublished data indeed existed (i.e. those obtained for this study), the detection failure was likely the consequence of the limited number of effect sizes available (i.e. low power) and the moderate level of heterogeneity found in this study (Sterne and Egger, 2005; Moreno et al., 2009).”

In contrast, there is insufficient explanation of why the results in Figure 4 (the change in published effects over time) are strong evidence of publication bias.

We thank the reviewers for spotting a lack of clarity in our writing. We have now briefly explained in the text some of the processes that can lead to time-lag bias and referred the interested reader to an excellent review on the topic. Specifically, we have added the two following sentences to the manuscript.

“A decrease in effect size over time can have multiple causes. For example, initial effect sizes might be inflated due to low statistical power (“winner’s curse”) but published more easily and/or earlier due to positive selection of statistically significant results (reviewed by Koricheva, Jennions, and Lau, 2013).”

“An additional type of publication bias is time-lag bias, where early studies report larger effect sizes than later studies (Trikalinos and Ioannidis, 2005).”
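The time-lag check described above amounts to asking whether effect sizes decline with publication year. A toy illustration, with invented Zr values rather than data from the meta-analysis, is:

```python
# Toy time-lag (decline effect) check: regress effect sizes (Zr) on
# publication year; a negative slope is the pattern consistent with
# early, inflated effects being published first. All numbers are
# invented for illustration.

def slope(x, y):
    """Slope of the least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

years = [1985, 1990, 1995, 2000, 2005, 2010]
effects = [0.60, 0.45, 0.40, 0.25, 0.20, 0.10]  # invented Zr values

b = slope(years, effects)
print(b < 0)  # negative slope: later studies report smaller effects
```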

3) Please state also the effect of using meta-analysis of the raw data, reanalysed in a consistent way, compared to using calculated effect sizes or summary statistics available in the published studies. While use of raw data does seem like a desirable standard to strive for in meta-analysis, it doesn't seem to make that big of a difference when comparing the results of 'published 1' and 'published 2' in Appendix—Table 6. Please comment whether you see this as a priority in a list of recommendations for best practice in meta-analysis (or best practice in the production of primary studies that can be effectively included in meta-analyses) or do the many other potential sources of bias have stronger effects on the confidence in/conclusions that can be drawn from meta-analytic estimates?

We thank the reviewers for these comments and suggestions. Theoretically, one of the most appealing features of a meta-analysis based on primary data is that, by analysing all the data in a consistent manner, effect sizes of all the studies are comparable (reviewed by Mengersen et al., 2013). From our analyses it is, however, difficult to conclude whether meta-analyses based on primary data should be the preferred option. This is because our analyses were not designed to specifically test for differences between the two approaches. The main impediment was that we did not have access to the primary data of around half of the published studies (data available for 7 out of 13 studies), and therefore, there is still a substantial overlap between the meta-analyses “published 1” and “published 2” that might partly explain why the results of both meta-analyses did not differ much (Appendix 2—table 6). Nevertheless, “published 2” estimated heterogeneity more precisely (i.e. narrower 95% CrI) than “published 1”.

Lastly, we have no reason to believe that standardizing all effect sizes by re-analysing the primary data should lead to bias in the conclusions, but rather the opposite (see the response to the comment 2a above; see also a recent review about open data meta-analysis: Culina et al., 2018).

We reference two reviews about the topic in our Introduction (Simmonds et al., 2005; Mengersen et al., 2013) and we have now added a reference for a recent call to increase the use of meta-analysis of open datasets (Culina et al., 2018) (Introduction, third paragraph). Those three references provide strong support for the assertion that meta-analysis of primary data should be considered the gold standard.

4) It is necessary to add a statement (and explanatory text – compare https://osf.io/hadz3/) to the paper confirming whether, for all questions, you have reported all measures, conditions, data exclusions, and how you determined sample sizes.

We thank the reviewers for spotting this lack of transparency. We have now included the following sentence at the end of the Materials and methods section:

“We report all data exclusion criteria applied and the results of all analyses conducted in our study.”

See also our response to comment 1.

5) Please also address the following editorial point:

The Abstract contains a statement about "the validity of the current scientific publishing culture". Similar statements are made in the main text (Introduction, last paragraph, Discussion, first and last paragraphs), but the manuscript never goes into detail about these matters.

Please, therefore, delete the following passage from the Abstract: "raise important concerns about the validity of the current scientific publishing culture". Please also delete the corresponding statements in the last paragraph of the Introduction and the first paragraph of the Discussion. It is fine to keep the statement in the last paragraph of the Discussion.

We thank the editors for these suggestions, which we have now implemented.

https://doi.org/10.7554/eLife.37385.029

Article and author information

Author details

  1. Alfredo Sánchez-Tójar

    1. Evolutionary Biology Group, Max Planck Institute for Ornithology, Seewiesen, Germany
    2. Department of Life Sciences, Imperial College London, Ascot, United Kingdom
    Present address
    Department of Evolutionary Biology, Bielefeld University, Bielefeld, Germany
    Contribution
    Conceptualization, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing—original draft, Project administration, Writing—review and editing
    For correspondence
    alfredo.tojar@gmail.com
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-2886-0649
  2. Shinichi Nakagawa

    School of Biological, Earth and Environmental Sciences, University of New South Wales, Sydney, Australia
    Contribution
    Conceptualization, Software, Supervision, Writing—review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-7765-5182
  3. Moisès Sánchez-Fortún

    1. Evolutionary Biology Group, Max Planck Institute for Ornithology, Seewiesen, Germany
    2. Department of Animal and Plant Sciences, University of Sheffield, Sheffield, United Kingdom
    Present address
    Department de Biologia Evolutiva, Ecologia i Ciències Ambientals, University of Barcelona, Barcelona, Spain
    Contribution
    Investigation, Writing—review and editing
    Competing interests
    No competing interests declared
  4. Dominic A Martin

    Department of Life Sciences, Imperial College London, Ascot, United Kingdom
    Present address
    Biodiversity, Macroecology and Biogeography, University of Goettingen, Goettingen, Germany
    Contribution
    Investigation, Writing—review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-7197-2278
  5. Sukanya Ramani

    1. Evolutionary Biology Group, Max Planck Institute for Ornithology, Seewiesen, Germany
    2. Department of Animal Behaviour, Bielefeld University, Bielefeld, Germany
    Contribution
    Investigation, Writing—review and editing
    Competing interests
    No competing interests declared
  6. Antje Girndt

    1. Evolutionary Biology Group, Max Planck Institute for Ornithology, Seewiesen, Germany
    2. Department of Life Sciences, Imperial College London, Ascot, United Kingdom
    Contribution
    Investigation, Writing—review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-9558-1201
  7. Veronika Bókony

    Lendület Evolutionary Ecology Research Group, Plant Protection Institute, Centre for Agricultural Research, Hungarian Academy of Sciences, Budapest, Hungary
    Contribution
    Investigation, Writing—review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-2136-5346
  8. Bart Kempenaers

    Department of Behavioural Ecology and Evolutionary Genetics, Max Planck Institute for Ornithology, Seewiesen, Germany
    Contribution
    Investigation, Writing—review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-7505-5458
  9. András Liker

    MTA-PE Evolutionary Ecology Research Group, University of Pannonia, Veszprém, Hungary
    Contribution
    Investigation, Writing—review and editing
    Competing interests
    No competing interests declared
  10. David F Westneat

    Department of Biology, University of Kentucky, Lexington, United States
    Contribution
    Investigation, Writing—review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-5163-8096
  11. Terry Burke

    Department of Animal and Plant Sciences, University of Sheffield, Sheffield, United Kingdom
    Contribution
    Conceptualization, Supervision, Funding acquisition, Writing—review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-3848-1244
  12. Julia Schroeder

    1. Evolutionary Biology Group, Max Planck Institute for Ornithology, Seewiesen, Germany
    2. Department of Life Sciences, Imperial College London, Ascot, United Kingdom
    Contribution
    Conceptualization, Supervision, Funding acquisition, Writing—review and editing
    Competing interests
    No competing interests declared

Funding

Max-Planck-Gesellschaft (Open-access funding)

  • Alfredo Sánchez-Tójar

Max-Planck-Gesellschaft (Funding captive house sparrow population)

  • Bart Kempenaers

National Science Foundation

  • David F Westneat

Natural Environment Research Council (NE/N013832/1)

  • Terry Burke

Volkswagen Foundation

  • Julia Schroeder

H2020 Marie Skłodowska-Curie Actions (CIG PCIG12-GA-2012-333096)

  • Julia Schroeder

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

AST and AG are grateful for the support of the International Max Planck Research School (IMPRS) for Organismal Biology. We thank Katherine L Buchanan, Sanh K Diep, Fabrice Helfenstein, Anna Kulcsár, Ádám Z Lendvai, Karin M Lindström, Thor Harald Ringsby, Alfonso Rojas Mora, Bernt-Erik Sæther, Emmi Schlicht, Erling J Solberg, Zoltán Tóth and Jarle Tufto for providing the primary data of published and unpublished studies. We thank Wolfgang Forstmeier, Lucy Winder, and Tim Parker and an anonymous reviewer for constructive feedback on the manuscript.

Senior and Reviewing Editor

  1. Diethard Tautz, Max-Planck Institute for Evolutionary Biology, Germany

Reviewer

  1. Tim Parker, Whitman College, United States

Publication history

  1. Received: April 9, 2018
  2. Accepted: October 11, 2018
  3. Version of Record published: November 13, 2018 (version 1)

Copyright

© 2018, Sánchez-Tójar et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


