Statistics: Sex difference analyses under scrutiny

A survey reveals that many researchers do not use appropriate statistical analyses to evaluate sex differences in biomedical research.
  1. Colby J Vorland (corresponding author)
  1. Department of Applied Health Science, Indiana University School of Public Health, United States

Scientific research requires appropriate methods and statistical analyses; otherwise, results and their interpretation can be flawed. How research outcomes differ by sex, for example, has historically been understudied, and only recently have policies been implemented requiring that sex be considered in the design of a study (e.g., NIH, 2015).

Over two decades ago, the renowned biomedical statistician Doug Altman labeled methodological weaknesses in research a “scandal”, raising awareness of shortcomings in the representativeness of research as well as in research design and statistical analysis (Altman, 1994). These methodological weaknesses extend to research on sex differences: simply adding female cells, animals, or participants to experiments does not guarantee an improved understanding of sex differences. Rather, experiments must also be designed correctly and analyzed appropriately to examine such differences. While guidance exists for the proper analysis of sex differences (e.g., Beltz et al., 2019), how frequently published research articles commit errors in this area has not been well characterized.

Now, in eLife, Yesenia Garcia-Sifuentes and Donna Maney of Emory University fill this gap by surveying the literature to examine whether the statistical analyses used in different research articles are appropriate to support conclusions of sex differences (Garcia-Sifuentes and Maney, 2021). Drawing from a previous study that surveyed articles studying mammals from nine biological disciplines, Garcia-Sifuentes and Maney sampled 147 articles that included both males and females and performed an analysis by sex (Woitowich et al., 2020).

Over half of the articles surveyed (83, or 56%) reported a sex difference. Garcia-Sifuentes and Maney examined the statistical methods used to analyze sex differences and found that over a quarter (24 out of 83) of these articles did not perform or report a statistical analysis supporting the claim of a sex difference. A factorial design with sex as a factor is an appropriate way to examine sex differences in response to treatment: each sex receives each treatment option (such as a treatment or control diet; see Figure 1A). Nearly two thirds of all articles (92, or 63%) used a factorial design. Of the articles using a factorial design, however, less than one third (27 out of 92) applied and reported a method appropriate to test for sex differences (e.g., testing for an interaction between sex and the exposure, such as different diets; Figure 1B). Similarly, of the articles that used a factorial design and concluded there was a sex-specific effect, less than one third (16 out of 53) used an appropriate analysis.

Considering sex differences in experimental design.

(A) A so-called factorial design permits testing of sex differences. For example, both female (yellow boxes) and male mice (blue boxes) are fed either a treatment diet (green pellets) or a control diet (orange pellets). Garcia-Sifuentes and Maney found that 63% of articles employed a factorial design in at least one experiment with sex as a factor. (B) An appropriate way to statistically test for sex differences is with a two-way analysis of variance (ANOVA). If a statistically significant interaction is observed between sex and treatment, as shown in the figure, this supports evidence for a sex difference. Garcia-Sifuentes and Maney found that of the studies using a factorial design, less than one third tested for an interaction between sex and treatment. (C) Performing a statistical test between the treatment and control groups within each sex, and then comparing the nominal statistical significance of the two tests, is not a valid method to look for sex differences. Yet this method was used in nearly half of the articles that used a factorial design and concluded a sex-specific effect.

Notably, nearly half of the articles (24 out of 53) that concluded a sex-specific effect statistically tested the effect of treatment within each sex and compared the resulting statistical significance. In other words, when one sex had a statistically significant change and the other did not, the authors of the original studies concluded that a sex difference existed. This approach, sometimes called the ‘difference in nominal significance’, or ‘DINS’, error (George et al., 2016), is invalid and has been documented for decades across several disciplines, including neuroscience (Nieuwenhuis et al., 2011), obesity and nutrition (Bland and Altman, 2015; George et al., 2016; Vorland et al., 2021), and more general fields (Gelman and Stern, 2006; Makin, 2019; Matthews and Altman, 1996; Sainani, 2010; Figure 1C).

This approach is invalid because testing within each sex separately inflates the probability of falsely concluding that a sex-specific effect is present, compared with testing the sexes against each other directly. Other inappropriate analyses identified in the survey included testing for a sex difference within the treatment group while ignoring control animals; not reporting results after claiming to have performed an appropriate analysis; and claiming an effect when the appropriate analysis was not statistically significant, despite subscribing to null hypothesis significance testing. Finally, about half of the articles that pooled data from males and females in their analysis did not first test for a sex difference, potentially masking important differences.
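A small simulation can illustrate why the DINS approach inflates false conclusions. In this hypothetical sketch (sample sizes, effect size, and number of repetitions are all assumptions), both sexes have exactly the same true treatment effect, yet comparing within-sex significance still "finds" a sex-specific effect far more often than the nominal 5% rate:

```python
# Hypothetical simulation of the DINS error: with no true sex
# difference, one sex often reaches p < 0.05 while the other does not,
# simply because each within-sex test has limited power.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 15        # animals per group (assumed)
reps = 2000   # simulated experiments
alpha = 0.05

dins_claims = 0
for _ in range(reps):
    # Identical true treatment effect (0.8 SD) in both sexes
    f_ctrl, f_trt = rng.normal(0, 1, n), rng.normal(0.8, 1, n)
    m_ctrl, m_trt = rng.normal(0, 1, n), rng.normal(0.8, 1, n)
    p_female = stats.ttest_ind(f_trt, f_ctrl).pvalue
    p_male = stats.ttest_ind(m_trt, m_ctrl).pvalue
    # DINS logic: one sex significant, the other not -> "sex difference"
    if (p_female < alpha) != (p_male < alpha):
        dins_claims += 1

print(dins_claims / reps)  # typically far above the nominal 5% rate
```

Under these assumptions the discordance rate is driven by the power of each within-sex test, not by any real sex difference, which is why testing the interaction directly (as in Figure 1B) is the valid approach.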

The results of Garcia-Sifuentes and Maney highlight the need for thoughtful planning of study design, analysis, and communication to maximize our understanding and use of biological sex differences in practice. Although the survey does not quantify how much of this research reaches incorrect conclusions through inappropriate statistical methods (which would require estimation procedures or reanalysis of the data), the conclusions of many of these studies might change if the data were analyzed correctly. Misleading results divert attention and resources, contributing to the larger problem of ‘waste’ in biomedical research: the avoidable cost of research that does not advance our understanding of what is true because it is flawed, methodologically weak, or poorly communicated (Glasziou and Chalmers, 2018).

What can the scientific enterprise do about this problem? The survey suggests that practices for designing, analyzing, and reporting sex difference research vary widely across disciplines. Although larger surveys are needed to assess these practices more comprehensively, the findings imply that education and support could be targeted where they are most needed. Requiring scientists to share their data publicly can facilitate reanalysis when statistical errors are discovered, though the burden on researchers performing the reanalysis is not trivial. Partnering with statisticians in the design, analysis, and interpretation of research is perhaps the most effective means of prevention.

Scientific research often does not reflect the diversity of those who stand to benefit from it. Even when it does, inappropriate methods undermine progress toward equity. Surely this is nothing less than a scandal.


Article and author information

Author details

  1. Colby J Vorland

    Colby J Vorland is at the Department of Applied Health Science, Indiana University School of Public Health, Bloomington, United States

    Competing interests
    No competing interests declared
    ORCID: 0000-0003-4225-372X

Publication history

  1. Version of Record published: November 2, 2021 (version 1)


© 2021, Vorland

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.




Cite this article

Colby J Vorland (2021) Statistics: Sex difference analyses under scrutiny. eLife 10:e74135.
