Statistics: Sex difference analyses under scrutiny

A survey reveals that many researchers do not use appropriate statistical analyses to evaluate sex differences in biomedical research.
  1. Colby J Vorland (corresponding author), Department of Applied Health Science, Indiana University School of Public Health, United States

Scientific research requires appropriate methods and statistical analyses; otherwise, results and interpretations can be flawed. How research outcomes differ by sex, for example, has historically been understudied, and only recently have policies been implemented that require sex to be considered in the design of a study (e.g., NIH, 2015).

Over two decades ago, the renowned biomedical statistician Doug Altman labeled methodological weaknesses in research a “scandal”, raising awareness of shortcomings related to the representativeness of research as well as inappropriate research designs and statistical analyses (Altman, 1994). These weaknesses extend to research on sex differences: simply adding female cells, animals, or participants to experiments does not guarantee an improved understanding of how the sexes differ. Rather, the experiments must also be designed and analyzed appropriately to examine such differences. And while guidance exists for the proper analysis of sex differences (e.g., Beltz et al., 2019), how frequently errors occur in published research on this topic has not been well characterized.

Now, in eLife, Yesenia Garcia-Sifuentes and Donna Maney of Emory University fill this gap by surveying the literature to examine whether the statistical analyses used in different research articles are appropriate to support conclusions of sex differences (Garcia-Sifuentes and Maney, 2021). Drawing from a previous study that surveyed articles studying mammals from nine biological disciplines, Garcia-Sifuentes and Maney sampled 147 articles that included both males and females and performed an analysis by sex (Woitowich et al., 2020).

Over half of the articles surveyed (83, or 56%) reported a sex difference. Garcia-Sifuentes and Maney examined the statistical methods used to analyze sex differences and found that over a quarter of these articles (24 out of 83) did not perform or report a statistical analysis supporting the claim of a sex difference. A factorial design with sex as a factor, in which each treatment (such as a treatment or control diet) is given to both sexes, is an appropriate way to examine sex differences in response to treatment (see Figure 1A). A majority of all articles (92, or 63%) used such a design. Within the articles using a factorial design, however, fewer than one third (27) applied and reported a method appropriate for testing sex differences, such as testing for an interaction between sex and the exposure (e.g., different diets; Figure 1B). Similarly, within articles that used a factorial design and concluded a sex-specific effect, fewer than one third (16 out of 53) used an appropriate analysis.

Figure 1. Considering sex differences in experimental design.

(A) A so-called factorial design permits testing of sex differences. For example, both female (yellow boxes) and male mice (blue boxes) are fed either a treatment diet (green pellets) or a control diet (orange pellets). Garcia-Sifuentes and Maney found that 63% of articles employed a factorial design in at least one experiment with sex as a factor. (B) An appropriate way to statistically test for sex differences is with a two-way analysis of variance (ANOVA). If a statistically significant interaction is observed between sex and treatment, as shown in the figure, this supports a sex difference in the response to treatment. Garcia-Sifuentes and Maney found that in studies using a factorial design, fewer than one third tested for an interaction between sex and treatment. (C) Performing a statistical test between the treatment and control groups within each sex, and comparing the nominal statistical significance, is not a valid method to look for sex differences. Yet this method was used in nearly half of the articles that used a factorial design and concluded a sex-specific effect.
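To make the interaction test in panel B concrete, the sketch below works through a two-way ANOVA by hand for a balanced 2 × 2 design. All measurements are invented for illustration (a real analysis would typically use a statistics package); writing out the sums of squares makes the logic of the test explicit.

```python
import numpy as np
from scipy import stats

# Hypothetical measurements for a balanced 2 x 2 factorial design:
# sex (F/M) x diet (control/treatment). All values are invented.
data = {
    ("F", "control"):   np.array([2.1, 1.8, 2.4, 2.0, 1.9]),
    ("F", "treatment"): np.array([3.6, 3.9, 3.3, 3.8, 3.5]),
    ("M", "control"):   np.array([2.2, 2.0, 2.3, 1.7, 2.1]),
    ("M", "treatment"): np.array([2.4, 2.6, 2.1, 2.5, 2.3]),
}
n = 5  # animals per cell
sexes, diets = ("F", "M"), ("control", "treatment")

grand = np.concatenate(list(data.values())).mean()
cell = {k: v.mean() for k, v in data.items()}
sex_mean = {s: np.mean([cell[(s, d)] for d in diets]) for s in sexes}
diet_mean = {d: np.mean([cell[(s, d)] for s in sexes]) for d in diets}

# Interaction sum of squares: cell-mean deviations left over after
# removing both main effects (1 degree of freedom in a 2 x 2 design).
ss_int = n * sum(
    (cell[(s, d)] - sex_mean[s] - diet_mean[d] + grand) ** 2
    for s in sexes for d in diets
)
ss_err = sum(((v - cell[k]) ** 2).sum() for k, v in data.items())
df_err = 4 * (n - 1)

F = ss_int / (ss_err / df_err)
p = stats.f.sf(F, 1, df_err)
print(f"sex x diet interaction: F(1, {df_err}) = {F:.1f}, p = {p:.4f}")
```

In these invented data the treatment raises the outcome in females but barely in males, so the interaction term, not the two separate within-sex comparisons, is what licenses the claim of a sex difference.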

Notably, nearly half of the articles (24 out of 53) that concluded a sex-specific effect statistically tested the effect of treatment within each sex and compared the resulting statistical significance. In other words, when one sex had a statistically significant change and the other did not, the authors of the original studies concluded that a sex difference existed. This approach, sometimes called the ‘differences in nominal significance’, or ‘DINS’, error (George et al., 2016), is invalid and has been documented for decades across several disciplines, including neuroscience (Nieuwenhuis et al., 2011), obesity and nutrition research (Bland and Altman, 2015; George et al., 2016; Vorland et al., 2021), and the biomedical literature more broadly (Gelman and Stern, 2006; Makin, 2019; Matthews and Altman, 1996; Sainani, 2010; Figure 1C).

This approach is invalid because testing within each sex separately inflates the probability of falsely concluding that a sex-specific effect is present, compared with testing the difference between the sexes directly. Other inappropriate analyses identified in the survey included testing for a sex difference within the treatment group while ignoring the control animals; claiming to have performed an appropriate analysis but not reporting its results; and claiming an effect when the appropriate analysis was not statistically significant, despite subscribing to null hypothesis significance testing. Finally, when articles pooled data from males and females in their analysis, about half did not first test for a sex difference, potentially masking important differences.
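The inflation is easy to demonstrate by simulation. In the sketch below (all parameters hypothetical), both sexes receive exactly the same true treatment effect, so any claimed sex difference is a false positive; the within-sex ‘compare the p-values’ approach flags one far more often than a direct test of the sex × treatment interaction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ALPHA, N = 0.05, 10  # significance level; per-group sample size (hypothetical)

def one_experiment(effect=1.0):
    # Identical true treatment effect in both sexes: no real sex difference.
    f_ctrl, f_trt = rng.normal(0, 1, N), rng.normal(effect, 1, N)
    m_ctrl, m_trt = rng.normal(0, 1, N), rng.normal(effect, 1, N)

    # DINS approach: separate within-sex tests, then compare significance.
    p_f = stats.ttest_ind(f_trt, f_ctrl).pvalue
    p_m = stats.ttest_ind(m_trt, m_ctrl).pvalue
    dins = (p_f < ALPHA) != (p_m < ALPHA)  # "significant in one sex only"

    # Direct test: sex x treatment interaction, i.e. a pooled-variance
    # t test on the difference of the two treatment effects.
    diff = (f_trt.mean() - f_ctrl.mean()) - (m_trt.mean() - m_ctrl.mean())
    groups = (f_ctrl, f_trt, m_ctrl, m_trt)
    sp2 = sum(((g - g.mean()) ** 2).sum() for g in groups) / (4 * N - 4)
    t = diff / np.sqrt(sp2 * 4 / N)
    interaction = 2 * stats.t.sf(abs(t), df=4 * N - 4) < ALPHA
    return dins, interaction

trials = [one_experiment() for _ in range(2000)]
dins_rate = np.mean([d for d, _ in trials])
int_rate = np.mean([i for _, i in trials])
print(f"DINS false-positive rate:        {dins_rate:.2f}")
print(f"interaction false-positive rate: {int_rate:.2f}")
```

With a moderate effect and small groups, each within-sex test has middling power, so roughly half the time one test clears the 0.05 threshold while the other misses it, and DINS declares a sex difference that does not exist; the interaction test stays near the nominal 5% error rate.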

The results of Garcia-Sifuentes and Maney highlight the need for thoughtful planning of study design, analysis, and communication to maximize our understanding and use of biological sex differences in practice. Although the survey does not quantify what proportion of this research comes to incorrect conclusions from using inappropriate statistical methods, which would require estimation procedures or reanalysis of the data, the conclusions of many of these studies might change if the data were analyzed correctly. Misleading results divert attention and resources, contributing to the larger problem of ‘waste’ in biomedical research: the avoidable cost of research that does not contribute to our understanding of what is true because it is flawed, methodologically weak, or poorly communicated (Glasziou and Chalmers, 2018).

What can the scientific enterprise do about this problem? The survey suggests that practices for designing, analyzing, and reporting sex difference research may vary considerably between disciplines. Although larger surveys are needed to assess this more comprehensively, such variability implies that education and support efforts could be targeted where they are most needed. Requiring scientists to share their data publicly can facilitate reanalysis when statistical errors are discovered – though the burden on researchers performing the reanalysis is not trivial. Partnering with statisticians in the design, analysis, and interpretation of research is perhaps the most effective means of prevention.

Scientific research often does not reflect the diversity of those who benefit from it. Even when it does, using methods that are inappropriate fails to support the progress toward equity. Surely this is nothing less than a scandal.

Article and author information

Author details

  1. Colby J Vorland

    Colby J Vorland is at the Department of Applied Health Science, Indiana University School of Public Health, Bloomington, United States

    For correspondence: cvorland@iu.edu
    Competing interests: No competing interests declared
    ORCID: 0000-0003-4225-372X

Publication history

  1. Version of Record published: November 2, 2021 (version 1)

Copyright

© 2021, Vorland

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

Colby J Vorland (2021) Statistics: Sex difference analyses under scrutiny. eLife 10:e74135. https://doi.org/10.7554/eLife.74135
