Science Forum: Imaging methods are vastly underreported in biomedical research

  1. Guillermo Marqués (corresponding author)
  2. Thomas Pengo
  3. Mark A Sanders
  1. University Imaging Centers and Department of Neuroscience, University of Minnesota, United States
  2. University of Minnesota Informatics Institute, University of Minnesota, United States

Abstract

A variety of microscopy techniques are used by researchers in the life and biomedical sciences. As these techniques become more powerful and more complex, it is vital that scientific articles containing images obtained with advanced microscopes include full details about how each image was obtained. To explore the reporting of such details we examined 240 original research articles published in eight journals. We found that the quality of reporting was poor, with some articles containing no information about how images were obtained, and many articles lacking important basic details. Efforts by researchers, funding agencies, journals, equipment manufacturers and staff at shared imaging facilities are required to improve the reporting of experiments that rely on microscopy techniques.

Introduction

Over the past three centuries microscopy has evolved from being largely descriptive and qualitative to become a powerful tool that is capable of uncovering new phenomena and exploring molecular mechanisms in a way that is both visual and quantitative (Trinh and Fraser, 2015). The optical microscope has also been joined by a wide range of other imaging instruments, and images and data derived from them are crucial to many studies across the life and biomedical sciences.

We work at a major imaging facility (https://med.umn.edu/uic) and are often asked to replicate or expand upon published experiments. However, these experiments are often poorly described, sometimes to the extent that it is not possible to repeat them. Such problems are not limited to microscopy, and concerns about a lack of reproducibility in certain areas of biomedical research have been growing over the past decade (Ioannidis, 2005; Begley and Ellis, 2012; Baker, 2016; Drucker, 2016). Causes for concern have included: the substandard characterization of critical resources and reagents, such as antibodies (Freedman et al., 2016; Schüchner et al., 2020) and cell lines (Vaughan et al., 2017); incomplete reporting of experimental methods and reagents (Lithgow et al., 2017); bias (Macleod et al., 2015); inadequate statistics (Benjamin et al., 2018); and outright fraud (Bauchner et al., 2018).

There have been many efforts to address these problems, notably in the area of antibodies and other reagents. As regards incomplete reporting, a number of publishers and funding agencies have signed up to the TOP (Transparency and Openness Promotion) guidelines developed by the Center for Open Science (Nosek et al., 2015): signatories to these guidelines commit to promote and enforce good practices of attribution, reporting, data archival, and sharing of research tools (Sullivan et al., 2019). To that end, some publishers have established checklists that authors must complete (see, for example, Development, 2020; eLife, 2019; Marcus and the whole Cell team, 2016; NPG, 2020), and there is evidence from some areas that these interventions are having a positive effect (Macleod and the NPQIP Collaboration Group, 2017; Han et al., 2017; NPG, 2018). In this article we highlight the need for improved reporting of experiments that involve microscopy.

Results

To explore the extent and severity of this problem, we examined 240 original research articles in eight journals: Developmental Biology, Development, Developmental Cell, Journal of Cell Biology, Journal of Neuroscience, Nature Immunology, Journal of Immunology, and Biophysical Journal. Just over three-quarters of the papers (185/240 = 77%) had original images, and just over half of the figures in the papers (1439/2780 = 52%) contained images. Most of the images had been acquired by a microscope of some sort, and confocal fluorescence microscopy was the most popular technique: Supplementary file 1 lists the different imaging techniques used in each of the 185 articles. It should be noted that western blots and similar figures were not considered as images for the purposes of this study (see Materials and methods for details).

Articles about developmental biology and cell biology contained the highest proportion of images, whereas articles about immunology had the lowest (Table 1). While the number of figures in an article is a coarse metric that does not address how critical the information provided in a figure is for the conclusions reported in the article, it is an objective and quantifiable metric. It is also important to note that many of the articles contained supplemental videos, further stressing the importance of imaging in biomedical research.

Table 1
Evaluation of the reporting of imaging methods in biomedical journals.

The first column lists journal name, number of articles with images, number of articles evaluated, and the percentage of articles with images. The second column lists the percentage of figures (main and supplemental) that contain original images or quantification of imaging data. The third column lists the percentage of text in the materials and methods sections devoted to imaging (for the 185 articles that contained images). The fourth column lists the percentage of the articles containing images that pass the methods quality test (see Materials and methods for details of this test). Total developmental biology includes three journals (Dev. Biol., Development, and Dev. Cell); total immunology includes two journals (Nature Immunology and J. Immunology). * Five articles containing MRI and X-ray images were not included in the quality evaluation, so the sample for this analysis is 180 papers. Supplementary file 1 contains a list of all the articles analyzed and details for each article.

Journal (articles with imaging/total articles, %) | Imaging figures (%) | Imaging methods (%) | Pass methods quality (%)
Developmental Biology (29/30, 97%) | 79 | 4.2 | 3.4
Development (28/28, 100%) | 75 | 7.0 | 14.3
Developmental Cell (32/32, 100%) | 69 | 4.8 | 9.4
J Cell Biology (29/30, 97%) | 72 | 10.1 | 37.9
Nature Immunology (18/29, 62%) | 22 | 5.5 | 11.1
J Immunology (17/31, 55%) | 21 | 2.3 | 5.9
J Neuroscience (18/30, 60%) | 37 | 7.8 | 7.1
Biophysical Journal (14/30, 47%) | 28 | 10.2 | 50.0
Total developmental biology (89/90, 99%) | 74 | 5.2 | 9.0
Total immunology (35/60, 58%) | 21 | 4.6 | 8.6
Total (185/240, 77%) | 52 | 6.7 | 16.7*

Methods for imaging experiments are described briefly, if at all

Table 1 also shows the fraction of the materials and methods section that was devoted to imaging in the 185 articles that contained images. On average just 138 words (7% of the total text in the materials and methods section) were used to describe the details of image acquisition, which seems low given the extent of the imaging results reported, and the fraction for the three developmental biology journals was even lower (5%), despite the high numbers of images in these journals. Moreover, the fractions of text devoted to imaging are overestimates, as it was sometimes necessary to include the description of sample preparation in the word count for image acquisition. Somewhat alarmingly, 11 articles (with a total of 56 figures with images) contained no information whatsoever on image acquisition. Setting this group aside, it is possible that an imaging technique can be adequately described in fewer words than, say, a technique in biochemistry, genetics or molecular biology. However, regardless of the word count, it was apparent that many of the articles in our sample did not contain enough information about imaging experiments to allow these experiments to be repeated.

Few articles contain the information required to replicate the imaging experiments

We also assessed the reporting of three crucial aspects of image acquisition: i) the characteristics of the objective lens used for imaging, a critical determinant of magnification and optical resolution; ii) the digitization parameters that determine image resolution (image voxel size); iii) the spectral settings for fluorescence imaging that allow efficient signal acquisition and channel discrimination. A combined pass/fail score was then assigned (see 'Imaging materials and methods quality evaluation' in Materials and methods). Table 1 shows that the overall quality of the information provided is very poor, with less than one in five articles (16.7%) passing the test: the pass rate varied from 3% (for Developmental Biology) to 50% (for the Biophysical Journal).

It must be stressed that our quality test was of very low stringency. The information required to pass was the bare minimum to evaluate and replicate the image, and should not be considered the standard of care. Several proposals over the years have addressed the biological and experimental information that should be collected and reported in the metadata of imaging files (Huisman et al., 2019; Linkert et al., 2010; Swedlow et al., 2003) and more are in development (see, for example, www.doryworkspace.org). These approaches are comprehensive, and extremely valuable for data mining and biological analysis. Supplementary file 2 is a checklist with the minimal reportable parameters for the two most common types of imaging experiments in our dataset, wide-field fluorescence microscopy and laser-scanning fluorescence confocal microscopy: this checklist is concerned solely with image acquisition parameters, and must be seen as the minimum reporting guideline for publication. Full imaging metadata reporting requirements that are comprehensive, authoritative, and consensual await development and acceptance by appropriate parties (see 'What to do about it?' below).

It is worth noting that all the examined journals state in their instructions to authors that enough information must be provided to allow critical evaluation and replication of the results. Assessment of the suitability of other segments of the materials and methods section in these publications is beyond the scope of our study. However, spot-checks suggest a much more careful approach to the reporting of molecular biology experiments, with extensive tables of oligonucleotides and antibodies and detailed experimental conditions.

Reporting of sample preparation methods has improved, but more work is needed

One noticeable improvement brought about by the implementation of the new reporting requirements by some journals is the detailed description of antibodies and their sources. This is a critical aspect of sample preparation and reproducibility in immunofluorescence studies, but by no means the only one. Tissue harvesting and fixation and permeabilization conditions affect sample integrity (Schnell et al., 2012). Probing and washing steps and the nature of the mounting/imaging medium critically influence the quality of the images obtained (Boothe et al., 2017; Fouquet et al., 2015). Quantifying the extent to which these parameters are reported goes beyond the scope of the present study, but we did look at sample preparation for electron microscopy (EM) images because minor differences in sample processing can result in major differences in tissue ultrastructure that are harder to notice by optical methods. We found that only 4 of the 14 papers with EM images included sufficient detail to allow sample preparation to be replicated. The issues ranged from giving no details at all to citing inappropriate references or omitting important method details such as durations, concentrations and pH. The complete reporting of sample preparation methods in optical microscopy is equally critical, particularly as optical 'super resolution' techniques begin to bridge the gap between optical and electron microscopy (Gwosch et al., 2020; Pereira et al., 2019).

While reporting of sample preparation details has improved, the adoption of STAR Methods by Developmental Cell in 2017 has not resulted in adequate reporting of image acquisition details. In our dataset only 9% of the Dev. Cell articles provided enough experimental information to attempt replication of the imaging experiment. Similarly, despite Nature Immunology using the Nature Research Life Sciences Reporting Summary, only 11% of the articles passed the test (Table 1). It appears these new reporting requirements do not cover imaging appropriately.

Image processing and analysis are rarely described in detail

A final observation is that many articles contained little or no information about the procedures used for image processing, analysis, or quantification. We have not performed a quantitative analysis of this area because of the difficulty in creating a scoring matrix for a widely variable set of analysis procedures. The critical issue is that identical images can be processed in multiple ways, and many different algorithms can be used for segmentation, so the resulting quantification can be different (Botvinik-Nezer et al., 2020). Without knowing the processing steps the image went through between acquisition and quantification, it is not possible to replicate the quantitative analysis. Proper reporting of image analysis requires a detailed description of the ways the image has been processed and the parameters used, followed by details of segmentation and quantification. It is imperative that researchers keep track of the steps and report them fully and accurately in their publications: see Hofbauer et al., 2018 for a good example of how to report these details. Unfortunately, most image-analysis programs do not record these processing steps in the metadata of images. An exception to this is OMERO (Allan et al., 2012), a free, open-access image repository that allows image processing and analysis while keeping track of the image manipulations.

Discussion

Our study raises several issues. First, when it comes to imaging, the "reproducibility crisis" is really a “preproducibility” crisis: in general there is not enough information in an article for anyone who wants to repeat an imaging experiment (Stark, 2018). This is a serious problem that causes unnecessary waste of researchers’ time and resources trying to figure out how an experiment was actually done, before even attempting to replicate it. Also, given the role of unexpected variability in experimental results, exacting descriptions of the materials used and procedures followed are essential to ensure reproducibility (Lithgow et al., 2017; Niepel et al., 2019).

Second, it is puzzling that authors devote a substantial effort to document other experimental techniques, but fail to do so for the basics of imaging. We do not have a good explanation for this, but it is worth noting that while formal training in biochemistry, genetics, and molecular and cellular biology is mandatory in most undergraduate and graduate biomedical programs, microscopy and imaging are rarely part of the curriculum. Our suspicion is that authors are not quite sure as to what really matters in an imaging experiment (Jost and Waters, 2019; North, 2006). It is interesting to note that the Nature Research Life Sciences Reporting Summary includes specific and detailed questionnaires for antibodies, cell lines, statistical analysis, ChIP-Seq, flow cytometry, and MRI, but not for optical imaging (https://www.nature.com/documents/nr-reporting-summary-flat.pdf). Similarly, in its editorial policies Nature encourages, but does not require, reporting of critical image acquisition and processing parameters (https://www.nature.com/nature-research/editorial-policies/image-integrity#microscopy). This does not seem to be enough to ensure accurate reporting of imaging procedures.

Third, it is hard to understand how reviewers and editors can accurately evaluate the results of a manuscript when there is not enough information on how the experiments were performed. It seems reviewers take the reported results at face value, without much consideration of the limitations that the experimental procedures may impose on the data.

Fourth, it is apparent that editors and publishers are not enforcing the requirements they have mandated. As an example, for microscopy experiments the Journal of Cell Biology requires that the numerical aperture (NA) of the objective lens used be reported (Rockefeller Press, 2019): however, almost a quarter (7/29) of the articles from this journal in our dataset failed to disclose the NA of the objective lens used.

Fifth, while we have not completed an exhaustive analysis of all biomedical areas, we are confident this problem extends to other disciplines, such as physiology and cancer biology.

What to do about it?

We believe that the massive underreporting of the details of microscopy experiments needs to be addressed urgently. As Lithgow et al. wrote: “We have learnt that to understand how life works, describing how the research was done is as important as describing what was observed” (Lithgow et al., 2017).

Authors need to improve their understanding of the imaging techniques they use in their research, and reviewers and editors need to insist that enough information is given to evaluate and replicate experimental imaging data. Mandatory deposit of original image files (including accurate metadata; Linkert et al., 2010) in a repository would be a step in the right direction. This approach was the basis for the JCB Dataviewer that was tested and ultimately discontinued by the Journal of Cell Biology (Hill, 2008). Existing image repositories such as OMERO for microscopy images (https://www.openmicroscopy.org/omero/) and the more generic BioImage Archive (https://www.ebi.ac.uk/bioimage-archive/) could serve this purpose (Allan et al., 2012; Ellenberg et al., 2018). More specialized resources – such as the Brain Research Microscopy Workspace (www.doryworkspace.org) or the Cell Image Library (www.cellimagelibrary.org) – could also contribute to the development of minimum reporting guidelines for images. These guidelines should cover the technical details of image acquisition and the biological information required to provide adequate context (Huisman et al., 2019). Vendors of imaging instrumentation, in particular microscopes, must also strengthen and standardize the procedures by which metadata is recorded in acquired images to facilitate its retrieval.

While editors, researchers and vendors have responsibilities in this area, scientists in shared imaging facilities (like ourselves) have a central role to play in ensuring accurate reporting of critical imaging parameters. This role includes educating clients on the relevant variables that affect their experimental results and on the importance of faithfully reporting that information. On a more immediate and practical matter, staff at such facilities can provide off-the-shelf descriptions and vet the methods section of manuscripts. For example, we have developed MethodsJ, a FIJI script that extracts metadata (microscope model, objective lens magnification, NA, excitation and emission wavelengths, exposure time) from a light microscopy image using the Bio-Formats library and generates text that can be used in the materials and methods section of an article (Schindelin et al., 2012; Linkert et al., 2010). While MethodsJ can reliably retrieve some of the critical parameters (objective lens magnification, NA, voxel size), it is more difficult to retrieve the excitation and emission wavelengths for fluorescence: the adoption of standards for metadata by different manufacturers would also make MethodsJ more robust. The code for MethodsJ is available at https://github.com/tp81/MethodsJ (copy archived at https://github.com/elifesciences-publications/MethodsJ; Pengo, 2020) and some example output is shown in Supplementary file 3.
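
To illustrate the general idea behind this kind of tool, the sketch below is a minimal example, not the MethodsJ script itself: it assumes the image has been exported as an OME-TIFF readable by the Python library tifffile, the file name is hypothetical, and images stored in proprietary vendor formats would still require Bio-Formats.

```python
# Illustrative sketch only -- not the MethodsJ script itself. It reads the
# OME-XML metadata embedded in an OME-TIFF and drafts a methods sentence.
# The file name is hypothetical; missing fields are reported as "??".
import xml.etree.ElementTree as ET
import tifffile

def draft_methods_text(path):
    ome_xml = tifffile.TiffFile(path).ome_metadata  # OME-XML string, or None
    root = ET.fromstring(ome_xml)
    ns = root.tag.split("}")[0] + "}"  # namespace of this file's OME schema

    objective = root.find(f"{ns}Instrument/{ns}Objective")
    pixels = root.find(f"{ns}Image/{ns}Pixels")
    if pixels is None:
        return "No pixel metadata found in this file."

    mag = objective.get("NominalMagnification", "??") if objective is not None else "??"
    na = objective.get("LensNA", "??") if objective is not None else "??"
    px = pixels.get("PhysicalSizeX", "??")
    pz = pixels.get("PhysicalSizeZ", "??")

    channels = []
    for ch in pixels.findall(f"{ns}Channel"):
        channels.append(f"{ch.get('Name', 'channel')} "
                        f"(ex {ch.get('ExcitationWavelength', '??')} nm, "
                        f"em {ch.get('EmissionWavelength', '??')} nm)")

    return (f"Images were acquired with a {mag}x / {na} NA objective; "
            f"voxel size {px} x {px} x {pz} um. "
            f"Channels: {'; '.join(channels) or '??'}.")

print(draft_methods_text("example_image.ome.tif"))  # hypothetical file name
```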

On a broader scale, professional societies such as the Microscopy Society of America (MSA), Bio Imaging North America (BINA), and the Association of Biomolecular Resource Facilities (ABRF) could develop an appropriate checklist for the reporting of imaging experiments in coordination with publishers. These organizations will also be instrumental in the dissemination of these standards through meetings, workshops and educational activities. Depositing any standards in the FAIRsharing repository (fairsharing.org; Sansone et al., 2019) could also help with adoption by the community.

Ultimately, the enforcement of standards will have to fall on publishers and funding agencies. In the US the National Institutes of Health and the National Science Foundation already require that experimental methodology is reported and made publicly available, and many journals have similar requirements. It seems that further educational efforts are needed to ensure researchers report their methods fully, and a more proactive approach by journal editors seems to have a beneficial effect on the rigor of experimental reporting (Miyakawa, 2020). Endorsement and support of the image repositories by funders and publishers and mandatory deposit of fully annotated published images will be instrumental in ensuring proper reporting of imaging data and improving the reproducibility of biomedical research.

Materials and methods

Article selection

Issues from late 2018 and early 2019 of Development, Developmental Biology, Developmental Cell (developmental biology journals), Journal of Immunology, Nature Immunology (immunology journals), Journal of Cell Biology, Biophysical Journal, and Journal of Neuroscience were randomly selected. All original research articles in an issue were selected, up to 30 articles per publication. If an issue contained fewer than ~30 articles, consecutive issues were used until that number was reached. Reviews, commentaries and editorials were not used in this analysis. We analyzed a total of 240 articles.

Figure classification

All content of the articles was included in the analysis, including supplementary text and figures. For the purposes of this study, we considered all optical and electron microscopy images and all preclinical animal imaging when assessing the extent of imaging used. Images of western blots, polyacrylamide gels, phosphorimagers and the like were not included. Figures in the main text and supplemental information were classified as imaging figures if any of the panels in the figure contained an original image (not a cartoon, line diagram, schematic, etc.). Figures that contained quantification of imaging data were also considered imaging figures, even if no images were displayed. Conversely, plots, graphs, and figures rendering data derived from other primary techniques, such as modeling, flow cytometry or western blots, were not considered imaging figures. To control for overestimation of the imaging content of an article, the main figures in 12 articles of an issue of Development were quantified on a panel-by-panel basis. Each panel in a figure (A, B, etc.) was assigned to imaging or not as above. The 12 articles thus analyzed contained 94 primary figures and 641 panels. Of these, 85 figures (90%) and 558 panels (87%) were classified as imaging. As we did not find a pronounced difference in the extent of imaging usage between the two methods of evaluation (whole figure and panel classifications), we stuck to the simpler whole figure classification for our analysis. All articles in a journal were used to determine the ratio of imaging to total figures, including those that did not contain any imaging data.

Imaging materials and methods quantification

The whole materials and methods section of the main text and the supplementary materials was considered. The STAR Methods section (excluding the key resources table) was included for Developmental Cell. The Life Sciences Reporting Summary in Nature Immunology was not included because it contains a lot of boilerplate text that is not relevant in all articles. All the text was put together and subjected to word counting in Word (Microsoft). All the portions of the materials and methods section that contained information on image acquisition were extracted and separately counted. Text devoted to sample preparation prior to image acquisition (for example, antibodies used, or immunofluorescence protocols) or to image analysis was not counted unless inextricably interwoven with the image acquisition description. Similarly, text about western blots etc. was not counted (as western blots etc. were not considered as images for the purposes of our study). To quantify the ratio of text about imaging to total materials and methods text, only articles that contained imaging were considered (185 out of the initial 240 articles).
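
As a minimal illustration of this quantification (the counting itself was done in Word, not programmatically), the ratio reduces to a simple word count; the file names below are placeholders for the extracted text segments.

```python
# Minimal illustration of the imaging-text fraction; the actual counts were
# done in Word. File names are placeholders for the extracted text segments.
with open("methods_section.txt") as f:
    methods_text = f.read()
with open("imaging_acquisition_excerpts.txt") as f:
    imaging_text = f.read()

imaging_words = len(imaging_text.split())
total_words = len(methods_text.split())
print(f"{imaging_words} imaging words out of {total_words} "
      f"({100 * imaging_words / total_words:.1f}% of the methods text)")
```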

Imaging materials and methods quality evaluation

Knowing the imaging word count in Materials and methods turned out not to be very informative, as some texts were devoid of usable information (see below). To address this issue, the imaging information was evaluated qualitatively in three separate aspects.

First, for proper evaluation and replication of any optical microscopy experiment the resolution and magnification used are essential. In addition, the degree of chromatic and planar aberration correction is needed for multichannel fluorescence imaging and quantitative microscopy (Ross et al., 2014). This requires reporting objective lens correction, magnification, and numerical aperture (NA). In order to pass this section both NA and magnification had to be reported.

Second, the parameters for digitization were evaluated. Planar optical resolution provided by the objective lens can only be adequately captured in the image if the digitization sampling pitch is correct (Stelzer, 1998). When a three-dimensional image is acquired, the interval between planes (Z step) is also needed in combination with the NA of the objective lens and the size of the confocal aperture (if used) to determine if axial digital sampling is adequate. Planar sampling parameters can be best reported as the pixel size of the digital file, but can also be derived from the total magnification and camera model in wide field images, or a combination of frame size and laser zoom for confocal microscopy. Reporting any of these parameters is enough to pass the digitization section. When three-dimensional images are reported, the voxel size or the Z step was also required to pass this section.
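
For illustration, the following back-of-the-envelope check (not part of our scoring) compares a reported pixel size against the Nyquist requirement derived from the Rayleigh criterion. The emission wavelength, NA and pixel size are hypothetical example values, and a sampling factor of roughly 2-2.3 samples per resolvable distance is a common rule of thumb rather than a fixed standard.

```python
# Back-of-the-envelope Nyquist check for planar sampling in fluorescence
# microscopy. Example values are hypothetical; the Rayleigh criterion and a
# sampling factor of ~2.3 are common rules of thumb, not a fixed standard.
def nyquist_pixel_size_um(emission_nm, na, sampling_factor=2.3):
    lateral_resolution_um = 0.61 * emission_nm / na / 1000.0  # Rayleigh limit
    return lateral_resolution_um / sampling_factor

required = nyquist_pixel_size_um(emission_nm=510, na=1.4)  # GFP-like emission
reported = 0.108  # hypothetical pixel size reported in a methods section (um)
verdict = "adequately sampled" if reported <= required else "undersampled"
print(f"Required pixel size <= {required:.3f} um; reported {reported:.3f} um: {verdict}")
```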

Third, fluorescence microscopy is by far the most common imaging technique in our dataset (confocal fluorescence microscopy is used in 62% of the imaging articles, and wide-field fluorescence microscopy is used in 40%). Adequate replication and evaluation of this technique requires knowledge of the excitation and emission bands used to capture fluorescence (Waters and Wittmann, 2014; Webb and Brown, 2013). This is particularly critical in multichannel acquisitions. When fluorescence imaging experiments were reported, the wavelengths used for excitation and emission with the different fluorochromes were needed to pass this section. Passing the objective lens section plus one of the other two sections is necessary for a global passing grade in this qualitative assessment.
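
The combined score can be summarized in a few lines of logic. The sketch below merely restates the rubric described above (the scoring itself was done by hand), and the dictionary field names are our own invention, not part of any reporting standard.

```python
# Sketch of the combined pass/fail logic described above. The scoring itself
# was done by hand; the dictionary keys here are our own invented field names.
def passes_quality_test(report):
    # Objective lens section: both magnification and NA must be reported.
    objective = bool(report.get("magnification") and report.get("na"))
    # Digitization section: pixel size (plus Z step for 3D acquisitions).
    digitization = bool(report.get("pixel_size") and
                        (not report.get("is_3d") or report.get("z_step")))
    # Spectral section: excitation and emission bands, for fluorescence only.
    if report.get("uses_fluorescence"):
        spectral = bool(report.get("excitation") and report.get("emission"))
    else:
        spectral = False  # not applicable; digitization must then carry the pass
    # Global pass: objective lens plus at least one of the other two sections.
    return objective and (digitization or spectral)

example = {"magnification": "60x", "na": 1.35, "pixel_size": 0.2666,
           "is_3d": True, "z_step": 0.5,
           "uses_fluorescence": True, "excitation": 488, "emission": None}
print(passes_quality_test(example))  # True: objective and digitization pass
```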

When electron microscopy was reported, it was evaluated on a case-by-case basis by one of the authors (MAS), who has more than 30 years of experience with the technology. While the instrument manufacturer and model may be relevant, the acquisition settings, including accelerating voltage, gun bias, magnification, and spot size, must also be conveyed. These parameters determine contrast, resolution, and signal-to-noise ratio (Egerton, 2014). Additionally, sample preparation, including fixation, dehydration, embedding and sectioning, can demonstrably impact the outcome of the study and should also be reported. Evidence of adequate fixation includes the uniform presence of ground structure in all organellar compartments; membrane bilayers that are intact and parallel; and mitochondria and endoplasmic reticulum that are neither distended nor extracted, with membrane structures intact (Dykstra and Reuss, 2003).

To obtain a passing grade, whole-animal luminescence and fluorescence imaging methods had to provide the digitization information and, where appropriate, the spectral bands used, as described above for microscopy. Supplementary file 1 lists the score for each of these categories in columns N (objective lens), O (digitization), P (spectral settings), and Q (EM), together with the final score (column R, Global).

Five articles that contained only magnetic resonance imaging (MRI) or X-ray imaging were excluded from this analysis. While these imaging modalities should be subject to the same good reporting practices, they fall outside our area of expertise. This leaves 180 imaging articles subject to evaluation of the quality of image acquisition reporting (listed as Eligible articles in column O of the Summary Tab in Supplementary file 1).

To illustrate the quality evaluation method, the following three publications are examples of failing, passing but incomplete, and good reporting. The average number of words about imaging in the materials and methods sections for articles with images was 138 (Supplementary file 1). The first example (doi.org/10.1016/j.devcel.2018.08.006) contains 121 words about imaging methods, but these words fail to provide the basics of image acquisition: “Samples were imaged with a Nikon Eclipse inverted microscope with A1R confocal running NIS Elements and images were analyzed with Fiji (Schindelin et al., 2012). Super-resolution microscopy was performed with a Nikon A1 LUN-A inverted confocal microscopy. C2C12 myoblasts were differentiated for two days and immunostained with custom Myomaker antibodies (Gamage et al., 2017) and Myomerger antibodes described above. Images were acquired with a 100X objective NA1.45 at four times the Nyquist limit (0.03 µm pixel size). Z- stacks were acquired using a 0.4 AU pinhole yielding a 0.35 µm optical section at 0.1 µm intervals and 2X integration to avoid pixel saturation. Images were deconvolved with NIS elements using 15 iterations of the Landweber method. Images shown are a single focal plane.” No information is provided on objective lens, digitization, or fluorescence parameters for confocal microscopy. The super-resolution acquisition is correctly described with respect to the objective lens and digitization, but fails on spectral parameter details. Of the 12 imaging figures in this article, just one panel is super-resolution; no information is given on how the other images were acquired.

In the second example (doi.org/10.1016/j.devcel.2018.07.008) 79 words are enough for a passing grade: "Quantitative Microscopy Confocal images were acquired using a Nikon Eclipse Ti-E microscope (Nikon Corp.) equipped with a swept-field confocal scanner (Prairie Technologies), a 100x Plan Apochromat objective (NA 1.45) and an Andor iXon EM-CCD camera (Andor). Widefield images were acquired with a Nikon Eclipse Ti-E microscope (Nikon Corp.) equipped with a 100x Plan Apochromat objective (NA 1.40) and an Andor Xyla 4.2 scientific CMOS camera (Andor). Laser intensity and exposures were identical for all images that were quantitatively compared." The fluorescence channel information is missing, but objective lens and digitization are properly reported.

Lastly, this example (doi.org/10.1016/j.bpj.2018.12.012) passes all three aspects with 136 words: "Cell observation. Fluorescence images were obtained by using an inverted microscope (IX81-ZDC2; Olympus, Tokyo, Japan) equipped with a motorized piezo stage and a spinning disc confocal unit (CSU-X1-A1; Yokogawa, Musashino, Japan) through a 60 [sic] oil immersion objective lens (numerical aperture 1.35; UPLSAPO 60XO; Olympus, Tokyo, Japan). PtdInsP3-GFP was excited by a 488 nm laser diode (50 mW). The images were passed through an emission filter (YOKO 520/35; Yokogawa) and captured simultaneously by a water-cooled electron-multiplying charge-coupled device camera (Evolve; Photometrics, Huntington Beach, CA). Time-lapse movies were acquired at 10 s intervals at a spatial resolution of dx = dy = 0.2666 µm and dz = 0.5 µm using z-streaming (MetaMorph 7.7.5; MetaMorph, Nashville, TN). Cells observed in the microfluidic chamber are acquired at a spatial resolution of dx = dy = 0.2222 µm and dz = 0.2 µm".

Data availability

All data generated or analyzed during this study are included in the manuscript and supporting files.

References

    1. Botvinik-Nezer R, Holzmeister F, Camerer CF, Dreber A, Huber J, Johannesson M, Kirchler M, Iwanir R, Mumford JA, Adcock RA, et al. (2020) Variability in the analysis of a single neuroimaging dataset by many teams. Nature 582:84–88. https://doi.org/10.1038/s41586-020-2314-9
    2. Rockefeller Press (2019) Submission guidelines. Journal of Cell Biology [website]. Accessed July 28, 2020.

Decision letter

  1. Peter Rodgers
    Senior and Reviewing Editor; eLife, United Kingdom
  2. Elisabeth M Bik
    Reviewer

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Thank you for submitting your article "Imaging in Biomedical Research: An Essential Tool with No Instructions" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by the eLife Features Editor (Peter Rodgers). The following individuals involved in review of your submission have agreed to reveal their identity: Claire Brown (Reviewer #1); Elisabeth M Bik (Reviewer #3).

Summary:

This paper brings attention to the ongoing problem of lack of detail in the Materials and methods sections of papers that is needed to reproduce imaging experiments. The authors do a nice study of some select literature to highlight the breadth and depth of the problem. This is an important issue that needs to be discussed in the community and dealt with, and this article will help push towards correcting the problem in the field. However, a number of points need to be addressed to make the article suitable for publication.

Essential revisions:

1) The Appendix should include a clear definition of what counted as an "image" for this paper. Were Western blots or other non-microscopy photos included in this definition? Did the authors count line graphs, flow cytometry panels, or spectra as "image" or not?

Here are three examples to illustrate how the short definition of an image for this paper does not appear to match the data provided in the Source data file:

# DOI:10.1016/j.bpj.2018.12.008 is listed as including 2 images in 7 figures. But I would not be able to know which 2 were counted as images. Figures 2 and 3 contain microscopy photos, but Figures 4 and 5 also appear to contain some imaging data, albeit not a real photo but more a spectrum? Which ones were counted as an image?

# DOI:10.1038/s41590-018-0203-2 has 5 figures with images. Some panels of those figures were microscopy images, but others were immunoblots. Were both types of images evaluated for this paper or only the microscopy photos?

# DOI:10.1083/jcb.201710058 has 8 figures with photographic images, not 7 as listed in the Source Data File. That might suggest that the authors did not include Figure 4 (which has Western blot photos but not microscopy photos) in their analysis, and that they mainly focused on microscopy photos.

2) Because it is not clear what the authors counted as an "image", it is also not clear what was counted as the amount of text in the methods devoted to images. For example, if a paper contained both immunoblots as well as microscopy photos, did the authors only look for the microscopy paragraphs in the Methods, or were the immunoblot (Western blot) paragraphs also counted? Often, Western blot imaging is not described in the Methods because it is not very important how a photo is made, and a smartphone camera might suffice. For microscopy, the settings and details are much more important than for Western blots. It would be helpful if the Introduction / Discussion would better clarify that this paper focused on settings for microscopy photos, not on those for other types of photos. The examples in the Appendix were very helpful to clarify, but already defining this at the beginning of the paper would be better to avoid confusion.

3) The paper lists that 185 articles containing images were evaluated, but in the legend of Table 1, suddenly 4 papers with MRI images are removed. This is not mentioned in the main text. It would be better to describe (e.g. in the Appendix) that MRI images were not included in the definition of 'images' and list 181 as the total of papers analyzed or to include them in all analyses.

4) There is mention of the importance of sample preparation in the examples provided about EM microscopy and how that was evaluated but little is mentioned about optical microscopy and sample preparation. This is very important for quantitative and reproducible fluorescence imaging as well. Details that are important include catalog numbers and lot numbers for reagents and validating staining methods or functionality of tagged proteins.

The manuscript would be strengthened considerably if the authors could extend their analysis of the 240 papers in their sample to determine how rigorous and detailed they are when it comes to sample preparation. If this is not possible, the authors should mention the importance of reporting the details of sample preparation at appropriate places in their manuscript, and acknowledge that the percentages for imaging methods that they quote do not include sample preparation.

5) Similarly, image analysis is mentioned but not covered in detail. I think it is critical to document image analysis steps. Without this, imaging experiments cannot be reproducible. Details that are important include data provenance, the importance of retaining raw image data, the importance of documenting each analysis step including software versions and operating system. The OMERO figure software could be mentioned here as it offers data analysis tracking including for multiple people manipulating the same data set.

As in the previous point, the manuscript would be strengthened considerably if the authors could extend their analysis to include image analysis. If this is not possible, the authors should mention the importance of reporting the details of image analysis at appropriate places in their manuscript, and acknowledge that the percentages for imaging methods that they quote do not include image analysis.

6) The authors identify that incomplete reporting of imaging methods is a problem that a number of members of the community must address, including imaging facilities and journals. Having checklists is a great start. A preliminary checklist for critical image acquisition parameters should be included as a supplemental material for this publication. Perhaps the list of metadata pulled from images with the methodsJ plugin has that information already? Also, the parameters for image analysis (described above) should be included in the checklist. What else might journals do besides checklists (maybe use AI to screen the Materials and methods section for minimal criteria during the submission process?) And what more could core facilities of national and international communities (such as BINA and ABRF) do (maybe develop and disseminate training programs for researchers and reviewers?)

7) Another key stakeholder that is only briefly mentioned is the microscope manufacturers. They need to be guided and encouraged to include significant metadata as part of acquired images and make that information accessible to the end user on multiple different software platforms. A comment on this is needed in the discussion.

8) The authors mention funders as enforcers. Please expand on this - what could funders do to improve the reporting of imaging methods in papers?

9) A large part of the challenge in this field is to raise awareness. This article does just that. So it is also important to point people to a range of emerging resources in this area. Suggestions include:

# OMERO should be mentioned as a potential resource for centralized data management, accessibility and sharing.

# I would also suggest you reference work being done in this area by the 4D Nucleome project: "Minimum Information guidelines for fluorescence microscopy: increasing the value, quality, and fidelity of image data" The 4DN-OME model extension and metadata guidelines: https://arxiv.org/abs/1910.11370

# The brain microscopy initiative is asking for feedback on image metadata standards they are developing. https://brainmicroscopyworkspace.org/feedback.

# BioImage Archive is meant to be a repository for published microscopy image data sets so they can be used by the broader microscopy community. https://www.ebi.ac.uk/about/news/press-releases/bioimage-archive-launch

https://doi.org/10.7554/eLife.55133.sa1

Author response

Essential revisions

1) The Appendix should include a clear definition of what counted as an "image" for this paper. Were Western blots or other non-microscopy photos included in this definition? Did the authors count line graphs, flow cytometry panels, or spectra as "image" or not?

Here are three examples to illustrate how the short definition of an image for this paper does not appear to match the data provided in the Source data file:

# DOI:10.1016/j.bpj.2018.12.008 is listed as including 2 images in 7 figures. But I would not be able to know which 2 were counted as images. Figures 2 and 3 contain microscopy photos, but Figures 4 and 5 also appear to contain some imaging data, albeit not a real photo but more a spectrum? Which ones were counted as an image?

# DOI:10.1038/s41590-018-0203-2 has 5 figures with images. Some panels of those figures were microscopy images, but others were immunoblots. Were both types of images evaluated for this paper or only the microscopy photos?

# DOI:10.1083/jcb.201710058 has 8 figures with photographic images, not 7 as listed in the Source Data File. That might suggest that the authors did not include Figure 4 (which has Western blot photos but not microscopy photos) in their analysis, and that they mainly focused on microscopy photos.

Image definition and western blots.

We did not consider western and other so-called molecular imaging techniques (SDS-PAGE, Northern…) in our analysis. This was rather embarrassingly not stated in our manuscript, and it was clearly a source of confusion. We have now added a line in the main text and a paragraph in the Appendix making this exclusion criterion explicit.

Line graphs or spectra are not images and were therefore not considered; we have made this clearer in the Appendix.

The specific examples of figures that were not counted as imaging in DOI:10.1016/j.bpj.2018.12.008 (Deng, 2019, Figures 4 and 5) and are referred to as “spectrum” by a reviewer are, to the best of my understanding, the results of a computer simulation displayed with a colorful look up table, and do not represent actual imaging data. I have reviewed the images again and I still think they are not real imaging data.

2) Because it is not clear what the authors counted as an "image", it is also not clear what was counted as the amount of text in the methods devoted to images. For example, if a paper contained both immunoblots as well as microscopy photos, did the authors only look for the microscopy paragraphs in the Methods, or were the immunoblot (Western blot) paragraphs also counted? Often, Western blot imaging is not described in the Methods because it is not very important how a photo is made, and a smartphone camera might suffice. For microscopy, the settings and details are much more important than for Western blots. It would be helpful if the Introduction / Discussion would better clarify that this paper focused on settings for microscopy photos, not on those for other types of photos. The examples in the Appendix were very helpful to clarify, but already defining this at the beginning of the paper would be better to avoid confusion.

Imaging text

We have clarified that methods descriptions for western etc., sample prep, and image analysis are not considered in the amount of text classified as Imaging. There are exceptions due to the ways the methods are written, where one cannot cleanly extricate the imaging methods from sample prep or analysis, but these instances are a small portion of the total. At any rate, this would only cause an overcount of the imaging materials and methods. We have clarified this point in the main text and the Appendix.

The focus of this article is not only microscopy, although the majority of the images are microscopy. We prefer to keep the broader imaging scope, because the reporting criteria for other biomedical imaging are similar: resolution and magnification, digitization parameters and spectral settings have to be properly reported in animal imagers the same as in microscopes.

3) The paper lists that 185 articles containing images were evaluated, but in the legend of Table 1, suddenly 4 papers with MRI images are removed. This is not mentioned in the main text. It would be better to describe (e.g. in the Appendix) that MRI images were not included in the definition of 'images' and list 181 as the total of papers analyzed or to include them in all analyses.

Articles excluded from quality evaluation

We think that the articles that contain only MRI (4, all in J. Neurosci.) and X-Ray imaging (1, in Nature Immunology) should be counted as imaging articles, contributing to the 185 imaging articles of the total 240 articles evaluated. Just because we are not comfortable evaluating them for accuracy of reporting does not mean they don’t contain images. However, they are not counted in the denominator of the quality evaluation rate (articles that pass the quality criteria/imaging articles). This denominator is 180 eligible imaging articles that we evaluated for quality of reporting. These are listed under Eligible articles in column O of the Summary tab of the Source Data File. We have made this clearer in the Appendix and added a footnote to the table legend to this effect.

4) There is mention of the importance of sample preparation in the examples provided about EM microscopy and how that was evaluated but little is mentioned about optical microscopy and sample preparation. This is very important for quantitative and reproducible fluorescence imaging as well. Details that are important include catalog numbers and lot numbers for reagents and validating staining methods or functionality of tagged proteins.

The manuscript would be strengthened considerably if the authors could extend their analysis of the 240 papers in their sample to determine how rigorous and detailed they are when it comes to sample preparation. If this is not possible, the authors should mention the importance of reporting the details of sample preparation at appropriate places in their manuscript, and acknowledge that the percentages for imaging methods that they quote do not include sample preparation.

Sample preparation quality evaluation

We could not agree more strongly with the reviewer that sample preparation is a critical aspect of image acquisition, and that an accurate description of what was done is critical for reproducibility. However, this analysis falls well beyond the scope of this article and would require a very substantial effort. We have added content emphasizing the importance of this topic, with some critical references.

5) Similarly, image analysis is mentioned but not covered in detail. I think it is critical to document image analysis steps. Without this, imaging experiments cannot be reproducible. Details that are important include data provenance, the importance of retaining raw image data, the importance of documenting each analysis step including software versions and operating system. The OMERO figure software could be mentioned here as it offers data analysis tracking including for multiple people manipulating the same data set.

As in the previous point, the manuscript would be strengthened considerably if the authors could extend their analysis to include image analysis. If this is not possible, the authors should mention the importance of reporting the details of image analysis at appropriate places in their manuscript, and acknowledge that the percentages for imaging methods that they quote do not include image analysis.

Image processing and analysis reporting evaluation

Again, a topic near and dear to our hearts, but a completely different paper. We have expanded a little bit on the topic and pointed to one of the articles in our dataset that does a really good job of describing the image processing and analysis workflow. We have also incorporated a recent reference that highlights the perils of poor analysis reporting, as well as the requested OMERO reference.

6) The authors identify that incomplete reporting of imaging methods is a problem that a number of members of the community must address, including imaging facilities and journals. Having checklists is a great start. A preliminary checklist for critical image acquisition parameters should be included as a supplemental material for this publication. Perhaps the list of metadata pulled from images with the methodsJ plugin has that information already? Also, the parameters for image analysis (described above) should be included in the checklist. What else might journals do besides checklists (maybe use AI to screen the Materials and methods section for minimal criteria during the submission process?) And what more could core facilities of national and international communities (such as BINA and ABRF) do (maybe develop and disseminate training programs for researchers and reviewers?)

A. Imaging checklist and MethodsJ reporting

The variety of imaging methodologies we encountered in our study makes it infeasible to provide a specific checklist for each one of them, and as we state in our manuscript, this prescriptive task should be left to the appropriate organizations that can provide a broader, authoritative consensus. At the same time, we realize that it can sound glib to state something is wrong without providing a template on how to do it right. We have provided a narrow checklist for the two most common imaging techniques in our dataset, fluorescence laser-scanning confocal microscopy and fluorescence wide-field microscopy (Appendix 2). We are also providing as Appendix 3 the MethodsJ output for multiple imaging files from different methods and vendors. This appendix demonstrates both the capabilities of the script and the need for more consistent reporting of metadata by vendors (see 7 below). We have added the appropriate language in the text regarding both Appendices.

B. What else can be done by journals and professional societies?

These are critical parts of the solution and we have expanded a little bit on them. Education and dissemination will be the essential role of professional societies. The journals need to be more proactive in their requests for primary experimental data.

7) Another key stakeholder that is only briefly mentioned is the microscope manufacturers. They need to be guided and encouraged to include significant metadata as part of acquired images and make that information accessible to the end user on multiple different software platforms. A comment on this is needed in the discussion.

Microscope manufacturers

Point well taken. We have added some text on their role in ensuring accurate, uniform recording of essential metadata in the image files. This point partially draws on the MethodsJ test runs in Appendix 3, which show that the gold standard for imaging metadata extraction, Bio-Formats, is sometimes incapable of interpreting manufacturers’ files.

8) The authors mention funders as enforcers. Please expand on this - what could funders do to improve the reporting of imaging methods in papers?

Funding

We have expanded a little bit on this, but this is necessarily speculative, although if granting agencies put some teeth into their reporting mandates, things would change rapidly.

9) A large part of the challenge in this field is to raise awareness. This article does just that. So it is also important to point people to a range of emerging resources in this area. Suggestions include:

# OMERO should be mentioned as a potential resource for centralized data management, accessibility and sharing.

# I would also suggest you reference work being done in this area by the 4D Nucleome project: "Minimum Information guidelines for fluorescence microscopy: increasing the value, quality, and fidelity of image data" The 4DN-OME model extension and metadata guidelines: https://arxiv.org/abs/1910.11370

# The brain microscopy initiative is asking for feedback on image metadata standards they are developing. https://brainmicroscopyworkspace.org/feedback.

# BioImage Archive is meant to be a repository for published microscopy image data sets so they can be used by the broader microscopy community. https://www.ebi.ac.uk/about/news/press-releases/bioimage-archive-launch

Raising awareness

We have incorporated those excellent suggestions.

https://doi.org/10.7554/eLife.55133.sa2

Article and author information

Author details

  1. Guillermo Marqués

    Guillermo Marqués is at the University Imaging Centers (UIC) and the Department of Neuroscience, University of Minnesota, Minneapolis, United States

    Contribution
    Conceptualization, Data curation, Formal analysis, Methodology, Writing - original draft, Writing - review and editing
    For correspondence
    marques@umn.edu
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-1478-1955
  2. Thomas Pengo

    Thomas Pengo is at the University of Minnesota Informatics Institute (UMII), University of Minnesota, Minneapolis, United States

    Contribution
    Conceptualization, Software, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-9632-918X
  3. Mark A Sanders

    Mark A Sanders is at the University Imaging Centers (UIC) and the Department of Neuroscience, University of Minnesota, Minneapolis, United States

    Contribution
    Conceptualization, Formal analysis, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-7550-5255

Funding

No external funding was received for this work.

Acknowledgements

Dr. Daniel Ortuño-Sahagún was a key catalyst for this project. Shihab Ahmed and Neah Roakk helped greatly with the gathering of the journal articles.

Copyright

© 2020, Marqués et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Cite this article

Guillermo Marqués, Thomas Pengo, Mark A Sanders (2020) Science Forum: Imaging methods are vastly underreported in biomedical research. eLife 9:e55133. https://doi.org/10.7554/eLife.55133
