Peer review process
Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, and public reviews.
Read more about eLife’s peer review process.
Editors
- Reviewing Editor: Sjors Scheres, MRC Laboratory of Molecular Biology, Cambridge, United Kingdom
- Senior Editor: Merritt Maduke, Stanford University, Stanford, United States of America
Reviewer #1 (Public Review):
This work continues a series of recent publications from the Grigorieff lab (https://doi.org/10.7554/eLife.25648, https://doi.org/10.7554/eLife.68946, https://doi.org/10.7554/eLife.79272, https://doi.org/10.1073/pnas.2301852120) showcasing the development of high-resolution 2D template matching (2DTM) for the detection and reconstruction of macromolecules in cryo-electron microscopy (cryo-EM) images of crowded cellular environments. It is well known in the field of cryo-EM that searching noisy images with a template can result in retrieval of the template itself when the detected candidate particles are averaged, an effect known as “Einstein-from-noise” (https://doi.org/10.1073/pnas.1314449110). Briefly, this occurs because a match to an arbitrary motif is statistically likely to be found somewhere in a large noisy dataset just by chance. The effect can be mitigated, for example, by limiting the resolution of the template, but this prevents the accurate detection of macromolecules in a crowded environment, as their “fingerprint” lies in the high-resolution range (https://doi.org/10.7554/eLife.25648). Here, the authors show through several experiments on in vitro and in situ data that features as small as drug compounds and water molecules can be reliably retrieved by 2DTM if the search uses a template (the “bait”) that contains the expected neighboring features but not the targets themselves.
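To make the mechanism concrete, here is a minimal illustration of the effect, not taken from the paper: matching an arbitrary template against images of pure noise and averaging the best-scoring patches reproduces the template, even though no signal is present. Only NumPy and SciPy are assumed.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
k = 16
template = rng.normal(size=(k, k))   # arbitrary motif used as the "bait"
template -= template.mean()
template /= np.linalg.norm(template)

average = np.zeros((k, k))
n_images = 500
for _ in range(n_images):
    noise = rng.normal(size=(256, 256))          # image containing only noise
    # Cross-correlate the template with every k x k patch via FFT
    cc = fftconvolve(noise, template[::-1, ::-1], mode="valid")
    i, j = np.unravel_index(np.argmax(cc), cc.shape)
    average += noise[i:i + k, j:j + k]           # "particle" picked from noise

average /= n_images
# Cosine similarity between the average and the bait approaches 1,
# even though every input image was pure noise:
print(np.sum(average * template) / np.linalg.norm(average))
```

The orthogonal noise components shrink as the number of averaged patches grows, while the template-aligned component selected by the maximum-search survives, which is exactly why high-resolution template matching must be validated carefully.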
The ideas are generally clearly presented, with appropriate references to related work, and the claims are well supported by the data. The experiments verifying the density of the ribosomal protein L7A, as well as the systematic removal of residues from the template model to assess bias, are particularly clever.
One key point that could use further clarification is how to interpret densities in the reconstruction that do overlap with the template. If the omitted regions can be reliably reconstructed, and the density is smooth throughout, it implies that the detected particles are not only (mostly) true positives but also that their poses must be essentially correct. Why, then, can the entire reconstruction not be trusted, including the portions overlapping with the template? In the “Future applications” section, the authors state that in order to obtain a reconstruction entirely devoid of template bias, it would be necessary to successively omit parts of the template structure through its entirety. I wonder whether that is really necessary, and whether the presented approach of omitting template portions could be better framed as a “gold-standard” validation procedure.
In other words, given the compelling evidence provided by the reconstructions in the omitted areas, I find it hard to imagine how the procedure could “hallucinate” features in the rest of the structure, as the entire reconstruction depends on the same pose and defocus parameters. A possible experiment to test this hypothesis would be to go the opposite way: deliberately add an unrealistic feature to the bait and check whether it appears in the reconstruction, while at the same time checking how the omitted parts behave.
When the approach is applied to in situ data (the yeast ribosome), it is intriguing to see that the resolution degraded from 3.1 Å to 8 Å when refinement of the particle poses against the current reconstruction was attempted. The authors do offer some possible explanations, such as the reduced signal of the reconstruction at high resolution and the crowded background, but it leaves one wondering whether this means that a 3.1 Å reconstruction could never be obtained from these data by conventional single-particle analysis procedures.
Furthermore, in the section "Quantifying template bias", the authors make the intriguing statement that there can still be some overfitting of noise even in true positives. I understand this overfitting would occur in the form of errors in the pose and defocus estimation, but a clarification would be helpful.
In the Discussion, the claim that “it is not necessary to use tomography to generate high-resolution reconstructions of macromolecular complexes in cells” is, at least in part, a misconception. As demonstrated in works by the same group and others (https://doi.org/10.1016/j.xinn.2021.100166, https://doi.org/10.1038/s41467-023-36175-y, https://doi.org/10.1038/s41586-023-05831-0), 2D imaging of native cellular environments does offer a faster and better way to obtain high-resolution reconstructions than tomography. However, tomography provides the entire 3D context of the macromolecules, such as their localization to membranes and the cellular architecture, which can be readily visualized in a tomogram even at low resolution, so methods for structure determination from tilt-series data, such as subtomogram averaging, remain of paramount importance. Most likely, a combination of 2D and 3D imaging approaches will be necessary to obtain both the highest structural resolution and the cellular context needed to address biological questions.
The "Materials and Methods" section lacks a description of transmission electron microscopy data collection.
Finally, the preprint version of this work posted on bioRxiv (https://doi.org/10.1101/2023.07.03.547552) contains the following competing interests statement, which is missing from the submitted version:
"The authors are listed as inventors on a closely related patent application named "Methods and Systems for Imaging Interactions Between Particles and Fragments", filed on behalf of the University of Massachusetts."
Reviewer #2 (Public Review):
This paper by Lucas et al. follows on from earlier work by the same group. They use high-resolution 2D template matching (2DTM) to find particles of a given target structure in 2D cryo-EM images, either of in vitro single-particle samples or of more complicated samples, such as FIB-milled cells (which might otherwise be used for 3D electron tomography). One major concern for high-resolution template matching has been the amount of model bias introduced into a reconstruction calculated directly from the orientations and positions identified by the projection-matching algorithm. This paper assesses how much model bias is introduced into the high-resolution features of such maps.
For a high-signal-to-noise in vitro single-particle cryo-EM data set, the authors show that their approach does not yield much model bias. This is probably not very surprising, as their method is basically a particle picker with a very low false-positive rate, which works very well on such data. Still, I guess that is the whole point of it, and it is good to see that they can reconstruct density for a small-molecule compound that was not present in the original template.
For FIB-milled lamellae of yeast cells with stalled ribosomes, the SNR is much lower and the danger of model bias is higher. This is also evidenced by the observation that further refinement of the initial 2DTM-identified orientations and positions worsens the map. This is obviously a more relevant SNR regime in which to assess their method. Still, they show convincing density for the GHX compound, which was not present in the template but was there in the reconstruction from the identified particles.
Quantification of the amount of model bias is then performed using omit maps, where every 20th residue is removed from the template and the corresponding reconstructions are compared (for those residues) with the full-template reconstructions. As expected, model bias increases with lower picking thresholds. Some model bias (Omega = 8%) remains even for very high thresholds. The authors state this may be due to overfitting of noise when template-matching true particles, rather than to the introduction of false positives. That probably still represents some sort of problem, especially because the authors then go on to show that their expectation of the number of false positives does not always match the actual number, probably due to inaccuracies in the noise model for more complicated images. This may warrant further in-depth discussion in a revised manuscript.
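For readers unfamiliar with how such picking thresholds arise, a minimal sketch follows. It assumes the simple Gaussian white-noise model used in the 2DTM literature, in which the threshold is set so that roughly one false positive is expected across all search locations; the search-space sizes below are illustrative numbers, not values from the paper. The point is that the expected false-positive count is very sensitive to both the threshold and the accuracy of the noise model.

```python
# Sketch: derive a 2DTM-style picking threshold from a Gaussian noise model.
# All counts are illustrative assumptions, not values from the manuscript.
from scipy.stats import norm

n_pixels = 4096 * 4096        # candidate positions per image (assumed)
n_orientations = 2_000_000    # angular grid points searched (assumed)
n_searches = n_pixels * n_orientations

# Under pure noise, each match score is ~standard normal. Solve
# n_searches * P(score > t) = 1 for the threshold t:
threshold = norm.isf(1.0 / n_searches)
print(f"picking threshold ~ {threshold:.2f}")  # ~7.5 here; 2DTM operates in this regime

# Small threshold changes move the expected false-positive count by orders
# of magnitude, so any mismatch in the noise model shows up quickly:
for t in (7.0, 7.5, 8.0):
    print(t, n_searches * norm.sf(t))
```

This also illustrates why an inaccurate noise model for crowded cellular images would make the predicted and observed false-positive counts diverge, as the reviewer notes above.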
Overall, I think this paper is well written and it has made me think differently (again) about the 2DTM technique and its usefulness in various applications, as outlined in the Discussion. Therefore, it will be a constructive contribution to the field.
Reviewer #3 (Public Review):
The authors evaluate the effect of high-resolution 2D template matching on template bias in reconstructions, and provide a quantitative metric for overfitting. It is an interesting manuscript that made me reevaluate and correct some mistakes in my understanding of overfitting and template bias, and I’m sure it will be of great use to others in the field. However, its main point is to promote high-resolution 2D template matching (2DTM) as a more universal analysis method for in vitro and, more importantly, in situ data. While the experiments performed to that end are sound and well executed in principle, I fail to draw that specific conclusion from their results.
The authors correctly point out that overfitting is largely enabled by the presence of false positives in the data set. They go on to perform their in situ experiments with ribosomes, which provide an extremely favorable amount of signal that is unrealistic for the vast majority of the proteome. This seems cherry-picked to keep the numbers of false positives and false negatives low. The relationship between the overfitting/false-positive rate and the picking threshold will remain the same for smaller proteins (which is a very useful piece of knowledge from this study). However, the false-negative rate will increase considerably compared to ribosomes if the same high picking threshold is maintained. This will limit the applicability of 2DTM, especially for less abundant proteins.
I would like to see an ablation study: take significantly smaller segments of the ribosome (for which the authors already have particle positions from full-template matching, which are reasonably close to the ground truth), e.g. 50 kDa, 100 kDa, 200 kDa, etc., and calculate the false-negative rate for the same picking threshold. If the resulting number of particles plummets, it would be very helpful to discuss how that affects the utility of 2DTM for non-ribosomes in situ. A sketch of the bookkeeping for such an experiment is given below.
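The following sketch shows only the evaluation step of the proposed ablation, assuming detections are (x, y) coordinates; the toy inputs stand in for real 2DTM output from full-template and truncated-template runs, and the tolerance is a hypothetical choice.

```python
# Sketch: score a truncated-template run against full-template positions
# treated as approximate ground truth. Inputs below are toy stand-ins.

def false_negative_rate(detected, reference, tol=10.0):
    """Fraction of reference particles with no detection within tol pixels."""
    def hit(rx, ry):
        return any((dx - rx) ** 2 + (dy - ry) ** 2 <= tol ** 2
                   for dx, dy in detected)
    misses = sum(not hit(rx, ry) for rx, ry in reference)
    return misses / len(reference)

# Toy data: three full-template positions, of which a truncated template
# (e.g. a 50 kDa segment) recovered only two at the same threshold.
reference = [(100.0, 120.0), (300.0, 40.0), (52.0, 260.0)]
detected = [(101.5, 119.0), (305.0, 44.0)]
print(false_negative_rate(detected, reference))  # 1/3 of particles missed
```

Repeating this for each template size (50, 100, 200 kDa, ...) at the fixed picking threshold would directly expose how quickly detectability falls with template mass.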
Another point of concern is the dramatic resolution decrease to 8 Å after multiple iterations of refinement against experimental reconstructions, described in line 159. Was this a local search from the poses provided by 2DTM, or something more global? While this is not a manifestation of overfitting, as the authors have conclusively shown, I think it adds an important point to the ongoing “But do we really need tomograms, or can we just 2D everything?” debate in the field, which is also central to the 2D part of 2DTM. Reaching 8 Å with 12k ribosome particles would be considered a rather poor subtomogram averaging result these days. Being in the “we need tilt series to be less affected by non-Gaussian noise” camp myself, I wonder whether this indicates that 2D images are inherently worse for in situ samples. If they are, the same limitations would extend to template matching. In that case, shouldn’t the authors advocate for 3DTM instead of 2DTM? It may not be needed for ribosomes, but it could give smaller proteins the necessary edge.
As it stands, this study is also an invitation for practitioners who do not understand the picking threshold used here, and cannot relate it to other template-matching programs, to do a lot of questionable template matching and claim that the results are true because templates are “unoverfittable”. I think such undesirable consequences should be discussed prominently.