A Bayesian approach to single-particle electron cryo-tomography in RELION-4.0
Abstract
We present a new approach for macromolecular structure determination from multiple particles in electron cryo-tomography (cryo-ET) data sets. Whereas existing subtomogram averaging approaches are based on 3D data models, we propose to optimise a regularised likelihood target that approximates a function of the 2D experimental images. In addition, analogous to Bayesian polishing and contrast transfer function (CTF) refinement in single-particle analysis, we describe approaches that exploit the increased signal-to-noise ratio in the averaged structure to optimise tilt-series alignments, beam-induced motions of the particles throughout the tilt-series acquisition, defoci of the individual particles, as well as higher-order optical aberrations of the microscope. Implementation of our approaches in the open-source software package RELION aims to facilitate their general use, particularly for those researchers who are already familiar with its single-particle analysis tools. We illustrate for three applications that our approaches allow structure determination from cryo-ET data to resolutions sufficient for de novo atomic modelling.
Editor's evaluation
Single-particle tomography (SPT) is a useful method to determine the structure of proteins imaged in situ. This important work presents an easy-to-use tool for SPT that approximates the use of 2D tomographic projections using a ‘pseudo-subtomogram’ data structure, chosen to facilitate implementation within the existing RELION codebase. The examples shown provide solid support for the claims about the efficacy of the approach.
https://doi.org/10.7554/eLife.83724.sa0
Introduction
In recent years, electron cryo-microscopy (cryo-EM) has allowed the 3D imaging of an increasing number of biological macromolecules at resolutions sufficient for de novo atomic modelling. This development, originally driven by advances in detector technology, was further facilitated by novel, robust image processing algorithms. In single-particle analysis, images of multiple copies of isolated macromolecular complexes, or particles, that are suspended in random orientations in a thin layer of vitreous water are combined into a 3D reconstruction. Nowadays, many aspects of single-particle analysis workflows can be performed with only minimal human supervision, for example, the detection, extraction, and initial classification of particles in the images (Zivanov et al., 2018; Bepler et al., 2019; Wagner et al., 2019), 3D reconstruction (Zivanov et al., 2018; Punjani et al., 2017), as well as refinement of the optical parameters (Zivanov et al., 2018; Zivanov et al., 2020; Punjani et al., 2017; Tegunov et al., 2021) and per-particle tracking of electron beam-induced motion (Zheng et al., 2017; Zivanov et al., 2019). Many of the algorithms that underlie these modern methods are built on solid statistical foundations that require few tunable parameters. This decreases the need for operator expertise and provides objectivity, as well as robustness, in obtaining optimal structures.
The single-particle approach is, however, limited to investigating isolated protein complexes that are purified to relative homogeneity. To examine these complexes in their crowded physiological environment, electron cryo-tomography (cryo-ET) may be used instead. In the tomographic approach, the sample is tilted multiple times during image acquisition, yielding a so-called tilt series of images from which a 3D tomogram can be computed. In the same manner as single-particle analysis, repeated occurrences of particles in those tomograms can then be aligned and averaged to obtain higher-resolution reconstructions. This process is referred to as subtomogram averaging. Unlike in single-particle analysis, labs use many different tools for subtomogram averaging (e.g. Kremer et al., 1996; Nickell et al., 2005; Heumann et al., 2011; Castaño-Díez et al., 2012; Hrabe et al., 2012; Chen et al., 2013; Galaz-Montoya et al., 2016; Sanchez et al., 2019; Chen et al., 2019; Jiménez de la Morena et al., 2022), and many of these tools require considerable levels of expertise from the operator, often in order to tune parameters that arise from heuristics in the underlying algorithms. This not only provides a barrier for new scientists entering the field, but can also lead to the calculation of suboptimal structures.
Compared to single-particle analysis, subtomogram averaging faces several unique challenges. In addition to estimating the position and orientation of each particle, the algorithm also has to consider the geometry of the tilt series. Typically, this is solved through a set of preprocessing steps that include estimation of contrast transfer function (CTF) parameters and alignment of the tilt series, followed by the reconstruction of, often inconveniently large, tomograms for the entire field of view. Smaller subtomograms, centred around selected particles, are then extracted from the tomograms and used in a separate process of subtomogram alignment and averaging. The separation between tomogram reconstruction and subtomogram averaging can lead to an accumulation of errors, because errors in the CTF estimation or tilt-series alignments are hard to correct. In addition, because the sample cannot be rotated by 180° within the microscope, the subtomograms contain empty regions in Fourier space, the so-called missing wedge, which are difficult to deal with in subtomogram averaging (e.g. see Schmid and Booth, 2008; Förster et al., 2008; Bartesaghi et al., 2008; Frank et al., 2012).
A fundamental problem with subtomogram averaging as described above is that it transforms the original 2D image data into 3D subtomograms, which are then used as a substitute for experimental data in the alignment algorithm. RELION-2 introduced the concept of a 3D CTF to describe the transfer of information from the 2D images to the subtomograms, which dealt to some extent with the missing wedge and the loss of information through interpolations in the reconstruction algorithm (Bharat and Scheres, 2016). A drawback of the 3D CTF approach is that it does not deal correctly with the lower resolution regions of Fourier space, where information from different tilt images overlaps. A statistically more attractive approach would be to formulate the optimisation target function directly as a function of the actual 2D images that are measured in the microscope. This has been proposed in an approach called constrained single-particle cryo-ET (Bartesaghi et al., 2012), where individually boxed particles from the tilt-series images are processed as in single-particle analysis, but their relative orientations are kept fixed. A similar approach was also implemented in the program emClarity (Himes and Zhang, 2018). To deal with unknowns in the relative orientations of the particles from the tilt-series images, as well as their CTFs, the program M recently introduced new optimisation approaches that compare reference projections against the 2D particle images (Tegunov et al., 2021). M relies on RELION for alignment and classification of 3D subtomograms that are recalculated from the optimised parameters in M. Nevertheless, this iterative approach allows subtomogram averaging to resolutions that approach those observed for single-particle analysis, even for particles in complex cellular environments (Tegunov et al., 2021).
Here, we describe a new approach to subtomogram averaging in RELION-4.0 that optimises a regularised likelihood function that approximates the direct use of the 2D images of the tilt series. In order to do so at acceptable computational and implementation costs, we have altered the main refinement program in RELION-4.0 to work with so-called pseudo-subtomograms: explicitly constructed sets of 3D data arrays that contain the sums of CTF pre-multiplied 2D tilt-series images, together with auxiliary arrays that contain the corresponding sums of squared CTFs and how often each 3D voxel has been observed. Pseudo-subtomograms no longer aim to represent the actual scattering potential of the underlying particles, in the way that conventional subtomograms would. Instead, they represent a convenient way to implement an approximation to the 2D approach within the existing RELION code. Evaluation of the pseudo-subtomograms by RELION-4.0 approximates the likelihood of observing a hypothetical particle in the images of the entire tilt series, given the current model. Using that likelihood as a metric, operations equivalent to those in single-particle analysis can now be performed on tomographic data, for example, 3D initial model generation, 3D classification, or high-resolution refinement. In addition, we describe new methods for optimising parameters of the tilt series that exploit the increased signal-to-noise ratio in the average structure. Besides optimisation of the tilt-series alignment itself, we also describe methods analogous to CTF refinement (Zivanov et al., 2018; Zivanov et al., 2020) for refining descriptors of the optical properties (defocus, astigmatism, and higher-order aberrations) and a method akin to Bayesian polishing (Zivanov et al., 2019) to model beam-induced particle motion throughout the tilt series. Once all these parameters have been optimised, new pseudo-subtomograms can be constructed and the alignment can be repeated.
The resulting iterative image processing workflow is similar to existing approaches for singleparticle analysis in RELION.
Methods
Particle alignment and averaging
RELION performs maximum a posteriori estimation to find the set of model parameters $\mathrm{\Theta}$ that is most probable given the observed experimental images $X$. Using Bayes’ theorem, we define a regularised likelihood optimisation target function as
$$\hat{\mathrm{\Theta}} = \underset{\mathrm{\Theta}}{\operatorname{argmax}}\; P(\mathrm{\Theta} \mid X), \quad \text{with} \quad P(\mathrm{\Theta} \mid X) \propto P(X \mid \mathrm{\Theta})\, P(\mathrm{\Theta}), \qquad (1)$$
where $P(\mathrm{\Theta})$ expresses prior information about the model, that is, that the reconstructed map has limited power in Fourier space, and $P(X\mid\mathrm{\Theta})$ is the likelihood of observing the data given the model. A marginalised likelihood function is used, where one integrates over the unknown alignments $\varphi$ of each individual particle. For simplicity, these integrals are omitted from the notation used in this article.
The data model assumes independent Gaussian noise on the Fourier components of the cryo-EM images of individual particles $p$. We therefore write the negative log-likelihood of observing a particle $p$ in a hypothetical alignment $\varphi$ as a sum over a grid of 2D Fourier pixels $\mathbf{j}\in {\mathbb{R}}^{2}$:
$$-\log P(X^{p} \mid \varphi, \mathrm{\Theta}) = C + \sum_{\mathbf{j}} \frac{\left| X^{p}_{\mathbf{j}} - \mathrm{CTF}^{p}_{\mathbf{j}}\, V^{(p)}_{\mathbf{j}} \right|^{2}}{2\sigma_{j}^{2}}, \qquad (2)$$
where ${X}^{p}$ is the Fourier transform of the experimental particle image, ${\text{CTF}}^{p}$ its contrast transfer function, ${\sigma}_{j}^{2}$ the noise variance in the frequency band containing $\mathbf{j}$, and ${V}_{\mathbf{j}}^{(p)}$ denotes the 2D slice out of the 3D Fourier transform of the known map $V$ corresponding to the view of the particle, given by
$$V^{(p)}_{\mathbf{j}} = \mathrm{e}^{-2\pi i\, \mathbf{j}\cdot \mathbf{t}_{p}}\; V(A_{p}\mathbf{j}) \qquad (3)$$
for a 2D vector ${\mathbf{t}}_{p}$ and a $3\times 2$ matrix ${A}_{p}$ that respectively encapsulate the particle’s position and orientation; the evaluation of $V({A}_{p}\mathbf{j})$ is achieved through linear interpolation.
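As an illustration, the per-particle negative log-likelihood of Equation 2 can be sketched in a few lines of NumPy. This is a minimal sketch, not RELION code: the function name and the toy arrays are ours, and the constant $C$ is dropped.

```python
import numpy as np

def negative_log_likelihood(X, ctf, V_slice, sigma2):
    """Negative log-likelihood (up to a constant) of observing the 2D
    Fourier-space particle image X, given its CTF, the projected slice
    V_slice of the current map, and the per-frequency noise power sigma2.
    All arrays share the same 2D shape; sigma2 is real and positive."""
    residual = X - ctf * V_slice  # difference to the CTF-modulated prediction
    return np.sum(np.abs(residual) ** 2 / (2.0 * sigma2))

# Toy example: when the image matches the prediction exactly, the
# negative log-likelihood is zero; any deviation increases it.
rng = np.random.default_rng(0)
V_slice = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
ctf = rng.uniform(-1.0, 1.0, (8, 8))
sigma2 = np.ones((8, 8))
nll_perfect = negative_log_likelihood(ctf * V_slice, ctf, V_slice, sigma2)
nll_noisy = negative_log_likelihood(ctf * V_slice + 1.0, ctf, V_slice, sigma2)
```

In the marginalised likelihood of Equation 1, such terms would be evaluated for many trial alignments $\varphi$ and weighted accordingly.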
In tomography, our aim is to approximate that same likelihood on tilt-series data. The equivalent is a sum over the pixels of the relevant regions of all images $f$ from the tilt series:
$$-\log P(X^{p} \mid \varphi, \mathrm{\Theta}) = C + \sum_{f} \sum_{\mathbf{j}} \frac{\left| X^{p}_{f\mathbf{j}} - \mathrm{CTF}^{p}_{f\mathbf{j}}\, V^{(p)}_{f\mathbf{j}} \right|^{2}}{2\sigma_{fj}^{2}} \qquad (4)$$
We model the shifts and rotations as compositions of perparticle and perimage components:
$$V^{(p)}_{f\mathbf{j}} = \mathrm{e}^{-2\pi i\, \mathbf{j}\cdot \mathbf{t}_{pf}}\; V\!\left(R_{p}^{T} A_{f}\, \mathbf{j}\right), \qquad (5)$$
$$\mathbf{t}_{pf} = A_{f}^{T}\, \mathbf{T}_{pf}, \qquad (6)$$
where we keep the per-particle rotation component, ${R}_{p}$, identical for all images in the tilt series, and only vary ${A}_{f}$, the rotational alignment of the tilt-series images. In turn, the tilt-series alignment ${A}_{f}$ is shared among all particles in a given tilt image. The per-particle part of the translation is modelled as a 3D vector, ${\mathbf{T}}_{pf}\in {\mathbb{R}}^{3}$, that can vary over different tilt images $f$. This contrasts with single-particle analysis, where beam-induced motion of the particle can be corrected for as a preprocessing step (Li et al., 2013; Scheres, 2014; Zheng et al., 2017; Zivanov et al., 2019), so that each particle is associated with a single 2D translation in a motion-corrected image.
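The composition of per-image and per-particle components can be sketched as follows. This is a deliberately simplified single-tilt-axis model, with our own names and conventions; the actual geometry handling in RELION-4.0 also includes in-plane rotations and image shifts.

```python
import numpy as np

def tilt_matrix(tilt_deg):
    """Rotation about the y axis for a nominal stage tilt (a simplification;
    real tilt-series alignments also include in-plane rotation and shifts)."""
    t = np.deg2rad(tilt_deg)
    return np.array([[np.cos(t), 0.0, np.sin(t)],
                     [0.0,       1.0, 0.0      ],
                     [-np.sin(t), 0.0, np.cos(t)]])

def project_particle(position_3d, R_p, tilt_deg):
    """Compose the shared per-frame geometry with the per-particle pose:
    returns the 2D position of the particle in tilt image f, and the 3x3
    composed rotation whose rows define the orientation of the Fourier
    slice for that particle in that image."""
    A_f = tilt_matrix(tilt_deg)
    composed = A_f @ R_p          # per-frame alignment applied after the particle pose
    xy = (A_f @ position_3d)[:2]  # particle position projected into the image plane
    return xy, composed

# At zero tilt the projection simply drops the z coordinate.
xy0, _ = project_particle(np.array([10.0, -4.0, 7.0]), np.eye(3), 0.0)
```

Because $R_p$ is shared across all images of a tilt series, refining it moves the particle consistently in every tilt image at once.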
For our pseudo-subtomogram approach, we now approximate the sum over 2D pixels $\mathbf{j}$ and tilt images $f$ in Equation 4 by a sum over 3D voxels $\mathbf{k}$ in the pseudo-subtomogram:
$$-\log P(X^{p} \mid \varphi, \mathrm{\Theta}) \approx C + \sum_{\mathbf{k}} \frac{\left| D^{p}_{\mathbf{k}} - W^{p}_{\mathbf{k}}\, V(R_{p}\mathbf{k}) \right|^{2}}{2\sigma_{\mathbf{k}}^{2}} \qquad (7)$$
Here, the data term ${D}^{p}$, the weight term ${W}^{p}$, and the multiplicity term ${M}^{p}$ are 3D arrays in the Fourier domain. Together, they constitute a pseudo-subtomogram. They are constructed as follows:
$$D^{p}_{\mathbf{k}} = \sum_{f} l\!\left( \mathrm{CTF}^{p}_{f\mathbf{j}}\, X^{p}_{f\mathbf{j}} \right), \qquad (8)$$
$$W^{p}_{\mathbf{k}} = \sum_{f} l\!\left( \left(\mathrm{CTF}^{p}_{f\mathbf{j}}\right)^{2} \right), \qquad (9)$$
$$M^{p}_{\mathbf{k}} = \sum_{f} l\!\left( 1 \right), \qquad (10)$$
where $l(\cdot )$ represents linear interpolation with forward mapping, that is, each 2D Fourier pixel $\mathbf{j}$ is projected into 3D Fourier space, updating the eight closest voxels.
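The forward-mapping construction can be sketched as below. For brevity this sketch rounds each pixel to its nearest voxel, whereas the implementation spreads each pixel over the eight closest voxels by linear interpolation; all names are ours.

```python
import numpy as np

def insert_slice(D, W, M, X2d, ctf2d, A_f, box):
    """Forward-map one CTF-premultiplied 2D Fourier slice into the 3D
    arrays D (data), W (sum of squared CTFs), and M (multiplicity)."""
    half = box // 2
    for jy in range(-half, half):
        for jx in range(-half, half):
            k = A_f @ np.array([jx, jy])  # 3x2 geometry matrix: 2D pixel -> 3D position
            idx = tuple(np.round(k).astype(int) + half)
            if all(0 <= i < box for i in idx):
                c = ctf2d[jy + half, jx + half]
                D[idx] += c * X2d[jy + half, jx + half]
                W[idx] += c * c
                M[idx] += 1.0

box = 16
D = np.zeros((box, box, box), dtype=complex)
W = np.zeros((box, box, box))
M = np.zeros((box, box, box))
A_f = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # untilted frame: slice at z = 0
X2d = np.ones((box, box), dtype=complex)
ctf2d = 0.5 * np.ones((box, box))
insert_slice(D, W, M, X2d, ctf2d, A_f, box)
```

Summing such slices over all tilt images $f$ yields the three arrays of Equations 8 to 10; voxels touched by several images accumulate correspondingly higher multiplicities.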
Apart from the pre-multiplication of the images with their CTFs, Equation 7 aims to be equivalent to Equation 4. The variance ${\sigma}_{\mathbf{k}}^{2}$ is equivalent to ${\sigma}_{j}^{2}$, the power of the noise in the individual Fourier components of the 2D images.
We then optimise Equation 1 by expectation-maximisation (Dempster et al., 1977), using Equation 7 to construct the likelihood function and using a prior $P(\mathrm{\Theta})\propto \mathrm{exp}\left(-{\sum}_{\mathbf{k}}\frac{|{V}_{\mathbf{k}}|^{2}}{2{\tau}_{k}^{2}}\right)$, based on the expected frequency-dependent power of the signal ${\tau}_{k}^{2}$. This leads to the following iterative algorithm:
$$V_{\mathbf{k}}^{(n+1)} = \frac{\sum_{p} D_{\mathbf{k}}^{p} / \sigma_{k}^{2\,(n)}}{\sum_{p} W_{\mathbf{k}}^{p} / \sigma_{k}^{2\,(n)} + 1 / \tau_{k}^{2\,(n)}}, \qquad (11)$$
$$\sigma_{k}^{2\,(n+1)} = \frac{1}{|S_{k}|} \sum_{\mathbf{k}\in S_{k}} \sigma_{\mathbf{k}}^{2\,(n+1)}, \quad \text{with} \quad \sigma_{\mathbf{k}}^{2\,(n+1)} = \frac{1}{2}\sum_{p} \left| D_{\mathbf{k}}^{p} - W_{\mathbf{k}}^{p} V_{\mathbf{k}}^{(n+1)} \right|^{2}, \qquad (12)$$
$$\tau_{k}^{2\,(n+1)} = \frac{1}{|S_{k}|} \sum_{\mathbf{k}\in S_{k}} \tau_{\mathbf{k}}^{2\,(n+1)}, \quad \text{with} \quad \tau_{\mathbf{k}}^{2\,(n+1)} = \frac{1}{2} \left| V_{\mathbf{k}}^{(n+1)} \right|^{2} \frac{\sum_{p} W_{\mathbf{k}}^{p}}{\sum_{p} M_{\mathbf{k}}^{p}}, \qquad (13)$$
where $(n)$ denotes the iteration; the divisions by ${\tau}_{k}^{2}$ and ${\sigma}_{k}^{2}$ in Equation 11 are evaluated element-wise; and ${\tau}_{k}^{2}$ and ${\sigma}_{k}^{2}$ are calculated by averaging over ${\tau}_{\mathbf{k}}^{2}$ and ${\sigma}_{\mathbf{k}}^{2}$, respectively, in hollow spheres of radius $k$ and thickness 1, described by ${S}_{k}$. The ratio of the terms containing ${W}_{\mathbf{k}}$ and ${M}_{\mathbf{k}}$ in Equation 13 corrects the estimate for the power of the signal from the CTF-corrected map $V$ by the average $\text{CTF}^{2}$, to account for the fact that the likelihood in Equation 7 was calculated for CTF pre-multiplied images.
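The structure of the regularised map update can be sketched as a Wiener-filter-like division. This is an illustrative reduction of Equation 11 (orientation marginalisation and per-shell averaging are omitted, and all names are ours):

```python
import numpy as np

def update_map(D_sum, W_sum, sigma2, tau2):
    """One regularised map update in the spirit of Equation 11: the summed
    data term divided by the summed CTF^2 weights plus the inverse
    signal-to-noise ratio. D_sum and W_sum are sums over all particles
    (after orientation assignment); sigma2 and tau2 stand in for the
    per-shell noise and signal powers, broadcast to the voxel grid."""
    return D_sum / (W_sum + sigma2 / tau2)

# With strong signal (tau2 >> sigma2) the update approaches plain division
# by the accumulated CTF^2 weights; with weak signal it is damped to zero.
D_sum = np.full((4, 4, 4), 2.0 + 0j)
W_sum = np.full((4, 4, 4), 1.0)
strong = update_map(D_sum, W_sum, sigma2=1e-6, tau2=1.0)
weak = update_map(D_sum, W_sum, sigma2=1e6, tau2=1.0)
```

The regularisation term thus suppresses poorly measured frequencies automatically, without user-tuned filter parameters.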
Pre-oriented priors
Many proteins are organised in imperfect 2D arrays inside the tomograms, for example, inside membranes or as part of capsid-like structures. Often, the individual protein molecules inside these arrays exhibit limited rotational freedom with respect to the surface normal of the array, although they may be able to rotate freely around that normal. This knowledge is often exploited in subtomogram averaging approaches through local orientational searches, for example, see Förster et al., 2005. This not only accelerates the refinement, as fewer orientations need to be evaluated; it also makes it possible to solve more challenging structures, because fewer solutions are allowed. In RELION, local orientational searches are implemented as Gaussian priors on the Cartesian translations and on the three Euler angles that describe rotations (Scheres, 2012). One advantage of using pseudo-subtomogram alignment is that the coordinate system of the pseudo-subtomograms themselves can be chosen arbitrarily. By default, pseudo-subtomograms are created in the same orientation as the tomogram, but the user can choose to orient them in a more meaningful way. For example, by constructing the pseudo-subtomograms with their $Z$-axis parallel to the 2D array, using a rotational prior of approximately 90° on the tilt angle will limit the amount of rocking of the particles inside the array, while avoiding the singularities in the definition of the Euler angles that occur when the tilt angle is close to 0°.
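The effect of such a Gaussian orientational prior can be illustrated with a small sketch. The function below is ours, and the prior width is an illustrative value, not a RELION default:

```python
import numpy as np

def tilt_angle_prior(tilt_deg, mu_deg=90.0, sigma_deg=10.0):
    """Unnormalised Gaussian prior on the Euler tilt angle, centred on 90
    degrees for particles whose pseudo-subtomogram z axis was chosen
    parallel to a 2D array; sigma_deg limits how much 'rocking' is allowed.
    (Illustrative values; in RELION the prior width is a user parameter.)"""
    return np.exp(-0.5 * ((tilt_deg - mu_deg) / sigma_deg) ** 2)

# A particle sitting flush in the array is not penalised, whereas one
# rocked by 30 degrees is strongly down-weighted in the marginalisation.
w_upright = tilt_angle_prior(90.0)
w_rocked = tilt_angle_prior(60.0)
```

During refinement, such prior weights multiply the likelihood of each trial orientation, so implausible poses contribute little to the marginalised target.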
Tilt-series refinement
Averaging over multiple particles leads to an increased signal-to-noise ratio in the estimated density map $V$. We have also implemented procedures that exploit $V$ for subsequent re-estimation of parameters that describe the tilt series. These procedures do not require pseudo-subtomograms, but are performed by comparing projections of the density map directly with the (Fourier) pixels of 2D boxes that are extracted from the tilt-series images, with a size sufficient to hold the CTF-delocalised signal. The various tilt-series parameters are then estimated by minimising the negative log-likelihood as defined in Equation 4, that is, the sum over noise-weighted squared differences between the prediction and the observation.
The tilt-series properties that can be refined fall into two broad categories: optical and geometrical. The optical refinement concerns the different parameters of the CTF, while the geometrical refinement aims to optimise the alignment of the tilt series, as well as the beam-induced motion of the individual particles. Both sets of algorithms are closely related to the corresponding single-particle algorithms in RELION: optical-aberration refinement (Zivanov et al., 2018; Zivanov et al., 2020) and Bayesian polishing (Zivanov et al., 2019), respectively. In spite of the similarity between the algorithms, the models that are optimised differ significantly from single-particle analysis. Details of the implementation of the optical and geometrical refinement algorithms are given in Appendix 1. We also note that Bayesian polishing in SPA describes particle motions between individual movie frames. Although our approach for tomography can also consider movie frames, the current implementation applies the same regularisation to particle motions between the movie frames within one tilt image as to motions between the movie frames of different tilt images. Because preliminary tests showed limited benefits of considering movie frames in this manner, only the functionality to model particle motions between the tilt-series images was exposed on the GUI.
CTF refinement for tomographic data in RELION-4.0 includes optimisation of scale factors that model frequency-dependent radiation damage, defocus, astigmatism, and higher-order symmetrical and antisymmetrical aberrations. Although individual particles within a field of view are at distinct defoci in the tilt-series images, their relative defoci are known from the geometry of the tilt series and the known 3D positions of the particles in the tomogram. Therefore, one can efficiently perform defocus estimation in a single pass, considering all particles in a tilt-series image simultaneously. In order to do so, we modified procedures that were developed for higher-order aberration estimation in single-particle analysis (Zivanov et al., 2020), where the information from all particles in each tilt-series image is condensed into two images that are used to estimate a common phase shift (see Appendix).
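The geometric relation between a particle's 3D position and its defocus in a given tilt image can be sketched as follows. This is a simplified single-tilt-axis model with our own sign and axis conventions, not the implementation's:

```python
import numpy as np

def particle_defocus(nominal_defocus, position_3d, tilt_deg):
    """Defocus of one particle in one tilt image: the nominal image defocus
    plus the particle's depth along the beam after tilting. A sketch under
    a simple single-tilt-axis model (tilt about the y axis); signs and
    axis conventions are ours and may differ from the implementation."""
    t = np.deg2rad(tilt_deg)
    # Depth along the beam (z) of the rotated particle position; only the
    # x and z coordinates change under a tilt about the y axis.
    depth = -np.sin(t) * position_3d[0] + np.cos(t) * position_3d[2]
    return nominal_defocus + depth

# At zero tilt, the relative defocus is just the particle's z coordinate.
d0 = particle_defocus(30000.0, np.array([500.0, 0.0, 200.0]), 0.0)
```

Because these relative defoci are fixed by the geometry, only one defocus offset per tilt image needs to be fitted, which is what makes the single-pass estimation efficient.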
Similar procedures can also be used to model higher-order symmetrical and antisymmetrical aberrations in the tomographic data. Analogously to our single-particle approach, they are modelled using Zernike polynomials and estimated in the same way. Because the higher-order aberrations are often only a limiting factor at relatively high spatial frequencies, a large number of particles is needed to estimate them reliably. Optimally, higher-order aberrations would thus be estimated globally, over the entire data set, and only for cases that yield high-resolution averages. If aberrations change during data collection, data sets may be split into optics groups, for which aberrations are estimated separately. Typically, the third-order antisymmetrical aberrations are the most important ones, that is, trefoil and axial coma, which can both be caused by a tilted beam. The resolution gains that these optimisations yield depend on the microscope (mis)alignment. Provided alignment has been performed reasonably well, higher-order aberration correction will probably be most useful for reconstructions that extend beyond 3 Å resolution.
The geometric alignment includes both (rigid) rotational and translational realignment of the tilt-series images, as well as the modelling of beam-induced motion of individual particles throughout the tilt series. For the latter, we neglect rotations of the particles and only model beam-induced translations. By doing so, we can precompute the likelihood of each particle being in each position around its original one, and then look for an alignment that simultaneously maximises the sum of those likelihoods over all tilt-series images and all particles, as well as a prior that ensures spatially coherent motion. This allows us to evaluate the likelihood of a hypothetical particle position by looking up a single interpolated value in an image. In this formulation, the problem becomes equivalent to the Bayesian polishing approach that we originally developed for single-particle analysis, except for the inclusion of a third spatial dimension for the motion.
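The single-lookup evaluation can be sketched as a bilinear interpolation into a precomputed likelihood landscape. The names below are ours, and the landscape here is a toy array rather than an actual noise-weighted cross-correlation image:

```python
import numpy as np

def bilinear_lookup(cc_image, x, y):
    """Look up a precomputed per-particle likelihood landscape at a
    non-integer trial position, by bilinear interpolation. Evaluating a
    hypothetical particle shift then costs one lookup instead of a full
    comparison against the reference projection."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * cc_image[y0, x0] +
            fx * (1 - fy) * cc_image[y0, x0 + 1] +
            (1 - fx) * fy * cc_image[y0 + 1, x0] +
            fx * fy * cc_image[y0 + 1, x0 + 1])

# The total cost of a motion track sums such lookups over all tilt images,
# plus a smoothness prior on the displacements (omitted in this sketch).
cc = np.zeros((5, 5))
cc[2, 2] = 1.0                       # toy landscape with a single peak
peak = bilinear_lookup(cc, 2.0, 2.0)  # exactly on the peak
mid = bilinear_lookup(cc, 2.5, 2.0)   # halfway between two grid points
```

The optimiser can then cheaply explore many candidate motion tracks per particle, which is what makes the joint optimisation over all particles and tilt images tractable.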
Results
We tested our approach on three test cases. Appendix 2—table 1 provides experimental details for each of the data sets; Appendix 2—table 2 provides details on the image processing.
HIV-1 immature capsid
We tested the workflow above on the cryo-ET data set that was used to determine the structure of the immature capsid lattice and spacer peptide 1 (CA-SP1) regions of the Gag polyprotein from human immunodeficiency virus 1 (HIV-1) (Schur et al., 2016) (EMPIAR-10164). We used the same subset of five tomograms that was also used to assess the NovaCTF (Turoňová et al., 2017), emClarity (Himes and Zhang, 2018), and Warp (Tegunov and Cramer, 2019) programs. Introducing 3D CTF correction, and using the alignment parameters from the original analysis by Schur et al., NovaCTF reported a resolution of 3.9 Å (Turoňová et al., 2017). The Warp program introduced local and global motion correction in the tilt-series images, as well as optimisation of CTF parameters. The combination of Warp and subtomogram alignment and averaging in RELION-3 led to a resolution of 3.8 Å. A recent application of emClarity led to a reconstruction at 3.3 Å resolution (Ni et al., 2022).
We used the tilt-series projections after movie frame alignment from the original analysis (Schur et al., 2016), without any other preprocessing steps, along with the tilt-series alignments, which had been performed with the IMOD package (Kremer et al., 1996), and CTF parameters estimated using CTFFIND4 (Rohou and Grigorieff, 2015). We used 12,910 particles from the five-tomogram subset, reconstructed an initial reference map using the originally published particle alignments, and filtered it to 5 Å. A first alignment in 3D auto-refine, followed by averaging of the initial pseudo-subtomograms, led to a resolution of 3.6 Å. This average was then used for a full cycle of pseudo-subtomogram improvement and realignment. We first applied CTF refinement to optimise the defoci of all particles, which improved the resolution only marginally. Subsequent optimisation of the tilt-series geometry, including modelling of local particle motion, improved the resolution to 3.5 Å. Finally, realignment of newly generated pseudo-subtomograms led to a resolution of 3.4 Å. A second cycle of these three steps yielded 3.3 Å, and a third cycle converged at 3.2 Å (Figure 1a). Geometrical refinement was performed with estimation of local particle motion; the consideration of deformations did not yield additional improvement. In the first cycle, where improvements in both CTFs and geometry are most obvious, the order of applying those optimisations did not alter the final result for this data set. These data and results are also distributed as part of the subtomogram tutorial in RELION-4.0, as described on https://relion.readthedocs.io/en/release-4.0/. Figure 1—figure supplement 1 shows the improvement in map quality during the iterative refinement process; Figure 1—figure supplement 2 shows a comparison with the 3.3 Å map from emClarity.
Analysis of the complete data set generated a structure at 3.0 Å resolution (Figure 1—figure supplement 1), which is the same resolution as obtained using the M and RELION-3 workflow (Tegunov et al., 2021; Figure 1—figure supplement 3), and is likely limited by flexibility and asymmetry in the CA hexamer.
Caulobacter crescentus S-layer
We also applied our approach to thin cellular appendages of C. crescentus bacteria known as stalks, which have previously been imaged using cryo-ET (Bharat et al., 2017). The cell body and cell stalks of C. crescentus cells are covered by a nearly hexagonal, paracrystalline array known as the surface layer (S-layer) (Smit et al., 1992). The structure of the S-layer was solved using a combination of X-ray crystallography, cryo-EM single-particle analysis, and subtomogram averaging, revealing how the S-layer is attached to bacterial cells by an abundant glycolipid called lipopolysaccharide (LPS) (Bharat et al., 2017; von Kügelgen et al., 2020). Previously, cryo-ET of the S-layer, using 110 tilt series collected with a dose-symmetric scheme, yielded 51,866 hexamers of the S-layer. This study used a subtomogram averaging approach that is based on a constrained cross-correlation approach implemented in the AV3 MATLAB suite (Förster and Hegerl, 2007), which was specifically optimised for the analysis of macromolecules arranged in a lattice (Wan et al., 2017). A 7.4 Å reconstruction of the S-layer was obtained, in which alpha-helices were resolved (Bharat et al., 2017). This reconstruction was improved by application of NovaCTF (Turoňová et al., 2017), leading to a 4.8 Å reconstruction, in which large amino acid side chains were resolved (von Kügelgen et al., 2020). Moreover, density for an LPS molecule was observed near the putative LPS-binding residues of the S-layer, in agreement with a cryo-EM single-particle structure of an in vitro reconstituted complex (von Kügelgen et al., 2020). We used the tilt series after movie frame alignment from the initial analysis (Bharat et al., 2017), along with the tilt-series alignments performed within IMOD (Kremer et al., 1996), CTF parameters from CTFFIND4 (Rohou and Grigorieff, 2015), and the Euler angle assignments and subtomogram coordinates from the original analysis.
These parameters were imported into RELION-4.0, followed by multiple cycles of pseudo-subtomogram generation and refinement, analogous to the immature HIV-1 data set described above, leading to a 5.6 Å reconstruction of the S-layer hexamer (Figure 2a). Next, we defined a mask around the central pore of the S-layer, corresponding to the inner domain bound to LPS, to perform focused refinements. Another cycle of pseudo-subtomogram reconstruction, CTF refinement, and refinement within the new mask improved the resolution to 4.4 Å. Accounting for per-particle motions with additional cycles of pseudo-subtomogram improvements and refinements increased the resolution of the central pore to 4.0 Å, and that of the inner domain of the S-layer to 3.7 Å. Further 3D classification without alignment identified a subset of 42,990 subtomograms that gave a 3.5 Å resolution reconstruction of the inner S-layer.
The 3.5 Å map is in excellent agreement with the single-particle structure of the in vitro reconstituted complex, including the LPS binding site (von Kügelgen et al., 2020; see Figure 2—figure supplement 1). Furthermore, divalent metal ions, known to be tightly bound to the inner S-layer (Matthew, 2021), are resolved (Figure 2b). Surprisingly, at lower isosurface contour levels, we also observed a second LPS binding site (Figure 2c and d). The size and shape of this density agree with the structure of the LPS O-antigen, illustrating how improved subtomogram averaging in RELION-4.0 can help uncover new biology.
Coat protein complex II
Finally, we applied our approach to the Saccharomyces cerevisiae coat protein complex II (COPII), which mediates the transport of newly synthesised proteins from the endoplasmic reticulum (ER) to the Golgi apparatus as part of the secretory pathway. COPII is formed by five proteins that assemble sequentially on the ER membrane to induce remodelling of the bilayer into coated carriers in a process known as COPII budding, while simultaneously selectively recruiting cargo into these budding membrane carriers. COPII budding can be reconstituted in vitro from purified proteins and artificial membranes, to form small, spherical vesicles or long, straight tubes. Cryo-ET has previously been used to visualise the architecture of COPII on reconstituted tubules (Hutchings et al., 2018; Hutchings et al., 2021). The coat assembles into two concentric layers; the inner layer forms a pseudo-helical lattice, which has previously been solved to 4.6 Å resolution using Dynamo-based subtomogram averaging protocols (Castaño-Díez et al., 2012).
We used the tilt series after movie alignment from the initial analysis (Hutchings et al., 2021), along with the tilt-series alignments performed in Dynamo (Castaño-Díez et al., 2012) and CTF parameters from CTFFIND4 (Rohou and Grigorieff, 2015). COPII-coated tubes were manually traced in the resulting tomograms, and particles were extracted by randomly oversampling their surface, with approximate initial orientations assigned based on the cylindrical geometry. Dynamo was used for initial alignment of 8× binned subtomograms to define the centre of the particles and the directionality of individual tubes. We then imported the particle coordinates for processing in RELION-4.0, using 3D refinement at 4× and 2× binning. Since we expect inner coat subunits to arrange in a lattice, we cleaned the data set by excluding any subtomograms that did not conform to the expected geometrical relationship with their neighbouring particles. A first 3D refinement of the unbinned data set gave a map at 4.4 Å resolution, which was further improved to 4.2 Å and 4.0 Å by tilt-series frame alignment and CTF refinement, respectively. Two further rounds of 3D refinement, followed by tilt-series frame alignment and CTF refinement, yielded a final map with a resolution of 3.8 Å (Figure 3).
At this resolution, most side chains are visible in the map, enabling us to build and refine an atomic model. The improved model will allow the design of point mutants to precisely disrupt interfaces between coat subunits and test their effects in COPII budding.
Discussion
We formulate the problem of averaging over multiple identical particles in tomographic tilt series in an empirical Bayesian framework that is based on a statistical model that approximates one for the two-dimensional experimental data. The Bayesian framework has proven effective in reducing the number of tunable parameters and in obtaining high-quality reconstructions from single-particle data (Fernandez-Leiro and Scheres, 2016). The two-dimensional data model describes the experimental images better than alternative approaches that use 3D reconstructed subtomograms as an intermediate. One example of a problem with the intermediate 3D data model is the need for missing wedge correction, which arises from the observation that the experimental images were acquired, incompletely, in three dimensions. Artefacts related to suboptimal missing wedge correction may affect both alignment and classification of particles. By using an approximation to the 2D data model, missing wedge correction is no longer required. Instead, the problem approaches that of single-particle analysis, where projections from different orientations and of different structural states are sorted out simultaneously. Provided the 3D Fourier transforms of the distinct classes are fully sampled through the orientation distribution of the raw particles, likelihood optimisation techniques have been highly successful in tackling this problem in single-particle analysis (Scheres et al., 2007; Fernandez-Leiro and Scheres, 2016).
In practice, the implementation in RELION-4.0 does not use stacks of 2D projection images as input for the refinement program that performs alignment and classification. Instead, the concept of 3D pseudo-subtomograms is introduced, where the tilt-series images are Fourier transformed, pre-multiplied with their CTF, and inserted as slices into a 3D Fourier volume according to the best current estimates for the tilt-series geometry. The use of 3D pseudo-subtomograms allowed us to reuse existing code for subtomogram averaging in RELION, whereas input stacks of 2D images would have required significant software development efforts. Nevertheless, in the future we might still choose to implement a true 2D version of the code, which would be more efficient, both in terms of processing time and disk storage requirements. In cases where the number of tilt images is small in comparison to the box size, fewer Fourier pixels need to be examined in a stack of 2D images than in a pseudo-subtomogram, with a corresponding decrease in processing time. Moreover, the likelihood calculation in the 3D pseudo-subtomogram approach requires separate storage of the accumulated squares of the CTFs, and of the corresponding multiplicity terms. In contrast, in the 2D approach, only the 2D images need to be stored, as the CTFs can be calculated on-the-fly and there is no need for a multiplicity term, giving a corresponding decrease in storage requirements. However, if one were to collect tilt series with very fine angular increments or in a continuous manner (Chreifi et al., 2019), then the current implementation may still be preferable.
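The trade-off between the two formulations can be made concrete with a back-of-the-envelope count of Fourier elements per particle. The tilt count and box size below are illustrative values of ours, not figures from any of the data sets in this study:

```python
# Back-of-the-envelope comparison of the elements handled per particle in
# the two formulations, for an illustrative 41-image tilt series and a
# 128-pixel box (illustrative numbers, not from the paper).
n_tilts = 41
box = 128

pixels_2d_stack = n_tilts * box * box  # one 2D Fourier slice per tilt image
voxels_3d_pseudo = box * box * box     # one 3D pseudo-subtomogram

# How many times larger the 3D representation is than the 2D stack:
ratio = voxels_3d_pseudo / pixels_2d_stack
```

Under these assumptions, the 3D representation holds roughly three times as many elements as the 2D stack, before even counting the separately stored weight and multiplicity arrays; the gap narrows as the number of tilt images grows relative to the box size.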
Besides the alignment and classification of individual particles, the methods described here also deal with re-estimation of the parameters that describe the optical and geometrical features of the tilt series. As soon as an initial average structure has been obtained, its increased signal-to-noise ratio can be exploited to determine these parameters more accurately than is possible from the raw tilt-series images alone. The implementations in RELION-4.0 again follow those previously developed for single-particle analysis: CTF refinement is used for re-estimation of the CTFs of the tilt-series images, and algorithms akin to Bayesian polishing are used to re-estimate the tilt-series alignment, as well as the movement of individual particles throughout the tilt-series acquisition process. As better tilt-series parameters allow better pseudo-subtomograms, particle alignment and classification are iterated with the optimisation of the tilt-series parameters.
Similar tilt-series and CTF optimisation approaches have been implemented in the program M (Tegunov et al., 2021). Compared to M, RELION-4.0 uses computationally more efficient algorithms that do not require GPU acceleration; M uses GPUs to accelerate its calculations. In both tomography and SPA, RELION-4.0 only models beam-induced translations of the particles, whereas M also models beam-induced rotations. Since SPA routinely reaches 2 Å resolutions without modelling beam-induced rotations, we assumed that the effect of rotations of individual particles throughout the tilt series is not large enough to warrant their correction at typical tomography resolutions. In cases where the data do allow for better than 2 Å resolutions, M could still be used to correct for beam-induced rotations in a post-processing step, following alignment and classification of the individual particles in RELION. It is likely that adaptation of M to function with the pseudo-subtomograms proposed here would lead to increased synergy between the two programs. In the meantime, external tools to convert from M parameters to RELION-4.0 are already available (https://github.com/joton/reliontomotools; Zivanov, 2022; copy archived at swh:1:rev:bfa43828876ceb77bed0c7eb72f794c79c9de5e6).
Besides the reduction in tunable parameters that is characteristic of the Bayesian approach, its uptake by researchers who are new to the field is further facilitated by a graphical user interface. This interface is already widely used for single-particle analysis and has been extended for the processing of tomographic data in RELION-4.0. Apart from the calculations that will be familiar to users of single-particle analysis, for example, 3D classification, 3D initial model generation, and 3D auto-refinement, the new interface also provides convenient access to the tomography-specific versions of CTF refinement and Bayesian polishing, as well as to the preprocessing operations that calculate the pseudo-subtomograms. However, tilt-series alignment, tomogram reconstruction, and particle picking are not yet part of the RELION workflow. Efforts to also implement solutions for those steps in a single tomography processing pipeline are ongoing and will be part of future RELION releases. Meanwhile, current import procedures rely on specific preprocessing operations in IMOD (Kremer et al., 1996), and particle coordinate conversion tools for use with RELION-4.0 are available for a range of third-party software packages (Pyle et al., 2022). To further facilitate the uptake of this new software by the community, we have provided an online tutorial that uses the publicly available HIV-1 immature capsid data set to describe and illustrate all steps necessary to obtain the results described in Figure 1.
In summary, we introduce new methods for subtomogram averaging that reach resolutions sufficient for de novo atomic modelling and increase the accessibility of this emerging technique. We envision that our methods will allow more researchers to calculate better structures from tomographic data, which will aid the next revolution in structural biology, where macromolecular complexes are imaged not in isolation, but in their biologically relevant environment.
Appendix 1
CTF refinement
CTF refinement in RELION-4.0 optimises the following parameters: scale, defocus, astigmatism, and higher-order (even and odd) optical aberrations. Since, apart from the difference in defocus, the same CTF needs to be valid for an entire micrograph of particles, optimisations similar to those in our single-particle algorithms can be applied. All the relevant information is first consolidated into a minimal form using linear transformations, and the final, typically non-linear, optimisation is then performed on that minimal form.
We formulate the CTF for tilt-series frame $f$ of particle $p$ as follows:
where ${\alpha}_{f}$ describes the overall scaling factor, ${\tau}_{f}(\mathbf{j})$ the empirical radiation-damage model as defined by Grant and Grigorieff, 2015, ${\gamma}_{pf\mathbf{j}}$ the symmetrical phase-delay component, and ${\rho}_{f\mathbf{j}}$ the antisymmetrical one. Note that only $\gamma$ varies between particles, because it contains the quadratic defocus term that depends on the position of the particle. The phase delays are parametrised in the same way as in single-particle analysis in RELION-3 – as a combination of explicitly named low-order terms and higher-order Zernike polynomials:
As before, the astigmatic-defocus matrix ${D}_{pf}$ is decomposed into a defocus term $\delta {z}_{p}$ and two linear astigmatism terms, ${a}_{1}$ and ${a}_{2}$, while ${C}_{s}$ describes the spherical aberration of the microscope, ${\chi}_{f}$ a constant phase offset (owing to amplitude contrast and a phase plate, if one is used), $\lambda$ is the wavelength of the electron, and ${Z}_{n}^{m}$ are the higher-order even Zernike terms. One key difference to single-particle analysis is that the defocus term $\delta {z}_{p}$ is no longer a free parameter for each particle; instead, it depends on the already known 3D position of the particle. Therefore, in tomography, the defocus term is only estimated once per tilt image, and all the particles contribute to that estimate.
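The dependence of the per-particle defocus on the known 3D position can be illustrated with a minimal sketch (the function name, units, and sign convention below are illustrative assumptions, not RELION's API): the defocus of a particle in a tilt image is the per-image defocus plus the particle's depth along the viewing direction.

```python
import numpy as np

def particle_defocus(defocus_f, rot_f, pos_p):
    """Defocus of particle p in tilt image f: the per-image defocus plus
    the particle's depth along the beam, obtained by projecting its 3D
    position onto the viewing direction (third row of the rotation).
    Units (e.g. Angstroms) and sign convention are illustrative."""
    view_dir = np.asarray(rot_f)[2]        # viewing direction z_f
    return float(defocus_f + view_dir @ np.asarray(pos_p))
```

Because every particle in the image constrains the same per-image defocus offset, that offset can be estimated far more accurately than from any single particle.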
The scaling factor ${\alpha}_{f}$ is estimated by computing the following two sums for each micrograph and dividing them (the † symbol indicates complex conjugation):
Note that the ${\text{CTF}}^{\prime}$ used in Equation 19 is missing its scale factor:
Alternatively, we allow the user to fit the parameters of Lambert’s extinction model to the data, assuming perfectly flat samples of constant thickness. In that case, the CTF scale in image $f$ of tomogram $t$ is expressed as a function of the beam luminance ${\alpha}_{0}$, sample normal ${\mathbf{n}}_{t}$, and optical sample thickness ${\kappa}_{t}$:
If this option is used, then the CTF scales of all the tilt series in the data set are estimated together. The beam luminance ${\alpha}_{0}$ is modelled globally, while the sample thickness and normal are allowed to differ among tomograms, but not between the images of a tilt series. The vector ${\mathbf{z}}_{f}$ points in the viewing direction of tilt image $f$. Note that this model does not allow the geometrical sample thickness to be separated from its extinction factor, so we can only estimate the product of the two. Also, the ice normal is required to be perpendicular to the estimated tilt axis of the tilt series, since its component pointing in the direction of the axis is indistinguishable from an increase in ice thickness or opacity. This global optimisation is performed using the sums ${G}_{tf}$ and ${H}_{tf}$ computed in Equations 18 and 19, where the subscript $t$ indicates tilt series $t$. This is done by finding a global value of ${\alpha}_{0}$ and values of ${\kappa}_{t}$ and ${\mathbf{n}}_{t}\in {S}^{2}$ for all tomograms that produce ${\alpha}_{tf}$ which minimise the following quantity and thus maximise the overall likelihood in Equation 4:
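A minimal sketch of such an extinction model, under the assumption that the CTF scale decays exponentially with the beam path length through a flat slab (the exact parametrisation used by the program may differ):

```python
import numpy as np

def ctf_scale(alpha0, kappa_t, n_t, z_f):
    """CTF scale under a Lambert-style extinction model: a flat slab of
    optical thickness kappa_t (geometry times opacity) with unit normal
    n_t attenuates the beam exponentially with the path length, which
    grows as 1/|n_t . z_f| for viewing direction z_f."""
    path = 1.0 / abs(float(np.dot(n_t, z_f)))   # relative path length
    return alpha0 * np.exp(-kappa_t * path)
```

For an untilted image with the normal along the beam, the scale is ${\alpha}_{0}\exp(-\kappa)$; at a 60° tilt the path length doubles and the scale drops to ${\alpha}_{0}\exp(-2\kappa)$, which is the behaviour the global fit exploits.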
To perform defocus estimation efficiently, we apply the optimisations we originally developed for the estimation of higher-order aberrations in single-particle analysis (Zivanov et al., 2020). These allow us to determine a collective offset to $\gamma$ for a large set of particles that all have different values of $\gamma$. Specifically, they allow the change to the log-likelihood arising from changing the value of $\gamma$ at any Fourier pixel to be expressed as a pair of 2D images, independently of the number of particles. Therefore, each pixel of each particle only needs to be considered once. After that, the log-likelihood can be evaluated by iterating over the pixels of a single image.
In single-particle analysis, this approach is used to estimate the higher-order aberrations that are shared among all the particles in a data set. In tomography, we also use this approach to condense the information from all the particles in a tilt image (all of which exhibit slightly different defoci) into two such images, and to then determine the optimal change to $\gamma$ efficiently using a non-linear algorithm.
The two condensed images $R$ and $\widehat{\mathbf{t}}$ that we compute are the same as the ones in single-particle analysis, except for the inclusion of the noise power ${\sigma}^{2}$. The definitions are repeated here for the sake of completeness. Note that each pixel $\mathbf{j}$ of $R$ contains a real symmetrical $2\times 2$ matrix and each pixel of $\widehat{\mathbf{t}}$ a ${\mathbb{C}}^{2}$ vector:
where ${\mathbf{d}}_{pf\mathbf{j}}\in {\mathbb{R}}^{2}$ describes the point on the unit circle corresponding to the initial phase angle ${\gamma}^{(0)}$, which is given by the initial CTF parameters:
The predicted 2D images ${\stackrel{~}{V}}_{f}^{(p)}$ contain the effect of the initial CTFs, except for the symmetrical aberration:
The vector-valued condensed image ${\widehat{\mathbf{t}}}_{f}$ describes the most likely phase shift $\gamma$ for each pixel $\mathbf{j}$, expressed as a point on a circle, while the matrix-valued one, ${R}_{f}$, describes the anisotropic weight of that information. With these two condensed images computed for a given tilt image $f$, the change to the likelihood defined in Equation 4 resulting from the change to the phase delay $\gamma$ at any pixel can be expressed as a quadratic form. Therefore, we look for a change $\delta D$ to the astigmatic-defocus matrices ${D}_{pf}$ which produces phase delays that minimise that quadratic form:
where the per-pixel error ${\mathbf{e}}_{\mathbf{j}}(\delta D)$ is given by the deviation from the optimal phase shift ${\widehat{\mathbf{t}}}_{\mathbf{j}}$:
As an alternative to fitting $D$ independently for each tilt-series image, our program also allows the user to apply an ${L}^{2}$ regulariser to the $\delta {D}_{f}$ of the different images in the same series. In that case, the sum in Equation 27 runs over all the pixels $\mathbf{j}$ of all the tilt-series frames $f$. This helps to stabilise the CTFs of the higher tilts, but it risks impairing the estimates of the CTFs of the more important lower tilts. Formally, this is done by minimising the following cost:
Since the early tilt-series images carry more information than the later ones, their values in $R$ are typically significantly greater. Therefore, in this formulation, they automatically assume a greater weight in the estimation. The optimal weight of the regulariser itself, $\lambda$, cannot be measured from the data. Its optimal value depends on how reproducible the defocus values are across the tilt-series images, which in turn depends on the microscope setup, such as the stability of the stage.
Geometric refinement
Analogously to Bayesian polishing, the log-likelihood of a particle being observed at position $\mathbf{s}$ is given by twice its cross-correlation with the predicted image:
where IFT stands for inverse Fourier transform.
To keep the problem differentiable, the cross-correlation CC is always accessed using cubic interpolation. After the inverse Fourier transformation, each such cross-correlation table is cropped to a smaller size to make storing many such tables feasible and the memory throughput efficient. The size of the tables can be controlled by the user and should be set according to the maximal positional error expected in the data set.
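A minimal sketch of such a cropped cross-correlation table, assuming the observed and predicted particle images are given as 2D Fourier transforms (noise weighting and cubic interpolation are omitted here):

```python
import numpy as np

def cc_table(obs_ft, pred_ft, crop):
    """Cross-correlation between an observed and a predicted particle
    image, evaluated for all 2D shifts via an inverse FFT and then
    cropped to a (2*crop+1)^2 table centred on zero shift."""
    cc = np.fft.ifft2(obs_ft * np.conj(pred_ft)).real
    cc = np.fft.fftshift(cc)               # zero shift at the centre
    c = cc.shape[0] // 2
    return cc[c - crop:c + crop + 1, c - crop:c + crop + 1]
```

The peak of the table indicates the most likely particle shift; cropping the table to the expected positional error is what makes it feasible to hold one table per particle per tilt image in memory.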
The geometrical model that is optimised this way projects points $\mathbf{s}\in {\mathbb{R}}^{3}$ in the tomogram to 2D positions ${\mathbf{p}}_{f}\in {\mathbb{R}}^{2}$ in each tilt image:
The initial linear projection ${\mathbf{l}}_{f}$ is obtained by multiplying $\mathbf{s}$ with a $2\times 4$ matrix ${J}_{f}$, and then optionally shifted by the non-linear image distortion $W$. The cost ${C}_{\text{align}}$ that is being minimised consists of the sum over all (negative) cross-correlation values of all particles in all images, plus all regularisers for all regularised parameters:
Although our framework supports arbitrary projection matrices ${J}_{f}$, our optimisation algorithm only looks for orthogonal rotations to the initial projection matrix. This is achieved by parametrising that rotation using Tait–Bryan angles, not Euler angles. The disadvantage of Euler angles is that they are gimbal locked in the initial configuration where all three angles are zero, that is, the first and third angles refer to the same axis. The rigid alignment of the tilt image is never regularised because we do not assume to have any prior information on it.
The distortion field $W$ can take different forms. We have implemented models that express the distortion using a Fourier basis, a cubic-spline basis, and an affine linear one. The intended purpose of these deformations is to model distortions of the image that arise in the image-forming lenses at the bottom of the optical system. An imperfect calibration of these lenses is likely to go unnoticed as long as the microscope is only used for single-particle analysis, because the same particle is never observed at starkly different positions during the collection of a single-particle movie. In tomography, a given particle may appear at any image position in any tilt image, so arbitrary deformations of the 2D image become relevant. We expect these deformations to be stable over time, so the intended purpose of the deformation field $W$ is to model only one such deformation per tilt series. Optionally, we also allow the user to instead model a different deformation for each tilt image, but we have not encountered any data sets where this has produced an improvement. The deformation fields are optionally regularised by penalising the squared coefficients of the respective model. This limits the extent of deformation, and it forces the system to explain changes in position through particle motion, rather than image deformations.
The quantity that we do expect to change during the collection of the tilt series is instead the 3D position of the particle, ${\mathbf{s}}_{f}$. Analogously to Bayesian polishing, we model this change as motion over time. The position in image $f$ is given by the sum over the per-frame velocities ${\mathbf{v}}_{f}\in {\mathbb{R}}^{3}$ up to that point. Note that the velocity vector ${\mathbf{v}}_{f}$ refers to the motion between images $f$ and $f+1$:
It is important to note that the tilt images are implicitly assumed to be ordered chronologically. In practice, this is usually not the case, so the images are reordered by the program based on the cumulative radiation dose of each image.
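The accumulation of per-frame velocities into per-image positions can be sketched as follows (a hypothetical helper, assuming the images have already been reordered chronologically by cumulative dose):

```python
import numpy as np

def positions_over_tilts(s0, velocities):
    """3D particle positions per tilt image: s_f is s_0 plus the sum of
    the velocities of all preceding transitions, where velocities[f] is
    the motion between images f and f+1 (chronological order assumed)."""
    v = np.asarray(velocities, dtype=float)
    # zero displacement for the first image, cumulative sums thereafter
    disp = np.vstack([np.zeros(3), np.cumsum(v, axis=0)])
    return np.asarray(s0, dtype=float) + disp
```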
As in Bayesian polishing, the motion vectors themselves are expressed in a collective coordinate system for all the particles. This allows the spatial smoothness of a hypothetical motion field to be evaluated and used as a prior. For a more detailed derivation, we refer to the paper on Bayesian polishing for single-particle analysis (Zivanov et al., 2019). The formal details will be given in the following for the sake of completeness and to highlight differences to the original formulation.
The collective coordinate system for particle motion is obtained through a low-rank approximation of a Gaussian process. This is done by constructing and then diagonalising the $P\times P$ covariance matrix $S$ for a set of initial particle positions (where $P$ is the number of particles). The entries of $S$ contain the value of the following square-exponential covariance function for each pair of particles $p$ and $q$:
Optionally, the user can instead use the original formulation without the square inside the exponential:
The former option forces particles in immediate proximity to move more similarly, but it allows for a greater discrepancy at greater distances. Both the single-particle and the tomography implementations allow the user to choose either function, but the default has changed from the latter to the former in tomography. This choice was motivated both by empirical observations and by the fact that the square-exponential kernel produces fewer meaningful deformation components, which speeds up the optimisation for tomograms with hundreds of particles.
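Both kernels can be sketched as follows (a hypothetical helper; the precise normalisation of the length scale sig_d is an assumption):

```python
import numpy as np

def motion_covariance(positions, sig_d, squared=True):
    """P x P covariance matrix over 3D particle positions: either the
    square-exponential kernel (the tomography default) or the plain
    exponential kernel of the original Bayesian polishing."""
    diff = positions[:, None, :] - positions[None, :, :]
    d = np.linalg.norm(diff, axis=-1)      # pairwise 3D distances
    r = d / sig_d
    return np.exp(-r ** 2) if squared else np.exp(-r)
```

The square-exponential kernel falls off faster beyond the length scale, which is why it yields fewer meaningful deformation components after diagonalisation.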
We perform a singular-value decomposition of the covariance matrix $S$,
which allows us to construct the coordinate system as follows:
where ${\lambda}_{i}\in \mathbb{R}$ is the ${i}^{\text{th}}$ singular value and ${\mathbf{a}}_{i}\in {\mathbb{R}}^{P}$ the corresponding singular vector. Basis vectors with small ${\lambda}_{i}$ are discarded here to speed up the optimisation. Let ${P}^{\prime}$ represent the number of remaining basis vectors. In this coordinate system, the set of all particle velocities in a tilt image ${V}_{f}\in {({\mathbb{R}}^{3})}^{P}$ can be expressed as ${V}_{f}=B{Q}_{f}$, where ${Q}_{f}$ is a ${P}^{\prime}\times 3$ coefficient matrix that encodes the velocity in three spatial dimensions. Note that the same basis $B$ is shared between all three dimensions and all tilt-series frames. In this coordinate system, the negative log-likelihood of a configuration of particle velocities is given by the squared Frobenius norm of the coefficient matrix ${Q}_{f}$, that is, the sum of the squares of its entries. Therefore, the motion regulariser takes a simple form:
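The construction of the basis from the covariance matrix can be sketched as follows (hypothetical helper; the truncation threshold is an assumption). Since $S$ is symmetric positive semi-definite, its singular vectors are its eigenvectors, and scaling each by the square root of its singular value gives a basis $B$ with $B{B}^{T}\approx S$:

```python
import numpy as np

def motion_basis(S, rel_tol=1e-6):
    """Low-rank motion basis from the P x P covariance matrix S:
    b_i = sqrt(lambda_i) * a_i for each singular pair, discarding basis
    vectors whose singular value is negligible relative to the largest."""
    U, lam, _ = np.linalg.svd(S)             # S symmetric: U holds the a_i
    keep = lam > rel_tol * lam[0]
    return U[:, keep] * np.sqrt(lam[keep])   # columns are the b_i
```

Particle velocities are then expressed as V_f = B @ Q_f, with one small coefficient matrix Q_f per image transition.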
The acceleration regulariser that penalises changes in velocity from one movie frame to the next in single-particle analysis has been omitted from tomography. This is because, unlike a single-particle movie, the tilt images are not collected in one continuous exposure. Since they are exposed individually, there is no reason to assume that the particle velocities will be continuous between them. Two further differences to the original Bayesian polishing are hidden in the notation: the covariance is now based on the 3D distances between the particles, and the coefficient matrix $Q$ contains three columns instead of two.
As in the original Bayesian polishing approach, the complete alignment of the tilt series is performed by finding parameters that minimise ${C}_{\text{align}}$ from Equation 36 using L-BFGS (Liu and Nocedal, 1989). The set of parameters that are optimised always includes the three Tait–Bryan angles for each tilt image and the set of initial particle positions. The latter are essential because all the information we have about the 3D positions is derived from the tilt images themselves, so changing the alignment requires the particles to be able to shift to more likely positions. Estimating the image deformations and particle motion is optional. If they are estimated, then a set of deformation coefficients is fitted either to each tilt image or to each tilt series, while a set of motion coefficients is fitted to each image transition.
In addition to this local, L-BFGS-based refinement, we also offer two methods to align only the 2D shifts of all tilt images globally. This means that instead of trying to obtain the optimal alignment through small changes to the initial one, we look for the best possible image shift overall, keeping all other parameters constant. This is helpful when individual tilt images are so badly aligned that a local optimisation cannot converge to the globally optimal position. Note that the initially assumed angles are rarely as incorrect as the image shifts, since the angles can be controlled more effectively through the experimental setup.
There are two variants of this method. If the sample contains few particles per tomogram, then the best results are obtained by predicting an entire micrograph and computing its cross-correlation with the original one. The maximum value in that large cross-correlation image then indicates the optimal image shift. This approach can, in theory, deal with arbitrarily large misalignments. If the sample is dense, however, then this whole-micrograph approach can fail. In that case, better results are obtained by instead adding up the small, per-particle cross-correlation images defined in Equation 31, and finding the maximum in that sum. This latter approach can only correct for misalignments smaller than half the box size of the particle, and it often produces inferior results on samples with few particles per tomogram.
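The dense-sample variant can be sketched as follows (hypothetical helper): the centred per-particle cross-correlation tables are summed, and the argmax of the sum gives the single 2D shift applied to the whole tilt image.

```python
import numpy as np

def global_shift(cc_tables):
    """Global 2D shift of a tilt image from a stack of centred
    per-particle cross-correlation tables: sum them and take the argmax
    relative to the zero-shift centre."""
    total = np.sum(np.asarray(cc_tables), axis=0)
    peak = np.unravel_index(np.argmax(total), total.shape)
    c0, c1 = total.shape[0] // 2, total.shape[1] // 2
    return peak[0] - c0, peak[1] - c1
```

Because the tables are cropped, this variant can only recover shifts up to half the particle box size, as noted above.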
Appendix 2
Data availability
We have only used previously published cryo-EM data sets for testing our software. Reconstructed maps and atomic models generated in this study have been submitted to the EMDB and PDB, with entry codes as indicated in Table 1.

EMPIAR ID EMPIAR-10164. Cryo-electron tomography of immature HIV-1 dMACANC VLPs.
References

Real-space refinement in PHENIX for cryo-EM and crystallography. Acta Crystallographica Section D, Structural Biology 74:531–544. https://doi.org/10.1107/S2059798318006551

Classification and 3D averaging with missing wedge correction in biological electron tomography. Journal of Structural Biology 162:436–450. https://doi.org/10.1016/j.jsb.2008.02.008

Fast and accurate reference-free alignment of subtomograms. Journal of Structural Biology 182:235–245. https://doi.org/10.1016/j.jsb.2013.03.002

Rapid tilt-series acquisition for electron cryotomography. Journal of Structural Biology 205:163–169. https://doi.org/10.1016/j.jsb.2018.12.008

ISOLDE: a physically realistic environment for model building into low-resolution electron-density maps. Acta Crystallographica Section D, Structural Biology 74:519–530. https://doi.org/10.1107/S2059798318002425

Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society 39:1–22. https://doi.org/10.1111/j.2517-6161.1977.tb01600.x

Structure determination in situ by averaging of tomograms. Methods in Cell Biology 79:741–767. https://doi.org/10.1016/S0091-679X(06)79029-X

Classification of cryo-electron sub-tomograms using constrained correlation. Journal of Structural Biology 161:276–286. https://doi.org/10.1016/j.jsb.2007.07.006

Computational separation of conformational heterogeneity using cryo-electron tomography and 3D sub-volume averaging. Journal of Structural Biology 178:165–176. https://doi.org/10.1016/j.jsb.2012.01.004

Alignment algorithms and per-particle CTF correction for single particle cryo-electron tomography. Journal of Structural Biology 194:383–394. https://doi.org/10.1016/j.jsb.2016.03.018

Clustering and variance maps for cryo-electron tomography using wedge-masked differences. Journal of Structural Biology 175:288–299. https://doi.org/10.1016/j.jsb.2011.05.011

PyTom: a Python-based toolbox for localization of macromolecules in cryo-electron tomograms and subtomogram analysis. Journal of Structural Biology 178:177–188. https://doi.org/10.1016/j.jsb.2011.12.003

ScipionTomo: towards cryo-electron tomography software integration, reproducibility, and validation. Journal of Structural Biology 214:107872. https://doi.org/10.1016/j.jsb.2022.107872

Computer visualization of three-dimensional image data using IMOD. Journal of Structural Biology 116:71–76. https://doi.org/10.1006/jsbi.1996.0013

SerialEM: a program for automated tilt series acquisition on Tecnai microscopes using prediction of specimen position. Microscopy and Microanalysis 9:1182–1183. https://doi.org/10.1017/S1431927603445911

TOM software toolbox: acquisition and analysis for electron tomography. Journal of Structural Biology 149:227–234. https://doi.org/10.1016/j.jsb.2004.10.006

CTFFIND4: fast and accurate defocus estimation from electron micrographs. Journal of Structural Biology 192:216–221. https://doi.org/10.1016/j.jsb.2015.08.008

Fast alignment of limited angle tomograms by projected cross correlation. 2019 27th European Signal Processing Conference (EUSIPCO). https://doi.org/10.23919/EUSIPCO.2019.8903041

RELION: implementation of a Bayesian approach to cryo-EM structure determination. Journal of Structural Biology 180:519–530. https://doi.org/10.1016/j.jsb.2012.09.006

Methods for aligning and for averaging 3D volumes with missing data. Journal of Structural Biology 161:243–248. https://doi.org/10.1016/j.jsb.2007.09.018

Real-time cryo-electron microscopy data preprocessing with Warp. Nature Methods 16:1146–1152. https://doi.org/10.1038/s41592-019-0580-y

Efficient 3D-CTF correction for cryo-electron tomography using NovaCTF improves subtomogram averaging resolution to 3.4 Å. Journal of Structural Biology 199:187–195. https://doi.org/10.1016/j.jsb.2017.07.007

CTF determination and correction for low dose tomographic tilt series. Journal of Structural Biology 168:378–387. https://doi.org/10.1016/j.jsb.2009.08.016

Software: reliontomotools, version swh:1:rev:bfa43828876ceb77bed0c7eb72f794c79c9de5e6. Software Heritage.
Decision letter

Edward H Egelman, Reviewing Editor; University of Virginia, United States

Volker Dötsch, Senior Editor; Goethe University, Germany

Alberto Bartesaghi, Reviewer; Duke University, United States
Our editorial process produces two outputs: (i) public reviews designed to be posted alongside the preprint for the benefit of readers; (ii) feedback on the manuscript for the authors, including requests for revisions, shown below. We also include an acceptance summary that explains what the editors found interesting or important about the work.
Decision letter after peer review:
Thank you for submitting your article "A Bayesian approach to single-particle electron cryotomography in RELION-4.0" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, one of whom is a member of our Board of Reviewing Editors, and the evaluation has been overseen by Volker Dötsch as the Senior Editor. The following individual involved in the review of your submission has agreed to reveal their identity: Alberto Bartesaghi (Reviewer #1).
The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this to help you prepare a revised submission.
Essential revisions:
(1) Half the citations in the manuscript are self-references (16 out of 32!), and important/relevant work on SPT was left out and should be referenced: Himes et al., 2018 (https://doi.org/10.1038/s41592-018-0167-z), Chen et al., 2019 (https://doi.org/10.1038/s41592-019-0591-8), and Sanchez et al., 2019 (https://doi.org/10.23919/EUSIPCO.2019.8903041). Also, with the exception of their own subvolume averaging approach [4], the vast body of work on missing-wedge compensation was not referenced in the introduction.
(2) In the section on "Orientation priors", they state that one advantage of the proposed approach is that "the coordinate system of the pseudo subtomograms themselves can be chosen arbitrarily" and that "this not only accelerates the refinement" but also "makes it possible to solve more challenging structures because fewer solutions are allowed". This advantage, however, is certainly not specific to the "pseudo-subtomogram" data representation proposed here. Indeed, this strategy was originally proposed in the context of single-particle cryo-EM by Jiang et al., 2001 (https://doi.org/10.1006/jsbi.2001.4376) and was first used for processing cryo-ET data by Forster et al., 2005 (https://doi.org/10.1073/pnas.040917810), followed by many others. This point should be contextualized and the appropriate references included.
(3) Results on the HIV-1 Gag benchmark dataset (EMPIAR-10164) have two components: (1) processing of a "standard" subset consisting of 5 tilt series, and (2) processing of the entire dataset (43 tilt series). As far as I know, the best reconstruction obtained using the five tilt series is a 3.3 Å map obtained using emClarity (https://www.ebi.ac.uk/emdb/EMD-13354). Since only lower resolution results obtained with NovaCTF (3.9 Å) and Warp (3.8 Å) are cited in the text, my guess is that the authors were not aware of this newer result. Given that the resolution of the emClarity map (3.3 Å) is similar to the one presented here (3.2 Å), it would be useful to compare their reconstruction against emClarity's EMD-13354 (both in terms of map features and FSC).
(4) When processing the entire dataset, a 3.0 Å resolution map was obtained which matches the resolution obtained by Warp/M [24]. This result, however, is "not shown" (page 5). Since this is the only benchmark dataset analyzed in this study and given that many other packages have analyzed the same data, this map should be presented together with the corresponding FSC curve, data processing details, and comparisons made against M's 3.0 Å EMD-11655 map.
(5) No experiments are presented to validate the ability of the approach to correct for higher-order aberrations. Since "higher-order aberration correction will probably be most useful for reconstructions that extend beyond 3 Å resolution", why didn't they validate their approach using one of the sub-3 Å cryo-ET datasets available on EMPIAR (e.g., EMPIAR-10491)?
(6) It would be useful to include a table with a summary of all the relevant data collection/data processing parameters (e.g., pixel size, detector, number of tilt series and subvolumes, symmetry, resolution, number of refinement rounds, etc.) for all datasets analyzed in this study.
(7) Only the 3.8 Å map for the COPII inner coat has been deposited in the EMDB. All other maps and related metadata need to be deposited as well.
(8) Computational complexity is a very important topic in the context of SPT, but no details are included in the text other than the following statement: "Compared to M, RELION-4.0 uses computationally more efficient algorithms that do not require the computational power of a GPU". Does this imply that computationally inefficient algorithms require the use of GPUs? If the authors want to comment on computational complexity (which I think would be very important to do), actual numbers need to be presented, such as running times, compute resources used, storage, etc.
9) Regarding the use of a "true 2D approach" vs. "pseudo-volumes" (page 6), the following claim is made: "For example, the current implementation could be used to process pairs of tilted images, but a 2D implementation would be more efficient"; does this refer to efficiency in computing terms, storage, or both? Also, the authors should comment on the storage requirements of their 3D approach. Since the pseudo-tomograms have three components (Eqs. 8–10), does this mean that they require 3x more storage compared to traditional subtomogram averaging? This would be an important practical consideration for prospective users of this tool and should be adequately discussed.
10) In general, the use of the term "particle polishing" could lead to misinterpretations since it means different things in SPA and tomography. In SPA, "polishing" refers to the analysis of intermediate video frames (which also exist in tomography), but the present study doesn't consider them at all. Also, I noticed that the term "frames" is used interchangeably to refer to SPA "video frames" and "particle tilts" in tomography, which adds to the confusion. For example, the term "frame alignment" is mentioned twice on page 6, but I believe the authors are really referring to "tilt image alignment" instead. This should be clarified and the terms "frame" and "tilts" should be used consistently throughout the manuscript.
(11) For the Caulobacter crescentus S-layer reconstruction, the new cryo-ET map corresponding to the inner domain of the S-layer seems to have even better resolution than the SPA reconstruction reported in the original work [26] (3.5 Å vs. 3.7 Å for EMD-10389). Since the authors say these maps are in "excellent agreement", a quantitative comparison should be included. Was the second LPS binding site visible in the 3.7 Å SPA map? In Figure 2d, the fit between LPS and the cryo-ET density (right-hand side) seems to be of lower quality (i.e., no density for parts of the LPS model is seen). Without having access to the maps, it is not possible to properly evaluate these results.
(12) The paper is wellwritten and clear, but the description of the new concepts and evaluation of the method is somewhat minimalistic. While these may not be absolutely necessary, they will facilitate better understanding by a broader user community.
(i) A graph describing the workflow as implemented in RELION, e.g., as a flow chart, would increase the readability of the paper.
(ii) A figure visually describing the pseudo-subtomograms would make the concept clearer to the reader.
(iii) A more in-depth description of the input files (alignment files from IMOD, particle lists, and orientations from Dynamo/elsewhere) needed to perform alignments and averaging in RELION using pseudo-subtomograms would be beneficial to the community. This could be incorporated into a flow chart as suggested in (i).
(13) Along the same lines, the authors do not include any description of the particle-picking procedures or the software requirements. This seems important, as the authors state that "(particles) relative defoci are known from the geometry of the tilt series and the known 3D positions of the particles in the tomogram".
(14) In section 2.2 it is briefly discussed that rotational priors can be used to constrain orientational searches. As this is often very useful for particle alignment in cryo-ET, this feature and its implementation should be described in more detail.
(15) The progress in map quality is only shown by improvements to the resolution in the FSC curves for the test datasets. It would be beneficial to also show the initial and intermediate maps for at least one of the test cases, so that the reader can better assess the improvement in map quality. Overall maps should be shown to evaluate performance at the "whole particle" level, given considerations of flexibility (especially in cases where focused refinement was used). Corresponding local resolution maps would be informative in such cases.
(16) For the immature HIV capsid data, the FSC curves are only shown for the small subset of the data. As it is an important point that the RELION pseudo-subtomogram approach can reach the same resolution as previous software such as M, the data should also be shown for the full dataset (the authors write "data not shown").
(17) In general, a proper comparison to previous approaches/results, perhaps by providing the maps for comparison in the figures, would be helpful for the reader to evaluate which approach to choose, and based on what expectations/biological system.
(18) In section 4 it is briefly discussed that particle rotation is not modeled in the per-particle motion correction. The authors draw parallels to single particles and conclude that this might be an issue at resolutions better than 2 Å. However, could the authors comment on whether the difference in dose and time frame of data collection, as well as the thickness of the sample, may affect the assumption that this is only an issue at higher resolution?
(19) Many subtomogram averages are resolved at a more moderate resolution (e.g. 8–20 Å) than the maps presented in this study, for a variety of reasons. It would be beneficial for the reader if the authors could comment on/discuss which optical and geometrical parameters can be confidently modeled at lower resolution, and whether they improve resolution.
(20) Could the authors comment on whether it is possible to extract the refined optical and geometrical parameters for a set of particles from a tilt series, and apply them to a different set of particles from the same tilt series (or to the full tilt series to obtain a better quality tomogram for new particle picking).
(21) Along the same line: would it be possible to refine two distinct species (e.g. two distinct conformations of the same protein, or two completely unrelated proteins) together in a multi-species refinement of optical and geometrical parameters? As the overall tilt series alignment and imaging parameters are the same, this might aid the refinement of lower-abundance species.
(22) On a more conceptual level, the reasoning for separating the SPA/cryo-ET workflows completely in the software package remains unclear to me, and it would be valuable if the authors could briefly discuss this.
(23) Can the authors comment on the statement "The data model assumes independent Gaussian noise on the Fourier components of the cryo-EM images of individual particles" with respect to the more complex in situ data, where noise is expected to be more structured?
https://doi.org/10.7554/eLife.83724.sa1
Author response
Essential revisions:
(1) Half the citations in the manuscript are self-references (16 out of 32!), and important/relevant work on SPT was left out and should be referenced: Himes et al., 2018 (https://doi.org/10.1038/s41592-018-0167-z), Chen et al., 2019 (https://doi.org/10.1038/s41592-019-0591-8), and Sanchez et al., 2019 (https://doi.org/10.23919/EUSIPCO.2019.8903041). Also, with the exception of their own subvolume averaging approach [4], the vast body of work on missing-wedge compensation was not referenced in the introduction.
We have included the references above, as well as multiple references to alternative approaches for subtomogram averaging and to approaches for missing-wedge correction. The revised manuscript now contains 53 references.
(2) In the section on "Orientation priors", they state that one advantage of the proposed approach is that "the coordinate system of the pseudo-subtomograms themselves can be chosen arbitrarily" and that "this not only accelerates the refinement" but also "makes it possible to solve more challenging structures because fewer solutions are allowed". This advantage, however, is certainly not specific to the "pseudo-subtomogram" data representation proposed here. Indeed, this strategy was originally proposed in the context of single-particle cryo-EM by Jiang et al., 2001 (https://doi.org/10.1006/jsbi.2001.4376) and was first used for processing cryo-ET data by Förster et al., 2005 (https://doi.org/10.1073/pnas.0409178102), followed by many others. This point should be contextualized and the appropriate references included.
This is a misunderstanding. We are not claiming that performing local orientational searches is new in our approach; only the option to arbitrarily pre-orient the pseudo-subtomograms is new. This makes expressing local angular searches in terms of the Euler angles more straightforward. We have completely rewritten this section to reflect this more clearly; it now reads:
“Pre-oriented priors
Many proteins are organised in imperfect 2D arrays inside the tomograms, for example inside membranes or as part of capsid-like structures. Often, the individual protein molecules inside these arrays exhibit limited rotational freedom with respect to the surface normal of the array, although they may be able to rotate freely around that normal. This knowledge is often exploited in subtomogram averaging approaches through local orientational searches (e.g. see Förster et al., 2005). This not only accelerates the refinement, as fewer orientations need to be evaluated; it also makes it possible to solve more challenging structures because fewer solutions are allowed. In RELION, local orientational searches are implemented as Gaussian priors on the Cartesian translations and on the three Euler angles that describe rotations (Scheres, 2012). One advantage of using pseudo-subtomogram alignment is that the coordinate system of the pseudo-subtomograms themselves can be chosen arbitrarily. By default, pseudo-subtomograms are created in the same orientation as the tomogram, but the user can choose to orient them in a more meaningful way. For example, by constructing the pseudo-subtomograms with their Z-axis parallel to the 2D array, using a rotational prior of approximately 90° on the tilt angle will limit the amount of rocking of the particles inside the array, while avoiding singularities in the definition of the Euler angles that occur when the tilt angle is close to 0°.”
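To illustrate how such a rotational prior acts, here is a minimal sketch (not RELION code; the function name and the 10° width are hypothetical choices for illustration):

```python
def log_tilt_prior(tilt_deg, mu_deg=90.0, sigma_deg=10.0):
    """Unnormalised log of a Gaussian prior on the Euler tilt angle.

    With pseudo-subtomograms constructed with their Z-axis parallel to
    a 2D array, centring the prior at 90 degrees penalises rocking of
    the particles out of the plane of the array.
    """
    d = tilt_deg - mu_deg
    return -0.5 * (d / sigma_deg) ** 2

print(log_tilt_prior(90.0))   # maximum of the prior: no penalty in-plane
print(log_tilt_prior(110.0))  # -2.0: rocking by 20 degrees is penalised
```

During a local search, this log-prior would simply be added to the data log-likelihood of each candidate orientation, down-weighting orientations far from the surface normal.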
(3) Results on the HIV-1 Gag benchmark dataset (EMPIAR-10164) have two components:
(1) processing of a "standard" subset consisting of 5 tilt series, and (2) processing of the entire dataset (43 tilt series). As far as I know, the best reconstruction obtained using the five tilt series is a 3.3 Å map obtained using emClarity (https://www.ebi.ac.uk/emdb/EMD-13354). Since only lower-resolution results obtained with NovaCTF (3.9 Å) and Warp (3.8 Å) are cited in the text, my guess is that the authors were not aware of this newer result. Given that the resolution of the emClarity map (3.3 Å) is similar to the one presented here (3.2 Å), it would be useful to compare their reconstruction against emClarity's EMD-13354 (both in terms of map features and FSC).
We have included a reference to the new emClarity map in the Results section on the 5-tomogram HIV dataset. A comparison between our map and the emClarity map is now shown in Figure 1—figure supplement 2. The RELION map is better than the emClarity map, which used a different, more optimistic mask for its original resolution estimation.
(4) When processing the entire dataset, a 3.0 Å resolution map was obtained, which matches the resolution obtained by Warp/M [24]. This result, however, is "not shown" (page 5). Since this is the only benchmark dataset analyzed in this study, and given that many other packages have analyzed the same data, this map should be presented together with the corresponding FSC curve and data processing details, and comparisons should be made against M's 3.0 Å EMD-11655 map.
We now show these results and a comparison with the map from M in Figure 1—figure supplement 3. The two maps are very similar.
(5) No experiments are presented to validate the ability of the approach to correct for higher-order aberrations. Since "higher-order aberration correction will probably be most useful for reconstructions that extend beyond 3 Å resolution", why didn't they validate their approach using one of the sub-3 Å cryo-ET datasets available on EMPIAR (e.g., EMPIAR-10491)?
Higher-order aberration correction has now been used on a sub-2 Å resolution apoferritin test dataset from a different collaborator. "Unfortunately", their scope was so well aligned that aberration correction did not lead to an improvement, even at this resolution. Therefore, and because these collaborators are planning to write their own publication about these results, we decided not to include these data in our revision.
We merely describe aberration correction for completeness and to make our potential readers aware of the available tools, as we do not intend to write a separate paper on this functionality. We also note that the application of fast, multi-shot data acquisition techniques for tomography that use beam shifts (like FISE and BISECT) may lead to wider applicability in the future, or at least for those who do not align their microscope optimally…
(6) It would be useful to include a table with a summary of all the relevant data collection/data processing parameters (e.g., pixel size, detector, number of tilt series and subvolumes, symmetry, resolution, number of refinement rounds, etc.) for all datasets analyzed in this study.
We now include data acquisition details for each data set in Table 1.
(7) Only the 3.8 Å map for the COPII inner coat has been deposited in the EMDB. All other maps and related metadata need to be deposited as well.
The 5-tomogram (EMD-16207) and 43-tomogram (EMD-16209) maps of the HIV capsid lattice, and the map and model for the Caulobacter crescentus S-layer (EMD-16183, PDB 8bqe), have now been submitted. We have also submitted the model for the new COPII map (PDB 8bsh).
(8) Computational complexity is a very important topic in the context of SPT, but no details are included in the text other than the following statement: "Compared to M, RELION-4.0 uses computationally more efficient algorithms that do not require the computational power of a GPU". Does this imply that computationally inefficient algorithms require the use of GPUs? If the authors want to comment on computational complexity (which I think would be very important to do), actual numbers need to be presented, such as running times, compute resources used, storage, etc.
We did not imply this. We have rephrased the statement about GPUs to:
“Compared to M, RELION-4.0 uses computationally more efficient algorithms; M uses GPUs to accelerate the calculations.”
To give better insights into the computational costs, we now include details of image processing requirements for all three data sets in Table 2.
(9) Regarding the use of a "true 2D approach" vs. "pseudo-volumes" (page 6), the following claim is made: "For example, the current implementation could be used to process pairs of tilted images, but a 2D implementation would be more efficient". Does this refer to efficiency in computing terms, storage, or both? Also, the authors should comment on the storage requirements of their 3D approach. Since the pseudo-tomograms have three components (Eqs. 8–10), does this mean that they require 3× more storage compared to traditional subtomogram averaging? This would be an important practical consideration for prospective users of this tool and should be adequately discussed.
Yes, this is a good point; it would affect storage too, as the CTF² and multiplicity volumes take up space as well. To explain this better, the revised text now reads:
“The use of 3D pseudo-subtomograms allowed reusing existing code for subtomogram averaging in RELION, while input stacks of 2D images would have required significant software development efforts. Nevertheless, in the future we might still choose to implement a true 2D version of the code, which would be more efficient, both in terms of processing time and disk storage requirements. In cases where the number of tilt images is small in comparison to the box size, fewer Fourier pixels need to be examined in a stack of 2D images than in a pseudo-subtomogram, with a corresponding decrease in processing time. Moreover, the likelihood calculation in the 3D pseudo-subtomogram approach requires separate storage of the accumulated squares of the CTFs and the corresponding multiplicity terms. In contrast, in the 2D approach, only the 2D images need to be stored, as CTF parameters can be calculated on-the-fly and there is no need for a multiplicity term, giving a corresponding decrease in storage requirements. However, if one were to collect tilt series with very fine angular increments, or in a continuous manner, then the current implementation may still be preferable.”
(10) In general, the use of the term "particle polishing" could lead to misinterpretations, since it means different things in SPA and tomography. In SPA, "polishing" refers to the analysis of intermediate video frames (which also exist in tomography), but the present study doesn't consider them at all. Also, I noticed that the term "frames" is used interchangeably to refer to SPA "video frames" and "particle tilts" in tomography, which adds to the confusion. For example, the term "frame alignment" is mentioned twice on page 6, but I believe the authors are really referring to "tilt image alignment" instead. This should be clarified, and the terms "frame" and "tilt" should be used consistently throughout the manuscript.
One complication here is that the terms "frame alignment" and "polishing" are already used on the RELION-4.0 GUI and in its documentation. Still, we recognize the ambiguity of the word "frame" in our manuscript. Therefore, in all instances of the word "frame", we explicitly refer to either "video frame" or "tilt series frame" in the revised version.
That said, the program that performs the "frame alignment" can, in fact, also work with the video frames of the tilt series images. One problem here is that the regularization of the motions is not done separately for the video frames within one tilt image and the video frames of the other tilt images. This may explain why preliminary tests with the individual video frames did not yield worthwhile improvements. In the end, we decided not to expose the video-frame functionality to the end user. We now explicitly mention in the Methods section:
“We also note that Bayesian Polishing in SPA describes particle motions between individual video frames. Although our approach for tomography can also consider video frames, the current implementation uses the same regularization of particle motions between video frames within each tilt image as between the video frames of other tilt images. Because preliminary tests showed limited benefits in considering the video frames in this manner, only the functionality to model particle motions between the tilt series images was exposed on the GUI.”
(11) For the Caulobacter crescentus S-layer reconstruction, the new cryo-ET map corresponding to the inner domain of the S-layer seems to have even better resolution than the SPA reconstruction reported in the original work [26] (3.5 Å vs. 3.7 Å for EMD-10389). Since the authors say these maps are in "excellent agreement", a quantitative comparison should be included. Was the second LPS binding site visible in the 3.7 Å SPA map? In Figure 2d, the fit between LPS and the cryo-ET density (right-hand side) seems to be of lower quality (i.e., no density for parts of the LPS model is seen). Without having access to the maps, it is not possible to properly evaluate these results.
All maps have now been submitted to the EMDB (also see point 7), and we have now included a comparison with the previous 3.7 Å map in Figure 2—figure supplement 1. Please note that the 3.7 Å cryo-EM single-particle map was produced from data collected on an in vitro reconstituted specimen, rather than from the S-layer on native membranes on cells. The second LPS binding site was never observed in vitro, illustrating why in situ imaging is vital for obtaining physiologically relevant insights.
(12) The paper is well-written and clear, but the description of the new concepts and the evaluation of the method are somewhat minimalistic. While the following additions may not be absolutely necessary, they would facilitate better understanding by a broader user community.
(i) A graph describing the workflow as implemented in RELION, e.g., as a flow chart, would increase the readability of the paper.
(ii) A figure visually describing the pseudo-subtomograms would make the concept clearer to the reader.
(iii) A more in-depth description of the input files (alignment files from IMOD, particle lists, and orientations from Dynamo/elsewhere) needed to perform alignments and averaging in RELION using pseudo-subtomograms would be beneficial to the community. This could be incorporated into a flow chart as suggested in (i).
(13) Along the same lines, the authors do not include any description of the particle-picking procedures or the software requirements. This seems important, as the authors state that "(particles) relative defoci are known from the geometry of the tilt series and the known 3D positions of the particles in the tomogram".
This paper describes the 3D pseudo-subtomogram approach with its correspondingly modified refinement algorithm, as well as the new functionality to perform CTF refinement and tilt series frame alignment in RELION-4.0. Unfortunately, RELION-4.0 does not yet provide a smooth pipeline for the entire processing procedure that would be captured well in a flow chart. Much improved import procedures and a smoother pipeline are the topic of ongoing work, which will be implemented in RELION-4.1. We have modified the Discussion to reflect this more clearly:
“However, tilt series alignment, tomogram reconstruction and particle picking are not yet part of the RELION workflow. Efforts to also implement solutions for those steps in a single tomography processing pipeline are ongoing and will be part of future RELION releases. Meanwhile, current import procedures rely on specific preprocessing operations in IMOD (Kremer et al., 1996), and particle coordinate conversion tools for use in RELION-4.0 are available for a range of third-party software packages (Pyle et al., 2022).”
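As an aside on the defocus statement quoted in point (13), the underlying geometry can be sketched as follows (a toy model for illustration only, not RELION code; the function name is hypothetical, and sign conventions and the tilt-axis orientation differ between packages):

```python
import math

def particle_defocus(tilt_defocus, pos, tilt_deg):
    """Toy per-particle defocus for a tilt about the Y axis.

    pos = (x, y, z): particle coordinates (in Angstrom) relative to the
    tilt axis. The particle's depth along the beam direction at this
    tilt offsets the nominal defocus estimated for the whole tilt image.
    """
    x, y, z = pos
    t = math.radians(tilt_deg)
    depth = z * math.cos(t) - x * math.sin(t)
    return tilt_defocus + depth

# At zero tilt, only the particle's height z in the tomogram matters:
print(particle_defocus(30000.0, (500.0, 0.0, 800.0), 0.0))  # 30800.0
```

This is why, once the tilt series geometry and the 3D particle positions are known, the relative defoci of the particles come for free and only the global (per-tilt-image) defocus needs to be refined.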
(14) In section 2.2 it is briefly discussed that rotational priors can be used to constrain orientational searches. As this is often very useful for particle alignment in cryo-ET, this feature and its implementation should be described in more detail.
Performing local angular searches is done using RELION's standard Gaussian-shaped priors on the Euler angles, which have been described previously. A reference to the corresponding paper has been inserted. Please also see our response to point (2).
(15) The progress in map quality is only shown by improvements to the resolution in the FSC curves for the test datasets. It would be beneficial to also show the initial and intermediate maps for at least one of the test cases, so that the reader can better assess the improvement in map quality. Overall maps should be shown to evaluate performance at the "whole particle" level, given considerations of flexibility (especially in cases where focused refinement was used). Corresponding local resolution maps would be informative in such cases.
We have inserted maps describing the stepwise improvement for the HIV immature capsid data set in Figure 1—figure supplement 1.
(16) For the immature HIV capsid data, the FSC curves are only shown for the small subset of the data. As it is an important point that the RELION pseudo-subtomogram approach can reach the same resolution as previous software such as M, the data should also be shown for the full dataset (the authors write "data not shown").
This has been done. See point (3).
(17) In general, a proper comparison to previous approaches/results, perhaps by providing the maps for comparison in the figures, would be helpful for the reader to evaluate which approach to choose, and based on what expectations/biological system.
See points (3), (4) and (11).
(18) In section 4 it is briefly discussed that particle rotation is not modeled in the per-particle motion correction. The authors draw parallels to single particles and conclude that this might be an issue at resolutions better than 2 Å. However, could the authors comment on whether the difference in dose and time frame of data collection, as well as the thickness of the sample, may affect the assumption that this is only an issue at higher resolution?
The lower dose in tilt series images and the thicker samples will only make it more difficult to also model the rotations, as both decrease the signal-to-noise ratios in the images. The only advantage of tomographic data is that the distinct tilts provide complementary views of the same structure, which might benefit the rotational alignments.
(19) Many subtomogram averages are resolved at a more moderate resolution (e.g. 8–20 Å) than the maps presented in this study, for a variety of reasons. It would be beneficial for the reader if the authors could comment on/discuss which optical and geometrical parameters can be confidently modeled at lower resolution, and whether they improve resolution.
One would expect that the shifts of the particles (during subtomogram alignment, but also during tilt series frame alignment) would still be reasonably well defined at such low-to-medium resolutions. Defocus refinement would probably be more difficult.
(20) Could the authors comment on whether it is possible to extract the refined optical and geometrical parameters for a set of particles from a tilt series, and apply them to a different set of particles from the same tilt series (or to the full tilt series to obtain a better quality tomogram for new particle picking).
This is not yet possible, but we have plans to implement this functionality in the future.
(21) Along the same line: would it be possible to refine two distinct species (e.g. two distinct conformations of the same protein, or two completely unrelated proteins) together in a multi-species refinement of optical and geometrical parameters? As the overall tilt series alignment and imaging parameters are the same, this might aid the refinement of lower-abundance species.
Not yet; also see point (20).
(22) On a more conceptual level, the reasoning for separating the SPA/cryo-ET workflows completely in the software package remains unclear to me, and it would be valuable if the authors could briefly discuss this.
The workflows are not separated completely. In fact, their implementation in RELION-4.0 is very much intertwined. This will aid newcomers to the field of tomography, who may already be familiar with SPA processing in RELION. The paper already makes this point.
(23) Can the authors comment on the statement "The data model assumes independent Gaussian noise on the Fourier components of the cryo-EM images of individual particles" with respect to the more complex in situ data, where noise is expected to be more structured?
In neither SPA nor tomography is this assumption perfect. In both modalities, neighboring particles will introduce noise that is localized in real space, and for which correlations thus exist in Fourier space. However, taking such correlations into account would be computationally difficult and expensive.
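For reference, the quoted data model corresponds to a log-likelihood of the following form (a minimal numpy sketch under the independence assumption; the function and variable names are ours, not the actual RELION implementation):

```python
import numpy as np

def log_likelihood(X, ctf, proj, sigma2):
    """Log-likelihood (up to an additive constant) of an observed
    Fourier-space particle image X, given the CTF-modulated projection
    of the reference, under independent Gaussian noise of variance
    sigma2 per Fourier component."""
    diff = X - ctf * proj
    return float(-0.5 * np.sum(np.abs(diff) ** 2 / sigma2))

rng = np.random.default_rng(0)
proj = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
ctf = np.full((8, 8), 0.8)
# A noise-free observation attains the maximum (zero) log-likelihood:
print(log_likelihood(ctf * proj, ctf, proj, sigma2=1.0))
```

Structured noise from neighbouring particles would introduce off-diagonal covariance terms between Fourier components; the sum above implicitly assumes a diagonal covariance, which is what makes the computation tractable.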
https://doi.org/10.7554/eLife.83724.sa2
Article and author information
Author details
Funding
UK Research and Innovation (MC_UP_A025_1013)
 Sjors HW Scheres
UK Research and Innovation (MC_UP_1201/16)
 John AG Briggs
European Research Council (ERC-CoG-2014 grant 648432)
 John AG Briggs
European Research Council (ERC-StG-2019 grant 852915)
 Giulia Zanetti
Swiss National Science Foundation (205321_179041/1)
 Daniel CastañoDíez
UK Research and Innovation (BBSRC grant BB/T002670/1)
 Giulia Zanetti
European Research Council (ERC-AdG-2015 grant 692726)
 Jasenko Zivanov
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We are grateful to the MRC-LMB EM facility for help with data acquisition and to Jake Grimmett, Toby Darling, and Ivan Clayson for help with high-performance computing. This work was funded by the UK Research and Innovation (UKRI) Medical Research Council (MC_UP_A025_1013 to SHWS; and MC_UP_1201/16 to JAGB), the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (ERC-CoG-2014, grant 648432, MEMBRANEFUSION to JAGB; and ERC-StG-2019, grant 852915, CRYTOCOP to GZ); the Swiss National Science Foundation (grant 205321_179041/1 to DCD), the Max Planck Society (to JAGB), and the UKRI Biotechnology and Biological Sciences Research Council (grant BB/T002670/1 to GZ). TAMB is a recipient of a Sir Henry Dale Fellowship, jointly funded by the Wellcome Trust and the Royal Society (202231/Z/16/Z). JZ was partially funded by the European Union’s Horizon 2020 research and innovation program (ERC-AdG-2015, grant 692726, GlobalBioIm, to Michael Unser). TAMB thanks the Vallee Research Foundation, the Leverhulme Trust, and the Lister Institute of Preventative Medicine for support.
Senior Editor
 Volker Dötsch, Goethe University, Germany
Reviewing Editor
 Edward H Egelman, University of Virginia, United States
Reviewer
 Alberto Bartesaghi, Duke University, United States
Publication history
 Preprint posted: February 28, 2022 (view preprint)
 Received: September 26, 2022
 Accepted: December 4, 2022
 Accepted Manuscript published: December 5, 2022 (version 1)
 Accepted Manuscript updated: December 6, 2022 (version 2)
 Version of Record published: January 5, 2023 (version 3)
Copyright
© 2022, Zivanov et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.