Abstract
Untargeted metabolomic profiling through liquid chromatography-mass spectrometry (LC-MS) measures a vast array of metabolites within biospecimens, advancing drug development, disease diagnosis, and risk prediction. However, the low throughput of LC-MS poses a major challenge for biomarker discovery, annotation, and experimental comparison, necessitating the merging of multiple datasets. Current data pooling methods encounter practical limitations due to their vulnerability to data variations and hyperparameter dependence. Here we introduce GromovMatcher, a flexible and user-friendly algorithm that automatically combines LC-MS datasets using optimal transport. By capitalizing on feature intensity correlation structures, GromovMatcher delivers superior alignment accuracy and robustness compared to existing approaches. This algorithm scales to thousands of features while requiring minimal hyperparameter tuning. Applying our method to experimental patient studies of liver and pancreatic cancer, we discover shared metabolic features related to patient alcohol intake, demonstrating how GromovMatcher facilitates the search for biomarkers associated with lifestyle risk factors linked to several cancer types.
Introduction
Untargeted metabolomics is a powerful analytical technique used to identify and measure a large number of metabolites in a biological sample without preselecting targets Patti (2011). This approach allows for a comprehensive overview of an individual’s metabolic profile, provides insights into the biochemical processes involved in cellular and organismal physiology Wishart (2019); Pirhaji et al. (2016), and allows for the exploration of how environmental factors impact metabolism Rappaport et al. (2014); Bedia (2022). It creates new opportunities to investigate health-related conditions, including diabetes Wang et al. (2011), inflammatory bowel diseases Franzosa et al. (2019), and various cancer types Loftfield et al. (2021); Li et al. (2020). However, a major challenge in biomarker discovery, metabolic signature identification and other untargeted metabolomic analyses lies in the low throughput of experimental data, necessitating the development of efficient pooling algorithms capable of merging datasets from multiple sources Loftfield et al. (2021).
A common experimental technique in untargeted metabolomics is liquid chromatography-mass spectrometry (LC-MS) which assembles a list of thousands of unlabeled metabolic features characterized by their mass-to-charge ratio (m/z), retention time (RT) Zhou et al. (2012), and intensity across all biological samples. Combining LC-MS datasets from multiple experimental studies remains challenging due to variation in the m/z and RT of a feature from one study to another Zhou et al. (2012); Ivanisevic and Want (2019). This problem is further compounded by differing instruments and analytical protocols across laboratories, resulting in seemingly incompatible metabolomic datasets.
Manual matching of metabolic features can be a laborious and error-prone task Loftfield et al. (2021). To address this challenge, several automated methods have been developed for metabolic feature alignment. One such method is MetaXCMS, which matches LC-MS features based on user-defined m/z and RT thresholds Tautenhahn et al. (2011). More advanced tools use information on feature intensities measured in samples. For instance, PAIRUP-MS uses known shared metabolic features to impute the intensities of all features from one dataset to another Hsu et al. (2019). MetabCombiner Habra et al. (2021) and M2S Climaco Pinto et al. (2022) compare average feature intensities, along with their m/z and RT values, to align datasets without requiring extensive knowledge of shared features. These automated alignment methods have accelerated our ability to pool and annotate datasets as well as extract biologically meaningful biomarkers. However, they demand substantial fine-tuning of user-defined parameters and ignore correlations among metabolic features which provide a wealth of additional information on shared features.
Here we introduce GromovMatcher, a user-friendly, flexible algorithm which automates the matching of metabolic features across experiments. The main technical innovation of GromovMatcher lies in its ability to incorporate the correlation information between metabolic feature intensities, building upon the powerful mathematical framework of computational optimal transport (OT) Peyré et al. (2019); Villani (2021). OT has proven effective in solving various matching problems and has found applications in multiomics analysis Demetci et al. (2022), cell development Schiebinger et al. (2019); Yang et al. (2020), and chromatogram alignment Skoraczynski et al. (2022). Here we leverage the Gromov-Wasserstein (GW) method Mémoli (2011); Solomon et al. (2016), which matches datasets based on their distance structure and has been seminally applied to spatial reconstruction problems in genomics Nitzan et al. (2019). GromovMatcher builds upon the GW algorithm to automatically uncover the shared correlation structure among metabolic feature intensities while also incorporating m/z and RT information in the final matching process.
To assess the performance of GromovMatcher, we systematically benchmark it on synthetic data with varying levels of noise, feature overlap, and data normalizations, outperforming prior state-of-the-art methods of metabCombiner Habra et al. (2021) and M2S Climaco Pinto et al. (2022). Next we apply GromovMatcher to align experimental patient studies of liver and pancreatic cancer to a reference dataset and associate the shared metabolic features to each patient’s alcohol intake. Through these efforts, we demonstrate how GromovMatcher data pooling improves our ability to discover biomarkers of lifestyle risk factors associated with several types of cancer.
Results
GromovMatcher algorithm
GromovMatcher uses the mathematical framework of OT to find all matching metabolic features between two untargeted metabolomic datasets (Fig. 1). It accepts two LC-MS datasets with possibly different numbers of metabolic features and samples. Each feature, fxi in Dataset 1 and fyj in Dataset 2, is identified by its m/z, RT, and vector of feature intensities across samples (Fig. 1a). The primary tenet of GromovMatcher is that shared metabolic features have similar correlation patterns in both datasets and can be matched based on the distances/correlations between their feature intensity vectors. Specifically, GromovMatcher computes the pairwise distances between the feature intensity vectors of each metabolic feature in a dataset and saves them into a distance matrix, one per dataset (Fig. 1b). In practice, we use either the Euclidean distance or the cosine distance (one minus the correlation) to perform this step (Methods). The resulting distance matrices contain information about the feature intensity similarity within each study. Using optimal transport, we can deduce shared subsets of metabolic features in both datasets which have corresponding feature intensity distance structures.
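As a minimal numpy sketch of this first step (toy data; the shapes, function name, and metric choices here are illustrative, not the actual implementation):

```python
import numpy as np

def intensity_distance_matrix(X, metric="cosine"):
    """Pairwise distances between feature intensity vectors.

    X has shape (p, n): p metabolic features measured across n samples.
    "cosine" is one minus the Pearson correlation between features;
    "euclidean" is the plain L2 distance between intensity vectors.
    """
    if metric == "cosine":
        return 1.0 - np.corrcoef(X)
    if metric == "euclidean":
        diff = X[:, None, :] - X[None, :, :]      # (p, p, n) pairwise differences
        return np.sqrt((diff ** 2).sum(axis=-1))
    raise ValueError(f"unknown metric: {metric}")

# Toy data: 5 features, 100 samples; one such matrix is built per dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 100))
D = intensity_distance_matrix(X)                  # (5, 5), symmetric, zero diagonal
```

One distance matrix is computed per dataset, and only these matrices (plus m/z and RT) enter the matching step.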
OT was originally developed to optimize the transportation of soil for the construction of forts Monge (1781) and was later generalized through the language of probability theory and linear programming Kantorovich (2006), leading to efficient numerical algorithms and direct applications to planning problems in economics. The ability of OT to efficiently match source to target locations found applications in data science for the alignment of distributions Courty et al. (2017); Alvarez-Melis et al. (2019) and was generalized by the Gromov-Wasserstein (GW) method Peyré et al. (2016); Alvarez-Melis and Jaakkola (2018) to align datasets with features of differing dimensions.
In practice, a sizeable fraction of the metabolic features measured in one study may not be present in the other. Hence, in most cases only a subset of features in both datasets can be matched. Recent GW formulations for unbalanced matching problems Sejourne et al. (2021) allow for matching only subsets of metabolic features with similar intensity structures (Fig. 1c). To incorporate additional feature information, we modify the optimization objective of unbalanced GW to penalize feature matches whose m/z differences exceed a fixed threshold (Methods, Appendix 1). The optimization of this objective computes a coupling matrix where each entry indicates the level of confidence in matching metabolic feature fxi in Dataset 1 to fyj in Dataset 2.
Differences in experimental conditions can induce variations in RT between datasets that can be nonlinear and large in magnitude Zhou et al. (2012); Climaco Pinto et al. (2022); Habra et al. (2021). In the spirit of previous methods for LC-MS batch or dataset alignment Smith et al. (2006); Brunius et al. (2016); Liu et al. (2020); Vaughan et al. (2012); Habra et al. (2021); Climaco Pinto et al. (2022); Skoraczynski et al. (2022), the learned coupling is used to estimate a nonlinear map (drift function) between RTs of both datasets by weighted spline regression, which allows us to filter unlikely matches from the coupling matrix to obtain a refined coupling matrix (Fig. 1d, Methods). An optional thresholding step removes matches with small weights from the coupling matrix. The final output of GromovMatcher is a binary matching matrix M where Mij is equal to 1 if features fxi and fyj are matched and 0 otherwise. Throughout the paper, we refer to the two variants of GromovMatcher, with and without the optional thresholding step as GMT and GM respectively.
Validation on ground-truth data
We first evaluate the performance of GromovMatcher using a real-world untargeted metabolomics study of cord blood across 499 newborns containing 4,712 metabolic features characterized by their m/z, RT, and feature intensities Alfano et al. (2020). To generate ground-truth data, we randomly divide the initial dataset into two smaller datasets sharing a subset of features (Fig. 2). We simulate diverse acquisition conditions by adding noise to the m/z and RT of dataset 2, and to the feature intensities in both datasets. Moreover, we introduce an RT drift in dataset 2 to replicate the retention time variations observed in real LC-MS experiments (Methods and Materials). For comparison, we also test M2S Climaco Pinto et al. (2022) and metabCombiner Habra et al. (2021), both of which use m/z, RT, and median or mean feature intensities to match features (Fig. 3). MetabCombiner is supplied with 100 known shared metabolic features to automatically set its hyperparameters, while M2S parameters are manually fine-tuned to optimize the F1-score in each scenario (Appendix 2). We assess the performance of GM, GMT, metabCombiner, and M2S across 20 randomly generated dataset pairs in terms of their precision (fraction of true matches among the detected matches) and recall/sensitivity (fraction of true matches detected), averaged across the 20 pairs.
To investigate how the number of shared features affects dataset alignment, we generate pairs of LC-MS datasets with low, medium, and high feature overlap (25%, 50%, and 75%), while maintaining a medium noise level (Methods). Here we find that GM and GMT generally outperform existing alignment methods, with a recall above 0.95 while metabCombiner and M2S tend to be less sensitive (Fig. 3b). All methods drop in precision as the feature overlap is decreased, with GM and GMT still maintaining an average precision above 0.8.
Next we evaluate all four methods at low, moderate, and high noise levels for pairs of datasets with 50% overlap in their features (Methods). Our results show that GMT, GM, and M2S maintain an average recall above 0.89, while metabCombiner’s recall drops below 0.6 for high noise. At large noise levels, RT drift estimation becomes more challenging, leading to a higher rate of false matches between metabolites (lower precision) for all four methods (Fig. 3 — figure supplement 1). Nevertheless, GMT obtains a high average precision and recall of 0.86 and 0.92 respectively.
A notable difference between GM, metabCombiner, and M2S lies in their use of feature intensities. MetabCombiner expects the mean feature intensity rankings to be identical across studies, while M2S assumes that shared features have similar median intensities. In contrast, GM uses both the mean feature intensities and their variances and covariances. In practice, differences in experimental assays or study populations can lead to substantial variation in feature intensities across studies, making matchings based on these statistics less reliable. Centering and scaling the feature intensities to zero mean and unit variance avoids potential biases arising from inconsistent feature intensity magnitudes, while preserving the correlations that GM leverages.
Exploring this further, we test how sensitive all four methods are to centering and scaling of feature intensities. MetabCombiner and M2S are tuned using the same methodology as for non-centered and non-scaled data. For M2S, we match features solely based on their m/z and RT. In this experiment (Figure 3 — figure supplement 2), the absence of intensity magnitude information significantly degrades metabCombiner’s performance and, to a lesser extent, that of M2S. GM and GMT still obtain accurate matchings, owing to their use of correlation structures, which are preserved under centering and scaling.
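The invariance of correlations under this normalization is easy to check on toy data (an illustrative numpy snippet, not part of any of the benchmarked pipelines):

```python
import numpy as np

# Toy data: 6 features whose raw intensities live on very different scales.
rng = np.random.default_rng(1)
X = rng.normal(size=(6, 200)) * rng.uniform(1, 50, size=(6, 1)) + 100.0

# Center and scale each feature intensity vector to zero mean and unit variance.
Z = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Mean intensity information is destroyed, but correlations are untouched.
assert np.allclose(Z.mean(axis=1), 0.0) and np.allclose(Z.std(axis=1), 1.0)
assert np.allclose(np.corrcoef(X), np.corrcoef(Z))
```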
Application to EPIC data
Next, we apply GM, metabCombiner and M2S to align datasets from the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort, a prospective study conducted across 23 European centers. EPIC comprises more than 500,000 participants who provided blood samples at recruitment Riboli et al. (2002). Untargeted metabolomics data were successively acquired in several studies nested within the full cohort.
In the present work, we use LC-MS data from the EPIC cross-sectional (CS) study Slimani et al. (2003) and two matched case-control studies nested within EPIC, on hepatocellular carcinoma (HCC) Stepien et al. (2016, 2021) and pancreatic cancer (PC) Gasull et al. (2019). LC-MS untargeted metabolomic data were acquired at the International Agency for Research on Cancer, making use of the same platform and methodology (Methods). The number of samples and features in each study is displayed in Fig. 4a.
Loftfield et al. (2021) previously matched features from the CS, HCC, and PC studies in EPIC for alcohol biomarker discovery. The authors first identified 205 features (163 in positive and 42 in negative mode) associated with alcohol intake in the CS study. These features were then manually matched by an expert to features in both the HCC and PC studies (Methods). In our analysis, we use these features as a validation set and compare each method’s matchings to the expert manual matchings on this subset. Due to the imbalance between the number of positive and negative mode features in the validation subset, our main analysis focuses on the alignment results of CS with HCC and CS with PC in positive mode. We delegate the matching results between the negative mode studies to Appendix 3.
In this section, we use the same settings for GM as in our simulation study, and do not apply an additional thresholding step. The parameters of metabCombiner and M2S are calibrated using the validation subset as prior knowledge (Appendix 2).
Preliminary analysis of the validation subset reveals inconsistencies in the mean feature intensities (Figure 4 — figure supplement 1), but Figure 4b shows that on centered and scaled data, the 90 expert matched features shared between the CS and HCC studies have similar correlation structures. Hence, to avoid potential errors we center and scale the feature intensities which improves the performance of all three methods tested below (Appendix 3, Appendix 3 — Table 1).
Hepatocellular carcinoma
Here we analyze the quality of the matchings obtained by GM, M2S, and metabCombiner between the CS and HCC datasets in positive mode. Both GM and M2S identify approximately 1000 shared features, while metabCombiner finds fewer, approximately 700 shared features. We refer the reader to Figure 4 — figure supplement 2a for the precise matched feature sizes and details on the agreement between the feature matchings of all three methods.
We evaluate the performance of metabCombiner, M2S, and GM on the validation subset in positive mode (Fig. 4c), which consists of 90 features from the CS study manually matched to features from the HCC study and 73 features specific to the CS study. MetabCombiner demonstrates precise matching but lacks sensitivity. M2S’s precision and recall are comparable with GM, in contrast to its performance on simulated data. This can be attributed to the RT drift shape between the CS and HCC studies (Appendix 2), which is estimated to be close to linear (Figure 4 — figure supplement 3). Because the parameters of M2S are fine-tuned in the validation subset, it is able to learn this linear drift and apply tight RT thresholds to achieve accurate matchings. In contrast to metabCombiner and M2S, the GM algorithm is not given any prior knowledge of the validation subset, and nevertheless demonstrates the highest precision and recall rates of the three methods (Fig. 4c). Figure 4b shows how GM recovers the majority of the expert matched pairs by leveraging the shared correlations.
Pancreatic cancer
Matching features between the CS and PC studies in positive mode, GM and M2S identify approximately 1000 common features, while metabCombiner detects approximately 600 matches (Figure 4 — figure supplement 2b). We examine the performance of all three methods on the validation subset consisting of 66 manually matched features between CS and PC along with 97 features specific to the CS study. As before, GM and M2S have high recall while the recall of metabCombiner is less than 0.5.
A decrease in precision is observed for both GM and M2S compared to the previous CS-HCC matchings. We therefore manually inspect the false positive matches: the set of CS features matched by a method to the PC study but explicitly examined and left unmatched in the expert manual matching. Assessing the GM results, we identify 7 false positive feature matches. Upon secondary inspection, 3 pairs are revealed to be correct matches that were not initially identified in the expert matching. M2S finds 11 false positive matches, which include the 7 false positives recovered by GM. Manual examination of the 4 remaining pairs reveals 2 clear mismatches. These results highlight the advantage of using automated methods for data alignment, as both GM and M2S detect correct matches that were not identified by experts, with GM being more precise than M2S.
Illustration for alcohol biomarker discovery
Loftfield et al. (2021) identified biomarkers of habitual alcohol intake by first performing a discovery step, where they examined the relationship between alcohol intake and metabolic features in the CS study. They then manually matched the significant features in CS to features from the HCC and PC studies, and repeated the analysis with samples from the HCC and PC studies to determine whether the association with alcohol intake persisted. This led to the identification of 10 features possibly associated with alcohol intake (Fig. 5a).
To extend this analysis and illustrate the benefit of GM automatic matching for biomarker discovery, we use GM to pool features from the CS, HCC, and PC studies, and examine the relationship between metabolic features and alcohol intake in the pooled study (Methods and Fig. 5b).
Applying an FDR correction on the pooled study, we identify 243 features associated with alcohol intake, including 185 features consistent with the discovery step of Loftfield et al. (2021), and 55 newly discovered features (Fig. 5c). Using the more stringent Bonferroni correction on the pooled data, we identify 36 features shared by all three studies that are significantly associated with alcohol intake. These features include all 10 features identified in Loftfield et al. (Fig. 5c). These findings highlight the potential benefits of using GM automatic matching for biomarker discovery in untargeted metabolomics data. Additional information regarding the methodology and findings of our GM and Loftfield et al. analyses can be found in Methods and Appendix 3.
Discussion
LC-MS metabolomics has emerged as an increasingly powerful tool for biological and biomedical research, offering promising opportunities for epidemiological and clinical investigations. However, integrating data from different sources remains challenging. To address this issue, we introduce GromovMatcher, a method based on optimal transport that automatically aligns LC-MS data from pairs of studies. Our method exhibits superior performance on both simulated and real data when compared to existing approaches. Additionally, it presents a user-friendly interface with few hyperparameters.
While GromovMatcher is robust to noise and variations in data, it may face limitations when aligning LC-MS studies from populations with different characteristics, where the correlation structures between features may be inconsistent across studies. In this case, the base assumption of GromovMatcher can be relaxed by focusing on subsamples with similar characteristics, as exemplified in a recent study Gomari et al. (2022).
A current limitation is that GromovMatcher does not account for more than two datasets simultaneously, although this can be overcome by aligning multiple studies to a chosen reference dataset, as demonstrated in our biomarker experiments. The extension of Gromov-Wasserstein to multiple distributions Beier et al. (2022) is another promising approach for generalizing GromovMatcher to multiple dataset alignment. Further improvements can be made by incorporating existing knowledge about the studies being matched, such as known shared features, samples in common, or MS/MS data.
The results obtained from GromovMatcher are highly promising, opening the door for various analyses of metabolomic datasets acquired in different experimental laboratories. Here we demonstrated the potential of GromovMatcher in expediting the combination and meta-analysis of data for biomarker and metabolic signature discovery. The matchings learned by GromovMatcher also allow for comparison between experimental protocols by assessing the drift in m/z, RT, and feature intensities across studies. Finally, inter-institutional annotation efforts can directly benefit from incorporating this method to transfer annotations between aligned datasets. Bridging the gap between otherwise incompatible LC-MS data, GromovMatcher enables seamless comparison of untargeted metabolomics experiments.
Methods and Materials
GromovMatcher method overview
GromovMatcher accepts as input two feature tables from separate LC-MS untargeted metabolomics studies. The feature tables for dataset 1 and dataset 2 consist of n1 and n2 biospecimen samples and p1 and p2 metabolic features respectively detected in each study. Features in dataset 1 are given the label fxi for i = 1, …, p1. Every feature is characterized by a mass-to-charge ratio (m/z) denoted by mxi, a retention time (RT) denoted by txi, and a vector of intensities across all samples written as Xi ∈ ℝn1. Similarly, features in dataset 2 are labeled as fyj for j = 1, …, p2 and are characterized by their m/z myj, retention time tyj, and a vector of intensities across all samples Yj ∈ ℝn2.
Our goal is to identify pairs of indexes (i, j) with i ∈ {1, …, p1} and j ∈ {1, …, p2}, such that fxi and fyj correspond to the same metabolic feature. More formally, we aim to identify a matching matrix M ∈ {0, 1}p1×p2 such that Mij = 1 if fxi and fyj correspond to the same feature, hereafter referred to as matched features. Otherwise, we set Mij = 0.
Because the m/z and RT values of metabolomic features are often noisy and subject to experimental bias, our matching algorithm leverages metabolite feature intensities Xi, Yj to produce accurate dataset alignments. The GromovMatcher method is based on the idea that signal intensities of the same metabolites measured in two different studies should exhibit similar correlation structures, in addition to having compatible m/z and RT values. Here we define the Pearson correlation for vectors u, v ∈ ℝn as

corr(u, v) = ⟨u − ū1n, v − v̄1n⟩ / (‖u − ū1n‖ ‖v − v̄1n‖), (1)

where we define

ū = (1/n) Σi ui, ‖u‖ = (Σi ui²)^(1/2), ⟨u, v⟩ = Σi ui vi (2)

as the mean value, Euclidean norm and inner product respectively. If measurements Xi, Yj correspond to the same underlying feature, and similarly, measurements Xk, Yl correspond to the same underlying feature, we expect that

corr(Xi, Xk) ≈ corr(Yj, Yl). (3)
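This definition can be checked directly against numpy's built-in estimator (illustrative snippet; `corr` is a hypothetical helper name):

```python
import numpy as np

def corr(u, v):
    """Pearson correlation from the definition: the inner product of the
    centered vectors divided by the product of their Euclidean norms."""
    uc, vc = u - u.mean(), v - v.mean()
    return (uc @ vc) / (np.linalg.norm(uc) * np.linalg.norm(vc))

rng = np.random.default_rng(2)
u, v = rng.normal(size=100), rng.normal(size=100)
assert np.isclose(corr(u, v), np.corrcoef(u, v)[0, 1])
```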
This idea that the feature intensities of shared metabolites have the same correlation structure in both datasets also holds more generally for distances, under a suitable choice of distance. For example, the correlation coefficient corr(u, v) can be turned into a dissimilarity metric by defining

dcos(u, v) = 1 − corr(u, v), (4)
commonly referred to as the cosine distance. Preservation of feature intensity correlations then trivially amounts to the preservation of cosine distances.
Another classical notion of distance between vectors u, v ∈ ℝn is the normalized Euclidean distance

dE(u, v) = ‖u − v‖ / √n, (5)

which is equal to the cosine distance (up to a constant factor) when the vectors u, v are centered and scaled to have zero mean and a standard deviation of one. The Euclidean distance depends on the magnitude or mean intensity of metabolic features, and hence is a useful metric for matching metabolites as long as these mean feature intensities are reliably collected.
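The equivalence on standardized data can be verified numerically (an illustrative sketch; the population standard deviation is assumed for the scaling):

```python
import numpy as np

def zscore(u):
    return (u - u.mean()) / u.std()          # after this, ||zscore(u)||^2 = n

rng = np.random.default_rng(3)
u, v = rng.normal(size=50), rng.normal(size=50)
uz, vz = zscore(u), zscore(v)

d_eucl = np.linalg.norm(uz - vz) / np.sqrt(len(u))   # normalized Euclidean distance
d_cos = 1.0 - np.corrcoef(u, v)[0, 1]                # cosine distance
assert np.isclose(d_eucl ** 2, 2.0 * d_cos)          # equal up to a constant factor
```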
To summarize, the main tenet of GromovMatcher is that if measurements Xi, Yj correspond to the same feature and Xk, Yl correspond to the same feature, then for suitably chosen distances dx, dy, these distances are preserved

dx(Xi, Xk) ≈ dy(Yj, Yl) (6)
across both datasets. In this paper, the distances dx,dy are taken to be the normalized Euclidean distances in (5). We take care to specify those experiments where the metabolic features X and Y are centered and scaled. In these cases, implicitly the Euclidean distance between normalized feature vectors becomes the cosine distance (4) between the original (unnormalized) feature vectors.
Unbalanced Gromov-Wasserstein
The goal of GromovMatcher is to learn a matching matrix that gives an alignment between a subset of metabolites in both datasets. However, searching over the combinatorially large set of binary matrices would be an inefficient approach for dataset alignment. The mathematical framework of optimal transport Peyré et al. (2019) instead enlarges this space of binary matrices to the set of coupling matrices ∏ ∈ ℝ+p1×p2 with real nonnegative entries. The entries ∏ij with large weights indicate that feature fxi in dataset 1 and feature fyj in dataset 2 are a likely match. Taking inspiration from (6), we minimize the objective function

L(∏) = Σi,j,k,l |dx(Xi, Xk) − dy(Yj, Yl)|² ∏ij ∏kl (7)

to estimate the coupling matrix ∏.
A standard approach is to optimize this objective over all coupling matrices ∏ under the exact marginal constraints ∏1p2 = 1p1 and ∏T1p1 = 1p2. Here 1n is the ones vector of length n, and ∏1p2 and ∏T1p1 denote the row and column sums of the coupling matrix. Objective (7) under these exact marginal constraints defines a distance between the two sets of metabolic feature vectors known as the Gromov-Wasserstein distance Mémoli (2011), a generalization of optimal transport to metric spaces. Note that for pairs Xi, Yj and Xk, Yl for which dx(Xi, Xk) ≈ dy(Yj, Yl), the entries ∏ij, ∏kl are penalized less and hence matches between features fxi, fyj and features fxk, fyl are more favored. In our optimization, we avoid enforcing exact marginal constraints on the marginal distributions ∏1p2 and ∏T1p1 of our coupling matrix, as this would force all metabolites in both datasets to be matched (Appendix 1). However, without any marginal constraints on the coupling ∏, the objective function (7) is trivially minimized by ∏ = 0, leaving all metabolites in both datasets unmatched.
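To make the objective concrete, the quartic loss in (7) can be written out by brute force (a numpy sketch for tiny p1, p2 only; practical solvers never materialize this four-dimensional tensor):

```python
import numpy as np

def gw_loss(Dx, Dy, P):
    """Objective (7): sum_{i,j,k,l} |Dx[i,k] - Dy[j,l]|^2 * P[i,j] * P[k,l]."""
    diff = Dx[:, None, :, None] - Dy[None, :, None, :]   # diff[i,j,k,l] = Dx[i,k] - Dy[j,l]
    return np.einsum("ijkl,ij,kl->", diff ** 2, P, P)

# Matching a distance structure to itself with the identity coupling costs nothing.
rng = np.random.default_rng(4)
D = np.abs(rng.normal(size=(4, 4)))
D = (D + D.T) / 2.0
np.fill_diagonal(D, 0.0)
P_id = np.eye(4)
assert np.isclose(gw_loss(D, D, P_id), 0.0)

# Coupling features with mismatched distance structures is penalized.
assert gw_loss(D, 2.0 * D, P_id) > 0.0
```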
To account for this, we follow the ideas of unbalanced Gromov-Wasserstein (UGW) Sejourne et al. (2021) and add three regularization terms to our objective

UGW(∏) = L(∏) + ρ DKL(∏1p2 ⊗ ∏1p2 ‖ a ⊗ a) + ρ DKL(∏T1p1 ⊗ ∏T1p1 ‖ b ⊗ b) + ε DKL(∏ ⊗ ∏ ‖ (a ⊗ b) ⊗ (a ⊗ b)), (8)

where ρ, ε > 0 and we define a = 1p1, b = 1p2. Here ⊗ denotes the Kronecker product. We define DKL as the Kullback-Leibler (KL) divergence between two discrete distributions p and q by

DKL(p ‖ q) = Σi pi log(pi/qi) − Σi pi + Σi qi, (9)

which measures the closeness of two nonnegative discrete distributions and reduces to the standard KL divergence when p and q are probability distributions.
The first two regularization terms in (8) enforce that the row sums and column sums of the coupling matrix ∏ do not deviate too much from a uniform distribution, leading our optimization to match as many metabolic features as possible. The magnitude of the regularization parameter ρ roughly controls the fraction of metabolites in both datasets that are matched, with large ρ implying that most metabolites are matched across datasets. The final regularization term ε in (8) controls the smoothness (entropy) of the coupling matrix ∏: larger values of ε encourage ∏ to put uniform weights on many of its entries, leading to less precision in the metabolite matches. However, increasing ε also leads to better numerical stability and a significant speedup of the alternating minimization algorithm used to optimize the objective function (Appendix 1). In our implementation, we set ρ and ε to the lowest values under which our optimization converges, namely ρ = 0.05 and ε = 0.005.
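A didactic sketch of the full regularized objective follows; the generalized KL divergence includes mass terms so that it is defined for unnormalized marginals, and the helper names are hypothetical:

```python
import numpy as np

def gen_kl(p, q):
    """Generalized KL divergence for nonnegative arrays (reduces to the
    usual KL divergence when p and q are probability distributions)."""
    p, q = np.asarray(p, float).ravel(), np.asarray(q, float).ravel()
    mask = p > 0
    return (p[mask] * np.log(p[mask] / q[mask])).sum() - p.sum() + q.sum()

def ugw_objective(Dx, Dy, P, rho=0.05, eps=0.005):
    """GW loss plus marginal and entropic KL penalties (brute force)."""
    p1, p2 = P.shape
    a, b = np.ones(p1), np.ones(p2)
    diff = Dx[:, None, :, None] - Dy[None, :, None, :]
    loss = np.einsum("ijkl,ij,kl->", diff ** 2, P, P)
    r, c = P.sum(axis=1), P.sum(axis=0)              # marginals of the coupling
    loss += rho * gen_kl(np.outer(r, r), np.outer(a, a))
    loss += rho * gen_kl(np.outer(c, c), np.outer(b, b))
    loss += eps * gen_kl(np.outer(P, P), np.outer(np.outer(a, b), np.outer(a, b)))
    return loss

# The trivial coupling P = 0 is no longer optimal: the marginal penalties
# charge it the full mass of the target marginals.
Dx, Dy = np.zeros((3, 3)), np.zeros((2, 2))
assert ugw_objective(Dx, Dy, np.zeros((3, 2))) > 0.0
```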
Our full optimization problem can now be written as

∏̂ = argmin∏∈ℝ+p1×p2 UGW(∏). (10)
The UGW objective function is optimized through alternating minimization based on the code of Sejourne et al. (2021) using the unbalanced Sinkhorn algorithm Séjourné et al. (2019) from optimal transport (Appendix 1).
Constraint on m/z ratios
Matched metabolic features must have compatible m/z, so we enforce that ∏ij = 0 whenever |mxi − myj| > mgap, where mgap is a user-specified threshold. Based on prior literature Loftfield et al. (2021); Hsu et al. (2019); Climaco Pinto et al. (2022); Habra et al. (2021); Chen et al. (2021) we set mgap = 0.01 ppm. Note that mgap is not explicitly used in (10) but is rather enforced in each iteration of our alternating minimization algorithm for the UGW objective (Appendix 1).
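In code, this constraint amounts to masking the coupling at every iteration (sketch; here the threshold is applied to the absolute m/z difference, while a ppm-scaled variant would divide by a reference m/z first):

```python
import numpy as np

def mz_mask(mz_x, mz_y, m_gap=0.01):
    """Boolean (p1, p2) mask: True where the m/z of a candidate pair agree."""
    return np.abs(mz_x[:, None] - mz_y[None, :]) <= m_gap

mz_x = np.array([150.050, 220.100, 301.200])
mz_y = np.array([150.055, 301.203, 400.000])
mask = mz_mask(mz_x, mz_y)
# During optimization the coupling is projected onto the mask: P *= mask
```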
Unlike the m/z ratios discussed above, RTs often exhibit a non-linear deviation (drift) between studies so we cannot enforce compatibility of RTs directly in our optimization. Instead, in the following step of our pipeline we ensure matched metabolite pairs have compatible RTs by estimating the drift function and subsequently using it to filter out metabolite matches whose RT values are inconsistent with the estimated drift.
Estimation of the RT drift and filtering
Estimating the drift between RTs of two studies is a crucial step in assessing the validity of metabolite matches and discarding those pairs which are incompatible with the estimated drift.
Let ∏̂ be the minimizer of (10) obtained after optimization. We seek to estimate the RT drift function f : ℝ+ → ℝ+ which relates the retention times of matched features between the two studies. Namely, if feature fxi and feature fyj correspond to the same metabolic feature, then we must have that f(txi) ≈ tyj, where txi and tyj denote their respective RTs.
We propose to learn the drift f through the weighted spline regression

f̂ = argminf∈Bn,k Σi,j ∏̂ij (tyj − f(txi))², (11)

where Bn,k is the set of n-order B-splines with k knots. All pairs in objective (11) are weighted by the coefficients of ∏̂, so that larger weights are given to pairs identified with high confidence in the first step of our procedure. The order of the B-splines was set to n = 3 by default, while the number of knots k was selected by 10-fold cross-validation.
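A weighted spline fit of this kind can be sketched with SciPy's `UnivariateSpline` (illustrative: this class controls flexibility through a smoothing factor `s` rather than an explicit knot count, and the value of `s` below is an arbitrary choice rather than the cross-validated one):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def fit_rt_drift(rt_x, rt_y, weights, k=3, s=0.1):
    """Cubic-spline regression of dataset-2 RTs on dataset-1 RTs,
    weighted by the coupling coefficients of the candidate pairs."""
    order = np.argsort(rt_x)                      # abscissae must be increasing
    return UnivariateSpline(rt_x[order], rt_y[order], w=weights[order], k=k, s=s)

# Toy drift: study-2 RTs are a smooth nonlinear deformation of study-1 RTs.
rng = np.random.default_rng(5)
rt_x = rng.uniform(0.5, 10.0, size=200)
rt_y = rt_x + 0.3 * np.sin(rt_x) + rng.normal(scale=0.01, size=200)
f_hat = fit_rt_drift(rt_x, rt_y, weights=np.ones(200))
```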
Pairs identified as incompatible with the estimated RT drift are then discarded from the coupling matrix. To do this, we first take the estimated RT drift f̂ and the set S = {(i, j) : ∏̂ij > 0} of pairs recovered in ∏̂. We then define the residual associated with (i, j) ∈ S as

rij = tyj − f̂(txi), (12)

where txi and tyj are the RTs of features fxi and fyj. The 95% prediction interval and the median absolute deviation (MAD) of these residuals are given by

PI = 1.96 × std({rij : (i, j) ∈ S}), MAD = median({|rij − median({rkl : (k, l) ∈ S})| : (i, j) ∈ S}), (13)

where |S| is the size of S, and std and median denote the standard deviation and median taken over these |S| residuals. Similar to the approach in Climaco Pinto et al. (2022), we create a new filtered coupling matrix ∏̃ given by

∏̃ij = ∏̂ij if |rij| ≤ rthresh, and ∏̃ij = 0 otherwise, (14)

where rthresh is a given filtering threshold.
where rthresh is a given filtering threshold. Following Habra et al. (2021), the estimation and outlier detection step can be repeated for multiple iterations, to remove pairs that deviate significantly from the estimated drift and to improve the robustness of the drift estimation. In our main algorithm, we use two preliminary iterations where we estimate the RT drift and discard outliers outside of the 95% prediction interval by setting rthresh = PI. We then re-estimate the drift and perform a final filtering step with the more stringent MAD criterion by setting rthresh = 2 × MAD.
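The two-stage outlier filtering can be sketched as follows (toy residuals and simplified PI and MAD formulas; illustrative only, not the authors' implementation):

```python
import numpy as np

# toy matched pairs: the last one is inconsistent with the drift
rt1 = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
rt2 = np.array([1.03, 2.13, 3.14, 4.22, 2.0])
drift = lambda t: 1.05 * t          # stand-in for the fitted spline

keep = np.ones(len(rt1), dtype=bool)
# two passes with the 95% prediction interval, then one with 2 * MAD
for rule in ("PI", "PI", "MAD"):
    res = drift(rt1[keep]) - rt2[keep]
    if rule == "PI":
        r_thresh = 1.96 * np.std(res)                        # ~95% interval
    else:
        r_thresh = 2 * np.median(np.abs(res - np.median(res)))
    keep[keep] = np.abs(res) <= r_thresh
```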
At this stage, it is possible for ∏̃ to still contain coefficients of very small magnitude. As an optional postprocessing step, we discard these coefficients by setting all entries smaller than a fraction τ of the largest entry of ∏̃ to zero, for some user-defined τ ∈ [0,1]. Lastly, a feature from either study could have multiple possible matches, since ∏̃ can have more than one non-zero coefficient per row or column. Although reporting multiple matches can be helpful in an exploratory context, for the sake of simplicity in our analysis, the final output of GromovMatcher returns a one-to-one matching: we only keep those metabolite pairs (i, j) whose entry ∏̃ij is largest in its corresponding row and column. All nonzero entries of ∏̃ which do not satisfy this criterion are set to zero. Finally, we convert ∏̃ into a binary matching matrix M ∈ {0, 1}p1×p2 with ones in place of its nonzero entries, and this final output is returned to the user.
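The final binarization step (optional τ-thresholding, then keeping entries maximal in both their row and column) can be sketched as follows (a simplified stand-in, not the authors' code):

```python
import numpy as np

def binarize(coupling, tau=0.0):
    """Reduce a filtered coupling to a one-to-one binary matching matrix."""
    P = coupling.copy()
    if tau > 0:
        # optional thresholding: drop entries below a fraction tau of the max
        P[P < tau * P.max()] = 0.0
    # keep only entries that are maximal in both their row and their column
    row_max = P == P.max(axis=1, keepdims=True)
    col_max = P == P.max(axis=0, keepdims=True)
    return ((P > 0) & row_max & col_max).astype(int)

P = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.6, 0.0]])
M = binarize(P)
```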
As a naming convention, we use the abbreviation GM for our GromovMatcher method, and use the abbreviation GMT when running GromovMatcher with the optional τ-thresholding step with τ = 0.3.
Metrics for dataset alignment
Every alignment method studied in this paper returns a binary partial matching matrix M ∈ {0, 1}p1×p2 which has at most one nonzero entry in each row and column. Specifically, Mij = 1 if metabolic features i and j in the two datasets correspond to each other and Mij = 0 otherwise. In our simulated experiments, we compare the partial matching M to a known ground-truth partial matching matrix M* ∈ {0, 1}p1×p2.
To do this, we first compute the number of true positives, false positives, true negatives, and false negatives as
TP = Σij 1[Mij = 1, M*ij = 1],  FP = Σij 1[Mij = 1, M*ij = 0],  TN = Σij 1[Mij = 0, M*ij = 0],  FN = Σij 1[Mij = 0, M*ij = 1],
where 1 denotes the indicator function. Then we use these values to compute the precision and recall as
Precision = TP/(TP + FP),   Recall = TP/(TP + FN).
Precision measures the fraction of correctly found matches out of all discovered metabolite matches, while recall, also known as sensitivity, measures the fraction of correctly matched pairs out of all truly matched pairs. These two statistics can be summarized into one metric called the F1-score by taking their harmonic mean
F1 = 2 × Precision × Recall / (Precision + Recall).
These three metrics, precision, recall, and the F1-score, are used throughout the paper to assess the performance of dataset alignment methods, both on simulated data where the ground-truth matching is known, and on the validation subset in EPIC, using results from the manual examination as the ground-truth benchmark.
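These metrics can be computed directly from the two matching matrices; a minimal sketch on a toy example:

```python
import numpy as np

def matching_metrics(M, M_star):
    """Precision, recall and F1-score of a matching M against the truth M_star."""
    tp = np.sum((M == 1) & (M_star == 1))
    fp = np.sum((M == 1) & (M_star == 0))
    fn = np.sum((M == 0) & (M_star == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
    return precision, recall, f1

M      = np.array([[1, 0], [0, 1], [0, 0]])   # one correct, one spurious match
M_star = np.array([[1, 0], [0, 0], [0, 1]])   # ground truth
precision, recall, f1 = matching_metrics(M, M_star)
```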
Validation on simulated data
To assess the performance of GromovMatcher and compare it to existing dataset alignment methods, we simulate realistic pairs of untargeted metabolomics feature tables with known ground-truth matchings. This allows us to analyze the dependence of alignment methods on the number of shared metabolites, the dataset noise level, and feature intensity centering and scaling.
Dataset generation
Our pairs of synthetic feature tables are generated from one real untargeted metabolomics study of 500 newborns within the EXPOsOMICS project, which uses a reversed phase ultra-high-performance liquid chromatography quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS) system in positive ion mode Alfano et al. (2020). The original dataset is first preprocessed following the procedure detailed in Alfano et al. Alfano et al. (2020), resulting in p = 4,712 features measured in n = 499 samples available for subsequent analysis. Features and samples from the original study are then divided into two feature tables of respective sizes (n1, p1) and (n2, p2), with n1 + n2 = n and p1, p2 ≤ p. In order to do this, ⌊n/2⌋ randomly chosen samples from the original study are placed into dataset 1 and the remaining ⌈n/2⌉ samples are placed into dataset 2, where ⌊·⌋ and ⌈·⌉ denote the integer floor and ceiling functions. The features of the original study are randomly assigned to dataset 1, dataset 2, or both, allowing the resulting studies to have both common and study-specific features (Fig. 2). Specifically, for a fixed overlap parameter λ ∈ [0,1], we assign a random subset of ⌊λp⌋ features into both dataset 1 and dataset 2, while the remaining features are divided equally between the two studies such that p1 = p2. We choose λ ∈ {0.25, 0.5, 0.75} corresponding to low, medium and high overlap.
After generating a pair of studies, random noise is added to the m/z, RT and intensity levels of features in dataset 2 to mimic variations in data acquisition across two different experiments. The noise added to each m/z value in study 2 is sampled from a uniform distribution on the interval [−σM, σM] with σM = 0.01 Climaco Pinto et al. (2022). The RTs of dataset 2 are first deviated by a fixed nonlinear drift function, corresponding to a systematic inter-dataset drift Habra et al. (2021); Climaco Pinto et al. (2022); Brunius et al. (2016). A uniformly distributed noise on the interval [−σRT, σRT] is then added to the deviated RTs of dataset 2, with σRT ∈ {0.2, 0.5, 1} (in minutes) corresponding to low, moderate and high variations Climaco Pinto et al. (2022); Habra et al. (2021); Vaughan et al. (2012). Finally, we add Gaussian noise with mean zero and variance σFI to the feature intensities of both studies. This noise perturbs the correlation matrices of dataset 1 and dataset 2, making matching based on feature intensity correlations more challenging. We vary σFI over the set of values {0.1, 0.5, 1}.
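The splitting-and-noise procedure can be sketched as follows (array names are illustrative and the nonlinear RT drift function itself is omitted, since its exact form is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 499, 4712, 0.5                     # samples, features, overlap
sigma_m, sigma_rt, sigma_fi = 0.01, 0.5, 0.5   # medium noise setting

# split samples between the two datasets
samples = rng.permutation(n)
idx1, idx2 = samples[: n // 2], samples[n // 2:]

# a random subset of features goes into both datasets; the rest is split evenly
features = rng.permutation(p)
n_shared = int(lam * p)
shared, rest = features[:n_shared], features[n_shared:]
feat1 = np.concatenate([shared, rest[: len(rest) // 2]])
feat2 = np.concatenate([shared, rest[len(rest) // 2:]])

# uniform noise on m/z and RT of dataset 2, Gaussian noise on intensities
mz2_noise = rng.uniform(-sigma_m, sigma_m, len(feat2))
rt2_noise = rng.uniform(-sigma_rt, sigma_rt, len(feat2))
fi2_noise = rng.normal(0.0, np.sqrt(sigma_fi), (len(idx2), len(feat2)))
```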
Given this data generation process, we test the performance of the four alignment methods (M2S, metabCombiner, GM, and GMT) under the parameter settings described below.
Dependence on overlap
We first assess how the performance of the four methods is affected by the number of metabolic features shared in both datasets. For each value of λ = 0.25, 0.5, 0.75 (low, medium and high overlap), we randomly generate 20 pairs of datasets with noise on the m/z, RT and feature intensities set to σM = 0.01, σRT = 0.5, σFI = 0.5. The precision and recall of each method at low, medium, and high overlap is recorded for each of the repetitions.
Noise robustness
Next we test the robustness to noise of each method by fixing the metabolite overlap fraction at λ = 0.5 and generating 20 random pairs of datasets at low (σRT = 0.2, σFI = 0.1), medium (σRT = 0.5, σFI = 0.5), and high (σRT = 1, σFI = 1) noise levels. Similarly, the precision and recall of each method is saved for each noise level across the 20 repetitions.
Feature intensity centering and scaling
In order to test how all four methods are affected when the mean feature intensities and variance are not comparable across studies, we assess their performance when the feature intensities in both studies are mean centered and standardized to have unit standard deviation across all samples. We again generate 20 random pairs of datasets with medium overlap and medium noise, normalize the feature intensities in each pair of datasets, and compute the precision and recall of each method across the 20 repetitions.
EPIC data
We also evaluate our method on data collected within the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort, an ongoing multicentric prospective study with over 500,000 participants recruited between 1992 and 2000 from 23 centers in 10 European countries, and who provided blood samples at the inclusion in the study Riboli et al. (2002). In EPIC, untargeted metabolomics data were successively acquired in several studies nested within the full cohort.
In the present work, we use untargeted metabolomics data acquired in three studies nested in EPIC, namely the EPIC cross-sectional (CS) study Slimani et al. (2003) and two matched case-control studies nested within EPIC, on hepatocellular carcinoma (HCC) Stepien et al. (2016, 2021) and pancreatic cancer (PC) Gasull et al. (2019), respectively. All data were acquired at the International Agency for Research on Cancer, making use of the same platform and methodology: UHPLC-QTOF-MS (1290 Binary liquid chromatography system, 6550 quadrupole time-of-flight mass spectrometer, Agilent Technologies, Santa Clara, CA) using reversed phase chromatography and electrospray ionization in both positive and negative ionization modes.
In a previous analysis aiming at identifying biomarkers of habitual alcohol intake in EPIC, the 205 features associated with alcohol intake in the CS study were manually matched to features in both the HCC and PC studies Loftfield et al. (2021). The results from this manual matching are presented in Table 1. This matching process was based on the proximity of m/z and RT, using a matching tolerance of ±15 ppm and ±0.2 min, and on the comparison of the chromatograms of features in quality control samples from both studies.
Preprocessing
In the HCC and PC studies, samples corresponding to participants selected as cases in either study (i.e., participants selected in the study because of a diagnosis of incident HCC or PC) are excluded. Indeed, the metabolic profiles of participants selected as controls are expected to be more comparable across studies than those of cases, especially if certain features are associated with the risk of HCC or PC. Apart from this additional exclusion criterion, the untargeted metabolomics data of each study is pre-processed following the steps described in Loftfield et al. Loftfield et al. (2021), to eliminate unreliable features and samples, impute missing values and minimize technical variations in the feature intensity levels.
Alcohol biomarker discovery
Loftfield et al. Loftfield et al. (2021) used the untargeted metabolomics data of the CS, HCC and PC studies in their alcohol biomarker discovery study in EPIC, without being able to automatically match their common features and pool the 3 datasets. Instead, the authors first implemented a discovery step, examining the relationship between alcohol intake and metabolic features measured in the CS study and accounting for multiple testing using a false discovery rate (FDR) correction. This led to the identification of 205 features significantly associated with alcohol intake in the CS study. In order to gauge the robustness of these associations, the authors of Loftfield et al. Loftfield et al. (2021) then implemented a validation step using data from two independent test sets. The first test set was composed of data from the EPIC HCC and PC studies, while the second was derived from the Finnish Alpha-Tocopherol, Beta-Carotene Cancer Prevention (ATBC) study. The 205 features identified in the discovery step were manually investigated for matches in the EPIC test set, and 67 features were effectively matched to features in the HCC or PC study, or both. The authors then evaluated the association between alcohol intake and those 67 features, applying a more conservative Bonferroni correction to determine whether the association with alcohol intake persisted. This step led to the identification of 10 features associated with alcohol intake (Extended Data Fig. 5a). The second test set was then used to determine whether those 10 features were also significant in the ATBC population, which was indeed the case.
To conduct a more in-depth investigation of the matchings produced by the GromovMatcher algorithm, we build upon the analysis previously conducted by Loftfield et al. Loftfield et al. (2021) by exploring potential alcohol biomarkers using a pooled dataset created from the CS, HCC, and PC studies. Our goal is to assess whether pooling the data leads to increased statistical power and allows for the detection of more features associated with alcohol intake. Namely, we generate the pooled dataset by aligning a chosen reference dataset (CS study) with the HCC and PC studies successively using the GM matchings computed in both positive and negative mode (Methods and Extended Data Fig. 5b). Features that are not detected in either the HCC or PC studies are designated as ‘missing’ in the final pooled dataset for samples belonging to the respective studies where the feature is not found.
To evaluate the potential relationship between alcohol consumption and pooled metabolic features, we use a methodology akin to that of Loftfield et al. Loftfield et al. (2021). The self-reported alcohol intake data is adjusted for various demographic and lifestyle factors (age, sex, country, body-mass-index, smoking status and intensity, coffee consumption, and study) via the residual method in linear regression models. Feature intensities are also adjusted for technical variables (plate number and position within the plate) via linear mixed effect models. The significance of the association is assessed using correlation coefficients computed from the residuals for both self-reported alcohol intake and feature intensities. P-values are corrected using either false discovery rate (FDR) or Bonferroni correction to account for multiple testing. Corrected p-values less than 5% are considered significant.
Acknowledgements
We thank Jorn Dunkel for helpful advice on our manuscript. We acknowledge the MIT Super-Cloud and Lincoln Laboratory Supercomputing Center Reuther et al. (2018) for providing HPC resources that have contributed to the research results reported within this paper. G.S. acknowledges support through a National Science Foundation Graduate Research Fellowship under Grant No. 1745302. P.R. is supported by NSF grants IIS-1838071, DMS-2022448, and CCF-2106377. We are grateful to the Principal Investigators of each of the EPIC centers for sharing the data for our experimental application.
Data availability
The LC-MS data used to generate our simulated validation experiments is located at the bottom of the “Files” section in https://www.ebi.ac.uk/metabolights/MTBLS1684/files under filename ‘metabolomics_normalized_data.xlsx’. The EPIC data is not publicly available, but access requests can be submitted to the Steering Committee https://epic.iarc.fr/access/index.php.
All code for the data preprocessing, figure generation, as well as the GromovMatcher algorithm and its comparison to other methods are available at: https://github.com/sgstepaniants/GromovMatcher. Instructions and examples for how to run the GromovMatcher method are provided in the Github repository. The metabCombiner implementation written by the original authors was taken from their Github codebase: https://github.com/hhabra/metabCombiner. The M2S implementation of the original authors was taken from their Github codebase: https://github.com/rjdossan/M2S.
Appendix 1
In this paper, we study how to match metabolic features across two datasets, where Dataset 1 has p1 metabolic features measured across n1 patients and Dataset 2 has p2 metabolic features measured across n2 patients. Our goal is to identify pairs of indexes (i,j) with i ∈ {1,…,p1} and j ∈ {1,…,p2} such that feature i in Dataset 1 and feature j in Dataset 2 correspond to the same metabolic feature. More formally, we aim to identify a matching matrix M* ∈ {0,1}p1×p2 such that M*ij = 1 if feature i in Dataset 1 and feature j in Dataset 2 correspond to the same feature, hereafter referred to as matched features, and M*ij = 0 otherwise. We emphasize that a matching matrix M* can have at most one nonzero entry in each row and column.
Both of the datasets we aim to match are obtained from liquid chromatography-mass spectrometry (LC-MS) experiments. Hence, for Dataset 1 each metabolite i ∈ [p1] is labeled with a mass-to-charge (m/z) ratio mxi as well as a retention time RTxi. Additionally, each metabolite has a vector of intensities across patients denoted by Xi ∈ ℝn1. Similarly, each metabolite j ∈ [p2] in Dataset 2 is labeled by its m/z ratio myj, its retention time RTyj and its vector of intensities across samples Yj ∈ ℝn2.
Correlations and distances between metabolomic features
Features cannot be aligned based on their m/z and RT alone as these are often too inconsistent across studies. Our method is based on the idea that, in addition to their m/z and RT being compatible, the signal intensities of metabolites measured in two different studies should exhibit similar correlation structures, or more generally similar distances between their intensity vectors. In other words, if the feature intensity vectors Xi, Yj correspond to the same underlying feature (M*ij = 1) and similarly Xk, Yl correspond to the same feature (M*kl = 1), then we expect that
corr(Xi, Xk) ≈ corr(Yj, Yl).
Here we define corr(u,v) to be the Pearson correlation coefficient between two feature intensity vectors u,v ∈ ℝn by
corr(u, v) = ⟨u − ū1n, v − v̄1n⟩ / (‖u − ū1n‖ ‖v − v̄1n‖),
where we define
ū = (1/n) Σi ui,   ‖u‖ = √⟨u, u⟩,   ⟨u, v⟩ = Σi ui vi
as the mean value, Euclidean norm and inner product respectively. More generally, with dx and dy denoting two given distances on ℝn1 and ℝn2 respectively, we expect that
dx(Xi, Xk) ≈ dy(Yj, Yl).     (21)
Throughout this paper, we use the normalized Euclidean distance defined for any u,v ∈ ℝn as
d(u, v) = ‖u − v‖ / √n,     (22)
where for dx and dy we take n = n1, n2 respectively. If the signal intensity vectors u,v are mean centered and normalized by their standard deviation as
ũ = (u − ū1n) / std(u),
and likewise for v, then it follows that
d(ũ, ṽ)² = 2(1 − corr(u, v)) = 2 dcos(u, v),     (24)
where we denote dcos(u, v) = 1 − corr(u, v) as the cosine distance. For the purposes of this paper, we will always assume that dx and dy denote the normalized Euclidean distance from (22). As shown above, on centered and scaled data this distance is implicitly a monotone function of the cosine distance from (24).
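The relation between the normalized Euclidean distance on standardized vectors and the cosine distance can be checked numerically on random data (a sanity check, not part of the pipeline; all names below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.normal(size=100), rng.normal(size=100)
n = len(u)

standardize = lambda x: (x - x.mean()) / x.std()       # center and scale
d = lambda a, b: np.linalg.norm(a - b) / np.sqrt(n)    # normalized Euclidean
d_cos = 1.0 - np.corrcoef(u, v)[0, 1]                  # one minus correlation

# squared normalized Euclidean distance on standardized vectors
lhs = d(standardize(u), standardize(v)) ** 2
rhs = 2.0 * d_cos
```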
The goal of metabolomic feature matching is to learn the binary matching matrix M* that aligns the distances between pairs of features in the most consistent way possible as shown in (21). To formalize this notion into a practical algorithm, we use the mathematical theory of optimal transport Peyré et al. (2019) which we discuss next.
Optimal transport
Optimal transport (OT) applies in the setting where the points Xi and Yj being matched live in the same dimensional space, n1 = n2 = n. It aims to find a matching between each point Xi and its corresponding point Yj such that the sum of distances between matches is minimized. Matches between each pair of points can be stored in a matching matrix M such that Mij = 1 if Xi and Yj are matched, and Mij = 0 otherwise. Again we note that M must have at most one nonzero entry in each row and column to be a valid matching matrix.
Instead of searching over this space of binary matching matrices, optimal transport places masses ai ≥ 0 at all points Xi for i = 1,…,p1 and masses bj ≥ 0 at all points Yj for j = 1,…,p2 and optimizes over the space of probabilistic couplings ∏ which move an amount ∏ij of mass from Xi to Yj. We assume here for simplicity that the sum of masses in both datasets is equal to one and that the coupling ∏ transports all mass from a into b. More formally, optimal transport optimizes over the constrained set of couplings
U(a, b) = {∏ ∈ ℝ+p1×p2 : ∏1p2 = a, ∏ᵀ1p1 = b},
where 1p denotes the all ones vector of length p. In practice, the points Xi and Yj in each dataset are all treated the same and the masses placed on the data are chosen to be uniform, ai = 1/p1 and bj = 1/p2.
The cost function which optimal transport minimizes is the total transport cost, namely the sum of distances between points weighted by the transported mass,
min_{∏ ∈ U(a,b)} Σij ∏ij d(Xi, Yj),
where d(u, v) = ‖u − v‖ is the Euclidean distance. The distance matrix d(Xi, Yj) in the OT objective can be replaced more generally with a cost matrix Cij that is not necessarily a distance matrix. In this case the cost function becomes
min_{∏ ∈ U(a,b)} Σij ∏ij Cij.
When the transport cost Cij is a distance, the OT optimization defines a valid distance metric known as the optimal transport distance between the discrete distributions α = Σi ai δXi and β = Σj bj δYj in ℝn, given by
OT(α, β) = min_{∏ ∈ U(a,b)} Σij ∏ij d(Xi, Yj).
When d(u, v) is Euclidean, this OT distance is also referred to as the L1 optimal transport distance, the Wasserstein 1-distance, or the Earth mover's distance. As formulated, the computation of the optimal transport objective involves an optimization over coupling matrices ∏ which can be solved by linear programming Peyré et al. (2019). The OT optimization problem becomes time consuming for problems with many points p1, p2 ≫ 1. We show in the next section how augmenting this distance with a regularization term leads to a more efficient algorithm for learning the optimal coupling ∏.
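For small problems, this linear program can be solved directly, e.g. with SciPy's `linprog` (a toy instance with three points per dataset; illustrative only):

```python
import numpy as np
from scipy.optimize import linprog

# toy OT problem: uniform masses on three 1-D points per dataset
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.1, 1.1, 2.1])
p1, p2 = len(x), len(y)
a, b = np.full(p1, 1 / p1), np.full(p2, 1 / p2)
C = np.abs(x[:, None] - y[None, :])          # Euclidean costs d(Xi, Yj)

# equality constraints: rows of the coupling sum to a, columns sum to b
A_eq = np.zeros((p1 + p2, p1 * p2))
for i in range(p1):
    A_eq[i, i * p2:(i + 1) * p2] = 1.0       # i-th row sum
for j in range(p2):
    A_eq[p1 + j, j::p2] = 1.0                # j-th column sum
res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
              bounds=(0, None), method="highs")
coupling = res.x.reshape(p1, p2)
```

The optimal coupling here matches each point to its nearest neighbor, transporting mass 1/3 along the diagonal.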
Entropic regularization
Define the Kullback-Leibler (KL) divergence between two positive vectors μ, ν as
DKL(μ, ν) = Σi μi log(μi/νi) − μi + νi.
Given fixed marginals a and b from the previous section, we can define the entropy of a coupling matrix ∏ with respect to these fixed marginals as
DKL(∏, a ⊗ b) = Σij ∏ij log(∏ij/(ai bj)) − ∏ij + ai bj,
where (a ⊗ b)ij = ai bj denotes the outer product. This can be further simplified as
DKL(∏, a ⊗ b) = Σij ∏ij log(∏ij/(ai bj)) − Σij ∏ij + Σij ai bj
= Σij ∏ij log ∏ij − Σij ∏ij log(ai bj)
= H(∏) + log(p1 p2),
where we define H(∏) by
H(∏) = Σij ∏ij log ∏ij.
In the second line of the derivation above, we used the fact that the entries of a, b, and ∏ sum to one, and in the third line we used the fact that the marginals a and b are uniform. Under these assumptions, we see that the KL divergence DKL(∏, a ⊗ b) is independent of the values of the marginals a, b and is equal to H(∏) up to constants.
Although here the general definition of entropy through the KL divergence reduces to the simpler formula of H(∏), in the following sections we will need to extend our analysis to cases when a, b, and ∏ have positive values that do not sum to one (i.e. not distributions). In this context, we will no longer have that DKL(∏, a ⊗ b) = H (∏) + const but we will still be able to use DKL(∏, a ⊗ b) as a general notion of entropy for ∏.
The entropy of a coupling DKL(∏, a ⊗ b) is an important notion because it quantifies how uniform or smooth ∏ is with respect to the product distribution a ⊗ b. In particular, if a and b are set to uniform distributions as commonly done in practice, then DKL(∏, a ⊗ b) is small when ∏ has close to uniform entries and is large otherwise. This notion of smoothness allows us to use DKL(∏, a ⊗ b) as a regularizer in our optimal transport distance as
min_{∏ ∈ U(a,b)} Σij ∏ij Cij + ε DKL(∏, a ⊗ b),
where ε is a small regularization parameter. Note that here we have denoted the transport cost matrix by C, which is not necessarily a distance matrix. The introduction of the regularizer εDKL(∏, a ⊗ b) yields an efficient iterative algorithm, known as the Sinkhorn algorithm, for optimizing ∏, which we describe in the following sections.
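A minimal sketch of the resulting Sinkhorn iterations for the balanced, entropy-regularized problem (uniform marginals, toy cost matrix; function and variable names are illustrative):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    """Balanced Sinkhorn iterations for entropy-regularized OT."""
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)          # match row marginals
        v = b / (K.T @ u)        # match column marginals
    return u[:, None] * K * v[None, :]

x = np.array([0.0, 1.0, 2.0])
C = (x[:, None] - x[None, :]) ** 2      # squared costs between aligned points
a = b = np.full(3, 1 / 3)
coupling = sinkhorn(a, b, C)
```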
Unbalanced optimal transport
Before we present the Sinkhorn algorithm, we introduce a final modification to our optimal transport distance that allows us to learn couplings between distributions a, b that do not preserve mass. In other words, the coupling ∏ is not required to perfectly satisfy the marginal constraints ∏1p2 = a and ∏ᵀ1p1 = b. In our metabolite matching problem, this is particularly useful as not all metabolites in one dataset necessarily appear in the other dataset, and hence some should be left unmatched. This modification of optimal transport, known as unbalanced optimal transport (UOT) Chizat et al. (2018), optimizes the following cost function
UOT(a, b) = min_{∏ ≥ 0} Σij ∏ij Cij + ρ DKL(∏1p2, a) + ρ DKL(∏ᵀ1p1, b) + ε DKL(∏, a ⊗ b),
where we have added two KL terms with regularization parameter ρ to enforce that the marginals of the coupling, ∏1p2 and ∏ᵀ1p1, are approximately close to the prescribed marginals a and b respectively. We have also kept the smoothness/entropy regularizer εDKL(∏, a ⊗ b) from the previous section.
Unbalanced Sinkhorn algorithm
Now we are ready to present the unbalanced Sinkhorn algorithm Peyré et al. (2019) for optimizing the unbalanced optimal transport cost defined above. First we rewrite our optimization as
min_{u,v} min_{∏ ≥ 0 : ∏1p2 = u, ∏ᵀ1p1 = v} Σij ∏ij Cij + ρ DKL(u, a) + ρ DKL(v, b) + ε DKL(∏, a ⊗ b).
The inner minimization can be solved exactly by introducing dual variables f ∈ ℝp1 and g ∈ ℝp2, and writing out the Lagrange dual problem
where we have removed the terms ρDKL(u, a) and ρDKL(v, b) since they do not depend on ∏. Taking the gradient in ∏ in the inner minimization and setting it to zero we get
which implies that
∏ij = ai bj exp((fi + gj − Cij)/ε).
Now we can substitute this expression for ∏ back into our Lagrange dual problem. First we compute
which implies that
Hence, the outer maximization in our Lagrange dual problem for f and g can now be written as
where we have removed the last constant sum in ai bj. Finally, we can rewrite our entire minimization from the start of this section as
By strong duality, we can interchange the minimum and maximum above to write
where we define the functions
F(f) = min_u ρ DKL(u, a) + ⟨f, u⟩,   G(g) = min_v ρ DKL(v, b) + ⟨g, v⟩.
In fact, we can solve the minimizations in u and v in closed form to get the minimizers u* = a ⊙ exp(−f/ρ) and v* = b ⊙ exp(−g/ρ), which we can substitute back in to get
Likewise we can see that
Thus, we can rewrite our full optimization as
where we have removed the terms independent of f and g.
Note that now we can optimize the cost function above by performing an alternating minimization on the dual variables f and g. Taking the gradient in f and setting it to zero we see that
which implies that
fi = −(ρε/(ρ + ε)) log( Σj bj exp((gj − Cij)/ε) ).
Similarly, we can write out
gj = −(ρε/(ρ + ε)) log( Σi ai exp((fi − Cij)/ε) ).
We are now ready to write out the full unbalanced Sinkhorn algorithm, which performs an alternating minimization on the dual potentials f, g as outlined above. We remind the reader that the coupling matrix can be recovered from the dual potentials by the formula
∏ij = ai bj exp((fi + gj − Cij)/ε).
The unbalanced Sinkhorn algorithm proceeds as follows.
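In scaling form, the unbalanced iterations differ from the balanced ones only by a softening exponent ρ/(ρ + ε) on the marginal projections. The sketch below follows the scaling-algorithm formulation of Chizat et al. (2018), with entropy taken relative to the counting measure for simplicity (names and toy data are illustrative):

```python
import numpy as np

def unbalanced_sinkhorn(a, b, C, rho=1.0, eps=0.05, n_iter=2000):
    """Unbalanced Sinkhorn in scaling form (KL penalties on the marginals)."""
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    tau = rho / (rho + eps)          # softens the marginal projections
    for _ in range(n_iter):
        u = (a / (K @ v)) ** tau
        v = (b / (K.T @ u)) ** tau
    return u[:, None] * K * v[None, :]

# the second point of x has no plausible partner in y: with KL penalties
# its mass is largely destroyed rather than forcibly transported
x = np.array([0.0, 5.0])
y = np.array([0.0])
C = (x[:, None] - y[None, :]) ** 2
coupling = unbalanced_sinkhorn(np.full(2, 0.5), np.full(1, 0.5), C)
```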
The final output of the Sinkhorn algorithm optimization is a real-valued coupling matrix ∏ ∈ ℝ+p1×p2. In some cases, it is desirable to transform the coupling matrix into a binary-valued matching matrix, possibly with an added restriction that there is at most one nonzero element in each row and column (to obtain a valid partial matching). This can be done by either thresholding the real matrix ∏ or by assigning all maximal entries in each row (or column) to one and setting the remaining entries to zero. For our metabolomics matching problem, we describe our procedure for transforming our real-valued coupling into a binary matching matrix in the section on the GromovMatcher algorithm below.
Gromov-Wasserstein
Now that we have introduced the general formulation of unbalanced optimal transport and its corresponding Sinkhorn algorithm, we can extend this formulation to matching problems between distributions of points that live in different dimensional spaces. In our metabolomics setting, we aim to match two datasets of p1 and p2 metabolic features respectively, where each feature is associated with a feature intensity vector Xi ∈ ℝn1 or Yj ∈ ℝn2 across samples. We assume that there exists a true matching matrix M* with at most one nonzero entry in each row and column such that two metabolites (i,j) are matched if M*ij = 1.
We make the further assumption that if feature vectors Xi, Yj are matched and feature vectors Xk, Yl are matched under M*, then we expect that
dx(Xi, Xk) ≈ dy(Yj, Yl),
where dx is a distance metric on ℝn1 and dy is a distance metric on ℝn2. In practice, we always choose these distance metrics to be the normalized Euclidean distance defined for any u,v ∈ ℝn as
d(u, v) = ‖u − v‖/√n,
which is a monotone function of the cosine distance dcos (i.e. one minus the correlation) on centered and scaled data. Given the two distance matrices Dxik = dx(Xi, Xk) and Dyjl = dy(Yj, Yl), we would like to infer the true matching matrix M* by solving an optimization problem.
Consider the following objective function
EX,Y(∏) = Σijkl |Dxik − Dyjl|² ∏ij ∏kl,
where the matching matrices ∏ we optimize over are constrained to satisfy ∏1p2 ≥ 1p1 and ∏ᵀ1p1 ≥ 1p2. These marginal constraints simply impose that there is at least one nonzero entry in each row and column (i.e. each metabolite in both datasets has at least one corresponding match). Searching for the ∏ minimizing EX,Y(∏) consists of placing the nonzero entries of ∏ such that the distance profiles of the matched features are similar, so that the minimizer of this criterion provides a good candidate estimate of M*. This is closely related to the Gromov-Hausdorff distance Gromov (2001), an extension of optimal transport to the case where the sets to be coupled do not lie in the same metric space.
In practice, it is often desirable to optimize over a different set of matrices in order to make the optimization problem more tractable. Here we take intuition from optimal transport and search over the set of coupling matrices with marginal constraints
U(a, b) = {∏ ∈ ℝ+p1×p2 : ∏1p2 = a, ∏ᵀ1p1 = b},
where as before, a and b are desired marginals which are typically set to be the uniform distributions ai = 1/p1 and bj = 1/p2. These marginal vectors can be interpreted as distributions of masses ai and bj on the feature vectors Xi and Yj respectively for i ∈ [p1], j ∈ [p2].
Coupling matrices in U(a,b) transport the distribution of masses a in the first dataset to the distribution of masses b in the second dataset. Now we can formulate the Gromov-Wasserstein (GW) distance, introduced by Memoli Memoli (2011), as
GW(a, b) = min_{∏ ∈ U(a,b)} Σijkl |Dxik − Dyjl|² ∏ij ∏kl.     (40)
By optimizing this objective, each entry ∏ij now reflects the strength of the matched pair (Xi, Yj). Optimizing GW(a, b) then amounts to placing larger entries in ∏ whose paired features have similar distance profiles. Before we develop an algorithm to optimize this objective, we first modify it to allow for unbalanced matchings where the marginal constraints are not enforced exactly (e.g. features in both datasets can remain unmatched).
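The quadratic GW cost can be evaluated without forming the four-way tensor by expanding the square (a sketch with a hypothetical helper `gw_energy`; distance matrices are assumed symmetric):

```python
import numpy as np

def gw_energy(Dx, Dy, Pi):
    """GW cost sum_{ijkl} |Dx[i,k] - Dy[j,l]|^2 Pi[i,j] Pi[k,l] via expansion."""
    mu, nu = Pi.sum(axis=1), Pi.sum(axis=0)        # marginals of the coupling
    term_x = (Dx ** 2 @ mu) @ mu                   # sum Dx[i,k]^2 mu[i] mu[k]
    term_y = (Dy ** 2 @ nu) @ nu                   # sum Dy[j,l]^2 nu[j] nu[l]
    cross = np.einsum("ik,jl,ij,kl->", Dx, Dy, Pi, Pi)
    return term_x + term_y - 2 * cross

# a perfect matching of a 3-point metric space with itself has zero cost
D = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
energy_id = gw_energy(D, D, np.eye(3) / 3)          # identity matching
energy_unif = gw_energy(D, D, np.ones((3, 3)) / 9)  # uninformative coupling
```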
Unbalanced Gromov-Wasserstein
In an untargeted context, features measured in one study are not necessarily all observed in another, either because these features are truly not shared or because of measurement error. However, the constraint ∏ ∈ U(a, b) in the original GW optimization criterion (40) ensures that all the mass is transported from one set to another, resulting in all features being matched across studies. In order to discard study-specific features during the GW computation, we use the unbalanced Gromov-Wasserstein (UGW) distance with an additional entropic regularization for computational purposes, described in Séjourné et al. Sejourne et al. (2021). The optimization problem therefore reads
UGW(a, b) = min_{∏ ≥ 0} Σijkl |Dxik − Dyjl|² ∏ij ∏kl + ρ DKL(∏1p2 ⊗ ∏1p2, a ⊗ a) + ρ DKL(∏ᵀ1p1 ⊗ ∏ᵀ1p1, b ⊗ b) + ε DKL(∏ ⊗ ∏, (a ⊗ b)⊗2),     (42)
with ρ, ε > 0. Here DKL is the Kullback-Leibler divergence defined in the previous sections and we define the tensor product (∏ ⊗ ∏)i,j,k,l = ∏ij ∏kl. Here we set the desired marginals to ai = 1/p1 and bj = 1/p2 as before.
As in the case of unbalanced optimal transport Chizat et al. (2018), the regularization ρ times the Kullback-Leibler divergences allows for the relaxation of the marginal constraints ∏1p2 = a and ∏ᵀ1p1 = b. The value of ρ > 0 controls the extent to which we allow for mass destruction. Smaller values of ρ tend to lessen the constraint on the marginals of ∏, while balanced GW is recovered when ρ → +∞. As proposed in the original paper Sejourne et al. (2021), our UGW cost modifies the UOT formulation by using the quadratic Kullback-Leibler divergence in ∏1p2 ⊗ ∏1p2 and ∏ᵀ1p1 ⊗ ∏ᵀ1p1 instead, hence preserving the quadratic form of the GW cost function EX,Y(∏).
The term εDKL(∏ ⊗ ∏, (a ⊗ b)⊗2) serves as an entropic regularization, inspired again by optimal transport. Adding such a penalty is a standard way to compute an approximate solution to the optimal transport problem using the Sinkhorn algorithm, as we shall show in the following section. Here again, we modify the entropic penalty in UGW to have a quadratic form in ∏ ⊗ ∏ to agree with the quadratic form of the GW cost EX,Y(∏). The parameter ε controls the smoothness (entropy) of the coupling matrix ∏, where larger values of ε encourage ∏ to put uniform weights on many of its entries, leading to less precision in the feature matches. However, increasing ε also leads to better numerical stability and a significant speedup of the alternating Sinkhorn algorithm used to optimize the objective function described below.
UGW optimization algorithm
Now we are ready to write out an algorithm to optimize the UGW objective in (42), which we denote by Lρ,ε(∏). Using the quadratic nature of our cost function, we aim to perform an alternating minimization in the two copies of ∏. For the moment, let us differentiate these two copies by ∏ and Γ and write the new cost
Fρ,ε(∏, Γ) = Σijkl |Dxik − Dyjl|² ∏ij Γkl + ρ DKL(∏1p2 ⊗ Γ1p2, a ⊗ a) + ρ DKL(∏ᵀ1p1 ⊗ Γᵀ1p1, b ⊗ b) + ε DKL(∏ ⊗ Γ, (a ⊗ b)⊗2),
so that Lρ,ε(∏) = Fρ,ε(∏, ∏).
Before we expand this cost, we introduce the notation m(π) to denote the sum of the elements of π, which can be a vector, matrix or tensor. In general, for four positive distributions π, γ, μ and ν, the KL divergence satisfies the tensorization property
DKL(π ⊗ γ, μ ⊗ ν) = m(γ) DKL(π, μ) + m(π) DKL(γ, ν) + (m(π) − m(μ))(m(γ) − m(ν)).
Specifically, if we remove those terms that do not depend on γ we are left with
m(γ) Σi πi log(πi/μi) + m(π) Σj γj log(γj/νj) − m(π) m(γ).
This allows us to write, for the marginal constraints a, b and couplings ∏, Γ, the Γ-dependent part of each KL term appearing in Fρ,ε(∏, Γ).
where in the expansions above we have removed all terms that are independent of Γ. Finally, expanding out Fρ,α(∏, Γ) and keeping only those terms that depend on Γ we get
where the cost matrix is defined as
c∏(i,j) = Σk,l |Dx(i,k) − Dy(j,l)|² ∏kl + ρ DKL(∏#1 | a) + ρ DKL(∏#2 | b) + ε DKL(∏ | a ⊗ b),
where we have hidden the dependence of c∏ on the distance matrices Dx, Dy, the marginals a, b, and the regularization parameters ρ, ε for ease of notation.
Remarkably, the cost above in Γ for fixed ∏ is in the form of an unbalanced optimal transport problem which can be solved through unbalanced Sinkhorn iterations (Algorithm 1). Note that in our derivation above, it did not matter whether we optimized Γ with ∏ fixed or vice versa because the cost Fρ,ε(∏, Γ) is symmetric in both of its arguments.
Our iterative algorithm for solving the unbalanced GW problem proceeds at each iteration by optimizing Γ to minimize the cost above using the unbalanced Sinkhorn method, setting ∏ equal to Γ, and repeating. With each iteration, we expect this procedure to make smaller and smaller updates to Γ until convergence. By definition, at the end of each iteration we assign ∏ = Γ, so the minimizer of Fρ,ε(∏, Γ) we converge to should also be a minimizer of the original UGW cost Lρ,ε(∏), in the sense that the relaxation of Lρ,ε(∏) to Fρ,ε(∏, Γ) is tight. This is proven rigorously under suitable mathematical assumptions in Sejourne et al. (2021). We state the full UGW optimization algorithm below.
Following the implementation of the UGW algorithm in Sejourne et al. (2021), we initialize both ∏ and Γ to be the product distribution of the marginals before we begin the optimization. Also, we note that if (∏, Γ) is a minimizer of our UGW objective Fρ,ε(∏, Γ), then so is (s∏, s⁻¹Γ) for any scale factor s > 0. Hence, we can set m(∏) = m(Γ) by choosing s = √(m(Γ)/m(∏)). This motivates the final step in the while loop of the UGW algorithm where the rescaling of Γ by the factor √(m(∏)/m(Γ)) leads to mass equality m(∏) = m(Γ) and also stabilizes the convergence of the algorithm.
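The alternating scheme above can be sketched as follows. This is a simplified illustration, not the reference implementation of Sejourne et al. (2021): the constant KL offsets of c∏ are omitted from the local cost, and the inner solver is a plain unbalanced Sinkhorn loop without log-domain stabilization.

```python
import numpy as np

def local_cost(pi, Dx, Dy):
    """Distortion part of c_pi (constant KL offsets omitted in this sketch)."""
    pi1, pi2 = pi.sum(axis=1), pi.sum(axis=0)
    return (Dx**2 @ pi1)[:, None] + (Dy**2 @ pi2)[None, :] - 2.0 * Dx @ pi @ Dy.T

def sinkhorn_unbalanced(C, a, b, rho, eps, n_iter=200):
    """Scaling iterations for unbalanced OT (Chizat et al., 2018)."""
    K = np.exp(-C / eps)
    u, v = np.ones_like(a), np.ones_like(b)
    power = rho / (rho + eps)
    for _ in range(n_iter):
        u = (a / (K @ v)) ** power
        v = (b / (K.T @ u)) ** power
    return u[:, None] * K * v[None, :]

def ugw_alternate(Dx, Dy, a, b, rho, eps, n_outer=50, tol=1e-7):
    pi = np.outer(a, b)                      # initialize at the product coupling
    for _ in range(n_outer):
        m = pi.sum()
        C = local_cost(pi, Dx, Dy)
        gamma = sinkhorn_unbalanced(C, a, b, rho * m, eps * m)
        gamma *= np.sqrt(m / gamma.sum())    # rescale toward m(pi) = m(gamma)
        if np.abs(gamma - pi).max() < tol:
            return gamma
        pi = gamma
    return pi
```

On two copies of the same two-point space with unequal masses, the identity matching is the only coupling that reproduces both marginals with zero distortion, so the returned coupling should be diagonal-dominant.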
Returning to our metabolomics matching problem, we further guide our UGW optimization procedure by discouraging it from matching metabolic feature pairs whose mass-to-charge ratios are incompatible. Namely, we choose a value mgap such that for all pairs (i,j) with i ∈ [p1], j ∈ [p2] whose mass-to-charge ratios mxi, myj satisfy |mxi − myj| > mgap, we enforce that ∏ij = 0.
In practice, this is done by taking the optimal transport cost c∏ in every iteration of the UGW algorithm and premultiplying it elementwise by a factor given by
Wij = 1{|mxi − myj| ≤ mgap} + M · 1{|mxi − myj| > mgap} for a large constant M ≫ 1,
where 1χ denotes the indicator function that is one when the condition χ is satisfied and zero otherwise. Such a prefactor makes the transport cost very large for feature matches with incompatible mass-to-charge ratios, and hence the coupling ∏ puts only negligible weight on these entries. Our weighted UGW algorithm is rewritten below.
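A minimal sketch of this m/z gating, assuming a large constant penalty (the value 1e6 here is illustrative, not taken from the paper):

```python
import numpy as np

def mz_prefactor(mz_x, mz_y, m_gap=0.01, penalty=1e6):
    """Elementwise multiplier for the transport cost: pairs whose m/z values
    differ by more than m_gap receive a prohibitively large cost."""
    diff = np.abs(np.asarray(mz_x)[:, None] - np.asarray(mz_y)[None, :])
    return np.where(diff <= m_gap, 1.0, penalty)

# Usage inside the UGW loop would then be: C = C * mz_prefactor(mz_x, mz_y)
```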
As mentioned before, the coupling matrix returned by our weighted UGW algorithm is a real-valued matrix rather than a binary matching matrix. In the next section, we describe how we incorporate metabolite retention time information to filter out unlikely pairs in our coupling matrix and transform it into a valid one-to-one matching of features across two datasets.
Retention time drift estimation and filtering
To filter out unlikely matches from the coupling matrix ∏̂ returned by Algorithm 3 above, we use the retention times (RTs) of the metabolites in both datasets. We remind the reader that RTs were not incorporated into the weighted UGW algorithm since they often exhibit a non-linear deviation between datasets, and hence are not directly comparable. However, using the metabolite coupling obtained from Algorithm 3, it is possible to estimate this RT drift. The estimated RT drift allows us to assess the plausibility of the pairs recovered by the restricted UGW coupling ∏̂, and discard pairs incompatible with the estimated drift.
We propose to learn the drift through the weighted spline regression
f̂ = argmin f ∈ Bn,k Σi,j ∏̂ij (rtyj − f(rtxi))²,   (51)
where Bn,k is the set of n-order B-splines with k knots. All pairs (rtxi, rtyj) in objective (51) are weighted by the coefficients of ∏̂, so that larger weights are given to pairs identified with high confidence in the first step of our procedure.
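The weighted fit can be sketched with SciPy's least-squares B-splines; placing interior knots at quantiles is our illustrative choice (the paper selects the number of knots by cross-validation):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def estimate_rt_drift(rt_x, rt_y, weights, n_knots=3, k=3):
    """Weighted B-spline regression rt_y ~ f(rt_x), with pair weights taken
    from the coupling coefficients."""
    order = np.argsort(rt_x)                 # x must be increasing for the fit
    x, y, w = rt_x[order], rt_y[order], weights[order]
    t = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])  # interior knots
    return LSQUnivariateSpline(x, y, t, w=w, k=k)
```

For a purely linear drift, the cubic least-squares fit recovers the line essentially exactly.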
Pairs identified as incompatible with the estimated RT drift are then discarded from the coupling matrix. To do this, we first take the estimated RT drift f̂ and the set S of pairs (i,j) for which ∏̂ij is nonzero. We then define the residual associated with (i,j) ∈ S as
rij = rtyj − f̂(rtxi).
The 95% prediction interval and the median absolute deviation (MAD) of these residuals are given by
PI = 1.96 × std({rij : (i,j) ∈ S}),   MAD = median({|rij − median({rkl : (k,l) ∈ S})| : (i,j) ∈ S}),
where the empirical standard deviation std and the median are computed over the |S| residuals, with |S| the size of S. Following Habra et al. (2021), we then create a new filtered coupling matrix given by
∏̃ij = ∏̂ij · 1{|rij| ≤ rthresh},
where rthresh is a given filtering threshold. The procedure of estimating the drift function in (51) and filtering the coupling can be repeated for multiple iterations to improve the drift and coupling estimation. In our main algorithm, we use two preliminary iterations where we estimate the RT drift and discard outliers with rthresh = PI, i.e., points falling outside of the 95% prediction interval. We then re-estimate the drift and perform a final filtering step with the more stringent MAD criterion by setting rthresh = 2 × MAD.
At this stage, it is possible for ∏̃ to still contain coefficients of very small magnitude. As an optional postprocessing step, we discard these coefficients by setting all entries smaller than τ · maxk,l ∏̃kl to zero for some scaling constant τ ∈ [0, 1]. Lastly, a feature from either study could have multiple possible matches, since ∏̃ can have more than one non-zero coefficient per row or column. Although reporting multiple matches can be helpful in an exploratory context, for the sake of simplicity in our analysis, the final output of GromovMatcher returns a one-to-one matching. Consequently, we only keep those metabolite pairs (i,j) whose entry ∏̃ij is largest in its corresponding row and column. All nonzero entries of ∏̃ which do not satisfy this criterion are set to zero. Finally, we convert ∏̃ into a binary matching matrix with ones in place of its nonzero entries, and this final output is returned to the user.
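A compact sketch of this filtering and one-to-one reduction (the MAD threshold factor and τ follow the defaults quoted in this section; `drift` is any fitted drift function, and the τ cutoff relative to the largest coefficient is our reading of the thresholding step):

```python
import numpy as np

def filter_and_match(pi, rt_x, rt_y, drift, r_factor=2.0, tau=0.3):
    """Drop coupling entries inconsistent with the RT drift, then keep only
    entries that are maximal in both their row and their column."""
    pi = pi.copy()
    i_idx, j_idx = np.nonzero(pi)
    res = rt_y[j_idx] - drift(rt_x[i_idx])            # residuals r_ij
    mad = np.median(np.abs(res - np.median(res)))     # median absolute deviation
    keep = np.abs(res) <= r_factor * mad
    pi[i_idx[~keep], j_idx[~keep]] = 0.0
    pi[pi < tau * pi.max()] = 0.0                     # optional tau-thresholding
    match = np.zeros(pi.shape, dtype=bool)
    for i, j in zip(*np.nonzero(pi)):
        if pi[i, j] == pi[i, :].max() and pi[i, j] == pi[:, j].max():
            match[i, j] = True
    return match
```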
As a naming convention, we use the abbreviation GM for our GromovMatcher method, and use the abbreviation GMT when running GromovMatcher with the optional τ-thresholding step.
GromovMatcher algorithm summary
In summary, our full GromovMatcher algorithm consists of (1) UGW optimization followed by (2) retention time drift estimation and filtering.
The tuning of ρ and ε was computationally driven and the two parameters were set as low as possible, with ρ = 0.05 and ε = 0.005. Based on the literature Loftfield et al. (2021); Hsu et al. (2019); Climaco Pinto et al. (2022); Habra et al. (2021); Chen et al. (2021) and what is considered to be a plausible variation of a feature's m/z, we set mgap = 0.01 ppm. For RT drift estimation, the order of the B-splines was set to n = 3 by default, while the number of knots k was selected by 10-fold cross-validation. If the optional thresholding step was applied in GMT, we set τ = 0.3. Otherwise, we let τ = 0, which gives the unthresholded GM algorithm.
Appendix 2
Here we discuss existing metabolomic alignment methods and the hyperparameter experiments we perform on these methods. We consider two existing alignment methods for comparison, metabCombiner Habra et al. (2021) and M2S Climaco Pinto et al. (2022). Both take the same kind of input as GromovMatcher, i.e. feature tables with features identified by their m/z, RT, and intensities across samples.
MetabCombiner hyperparameter experiments
MetabCombiner Habra et al. (2021) is a three-step process that begins by grouping features based on their m/z within user-specified bins. This creates a search space for potential feature pairs. In the second step, metabCombiner estimates the RT drift using the potential feature pairs identified in the first step, and eliminates outlying pairs over several iterations. This step can incorporate prior knowledge by identifying shared features and marking them as anchors, which are not discarded. In the final step, metabCombiner scores the remaining feature pairs based on their m/z, RT, and relative intensity compatibility to discriminate between multiple matches for one feature. The scoring system relies on weights assigned to m/z, RT, and feature intensities, with the magnitude of those weights reflecting the reliability of the corresponding measurements across studies.
MetabCombiner Habra et al. (2021) includes adjustable parameters throughout the pipeline. We set most of them to default values unless otherwise stated. MetabCombiner first establishes candidate pairs by binning features in the m/z dimension with a width of binGap, and pairing the features sorted by relative intensities. The ‘binGap’ parameter sets the m/z tolerance of metabCombiner, similar to mgap in GromovMatcher. We used the same value of 0.01 as in GromovMatcher.
MetabCombiner then estimates the RT drift using basis splines, and removes pairs associated with a high residual (twice the mean model error) from the candidate set.
In our main experiment, the RT drift is estimated exclusively using candidate pairs selected by the pipeline. However, it is also possible to include known ground truth pairs as ‘anchors’ to estimate the RT drift. We choose not to rely on prior knowledge for drift estimation as Habra et al. (2021) show their drift estimation to be efficient and robust, even without prior knowledge. To confirm this claim, we conduct a sensitivity analysis comparing the results obtained in our main experiment with those obtained when supplying metabCombiner with known shared metabolites to anchor the RT drift estimation. We randomly select 100 anchors from the ground truth matching and compute the metabCombiner matchings with otherwise identical settings as in our main experiment. The results from this analysis (reported in Appendix 2 — Figure 1) show that the unsupervised RT drift estimation (using anchors selected by the pipeline only) performs as well as the supervised RT drift estimation, showing the drift estimation to be very consistent, with or without shared entities.
After establishing candidate pairs and filtering out those that contradict the estimated RT drift, metabCombiner discriminates between multiple matches using a scoring system that considers m/z, RT, and rankings of the median feature intensities. Each dimension has a specific weight that can be left at default, manually adjusted, or automatically tuned using known matched pairs. Habra et al. (2021) provide qualitative guidelines for tuning the weights manually, mainly based on the experimental conditions and visual inspection of the RT drift plot. Since this approach is difficult to implement in the various settings we consider for our simulation study, we rely on the quantitative tuning function included in the metabCombiner pipeline. This function takes into account known shared features and tunes the weights to optimize the scores of those known matches. We randomly select 100 known true matches to define the objective function metabCombiner maximizes. We search over the recommended range of values, with the m/z weight A ∈ [50, 150], the RT weight B ∈ [5, 20] and the feature intensities weight C ∈ [0, 1]. Appendix 2 — Figure 1 presents the results obtained with the weights set at default values (A = 100, B = 15, C = 0.5), as a sensitivity analysis.
M2S hyperparameter experiments
Climaco Pinto et al. (2022) introduce M2S as a more versatile alternative to metabCombiner, while still adhering to most of its core principles. Like metabCombiner, M2S follows a three-step process. First, it searches for matches within user-defined thresholds for m/z, retention time, and mean feature intensity. Next, M2S estimates m/z, RT and feature intensity drifts between datasets and removes any outlier pairs. Finally, M2S selects the best match using a scoring system that weighs each measurement, similar to metabCombiner. M2S notably stands out by providing greater flexibility in the methods and measurements used at each step of the procedure, resulting however in a larger number of parameters that require manual fine-tuning. To address this, we adopt two different approaches for the simulation study and the EPIC study alignment. In the simulation study, we set the initial thresholds to oracle values and investigate technical parameters. For the EPIC study alignment, we use the combination of technical parameters with the best average F1-score in the simulation study and select the best threshold values based on the performance on the validation subset.
More precisely, M2S first matches all pairs of metabolic features whose absolute difference in m/z, RT, and median of log10 FI are within the user-defined thresholds ‘MZ_intercept’, ‘RT_intercept’ and ‘log10FI_intercept’. On simulated data experiments, we set these thresholds to MZ_intercept = 0.01, RT_intercept = 3.5 and log10FI_intercept = 0.2, which are large enough to not exclude any true feature matches in any of the scenarios for our simulated data under low, medium, and high overlap/noise (see Methods). M2S also offers more detailed options to match features whose absolute difference stays within two lower and upper bound lines with a given slope, where the intercepts of these lines are defined using the values above. In our analysis, we set the slopes of these linear boundaries to zero so as to not remove any true matches. Because the reference and target studies we are matching in the simulated analysis are on the same scale, we set the FI adjust method to ‘none’.
The second step of M2S involves calculating penalization scores for every pair of matches, which are used to determine the best set of matches between metabolic features of both datasets. This step depends on a set of hyperparameters which we perform a grid search over to optimize the performance of M2S. For estimating the m/z, RT, and FI drift, the hyperparameters are the percentage of neighbors ‘nrNeighbors’, the neighborhood shape ‘neighMethod’, and the LOESS span percentage ‘pctPointsLoess’ used to smooth the estimated drift functions. After the drifts are estimated, they are normalized using a method specified by ‘residPercentile’ that puts the m/z, RT, and FI residuals on the same scale. We always fix residPercentile = NaN which defaults to the standard 2 × MAD normalization. Next, for every remaining metabolic feature match, the residuals/drifts of the m/z, RT, and FI are added together by taking the weighted square root sum of squares. For unnormalized data where feature intensity magnitudes are important, we weight all three drifts equally using w = (1, 1, 1), and for data with normalized feature intensities we set the FI drift weight to zero such that w = (1, 1, 0). Finally, using these weighted penalization scores, M2S selects the best matched pair within a multiple match cluster to obtain a one-to-one matching between datasets.
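The weighted combination step can be sketched as a weighted root-sum-of-squares over the three normalized residuals (an illustration of the scoring rule described above, not M2S code):

```python
import numpy as np

def m2s_penalty(res_mz, res_rt, res_fi, w=(1.0, 1.0, 1.0)):
    """Combine normalized m/z, RT and FI residuals into one penalization
    score per candidate match; set w = (1, 1, 0) for normalized intensities."""
    res = np.stack([res_mz, res_rt, res_fi])
    return np.sqrt(np.sum(np.asarray(w)[:, None] * res**2, axis=0))
```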
The third and final step of M2S involves removing those remaining matches which have large differences in m/z, RT, or FI. This can be performed using several methods indicated by the hyperparameter ‘methodType’. Each method excludes those matched pairs whose differences in m/z, RT, or FI exceed a certain number of median absolute deviations indicated by the parameter ‘nrMAD’. The remaining one-to-one metabolic feature matches are returned as the final result of the M2S algorithm.
To optimally tune M2S on our simulated experiments, we determine the optimal M2S parameter combination for each individual simulation setting (low, medium, high overlap and noise) by performing a grid search over the product of parameter lists
nrNeighbors = [0.01, 0.05, 0.1, 0.5, 1]
neighMethod = [‘cross’, ‘circle’]
pctPointsLoess = [0, 0.1, 0.5]
methodType = [‘none’, ‘scores’, ‘byBins’, ‘trend_mad’, ‘residuals_mad’]
nrMAD = [1, 3, 5]
Each parameter combination for M2S is tested across 20 randomly generated datasets at the same overlap and noise settings. For each setting, the combination of parameters above with the best average F1-score across these 20 trials is used as the optimal parameter choice.
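The grid search over the parameter lists above can be sketched with `itertools.product`; `run_m2s_and_score` is a hypothetical callback standing in for one M2S run returning an F1-score (it is not part of the real M2S interface):

```python
import itertools

PARAM_GRID = {
    "nrNeighbors": [0.01, 0.05, 0.1, 0.5, 1],
    "neighMethod": ["cross", "circle"],
    "pctPointsLoess": [0, 0.1, 0.5],
    "methodType": ["none", "scores", "byBins", "trend_mad", "residuals_mad"],
    "nrMAD": [1, 3, 5],
}

def best_params(datasets, run_m2s_and_score):
    """Return the parameter combination with the best average F1-score."""
    keys = list(PARAM_GRID)
    best, best_f1 = None, -1.0
    for values in itertools.product(*(PARAM_GRID[k] for k in keys)):
        params = dict(zip(keys, values))
        # average F1-score over the simulated dataset pairs
        f1 = sum(run_m2s_and_score(d, **params) for d in datasets) / len(datasets)
        if f1 > best_f1:
            best, best_f1 = params, f1
    return best, best_f1
```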
M2S applies initial RT thresholds to search for candidate pairs, which may favor settings where the RT drift follows a linear trend. Therefore, as a sensitivity analysis, we apply M2S to simulated data with a linear drift. The simulation process is identical to that of our main simulation study, except for the deviation of the RT in dataset 2. Specifically, for a given overlap value, we divide the original real-world dataset into two smaller datasets and introduce random noise to the m/z, RT and intensities of the features, without introducing a systematic deviation to the RT in dataset 2. M2S parameters are kept identical to the ones used in our main analysis in comparable settings. The results obtained by M2S on three pairs of datasets generated for three overlap values (0.25, 0.5 and 0.75) and a medium noise level are reported in Appendix 2 — Table 1. While the results obtained in a high overlap setting are close to those obtained in our main analysis, M2S demonstrates better performance in a low overlap setting when the RT drift is linear than in our main analysis. This observation is consistent with the results obtained by M2S on EPIC data, considering the relatively low estimated overlap between the aligned EPIC studies in our main analysis.
For the EPIC data, we select the parameter combination that yields the highest F1-score across all simulated settings. However, due to the unavailability of oracle values for setting initial thresholds, we perform a search over several MZ intercept values (0.01, 0.05, and 0.1), RT intercept values (0.1, 0.5, 1, and 5), and logFI intercept values (1, 10, and 100).
Appendix 3
Here we describe additional preprocessing details and analyses of the EPIC data.
Centered and scaled data - Negative mode
In this section, we present the results obtained on centered and scaled EPIC data in negative mode, shown in Figure 4 of our main paper. However, due to the smaller size of the validation subset (42 features examined in negative mode compared to 163 in positive mode), the evaluation of the performance of the three methods may be less reliable than in positive mode.
First, we align the CS and HCC studies in negative mode and detect a total of 449, 492, and 180 matches with GM, M2S, and metabCombiner, respectively. Similar to the positive mode analysis, we evaluate the precision and recall of the three methods on the 42 feature validation subset, of which 19 were manually matched. GM and M2S demonstrate identical F1-scores of 0.98, while metabCombiner performs poorly in comparison. GM recovers all 19 true matches and identifies only 1 false positive, while M2S recovers no false positives but misses 1 true positive.
Next, we align the CS and PC studies in negative mode and detect a total of 485, 569, and 314 matches with GM, M2S, and metabCombiner, respectively. Again, we evaluate the precision and recall of the three methods on the 42 feature validation subset, of which 26 were manually matched. MetabCombiner performs better than in the other EPIC pairings with an F1-score of 0.857, but is still outperformed by the other two methods. GM is slightly outperformed by M2S in this setting, with an almost identical precision of 0.93, but a slightly higher recall for M2S due to detecting 1 additional true positive. However, this remains a good performance for GM since M2S was optimally tuned using the validation subset itself.
Non-centered and non-scaled data
As a sensitivity analysis, we apply the three methods to EPIC data that has not been centered or scaled. The detailed results can be found in Appendix 3 — Table 1.
M2S was tuned manually on the validation subset to ignore feature intensities in both cases. As a result, it maintains its performance compared to our main experiment. On the other hand, the performance of GM and metabCombiner is affected by the lack of consistency in feature intensities. MetabCombiner's recall drops slightly but its precision remains comparable to that of our main experiment, with the method clearly favoring precision over recall. Although GM's recall decreases slightly in positive mode, it remains more precise than the optimally tuned M2S, and it balances precision and recall better than metabCombiner. Interestingly, GM's results in negative mode are improved compared to our main experiment, and it outperforms both metabCombiner and M2S. However, since the validation subset in negative mode is relatively small, these differences may not be significant. Nonetheless, GM maintains a good performance, similar to that of the optimally tuned M2S.
Similar to the analysis we conducted on centered and scaled data, we find a high number of false positives when aligning the CS study and the PC study in positive mode. Therefore, we manually examine the matches recovered by GM. Our examination reveals 2 false positives, 4 unclear matches, and 3 additional good matches that GM also identifies in our main analysis. This demonstrates that the lack of centering and scaling results in two additional false positives for GM that are not present in our main results.
Illustration for alcohol biomarker discovery
Loftfield et al. (2021) identified 205 features associated with alcohol intake in the CS study, using a false discovery rate (FDR) correction to account for multiple testing. By applying an FDR correction in our pooled analysis, we identify 243 features associated with alcohol intake. Out of those 243 features, 185 are consistent with the features identified in the discovery step of Loftfield et al. (2021), while 55 features are newly discovered (Extended Data Fig. 5c). We examine the 20 features identified as significant in the discovery analysis of Loftfield et al. but not significant in our pooled analysis. Both manual and GM matching yield identical results for these features, indicating that the loss of significance is not due to incorrect matching. Upon further investigation, we find that these features do not demonstrate a meaningful association with alcohol intake in the HCC and PC studies. This observation is reinforced by the fact that none of these features are among the 10 features that persisted after the validation step in Loftfield et al.
Out of the 205 features initially discovered in Loftfield et al. (2021), 10 are replicated in the EPIC HCC and PC studies using the more stringent Bonferroni correction. When using a Bonferroni correction in our pooled analysis, we find a significant association between alcohol intake and 92 features, 36 of which are effectively shared by the three studies. Notably, these features include all 10 features that were retained in Loftfield et al. (Extended Data Fig. 5c).
This analysis illustrates how GromovMatcher can be used in the context of biomarker discovery, and its potential to allow for increased statistical power.
References
- Approximate is better than “exact” for interval estimation of binomial proportionsAm Stat 52:119–126https://doi.org/10.2307/2685469
- A multi-omic analysis of birthweight in newborn cord blood reveals new underlying mechanisms related to cholesterol metabolismMetabolism 110https://doi.org/10.1016/j.metabol.2020.154292
- Gromov-Wasserstein Alignment of Word Embedding Spaces:1881–1890
- Towards optimal transport with global invariances:1870–1879
- Metabolomics in environmental toxicology: Applications and challengesTrends Environ Anal Chem 34https://doi.org/10.1016/j.teac.2022.e00161
- Multi-marginal Gromov-Wasserstein transport and barycentersarXiv preprint arXiv:220506725 https://doi.org/10.48550/arXiv.2205.06725
- Interval Estimation for a Binomial ProportionStat Sci 16:101–133https://doi.org/10.1214/ss/1009213286
- Large-scale untargeted LC-MS metabolomics data correction using between-batch feature alignment and cluster-based within-batch signal intensity drift correctionMetabolomics 12https://doi.org/10.1007/s11306-016-1124-4
- Metabolite discovery through global annotation of untargeted metabolomics dataNat Methods 18:1377–1385https://doi.org/10.1038/s41592-021-01303-3
- Unbalanced optimal transport: Dynamic and Kantorovich formulationsJ Funct Anal 274:3090–3123https://doi.org/10.1016/j.jfa.2018.03.008
- Finding Correspondence between Metabolomic Features in Untargeted Liquid Chromatography-Mass Spectrometry Metabolomics DatasetsAnal Chem 94:5493–5503https://doi.org/10.1021/acs.analchem.1c03592
- Joint distribution optimal transportation for domain adaptation
- SCOT: Single-Cell Multi-Omics Alignment with Optimal TransportJ Comput Biol 29:3–18https://doi.org/10.1089/cmb.2021.0446
- Gut microbiome structure and metabolic activity in inflammatory bowel diseaseNat Microbiol 4:293–305https://doi.org/10.1038/s41564-018-0306-4
- Methodological issues in a prospective study on plasma concentrations of persistent organic pollutants and pancreatic cancer risk within the EPIC cohortEnvironmental Research 169:417–433https://doi.org/10.1016/j.envres.2018.11.027
- Variational autoencoders learn transferrable representations of metabolomics dataCommun Biol 5https://doi.org/10.1038/s42003-022-03579-3
- Metric Structures for Riemannian and Non-Riemannian SpacesBoston, MA: Birkhäuser Boston, Inc. https://doi.org/10.1007/978-0-8176-4583-0
- metabCombiner: Paired Untargeted LC-HRMS Metabolomics Feature Matching and Concatenation of Disparately Acquired Data SetsAnal Chem 93:5028–5036https://doi.org/10.1021/acs.analchem.0c03693
- PAIRUP-MS: Pathway analysis and imputation to relate unknowns in profiles from mass spectrometry-based metabolite dataPLoS Comput Biol 15:1–26https://doi.org/10.1371/journal.pcbi.1006734
- From Samples to Insights into Metabolism: Uncovering Biologically Relevant Information in LC-HRMS Metabolomics DataMetabolites 9https://doi.org/10.3390/metabo9120308
- On the translocation of massesJ Math Sci 133:1381–1382https://doi.org/10.1007/s10958-006-0049-2
- Metabolomics-Based Discovery of Molecular Signatures for Triple Negative Breast Cancer in Asian Female PopulationSci Rep 10https://doi.org/10.1038/s41598-019-57068-5
- Addressing the batch effect issue for LC/MS metabolomics data in data preprocessingSci Rep 10https://doi.org/10.1038/s41598-020-70850-0
- Novel biomarkers of habitual alcohol intake and associations with risk of pancreatic and liver cancers and liver disease mortalityJ Natl Cancer Inst 113:1542–1550https://doi.org/10.1093/jnci/djab078
- Gromov-Wasserstein Distances and the Metric Approach to Object MatchingFound Comput Math 11:417–487https://doi.org/10.1007/s10208-011-9093-5
- Memoire sur la théorie des déblais et des remblaisMem Math Phys Acad Royale Sci :666–704
- Gene expression cartographyNature 576:132–137https://doi.org/10.1038/s41586-019-1773-3
- Separation strategies for untargeted metabolomicsJ Sep Sci 34:3460–3469https://doi.org/10.1002/jssc.201100532
- Gromov-wasserstein averaging of kernel and distance matricesICML PMLR :2664–2672https://doi.org/10.5555/3045390.3045671
- Computational optimal transport: With applications to data scienceFound Trends Mach Learn 11:355–607https://doi.org/10.1561/2200000073
- Revealing disease-associated pathways by network integration of untargeted metabolomicsNat Methods 13:770–776https://doi.org/10.1038/nmeth.3940
- The Blood Exposome and Its Role in Discovering Causes of DiseaseEnviron Health Perspect 122:769–774https://doi.org/10.1289/ehp.1308015
- Interactive supercomputing on 40,000 cores for machine learning and data analysisHPEC IEEE :1–6https://doi.org/10.1109/HPEC.2018.8547629
- European Prospective Investigation into Cancer and Nutrition (EPIC): study populations and data collectionPublic Health Nutr 5:1113–1124https://doi.org/10.1079/PHN2002394
- Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogrammingCell 176:928–943https://doi.org/10.1016/j.cell.2019.01.006
- Sinkhorn divergences for unbalanced optimal transportarXiv preprint arXiv:191012958 https://doi.org/10.48550/arXiv.1910.12958
- The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation:8766–8779
- Alignstein: Optimal transport for improved LC-MS retention time alignmentGigaScience 11https://doi.org/10.1093/gigascience/giac101
- Group level validation of protein intakes estimated by 24-hour diet recall and dietary questionnaires against 24-hour urinary nitrogen in the European Prospective Investigation into Cancer and Nutrition (EPIC) calibration study:784–795
- XCMS: Processing Mass Spectrometry Data for Metabolite Profiling Using Nonlinear Peak Alignment, Matching, and IdentificationAnal Chem 78:779–787https://doi.org/10.1021/ac051437y
- Entropic Metric Alignment for Correspondence ProblemsACM Trans Graph 35https://doi.org/10.1145/2897824.2925903
- Alteration of amino acid and biogenic amine metabolism in hepatobiliary cancers: Findings from a prospective cohort studyInt J Cancer 138:348–360https://doi.org/10.1002/ijc.29718
- Metabolic perturbations prior to hepatocellular carcinoma diagnosis: Findings from a prospective observational cohort studyInt J Cancer 148:609–625https://doi.org/10.1002/ijc.33236
- metaXCMS: second-order analysis of untargeted metabolomics dataAnal Chem 83:696–700https://doi.org/10.1021/ac102980g
- Liquid Chromatography-Mass Spectrometry Calibration Transfer and Metabolomics Data FusionAnal Chem 84:9848–9857https://doi.org/10.1021/ac302227c
- Topics in optimal transportation, vol. 58American Mathematical Soc
- Metabolite profiles and the risk of developing diabetesNat Med 17:448–453https://doi.org/10.1038/nm.2307
- Metabolomics for Investigating Physiological and Pathophysiological ProcessesPhysiol Rev 99:1819–1875https://doi.org/10.1152/physrev.00035.2018
- Predicting cell lineages using autoencoders and optimal transportPLoS Comput Biol 16:1–20https://doi.org/10.1371/journal.pcbi.1007828
- LC-MS-based metabolomicsMol BioSyst 8:470–481https://doi.org/10.1039/C1MB05350G
Copyright
© 2023, Breeur et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.