Sparse dimensionality reduction approaches in Mendelian randomisation with highly correlated exposures

  1. Vasileios Karageorgiou (corresponding author)
  2. Dipender Gill
  3. Jack Bowden
  4. Verena Zuber
  1. Department of Epidemiology and Biostatistics, School of Public Health, Faculty of Medicine, Imperial College London, United Kingdom
  2. University of Exeter, United Kingdom
  3. Department of Clinical Pharmacology and Therapeutics, Institute for Infection and Immunity, St George’s, University of London, United Kingdom
  4. Genetics Department, Novo Nordisk Research Centre Oxford, United Kingdom

Abstract

Multivariable Mendelian randomisation (MVMR) is an instrumental variable technique that generalises the MR framework for multiple exposures. Framed as a regression problem, it is subject to the pitfall of multicollinearity. The bias and efficiency of MVMR estimates thus depend heavily on the correlation of the exposures. Dimensionality reduction techniques such as principal component analysis (PCA) provide transformations of all the included variables that are effectively uncorrelated. We propose the use of sparse PCA (sPCA) algorithms that create principal components of subsets of the exposures with the aim of providing more interpretable and reliable MR estimates. The approach consists of three steps. We first apply a sparse dimension reduction method and transform the variant-exposure summary statistics to principal components. We then choose a subset of the principal components based on data-driven cutoffs, and estimate their strength as instruments with an adjusted F-statistic. Finally, we perform MR with these transformed exposures. This pipeline is demonstrated in a simulation study of highly correlated exposures and an applied example using summary data from a genome-wide association study of 97 highly correlated lipid metabolites. As a positive control, we tested the causal associations of the transformed exposures on coronary heart disease (CHD). Compared to the conventional inverse-variance weighted MVMR method and a weak instrument robust MVMR method (MR GRAPPLE), sparse component analysis achieved a superior balance of sparsity and biologically insightful grouping of the lipid traits.

Editor's evaluation

This paper investigates the identification of causal risk factors for health outcomes. It applies sparse dimension reduction methods to highly correlated traits in the Mendelian randomization framework. The implementation of this method helps to identify risk factors when given high-dimensional trait data.

https://doi.org/10.7554/eLife.80063.sa0

Introduction

Mendelian randomisation (MR) is an epidemiological study design that uses genetic variants as instrumental variables (IVs) to investigate the causal effect of a genetically predicted exposure on an outcome of interest (Smith and Ebrahim, 2003). In a randomised controlled trial (RCT), the act of randomly allocating patients to different treatment groups precludes the existence of systematic confounding between the treatment and outcome and therefore provides a strong basis for causal inference. Likewise, the alleles that determine a small proportion of variation of the exposure in MR are inherited randomly. We can therefore view the various genetically proxied levels of a lifelong modifiable exposure as a ‘natural’ RCT, avoiding the confounding that hinders traditional observational associations. Genetically predicted levels of an exposure are also less likely to be affected by reverse causation, as genetic variants are allocated before the onset of the outcomes of interest.

When evidence suggests that multiple correlated phenotypes may contribute to a health outcome, multivariable MR (MVMR), an extension of the basic univariable approach, can disentangle more complex causal mechanisms and shed light on mediating pathways. Following the analogy with RCTs, the MVMR design is equivalent to a factorial trial, in which patients are simultaneously randomised to different combinations of treatments (Burgess and Thompson, 2015). An example of this would be investigation into the effect of various lipid traits on coronary heart disease (CHD) risk (Burgess and Harshfield, 2016). While MVMR can model correlated exposures, it performs suboptimally when there are many highly correlated exposures due to multicollinearity in their genetically proxied values. This can be equivalently understood as a problem of conditionally weak instruments (Sanderson et al., 2019) that is only avoided if the genetic instruments are strongly associated with each exposure conditionally on all the other included exposures. An assessment of the extent to which this assumption is satisfied can be made using the conditional F-statistic, with a value of 10 for all exposures being considered sufficiently strong (Sanderson et al., 2019). In settings where multiple highly correlated exposures are analysed, the genetic instruments are much more likely to be conditionally weak. In this event, causal estimates can be subject to extreme bias and are therefore unreliable. Estimation bias can be addressed to a degree by fitting weak instrument robust MVMR methods (Sanderson et al., 2020; Wang et al., 2021), but at the cost of a further reduction in precision. Furthermore, MVMR models investigate causal effects for each individual exposure, under the assumption that it is possible to intervene and change each one whilst holding the others fixed. In the high-dimensional, highly correlated exposure setting, this is potentially an unachievable intervention in practice.

Our aim in this paper is instead to use dimensionality reduction approaches to concisely summarise a set of highly correlated genetically predicted exposures into a smaller set of independent principal components (PCs). We then perform MR directly on the PCs, thereby estimating their effect on health outcomes of interest. We additionally suggest employing sparsity methods to reduce the number of exposures that contribute to each PC, in order to improve the interpretability of the resulting factors.

Using summary genetic data for multiple highly correlated lipid fractions and CHD (Kettunen et al., 2016; Nelson et al., 2017), we first illustrate the pitfalls encountered by the standard MVMR approach. We then apply a range of sparse principal component analysis (sPCA) methods within an MVMR framework to the data. Finally, we examine the comparative performance of the sPCA approaches in a detailed simulation study, in a bid to understand which ones perform best in this setting.

Results

Workflow overview

Our proposed analysis strategy is presented in Figure 1. Using summary statistics for the single-nucleotide polymorphism (SNP)-exposure (γ̂) and SNP-outcome (Γ̂) association estimates, where γ̂ (dimensionality 148 SNPs × 97 exposures) exhibits strong correlation, we initially perform a PCA on γ̂. Additionally, we perform multiple sPCA modalities that aim to provide sparse loadings that are more interpretable (block 3, Figure 1). The choice of the number of PCs is guided by permutation testing or an eigenvalue threshold. Finally, the PCs are used in place of γ̂ in an IVW MVMR meta-analysis to obtain an estimate of the causal effect of the PC on the outcome. Similar to PC regression and in line with unsupervised methods, the outcome (the SNP-outcome associations Γ̂ and corresponding standard errors SE(Γ̂)) is not transformed by PCA and is used in the second-step MVMR on the original scale. In the real data application and in the simulation study, the best balance of sparsity and statistical power was observed for the method of sparse component analysis (SCA) (Chen and Rohe, 2021). This favoured method and the related steps are coded in an R function and are available at GitHub (https://github.com/vaskarageorg/SCA_MR/, copy archived at Karageorgiou, 2023).
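To make the workflow concrete, here is a minimal base-R sketch of steps 2–4, assuming objects gamma_hat (the 148 × 97 SNP-exposure matrix), Gamma_hat, and se_Gamma (the SNP-outcome associations and their standard errors) are already loaded; the object names are illustrative, and this is a sketch rather than the packaged SCA_MR code.

```r
# Steps 2-3: PCA on the SNP-exposure association matrix (148 x 97)
pca    <- prcomp(gamma_hat, center = TRUE, scale. = TRUE)
m      <- 6                    # informative PCs (permutation / KSS choice)
scores <- pca$x[, 1:m]         # 148 x m score matrix replacing gamma_hat

# Step 4: second-step IVW multivariable MR on the PC scores
ivw <- lm(Gamma_hat ~ 0 + scores, weights = 1 / se_Gamma^2)
summary(ivw)$coefficients      # causal effect estimate per PC
```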

Proposed workflow.

Step 1: MVMR on a set of highly correlated exposures. Each genetic variant contributes to each exposure. The high correlation is visualised in the similarity of the single-nucleotide polymorphism (SNP)-exposure associations in the correlation heatmap (top right). Steps 2 and 3: PCA and sparse PCA on γ̂. Step 4: MVMR analysis on a low-dimensional set of principal components (PCs). X: exposures; Y: outcome; k: number of exposures; PCA: principal component analysis; MVMR: multivariable Mendelian randomisation.

UVMR and MVMR

A total of 66 traits were associated with CHD at or below the Bonferroni-corrected level (p = 0.05/97, Table 1). Two genetically predicted lipid exposures (M.HDL.C, M.HDL.CE) were negatively associated with CHD and 64 were positively associated (Table 1). In an MVMR model including only the 66 Bonferroni-significant traits, fitted with the purpose of illustrating the instability of IVW-MVMR in conditions of severe collinearity, the conditional F-statistic (CFS) (Materials and methods) was lower than 2.2 for all exposures (with a mean of 0.81), highlighting the severe weak instrument problem. In Appendix 1—figure 3, the MVMR estimates are plotted against the corresponding univariable MR (UVMR) estimates. We interpret the reduction in identified effects as a result of the drop in precision in the MVMR model (variance inflation). Only the independent causal estimate for ApoB reached our pre-defined significance threshold, and it was less precise (OR_MVMR (95% CI): 1.031 (1.012, 1.37); OR_UVMR (95% CI): 1.013 (1.01, 1.016)) (Appendix 1—figure 4). We note that, for M.LDL.PL, the UVMR estimate (OR_UVMR = 1.52 (1.35, 1.71), p < 10^-10) had an opposite sign to the MVMR estimate (OR_MVMR = 0.905 (0.818, 1.001)).

To see if the application of a weak instrument robust MVMR method could improve the analysis, we applied MR GRAPPLE (Wang et al., 2021). As the GRAPPLE pipeline suggests, the same three-sample MR design described above was employed. In the external selection GWAS (GLGC), a total of 148 SNPs surpassed the genome-wide significance level for the 97 exposures and were used as instruments. Although the method did not identify any of the exposures as significant at the nominal or Bonferroni-adjusted significance level, the strongest association among all exposures was observed for ApoB.

Table 1
Univariable Mendelian randomisation (MR) results for the Kettunen dataset with coronary heart disease (CHD) as the outcome.

Positive: positive causal effect on CHD risk; Negative: negative causal effect on CHD risk.

 | Positive | Negative
VLDL | M.VLDL.C, M.VLDL.CE, M.VLDL.FC, M.VLDL.L, M.VLDL.P, M.VLDL.PL, M.VLDL.TG, XL.VLDL.L, XL.VLDL.PL, XL.VLDL.TG, XS.VLDL.L, XS.VLDL.P, XS.VLDL.PL, XS.VLDL.TG, XXL.VLDL.L, XXL.VLDL.PL, L.VLDL.C, L.VLDL.CE, L.VLDL.FC, L.VLDL.L, L.VLDL.P, L.VLDL.PL, L.VLDL.TG, S.VLDL.C, S.VLDL.FC, S.VLDL.L, S.VLDL.P, S.VLDL.PL, S.VLDL.TG | None
LDL | LDL.C, L.LDL.C, L.LDL.CE, L.LDL.FC, L.LDL.L, L.LDL.P, L.LDL.PL, M.LDL.C, M.LDL.CE, M.LDL.L, M.LDL.P, M.LDL.PL, S.LDL.C, S.LDL.L, S.LDL.P | None
HDL | S.HDL.TG, XL.HDL.TG | M.HDL.C, M.HDL.CE

PCA

Standard PCA with no sparsity constraints was used as a benchmark. PCA estimates a square loadings matrix of coefficients with dimension equal to the number of genetically proxied exposures K. The coefficients in the first column define the linear combination of exposures with the largest variability (PC1). Column 2 defines PC2, the linear combination of exposures with the largest variability that is also independent of PC1, and so on. This way, the resulting factors reduce redundant information and project highly correlated SNP-exposure associations to the same PC. In PC1, very low-density lipoprotein (VLDL)- and low-density lipoprotein (LDL)-related traits were the major contributors (Figure 2a). ApoB received the 8th largest loading (0.1371; the maximum was 0.1403, for cholesterol content in small VLDL) and LDL.C received the 48th largest (0.1147). In PC2, high-density lipoprotein (HDL)-related traits were predominant. The 18 largest positive loadings are HDL-related, and 12 describe either large or extra-large HDL traits. PC3 received its loadings mainly from VLDL traits. Six components were deemed significant through the permutation-based approach (Figure 1, Materials and methods).

Heatmaps for the loadings matrices in the Kettunen dataset for all methods (one with no sparsity constraints [a], four with sparsity constraints under different assumptions [b–e]).

The number of the exposures plotted on the vertical axis is smaller than K=97 as the exposures that do not contribute to any of the sparse principal components (PCs) have been left out. Blue: positive loading; red: negative loading; yellow: zero.

The second-step IVW regression (step 4 in Figure 1) gives the MVMR results. A modest yet precise association of PC1 with CHD was observed (OR = 1.002 (1.0015, 1.0024), p < 10^-10). Conversely, PC3 was marginally significant for CHD at the 5% level (OR = 0.998 (0.998, 0.999), p = 0.049). Since γ̂ has been transformed with linear coefficients (visualised in the loadings matrix, Figure 2), the underlying causal effects are also transformed, and interpreting the magnitude of an effect estimate is not straightforward, since it reflects the effect on the outcome of changing the PC by one unit; however, the significance and orientation of effects can be interpreted. When positive loadings are applied to exposures that are positively associated with the outcome, the MR estimate is positive; conversely, if negative loadings are applied, the MR estimate is negative.

sPCA methods

We next employed multiple sPCA methods (Table 2) that each shrink a proportion of loadings to zero. The way this is achieved differs in each method. Their underlying assumptions and details on differences in optimisation are presented in Table 2 and further described in Materials and methods.

Table 2
Overview of sparse principal component analysis (sPCA) methods used.

KSS: Karlis-Saporta-Spinaki criterion. Package: R package implementation; Features: short description of the method; Choice: method of selection of the number of informative components in real data; PCs: number of informative PCs.

Method | Package | Authors | Features | Choice | PCs
RSPCA | pcaPP | Croux et al., 2013 | Robust sPCA (RSPCA), different measure of dispersion (Qn) | Permutation, KSS | 6
SFPCA | Code in publication, Supplementary Material | Guo et al., 2010 | Fused penalties for block correlation | KSS | 6
sPCA | elasticnet | Zou et al., 2006 | Formulation of sPCA as a regression problem | KSS | 6
SCA | SCA | Chen and Rohe, 2021 | Rotation of eigenvectors for approximate sparsity | Permutation, KSS | 6

RSPCA (Croux et al., 2013)

The optimisation and the KSS criterion (Karlis et al., 2003) select six PCs as informative. The loadings in Figure 2 show a VLDL- and LDL-dominant PC1, with some small and medium HDL-related traits. LDL.C and ApoB received the 5th and 40th largest positive loadings, respectively. PCs 1 and 6 are positively associated with CHD, and PCs 3 and 5 negatively so (Appendix 1—table 1).

SFPCA (Guo et al., 2010)

The KSS criterion retains six PCs. The loadings matrix (Figure 2) shows the ‘fused’ loadings with identical colouring. In the first two PCs, all groups are represented. ApoB and LDL.C received the seventh and tenth largest loadings, respectively, together with other metabolites (Figure 2). PC1 (all groups represented) was positively associated with CHD and PC4 (negative loadings from large HDL traits) negatively so (Appendix 1—table 1).

sPCA (Zou et al., 2006)

The number of non-zero metabolites per PC was constrained per component (see Appendix 1—figure 6). Under this level of sparsity, the permutation-based approach suggested that six sPCs should be retained. Seventy exposures received a zero loading across all components. PC1 is constructed predominantly from LDL traits and is positively associated with CHD, but this association does not retain statistical significance at the nominal level in the MVMR analysis (Figure 3). Only PC4, which comprises small and medium HDL traits (Figure 2b), appears to exert a negative causal effect on CHD (OR (95% CI): 0.9975 (0.9955, 0.9995)). The other PCs were not associated with CHD (all p values > 0.05, Appendix 1—table 1).

Comparison of univariable Mendelian randomisation (UVMR) and multivariable MR (MVMR) estimates and presentation of the major group represented in each principal component (PC) per method.

SCA (Chen and Rohe, 2021)

Six components were retained after a permutation test. In the final model, five metabolites were regularised to zero in all PCs (CH2.DB.ratio, CH2.in.FA, FAw6, S.VLDL.C, S.VLDL.FC, Figure 2). Little overlap is noted among the metabolites. PC1 receives loadings from LDL and IDL, and PC2 from VLDL. The contribution of HDL to PCs is split in two, with large and extra-large HDL traits contributing to PC3 and small and medium ones to PC4. PC1 and PC2 were positively associated with CHD (Appendix 1—table 1, Figure 3). PC4 was negatively associated with CHD.

Comparison with UVMR

In principle, all PC methods derive independent components. This is strictly the case in standard PCA, where subsequent PCs are perfectly orthogonal, but it is only approximately true in the sparse implementations. We hypothesised that UVMR and MVMR should therefore provide similar causal estimates of the associations of the metabolite PCs with CHD. The results are presented in Figure 3, and the concordance between UVMR and MVMR is quantified with the R² from a linear regression. The closest agreement of the causal estimates is observed for PCA. Among the sparse methods, SCA (Chen and Rohe, 2021) and sPCA (Zou et al., 2006) provide similarly consistent estimates, whereas some disagreement is observed in the RSPCA (Croux et al., 2013) estimate for PC6 on CHD.

A previous study implicated LDL.C and ApoB as causal for CHD (Zuber et al., 2020b). In Appendix 1—figure 7, we present the loadings for these two exposures across the PCs for the various methods. Ideally, we would like to see metabolites contributing to a small number of components for the sparse methods. Using a visualisation technique proposed by Kim and Kim, 2012, this is indeed observed (see Appendix 1—figure 7). In PCA, LDL.C and ApoB contribute to multiple PCs, whereas the sPCA methods limit this to one PC. Only in RSPCA do these exposures contribute to two PCs. In the second-step IVW meta-analysis, it appears that the PCs comprised predominantly of VLDL/LDL and HDL traits robustly associate with CHD, with differences among methods (Table 3).

Table 3
Results for principal component analysis (PCA) approaches.

Overlap: Percentage of metabolites receiving non-zero loadings in ≥1 component. Overlap in PC1, PC2: overlap as above but exclusively for the first two components which by definition explain the largest proportion of variance. Very low-density lipoprotein (VLDL), low-density lipoprotein (LDL), and high-density lipoprotein (HDL) significance: results of the IVW regression model with CHD as the outcome for the respective sPCs (the sPCs that mostly received loadings from these groups). The terms VLDL and LDL refer to the respective transformed blocks of correlated exposures; for instance, VLDL refers to the weighted sum of the correlated VLDL-related γ^ associations, such as VLDL phospholipid content and VLDL triglyceride content. †: RSPCA projected VLDL- and LDL-related traits to the same PC (sPC1). ‡: SCA discriminated HDL molecules in two sPCs, one for traits of small- and medium-sized molecules and one for large- and extra-large-sized.

 | PCA | RSPCA | SFPCA | sPCA | SCA
Overlap | 1 | 0.938 | 1 | 0.187 | 0.196
Overlap in PC1, PC2 | 1 | 0.433 | 1 | 0.010 | 0
Sparse % | 0 | 0.474 | 0.082 | 0.835 | 0.796
VLDL significance in MR† | Yes | No | Yes | No | Yes
LDL significance in MR | No | Yes | No | No | Yes
HDL significance in MR‡ | Yes | Yes | Yes | No | No
Small, medium HDL significance in MR | Yes | No | Yes | Yes | Yes

Instrument strength

Instrument strength for the chosen PCs was assessed via an F-statistic, calculated using a bespoke formula that accounts for the PC transformation (see Materials and methods and Appendix). The F-statistics for all transformed exposures exceed the cutoff of 10. There was a trend for the first components to be more strongly instrumented in all methods (see Appendix 1—figure 5), which is to be expected. In the MVMR analyses, the CFS for all exposures was less than three. Thus the move to a PC-based analysis substantially improved instrument strength and mitigated weak instrument bias.

Simulation studies

We consider the case of a data generating mechanism that reflects common scenarios found in real-world applications. Specifically, we consider a set of exposures X that can be partitioned into blocks based on shared genetics. Certain groups of variants contribute exclusively to specific blocks of exposures, while having no effect on other blocks. This in turn leads to substantial correlation within exposure blocks and a much reduced correlation between exposure blocks, the latter due only to shared confounding. This is visualised in Figure 4a. This data structure acts to reduce the instruments’ strength in jointly predicting all exposures. The dataset consists of n participants, k exposures, p SNPs (with both k and p consisting of b discrete, equally sized blocks), and a continuous outcome, Y. We split the simulation results into one illustrative example (for didactic purposes) and one high-dimensional example.
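A sketch of this data generating mechanism for the two-block illustrative case follows; all parameter values here are ours, chosen purely for illustration.

```r
set.seed(1)
n <- 5000; b <- 2; p_per <- 10; k_per <- 3    # blocks, SNPs and exposures per block
p <- b * p_per; k <- b * k_per
G <- matrix(rbinom(n * p, 2, 0.3), n, p)      # genotypes
U <- rnorm(n)                                 # shared confounder
X <- matrix(0, n, k)
for (blk in 1:b) {
  snps <- ((blk - 1) * p_per + 1):(blk * p_per)
  for (j in ((blk - 1) * k_per + 1):(blk * k_per)) {
    # each block of SNPs instruments only its own block of exposures
    X[, j] <- G[, snps] %*% runif(p_per, 0.1, 0.3) + U + rnorm(n)
  }
}
beta <- c(rep(0.3, k_per), rep(0, k - k_per)) # only block 1 affects the outcome
Y <- X %*% beta + U + rnorm(n)
```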

Simulation Study Outline.

(a) Data generating mechanism for the simulation study, illustrative scenario with six exposures and two blocks. In red boxes, the exposures that are correlated due to a shared genetic component are highlighted. (b) Simulation results for six exposures and three methods (sparse component analysis [SCA] [Chen and Rohe, 2021], principal component analysis [PCA], multivariable Mendelian randomisation [MVMR]). The exposures that contribute to Y (X1-3) are presented in shades of green colour and those that do not in shades of red (X4-6). In the third panel, each exposure is a line. In the first and second panels, the PCs that correspond to these exposures are presented as single lines in green and red. Monte Carlo SEs are visualised as error bars. Rejection rate: proportion of simulations where the null is rejected.

Simple illustrative example

We generate data under the mechanism presented in Figure 4a, that is, with six individual exposures X1, …, X6 split into two distinct blocks (X1–X3 and X4–X6). A continuous outcome Y is generated that is causally affected only by the exposures in block 1 (X1–X3). A range of sample sizes was used in the simulation in order to give a range of CFS values from approximately 2 to 80. We apply (a) MVMR with the six individual exposures separately, and (b) PCA and SCA. The aim of approach (b) is to demonstrate the impact of reducing the six-dimensional exposure into two PCs, so that the first PC has high loadings for block 1 (X1–X3) and the second PC has high loadings for block 2 (X4–X6). Although two PCs were chosen by both PCA methods using the KSS criterion in a large majority of cases, to simplify the interpretation of the simulation we fixed the number of PCs at two a priori across all simulations.

Our primary focus was to assess the rejection rates of MVMR versus PCA rather than estimation, as the two approaches are not comparable in this regard. To do this, we treat each method as a test, which obtains true positive (TP), true negative (TN), false positive (FP), and false negative (FN) results. In MVMR, a TP is an exposure that is causal in the underlying model and whose causal estimate is deemed statistically significant. In the PCA and sPCA methods, this classification is determined by (a) which exposure(s) determine each PC and (b) whether the causal estimate of this PC is statistically significant. Exposures are considered major contributors to a PC if (and only if) their individual PC loading is larger than the average loading. If the causal effect estimate of a PC is deemed statistically significant, its major contributors that are causal and non-causal in the underlying model are counted as TPs and FPs, respectively. TNs and FNs are defined similarly. Type I error therefore corresponds to the FP rate and power corresponds to the TP rate. All statistical tests were conducted at the α/B = α/2 = 0.025 level.
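A sketch of this classification rule, assuming a matrix loadings (exposures × PCs), a vector pvals of PC-level p-values, and a logical vector causal flagging the truly causal exposures; taking the average loading over the whole matrix (rather than per PC) is our reading of the rule above.

```r
# An exposure counts as a positive call if it is a major contributor
# (absolute loading above the average) to at least one significant PC
classify_pc_test <- function(loadings, pvals, causal, alpha = 0.025) {
  major  <- abs(loadings) > mean(abs(loadings))          # major contributors
  sig    <- pvals < alpha                                # significant PCs
  called <- apply(major[, sig, drop = FALSE], 1, any)    # positive calls
  c(TP = sum(called & causal),  FP = sum(called & !causal),
    TN = sum(!called & !causal), FN = sum(!called & causal))
}
```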

SCA, PCA, and MVMR type I error and power are shown in the three panels (left to right) in Figure 4b, respectively. These results suggest an improved power in identifying true causal associations both with PCA and SCA compared with MVMR when the CFS is weak, albeit at the cost of an inflated type I error rate. As sample size and CFS increase, MVMR performs better. For the PC of the second block’s null exposures, PCA seems to have a suboptimal type I error control (red in Figure 4b). In this low-dimensional setting, the benefit of PCA therefore appears to be limited.

Complex high-dimensional example

The aim of the high-dimensional simulation is to estimate the comparative performance of the methods in a wider setting that more closely resembles real data applications. We simulate genetic data and individual-level exposure and outcome data for between K = 30 and K = 60 exposures, arranged in B = 4–6 blocks. The underlying data generating mechanism and the process of evaluating method performance are identical to the illustrative example, but the numbers of variants, exposures, and blocks are increased. We amalgamate rejection rate results across all simulations by calculating sensitivity (SNS) and specificity (SPC) as:

(1) $\mathrm{SNS}=\frac{TP}{TP+FN},\qquad \mathrm{SPC}=\frac{TN}{TN+FP},$

and then compare all methods by their area under the estimated receiver-operating characteristic (ROC) curve (AUC) using the meta-analytical approach of Reitsma et al., 2005. Briefly, the Reitsma method performs a bivariate meta-analysis of multiple studies that report both the sensitivity and specificity of a diagnostic test, in order to provide a summary ROC curve. A bivariate model is required because sensitivity and specificity estimates are correlated. In our setting, the ‘studies’ represent the results of different simulation settings with distinct numbers of exposures and blocks. Youden’s index J (J = SNS + SPC − 1) was also calculated, with high values being indicative of good performance.
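A sketch of these performance measures; the commented call to the reitsma() function of the mada R package, which fits the bivariate Reitsma model, is indicative of how the summary ROC could be obtained.

```r
# Equation 1 and Youden's J from a vector of counts c(TP=, FP=, TN=, FN=)
perf <- function(ct) {
  sns <- ct[["TP"]] / (ct[["TP"]] + ct[["FN"]])
  spc <- ct[["TN"]] / (ct[["TN"]] + ct[["FP"]])
  c(SNS = sns, SPC = spc, J = sns + spc - 1)
}

# Summary ROC across simulation settings via the bivariate Reitsma model,
# with one row of counts (TP, FN, FP, TN) per simulation setting:
# fit <- mada::reitsma(data.frame(TP = tp, FN = fn, FP = fp, TN = tn))
# summary(fit)   # summary sensitivity/specificity and the sROC curve
```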

Two sPCA methods (SCA [Chen and Rohe, 2021] and sPCA [Zou et al., 2006]) consistently achieve the highest AUC (Figure 5). This advantage is mainly driven by an increase in sensitivity for both of these methods compared with MVMR. A closer look at the individual simulation results corroborates the discriminatory ability of these two methods, as they consistently achieve high sensitivities (Appendix 1—figure 10). Both standard and Bonferroni-corrected MVMR performed poorly in terms of AUC (AUC 0.712 and 0.660, respectively), due to poor sensitivity. PCA performed poorly, with almost equal TP and FP results (AUC 0.560). PCA and RSPCA did not accurately identify negative results (median specificity 0 and 0.192, respectively). This extreme result can be understood by looking at the individual simulation results in Appendix 1—figure 10; both PCA and RSPCA cluster in the upper right end of the plot, suggesting a consistently low performance in identifying TN exposures. Specifically, the estimates from both of these methods were very precise across simulations, and this resulted in many FP results and low specificity. We note a differing performance between the top-ranking methods (SCA, sPCA); while the two methods are on average similar, the results of SCA are more variable in both sensitivity and specificity (Table 4). The Youden’s indexes for these methods are also the highest (Figure 5a). Varying the sample sizes (mean instrument strength in γ̂ from F̄ = 221 to F̄ = 1109 and mean conditional F-statistic from 0.34 to 12.81) (Appendix 1—figure 9) suggests a similar benefit for the sparse methods.

Extrapolated receiver-operating characteristic (ROC) curves for all methods.

SCA: sparse component analysis (Chen and Rohe, 2021) sPCA: sparse PCA (Zou et al., 2006) RSPCA: robust sparse PCA (Croux et al., 2013); PCA: principal component analysis; MVMR: multivariable Mendelian randomisation; MVMR_B: MVMR with Bonferroni correction.

Even with large sample sizes (F̄ = 1109.78, mean CFS = 12.82), MVMR still cannot discriminate between positive and negative exposures as robustly as the sPCA methods. A major determinant of the accuracy of these methods appears to be the number of truly causal exposures: in a repeat simulation with only four of the exposures being causal, there was a drop in sensitivity and specificity across all methods. sPCA methods still outperformed the other methods in this case, however (Appendix 1—table 2).

Table 4
Sensitivity and specificity presented as median and interquartile range across all simulations.

Presented as median sensitivity/specificity and interquartile range across all simulations; AUC: area under the receiver-operating characteristic (ROC) curve.

 | PCA | SCA | sPCA | RSPCA | MVMR_B | MVMR
AUC | 0.56 | 0.919 | 0.941 | 0.644 | 0.660 | 0.712
Sensitivity | 1 (0.1) | 1 (0.21) | 1 (0.047) | 0.667 (0.251) | 0.222 (0.2) | 0 (0.076)
Specificity | 0 (0.02) | 0.925 (0.772) | 0.936 (0.097) | 0.192 (0.104) | 0.960 (0.048) | 1 (0)
Youden’s J | 0 | 0.584 | 0.778 | −0.061 | 0.192 | 0.044

What determines PCA performance?

In the hypothetical example of Figure 4, and indeed any other example, if two PCs are constructed, PCA cannot differentiate between causal and non-causal exposures within a block. The only information used in this stage of the workflow (steps 2 and 3 in Figure 1) is the SNP-exposure association matrix. Thus, the determinant of projection to common PCs is genetic correlation and correlation due to confounding, rather than how these blocks affect Y. If only a few of the exposures within a block truly influence Y, it is likely that PCA will falsely identify the entire block as causal. This means the proportion of non-causal exposures within blocks of exposures that truly influence Y is a key determinant of specificity. To test this, we varied the proportion of non-causal exposures by varying the sparsity of the causal effect vector β and repeated the simulations, keeping the other simulation parameters fixed. As fewer exposures within blocks are truly causal, the performance in identifying TN results drops for SCA (Appendix 1—figure 12). However, our simulation still provides a means of making comparisons across methods for a given family of simulated data.

Discussion

We propose the use of sPCA methods in MVMR in order to reduce high-dimensional exposure data to a lower number of PCs and infer the latter’s causal contribution. As the dimensionality of available datasets for MR investigations increases (e.g. in NMR experiments [Biobank, 2018] and imaging studies), such approaches are becoming ever more useful. Our results support the notion that sPCA methods retain the information of the initial exposures. Although there is no single optimal method that correctly factorises the SNP-exposure matrix, the goal is to find some grouping of the multiple, correlated exposures such that it may resemble a latent biological variable that generates the data. The SCA (Chen and Rohe, 2021) and sPCA (Zou et al., 2006) methods performed best in simulation studies and the SCA approach performed best in the positive control example of lipids and CHD. While conventional MR approaches did not identify any protective exposures for CHD, SCA identified a cluster of small and medium HDL exposures that appeared to independently reduce the risk of CHD. This particular subset of HDL particles has previously been implicated in coronary calcification (Ditah et al., 2016) and shown to be associated with coronary plaque stability (Wang et al., 2019).

By employing sPCA methods in a real dataset (Kettunen et al., 2016), we show that the resulting PCs group VLDL, LDL, and HDL traits together, whilst metabolites acting via alternative pathways receive zero loadings. This is a desirable property and indicates that the second-step MR enacted on the PCs obtains causal estimates for intervening on biologically meaningful pathways (Chipman and Gu, 2005). This is in contrast with unconstrained PCA, in which all metabolites contribute to all PCs. Previously, Sulc et al., 2020, used PCA in MR to summarise highly correlated anthropometric variables. To our knowledge, this is the first investigation of different sPCA modalities in the context of MR. Our simulation studies revealed that sPCA methods exhibited superior performance compared to standard PCA, which had high FP rates, and MVMR, which had high FN rates. We additionally provide a number of ways to choose the number of components in a data-driven manner. Our proposed approach of an sPCA method naturally reduces overlap across components; for instance, in a paper by Sulc et al., 2020, the authors use PCA and identify four independent axes of variation of body morphology. There are PCs that are driven in common by trunk, arm, and leg lean mass, basal metabolic rate, and BMI; a hypothetical benefit with sparse methods would be a reduction of this overlap. This is an important topic for further research. When using PCA without any sparsity constraints, our simulation studies revealed numerous FP results, the opposite failure mode to the poor performance seen in MVMR, which instead produced FNs; the PCA estimates were often misleadingly precise. Although appropriate transformations of the exposures were achieved, we recommend exploring additional forms of type I error control to improve the performance of PCA. Nonetheless, the sparse methods exhibited superior performance compared to both PCA and MVMR.

A previous work on sparse methods in genetics proposed their usefulness in multi-tissue transcriptome-wide association studies (Feng et al., 2021). A finding of the study is that leveraging correlated gene expressions across tissues with sparse canonical correlation analysis improves power to detect gene-trait pairs. Our approach that combines MR with sPCA also showed an improvement in power to detect causal effects of exposures on outcomes.

Our approach is conceptually different from the robust methods that have been developed for standard MVMR in the presence of weak instruments, such as MR GRAPPLE, which attempt to directly adjust point estimates for weak instrument bias but are not a panacea, especially in the high-dimensional setting discussed here (Wang et al., 2021). Furthermore, our approach reduces the need for a pre-selection of which exposures to include in an MVMR model. We present a complementary workflow through which we can include all available exposures with no prior selection, collate them in uncorrelated and interpretable components, and then investigate the causal contribution of these groups of exposures. It avoids the risk of generating spurious results in such an extreme setting of high collinearity, compared with the MVMR IVW and MR GRAPPLE formulations. For example, a previous three-sample MR study that assessed the effects of 82 lipoprotein subfraction risk factors on CHD used UVMR and a robust extension of MVMR. A positive effect of VLDL- and LDL-related subfractions on CHD was reported, consistent in magnitude across the sizes of the subfractions (Zhao et al., 2021). Results were less definitive on the effect of HDL subfractions of varying size on CHD, with both positive and negative effect estimates observed. In our study, the HDL subfractions were uniformly projected to similar subspaces, yielding a single component that was mainly HDL-populated in all models, except for the SCA model, which projected the small/medium and large/extra-large HDL traits to two different components. In all cases, the association of the sPCs with CHD was very low in magnitude. Nevertheless, the direction of effects was in line with the established knowledge on the relationship between lipid classes and CHD.

Within the sPCA methods, there were differences in the results. The sPCA method of Zou et al., 2006, favoured a sparser model in which fewer than 10 metabolites per PC were used. This observation was also made by Guo et al., 2010. The SCA method (Chen and Rohe, 2021) achieved good separation of the traits, and very little overlap was observed. A separation of HDL-related traits according to size, not captured by the other methods, was noted. The clinical relevance of higher-resolution HDL profiling, with larger HDL molecules mainly associated with worse outcomes, has been previously reported (Kontush, 2015).

Limitations

In the present study, many tuning parameters needed to be set in order to calibrate the PCA methods. We therefore caution against extending our conclusions on the best method outside the confines of our simulation and our specific real data example. Not all available sparse dimensionality reduction approaches were assessed in our investigation and other techniques could have provided better results.

The use of sparsity may raise the concern of neglecting horizontal pleiotropy if a variant influences multiple components, but its weight in a given component is shrunk to zero. This would not occur for standard PCA where no such shrinkage occurs. Currently, our approach is not robust to pleiotropy operating via exposures not included in the model. Our plan is to address this as future work by incorporating median-based MVMR models into the second stage, as done by Grant and Burgess, 2021.

Interpretability

The sPCA approach outlined in this paper enables the user to perform an MVMR analysis with a large number of highly correlated exposures, but one potential downside is that the effect sizes are not as interpretable. Interpreting the causal effects of PCA components (sparse or otherwise) poses a significant challenge. This is because intervening and lowering a specific PC could be actioned by targeting any of the exposures that have a non-zero loading within it. This is in contrast to the causal effect of a single exposure, where the mode of intervention is clear. However, the same ambiguity is often a feature of a real-world intervention, such as a pharmaceutical drug. That is, even if a drug targets a specific lipid fraction, it may exert multiple effects on other lipid fractions that are not pre-planned and are a result of interlinked physiological cascades, overlapping genetics, and interdependent relationships. Identifying such underlying biological mechanisms and pathways is a key step in deciding on the relevance of a PCA derived effect estimate compared to a standard MVMR estimate. We therefore suggest that this approach is best suited for initially testing how large groups of risk factors independently affect health outcomes, before a more focused MVMR within the PCA-identified exposures.

Another limitation of our work is that the instrument selection could have been more precise since we used an external study that investigated total lipid fractions rather than specific size and composition profile characteristics. Future more specific GWAS could improve this, leading to better separation in the genetic predictions of all lipid fractions.

Conclusion

In the present study, we underline the utility of sparse dimensionality reduction approaches for highly correlated exposures in MR. We present a comprehensive review of methods available to perform dimensionality reduction and describe their performance in a real data application and a simulation.

Materials and methods

Approximate relationship to a one-sample analysis


Our approach works directly with summary statistics gleaned from genome-wide association studies that are used to furnish a two-sample analysis, but it can also be motivated from the starting point of one-sample individual-level data. Specifically, assume the data generating model is $y = X\beta + u$, $X = G\gamma + V$, so that the second-stage model of a two-stage least squares procedure is $y = \hat{X}\beta + \tilde{u}$, where $\hat{X} = G\hat{\gamma}$. PCA on $\hat{X}$ is approximately equivalent to PCA on $\hat{\gamma}$, since $\hat{X}^T\hat{X} = \hat{\gamma}^T\hat{\gamma}$ if $G$ is normalised so that $\hat{\gamma}$ represents standardised effects. In the appendix we provide further simulation results that show that the loadings matrices derived from a PCA on $\hat{X}$ and on $\hat{\gamma}$ are asymptotically equivalent.
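A small simulation sketch of this equivalence (all parameter values are ours); the joint OLS fit below stands in for per-SNP summary statistics, with which it coincides asymptotically when the SNPs are independent.

```r
set.seed(2)
n <- 10000; p <- 50; k <- 8
G <- scale(matrix(rbinom(n * p, 2, 0.3), n, p))      # normalised genotypes
gamma <- matrix(rnorm(p * k, sd = 0.2), p, k)
X <- G %*% gamma + matrix(rnorm(n * k), n, k)
gamma_hat <- solve(crossprod(G), crossprod(G, X))    # fit of each exposure on all SNPs
X_hat <- G %*% gamma_hat                             # first-stage fitted values
V1 <- svd(X_hat)$v                                   # loadings from decomposing X_hat
V2 <- svd(gamma_hat)$v                               # loadings from decomposing gamma_hat
max(abs(abs(V1) - abs(V2)))                          # small: loadings agree up to sign
```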

Data

The risk factor dataset reports the associations of 148 genetic variants (SNPs) with 118 NMR-measured metabolites (Table 5; Kettunen et al., 2016). This study reports a high-resolution profiling of mainly lipid subfractions. To focus on lipid-related traits, we excluded amino acids and retained 97 exposures for further analyses. Fourteen size/density classes of lipoprotein particles (ranging from extra-small [XS] HDL to extra-extra-large [XXL] VLDL) were available and, for each class, the traits of total cholesterol, triglyceride, phospholipid, and cholesterol ester content, and the average diameter of the particles, were additionally provided. Through the same procedure, NMR-based estimates of the genetic associations of amino acids, apolipoproteins, fatty and fluid acids, ketone bodies, and glycerides were also included. Instrument selection for this dataset has been previously described (Zuber et al., 2020a). Namely, 185 variants were selected based on association with at least one of LDL-cholesterol, HDL-cholesterol, or triglycerides in the external sample of the Global Lipid Genetics Consortium at the genome-wide level (p < 5×10^-8) (Do et al., 2013). This set was then pruned to avoid the inclusion of SNPs in linkage disequilibrium (LD) (threshold: r² < 0.05) and filtered (variants within a distance of less than 1 megabase pair of each other were excluded), resulting in the final set. This pre-processing strategy was performed with a view to studying CHD risk.

Positive control outcome assessment is recommended in MR as an approach of investigating an outcome on which a risk factor has an established causal effect (Burgess et al., 2020). We used CHD as a positive control outcome, given that lipid fractions are known to modulate its risk, with VLDL- and LDL-related traits being positively associated with CHD and HDL-related traits negatively (Burgess et al., 2020). Summary estimates from the CARDIoGRAMplusC4D Consortium and UK Biobank meta-analysis (Nelson et al., 2017; Deloukas et al., 2013) were used. For both datasets, a predominantly European population was included, as otherwise spurious results could have arisen due to population-specific, as opposed to CHD-specific, differences (Vilhjálmsson and Nordborg, 2013).

Table 5
Two-sample Mendelian randomisation (MR).

Study characteristics.

 | First author | Year | PMID | N | Cases | Controls | Study name (population)
Metabolites | Kettunen | 2016 | 27005778 | 24,925 | — | — | NMR GWAS (European)
CHD | Nelson | 2017 | 28714975 | 453,595 | 113,937 | 339,658 | CARDIoGRAMplusC4D (European)

MR assumptions


The first assumption (IV1) states that the IVs must be associated with the exposure. The second (IV2) states that the IVs must be independent of all confounders of the risk factor–outcome association, and the third (IV3) that the IVs are independent of the outcome conditional on the exposure and the confounders. The validity of the final two assumptions cannot be directly verified with the available data. For the inferences to be valid, it is necessary that all three IV assumptions apply (Davies et al., 2018). These assumptions are illustrated for the case of two exposures in Figure 6a. In situations when IV3 is not deemed likely, additional risk factors that are possible mediators can be introduced in an MVMR model (Burgess and Harshfield, 2016). Additional assumptions should hold for MVMR results to be valid. In particular, (a) all included exposures have to be associated with at least one of the IVs, and (b) there should be no evidence of multicollinearity. If there is significant overlap, for example if G1 and G2 are associated with both exposures X1 and X2, but with X2 only through X1 as in Figure 6b, this may result in conditionally weak instruments (Sanderson et al., 2020). The latter assumption, and the way it limits the eligibility of high-dimensional, correlated exposures, is a key motivation for the present work.

Directed acyclic graph (DAG) for the multivariable Mendelian randomisation (MVMR) assumptions.

IV2, IV3: instrumental variable assumptions 2 and 3.

UVMR and MVMR


To examine how each of the metabolites is causally associated with CHD, a UVMR approach was first undertaken under the two-sample summary data framework. This uses SNP-exposure and SNP-outcome GWAS summary statistics (γ̂ and Γ̂, respectively), obtained by regressing the exposure or outcome trait on each SNP individually. Usually, an additional adjustment has been made for variables such as age, gender, and genetic PCs. A UVMR analysis is the most straightforward way to obtain an estimate for the causal effect of the exposure, but it is only reliable if all SNPs are valid instruments. However, in the Kettunen dataset, where the exposure set comprises 97 highly correlated lipid fractions, one variant may influence multiple exposures. This will generally invalidate a UVMR analysis, and an MVMR approach is more appropriate.

Estimating Equation 2 provides a general means for representing the mechanics of an UVMR or MVMR analysis:

(2) $Q_A=\sum_{j=1}^{p}\frac{1}{\sigma_{Aj}^{2}}\left(\hat{\Gamma}_{j}-\sum_{k=1}^{K}\beta_{k}\hat{\gamma}_{kj}\right)^{2}, \quad \text{where } \sigma_{Aj}^{2}=\sigma_{Yj}^{2}+\sum_{k=1}^{K}\beta_{k}^{2}\sigma_{kj}^{2}+\sum_{k=1}^{K}\sum_{l<k}2\beta_{l}\beta_{k}\sigma_{lkj},$

where

  • γ^kj represents the association of SNP j with exposure k, with variance σkj2;

  • Γ^j represents the association of SNP j with the outcome, with variance σYj2;

  • βk represents the causal effect of exposure k on the outcome, to be estimated;

  • σlkj represents Cov(γ^lj,γ^kj). If individual-level data is not available to obtain this quantity, it can be estimated using the intercept from LD score regression (Bulik-Sullivan et al., 2015).

In a UVMR analysis there is only one exposure, so that K = 1, whereas in an MVMR analysis K ≥ 2 (or, in the case of the Kettunen data, K = 97). In an IVW UVMR or MVMR analysis, the causal effect parameters are obtained by finding the values $\hat{\beta}_1,\ldots,\hat{\beta}_K$ that minimise $Q_A$, under the simplifying assumption that $\sigma_{Aj}^{2}=\sigma_{Yj}^{2}$. This is justified if all causal effects are zero, or if the uncertainty in the SNP-exposure associations is negligible (the no measurement error [NOME] assumption). In the general MVMR context, the NOME assumption is approximately satisfied if the CFS for each exposure is large, but if it is not, then IVW MVMR estimates will suffer from weak instrument bias. For exposure k, $CFS_k$ takes the form

$CFS_k=\frac{Q_{X_k}}{p-(K-1)},$

where:

(3) $Q_{X_k}=\sum_{j=1}^{p}\frac{1}{\sigma_{X_{k}j}^{2}}\left(\hat{\gamma}_{kj}-\sum_{m\neq k}\hat{\delta}_{m}\hat{\gamma}_{mj}\right)^{2}, \quad \text{where } \sigma_{X_{k}j}^{2}=\sigma_{kj}^{2}+\sum_{m\neq k}\hat{\delta}_{m}^{2}\sigma_{mj}^{2}+\sum_{m\neq k}\sum_{\substack{l\neq k\\ l<m}}2\hat{\delta}_{m}\hat{\delta}_{l}\sigma_{mlj},$

where $\hat{\delta}$ is estimated by regressing exposure $X_k$ on all the other exposures $X_{-k}$ by OLS. If an exposure k can be accurately predicted conditionally on the other exposures, then there will not be sufficient variation, or ‘good heterogeneity’ (Sanderson et al., 2019), in $Q_{X_k}$, and the CFS will generally be small. This will be the case whenever there is a high degree of correlation between the SNP-exposure association estimates, for example in an MVMR analysis of all 97 lipid fractions in the Kettunen data. One option to address this would be to use a weak instrument robust MVMR method, such as MR GRAPPLE (Wang et al., 2021). This method performs estimation using the full definition of $\sigma_{Aj}^{2}$ in Equation 2 and a robust loss function that additionally penalises larger outliers in the data. It can work well for MVMR analyses of relatively small numbers of exposures (e.g. up to 10) and relatively weak instruments (CFS as low as 3), but the dimension and high correlation of the Kettunen data are arguably too challenging. This motivates the use of dimension reduction techniques, such as PCA.
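As a sketch, under the NOME simplification the IVW MVMR fit reduces to a weighted regression, and a summary-level version of the conditional F-statistic can be computed as below. Two assumptions of ours: we regress $\hat{\gamma}_k$ on $\hat{\gamma}_{-k}$ as the summary-data analogue of regressing $X_k$ on $X_{-k}$, and we drop the covariance terms $\sigma_{lkj}$ for simplicity.

```r
# IVW MVMR: minimising Q_A under sigma_Aj^2 = sigma_Yj^2 (NOME) is a
# weighted regression of Gamma_hat on the columns of gamma_hat
ivw_mvmr <- lm(Gamma_hat ~ 0 + gamma_hat, weights = 1 / se_Gamma^2)

# Conditional F-statistic for exposure k, ignoring covariance terms
cfs <- function(gamma_hat, se_gamma, k) {
  fit   <- lm(gamma_hat[, k] ~ 0 + gamma_hat[, -k])   # delta-hat by OLS
  delta <- coef(fit)
  s2    <- se_gamma[, k]^2 + se_gamma[, -k]^2 %*% delta^2
  Q_xk  <- sum(residuals(fit)^2 / s2)
  Q_xk / (nrow(gamma_hat) - (ncol(gamma_hat) - 1))    # Q_Xk / (p - (K - 1))
}
```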

Dimension reduction via PCA

PCA is based on the singular value decomposition (SVD) of the p × K matrix of SNP-exposure associations $\hat{\gamma}$:

$\hat{\gamma}=UDV^{T},$

where U and V are orthogonal matrices and D is a diagonal matrix whose entries are the singular values, the squares of which are proportional to the variance explained by each component. V is the loadings matrix and serves as an indicator of the contribution of each metabolite to the transformed space of the PCs. The matrix UD (the PCs/scores matrix) is used in the second-step IVW regression in place of $\hat{\gamma}$. As the estimation of V does not aim for sparsity, all exposures contribute to some degree to all components, making interpretation more complicated. Therefore, we assessed multiple sPCA methods that intentionally limit this.
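In base R, this decomposition and the second-step regression can be sketched as follows (object names as in the earlier sketch):

```r
s  <- svd(gamma_hat)          # gamma_hat = U D V^T
V  <- s$v                     # loadings: exposure contributions to each PC
UD <- s$u %*% diag(s$d)       # scores matrix, used in place of gamma_hat
m  <- 6                       # informative components (see 'Choice of components')
ivw_pc <- lm(Gamma_hat ~ 0 + UD[, 1:m], weights = 1 / se_Gamma^2)
```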

sPCA (Zou et al.)


sPCA by Zou et al., 2006, estimates the loadings matrix through an iterative procedure that progressively penalises exposures so that they do not contribute to certain PCs. In principle, this leads to a clearer picture of the composition of each PC. This is performed as follows:

  1. For a fixed matrix A, the following elastic net problem is solved: $\xi_{j}=\arg\min_{\xi}(\alpha_{j}-\xi)^{T}\hat{\gamma}^{T}\hat{\gamma}(\alpha_{j}-\xi)+\lambda_{1,j}\|\xi\|_{1}+\lambda_{2}\|\xi\|_{2}^{2}$, where j indexes the PCs.

  2. For a fixed Ξ, the SVD $\hat{\gamma}^{T}\hat{\gamma}\,\Xi=UDV^{T}$ is computed and A is updated as $A=UV^{T}$.

  3. Repeat steps 1 and 2 until convergence.

Here, $\lambda_{1,j}$ are L1 penalty parameters that induce sparsity, $\lambda_{2}$ is an L2 parameter that offers numerical stability, and Ξ is a matrix with sparsity constraints for each exposure (Zou and Xue, 2018). As a result of the additional L1 penalty $\lambda_{1,j}\|\xi\|_{1}$, there is sparsity in the loadings matrix and only some of the SNP-exposure associations in $\hat{\gamma}$ contribute to each PC, specifically a particular subset of highly correlated exposures in $\hat{\gamma}$.
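An illustrative call to the elasticnet implementation listed in Table 2; the number of components and of non-zero loadings per component are our choices for the sketch, not the values tuned in the paper.

```r
library(elasticnet)
# 6 sparse components with 10 non-zero loadings each ('varnum' sparsity)
fit <- spca(gamma_hat, K = 6, para = rep(10, 6),
            type = "predictor", sparse = "varnum")
fit$loadings                          # 97 x 6 sparse loadings matrix
scores <- gamma_hat %*% fit$loadings  # sparse PC scores for the second-step MVMR
```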

RSPCA


This approach differs in that it employs a robust measure of dispersion (the Qn estimator) that is not unduly influenced by single large values of $\hat{\gamma}$ that contribute a large amount to the total variance (Rousseeuw and Croux, 1993; Heckert, 2003). As above, an L1 norm is used to induce sparsity. For optimisation, the trade-off product between explained variance and sparsity is maximised. The method does not impose a single λ value on all PCs, thus allowing different degrees of sparsity per component.
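An illustrative call to the pcaPP implementation listed in Table 2; the single penalty value here is arbitrary, whereas per-component penalties can be selected by the trade-off product optimisation mentioned above.

```r
library(pcaPP)
# Sparse robust PCA using the Qn dispersion estimator
fit <- sPCAgrid(gamma_hat, k = 6, method = "qn", lambda = 1.5)
unclass(fit$loadings)                 # sparse, robust loadings matrix
```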

SFPCA (Guo et al., 2010)


A method that can, in theory, exploit distinct correlation structures. Its goal is to derive a loadings matrix in which highly positively correlated variables have loadings similar in sign and highly negatively correlated ones opposite in sign. Similar magnitudes also tend to be obtained for variables that lie in the same blocks of the correlation matrix. Unlike the sPCA optimisation in Zou et al., 2006, sparse fused PCA (SFPCA) works by assigning highly correlated variables the exact same loadings, as opposed to numerically similar ones (Figure 2d). This is achieved with two penalties in the objective function: λ1, which regulates the L1 norm that induces sparsity, and λ2, which weights the fused penalty on discordant loadings. A grid search is used to identify appropriate values for λ1 and λ2. The following criterion is used:

$\min_{A,\Xi}\ \left\|\hat{\gamma}-\hat{\gamma}\,\Xi A^{T}\right\|_{F}^{2}+\lambda_{1}\|\xi\|_{1}+\lambda_{2}\sum_{s,t}|\rho_{s,t}|\left|\xi_{s,k}-\mathrm{sign}(\rho_{s,t})\,\xi_{t,k}\right|,$

such that $A^{T}A=I_{K}$. The ‘fused’ penalty (last term) purposely penalises discordant loadings for variables that are highly correlated. The choice of the sparsity parameters is based on a Bayesian information criterion (BIC).
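For concreteness, the fused penalty of this criterion can be computed as below for a candidate loadings matrix; this is a sketch based on our reconstruction of the formula above, not the authors' published code.

```r
# Fusion penalty for a loadings matrix Xi (exposures x PCs) and an
# exposure correlation matrix rho, summing over exposure pairs and PCs
fusion_penalty <- function(Xi, rho) {
  pen <- 0
  for (s in 1:(nrow(Xi) - 1)) for (t in (s + 1):nrow(Xi))
    pen <- pen + abs(rho[s, t]) *
      sum(abs(Xi[s, ] - sign(rho[s, t]) * Xi[t, ]))
  pen
}
```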

SCA


SCA (Chen and Rohe, 2021) is motivated by the relative inadequacy of the classic approaches in promoting significant sparsity. It addresses this by rotating the eigenvectors to achieve approximate sparsity whilst keeping the proportion of variance explained the same. Simulation studies show the technique works especially well in high-dimensional settings such as gene expression data, among other examples (Chen and Rohe, 2021).
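As a rough illustration of the rotation idea (not the SCA algorithm itself, which uses its own rotation criterion), one can rotate the leading singular vectors towards sparsity with a varimax rotation while preserving the spanned subspace:

```r
m   <- 6
V   <- svd(gamma_hat)$v[, 1:m]       # leading right singular vectors
rot <- varimax(V)                    # rotate towards approximate sparsity
Z   <- unclass(rot$loadings)         # rotated, approximately sparse loadings
scores <- gamma_hat %*% Z            # component scores for the second-step MVMR
```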

Choice of components


In all dimensionality reduction approaches applied to correlated variables, there is no upper limit to how many transformed factors can be estimated. However, only a proportion of them are likely to be informative, in the sense of collectively explaining a meaningful amount of the total variance in the original dataset. To guide this choice, a permutation-based approach was implemented (Coombes and Wang, 2019) as follows. First, the $\hat{\gamma}$ matrix is randomly permuted and the sPCA method of interest is applied to the permuted set. The computed eigenvalues are assumed to come from a null distribution consistent with a non-informative component. This process is repeated multiple times (e.g. perm = 1000) and the mean eigenvalues across all components are stored. Finally, the sPCA method is performed on the original $\hat{\gamma}$ matrix, and whichever component has an eigenvalue larger than the mean of the permuted sets is considered informative and kept. Due to the computational complexity of the permutation method, particularly for SFPCA, an alternative method, the KSS criterion (Karlis et al., 2003), was also used. This is based on a simple correction to the minimum non-trivial eigenvalue ($\text{Cutoff}_{KSS}=1+2\sqrt{\frac{K-1}{p-1}}$). The authors show that the method is robust to non-normal distributions (Karlis et al., 2003). Although KSS was not compared with the above-described permutation approach, it performed better than simpler approaches, such as choosing those PCs whose eigenvalue is larger than 1 (the Kaiser criterion), the broken stick method (Jolliffe, 2002), and the Velicer method (Velicer, 1976).
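A sketch of both selection rules using standard PCA (the same logic applies to the sparse variants):

```r
# Permutation test: compare observed eigenvalues to eigenvalues of
# column-permuted data, which mimic non-informative components
n_perm  <- 1000
obs     <- prcomp(gamma_hat, scale. = TRUE)$sdev^2
null_ev <- replicate(n_perm, {
  perm <- apply(gamma_hat, 2, sample)     # permute each column independently
  prcomp(perm, scale. = TRUE)$sdev^2
})
keep_perm <- which(obs > rowMeans(null_ev))

# KSS criterion: keep eigenvalues above 1 + 2 * sqrt((K - 1) / (p - 1))
K <- ncol(gamma_hat); p <- nrow(gamma_hat)
keep_kss <- which(obs > 1 + 2 * sqrt((K - 1) / (p - 1)))
```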

Instrument strength of PCs


In MVMR, the IV1 assumption requires a set of genetic variants that robustly associate with at least one of the exposures X (MR assumptions). This is quantified by the CFS in Equation 3 (Sanderson et al., 2020). With summary statistics of the SNP-X associations $\hat{\gamma}_{p,k}$ (p: SNP, k: exposure), the mean F-statistic for exposure k used in a standard UVMR analysis is the far simpler expression

(4) $F_{k}=\frac{1}{p}\sum_{j=1}^{p}\left(\frac{\hat{\gamma}_{j,k}}{SE(\hat{\gamma}_{j,k})}\right)^{2}$

We provide a dedicated formula for estimating the instrument strength of the PCs that is closely related to Equation 4 rather than Equation 3. This simplification is due to the fact that an MVMR analysis of a set of (orthogonal) PCs is essentially equivalent to a UVMR analysis of each component separately. The full derivation is reported in the section ‘Instrument strength’ of the Appendix.

Appendix 1

Instrument strength

Since we transform $\hat{\gamma}$ and obtain a matrix of lower dimensionality, the formula in Equation 4 cannot be used, as there is no longer a one-to-one correspondence of the SEs with the PCs. Likewise, a conditional F-statistic for the PCs cannot be computed, for the same reason. We aim to arrive at a modified formula that bypasses this issue. For this purpose, we take advantage of two concepts: first, an expression of the F-statistic for an exposure k ($F_k$) in matrix notation and, second, the use of this expression to estimate F-statistics for the PCs ($F_{PC}$) from the decomposition of $\hat{\gamma}$.

We make the assumption that the uncertainty in the $\hat{\gamma}_{G,X_k}$ estimates is similar for all K exposures, that is, the uncertainty estimates for $\hat{\gamma}_{G,X}$ do not substantially differ among exposures. This is not implausible, as the uncertainty is predominantly driven by sample size and minor allele frequency (Giambartolomei et al., 2014). Specifically, the authors of Giambartolomei et al., 2014, show that

$\mathrm{Var}(\hat{\gamma}_{X_{k}})=\frac{\mathrm{Var}(X_{k})}{n_{k}\,\mathrm{MAF}(1-\mathrm{MAF})},$

where MAF is the minor allele frequency, $n_k$ is the sample size for exposure k, and $\mathrm{Var}(X_k)$ is the phenotypic variance. What this means is that, in experiments such as Kettunen et al., 2016, where $n_k$ is the same across all exposures and $\mathrm{Var}(X_k)$ can be standardised to 1, the main driver of differences in $\mathrm{Var}(\hat{\gamma}_{X_k})$ is differences in MAF. As MAF is the same for each SNP across all exposures, the collation of SEs across exposures per SNP is well motivated.

We can then define a matrix Σ as follows.

$\Sigma=\begin{bmatrix}\overline{SE_{1}^{2}} & 0 & \cdots & 0\\ 0 & \overline{SE_{2}^{2}} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \overline{SE_{p}^{2}}\end{bmatrix}, \qquad \overline{SE_{j}^{2}}=\frac{\sum_{k=1}^{K}SE_{j,k}^{2}}{K}$

The elements on the diagonal represent the mean variance of $\hat{\gamma}$ for each SNP, and all off-diagonal elements are zero. What is achieved through this is a summary of the uncertainty in the SNP-X associations that is not sensitive to the number of exposures. Instead of Equation 4, we can then express the vector of mean F-statistics for the exposures, $F_{1\text{-}K}=[F_{1},F_{2},\ldots,F_{K}]$, as

(5) $F_{1\text{-}K}\ (K\times 1)=\frac{1}{p}\times \mathrm{diag}\!\left[\hat{\gamma}^{T}_{(K\times p)}\,\Sigma^{-1}_{(p\times p)}\,\hat{\gamma}_{(p\times K)}\right]$

where $\hat{\gamma}$ is the matrix of the SNP-exposure associations. In a simulation study, we generate data under the mechanism in Appendix 1—figure 1a, with the strength of association differing across the three exposures. The estimates from the two formulae (Equation 5 and Equation 4) align well (Appendix 1—figure 1b), supporting their equivalence.

Appendix 1—figure 1
Simplified F-statistic Estimation.

(a) Data generating mechanism. Three exposures with different degrees of strength of association with G are generated γ1=1,γ2=0.5,γ3=0.1. (b) F-statistic for the three exposures X1,X2,X3 as estimated by the formulae in Equation 5 (horizontal axis) and Equation 4 (vertical axis).

Our second aim is to use this matrix notation formula for the F-statistic to quantify the instrument strength of each PC with a respective F-statistic ($F_{PC}$). As presented above, we are no longer limited by the need for the dimensions of the point estimates and their uncertainty to match exactly, and we can use the formula in Equation 5, substituting the PCs for $\hat{\gamma}$. For the PCA approach, where we decompose $\hat{\gamma}$ as $\hat{\gamma}=UDV^{T}$ and carry forward $M\ll K$ non-trivial PCs, we use the matrix UD in place of $\hat{\gamma}$. Then, the mean $F_{PC}$ can be estimated as follows.

$$\underset{M\times 1}{F_{PC_{1\text{-}M}}} = \frac{1}{p}\,\operatorname{diag}\!\left[\underset{M\times p}{(UD)^{T}}\ \underset{p\times p}{\Sigma^{-1}}\ \underset{p\times M}{UD}\right] \tag{6}$$

The vector F_{PC,1-M} = [F_{PC_1}, F_{PC_2}, ..., F_{PC_M}] contains the F_PC statistics for the M PCs. In a similar manner, we estimate F_PC for the sPCA methods, but instead of the scores matrix UD we use the scores of the sparse methods. We illustrate the performance of this approach in a simulation study with an exposure-generation configuration identical to the one presented in Figure 5. In a configuration with b=6 blocks of p=30 genetically correlated exposures (Figure 4), we vary the strength of association γ per block. This way, the first block has the highest strength of association and the last block the lowest, quantified by a lower mean F-statistic for the exposures of that block (red, Appendix 1—figure 2). The instrument strength of the PCs and the sPCs follows closely the corresponding F-statistics of the individual exposures; in other words, in a PC of five exposures, F_{PC_1}, F_{SCA_1}, and F_{1-5} align well (Appendix 1—figure 2).
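A hedged sketch of Equation 6 under the same assumptions as above, substituting the SVD scores UD for γ^ (function and variable names are illustrative):

```r
# F-statistics of the PCs (Equation 6) from the SVD of gamma_hat.
# gamma_hat (p x K) and se_hat (p x K) as before; M is the number of PCs
# carried forward.
pc_f_stat <- function(gamma_hat, se_hat, M) {
  p <- nrow(gamma_hat)
  s <- svd(gamma_hat)                                          # gamma_hat = U D V^T
  UD <- s$u[, 1:M, drop = FALSE] %*% diag(s$d[1:M], nrow = M)  # PC scores (p x M)
  sigma_diag <- rowMeans(se_hat^2)                             # diagonal of Sigma
  colSums(UD^2 / sigma_diag) / p                               # one F_PC per column
}
```

For the sparse methods, the scores matrix of the sparse decomposition would be passed in place of UD.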

Appendix 1—figure 2
Distributions of the F-statistics in principal component analysis (PCA) methods and individual (not transformed) exposures.

Exposure data in different blocks are simulated with a decreasing strength of association, and the correlated blocks map to principal components (PCs). Each distribution represents the F-statistics for one PC. In the case of the individual exposures (red), the distributions represent the F-statistics for the corresponding exposures. Individual: individual exposures without any transformation; PCA: F-statistics for PCA; SCA: sparse component analysis (Chen and Rohe, 2021); sPCA: sparse PCA as described by Zou et al., 2006.

MVMR and UVMR

Appendix 1—figure 3
Multivariable Mendelian randomisation (MVMR) and univariable MR (UVMR) estimates.

Only ApoB is strongly associated with coronary heart disease (CHD). All SEs are larger in the MVMR model (range of SE_MVMR/SE_UVMR: 2.7–225.96).

MVMR with PC scores

Appendix 1—table 1
Estimated causal effects of principal components (PCs) on coronary heart disease (CHD) risk.

PCA: principal component analysis; SCA: sparse component analysis; sPCA: sparse PCA (Zou et al., 2006); RSPCA: robust sparse PCA; SFPCA: sparse fused PCA. OR: odds ratio; LCI, UCI: lower and upper confidence interval bounds.

PC     Method   OR       LCI      UCI
PC1    PCA      1.002    1.0015   1.0024
PC2    PCA      1.0002   0.9995   1.001
PC3    PCA      1.0013   1.0001   1.0024
PC4    PCA      0.9985   0.997    0.9999
PC5    PCA      0.9999   0.9978   1.002
PC6    PCA      0.9993   0.9976   1.0009
PC1    SCA      1.0027   1.0005   1.0049
PC2    SCA      1.0027   1.0004   1.005
PC3    SCA      0.9997   0.9976   1.0019
PC4    SCA      0.9965   0.9941   0.9989
PC5    SCA      1.0002   0.998    1.0024
PC6    SCA      1.0034   0.9989   1.0078
PC1    sPCA     1.0019   0.9999   1.0039
PC2    sPCA     1.0003   0.9986   1.002
PC3    sPCA     0.9988   0.997    1.0005
PC4    sPCA     0.9975   0.9955   0.9995
PC5    sPCA     0.998    0.9954   1.0006
PC6    sPCA     0.9998   0.9982   1.0014
PC1    RSPCA    1.0017   1.0006   1.0027
PC2    RSPCA    0.9998   0.9983   1.0013
PC3    RSPCA    0.9954   0.9918   0.999
PC4    RSPCA    0.9989   0.9969   1.0008
PC5    RSPCA    0.9944   0.9903   0.9986
PC6    RSPCA    1.01     1.0013   1.0188
PC1    SFPCA    1.002    1.0015   1.0025
PC2    SFPCA    0.9991   0.9979   1.0004
PC3    SFPCA    0.9998   0.9991   1.0006
PC4    SFPCA    0.9982   0.9967   0.9997
PC5    SFPCA    1.0001   0.9977   1.0025
PC6    SFPCA    1.0009   0.9985   1.0033

MVMR with IVW and GRAPPLE

A small negative effect for M.LDL.PL is nominally significant in Appendix 1—figure 4. This is not concordant with the UVMR direction of effect. In GRAPPLE, no traits surpass the nominal significance threshold.

Appendix 1—figure 4
Multivariable Mendelian randomisation (MVMR) with IVW (left) and MVMR with GRAPPLE (Zhao et al., 2021) (right).

Only the 66 exposures that are significant in univariable MR (UVMR) are put forward in these models. In IVW (left), ApoB shows nominal significance. In MR GRAPPLE (right), apolipoprotein B has the lowest p-value, but no trait reaches nominal significance.

Appendix 1—figure 5
F-statistics for principal components (PCs) and sparse PCs.

The formula derived in Equation 5 is used. Black: principal component analysis (PCA) (no sparsity constraints); yellow: sparse component analysis (SCA); red: sparse PCA (Zou et al., 2006); blue: sparse robust PCA; green: sparse fused PCA. The dashed line represents the cutoff of 10, conventionally considered the minimum F-statistic for an exposure to be deemed well instrumented. The green line diverges from the pattern of decreasing instrument strength but, referring to the loadings heatmap (Figure 2), the 4th sparse PC in the fused sPCA receives negative loadings from multiple very low-density lipoprotein (VLDL)- and low-density lipoprotein (LDL)-related traits, which may in turn cause the large F-statistic.

Appendix 1—figure 6
Bayesian information criterion (BIC) for different numbers of metabolites regularized to 0.

The lowest value is achieved with one non-zero exposure per component. However, six non-zero exposures per component achieved a similarly low BIC, and this value was selected.

Appendix 1—figure 7
Trajectories for the loadings of total cholesterol in low-density lipoprotein (LDL) and ApoB in all methods.

Principal component analysis (PCA) loadings imply a contribution of LDL.c and ApoB to all principal components (PCs). In the sparse methods, this is limited to one PC (two for RSPCA).

Simulation study design

We generate minor allele frequencies MAF ~ U(0.01, 0.5) and an unmeasured confounder U ~ N(0, 5). Conditionally on the MAF, we generate the genotypes G ~ Bin(2, MAF). We define Q blocks and divide the P variants of G into Q blocks, each containing P/Q variants. This approach aims to model the biological phenomenon of multiple traits being influenced by a common set of variants. For instance, the traits of LDL cholesterol content and LDL phospholipid content appear to be associated with the variants in a largely similar manner (Kettunen et al., 2016).

For each block q = 1, 2, ..., Q, we define a matrix γ_q whose elements γ_1 - γ_{P/Q} are non-zero and drawn from N(4, 1). This matrix directs only certain variants to influence certain exposures, leading to multicollinearity. We then generate the K exposures sequentially per block, following the parametric equation X_q = γ_q G_q + U + ε_X. This way, specific genetic variants generate blocks of exposures as shown in Figure 4, and the exposures X_1 - X_{K/Q} are highly correlated. We derive the SNP-exposure associations γ^ from this dataset by regressing the exposures on G (X ~ γ^G). We set K = 50-100, P = 100-150, and the number of blocks to 5-8, letting these values vary across simulations in order to generate more varied diagnostic accuracy values. For a given sample size, s = 1000 simulation studies were performed.

To retain the workflow of a two-sample MR, we generate a second exposure set identically as above but in an independent sample. This step is important as it guarantees the no-sample-overlap assumption of two-sample MR (Burgess et al., 2016). Based on this second X matrix, we generate the outcome Y = βX + U + ε_Y. The vector of causal effects β is generated with any number of exposures in the two blocks being causal, including the null. We obtain the SNP-outcome associations as Y ~ ΓG. The effect of the data generating mechanism in a single dataset is visualised in Appendix 1—figure 8.
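The following R sketch illustrates this data-generating mechanism under stated assumptions; the specific constants (sample size, causal effect size, and reading N(0, 5) as variance 5) are our choices for illustration, not taken from the paper:

```r
# Illustrative sketch of the block data-generating mechanism (names assumed).
set.seed(1)
n <- 1000; P <- 100; K <- 50; Q <- 5           # individuals, SNPs, exposures, blocks
maf <- runif(P, 0.01, 0.5)
G <- sapply(maf, function(f) rbinom(n, 2, f))  # n x P genotype matrix
U <- rnorm(n, 0, sqrt(5))                      # unmeasured confounder
gamma <- matrix(0, P, K)                       # block-structured effect matrix
for (q in seq_len(Q)) {
  snps <- ((q - 1) * P / Q + 1):(q * P / Q)    # SNPs in block q
  exps <- ((q - 1) * K / Q + 1):(q * K / Q)    # exposures in block q
  gamma[snps, exps] <- rnorm(length(snps) * length(exps), 4, 1)
}
X <- G %*% gamma + U + matrix(rnorm(n * K), n, K)
# Only the first block of exposures is causal for Y; in the two-sample
# workflow, a second independent draw of G and X would be used here.
beta <- c(rep(0.5, K / Q), rep(0, K - K / Q))
Y <- X %*% beta + U + rnorm(n)
```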

Appendix 1—figure 8
Example for the block correlation in γ^ (n=5000, K=77) induced by the data generating mechanism in Figure 4.

In this example, the mean F-statistic is 231.2 and the mean CFS is 3.21.

The methods employed are PCA, SCA, sPCA (elasticnet implementation, spca_en), RSPCA, and MVMR with and without Bonferroni correction (Figure 5).

Due to computational complexity, sPCA in the elasticnet package (spca_en in Figure 5) was implemented with a simplification regarding the sparsity parameter: the number of non-zero exposures per component was assumed to be P/Q. For SCA, we use the cross-validation method in the PMA R package (Witten and Tibshirani, 2020).

To generate the summary ROC curves presented in Figure 5, we treated simulation results as individual studies and meta-analysed them with the bivariate method of Reitsma et al., 2005, as implemented in R. The logit sensitivity and specificity (which are correlated) are jointly meta-analysed by modelling them with a bivariate normal distribution, employing a random-effects model that accommodates this correlation as heterogeneity.
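As an illustration, the bivariate Reitsma model is available in the mada R package; the package choice and the toy counts below are our assumptions, not a statement of the authors' exact pipeline:

```r
# Hedged sketch of the summary-ROC step using the bivariate Reitsma model
# from the mada package (package choice assumed; counts are illustrative).
library(mada)
# One row per simulation: counts of true/false positive and negative exposures
sim_counts <- data.frame(TP = c(8, 9, 7), FN = c(2, 1, 3),
                         FP = c(1, 2, 2), TN = c(9, 8, 8))
fit <- reitsma(sim_counts)  # joint random-effects model on logit Se/Sp
summary(fit)
plot(fit, sroclwd = 2)      # extrapolated summary ROC curve
```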

The results for a set of simulations with N=3000 individuals are presented in Appendix 1—figure 10. We observe that MVMR and Bonferroni-corrected MVMR estimates cluster in the bottom left of the plot, indicating low sensitivity. The estimates from SCA and sPCA_EN spread across the upper left of the plot and often achieve a sensitivity of 1.00. PCA and RSPCA mainly provide highly sensitive estimates but perform relatively worse on specificity. Performance with increasing sample sizes is consistent, with a benefit for the sparse methods (Appendix 1—figure 9).

Appendix 1—figure 9
AUC performance of multivariable Mendelian randomisation (MVMR) and dimensionality reduction methods for increasing sample sizes.

Two sparse methods (sparse component analysis [SCA], sparse principal component analysis [sPCA]) perform better compared with PCA and MVMR, with improving performance as the sample size increases. CFS: conditional F-statistic.

Appendix 1—figure 10
Individual results from s=1000 simulations.

Appendix 1—table 2
Simulation study on only four exposures (out of the total K=50) contributing to the outcome Y.

A drop in sensitivity and specificity is observed for sparse component analysis (SCA) and sparse principal component analysis (sPCA) compared with the simulation configuration in Table 4.

              PCA       SCA         sPCA        RSPCA       MVMR        MVMR_B
AUC           0.799     0.714       0.859       0.492       0.511       0.675
SNS           1, 0.03   0.75, 0.25  1, 0.17     0.5, 0.25   0.25, 0.25  0, 0
SPC           0, 0.2    0.76, 0.46  0.66, 0.18  0.37, 0.15  0.94, 0.07  1, 0
Youden’s J    0         0.428       0.625       –0.029      0.105       0.032

SNS: sensitivity; SPC: specificity.

Comparison of PCA in 1SMR and 2SMR

We fix the number of blocks of exposures to b=2 and the number of exposures to K=6. In both 1SMR and 2SMR, the loadings matrices of X^ and γ^_GX are of dimensionality p × p (p being the number of SNPs) and can therefore be compared for similarity. We use two similarity measures: (a) the sum of squared differences between the one-sample and two-sample loadings matrices,

$$S = \sum_{j=1}^{P}\sum_{i=1}^{P}\left(V_{ij,\mathrm{1SMR}} - V_{ij,\mathrm{2SMR}}\right)^2,$$

and (b) the R² from a linear regression of the one-sample PC loadings on the two-sample PC loadings. As expected, R² increases and S decreases with increasing sample size.
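A minimal sketch of the two measures, assuming the loadings matrices are held in ordinary R matrices (names are ours):

```r
# Similarity of one-sample and two-sample loadings matrices (both p x p).
loading_similarity <- function(V1smr, V2smr) {
  S  <- sum((V1smr - V2smr)^2)  # sum of squared differences
  # R^2 from regressing the vectorised 1SMR loadings on the 2SMR loadings
  R2 <- summary(lm(as.vector(V1smr) ~ as.vector(V2smr)))$r.squared
  c(S = S, R2 = R2)
}
```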

Appendix 1—figure 11
Top panel: R2; bottom panel: similarity of loadings (S) between one-sample Mendelian randomisation (MR) and two-sample MR (Nsim=10,000).

Proportion of non-causal exposures as a predictor of performance

For a given sample size N=1000, we vary the proportion of non-causal SNPs (PNC) to examine performance. The total number of exposures is drawn from the range K ∈ (100, 160), the number of SNPs from p ∈ (100, 200), and the number of blocks of exposures from B ∈ (5, 20). Since the applied pipeline does not discriminate by strength of association of X with Y at the dimensionality reduction stage, an increase in PNC was expected to induce false positives: in the increasingly sparse subvector β_b of the causal effects of X_b on Y, all X_b exposures would still be projected to the same PC. This PC is likely to remain associated with Y; however, specificity may drop. This is what was observed in the simulation study, where specificity gradually drops as PNC increases from 20% to 80%.

Appendix 1—figure 12
Specificity (ability to accurately identify true negative exposures) of sparse component analysis (SCA) as a different proportion of exposures in each block are causal for Y.

Data availability

The GWAS summary statistics for the metabolites (http://www.computationalmedicine.fi/data/NMR_GWAS/) and CHD (http://www.cardiogramplusc4d.org/) are publicly available. We provide code for the SCA function, the simulation study and related documentation on GitHub (https://github.com/vaskarageorg/SCA_MR/, copy archived at Karageorgiou, 2023).

References

    1. Deloukas P, Kanoni S, Willenborg C, Farrall M, Assimes TL, Thompson JR, et al., the CARDIoGRAMplusC4D Consortium, DIAGRAM Consortium, CARDIOGENICS Consortium, MuTHER Consortium, and Wellcome Trust Case Control Consortium (2013) Large-scale association analysis identifies new risk loci for coronary artery disease. Nature Genetics 45:25–33. https://doi.org/10.1038/ng.2480
    2. Do R, Willer CJ, Schmidt EM, Sengupta S, Gao C, Peloso GM, et al. (2013) Common variants associated with plasma triglycerides and risk for coronary artery disease. Nature Genetics 45:1345–1352. https://doi.org/10.1038/ng.2795
    3. Jolliffe IT (2002) Principal Component Analysis. New York: Springer.
    4. Witten D, Tibshirani R (2020) PMA: Penalized multivariate analysis, version 1.2.1. R package.
    5. Yavorska O, Staley J (2020) MendelianRandomization: Mendelian randomization package, version 0.5.0. R package.

Decision letter

  1. Siming Zhao
    Reviewing Editor; Dartmouth College, United States
  2. Martin R Pollak
    Senior Editor; Harvard Medical School, United States
  3. Stephen Burgess
    Reviewer; University of Cambridge, United Kingdom

Our editorial process produces two outputs: (i) public reviews designed to be posted alongside the preprint for the benefit of readers; (ii) feedback on the manuscript for the authors, including requests for revisions, shown below. We also include an acceptance summary that explains what the editors found interesting or important about the work.

Decision letter after peer review:

Thank you for submitting your article "Sparse Dimensionality Reduction Approaches in Mendelian Randomization with highly correlated exposures" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Martin Pollak as the Senior Editor. The following individual involved in the review of your submission has agreed to reveal their identity: Stephen Burgess (Reviewer #1).

The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this to help you prepare a revised submission.

Essential revisions:

1) Clarification of the method details. All three reviewers have clarification comments.

2) Interpretation of results. See comments from reviewer 3.

3) Demonstration of benefits in real data analysis. See comments from reviewers 1 and 3.

Reviewer #1 (Recommendations for the authors):

I'm not the best person to judge, but I'm not sure how accessible this work would be to someone who doesn't understand multivariable Mendelian randomization. Figure 1 is a great overview, but I'd appreciate the viewpoint of someone who is less familiar with these methods.

The description of the different sparse PCA methods was a helpful summary.

I'm not sure that the foray into instrument strength is valuable – it is over 2 pages and 2 figures, and I'm not sure how much this adds – maintaining high conditional F statistics is important, but their calculation is not a major part of the analysis report. The whole section is a bit distracting and disproportionate.

All MR estimates seem to be on quite odd scales – the estimates are all close to 1 – 1.001 or 0.999. Could the PCs be standardized, so that the estimates are per 1 SD increase in the PC? This would improve communication of the results. Currently, there is no sense of scale – what is the corresponding change in lipids on a meaningful scale?

While I understand that the main thrust of the manuscript is methodological, I didn't get a strong sense of what the conclusion from the applied analysis is. If the authors could communicate a strong finding, this would help to justify the use of the method. Eg what are the estimates for the HDL clusters in the SCA method? This isn't clear.

Figure 6 would be key to this – currently, it's not clear what the various PCs represent. Maybe a radar chart would help? Also, the caption for Figure 6 is incorrect – the caption from Figure 8 has inadvertently been duplicated here.

While I wouldn't want to insist that the authors change their paper, the obvious application of this work appears to be imaging studies – this is currently under-explored.

Figure 5 is a really helpful summary of the performance of the different methods in this investigation.

Reviewer #2 (Recommendations for the authors):

This is a nice paper, with convincing and very useful results. I have mainly some clarification questions and suggestions.

1. The main idea is to treat the matrix of SNP-exposure associations γ^ as explanatory variables in the regression Γ^ = γ^β + e, although this is clearly not a standard regression model, just that a WLS or GLS estimator for β is equivalent to the 2SLS estimator on single-sample data. I think it would therefore be instructive to link to the construction of PCs in single-sample individual-level data sets. There the model is y = Xβ + u, X = Gγ + V, and one could do a PCA on X. Alternatively, as the 2SLS estimator is least-squares in y = X^β + ũ where X^ = Gγ̃, one could perhaps consider PCA on X^. As X^TX^ = γ^TGTGγ^, this is getting close to the PCA on γ^ if GTG is normalized, for example GTG/n = I_p. If it is not normalized, then it is hard to make a connection. It may therefore be worthwhile considering PCA on the weighted γ^ as per the (extended) IVW estimator in (1) on page 4. With such a link in place, it seems easier to motivate the proposed method.

2. Are the covariances σ_lkj reported in the GWAS? I assume the estimators are simply IVW throughout, as implied at the top of page 5?

3. It is unclear to me why Δ is estimated by OLS in (2).

4. Notation. It would be easier to follow the notation if matrices are upper case and vectors lower case. That would change for example the current vector γ to a lower case letter and the current matrix γ to an upper case letter.

5. It is not straightforward to follow the steps for Sparse PCA on pages 5-6, as not all symbols are clear. It is not clear in 1. what "Setting a fixed matrix," means and what the α_j are. Also, define X_i. In 2 you simply do an SVD? Should there be "hats" on γ? UDV^T in step 2 are not the same values as in the SVD for γ^.

Reviewer #3 (Recommendations for the authors):

Here are some detailed comments.

1. Usually a good idea to number figures in the order they appear in the text (Figure 12 out of order).

2. UVMR is referred to as UMVR three times and as UMVMR once

3. At the beginning of the Methods section, it would be good to specify the dimension of γ. It is also important to specify how the variants used are selected.

4. Figure 1 is a bit chaotic and inconsistent with the text. Figure 1 top, the graph indicates that there are 118 exposures. I assume this was chosen because it is the number of lipid exposures in the data application, but it doesn't look like the lipid data is used in this figure. Figure 1 top right, the dimension of the matrix is given as L x 118. The variable L does not appear elsewhere in the paper. What is printed appears to be a 60x60 square matrix. Is this matrix meant to represent γ^Tγ^? The statement "MR estimates affected by multi-colinearity" printed next to this matrix doesn't appear to relate to the figure because the figure doesn't indicate anything about the MR estimates.

5. Figure 5. It looks like none of the matrices shown have all 118 rows. The sparse PCA matrix has many fewer exposures than the other methods. It would be good to clarify this in the caption.

6. Figure 6 is labeled as "Extrapolated ROX curves" but appears to be a comparison of UVMR and MVMR results. For the top right plot, I would have expected UVMR and MVMR results to be identical since the PCs are exactly orthogonal. Why is this not the case?

7. Page 12 describes Figure 5 as "The loadings for all methods (Figures 5a-e) are presented for LDL.c, a previously prioritised trait." I am confused about what "for LDL.c" means here. It appears that LDL.c is one of the traits in Figure 5 but these appear to be loadings across some large subset of traits?

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for resubmitting your work entitled "Sparse Dimensionality Reduction Approaches in Mendelian Randomization with highly correlated exposures" for further consideration by eLife. Your revised article has been evaluated by Martin Pollak (Senior Editor) and a Reviewing Editor.

The manuscript has been improved and all reviewers agree this study will be useful for the field. However, there are remaining issues and we would like to invite you to further revise this manuscript.

Revision outline:

1. Adding a deeper more nuanced discussion of what is meant by causality and how to interpret the components. The major advancement of this study comes from method development and it is important to not overstate the significance of results. See comment 1 from reviewer 3 and comment 4 from reviewer 1.

2. The choice of method. Discussing scenarios that will work or fail for the method will be helpful for the readers to better utilize this tool. See comment 4 from reviewer 1 and comment 2 from reviewer 3.

3. Adding details of simulation. Other clarification issues raised by reviewers.

Reviewer #1 (Recommendations for the authors):

1. While I appreciate the simulation study, and believe this is worthwhile, it strikes me that the primary reason for choosing one or other of these sparse methods is not performance in a simulation study, but rather how the methods define the PCs, and how the analyst could interpret the results. As there's no point running an analysis if you can't interpret the results. This will depend on the specific characteristics of the investigation, so I am not expecting a single method recommendation. But I'd appreciate a bit more acknowledgement of this in the choice of methods – what characteristics do these methods tend to result in? Interesting (and reassuring) to see that clusters from the SCA and sPCA methods look similar in Figure 2 (although then odd that in Table 3, the significance differs for the VLDL and LDL components…).

Overall, I appreciate the authors' manuscript in that it proposes sparse dimension reduction as a method and explores the properties of MVMR estimates from this approach. My sense is that it is difficult to say anything more authoritative/general than "hey, look – we tried this and it sort of worked!", but the authors do this in a way that is mostly clear (see comments above) and should be helpful to future researchers.

Reviewer #2 (Recommendations for the authors):

I am happy with this revision. The authors have responded well to my previous comments and suggestions, or clarified their preferred choices.

Reviewer #3 (Recommendations for the authors):

The resubmission has corrected some of the issues I raised previously. However, I have some remaining concerns, primarily about the interpretation of the proposed causal effects and the simulation results.

1. My largest issue is still in the causal interpretation of genetically determined PCs are sparse components. To me, I don't think there is a way to turn the components into well-defined exposures that admit a counterfactual or interventional definition of a causal effect. This approach seems closer to "causality adjacent" methods like TWAS. There was a paper a few years ago using sparse CCA in combination with TWAS (https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1008973) and this seems similar to that to me. I think one option for this paper would be to add a deeper discussion of what it means to test for a causal effect of a component that may or may not represent a latent biological variable. On one end of interpretation you have "association with genetically predicted exposure component" and then I think you could add as a bonus that if you have successfully identified a biological component, this could be interpretable as a causal effect.

2. Using the components introduces an issue of horizontal pleiotropy if a single variant is allowed to affect multiple components. This suggests that the component-based MVMR results would be more appropriate than component-based UVMR results, though for PCA this is not an issue due to orthogonality and approximate equivalence of UVMR and MVMR.

Some questions about the simulations:

3. In Figure 4, it seems like the MVMR plot should have 6 lines and not two. How were the 6 traits combined into two lines here? It seems unfair to average the rejection rate for the X_1-3 into the black line since, if MVMR performed perfectly, it would never reject X_3 and always reject X_1 and X_2.

4. Why is there an interval around only the black line in the MVMR plot and not the other plots?

5. The sentence in the caption presumably describing the color scheme is cutoff. "The results are split according to group n black…"

6. Is the appropriate number of components known a-priori in the simulations?

7. The description of Figure 4 suggests that PCA and SCA are out-performing MVMR. However, PCA has exactly the same rejection rate for the causal and non-causal block of traits which seems like bad performance to me. SCA has a Type 1 error rate between 20 and 30% at a nominal level of 0.05 which also seems quite bad to me. I would not view improving power at the cost of uncontrolled type 1 error as a good outcome.

8. The description of the Figure 5 at the top of page 9 does not align with the image in Figure 5. "Both standard and Bonferroni-corrected MVMR performed poorly in terms of AUC (AUC 0.660 and 0.885 respectively), due to poor sensitivity. While there was a modest improvement with PCA (AUC 0.755), it appears that PCA and RSPCA do not accurately identify negative results (median specificity 0 and 0.28 respectively)." Reading these numbers, it sounds like MVMR is the worst followed by PCA, followed by MVMR_B. However, in the figure, PCA and RSPCA both have lower AUC than MVMR and MVMR_B. These numbers are also inconsistent with Table 4.

9. I think I may be a little confused about how the ROC curves are constructed.

9a. In my experience, an ROC curve is usually constructed by varying a threshold (e.g. α). However, it seems here like α was fixed and we are averaging over the single point obtained for each simulation setting according to the Reitsma et al. method. This method makes sense when the data are real observations; however, in the case of simulations, why not use the traditional method of varying the threshold and then averaging the full curves across simulations?

9b. I think this method of displaying results also obscures the issue of calibration. If there is a nominal significance threshold, we would like to see that type 1 error is controlled below that threshold.

9c. Visually, Figure 15 looks inconsistent with Figure 5 to me. There are no blue points in the upper right of Figure 15 -- does it make sense to extrapolate this part of the curve? Figure 5 suggests that the red methods at least have the potential to do very well in differentiating true and false positives, but this doesn't seem borne out by Figure 15, where most of the red points cluster in the upper right.

10. On the selection of instruments -- is it correct that you are assuming that all variants affecting any of the component metabolites also affect LDL/HDL or triglycerides as measured in the GLGC study? Does this assumption limit your ability to identify components that separate LDL or HDL sub-components? For example, variants that affect only a small number of components will be less likely to be selected using the GLGC reference.

https://doi.org/10.7554/eLife.80063.sa1

Author response

Reviewer #1 (Recommendations for the authors):

I'm not the best person to judge, but I'm not sure how accessible this work would be to someone who doesn't understand multivariable Mendelian randomization. Figure 1 is a great overview, but I'd appreciate the viewpoint of someone who is less familiar with these methods.

We have expanded the Introduction (first and second paragraphs), stressing the link of MR with RCTs that are more widely known. The goal of confounding adjustment is very similar in the two approaches and we believe this can motivate the presented methods for a wider audience.

The description of the different sparse PCA methods was a helpful summary.

I'm not sure that the foray into instrument strength is valuable – it is over 2 pages and 2 figures, and I'm not sure how much this adds – maintaining high conditional F statistics is important, but their calculation is not a major part of the analysis report. The whole section is a bit distracting and disproportionate.

All MR estimates seem to be on quite odd scales – the estimates are all close to 1 – 1.001 or 0.999. Could the PCs be standardized, so that the estimates are per 1 SD increase in the PC? This would improve communication of the results. Currently, there is no sense of scale – what is the corresponding change in lipids on a meaningful scale?

Thank you for your suggestion, indeed the OR estimates are on a narrow scale close to 1. We relate this to the lack of definition of the estimated causal effects. For this reason, we limit the scope of the proposed method to hypothesis testing as opposed to estimation. It is encouraging that, in the positive control example of how lipids associate with CHD, we observe that, when positive loadings are used to transform the exposures, the sign is biologically meaningful (LDL and VLDL increase risk, HDL appears to be protective). We acknowledge this and further stress its implications in the Discussion/Limitations section.

While I understand that the main thrust of the manuscript is methodological, I didn't get a strong sense of what the conclusion from the applied analysis is. If the authors could communicate a strong finding, this would help to justify the use of the method. Eg what are the estimates for the HDL clusters in the SCA method? This isn't clear.

Thank you for your suggestion; indeed, there is an interesting point not originally highlighted regarding the HDL cluster. In the MVMR with those exposures that were significant in UVMR, only ApoB is positively associated with CHD. No exposures were protective. However, in SCA, there is a grouping of HDL traits in two independent PCs, corresponding to small/medium and larger HDL traits. The former PC appears to be protective for CHD. This finding is supported by other observational studies (smaller HDL traits are associated with favourable cardiovascular risk profiles) that we now report in the Discussion. Thus, if the analysis were restricted to MVMR, these findings would be masked due to imprecision (additions in Discussion/First Paragraph, updated Table 4).

Figure 6 would be key to this – currently, it's not clear what the various PCs represent. Maybe a radar chart would help? Also, the caption for Figure 6 is incorrect – the caption from Figure 8 has inadvertently been duplicated here.

Thank you for your observation, we have fixed the caption. The figure has also changed slightly following the comments of Reviewer 3, after an update to the exposures that are eligible given the SNPs we use as instruments. What Figure 6 reports is the comparison of the univariable and multivariable estimates of the causal effect of each PC on CHD. We would expect PCA to provide orthogonal components and therefore identical estimates in UVMR and MVMR. In the sparse methods, where strict orthogonality is traded for interpretability of the PCs, we expected some deviation and wanted to communicate this visually. We also included the UVMR and MVMR estimates for SFPCA, which were not in the original draft.

While I wouldn't want to insist that the authors change their paper, the obvious application of this work appears to be imaging studies – this is currently under-explored.

Thank you for your observation, we agree that imaging data would be a very good candidate for this approach. We highlight this in the Discussion.

Reviewer #2 (Recommendations for the authors):

This is a nice paper, with convincing and very useful results. I have mainly some clarification questions and suggestions.

1. The main idea is to treat the matrix of SNP-exposures associations γ^ as explanatory variables in the regression Γ^ = γ^β + e, although this is clearly not a standard regression model, just that a WLS or GLS estimator for β is equivalent to the 2SLS estimator on single-sample data. I think it would therefore be instructive to link to the construction of PCs in a single-sample individual-level data sets. There the model is y = Xβ + u, X = Gγ + V, and one could do a PCA on X. Alternatively, as the 2SLS estimator is least-squares in y = X^β + ũ where X^ = Gγ̃, one could perhaps consider PCA on X^. As X^TX^ = γ^TGTGγ^, this is getting close to the PCA on γ^ if GTG is normalized, for example GTG/n = I_p. If it is not normalized, then it is hard to make a connection. It may therefore be worthwhile considering PCA on the weighted γ^ as per the (extended) IVW estimator in (1) on page 4. With such a link in place, it seems easier to motivate the proposed method.

Thank you for your suggestion. We now include a simulation study on one-sample data where we perform PCA (PCA1SMR) and one sparse implementation (SCA) on X^. To accommodate the larger computational time (matrix of n × K instead of p × n), we reduced the sample size and accordingly increased the γ parameter, keeping the F-statistics and conditional F-statistics in the same range. We compare a two-stage least squares MVMR estimator for K exposures with the following procedure: we obtain PC scores from X^ = UDV^T and then regress Y on UD. Figure XYZ shows a similar performance with individual-level data as the one reported in the original draft. Both MVMR and MVMR with a multiplicity correction suffer from imprecise results (low sensitivity in Figure). SCA performs better in identifying true causal exposures, whereas PCA provides many false positive results. Therefore, we conclude that such an approach with two-stage least squares is feasible and performs better than PCA in the simulation study.

We then examine how similar PCA1SMR and PCA2SMR are. As suggested, the aim is to establish how much information is lost by opting for a PCA/SCA on summary data only. If the exposures are grouped together in PCA1SMR and PCA2SMR in a similar fashion, then this would indicate a similar performance in individual level data and summary statistics. To quantify this, we keep the simulation setup described in the Simulation Study Section and we apply the two methods.

As you suggested, the loadings matrices for both is of dimensionality K × K. We then use two measures to quantify the similarity for increasing sample sizes: a similarity measure to compare the loadings matrices

$$S_{\mathrm{load}} = \sum_{k_1=1}^{K}\sum_{k_2=1}^{K}\left(V_{k_1 k_2,\mathrm{1SMR}} - V_{k_1 k_2,\mathrm{2SMR}}\right)^2,$$

where V_1SMR are the loadings assigned in one-sample MR and V_2SMR those of two-sample MR, and (b) the R² of the regression of V_{k1k2,1SMR} on V_{k1k2,2SMR} ($R^2 = 1 - \frac{\epsilon_V^{T}\epsilon_V}{V_{\mathrm{1SMR}}^{T}V_{\mathrm{1SMR}}}$, where ε_V are the residuals of the regression of V_1SMR on V_2SMR). We vary the sample size and track the performance of PCA and SCA.

The results are presented in Figure 1. For SCA, the one-sample and two-sample loadings become more similar as the sample size increases, although the similarity measure does not converge to zero. A similar picture is drawn by the R², with all PCs showing a closer alignment of 1SMR and 2SMR as the sample size increases. A closer examination of the results in individual simulations suggests that the blocks of exposures are correctly projected to PCs (exposures 1 to 3 in one PC and 4 to 6 in another, Figure), more so in larger sample sizes. We note that the loadings for each block are not identical in 1SMR and 2SMR.

As suggested, in the applied analysis we have replaced the PCA on γ approach with the one proposed. We thus standardise γ as γ_GX/SE_{γ_GX}. There are no differences in the precision of the causal estimates.

2. Are the covariances σlkj reported in the GWAS? I assume the estimators are simply IVW throughout, as implied at the top of page 5?

We have used the intercept of the LD score regression as a proxy for the phenotypic correlations σlkj.

We clarify that, if individual-level data is not available, this is a sensible approach. Yes, IVW is used throughout the analyses.

Author response image 1
Top Panel: R2; Bottom Panel: Similarity of loadings (Sload) between one-sample MR and two-sample MR (Nsim = 10,000).

3. It is unclear to me why ∆ is estimated by OLS in (2).

Thank you for your comment. δ^ is estimated by regressing an exposure X_k on the other exposures X_{-k} by OLS. If an exposure k can be accurately predicted conditionally on the other exposures, then there won't be sufficient variation in Q_{X_k} (low conditional instrument strength). We have clarified this in the manuscript.

4. Notation. It would be easier to follow the notation if matrices are upper case and vectors lower case. That would change for example the current vector γ to a lower case letter and the current matrix γ to an upper case letter.

Thank you for your comment, we prefer to refer to the SNP-X associations as γ and to the SNP-Y associations as Γ as this is common practice.

5. It is not straightforward to follow the steps for Sparse PCA on pages 5-6, as not all symbols are clear. It is not clear in 1. what "Setting a fixed matrix," means and what the α_j are. Also, define X_i. In 2 you simply do an SVD? Should there be "hats" on γ? UDV^T in step 2 are not the same values as in the SVD for γ^.

Thank you for your comment, we have updated the explanation of sparse PCA on page 5. We are rephrasing page 272 of the Zou et al., 2006 paper, which introduces the idea of framing PCA as a ridge regression problem (the L2 norm is similar to variance maximisation) and then additionally using an L1 norm to induce sparsity.

Reviewer #3 (Recommendations for the authors):

Here are some detailed comments.

1. Usually a good idea to number figures in the order they appear in the text (Figure 12 out of order).

Thank you for spotting this, we have reordered the Figures.

2. UVMR is referred to as UMVR three times and as UMVMR once

Thank you for spotting this, we have corrected this.

3. At the beginning of the Methods section, it would be good to specify the dimension of γ. It is also important to specify how the variants used are selected.

Thank you for your suggestion, we have added the dimensions of γ (148x118). The instrument selection procedure is described in the first paragraph of the Methods/Data section.

4. Figure 1 is a bit chaotic and inconsistent with the text. Figure 1 top, the graph indicates that there are 118 exposures. I assume this was chosen because it is the number of lipid exposures in the data application but it doesn't look like the lipid data is used in this figure. Figure 1 top right, the dimension of the matrix is given as L x 118. The variable L does not appear elsewhere in the paper. What is printed appears to be a 60x60 square matrix. Is this matrix meant to represent \γ^\top \γ? The statement "MR estimates affected by multi-colinearity" printed next to this matrix doesn't appear to relate to the figure because the figure doesn't indicate anything about the MR estimates.

Thank you for looking into this Figure, we indeed do not use the lipid data in this. Instead, we use simulated data. To improve the Figure, we have switched to the updated exposures γTγ matrix (147 × 99). We changed the notation to K, which is how we refer to the number of exposures throughout the text. We have moved the ’MR Estimates affected by multi-collinearity’ sentence to the legend.

5. Figure 5. It looks like none of the matrices shown have all 118 rows. The sparse PCA matrix has many fewer exposures than the other methods. It would be good to clarify this in the caption.

Thank you for your suggestion, we now clarify this in the caption.

6. Figure 6 is labeled as "Extrapolated ROX curves" but appears to be a comparison of UVMR and MVMR results. For the top right plot, I would have expected UVMR and MVMR results to be identical since the PCs are exactly orthogonal. Why is this not the case?

Thank you, we have corrected the caption. This is correct, we are comparing UVMR and MVMR estimates for the PCA methods. We agree, and also hypothesised that the estimates would be identical for PCA, where the PCs are strictly orthogonal, and similar in the sparse implementations. In the updated analysis, where amino acids are excluded and six (instead of seven) PCs are constructed, this is also observed. The agreement of the causal estimates from PCA is the highest (R2 = 0.93). The sparse methods provide estimates that are not as concordant as in PCA but are still highly correlated. Regarding the poor performance of robust sparse PCA on this assessment, we increased the number of sparsity parameters per component and this appeared to improve the agreement (R2 = 0.79).

7. Page 12 describes Figure 5 as "The loadings for all methods (Figures 5a-e) are presented for LDL.c, a previously prioritised trait." I am confused about what "for LDL.c" means here. It appears that LDL.c is one of the traits in Figure 5 but these appear to be loadings across some large subset of traits?

Thank you, we were presenting separately the loadings that corresponded to LDL.c and urea across all methods in Figure 12. Figure 5 was a reference to the methods but this referencing was confusing so we dropped it. In the updated Figure, we present visually how the different methods project LDL.c and ApoB to different components; PCA loadings suggest that LDL.c contributes to some degree to all PCs whereas the sparse methods reduce this to a single PC.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

The manuscript has been improved and all reviewers agree this study will be useful for the field. However, there are remaining issues and we would like to invite you to further revise this manuscript.

Thank you for your positive assessment of our manuscript. We have updated the manuscript according to the reviewers’ comments and have included our responses point-by-point in the present document.

We also like to point out a change in the senior authorship, with Prof. Jack Bowden and Dr. Verena Zuber now both being joint senior authors, as Prof. Jack Bowden contributed to the manuscript to a degree that qualifies him for this. All authors agreed to this author order.

Revision outline:

1. Adding a deeper more nuanced discussion of what is meant by causality and how to interpret the components. The major advancement of this study comes from method development and it is important to not overstate the significance of results. See comment 1 from reviewer 3 and comment 4 from reviewer 1.

Thank you for your suggestion. We have updated the interpretation of the findings in line with these two comments. We stress more on how the methodological improvements are the major focus.

2. The choice of method. Discussing scenarios that will work or fail for the method will be helpful for the readers to better utilize this tool. See comment 4 from reviewer 1 and comment 2 from reviewer 3.

Thank you for your point. We have expanded the Discussion section to describe the types of datasets that the proposed methods are applicable for. We believe this makes things clearer for practitioners.

3. Adding details of simulation. Other clarification issues raised by reviewers.

Thank you for your suggestion. We have substantially rephrased and reorganised the simulation section, with changes highlighted in blue. We have added the required details and split the simulation results in subsections to improve clarity.

Reviewer #1 (Recommendations for the authors):

1. While I appreciate the simulation study, and believe this is worthwhile, it strikes me that the primary reason for choosing one or other of these sparse methods is not performance in a simulation study, but rather how the methods define the PCs, and how the analyst could interpret the results. As there's no point running an analysis if you can't interpret the results. This will depend on the specific characteristics of the investigation, so I am not expecting a single method recommendation. But I'd appreciate a bit more acknowledgement of this in the choice of methods – what characteristics to these methods tend to result in? Interesting (and reassuring) to see that clusters from the SCA and sPCA methods look similar in Figure 2 (although then odd that in Table 3, the significance differs for the VLDL and LDL components…).

Thank you for your comment. We agree that, given the study design, final recommendations to use only one method could miss subtle differences; for instance, datasets with outliers might benefit more from robust approaches, but the robust approach did not seem to outperform the others in our simulation study or in the applied example. We show a substantial improvement in performance for the particular realistic simulation design that we describe, and the performance measure is based on hypothesis testing rather than on how the methods define the PCs. The simulation study is now reorganised into three distinct sections, and we believe it is clearer how we measure the downstream effects of the PCs on the outcome (power and Type I error).

Regarding the Table 3 results, we agree that the sPCA results are not perfectly concordant with the expectation of an LDL/VLDL positive association with CHD. We used this example as a positive control outcome and therefore interpret this as suboptimal performance of sPCA in this particular context. It is of course possible to imagine scenarios that would explain this difference from the simulations in which sPCA performed well, the main one being some form of pleiotropy. We discuss this possibility in the Discussion.

Overall, I appreciate the authors' manuscript in that it proposes sparse dimension reduction as a method and explores the properties of MVMR estimates from this approach. My sense is that it is difficult to say anything more authoritative/general than "hey, look – we tried this and it sort of worked!", but the authors do this in a way that is mostly clear (see comments above) and should be helpful to future researchers.

Thank you for your positive assessment of our manuscript.

Reviewer #3 (Recommendations for the authors):

The resubmission has corrected some of the issues I raised previously. However, I have some remaining concerns, primarily about the interpretation of the proposed causal effects and the simulation results.

Thank you for your positive assessment of our manuscript.

1. My largest issue is still in the causal interpretation of genetically determined PCs are sparse components. To me, I don't think there is a way to turn the components into well-defined exposures that admit a counterfactual or interventional definition of a causal effect. This approach seems closer to "causality adjacent" methods like TWAS. There was a paper a few years ago using sparse CCA in combination with TWAS (https://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1008973) and this seems similar to that to me. I think one option for this paper would be to add a deeper discussion of what it means to test for a causal effect of a component that may or may not represent a latent biological variable. On one end of interpretation you have "association with genetically predicted exposure component" and then I think you could add as a bonus that if you have successfully identified a biological component, this could be interpretable as a causal effect.

Thank you for the article suggestion, we have read the particular application of sparse methods in TWAS and cite it in the discussion. This also helps contextualising the benefits of sparse methods in general, namely improving power.

We have supplemented our previous discussion of this difficulty in interpreting the components as actionable exposures with counterfactuals in the draft, in the Discussion/Limitations section, bringing it forward in the new updated paragraph. We acknowledge that a strict causal interpretation is not readily available with our suggested approach; at the same time, we believe it is highly unlikely that a pragmatic causal effect would act in such a way that only one of the many, highly correlated exposures is the sole driver of an effective medical intervention.

2. Using the components introduces an issue of horizontal pleiotropy if a single variant is allowed to affect multiple components. This suggests that the component-based MVMR results would be more appropriate than component-based UVMR results, though for PCA this is not an issue due to orthogonality and approximate equivalence of UVMR and MVMR.

Thank you for your comment. We agree that MVMR results are better suited for this purpose, which is why we present MVMR rather than UVMR in the positive control example with real data. We highlight this in the applied Results section and mention the pleiotropy concern in the limitations.
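
As an aside on the orthogonality point, the approximate equivalence of univariable and multivariable estimates for orthogonal components can be seen in a minimal regression analogue. The following is a hypothetical numerical sketch (not our MR pipeline); all names and values are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Two exactly orthogonal "component" exposures via QR decomposition
pcs, _ = np.linalg.qr(rng.normal(size=(n, 2)))
true_beta = np.array([0.5, 0.0])          # only the first component is causal
y = pcs @ true_beta + rng.normal(size=n)

# Multivariable OLS (regression analogue of MVMR on components)
b_mvmr = np.linalg.lstsq(pcs, y, rcond=None)[0]
# Univariable OLS per component (regression analogue of UVMR)
b_uvmr = np.array([pcs[:, j] @ y / (pcs[:, j] @ pcs[:, j]) for j in range(2)])

print(np.round(b_mvmr, 3), np.round(b_uvmr, 3))  # near-identical estimates
```

Because the columns are orthogonal, each univariable coefficient coincides with the corresponding multivariable one; any residual difference reflects sampling noise only.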

Some questions about the simulations:

3. In Figure 4, it seems like the MVMR plot should have 6 lines and not two. How were the 6 traits combined into two lines here? It seems unfair to average the rejection rate for the X_1-3 into the black line since, if MVMR performed perfectly, it would never reject X_3 and always reject X_1 and X_2.

Thank you for your comment. We have substantially revised the simulation study results, splitting them into three sections to improve clarity (illustrative example, high-dimensional example, determinants of performance). The changes are highlighted in blue. We have also corrected the data-generating mechanism in Figure 4a to visualise the exact mechanism used in the simulations in Figure 4b (six exposures forming two blocks; one block of three exposures is causal for Y); a schematic sketch of this mechanism is given below.
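
For concreteness, a block-structured data-generating mechanism of this kind could be sketched as follows; the sample size, number of variants, and effect sizes here are arbitrary illustrative values, not the exact settings of our simulations:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 50_000, 120                    # illustrative sample size and variant count

# Genotypes and variant effects on two latent block factors
G = rng.binomial(2, 0.3, size=(n, m)).astype(float)
B = rng.normal(0, 0.05, size=(m, 2))
latent = G @ B

# Two blocks of three highly correlated exposures; only block 1 is causal for Y
X = np.hstack([
    latent[:, [0]] + rng.normal(0, 0.5, size=(n, 3)),  # block 1: X1-X3 (causal)
    latent[:, [1]] + rng.normal(0, 0.5, size=(n, 3)),  # block 2: X4-X6 (null)
])
y = X[:, :3] @ np.full(3, 0.2) + rng.normal(size=n)    # only block 1 affects Y
```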

In the first scenario (Figure 4), we have compared the rejection rates of MVMR with the two informative PCs and MVMR with the six individual exposures. Therefore, the first two panels in Figure 4b include two lines (with corresponding Monte Carlo SEs), one for each PC, and the rightmost panel includes six lines, one for each exposure. We have updated the colour coding and the DAG to better present this.

In the proposed workflow, we used only the informative components in order to efficiently reduce the dimensions. We achieved this with permutations in the real data and with the Karlis-Saporta-Spinakis criterion, a simpler method based on an eigenvalue cutoff. In the simulations, we used the first two PCs, each of which reflected three exposures. A sketch of the permutation idea follows.
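
The permutation approach can be sketched minimally as below, under the assumption that each trait column is permuted independently to break the cross-trait correlation; the function name and defaults are ours, not the paper's code, and we assume more observations than traits:

```python
import numpy as np

def n_components_by_permutation(X, n_perm=1000, q=0.95, seed=0):
    """Number of PCs whose eigenvalue exceeds the permutation-null quantile."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    n = len(Xc)
    # Observed eigenvalues of the covariance matrix via SVD
    obs = np.linalg.svd(Xc, compute_uv=False) ** 2 / (n - 1)
    null = np.empty((n_perm, Xc.shape[1]))
    for b in range(n_perm):
        # Permute each column independently to destroy cross-trait correlation
        Xp = np.column_stack([rng.permutation(col) for col in Xc.T])
        null[b] = np.linalg.svd(Xp, compute_uv=False) ** 2 / (n - 1)
    return int(np.sum(obs > np.quantile(null, q, axis=0)))
```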

4. Why is there an interval around only the black line in the MVMR plot and not the other plots?

We have now modified the presentation of the figure. What we present in this plot are the rejection rates for the two components (three exposures each) from PCA and SCA. In MVMR, there are six exposures and therefore six rejection rates. The colour coding is green for the truly causal exposures (and components) and red for those that do not affect the outcome. To make this clearer, we now show different shades of red and green for the individual exposures, and solid red and green for the transformed components. For PCA, the causal and non-causal groups are presented as two separate lines. PCA and SCA performed well in mapping the correlated exposures to two components, hence the equivalence visualised by the common colour coding (see also comments 3 and 6). Apologies for the confusing presentation: the lines that were interpreted as intervals are in fact the MVMR rejection rates for each exposure. We have added a table to the figure to clarify this and updated the figure legend.

Therefore, ideal performance would be a nominal type I error rate for the component reflecting the non-causal exposures (colour-coded red) and power approaching 1 for the causal component (colour-coded green). To improve presentation, we have also added Monte Carlo SEs following Morris et al. (Table 6 in DOI: 10.1002/sim.8086). In this updated simulation, SCA performs appropriately, with the type I error converging to 5%. The problematic performance of MVMR is highlighted by its lower-than-nominal type I error throughout and, importantly, by its low power in the low-instrument-strength simulation (CFS ~ 3; leftmost of the three panels).
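
For reference, the Monte Carlo SE of an estimated rejection rate is the standard binomial formula (cf. Morris et al.); a small sketch, assuming independent simulation replicates:

```python
import numpy as np

def rejection_rate_with_mcse(pvals, alpha=0.05):
    """Rejection rate across simulation replicates and its Monte Carlo SE."""
    pvals = np.asarray(pvals)
    rate = np.mean(pvals < alpha)
    # Binomial standard error of a proportion (Morris et al., 2019)
    mcse = np.sqrt(rate * (1 - rate) / len(pvals))
    return rate, mcse
```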

5. The sentence in the caption presumably describing the color scheme is cutoff. "The results are split according to group n black,…."

Thank you for your comment; we have now rephrased the Figure 4 caption.

6. Is the appropriate number of components known a-priori in the simulations?

Yes, the number of components was known a priori in the simulations. The two ways of determining the number of components (KSS criterion, permutation) performed well in the simulated data, and we chose to present results with an a priori known number of components. We have added this to the Simulation Results/Simple Illustrative Example section for transparency.

7. The description of Figure 4 suggests that PCA and SCA are out-performing MVMR. However, PCA has exactly the same rejection rate for the causal and non-causal block of traits which seems like bad performance to me. SCA has a Type 1 error rate between 20 and 30% at a nominal level of 0.05 which also seems quite bad to me. I would not view improving power at the cost of uncontrolled type 1 error as a good outcome.

Thank you for your suggestions; we have given the simulation results more careful consideration. We agree that PCA performs suboptimally, albeit in a different manner from MVMR: MVMR lacks power, whereas PCA yields many false positive results. SCA performs better than both. We also agree that a T1E of 20% is concerning. However, observing that the T1E curve did not seem to have fully converged at the instrument strength we previously reported, we repeated the same simulation study with one modification: we further increased the sample sizes to achieve higher instrument strength. In the updated Figure 4, SCA indeed outperforms PCA and MVMR, with a well-controlled T1E for larger samples. It may be unlikely that such a high instrument strength will be observed in practice, especially in the context of correlated exposures, but we believe this convergence to 5% is reassuring.

Regarding your observations on the performance of PCA, we agree, and we are now more critical of PCA in the Results section, as false positives are a major issue. In the formulations we investigated, MVMR and PCA fail at opposite extremes of poor performance (highly imprecise versus misleadingly precise); nevertheless, sensible transformations of the exposures are achieved, and we have added to the discussion the possibility of some form of T1E control to salvage PCA performance and guide further research. To clarify this, we added a segment in the discussion (paragraph 2: ‘When using PCA without any sparsity constraints, our simulation studies revealed numerous false positive results, at the opposite extreme of the poor performance seen in MVMR, whose estimates were often misleadingly imprecise (false negatives). Although appropriate transformations of the exposures were achieved, we highly recommend exploring additional forms of T1E control to improve the performance of PCA. Nonetheless, sparse methods exhibited superior performance compared with both PCA and MVMR.’)

8. The description of the Figure 5 at the top of page 9 does not align with the image in Figure 5. "Both standard and Bonferroni-corrected MVMR performed poorly in terms of AUC (AUC 0.660 and 0.885 respectively), due to poor sensitivity. While there was a modest improvement with PCA (AUC 0.755), it appears that PCA and RSPCA do not accurately identify negative results (median specificity 0 and 0.28 respectively)." Reading these numbers, it sounds like MVMR is the worst followed by PCA, followed by MVMR_B. However, in the figure, PCA and RSPCA both have lower AUC than MVMR and MVMR_B. These numbers are also inconsistent with Table 4.

Thank you for your comments. The purpose of the direct AUC comparison is to visualise the performance of all methods in a single figure. We agree that, seen in isolation, it may mask the differences in sensitivity and specificity across methods: MVMR consistently provides imprecise results and fails to identify positive associations, whereas PCA provides very precise results that yield many false positives. We acknowledged this previously and have highlighted it further in the discussion. A comparison of PCA and MVMR solely on AUC is therefore not a reliable way to measure their efficacy. Thank you for spotting the inconsistency in Table 4; the columns are now correctly labelled.

Alongside the results, we now discuss what AUC alone does not directly capture: the results are driven by very different behaviour at different ends of the ROC curve. We acknowledge that the results are inherently dependent on the data-generating model and the specific simulation settings used in our study; had the data been generated differently, the results would differ as well.

9. I think I may be a little confused about how the ROC curves are constructed.

9a. In my experience, an ROC curve is usually constructed by varying a threshold (e.g. α). However, it seems here like α was fixed and we are averaging over the single point obtained for each simulation setting according to the Reitsma et al. method. This method makes sense when the data are real observations; however, in the case of simulations, why not use the traditional method of varying the threshold and then averaging the full curves across simulations?

Thank you for bringing up this question. We have substantially updated the simulation section and hope the workflow is now clearer. We first present the data-generating mechanism, and we provide more details on the ROC curve construction in the Simulation Studies/Complex High-Dimensional Example paragraph.

We fix the α threshold to present the results across many individual simulations more concisely. Varying the threshold and averaging the full curves across simulations would have been a valid approach for a single simulation study, but any summary across simulation studies would have made the presentation of results more complex and difficult to interpret.
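
To illustrate, at a fixed α each simulation contributes a single (sensitivity, specificity) point, computed along these lines (a hypothetical helper of ours, not the paper's code):

```python
import numpy as np

def sens_spec_at_alpha(pvals, is_causal, alpha=0.05):
    """One (sensitivity, specificity) point per simulation at a fixed threshold."""
    reject = np.asarray(pvals) < alpha
    causal = np.asarray(is_causal, dtype=bool)
    sensitivity = reject[causal].mean()        # true positives among causal exposures
    specificity = (~reject[~causal]).mean()    # true negatives among null exposures
    return sensitivity, specificity
```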

Therefore, we use the Reitsma et al. method, a well-established approach for summarising ROC results across studies, as it provides a comprehensive and concise summary. This method jointly meta-analyses sensitivity and specificity with a bivariate model. We further explain our rationale in the Simulation Studies/Complex High-Dimensional Example paragraph.
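
For readers unfamiliar with the method, the bivariate model can be written roughly as follows; this is our paraphrase of Reitsma et al. (2005), not a formula from the paper:

```latex
% Per study i, model logit sensitivity and logit specificity jointly:
\begin{pmatrix} \operatorname{logit}(\mathrm{Se}_i) \\ \operatorname{logit}(\mathrm{Sp}_i) \end{pmatrix}
\sim \mathcal{N}\!\left(
\begin{pmatrix} \mu_{\mathrm{Se}} \\ \mu_{\mathrm{Sp}} \end{pmatrix},\;
\begin{pmatrix} \sigma^2_{\mathrm{Se}} & \sigma_{\mathrm{Se},\mathrm{Sp}} \\
\sigma_{\mathrm{Se},\mathrm{Sp}} & \sigma^2_{\mathrm{Sp}} \end{pmatrix}
\right)
```

The pooled pair gives the summary operating point, and the summary ROC curve is traced from the fitted between-study relationship between the two logit-transformed quantities.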

9b. I think this method of displaying results also obscures the issue of calibration. If there is a nominal significance threshold, we would like to see that type 1 error is controlled below that threshold.

Thank you for your comment. We have shown that SCA achieves appropriate T1E for larger samples in the updated analysis (Simulation Results/Simple illustrative example/Figure 4).

9c. Visually, Figure 15 looks inconsistent with Figure 5 to me. There are no blue points in the upper right of Figure 15 -- does it make sense to extrapolate this part of the curve? Figure 5 suggests that the red methods at least have the potential to do very well in differentiating true and false positives but this doesn't seem borne out by Figure 15 where most of the red points cluster in the upper right.

Thank you for your observation. The curve is directly derived from the bivariate meta-analysis, and unobserved values are extrapolated from the form of the observed ones. The true performance, as observed directly in the individual simulation studies, is reported in Figure 15, where many of the results for the two superior sparse PCA methods cluster in the top right corner. Additionally, there is a clear benefit for the red methods, evidenced by a substantial number of results at the top of the y-axis (the sensitivity = 1 region of the plot); this benefit is not observed for PCA or MVMR. When the results are meta-analysed, the sparser middle regions of the curve are extrapolated from these observed points. This is why we chose to meta-analyse, so that all these factors are taken into account, while also presenting the individual results for visual comparison.

We respectfully disagree that these two results are in conflict, as we believe Figure 15 is a transparent and complementary way of presenting the overall performance summarised in Figure 5. In Figure 15, even though there is some apparent clustering in the top right corner, there is still much more dispersion than for PCA, especially in the middle regions (sensitivity and specificity roughly 0.2-0.8). These individual results give rise to the curves in the meta-analysed plot.

10. On the selection of instruments -- is it correct that you are assuming that all variants affecting any of the component metabolites also affect LDL/HDL or triglycerides as measured in the GLGC study? Does this assumption limit your ability to identify components that separate LDL or HDL sub-components? For example, variants that affect only a small number of components will be less likely to be selected using the GLGC reference.

This is true; ideally we would have an external study that investigates the sub-components as separate phenotypes rather than serum measurements of total fractions. We acknowledge this as another limitation in the discussion.

https://doi.org/10.7554/eLife.80063.sa2

Article and author information

Author details

  1. Vasileios Karageorgiou

    1. Department of Epidemiology and Biostatistics, School of Public Health, Faculty of Medicine, Imperial College London, London, United Kingdom
    2. University of Exeter, Exeter, United Kingdom
    Contribution
    Data curation, Software, Formal analysis, Investigation, Methodology, Writing - original draft, Writing – review and editing
    For correspondence
    vk282@exeter.ac.uk
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-7173-9967
  2. Dipender Gill

    1. Department of Epidemiology and Biostatistics, School of Public Health, Faculty of Medicine, Imperial College London, London, United Kingdom
    2. Department of Clinical Pharmacology and Therapeutics, Institute for Infection and Immunity, St George’s, University of London, London, United Kingdom
    3. Genetics Department, Novo Nordisk Research Centre Oxford, Oxford, United Kingdom
    Contribution
    Conceptualization, Data curation, Supervision, Investigation, Methodology, Writing – review and editing
    Competing interests
    is a part-time employee of Novo Nordisk
  3. Jack Bowden

    1. University of Exeter, Exeter, United Kingdom
    2. Genetics Department, Novo Nordisk Research Centre Oxford, Oxford, United Kingdom
    Contribution
    Software, Formal analysis, Supervision, Investigation, Visualization, Methodology, Writing – review and editing
    Contributed equally with
    Verena Zuber
    Competing interests
    No competing interests declared
  4. Verena Zuber

    Department of Epidemiology and Biostatistics, School of Public Health, Faculty of Medicine, Imperial College London, London, United Kingdom
    Contribution
    Conceptualization, Data curation, Software, Formal analysis, Supervision, Investigation, Visualization, Methodology, Writing – review and editing
    Contributed equally with
    Jack Bowden
    Competing interests
    No competing interests declared

Funding

State Scholarships Foundation

  • Vasileios Karageorgiou

Expanding Excellence in England

  • Vasileios Karageorgiou

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

We would like to thank Dr. Stephen Burgess and two anonymous reviewers, who provided constructive comments on the extension of the approach to individual-level data, on improving and clarifying the applied analyses, and on updating and communicating the simulation results.

Senior Editor

  1. Martin R Pollak, Harvard Medical School, United States

Reviewing Editor

  1. Siming Zhao, Dartmouth College, United States

Reviewer

  1. Stephen Burgess, University of Cambridge, United Kingdom

Publication history

  1. Received: May 6, 2022
  2. Preprint posted: June 16, 2022 (view preprint)
  3. Accepted: April 18, 2023
  4. Accepted Manuscript published: April 19, 2023 (version 1)
  5. Version of Record published: May 30, 2023 (version 2)

Copyright

© 2023, Karageorgiou et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

  1. Vasileios Karageorgiou
  2. Dipender Gill
  3. Jack Bowden
  4. Verena Zuber
(2023)
Sparse dimensionality reduction approaches in Mendelian randomisation with highly correlated exposures
eLife 12:e80063.
https://doi.org/10.7554/eLife.80063
