1. Computational and Systems Biology
  2. Microbiology and Infectious Disease

Phenotype inference in an Escherichia coli strain panel

  1. Marco Galardini
  2. Alexandra Koumoutsi
  3. Lucia Herrera-Dominguez
  4. Juan Antonio Cordero Varela
  5. Anja Telzerow
  6. Omar Wagih
  7. Morgane Wartel
  8. Olivier Clermont
  9. Erick Denamur
  10. Athanasios Typas (corresponding author)
  11. Pedro Beltrao (corresponding author)
  1. European Bioinformatics Institute (EMBL-EBI), United Kingdom
  2. European Molecular Biology Laboratory (EMBL), Germany
  3. INSERM, IAME, UMR1137, France
  4. Université Paris Diderot, France
  5. APHP, Hôpitaux Universitaires Paris Nord Val-de-Seine, France
Tools and Resources
Cite as: eLife 2017;6:e31035 doi: 10.7554/eLife.31035

Abstract

Understanding how genetic variation contributes to phenotypic differences is a fundamental question in biology. Combining high-throughput gene function assays with mechanistic models of the impact of genetic variants is a promising alternative to genome-wide association studies. Here we have assembled a large panel of 696 Escherichia coli strains, which we have genotyped and phenotypically profiled across 214 growth conditions. We integrated variant effect predictors to derive gene-level probabilities of loss of function for every gene across all strains. Finally, we combined these probabilities with information on conditional gene essentiality in the reference K-12 strain to compute the growth defects of each strain. Not only could we reliably predict these defects in up to 38% of tested conditions, but we could also directly identify the causal variants, which were validated through complementation assays. Our work demonstrates the power of forward predictive models and the possibility of precision genetic interventions.

https://doi.org/10.7554/eLife.31035.001

Introduction

Understanding the genetic and molecular basis of phenotypic differences among individuals is a long-standing problem in biology. Genetic variants responsible for observed phenotypes are commonly discovered through statistical approaches, collectively termed Genome-Wide Association Studies (GWAS, Bush and Moore, 2012), which have dominated research in this field for the past decade. While such approaches are powerful in elucidating trait heritability and disease associations (Yang et al., 2010; Welter et al., 2014), they often fall short in pinpointing causal variants, either due to a lack of power to resolve variants in linkage disequilibrium (Edwards et al., 2013) or due to the dispersion of weak association signals across a large number of genomic regions (Boyle et al., 2017). Furthermore, by definition GWAS is unable to assess the impact of previously unseen or rare variants, which can often have a large effect on phenotype (Bodmer and Bonilla, 2008). Therefore, the development of mechanistic models that address the impact of genetic variation on the phenotype can bypass the current bottlenecks of GWAS (Lehner, 2013).

In principle, phenotypes could be inferred from the genome sequence of an individual by combining molecular variant effect predictions with prior knowledge on the contribution of individual genes to a phenotype of interest. Such knowledge is now readily available through chemical genetics approaches, in which genome-wide knock-out (KO) libraries of different organisms are profiled across multiple growth conditions (Kamath et al., 2003; Dietzl et al., 2007; Hillenmeyer et al., 2008; Nichols et al., 2011; Price et al., 2016). One output of such screens is the set of genes whose function is required for growth in a given condition (also termed ‘conditionally essential genes’). Variants negatively affecting the function of those genes are likely to be associated with individuals displaying a significant growth defect in that same condition. This approach is thus effectively a way to prioritize variants with respect to their impact on growth in a particular condition. The impact of variants on gene function can be reliably inferred using different computational approaches (Thusberg et al., 2011; Kulshreshtha et al., 2016), which can offer mechanistic insights into how such variants affect a protein’s structure and function. Because rare or previously unseen variants can also be handled by this approach, it offers the possibility of delivering phenotype predictions at the level of the individual, with no prior model training. Combining variant effect predictors with gene conditional essentiality has been previously tested with some success in the budding yeast Saccharomyces cerevisiae, but only on a limited number of individuals and conditions (15 and 20, respectively; Jelier et al., 2011). Given the continuous increase in genome-wide gene functional studies, there is an opportunity to apply such an approach more broadly.

In contrast to eukaryotes, diversity within the same bacterial species can result in two individuals differing by as much as half of their genomic content. Escherichia coli is not only one of the most studied organisms to date (Blount, 2015), but also one of the most genetically diverse bacterial species (Lukjancenko et al., 2010; Tenaillon et al., 2010). Individuals of this species (strains) exhibit a wide range of genetic divergence, from highly homologous regions to large differences in gene content, the full repertoire of which is collectively termed the ‘pan-genome’ (Medini et al., 2005). Phenotypic variability is therefore likely to arise from a combination of single nucleotide variants (SNVs) and changes in gene content. Since gene conditional essentiality has been heavily profiled for the reference E. coli strain (K-12, Nichols et al., 2011; Price et al., 2016), here we set out to systematically test the applicability of such genotype-to-phenotype predictive models for this species. We reasoned that this would also test the limits of the model's underlying assumption, namely that the effect of the loss of function of a gene is independent of the genetic background (Dowell et al., 2010).

We therefore collected a large and diverse panel of E. coli strains (N = 894), for which we measured growth across 214 conditions, as well as obtained the genomic sequences for the majority of the strains (N = 696). For each gene in each sequenced strain we calculated a ‘gene disruption score’ by evaluating the impact of non-synonymous variants and gene loss. We then applied a model that combines the gene disruption scores with the prior knowledge on conditional gene essentiality to predict conditional growth defects across strains. The model yielded significant predictive power for 38% of conditions having a minimum number of strains with growth defects (at least 5% of tested strains). We independently validated a number of causal variants with complementation assays, demonstrating the feasibility of precision genetic interventions. Since our predictions did not apply equally well for all conditions, we conclude that the set of conditionally essential genes has diverged substantially across strains. Overall, we anticipate that the E. coli reference panel presented here will become a community resource to address the multiple facets of the genotype to phenotype research, ultimately enabling the development of biotechnological and personalized medical applications.

Results

The phenotypic landscape of the E. coli collection

We have assembled a large genetic reference panel of 894 E. coli strains able to capture the genetic and phenotypic diversity of the species. These are broadly divided into natural isolates (N = 527) and strains derived from evolution experiments (N = 367). Out of these, 321 had available genome sequences, and we obtained the genomes of an additional 375, reaching a total of 696 strains. The full list of strains, including name, collection of origin and links to relevant references, is provided in Supplementary file 1 and online at https://evocellnet.github.io/ecoref.

To test our capacity to develop genotype-to-phenotype predictive models we measured the fitness of this strain collection in a large variety of conditions (N = 214). We used high-density colony arrays and measured colony size as a proxy for fitness, similar to what has been applied before to the E. coli K-12 knockout (KO) library (Nichols et al., 2011). We used the deviation of each strain’s colony size from its own typical size across all conditions and from that of all other strains in the same condition as our final phenotypic measure (Collins et al., 2006). Thereby we obtained a list of conditions for which we know whether a tested strain has grown significantly less than expected (Figure 1A and Materials and methods). Both biological replicates (Figure 1B) and strains present in two distinct plates (Figure 1—figure supplement 1) were positively correlated (Pearson’s r 0.693 and 0.648, respectively), indicating that we measured phenotypes with high confidence. Clustering of conditions based on phenotypic profiles across all strains was consistent with the conditions’ macro-categories (e.g. stresses versus nutrient sources) and with the number of sensitive strains (Figure 1C). The correlation of phenotypic profiles also clustered drugs with the same mode of action (MoA), as shown by comparing the clusters’ purity against that of random clusters (Figure 1D). The full phenotypic matrix for each strain across all conditions contains 114,004 single measurements (Supplementary file 1).
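The deviation score described above can be sketched as follows; this is a minimal illustration of a two-way, median-based deviation in the spirit of Collins et al. (2006), not the study's actual S-score implementation (function and variable names are ours):

```python
# Hypothetical sketch: colony-size deviation scores from a strain x condition
# matrix, combining the strain's own median with the condition's median.
from statistics import median

def growth_scores(sizes):
    """sizes: dict {strain: {condition: colony_size}} -> deviation scores.

    A strain's expected size in a condition combines its median across
    conditions with the condition's median across strains; the score is
    the observed size minus that expectation (negative = growth defect).
    """
    strain_med = {s: median(conds.values()) for s, conds in sizes.items()}
    cond_sizes = {}
    for conds in sizes.values():
        for c, v in conds.items():
            cond_sizes.setdefault(c, []).append(v)
    cond_med = {c: median(v) for c, v in cond_sizes.items()}
    overall = median(strain_med.values())
    scores = {}
    for s, conds in sizes.items():
        for c, v in conds.items():
            expected = strain_med[s] + cond_med[c] - overall
            scores[(s, c)] = v - expected
    return scores
```

A strain that grows normally everywhere except one condition would receive a clearly negative score only for that condition.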

Figure 1 with 1 supplement
The phenotypic landscape of the E. coli strain collection. (A) Phenotypic screening experimental design and data analysis. (B) Replicability of the phenotypic measurements, as measured by pairwise comparison of the S-scores of all three biological replicates. (C) Hierarchical clustering of condition correlation profiles; the threshold is defined as the furthest distance at which the minimum average Pearson’s correlation inside each cluster is above 0.3. The two colored bands on top indicate the number of strains with growth defects for each condition and its category, showing consistent clustering. Gray-colored cells in the matrix represent missing values due to poor overlap of strains tested in the two conditions. (D) Cluster purity (computed for drug targets) for each hierarchical distance threshold, against that of random clusters (100 repetitions), shows that drugs with similar targets tend to cluster together. (E) Pearson’s correlation between phylogenetic and phenotypic distances, based on phylogenetic independent contrasts (see Materials and methods). (F) Core genome SNP tree for all strains in the collection. Grey shades in the inner ring indicate the number of conditions tested for each strain, red shades in the outer ring indicate the proportion of tested conditions in which the strain shows a significant growth defect. The black arrow indicates the reference strain.

https://doi.org/10.7554/eLife.31035.002

We compared the phylogenetic and phenotypic distances between strains. If all genetic variants were to contribute to phenotypic divergence between strains, we would expect a strong positive correlation between these two metrics. This was not the case, even when taking into account the phylogenetic dependencies between strains (r = 0.07, see Materials and methods). This finding reinforces the idea that most variants across these strains have a neutral impact on their phenotypes. Of particular interest are those strains derived from evolution experiments, such as the members of the LTEE collection (Tenaillon et al., 2016). While most strains (189 out of 266 tested) grew as expected in all tested conditions, a significant fraction (N = 77) exhibited at least one conditional growth defect. Again, the phylogenetic distance from the parental strains (REL606 and REL607) is not correlated with the proportion of growth phenotypes (Pearson’s r: 0.08), even though hypermutators exhibit a slightly higher number of phenotypes (Cohen’s d: 0.513). These results clearly underline that simple metrics of phylogenetic similarity are not predictive of differences in phenotype. Instead, a few DNA variants are sufficient to cause clear phenotypic differences, indicating the importance of statistical or predictive strategies to prioritize those variants.

In toto, we have assembled an E. coli reference strain panel, which we have broadly phenotyped. This phenotyping resource recapitulates known biology, surveying a rich phenotypic space and providing insights into the evolution of phenotypes within the species.

Gene level predictions of variant effects

As most DNA variants are likely to be neutral in their impact on gene function, variant effect predictors are required to build predictive phenotype models for each strain. For this, we derived structural models and protein alignments covering 60.2% and 94.7% of the E. coli K-12 proteome, respectively (or 50.9% and 95.9% of all protein residues), and used them to compute the impact of all possible nonsynonymous variants (see Materials and methods; available at http://mutfunc.com). The impact of multiple non-synonymous variants within a gene was then combined, using a probabilistic approach (Jelier et al., 2011), into a single likelihood measure of gene disruption (here termed ‘gene disruption score’, Figure 2A and Materials and methods). We also assumed that reference genes missing completely from a strain would have a high gene disruption score.
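Assuming that per-variant probabilities of loss of function are combined independently, the gene-level aggregation described above can be sketched as follows (a simplified reading of the Jelier et al. (2011) approach; the study's exact weighting may differ):

```python
def gene_disruption_score(variant_probs, gene_present=True):
    """Combine per-variant probabilities of loss of function into a single
    gene-level score: the probability that at least one variant disrupts
    the gene, assuming independence between variants.

    Genes absent from a strain are treated as fully disrupted, mirroring
    the assumption stated in the text.
    """
    if not gene_present:
        return 1.0
    intact = 1.0
    for p in variant_probs:
        intact *= (1.0 - p)
    return 1.0 - intact
```

For example, two variants each with a 0.5 probability of disrupting the gene yield a combined disruption score of 0.75.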

Figure 2 with 1 supplement
The ‘gene disruption score’, a gene-level prediction of the impact of genetic variants.

(A) Schematic representation of how all substitutions affecting a particular gene are combined to compute the gene disruption score. (B) Average gene disruption score across all strains for four categories of genes conserved in all strains (‘core genome’). Conserved genes are defined as those genes found in more (or less) than 95% of the bacterial species present in the eggNOG orthology database (Huerta-Cepas et al., 2016a). Statistically significant differences (Cohen’s d value >0.3) are reported. (C) Gene-gene correlation profile of gene disruption across all strains shows clusters of potentially functionally related proteins. (D–E) The gene disruption profiles as a predictor of genes function. (D) Pairwise correlation of gene disruption scores inside each annotation set (colored boxes) and inside a random set of genes of the same size (grey boxes). (E) Gene-gene correlation profile of gene disruption scores in protein complexes; shown here a subset with high disruption score correlation.

https://doi.org/10.7554/eLife.31035.004
Figure 3 with 2 supplements
Prediction of growth-defect phenotypes in the E. coli strain collection.

(A) Schematic representation of the computation of the prediction score and its evaluation; for each condition the predicted score is computed using the disruption scores of the conditionally essential genes. The score is then evaluated against the actual phenotypes through a Precision-Recall curve. (B) Higher predictive power for conditions with a higher proportion of growth phenotypes. For each condition set, the PR-AUC value for each condition is reported, together with the median and mean values. (C) Significance of the PR-AUC value reported for the condition ‘Clindamycin 3 μg/ml’, against the distribution of three randomization strategies. ‘Shuffled strains’ indicates a prediction in which the actual strains’ phenotypes have been shuffled; ‘shuffled sets’ indicates a prediction where the conditionally essential genes of a different condition have been used, and ‘random set’ indicates a prediction where a random gene set has been used as conditionally essential genes. For all three randomizations we report a significant difference between the actual prediction and the distribution of the randomizations (q-values of 1E-30, 0.05 and 1E-22, respectively). See Figure 3—figure supplement 2 for the other conditions. (D) Genome-wide gene associations are in agreement with the predictive score; the enrichment of conditionally essential genes in the results of the gene association analysis is significantly higher in conditions with higher PR-AUC.

https://doi.org/10.7554/eLife.31035.006
Figure 4
Detailed example of the computation and evaluation of the predicted score in two conditions.

(A) Precision-Recall curves for the two example conditions. Dashed lines represent the average Precision-Recall curve for the same predictions carried out using 10,000 random gene sets of the same size as the actual conditionally essential gene sets for both conditions. Vertical lines represent the mean absolute deviation of the precision across the randomizations. The two randomization sets are significantly different from the actual predictions (q-values of 1E-33 and 0.02, respectively). (B) Receiver operating characteristic curves for the two example conditions. Dashed lines represent the same randomizations as in (A). (C–D) For each condition, the gene disruption scores for the conditionally essential genes across all strains are reported, together with the resulting predicted conditional score and actual binary phenotypes (pale red: healthy; red: growth defect). Strains are sorted according to the predicted conditional score, while genes are sorted according to their weight in the predictive model; only the top 10 conditionally essential genes are shown. The inset reports the disruption scores, predicted scores and actual phenotypes for the top 25 strains.

https://doi.org/10.7554/eLife.31035.009

The gene disruption score can be considered as a relevant measure of the impact of mutations on gene function. Essential and phylogenetically conserved genes show lower average gene disruption scores across all strains, as compared to less conserved or random ones (Figure 2B). We also observed that genes that are predicted to lose their function together across all strains (Figure 2C) tend to be functionally associated (Figure 2D and E), in particular for genes belonging to the accessory genome (i.e. genes not present in all strains, Figure 2—figure supplement 1). These observations suggest that the gene disruption score is a biologically relevant measure of the impact of mutations at the gene level and that it could be used for growth phenotype predictions.

Predictive models of conditional growth defects

We combined the gene disruption scores for each gene in each strain with the genes’ conditional essentiality to obtain conditional growth predictive models. For each condition, we evaluated the impact of those variants affecting the genes that are essential for that same condition in the reference K-12 strain. We selected 148 conditions in which at least one strain displayed a growth defect and for which the E. coli K-12 KO collection had also been tested (Nichols et al., 2011 and Herrera-Dominguez et al., unpublished). For those, we computed a conditional score that ranks the strains according to their predicted growth level in the tested condition, from normal growth to the most defective growth phenotype (see Materials and methods). Briefly, if a strain has many detrimental variants in genes important for growth in a given condition, our predictive score will rank that strain as being highly sensitive in that condition. We then compared this predicted ranking with the experimental fitness measurements. The Area Under the Curve of a Precision-Recall curve (PR-AUC) was used to assess our predictive power (Figure 3A and Materials and methods). We note that this predictive score is condition specific and no parameter fitting or training was used.
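A minimal sketch of how such a conditional score and its PR-AUC evaluation could be computed; the aggregation rule (probability that at least one conditionally essential gene is disrupted) and all names are illustrative assumptions, not the study's code:

```python
def conditional_score(disruption, essential):
    """Predicted growth-defect score for one strain in one condition.

    disruption: dict mapping gene -> disruption score for the strain.
    essential: set of conditionally essential genes (e.g. from K-12).
    Returns the probability that at least one essential gene is disrupted,
    assuming independence.
    """
    intact = 1.0
    for g in essential:
        intact *= (1.0 - disruption.get(g, 0.0))
    return 1.0 - intact

def pr_auc(scores, labels):
    """Area under the precision-recall curve by the step-wise
    (average-precision) rule; labels are 1 for a growth defect.
    Ties in scores are broken arbitrarily in this sketch.
    """
    ranked = sorted(zip(scores, labels), reverse=True)
    tp = fp = 0
    pos = sum(labels)
    auc = 0.0
    for _, y in ranked:
        if y:
            tp += 1
            auc += tp / (tp + fp) / pos
        else:
            fp += 1
    return auc
```

A perfect ranking, in which all strains with growth defects receive the highest predicted scores, gives a PR-AUC of 1.0.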

Our predictive score is able to discriminate strains with normal growth from ones with growth defects with significantly higher power than randomized scores. Both the predicted impact of single nucleotide variants and gene presence/absence patterns contribute to the predictive power of the model (Figure 3—figure supplement 1). The predictive power increases for conditions in which more strains displayed growth defects (Figure 3B). We verified the significance of our predictions through comparison against three randomization strategies: shuffled strains, shuffled gene sets and random gene sets (see Materials and methods). For all three strategies we observed a clear dependency between the PR-AUC value and the significance of our predictions (Figure 3C and Figure 3—figure supplement 2). Since some conditions are similar and share a significant fraction of their conditionally essential genes, we sometimes observe a skew in the performance of the ‘shuffled set’ randomizations (Figure 3C). We correctly predicted 20% of the conditions that have at least 1% of poorly growing strains, with higher predictive capacity for conditions with a larger number of strains with growth defects, reaching 38% for all conditions with more than 5% of poorly growing strains (Figure 3—figure supplement 1). Weighting the contribution of the conditionally essential genes to each condition also improved the predictive power, particularly for well-predicted conditions (Figure 3—figure supplement 1, Materials and methods).
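The 'shuffled strains' randomization described above can be illustrated as an empirical permutation test: shuffle the phenotype labels across strains, recompute the performance metric, and ask how often the shuffled score matches or beats the observed one. This is a hedged sketch, not the published analysis code:

```python
import random

def average_precision(scores, labels):
    """Step-wise PR-AUC (average precision); labels are 1 for a defect."""
    ranked = sorted(zip(scores, labels), reverse=True)
    tp = fp = 0
    pos = sum(labels) or 1
    ap = 0.0
    for _, y in ranked:
        if y:
            tp += 1
            ap += tp / (tp + fp) / pos
        else:
            fp += 1
    return ap

def shuffled_strain_pvalue(scores, labels, n=1000, seed=0):
    """Empirical p-value for the 'shuffled strains' null: how often a
    random reassignment of phenotypes to strains matches or beats the
    observed average precision. The +1 correction avoids p = 0.
    """
    rng = random.Random(seed)
    observed = average_precision(scores, labels)
    hits = 0
    shuffled = list(labels)
    for _ in range(n):
        rng.shuffle(shuffled)
        if average_precision(scores, shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n + 1)
```

The 'shuffled sets' and 'random set' strategies would follow the same template, permuting which gene set is used rather than the phenotype labels.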

To independently validate our predictive models, we carried out a GWAS analysis based on genes presence/absence and the growth phenotypes. Consistent with the validity of our models, we found that for conditions we predict with higher confidence (PR-AUC >= 0.1), there is a significant overlap between the K-12 genes predicted to be essential in the condition and the genes associated with poor growth by the association analysis (Figure 3D, Fisher’s exact test, p-value 0.005).
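Such an overlap enrichment can be assessed with a one-sided Fisher's exact test, which reduces to a hypergeometric tail probability. A self-contained sketch (the study's exact contingency tables and gene counts are not reproduced here):

```python
from math import comb

def fisher_exact_greater(k, K, n, N):
    """One-sided Fisher's exact test (hypergeometric upper tail).

    Probability of observing >= k overlapping genes when drawing n genes
    (e.g. GWAS-associated genes) from N total, of which K are
    conditionally essential in K-12.
    """
    p = 0.0
    for i in range(k, min(K, n) + 1):
        p += comb(K, i) * comb(N - K, n - i) / comb(N, n)
    return p
```

A small observed overlap relative to the sizes of the two gene sets yields a p-value close to 1, whereas a large overlap yields a small p-value, as reported for the well-predicted conditions.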

We further examined two well-predicted conditions (PR-AUC >0.35), pseudomonic acid 2 μg/ml (the antibiotic mupirocin) and minimal media with the addition of an amino acid mix, to inspect our predictive model (Figure 4). Both conditions showed an enrichment of strains with growth defects at high predicted scores (GSEA p-values of 0.001 and <10−6, respectively), which is a common property of conditions with higher PR-AUC (Figure 3—figure supplement 1). The reference K-12 strain harbors many conditionally essential genes in minimal media (N = 181), providing an example for which growth phenotypes are well predicted from deleterious effects across a large number of genes in different strains. In contrast, pseudomonic acid is a condition with few (N = 10) conditionally essential genes, thus making it easier to evaluate the contribution of single genes to the phenotype. Of the 25 strains with the highest predicted score in pseudomonic acid, 13 were misclassified by the model. Seven misclassified strains had high disruption scores in either lpcA or rfaE, two genes involved in the biosynthesis of a lipopolysaccharide (LPS) precursor (L-glycero-β-D-manno-heptose). Both, when deleted in K-12, cause a strong growth defect under pseudomonic acid. Only one of the correctly predicted strains had a mutation in one of those two genes (IAI36), suggesting that this pathway might be conditionally essential only in K-12. Another example of incorrect predictions involves two strains (ECOR-27 and ECOR-58) that share very similar disruption score profiles for the conditionally essential genes in pseudomonic acid (Figure 4C), but only ECOR-27 had a growth defect in this condition. Both strains harbor a single nonsynonymous mutation in acrB (E414G for ECOR-27 and I466T for ECOR-58), which, in both cases, is predicted to be highly deleterious. Changes in conditional essentiality or epistatic effects are possible explanations for this misclassification, and therefore mapping and incorporating this information in our models could significantly benefit predictions in the future.

Experimental validation of causal variants

Since we use mechanistic models in our phenotype predictions to calculate the impact of non-synonymous genetic variants, we can then also directly pinpoint the causal variants and implement genetic therapies to correct growth phenotypes. We tested this by ranking the mutated conditionally essential genes in each condition according to their predicted ability to rescue growth phenotypes (Figure 5A and Materials and methods). Several genes were predicted to restore many growth phenotypes (Figure 5B), including the genes forming the AcrAB-TolC multidrug efflux pump, which were predicted to restore growth in up to ~1000 condition-strain pairs (1012 acrB, 494 acrA and 517 tolC, respectively), reflecting the importance of this efflux system in drug resistance (Li et al., 2015).
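Conceptually, ranking genes by the number of phenotypes they could restore amounts to counting, for each conditionally essential gene, the strain-condition pairs in which the strain shows a growth defect and that gene is predicted disrupted. A hypothetical sketch (the disruption threshold and data layout are our assumptions, not the study's):

```python
def rescue_candidates(disruption, essential_by_condition, defects,
                      threshold=0.9):
    """Count, per gene, the strain-condition pairs that a reference-allele
    complementation of that gene might rescue.

    disruption: {strain: {gene: disruption score}}.
    essential_by_condition: {condition: iterable of essential genes}.
    defects: iterable of (strain, condition) pairs with a growth defect.
    threshold: illustrative cutoff for calling a gene disrupted.
    """
    counts = {}
    for strain, cond in defects:
        for g in essential_by_condition.get(cond, ()):
            if disruption.get(strain, {}).get(g, 0.0) >= threshold:
                counts[g] = counts.get(g, 0) + 1
    return counts
```

Sorting the resulting counts would surface genes such as those of the AcrAB-TolC pump, which the study predicts to restore growth in the largest number of strain-condition pairs.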

Figure 5 with 1 supplement
Experimental confirmation of the predicted phenotype-causing genotypes.

(A) Schematic representation of the experimental approach. (B) Distribution of the number of predicted restored phenotypes per gene; red stripes indicate the genes that were experimentally tested. (C) Growth change of the target strain carrying the empty plasmid versus the one expressing the reference copy of the target gene (complementation). Strains for which a change in phenotype is predicted are compared to those where no change is predicted. (D–F) Detailed representation of the results of the complementation experiment in three conditions. Mean colony area and the 95% confidence intervals are reported. Significant differences (t-test p-value<0.01) between the colony area of the strains with the empty and complemented plasmids are reported. ‘Other complementation’ reports strains expressing a gene other than the focal one, indicated in parentheses. (G) Cartoon representation of the AcrB 3D structure (PDB entry: 2dr6). Only one of the three monomers is highlighted in blue; colored spheres represent known non-synonymous variants in the reported strains.

https://doi.org/10.7554/eLife.31035.010

We selected eight genes to experimentally verify our predictions, including genes involved in either drug resistance (acrAB, soxSR and slt) or auxotrophic growth (glnD and proAB). We then selected conditions for which the introduction of the reference allele in the target strains was predicted to restore growth, and controls where no improvement in phenotype was expected (Materials and methods). We also introduced the plasmid expressing the reference allele in the reference strain as a negative control and in the corresponding deletion strain from the KEIO collection (Baba et al., 2006) as a positive control. Overall, we observed a high correlation between replicates of this experiment (Pearson’s r: 0.94, Figure 5—figure supplement 1). In total, we tested 64 strain-condition combinations, detecting a significant difference (p-value 8.4E-11, two-sided t-test) in colony size change between strain-condition pairs in which we had predicted growth would be restored (N = 14) versus ones we had not (N = 50) (Figure 5C and Materials and methods). This validated the effectiveness of our predictive models and more generally confirmed that genetic variants can be used to predict causal mutations and prioritise strategies for reverting/modifying phenotypes.

We investigated three conditions in more detail: pseudomonic acid 2 μg/ml with an acrB complementation, pyocyanin 10 μg/ml (a toxic secondary metabolite produced by Pseudomonas aeruginosa) with a soxSR complementation, and the amino acid L-glutamine as nitrogen source with a proAB complementation (Figure 5D–F and Figure 5—figure supplement 1 for all conditions and strains). In all cases, strains harboring the gene(s) predicted to restore the phenotypes grew significantly better than strains carrying the empty plasmid (t-test p-value<0.01). In contrast, none of the strains harboring a different complementation gene showed a significant increase in colony size. In one case we detected an unexpected increase in colony size for one of the strains where we had not predicted any change (strain IAI39 expressing AcrB, Figure 5D). We hypothesize that the original growth phenotype is either due to an incorrectly classified variant in acrB or due to another variant that acts on the expression of this efflux pump.

Of the five strains expressing the reference allele of acrB tested in pseudomonic acid, three harbored nonsynonymous variants in their chromosomal copy. One of them (IAI30) was predicted to increase its colony size after the complementation, whereas the other two (ECOR-42 and ECOR-28) were not predicted to be affected by the complementation (Figure 5D). We mapped those nonsynonymous variants to the three-dimensional structure of AcrB (Murakami et al., 2006) to inspect their potential impact on the protein function (Figure 5G). Both ECOR-42 and ECOR-28 harbor a single deleterious nonsynonymous variant (SIFT, p-value~0.01) in the transmembrane domain of the protein (A915D and T1013I, respectively). Strain IAI30 on the other hand carries two nonsynonymous variants: E567V and H596N, both located in the AcrB PC1 subdomain, which is involved in the entry of the ligand that will be then extruded by the efflux pump (Murakami et al., 2006; Seeger et al., 2006). Since the E567V variant is predicted to be highly deleterious (SIFT, p-value~0.001), we presume that it impairs the fundamental drug uptake function of the AcrAB-TolC multidrug efflux pump, while the variants in the other two strains might not be significantly affecting the pump’s function. This example shows how mechanistic interpretations of the impact of genetic variants can direct insights into the emergence of fitness defects and suggest potential gene therapeutic strategies, down to the level of the single genetic variant.

Discussion

As part of this study, we have assembled and phenotyped a large, phylogenetically diverse E. coli resource strain collection. We observed little to no correlation between genotypic and phenotypic distance across strains. This demonstrates the need to assess the impact of individual genetic variants, as most are likely to be neutral. We combined mechanistic interpretations of the impact of non-synonymous variants in the genomes of all these strains with prior knowledge on E. coli K-12 gene conditional essentiality to predict conditional growth defects of strains in the collection. This resulted in reliable predictions for up to 38% of the tested growth conditions. Our mechanistic model directly pinpoints the causal variants and allows us to design successful genetic therapies, something that is often the bottleneck in traditional GWAS approaches. Moreover, our approach opens up the door to predicting strain functional capacities in less studied microbes, such as those present in the human microbiome, with the sole requirements being the strain's genome sequence and knowledge of gene function in a model strain.

Despite the significant predictive power, we still could not reliably predict the strains’ phenotypic response in a number of conditions. Epistatic interactions, the impact of synonymous variants, variants in non-coding regions and the large accessory genome are four possible factors influencing phenotypic diversity. Accounting for these factors could therefore be required for more accurate predictions. Compared to the previous application of this method to S. cerevisiae strains (Jelier et al., 2011), we observed a lower overall accuracy. The larger and more diverse strain cohort, the higher diversity in conditions tested, including several concentrations of each chemical, and the larger genetic diversity in bacteria are potential causes for the differences we have observed. Most importantly, we postulate that conditional essentiality may not be conserved across strains and therefore that it interacts with each specific genetic background. Previous work in two closely related S. cerevisiae strains and five human cancer cell lines has shown that gene-specific phenotypes can diverge considerably (Ryan et al., 2012; Hart et al., 2015). Given the higher genetic variability, this genetic background dependence of gene-specific phenotypes is expected to be stronger in E. coli and other bacteria, even though this remains to be tested. Despite these limitations, we believe that we have demonstrated how gene function assays and genetic variant prioritization can be leveraged to deliver growth predictions and genetic intervention strategies. Future iterations based on a more complete understanding of the conservation of conditional essentiality should improve these predictions.

Finally, we propose this strain collection as a valuable community resource for developing and testing genotype-to-phenotype models, following the example of previous successful genetic reference panels (Ayroles et al., 2009; Liti et al., 2009; Bennett et al., 2010; Atwell et al., 2010; 1001 Genomes Consortium, 2016; Weinstein et al., 2013; 1000 Genomes Project Consortium et al., 2015). Any additional molecular and phenotypic measurements on the collection members will amplify the benefit for the entire research community, moving us closer to answering the basic biological question of how genetic variation translates into differences among individuals.

Materials and methods

Genome sequencing, assembly and annotation

Strains whose genome sequences were not yet available were sequenced using various Illumina paired-end platforms (Supplementary file 1). The resulting sequencing reads were quality-checked using FastQC version 0.11.3, and contaminating sequencing adapters were removed using seq_crumbs version 0.1.9. Reads were assembled with SPAdes (Bankevich et al., 2012) version 3.5.0, using different k-mer sizes according to read length and with the ‘careful’ option to reduce assembly errors; contigs below 200 base pairs were excluded. The assembled contigs were annotated for coding genes, ribosomal RNAs and tRNAs using Prokka (Seemann, 2014) version 1.11. Strains flagged by Kraken (Wood and Salzberg, 2014) version 0.10.5 as not belonging to the E. coli species were excluded from subsequent analysis. When available, typing information was used to spot incorrect genome sequences due to culture contamination or other factors; known strain types were compared to those predicted from the genome sequence using mlst version 2.8. Strain names were amended when possible (see the ‘Notes’ column in Supplementary file 1). The ECOR collection was carefully checked for inconsistencies, as it is well known that different ‘versions’ of this collection circulate in the scientific community (Johnson et al., 2001; Clermont et al., 2015). The genome sequences were further checked for duplicates: strains with highly similar genomes but highly divergent phenotypes (S-score correlation below 0.6) were flagged, and the genome belonging to the most likely incorrect strain was removed. If the conflict could not be resolved (e.g. using known typing information) both genomes were removed. Highly similar genomes were defined as those whose distance was below 0.001, as measured by mash (Ondov et al., 2016) version 1.1.
Despite the draft status of the genomes presented in this study, we observed a core genome size similar to that computed from a set of 376 complete E. coli genomes downloaded from NCBI’s RefSeq (data not shown).

SNP calling and annotation

Due to the variability in sequencing technologies or the lack of original reads for the already sequenced strains in the collection (N = 373), SNPs were called through a whole-genome alignment between each strain and the genome of the reference individual (Escherichia coli str. K-12 substr. MG1655, RefSeq accession: NC_000913.3, strain collection identifier NT12001), using ParSNP (Treangen et al., 2014) version 1.2. Repeated regions in the reference genome were identified and masked using nucmer (Kurtz et al., 2004) version 3.1 and Bedtools (Quinlan and Hall, 2010) version 2.26.0. SNPs were then phased and annotated using SnpEff (Cingolani et al., 2012) version 4.1g.

Pangenome analysis

Genes present in the reference individual but absent in each strain were identified by computing hierarchical orthologous groups using OMA (Altenhoff et al., 2013) version 1.0.6. Each strain was first re-annotated using Prokka (Seemann, 2014) version 1.11 to harmonize gene calls.

Phylogenetics

The strains’ phylogenetic tree was computed using a single ParSNP (Treangen et al., 2014) analysis, which generates a whole-genome nucleotide alignment across all strains; this alignment was then used as input for FastTree (Price et al., 2010) version 2.1.7. The tree was visualized using the ete3 library (Huerta-Cepas et al., 2016a) version 3.0.0.

Phylogenetic distances between pairs of strains were computed using phylogenetic independent contrasts, in order to account for phylogenetic interdependencies between strains (Felsenstein, 1985). In short, we first restricted the phenotypic data to those strains and conditions with less than 10% missing data; we then imputed the rest using the average S-score for each strain, using the scikit-learn library (Pedregosa et al., 2011) version 0.17.1. For each condition we then computed the phylogenetic independent contrasts across the internal nodes of the strains’ tree, using the ape, geiger and phytools libraries (Paradis et al., 2004; Harmon et al., 2008; Revell, 2012), versions 5.0, 2.0.6 and 0.6–20, respectively. The phenotypic distance between nodes was then computed as the euclidean distance between the phylogenetic independent contrasts across all conditions, using the nadist library version 0.1.0.

Computation of all possible mutations and their effect

The impact of all possible nonsynonymous substitutions on the reference individual was precomputed to speed up lookups. The functional impact of nonsynonymous substitutions was computed using SIFT (Ng and Henikoff, 2001) version 5.1.1, with UniRef50 as the sequence search database. The structural impact of nonsynonymous substitutions was computed using FoldX (Guerois et al., 2002) version 4; both 3D structures present in the PDB database and homology models were used. Homology models were created using ModPipe (Pieper et al., 2014) version 2.2.0. The conversion from PDB to UniProt residue coordinates was derived from the SIFTS (Velankar et al., 2013) database. PDB structures were ‘repaired’ to fix residue torsion angles and van der Waals clashes before computing the impact of non-synonymous variants on structural stability. The precomputed impacts of all possible nonsynonymous substitutions are available through the mutfunc database (http://mutfunc.com).

Computation of the disruption score

For each strain and each protein-coding gene we computed the overall impact of all nonsynonymous and nonsense substitutions, following the approach used for Saccharomyces cerevisiae (Jelier et al., 2011). The output of each predictor (a deleterious probability for SIFT and a ΔΔG value for FoldX) was converted to the probability of the substitution being neutral, p(neutral). Mutations with known impact on the reference individual were downloaded from UniProt (UniProt Consortium, 2015; N = 3673) and used to derive this conversion; since only 580 mutations are reported to have a neutral impact, we added all observed nonsynonymous variants affecting known essential genes (as reported in the OGEE database, Chen et al., 2017) to the list of tolerated mutations. The distribution of the negative natural logarithm of the SIFT probability (plus a pseudocount equivalent to the lowest observed SIFT probability) for all 6460 mutations was binned, and p(neutral) was computed as the proportion of tolerated mutations over the total number of mutations in each bin. A logistic regression curve was then fitted to the binned distribution to derive the conversion between the SIFT probability and p(neutral). For FoldX we used a similar approach, based on the computed ΔΔG value. The fitted logistic regression curves resulted in the following P(neutral) functions:

P(neutral_SIFT) = 1 / (1 + e^(-0.625·ln(SIFT + 1.527E-4) - 1.971))
P(neutral_FoldX) = 1 / (1 + e^(1.465·ΔΔG - 1.201))

The P(neutral) value attributed to nonsense mutations was assigned through heuristics: if the new stop codon was found within the last 16 residues of the protein it was given a P(neutral) value of 0.99, reflecting how unlikely it is to disrupt the function of the protein, and 0.01 otherwise. Loss of a start or stop codon was given a P(neutral) value of 0.01, as it is very likely to impair protein function.

We gave a P(neutral) value of 0.01 to those genes that were found to be present in the reference individual but absent in the target strain, reflecting the fact that their function is most likely to be absent from the target strain.
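The score conversions described above can be sketched as follows. The coefficients and pseudocount are those reported for the fitted curves, but the sign conventions are our reading of the typographically garbled published formulas, and all function names are hypothetical; treat this as an illustrative sketch rather than the exact published implementation:

```python
import math

SIFT_PSEUDOCOUNT = 1.527e-4  # lowest observed SIFT probability (see above)

def p_neutral_sift(sift_p):
    # Logistic conversion of a SIFT deleterious probability into
    # P(neutral); SIFT values near 1 (tolerated) map to high P(neutral).
    x = 0.625 * math.log(sift_p + SIFT_PSEUDOCOUNT) + 1.971
    return 1.0 / (1.0 + math.exp(-x))

def p_neutral_foldx(ddg):
    # Logistic conversion of a FoldX ddG value; strongly destabilizing
    # substitutions (large positive ddG) map to low P(neutral).
    return 1.0 / (1.0 + math.exp(1.465 * ddg - 1.201))

def p_neutral_nonsense(stop_position, protein_length):
    # Heuristic for premature stop codons: a stop within the last 16
    # residues is assumed not to disrupt protein function.
    return 0.99 if protein_length - stop_position <= 16 else 0.01

P_NEUTRAL_ABSENT_GENE = 0.01  # reference gene missing from the target strain
```

Under these conventions a fully tolerated SIFT score (near 1) yields P(neutral) of roughly 0.88 and a strongly deleterious one roughly 0.03, matching the direction described in the text.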

We inferred the probability that each gene had its function affected by the ensemble of the substitutions in each strain by computing a disruption score, equivalent to the P(AF) (probability of the function being affected) used in the S. cerevisiae study (Jelier et al., 2011).

P(AF) = 1 - ∏_{i=1}^{k} P_i(neutral)

Where k is the number of nonsynonymous and nonsense substitutions observed in each gene. When both FoldX and SIFT predictions were available for a given substitution, only the SIFT prediction was used. Variants with relatively high frequency (>=10%) in the strain collection were not considered, nor were reference genes absent in a significant fraction of strains (>=10%), as we reasoned that they are unlikely to be deleterious given their high observed frequency. Given that many strains of the collection are closely related (e.g. strains derived from the LTEE collection), we clustered them based on phylogenetic distance before applying this filter. We also did not consider variants and absent reference genes shared by all members of the LTEE strain collection, as those variants are present in the collection’s founder strain and are therefore unlikely to affect the evolved clones’ phenotypes. Disruption scores for all proteins across all strains can be found in Supplementary file 5.
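Assuming independent substitution effects, the disruption score of a gene is one minus the product of the per-substitution neutral probabilities; a minimal sketch (function names hypothetical):

```python
def disruption_score(p_neutral_values):
    # P(AF): probability that the gene's function is affected by the
    # ensemble of k substitutions, assuming independent effects.
    p_all_neutral = 1.0
    for p in p_neutral_values:
        p_all_neutral *= p
    return 1.0 - p_all_neutral

def filter_rare(variant_freqs, max_freq=0.10):
    # Keep only variants below the 10% collection-wide frequency cutoff;
    # common variants are assumed unlikely to be deleterious.
    return [v for v, f in variant_freqs.items() if f < max_freq]
```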

Use of the disruption score as a functional association predictor

We used the pairwise Pearson’s correlation of protein disruption scores as a predictor of functional associations between genes. We used four benchmarking sets: operons, derived from the DOOR database (Mao et al., 2014); protein complexes and pathways, derived from the EcoCyc database (Keseler et al., 2013); and protein-protein interactions, derived from a recent yeast two-hybrid experiment (Rajagopala et al., 2014). The distribution of disruption-score correlations for each pair of related genes was compared against the same number of gene pairs randomly drawn from all reference genes. We also assessed the predictive power of the disruption-score correlations by drawing a receiver operating characteristic (ROC) curve across correlation thresholds, using the scikit-learn library (Pedregosa et al., 2011) version 0.17.1.
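The ROC assessment was done with scikit-learn; as a self-contained illustration, the area under a ROC curve can equivalently be computed as the Mann-Whitney probability that a functionally related gene pair outscores a random pair (a stdlib sketch, not the code used in the study):

```python
def roc_auc(labels, scores):
    # AUC as the probability that a positive example (label 1) has a
    # higher score than a negative one (label 0); ties count as 0.5.
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```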

Strains phenotyping

The strain phenotypes were measured in a similar way as in the E. coli reverse genetic screen (Nichols et al., 2011). The strain collection was arrayed on three solid agar plates, each containing 1536 single colonies, so that each strain was plated at least four times per plate, each time with different neighboring strains. For each condition we prepared three replicates (each using a different source plate to reduce batch effects) with the concentrations indicated in Supplementary file 2 and the addition of the Cosmos dye (CAS number 573-58-0) and Coomassie brilliant blue R-250 (CAS number 6104-59-2); this solution stains colonies that are developing biofilm. Plates were stored in darkness at room temperature, unless the specific condition required otherwise (e.g. higher temperature), and photographs of the plates were taken until colonies were found to be overgrowing into each other. Most of the conditions (N = 197) were tested at the same time and under the same laboratory conditions as the KEIO knockout collection (Herrera-Dominguez et al., unpublished).

A series of colony parameters were extracted from each photograph using Iris (Kritikos et al., 2017) version 0.9.4.71: colony size, opacity, roundness and color intensity. The most appropriate time point for each condition was determined by imposing a restriction on median colony size: between 1900 and 3600 pixels for the first two plates, and between 1300 and 3600 pixels for the last plate, which contained the strains derived from evolution experiments, as these tend to grow less than the natural isolates. The time points passing this first threshold were then ranked by the proportion of colonies with high roundness (>0.8), which is indicative of the overall quality of the plate; the proportion of colonies over the minimum median colony size threshold; the spread of the colony size distribution (the lower the better); and the mean colony size correlation across replicates.

A series of additional quality control measures were applied to the colony parameters. In order to remove systematic pinning defects, colonies appearing to be missing (colony size of zero pixels) in more than 66% of the tested plates were removed, unless all the internal replicates were found to be missing. Colonies with abnormal circularity were removed, as these were mostly due to incorrect colony recognition by the software: colonies with size below 1000 pixels and circularity below 0.5, and colonies with size above 1000 pixels and circularity below 0.3, were removed. Putative contaminations were spotted and removed through a variance jackknife approach: first, the size of colonies in the two outermost rows and columns was corrected to match the median of the rest of the plate; then, for each strain, each of the four replicates inside the plate was tested for whether it contributed more than 90% of the total colony-size variance. If so, the replicate was flagged as a contamination and removed. The same approach was repeated using colony circularity, with a variance threshold of 95%.
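The variance jackknife can be sketched as follows, assuming that "contributing more than 90% of the variance" means that removing the replicate removes more than 90% of the total colony-size variance (our reading of the procedure; function names hypothetical):

```python
def variance(values):
    # Population variance of a list of colony sizes.
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def flag_contaminations(replicate_sizes, threshold=0.90):
    # Jackknife each replicate out in turn; if the remaining replicates
    # retain less than (1 - threshold) of the total variance, the
    # left-out replicate is flagged as a putative contamination.
    total = variance(replicate_sizes)
    if total == 0:
        return []
    flagged = []
    for i in range(len(replicate_sizes)):
        rest = replicate_sizes[:i] + replicate_sizes[i + 1:]
        contribution = 1.0 - variance(rest) / total
        if contribution > threshold:
            flagged.append(i)
    return flagged
```

For example, `flag_contaminations([100, 102, 98, 500])` flags the fourth replicate, while balanced replicates are left untouched.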

The final colony sizes were used as input for the EMAP algorithm (Collins et al., 2006), with default parameters, to derive an S-score, which reports the deviation of each strain from its expected growth in each condition. Final S-scores were quantile-normalized, and significant phenotypes were highlighted using a 5% FDR correction similar to the one used in the E. coli reverse genetic screen (Nichols et al., 2011), using the statsmodels library version 0.6.1. The phenotypic data are available in Supplementary file 4.
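Quantile normalization maps scores onto a reference distribution by rank; a minimal stdlib sketch mapping S-scores onto standard-normal quantiles (one common variant of the procedure — the study's exact implementation may differ, and ties are not handled here):

```python
from statistics import NormalDist

def quantile_normalize(scores):
    # Replace each score by the standard-normal quantile of its rank,
    # using the midpoint convention (rank + 0.5) / n.
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    normal = NormalDist()
    out = [0.0] * n
    for rank, i in enumerate(order):
        out[i] = normal.inv_cdf((rank + 0.5) / n)
    return out
```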

Computation of the conditional score and its assessment

Conditionally essential genes were derived for each condition of the E. coli reverse genetic screen that overlapped with the conditions tested on the natural isolate collection. Mutants with a significant growth phenotype were used to derive the list of conditionally essential genes. The conditional score for each strain, which constitutes the growth prediction, was computed as follows:

S_{s,c} = ∑_{g=1}^{l} (1/E_s) · W_{g,c} · log(1 - P_{s,g}(AF))

Where s and c represent the strain and condition, respectively, g each conditionally essential gene for condition c (of which there are l), and E_s a correction term for the disruption score, introduced to remove the effect of phylogenetic distance (Figure 2—figure supplement 1).

E_s = (1/n) · ∑_{g=1}^{n} log(1 - P_{s,g}(AF))

Where n is the total number of reference genes. The term W_{g,c} weights the contribution of each conditionally essential gene to the conditional score, and is computed as follows:

W_{g,c} = -log10(F_{g,c}) · C_g / N_c

Where F_{g,c} is the FDR-corrected p-value of gene g in condition c, C_g is the number of conditions in the chemical genomics screen in which gene g shows a significant phenotype, and N_c the total number of tested conditions in the chemical genomics screen.
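Putting the three formulas together, the score computation can be sketched as below. The exact form of W_{g,c} is our reconstruction of a partially garbled equation, so treat it as indicative; all function names are hypothetical:

```python
import math

def correction_term(p_af_all_genes):
    # E_s: mean of log(1 - P(AF)) across all n reference genes of strain s.
    return sum(math.log(1.0 - p) for p in p_af_all_genes.values()) / len(p_af_all_genes)

def gene_weight(fdr_p, n_significant_conditions, n_conditions):
    # W_{g,c}: significance of gene g in condition c, scaled by how often
    # the gene shows a phenotype across the chemical genomics screen.
    return -math.log10(fdr_p) * n_significant_conditions / n_conditions

def conditional_score(p_af, weights, e_s):
    # S_{s,c}: weighted sum of log(1 - P(AF)) over the conditionally
    # essential genes for the condition, scaled by 1/E_s. Both E_s and
    # the log terms are negative, so disrupted conditionally essential
    # genes push the score up (a predicted growth defect).
    return sum((1.0 / e_s) * w * math.log(1.0 - p_af[g])
               for g, w in weights.items())
```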

The conditional score was assessed by computing a precision-recall curve, whose area (PR-AUC) was used as a direct measure of the predictive power of the method; significant growth phenotypes were considered true positives. The PR curve and its AUC were computed using the scikit-learn library (Pedregosa et al., 2011) version 0.17.1. Three randomization approaches were used to generate control conditional scores: shuffling strains, shuffling conditionally essential gene sets, and drawing random conditionally essential gene sets. Each randomization strategy was used to generate 10,000 randomized scores, which were scaled to the actual scores before assessment.

The conditionally essential gene sets, the conditional score and the PR-AUC values are available in Supplementary file 5.

Association of accessory genes with phenotypes

Accessory genes from the strain collection pangenome were computed from the harmonized genome annotations made by Prokka (Seemann, 2014), using Roary (Page et al., 2015) version 3.6.1. The accessory genes were associated with each condition’s phenotypes using Scoary (Brynildsrud et al., 2016) version 1.4.0, with default parameters. Genes with a corrected p-value of association (Benjamini-Hochberg) below 0.05 were considered significant. The enrichment of conditionally essential genes among the significant reference gene hits was assessed through a Fisher’s exact test, as implemented in the SciPy library, version 0.17.0.
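The enrichment test is a standard one-sided Fisher's exact test; for illustration, its p-value is the upper tail of the hypergeometric distribution (a stdlib sketch, equivalent to the SciPy call used in the study with the 'greater' alternative; function name hypothetical):

```python
from math import comb

def enrichment_p(k, n_hits, n_category, n_total):
    # P(X >= k) where X ~ Hypergeometric(n_total, n_category, n_hits):
    # the chance of drawing at least k conditionally essential genes
    # among n_hits significant genes purely by chance.
    upper = min(n_hits, n_category)
    tail = sum(comb(n_category, i) * comb(n_total - n_category, n_hits - i)
               for i in range(k, upper + 1))
    return tail / comb(n_total, n_hits)
```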

Systematic in-silico complementation of conditionally essential genes

The potential to restore growth phenotypes through the introduction of reference alleles was predicted systematically in each strain by setting the disruption score of each conditionally essential gene to zero, and reporting the change in the conditional score (ΔS_{s,c}) with respect to the maximum possible conditional score:

S_{s,c}^max = ∑_{g=1}^{l} (1/E_s) · W_{g,c} · log(1 - P_max(AF))

Where P_max(AF) is the maximum disruption score observed across all genes and strains. Any ΔS_{s,c} higher than 1% of S_{s,c}^max was considered potentially able to restore a growth phenotype.
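Complementing a single gene in silico reduces to dropping that gene's term from the conditional-score sum, since setting P(AF) = 0 makes log(1 - P(AF)) vanish; a sketch with hypothetical names, consistent with the score definition above:

```python
import math

def complementation_delta(p_af, weights, e_s, gene):
    # Change in conditional score when `gene`'s disruption score is set
    # to zero: exactly the gene's own contribution to the weighted sum.
    if gene not in weights or gene not in p_af:
        return 0.0
    return (1.0 / e_s) * weights[gene] * math.log(1.0 - p_af[gene])
```

A gene would then be proposed for experimental complementation when this delta exceeds 1% of the maximum possible score S_{s,c}^max.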

Experimental complementation of predicted phenotype-causing genes

In order to experimentally verify our predictions, we introduced the reference (BW25113) gene on a low-copy plasmid. For the slt gene, we used the available plasmid from the TransBac library (H. Dose and H. Mori, unpublished resource; Otsuka et al., 2015). For acrA, acrB and glnD, we used the available plasmids from the mobile plasmid library (Saka et al., 2005). We amplified acrAB, soxSR and proAB from BW25113 and ligated them into pNTR-SD (the backbone plasmid of the mobile plasmid library). Deletions of acrAB, soxSR and proAB in the reference strain were made using the lambda red recombination approach (Datsenko and Wanner, 2000). The resulting seven plasmids and the two empty-plasmid controls were introduced into BW25113 (negative control), into the deletion strains from the Keio collection or constructed by us (positive controls), and into the target strains. All resulting strains were pinned using a Singer Rotor robot in 10 different conditions, on two solid agar plates, so that each strain was pinned at least four times per plate. The plates were incubated at room temperature and multiple photographs were taken until colonies were found to be overgrowing into each other. Iris (Kritikos et al., 2017) version 0.9.7 was used to extract colony sizes from the pictures.

Code, data and strains collection availability

New genomic sequences have been deposited at the European Nucleotide Archive (ENA) with accession number PRJEB20550.

The source code used to perform the analysis reported here and generate the figures is available as Source code 1 and at the following URLs: https://github.com/mgalardini/screenings, https://github.com/mgalardini/pangenome_variation, https://github.com/mgalardini/ecopredict (copies archived at https://github.com/elifesciences-publications/screenings, https://github.com/elifesciences-publications/pangenome_variation and https://github.com/elifesciences-publications/ecopredict). The code is mostly written in the Python programming language, using the following libraries: NumPy (van der Walt et al., 2011) version 1.10.4, SciPy version 0.17.0, Pandas (McKinney, 2010) version 0.18.0, Biopython (Cock et al., 2009) version 1.68, scikit-learn (Pedregosa et al., 2011) version 0.17.1, fastcluster (Müllner, 2013) version 1.1.20, statsmodels version 0.6.1, PyVCF version 0.6.8, ete3 (Huerta-Cepas et al., 2016a) version 3.0.0, Matplotlib (Hunter, 2007) version 1.5.1, Seaborn (Waskom et al., 2016) version 0.7.1 and svgutils version 0.2.0.

Genomic and phenotypic data, as well as relevant information on how to obtain the members of the strain collection is available at the following URL: https://evocellnet.github.io/ecoref.

References

  1. Hunter JD. 2007. Matplotlib: a 2D graphics environment. Computing in Science & Engineering. IEEE Computer Society.
  2. Johnson JR, Delavari P, Stell AL, Prats G, Carlino U, Russo TA. 2001. Integrity of archival strain collections: the ECOR collection. ASM News (American Society for Microbiology) 67:288–289.
  3. McKinney W. 2010. Data structures for statistical computing in Python. In: Proceedings of the 9th Python in Science Conference. pp. 51–56.
  4. Müllner D. 2013. fastcluster: fast hierarchical, agglomerative clustering routines for R and Python. Journal of Statistical Software 53. doi: 10.18637/jss.v053.i09.
  5. Pedregosa F, Varoquaux G, Gramfort A, Michel V, Thirion B, Grisel O, Blondel M, et al. 2011. Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12:2825–2830.

Decision letter

  1. Naama Barkai
    Reviewing Editor; Weizmann Institute of Science, Israel

In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.

Thank you for submitting your article "Phenotype inference in an Escherichia coli strain panel" for consideration by eLife. Your article has been reviewed by two peer reviewers, and the evaluation has been overseen by Naama Barkai as the Senior Editor and Reviewing Editor. The reviewers have opted to remain anonymous.

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

As you will see below, the reviewers appreciated the extensive analysis and believe that the data will be a useful resource for the community. Please see below technical issues that were raised by the reviewers, which should be addressed in full, as you revise your submission for the Tools and Resources section.

Reviewer #1:

This paper does an extensive analysis and prediction of phenotypes caused by mutations in a large E. coli strain panel evaluated over several hundred conditions.

Overall, my concern with this paper applies similarly to many large-scale, high-throughput works I see: The authors have done so much, and are trying to present so many different results and approaches in one paper, that it's difficult to evaluate what exactly was achieved. This paper is either a monumental achievement or a collection of trivial results, and I find it difficult to distinguish between the two possibilities.

I am tentatively willing to give the authors the benefit of the doubt, but I have specific questions and concerns that would strengthen my confidence in the work.

- "We detected no strong positive correlation" this is a strange sentence, in particular if it then lists a significant negative correlation. I think the authors need to explain first why they would expect a positive correlation.

Also, with regards to these correlations, because of the phylogenetic dependency among strains they should be calculated on phylogenetic independent contrasts. Alternatively, it would be better to not report a p value than to report an incorrect one.

- Figure 3A: The precision-recall curves don't look particularly impressive. They rapidly decay to a precision below 0.5, at which point more cases are called incorrectly than correctly.

- Figure 3B: This figure is not properly explained. First, what are the small gray dots? Second, what are the three thick gray dots labeled as shuffled strains, shuffled sets, and random sets? Each randomization should provide a distribution which then is compared to one PR-AUC value. If the true PR-AUC value is rare under reshuffling, then it is significant. This relationship is not in any way visible in the figure.

- Figure 4: Again, the precision-recall curve looks pretty bad. I grant that there's some enrichment of phenotypes towards the top end of the predicted phenotype scale, but the prediction quality is not impressive and it's not clear to me whether a trivial predictor (e.g., one only evaluating whether an amino acid has been changed towards a highly biochemically dissimilar one) could do just as well.

- Section on "Experimental validation of causal variants": It is not entirely clear to me what has been achieved here. If a mutation knocks out an essential gene then clearly that defect can be rescued via complementation. So the achievement would have to be to predict that a gene is essential only under specific conditions. Is that what was done? And if so, how? Was essentiality simply observed, by noting that strains with mutations in certain genes wouldn't grow under specific conditions? Or was it somehow inferred from some other data?

- "with prior knowledge on gene essentiality" I'm not entirely convinced that gene essentiality matters all that much. A mutation in a non-essential gene can have a substantial fitness effect, and a mutation in an essential gene that doesn't fully disable that gene's function may have only a minor fitness effect.

- It is unclear to me whether the code underlying the mutfunc database has been provided or not. The database looks nice, but I'm generally wary of any bioinformatics tool that I can't download and run on my own machine. The authors provide several pipelines (https://github.com/mgalardini/screenings, https://github.com/mgalardini/pangenome_variation, https://github.com/mgalardini/ecopredict), but none seems to perform the task of the mutfunc database.

Reviewer #2:

We have read the study entitled "Phenotype inference in an Escherichia coli strain panel" by Galardini et al. This study performs phenotypic screens for 894 E. coli strains across 214 growth conditions. Most of the strains (696) have full genome sequence available. Gene-level functional predictions were correct in up to 38% of cases. This dataset will become an important and useful resource for the systems biology and microbiology community. We would appreciate it if the authors addressed our minor comments below:

We commend the authors for putting all of their data in an easily accessible format on github with links out to all of the available assemblies and phenotype measurements. This inclusion will ensure that the resource is easy to use for the research community. However, one question remains regarding the new annotations for each strain. Are these available anywhere? It was difficult to find them either on the web or in the authors’ supplemental files.

The authors previously used this approach on yeast with a median predictive performance of 0.76 (Jelier et al. 2011), but were only able to reliably predict 38% of the conditions tested in this instance. Could the authors comment on why their performance was lower?

Why did the authors use ParSNP to compute phylogeny rather than other methods (such as MLST, whole genome alignment, etc). The authors look for a correlation between phylogeny and genotype. Could the lack of correlation be due to assumptions made when calculating the phylogeny?

Why did the authors choose to incubate most of the conditions at room temperature rather than the more common 37C? Might this significantly impact their results (i.e. would growth capabilities change if incubated at 37C?).

How do the authors define the "accessory genes/genome". Are these simply all genes that are not part of the core genome? Could the authors comment on the completeness of the genomic sequences in the data set?

Were SNPs only called for sequences in common with the reference E. coli K12 genome? Would that potentially explain some of the discrepancies between the model predictions and the experimental outcome?

Does the term "mechanistic" appropriately describe the method used? Especially when one considers disruption scores in many areas of a genome and the conditional essentiality of genes?

https://doi.org/10.7554/eLife.31035.023

Author response

Reviewer #1:

This paper does an extensive analysis and prediction of phenotypes caused by mutations in a large E. coli strain panel evaluated over several hundred conditions.

Overall, my concern with this paper applies similarly to many large-scale, high-throughput works I see: The authors have done so much, and are trying to present so many different results and approaches in one paper, that it's difficult to evaluate what exactly was achieved. This paper is either a monumental achievement or a collection of trivial results, and I find it difficult to distinguish between the two possibilities.

We can understand the reviewer’s reluctance towards large-scale projects and of course we hope to convince him/her that this is not simply a large collection of data. We are in fact trying to ask a very fundamental biological question: given all that we currently know about E. coli biology, can we predict how a specific strain of E. coli will grow in a given condition based on its genome? It is not trivial at all to be able to take a genome sequence of an E. coli strain and determine in which conditions this strain is going to grow poorly in. By asking this question we are testing the limits of our knowledge of how DNA variation affects phenotypes. Working towards these predictions implies the capacity to encapsulate in computational models all of this understanding. We upfront admit that we are not yet fully capable of making such predictions, but we also think it is very important to test these limits and to challenge ourselves and the scientific community to improve on what we have achieved in this manuscript.

In particular, we have used the knowledge on E. coli gene function as defined by the chemical genomic screens done in one particular strain, the lab E. coli K-12. We then make the assumption that a gene that is essential for growth in a given condition in the K-12 strain will also be essential for growth in the same condition in any other strain of E. coli. Based on this knowledge we then compute the effects of non-synonymous variants from genomes of different strains, using folding and sequence conservation algorithms. This allows us to predict which strains will have growth defects in specific conditions. Our predictions are significantly better than random and we can show in the complementation assays that we are in fact predicting the genes that are causing the growth defects in the right conditions. Of course, there is much room for improvement and many ways to try to extend the predictions made here as we discuss in the manuscript. For example, we have not yet taken into account mutations that could alter the expression of the genes. As we and others incorporate such additional information the predictive models will become even more accurate. At the same time, there is also great opportunity for new biology from the deficiencies in the predictions. As we mentioned in the manuscript we see cases where loss of function of a gene is known to cause a condition specific growth defect in E. coli K-12 but the same is not observed in specific strains. We and others can now try to understand why this is the case.

Finally, in order to ask this fundamental question we also needed the genomes of many strains of E. coli and their growth phenotypes across a large number of conditions as well. This large-scale data gathering effort is in itself useful as a resource, but we would like to re-emphasize that the main focus of this paper is on the fundamental biological question we are asking.

I am tentatively willing to give the authors the benefit of the doubt, but I have specific questions and concerns that would strengthen my confidence in the work.

- "We detected no strong positive correlation" this is a strange sentence, in particular if it then lists a significant negative correlation. I think the authors need to explain first why they would expect a positive correlation.

Also, with regards to these correlations, because of the phylogenetic dependency among strains they should be calculated on phylogenetic independent contrasts. Alternatively, it would be better to not report a p value than to report an incorrect one.

We agree with the reviewer that we failed to properly explain our expectations for this analysis. The message we wanted to convey was that if all genetic variation contributed equally to phenotypic variation, we would observe a strong positive correlation between these two variables, which is not the case. The overall negative correlation presented in the main text arises largely because of the highly genetically similar strains coming from the LTEE collection (Long Term Evolution Experiment). Given this confounding factor and the suggestion to use phylogenetic independent contrasts to account for the strains’ dependencies, we have changed Figure 1E (now Figure 2E). As suggested by the reviewer, we now derive the correlation between genetic and phenotypic distance using the phylogenetic independent contrasts computed along the core genome tree of the strains. We show that we did not observe a positive correlation between genetic and phenotypic distance (Pearson’s r: 0.07), confirming that not all genetic variation contributes equally to phenotypic differences. We have also updated the Materials and methods section (“Phylogenetics”) to indicate how this analysis was performed, and removed the last two panels of Figure 1—figure supplement 1.
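The contrast calculation can be illustrated with a minimal sketch of Felsenstein’s (1985) independent-contrasts method on a small bifurcating tree. This is a generic illustration, not our actual pipeline: the tree topology, branch lengths, trait values and function names below are all invented for the example.

```python
import math

def pic(node, contrasts):
    """Recursively compute Felsenstein's independent contrasts.

    A leaf is (trait_value, branch_length); an internal node is
    (left_child, right_child, branch_length).  Returns the ancestral
    trait estimate and the variance-adjusted branch length.
    """
    if len(node) == 2:                       # leaf: (value, branch_length)
        return node
    left, right, bl = node
    x1, v1 = pic(left, contrasts)
    x2, v2 = pic(right, contrasts)
    # standardized contrast between the two descendant lineages
    contrasts.append((x1 - x2) / math.sqrt(v1 + v2))
    # weighted ancestral estimate and branch-length correction
    x = (x1 / v1 + x2 / v2) / (1 / v1 + 1 / v2)
    return x, bl + v1 * v2 / (v1 + v2)

def pearson(xs, ys):
    """Plain Pearson correlation on the lists of contrasts."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy trees with identical topology and branch lengths, carrying
# hypothetical genetic (g) and phenotypic (p) trait values at the tips.
g_tree = (((1.0, 0.2), (1.1, 0.2), 0.3), ((5.0, 0.2), (5.2, 0.2), 0.3), 0.1)
p_tree = (((0.4, 0.2), (0.9, 0.2), 0.3), ((0.5, 0.2), (0.2, 0.2), 0.3), 0.1)

gc, pc = [], []
pic(g_tree, gc)
pic(p_tree, pc)
r = pearson(gc, pc)  # correlation computed on contrasts, not on raw tips
```

The key point is that the correlation is computed on the contrasts rather than the tip values, which removes the inflation caused by closely related strains such as the LTEE clones.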

- Figure 3A: The precision-recall curves don't look particularly impressive. They rapidly decay to a precision below 0.5, at which point more cases are called incorrectly than correctly.

We thank the reviewer for raising this point. While the curves displayed in Figure 3A are merely part of the schematic used to explain the prediction and scoring strategies, they do mimic some of the real curves we observed in our data, so this comment is relevant and important. We address the reviewer’s concerns about the drop in precision and the general shape of the curves in the comments about Figure 3B and Figure 4. In a nutshell, the predictions are indeed significantly better than the various randomization strategies we have proposed.

- Figure 3B: This figure is not properly explained. First, what are the small gray dots? Second, what are the three thick gray dots labeled as shuffled strains, shuffled sets, and random sets? Each randomization should provide a distribution which then is compared to one PR-AUC value. If the true PR-AUC value is rare under reshuffling, then it is significant. This relationship is not in any way visible in the figure.

We agree with the reviewer that the visualization we chose doesn’t provide sufficient information and, most importantly, hides the relationship between the distribution of the actual predictions and the three randomization strategies we adopted. We therefore decided to remove the thick gray dots, which corresponded to the median results of the randomizations (“shuffled strains”, “shuffled sets”, and “random sets”), and to keep the small grey dots, which represent the PR-AUC value of each condition separately. To better illustrate the randomizations we have added a panel to Figure 3 (panel C), showing an example condition (Clindamycin 3 μg/ml). As suggested by the reviewer, we now show the distribution of each randomization, with a red line marking the result of the actual predictor for this condition. In the figure legend we indicate the q-value for the likelihood that the actual prediction was drawn from the distribution of each randomization. Furthermore, we have added a new supplementary figure (Figure 3—figure supplement 2), which shows a similar plot for each condition. We have also updated the Results section to better explain this point. We believe these changes will help readers understand why the predictions presented in this work are relevant.
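The significance calculation can be sketched generically: an empirical one-sided p-value compares the real PR-AUC to the randomization distribution, and a Benjamini-Hochberg correction across conditions yields q-values. This is our own illustrative sketch of the general approach, not the exact code of the pipeline; the pseudocount follows the common convention for permutation tests.

```python
def empirical_p(observed, null_values):
    """One-sided empirical p-value: how often a randomization matches or
    beats the real predictor, with a +1 pseudocount so p is never zero."""
    hits = sum(1 for v in null_values if v >= observed)
    return (hits + 1) / (len(null_values) + 1)

def benjamini_hochberg(pvals):
    """Convert a list of p-values into Benjamini-Hochberg q-values (FDR)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    q = [0.0] * n
    prev = 1.0
    # walk from the largest p-value down, enforcing monotonicity
    for rank, i in zip(range(n, 0, -1), reversed(order)):
        prev = min(prev, pvals[i] * n / rank)
        q[i] = prev
    return q

# Toy example: an observed PR-AUC of 0.6 against 99 shuffled PR-AUCs
p = empirical_p(0.6, [0.1] * 98 + [0.7])
q = benjamini_hochberg([0.01, 0.02, 0.5])
```

A condition whose real PR-AUC is rarely matched by the reshuffled sets thus receives a small q-value, which is the quantity reported in the figure legend.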

- Figure 4: Again, the precision-recall curve looks pretty bad. I grant that there's some enrichment of phenotypes towards the top end of the predicted phenotype scale, but the prediction quality is not impressive and it's not clear to me whether a trivial predictor (e.g., one only evaluating whether an amino acid has been changed towards a highly biochemically dissimilar one) could do just as well.

The points raised by the reviewer are important. Indeed, the precision in the two example conditions drops at relatively low recall values. It is important to point out that the strains with growth defects in a given condition are a small fraction of the total, so the set of positives and negatives is highly unbalanced, and achieving high precision is particularly challenging in such a setting. Nevertheless, the rankings we predict are noticeably better than expected by random chance. Author response image 1 shows the comparison against the predicted score computed using random conditionally essential gene sets. In that respect, the comments about Figure 3B allowed us to make this significance more evident.

Finally, we would like to point out how the choice of showing a precision-recall curve as the sole visual indicator of our prediction accuracy may actually be undermining the value of our approach. Author response image 1 shows two additional metrics for the two example conditions presented in Figure 4A (the “bootstrap” curves representing the average performance of 100 predictions using random gene sets):

As evident from the plots in Author response image 1, the predictions made in these two conditions are significantly better than those made using random gene sets. This is especially obvious in the ROC curve, where the randomizations behave much like a random predictor (ROC AUC ≈ 0.5), as opposed to the very high ROC AUC of the “real” predictions in both conditions (~0.76). We have now updated Figure 4A to show how the predictions differ significantly from the “random sets” bootstraps. We have also added a new panel to Figure 4B to include the ROC curve.

While we recognize that our predictions are far from perfect, we believe they represent an important demonstration that it is indeed possible to combine chemical genomics data from the reference K-12 strain with mutation data to infer conditional growth defects for other strains. In particular, leveraging the information on condition-dependent essentiality is key to the significance of our predictions; if we were to use only the predicted impact of non-synonymous variants, we would not be able to deliver condition-specific predictions. We have expanded our Discussion to point out this important conclusion of our study.

The reviewer asked if the same predictions could be achieved with a “trivial” variant predictor, such as one that evaluates the effect of mutations based on amino-acid categories. Our predictions are a combination of variant effect predictions with gene function information from the K-12 reference strain. We assume that the reviewer is asking how much information is added by the interpretation of the variant effects alone, keeping the gene function information unchanged. We have tested a more trivial variant effect predictor for the impact of nonsynonymous substitutions as suggested by the reviewer, and as can be seen from the three plots in Author response image 2, its predictive power is inferior to the one we propose.

In short, we divided the 20 amino acids into 8 categories: hydrophobic (A, I, L, M, V), hydrophobic aromatic (F, W, Y), polar (N, Q, S, T), acidic (D, E), basic (R, H, K), cysteine (C), proline (P) and glycine (G). We then assigned a P(neutral) value of 0.99 (tolerated) to amino acid substitutions within a category, and 0.01 (deleterious) otherwise. All the other predictors (accessory genes and stop codons) were kept the same as in our proposed predictor, including the use of conditionally essential genes. The top two plots show the precision-recall curves for two conditions, comparing variant effect prediction approaches. The boxplots in the bottom figure summarize the results across all conditions that have at least 5% of strains with significant growth defects. We believe this test demonstrates that there is added value in using variant effect predictors that take conservation and structural information into account, as we are able to achieve higher precision. We have decided not to include this result in the main text, as introducing an additional predictor might confuse the reader.
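The category-based baseline described above can be written down directly. This is a minimal sketch (the function and variable names are ours, not part of the published pipeline):

```python
# The eight side-chain categories used for the baseline predictor
CATEGORIES = [
    set("AILMV"),   # hydrophobic
    set("FWY"),     # hydrophobic (aromatic)
    set("NQST"),    # polar
    set("DE"),      # acidic
    set("RHK"),     # basic
    set("C"),       # cysteine
    set("P"),       # proline
    set("G"),       # glycine
]

def p_neutral(wt, mut):
    """P(neutral) of a substitution under the category-based baseline:
    0.99 (tolerated) if both residues share a category, 0.01 otherwise."""
    same = any(wt in cat and mut in cat for cat in CATEGORIES)
    return 0.99 if same else 0.01
```

For instance, an A-to-V change stays within the hydrophobic category and scores 0.99, while a D-to-K change crosses from acidic to basic and scores 0.01.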

Finally, as indicated in the Discussion, we of course concede that these predictions have room for improvement, and suggest future directions on how to do so.

- Section on "Experimental validation of causal variants": It is not entirely clear to me what has been achieved here. If a mutation knocks out an essential gene then clearly that defect can be rescued via complementation. So the achievement would have to be to predict that a gene is essential only under specific conditions. Is that what was done? And if so, how? Was essentiality simply observed, by noting that strains with mutations in certain genes wouldn't grow under specific conditions? Or was it somehow inferred from some other data?

Looking at this and the subsequent comment by the reviewer, we suspect that we may have failed to properly explain the approach we have taken and the terminology we are using. We have therefore updated the Introduction, Results and Discussion sections to make our approach easier to understand. In short, we have used the information on conditional gene essentiality derived from pre-existing chemical genomics screens of the E. coli lab strain K-12. We therefore know which genes are essential for growth in K-12 in a given condition, and we can focus on interpreting genetic variation in those genes only, as mutations that impair their function in another E. coli strain are more likely to influence growth. This assumes that a gene that is essential for growth in a condition in the E. coli strain K-12 will be equally important in that same condition in any other strain of E. coli. As we describe in the Discussion, one potential reason why we have achieved only moderate success could well be that genes essential for growth under specific conditions in the K-12 strain may not be essential for growth in those same conditions in other strains. While we have not proved this in this study, it would be an extremely important finding that would highlight the plasticity of gene function.

Returning to the experimental validation of our model, based on the mutations that we observe in the genome of a given strain we can predict: 1) whether a mutation will severely impact the function of a given gene; and 2) whether that gene is known to be essential for growth in a specific condition (in K-12). We can then predict that a strain with a damaging mutation in a gene essential for growth in a certain condition should also grow poorly in that condition. Purely from the genome sequence of a strain we can thus rank all mutated genes by how likely each is to cause a growth defect in a specific condition. These sets of complementation experiments show that we can in fact predict why a given strain of E. coli grows poorly in a specific condition.
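The two-step logic can be sketched as follows. This is a toy illustration of the general scheme (independent variants, a gene-level loss-of-function probability, and an at-least-one-essential-gene-disrupted strain score), not the exact scoring function used in the manuscript; all names and numbers are hypothetical.

```python
from functools import reduce

def p_gene_disrupted(p_neutral_values):
    """Probability a gene has lost function, assuming independent variants:
    1 minus the product of each variant's P(neutral)."""
    prod = reduce(lambda a, b: a * b, p_neutral_values, 1.0)
    return 1.0 - prod

def strain_condition_score(strain_variants, essential_genes):
    """Score a strain in a condition as the probability that at least one
    gene conditionally essential in K-12 is disrupted in that strain.

    strain_variants: dict mapping gene -> list of per-variant P(neutral)
    essential_genes: genes essential in this condition (from K-12 screens)
    """
    p_all_intact = 1.0
    for gene in essential_genes:
        p_dis = p_gene_disrupted(strain_variants.get(gene, []))
        p_all_intact *= (1.0 - p_dis)
    return 1.0 - p_all_intact

# Hypothetical strain: two variants in geneA, one in geneB
sv = {"geneA": [0.5, 0.8], "geneB": [0.9]}
score = strain_condition_score(sv, {"geneA", "geneB"})
```

Ranking strains (or, within a strain, the mutated essential genes) by such a score is what allows a growth defect to be traced back to a candidate causal gene that can then be tested by complementation.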

This is the reason why we believe we have demonstrated the long-term potential of our approach to pave the way for precision genetic intervention strategies. Our approach is specific and, to a certain extent, accurate. We believe that Figure 5C is an appropriate demonstration of this statement; in the experiments we have carried out we observe a clear difference between those complementations that were predicted to restore growth in specific conditions versus those that were not predicted to have an impact.

- "with prior knowledge on gene essentiality" I'm not entirely convinced that gene essentiality matters all that much. A mutation in a non-essential gene can have a substantial fitness effect, and a mutation in an essential gene that doesn't fully disable that gene's function may have only a minor fitness effect.

We believe that we have addressed this concern in the reply to the previous question. We indeed believe that conditional essentiality (i.e. being essential in a given condition) matters, and we believe we have demonstrated this by showing how our predictions compare to the randomizations with shuffled and random gene sets (Figure 3C, 4A and B and Figure 3—figure supplement 2). Again, we thank the reviewer for pointing out that we had failed to make these points clear. We hope that our revised text improves how we explain this key concept.

- It is unclear to me whether the code underlying the mutfunc database has been provided or not. The database looks nice, but I'm generally wary of any bioinformatics tool that I can't download and run on my own machine. The authors provide several pipelines (https://github.com/mgalardini/screenings, https://github.com/mgalardini/pangenome_variation, https://github.com/mgalardini/ecopredict), but none seems to perform the task of the mutfunc database.

The mutfunc database contains all the precomputed non-synonymous substitutions and their predicted impact using a variety of computational approaches. As described in the manuscript, it includes predictions made with SIFT and FoldX, which are available as standalone tools. A lot of work on our part went into running the calculations, but the database itself is a front-end for all of the precomputed scores that we obtained after many days of total compute time. No actual computation is done by the mutfunc server, as all possible variants along the genome are already pre-computed. For these reasons there is no standalone tool as such that we can create. As the reviewer may imagine, the pre-computed consequences of all possible variants along the genomes of the 3 species currently in mutfunc are also not something that can easily be provided as a standalone tool. The webservice we provide in mutfunc is not limited to searching small numbers of variants: full genome variant sets can be queried with a very fast response time, since the server is essentially looking up the pre-computed scores, and there is no limit on the output size for large numbers of queries. We have added a few more details in the Materials and methods section (“Computation of all possible mutations and their effect”) regarding how we ran SIFT and FoldX. If the reviewer found specific limitations in the webserver, we can try to address them. We did try very hard to make all of the work associated with this publication available even before publication. To further facilitate the re-use of our results, we have now added the SIFT and FoldX scores for all the nonsynonymous variants observed in the strain collection to the Ecoref website, in the download section (https://evocellnet.github.io/ecoref/download/).

Reviewer 2:

[…] We commend the authors for putting all of their data in easily accessible format on github with links out to all of the available assemblies and phenotype measurements. This inclusion will ensure that the resource is easy to use for the research community. However, one question remains regarding the new annotations for each strain. Are these available anywhere? It was difficult to find them both on the web or in the authors’ supplemental files.

We also believe that making this resource easy to use for the community is crucial to our manuscript and long-term vision. We recognize that the gene annotations are important for the use of this resource, and we thank the reviewer for raising this point. We have released the annotated genomes on the Ecoref website (“strains” section). The annotations for each strain are available in the standard and popular GFF3 format.

The authors previously used this approach on yeast with a median predictive performance of 0.76 (Jelier et al. 2011), but were only able to reliably predict 38% of the conditions tested in this instance. Could the authors comment on why their performance was lower?

If we understood the reviewer’s comment correctly, there might have been a misunderstanding: we are not the authors of the 2011 yeast paper (Jelier et al., 2011). As we highlighted in the Introduction, Jelier and coauthors applied their predictive approach to a much smaller cohort of 15 strains across 20 conditions, most of which are related to metabolism. We therefore postulate that one potential reason for our lower performance compared to the previous study is the larger variety in both the type and the strength of the conditions in our screen. In fact, some chemicals were tested at multiple concentrations (e.g. amoxicillin), some of which might be suboptimal for eliciting a strong response from the tested strains, making prediction potentially more difficult. Another reason may be the larger genetic diversity in bacteria in general, and in our library in particular. We have added a short sentence in the Discussion to explain the potential causes of these differences.

Why did the authors use ParSNP to compute phylogeny rather than other methods (such as MLST, whole genome alignment, etc). The authors look for a correlation between phylogeny and genotype. Could the lack of correlation be due to assumptions made when calculating the phylogeny?

We believe that we might have failed to properly explain the procedure we followed to obtain the strains’ phylogeny. As the reviewer suggests, a whole genome alignment is one of the best ways to obtain a phylogeny. In fact, this is exactly what ParSNP does: it aligns the input genomes and then feeds this alignment to FastTree (Price et al., 2010), which computes the final phylogeny. We have updated the Materials and methods section to explicitly state how the method works.

Why did the authors choose to incubate the most of the conditions at room temperature rather than the more common 37C? Might this significantly impact their results (i.e. would growth capabilities change if incubated at 37C?).

We did the strain collection phenotyping together with that of the KEIO gene-knockout collection; the latter is part of a manuscript currently in preparation (Herrera-Dominguez et al.). We opted for room temperature because we could then simultaneously measure two readouts: growth and biofilm formation. E. coli forms biofilms only at 30C or below. We have not yet analyzed the biofilm readout, but will do so in the future.

How do the authors define the "accessory genes/genome". Are these simply all genes that are not part of the core genome? Could the authors comment on the completeness of the genomic sequences in the data set?

As suggested by the reviewer, we indeed consider those genes that are not present in the “core genome” (shared by all strains) as part of the so-called “accessory genome”. We are in fact following the definitions set up in the seminal pangenomics paper (Medini et al., 2005).

Regarding the completeness of the genome sequences, we believe this is a fair point of discussion whenever draft genomes are involved. However, we are confident that the draft genomes we have collected/generated as part of this manuscript are of sufficient quality for nearly all genes to be correctly annotated. As an example, we have compared the pangenome size rarefaction curves (computed using Roary, Page et al., 2015) between a cohort of 376 complete E. coli genomes and the genomes from our collection. In short, we downloaded the complete genomes from NCBI’s RefSeq using the ncbi-genome-download script and annotated them using Prokka. We then computed their pangenome using Roary, with the same options that were used for the strains presented in our study, and plotted the pangenome size rarefaction curves based on 10 shufflings of the strains’ order (Author response image 3).

The rarefaction curves are indeed very similar to each other; as expected given its completeness, the complete-genome set has a slightly higher total number of genes. The difference with the strains presented in our study is, however, negligible. Furthermore, when considering 376 genomes, the core genome of the RefSeq strains comprises 2889 genes, versus 2815 for the genomes presented here. We have updated the Materials and methods section to report on the completeness of the presented genomes.
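The rarefaction procedure itself is straightforward to sketch. The function below is a generic illustration under our own assumptions (gene presence represented as sets of gene-family names per genome), not the Roary implementation:

```python
import random

def pangenome_rarefaction(presence, n_shuffles=10, seed=0):
    """Pangenome size rarefaction curve: for each number of genomes k,
    the mean count of distinct gene families seen among the first k
    genomes, averaged over random orderings of the genomes.

    presence: list of sets, one set of gene-family names per genome.
    """
    rng = random.Random(seed)
    n = len(presence)
    totals = [0.0] * n
    for _ in range(n_shuffles):
        order = presence[:]
        rng.shuffle(order)
        seen = set()
        for k, genome in enumerate(order):
            seen |= genome        # cumulative pangenome after k+1 genomes
            totals[k] += len(seen)
    return [t / n_shuffles for t in totals]

# Toy example: three genomes sharing some gene families
curve = pangenome_rarefaction([{"a", "b"}, {"b", "c"}, {"a", "c", "d"}],
                              n_shuffles=5)
```

Plotting such curves for two genome sets makes their pangenome sizes directly comparable: if a draft-genome set tracks a complete-genome set closely, few genes are being missed by the assemblies.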

Were SNPs only called for sequences in common with the reference E. coli K12 genome? Would that potentially explain some of the discrepancies between the model predictions and the experimental outcome?

As the conditionally essential genes are derived from the reference strain (E. coli K12), we decided to call genetic variants with respect to this strain. As we have addressed in the discussion, we also believe that taking into account the impact of non-synonymous variants and the presence/absence of genes not present in the reference strain could improve our predictions. However, at the moment we cannot reliably assign a function to accessory genes in order to extend the predictive models in this way.

Does the term "mechanistic" appropriately describe the method used? Especially when one considers disruption scores in many areas of a genome and the conditional essentiality of genes?

We decided to adopt the term “mechanistic” as a contrast to more classical GWAS-like studies, which are based on statistical inference. Since the impact of nonsynonymous variants can be related to a protein’s functional constraints (i.e. SIFT) or structural stability (i.e. FoldX), we believe that the term “mechanistic” is appropriate. We have however added a short sentence in the Introduction explaining what we intend when using this term.

https://doi.org/10.7554/eLife.31035.024

Article and author information

Author details

  1. Marco Galardini

    European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, United Kingdom
    Contribution
    Conceptualization, Formal analysis, Investigation, Methodology, Writing—original draft, Project administration, Writing—review and editing
    Competing interests
    No competing interests declared
    ORCID: 0000-0003-2018-8242
  2. Alexandra Koumoutsi

    Genome Biology Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
    Contribution
    Investigation
    Competing interests
    No competing interests declared
    ORCID: 0000-0001-8368-4193
  3. Lucia Herrera-Dominguez

    Genome Biology Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
    Contribution
    Investigation
    Competing interests
    No competing interests declared
  4. Juan Antonio Cordero Varela

    European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, United Kingdom
    Contribution
    Investigation
    Competing interests
    No competing interests declared
    ORCID: 0000-0002-7373-5433
  5. Anja Telzerow

    Genome Biology Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
    Contribution
    Investigation
    Competing interests
    No competing interests declared
  6. Omar Wagih

    European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, United Kingdom
    Contribution
    Investigation
    Competing interests
    No competing interests declared
  7. Morgane Wartel

    Genome Biology Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
    Contribution
    Investigation
    Competing interests
    No competing interests declared
  8. Olivier Clermont

    1. INSERM, IAME, UMR1137, Paris, France
    2. Université Paris Diderot, Paris, France
    Contribution
    Resources, Data curation, Validation
    Competing interests
    No competing interests declared
  9. Erick Denamur

    1. INSERM, IAME, UMR1137, Paris, France
    2. Université Paris Diderot, Paris, France
    3. APHP, Hôpitaux Universitaires Paris Nord Val-de-Seine, Paris, France
    Contribution
    Resources, Validation, Writing—review and editing
    Competing interests
    No competing interests declared
  10. Athanasios Typas

    Genome Biology Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
    Contribution
    Conceptualization, Supervision, Funding acquisition, Writing—original draft, Writing—review and editing
    For correspondence
    typas@embl.de
    Competing interests
    No competing interests declared
  11. Pedro Beltrao

    European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, United Kingdom
    Contribution
    Conceptualization, Supervision, Writing—original draft, Project administration, Writing—review and editing
    For correspondence
    pbeltrao@ebi.ac.uk
    Competing interests
    No competing interests declared
    ORCID: 0000-0002-2724-7703

Funding

Alexander von Humboldt-Stiftung (Sofja Kovaleskaja Award)

  • Athanasios Typas

Fondation pour la Recherche Médicale (Equipe FRM 2016, DEQ20161136698)

  • Erick Denamur

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

We are particularly grateful to the various people providing us with many of the strains of this E. coli genetic reference panel, specifically (in alphabetical order): Alfredo G Torres, Catharina Svanborg, David Clarke, Erick Denamur, Ewa Bok and Pawel Pusz, Isabel Gordo and Lilia Perfeito, Jorg Weinreich and Peter Schierack, KC Huang, Lisa Nolan, Mark Goulian, Mathew Upton, Olin Silander, Richard Lenski, Scott Hultgren and Wanderley Dias da Silveira. We thank Amanda Miguel for helping in the phenotypic screen. We also thank the EMBL Gene Core, and especially Rajna Hercog and Vladimir Benes for the support in genome sequencing. We thank Ewan Birney, Oliver Stegle, KC Huang and Zam Iqbal for critical reading of the manuscript. This work was partially supported by the Sofja Kovaleskaja Award of the Alexander von Humboldt Foundation to ATy and a grant from the Fondation pour la Recherche Médicale (Equipe FRM 2016, DEQ20161136698) to ED.

Reviewing Editor

  1. Naama Barkai, Reviewing Editor, Weizmann Institute of Science, Israel

Publication history

  1. Received: August 4, 2017
  2. Accepted: December 13, 2017
  3. Version of Record published: December 27, 2017 (version 1)
  4. Version of Record updated: January 10, 2018 (version 2)

Copyright

© 2017, Galardini et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
