Synthetic Biology: Minimal cells, maximal knowledge
If we could map and understand every single molecular process in a cell, we would have a better grasp of the fundamental principles of life. We could ultimately use this knowledge to design and create artificial organisms. An obvious way to start this endeavor is to study minimal cells, natural or synthetic organisms that contain only the bare minimum of genetic information needed to survive. By building and studying these very simplified cells – so simple they have been described as the ‘hydrogen atoms of biology’ (Morowitz, 1984) – we may be able to dissect all the molecular mechanisms required to sustain cellular life.
The elucidation of the DNA double helix in 1953, and the subsequent cracking of the genetic code, made it possible to link molecular processes to DNA sequences (Figure 1). In turn, whole genome sequencing has revealed a collection of molecular roles encoded in the genomes of a great number of organisms, starting in 1995 with the first complete bacterial genomes (Fleischmann et al., 1995; Fraser et al., 1995), and then expanding thanks to next-generation sequencing methods (McGuire et al., 2008; Spencer, 2008). Yet, this has also shown that we do not know, or can only guess, the roles of many genes that are essential to life.
In 2008, as large-scale sequencing projects were initiated, a group of scientists at the J. Craig Venter Institute (JCVI) artificially recreated the genome of a bacterium. The team made DNA fragments in the laboratory, and then used a combination of chemistry and biology techniques to assemble the pieces ‘in the right order’, using the genetic information of the bacterium Mycoplasma genitalium as a template (Gibson et al., 2008). This marked a significant branching point in the history of biology: while the previous decades had focused on acquiring as much knowledge as possible about natural organisms, creating a genome from scratch in a laboratory demonstrated the potential to design synthetic cells (Figure 1). This shifted synthetic biology, the field in which researchers try to build biological entities, towards an engineering discipline that could work at the scale of a genome. The same team then went on to build Mycoplasma mycoides JCVI-syn1.0, the first living cell with an entirely artificial chromosome (Gibson et al., 2010). In both cases, the artificial genetic information faithfully replicated that found in the wild-type bacteria.
The next goal was to piece together an artificial genome containing only those genes that are absolutely necessary for life and growth. In 2016, after years of design and testing, the genetic information in JCVI-syn1.0 was whittled down to produce M. mycoides JCVI-syn3.0, which harbors the smallest genome of any free-living organism (Hutchison et al., 2016). Notably, JCVI-syn3.0 was originally reported to contain 149 genes whose roles were unknown. Since then, this number has shrunk to 91, and reducing it further remains the next challenge in synthetic biology (Danchin and Fang, 2016).
Now, in eLife, Zan Luthey-Schulten and colleagues at the JCVI, the University of Illinois at Urbana-Champaign, the University of California at San Diego, and the University of Florida – including Marian Breuer as first author – report the first computational or 'in silico' model for a synthetic minimal organism (Breuer et al., 2019). The team reconstructed the complete set of chemical reactions that take place in the organism (that is, its metabolism). This effort bridges the gap between DNA sequences and molecular processes at the level of an entire biological system.
Breuer et al. performed their modeling work on M. mycoides JCVI-syn3.0A, a more robust variant of JCVI-syn3.0 that contains 11 additional genes. This choice was necessary because genome reduction involves a large number of genetic modifications, which tend to produce weaker cells that are harder to grow under laboratory conditions (Choe et al., 2019). To create their computational model, the team drew on the biochemical knowledge already available for the parent strain JCVI-syn1.0 and identified the remaining candidate genes that participate in metabolism in JCVI-syn3.0A. These genes were then associated with cellular chemical reactions and, step by step, the entire metabolic network was modeled. This approach consolidates the extensive knowledge of the metabolism of JCVI-syn3.0A into a single, highly valuable community resource that can help interrogate missing roles in the metabolic network and integrate experimental data.
Once a genome-scale model was obtained, it could be used to run computer simulations of different cellular phenotypes. Briefly, the in silico model represents the optimal metabolic state of the cell as an optimization problem to which constraints are applied. For instance, metabolic models are constrained by the balance of reactants and products in each chemical reaction (stoichiometry) and by the conversion rates of the metabolites (flux bounds). Breuer et al. simulated the growth phenotype of JCVI-syn3.0A by optimizing for the production of cellular biomass, and then compared the predictions with experimental data, such as results from quantitative proteomics studies. In particular, they compared the genes that the model deemed essential with those highlighted by systematically mutating the genome of JCVI-syn3.0A. This comparison revealed 30 genes that are required for survival but whose role is unknown. Understanding what these genes do is the next priority in the effort to characterize all the molecular processes in a cell.
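The kind of constraint-based simulation described above can be illustrated with a toy flux balance analysis calculation. The three-reaction network below is a made-up example (it is not the Breuer et al. model): growth, represented by a biomass flux, is maximized as a linear program subject to steady-state stoichiometry and flux bounds.

```python
# Toy flux balance analysis (FBA) sketch. Hypothetical three-reaction
# network, purely for illustration of the optimization framework:
#   R1: nutrient uptake -> A,  R2: A -> B,  R3: B -> biomass
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S: rows = metabolites (A, B), columns = reactions.
S = np.array([
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
])

# Flux bounds: nutrient uptake is capped at 10; internal fluxes are ample.
bounds = [(0, 10), (0, 1000), (0, 1000)]

# Steady state requires S @ v = 0 (no metabolite accumulates).
# Maximize the biomass flux v3; linprog minimizes, so negate the objective.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds,
              method="highs")
biomass_flux = res.x[2]
print(f"optimal biomass flux: {biomass_flux:.1f}")  # limited by uptake bound
```

In this toy case the optimum is set entirely by the uptake bound, which mirrors how, in genome-scale models, the predicted growth rate depends on the accuracy of the constraints (such as the medium composition and biomass recipe discussed below).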
Overall, the model and the experimental data generally agreed on which genes are essential; yet a perfect match was not achieved, as is also the case when similar computational models are applied to natural organisms. Still, one would expect that if this standard were within reach, it would be achieved first for minimal cells. Improving the quality of the predictions requires more accurate constraints, which in turn demand additional information. For example, a chemically defined medium containing only the nutrients necessary for JCVI-syn3.0A should be developed. It would also prove useful to have a precise biomass composition, that is, a detailed report of the proportions of the major molecules and metabolites in the cell. Finally, several biochemical features, such as isozymes (enzymes with different structures that catalyze the same reaction) or promiscuous enzymes (enzymes that can participate in many reactions), would need to be carefully investigated.
Such constraint-based modeling may prove key to generating working genomes from square one, and in this regard, the model built by Breuer et al. is the first of many steps toward perfectly mirroring a synthetic cell in silico. Next, the simulation could be expanded beyond metabolism to include other sets of biological processes, such as the gene expression machinery. This would help identify the key constraints and trade-offs that cells must negotiate in the struggle for life. In turn, these constraints could become the framework needed to design increasingly complex artificial organisms, much as the hydrogen atom paved the way to understanding the behavior of more complex elements.
References
Choe et al. (2019) Adaptive laboratory evolution of a genome-reduced Escherichia coli. Nature Communications 10:935. https://doi.org/10.1038/s41467-019-08888-6
Danchin and Fang (2016) Unknown unknowns: essential genes in quest for function. Microbial Biotechnology 9:530–540. https://doi.org/10.1111/1751-7915.12384
Morowitz (1984) Special guest lecture: the completeness of molecular biology. Israel Journal of Medical Sciences 2.
Article and author information
- Version of Record published: March 12, 2019 (version 1)
© 2019, Lachance et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.