Active contraction of microtubule networks
Many cellular processes are driven by cytoskeletal assemblies. It remains unclear how cytoskeletal filaments and motor proteins organize into cellular-scale structures and how the molecular properties of cytoskeletal components affect the large-scale behaviors of these systems. Here we investigate the self-organization of stabilized microtubules in Xenopus oocyte extracts and find that they can form macroscopic networks that spontaneously contract. We propose that these contractions are driven by the clustering of microtubule minus ends by dynein. Based on this idea, we construct an active fluid theory of network contractions that predicts a dependence of the contraction timescale on initial network geometry, the development of density inhomogeneities during contraction, a constant final network density, and a strong influence of dynein inhibition on the rate of contraction, all in quantitative agreement with experiments. These results demonstrate that motor-driven clustering of filament ends is a generic mechanism leading to contraction.
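The abstract's predictions — a contraction timescale set by initial network geometry and a constant final density — can be illustrated with a deliberately simplified toy model. The sketch below is not the paper's full active fluid theory: it assumes a 1D network slab whose width relaxes exponentially toward a final width fixed by mass conservation at a constant final density, with a relaxation timescale proportional to the initial width. The function name, parameter values, and the exponential form are all illustrative assumptions.

```python
import numpy as np

def contract(W0, rho0, rho_f=2.0, tau_coeff=10.0, t_end=200.0, n=2001):
    """Toy 1D contraction of a network slab (illustrative, not the paper's model).

    Assumptions:
    - mass conservation fixes the final width: rho0 * W0 = rho_f * W_f,
      so the final density rho_f is the same regardless of initial geometry;
    - the timescale tau is proportional to the initial width W0
      (larger networks contract more slowly);
    - the width relaxes exponentially from W0 toward W_f.
    """
    W_f = rho0 * W0 / rho_f            # final width set by constant final density
    tau = tau_coeff * W0               # contraction timescale grows with initial size
    t = np.linspace(0.0, t_end, n)
    W = W_f + (W0 - W_f) * np.exp(-t / tau)
    return t, W

t, W = contract(W0=1.0, rho0=1.0)
final_density = 1.0 * 1.0 / W[-1]      # approaches rho_f = 2.0 at long times
```

In this caricature, doubling the initial width doubles the contraction timescale but leaves the final density unchanged — the qualitative behavior the abstract reports for the real networks.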
Article and author information
Animal experimentation: All animals were handled according to approved institutional animal care and use committee (IACUC) protocols (#28-18) of Harvard University.
- Anna Akhmanova, Utrecht University, Netherlands
- Received: August 18, 2015
- Accepted: December 20, 2015
- Accepted Manuscript published: December 23, 2015 (version 1)
- Version of Record published: February 8, 2016 (version 2)
© 2015, Foster et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.