1. Computational and Systems Biology
  2. Neuroscience

Deep Learning: Branching into brains

  1. Adam Shai
  2. Matthew Evan Larkum (corresponding author)
  1. Stanford University, United States
  2. Humboldt University, Germany
Cite this article as: eLife 2017;6:e33066 doi: 10.7554/eLife.33066


What can artificial intelligence learn from neuroscience, and vice versa?

Main text

Deep learning is a subfield of machine learning that focuses on training artificial systems to find useful representations of inputs. Recent advances in deep learning have propelled the once arcane field of artificial neural networks into mainstream technology (LeCun et al., 2015). Deep neural networks now regularly outperform humans on difficult problems like face recognition and games such as Go (He et al., 2015; Silver et al., 2017). Neuroscientists have also taken an interest in deep learning because, initially, there seemed to be telling analogies between deep networks and the human brain. Nevertheless, there is a growing impression that the field might be approaching a new ‘wall’ and that deep networks and the brain are intrinsically different.

Chief among these differences is the widely held belief that backpropagation, the learning algorithm at the heart of modern artificial neural networks, is biologically implausible. This issue is so central to current thinking about the relationship between artificial and real brains that it has its own name: the credit assignment problem. The error in the output of a neural network (that is, the difference between the output and the 'correct' answer) can be reported or 'backpropagated' to any connection in the network, no matter where it is, to teach the network how to refine the output. But for a biological brain, neurons only receive information from the neurons they are connected to, making credit assignment a real problem. How does the brain blindly adjust the strength of the connections between neurons that are far removed from the output of the network? In the absence of a solution, we may be forced to conclude that deep learning and brains are incompatible after all.
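To see the problem concretely, consider the following minimal sketch of backpropagation in a toy two-layer network (Python with NumPy; the network size, task and variable names are ours, purely for illustration). The line to notice is the one computing the hidden layer's error: it reuses the transpose of the downstream weight matrix W2, which is exactly the non-local information a neuron buried deep in a biological network has no physical access to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: 4 inputs -> 5 hidden units -> 2 outputs.
W1 = rng.normal(scale=0.5, size=(5, 4))
W2 = rng.normal(scale=0.5, size=(2, 5))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=4)          # sensory input
target = np.array([1.0, 0.0])   # the 'correct' answer

lr = 0.1
for step in range(500):
    # Forward pass.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)

    # Error at the output layer.
    delta_out = (y - target) * y * (1 - y)

    # Credit assignment: the hidden layer's error is computed from the
    # downstream weights W2. A real neuron has no way to read the
    # strengths of synapses elsewhere in the network, which is why this
    # step is considered biologically implausible.
    delta_hidden = (W2.T @ delta_out) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hidden, x)
```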

Now, in eLife, Jordan Guerguiev, Timothy Lillicrap and Blake Richards propose a biologically inspired solution to the credit assignment problem (Guerguiev et al., 2017). Central to their model is the structure of the pyramidal neuron, which is the most prevalent cell type in the cortex (the outer layer of the brain). Pyramidal neurons have been a source of aesthetic pleasure and interesting research questions for neuroscientists for decades. Each neuron is shaped like a tree with a trunk reaching up and dividing into branches near the surface of the brain as if extending toward a source of energy or information. Could it be that, while most cells of the body have relatively simple shapes, evolution has seen to it that cortical neurons are so intricately shaped as to appear impractical?

Guerguiev et al. – who are based at the University of Toronto, the Canadian Institute for Advanced Research, and DeepMind – report that this impractical shape has an advantage: the long branched structure means that error signals at one end of the neuron and sensory input at the other end are kept separate from each other. These sources of information can then be brought together at the right moment in order to find the best solution to a problem.

As Guerguiev et al. note, many facts about real neurons and the structure of the cortex turn out to be just right to find optimal solutions to problems. For instance, the bottoms of cortical neurons are located just where they need to be to receive signals about sensory input, while the tops of these neurons are well placed to receive feedback error signals (Cauller, 1995; Larkum, 2013). The key to this design principle seems to be to keep these distinct information streams largely independent. At the same time, ion channels under the control of a host of other nearby neurons process and gate the transfer of information within the neuron.

Taking inspiration from these facts, Guerguiev et al. implement a deep network whose units have distinct compartments, just like real neurons, that can separate sensory input from feedback error signals. These units have all the information they need to nudge the network toward the desired output. Guerguiev et al. prove formally that this approach is mathematically sound. Moreover, their new, biologically plausible deep network performs well on a task of identifying handwritten digits, and does so by creating what are referred to as hierarchical representations: responses that grow increasingly complex from one layer of the network to the next, a phenomenon commonly found in more traditional deep learning models and in the sensory cortices of biological brains.
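As a rough illustration of the flavor of this solution (a hedged sketch under our own simplifications, not the authors' actual model, which uses far richer compartmental dynamics), the toy network above can be rewired so that the error signal reaches each hidden unit through a separate 'apical' pathway with fixed random feedback weights B, in the spirit of the random-feedback idea the model builds on, instead of through the transposed forward weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy network, but each hidden unit now has two compartments:
# a 'basal' compartment driven by sensory input through W1, and an
# 'apical' compartment driven by feedback through fixed random weights B.
W1 = rng.normal(scale=0.5, size=(5, 4))   # learned feedforward weights
W2 = rng.normal(scale=0.5, size=(2, 5))   # learned output weights
B = rng.normal(scale=0.5, size=(5, 2))    # fixed random feedback weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=4)
target = np.array([1.0, 0.0])

lr = 0.1
for step in range(500):
    basal = sigmoid(W1 @ x)     # basal compartment: sensory drive
    y = sigmoid(W2 @ basal)     # network output

    delta_out = (y - target) * y * (1 - y)

    # Apical compartment: feedback arrives over B, a pathway physically
    # delivered to the unit, so no knowledge of W2 is required.
    apical = B @ delta_out
    delta_hidden = apical * basal * (1 - basal)

    W2 -= lr * np.outer(delta_out, basal)
    W1 -= lr * np.outer(delta_hidden, x)
```

Perhaps surprisingly, learning of this kind can still succeed: over training, the forward weights tend to align with the fixed feedback pathway, so the apical signal remains a useful stand-in for the true error gradient.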

Doubtless, there will be more twists and turns to this story as more biological details are incorporated into the model. For instance, the brain also faces a time-based credit assignment problem (Friedrich et al., 2011; Gütig, 2016). Guerguiev et al. admit that this network does not outperform non-biologically derived deep networks – yet. Nevertheless, the model they present paves the way for future work that links biological networks to machine learning. The hope is that this can be a two-way process, in which insights from the brain can be used to improve artificial intelligence, and insights from artificial intelligence can be used to reveal how the brain operates.


References

He K, Zhang X, Ren S, Sun J. 2015. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. Proceedings of the IEEE International Conference on Computer Vision. pp. 1026–1034.

Article and author information

Author details

  1. Adam Shai

    Adam Shai is in the Department of Biology, Stanford University, Stanford, United States

    Competing interests: No competing interests declared
    ORCID iD: 0000-0003-1833-3906
  2. Matthew Evan Larkum

    Matthew Evan Larkum is at the NeuroCure Cluster of Excellence, Humboldt University of Berlin, Berlin, Germany

    Competing interests: No competing interests declared
    ORCID iD: 0000-0001-9799-2656

Publication history

  1. Version of Record published: December 5, 2017 (version 1)


© 2017, Shai et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.



