Computational Neuroscience: A faster way to model neuronal circuitry

Artificial neural networks could pave the way for efficiently simulating large-scale models of neuronal networks in the nervous system.
  1. Andrew P Davison (corresponding author)
  2. Shailesh Appukuttan
  Institut des Neurosciences Paris-Saclay, Université Paris-Saclay, CNRS, France

Computational modelling and simulation are widely used to help understand the brain. To represent the billions of neurons and trillions of synapses that make up our nervous system, models express electrical and chemical activity mathematically, using equations that they solve with computational methods.

Coarse-grained models of the brain – where each equation represents the collective activity of hundreds of thousands or millions of neurons – have been valuable in helping us understand the coordination of activity across the whole brain (Sanz Leon et al., 2013). The equations from these models can be solved using a normal computer that any researcher might have on their desk. But if we start to investigate how individual neurons and synapses interact to give rise to the collective activity of the brain, the number of equations to be solved becomes enormous. In this case, even powerful supercomputers running flat out for many hours can only simulate the activity of a few cubic millimetres of brain for a few seconds (Billeh et al., 2020; Markram et al., 2015).

Now, in eLife, Viktor Oláh, Nigel Pedersen and Matthew Rowan from the Emory University School of Medicine report on a promising new technique that relies on machine learning tools to greatly accelerate simulations of networks of biologically realistic neurons, without the need for supercomputers (Oláh et al., 2022).

Machine learning approaches have become ubiquitous in recent years, whether in self-driving cars, computer-generated art or the computers that have beaten grandmasters at chess and Go. One of the most widely used tools for machine learning is the artificial neural network, or ANN.

First developed around the middle of the 20th century, ANNs are based on a highly simplified model of how real neurons work (McCulloch and Pitts, 1943; Rosenblatt, 1958). However, it was only in the early 2000s that their use really took off, due to a combination of increased computing power and theoretical advances that allowed ‘deep learning’ (which involves training ANNs with many layers of artificial neurons; reviewed in Schmidhuber, 2015). Each layer in an ANN takes the data from the previous layer as an input, transforms it and feeds it into the next layer, allowing the ANN to perform complex computations (Figure 1).
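As a concrete, deliberately tiny illustration of this layer-by-layer transformation, the sketch below passes a made-up input vector through one hidden layer and an output layer using plain NumPy. The feature names, layer sizes and random weights are illustrative assumptions, not values from any trained model.

```python
# Minimal feedforward sketch (NumPy only; all values are illustrative).
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, weights, biases):
    """One layer: a weighted sum of the previous layer's output, then a nonlinearity."""
    return np.tanh(x @ weights + biases)

# Toy network: 3 inputs (e.g. Vm, excitatory and inhibitory drive),
# one hidden layer of 8 units, one output (a predicted Vm).
w1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

x = np.array([-65.0, 0.2, 0.1])        # made-up input features
hidden = dense_layer(x, w1, b1)        # the hidden layer transforms the input...
output = hidden @ w2 + b2              # ...and the output layer reads out a prediction
print(output)
```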

Figure 1. Illustration of various types of artificial neural networks (ANNs) and their associated components.

(A) A basic ANN consists of an input layer (red circles), one or more hidden layers (peach circles), and an output layer (blue circle). In the case of neuronal modelling, the input could be features such as the membrane potential (Vm), and the excitatory (exc) and inhibitory (inh) synaptic inputs. The hidden layers perform computations on the inputs, with the actual operations depending on the type of ANN. Their objective is to identify features in the inputs and use these to correlate a given input with the correct output. An ANN can have multiple outputs: in this example, the output is a prediction of the membrane potential. (B) A deep neural network (DNN) is an ANN with multiple hidden layers. (C) A convolutional neural network (CNN) is a type of DNN that can be trained to extract important features contained in the input data, which can then be used as inputs to the other hidden layers, significantly improving the performance of the overall network. (D) Some details of the feature extraction process of a CNN, which consists of several hidden layers. First, it has multiple filters (F1, F2, F3), each configured to capture specific features. This process can greatly increase the size of the data, so a pooling layer (P1, P2, P3) is then used to reduce this size. The pooling process does not lead to the loss of valuable data; instead, it helps remove noise and consolidate meaningful data. The flattening layer converts the pooled data into a one-dimensional stream. This serves as an input for the subsequent fully connected layer, which does the final evaluation to produce the output based on the features extracted by the convolution layers. (E) A CNN with a long short-term memory (LSTM) layer. The additional LSTM layer enables the network to benefit from long-term memory, in addition to the existing short-term working memory. (F) The LSTM layer achieves this long-term memory through its ability to relay both the cell state (dashed green arrows) and the output generated by each module (solid maroon arrows) across its several modules, allowing the flow of useful information. This enables the network to better identify context in the input data over longer time periods. CNN-LSTMs have been found useful for predicting time series data.
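To make the architecture in panels D and E more concrete, here is a minimal sketch of a CNN-LSTM for predicting a membrane potential trace, written against the Keras API. This is an assumption made for illustration: it is not necessarily the authors' setup, and the window length, filter counts and layer sizes are invented rather than taken from the paper. The convolution and pooling layers play the role of the filters (F1, F2, F3) and pooling stages (P1, P2, P3) in panel D, and the LSTM layer provides the long-term memory of panels E and F.

```python
# Minimal CNN-LSTM sketch (TensorFlow/Keras assumed; all sizes are illustrative).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(200, 3)),                         # 200 time steps of Vm, exc and inh inputs (assumed window)
    layers.Conv1D(32, kernel_size=5, activation="relu"),  # filters extract local temporal features (cf. F1, F2, F3)
    layers.MaxPooling1D(pool_size=2),                     # pooling shrinks the feature maps while keeping salient values
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                      # carries cell state and output across time steps (panel F)
    layers.Dense(1),                                      # read-out: the predicted membrane potential at the next step
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```

Training such a network on voltage traces produced by a conventional biophysical simulator would then let it stand in for the original equations at prediction time.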

A type of ANN known as a recurrent network has proven to be highly effective at learning to predict changes over time (Hewamalage et al., 2021). In these networks, the activity of a layer of neurons is fed back into itself or into earlier layers, allowing the network to integrate new inputs with its own previous activity. Such ANNs have been used for stock market prediction and machine translation, to accelerate weather and climate change simulations (reviewed in Chantry et al., 2021), and to predict the electrical activity of individual biological neurons (Beniaguev et al., 2021; Wang et al., 2022). Oláh et al. have now developed ANNs that can predict the activity of entire networks of biologically realistic neurons with good levels of accuracy.
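The feedback described above can be written in a few lines: at each time step the layer's new activity depends on both the current input and the layer's own previous activity. The sketch below uses NumPy with random, untrained weights purely to show the recurrence; in practice these weights would be learned from data.

```python
# Minimal recurrent-layer sketch (NumPy; weights are random and untrained, for illustration only).
import numpy as np

rng = np.random.default_rng(1)
W_in = rng.normal(scale=0.5, size=(3, 16))    # input -> hidden weights
W_rec = rng.normal(scale=0.1, size=(16, 16))  # hidden -> hidden feedback weights

h = np.zeros(16)                              # the layer's previous activity
for x in rng.normal(size=(100, 3)):           # 100 time steps of 3 input features
    h = np.tanh(x @ W_in + h @ W_rec)         # new input integrated with past activity
print(h[:4])
```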

First, the team tested several different ANN architectures, and found that a particular type of recurrent neural network – which they call a convolutional neural network with long short-term memory (CNN-LSTM) – was able to accurately predict not only the sub-threshold activity but also the shape and timing of action potentials of neurons. For single neurons, their approach was comparable in speed to traditional simulators. However, when they simulated networks made up of many similar neurons, the performance of the CNN-LSTM was much better, becoming over 10,000 times faster than traditional simulators in certain cases.
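The speed-up for networks comes largely from the fact that, once trained, one model can predict the next time step for many similar neurons in a single batched forward pass. A hypothetical rollout loop along those lines is sketched below; it assumes a Keras-style model with a predict method and a fixed input window of past membrane potential and synaptic drive, and it is not the authors' actual implementation.

```python
# Hypothetical autoregressive rollout for a population of similar neurons
# (illustrative only; array shapes and the model interface are assumptions).
import numpy as np

def advance_network(model, history, exc, inh, n_steps):
    """Predict the next membrane potential for every neuron, one step at a time.

    history : (n_neurons, window, 3) array of past Vm, exc and inh values
    exc, inh: (n_neurons, n_steps) arrays of incoming synaptic drive
    """
    traces = []
    for t in range(n_steps):
        v_next = model.predict(history, verbose=0)[:, 0]             # one batched call covers all neurons
        traces.append(v_next)
        new_step = np.stack([v_next, exc[:, t], inh[:, t]], axis=-1)
        history = np.concatenate([history[:, 1:], new_step[:, None, :]], axis=1)
    return np.stack(traces, axis=1)                                  # (n_neurons, n_steps) of predicted Vm
```

In a full network model, the predicted spikes would also need to be converted back into synaptic drive onto the other neurons; that coupling is omitted here for brevity.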

In summary, the work of Oláh et al. shows that ANNs are a promising tool for greatly increasing the scope of what can be modelled with generally available computing hardware, reducing the bottleneck of supercomputer availability. Further studies will be needed to better understand the tradeoffs between performance and accuracy for this approach. By clearly describing the successful CNN-LSTM model and providing their source code in a public repository, Oláh et al. have laid a strong foundation for such future exploration.

Article and author information

Author details

  1. Andrew P Davison

    Andrew P Davison is in the Institut des Neurosciences Paris-Saclay, Université Paris-Saclay, CNRS, Saclay, France

    For correspondence
    andrew.davison@cnrs.fr
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-4793-7541
  2. Shailesh Appukuttan

    Shailesh Appukuttan is in the Institut des Neurosciences Paris-Saclay, Université Paris-Saclay, CNRS, Saclay, France

    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-0148-8023

Publication history

  1. Version of Record published:

Copyright

© 2022, Davison and Appukuttan

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

  1. Andrew P Davison
  2. Shailesh Appukuttan
(2022)
Computational Neuroscience: A faster way to model neuronal circuitry
eLife 11:e84463.
https://doi.org/10.7554/eLife.84463
