Computational Neuroscience: A faster way to model neuronal circuitry
Computational modelling and simulation are widely used to help understand the brain. To represent the billions of neurons and trillions of synapses that make up our nervous system, models express electrical and chemical activity mathematically, as systems of equations that are then solved numerically.
Coarse-grained models of the brain – where each equation represents the collective activity of hundreds of thousands or millions of neurons – have been valuable in helping us understand the coordination of activity across the whole brain (Sanz Leon et al., 2013). The equations from these models can be solved using a normal computer that any researcher might have on their desk. But if we start to investigate how individual neurons and synapses interact to give rise to the collective activity of the brain, the number of equations to be solved becomes enormous. In this case, even powerful supercomputers running flat out for many hours can only simulate the activity of a few cubic millimeters of brain for a few seconds (Billeh et al., 2020; Markram et al., 2015).
Now, in eLife, Viktor Oláh, Nigel Pedersen and Matthew Rowan from the Emory University School of Medicine report on a promising new technique that relies on machine learning tools to greatly accelerate simulations of networks of biologically realistic neurons, without the need for supercomputers (Oláh et al., 2022).
Machine learning approaches have become ubiquitous in recent years, whether in self-driving cars, computer-generated art, or the programs that have beaten grandmasters at chess and Go. One of the most widely used tools in machine learning is the artificial neural network, or ANN.
First developed around the middle of the 20th century, ANNs are based on a highly simplified model of how real neurons work (McCulloch and Pitts, 1943; Rosenblatt, 1958). However, it was only in the early 2000s that their use really took off, due to a combination of increased computing power and theoretical advances that allowed ‘deep learning’ (which involves training ANNs with many layers of artificial neurons; reviewed in Schmidhuber, 2015). Each layer in an ANN takes the data from the previous layer as an input, transforms it and feeds it into the next layer, allowing the ANN to perform complex computations (Figure 1).
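As a minimal illustration of this layered structure (a sketch, not code from the study), the example below builds a small feedforward ANN in Keras that maps three input features, loosely modelled on the membrane potential and synaptic inputs shown in Figure 1, through two hidden layers to a single output. The layer sizes and the training data are arbitrary placeholders.

```python
# Minimal sketch of a layered feedforward ANN; all sizes are illustrative.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),                    # input layer: e.g. Vm, exc, inh
    tf.keras.layers.Dense(32, activation="relu"),  # hidden layer 1
    tf.keras.layers.Dense(32, activation="relu"),  # hidden layer 2
    tf.keras.layers.Dense(1),                      # output: e.g. predicted Vm
])
model.compile(optimizer="adam", loss="mse")

# Train on (input -> target) pairs; random placeholders stand in for data
# that would normally come from recordings or a conventional simulator.
X = np.random.rand(1000, 3).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```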

Figure 1. Illustration of various types of artificial neural networks (ANNs) and their associated components.
(A) A basic ANN consists of an input layer (red circles), one or more hidden layers (peach circles), and an output layer (blue circle). In the case of neuronal modelling, the inputs could be features such as the membrane potential (Vm), and the excitatory (exc) and inhibitory (inh) synaptic inputs. The hidden layers perform computations on the inputs, with the actual operations depending on the type of ANN; their objective is to identify features in the inputs and use these to associate a given input with the correct output. An ANN can have multiple outputs: in this example, the output is a prediction of the membrane potential. (B) A deep neural network (DNN) is an ANN with multiple hidden layers. (C) A convolutional neural network (CNN) is a type of DNN that can be trained to extract important features contained in the input data, which can then be used as inputs to the other hidden layers, significantly improving the performance of the overall network. (D) Some details of the feature extraction process of a CNN, which consists of several hidden layers. First, it has multiple filters (F1, F2, F3), each configured to capture specific features. This process can greatly increase the size of the data, so a pooling layer (P1, P2, P3) is then used to reduce this size. The pooling process does not discard valuable data; instead, it helps remove noise and consolidate meaningful information. The flattening layer converts the pooled data into a one-dimensional stream, which serves as an input for the subsequent fully connected layer; this layer performs the final evaluation to produce the output based on the features extracted by the convolutional layers. (E) A CNN with a long short-term memory (LSTM) layer. The additional LSTM layer enables the network to benefit from long-term memory, in addition to its existing short-term working memory. (F) The LSTM layer achieves this long-term memory through its ability to relay both the cell state (dashed green arrows) and the output generated by each module (solid maroon arrows) across its several modules, allowing useful information to flow forward. This enables the network to better identify context in the input data over longer time periods. CNN-LSTMs have proven useful for predicting time series data.
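To make the architecture sketched in panels D and E more concrete, the example below shows one way such a CNN-LSTM could be assembled in Keras. The window length, number of filters, and layer sizes are placeholders chosen for illustration; they are not the configuration used by Oláh et al., whose actual code is available in their public repository.

```python
# Illustrative CNN-LSTM for time series prediction; sizes are placeholders.
import tensorflow as tf

n_steps, n_features = 64, 3   # assumed input window length and channels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_steps, n_features)),
    # Convolutional filters (F1, F2, F3 in the figure) extract local features.
    tf.keras.layers.Conv1D(filters=32, kernel_size=5, padding="same",
                           activation="relu"),
    # Pooling (P1, P2, P3) reduces the size of the feature maps.
    tf.keras.layers.MaxPooling1D(pool_size=2),
    # The LSTM layer carries its cell state and output across time steps,
    # providing the long-term memory described in panel F.
    tf.keras.layers.LSTM(64),
    # A fully connected layer produces the final prediction, e.g. the
    # membrane potential at the next time step.
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```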
A type of ANN known as a recurrent neural network has proven highly effective at learning to predict changes over time (Hewamalage et al., 2021). In these networks, the activity of a layer of neurons is fed back into itself or into earlier layers, allowing the network to integrate new inputs with its own previous activity. Such ANNs have been used for stock market prediction and machine translation, to accelerate weather and climate simulations (reviewed in Chantry et al., 2021), and to predict the electrical activity of individual biological neurons (Beniaguev et al., 2021; Wang et al., 2022). Oláh et al. have now developed ANNs that can predict the activity of entire networks of biologically realistic neurons with good accuracy.
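The feedback that defines a recurrent network can be written in a few lines. The toy sketch below (plain NumPy, arbitrary random weights, not any published model) shows how a hidden state carries the network's own previous activity forward, so that each new input is integrated with a memory of what came before.

```python
# Toy recurrent update: the hidden state h feeds back into itself.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.standard_normal((8, 3)) * 0.1   # input -> hidden weights
W_rec = rng.standard_normal((8, 8)) * 0.1  # hidden -> hidden (feedback) weights

h = np.zeros(8)                            # hidden state (the network's memory)
inputs = rng.standard_normal((100, 3))     # a sequence of 100 input vectors
for x in inputs:
    # The new state depends on both the current input and the previous state.
    h = np.tanh(W_in @ x + W_rec @ h)
```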
First, the team tested several different ANN architectures, and found that a particular type of recurrent neural network – which they call a convolutional neural network with long short-term memory (CNN-LSTM) – was able to accurately predict not only the sub-threshold activity of neurons but also the shape and timing of their action potentials. For single neurons, their approach was comparable in speed to traditional simulators; however, when they simulated networks made up of many similar neurons, the CNN-LSTM performed far better, in some cases running over 10,000 times faster than traditional simulators.
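A conceptual sketch of where such a speed-up can come from: once a surrogate ANN has been trained for a given neuron type, every neuron of that type can be advanced in a single batched forward pass, rather than integrating each neuron's differential equations separately. The model, data shapes, and numbers below are placeholders for illustration, not the authors' benchmark.

```python
# Sketch: advancing many similar neurons in one batched forward pass.
import numpy as np
import tensorflow as tf

n_neurons, n_steps, n_features = 10_000, 64, 3

# A stand-in surrogate model (untrained here, purely for illustration).
surrogate = tf.keras.Sequential([
    tf.keras.Input(shape=(n_steps, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),            # predicted membrane potential
])

# Recent input history for all neurons (placeholder random data).
recent_history = np.random.rand(n_neurons, n_steps, n_features).astype("float32")

# One call produces the next membrane-potential value for all 10,000 neurons.
next_vm = surrogate.predict(recent_history, batch_size=4096, verbose=0)
print(next_vm.shape)   # (10000, 1)
```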
In summary, the work of Oláh et al. shows that ANNs are a promising tool for greatly increasing the scope of what can be modelled with generally available computing hardware, reducing the bottleneck of supercomputer availability. Further studies will be needed to better understand the tradeoffs between performance and accuracy for this approach. By clearly describing the successful CNN-LSTM model and providing their source code in a public repository, Oláh et al. have laid a strong foundation for such future exploration.
References
- Chantry et al. (2021) Opportunities and challenges for machine learning in weather and climate modelling: hard, medium and soft AI. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 379:20200083. https://doi.org/10.1098/rsta.2020.0083
- Hewamalage et al. (2021) Recurrent neural networks for time series forecasting: current status and future directions. International Journal of Forecasting 37:388–427. https://doi.org/10.1016/j.ijforecast.2020.06.008
- McCulloch and Pitts (1943) A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics 5:115–133. https://doi.org/10.1007/BF02478259
- Rosenblatt (1958) The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65:386–408. https://doi.org/10.1037/h0042519
- Sanz Leon et al. (2013) The Virtual Brain: a simulator of primate brain network dynamics. Frontiers in Neuroinformatics 7:10. https://doi.org/10.3389/fninf.2013.00010
- Wang et al. (2022) Predicting spike features of Hodgkin-Huxley-type neurons with simple artificial neural network. Frontiers in Computational Neuroscience 15:800875. https://doi.org/10.3389/fncom.2021.800875