Computational Neuroscience: A faster way to model neuronal circuitry
Computational modelling and simulation are widely used to help understand the brain. To represent the billions of neurons and trillions of synapses that make up our nervous system, models describe electrical and chemical activity with mathematical equations, which are then solved using computational methods.
Coarse-grained models of the brain – where each equation represents the collective activity of hundreds of thousands or millions of neurons – have been valuable in helping us understand the coordination of activity across the whole brain (Sanz Leon et al., 2013). The equations from these models can be solved using a normal computer that any researcher might have on their desk. But if we start to investigate how individual neurons and synapses interact to give rise to the collective activity of the brain, the number of equations to be solved becomes enormous. In this case, even powerful supercomputers running flat out for many hours can only simulate the activity of a few cubic millimeters of brain for a few seconds (Billeh et al., 2020; Markram et al., 2015).
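To make the idea concrete, a coarse-grained model can be as simple as a pair of coupled differential equations describing the average firing rates of an excitatory and an inhibitory neuronal population, integrated step by step on an ordinary computer. The Python sketch below shows a rate model of this kind solved with a basic forward-Euler method; the parameter values are arbitrary placeholders for illustration, not taken from any of the models cited above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative parameter values (not from any cited model)
tau_e, tau_i = 10.0, 20.0                       # population time constants (ms)
w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 8.0, 3.0   # coupling weights
p_ext = 1.5                                     # external drive to the excitatory population
dt, t_max = 0.1, 500.0                          # integration step and duration (ms)

e, i = 0.1, 0.1          # average firing rates of the two populations
trace = []
for _ in range(int(t_max / dt)):
    # Forward-Euler update of the two coupled population equations
    de = (-e + sigmoid(w_ee * e - w_ei * i + p_ext)) / tau_e
    di = (-i + sigmoid(w_ie * e - w_ii * i)) / tau_i
    e, i = e + dt * de, i + dt * di
    trace.append(e)      # trace of excitatory population activity over time
```

Each variable here stands in for the pooled activity of a very large number of neurons; modelling the same tissue neuron by neuron would require many orders of magnitude more equations.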
Now, in eLife, Viktor Oláh, Nigel Pedersen and Matthew Rowan from the Emory University School of Medicine report on a promising new technique that relies on machine learning tools to greatly accelerate simulations of networks of biologically realistic neurons, without the need for supercomputers (Oláh et al., 2022).
Machine learning approaches have become ubiquitous in recent years, whether in self-driving cars, computer-generated art, or the programs that have beaten grandmasters at chess and Go. One of the most widely used tools in machine learning is the artificial neural network, or ANN.
First developed around the middle of the 20th century, ANNs are based on a highly simplified model of how real neurons work (McCulloch and Pitts, 1943; Rosenblatt, 1958). However, it was only in the early 2000s that their use really took off, due to a combination of increased computing power and theoretical advances that allowed ‘deep learning’ (which involves training ANNs with many layers of artificial neurons; reviewed in Schmidhuber, 2015). Each layer in an ANN takes the data from the previous layer as an input, transforms it and feeds it into the next layer, allowing the ANN to perform complex computations (Figure 1).
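This layer-by-layer transformation can be written in a few lines of Python with NumPy. The sketch below is purely illustrative: the layer sizes are arbitrary and the weights are random rather than trained.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied after each layer's linear transformation
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# A tiny feed-forward ANN with arbitrary, untrained weights
x = rng.normal(size=16)                            # input vector
w1, b1 = rng.normal(size=(32, 16)), np.zeros(32)   # first layer
w2, b2 = rng.normal(size=(8, 32)), np.zeros(8)     # second layer
w3, b3 = rng.normal(size=(1, 8)), np.zeros(1)      # output layer

h1 = relu(w1 @ x + b1)    # layer 1 transforms the input
h2 = relu(w2 @ h1 + b2)   # layer 2 transforms layer 1's output
y = w3 @ h2 + b3          # output layer produces the final prediction
```

Training an ANN consists of adjusting the weights so that the output matches the desired target; the depth (number of layers) is what makes 'deep learning' deep.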
A type of ANN known as a recurrent network has proven to be highly effective at learning to predict changes over time (Hewamalage et al., 2021). In these networks, the activity of a layer of neurons is fed back into itself or into earlier layers, allowing the network to integrate new inputs with its own previous activity. Such ANNs have been used to predict stock market movements, to perform machine translation, to accelerate weather and climate simulations (reviewed in Chantry et al., 2021), and to predict the electrical activity of individual biological neurons (Beniaguev et al., 2021; Wang et al., 2022). Oláh et al. have now developed ANNs that can predict the activity of entire networks of biologically realistic neurons with good levels of accuracy.
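The defining feature of a recurrent network, a hidden state that mixes each new input with the network's own previous activity, can likewise be sketched in a few lines (again with arbitrary, untrained weights, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden, n_steps = 4, 16, 100
w_in = rng.normal(size=(n_hidden, n_in)) * 0.1       # input weights
w_rec = rng.normal(size=(n_hidden, n_hidden)) * 0.1  # recurrent (feedback) weights
w_out = rng.normal(size=(1, n_hidden)) * 0.1         # readout weights

h = np.zeros(n_hidden)                     # hidden state carries the network's memory
inputs = rng.normal(size=(n_steps, n_in))  # one input vector per time step
outputs = []
for x_t in inputs:
    # Each new input is combined with the layer's own previous activity
    h = np.tanh(w_in @ x_t + w_rec @ h)
    outputs.append((w_out @ h).item())
```

Because the hidden state persists from one time step to the next, the network can learn how a signal evolves over time rather than treating each input in isolation.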
First, the team tested several different ANN architectures, and found that a particular type of recurrent neural network – which they call a convolutional neural network with long short-term memory (CNN-LSTM) – was able to accurately predict not only the sub-threshold activity but also the shape and timing of action potentials of neurons. For single neurons, their approach was comparable in speed to traditional simulators. However, when they simulated networks made up of many similar neurons, the performance of the CNN-LSTM was much better, becoming over 10,000 times faster than traditional simulators in certain cases.
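The exact architecture and training procedure are described in the paper and in the authors' public code repository. As a rough illustration only, not the authors' code, the general shape of a CNN-LSTM can be sketched as below; PyTorch is used here as an assumption of framework, and the layer sizes are arbitrary. A one-dimensional convolution scans the recent input history, an LSTM layer carries information forward in time, and a linear readout predicts the membrane potential at each time step.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN-LSTM: a 1-D convolution over the input history,
    an LSTM over the resulting features, and a linear readout predicting
    membrane potential per time step. Layer sizes are arbitrary and are
    not taken from Oláh et al."""

    def __init__(self, n_inputs=8, n_filters=32, n_hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_inputs, n_filters, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(n_filters, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, 1)

    def forward(self, x):
        # x: (batch, time, n_inputs), e.g. synaptic input traces for a neuron
        z = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, n_filters, time)
        z, _ = self.lstm(z.transpose(1, 2))           # (batch, time, n_hidden)
        return self.readout(z)                        # (batch, time, 1): predicted voltage

model = CNNLSTM()
dummy_inputs = torch.randn(2, 200, 8)        # two example input sequences
predicted_voltage = model(dummy_inputs)      # shape: (2, 200, 1)
```

Once trained, a model of this kind replaces the numerical integration of the underlying biophysical equations with a single forward pass, which is where the speed advantage for large networks of similar neurons comes from.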
In summary, the work of Oláh et al. shows that ANNs are a promising tool for greatly increasing the scope of what can be modelled with generally available computing hardware, reducing the bottleneck of supercomputer availability. Further studies will be needed to better understand the tradeoffs between performance and accuracy for this approach. By clearly describing the successful CNN-LSTM model and providing their source code in a public repository, Oláh et al. have laid a strong foundation for such future exploration.
References
- Chantry et al. (2021) Opportunities and challenges for machine learning in weather and climate modelling: hard, medium and soft AI. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 379:20200083. https://doi.org/10.1098/rsta.2020.0083
- Hewamalage et al. (2021) Recurrent neural networks for time series forecasting: current status and future directions. International Journal of Forecasting 37:388–427. https://doi.org/10.1016/j.ijforecast.2020.06.008
- McCulloch and Pitts (1943) A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics 5:115–133. https://doi.org/10.1007/BF02478259
- Rosenblatt (1958) The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65:386–408. https://doi.org/10.1037/h0042519
- Sanz Leon et al. (2013) The Virtual Brain: a simulator of primate brain network dynamics. Frontiers in Neuroinformatics 7:10. https://doi.org/10.3389/fninf.2013.00010
- Wang et al. (2022) Predicting spike features of Hodgkin-Huxley-type neurons with simple artificial neural network. Frontiers in Computational Neuroscience 15:800875. https://doi.org/10.3389/fncom.2021.800875
Copyright
© 2022, Davison and Appukuttan
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.