Computational Neuroscience: A faster way to model neuronal circuitry
Computational modelling and simulation are widely used to help understand the brain. To represent the billions of neurons and trillions of synapses that make up our nervous system, models express electrical and chemical activity mathematically, using equations that are then solved with computational methods.
Coarse-grained models of the brain – where each equation represents the collective activity of hundreds of thousands or millions of neurons – have been valuable in helping us understand the coordination of activity across the whole brain (Sanz Leon et al., 2013). The equations from these models can be solved using a normal computer that any researcher might have on their desk. But if we start to investigate how individual neurons and synapses interact to give rise to the collective activity of the brain, the number of equations to be solved becomes enormous. In this case, even powerful supercomputers running flat out for many hours can only simulate the activity of a few cubic millimeters of brain for a few seconds (Billeh et al., 2020; Markram et al., 2015).
Now, in eLife, Viktor Oláh, Nigel Pedersen and Matthew Rowan from the Emory University School of Medicine report on a promising new technique that relies on machine learning tools to greatly accelerate simulations of networks of biologically realistic neurons, without the need for supercomputers (Oláh et al., 2022).
Machine learning approaches have become ubiquitous in recent years, whether in self-driving cars, computer-generated art, or the programs that have beaten grandmasters at chess and Go. One of the most widely used tools for machine learning is the artificial neural network, or ANN.
First developed around the middle of the 20th century, ANNs are based on a highly simplified model of how real neurons work (McCulloch and Pitts, 1943; Rosenblatt, 1958). However, it was only in the early 2000s that their use really took off, due to a combination of increased computing power and theoretical advances that allowed ‘deep learning’ (which involves training ANNs with many layers of artificial neurons; reviewed in Schmidhuber, 2015). Each layer in an ANN takes the data from the previous layer as an input, transforms it and feeds it into the next layer, allowing the ANN to perform complex computations (Figure 1).
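This layer-by-layer computation can be sketched in a few lines of NumPy. The layer sizes, random weights and ReLU activation below are purely illustrative, not taken from any particular model:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_layer(x, w, b):
    # Each layer transforms the output of the previous layer:
    # a weighted sum followed by a nonlinearity (here, ReLU).
    return np.maximum(0.0, w @ x + b)

# Illustrative sizes: 4 inputs -> 8 hidden units -> 2 outputs.
w1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
w2, b2 = rng.standard_normal((2, 8)), np.zeros(2)

x = rng.standard_normal(4)   # input to the first layer
h = dense_layer(x, w1, b1)   # hidden layer output, fed onward
y = dense_layer(h, w2, b2)   # network output
```

Stacking many such layers is what makes an ANN "deep", and each extra layer lets the network build on the transformations computed by the previous one.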
A type of ANN known as a recurrent network has proven highly effective at learning to predict changes over time (Hewamalage et al., 2021). In these networks, the activity of a layer of neurons is fed back into itself or into earlier layers, allowing the network to integrate new inputs with its own previous activity. Such ANNs have been used for stock market prediction and machine translation, to accelerate weather and climate change simulations (reviewed in Chantry et al., 2021), and to predict the electrical activity of individual biological neurons (Beniaguev et al., 2021; Wang et al., 2022). Oláh et al. have now developed ANNs that can predict the activity of entire networks of biologically realistic neurons with good levels of accuracy.
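The recurrent feedback described above can be sketched as a single update rule: the new hidden state depends on both the current input and the state from the previous time step. The sizes and weights here are illustrative assumptions, not those of any model discussed in the article:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes; not from any particular model.
n_in, n_hidden = 3, 5
w_in = rng.standard_normal((n_hidden, n_in)) * 0.1
w_rec = rng.standard_normal((n_hidden, n_hidden)) * 0.1
b = np.zeros(n_hidden)

def rnn_step(h_prev, x):
    # The new state combines the current input with the layer's
    # own previous activity (the recurrent feedback loop).
    return np.tanh(w_in @ x + w_rec @ h_prev + b)

h = np.zeros(n_hidden)                    # initial state
inputs = rng.standard_normal((10, n_in))  # a short input time series
states = []
for x in inputs:
    h = rnn_step(h, x)
    states.append(h)
states = np.stack(states)                 # (time, hidden) trajectory
```

Because each state carries information forward from all earlier steps, a network built from such units can learn temporal structure, which is what makes it suited to forecasting tasks.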
First, the team tested several different ANN architectures, and found that a particular type of recurrent neural network – which they call a convolutional neural network with long short-term memory (CNN-LSTM) – was able to accurately predict not only the sub-threshold activity but also the shape and timing of action potentials of neurons. For single neurons, their approach was comparable in speed to traditional simulators. However, when they simulated networks made up of many similar neurons, the performance of the CNN-LSTM was much better, becoming over 10,000 times faster than traditional simulators in certain cases.
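The two ingredients of such an architecture can be sketched in miniature: a 1-D convolution that summarises a short window of the input trace, feeding an LSTM cell whose gates control what is stored in and read from its memory. All sizes, weights and the input trace below are illustrative stand-ins, not the architecture or data of Oláh et al.:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes and random weights; not those of the published model.
n_in, n_h = 1, 6
wf, wi, wo, wg = (rng.standard_normal((n_h, n_in + n_h)) * 0.1
                  for _ in range(4))

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    f = sigmoid(wf @ z)   # forget gate: what to discard from memory
    i = sigmoid(wi @ z)   # input gate: what new information to store
    o = sigmoid(wo @ z)   # output gate: what to expose as output
    g = np.tanh(wg @ z)   # candidate values for the cell state
    c = f * c + i * g     # long short-term memory update
    h = o * np.tanh(c)    # new hidden state
    return h, c

# A 1-D convolution over the input trace plays the role of the
# convolutional front end, extracting local temporal features.
kernel = rng.standard_normal(3) * 0.5
signal = rng.standard_normal(20)                    # stand-in input trace
features = np.convolve(signal, kernel, mode="valid")

h, c = np.zeros(n_h), np.zeros(n_h)
for v in features:                                  # LSTM consumes features
    h, c = lstm_step(np.array([v]), h, c)
```

The gated memory is what lets such a network reproduce both slow sub-threshold dynamics and brief, sharp events like action potentials, and once trained, evaluating these matrix operations for many similar neurons at once is far cheaper than numerically integrating each neuron's differential equations.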
In summary, the work of Oláh et al. shows that ANNs are a promising tool for greatly increasing the scope of what can be modelled with generally available computing hardware, reducing the bottleneck of supercomputer availability. Further studies will be needed to better understand the tradeoffs between performance and accuracy for this approach. By clearly describing the successful CNN-LSTM model and providing their source code in a public repository, Oláh et al. have laid a strong foundation for such future exploration.
References

Chantry et al. (2021) Opportunities and challenges for machine learning in weather and climate modelling: hard, medium and soft AI. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 379:20200083. https://doi.org/10.1098/rsta.2020.0083

Hewamalage et al. (2021) Recurrent neural networks for time series forecasting: current status and future directions. International Journal of Forecasting 37:388–427. https://doi.org/10.1016/j.ijforecast.2020.06.008

McCulloch and Pitts (1943) A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics 5:115–133. https://doi.org/10.1007/BF02478259

Rosenblatt (1958) The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65:386–408. https://doi.org/10.1037/h0042519

Sanz Leon et al. (2013) The virtual brain: a simulator of primate brain network dynamics. Frontiers in Neuroinformatics 7:10. https://doi.org/10.3389/fninf.2013.00010

Schmidhuber (2015) Deep learning. Scholarpedia 10:32832. https://doi.org/10.4249/scholarpedia.32832

Wang et al. (2022) Predicting spike features of Hodgkin-Huxley-type neurons with simple artificial neural network. Frontiers in Computational Neuroscience 15:800875. https://doi.org/10.3389/fncom.2021.800875
Article and author information
- Version of Record published: December 2, 2022 (version 1)
© 2022, Davison and Appukuttan
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.