When an action potential arrives at a synapse, there is a large probability that no neurotransmitter is released. Surprisingly, simple computational models suggest that these synaptic failures enable information processing at lower metabolic costs. However, these models consider information transmission only at single synapses, ignoring the rest of the neural network as well as its overall computational goal. Here, we investigate how synaptic failures affect the energy efficiency of models of entire neural networks that solve a goal-driven task. We find that presynaptic stochasticity and plasticity improve energy efficiency, and we show that the network allocates most energy to a sparse subset of important synapses. We demonstrate that stabilising these synapses helps to alleviate the stability-plasticity dilemma, thus connecting a presynaptic notion of importance to a computational role in lifelong learning. Overall, our findings present a set of hypotheses for how presynaptic plasticity and stochasticity contribute to sparsity, energy efficiency, and improved trade-offs in the stability-plasticity dilemma.
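To make the notion of synaptic failures concrete, the following is a minimal sketch (our own illustration, not the authors' published code) of a network layer whose synapses transmit only with some Bernoulli release probability, so that lowering release probabilities reduces the expected number of costly transmission events:

```python
import numpy as np

def stochastic_forward(x, w, p_release, rng):
    """Forward pass through a layer whose synapses release
    neurotransmitter only with probability p_release (Bernoulli),
    modelling synaptic failures. Returns the postsynaptic input
    and the release mask (1 = transmitter released)."""
    mask = (rng.random(w.shape) < p_release).astype(float)
    return (w * mask) @ x, mask

rng = np.random.default_rng(0)
x = np.ones(4)                 # presynaptic activity
w = np.full((3, 4), 0.5)       # synaptic weights
p = np.full((3, 4), 0.25)      # per-synapse release probabilities

out, mask = stochastic_forward(x, w, p, rng)
# Expected number of release events is p.sum() = 3 out of 12 synapses,
# i.e. most synapses fail on any given spike, saving transmission cost.
```

In this toy setting, making `p_release` itself plastic (as the abstract describes) would let learning concentrate reliable, energetically expensive transmission on a sparse subset of important synapses.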
Code for all experiments is part of the submission and is published on GitHub (https://github.com/smonsays/presynaptic-stochasticity).
Data from: Unified pre- and postsynaptic long-term plasticity enables reliable and flexible learning. Dryad Digital Repository, doi:10.5061/dryad.p286g.
- Simon Schug
- Frederik Benzing
- Angelika Steger
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
- Timothy O'Leary, University of Cambridge, United Kingdom
- Received: April 28, 2021
- Accepted: October 18, 2021
- Accepted Manuscript published: October 18, 2021 (version 1)
© 2021, Schug et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.