Computational Neuroscience: Finding patterns in cortical responses
Any system, including biological systems, can be said to perform a computation when it transforms input information to generate an output. It is thought that many brain computations are performed by neurons (or groups of neurons) receiving input signals that they process to produce output activity, which then becomes input for other neurons. Many computations that brains can perform could, in principle, be carried out through feedforward processes (Yamins et al., 2014). In simple terms, feedforward means that the signals always travel in one direction – forward to the next neuron or network of neurons – and they never travel backwards or sideways to other neurons within a neuron group. In the cortex, however, networks of neurons have substantial 'recurrent' connectivity. Most cortical neurons are connected to other nearby cortical neurons, and therefore, signals can travel sideways due to these recurrent, local connections.
One property of networks with recurrent connectivity is that they can amplify certain inputs to produce larger outputs, while suppressing other inputs or amplifying them by a smaller factor. However, it has been challenging to understand how this can happen without the system displaying unstable or runaway activity, which is undesirable in the brain because it can lead to epileptic seizures. One plausible mechanism for recurrent amplification is known as 'balanced amplification' (Murphy and Miller, 2009). In mathematical network models that support balanced amplification, recurrent connectivity allows certain inputs to produce large outputs, yet the networks still exhibit other properties that are consistent with experimental data (such as fast responses to inputs). Recurrent connections can also influence the timing of neurons’ responses, allowing shorter inputs to create long-lasting, or time-varying outputs (Hennequin et al., 2014).
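The essence of balanced amplification is that a network can be stable (all eigenvalues negative) yet still transiently amplify certain inputs, because strong excitation and inhibition cancel along some directions but feed others. The toy two-variable linear model below is a sketch in the spirit of Murphy and Miller (2009); all numbers are invented for illustration and are not taken from any of the cited papers. In sum/difference coordinates, both eigenvalues equal -1, but a feedforward weight routes the excitation-inhibition difference mode into the sum mode, producing a large transient from a small input.

```python
import numpy as np

# Toy sketch of balanced amplification (after Murphy and Miller, 2009);
# the numbers are invented. Both eigenvalues of M are -1 (stable), but the
# off-diagonal weight w routes the E-I difference mode into the sum mode.
w = 8.0
M = np.array([[-1.0,    w],   # d(sum)/dt  = -sum + w * diff
              [ 0.0, -1.0]])  # d(diff)/dt = -diff

dt, n_steps = 0.001, 6000
x = np.array([0.0, 1.0])      # a brief input deposited in the difference mode
peak = np.linalg.norm(x)
for _ in range(n_steps):
    x = x + dt * (M @ x)      # forward-Euler integration
    peak = max(peak, np.linalg.norm(x))

# The transient grows well beyond the input's size before decaying to zero
print(peak)
```

Analytically, the sum mode follows w·t·e^(-t), peaking near w/e at t = 1 before decaying, so the network amplifies this particular input roughly threefold while remaining perfectly stable.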
Neurons can be excitatory or inhibitory: when an excitatory neuron fires, the neuron receiving that input becomes more likely to fire as well, and when an inhibitory neuron fires, the opposite occurs, and the recipient neuron is suppressed. A network of excitatory and inhibitory cells must possess strong recurrent connectivity to support many recurrent computations, including balanced amplification. Here 'strong' means that recurrent connections are sufficiently dense to allow excitatory neurons to amplify other excitatory neurons’ activity, and in this situation, strong inputs from inhibitory neurons are required to stop the network from becoming unstable. More precisely, inhibitory-stabilized network models are those where, if the activity of inhibitory neurons could be locked to a fixed level, the excitatory neurons in the network would then become unstable (Tsodyks et al., 1997). Inhibitory-stabilized networks have been found in several cortical areas, and are seen across a range of levels of network activity – both when sensory stimulation is present, and when it is absent (Ozeki et al., 2009; Li et al., 2019; Sanzeni et al., 2019, but see Mahrach et al., 2020).
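This definition can be checked directly in a minimal two-population linear rate model. The sketch below uses hypothetical weights (all numbers are illustrative, not drawn from the cited studies): with inhibition frozen at a fixed level, the excitatory subsystem has a positive eigenvalue and would run away, yet the coupled excitatory-inhibitory system is stable.

```python
import numpy as np

# Hypothetical two-population linear rate model (numbers are illustrative,
# not from the cited studies):
#   tau_E dE/dt = -E + w_EE*E - w_EI*I
#   tau_I dI/dt = -I + w_IE*E - w_II*I
w_EE, w_EI, w_IE, w_II = 1.5, 1.3, 2.0, 1.0
tau_E, tau_I = 0.020, 0.010  # seconds

# Freeze inhibition at a fixed level: the E subsystem's eigenvalue is
# (w_EE - 1)/tau_E, which is positive (runaway growth) whenever w_EE > 1.
e_alone_unstable = (w_EE - 1) / tau_E > 0

# Let inhibition track excitation: the full Jacobian can still be stable.
J = np.array([[(w_EE - 1) / tau_E,      -w_EI / tau_E],
              [       w_IE / tau_I, -(1 + w_II) / tau_I]])
full_stable = bool(np.all(np.real(np.linalg.eigvals(J)) < 0))

print(e_alone_unstable, full_stable)  # True True -> inhibitory-stabilized
```

The pair of conditions printed at the end is exactly the inhibitory-stabilized criterion: instability of the excitatory subnetwork alone, stability of the network as a whole.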
The simplest form of strong connectivity amongst excitatory neurons in a network is where the whole excitatory network is unstable. This is the standard inhibitory-stabilized network. But complex neural networks can have multiple unstable excitatory modes, where subgroups of excitatory neurons are unstable and would display runaway behavior if they were not stabilized by inhibition. Networks in which inhibition stabilizes multiple excitatory modes or subgroups are said to be in detailed balance (Vogels and Abbott, 2009; Hennequin et al., 2014; Litwin-Kumar and Doiron, 2014), while those in which inhibition stabilizes a single group of excitatory cells, typically the group of all excitatory cells, are in global balance. As a general rule, networks in detailed balance are also in global balance.
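The distinction can be made concrete with a small hypothetical network (all weights invented for the sketch): two excitatory subgroups, each unstable on its own, each stabilized by a dedicated inhibitory partner, so that inhibition balances multiple excitatory modes rather than a single global one.

```python
import numpy as np

# Hypothetical detailed-balance network: two excitatory subgroups (E1, E2),
# each stabilized by a dedicated inhibitory subgroup (I1, I2). Weights are
# invented for illustration; dynamics are dx/dt = -x + W @ x.
wE, wI = 1.5, 1.3
W = np.array([[ wE, 0.0, -wI, 0.0],   # E1
              [0.0,  wE, 0.0, -wI],   # E2
              [2.0, 0.0, -1.0, 0.0],  # I1
              [0.0, 2.0, 0.0, -1.0]]) # I2

# Freeze inhibition: each excitatory subgroup alone is unstable (wE > 1).
J_exc = W[:2, :2] - np.eye(2)
unstable_alone = bool(np.max(np.real(np.linalg.eigvals(J_exc))) > 0)

# Let inhibition respond: the full four-population network is stable.
J_full = W - np.eye(4)
stable_full = bool(np.all(np.real(np.linalg.eigvals(J_full)) < 0))

print(unstable_alone, stable_full)  # True True
```

Here there are two unstable excitatory modes (one per subgroup), and each is held in check by structured, subgroup-specific inhibition rather than by a single global inhibitory pool.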
Now, in eLife, Sadra Sadeh and Claudia Clopath from Imperial College London report the results of simulations showing that networks in detailed balance extend the properties of the basic inhibitory-stabilized network (Sadeh and Clopath, 2020). In globally-balanced networks, when inhibitory neurons are stimulated uniformly (all of the neurons across the network receive an input of the same strength) a distinctive ‘paradoxical’ response, where adding input reduces activity, can be observed (Figure 1C). These paradoxical responses can be used as a signature to determine whether the network is an inhibitory-stabilized network (Tsodyks et al., 1997). Sadeh and Clopath extend this idea to detailed-balance networks with multiple unstable excitatory modes. They show that if the inhibitory neurons in these networks receive more complex, patterned stimulation (that is, certain neurons receive a stronger input than others) a predictable paradoxical signature can be observed (Figure 1D). Sadeh and Clopath call networks in which this happens ‘specific inhibitory-stabilized networks’. The connectivity patterns between neurons in their models are consistent with anatomical evidence of structured network connectivity in the cortex (Ko et al., 2013; Znamenskiy et al., 2018). Further, the existence of multiple unstable excitatory modes in the cortex is suggested by recent experiments that have found preferential amplification of specific patterns of input (Marshel et al., 2019; Peron et al., 2020).
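In a linear rate model, the paradoxical effect falls out of the steady-state equations. The sketch below uses the same style of hypothetical inhibitory-stabilized model as above (the weights are illustrative, not Sadeh and Clopath's): adding extra uniform drive to the inhibitory population lowers its steady-state rate, because the excitatory drive that inhibition receives collapses faster than the added input.

```python
import numpy as np

# Hypothetical inhibitory-stabilized rate model (illustrative weights, not
# from the paper). At steady state:
#   (1 - w_EE) E + w_EI I = h_E
#       -w_IE  E + (1 + w_II) I = h_I
w_EE, w_EI, w_IE, w_II = 1.5, 1.3, 2.0, 1.0
A = np.array([[1 - w_EE,     w_EI],
              [   -w_IE, 1 + w_II]])

def steady_state(h_E, h_I):
    return np.linalg.solve(A, [h_E, h_I])  # returns (E, I)

E0, I0 = steady_state(1.0, 1.0)   # baseline drive
E1, I1 = steady_state(1.0, 1.5)   # extra uniform drive to inhibition

# Paradoxical signature: pushing harder on I lowers both rates, because
# the recurrent excitatory drive onto I drops more than the input adds.
print(I1 < I0, E1 < E0)  # True True
```

In a weakly coupled network the inhibitory rate would simply rise with its input; the sign flip is what makes the response a usable experimental signature of inhibitory stabilization.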
Sadeh and Clopath thus make a concrete prediction: that this “specific paradoxical effect” will be seen in networks where the connectivity between neurons is strong and structured. This prediction can now be tested using a technique called two-photon optogenetics that allows patterned input to be provided to neural networks in vivo with single-cell resolution, both for excitatory and inhibitory neurons (for example, Marshel et al., 2019; Forli et al., 2018).
The article by Sadeh and Clopath also takes a conceptual step forward by considering the information that can be gained about network structure and function by providing each neuron with an input of different strength. This conceptual framework is timely, as two-photon stimulation offers precisely this ability to vary the strength of the input to selected neurons. Specifically, Sadeh and Clopath predict that a pattern of input across inhibitory neurons will generate a response that is similar to the input pattern but with opposite sign. These predictions should shape future experiments, yielding new information about a key element of cortical function: how the recurrent connectivity in cortical networks is used for computation.
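This sign-flipped response to patterned input can be illustrated in a toy detailed-balance model with two inhibitory subgroups standing in for a patterned inhibitory population (all weights are invented; this is a sketch of the idea, not the paper's simulation).

```python
import numpy as np

# Toy version of the "specific paradoxical effect": patterned input across
# two inhibitory subgroups of a hypothetical detailed-balance network (all
# weights invented). Dynamics are dx/dt = -x + W @ x + h, so the
# steady-state response is (I - W)^-1 @ h.
wE, wI = 1.5, 1.3
W = np.array([[ wE, 0.0, -wI, 0.0],   # E1
              [0.0,  wE, 0.0, -wI],   # E2
              [2.0, 0.0, -1.0, 0.0],  # I1
              [0.0, 2.0, 0.0, -1.0]]) # I2
A = np.eye(4) - W

pattern = np.array([0.5, -0.5])             # stronger drive to I1 than I2
h = np.concatenate([np.zeros(2), pattern])  # input targets inhibition only

dI = np.linalg.solve(A, h)[2:]              # change in the two I rates

# The inhibitory response mirrors the input pattern with opposite sign
print(np.sign(dI), np.sign(pattern))
```

In this linear toy model the inhibitory response is exactly proportional to the input pattern with a negative gain, which is the kind of mirrored, opposite-sign response the authors predict for patterned two-photon stimulation.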
References
- Litwin-Kumar A, Doiron B (2014) Formation and maintenance of neuronal assemblies through synaptic plasticity. Nature Communications 5:5319. https://doi.org/10.1038/ncomms6319
- Tsodyks MV, Skaggs WE, Sejnowski TJ, McNaughton BL (1997) Paradoxical effects of external modulation of inhibitory interneurons. The Journal of Neuroscience 17:4382–4388. https://doi.org/10.1523/JNEUROSCI.17-11-04382.1997
Copyright
© 2020, Sanzeni and Histed
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.