Simulations predict a paradoxical effect that should be revealed by patterned stimulation of the cortex.
Any system, including biological systems, can be said to perform a computation when it transforms input information to generate an output. It is thought that many brain computations are performed by neurons (or groups of neurons) receiving input signals that they process to produce output activity, which then becomes input for other neurons. Many computations that brains can perform could, in principle, be carried out through feedforward processes (Yamins et al., 2014). In simple terms, feedforward means that the signals always travel in one direction – forward to the next neuron or network of neurons – and they never travel backwards or sideways to other neurons within a neuron group. In the cortex, however, networks of neurons have substantial 'recurrent' connectivity. Most cortical neurons are connected to other nearby cortical neurons, and therefore, signals can travel sideways due to these recurrent, local connections.
One property of networks with recurrent connectivity is that they can amplify certain inputs to produce larger outputs, while suppressing other inputs or amplifying them by a smaller factor. However, it has been challenging to understand how this can happen without the system displaying unstable or runaway activity, which is undesirable in the brain because it can lead to epileptic seizures. One plausible mechanism for recurrent amplification is known as 'balanced amplification' (Murphy and Miller, 2009). In mathematical network models that support balanced amplification, recurrent connectivity allows certain inputs to produce large outputs, yet the networks still exhibit other properties that are consistent with experimental data (such as fast responses to inputs). Recurrent connections can also influence the timing of neurons’ responses, allowing shorter inputs to create long-lasting or time-varying outputs (Hennequin et al., 2014).
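The core idea of balanced amplification can be illustrated with a two-unit linear rate model. This is a minimal sketch, not the model of Murphy and Miller (2009): the weight value, time step, and initial pattern are illustrative assumptions. Because excitation and inhibition cancel along each row, the weight matrix has no unstable eigenvalues, yet activity along the 'difference' pattern transiently drives a much larger 'sum' pattern before decaying.

```python
import numpy as np

# Linear rate dynamics: dx/dt = -x + W x.
# Balanced weights: excitation and inhibition cancel in each row,
# so both eigenvalues of W are zero, yet W is strongly non-normal.
w = 5.0  # illustrative weight strength
W = np.array([[w, -w],
              [w, -w]])

# Start with activity along the "difference" pattern (E up, I down).
x = np.array([1.0, -1.0]) / np.sqrt(2.0)
dt, T = 0.001, 8.0
peak = np.linalg.norm(x)
for _ in range(int(T / dt)):
    x = x + dt * (-x + W @ x)
    peak = max(peak, np.linalg.norm(x))

# The transient peak is several times the initial norm of 1, even though
# all activity eventually decays: amplification without instability.
print(round(peak, 2))
```

The key design point is that the matrix is non-normal: its eigenvalues (which govern stability) say nothing about the large transient that one activity pattern can induce in another.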
Neurons can be excitatory or inhibitory: when an excitatory neuron fires, the neuron receiving that input becomes more likely to fire as well, and when an inhibitory neuron fires, the opposite occurs, and the recipient neuron is suppressed. A network of excitatory and inhibitory cells must possess strong recurrent connectivity to support many recurrent computations, including balanced amplification. Here 'strong' means that recurrent connections are sufficiently dense to allow excitatory neurons to amplify other excitatory neurons’ activity, and in this situation, strong inputs from inhibitory neurons are required to stop the network from becoming unstable. More precisely, inhibitory-stabilized network models are those where, if the activity of inhibitory neurons could be locked to a fixed level, the excitatory neurons in the network would then become unstable (Tsodyks et al., 1997). Inhibitory-stabilized networks have been found in several cortical areas, and are seen across a range of levels of network activity – both when sensory stimulation is present, and when it is absent (Ozeki et al., 2009; Li et al., 2019; Sanzeni et al., 2019, but see Mahrach et al., 2020).
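The defining test of an inhibitory-stabilized network, clamping inhibition and checking whether excitation runs away, can be sketched in a two-population linear rate model. This is a minimal illustration, not the model of Tsodyks et al. (1997): the weights, inputs, and time step are illustrative assumptions, chosen so that the excitatory self-coupling exceeds the stability threshold (wEE > 1).

```python
def simulate(freeze_inhibition, steps=8000, dt=0.001):
    """Euler-integrate a two-population linear rate model:
    dE/dt = -E + wEE*E - wEI*I + hE
    dI/dt = -I + wIE*E - wII*I + hI   (set to zero if inhibition is frozen)
    """
    # Illustrative weights; wEE > 1 means E is unstable on its own.
    wEE, wEI, wIE, wII = 2.0, 2.0, 2.0, 1.0
    hE, hI = 3.0, 1.0
    E, I = 2.01, 2.5  # steady state is (E=2, I=2.5); E nudged slightly
    for _ in range(steps):
        dE = -E + wEE * E - wEI * I + hE
        dI = 0.0 if freeze_inhibition else (-I + wIE * E - wII * I + hI)
        E += dt * dE
        I += dt * dI
    return E

# With inhibition free to react, the perturbation decays back to steady state...
print(round(simulate(False), 2))  # -> 2.0
# ...but with inhibition clamped, excitatory activity runs away.
print(simulate(True) > 10)        # -> True
```

With inhibition tracking excitation, the full network is stable; with inhibition frozen, the same excitatory perturbation grows exponentially, which is exactly the signature of inhibitory stabilization.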
The simplest form of strong connectivity amongst excitatory neurons in a network is where the whole excitatory network is unstable. This is the standard inhibitory-stabilized network. But complex neural networks can have multiple unstable excitatory modes, where subgroups of excitatory neurons are unstable and would display runaway behavior if they were not stabilized by inhibition. Networks in which inhibition stabilizes multiple excitatory modes or subgroups are said to be in detailed balance (Vogels and Abbott, 2009; Hennequin et al., 2014; Litwin-Kumar and Doiron, 2014), while those in which inhibition stabilizes a single group of excitatory cells, typically the group of all excitatory cells, are in global balance. As a general rule, networks in detailed balance are also in global balance.
Now, in eLife, Sadra Sadeh and Claudia Clopath from Imperial College London report the results of simulations showing that networks in detailed balance have properties that extend the basic inhibitory-stabilized network (Sadeh and Clopath, 2020). In globally balanced networks, when inhibitory neurons are stimulated uniformly (all of the neurons across the network receive an input of the same strength), a distinctive ‘paradoxical’ response, where adding input reduces activity, can be observed (Figure 1C). These paradoxical responses can be used as a signature to determine whether the network is an inhibitory-stabilized network (Tsodyks et al., 1997). Sadeh and Clopath extend this idea to detailed-balance networks with multiple unstable excitatory modes. They show that if the inhibitory neurons in these networks receive more complex, patterned stimulation (that is, certain neurons receive a stronger input than others), a predictable paradoxical signature can be observed (Figure 1D). Sadeh and Clopath call networks in which this happens ‘specific inhibitory-stabilized networks’. The connectivity patterns between neurons in their models are consistent with anatomical evidence of structured network connectivity in the cortex (Ko et al., 2013; Znamenskiy et al., 2018). Further, the existence of multiple excitatory modes in the cortex is suggested by recent experiments that have found preferential amplification of specific patterns of input (Marshel et al., 2019; Peron et al., 2020).
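For the globally balanced case, the paradoxical response can be checked directly from the steady state of a two-population linear rate model. This is a hedged sketch with illustrative weights (not the simulations of Sadeh and Clopath): wEE > 1 puts the model in the inhibitory-stabilized regime, and adding input to the inhibitory population then lowers its own rate.

```python
import numpy as np

# Steady state of the linear rate model r = W r + h, i.e. (I - W) r = h.
# Illustrative weights; wEE > 1 gives the inhibitory-stabilized regime.
wEE, wEI, wIE, wII = 2.0, 2.0, 2.0, 1.0
W = np.array([[wEE, -wEI],
              [wIE, -wII]])

def steady_state(hE, hI):
    return np.linalg.solve(np.eye(2) - W, [hE, hI])

E1, I1 = steady_state(hE=3.0, hI=1.0)
E2, I2 = steady_state(hE=3.0, hI=2.0)   # extra drive to inhibition

# Paradox: more input to the inhibitory population *reduces* its rate,
# because the excitatory activity it depends on is suppressed even more.
print(I2 < I1)  # -> True
```

In this parameterization the inhibitory rate falls from 2.5 to 2.0 when its external input is raised, because the drop in excitatory drive outweighs the added input; in a weakly coupled network the same manipulation would simply raise the inhibitory rate.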
Sadeh and Clopath thus make a concrete prediction: that this “specific paradoxical effect” will be seen in networks where the connectivity between neurons is strong and structured. This prediction can now be tested using a technique called two-photon optogenetics, which allows patterned input to be delivered to neural networks in vivo with single-cell resolution, for both excitatory and inhibitory neurons (for example, Marshel et al., 2019; Forli et al., 2018).
The article by Sadeh and Clopath also takes a conceptual step forward by considering the information that can be gained about network structure and function by providing each neuron with an input of a different strength. This framework is timely, as two-photon stimulation makes it possible to vary the strength of the input delivered to each selected neuron. Specifically, Sadeh and Clopath predict that a pattern of input across inhibitory neurons will generate a response that is similar to the input pattern but opposite in sign. These predictions should shape future experiments, yielding new information about a key element of cortical function: how the recurrent connectivity of cortical networks is used for computation.
Litwin-Kumar A, Doiron B (2014) Formation and maintenance of neuronal assemblies through synaptic plasticity. Nature Communications 5:5319. https://doi.org/10.1038/ncomms6319

Tsodyks MV, Skaggs WE, Sejnowski TJ, McNaughton BL (1997) Paradoxical effects of external modulation of inhibitory interneurons. The Journal of Neuroscience 17:4382–4388. https://doi.org/10.1523/JNEUROSCI.17-11-04382.1997