Nonlinear transient amplification in recurrent neural networks with short-term plasticity
Abstract
To rapidly process information, neural circuits have to amplify specific activity patterns transiently. How the brain performs this nonlinear operation remains elusive. Hebbian assemblies are one possibility whereby strong recurrent excitatory connections boost neuronal activity. However, such Hebbian amplification is often associated with dynamical slowing of network dynamics, non-transient attractor states, and pathological runaway activity. Feedback inhibition can alleviate these effects but typically linearizes responses and reduces amplification gain. Here, we study nonlinear transient amplification (NTA), a plausible alternative mechanism that reconciles strong recurrent excitation with rapid amplification while avoiding the above issues. NTA has two distinct temporal phases. Initially, positive feedback excitation selectively amplifies inputs that exceed a critical threshold. Subsequently, short-term plasticity quenches the runaway dynamics into an inhibition-stabilized network state. By characterizing NTA in supralinear network models, we establish that the resulting onset transients are stimulus selective and well-suited for speedy information processing. Further, we find that excitatory-inhibitory co-tuning widens the parameter regime in which NTA is possible in the absence of persistent activity. In summary, NTA provides a parsimonious explanation for how excitatory-inhibitory co-tuning and short-term plasticity collaborate in recurrent networks to achieve transient amplification.
Editor's evaluation
Many brain circuits, particularly those found in mammalian sensory cortices, need to respond rapidly to stimuli while at the same time avoiding pathological, runaway excitation. Over several years, many theoretical studies have attempted to explain how cortical circuits achieve these goals through interactions between inhibitory and excitatory cells. This study adds to this literature by showing how synaptic short-term depression can stabilise strong positive feedback in a circuit under a variety of plausible scenarios, allowing strong, rapid and stimulus-specific responses.
https://doi.org/10.7554/eLife.71263.sa0

Introduction
Perception in the brain is reliable and strikingly fast. Recognizing a familiar face or locating an animal in a picture only takes a split second (Thorpe et al., 1996). This pace of processing is truly remarkable since it involves several recurrently connected brain areas each of which has to selectively amplify or suppress specific signals before propagating them further. This processing is mediated through circuits with several intriguing properties. First, excitatory-inhibitory (E-I) currents into individual neurons are commonly correlated in time and co-tuned in stimulus space (Wehr and Zador, 2003; Froemke et al., 2007; Okun and Lampl, 2008; Hennequin et al., 2017; Rupprecht and Friedrich, 2018; Znamenskiy et al., 2018). Second, neural responses to stimulation are shaped through diverse forms of short-term plasticity (STP) (Tsodyks and Markram, 1997; Markram et al., 1998; Zucker and Regehr, 2002; Pala and Petersen, 2015). Finally, mounting evidence suggests that amplification rests on neuronal ensembles with strong recurrent excitation (Marshel et al., 2019; Peron et al., 2020), whereby excitatory neurons with similar tuning preferentially form reciprocal connections (Ko et al., 2011; Cossell et al., 2015). Such predominantly symmetric connectivity between excitatory cells is consistent with the notion of Hebbian cell assemblies (Hebb, 1949), which are considered an essential component of neural circuits and the putative basis of associative memory (Harris, 2005; Josselyn and Tonegawa, 2020). Computationally, Hebbian cell assemblies can amplify specific activity patterns through positive feedback, also referred to as Hebbian amplification.
Based on these principles, several studies have shown that Hebbian amplification can drive persistent activity that outlasts a preceding stimulus (Hopfield, 1982; Amit and Brunel, 1997; Yakovlev et al., 1998; Wong and Wang, 2006; Zenke et al., 2015; Gillary et al., 2017), comparable to selective delay activity observed in the prefrontal cortex when animals are engaged in working memory tasks (Funahashi et al., 1989; Romo et al., 1999).
However, in most brain areas, evoked responses are transient and sensory neurons typically exhibit pronounced stimulus onset responses, after which the circuit dynamics settle into a low-activity steady-state even when the stimulus is still present (DeWeese et al., 2003; Mazor and Laurent, 2005; Bolding and Franks, 2018). Preventing runaway excitation and multistable attractor dynamics in recurrent networks requires powerful and often finely tuned feedback inhibition resulting in E-I balance (Amit and Brunel, 1997; Compte et al., 2000; Litwin-Kumar and Doiron, 2012; Ponce-Alvarez et al., 2013; Mazzucato et al., 2019). However, strong feedback inhibition tends to linearize steady-state activity (van Vreeswijk and Sompolinsky, 1996; Baker et al., 2020). Murphy and Miller (2009) proposed balanced amplification, which reconciles transient amplification with strong recurrent excitation by tightly balancing recurrent excitation with strong feedback inhibition (Goldman, 2009; Hennequin et al., 2012; Hennequin et al., 2014; Bondanelli and Ostojic, 2020; Gillett et al., 2020). Importantly, balanced amplification was formulated for linear network models of excitatory and inhibitory neurons. Due to linearity, it intrinsically lacks the ability to nonlinearly amplify stimuli, which limits its capabilities for pattern completion and pattern separation. Further, how balanced amplification relates to nonlinear neuronal activation functions and nonlinear synaptic transmission as, for instance, mediated by STP (Tsodyks and Markram, 1997; Markram et al., 1998; Zucker and Regehr, 2002; Pala and Petersen, 2015), remains elusive. This begs the question of whether there are alternative nonlinear amplification mechanisms and how they relate to existing theories of recurrent neural network processing.
Here, we address this question by studying an alternative mechanism for the emergence of transient dynamics that relies on recurrent excitation, supralinear neuronal activation functions, and STP. Specifically, we build on the notion of ensemble synchronization in recurrent networks with STP (Loebel and Tsodyks, 2002; Loebel et al., 2007) and study this phenomenon in analytically tractable network models with rectified quadratic activation functions (Ahmadian et al., 2013; Rubin et al., 2015; Hennequin et al., 2018; Kraynyukova and Tchumatchenko, 2018) and STP. We first characterize the conditions under which individual neuronal ensembles with supralinear activation functions and recurrent excitatory connectivity succumb to explosive runaway activity in response to external stimulation. We then show how STP effectively mitigates this instability by restabilizing ensemble dynamics in an inhibition-stabilized network (ISN) state, but only after generating a pronounced stimulus-triggered onset transient. We call this mechanism NTA and show that it yields selective onset responses that carry more relevant stimulus information than the subsequent steady-state. Finally, we characterize the functional benefits of inhibitory co-tuning, a feature that is widely observed in the brain (Wehr and Zador, 2003; Froemke et al., 2007; Okun and Lampl, 2008; Rupprecht and Friedrich, 2018) and readily emerges in computational models endowed with activity-dependent plasticity of inhibitory synapses (Vogels et al., 2011). We find that co-tuning prevents persistent attractor states but does not preclude NTA from occurring. Importantly, NTA entails that, following transient amplification, neuronal ensembles settle into a stable ISN state, consistent with recent studies suggesting that inhibition stabilization is a ubiquitous feature of cortical networks (Sanzeni et al., 2020).
In summary, our work indicates that NTA is ideally suited to amplify stimuli rapidly through the interaction of strong recurrent excitation with STP.
Results
To understand the emergence of transient responses in recurrent neural networks, we studied rate-based population models with a supralinear, power law input-output function (Figure 1A and B; Ahmadian et al., 2013; Hennequin et al., 2018), which captures essential aspects of neuronal activation (Priebe et al., 2004), while also being analytically tractable. We first considered an isolated neuronal ensemble consisting of one excitatory (E) and one inhibitory (I) population (Figure 1A).
The dynamics of this network are given by

$$\tau_E \frac{\mathrm{d}r_E}{\mathrm{d}t} = -r_E + \left[J_{EE}\,r_E - J_{EI}\,r_I + g_E\right]_+^{\alpha_E}, \qquad (1)$$

$$\tau_I \frac{\mathrm{d}r_I}{\mathrm{d}t} = -r_I + \left[J_{IE}\,r_E - J_{II}\,r_I + g_I\right]_+^{\alpha_I}, \qquad (2)$$

where $[\cdot]_+$ denotes rectification, ${r}_{E}$ and ${r}_{I}$ are the firing rates of the excitatory and inhibitory population, ${\tau}_{E}$ and ${\tau}_{I}$ represent the corresponding time constants, ${J}_{XY}$ denotes the synaptic strength from population $Y$ to population $X$, where $X,Y\in \{E,I\}$, and ${g}_{E}$ and ${g}_{I}$ are the external inputs to the respective populations. Finally, ${\alpha}_{E}$ and ${\alpha}_{I}$, the exponents of the respective input-output functions, are fixed at two unless mentioned otherwise. For ease of notation, we further define the weight matrix $\mathbf{J}$ of the compound system as follows:

$$\mathbf{J} = \begin{pmatrix} J_{EE} & -J_{EI} \\ J_{IE} & -J_{II} \end{pmatrix}.$$
We were specifically interested in networks with strong recurrent excitation that can generate positive feedback dynamics in response to external inputs ${g}_{E}$. Therefore, we studied networks with

$$\det(\mathbf{J}) = J_{EI}J_{IE} - J_{EE}J_{II} < 0.$$
In contrast, networks in which recurrent excitation is met by strong feedback inhibition such that $\mathrm{d}\mathrm{e}\mathrm{t}(\mathbf{J})>0$ are unable to generate positive feedback dynamics provided that inhibition is fast enough (Ahmadian et al., 2013). Importantly, we assumed that most inhibition originates from recurrent connections (Franks et al., 2011; Large et al., 2016) and, hence, we kept the input to the inhibitory population ${g}_{I}$ fixed unless mentioned otherwise.
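The two-population dynamics above can be sketched numerically. The following minimal forward-Euler simulation uses illustrative parameter values (not taken from the paper) chosen so that $\det(\mathbf{J}) < 0$:

```python
# Illustrative parameters (not the paper's values), chosen so that
# det(J) = J_EI*J_IE - J_EE*J_II = -0.08 < 0 (positive-feedback regime).
J_EE, J_EI, J_IE, J_II = 1.8, 1.0, 1.0, 0.6
tau_E, tau_I = 0.02, 0.01            # population time constants (s)
alpha_E = alpha_I = 2.0

def f(z, alpha):
    """Supralinear (rectified power-law) input-output function."""
    return max(z, 0.0) ** alpha

def simulate(g_E, g_I, T=0.5, dt=1e-4, r_max=1e6):
    """Forward-Euler integration of the two-population rate dynamics."""
    r_E = r_I = 0.0
    for _ in range(int(T / dt)):
        z_E = J_EE * r_E - J_EI * r_I + g_E
        z_I = J_IE * r_E - J_II * r_I + g_I
        r_E += dt / tau_E * (-r_E + f(z_E, alpha_E))
        r_I += dt / tau_I * (-r_I + f(z_I, alpha_I))
        if r_E > r_max:              # runaway excitation: stop integrating
            break
    return r_E, r_I
```

With a weak input the network settles at a stable low-activity fixed point, whereas an input above the critical strength triggers runaway excitation (detected here by the `r_max` cutoff).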
Nonlinear amplification of inputs above a critical threshold
We initialized the network in a stable low-activity state in the absence of external stimulation, consistent with spontaneous activity in cortical networks (Figure 1C). However, an input ${g}_{E}$ of sufficient strength destabilized the network (Figure 1C). Importantly, this behavior is distinct from linear network models in which network stability is independent of inputs (Materials and methods). The transition from stable to unstable dynamics can be understood by examining the phase portrait of the system (Figure 1D). Before stimulation, the system has a stable and an unstable fixed point (Figure 1D, left). However, both fixed points disappear for an input ${g}_{E}$ above a critical stimulus strength (Figure 1D, right).
To further understand the system’s bifurcation structure, we consider the characteristic function, which for ${\alpha}_{E}={\alpha}_{I}=2$ follows from eliminating ${r}_{I}$ from the fixed-point equations:

$$F(z) = \sqrt{J_{EI}\left(J_{EE}[z]_+^2 - z + g_E\right)} - \det(\mathbf{J})\,[z]_+^2 - J_{II}\left(z - g_E\right) - J_{EI}\,g_I,$$
where $z$ denotes the total current into the excitatory population and $\det(\mathbf{J})$ represents the determinant of the weight matrix (Kraynyukova and Tchumatchenko, 2018; Materials and methods). The characteristic function reduces the original two-dimensional system to one dimension, whereby the zero crossings of the characteristic function correspond to the fixed points of the original system (Eqs. (1)–(2)). We use this correspondence to visualize how the fixed points of the system change with the input ${g}_{E}$. Increasing ${g}_{E}$ shifts $F(z)$ upwards, which eventually leads to all zero crossings disappearing and the ensuing unstable dynamics (Figure 1E; Materials and methods). Importantly, for any weight matrix $\mathbf{J}$ with negative determinant, there exists a critical input ${g}_{E}$ at which all fixed points disappear (Materials and methods). While the transition from stable to unstable dynamics is gradual for weak recurrent E-to-E connection strength ${J}_{EE}$, in that it occurs at higher firing rates (Figure 1F), it becomes more abrupt for stronger ${J}_{EE}$. Thus, our analysis demonstrates that individual neuronal ensembles with negative determinant $\det(\mathbf{J})$ nonlinearly amplify inputs above a critical threshold by switching from initially stable to unstable dynamics.
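The correspondence between zero crossings and fixed points can be checked numerically. The sketch below eliminates the inhibitory rate from the fixed-point equations (for $\alpha_E = \alpha_I = 2$ the inhibitory nullcline reduces to a quadratic in $\sqrt{r_I}$) and counts sign changes of the resulting residual; all parameter values are illustrative, not the paper's:

```python
import numpy as np

# Illustrative parameters with det(J) = J_EI*J_IE - J_EE*J_II < 0.
J_EE, J_EI, J_IE, J_II, g_I = 1.8, 1.0, 1.0, 0.6, 0.1

def residual(z, g_E):
    """Self-consistency residual for the total excitatory current z.

    For alpha = 2, r_E = [z]_+^2; the inhibitory nullcline
    r_I = [J_IE*r_E - J_II*r_I + g_I]_+^2 is solved in closed form as a
    quadratic in sqrt(r_I). Zero crossings of the residual are the
    fixed points of the two-population system.
    """
    r_E = np.maximum(z, 0.0) ** 2
    y = (-1.0 + np.sqrt(1.0 + 4.0 * J_II * (J_IE * r_E + g_I))) / (2.0 * J_II)
    r_I = y ** 2
    return J_EE * r_E - J_EI * r_I + g_E - z

def n_fixed_points(g_E, z_max=50.0, n=200_000):
    """Count sign changes of the residual on a dense grid of z."""
    z = np.linspace(0.0, z_max, n)
    s = np.sign(residual(z, g_E))
    return int(np.sum(s[:-1] * s[1:] < 0))
```

For a weak input this yields two fixed points (one stable, one unstable); above the critical input all zero crossings disappear.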
Short-term plasticity, but not spike-frequency adaptation, can restabilize ensemble dynamics
Since unstable dynamics are not observed in neurobiology, we wondered whether neuronal spike-frequency adaptation (SFA) or STP could restabilize the ensemble dynamics while keeping the nonlinear amplification character of the system. Specifically, we considered SFA of excitatory neurons, E-to-E short-term depression (STD), and E-to-I short-term facilitation (STF). We focused on these particular mechanisms because they are ubiquitously observed in the brain. Most pyramidal cells exhibit SFA (Barkai and Hasselmo, 1994) and most synapses show some form of STP (Markram et al., 1998; Zucker and Regehr, 2002; Pala and Petersen, 2015). Moreover, the time scales of these mechanisms are well-matched to typical timescales of perception, ranging from milliseconds to seconds (Tsodyks and Markram, 1997; Fairhall et al., 2001; Pozzorini et al., 2013).
When we simulated our model with SFA (Eqs. (21)–(23)), we observed different network behaviors depending on the adaptation strength. When adaptation was weak, SFA was unable to stabilize runaway excitation (Figure 2A; Materials and methods). Increasing the adaptation strength eventually prevented runaway excitation, but only by giving way to oscillatory ensemble activity (Figure 2—figure supplement 1). Finally, we confirmed analytically that SFA cannot stabilize excitatory runaway dynamics at a stable fixed point (Materials and methods). In particular, while the input is present, strong SFA creates a stable limit cycle with associated oscillatory ensemble activity (Figure 2—figure supplement 1; Materials and methods), which was also shown in previous modeling studies (van Vreeswijk and Hansel, 2001), but is not typically observed in sensory systems (DeWeese et al., 2003; Rupprecht and Friedrich, 2018).
Next, we considered STP, which is capable of saturating the effective neuronal input-output function (Mongillo et al., 2012; Zenke et al., 2015; Eqs. (37)–(39), Eqs. (41)–(43)). We first analyzed the stimulus-evoked network dynamics when we added STD to the recurrent E-to-E connections. Strong depression of synaptic efficacy resulted in a brief onset transient after which the ensemble dynamics quickly settled into a stimulus-evoked steady-state with slightly higher activity than the baseline (Figure 2B, left). After stimulus removal, the ensemble activity returned to its baseline level (Figure 2B, left; Figure 2C). Notably, the ensemble dynamics settled at a stable steady-state with a much higher firing rate when inhibition was inactivated during stimulus presentation (Figure 2B, right). This shows that STP is capable of creating a stable high-activity fixed point, which is fundamentally different from the SFA dynamics discussed above. This difference in ensemble dynamics can be readily understood by analyzing the stability of the three-dimensional dynamical system (Materials and methods). We can gain a more intuitive understanding by considering self-consistent solutions of the characteristic function $F(z)$. Initially, the ensemble is at the stable low-activity fixed point. But the stimulus causes this fixed point to disappear, thus giving way to positive feedback which creates the leading edge of the onset transient (Figure 2B). However, because E-to-E synaptic transmission is rapidly reduced by STD, the curvature of $F(z)$ changes and a stable fixed point is created, thereby allowing excitatory runaway dynamics to terminate and the ensemble dynamics to settle into a steady-state at low activity levels (Figure 2D).
We found that E-to-I STF leads to similar dynamics (Figure 2E, left; Appendix 1) with the only difference that this configuration requires inhibition for network stability (Figure 2E, right), whereas E-to-E STD stabilizes activity even without inhibition, albeit at physiologically implausibly high activity levels. Importantly, the restabilization through either form of STP did not impair an ensemble’s ability to amplify stimuli during the initial onset phase.
Crucially, transient amplification in supralinear networks with STP occurs above a critical threshold (Figure 2—figure supplement 2), and requires recurrent excitation ${J}_{\mathrm{EE}}$ to be sufficiently strong (Figure 2—figure supplement 2C, D). To quantify the amplification ability of these networks, we calculated the ratio of the evoked peak firing rate to the input strength, henceforth called the ‘Amplification index’. We found that amplification in STP-stabilized supralinear networks can be orders of magnitude larger than in linear networks with equivalent weights and comparable stabilized supralinear networks (SSNs) without STP (Figure 2—figure supplement 3). We stress that the resulting firing rates are parameter-dependent (Figure 2—figure supplement 4) and their absolute value can be high due to the high temporal precision of the onset peak and its short duration. In experiments, such high rates manifest themselves as precisely time-locked spikes with millisecond resolution (DeWeese et al., 2003; Wehr and Zador, 2003; Bolding and Franks, 2018; Gjoni et al., 2018).
Recent studies suggest that cortical networks operate as inhibition-stabilized networks (ISNs) (Sanzeni et al., 2020; Sadeh and Clopath, 2021), in which the excitatory network is unstable in the absence of feedback inhibition (Tsodyks et al., 1997; Ozeki et al., 2009). We therefore investigated how ensemble restabilization relates to the network operating regime at baseline and during stimulation. Whether a network is an ISN or not is mathematically determined by the real part of the leading eigenvalue of the Jacobian of the excitatory-to-excitatory subnetwork (Tsodyks et al., 1997). We computed the leading eigenvalue in our model incorporating STP and refer to it as the ‘ISN index’ (Materials and methods; Appendix 2). We found that in networks with STP the ISN index can switch sign from negative to positive during external stimulation, indicating that the ensemble can transition from a non-ISN to an ISN state (Figure 2F). Notably, this behavior is distinct from linear network models in which the network operating regime is independent of the input (Materials and methods). Whether this switch from non-ISN to ISN occurred, however, was parameter dependent, and we also found network configurations that were already in the ISN regime at baseline and remained ISNs during stimulation (Figure 2—figure supplement 5). Thus, restabilization was largely unaffected by the network state and consistent with experimentally observed ISN states (Sanzeni et al., 2020).
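Under the quadratic activation $f(z)=[z]_+^2$, the gain at a fixed point is $f^{\prime}(z_E)=2\sqrt{r_E}$, so a simple quasi-static proxy for the ISN index can be sketched as follows (illustrative values; the full derivation with STP is in the paper's Materials and methods and appendices):

```python
J_EE, tau_E = 1.8, 0.02   # illustrative E-to-E weight and time constant (s)

def isn_index(r_E, x=1.0):
    """Quasi-static ISN proxy: growth rate of the E subnetwork with
    inhibition frozen.

    For f(z) = [z]_+^2 the fixed-point gain is f'(z_E) = 2*sqrt(r_E);
    the depression variable x (treated as quasi-static, an assumption)
    scales the effective E-to-E weight. Positive values mean the E
    subnetwork alone is unstable, i.e. an ISN regime.
    """
    return (-1.0 + 2.0 * x * J_EE * r_E ** 0.5) / tau_E
```

At a low-rate baseline the index is negative (non-ISN); at the elevated stimulus-evoked rate it turns positive even with partially depressed synapses, capturing the input-dependent transition into the ISN regime.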
Theoretical studies have shown that one defining characteristic of ISNs in static excitatory and inhibitory networks is that injecting excitatory (inhibitory) current into inhibitory neurons decreases (increases) inhibitory firing rates, which is also known as the paradoxical effect (Tsodyks et al., 1997; Miller and Palmigiano, 2020). Yet, it is unclear whether in networks with STP inhibitory stabilization implies a paradoxical response and vice versa. We therefore analyzed the condition for being an ISN and the condition for having a paradoxical response in networks with STP (Materials and methods; Appendix 2; Appendix 3). Interestingly, we found that in networks with E-to-E STD, the paradoxical effect implies inhibitory stabilization, whereas inhibitory stabilization does not necessarily imply a paradoxical response (Figure 2G; Materials and methods), suggesting that the paradoxical effect is a sufficient but not a necessary condition for being an ISN. In contrast, in networks with E-to-I STF, inhibitory stabilization and the paradoxical effect imply each other (Appendix 2; Appendix 3). Therefore, the paradoxical effect can be exploited as a proxy for inhibition stabilization in the networks with STP considered here. By injecting excitatory current into the inhibitory population, we found that the network did not exhibit the paradoxical effect before stimulation (Figure 2H, left; Figure 2—figure supplement 6). In contrast, injecting excitatory inputs into the inhibitory population during stimulation reduced its activity (Figure 2H, right; Figure 2—figure supplement 6). As demonstrated in our analysis, a non-paradoxical response does not imply a non-ISN state (Figure 2—figure supplement 7; Materials and methods). We therefore examined inhibition stabilization directly by probing the ensemble's response to a small transient perturbation of the excitatory population activity while inhibition was frozen, both before and during stimulation.
Before stimulation, the firing rate of the excitatory population increased slightly and then returned to baseline after the transient perturbation (Figure 2—figure supplement 8). During stimulation, however, the transient perturbation led to a transient explosion of the excitatory firing rate (Figure 2—figure supplement 8). These results further confirm that the ensemble shown in our example is initially a non-ISN before stimulation and can transition to an ISN with stimulation. By elevating the baseline input in the model, the ensemble can instead be an ISN already at baseline (Figure 2—figure supplement 5), resembling recent studies revealing that cortical circuits in mouse V1 operate as ISNs in the absence of sensory stimulation (Sanzeni et al., 2020).
Despite the fact that the supralinear input-output function of our framework captures some aspects of intracellular recordings (Priebe et al., 2004), it is unbounded and thus allows infinitely high firing rates. This is in contrast to neurobiology, where firing rates are bounded due to neuronal refractory effects. While this assumption permitted us to analytically study the system and therefore to gain a deeper understanding of the underlying ensemble dynamics, we wondered whether our main conclusions were also valid when we limited the maximum firing rates. To that end, we carried out the same simulations while capping the firing rate at 300 Hz. In the absence of additional SFA or STP mechanisms, the firing rate saturation introduced a stable high-activity state in the ensemble dynamics which replaced the unstable dynamics of the uncapped model. As above, the ensemble entered this high-activity steady-state when stimulated with an external input above a critical threshold and exhibited persistent activity after stimulus removal (Figure 2—figure supplement 9). While weak SFA did not change this behavior, strong SFA resulted in oscillatory behavior during stimulation consistent with previous analytical work (Figure 2—figure supplement 9; van Vreeswijk and Hansel, 2001), but did not result in the stable steady-states commonly observed in biological circuits. In the presence of E-to-E STD or E-to-I STF, however, the ensemble exhibited transient evoked activity at stimulation onset that was comparable to the uncapped case. Importantly, the ensemble did not show persistent activity after the stimulation (Figure 2—figure supplement 9). Finally, we confirmed that all of these findings were qualitatively similar in a realistic spiking neural network model (Figure 2—figure supplement 10; Materials and methods).
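The capped-rate variant can be sketched as follows (illustrative parameters; the cap plays the role of the 300 Hz saturation described above). Without STD or SFA, the cap replaces runaway excitation with a stable high-activity state that persists after stimulus removal:

```python
# Illustrative parameters with det(J) < 0, plus a hard firing-rate cap.
J_EE, J_EI, J_IE, J_II = 1.8, 1.0, 1.0, 0.6
tau_E, tau_I = 0.02, 0.01
R_MAX = 300.0                         # firing-rate cap (Hz)

def f_capped(z):
    """Rectified quadratic activation, saturated at R_MAX."""
    return min(max(z, 0.0) ** 2, R_MAX)

def run_capped(g_E_stim, g_base=0.1, g_I=0.1,
               T_pre=0.2, T_stim=0.3, T_post=0.5, dt=1e-4):
    """Three-phase protocol: baseline, stimulation, post-stimulus."""
    r_E = r_I = 0.0
    trace = []
    for T, g_E in [(T_pre, g_base), (T_stim, g_E_stim), (T_post, g_base)]:
        for _ in range(int(T / dt)):
            z_E = J_EE * r_E - J_EI * r_I + g_E
            z_I = J_IE * r_E - J_II * r_I + g_I
            r_E += dt / tau_E * (-r_E + f_capped(z_E))
            r_I += dt / tau_I * (-r_I + f_capped(z_I))
        trace.append(r_E)       # r_E at the end of each phase
    return trace
```

A supra-critical stimulus drives the ensemble to the saturated state, which remains occupied even after the input is withdrawn, i.e. the capped network exhibits persistent activity in the absence of STP.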
In summary, we found that neuronal ensembles can rapidly, nonlinearly, and transiently amplify inputs by briefly switching from stable to unstable dynamics before being restabilized through STP mechanisms. We call this mechanism nonlinear transient amplification (NTA) which, in contrast to balanced amplification (Murphy and Miller, 2009; Hennequin et al., 2012), arises from population dynamics with supralinear neuronal activation functions interacting with STP. While we acknowledge that there may be other nonlinear transient amplification mechanisms, in this article we restrict our analysis to the definition above. NTA is characterized by a large onset response, a subsequent ISN steady-state while the stimulus persists, and a return to a unique baseline activity state after the stimulus is removed. Thus, NTA is ideally suited to rapidly and nonlinearly amplify sensory inputs through recurrent excitation, as reported experimentally (Ko et al., 2011; Cossell et al., 2015), while avoiding persistent activity.
Co-tuned inhibition broadens the parameter regime of NTA in the absence of persistent activity
Up to now, we have focused on a single neuronal ensemble. However, to process information in the brain, several ensembles with different stimulus selectivity presumably coexist and interact in the same circuit. This coexistence creates potential problems. It can lead to multistable persistent attractor dynamics, which are not commonly observed and could have adverse effects on the processing of subsequent stimuli. One solution to this issue could be E-I co-tuning, which arises in network models with plastic inhibitory synapses (Vogels et al., 2011) and has been observed experimentally in several sensory systems (Wehr and Zador, 2003; Froemke et al., 2007; Okun and Lampl, 2008; Rupprecht and Friedrich, 2018).
To characterize the conditions under which neuronal ensembles nonlinearly amplify stimuli without persistent activity, we analyzed the case of two interacting ensembles. More specifically, we considered networks with two excitatory ensembles and distinguished between global and co-tuned inhibition (Figure 3A). In the case of global inhibition, one inhibitory population nonspecifically inhibits both excitatory populations (Figure 3A, left). In contrast, in networks with co-tuned inhibition, each ensemble is formed by a dedicated pair of an excitatory and an inhibitory population which can have crossover connections, for instance, due to overlapping ensembles (Figure 3A, right).
Global inhibition supports winner-take-all competition and is therefore often associated with multistable attractor dynamics (Wong and Wang, 2006; Mongillo et al., 2008). We first illustrated this effect in a network model with global inhibition. When the recurrent excitatory connections within each ensemble were sufficiently strong, small amounts of noise in the initial condition led to one of the ensembles spontaneously activating at elevated firing rates, while the other ensemble’s activity remained low (Figure 3B, left). Specific external stimulation could trigger a switch to the other state, in which the other ensemble was active at a high firing rate. Importantly, this change persisted even after the stimulus had been removed, a hallmark of multistable dynamics. In contrast, unistable systems have a global symmetric state in which both ensembles have the same activity in the absence of stimulation. While the stimulated ensemble showed elevated firing rates in response to the stimulus, its activity returned to the baseline level after the stimulus was removed (Figure 3B, right), consistent with experimental observations (DeWeese et al., 2003; Rupprecht and Friedrich, 2018; Bolding and Franks, 2018). Note that the only difference between these two models is that ${J}_{EE}$ is larger in the multistable example than in the unistable one.
Symmetric baseline activity is most consistent with activity observed in sensory areas. Hence, we sought to understand which inhibitory connectivity would be most conducive to maintaining it. To that end, we analytically identified the unistability conditions, which are determined by the leading eigenvalue of the Jacobian matrix of the system, for networks with varying degrees of E-I co-tuning (Materials and methods). We found that a broader parameter regime underlies unistability in networks with co-tuned inhibition than with global inhibition (Figure 3C). Notably, this conclusion is general and extends to networks with an arbitrary number of ensembles (Materials and methods). In comparison to the ensemble with global inhibition, the ensemble with co-tuned inhibition exhibits weaker — but still strong — NTA (Figure 3—figure supplement 1). Thus, co-tuned inhibition broadens the parameter regime in which NTA is possible while simultaneously avoiding persistent attractor dynamics.
NTA provides better pattern completion than fixed points while retaining stimulus selectivity
Neural circuits are capable of generating stereotypical activity patterns in response to partial cues and forming distinct representations in response to different stimuli (CarrilloReid et al., 2016; Marshel et al., 2019; Bolding et al., 2020; Vinje and Gallant, 2000; CaycoGajic and Silver, 2019). To test whether NTA achieves pattern completion while retaining stimulus selectivity, we analyzed the transient onset activity in our models and compared it to the fixed point activity.
To investigate pattern completion and stimulus selectivity in our model, we considered a co-tuned network with E-to-E STD and two distinct excitatory ensembles $E1$ and $E2$. We gave additional input ${g}_{E1}$ to Subset 1, consisting of 75% of the neurons in ensemble $E1$ (Figure 4A). We then measured the evoked activity in the remaining 25% of the excitatory neurons in $E1$ (Subset 2) to quantify pattern completion. To assess stimulus selectivity, we injected additional input ${g}_{E1}$ into the entire $E1$ ensemble during the second stimulation phase (Figure 4A) while measuring the activity of $E2$. We found that neurons in Subset 2, which did not receive additional input, showed large onset responses, whereas their steady-state activity was largely suppressed (Figure 4B). Although inputs to $E1$ also caused increased transient onset responses in $E2$, the amount of increase was orders of magnitude smaller than in $E1$ (Figure 4B). To quantify pattern completion, we defined the Association index as the ratio of the activity of the unstimulated subset to that of the stimulated subset,
$$\text{Association index} = \frac{r_{E{1}_{2}}}{r_{E{1}_{1}}},$$
where ${r}_{E{1}_{1}}$ and ${r}_{E{1}_{2}}$ correspond to the activities of the stimulated (Subset 1) and unstimulated (Subset 2) subpopulations of $E1$, respectively. By definition, the Association index ranges from zero to one, with larger values indicating stronger associativity. In addition, to quantify the selectivity between $E1$ and $E2$, we considered a symmetric binary classifier (Figure 4A, inset) and measured the distance to the decision boundary (Materials and methods). Note that the Association index was computed during Phase one and the distance to the decision boundary during Phase two in this simulation paradigm (Figure 4B).
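The two readouts can be sketched as follows, assuming the Association index is the ratio of unstimulated- to stimulated-subset activity and the decision boundary is the diagonal $r_{E1} = r_{E2}$ (both assumed forms; the paper's exact definitions are in its Materials and methods):

```python
def association_index(r_e1_stim, r_e1_unstim):
    """Ratio of unstimulated- to stimulated-subset activity in E1.

    Assumed form: ranges from 0 to 1 whenever the unstimulated subset
    is at most as active as the stimulated one; 0 means the
    unstimulated subset is fully suppressed.
    """
    return r_e1_unstim / r_e1_stim

def boundary_distance(r_e1, r_e2):
    """Euclidean distance of the point (r_E1, r_E2) to the symmetric
    decision boundary r_E1 = r_E2 (an assumed linear classifier)."""
    return abs(r_e1 - r_e2) / 2 ** 0.5
```

For instance, a fully suppressed unstimulated subset gives an Association index of zero, while a large gap between the two ensemble rates gives a large distance to the boundary, i.e. high selectivity.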
With these definitions, we ran simulations with different input strengths ${g}_{E1}$. We found that the onset peaks showed stronger association than the fixed point activity (Figure 4C). Note that the Association index at the fixed point remained zero, a direct consequence of ${r}_{E{1}_{2}}$ being suppressed to zero. Furthermore, we found that the distance between the transient onset response and the decision boundary was always greater than for the fixed point activity (Figure 4D) showing that onset responses retain stimulus selectivity. While the fixed point activity of the unstimulated co-tuned neurons is zero in the given example, stimulating a subset of neurons in one ensemble can lead to an increase in the fixed point activity of the unstimulated neurons in the same ensemble under certain conditions (Figure 4—figure supplement 1; Appendix 4), which is consistent with pattern completion experiments (Carrillo-Reid et al., 2016; Marshel et al., 2019) showing that unstimulated neurons from the same ensemble can remain active throughout the whole stimulation period.
To investigate how the recurrent excitatory connectivity affects both pattern completion and stimulus selectivity, we introduced the parameter $\beta$ which controls recurrent excitatory tuning by trading off within-ensemble E-to-E strength ${J}_{EE}$ against the inter-ensemble strength ${J}_{EE}^{\prime}$ (Figure 4A) such that ${J}_{EE}=\beta {J}_{\mathrm{tot}}$ and ${J}_{EE}^{\prime}=(1-\beta){J}_{\mathrm{tot}}$. These definitions ensure that the total weight ${J}_{\mathrm{tot}}={J}_{EE}+{J}_{EE}^{\prime}$ remains constant for any choice of $\beta$. Notably, the overall recurrent excitation strength within an ensemble ${J}_{EE}$ increases with increasing $\beta$. When $\beta$ is larger than 0.5, the excitatory connection strength within the ensemble ${J}_{EE}$ exceeds the one between ensembles ${J}_{EE}^{\prime}$.
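This parameterization can be written compactly (the `J_tot` value is illustrative, not the paper's):

```python
J_TOT = 2.4   # illustrative total E-to-E weight budget

def split_weights(beta, j_tot=J_TOT):
    """Within- vs between-ensemble E-to-E weights for tuning beta.

    J_EE = beta * J_tot and J_EE' = (1 - beta) * J_tot, so the total
    J_EE + J_EE' is conserved for any beta in [0, 1].
    """
    j_ee = beta * j_tot              # within-ensemble strength
    j_ee_prime = (1.0 - beta) * j_tot  # between-ensemble strength
    return j_ee, j_ee_prime
```

For $\beta > 0.5$ the within-ensemble weight exceeds the between-ensemble weight, i.e. excitation is tuned to the ensemble.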
We found that pattern completion ability monotonically increases with $\beta $ with a pronounced onset for $\beta > 0.6$ where NTA takes hold (Figure 4E). Moreover, in this regime the two stimulus representations are well separated (Figure 4F) which ensures stimulus selectivity also during onset transients. Together, these findings recapitulate the point that recurrent excitatory tuning is a key determinant of network dynamics. Finally, we confirmed that our findings were also valid in networks with E-to-I STF (Figure 4—figure supplement 2), which is commonly observed in the brain (Markram et al., 1998; Zucker and Regehr, 2002; Pala and Petersen, 2015). In summary, NTA’s transient onset responses maintain stimulus selectivity and result in overall better pattern completion than fixed point activity.
NTA provides higher amplification and pattern separation in morphing experiments
So far, we only considered input to one ensemble. To examine how representations in our model are affected by ambiguous inputs to several ensembles, we performed additional morphing experiments (Freedman et al., 2001; Niessing and Friedrich, 2010). To that end, we introduced the parameter $p$ which interpolates between two input stimuli which target $E1$ and $E2$ respectively. When $p$ is zero, all additional input is injected into $E1$. For $p$ equal to one, all additional input is injected into $E2$. Finally, $p$ equal to 0.5 corresponds to the symmetric case in which $E1$ and $E2$ receive the same amount of additional input (Figure 5A).
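As a sketch, the morphing parameter can be implemented as a linear split of the additional input between the two ensembles; the function name and the baseline input `g_base` are illustrative assumptions:

```python
# Interpolate the additional input between ensembles E1 and E2.
# p = 0: all extra input to E1; p = 1: all to E2; p = 0.5: symmetric.
def morph_inputs(p, g_extra, g_base=2.0):
    g_E1 = g_base + (1.0 - p) * g_extra
    g_E2 = g_base + p * g_extra
    return g_E1, g_E2
```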
First, we investigated how the recurrent excitatory connection strength within each ensemble ${J}_{EE}$ affects the onset peak amplitude and fixed point activity. We found that the peak amplitudes depend strongly on ${J}_{EE}$, whereas the fixed point activity was only weakly dependent on ${J}_{EE}$ (Figure 5B and C). When we disconnected the ensembles by completely eliminating all recurrent excitatory connections, activity was noticeably decreased (Figure 5B and C). This illustrates that recurrent excitation plays an important role in selectively amplifying specific stimuli, similar to experimental observations (Marshel et al., 2019; Peron et al., 2020), but that amplification is highest at the onset.
Further, we examined the impact of competition through lateral inhibition as a function of the E-to-I inter-ensemble strength ${J}_{IE}^{{}^{\prime}}$ (Materials and methods). As above, we quantified its impact by measuring the representational distance to the decision boundary for the transient onset responses and fixed point activity. We found that regardless of the specific STP mechanism, the distance was larger for the onset responses than for the fixed point activity, consistent with the notion that the onset dynamics separate stimulus identity reliably (Figure 5D and E). Since the absolute activity levels between onset and fixed point differed substantially, we further computed the relative pattern Separation index $({r}_{E2}-{r}_{E1})/({r}_{E1}+{r}_{E2})$ and found that the onset transient provides better pattern separation ability for ambiguous stimuli with $p$ close to 0.5 (Figure 5—figure supplement 1) provided that the E-to-I connection strength across ensembles ${J}_{IE}^{{}^{\prime}}$ is strong enough. At the same time, separability for the onset transient was slightly decreased for distinct inputs with $p\in \{0,1\}$ in comparison to the fixed point. In contrast, fixed points clearly separated such pure stimuli while providing weaker pattern separation for ambiguous input combinations. Importantly, these findings qualitatively held for networks with NTA mediated by E-to-I STF (Figure 5—figure supplement 2). Thus, NTA provides stronger amplification and pattern separation than fixed point activity in response to ambiguous stimuli.
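The relative pattern Separation index defined above can be computed directly; the zero-activity guard is our addition:

```python
# Separation index (r_E2 - r_E1) / (r_E1 + r_E2): ranges from -1 (only
# E1 active) to +1 (only E2 active); 0 indicates an ambiguous response.
def separation_index(r_E1, r_E2):
    total = r_E1 + r_E2
    if total == 0.0:
        return 0.0  # no activity: treat as fully ambiguous
    return (r_E2 - r_E1) / total
```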
NTA in spiking neural networks
Thus far, our analysis relied on power law neuronal input-output functions in the interest of analytical tractability. To test whether our findings also qualitatively apply to more realistic network models, we built a spiking neural network consisting of 800 excitatory and 200 inhibitory randomly connected neurons, in which the E-to-E synaptic connections were subject to STD (Materials and methods). Here, we defined five overlapping ensembles, each corresponding to 200 randomly selected excitatory neurons. During an initial simulation phase (0–22 s), we consecutively stimulated each ensemble by giving additional input to their excitatory neurons, whereas the input to other neurons remained unchanged (Figure 6A). In addition, we also tested pattern completion by stimulating only 75% (Subset 1) of the neurons belonging to Ensemble 5 (22–24 s; Figure 6A). We quantified each ensemble’s activity by calculating the population firing rate of the ensemble (Materials and methods). As in the case of the rate-based model, the neuronal ensembles in the spiking model generated pronounced transient onset responses. We then measured the difference of peak ensemble activity and fixed point activity between the stimulated ensemble and the remaining unstimulated ensembles (Materials and methods). As for the rate-based networks, this difference was consistently larger for the onset peak than for the fixed point (Figure 6B and C). Thus, transient onset responses allow better stimulus separation than fixed points also in spiking neural network models.
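A population firing rate of the kind used here can be estimated by counting an ensemble's spikes in a time window; the array names and the half-open window convention below are ours, not from the published code:

```python
import numpy as np

# Average firing rate (Hz) of one ensemble within [t_start, t_stop).
# spike_times and neuron_ids are parallel arrays, one entry per spike;
# ensemble_ids lists the neuron indices belonging to the ensemble.
def population_rate(spike_times, neuron_ids, ensemble_ids, t_start, t_stop):
    in_window = (spike_times >= t_start) & (spike_times < t_stop)
    in_ensemble = np.isin(neuron_ids, ensemble_ids)
    n_spikes = np.count_nonzero(in_window & in_ensemble)
    return n_spikes / (len(ensemble_ids) * (t_stop - t_start))
```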
Finally, to visualize the neural activity, we projected the binned spiking activity during the first 10 s of our simulation onto its first two principal components. Notably, the PC trajectory does not exhibit a pronounced rotational component (Figure 6D) as activity is confined to one specific ensemble, consistent with experiments (Marshel et al., 2019). Furthermore, we computed the fifth ensemble’s activity for Subsets 1 and 2 during the time interval 16–26 s. In agreement with our rate models, neurons in Subset 2, which did not receive additional inputs, showed a strong response at the onset (Figure 6E), but not at the fixed point, suggesting that the strongest pattern completion occurs during the initial amplification phase. Finally, we also observed higher-than-baseline fixed point activity in unstimulated neurons of Subset 2 in spiking neural networks (Figure 6—figure supplement 1). Thus, the key characteristics of NTA are preserved across rate-based and more realistic spiking neural network models.
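The projection onto the first two principal components can be done with a plain SVD; this is a minimal sketch, with binning and preprocessing details left unspecified:

```python
import numpy as np

# Project binned population activity (time x neurons) onto its first
# two principal components via SVD of the mean-centered data matrix.
def project_pc2(binned_rates):
    X = binned_rates - binned_rates.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T  # (time x 2) trajectory
```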
Discussion
In this study, we demonstrated that neuronal ensemble models with recurrent excitation and suitable forms of STP exhibit nonlinear transient amplification (NTA), a putative mechanism underlying selective amplification in recurrent circuits. NTA combines a supralinear neuronal transfer function, recurrent excitation between neurons with similar tuning, and pronounced STP. Using analytical and numerical methods, we showed that NTA generates rapid transient onset responses during which stimulus separation is optimal, rather than at steady states. Additionally, we showed that co-tuned inhibition helps prevent the emergence of persistent activity, which could otherwise interfere with the processing of subsequent stimuli. In contrast to balanced amplification (Murphy and Miller, 2009), NTA is an intrinsically nonlinear mechanism for which only stimuli above a critical threshold are amplified effectively. While the precise threshold value is parameter-dependent, it can be arbitrarily low provided the excitatory recurrent connections are sufficiently strong (Figure 1F). Importantly, such a critical activation threshold offers a possible explanation for sensory perception experiments which show similar threshold behavior (Marshel et al., 2019; Peron et al., 2020). Following transient amplification, ensemble dynamics are inhibition-stabilized, which renders our model compatible with existing work on SSNs (Ahmadian et al., 2013; Rubin et al., 2015; Hennequin et al., 2018; Kraynyukova and Tchumatchenko, 2018; Echeveste et al., 2020). Thus, NTA provides a parsimonious explanation for why sensory systems may rely upon neuronal ensembles with recurrent excitation in combination with E-I co-tuning and pronounced STP dynamics.
Several theoretical studies have approached the problem of transient amplification in recurrent neural network models. Loebel and Tsodyks, 2002 described an NTA-like mechanism as a driver for powerful ensemble synchronization in rate-based networks and in spiking neural network models of auditory cortex (Loebel et al., 2007). Here, we generalized this work to both E-to-E STD and E-to-I STF and provided an in-depth characterization of its amplification capabilities, pattern completion properties, and the resulting network states with regard to their inhibition-stabilization properties. Moreover, we showed that SFA cannot provide similar network stabilization and explored how E-I co-tuning interacts with NTA. Finally, we contrasted NTA with alternative transient amplification mechanisms. Balanced amplification is a particularly well-studied transient amplification mechanism (Murphy and Miller, 2009; Goldman, 2009; Hennequin et al., 2014; Bondanelli and Ostojic, 2020; Gillett et al., 2020; Christodoulou et al., 2021) that relies on non-normality of the connectivity matrix to selectively and rapidly amplify stimuli. Importantly, balanced amplification occurs in networks in which strong recurrent excitation is appropriately balanced by strong recurrent inhibition. It is capable of generating rich transient activity in linear network models (Hennequin et al., 2014), and selectively amplifies specific activity patterns, but without a specific activation threshold. In addition, in spiking neural networks, strong input can induce synchronous firing at the population level which is subsequently stabilized by strong feedback inhibition without the requirement for STP mechanisms (Stern et al., 2018). These properties contrast with NTA, which has a nonlinear activation threshold and intrinsically relies on STP to stabilize otherwise unstable runaway dynamics.
Due to the switch of the network’s dynamical state, NTA’s amplification can be orders of magnitude larger than balanced amplification (Figure 2—figure supplement 3). Interestingly, after the transient amplification phase, ensemble dynamics settle into an inhibition-stabilized state, which renders NTA compatible with previous work on SSNs, but in the presence of STP. Finally, although NTA and balanced amplification rely on different amplification mechanisms, they are not mutually exclusive and could, in principle, coexist in biological networks.
NTA’s requirement to generate positive feedback dynamics through recurrent excitation motivated our focus on networks with $\text{det}(\mathbf{J})<0$. As demonstrated in previous work (Ahmadian et al., 2013), supralinear networks with $\text{det}(\mathbf{J})>0$ and instantaneous inhibition (${\tau}_{I}/{\tau}_{E}\to 0$) are stable for any given input and are thus unable to generate positive feedback dynamics. In addition, networks with $\text{det}(\mathbf{J})>0$ can exhibit a range of interesting behaviors, for example, oscillatory dynamics and persistent activity (Kraynyukova and Tchumatchenko, 2018). It is worth noting, however, that for delayed or slow inhibition, stimulation can still lead to unstable network dynamics in networks with $\text{det}(\mathbf{J})>0$. Nevertheless, our simulations suggest that our main conclusions about the stabilization mechanisms still hold (Figure 2—figure supplement 11).
NTA shares some properties with the notion of network criticality in the brain, like synchronous activation of cell ensembles (Plenz and Thiagarajan, 2007) and STP which can tune networks to a critical state (Levina et al., 2007). However, in contrast to most models of criticality, in NTA an ensemble briefly transitions to supercritical dynamics in a controlled, stimulus-dependent manner rather than spontaneously. Yet how the two paradigms are connected at a more fundamental level is an intriguing question left for future work. Furthermore, recurrent co-tuned inhibition is essential for NTA to ensure unistability and selectivity through the suppression of ensembles with different tuning. This requirement is similar in flavor to semi-balanced networks characterized by excess inhibition to some excitatory ensembles while others are balanced (Baker et al., 2020). However, the theory of semi-balanced networks has, so far, only been applied to steady-state dynamics while ignoring transients and STP. E-I co-tuning prominently features in several models and was shown to support network stability (Vogels et al., 2011; Hennequin et al., 2017; Znamenskiy et al., 2018), efficient coding (Denève and Machens, 2016), novelty detection (Schulz et al., 2021), changes in neuronal variability (Hennequin et al., 2018; Rost et al., 2018), and correlation structure (Wu et al., 2020). Moreover, some studies have argued that E-I balance and co-tuning could increase robustness to noise in the brain (Rubin et al., 2017). The present work mainly highlights its importance for preventing multistability and delay activity in circuits not requiring such long-timescale dynamics.
NTA is consistent with several experimental findings. First, our model recapitulates the key findings of Shew et al., 2015 who showed ex vivo that strong sensory inputs cause a transient shift to a supercritical state, after which adaptive changes rapidly tune the network to criticality. Second, NTA requires strong recurrent excitatory connectivity between neurons with similar tuning, which has been reported in experiments (Ko et al., 2011; Cossell et al., 2015; Peron et al., 2020). Third, ensemble activation in our model depends on a critical stimulus strength in line with recent all-optical experiments in the visual cortex, which further link ensemble activation with a perceptual threshold (Marshel et al., 2019). Fourth, sensory networks are unistable in that they return to a non-selective activity state after the removal of the stimulus and usually do not show persistent activity (DeWeese et al., 2003; Mazor and Laurent, 2005; Rupprecht and Friedrich, 2018). Fifth, our work shows that NTA’s onset responses encode stimulus identity better than the fixed point activity, consistent with experiments in the locust antennal lobe (Mazor and Laurent, 2005) and research supporting that the brain relies on coactivity on short timescales to represent information (Stopfer et al., 1997; Engel et al., 2001; Harris et al., 2003; El-Gaby et al., 2021). Yet, it remains to be seen whether these findings are also coherent with data on the temporal evolution in other sensory systems. Finally, E-I co-tuning, which is conducive for NTA, has been found ubiquitously in different sensory circuits (Wehr and Zador, 2003; Froemke et al., 2007; Okun and Lampl, 2008; Rupprecht and Friedrich, 2018; Znamenskiy et al., 2018).
In our model, we made several simplifying assumptions. For instance, we kept the input to inhibitory neurons fixed and only varied the input to the excitatory population. This step was motivated by experiments in the piriform cortex where the total inhibition is dominated by feedback inhibition (Franks et al., 2011). Nevertheless, significant feedforward inhibition was observed in other areas (Bissière et al., 2003; Cruikshank et al., 2007; Ji et al., 2016; Miska et al., 2018). While an in-depth comparison of different origins of inhibition was beyond the scope of the present study, we found that increasing the inputs to the excitatory and inhibitory populations by the same amount can still lead to NTA (Figure 1—figure supplement 1; Figure 2—figure supplement 12; Materials and methods), suggesting that our main findings may remain unaffected in the presence of substantial feedforward inhibition. In addition, we limited our analysis to only a few overlapping ensembles. It will be interesting in future work to study NTA in the case of many interacting and potentially overlapping ensembles and to determine the maximum storage capacity above which performance degrades. Finally, we anticipate that temporal differences in excitatory and inhibitory synaptic transmission may be important to preserve NTA’s stimulus selectivity.
Our model makes several predictions. In contrast to balanced amplification, in which the network operating regime depends solely on the connectivity, an ensemble involved in NTA can transition from a non-ISN to an ISN state. Such a transition is consistent with noise variability observed in sensory cortices (Hennequin et al., 2018) and could be tested experimentally by probing the paradoxical effect under different stimulation conditions (Figure 2G–H; Figure 2—figure supplement 6). Moreover, NTA predicts that onset activity provides a better stimulus encoding and that its activity is correlated with the fixed point activity. This signature is different from purely non-normal amplification mechanisms, which would involve a wave of neuronal activity across several distinct ensembles similar to a synfire chain (Abeles, 1991). The difference should be clearly discernible in data. Since NTA relies on recurrent excitation between ensemble neurons, it suggests normal dynamics in which distinct ensembles first activate and then inactivate. The resulting dynamics have weak rotational components (Figure 6D) as seen in some experiments (Marshel et al., 2019). Strong non-normal amplification, on the other hand, relies on sequential activation associated with pronounced rotational dynamics (Hennequin et al., 2014; Gillett et al., 2020), as for instance observed in motor areas (Churchland et al., 2012). Although both non-normal mechanisms and NTA are likely to coexist in the brain, we speculate that strong NTA is best suited for, and thus most likely to be found in, sensory systems.
In summary, we introduced a general theoretical framework of selective transient signal amplification in recurrent networks. Our approach derives from the minimal assumptions of a nonlinear neuronal transfer function, recurrent excitation within neuronal ensembles, and STP. Importantly, our analysis revealed the functional benefits of STP and E-I co-tuning, both pervasively found in sensory circuits. Finally, our work suggests that transient onset responses rather than steady-state activity are ideally suited for coactivity-based stimulus encoding and provides several testable predictions.
Materials and methods
Stability conditions for supralinear networks
The dynamics of a neuronal ensemble consisting of one excitatory and one inhibitory population with a supralinear, power-law input-output function can be described as follows:
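As a numerical illustration of these dynamics, the following sketch Euler-integrates a two-population supralinear rate model, assuming the standard SSN form $\tau_E\,\dot{r}_{E}=-r_E+{[{J}_{EE}{r}_{E}-{J}_{EI}{r}_{I}+{g}_{E}]}_{+}^{{\alpha}_{E}}$ and its inhibitory counterpart; all parameter values are illustrative:

```python
import numpy as np

# Euler integration of a supralinear (power-law) E-I rate model.
def simulate_ssn(g_E, g_I, J=(1.0, 1.2, 1.2, 1.0), alpha=2.0,
                 tau_E=0.02, tau_I=0.01, T=0.5, dt=1e-4):
    J_EE, J_EI, J_IE, J_II = J
    f = lambda z: np.maximum(z, 0.0) ** alpha  # rectified power law
    r_E = r_I = 0.0
    for _ in range(int(T / dt)):
        z_E = J_EE * r_E - J_EI * r_I + g_E  # current into E
        z_I = J_IE * r_E - J_II * r_I + g_I  # current into I
        r_E += dt / tau_E * (-r_E + f(z_E))
        r_I += dt / tau_I * (-r_I + f(z_I))
    return r_E, r_I
```

For weak inputs the rates settle at a low-activity fixed point; for strong inputs, networks with $\text{det}(\mathbf{J})<0$ develop the runaway excitation analyzed below.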
The Jacobian $\mathbf{M}$ of the system is given by
To ensure that the system is stable, the product of $\mathbf{M}$’s eigenvalues ${\lambda}_{1}{\lambda}_{2}$, which is equivalent to the determinant of $\mathbf{M}$, has to be positive. In addition, the sum of the two eigenvalues ${\lambda}_{1}+{\lambda}_{2}$, which corresponds to $\text{tr}(\mathbf{M})$, has to be negative. We therefore obtained the following two stability conditions
Notably, the stability conditions depend on the firing rate of the excitatory population ${r}_{E}$ and the inhibitory population ${r}_{I}$. Since firing rates are input-dependent, the stability of supralinear networks is input-dependent. In contrast, in linear networks in which ${\alpha}_{E}={\alpha}_{I}=1$, the conditions can be simplified to
and are thus input-independent.
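These conditions are straightforward to check numerically. The sketch below builds the $2\times 2$ Jacobian at a fixed point using the power-law gain $f{}^{\prime}(z)=\alpha\,{r}^{({\alpha}-1)/{\alpha}}$; the weight values are illustrative:

```python
import numpy as np

# Jacobian of the 2D rate model at a fixed point (r_E, r_I).
def jacobian(r_E, r_I, J, alpha_E=2.0, alpha_I=2.0, tau_E=0.02, tau_I=0.02):
    J_EE, J_EI, J_IE, J_II = J
    g_E = alpha_E * r_E ** ((alpha_E - 1.0) / alpha_E)  # E gain at fixed point
    g_I = alpha_I * r_I ** ((alpha_I - 1.0) / alpha_I)  # I gain at fixed point
    return np.array([[(-1.0 + J_EE * g_E) / tau_E, -J_EI * g_E / tau_E],
                     [J_IE * g_I / tau_I, (-1.0 - J_II * g_I) / tau_I]])

# Both stability conditions: det(M) > 0 and tr(M) < 0.
def is_stable(M):
    return np.linalg.det(M) > 0.0 and np.trace(M) < 0.0
```

With $\text{det}(\mathbf{J})<0$, one and the same weight matrix can be stable at low rates yet unstable at high rates, illustrating the input dependence discussed above.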
ISN index for supralinear networks
If an ensemble is unstable without feedback inhibition, then the ensemble is an ISN (Tsodyks et al., 1997). To determine whether a given system is an ISN, we analyzed the stability of the E-E subnetwork, which is determined by the real part of the leading eigenvalue of the Jacobian of the E-E subnetwork. In the following, we call this leading eigenvalue the ‘ISN index’, which is defined as follows:
A positive ISN index indicates the system is an ISN; otherwise, the system is non-ISN. For supralinear networks in which ${\alpha}_{E}>1$, the ISN index depends on the firing rates; inputs can therefore switch the network from non-ISN to ISN. In contrast, ${\alpha}_{E}=1$ for linear networks, which renders the ISN index independent of the firing rate.
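A one-population version of the ISN index can be sketched directly from this definition, assuming the power-law gain used throughout; the parameter values are illustrative:

```python
# ISN index for a single excitatory population: leading eigenvalue of
# the E-E subnetwork Jacobian. Positive => inhibition-stabilized.
def isn_index(r_E, J_EE, alpha_E=2.0, tau_E=0.02):
    gain = alpha_E * r_E ** ((alpha_E - 1.0) / alpha_E)  # f'(z) at rate r_E
    return (-1.0 + J_EE * gain) / tau_E
```

Because the gain grows with ${r}_{E}$ for ${\alpha}_{E}>1$, increasing the input (and hence ${r}_{E}$) can switch the sign of the index from negative (non-ISN) to positive (ISN).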
Characteristic function
To investigate how network stability changes with input, we trace the steps of Kraynyukova and Tchumatchenko, 2018 and define the characteristic function $F(z)$ as follows:
where
is the current into the excitatory population. The characteristic function simplifies the original twodimensional system to a onedimensional system, and the zero crossings of $F(z)$ correspond to the fixed points of the original system. For $z\ge 0$, we note:
Therefore, if the derivative of $F(z)$ evaluated at one of its roots is positive, the corresponding fixed point is a saddle point. Note that as ${r}_{E}$ and ${r}_{I}$ increase, the term in parentheses becomes dominant. To ensure that ${\lambda}_{1}{\lambda}_{2}$ is negative also for large ${r}_{E}$ and ${r}_{I}$, the determinant of the weight matrix $\text{det}(\mathbf{J})$ has to be positive. Therefore, $\text{det}(\mathbf{J})$ has a decisive impact on the curvature of $F(z)$. In systems with negative determinant, $F(z)$ bends upwards for large $z$. In contrast, $F(z)$ asymptotically bends downwards in systems with positive determinant. Hence, the high-activity steady state of systems with negative determinant is unstable. In addition, we can simplify the above condition to a condition on the determinant of the weight matrix, which is necessary for network stability at any firing rate:
To investigate how the network stability changes with input ${g}_{E}$, we examined how $F(z)$ varies with changing input ${g}_{E}$ by calculating the derivative of $F(z)$ with respect to ${g}_{E}$,
Since $\frac{dF(z)}{d{g}_{E}}$ is positive, increasing ${g}_{E}$ always shifts $F(z)$ upwards, eventually leading to the vanishing of all roots and, thus, unstable dynamics in supralinear networks with negative $\text{det}(\mathbf{J})$. In scenarios in which feedforward input to the inhibitory population also changes, we have
When the changes in stimulation strength into the excitatory ($\mathrm{\Delta}{g}_{E}$) and the inhibitory population ($\mathrm{\Delta}{g}_{I}$) are the same, the resulting change in $F(z)$ is always positive provided ${J}_{II}$ is greater than ${J}_{EI}$. Hence, depending on the value of $\frac{{J}_{II}}{{J}_{EI}}$, stimulation can lead to unstable network dynamics even when the input to the inhibitory population increases more than the input to the excitatory population.
Spike-frequency adaptation (SFA)
We modeled SFA of excitatory neurons as an activitydependent negative feedback current (Benda and Herz, 2003; Brette and Gerstner, 2005):
where $a$ is the adaptation variable, ${\tau}_{a}$ is the adaptation time constant, and $b$ is the adaptation strength.
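A minimal discrete-time update of this adaptation variable, assuming the common form ${\tau}_{a}\,\dot{a}=-a+b\,{r}_{E}$ with $a$ subtracted from the excitatory input:

```python
# One Euler step of the SFA variable: tau_a * da/dt = -a + b * r_E.
# The adaptation current a acts as activity-dependent negative feedback.
def step_adaptation(a, r_E, dt, tau_a=0.1, b=0.5):
    return a + dt / tau_a * (-a + b * r_E)
```

For a constant rate, $a$ relaxes to its steady state $b\,{r}_{E}$ with time constant ${\tau}_{a}$.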
Stability conditions in networks with SFA
The Jacobian ${\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}}$ of the system with SFA is given by
The characteristic polynomial of the system with SFA can be written as follows (Horn and Johnson, 1985):
where $\text{tr}({\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}})$ and $\text{det}({\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}})$ are the trace and the determinant of the Jacobian matrix ${\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}}$, and ${A}_{11}$, ${A}_{22}$, and ${A}_{33}$ are the matrix cofactors. More specifically,
To ensure that the dynamics of the system are stable, the real parts of the eigenvalues of the Jacobian at the fixed point, and thus of all roots of the characteristic polynomial, have to be negative. Since the product of the roots is equal to $\text{det}({\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}})$, $\text{det}({\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}})$ has to be negative. We then have
Since SFA does not modify the synaptic connections, the term ${J}_{EE}-\text{det}(\mathbf{J})\cdot {\alpha}_{I}{r}_{I}^{\frac{{\alpha}_{I}-1}{{\alpha}_{I}}}$ is positive for networks with $\text{det}(\mathbf{J})<0$.
In the large ${r}_{E}$ limit, if $b$ is small such that the above condition cannot be fulfilled, $\text{det}({\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}})$ is positive, implying that the Jacobian of the system always has at least one eigenvalue with positive real part. Therefore, the dynamics of the system cannot be stabilized when $b$ is small.
In addition, ${A}_{11}+{A}_{22}+{A}_{33}$ is equal to ${\lambda}_{1}{\lambda}_{2}+{\lambda}_{2}{\lambda}_{3}+{\lambda}_{1}{\lambda}_{3}$, where ${\lambda}_{1}$, ${\lambda}_{2}$, and ${\lambda}_{3}$ are the roots of the characteristic polynomial. If all roots are real and negative, ${A}_{11}+{A}_{22}+{A}_{33}$ has to be positive. If one root is real and negative and the two other roots are complex conjugates, a necessary condition for all roots to have negative real parts is ${A}_{11}+{A}_{22}+{A}_{33}>0$. From the $\text{tr}({\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}})$ and $\text{det}({\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}})$ conditions, we have
As a result, if ${\tau}_{a}^{-1}({\tau}_{a}^{-1}+b{\tau}_{E}^{-1})-b{\tau}_{E}^{-1}{\tau}_{I}^{-1}(1+{J}_{II}{\alpha}_{I}{r}_{I}^{\frac{{\alpha}_{I}-1}{{\alpha}_{I}}})>0$, ${A}_{11}+{A}_{22}+{A}_{33}$ is guaranteed to be positive. We therefore have
Note that ${\tau}_{a}$ has to be small, in other words, SFA has to be fast, so that ${\tau}_{a}^{-1}{\tau}_{E}^{-1}-{\tau}_{E}^{-1}{\tau}_{I}^{-1}(1+{J}_{II}{\alpha}_{I}{r}_{I}^{\frac{{\alpha}_{I}-1}{{\alpha}_{I}}})$ is positive for arbitrary ${r}_{I}$. For positive ${\tau}_{a}^{-1}{\tau}_{E}^{-1}-{\tau}_{E}^{-1}{\tau}_{I}^{-1}(1+{J}_{II}{\alpha}_{I}{r}_{I}^{\frac{{\alpha}_{I}-1}{{\alpha}_{I}}})$, we have
Since ${\tau}_{a}$ has to be small, the above condition cannot be satisfied for small $b$.
Next, we consider the system with large $b$. Suppose that the firing rate ${r}_{E}$ and ${r}_{I}$ in the initial network are of order 1, and $b$ is of order $K$, where $K$ is a large number. We therefore have $\text{tr}({\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}})\sim O(1)$, ${A}_{11}+{A}_{22}+{A}_{33}\sim O(K)$, and $\text{det}({\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}})\sim O(K)$. The discriminant of the characteristic polynomial is
Clearly, in the large $b$ limit, the discriminant is negative, suggesting that the characteristic polynomial has one real root and two complex conjugate roots (Irving, 2004).
As the input ${g}_{E}$ increases, the complex conjugate eigenvalues cross the imaginary axis when $\text{tr}({\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}})({A}_{11}+{A}_{22}+{A}_{33})$ equals $\text{det}({\mathbf{M}}_{\mathbf{S}\mathbf{F}\mathbf{A}})$. As a result, the system undergoes a supercritical Hopf bifurcation. We numerically confirmed that the resulting limit cycle is stable (Figure 2—figure supplement 1), consistent with previous work (van Vreeswijk and Hansel, 2001). Thus, the system shows oscillatory behavior instead of a stable steady state.
Short-term plasticity (STP)
We modeled E-to-E STD following previous work (Tsodyks and Markram, 1997; Varela et al., 1997):
where $x$ is the depression variable, which is limited to the interval $(0,1)$, ${\tau}_{x}$ is the depression time constant, and ${U}_{d}$ is the depression rate. The steadystate solution ${x}^{*}$ is given by
Similarly, we modeled E-to-I STF as
where $u$ is the facilitation variable constrained to the interval $(1,{U}_{max})$ , ${U}_{max}$ is the maximal facilitation value, ${\tau}_{u}$ is the time constant of STF, and ${U}_{f}$ is the facilitation rate. The steadystate solution ${u}^{*}$ is given by
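The steady states of both STP variables follow from setting the derivatives to zero. The STD steady state ${x}^{*}=1/(1+{U}_{d}\,{\tau}_{x}\,{r}_{E})$ matches the limit used later in the text; the STF steady state is our analogous derivation under the equations as stated, with illustrative default parameters:

```python
# Steady-state short-term depression: x* = 1 / (1 + U_d * tau_x * r_E).
def std_steady_state(r_E, U_d=0.2, tau_x=0.2):
    return 1.0 / (1.0 + U_d * tau_x * r_E)

# Steady-state short-term facilitation, assuming
# du/dt = (1 - u)/tau_u + U_f * (U_max - u) * r_E:
# u* = (1 + U_f * U_max * tau_u * r_E) / (1 + U_f * tau_u * r_E).
def stf_steady_state(r_E, U_f=0.2, tau_u=0.2, U_max=3.0):
    return (1.0 + U_f * U_max * tau_u * r_E) / (1.0 + U_f * tau_u * r_E)
```

Both variables equal 1 for silent presynaptic activity; ${x}^{*}$ decays toward 0 and ${u}^{*}$ saturates at ${U}_{max}$ as ${r}_{E}$ grows.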
Stability conditions for networks with E-to-E STD
The Jacobian ${\mathbf{M}}_{\mathbf{S}\mathbf{T}\mathbf{D}}$ of the system with E-to-E STD is given by
and the characteristic polynomial can be written as follows:
where $\text{tr}({\mathbf{M}}_{\mathbf{S}\mathbf{T}\mathbf{D}})$ and $\text{det}({\mathbf{M}}_{\mathbf{S}\mathbf{T}\mathbf{D}})$ are the trace and the determinant of the Jacobian matrix ${\mathbf{M}}_{\mathbf{S}\mathbf{T}\mathbf{D}}$, and ${A}_{11}$, ${A}_{22}$, and ${A}_{33}$ are the matrix cofactors. More specifically,
In the case of unstable dynamics, ${r}_{E}$ goes to infinity due to runaway excitation. However, the depression variable $x$ approaches zero in this limit, as $\lim_{{r}_{E}\to \infty}x=\lim_{{r}_{E}\to \infty}\frac{1}{1+{U}_{d}{r}_{E}{\tau}_{x}}=0$. Therefore, in the large ${r}_{E}$ limit, $-\text{tr}({\mathbf{M}}_{\mathbf{S}\mathbf{T}\mathbf{D}})$ is positive.
Similarly, in the large ${r}_{E}$ limit, ${A}_{11}+{A}_{22}+{A}_{33}$ is positive.
Similarly, in the large ${r}_{E}$ limit, $-\text{det}({\mathbf{M}}_{\mathbf{S}\mathbf{T}\mathbf{D}})$ is positive.
According to Descartes’ rule of signs, the number of positive roots is at most the number of sign changes in the sequence of the polynomial’s coefficients. Therefore, the above characteristic polynomial has no positive roots, and the network dynamics can be stabilized by E-to-E STD.
Characteristic function approximation for networks with E-to-E STD
As demonstrated above, E-to-E STD is able to restabilize the system, so that a stable steady state exists at which the STD variable is constant, $x={x}^{*}$. Because $x$ changes slowly compared to the neuronal dynamics, we can approximate it as constant, which results in a natural reduction to a 2D system in which the weights with STD are modified. The stability of this 2D system can be readily characterized by the characteristic function $F(z)$ (Kraynyukova and Tchumatchenko, 2018), which depends on the previous steady-state value of $x$. The characteristic function approximation with E-to-E STD can therefore be written as follows:
where
Note that $\text{det}({\mathbf{J}}_{\mathbf{S}\mathbf{T}\mathbf{D}})$ can now change its sign due to E-to-E STD, and the characteristic function can therefore change its bending direction. We used this relation to visualize how E-to-E STD effectively changes the network stability of the reduced system in Figure 2D.
Conditions for ISN in networks with E-to-E STD
Here, we identify the condition for being in the ISN regime in supralinear networks with E-to-E STD. When the level of inhibition is frozen, the Jacobian of the system reduces to the following:
For the system with frozen inhibition, the dynamics are stable if
and
Therefore, if the network is an ISN at the fixed point, the following condition has to be satisfied:
Furthermore, we define the largest real part of the eigenvalues of ${\mathbf{M}}_{\mathbf{1}}$ as the ISN index for networks with E-to-E STD. More specifically,
Conditions for paradoxical response in networks with E-to-E STD
Next, we identify the condition for the paradoxical effect in supralinear networks with E-to-E STD. To that end, we exploit a separation of timescales between the fast neural activity and the slow STP variable, and set the depression variable to its value at the fixed point of ${r}_{E}$. The excitatory nullcline is defined as follows
For ${r}_{E,I}>0$, we have
The slope of the excitatory nullcline in the ${r}_{E}$–${r}_{I}$ plane (with ${r}_{E}$ on the abscissa and ${r}_{I}$ on the ordinate) can be written as follows
Note that the slope of the excitatory nullcline is nonlinear. For the network to exhibit the paradoxical effect, the slope of the excitatory nullcline at the fixed point of the system has to be positive. Therefore, the STD variable $x$ at the fixed point has to satisfy the following condition
The inhibitory nullcline can be written as follows
In the region of rates ${r}_{E,I}>0$, we have
The slope of the inhibitory nullcline can be written as follows
In addition to the positive slope of the excitatory nullcline, the slope of the inhibitory nullcline at the fixed point of the system has to be larger than the slope of the excitatory nullcline. We therefore have
The above condition is identical to the stability condition on the determinant of the Jacobian of the system with E-to-E STD (Eq. (49)). Therefore, the condition is always satisfied when the system with E-to-E STD is stable.
Based on the ISN condition shown in Eq. (55) and the condition for the paradoxical effect shown in Eq. (60), we can therefore conclude that in supralinear networks with E-to-E STD, the paradoxical effect implies inhibitory stabilization, whereas inhibitory stabilization does not necessarily imply a paradoxical response. This is consistent with recent work by Sanzeni et al., 2020, in which threshold-linear networks with STP were studied. Here, we showed analytically that the conclusion holds for any rectified power-law activation function with positive $\alpha $.
To visualize the conditions in a two-dimensional plane, we reduced them to a function of ${J}_{EE}$ and $x$. For Figure 2G, ${r}_{E}=1$. In Figure 2—figure supplement 5 and Figure 2—figure supplement 8, the depression-variable thresholds above which the network exhibits the paradoxical effect were calculated based on Eq. (60).
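The nullcline analysis above can be checked numerically by Euler-integrating a two-population supralinear rate model with the depression variable frozen at a value $x$ and probing the steady-state response to extra inhibitory input. All parameters below are illustrative placeholders, not taken from the paper; they are deliberately weak, so the network is not an ISN and the response to increased $g_I$ is non-paradoxical, whereas the condition above predicts a sign flip once $x\,J_{EE}\,f'$ is sufficiently large.

```python
def simulate(g_E, g_I, x=1.0, T=2.0, dt=1e-4):
    """Euler-integrate a two-population supralinear rate model with the
    E-to-E weight scaled by a frozen depression variable x (sketch only)."""
    tau_E, tau_I, alpha = 0.02, 0.01, 2.0
    J_EE, J_EI, J_IE, J_II = 0.4, 0.8, 0.8, 0.4   # illustrative, weakly coupled
    r_E = r_I = 0.0
    for _ in range(int(round(T / dt))):
        z_E = x * J_EE * r_E - J_EI * r_I + g_E   # total current into E
        z_I = J_IE * r_E - J_II * r_I + g_I       # total current into I
        r_E += dt / tau_E * (-r_E + max(z_E, 0.0) ** alpha)
        r_I += dt / tau_I * (-r_I + max(z_I, 0.0) ** alpha)
    return r_E, r_I
```

Comparing `simulate(0.6, 0.3)` with `simulate(0.6, 0.4)` probes the paradoxical effect: with these weak weights, $r_I$ rises and $r_E$ falls when $g_I$ is increased.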
Unistability conditions
The system is said to be ‘unistable’ when it has a single stable fixed point. We first identified the unistability condition for networks with global inhibition. To that end, we considered a general network with $N$ excitatory populations and $N$ inhibitory populations. To treat this problem analytically, we did not take STP into account. The Jacobian matrix of networks with global inhibition, $\mathbf{Q}$, can be written as follows,
where $\mathbf{J}_{E\leftarrow E}$, $\mathbf{J}_{E\leftarrow I}$, $\mathbf{J}_{I\leftarrow E}$, and $\mathbf{J}_{I\leftarrow I}$ are $N\times N$ block matrices defined below.
where $a={\tau}_{E}^{-1}{J}_{EE}{\alpha}_{E}{[{z}_{E}]}_{+}^{{\alpha}_{E}-1}$, $b={\tau}_{E}^{-1}{J}_{EI}{\alpha}_{E}{[{z}_{E}]}_{+}^{{\alpha}_{E}-1}$, $c={\tau}_{I}^{-1}{J}_{IE}{\alpha}_{I}{[{z}_{I}]}_{+}^{{\alpha}_{I}-1}$, $d={\tau}_{I}^{-1}{J}_{II}{\alpha}_{I}{[{z}_{I}]}_{+}^{{\alpha}_{I}-1}$, $e={\tau}_{E}^{-1}$, and $f={\tau}_{I}^{-1}$. Here, ${z}_{E}$ and ${z}_{I}$ denote the total current into the excitatory and inhibitory population, respectively. Note that all these parameters are nonnegative. The parameter $k$ controls the excitatory connection strength across different populations. $\mathbf{J}_{N,N}$ is an $N\times N$ matrix of ones.
The eigenvalues of the Jacobian $\mathbf{Q}$ are roots of its characteristic polynomial,
where $\mathbb{1}$ represents the identity matrix of size $N$. The characteristic polynomial can be expanded to:
We therefore obtain four distinct eigenvalues:
and
Note that the eigenvalues ${\lambda}_{1}$ and ${\lambda}_{2}$ have an algebraic and geometric multiplicity of $N-1$, whereas the eigenvalues ${\lambda}_{3}$ and ${\lambda}_{4}$ have an algebraic and geometric multiplicity of 1.
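This multiplicity pattern can be confirmed numerically. Any block matrix whose four $N\times N$ blocks each have the form $\beta\,\mathbb{1}+\gamma(\mathbf{J}_{N,N}-\mathbb{1})$ decomposes into one uniform mode and $N-1$ difference modes. The block structure and all magnitudes below are assumptions for illustration, not the paper's values.

```python
import numpy as np

N = 5
I, ones = np.eye(N), np.ones((N, N))

def block(diag, off):
    # N x N block with `diag` on the diagonal and `off` everywhere else
    return (diag - off) * I + off * ones

# Illustrative magnitudes for a, b, c, d, e, f, k (not the paper's values)
a, b, c, d, e, f, k = 0.6, 0.8, 0.9, 0.4, 1.0, 2.0, 0.1

Q = np.block([
    [block(a - e, a * k), block(-b, -b)],      # E rows: leak + recurrence, global inhibition
    [block(c, c),         block(-d - f, -d)],  # I rows: global excitation, inhibition + leak
])
eig = np.linalg.eigvals(Q)
```

For this structure the difference modes decouple (the uniform inhibitory blocks contribute zero to them), so two eigenvalues, here $(a-e)-ak$ and $-f$, each appear $N-1$ times, while the uniform mode contributes the remaining pair.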
In analogy to networks with global inhibition, the Jacobian matrix of networks with co-tuned inhibition, $\mathbf{R}$, can be written as
where $\mathbf{J}_{E\leftarrow E}$, $\mathbf{J}_{E\leftarrow I}$, $\mathbf{J}_{I\leftarrow E}$, and $\mathbf{J}_{I\leftarrow I}$ are $N\times N$ block matrices defined as follows:
where $m$ controls the degree of co-tuning in the network. If $m=0$, the network decouples into $N$ independent ensembles and inhibition is perfectly co-tuned with excitation. In the case $m=1$, inhibition is global and the block matrices become identical to those in the above case of global inhibition.
The eigenvalues of the matrix $\mathbf{R}$ are given as the roots of the characteristic polynomial defined by:
which yields the following expression:
We therefore obtain four distinct eigenvalues:
The eigenvalues ${\lambda}_{1}^{{}^{\prime}}$ and ${\lambda}_{2}^{{}^{\prime}}$ have an algebraic and geometric multiplicity of $N-1$, whereas the eigenvalues ${\lambda}_{3}^{{}^{\prime}}$ and ${\lambda}_{4}^{{}^{\prime}}$ have an algebraic and geometric multiplicity of 1. Note that ${\lambda}_{3}={\lambda}_{3}^{{}^{\prime}}$ and ${\lambda}_{4}={\lambda}_{4}^{{}^{\prime}}$.
To compare the conditions under which networks with different structures are unistable, we examined the eigenvalues derived above. Since ${\lambda}_{2}<0$ and ${\lambda}_{1}^{{}^{\prime}}>{\lambda}_{2}^{{}^{\prime}}$, we only had to compare ${\lambda}_{1}^{{}^{\prime}}$ with ${\lambda}_{1}$. For networks with co-tuned inhibition, where $m<1$,
The inequality ${\lambda}_{1}^{{}^{\prime}}<{\lambda}_{1}$ indicates that networks with co-tuned inhibition have a broader parameter regime in which they are unistable than networks with global inhibition. Note that in the absence of a saturating nonlinearity of the input-output function and of any additional stabilization mechanism, systems with positive eigenvalues of the Jacobian are unstable. In this case, networks with co-tuned inhibition are stable over a broader parameter regime than networks with global inhibition.
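The inequality ${\lambda}_{1}^{{}^{\prime}}<{\lambda}_{1}$ can be illustrated numerically with one simple way to parameterize co-tuning: scaling all across-ensemble inhibitory connections by $m$, so that $m=1$ recovers global inhibition. This parameterization and all magnitudes are assumptions for illustration, not necessarily the paper's exact matrices.

```python
import numpy as np

N = 5
I, ones = np.eye(N), np.ones((N, N))
# Illustrative magnitudes for a, b, c, d, e, f, k (not the paper's values)
a, b, c, d, e, f, k = 0.6, 0.8, 0.9, 0.4, 1.0, 2.0, 0.1

def max_real_eig(m):
    # m scales across-ensemble inhibitory connections: m = 1 is global
    # inhibition, m < 1 co-tunes inhibition with excitation.
    blk = lambda diag, off: (diag - off) * I + off * ones
    R = np.block([
        [blk(a - e, a * k), blk(-b, -m * b)],
        [blk(c, m * c),     blk(-d - f, -m * d)],
    ])
    return np.linalg.eigvals(R).real.max()
```

Here co-tuning leaves residual inhibition in the difference modes, pulling the leading eigenvalue down, so `max_real_eig(0.5) < max_real_eig(1.0)`: co-tuned networks remain stable over a broader parameter range.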
To visualize the conditions in a two-dimensional plane, we reduced them to a function of $a$ and $d$. For Figure 3C, $k=0.1$, $m=0.5$, and $bc=0.9ad$.
Distance to the decision boundary
To calculate the distance to the decision boundary in Figures 4 and 5, Figure 4—figure supplement 2, and Figure 5—figure supplement 2, we first projected the excitatory activity in Phase 2 onto a two-dimensional Cartesian coordinate system in which the horizontal axis is the activity of the first excitatory ensemble ${r}_{E1}$ and the vertical axis is the activity of the second excitatory ensemble ${r}_{E2}$. We denote the location of the projected data point in this coordinate system by ($x$, $y$), where $x$ and $y$ equal ${r}_{E1}$ and ${r}_{E2}$, respectively. The distance $L$ between the projected data point and the decision boundary, which corresponds to the diagonal line in the coordinate system, can be expressed as follows:
Note that the inverse trigonometric function arcsin gives the value of the angle in degrees.
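For the diagonal boundary ${r}_{E2}={r}_{E1}$, the trigonometric expression reduces to the standard point-to-line distance, which gives a compact equivalent form (a sketch; the sign convention, positive when the second ensemble dominates, is our choice):

```python
import math

def distance_to_boundary(r_E1, r_E2):
    # Signed perpendicular distance from (r_E1, r_E2) to the diagonal
    # r_E2 = r_E1; positive when ensemble 2 dominates, zero on the boundary.
    return (r_E2 - r_E1) / math.sqrt(2.0)
```

For example, a point on the boundary yields distance 0, and (0, 2) yields $\sqrt{2}$.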
Inhibitory feedback pathways for suppressing unwanted neural activation
To identify the important neural pathways for the suppression of unwanted neural activation, we analyzed how the activity of the second excitatory ensemble ${r}_{E2}$ changes with the input to the first excitatory ensemble ${g}_{E1}$. To that end, we considered a general weight matrix for networks with two interacting ensembles
We can write the change in the firing rate of the excitatory population in the second ensemble, $\delta {r}_{E2}$, as a function of the change in the input to the first ensemble, $\delta {g}_{E1}$:
where $\mathbb{1}$ is the identity matrix and $\mathbf{F}$ is given by
where ${f}_{E1}^{{}^{\prime}}$, ${f}_{E2}^{{}^{\prime}}$, ${f}_{I1}^{{}^{\prime}}$ and ${f}_{I2}^{{}^{\prime}}$ are the derivatives of the input-output functions evaluated at the fixed point.
Assuming that ${J}_{E1E1}={J}_{E2E2}={J}_{EE}$, ${J}_{I1E1}={J}_{I2E2}={J}_{IE}$, ${J}_{E1I1}={J}_{E2I2}={J}_{EI}$, ${J}_{I1I1}={J}_{I2I2}={J}_{II}$, ${J}_{E1E2}={J}_{E2E1}={J}_{EE}^{{}^{\prime}}$, ${J}_{I1E2}={J}_{I2E1}={J}_{IE}^{{}^{\prime}}$, ${J}_{E1I2}={J}_{E2I1}={J}_{EI}^{{}^{\prime}}$ and ${J}_{I1I2}={J}_{I2I1}={J}_{II}^{{}^{\prime}}$, we find
By further assuming that the weight strengths across ensembles are weak and ignoring the corresponding higher-order terms, we get
Note that $\frac{{J}_{EE}^{{}^{\prime}}}{{J}_{IE}^{{}^{\prime}}}$ and $\frac{{J}_{II}^{{}^{\prime}}}{{J}_{EI}^{{}^{\prime}}}$ regulate the respective excitatory and inhibitory input from one ensemble to the excitatory and inhibitory population of the other ensemble. The term $\text{det}(\mathbb{1}-\mathbf{F}\mathbf{J})$ has to be positive to ensure the stability of the system.
To suppress the activity of the excitatory population in the second ensemble ${r}_{E2}$, in other words, to ensure that $\delta {r}_{E2}<0$, ${J}_{IE}^{{}^{\prime}}$ and/or ${J}_{EI}^{{}^{\prime}}$ have to be large. We therefore identified ${J}_{IE}^{{}^{\prime}}$ and ${J}_{EI}^{{}^{\prime}}$ as the key synaptic connections leading to suppression of the unwanted neural activation, suggesting that inhibition can be provided via ${J}_{IE}^{{}^{\prime}}$ through the $E1$→$I2$→$E2$ pathway or via ${J}_{EI}^{{}^{\prime}}$ through the $E1$→$I1$→$E2$ pathway.
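The linear response $\delta\mathbf{r}=(\mathbb{1}-\mathbf{F}\mathbf{J})^{-1}\mathbf{F}\,\delta\mathbf{g}$ can be evaluated directly for a small symmetric network to see this sign change. The weight values below and the unit derivatives in $\mathbf{F}$ are illustrative assumptions, not fitted parameters from the paper.

```python
import numpy as np

def dr_E2(J_IE_x):
    # Linear response for two symmetric ensembles, populations ordered
    # [E1, E2, I1, I2]; J_IE_x is the cross-ensemble E->I weight, i.e.
    # the E1 -> I2 -> E2 pathway. All values are illustrative.
    J_EE, J_EI, J_IE, J_II = 0.8, 1.0, 1.0, 0.5
    J_EE_x, J_EI_x, J_II_x = 0.1, 0.1, 0.05    # weak cross-ensemble weights
    J = np.array([
        [J_EE,   J_EE_x, -J_EI,   -J_EI_x],
        [J_EE_x, J_EE,   -J_EI_x, -J_EI  ],
        [J_IE,   J_IE_x, -J_II,   -J_II_x],
        [J_IE_x, J_IE,   -J_II_x, -J_II  ],
    ])
    F = np.eye(4)                              # f' = 1 at the fixed point (placeholder)
    dg = np.array([1.0, 0.0, 0.0, 0.0])        # perturb the input to E1 only
    dr = np.linalg.solve(np.eye(4) - F @ J, F @ dg)
    return dr[1]                               # change in r_E2
```

With these numbers, `dr_E2` is positive when the cross-ensemble E-to-I weight is absent and turns negative as it grows, matching the pathway argument above.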
For Figures 4 and 5, the rate-based model consists of two ensembles, each of which is composed of 100 excitatory and 25 inhibitory neurons with all-to-all connectivity.
Spiking neural network model
The spiking neural network model was composed of ${N}_{E}$ excitatory and ${N}_{I}$ inhibitory leaky integrate-and-fire neurons. Neurons were randomly connected with a probability of 20%. The dynamics of the membrane potential ${U}_{i}$ of neuron $i$ are defined following Zenke et al., 2015:
Here, ${\tau}^{m}$ is the membrane time constant and ${U}^{\text{rest}}$ is the resting potential. Spikes are triggered when the membrane potential reaches the spiking threshold ${U}^{\text{thr}}$. After a spike is emitted, the membrane potential is reset to ${U}^{\text{rest}}$ and the neuron enters a refractory period of duration ${\tau}^{\text{ref}}$. Inhibitory neurons obey the same integrate-and-fire formalism but with a shorter membrane time constant.
Excitatory synapses contain a fast AMPA component and a slow NMDA component. The dynamics of the excitatory conductance are described by:
Here, ${J}_{ij}$ denotes the synaptic strength from neuron $j$ to neuron $i$. If the connection does not exist, ${J}_{ij}$ is set to 0. ${S}_{j}(t)$ is the spike train of neuron $j$, defined as ${S}_{j}(t)={\sum}_{k}\delta (t-{t}_{j}^{k})$, where $\delta $ is the Dirac delta function and ${t}_{j}^{k}$ is the $k$-th spike time of neuron $j$. $\xi$ is a weighting parameter. The dynamics of the inhibitory conductances are governed by:
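A minimal single-neuron sketch of this integrate-and-fire scheme is shown below, with an AMPA-like excitatory conductance only (the slow NMDA component is omitted for brevity). All constants and the Poisson drive statistics are illustrative assumptions, not the paper's parameter values.

```python
import numpy as np

# Minimal conductance-based LIF neuron, forward Euler (illustrative values).
dt, T = 1e-4, 0.5                      # 0.1 ms step, 0.5 s of simulated time
tau_m, U_rest, U_thr = 20e-3, -70e-3, -50e-3
tau_ref = 5e-3
tau_ampa, tau_gaba = 5e-3, 10e-3
E_exc, E_inh = 0.0, -80e-3             # synaptic reversal potentials

U, g_e, g_i = U_rest, 0.0, 0.0
refrac, spikes = 0.0, []
rng = np.random.default_rng(0)

for step in range(int(round(T / dt))):
    t = step * dt
    if rng.random() < 500.0 * dt:      # ~500 Hz total presynaptic Poisson drive
        g_e += 2.0                     # dimensionless conductance kick
    g_e -= dt / tau_ampa * g_e         # exponential conductance decay
    g_i -= dt / tau_gaba * g_i
    if refrac > 0.0:                   # absolute refractory period
        refrac -= dt
        continue
    dU = (U_rest - U) + g_e * (E_exc - U) + g_i * (E_inh - U)
    U += dt / tau_m * dU
    if U >= U_thr:                     # threshold crossing: record spike, reset
        spikes.append(t)
        U, refrac = U_rest, tau_ref
```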
In the spiking neural network models, SFA of excitatory neurons is modeled as follows,
where $i$ is the index of excitatory neurons.
The dynamics of E-to-E STD are given by
where $i$ represents the index of excitatory neurons.
The dynamics of E-to-I STF are governed by
where $i$ denotes the index of inhibitory neurons.
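The depression and facilitation variables can be sketched with Tsodyks-Markram-style updates driven by a fixed spike train. The update forms and all constants below are assumptions for illustration; the facilitation baseline of 1 and ceiling $U_{max}$ are chosen to match the steady-state expression used later (Appendix 1).

```python
# Short-term plasticity variables driven by a 20 Hz regular spike train
# (illustrative constants, Euler integration).
tau_x, U_d = 0.2, 0.2                  # depression: recovery time, release fraction
tau_u, U_f, U_max = 0.6, 0.1, 3.0      # facilitation: time constant, increment, ceiling

x, u = 1.0, 1.0                        # fully recovered, baseline facilitation
dt = 1e-4
spike_steps = set(round(0.05 * k / dt) for k in range(1, 11))  # spikes at 50 ms spacing

for step in range(int(round(1.0 / dt))):
    x += dt * (1.0 - x) / tau_x        # recover toward 1 between spikes
    u += dt * (1.0 - u) / tau_u        # relax toward baseline 1
    if step in spike_steps:
        x -= U_d * x                   # depression: consume a fraction of resources
        u += U_f * (U_max - u)         # facilitation: step toward U_max
```

By construction, $x$ stays in $(0,1]$ and $u$ in $[1,U_{max}]$, consistent with $u\to U_{max}$ in the high-rate limit.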
For Figure 6, each excitatory and inhibitory neuron received external excitatory input from 300 neurons firing with Poisson statistics at an average firing rate of 0.1 Hz at baseline. During stimulation, the excitatory neurons corresponding to the activated ensemble received external excitatory input from 300 neurons firing with Poisson statistics at an average firing rate of 0.5 Hz. The ensemble activity was computed from the instantaneous firing rates of the respective ensembles with a 10 ms bin size. The difference in ensemble activity at the peak was calculated by subtracting the average maximal ensemble activity of the unstimulated ensembles from the maximal ensemble activity of the activated ensemble. Similarly, the difference in ensemble activity at the fixed point was calculated by subtracting the average ensemble activity of the unstimulated ensembles at the fixed point from that of the activated ensemble. Fixed-point activity was computed by averaging the activity over the middle 1 s of the 2 s stimulation period.
For Figure 2—figure supplement 10, each excitatory and inhibitory neuron received external excitatory input from 300 neurons firing with Poisson statistics at an average firing rate of 0.1 Hz at baseline. During stimulation, each excitatory neuron received external excitatory input from 300 neurons firing with Poisson statistics at an average firing rate of 0.3 Hz.
For Figure 6—figure supplement 1, the firing rates of the 300 input neurons varied from $4/15$ Hz to $7/15$ Hz.
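Generating such Poisson input and binning the resulting population activity can be sketched as follows; the per-step Bernoulli approximation of a Poisson process and the seed are implementation choices, with rates taken from the values quoted above.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T, n_inputs, rate = 1e-4, 2.0, 300, 0.5     # rate in Hz during stimulation
n_steps = int(round(T / dt))

# Boolean spike matrix: each time step spikes with probability rate * dt
spikes = rng.random((n_inputs, n_steps)) < rate * dt

bin_size = int(round(10e-3 / dt))                # 10 ms bins
binned = spikes.reshape(n_inputs, -1, bin_size).sum(axis=2)
pop_rate = binned.sum(axis=0) / (n_inputs * 10e-3)  # population rate in Hz per bin
```

Averaged over the whole window, `pop_rate` fluctuates around the nominal 0.5 Hz per input neuron.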
Simulations
Simulations were performed in Python and Mathematica. All differential equations were integrated using the forward Euler method with a time step of 0.1 ms. All simulation parameters are listed in Tables 1–5 and Appendix 5—Tables 1–10. The simulation source code to reproduce the figures is publicly available at https://github.com/fmibasel/gzenkenonlineartransientamplification (Wu, 2021; copy archived at swh:1:rev:6ff6ff10b9f4994a0f948a987a66cc82f98451e1).
Appendix 1
Stability conditions in networks with E-to-I STF
The dynamics of supralinear networks with E-to-I STF can be described as follows:
The Jacobian $\mathbf{M}_{\mathbf{STF}}$ of the system with E-to-I STF is given by:
The characteristic polynomial for the system with E-to-I STF can be written as follows:
where $\text{tr}({\mathbf{M}}_{\mathbf{STF}})$ and $\text{det}({\mathbf{M}}_{\mathbf{STF}})$ are the trace and the determinant of the Jacobian matrix $\mathbf{M}_{\mathbf{STF}}$, and ${A}_{11}$, ${A}_{22}$, and ${A}_{33}$ are the matrix cofactors. More specifically,
Assuming that ${\alpha}_{E}={\alpha}_{I}=\alpha $, we then have
Substituting the firing rates with the current into the excitatory population, $z$, we then have
In the large ${r}_{E}$ limit, $z$ is large and ${\mathrm{lim}}_{{r}_{E}\to \mathrm{\infty}}u={\mathrm{lim}}_{{r}_{E}\to \mathrm{\infty}}\frac{1+{U}_{f}{U}_{max}{r}_{E}{\tau}_{u}}{1+{U}_{f}{r}_{E}{\tau}_{u}}\approx {U}_{max}$. Therefore, $\text{det}(\mathbf{J}_{\mathbf{STF}})$ is guaranteed to become positive for sufficiently large ${U}_{max}$. Since the denominator $\text{det}(\mathbf{J}_{\mathbf{STF}})\cdot {J}_{EI}^{-1}[z{]}_{+}^{\alpha}+{J}_{EI}^{-1}{J}_{II}z-{J}_{EI}^{-1}{J}_{II}{g}_{E}+{g}_{I}$ grows faster than the numerator for $z\gg 1$, $\text{tr}({\mathbf{M}}_{\mathbf{STF}})$ becomes negative for large ${r}_{E}$.
Similarly, in the large ${r}_{E}$ limit, ${A}_{11}+{A}_{22}+{A}_{33}$ is positive.
Similarly, in the large ${r}_{E}$ limit, $\text{det}({\mathbf{M}}_{\mathbf{S}\mathbf{T}\mathbf{F}})$ is negative.
Therefore, similar to E-to-E STD, network dynamics can also be stabilized by E-to-I STF.
Appendix 2
Conditions for ISN in networks with E-to-I STF
Here, we identify the condition for being an ISN in supralinear networks with E-to-I STF. If inhibition is frozen, in other words, if feedback inhibition is absent, the Jacobian of the system becomes:
For the system with frozen inhibition, the dynamics are stable if
and
Therefore, if the network is an ISN at the fixed point, the following condition has to be satisfied:
Note that this condition is independent of the facilitation variable $u$ of the E-to-I STF. We further define the ISN index for the system with E-to-I STF as follows:
Appendix 3
Conditions for paradoxical response in networks with E-to-I STF
Next, we identify the condition for the paradoxical effect in supralinear networks with E-to-I STF. The excitatory nullcline is defined by
For ${r}_{E,I}>0$, we have
The slope of the excitatory nullcline in the ${r}_{E}$/${r}_{I}$ plane, where the $x$ axis is ${r}_{E}$ and the $y$ axis is ${r}_{I}$, can be written as follows
Note that the slope of the excitatory nullcline is nonlinear. For the paradoxical effect to occur, the slope of the excitatory nullcline at the fixed point of the system has to be positive. We therefore have
We exploit a separation of timescales between the fast neural activity and the slow short-term plasticity variable, and therefore set the facilitation variable to its fixed-point value corresponding to the dynamical value of ${r}_{E}$. We can then write the inhibitory nullcline as follows
In the region of rates ${r}_{E,I}>0$, we have
The slope of the inhibitory nullcline can be written as follows
In addition to the positive slope of the excitatory nullcline, the slope of the inhibitory nullcline at the fixed point of the system has to be larger than the slope of the excitatory nullcline. We therefore have
The above condition is identical to the stability condition on the determinant of the Jacobian of the system with E-to-I STF (Eq. (112)). Therefore, the condition is always satisfied when the system with E-to-I STF is stable.
Note that the ISN condition shown in Eq. (116) is identical to the condition for the paradoxical effect shown in Eq. (121). Therefore, in networks with E-to-I STF alone, the paradoxical effect implies inhibitory stabilization and vice versa. We thus use the paradoxical effect as a proxy for inhibitory stabilization.
Appendix 4
Change in steady-state activity of unstimulated co-tuned neurons
To analyze pattern completion in supralinear networks, we considered a network with one excitatory population and one inhibitory population. Neurons in the excitatory population are co-tuned to the same stimulus feature and are separated into two subsets, denoted $E_{11}$ and $E_{12}$. The dynamics of the system can be described as follows:
The change in the firing rate of Subset 2 of the excitatory population, $\delta {r}_{{E}_{12}}$, can be written as a function of the change in the input to Subset 1, $\delta {g}_{{E}_{11}}$:
where $\mathbb{1}$ is the identity matrix and $\mathbf{F}$ is given by
where ${f}_{{E}_{11}}^{{}^{\prime}}$, ${f}_{{E}_{12}}^{{}^{\prime}}$, and ${f}_{I}^{{}^{\prime}}$ are the derivatives of the input-output functions evaluated at the fixed point. The term $\text{det}(\mathbb{1}-\mathbf{F}\mathbf{J})$ has to be positive to ensure the stability of the system.
Clearly, if the term ${J}_{{E}_{12}{E}_{11}}+{J}_{{E}_{12}{E}_{11}}{J}_{II}{f}_{I}^{{}^{\prime}}-{J}_{{E}_{12}I}{J}_{I{E}_{11}}{f}_{I}^{{}^{\prime}}$ is positive (negative), increasing the input to Subset 1 leads to an increase (a decrease) in the activity of neurons in Subset 2. As the input to Subset 1 increases, the firing rate of the inhibitory population ${r}_{I}$, and with it ${f}_{I}^{{}^{\prime}}$, increases. In the presence of E-to-E STD or E-to-I STF, ${J}_{{E}_{12}{E}_{11}}$ decreases or ${J}_{I{E}_{11}}$ increases with the input to Subset 1. As a result, the sign of ${J}_{{E}_{12}{E}_{11}}+{J}_{{E}_{12}{E}_{11}}{J}_{II}{f}_{I}^{{}^{\prime}}-{J}_{{E}_{12}I}{J}_{I{E}_{11}}{f}_{I}^{{}^{\prime}}$ can switch from positive to negative as the input to Subset 1 increases, indicating that the effect on the activity of co-tuned unstimulated neurons in the same ensemble can switch from potentiation to suppression. Note that this behavior differs from linear networks, in which the change is independent of the input and firing rates.
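The sign switch can be illustrated by evaluating this term as the inhibitory gain ${f}_{I}^{{}^{\prime}}$ grows. The weight values below are illustrative placeholders, chosen so that the coefficient of ${f}_{I}^{{}^{\prime}}$ is negative overall, as the argument above requires.

```python
# Sign of the pattern-completion term as a function of the inhibitory
# gain f_I' (illustrative weights; the minus sign follows the text above).
J_E12E11, J_II, J_E12I, J_IE11 = 1.0, 0.5, 1.2, 1.5

def completion_term(f_I_prime):
    # J_E12E11 + J_E12E11*J_II*f_I' - J_E12I*J_IE11*f_I'
    return J_E12E11 + (J_E12E11 * J_II - J_E12I * J_IE11) * f_I_prime
```

With these values the term is positive at low inhibitory gain (potentiation of the unstimulated subset) and negative at high gain (suppression).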
Appendix 5
Data availability
This project is a theory project without data. All simulation code has been deposited on GitHub under https://github.com/fmibasel/gzenkenonlineartransientamplification, (copy archived at swh:1:rev:6ff6ff10b9f4994a0f948a987a66cc82f98451e1).
References
Analysis of the stabilized supralinear network. Neural Computation 25:1994–2037. https://doi.org/10.1162/NECO_a_00472
Nonlinear stimulus representations in neural circuits with approximate excitatory-inhibitory balance. PLOS Computational Biology 16:e1008192. https://doi.org/10.1371/journal.pcbi.1008192
Modulation of the input/output function of rat piriform cortex pyramidal cells. Journal of Neurophysiology 72:644–658. https://doi.org/10.1152/jn.1994.72.2.644
A universal model for spike-frequency adaptation. Neural Computation 15:2523–2564. https://doi.org/10.1162/089976603322385063
Dopamine gates LTP induction in lateral amygdala by suppressing feedforward inhibition. Nature Neuroscience 6:587–592. https://doi.org/10.1038/nn1058
Coding with transient trajectories in recurrent neural networks. PLOS Computational Biology 16:e1007655. https://doi.org/10.1371/journal.pcbi.1007655
Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. Journal of Neurophysiology 94:3637–3642. https://doi.org/10.1152/jn.00686.2005
Efficient codes and balanced networks. Nature Neuroscience 19:375–382. https://doi.org/10.1038/nn.4243
Binary spiking in auditory cortex. The Journal of Neuroscience 23:7940–7949. https://doi.org/10.1523/JNEUROSCI.23-21-07940.2003
An emergent neural coactivity code for dynamic memory. Nature Neuroscience 24:694–704. https://doi.org/10.1038/s41593-021-00820-w
Dynamic predictions: oscillations and synchrony in top-down processing. Nature Reviews Neuroscience 2:704–716. https://doi.org/10.1038/35094565
Mnemonic coding of visual space in the monkey’s dorsolateral prefrontal cortex. Journal of Neurophysiology 61:331–349. https://doi.org/10.1152/jn.1989.61.2.331
Short-term depression and transient memory in sensory cortex. Journal of Computational Neuroscience 43:273–294. https://doi.org/10.1007/s10827-017-0662-8
Neural signatures of cell assembly organization. Nature Reviews Neuroscience 6:399–407. https://doi.org/10.1038/nrn1669
Inhibitory plasticity: balance, control, and codependence. Annual Review of Neuroscience 40:557–579. https://doi.org/10.1146/annurev-neuro-072116-031005
Slow dynamics and high variability in balanced cortical networks with clustered connections. Nature Neuroscience 15:1498–1505. https://doi.org/10.1038/nn.3220
Computation by ensemble synchronization in recurrent networks with synaptic depression. Journal of Computational Neuroscience 13:111–124. https://doi.org/10.1023/a:1020110223441
Processing of sounds by population spikes in a model of primary auditory cortex. Frontiers in Neuroscience 1:197–209. https://doi.org/10.3389/neuro.01.1.1.015.2007
The organizing principles of neuronal avalanches: cell assemblies in the cortex? Trends in Neurosciences 30:101–110. https://doi.org/10.1016/j.tins.2007.01.005
Temporal whitening by power-law adaptation in neocortical neurons. Nature Neuroscience 16:942–948. https://doi.org/10.1038/nn.3431
The contribution of spike threshold to the dichotomy of cortical simple and complex cells. Nature Neuroscience 7:1113–1122. https://doi.org/10.1038/nn1310
Inhibitory stabilization and cortical computation. Nature Reviews Neuroscience 22:21–37. https://doi.org/10.1038/s41583-020-00390-z
Adaptation to sensory input tunes visual cortex to criticality. Nature Physics 11:659–663. https://doi.org/10.1038/nphys3370
Paradoxical effects of external modulation of inhibitory interneurons. The Journal of Neuroscience 17:4382–4388. https://doi.org/10.1523/JNEUROSCI.17-11-04382.1997
Patterns of synchrony in neural networks with spike adaptation. Neural Computation 13:959–992. https://doi.org/10.1162/08997660151134280
A recurrent network mechanism of time integration in perceptual decisions. The Journal of Neuroscience 26:1314–1328. https://doi.org/10.1523/JNEUROSCI.3733-05.2006
Software: Nonlinear Transient Amplification, version swh:1:rev:6ff6ff10b9f4994a0f948a987a66cc82f98451e1. Software Heritage.
Short-term synaptic plasticity. Annual Review of Physiology 64:355–405. https://doi.org/10.1146/annurev.physiol.64.092501.114547
Article and author information
Author details
Funding
Novartis Foundation
 Yue Kris Wu
 Friedemann Zenke
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank Rainer W Friedrich, Claire MeissnerBernard, William F Podlaski, and members of the Zenke Group for comments and discussions. This work was supported by the Novartis Research Foundation.
Copyright
© 2021, Wu and Zenke
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.