Abstract
Biological memory networks are thought to store information in the synaptic connectivity between assemblies of neurons. Recent models suggest that these assemblies contain both excitatory and inhibitory neurons (E/I assemblies), resulting in co-tuning and precise balance of excitation and inhibition. To understand computational consequences of E/I assemblies under biologically realistic constraints we created a spiking network model based on experimental data from telencephalic area Dp of adult zebrafish, a precisely balanced recurrent network homologous to piriform cortex. We found that E/I assemblies stabilized firing rate distributions compared to networks with excitatory assemblies and global inhibition. Unlike classical memory models, networks with E/I assemblies did not show discrete attractor dynamics. Rather, responses to learned inputs were locally constrained onto manifolds that “focused” activity into neuronal subspaces. The covariance structure of these manifolds supported pattern classification when information was retrieved from selected neuronal subsets. Networks with E/I assemblies therefore transformed the geometry of neuronal coding space, resulting in continuous representations that reflected both relatedness of inputs and an individual’s experience. Such continuous internal representations enable fast pattern classification, can support continual learning, and may provide a basis for higher-order learning and cognitive computations.
Introduction
Autoassociative memory establishes internal representations of specific inputs that may serve as a basis for higher brain functions including classification and prediction. Representational learning in autoassociative memory networks is thought to involve spike timing-dependent synaptic plasticity and potentially other mechanisms that enhance connectivity among assemblies of excitatory neurons (Hebb, 1949; Miehl et al., 2022; Ryan et al., 2015). Classical theories proposed that assemblies define discrete attractor states and map related inputs onto a common stable output pattern. Hence, neuronal assemblies are thought to establish internal representations, or memories, that classify inputs relative to previous experience via attractor dynamics (Hopfield, 1982; Kohonen, 1984). However, brain areas with memory functions such as the hippocampus or neocortex often exhibit dynamics that are atypical of attractor networks, including irregular firing patterns, transient responses to inputs, and high trial-to-trial variability (Iurilli and Datta, 2017; Renart et al., 2010; Shadlen and Newsome, 1994).
Irregular, fluctuation-driven firing reminiscent of cortical activity emerges in recurrent networks when neurons receive strong excitatory (E) and inhibitory (I) synaptic input (Shadlen and Newsome, 1994; Van Vreeswijk and Sompolinsky, 1996). In such “balanced state” networks, enhanced connectivity among assemblies of E neurons is prone to generate runaway activity unless matched I connectivity establishes co-tuning of E and I inputs in individual neurons. The resulting state of “precise” synaptic balance stabilizes firing rates because inhomogeneities or fluctuations in excitation are tracked by correlated inhibition (Hennequin et al., 2017). E/I co-tuning has been observed experimentally in cortical brain areas (Bhatia et al., 2019; Froemke et al., 2007; Okun and Lampl, 2008; Rupprecht and Friedrich, 2018; Wehr and Zador, 2003) and emerged in simulations that included spike-timing dependent plasticity at I synapses (Lagzi and Fairhall, 2022; Litwin-Kumar and Doiron, 2014; Vogels et al., 2011; Zenke et al., 2015). In simulations, E/I co-tuning can be established by including I neurons in assemblies, resulting in “E/I assemblies” where I neurons track activity of E neurons (Barron et al., 2017; Lagzi and Fairhall, 2022; Mackwood et al., 2021). Exploring the structural basis of E/I co-tuning in biological networks is challenging because it requires the dense reconstruction of large neuronal circuits at synaptic resolution (Friedrich and Wanner, 2021).
Modeling studies started to investigate effects of E/I assemblies on network dynamics (Chenkov, 2017; Mackwood et al., 2021; Sadeh and Clopath, 2020a; Schulz et al., 2021) but the impact on neuronal computations in the brain remains unclear. Balanced state networks can exhibit a broad range of dynamical behaviors, including chaotic firing, transient responses and stable states (Festa et al., 2018; Hennequin et al., 2014; Litwin-Kumar and Doiron, 2012; Murphy and Miller, 2009; Rost et al., 2018; Roudi and Latham, 2007), implying that computational consequences of E/I assemblies depend on network parameters. We therefore examined effects of E/I assemblies on autoassociative memory in a spiking network model that was constrained by experimental data from telencephalic area Dp of adult zebrafish, which is homologous to piriform cortex (Mueller et al., 2011).
Dp and piriform cortex receive direct input from mitral cells in the olfactory bulb (OB) and have been proposed to function as autoassociative memory networks (Haberly, 2001; Wilson and Sullivan, 2011). Consistent with this hypothesis, manipulations of neuronal activity in piriform cortex affected olfactory memory (Meissner-Bernard et al., 2018; Sacco and Sacchetti, 2010). In both brain areas, odors evoke temporally patterned, distributed activity patterns (Blazing and Franks, 2020; Stettler and Axel, 2009; Yaksi et al., 2009) that are dominated by synaptic inputs from recurrent connections (Franks et al., 2011; Rupprecht and Friedrich, 2018) and modified by experience (Chapuis and Wilson, 2011; Frank et al., 2019; Jacobson et al., 2018; Pashkovski et al., 2020). Whole-cell voltage clamp recordings revealed that neurons in posterior Dp (pDp) received large E and I synaptic inputs during odor responses. These inputs were co-tuned in odor space and correlated on fast timescales, demonstrating that pDp enters a transient state of precise synaptic balance during odor stimulation (Rupprecht and Friedrich, 2018).
We found that network models of pDp without E/I co-tuning generated persistent attractor dynamics and exhibited a biologically unrealistic broadening of the firing rate distribution. Introducing E/I assemblies established E/I co-tuning, stabilized the firing rate distribution, and abolished persistent attractor states. In networks with E/I assemblies, population activity was locally constrained onto manifolds that represented learned and related inputs by “focusing” activity into neuronal subspaces. The covariance structure of manifolds supported pattern classification when information was retrieved from selected neuronal subsets. These results show that autoassociative memory networks constrained by biological data operate in a balanced regime where information is contained in the geometry of neuronal manifolds. Predictions derived from these analyses may be tested experimentally by measurements of neuronal population activity in zebrafish.
Results
A spiking network model based on pDp
To analyze memory-related computational functions of E/I assemblies under biologically realistic constraints we created a spiking neural network model, pDpsim, based on experimental data from pDp (Figure 1A) (Blumhagen et al., 2011; Rupprecht and Friedrich, 2018; Yaksi et al., 2009). pDpsim comprised 4000 E neurons and 1000 I neurons, consistent with the estimated total number of 4000 – 10000 neurons in adult zebrafish pDp (unpublished observations). The network received afferent input from 1500 mitral cells in the OB with a mean spontaneous firing rate of 6 Hz (Friedrich and Laurent, 2004, 2001; Tabor and Friedrich, 2008). Odors were modeled by increasing the firing rates of 10% of mitral cells to a mean of 15 Hz and decreasing the firing rates of 5% of mitral cells to a mean of 2 Hz (Methods, Figure 1B). As a result, the mean activity increased only slightly while the variance of firing rates across the mitral cell population increased approximately 7-fold, consistent with experimental observations (Friedrich and Laurent, 2004; Wanner and Friedrich, 2020).
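The OB input model described above can be sketched as follows. This is a minimal illustration, not the fitted pDpsim code: for simplicity, activated and suppressed mitral cells are set exactly to the stated mean rates (15 Hz and 2 Hz) rather than drawn from distributions around them, and the function name and signature are assumptions.

```python
import numpy as np

def make_odor_pattern(n_mitral=1500, baseline_hz=6.0, frac_up=0.10,
                      up_hz=15.0, frac_down=0.05, down_hz=2.0, seed=0):
    """Illustrative OB odor input: a random 10% of mitral cells are set to
    the elevated mean rate and a random 5% to the suppressed mean rate."""
    rng = np.random.default_rng(seed)
    rates = np.full(n_mitral, baseline_hz)
    cells = rng.permutation(n_mitral)
    n_up = int(frac_up * n_mitral)      # 150 activated cells
    n_down = int(frac_down * n_mitral)  # 75 suppressed cells
    rates[cells[:n_up]] = up_hz
    rates[cells[n_up:n_up + n_down]] = down_hz
    return rates

odor = make_odor_pattern()
# Mean rate rises only from 6.0 to 6.7 Hz, while the across-population
# variance of firing rates increases strongly relative to baseline.
```

With these numbers the population mean increases by roughly 12% while the rate variance across mitral cells grows from near zero to several Hz², qualitatively matching the stated ~7-fold variance increase over the (heterogeneous) spontaneous rates.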
pDpsim consisted of sparsely connected integrate-and-fire neurons with conductance-based synapses (connection probability ≤5%). Model parameters were taken from the literature when available and otherwise determined to reproduce experimentally accessible observables (Figure 1F-H; Methods). The mean firing rate was <0.1 Hz in the absence of stimulation and increased to ~1 Hz during odor presentation (Figure 1C, F) (Blumhagen et al., 2011; Rupprecht et al., 2021; Rupprecht and Friedrich, 2018). Population activity was odor-specific and activity patterns evoked by uncorrelated OB inputs remained uncorrelated in Dp (Figure 1H) (Yaksi et al., 2009). The synaptic conductance during odor presentation substantially exceeded the resting conductance and inputs from other E neurons contributed >80% of the excitatory synaptic conductance (Figure 1G). Hence, pDpsim entered an inhibition-stabilized balanced state (Sadeh and Clopath, 2020b) during odor stimulation (Figure 1D, E) with recurrent input dominating over afferent input, as observed in Dp (Rupprecht and Friedrich, 2018). Shuffling spike times of inhibitory neurons resulted in runaway activity with a probability of ~80%, demonstrating that activity was indeed inhibition-stabilized. These results were robust against parameter variations (Methods). pDpsim therefore reproduced key features of pDp.
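A single integration step of a conductance-based leaky integrate-and-fire neuron of the kind used here might look as follows. All constants are generic placeholder values, not the parameters fitted to pDp, and the Euler scheme is one simple choice among several.

```python
import numpy as np

# Minimal Euler update of a conductance-based leaky integrate-and-fire
# neuron. All parameter values are generic placeholders, not pDpsim values.
def lif_step(v, g_e, g_i, dt=1e-4, E_L=-70e-3, E_e=0.0, E_i=-80e-3,
             g_L=10e-9, C=200e-12, v_th=-50e-3, v_reset=-70e-3):
    # Leak, excitatory, and inhibitory currents drive the membrane potential
    dv = (g_L * (E_L - v) + g_e * (E_e - v) + g_i * (E_i - v)) / C
    v = v + dt * dv
    spiked = v >= v_th
    v = np.where(spiked, v_reset, v)  # reset membrane potential on spike
    return v, spiked
```

In the full network, `g_e` and `g_i` would be sums of afferent, recurrent E, and I synaptic conductances decaying between presynaptic spikes; here they are passed in directly to keep the sketch self-contained.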
Co-tuning and stability of networks with E/I assemblies
To create networks with defined neuronal assemblies we re-wired a small subset of the connections in randomly connected (rand) networks. An assembly representing a “learned” odor was generated by identifying the 100 E neurons that received the largest number of connections from the activated mitral cells representing this odor and increasing the probability of E-E connectivity among these neurons by the factor α (Figure 2A). The number of incoming connections per neuron was maintained by randomly eliminating connections from neurons outside the assembly. In each network, we created 15 assemblies representing uncorrelated odors. As a consequence, ~30% of E neurons were part of an assembly with few neurons participating in multiple assemblies. Odor-evoked activity within assemblies was higher than the population mean and increased with α (Figure 2B). When α reached a critical value of ~6, networks became unstable and generated runaway activity (Figure 2B).
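The assembly-creation procedure can be sketched as below. The matrix layout, function name, and exact rewiring rule are illustrative assumptions, but the logic follows the text: select the E neurons receiving the most input from the activated mitral cells, enhance E-E connectivity among them by the factor α, and compensate each added connection by removing one from outside the assembly to preserve in-degree.

```python
import numpy as np

def build_assembly(W_aff, W_ee, active_mc, size=100, alpha=5.0,
                   p_ee=0.05, seed=0):
    """Sketch of assembly creation. W_aff (n_E x n_MC) and W_ee (n_E x n_E)
    are binary connectivity matrices with rows = postsynaptic neurons."""
    rng = np.random.default_rng(seed)
    # Members: the `size` E neurons with most connections from active
    # mitral cells of this odor.
    n_inputs = W_aff[:, active_mc].sum(axis=1)
    members = np.argsort(n_inputs)[-size:]
    mset = set(members.tolist())
    for post in members:
        for pre in members:
            if pre == post or W_ee[post, pre]:
                continue
            if rng.random() < alpha * p_ee:  # enhanced within-assembly prob.
                W_ee[post, pre] = 1
                # Keep in-degree fixed: drop one connection from outside.
                outside = [j for j in np.flatnonzero(W_ee[post])
                           if j not in mset]
                if outside:
                    W_ee[post, rng.choice(outside)] = 0
    return members
```

Repeating this for 15 uncorrelated odors yields the described networks in which roughly 30% of E neurons belong to an assembly.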
We first set α = 5 and scaled I-to-E connection weights uniformly by a factor χ (“Scaled I” networks) until population firing rates in response to learned odors were similar to firing rates in rand networks (Figure 2A, C, D, F). Under these conditions, activity within assemblies was still amplified substantially in comparison to the corresponding neurons in rand networks (“pseudo-assembly”) whereas activity outside assemblies was substantially reduced (Figure 2E, G). Hence, non-specific scaling of inhibition resulted in a divergence of firing rates that exhausted the dynamic range of individual neurons in the population, implying that homeostatic global inhibition is insufficient to maintain a stable firing rate distribution. We further observed that neurons within activated assemblies produced regular spike trains (Supplementary Figure 2IIA, B), indicating that the balanced regime was no longer maintained.
In rand networks, correlations between E and I synaptic conductances in individual neurons were slightly above zero (Figure 2H), presumably as a consequence of stochastic inhomogeneities in the connectivity (Pehlevan and Sompolinsky, 2014). In Scaled I networks, correlations remained near zero, indicating that E assemblies by themselves did not enhance E/I co-tuning (Figure 2H). Scaled I networks with structured E but random I connectivity can therefore not account for the stability, synaptic balance, and E/I co-tuning observed experimentally (Rupprecht and Friedrich, 2018).
We next created structured networks with more precise E/I balance by including I neurons within assemblies. We first selected the 25 I neurons that received the largest number of connections from the 100 E neurons of an assembly. The connectivity between these two sets of neurons was then enhanced by two procedures: (1) in “Tuned I” networks, the probability of I-to-E connections was increased by a factor β while E-to-I connections remained unchanged. (2) In “Tuned E+I” networks, the probability of I-to-E connections was increased by β and the probability of E-to-I connections was increased by γ (Figure 2A, Supplementary Figure 2IA). As for “Scaled I” networks, β and γ were adjusted to obtain mean population firing rates of ~1 Hz in response to learned odors (Figure 2F). The other observables used to constrain the rand networks remained unaffected (Supplementary Figure 2I B-D).
In Tuned networks, correlations between E and I conductances in individual neurons were significantly higher than in rand or Scaled I networks (Figure 2H). To further analyze E/I co-tuning we projected synaptic conductances of each neuron onto a line representing the E/I ratio expected in a balanced network (“balanced axis”) and onto an orthogonal line (“counter-balanced axis”; Figure 2I). The ratio between the standard deviations along these axes has been used previously to quantify E/I co-tuning in experimental studies (Rupprecht and Friedrich, 2018). This ratio was close to 1 in rand and Scaled I networks but significantly higher in Tuned I and Tuned E+I networks (Figure 2I). Hence, Tuned networks exhibited significant co-tuning along the balanced axis, as observed in pDp (Rupprecht and Friedrich, 2018).
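The co-tuning metric can be computed as follows. The assumed expected E/I ratio (`ei_ratio`) and the centering of the conductance trajectory are simplifying choices made for this sketch.

```python
import numpy as np

def co_tuning_ratio(g_e, g_i, ei_ratio=1.0):
    """Ratio of conductance variability along the 'balanced axis'
    (g_i = ei_ratio * g_e) vs. the orthogonal 'counter-balanced axis'.
    Values >> 1 indicate E/I co-tuning; ~1 indicates no co-tuning."""
    u = np.array([1.0, ei_ratio])
    u = u / np.linalg.norm(u)        # unit vector along the balanced axis
    v = np.array([-u[1], u[0]])      # orthogonal (counter-balanced) axis
    pts = np.stack([g_e - np.mean(g_e), g_i - np.mean(g_i)])  # 2 x n
    return (u @ pts).std() / (v @ pts).std()
```

For tightly correlated E and I conductances this ratio grows large, while for independent conductances it stays near 1, mirroring the contrast between Tuned and rand/Scaled I networks.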
In Tuned networks, activity within assemblies was higher than the mean activity but substantially lower and more irregular than in Scaled I networks (Figure 2D, G; Supplementary Figure 2IIA,B). Unlike in Scaled I networks, mean firing rates evoked by novel odors were indistinguishable from those evoked by learned odors and from mean firing rates in rand networks (Figure 2F). Hence, E/I co-tuning prevented excessive amplification of activity in assemblies without affecting global network activity.
Effects of E/I assemblies on attractor dynamics
We next explored effects of assemblies on network dynamics. In rand networks, firing rates increased after stimulus onset and rapidly returned to a low baseline after stimulus offset. Correlations between activity patterns evoked by the same odor at different time points and in different trials were positive but substantially lower than unity, indicating high variability. Hence, rand networks showed transient and variable responses to input patterns, consistent with the typical behavior of generic balanced state networks (Shadlen and Newsome, 1994; Van Vreeswijk and Sompolinsky, 1996). Scaled networks responded to learned odors with persistent firing of assembly neurons and high pattern correlations across trials and time, implying attractor dynamics (Hopfield, 1982; Khona and Fiete, 2022), whereas Tuned networks exhibited transient responses and modest pattern correlations similar to rand networks. Hence, Tuned networks did not exhibit stable attractor states, presumably because precise synaptic balance prevented strong recurrent amplification within E/I assemblies.
In classical memory networks, attractor dynamics mediates autoassociative pattern classification because noisy or corrupted versions of learned inputs converge onto a consistent output. Hence, classical attractor memory networks perform pattern completion, which may be assessed by different procedures. Completion of partial input patterns can be examined by stimulating subsets of E neurons in an assembly during baseline activity and testing for the recruitment of the remaining assembly neurons (Sadeh and Clopath, 2021; Vogels et al., 2011). We found that assemblies were recruited by partial inputs in all structured pDpsim networks (Scaled and Tuned) without a significant increase in the overall population activity (Supplementary Figure 3A).
To assess pattern completion under more biologically realistic conditions, we morphed a novel odor into a learned odor (Figure 3A), or a learned odor into another learned odor. In rand networks, correlations between activity patterns across E neurons (output correlations) increased approximately linearly as a morphed odor approached a learned odor and remained lower than the corresponding pattern correlations in the OB (input correlations, Figure 3B and Supplementary Figure 3B). This is consistent with the absence of pattern completion in generic random networks (Babadi and Sompolinsky, 2014; Marr, 1969; Schaffer et al., 2018; Wiechert et al., 2010). Scaled I networks, in contrast, showed typical signatures of pattern completion: output correlations increased abruptly as the learned odor was approached and eventually exceeded the corresponding input correlations (Figure 3B and Supplementary Figure 3B). In Tuned networks, output correlations changed approximately linearly and never exceeded input correlations, similar to observations in rand networks (Figure 3B and Supplementary Figure 3B). Similarly, firing rates of assembly neurons increased abruptly as the learned odor was approached in Scaled I networks but not in Tuned or rand networks (Figure 3C). Hence, networks with E/I assemblies did not perform pattern completion in response to naturalistic stimuli, consistent with the absence of stable attractor dynamics.
Geometry of activity patterns in networks with E/I assemblies
We next examined how E/I assemblies transform the geometry of neuronal representations, i.e., their organization in a state space where each axis represents the activity of one neuron or one pattern of neural covariance (Chung and Abbott, 2021; Gallego et al., 2017; Langdon et al., 2023). To address this general question, we created an odor subspace and examined its transformation by pDpsim. The subspace consisted of a set of OB activity patterns representing four uncorrelated pure odors, which were assigned to the corners of a square. Mixtures between the pure odors were represented by pixels inside the square. OB activity patterns representing mixtures were generated by selecting active mitral cells from each of the pure odors’ patterns with probabilities depending on the relative distances from the corners (Methods). Correlations between OB activity patterns representing pure odors and mixtures decreased approximately linearly as a function of distance in the subspace (Figure 4B). The odor subspace therefore represented a hypothetical olfactory environment with four odor sources at the corners of a square arena. Locations in the odor subspace were visualized by the color code depicted in Figure 4A.
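Mixture generation can be sketched as follows. The exponential distance weighting (controlled by `sharpness`) is an illustrative assumption standing in for the exact probability rule given in the Methods; function and parameter names are likewise hypothetical.

```python
import numpy as np

def mix_pattern(pure_active, x, y, n_active=150, sharpness=8.0, seed=0):
    """Sketch of odor-subspace mixtures. pure_active: four index arrays of
    active mitral cells for the corner odors, ordered [(0,0),(1,0),(0,1),
    (1,1)]; (x, y) in [0,1]^2 is the location in the square. Active cells
    are drawn from each corner pattern with a weight decaying with distance
    from that corner (counts may sum to slightly less than n_active)."""
    rng = np.random.default_rng(seed)
    corners = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
    d = np.linalg.norm(corners - np.array([x, y]), axis=1)
    w = np.exp(-sharpness * d)
    w /= w.sum()
    counts = np.floor(w * n_active).astype(int)
    chosen = [rng.choice(p, size=min(c, len(p)), replace=False)
              for p, c in zip(pure_active, counts)]
    return np.unique(np.concatenate(chosen))
```

At a corner the mixture reduces to (nearly) the pure-odor pattern, at the center it draws roughly equally from all four patterns, and pattern correlations fall off smoothly with distance, as described for the subspace.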
To examine how pDpsim transforms this odor subspace we projected time-averaged activity patterns onto the first two principal components (PCs). As expected, the distribution of OB activity patterns in PC space closely reflected the geometry of the square (Figure 4C). This geometry was largely maintained in the output of rand networks, consistent with the notion that random networks tend to preserve similarity relationships between input patterns (Babadi and Sompolinsky, 2014; Marr, 1969; Schaffer et al., 2018; Wiechert et al., 2010). We next examined outputs of Scaled or Tuned networks containing 15 assemblies, two of which were aligned with pure odors. The four odors delineating the odor subspace therefore consisted of two learned and two novel odors. In Scaled I networks, odor inputs were discretely distributed between three locations in state space representing the two learned odors and the residual odors, consistent with the expected attractor states (Figure 4D, E). Tuned networks, in contrast, generated continuous representations of the odor subspace (Figure 4D). The geometry of these representations was distorted in the vicinity of learned odors, which were further removed from most of the mixtures than novel odors. These geometric transformations were less obvious when activity patterns of Tuned networks were projected onto the first two PCs extracted from rand networks (Supplementary Figure 4A). Hence, E/I assemblies introduced local curvature into the coding space that partially separated learned from novel odors without disrupting the continuity of the subspace representation.
The curvature of the representation manifold in Tuned networks suggests that E/I assemblies confine activity along specific dimensions of the state space, indicating that activity was locally constrained onto manifolds. To test this hypothesis, we first quantified the dimensionality of odor-evoked activity by the participation ratio, a measure that depends on the eigenvalue distribution of the pairwise neuronal covariance matrix (Altan et al., 2021) (Methods). As expected, dimensionality was highest in rand networks and very low in Scaled I networks, reflecting the discrete attractor states (Figure 4F). In Tuned networks, dimensionality was high compared to Scaled I networks but lower than in rand networks (Figure 4F). The same trend was observed when we sampled data from a limited number of neurons to mimic experimental conditions (Supplementary Figure 4D). Furthermore, when restricting the analysis to activity evoked by novel odors and related mixtures, dimensionality was similar between rand and Tuned networks (Figure 4F). These observations, together with additional analyses of dimensionality (Supplementary Figure 4B, C), support the hypothesis that E/I assemblies locally constrain neuronal dynamics onto manifolds without establishing discrete attractor states. Generally, these observations are consistent with recent findings showing effects of specific circuit motifs on the dimensionality of neural activity (Dahmen et al., 2023; Recanatesi et al., 2019).
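The participation ratio has a standard closed form, PR = (Σᵢλᵢ)² / Σᵢλᵢ², where λᵢ are the eigenvalues of the neuron-by-neuron covariance matrix; it equals the number of neurons for isotropic activity and approaches 1 when a single dimension dominates.

```python
import numpy as np

def participation_ratio(X):
    """Dimensionality of activity X (samples x neurons) from the eigenvalue
    spectrum of the neuronal covariance matrix:
    PR = (sum(lambda))^2 / sum(lambda^2)."""
    C = np.cov(X, rowvar=False)
    lam = np.linalg.eigvalsh(C)
    return lam.sum() ** 2 / (lam ** 2).sum()
```

In this framework, the ordering reported above corresponds to PR(rand) > PR(Tuned) >> PR(Scaled I).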
We further tested this hypothesis by examining the local geometry of activity patterns around representations of learned and novel odors. If E/I assemblies locally confine activity onto manifolds, small changes of input patterns should modify output patterns along preferred dimensions near representations of learned but not novel odors. To test this prediction, we selected sets of input patterns including each pure odor and the seven most closely related mixtures. We then quantified the variance of the projections of their corresponding output patterns onto the first 40 PCs (Figure 4G). This variance decreased only slightly as a function of PC rank for activity patterns related to novel odors, indicating that patterns varied similarly in all directions. For patterns related to learned odors, in contrast, the variance was substantially higher in the direction of the first few PCs, implying variation along preferred dimensions. In addition, we measured the distribution of angles between edges connecting activity patterns representing pure odors and their corresponding related mixtures in high-dimensional PC space (Figure 4H, inset; Methods; Schoonover et al., 2021). Angles were narrowly distributed around 1 rad in rand networks but smaller in the vicinity of learned patterns in Tuned networks (Figure 4H). These observations further support the conclusion that E/I assemblies locally constrain neuronal dynamics onto manifolds.
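The angle analysis can be reproduced as follows: compute the angles between all pairs of edges that connect a reference pattern to its related mixture patterns. For independent high-dimensional patterns such angles concentrate near arccos(1/2) ≈ 1.05 rad, consistent with the value reported for rand networks; the function and variable names are our own.

```python
import numpy as np

def edge_angles(center, neighbors):
    """Angles (radians) between all pairs of edges connecting a reference
    pattern `center` to each of its related patterns (rows of `neighbors`)."""
    E = neighbors - center                             # edge vectors
    E = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit edges
    cos = np.clip(E @ E.T, -1.0, 1.0)                  # pairwise cosines
    iu = np.triu_indices(len(E), k=1)                  # unique pairs
    return np.arccos(cos[iu])
```

Systematically smaller angles around learned patterns indicate that the corresponding edges share preferred directions, i.e., that activity varies along a local manifold.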
Activity may be constrained non-isotropically by amplification along a subset of dimensions, by inhibition along other dimensions, or both. E neurons participating in E/I assemblies had large loadings on the first two PCs (Supplementary Figure 4E-F) and responded to learned odors with increased firing rates as compared to the mean rates in Tuned E+I and rand networks. Firing rates of the remaining neurons, in contrast, were slightly lower than the corresponding mean rates in rand networks (Figure 2G). Consistent with these observations, the variance of activity projected onto the first few PCs was higher in Tuned E+I than in rand networks (Figure 4G) while the variance along higher-order PCs was lower (Figure 4G, inset). These observations indicate that activity manifolds are delineated both by amplification of activity along preferred directions and by suppression of activity along other dimensions.
Pattern classification by networks with E/I assemblies
The lack of stable attractor states raises the question of how transformations of activity patterns by Tuned networks affect pattern classification. To quantify the association between an activity pattern and a class of patterns representing a pure odor we computed the Mahalanobis distance (dM). This measure quantifies the distance between the pattern and the class center, normalized by the intra-class variability along the relevant direction. Hence, dM is a measure for the discriminability of a given pattern from a given class. In bidirectional comparisons between patterns from different classes, the mean dM may be asymmetric when the distributions of patterns within the classes are different.
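dM can be computed per reference class as follows; the asymmetry noted above arises because each direction of comparison uses the covariance of its own reference class. The small ridge term is a numerical safeguard for limited sample sizes, not part of the definition.

```python
import numpy as np

def mahalanobis(x, class_patterns, ridge=1e-6):
    """Mahalanobis distance of pattern x from a class of patterns (rows):
    d_M = sqrt((x - mu)^T C^{-1} (x - mu)), with mu and C the class mean
    and covariance. A ridge keeps C invertible when patterns are few."""
    mu = class_patterns.mean(axis=0)
    C = np.cov(class_patterns, rowvar=False)
    C = C + ridge * np.eye(C.shape[0])
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.solve(C, diff)))
```

Because the normalizing covariance belongs to the reference class, a pattern from a broad class can be far (in dM) from a tight class while the reverse comparison yields a smaller distance, which is the asymmetry exploited in Figure 5.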
We first quantified dM between representations of pure odors based on activity patterns across 80 E neurons drawn from the corresponding (pseudo-) assemblies. dM was asymmetrically increased in Tuned E+I networks as compared to rand networks. Large increases were observed for distances between patterns related to learned odors and reference classes representing novel odors (Figure 5A, B). In the other direction, increases in dM were smaller. Moreover, distances between patterns related to novel odors were almost unchanged (Figure 5B). Further analyses showed that increases in dM in Tuned E+I networks involved both increases in the Euclidean distance between class centers and non-isotropic scaling of intra-class variability (Supplementary Figure 5). The geometric transformation of odor representations by E/I assemblies therefore facilitated pattern classification and particularly enhanced the discriminability of patterns representing learned odors.
To further analyze pattern classification, we performed multi-class quadratic discriminant analysis (QDA), an extension of linear discriminant analysis for classes with unequal variance. Using QDA, we determined the probabilities that an activity pattern evoked by a mixture is classified as being a member of each of four classes representing the pure odors, thus mimicking a 4-way forced choice odor classification task in the olfactory environment. The four classes were defined by the distributions of activity patterns evoked by the pure and closely related odors. We then examined the classification probability of patterns evoked by mixtures with respect to a given class as a function of the similarity between the mixture and the corresponding pure odor (“target”) in the OB. As expected, the classification probability increased with similarity. Furthermore, in Tuned E+I networks, the classification probability of mixtures similar to a pure odor was significantly higher when the pure odor was learned (Figure 5C). Hence, E/I assemblies enhanced the classification of inputs related to learned patterns.
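A minimal QDA of the type used here (one Gaussian per class with its own mean and covariance, equal priors, classification by largest posterior) can be implemented directly. This is a generic sketch, not the authors' analysis code; the class name, ridge regularization, and equal-prior assumption are our own choices.

```python
import numpy as np

class SimpleQDA:
    """Minimal quadratic discriminant analysis: fit one Gaussian per class,
    classify by the largest posterior probability (equal class priors)."""

    def fit(self, X, y, ridge=1e-6):
        self.classes_ = np.unique(y)
        self.params_ = []
        for c in self.classes_:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # Class-specific covariance; ridge keeps it invertible.
            C = np.cov(Xc, rowvar=False) + ridge * np.eye(X.shape[1])
            self.params_.append((mu, np.linalg.inv(C),
                                 np.linalg.slogdet(C)[1]))
        return self

    def posterior(self, X):
        """Posterior class probabilities for each row of X."""
        scores = []
        for mu, Cinv, logdet in self.params_:
            d = X - mu
            # Quadratic discriminant score (log-likelihood up to a constant)
            scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, Cinv, d)
                                  + logdet))
        S = np.array(scores).T                   # samples x classes
        S -= S.max(axis=1, keepdims=True)        # numerical stabilization
        P = np.exp(S)
        return P / P.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.classes_[np.argmax(self.posterior(X), axis=1)]
```

With four classes defined by patterns evoked by the pure odors and closely related mixtures, the posterior of a mixture pattern with respect to its target class plays the role of the 4-way classification probability described above.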
When neuronal subsets were randomly drawn not from assemblies but from the entire population, dM was generally lower (Figure 5B). These results indicate that assembly neurons convey higher-than-average information about learned odors. Together, these observations imply that pDpsim did not function as a categorical classifier but nonetheless supported the classification of learned odors, particularly when the readout focused on assembly neurons. Conceivably, classification may be further enhanced by optimizing the readout strategy, for example, by a learning-based approach. However, modeling biologically realistic readout mechanisms requires further experimental insights into the underlying circuitry.
Stability of networks with E/I assemblies against addition of memories
When networks successively learn multiple inputs over time, the divergence of firing rates and the risk of network instability are expected to increase as assemblies accumulate. We surmised that Tuned networks should be more resistant against these risks than Scaled networks because activity is controlled more precisely, particularly when assemblies overlap. To test this hypothesis, we examined responses of Tuned E+I and Scaled I networks to an additional odor subspace where four of the six pairwise correlations between the pure odors were clearly positive (range, 0.27 – 0.44; Figure 6A). We then compared networks with 15 randomly created assemblies to networks with two additional assemblies aligned to two of the correlated pure odors. In Scaled I networks, creating two additional memories resulted in a substantial increase in firing rates, particularly in response to the learned and related odors. In Tuned E+I networks, in contrast, firing rates remained almost unchanged despite the increased memory load, and representations of learned odors were well separated in PC space, despite the overlap between assemblies (Figure 6B, C). These observations are consistent with the assumption that precise balance in E/I assemblies protects networks against instabilities during continual learning, even when memories overlap. Furthermore, in this regime of higher pattern similarity, dM was again increased upon learning, particularly between learned odors and reference classes representing other odors (not shown). E/I assemblies therefore consistently increased dM in a directional manner under different conditions.
Discussion
A precisely balanced memory network constrained by pDp
Autoassociative memory networks map inputs onto output patterns representing learned information. Classical models proposed this mapping to be accomplished by discrete attractor states that are defined by assemblies of E neurons and stabilized by global homeostatic inhibition. However, as seen in Scaled I networks, global inhibition is insufficient to maintain a stable, biologically plausible firing rate distribution. This problem can be overcome by including I neurons in assemblies, which leads to precise synaptic balance. To explore network behavior in this regime under biologically relevant conditions we created a spiking network model constrained by experimental data from pDp. The resulting Tuned networks reproduced additional experimental observations that were not used as constraints including irregular firing patterns, lower output than input correlations, and the absence of persistent activity. Hence, pDpsim recapitulated characteristic properties of a biological memory network with precise synaptic balance.
Neuronal dynamics and representations in precisely balanced memory networks
Simulated networks with global inhibition showed attractor dynamics and pattern completion, consistent with classical attractor memory. However, the distribution of firing rates broadened as connection density within assemblies increased, resulting in unrealistically high (low) rates inside (outside) assemblies and, consequently, in a loss of synaptic balance. Hence, global inhibition was insufficient to stabilize population activity against basic consequences of structured connectivity. In networks with E/I assemblies, in contrast, firing rates remained within a realistic range and the inhibition-stabilized balanced state was maintained. Such Tuned networks showed no discrete attractor states but transformed the geometry of the coding space by confining activity to continuous manifolds near representations of learned inputs.
Geometrical transformations in Tuned networks may be considered as intermediate between two extremes: (1) geometry-preserving transformations as, for example, performed by many random networks (Babadi and Sompolinsky, 2014; Marr, 1969; Schaffer et al., 2018; Wiechert et al., 2010), and (2) discrete maps as, for example, generated by discrete attractor networks (Freeman and Skarda, 1985; Hopfield, 1982; Khona and Fiete, 2022) (Figure 7). We found that transformations became more discrete map-like when amplification within assemblies was increased and precision of synaptic balance was reduced. Likewise, decreasing amplification in assemblies of Scaled networks changed transformations towards the intermediate behavior, albeit with broader firing rate distributions than in Tuned networks (not shown). Hence, precise synaptic balance may be expected to generally favor intermediate over discrete transformations because this regime tends to linearize input-output functions (Baker et al., 2020; Denève and Machens, 2016).
E/I assemblies increased variability of activity patterns along preferred directions of state space and reduced their dimensionality in comparison to rand networks. Nonetheless, dimensionality remained high compared to Scaled networks with discrete attractor states. These observations indicate that geometric transformations in Tuned networks involved (1) a modest amplification of activity in one or a few directions aligned to the assembly, and (2) a modest reduction of activity in other directions. E/I assemblies therefore created a local curvature of coding space that “focused” activity in a subset of dimensions and, thus, stored information in the geometry of coding space.
As E/I assemblies were small relative to the total size of the E neuron population, stored information may be represented predominantly by small neuronal subsets. Consistent with this hypothesis, dM was increased and the classification of learned inputs by QDA was enhanced when activity was read out from subsets of assembly neurons as compared to random neuronal subsets. Moreover, signatures of pattern completion were found in the activity of assemblies but not in global pattern correlations. The retrieval of information from networks with small E/I assemblies therefore depends on the selection of informative neurons for readout. Unlike in networks with global attractor states, signatures of memory storage may thus be difficult to detect experimentally without specific knowledge of assembly memberships.
Computational functions of networks with E/I assemblies
In theory, precisely balanced networks with E/I assemblies may support pattern classification despite high variability and the absence of discrete attractor states (Denève and Machens, 2016). Indeed, we found in Tuned E+I networks that input patterns were classified successfully by a generic classifier (QDA) based on selected neuronal subsets, particularly relative to learned inputs. Analyses based on the Mahalanobis distance dM indicate that classification of learned inputs was enhanced by two effects: (1) local manifolds representing learned odors became more distant from representations of other odors due to a modest increase in firing rates within E/I assemblies, and (2) the concomitant increase in variability was not isotropic, remaining sufficiently low in directions that separated novel from learned patterns. Hence, information contained in the geometry of coding space can be retrieved by readout mechanisms aligned to activity manifolds. Efficient readout mechanisms may thus integrate activity primarily from assembly neurons, as mimicked in our QDA-based pattern classification. This notion is consistent with the finding that the integrated activity of E/I assemblies can be highly informative despite variable firing of individual neurons (Boerlin et al., 2013; Denève et al., 2017; Denève and Machens, 2016). It will thus be interesting to explore how the readout of information from local manifolds could be further optimized.
Representations by local manifolds and discrete attractor states exhibit additional differences affecting neuronal computation: (1) Tuned networks do not mediate short-term memory functions based on persistent activity. Such networks may therefore support fast memoryless classification to interpret dynamical sensory inputs on a moment-to-moment basis. (2) The representation of learned inputs by small neuronal subsets, rather than global activity states, raises the possibility that multiple inputs can be classified simultaneously. (3) The stabilization of firing rate distributions by precise synaptic balance may prevent catastrophic network failures when memories are accumulated during continual learning. (4) The continuous nature of local manifolds indicates that information is not stored in the form of distinct items. Moreover, the coding space provides, in principle, a distance metric reflecting both relatedness in the feature space of sensory inputs and an individual’s experience. Internal representations generated by precisely balanced memory networks may therefore provide a basis for higher-order learning and cognitive computations.
Balanced state networks with E/I assemblies as models for olfactory cortex
Piriform cortex and Dp have been proposed to function as attractor-based memory networks for odors. Consistent with this hypothesis, pattern completion and its modulation by learning have been observed in piriform cortex of rodents (Barnes et al., 2008; Chapuis and Wilson, 2011). However, odor-evoked firing patterns in piriform cortex and Dp are typically irregular, variable, transient and less reproducible than in the OB even after learning (Jacobson et al., 2018; Pashkovski et al., 2020; Schoonover et al., 2021; Yaksi et al., 2009), indicating that activity does not converge onto stable attractor states. Balanced networks with E/I assemblies, in contrast, are generally consistent with these experimental observations. Alternative models for pattern classification in the balanced state include networks endowed with adaptation, which respond to stimuli with an initial transient followed by a tonic non-balanced activity state (Wu and Zenke, 2021), or mechanisms related to “balanced amplification”, which typically generate pronounced activity transients (Ahmadian and Miller, 2021; Murphy and Miller, 2009). However, it has not been explored whether these models can be adapted to reproduce characteristic features of Dp or piriform cortex.
Our results generate predictions to test the hypothesis that E/I assemblies establish local manifolds in Dp: (1) odor-evoked population activity should be constrained onto manifolds, particularly in response to learned odors. (2) Learning should increase the magnitude and asymmetry of dM between odor representations. (3) Activity evoked by learned and related odors should exhibit lower dimensionality and more directional variability than activity evoked by novel odors. (4) Careful manipulations of inhibition may unmask assemblies by increasing amplification. These predictions may be addressed experimentally by large-scale measurements of odor-evoked activity after learning. The direct detection of E/I assemblies will ultimately require dense reconstructions of large neuronal networks at synaptic resolution. Given the small size of Dp, this challenge may be addressed in zebrafish by connectomics approaches based on volume electron microscopy (Denk et al., 2012; Friedrich and Wanner, 2021; Kornfeld and Denk, 2018).
The hypothesis that memory networks contain E/I assemblies and operate in a state of precise synaptic balance can be derived from the basic assumptions that synaptic plasticity establishes assemblies and that firing rate distributions remain stable as network structure is modified by experience (Barron et al., 2017; Hennequin et al., 2017). Hence, Tuned networks based on Dp may also reproduce features of other recurrently connected brain areas such as hippocampus and neocortex, which also operate in a balanced state (Renart et al., 2010; Sadeh and Clopath, 2020b; Shadlen and Newsome, 1994). Future experiments may therefore explore whether local manifolds also represent learned information in cortical brain areas.
Methods
pDp spiking network model
pDpsim consisted of 4000 excitatory (E) and 1000 inhibitory (I) neurons, which were modeled as adaptive leaky integrate-and-fire units with conductance-based synapses of strength w. A spike emitted by presynaptic neuron y of population Y triggered an increase in the conductance gYx in the postsynaptic neuron x:
Neuron x received synaptic inputs from the olfactory bulb OB as well as from the different local neuronal populations P. Its membrane potential Vx evolved according to:
When the membrane potential reached a threshold Vth, the neuron emitted a spike and its membrane potential was reset to Erest and clamped to this value during a refractory period τref. Excitatory neurons were endowed with adaptation with the following dynamics (Brette and Gerstner, 2005):
In inhibitory neurons, z was set to 0 for simplicity.
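For reference, a standard conductance-based adaptive LIF formulation in the spirit of Brette and Gerstner (2005) can be sketched as follows; symbol names and the exact parameterization here are assumptions consistent with the surrounding definitions, not the authors' published equations:

```latex
% Synapse: a spike of presynaptic neuron y (population Y) increments the
% conductance, which then decays with the synaptic time constant
g_{Yx} \leftarrow g_{Yx} + w_{YX}, \qquad
\tau_{\mathrm{syn},Y}\,\frac{\mathrm{d}g_{Yx}}{\mathrm{d}t} = -g_{Yx}

% Membrane potential: leak plus conductance-based synaptic currents from the
% OB and the local populations P, minus the adaptation variable z
\tau_m \frac{\mathrm{d}V_x}{\mathrm{d}t} =
  (E_{\mathrm{rest}} - V_x)
  + \sum_{Y \in \{\mathrm{OB},\, P\}} \frac{g_{Yx}}{g_{\mathrm{rest}}}\,(E_Y - V_x)
  - \frac{z_x}{g_{\mathrm{rest}}}

% Adaptation (E neurons; z = 0 in I neurons): decays between spikes and is
% incremented by b whenever neuron x spikes
\tau_z \frac{\mathrm{d}z_x}{\mathrm{d}t} = -z_x, \qquad
z_x \leftarrow z_x + b \ \text{when } V_x \geq V_{\mathrm{th}}
```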
The neuronal parameters of the model are summarized in Table 1. The values of the membrane time constant, resting conductance, and inhibitory and excitatory reversal potentials are in the range of experimentally measured values (Rupprecht and Friedrich, 2018; Blumhagen et al., 2011). The remaining parameters were fitted to fulfill two conditions (derived from unpublished experimental observations): (1) the neuron should not generate action potentials in response to a step current injection of duration 500 ms and amplitude 15 nA, and (2) the mean firing rate should be on the order of tens of Hz when the amplitude of the step current is 100 nA. Furthermore, the firing rates of inhibitory neurons should be higher than those of excitatory neurons, as observed experimentally (unpublished data).
The time constants of inhibitory and excitatory synapses (τsyn,I and τsyn,E) were 10 ms and 30 ms, respectively. To verify that the behavior of pDpsim was robust, we simulated 20 networks with different connection probabilities pYX and synaptic strengths wYX (Table 2). Connections between neurons were drawn from a Bernoulli distribution with a predefined pYX ≤ 0.05 (Zou, 2014), under the constraint that each neuron received the same number of input connections. Care was also taken to ensure that the variation in the number of output connections was low across neurons.
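The sampling step can be sketched as follows (illustrative Python with the population size from the text; the paper's procedure additionally constrains in- and out-degrees, which plain Bernoulli sampling only matches on average):

```python
import numpy as np

rng = np.random.default_rng(0)

N_E = 4000   # excitatory population size from the text
p_EE = 0.05  # example connection probability p_YX <= 0.05

# Bernoulli connectivity: conn[i, j] = True means neuron j projects to neuron i
conn = rng.random((N_E, N_E)) < p_EE
np.fill_diagonal(conn, False)  # no self-connections

# With N_E * p_EE = 200 expected inputs per neuron, in-degrees cluster tightly
in_degree = conn.sum(axis=1)
print(in_degree.mean(), in_degree.std())
```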
The connection strengths wYX were then fitted to reproduce experimental observations in pDp (five observables in total, see below and Figure 1). For this purpose, a lower and upper bound for wYX were set such that the amplitude of single EPSPs and IPSPs was in the biologically plausible range of 0.2 to 2 mV. wOE was then further constrained to maintain the odor-evoked, time-averaged gOE in the range of experimental values (Rupprecht and Friedrich, 2018). Once wOE was fixed, the lower bound of wEE was increased to obtain a network rate >10 Hz in the absence of inhibition. A grid search was then used to refine the remaining wYX.
Olfactory bulb input
Each pDp neuron received external input from the OB, which consisted of 1500 mitral cells spontaneously active at 6 Hz. Odors were simulated by increasing the firing rate of 150 randomly selected mitral cells. Firing rates of these “activated” mitral cells were drawn from a discrete uniform distribution ranging from 8 to 32 Hz, and their onset latencies were drawn from a discrete uniform distribution ranging from 0 to 200 ms. An additional 75 mitral cells were inhibited. Firing rates and latencies of these neurons were drawn from discrete uniform distributions ranging from 0 to 5 Hz and from 0 to 200 ms, respectively. After odor onset, firing rates decayed with a time constant of 1, 2 or 4 s (equally distributed). Spike trains were generated as Poisson processes. Because all odors were constructed with near-identical firing rate statistics, the total OB input varied little across odors. In Figures 1–3, the odor set consisted of 10 novel and/or 10 learned odors, all of which were uncorrelated (pattern correlations near zero). Odors were presented for 2 seconds and separated by 1 second of baseline activity.
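The OB drive can be sketched as inhomogeneous Poisson spike trains (illustrative Python; a 1 ms bin is used here instead of the paper's 0.1 ms integration step, and the 75 inhibited cells are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)

n_mitral, duration, dt = 1500, 2.0, 0.001   # 1 ms bins (coarser than the model)
t = np.arange(int(duration / dt)) * dt

# Baseline: all mitral cells spontaneously active at 6 Hz
rate = np.full((t.size, n_mitral), 6.0)

# Odor: 150 activated cells with rates of 8-32 Hz, onset latencies of 0-200 ms,
# and rate decay with tau of 1, 2 or 4 s after onset
activated = rng.choice(n_mitral, 150, replace=False)
for cell in activated:
    r = rng.integers(8, 33)              # discrete uniform, 8..32 Hz
    lat = rng.integers(0, 201) / 1000.0  # discrete uniform, 0..200 ms
    tau = rng.choice([1.0, 2.0, 4.0])
    on = t >= lat                        # baseline rate persists before onset
    rate[on, cell] = r * np.exp(-(t[on] - lat) / tau)

# Poisson spikes: each bin fires with probability rate * dt
spikes = rng.random(rate.shape) < rate * dt
print(spikes.shape, spikes.sum())
```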
Olfactory subspaces comprised 121 OB activity patterns. Each pattern was represented by a pixel in an 11 x 11 square. The pixel at each vertex corresponded to one pure odor with 150 excited and 75 inhibited mitral cells as described above, and the remaining pixels corresponded to mixtures. The fraction of activated and inhibited mitral cells from a given pure odor decreased with the distance from the corresponding vertex as shown in Table 3. The total number of activated and inhibited mitral cells at each location in the virtual square remained within the range of 150 ± 10% and 75 ± 10%, respectively. To generate activity patterns representing mixtures, activated mitral cells were sorted by onset latency for each pure odor. At each location within the square and for each trial, mitral cells that remained activated in the mixture response were randomly selected from the pool of C mitral cells with the shortest latencies from each odor. C decreased with the distance from the vertices representing pure odors as shown in Table 4. The firing rate of each selected mitral cell varied ±1 Hz around its rate in response to the pure odor. The identity, but not the firing rate, of the activated mitral cells therefore changed gradually within the odor subspace. This procedure reflects the experimental observation that responses of zebrafish mitral cells to binary odor mixtures often resemble responses to one of the pure components (Tabor et al., 2004). We generated 8 different trajectories within the virtual square, each visiting all possible virtual odor locations for 1 s. Each trajectory (trial) thus consisted of 121 s of odor presentation, and trajectories were separated by 2 seconds of baseline activity. The dataset for analysis therefore comprised 968 activity patterns (8 trials × 121 odors).
Assemblies
Unless noted otherwise, Scaled and Tuned networks contained 15 assemblies (“memories”). An assembly representing a given odor contained the 100 E neurons that received the highest density of inputs from the corresponding active mitral cells. Hence, the size of assemblies was substantially smaller than the total population, consistent with the observation that only a minority of neurons in Dp or piriform cortex are activated during odor stimulation (Miura et al., 2012; Stettler and Axel, 2009; Yaksi et al., 2009) and upregulate cfos during olfactory learning (Meissner-Bernard et al., 2018). We then rewired assembly neurons: additional connections were created between assembly neurons, and a matching number of existing connections between non-assembly and assembly neurons were eliminated. The number of input connections per neuron therefore remained unchanged. A new connection between two assembly neurons doubled the synaptic strength wEE if it added to an existing connection. As a result of this rewiring, the connection probability within the assembly increased by a factor α relative to the baseline connection probability.
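A minimal sketch of this degree-preserving rewiring (illustrative Python; the value α = 5, the boolean treatment of connections, and the stand-in assembly membership are assumptions — the paper additionally doubles wEE when a new connection lands on an existing one):

```python
import numpy as np

rng = np.random.default_rng(2)

N_E, p, alpha = 4000, 0.05, 5      # alpha: assumed within-assembly enhancement
assembly = np.arange(100)           # stand-in for the 100 most OB-driven E neurons

conn = rng.random((N_E, N_E)) < p   # conn[i, j]: neuron j projects to neuron i
np.fill_diagonal(conn, False)
in_degree_before = conn.sum(axis=1).copy()

target = int(alpha * p * len(assembly))   # desired within-assembly in-degree
for post in assembly:
    current = conn[post, assembly].sum()
    n_new = max(target - current, 0)
    # add connections from assembly neurons that do not yet project to post
    candidates = assembly[(~conn[post, assembly]) & (assembly != post)]
    conn[post, rng.choice(candidates, n_new, replace=False)] = True
    # remove an equal number of non-assembly inputs: in-degree stays unchanged
    outside = np.setdiff1d(np.flatnonzero(conn[post]), assembly)
    conn[post, rng.choice(outside, n_new, replace=False)] = False
```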
In Scaled networks, wIE was increased globally by a constant factor χ. In Tuned networks, connections were modified between the 100 E neurons of an assembly and the 25 I neurons that were most densely connected to these E neurons, using the same procedure as for E-to-E connections. In Tuned I networks, only I-to-E connections were rewired, while in Tuned E+I networks, both I-to-E and E-to-I connections were rewired (Table 5). Whenever possible, networks with less than 15% change in population firing rates compared to the corresponding rand network were selected. In Figure 6, two additional assemblies were created in Scaled or Tuned networks without adjusting any parameters.
Observables
All variables were measured in the E population and time-averaged over the first 1.5 seconds of odor presentations, unless otherwise stated.
(1) The firing rate is the number of spikes in a time interval T divided by T.
(2) gOE is the mean conductance change in E neurons triggered by spikes from the OB.
(3) gsyn is the total synaptic conductance change due to odor stimulation, calculated as the sum of gOE, gEE and gIE. gEE and gIE are the conductance changes contributed by E synapses and I synapses, respectively.
(4) The percentage of recurrent input quantifies the average contribution of the recurrent excitatory input to the total excitatory input in E neurons. It was defined for each excitatory neuron as the ratio of the time-averaged gEE to the time-averaged total excitatory conductance (gEE + gOE), multiplied by 100. In (2), (3) and (4), the time-averaged E and I synaptic conductances during the 500 ms before odor presentation were subtracted from the E and I conductances measured during odor presentation for each neuron.
(5) In addition, we required the Pearson correlation between activity patterns evoked by uncorrelated inputs to be close to zero. The Pearson correlation between pairs of activity vectors composed of the firing rates of E neurons was averaged over all possible odor pairs.
Co-tuning
Co-tuning was quantified in two different ways: (1) For each neuron, we calculated the Pearson correlation between the time-averaged E and I conductances in response to 10 learned odors. (2) As described in Rupprecht and Friedrich, 2018, we projected observed pairs of E and I conductances onto a “balanced” and a “counter-balanced” axis. The balanced axis was obtained by fitting a linear model without intercept to the E and I conductances of 4000 × 10 neuron–odor pairs (all E neurons, all learned odors). The resulting fit corresponded to a constant I/E ratio (~1.2) that maintains the membrane potential close to spike threshold. The counter-balanced axis was orthogonal to the balanced axis. For each neuron, synaptic conductances were projected onto these axes and their dispersions were quantified by the corresponding standard deviations.
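The balanced/counter-balanced projection can be illustrated on synthetic conductances (Python sketch; the gamma-distributed E conductances and the noise level are assumptions, chosen so that I roughly tracks E with a ratio of ~1.2):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic E and I conductances, 4000 neurons x 10 learned odors (assumed data)
gE = rng.gamma(2.0, 1.0, size=(4000, 10))
gI = 1.2 * gE + 0.1 * rng.standard_normal((4000, 10))

# Balanced axis: least-squares slope through the origin, I = ratio * E
x, y = gE.ravel(), gI.ravel()
ratio = (x @ y) / (x @ x)
balanced = np.array([1.0, ratio]) / np.hypot(1.0, ratio)
counter = np.array([-balanced[1], balanced[0]])   # orthogonal axis

# Project each neuron's (E, I) pairs onto both axes and compare dispersions:
# co-tuning implies much larger dispersion along the balanced axis
pts = np.stack([gE, gI], axis=-1)                 # shape (4000, 10, 2)
disp_bal = (pts @ balanced).std(axis=1).mean()
disp_cb = (pts @ counter).std(axis=1).mean()
print(ratio, disp_bal, disp_cb)
```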
Characterization of population activity in state space
Principal Component Analysis (PCA) was applied to the OB activity patterns from the square subspace or to the corresponding activity patterns across E neurons in pDpsim (8 x 121 = 968 patterns, each averaged over 1s).
The participation ratio PR provides an estimate of the maximal number of principal components (PCs) required to recapitulate the observed neuronal activity. It is defined as PR = (Σi λi)² / (Σi λi²), where λi are the eigenvalues obtained from PCA (variance along each PC).
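The participation ratio can be computed directly from the PCA eigenvalues; a small self-check (Python sketch) shows it approaching the ambient dimensionality for isotropic data and 1 when a single direction dominates:

```python
import numpy as np

def participation_ratio(X):
    """PR = (sum lambda_i)^2 / sum(lambda_i^2), lambda_i = PCA eigenvalues of X."""
    lam = np.linalg.eigvalsh(np.cov(X - X.mean(axis=0), rowvar=False))
    lam = np.clip(lam, 0.0, None)   # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(4)
iso = rng.standard_normal((5000, 10))          # isotropic: PR close to 10
one_dim = (np.outer(rng.standard_normal(5000), np.ones(10))
           + 0.01 * rng.standard_normal((5000, 10)))  # one dominant direction
print(participation_ratio(iso), participation_ratio(one_dim))
```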
For angular analyses we projected activity patterns onto the first 400 PCs, which was the number of PCs required to explain at least 75% of the variance in all networks. We measured the angle θ between the edges connecting the trial-averaged pattern p evoked by a pure odor to two patterns sy and sz. sy and sz were trial-averaged patterns evoked by 2 out of the 7 odors that were most similar to the pure odor (21 angles in total). θ was defined as θ = arccos[ (sy − p)·(sz − p) / (‖sy − p‖ ‖sz − p‖) ]. This metric is sensitive to non-uniform expansion and other non-linear transformations.
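A minimal helper for this angle (Python sketch; the clip guards against floating-point values just outside [−1, 1]):

```python
import numpy as np

def angle_deg(p, sy, sz):
    """Angle theta at vertex p between the edges p->sy and p->sz, in degrees."""
    u, v = sy - p, sz - p
    cos_t = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

p = np.zeros(3)
print(angle_deg(p, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))  # 90.0
```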
The Mahalanobis distance dM is defined as dM(ν, Q) = √((ν − μ)ᵀ S⁻¹ (ν − μ)), where ν is a vector representing an activity pattern, and Q a reference class consisting of a distribution of activity patterns with mean μ and covariance matrix S.
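dM can be estimated from samples of the reference class Q (Python sketch; np.linalg.solve is used instead of an explicit matrix inverse for numerical stability):

```python
import numpy as np

def mahalanobis(v, ref):
    """d_M of pattern v from the reference class given by sample patterns ref."""
    mu = ref.mean(axis=0)
    S = np.cov(ref, rowvar=False)
    diff = v - mu
    # solve(S, diff) computes S^-1 diff without forming the inverse of S
    return np.sqrt(diff @ np.linalg.solve(S, diff))

rng = np.random.default_rng(5)
ref = rng.standard_normal((1000, 5))           # samples of a reference class Q
print(mahalanobis(ref.mean(axis=0), ref))       # 0.0 at the class mean
print(mahalanobis(ref.mean(axis=0) + 10, ref))  # large for a distant pattern
```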
Classification
To assess the assignment of odor-evoked patterns to representations of pure odors in the odor subspace, we used quadratic discriminant analysis (QDA), a non-linear classifier that takes into account the separation and covariance patterns of different classes (Ghojogh and Crowley, 2019). The training set consisted of the population responses to multiple trials of each of the 4 pure odors, averaged over the first and second half of the 1-s odor presentation. To increase the amount of training data in each of the 4 odor classes, the training set also included population responses to odors that were closely related to the pure odor (Pearson correlation between OB response patterns >0.6). Analyses were performed using subsets of 80 neurons (similar results were obtained using 50 – 100 neurons). These neurons were randomly selected (50 iterations) either from the (pseudo-) assemblies representing the pure odors (400 E neurons; pseudo-assemblies in rand networks and for novel odors) or from the entire population of E neurons. We verified that the data came from a Gaussian mixture model. The trained classifier was then applied to activity patterns evoked by the remaining odors of the subspace (correlation with pure odors < 0.6). Each pattern ν was then assigned to the class k that maximized the discriminant function δk(ν) = −½ ln|Sk| − ½ (ν − μk)ᵀ Sk⁻¹ (ν − μk) + ln πk, where μk and Sk are the mean and covariance matrix of odor class k (subsampling of neurons ensured invertibility of Sk) and πk is the prior probability of class k. This discriminant function is closely related to dM.
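The discriminant can be implemented directly from per-class means, covariances, and priors. The sketch below (illustrative Python; the synthetic 4-class data stand in for odor-evoked activity patterns and are an assumption, not the paper's data) mirrors a generic QDA:

```python
import numpy as np

def qda_fit(X, y):
    """Per-class mean, covariance and log-prior for the QDA discriminant."""
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        params[k] = (Xk.mean(axis=0), np.cov(Xk, rowvar=False),
                     np.log(len(Xk) / len(X)))
    return params

def qda_predict(X, params):
    """Assign each pattern to the class k maximizing delta_k (see text)."""
    keys, scores = list(params), []
    for mu, S, log_pi in params.values():
        diff = X - mu
        _, logdet = np.linalg.slogdet(S)
        maha = np.einsum('ij,ij->i', diff @ np.linalg.inv(S), diff)
        scores.append(-0.5 * logdet - 0.5 * maha + log_pi)
    return np.array(keys)[np.argmax(scores, axis=0)]

# Synthetic 4-class data standing in for 20-neuron activity patterns (assumed)
rng = np.random.default_rng(6)
n_per, n_dim = 200, 20
X = np.vstack([rng.normal(3 * k, 1 + 0.3 * k, (n_per, n_dim)) for k in range(4)])
y = np.repeat(np.arange(4), n_per)

params = qda_fit(X, y)
accuracy = (qda_predict(X, params) == y).mean()
```

Subsampling neurons (here, keeping n_dim well below the number of training patterns) is what keeps each class covariance Sk invertible, as noted in the text.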
Simulations
Simulations were performed using Matlab and Python. Differential equations were solved using the forward Euler method and an integration time step of dt = 0.1 ms.
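The integration scheme amounts to the standard explicit update V ← V + dt·f(V). A toy example for a single leaky membrane (illustrative parameters, not the paper's; see Table 1 for the model values) converges to the expected steady state E_rest + I·τm:

```python
dt = 0.1        # ms, integration step from the Methods
tau_m = 10.0    # ms (illustrative value)
E_rest = -70.0  # mV
I = 1.0         # constant drive in mV/ms (assumed for illustration)

V = E_rest
for _ in range(int(200 / dt)):            # simulate 200 ms (= 20 tau_m)
    V += dt * ((E_rest - V) / tau_m + I)  # forward Euler update
print(V)  # approaches E_rest + I * tau_m = -60 mV
```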
Acknowledgements
We thank the Friedrich lab for insightful discussions. This work was supported by the Novartis Research Foundation, by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement no. 742576), and by the Swiss National Science Foundation (grants no. 31003A_172925/1, PCEFP3_202981).
References
- What is the dynamical regime of cerebral cortex?Neuron 109:3373–3391https://doi.org/10.1016/j.neuron.2021.07.031
- Estimating the dimensionality of the manifold underlying multi-electrode neural recordingsPlos Comput Biol 17https://doi.org/10.1371/journal.pcbi.1008591
- Sparseness and Expansion in Sensory RepresentationsNeuron 83:1213–1226https://doi.org/10.1016/j.neuron.2014.07.035
- Nonlinear stimulus representations in neural circuits with approximate excitatory-inhibitory balancePlos Comput Biol 16https://doi.org/10.1371/journal.pcbi.1008192
- Olfactory perceptual stability and discriminationNat Neurosci 11:1378–1380https://doi.org/10.1038/nn.2217
- Inhibitory engrams in perception and memoryProc National Acad Sci 114:6666–6674https://doi.org/10.1073/pnas.1701812114
- Precise excitation-inhibition balance controls gain and timing in the hippocampuseLife :1–29https://doi.org/10.7554/elife.43415.001
- Odor coding in piriform cortex: mechanistic insights into distributed codingCurr Opin Neurobiol :1–7https://doi.org/10.1016/j.conb.2020.03.001
- Neuronal filtering of multiplexed odour representationsNature 479:493–498https://doi.org/10.1038/nature10633
- Predictive Coding of Dynamical Variables in Balanced Spiking NetworksPLoS Comput Biol 9https://doi.org/10.1371/journal.pcbi.1003258
- Adaptive Exponential Integrate-and-Fire Model as an Effective Description of Neuronal ActivityJ Neurophysiol :1–6https://doi.org/10.1152/jn.00686.2005
- Bidirectional plasticity of cortical pattern recognition and behavioral sensory acuityNat Neurosci 15:155–161https://doi.org/10.1038/nn.2966
- Memory replay in balanced recurrent networks:1–36https://doi.org/10.1371/journal.pcbi.1005359
- Neural population geometry: An approach for understanding biological and artificial neural networksCurr Opin Neurobiol 70:137–144https://doi.org/10.1016/j.conb.2021.10.010
- Strong and localized recurrence controls dimensionality of neural activity across brain areasbioRxiv https://doi.org/10.1101/2020.11.02.365072
- The Brain as an Efficient and Robust Adaptive LearnerNeuron 94:969–977https://doi.org/10.1016/j.neuron.2017.05.016
- Efficient codes and balanced networksNat Neurosci 19:375–382https://doi.org/10.1038/nn.4243
- Structural neurobiology: missing link to a mechanistic understanding of neural computationNat Rev Neurosci 13:351–358https://doi.org/10.1038/nrn3169
- Analog Memories in a Balanced Rate-Based Network of E-I Neurons:1–9
- Associative conditioning remaps odor representations and modifies inhibition in a higher olfactory brain areaNat Neurosci :1–12https://doi.org/10.1038/s41593-019-0495-z
- Recurrent Circuitry Dynamically Shapes the Activation of Piriform CortexNeuron 72:49–56https://doi.org/10.1016/j.neuron.2011.08.020
- Spatial EEG patterns, non-linear dynamics and perception: the neo-sherringtonian viewBrain Res Rev 10:147–175https://doi.org/10.1016/0165-0173(85)90022-0
- Dynamics of Olfactory Bulb Input and Output Activity During Odor Stimulation in ZebrafishJournal of Neurophysiology 91:2658–2669https://doi.org/10.1152/jn.01143.2003
- Dynamic Optimization of Odor Representations by Slow Temporal Patterning of Mitral Cell ActivityScience 291:1–7
- Dense Circuit Reconstruction to Understand Neuronal Computation: Focus on ZebrafishAnnual Review of Neuroscience 44:1–19https://doi.org/10.1146/annurev-neuro-110220-013050
- A synaptic memory trace for cortical receptive field plasticityNature 450:425–429https://doi.org/10.1038/nature06289
- Neural Manifolds for the Control of MovementNeuron 94:978–984https://doi.org/10.1016/j.neuron.2017.05.025
- Linear and Quadratic Discriminant Analysis: Tutorialhttps://doi.org/10.48550/arxiv.1906.02590
- Parallel-distributed Processing in Olfactory Cortex: New Insights from Morphological and Physiological Analysis of Neuronal CircuitryChem Senses 26:551–576https://doi.org/10.1093/chemse/26.5.551
- The organization of behavior; a neuropsychological theoryWiley
- Inhibitory Plasticity: Balance, Control, and CodependenceAnnu Rev Neurosci 40:557–579https://doi.org/10.1146/annurev-neuro-072116-031005
- Optimal Control of Transient Dynamics in Balanced Networks Supports Generation of Complex MovementsNeuron 82:1394–1406https://doi.org/10.1016/j.neuron.2014.04.045
- Neural networks and physical systems with emergent collective computational abilitiesProc Natl Acad Sci :2554–2558
- Population Coding in an Innately Relevant Olfactory AreaNeuron 93:1180–1197https://doi.org/10.1016/j.neuron.2017.02.010
- Experience-Dependent Plasticity of Odor Representations in the Telencephalon of ZebrafishCurr Biol 28:1–14https://doi.org/10.1016/j.cub.2017.11.007
- Attractor and integrator networks in the brainNat Rev Neurosci 23:744–766https://doi.org/10.1038/s41583-022-00642-0
- Self-Organization and Associative Memory
- Progress and remaining challenges in high-throughput volume electron microscopyCurr Opin Neurobiol 50:261–267https://doi.org/10.1016/j.conb.2018.04.030
- Tuned inhibitory firing rate and connection weights as emergent network propertiesbioRxiv https://doi.org/10.1101/2022.04.12.488114
- A unifying perspective on neural manifolds and circuits for cognitionNat Rev Neurosci 24:363–377https://doi.org/10.1038/s41583-023-00693-x
- Formation and maintenance of neuronal assemblies through synaptic plasticityNature Communications 5:1–12https://doi.org/10.1038/ncomms6319
- Slow dynamics and high variability in balanced cortical networks with clustered connectionsNat Neurosci 15:1498–1505https://doi.org/10.1038/nn.3220
- Learning excitatory-inhibitory neuronal assemblies in recurrent networksElife 10https://doi.org/10.7554/elife.59715
- A theory of cerebellar cortexJ Physiol 202:437–470
- Encoding of Odor Fear Memories in the Mouse Olfactory CortexCurrent Biology 29:1–19https://doi.org/10.1016/j.cub.2018.12.003
- Formation and computational implications of assemblies in neural circuitsJ Physiology https://doi.org/10.1113/jp282750
- Odor Representations in Olfactory Cortex: Distributed Rate Coding and Decorrelated Population ActivityNeuron 74:1087–1098https://doi.org/10.1016/j.neuron.2012.04.021
- The dorsal pallium in zebrafish, Danio rerio (Cyprinidae, Teleostei). Brain Res 1381:95–105https://doi.org/10.1016/j.brainres.2010.12.089
- Balanced Amplification: A New Mechanism of Selective Amplification of Neural Activity PatternsNeuron 61:635–648https://doi.org/10.1016/j.neuron.2009.02.005
- Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activitiesNat Neurosci 11:535–537https://doi.org/10.1038/nn.2105
- Structure and flexibility in cortical representations of odour spaceNature :1–28https://doi.org/10.1038/s41586-020-2451-1
- Selectivity and Sparseness in Randomly Connected Balanced Networks:1–15
- Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivityPLoS Comput Biol 15https://doi.org/10.1371/journal.pcbi.1006446
- The Asynchronous State in Cortical CircuitsScience 327:587–590https://doi.org/10.1126/science.1179850
- Winnerless competition in clustered balanced networks: inhibitory assemblies do the trickBiol Cybern 112:81–98https://doi.org/10.1007/s00422-017-0737-7
- A Balanced Memory NetworkPLoS Comput Biol 3:e141–22https://doi.org/10.1371/journal.pcbi.0030141
- A database and deep learning toolbox for noise-optimized, generalized spike inference from calcium imagingNat Neurosci 24:1324–1337https://doi.org/10.1038/s41593-021-00895-5
- Precise Synaptic Balance in the Zebrafish Homolog of Olfactory CortexNeuron 100:1–29https://doi.org/10.1016/j.neuron.2018.09.013
- Engram Cells Retain Memory Under Retrograde AmnesiaScience :1007–1013
- Role of Secondary Sensory Cortices in Emotional Memory Storage and Retrieval in RatsScience 329:649–656https://doi.org/10.1126/science.1183165
- Excitatory-inhibitory balance modulates the formation and dynamics of neuronal assemblies in cortical networksSci Adv 7https://doi.org/10.1126/sciadv.abg8411
- Patterned perturbation of inhibition can reveal the dynamical structure of neural processingeLife 9:226–29https://doi.org/10.7554/elife.52757
- Inhibitory stabilization and cortical computationNat Rev Neurosci :1–17https://doi.org/10.1038/s41583-020-00390-z
- Odor Perception on the Two Sides of the Brain: Consistency Despite RandomnessNeuron 98:736–742https://doi.org/10.1016/j.neuron.2018.04.004
- Representational drift in primary olfactory cortexNature :1–34https://doi.org/10.1038/s41586-021-03628-7
- The generation of cortical novelty responses through inhibitory plasticityeLife 10https://doi.org/10.1101/2020.11.30.403840
- Noise, neural codes and cortical organizationCurr Opin Neurobiol 4:569–579https://doi.org/10.1016/0959-4388(94)90059-0
- Representations of Odor in the Piriform CortexNeuron 63:854–864https://doi.org/10.1016/j.neuron.2009.09.005
- Pharmacological Analysis of Ionotropic Glutamate Receptor Function in Neuronal Circuits of the Zebrafish Olfactory BulbPLoS ONE 3https://doi.org/10.1371/journal.pone.0001416
- Processing of Odor Mixtures in the Zebrafish Olfactory BulbThe Journal of Neuroscience 24:6611–6620https://doi.org/10.1523/jneurosci.1834-04.2004
- Chaos in Neuronal Networks with Balanced Excitatory and Inhibitory ActivityScience 274:1724–1726https://doi.org/10.1126/science.274.5293.1724
- Inhibitory Plasticity Balances Excitation and Inhibition in Sensory Pathways and Memory NetworksScience 334:1–7https://doi.org/10.1126/science.1212991
- Whitening of odor representations by the wiring diagram of the olfactory bulbNat Neurosci :1–24https://doi.org/10.1038/s41593-019-0576-z
- Balanced inhibition underlies tuning and sharpens spike timing in auditory cortexNature 426:1–5
- Mechanisms of pattern decorrelation by recurrent neuronal circuitsNat Neurosci 13:1003–1010https://doi.org/10.1038/nn.2591
- Cortical Processing of Odor ObjectsNeuron 72:506–519https://doi.org/10.1016/j.neuron.2011.10.027
- Nonlinear transient amplification in recurrent neural networks with short-term plasticityeLife https://doi.org/10.1101/2021.06.09.447718
- Transformation of odor representations in target areas of the olfactory bulbNat Neurosci 12:474–482https://doi.org/10.1038/nn.2288
- Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networksNature Communications 6:1–13https://doi.org/10.1038/ncomms7922
- Connectivity, Plasticity, and Function of Neuronal Circuits in the Zebrafish Olfactory Forebrain
Copyright
© 2024, Meissner-Bernard et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.