Neuroscience

Adaptation of spontaneous activity in the developing visual cortex

  1. Marina E Wosniack
  2. Jan H Kirchner
  3. Ling-Ya Chao
  4. Nawal Zabouri
  5. Christian Lohmann
  6. Julijana Gjorgjieva (corresponding author)
  1. Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Germany
  2. School of Life Sciences Weihenstephan, Technical University of Munich, Germany
  3. Netherlands Institute for Neuroscience, Netherlands
  4. Center for Neurogenomics and Cognitive Research, Vrije Universiteit, Netherlands
Research Article
Cite this article as: eLife 2021;10:e61619 doi: 10.7554/eLife.61619

Abstract

Spontaneous activity drives the establishment of appropriate connectivity in different circuits during brain development. In the mouse primary visual cortex, two distinct patterns of spontaneous activity occur before vision onset: local low-synchronicity events originating in the retina and global high-synchronicity events originating in the cortex. We sought to determine the contribution of these activity patterns to jointly organize network connectivity through different activity-dependent plasticity rules. We postulated that local events shape cortical input selectivity and topography, while global events homeostatically regulate connection strength. However, to generate robust selectivity, we found that global events should adapt their amplitude to the history of preceding cortical activation. We confirmed this prediction by analyzing in vivo spontaneous cortical activity. The predicted adaptation leads to the sparsification of spontaneous activity on a slower timescale during development, demonstrating the remarkable capacity of the developing sensory cortex to acquire sensitivity to visual inputs after eye-opening.

Introduction

The impressive ability of the newborn brain to respond to its environment and generate coordinated output without any prior experience suggests that brain networks undergo substantial organization, tuning and coordination even as animals are still in the womb, driven by powerful developmental mechanisms. These broadly belong to two categories: activity-independent mechanisms, involving molecular guidance cues and chemoaffinity gradients which establish the initial coarse connectivity patterns at early developmental stages (Feldheim and O'Leary, 2010; Goodhill, 2016), and activity-dependent plasticity mechanisms which continue with refinement of this initially imprecise connectivity into functional circuits that can execute diverse behaviors in adulthood (Ackman and Crair, 2014; Richter and Gjorgjieva, 2017; Thompson et al., 2017). Non-random patterns of spontaneous activity drive these refinements and act as training inputs to the immature circuits before the onset of sensory experience. Many neural circuits in the developing brain generate spontaneous activity, including the retina, hippocampus, cortex, and spinal cord (reviewed in Blankenship and Feller, 2010; Wang and Bergles, 2015). This activity regulates a plethora of developmental processes such as neuronal migration, ion channel maturation, and the establishment of precise connectivity (Huberman et al., 2008; Moody and Bosma, 2005; Kirkby et al., 2013; Godfrey and Swindale, 2014), and perturbing this activity impairs different aspects of functional organization and axonal refinement (Cang et al., 2005a; Xu et al., 2011; Burbridge et al., 2014). These studies firmly demonstrate that spontaneous activity is necessary and instructive for the emergence of specific and distinct patterns of neuronal connectivity in the developing nervous system.

Recent in vivo recordings in the developing sensory cortex have found that the spatiotemporal properties of spontaneous activity, including frequency, synchronicity, amplitude and spatial spread, depend on the studied region and developmental age (Golshani et al., 2009; Rochefort et al., 2009; Gribizis et al., 2019). These studies have shown that the generation and propagation of spontaneous activity in the intact cortex depend on input from different brain areas. For instance, activity from the sensory periphery substantially contributes to the observed activity patterns in the developing cortex, but there are other independent sources of activity within the cortex itself (Ackman et al., 2012; Siegel et al., 2012; Hanganu et al., 2006; Gribizis et al., 2019). Two-photon imaging of spontaneous activity in the in vivo mouse primary visual cortex before eye-opening (postnatal days, P8-10) has demonstrated that there are two independently occurring patterns of spontaneous activity with different sources and spatiotemporal characteristics. Peripheral events driven by retinal waves (Feller et al., 1996; Blankenship and Feller, 2010) spread in the cortex as low-synchronicity local events (L-events), engaging a relatively small number of the recorded neurons. In contrast, events intrinsic to the cortex that are unaffected by manipulation of retinal waves spread as highly synchronous global events (H-events), activating a large proportion of the recorded neurons (Siegel et al., 2012).

We know relatively little about the information content of these local and global patterns of spontaneous cortical activity relevant for shaping local and brain-wide neural circuits. Specifically, it is unknown whether spontaneous activity from different sources affects distinct aspects of circuit organization, each providing an independent instructive signal, or if L- and H-events cooperate to synergistically guide circuit organization. Therefore, using experimentally characterized properties of spontaneous activity in the visual cortex in vivo at P8-10, we developed a biologically plausible, yet analytically tractable, theoretical framework to determine the implications of this activity on normal circuit development with a focus on the topographic refinement of connectivity and the emergence of stable receptive fields.

We postulated that peripheral L-events play a key role in topographically organizing receptive fields in the cortex, while H-events regulate connection strength homeostatically, operating in parallel to network refinements by L-events. We considered that H-events are ideally suited for this purpose because they maximally activate many neurons simultaneously, and hence lack topographic information that can be used for synaptic refinement. We studied two prominent activity-dependent plasticity rules to investigate the postulated homeostatic function of H-events, the Hebbian covariance rule (Miller et al., 1989; Miller, 1994; Lee et al., 2002; Sejnowski, 1977) and the Bienenstock-Cooper-Munro (BCM) rule (Bienenstock et al., 1982). In the Hebbian covariance rule, simultaneous pre- and postsynaptic activation (e.g. during L-events) triggers the selective potentiation of synaptic connections, while postsynaptic activation without presynaptic input (e.g. during H-events) leads to the unselective depression of all connections. In the BCM rule, H-events dynamically regulate potentiation and depression. However, both rules generate receptive fields that have either refinement or topography defects. Therefore, we proposed that H-events might be self-regulating, with amplitudes that adapt to the levels of recent cortical activity. Indeed, we found evidence of this adaptation in spontaneous activity recorded in the developing visual cortex (Siegel et al., 2012). Besides generating topographically refined receptive fields, this adaptation leads to the sparsification of cortical spontaneous activity over a prolonged timescale of development as in the visual and somatosensory cortex (Rochefort et al., 2009; Golshani et al., 2009). Therefore, our work proposes that global, cortically generated activity in the form of H-events rapidly adapts to ongoing network activity, supporting topographic organization of connectivity and maintaining synaptic strengths in an operating regime.

Results

A network model for connectivity refinements driven by spontaneous activity

How spontaneous activity instructs network refinements between the sensory periphery and the visual cortex depends on two aspects: the properties of spontaneous activity and the activity-dependent learning rules that translate these properties into specific changes in connectivity. We first characterized spontaneous activity in the mouse primary visual cortex before eye-opening, and investigated two prominent learning rules to organize connectivity in a network model of the thalamus and visual cortex.

Spontaneous activity recorded in vivo using two-photon Ca²⁺ imaging exhibits two independently occurring patterns: network events originating in the retina and propagating through the thalamus, and network events generated in the cortex (Siegel et al., 2012; Figure 1A). These two types of events were first identified by a cluster analysis based on event amplitude and jitter (a measure of synchrony; Siegel et al., 2012). The analysis identified a participation rate criterion to separate network events into local low-synchronicity (L-) events generated in the retina, where 20–80% of the neurons in the field of view are simultaneously active, and global high-synchronicity (H-) events intrinsic to the cortex, where nearly all (80–100%) cortical neurons are simultaneously active. This same 80% participation rate criterion was recently validated both at the single-cell and population levels (Leighton et al., 2020). We first confirmed differences in specific features of the recorded spontaneous events (Siegel et al., 2012), and also characterized novel aspects (Figure 1B). In particular, L-events have a narrow distribution of amplitudes, and their inter-event intervals (IEIs, the inverse of event frequency) follow an exponential-like distribution. H-events have a broader distribution of amplitudes with higher values, and their IEIs follow a long-tailed distribution with higher values relative to L-events. We found that L- and H-events have similar durations.
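The participation-rate criterion above is simple enough to sketch directly. Below is a minimal, hypothetical Python illustration of classifying a network event by the fraction of simultaneously active neurons; the function name and the handling of sub-threshold events are our own assumptions, not code from the study.

```python
# Hypothetical sketch of the 20-80% (L) vs. 80-100% (H) participation-rate
# criterion described in the text. Names are illustrative, not from the paper.

def classify_event(n_active, n_recorded):
    """Classify a network event by the fraction of simultaneously active neurons."""
    participation = n_active / n_recorded
    if 0.2 <= participation < 0.8:
        return "L"          # local, low-synchronicity (retinal origin)
    elif participation >= 0.8:
        return "H"          # global, high-synchronicity (cortical origin)
    return "subthreshold"   # too few participants to count as a network event

# Example: 30 of 100 neurons active -> L-event; 90 of 100 -> H-event
events = [classify_event(30, 100), classify_event(90, 100), classify_event(5, 100)]
```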

Spontaneous activity patterns in early postnatal development.

(A) Two distinct patterns of spontaneous activity recorded in vivo in the visual cortex of young mice before eye-opening (P8-10). Blue shading denotes local low-synchronicity (L-) events generated by the retina; orange shading denotes global high-synchronicity (H-) events generated by the cortex. Activated neurons during each event are shown in red. (B) Distributions of different event properties (amplitude, inter-event interval, and event duration). Amplitude was measured as changes in fluorescence relative to baseline, ΔF/F0. (C) Network schematic: thalamocortical connections are refined by spontaneous activity. The initially broad receptive fields with weak synapses evolve into a stable configuration with strong synapses organized topographically.

Next, we built a model that incorporates these two different patterns of spontaneous activity to investigate the potentially different roles that L- and H-events might play in driving connectivity refinements between the thalamus and the visual cortex (Figure 1C). We used a one-dimensional feedforward network model – a microcircuit motivated by the small region of cortex imaged experimentally – composed of two layers, an input (presynaptic) layer corresponding to the sensory periphery (the thalamus) and a target (postsynaptic) layer corresponding to the primary visual cortex (Figure 2A). Cortical activity v in the model is generated by two sources (Figure 2B; Table 1). First, L-events, u, activate a fraction between 20% and 80% of neighboring thalamic cells (also referred to as the L-event size) and drive the cortex through the weight matrix, W. Second, H-events, v_spon, activate the majority of the cortical cells (a fraction between 80% and 100%, also referred to as the H-event size). We used rate-based units with a membrane time constant τ_m and a linear activation function, consistent with the coarse temporal structure of spontaneous activity during development, which carries information on the order of hundreds of milliseconds (Gjorgjieva et al., 2009; Butts and Kanold, 2010; Richter and Gjorgjieva, 2017):

(1) τ_m dv(t)/dt = −v(t) + W(t)u(t) + v_spon(t).
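Equation 1 can be integrated numerically with a forward-Euler scheme. The sketch below is illustrative only: network sizes and initial weights follow Table 1, but the time step, the example L-event, and all function names are our own assumptions.

```python
import numpy as np

# Minimal sketch of Equation 1: forward-Euler integration of the linear rate
# unit tau_m * dv/dt = -v + W u + v_spon. Sizes and tau_m follow Table 1
# (50x50 network, tau_m = 0.01 s); the time step and example event are assumed.

def step_cortex(v, W, u, v_spon, tau_m=0.01, dt=0.001):
    """One Euler step of the cortical rate dynamics."""
    dv = (-v + W @ u + v_spon) / tau_m
    return v + dt * dv

rng = np.random.default_rng(0)
n_u, n_v = 50, 50
W = rng.uniform(0.15, 0.25, size=(n_v, n_u))  # initial weights, as in Table 1
u = np.zeros(n_u)
u[10:30] = 1.0          # an L-event activating 40% of neighboring thalamic cells
v = np.zeros(n_v)
v_spon = np.zeros(n_v)  # no H-event during this L-event
for _ in range(100):    # integrate well past tau_m, so v approaches W u
    v = step_cortex(v, W, u, v_spon)
```

Because the activation is linear and the input is held fixed, the cortical rates relax toward the steady state v = W u within a few membrane time constants.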
A network model of thalamocortical connectivity refinements.

(A) A feedforward network with an input layer of thalamic neurons u(t) connected to an output layer of cortical neurons v(t) by synaptic weights W(t). (B) Properties of L- and H-events in the model (amplitude L_amp, H_amp; inter-event interval L_int, H_int; and duration L_dur, H_dur) follow probability distributions extracted from data (Siegel et al., 2012) (see Table 1). (C) Initially weak all-to-all connectivity with a small topographic bias along the diagonal (left) gets refined by the spontaneous activity events (right). (D) Evaluating network refinement through receptive field statistics (see Materials and methods). We quantify two properties: (1) the receptive field size and (2) the topography, which quantifies on average how far away the receptive field center of each cortical cell (red dot) is from the diagonal (dashed gray line).

Table 1
List of parameters used in the model unless stated otherwise.

Name | Value/Distribution | Description

Network
N_u | 50 | Number of thalamic neurons
N_v | 50 | Number of cortical neurons
T | 50,000 | Simulation length [s]

Weights
w_ini | U(0.15, 0.25) | Range of initial weights (U: uniform dist.)
s | 0.05 | Amplitude of Gaussian bias
σ_s | 4 | Spread of Gaussian bias
w_max | 0.5 | Weight saturation limit

L-events
L_amp | 1.0 | Amplitude (equivalently, binary neuron)
L_pct | U(20%, 80%) | Percentage of thalamic cells activated
L_dur | 𝒩(0.15, 0.015) | Mean duration [s] (𝒩: Gaussian dist.)
L_int | Exp(1.5) | Mean inter-event interval [s] (Exp: exponential dist.)

H-events
H_amp | 𝒩(6, 2) | Amplitude
H_pct | U(80%, 100%) | Percentage of cortical cells activated
H_dur | 𝒩(0.15, 0.015) | Mean duration [s]
H_int | Gamma(3.5, 1.0) | Mean inter-event interval [s] (Gamma: Gamma dist.)

Time constants
τ_m | 0.01 | Membrane time constant [s]
τ_w | 500 | Weight-change time constant for Hebbian covariance rule [s]
τ_w | 1000 | Weight-change time constant for BCM rule [s]
τ_θ | 20 | Output threshold time constant for BCM rule [s]
τ_η | 1 | Adaptation time constant [s]

To investigate the refinement of network connectivity during development, we studied the evolution of synaptic weights using plasticity rules operating over long timescales identified experimentally (Butts et al., 2007; Winnubst et al., 2015). First, we examined a classical Hebbian plasticity rule where coincident presynaptic thalamic activity and postsynaptic cortical activity in the form of L-events leads to synaptic potentiation. We postulated that H-events act homeostatically and maintain synaptic weights in an operating regime by depressing the majority of synaptic weights in the absence of peripheral drive. Because they activate most cortical neurons simultaneously, H-events lack the potential to drive topographical refinements. Their postulated homeostatic action resembles synaptic depression through downscaling, as observed in response to highly correlated network activity, for instance, upon blocking inhibition (Turrigiano and Nelson, 2004), or during slow-wave sleep (Tononi and Cirelli, 2006). Therefore, to the Hebbian rule we added a non-Hebbian term that depends only on the postsynaptic activity, with a proportionality constant that controls the relative amount of synaptic depression. This differs from other Hebbian covariance plasticity rules for the generation of weight selectivity, which include non-Hebbian terms that depend on both pre- and postsynaptic activity (Lee et al., 2002; Mackay and Miller, 1990) and is mathematically related to models of heterosynaptic plasticity (Chistiakova et al., 2014; Lynch et al., 1977; Zenke et al., 2015). Hence, the change in synaptic weight between cortical neuron j and thalamic neuron i is given by:

(2) τ_w dw_ji(t)/dt = v_j(t) (u_i(t) − θ_u),

where τ_w is the learning time constant and θ_u the proportionality constant in the non-Hebbian term, which we refer to as the ‘input threshold’. The activity time constant τ_m is much faster than the learning time constant, τ_m ≪ τ_w, which allows us to separate timescales and to study how network activity on average affects learning (see Appendix). Interestingly, in this Hebbian covariance rule, the input threshold together with H-events effectively implements a subtractive constraint (see Appendix: ‘Normalization constraints’). Subtractive normalization preserves the sum of all weights by subtracting from each weight a constant amount independent of that weight’s strength, and is known to generate selectivity and refined receptive fields (Miller and MacKay, 1994). This is in contrast to the alternative multiplicative normalization, which generates graded and unrefined receptive fields where most correlated inputs are represented (Miller and MacKay, 1994), and hence was not considered here.
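To make the rule concrete, here is a hedged Python sketch of a discrete-time update implementing Equation 2, with weights clipped to [0, w_max] as in Table 1. The parameter values, the update step, and the clipping-based saturation are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Sketch of the Hebbian covariance rule (Equation 2):
#   tau_w * dw_ji/dt = v_j * (u_i - theta_u).
# Inputs above theta_u potentiate their synapses; silent inputs depress
# whenever the postsynaptic cell is active (e.g. during an H-event, u = 0).
# Parameter values are illustrative; clipping to [0, w_max] is an assumption.

def hebbian_cov_step(W, v, u, theta_u=0.5, tau_w=500.0, dt=0.15, w_max=0.5):
    """One update of the weight matrix; weights clipped to [0, w_max]."""
    dW = np.outer(v, u - theta_u) / tau_w
    return np.clip(W + dt * dW, 0.0, w_max)

W = np.full((4, 4), 0.2)
u = np.array([1.0, 1.0, 0.0, 0.0])  # L-event over the first two thalamic cells
v = W @ u                            # steady-state cortical response
W_new = hebbian_cov_step(W, v, u)
# Weights from active inputs grow; weights from silent inputs shrink.
```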

Additionally, we investigated the BCM learning rule, which can induce weight stability and competition without imposing constraints in the weights, and hence generate selectivity in postsynaptic neurons which experience patterned inputs (Bienenstock et al., 1982). For instance, the BCM framework can explain the emergence of ocular dominance (neurons in primary visual cortex being selective for input from one of the two eyes) and orientation selectivity in the visual system (Cooper et al., 2004). An important property of the BCM rule is its ability to homeostatically regulate the balance between potentiation and depression of all incoming inputs into a given neuron depending on how far away the activity of that neuron is from some target level. The change in synaptic weight between cortical neuron j and thalamic neuron i is given by:

(3) τ_w dw_ji(t)/dt = v_j(t) u_i(t) (v_j(t) − θ_vj(t)),

where

(4) τ_θ dθ_vj(t)/dt = −θ_vj(t) + v_j²(t)/v_0

describes the threshold θ_vj(t) between depression and potentiation, which slides as a function of postsynaptic activity; v_0 is the target rate of the cortical neurons and τ_θ the sliding threshold time constant. According to this rule, synaptic weight change is Hebbian in that it requires coincident pre- and postsynaptic activity, as is only the case during L-events. H-events induce no direct plasticity in the network because of the absence of presynaptic activation, but they still trigger synaptic depression indirectly by increasing the threshold between potentiation and depression.
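A minimal discrete-time sketch of Equations 3 and 4 may help. All names, parameter values, and the clipping-based saturation below are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

# Sketch of the BCM rule (Equations 3-4). During an H-event (u = 0) the weight
# change is zero, but the high postsynaptic rate raises the sliding threshold
# theta_v, biasing subsequent L-events toward depression. Values illustrative.

def bcm_step(W, theta_v, v, u, v0=0.7, tau_w=1000.0, tau_theta=20.0,
             dt=0.15, w_max=0.5):
    """One update of the weights and sliding threshold."""
    dW = np.outer(v * (v - theta_v), u) / tau_w
    dtheta = (-theta_v + v**2 / v0) / tau_theta
    return np.clip(W + dt * dW, 0.0, w_max), theta_v + dt * dtheta

W = np.full((3, 3), 0.2)
theta_v = np.full(3, 0.1)

# A large L-event: postsynaptic rates exceed theta_v -> LTP on active inputs.
u = np.array([1.0, 1.0, 1.0])
v = W @ u                      # 0.6 per cortical cell, above theta_v = 0.1
W_ltp, theta_after = bcm_step(W, theta_v, v, u)

# An H-event: u = 0, so no direct weight change, but the high cortical rate
# pushes the threshold up toward v^2 / v0.
u_h = np.zeros(3)
v_h = np.full(3, 2.0)          # illustrative H-event-driven cortical rate
W_h, theta_h = bcm_step(W, theta_v, v_h, u_h)
```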

Based on experimental measurements of the extent of thalamocortical connectivity at different developmental ages (López-Bendito, 2018), we assumed that initial network connectivity was weak and all-to-all, such that each cortical neuron was innervated by all thalamic neurons. To account for the activity-independent stage of development guided by molecular guidance cues and chemoaffinity gradients, a small bias was introduced to the initial weight matrix to generate a coarse topography in the network, where neighboring neurons in the thalamus project to neighboring neurons in the cortex and preserve spatial relationships (Figure 2C, left). Following connectivity refinements through spontaneous activity and plasticity, a desired outcome is that the network achieves a stable topographic configuration (Figure 2C, right) where each cortical neuron receives input only from a neighborhood of thalamic neurons.

To evaluate the success of this process, we quantified two properties. First, we measured the receptive field size, defined as the average number of thalamic neurons that strongly innervate a cortical cell (Figure 2D). We normalized the receptive field size to the total number of thalamic cells, so that it ranges from 0 (no receptive field; all cortical cells decouple from the thalamus) to 1 (each cortical cell receives input from all thalamic cells; all weights potentiate, leading to no selectivity). Second, we quantified the topography of the final receptive field (Figure 2D and Materials and methods), which evaluates how well the initial bias is preserved in the final network connectivity. The topography ranges from 0 (all cortical neurons connect to the same set of thalamic inputs) to 1 (perfect topography relative to the initial bias). We note that the lack of an initial connectivity bias did not disrupt connectivity refinements and receptive field formation, but on its own could not establish topography (Figure 3—figure supplement 1A).
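The two statistics can be illustrated with simple proxies. The exact definitions are in the paper's Materials and methods; the weight threshold and the distance-to-diagonal normalization below are our own simplified assumptions.

```python
import numpy as np

# Simplified proxies for the two receptive-field statistics described above:
# rf_size  = fraction of weights exceeding a threshold (normalized RF size),
# topography = 1 minus the normalized mean distance of each cortical cell's
# RF center from the diagonal. Threshold and normalization are assumptions.

def rf_size(W, w_thresh=0.25):
    """Average fraction of thalamic cells strongly innervating each cortical cell."""
    return (W > w_thresh).mean()

def topography(W, w_thresh=0.25):
    """1 = RF centers lie on the diagonal; 0 = no strong inputs remain."""
    n_v, n_u = W.shape
    centers = []
    for j in range(n_v):
        strong = np.flatnonzero(W[j] > w_thresh)
        if strong.size:
            centers.append((j, strong.mean()))
    if not centers:
        return 0.0  # fully decoupled network
    dist = np.mean([abs(i - j * n_u / n_v) for j, i in centers])
    return 1.0 - dist / n_u

# A perfectly diagonal weight matrix: small receptive fields, ideal topography.
W_diag = 0.5 * np.eye(20)
```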

Spontaneous cortical H-events disrupt topographic connectivity refinement in the Hebbian covariance and BCM plasticity rules

Both the Hebbian and the BCM learning rules are known to generate selectivity with patterned input stimuli (Mackay and Miller, 1990; Bienenstock et al., 1982), and we confirmed that L-events on their own can refine receptive fields in both scenarios (Figure 3—figure supplement 2). We found that including H-events in the Hebbian covariance rule requires that the parameters of the learning rule and the properties of H-events (the input threshold θ_u and the inter-event interval H_int) follow a tight relationship to generate selective and refined receptive fields (Figure 3A,C, left). For a narrow range of H_int, weight selectivity emerges, but with some degree of decoupling between pre- and postsynaptic neurons (Figure 3A, middle). Outside of this narrow functional range, individual cortical neurons are either non-selective (Figure 3A, left) or decoupled from the thalamus (Figure 3A, right). These results are robust to changes in the participation rates of L- and H-events. For instance, when H-events involve 70–100% of cortical neurons, the percentage of outcomes with selective receptive fields increases slightly to 19.8% (compared to 14.0% when H-events involve 80–100% of cortical neurons), while the percentage of outcomes with decoupled cortical neurons increases to 60.4% (compared to 43.6%), reinforcing the idea that H-events are detrimental to receptive field refinements. In comparison, including H-events in the BCM learning rule does not decouple pre- and postsynaptic neurons (Figure 3B), and selectivity can be generated over a wider range of H-event inter-event intervals H_int and target rates v_0 (Figure 3C, right).

Figure 3 with 2 supplements.
Spontaneous cortical events disrupt receptive field refinement.

(A) Receptive fields generated by the Hebbian covariance rule with input threshold θ_u = 0.4 and decreasing H_int. (B) Receptive fields generated by the BCM rule with target rate v_0 = 0.7 and decreasing H_int. (C) Top: Receptive field sizes obtained from 500 Monte Carlo simulations for combinations of H_int and θ_u for the Hebbian covariance rule (left) and H_int and v_0 for the BCM rule (right). Bottom: Percentage of simulation outcomes classified as ‘selective’ when the average receptive field size is smaller than 1 and larger than 0, ‘non-selective’ when the average receptive field size is equal to 1, and ‘decoupled’ when the average receptive field size is 0, for the two rules. (D) Topography of receptive fields classified as selective in C. The horizontal line indicates the median, the box is drawn between the 25th and 75th percentiles, whiskers extend above and below the box to the most extreme data points within 1.5 times the interquartile range of the box, and points indicate all data points. Distributions are significantly different (***) as measured by a two-sample Kolmogorov-Smirnov test (n = 70 and 302 selective outcomes out of 500 for the two rules; p < 10⁻¹⁰; D = 0.45). (E) The response of a single cortical cell to L-events of different sizes (color) as a function of the sliding threshold for the BCM rule with H_int = 3.5 and v_0 = 0.7. The cell’s incoming synaptic weights from presynaptic thalamic neurons undergo LTP or LTD depending on L-event size. (F) Probability of L-event size contributing to LTD (left) and LTP (right) for the BCM rule with the same parameters as in E.

Despite this apparent advantage of the BCM rule, it generates receptive fields with much worse topography than the Hebbian covariance rule (Figure 3D). The underlying reason for this worse topography of the BCM rule is the sign of synaptic change evoked by L-events of different sizes corresponding to different participation rates. In particular, small L-events with low participation rates generate postsynaptic cortical activity smaller than the sliding threshold and promote long-term synaptic depression (LTD), while large L-events with high participation rates generate cortical activity larger than the sliding threshold and promote long-term synaptic potentiation (LTP) (Figure 3E,F). Therefore, the amount of information for connectivity refinements present in the small L-events is limited in the BCM learning rule resulting in poor topographic organization of receptive fields.

Taken together, our results confirm that H-events can operate in parallel to network refinements by L-events and homeostatically regulate connection strength as postulated. However, the formation of receptive fields by the Hebbian covariance rule is very sensitive to small changes in event properties (e.g. inter-event intervals), which are common throughout development (Rochefort et al., 2009). In this case, H-events are disruptive and lead to the elimination of all thalamocortical synapses, effectively decoupling the cortex from the sensory periphery. In the BCM rule, including H-events prevents the decoupling of cortical cells from the periphery because the amount of LTD is dynamically regulated by the sliding threshold on cortical activity. However, L-events lose the ability to instruct topography because they generate LTP primarily when they are large. Therefore, neither learning rule seems suitable to organize network connectivity between the thalamus and cortex during development.

Adaptive H-events achieve robust selectivity

After comparing the distinct outcomes of the Hebbian and BCM learning rules in the presence of L- and H-events, we proposed that a mechanism that regulates the amount of LTD during H-events based on cortical activity, similar to the sliding threshold of the BCM rule, could be a biologically plausible way to mitigate the decoupling of cortical cells in the Hebbian covariance rule. This mechanism, combined with the Hebbian learning rule, could lead to refined receptive fields that also have good topographic organization. Hence, we postulated that H-events adapt: during H-events, cortical cells scale their amplitude according to the average amplitude of recent preceding events. In particular, for each cortical cell j, an activity trace η_j integrates the cell’s firing rate v_j over a timescale τ_η slower than the membrane time constant:

(5) τ_η dη_j(t)/dt = −η_j(t) + v_j(t).

This activity trace η_j then scales the intrinsic firing rate of the cortical cells during an H-event, H_amp → η_j H_amp, making it dependent on recent activity. The activity trace η_j might be implemented biophysically through a calcium-dependent signaling pathway that is activated upon sufficient burst depolarization and that can modulate a cell’s excitability in the form of plasticity of intrinsic excitability (Desai et al., 1999; Daoudal and Debanne, 2003; Tien and Kerschensteiner, 2018). A fast, activity-dependent mechanism that decreases single-neuron excitability following a prolonged period of high network activity has been identified in spinal motor neurons of neonatal mice (Lombardo et al., 2018). However, there might be other ways to implement this adaptation (see Discussion).
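As a concrete illustration, the trace of Equation 5 and the amplitude scaling H_amp → η_j H_amp can be sketched as follows; the integration step and the example activity values are assumptions, with τ_η = 1 s and the nominal H_amp taken from Table 1.

```python
import numpy as np

# Sketch of the adaptation trace (Equation 5): each cortical cell's trace
# eta_j low-pass filters its rate v_j with time constant tau_eta, and the
# H-event amplitude is scaled by eta_j (H_amp -> eta_j * H_amp).
# Step size and example rates are illustrative assumptions.

def update_trace(eta, v, tau_eta=1.0, dt=0.01):
    """One Euler step of tau_eta * d(eta)/dt = -eta + v."""
    return eta + dt * (-eta + v) / tau_eta

eta = np.zeros(3)
v_high = np.array([2.0, 2.0, 2.0])   # a period of strong cortical activity
for _ in range(500):                 # 5 s >> tau_eta: trace approaches v
    eta = update_trace(eta, v_high)

H_amp = 6.0                          # nominal H-event amplitude (Table 1)
adapted_amp = eta * H_amp            # strong recent activity -> strong H-event
```

Consistent with the data analysis later in the text, a period of strong preceding activity drives the trace up and hence yields a stronger adapted H-event amplitude.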

Using adaptive H-events, we investigated the refinement of receptive fields in the network with the same Hebbian covariance rule (Figure 4A). In sharp contrast to the Hebbian covariance rule with non-adaptive H-events (Figure 3A), we observed that varying the average inter-event interval of H-events over a wider and more biologically realistic range (from the data, H_int ≈ 3L_int) yields selectivity and appropriately refined receptive fields (Figure 4A). Increasing θ_u or decreasing H_int yields progressively smaller receptive fields while mitigating cortical decoupling (Figure 4B). The refined receptive fields also have very good topography because L-events in the Hebbian learning rule carry nearest-neighbor information for the topographic refinements (Figure 4C). Moreover, the proportion of selective receptive fields for adaptive H-events is much higher than for their non-adaptive counterparts (390 vs. 70 out of 500 simulations). These results persist when the participation rates of L- and H-events change. For instance, when H-events involve 70–100% of cortical neurons, the percentage of outcomes with selective receptive fields (75.0%) and the percentage of outcomes with decoupled cortical neurons (0%) remain similar.

Adaptive cortical events refine thalamocortical connectivity.

(A) Receptive field refinement with adaptive H-events and different H-event inter-event intervals, H_int. Top: θ_u = 0.5; bottom: θ_u = 0.6. (B) Receptive field sizes from 500 Monte Carlo simulations for combinations of H_int and θ_u. Bottom: Percentage of simulation outcomes classified as ‘selective’ when the average receptive field size is smaller than 1 and larger than 0, ‘non-selective’ when the average receptive field size is equal to 1, and ‘decoupled’ when the average receptive field size is 0. (C) Topography of receptive fields classified as selective in B. The horizontal line indicates the median, the box is drawn between the 25th and 75th percentiles, whiskers extend above and below the box to the most extreme data points within 1.5 times the interquartile range of the box, and points indicate all data points. Distributions are not significantly different (ns) as measured by a two-sample Kolmogorov-Smirnov test (n = 70 and 390 selective outcomes out of 500 for the two rules; p = 0.41; D = 0.45). (D) Top: Reduction of the full weight dynamics into two dimensions. Two sets of weights were averaged: those which potentiate and form the receptive field, w_RF, and the complementary set of weights that depress, w_C. Bottom: Initial conditions in the reduced two-dimensional phase plane were classified into three outcomes: ‘selective’, ‘non-selective’, and ‘decoupled’. We sampled 2500 initial conditions which evolved according to Equation 16 until the trajectories reached one of the selective fixed points, (w_max, 0) and (0, w_max), or resulted in no selectivity because both weights either depressed to (0, 0) or potentiated to (w_max, w_max). The normalized number of initial coordinates generating each region can be interpreted as the area of the phase plane that results in each outcome.
(E) Top: Normalized area of the phase plane of the reduced two-dimensional system that resulted in ‘selective’, ‘non-selective’, and ‘decoupled’ outcomes for θ_u = 0.53 as a function of H-event strength. The darker shading indicates the range of non-adapted H-event strengths where the selectivity area is maximized. Bottom: The corresponding adapted strength of H-events was calculated in simulations with adaptive H-events and plotted as a function of the nominal, non-adapted strength of H-events. The range of adapted H-event strengths (bottom) corresponds to the range of non-adaptive values that maximize the selectivity area (top). Each point shows the average over 10 runs and the bars the standard deviation (which is very small).

Next, we investigated how the proposed adaptive mechanism scales H-event amplitude by modulating the relative strength of H-events. For the Hebbian covariance rule, we calculated the analytical solution for weight development with L- and H-events by reducing the dimension of the system to two: one variable being the average of the weights that potentiate and form the receptive field, w_RF, and the other being the average of the remaining weights, which we call ‘complementary’ to the receptive field, w_C (Figure 4D; see Appendix, Materials and methods). We calculated the phase plane area of the reduced two-dimensional system with non-adaptive H-events (measured as the proportion of initial conditions) that results in selectivity, potentiation, or depression (Figure 4D, bottom). We found that adaptively modulating the strength of H-events maximizes the area of the phase plane that results in selectivity (Figure 4E, shaded region). The range of H-event strengths that maximizes the selective area for each input threshold in the reduced two-dimensional system can be related to the scaling of H-event amplitude in the simulations (Materials and methods). In particular, the adaptation reliably shifts the H-event amplitude that would have occurred without adaptation, which we call the ‘non-adapted strength of H-events’, into the regime of amplitudes that maximizes selectivity, which we call the ‘adapted strength of H-events’ (Figure 4E). Therefore, the adaptation of H-event amplitudes controls the selective refinement by peripheral L-events by modulating cortical depression through adapted H-events.

In vivo spontaneous cortical activity shows a signature of adaptation

To determine whether spontaneous cortical activity contains a signature of our postulated adaptation mechanism of H-event amplitudes, we reanalyzed published in vivo two-photon Ca2+ imaging data recorded in the visual cortex of young mice (P8-10) (Siegel et al., 2012). We combined multiple consecutive ~300 s-long recordings for up to 40 min of data from a given animal. First, we tested for long-term fluctuations in cortical excitability in the concatenated recordings of the same animal. We identified L- and H-events based on previously established criteria (Siegel et al., 2012). We found that the average amplitude of all (L and H) events is not significantly different across consecutive recordings of the same animal (Figure 5—figure supplement 1A). Additionally, across different animals and ages, individual event amplitudes remain uncorrelated between successive recordings at this timescale (Figure 5—figure supplement 1B). This suggests that there are no prominent long-term amplitude fluctuations, and therefore that the correlations we report below cannot be explained by such fluctuations. Even so, slow amplitude fluctuations would not be able to generate refined receptive fields in the model (Figure 5—figure supplement 2).

Next, we investigated the relationship between the amplitude of a given H-event and the average activity preceding it. For each detected H-event, we extracted all spontaneous (L- or H-) events that preceded this H-event up to Tmax=300 s before it. We then scaled the amplitude of each preceding event by multiplying it by an exponential kernel with a decay time constant of τdecay=1000 s, which is sufficiently long to integrate many preceding spontaneous events (compared with the inter-event intervals in Figure 1B), and averaged these scaled amplitudes to obtain an aggregate quantity over amplitude and frequency (see Materials and methods).

We found that this aggregate amplitude of L- and H-events preceding a given H-event is significantly correlated (r = 0.44, p < 10⁻¹⁰) with the amplitude of the selected H-event (Figure 5B). Consequently, a strong (weak) H-event follows strong (weak) average preceding network activity (Figure 5C), suggesting that cortical cells adapt their spontaneous firing rates as a function of their previous activity levels. The correlations are robust to variations in the inclusion criteria, the maximum time Tmax to integrate activity, and the exponential decay time constant τdecay (Figure 5—figure supplement 3).
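
The aggregation procedure described above can be sketched in a few lines of Python (the event arrays and the function name are hypothetical; the kernel parameters are the ones quoted in the text):

```python
import numpy as np

def aggregate_preceding_amplitude(event_times, event_amps, t_h,
                                  t_max=300.0, tau_decay=1000.0):
    """Average the exponentially scaled amplitudes of all events in the
    t_max-second window preceding an H-event at time t_h."""
    dt = t_h - np.asarray(event_times)
    amps = np.asarray(event_amps)
    mask = (dt > 0) & (dt <= t_max)
    if not mask.any():
        return np.nan  # no preceding activity within the window
    scaled = amps[mask] * np.exp(-dt[mask] / tau_decay)
    return scaled.mean()

# Hypothetical event train: times (s) and amplitudes of preceding L/H-events.
times = np.array([10.0, 50.0, 120.0, 200.0, 290.0])
amps = np.array([0.8, 1.2, 0.9, 1.5, 1.1])
agg = aggregate_preceding_amplitude(times, amps, t_h=300.0)
```

The Pearson correlation reported in Figure 5B is then simply the correlation between these aggregates and the corresponding H-event amplitudes, e.g. via `np.corrcoef`.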

Figure 5 with 3 supplements see all
Spontaneous events in developing cortex adapt to recent activity.

(A) Calcium trace of a representative recording with L- (blue) and H-events (orange) (Siegel et al., 2012). (B) The amplitude of an H-event shown as a function of the aggregate amplitude of preceding L- and H-events up to Tmax=300 s before it, scaled by an exponential kernel with a decay time constant of 1000 s (N=195 events from nine animals). Animals with fewer than 12 H-events preceded by activity within Tmax were excluded from this analysis (see Materials and methods). The Pearson correlation coefficient is r = 0.44, p < 10⁻¹⁰, CI = (0.32, 0.54). Red line indicates the regression line, with 95% confidence bounds as dashed lines. (C) Schematic of the postulated adaptation: A weak (strong) H-event is more likely to be preceded by weak (strong) spontaneous events.

Modulating spontaneous activity properties affects receptive field refinements

Our results make relevant predictions for the refinement of receptive fields upon manipulating spontaneous activity. For example, H-event frequency can be experimentally reduced by a gap junction blocker (carbenoxolone) (Siegel et al., 2012). Our work demonstrates that H-event frequency and the learning rule’s threshold between potentiation and depression trade off in setting receptive field size; hence, less frequent H-events require a somewhat higher threshold to achieve the same receptive field size (Figure 4).

Similarly, L-events can also be experimentally manipulated, for instance, by altering inhibitory signaling (Leighton et al., 2020), or the properties of retinal waves, which propagate as L-events into the cortex. We performed Monte Carlo simulations with a range of input thresholds θu and variable participation rates of thalamic neurons in L-events, using the Hebbian covariance rule with adaptive H-events (Figure 6A). Larger L-events in our model produce less refined, that is, larger receptive fields in the cortical network (Figure 6B,C, left). This result is not surprising given the proposed role of L-events in guiding receptive field refinements, and is consistent with the imprecise and unrefined receptive fields observed in the visual cortex of animals in which retinal wave properties have been modified. For instance, a prominent example of retinal wave manipulation is the β2 knockout mouse, which lacks expression of the β2 subunit of the nicotinic acetylcholine receptor (β2-nAChR) that mediates spontaneous retinal waves in the first postnatal week. In these animals, retinal waves are consistently larger, as characterized by the increased correlation with distance (Sun et al., 2008; Stafford et al., 2009; Cutts and Eglen, 2014), in addition to other features. As a result, there are measurable defects in the retinotopic map refinement of downstream targets (Grubb et al., 2003; Cang et al., 2005b; Burbridge et al., 2014). Smaller L-events also refine receptive fields with better topographic organization (Figure 6C, right) and do not impair connectivity refinements. This result could be linked to experiments where the expression of β2-nAChR is limited to the ganglion cell layer of the retina, resulting in smaller retinal waves than those in wild-type animals and undisturbed retinotopy in the superior colliculus (Xu et al., 2011), although the effects in the cortex are unknown.

Receptive field refinement depends on the properties of L-events.

(A) Receptive field sizes from 500 Monte Carlo simulations for different sizes of L-events, where the minimum participation rate was 20% and the maximum participation rate was varied. The input threshold was taken from the range 0.3 ≤ θu ≤ 0.7, while the adaptive H-events had a fixed inter-event interval Hint=3.5. (B) Individual receptive fields for different L-event maximum participation rates and θu=0.50. As the upper bound of the participation rate progressively increases from 40% to 80%, receptive fields get larger. (C) Left: Receptive field sizes from A binned according to the maximum L-event size. Right: Corresponding topography of selective receptive fields for different sizes of L-events. Red diamonds indicate the mean; horizontal bars indicate the 95% confidence interval.

Therefore, we suggest that certain manipulations that modulate the size of sensory activity from the periphery have a profound impact on the precision of receptive field refinement in downstream targets, making predictions to be tested experimentally. In contrast to retinal wave manipulations, the effect of altered inhibitory signaling on receptive field refinements is still unknown. It is likely that such manipulations will also affect H-events (Leighton et al., 2020), as well as shape ongoing plasticity in the network (Wu et al., 2020), and hence have less predictable effects on receptive field size and topography.

Adaptive H-events promote the developmental event sparsification of cortical activity

Thus far, we have focused on the development of network connectivity in our model driven by spontaneous activity based on properties measured during a few postnatal days (P8-10, Figure 4A). However, in vivo spontaneous activity patterns are not static, but are dynamically regulated during development by ongoing activity-dependent plasticity, which continuously reshapes network connectivity over a period of several days (Rochefort et al., 2009; Golshani et al., 2009; Frye and MacLean, 2016). Moreover, it is unclear if the same criteria based on event participation rates and amplitude can be used to separate spontaneous events into L and H at later developmental ages. Hence, we next asked how the modifications in network connectivity that result from receptive field refinement further modify spontaneous activity patterns on the much longer developmental timescale of several days in our model. To this end, we analyzed all spontaneous events of simulated cortical neurons during the process of receptive field refinement in the presence of adaptive H-events (Figure 4B). Since the input threshold θu of the Hebbian learning rule is related to receptive field size (Figure 4B), we used θu as a proxy for developmental time in the model: low θu corresponds to earlier developmental stages when receptive fields are large, while high θu corresponds to late developmental stages when receptive fields are refined. This assumption is also in line with the fact that the input resistance of neurons in V1 and S1 decreases during development (Etherington and Williams, 2011; Golshani et al., 2009), so that the depolarizing current necessary to trigger an action potential increases with age.

At an early developmental stage in the model (θu=0.45), the unrefined receptive fields of cortical neurons in our network model propagate thalamic activity into the cortex as very broad spontaneous events, while adaptive H-events remain intrinsic to the cortical layer. As in the data (Figure 7A,C; Siegel et al., 2012), the amplitude of events with 20–80% participation rate is approximately half the amplitude of events with greater than 80% participation rate (Figure 7B,D, left). Moreover, we also observed a high proportion of large events with greater than 80% participation rate (Figure 7—figure supplement 1A), suggesting that in the network model large spontaneous events are very frequent. At an intermediate developmental stage in the model (θu=0.50), as receptive fields refine and peripheral events activate fewer cortical neurons, our proposed adaptation of H-event amplitudes decreases the overall level of intrinsic activity in the cortical layer. This changes the relationship between effective amplitude and participation rate (Figure 7D, middle), with large events decreasing their amplitudes and density (Figure 7—figure supplement 1A). Finally, at late developmental stages in the model (θu=0.60), the relationship between effective amplitude and participation rate is almost absent (Figure 7D, right). Overall event amplitude is much lower, resulting in far fewer large events with greater than 80% participation rate (Figure 7—figure supplement 1B). Therefore, due to progressive receptive field refinement and continued H-event adaptation in response to the resulting activity changes, spontaneous events in our model progressively sparsify during ongoing development, becoming smaller in size with fewer participating cells. This finding suggests that spontaneous events in the cortex at later developmental ages can no longer be separated into L and H using the same criteria of participation rate and amplitude as during early development.
We also found that the mean pairwise correlation of all cortical neurons in the model decreases as a function of developmental age (θu; Figure 7E,F), which further supports the trend of progressive sparsification already observed in the event sizes.
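
The decorrelation readout used here amounts to averaging the off-diagonal entries of the pairwise correlation matrix of the simulated activity traces. A minimal sketch with synthetic traces (all names and numbers are illustrative, not the model's outputs):

```python
import numpy as np

def mean_pairwise_correlation(traces):
    """traces: (n_neurons, n_timepoints) activity array.
    Returns the mean off-diagonal Pearson correlation."""
    c = np.corrcoef(traces)                     # rows are neurons
    off_diag = c[~np.eye(len(c), dtype=bool)]   # drop the unit diagonal
    return off_diag.mean()

rng = np.random.default_rng(0)
# Synchronous toy traces ('early development'): cells share one signal.
shared = rng.normal(size=500)
early = shared + 0.1 * rng.normal(size=(8, 500))
# Desynchronized toy traces ('late development'): independent signals.
late = rng.normal(size=(8, 500))
```

Applied to the model's cortical traces at increasing θu, this quantity decreases, which is what Figure 7E,F reports.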

Figure 7 with 1 supplement see all
Adaptive H-events promote sparsification of cortical activity during development.

(A) Spontaneous activity in the mouse visual cortex recorded in vivo at P8-10 (Siegel et al., 2012). Each activity trace represents an individual cortical cell. Blue and orange shading denotes L-events and H-events, respectively, as identified by Siegel et al., 2012. (B) Sample traces of cortical activity for different values of θu (as a proxy for developmental age). Gray shading denotes all events detected in our networks. (C) Amplitude vs. participation rate plot from the data (Siegel et al., 2012). The regression line for the amplitude vs. participation rate in H-events has a positive slope. (D) Amplitude vs. participation rate plots from the model, for different values of θu. Inset: The regression line for the amplitude vs. participation rate in large events with greater than 80% participation rate has a slope that decreases with θu. Error bars represent standard deviation. (E) Correlation between cortical neurons decreases as a function of the input threshold θu in the model, a proxy for developmental time. (F) Correlation matrices of simulated cortical neuron activity corresponding to D. (G,H) Event sizes and the relationship between frequencies (open squares) and amplitudes (filled circles) of spontaneous events at different postnatal ages (data reproduced from Rochefort et al., 2009). Error bars represent standard error of the mean (number of animals used at each age is provided in the original reference). (I) Spontaneous event sizes as a function of the input threshold θu. (J) Frequencies (squares) and amplitudes (circles) of events with 20–80% participation rate in the model at different input thresholds. Error bars represent the standard error of the mean of 10 simulations.

Interestingly, such event sparsification of spontaneous activity has been observed experimentally in the mouse barrel cortex during postnatal development from P4 to P26 (Golshani et al., 2009) and in the visual cortex from P8 to P79 (Rochefort et al., 2009). During this period, in the visual cortex, the size of spontaneous events decreases (Figure 7G), the amplitude of the participating cells also decreases, while event frequency increases (Figure 7H; Rochefort et al., 2009). This progressive event sparsification of cortical activity is generated by mechanisms intrinsic to the cortex, and does not seem to be sensory-driven (Rochefort et al., 2009; Golshani et al., 2009). We found the same relationships in our model using θu as a proxy for developmental time (Figure 7I,J).

In summary, our framework for activity-dependent plasticity and receptive field refinement between thalamus and cortex with adaptive H-events can tune the properties of cortical spontaneous activity and provide a substrate for the event sparsification of cortical activity during development on a much longer timescale than receptive field refinement. This sparsification has been found in different sensory cortices, including visual (Rochefort et al., 2009), somatosensory (Golshani et al., 2009), and auditory (Frye and MacLean, 2016), suggesting a general principle that underlies network refinement. However, the event sparsification we observe is different from sparse network activity implicated in sparse efficient coding, which interestingly seems to decrease during development (Berkes et al., 2009; Berkes et al., 2011). Our modeling predicts that cortical event sparsification is primarily due to the suppression of cortically-generated H-events in the Hebbian covariance rule, which switches cortical sensitivity to input from the sensory periphery after the onset of sensory experience.

Discussion

We examined the information content of spontaneous activity for refining local microcircuit connectivity during early postnatal development. In contrast to classical works on activity-dependent refinements, which used mathematically convenient formulations of spontaneous activity (Willshaw and von der Malsburg, 1976; Mackay and Miller, 1990), we used spontaneous activity patterns characterized in the mouse visual cortex in vivo before the onset of vision (P8-10), which revealed its rich structure. Specifically, we explored the joint contribution of two distinct patterns of spontaneous activity recorded in the mouse visual cortex before the onset of vision, local (L-events) and global (H-events), on establishing topographically refined receptive fields between the thalamus and the cortex without decoupling in a model network with activity-dependent plasticity. Because of their spatially correlated activity, we proposed that peripherally generated L-events enable topographic refinement, while H-events regulate connection strength homeostatically. We investigated two Hebbian learning rules – the Hebbian covariance and the BCM rules – which use joint pre- and postsynaptic activity to trigger synaptic plasticity. First, we studied the Hebbian covariance rule that induces global synaptic depression in the presence of only postsynaptic activity (i.e. H-events). Second, we studied the BCM rule, which is known to establish the emergence of ocular dominance and orientation selectivity in the visual system. Although L-events successfully instruct topographic receptive field refinements in the Hebbian covariance rule, naively including H-events provides too much depression, eliminating selectivity in the network despite fine-tuning (Figure 3). In contrast, in the BCM rule, H-events are indeed homeostatic, regulating the threshold between depression and potentiation. 
However, small L-events, which carry precise information for topographic connectivity refinements, mostly cause long-term depression in the synaptic weights and disrupt topography. Inspired by the sliding threshold in the BCM rule, we proposed a similar adaptive mechanism operating at the single-cell level in the Hebbian covariance rule. This mechanism regulates the amplitude of the cortically generated H-events according to the preceding average activity in the network to homeostatically balance local increases and decreases in activity, and can successfully refine receptive fields with excellent topography (Figure 4). Without any additional fine-tuning, this mechanism can also explain the long-term event sparsification of cortical activity as the circuit matures and starts responding to visual input (Figure 7). Therefore, we propose that L- and adaptive H-events cooperate to synergistically guide circuit organization of thalamocortical synapses during postnatal development.
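
A minimal sketch of this adaptation, with an illustrative update rule and parameters rather than the model's exact equations (those are in Materials and methods): each cortically generated H-event is scaled by a leaky running average of preceding event amplitudes, so strong recent activity yields strong H-events and weak recent activity yields weak ones.

```python
import numpy as np

def adapted_h_amplitudes(event_amps, nominal_h=1.0, tau=5.0):
    """Scale each H-event by an exponential running average of the
    amplitudes of preceding events (illustrative update rule)."""
    avg = nominal_h          # running estimate of recent network activity
    out = []
    for a in event_amps:
        h = nominal_h * avg  # adapted amplitude tracks recent activity
        out.append(h)
        avg += (a - avg) / tau  # leaky update toward the latest amplitude
    return np.array(out)

strong = adapted_h_amplitudes([1.5] * 20)  # after strong preceding events
weak = adapted_h_amplitudes([0.5] * 20)    # after weak preceding events
```

This is exactly the signature found in the data (Figure 5C): H-event amplitude tracks the aggregate amplitude of the events that precede it.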

The origin of cortical event amplitude adaptation

After a re-examination of spontaneous activity recorded in the developing cortex in vivo between postnatal days 8 and 10 (Siegel et al., 2012), we found evidence for the proposed H-event amplitude adaptation (Figure 5). This mechanism is sufficiently general in its formulation that it could be realized at the cellular, synaptic, or network level. At the cellular level, the adaptation mechanism resembles the plasticity of intrinsic excitability. Typically, plasticity of intrinsic excitability has been reported in response to long-term perturbations in activity or persistent changes in synaptic plasticity like LTP and LTD, where the intrinsic properties of single neurons are adjusted in an activity-dependent manner (Daoudal and Debanne, 2003; Desai et al., 1999). During plasticity of intrinsic excitability, neurons can alter the number and expression levels of ion channels to adjust their input-output function either by modifying their firing thresholds or response gains, which could represent the substrate for H-event amplitude regulation. Our adaptation mechanism is consistent with fast plasticity of intrinsic excitability operating on the timescale of several spontaneous events, as supported by several experimental studies. For instance, intrinsic excitability of spinal motoneurons is depressed after brief but sustained changes in spinal cord network activity in neonatal mice (Lombardo et al., 2018). Similarly, hippocampal pyramidal neurons also exhibit a rapid reduction of intrinsic excitability in response to sustained depolarizations lasting up to several minutes (Sánchez-Aguilera et al., 2014). In addition to reduced excitability, in the developing auditory system, enhanced intrinsic excitability has been reported in the cochlea following reduced synaptic excitatory input from hair cells in a model of deafness, although this change is slower than our proposed adaptation mechanism (Babola et al., 2018).

At the synaptic level, our adaptation mechanism can be implemented by synaptic scaling, a process whereby neurons regulate their activity by scaling incoming synaptic strengths in response to perturbations (Turrigiano et al., 1998). A second possibility is short-term depression, which appears to underlie the generation of spontaneous activity episodes in the chick developing spinal cord (Tabak et al., 2001; Tabak et al., 2010). Similarly, release probability suppression has been reported to strongly contribute to synaptic depression during weak activity at the calyx of Held (Xu and Wu, 2005), which is more pronounced at immature synapses where morphological development renders synaptic transmission less effective (Renden et al., 2005; Nakamura and Takahashi, 2007). This is also the case in the cortex, where short-term synaptic plasticity in young animals is stronger (Oswald and Reyes, 2008). Beyond chemical synapses, plasticity of gap junctions, which are particularly prevalent in development (Niculescu and Lohmann, 2014), could also be a contributing mechanism that adapts overall network activity (Cachope et al., 2007; Haas et al., 2011).

Finally, at the network level, the development of inhibition could be a substrate for amplitude adaptation of cortically generated events. The main inhibitory neurotransmitter, GABA, is thought to be depolarizing, and hence excitatory, during early postnatal days (Ben-Ari, 2002), although recent evidence argues that GABAergic neurons already have an inhibitory effect on the cortical network in the second postnatal week (Murata and Colonnese, 2020; Kirmse et al., 2015; Valeeva et al., 2016). Thus, the local maturation of inhibitory neurons – of which there are several types (Tremblay et al., 2016) – that gradually evolve to balance excitation and achieve E/I balance (Dorrn et al., 2010) could provide an alternative implementation of the proposed H-event adaptation.

Developmental sparsification of cortical activity

On a longer timescale than receptive field refinement, we demonstrated that the adaptation of H-event amplitude can also bring about the event sparsification of cortical activity, as global, cortically generated H-events are attenuated and become more localized (Figure 7). The notion of ‘sparse neural activity’ has received significant attention in experimental and theoretical studies of sensory processing in the cortex, including differing definitions and implementations (Field, 1994; Willmore and Tolhurst, 2001; Berkes et al., 2009; Olshausen and Field, 2004; Zylberberg and DeWeese, 2013). In particular, sparse activity in the mature cortex has been argued to be important for the efficient coding of sensory inputs of different modalities (Olshausen and Field, 2004; Field, 1994). Hence, the developmental process of receptive field refinement might be expected to produce sparser network activity over time. However, experiments directly testing this idea have found no, or even opposite, evidence for the developmental emergence of efficient sparse coding (Berkes et al., 2009; Berkes et al., 2011). In the context of our work, sparsification simply refers to an overall sparsification of network events (fewer active cells per event). Given that our data pertain to developmental spontaneous activity before eye-opening, in complete absence of stimulation, it is not straightforward to relate our event sparsification to the sparse efficient coding hypothesis.

Assumptions in the model

Our model is based on the assumption that L- and H-events have distinct roles during the development of the visual system. Retinal waves, the source of L-events, carry information downstream about the position and function of individual retinal ganglion cells (Stafford et al., 2009), hence they are ideally suited to serve as ‘training patterns’ to enable activity-dependent refinements based on spatiotemporal correlations (Ko et al., 2011; Ackman and Crair, 2014; Thompson et al., 2017). Since all cells are maximally active during H-events, these patterns likely do not carry much information that can be used for activity-dependent refinement of connectivity. In contrast, we assumed that H-events homeostatically control synaptic weights, operating in parallel to network refinements by L-events (Figure 4). Indeed, highly correlated network activity can cause homeostatic down-regulation of synaptic weights via a process known as synaptic scaling (Turrigiano and Nelson, 2004). The homeostatic role of H-events is also consistent with synaptic downscaling driven by slow waves during sleep, a specific form of synchronous network activity (Tononi and Cirelli, 2006; Vyazovskiy et al., 2008). Since during development sleep patterns are not yet regular, we reasoned that refinement (by L-events) and homeostasis (by H-events) occur simultaneously instead of being separated into wake and sleep states.

We focused on the role of spontaneous activity in driving receptive field refinements rather than study how spontaneous activity is generated. While the statistical properties of spontaneous activity in the developing cortex are well-characterized, the cellular and network mechanisms generating this activity remain elusive. In particular, while H-event generation has been shown to rely on gap junctions (Siegel et al., 2012; Niculescu and Lohmann, 2014), which recurrently connect developing cortical cells, not much is known about how the size of cortical events is modulated and how an L-event is prevented from spreading and turning into an H-event. It is likely that cortical inhibition plays a critical role in localizing cortical activity and shaping receptive field refinements (Wood et al., 2017; Leighton et al., 2020), for instance, through the plasticity of inhibitory connections by regulating E/I balance (Dorrn et al., 2010). As new experiments are revealing more information about the cellular and synaptic mechanisms that generate spatiotemporally patterned spontaneous activity (Fujimoto et al., 2019), a full model of the generation and the effect of spontaneous activity might soon be feasible.

The threshold parameter in the Hebbian covariance rule in the presence of H-events implements an effective subtractive normalization that sharpens receptive fields (see Appendix). Despite the strong weight competition, subtractive normalization seems to be insufficient to stabilize receptive fields in the presence of non-adaptive H-events (Figure 3). Multiplicative normalization is an alternative normalization scheme, but it does not generate refined receptive fields (Miller and MacKay, 1994). Therefore, we also studied the BCM rule due to its ability to generate selectivity in postsynaptic neurons under patterned input. While the BCM rule successfully generates selectivity and receptive field refinement, the resulting topography is worse than in the Hebbian covariance rule (Figure 3). Both rules have an adaptive component: in the BCM rule it is the threshold between potentiation and depression that slides as a function of postsynaptic activity, while in the Hebbian covariance rule it is the amplitude of H-events that adapts, with the rule itself remaining fixed. Although experiments have shown the stereotypical activity dependence of the BCM rule (Kirkwood et al., 1996; Sjöström et al., 2001), whether a sliding threshold for potentiation vs. depression exists is still debated. Moreover, the timescale over which the threshold slides to prevent unbounded synaptic growth needs to be much faster than found experimentally (Zenke et al., 2017). Our proposed H-event amplitude adaptation operates on the fast timescale of several spontaneous events found experimentally (Siegel et al., 2012; Sánchez-Aguilera et al., 2014; Lombardo et al., 2018). Hence, together with the better topography and the resulting event sparsification as a function of developmental stage that the Hebbian covariance rule with adaptive H-events generates, we propose it as the more likely plasticity mechanism to refine receptive fields in the developing visual cortex.
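
The distinction between the two normalization schemes can be made concrete with a toy weight update (not the paper's exact equations; the Hebbian drive `dw` is arbitrary here):

```python
import numpy as np

def subtractive_norm(w, dw):
    """Apply an update, then subtract the mean change so the total weight
    is conserved; this drives winner-take-all competition that sharpens
    receptive fields."""
    w_new = w + dw - dw.mean()
    return np.clip(w_new, 0.0, None)  # keep weights non-negative

def multiplicative_norm(w, dw):
    """Apply the update, then rescale so the total weight is conserved;
    this preserves the relative weight profile and tends not to sharpen it."""
    w_new = np.clip(w + dw, 0.0, None)
    return w_new * w.sum() / w_new.sum()
```

Subtractive normalization pushes weights toward all-or-none values (hence sharp receptive fields), whereas multiplicative normalization preserves weight ratios, in line with the analysis of Miller and MacKay, 1994.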

Finally, we have focused here on the traditional view that molecular gradients set up a coarse map that activity-dependent mechanisms then refine (Goodhill and Xu, 2005). In our model, this was implemented as a weak bias in the initial connectivity, which did not affect our results regarding the refinement of receptive fields. Both activity and molecular gradients may work together in interesting ways to refine receptive fields (Grimbert and Cang, 2012; Godfrey and Swindale, 2014; Naoki, 2017), and future work should include both aspects.

Predictions of the model

Our model makes several experimentally testable predictions. First, we showed that changing the frequency of H-events can affect the size of the resulting receptive fields under both the BCM rule (Figure 3) and the Hebbian covariance rule with adaptive H-events (Figure 4). The frequency of H-events can be experimentally manipulated using optogenetics or pharmacology. For instance, the gap junction blocker carbenoxolone has been shown to specifically reduce the frequency of H-events (Siegel et al., 2012); in that scenario, our results predict broader receptive fields.

Second, L-events can also be experimentally manipulated. Recently, reducing inhibitory signaling by suppressing somatostatin-positive interneurons has been shown to increase the size of L-events in the developing visual cortex (Leighton et al., 2020). With the effect of altered inhibitory signaling on receptive field refinements still unknown, our work predicts larger receptive fields and worse topography upon reduction of inhibition. L-events can also be experimentally manipulated by changing the properties of retinal waves, which can significantly affect retinotopic map refinement of downstream targets (Grubb et al., 2003; Cang et al., 2005b; Burbridge et al., 2014). Indeed, the β2 knockout mice discussed earlier have larger retinal waves and less refined receptive fields in the visual cortex (Sun et al., 2008; Stafford et al., 2009; Cutts and Eglen, 2014). If we assume that these larger retinal waves manifest as larger L-events in the visual cortex, following Siegel et al., 2012, then these experimental observations are in agreement with our model results.

Third, our model predicts that as a result of receptive field refinement during development, network events sparsify as global, cortically generated events are attenuated and become more localized. Interestingly, the properties of spontaneous activity measured experimentally in different sensory cortices (Rochefort et al., 2009; Frye and MacLean, 2016; Smith et al., 2015; Ikezoe et al., 2012; Shen and Colonnese, 2016; Golshani et al., 2009) and in the olfactory bulb (Fujimoto et al., 2019) change following a very similar timeline during development as predicted in our model. However, in many of these studies activity has not been segregated into peripherally driven L-events and cortically generated H-events. Therefore, our model predicts that the frequency of L-events would increase while the frequency of H-events would decrease over development.

Finally, we propose that for a Hebbian covariance rule to drive developmental refinements of receptive fields using spontaneous L- and H-event patterns recorded in vivo (Siegel et al., 2012), H-events need to adapt to ongoing network activity. Whether a fast adaptation mechanism like the one we propose operates in the cortex requires prolonged and detailed activity recordings in vivo, which are within reach of modern technology (Ackman and Crair, 2014; Ji, 2017; Gribizis et al., 2019). Our framework also predicts that manipulations that affect overall activity levels of the network, such as activity reduction by eye enucleation, would correspondingly affect the amplitude of ongoing H-events.

Conclusion

In summary, we studied the refinement of receptive fields in a developing cortex network model constrained by realistic patterns of experimentally recorded spontaneous activity. We proposed that adaptation of the amplitude of cortically generated spontaneous events achieves this refinement without additional assumptions on the type of plasticity in the network. Our model further predicts how cortical networks could transition from supporting highly synchronous activity modules in early development to sparser peripherally driven activity suppressing local amplification, which could be useful for preventing hyper-excitability and epilepsy in adulthood while enhancing the processing of sensory stimuli.

Materials and methods

Network model

We studied a feedforward, rate-based network with two one-dimensional layers, one of Nu thalamic neurons (u) and one of Nv cortical neurons (v), with periodic boundary conditions in each layer to avoid edge effects. The initial connectivity was all-to-all, with weights drawn uniformly from the range [a, b]. In addition, we introduced a topographic bias by modifying the initially random connectivity matrix so that connections between neurons at matched topographic locations are strongest and decay with a Gaussian profile of amplitude s and spread σs with increasing distance (Figure 2C). During weight evolution, soft bounds were applied on the interval [0, wmax]. We studied weight evolution under two activity-dependent learning rules: the Hebbian (Equation 2) and the BCM (Equation 3) rules. Table 1 lists all parameters. Sample code can be found at github.com/comp-neural-circuits/LH-events (Wosniack, 2021; copy archived at swh:1:rev:b90e189a9e1a4d0cdda097d435fa91b1236f1866).
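The initialization described above can be sketched as follows; `initial_weights` and its default parameter values (a, b, s, sigma_s and the layer sizes) are illustrative placeholders, not the values in Table 1:

```python
import math
import random

def ring_distance(i, j, n):
    """Shortest distance between units i and j on a ring of n units
    (periodic boundary conditions)."""
    d = abs(i - j)
    return min(d, n - d)

def initial_weights(n_u, n_v, a=0.1, b=0.2, s=0.1, sigma_s=2.0, seed=0):
    """All-to-all uniform weights in [a, b] plus a Gaussian topographic
    bias peaked at the matched location (parameter values are illustrative)."""
    rng = random.Random(seed)
    w = [[rng.uniform(a, b) +
          s * math.exp(-ring_distance(i, j * n_u // n_v, n_u) ** 2
                       / (2.0 * sigma_s ** 2))
          for i in range(n_u)]          # thalamic index i
         for j in range(n_v)]           # cortical index j
    return w

w = initial_weights(50, 50)
```

With equal layer sizes, `j * n_u // n_v` reduces to `j`, so the bias peaks on the diagonal of the weight matrix, as in Figure 2C.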

Generation of L- and H-events

We modeled two types of spontaneous events, in the thalamic (L-events) and the cortical (H-events) layer of our model (Siegel et al., 2012). During L-events, the firing rates of a fraction (Lpct) of neighboring thalamic neurons were set to Lamp = 1 for a duration Ldur and were otherwise 0. Similarly, during H-events, the firing rates of a fraction (Hpct) of cortical cells were set to Hamp for a duration Hdur. As a result, cortical activity was composed of H-events and of L-events transmitted from the thalamus. For each H-event, Hamp was independently sampled from a Gaussian distribution with mean H̄amp and standard deviation H̄amp/3. The inter-event intervals Lint and Hint were sampled from the distributions experimentally characterized in Siegel et al., 2012 (Table 1). Event durations and inter-event intervals were shortened by a factor of 10 relative to the values measured in the data (Figure 1) to speed up our simulations, but the relationships observed in the data were preserved. We note that in the experiments, both L- and H-events were characterized in the primary visual cortex; in our model, we assume that L-events are generated in the retina and subsequently propagated through the thalamus to the cortex, where they manifest with the experimentally reported characteristics (see Figure 7B for an example). This interpretation is supported by experimental evidence (Siegel et al., 2012), but we cannot exclude the possibility that the retina also generates H-events or that L-events are generated in the cortex.
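The event generation can be sketched as below; this is a minimal illustration assuming contiguous events on a ring and a 20-80% participation range, with hypothetical function names and defaults:

```python
import random

def generate_l_event(n_u, l_min=0.2, l_max=0.8, l_amp=1.0, rng=random):
    """Sample one L-event: a contiguous block of thalamic neurons
    (between l_min and l_max of the layer, periodic) firing at l_amp."""
    size = rng.randint(int(l_min * n_u), int(l_max * n_u))
    start = rng.randrange(n_u)
    u = [0.0] * n_u
    for k in range(size):
        u[(start + k) % n_u] = l_amp
    return u

def sample_h_amplitude(h_amp_mean, rng=random):
    """H-event amplitude: Gaussian with mean h_amp_mean and standard
    deviation h_amp_mean / 3, clipped at 0 to keep rates non-negative."""
    return max(0.0, rng.gauss(h_amp_mean, h_amp_mean / 3.0))
```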

Reduction of the weight dynamics to two dimensions

To reduce the full weight dynamics to a two-dimensional system, we averaged the n weights belonging to the receptive field that are predicted to potentiate along the initial topographic bias, denoted wRF, and the Nu − n remaining weights, which we call complementary to the receptive field, denoted wC. When all weights behaved the same, we arbitrarily split them into two groups of equal size. Details on the classification of weights into wRF and wC can be found in the Appendix.

Computing the strength of simulated H-events

To relate the reduced two-dimensional phase planes to the simulation results, we wrote down the steady state activity of neuron j (Equation 1), which contains the rate gain from H-events relative to L-events, RH (also called ‘Strength of H-events’ in Figure 4E):

(6) $R_H = \frac{\langle L_{int}\rangle}{\langle H_{int}\rangle}\,\frac{\bar{H}_{amp}}{L_{amp}}\,\frac{H_{dur}}{L_{dur}} = \frac{\langle L_{int}\rangle}{\langle H_{int}\rangle}\,\bar{H}_{amp},$

since $L_{dur} = H_{dur}$ and $L_{amp} = 1$.
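Equation 6 is straightforward to evaluate; the second helper below also inverts it for the mean H-event interval that produces a target non-adapted strength, as used in the next paragraph (both function names are ours):

```python
def strength_of_h_events(l_int, h_int, h_amp, l_amp=1.0, h_dur=1.0, l_dur=1.0):
    """Rate gain of H-events relative to L-events (Equation 6)."""
    return (l_int / h_int) * (h_amp / l_amp) * (h_dur / l_dur)

def h_interval_for_strength(l_int, h_amp, r_h):
    """Invert Equation 6 for the mean H-event interval giving a target R_H
    (assuming L_amp = 1 and H_dur = L_dur)."""
    return l_int * h_amp / r_h
```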

In the absence of adaptive H-events, for fixed values of H̄amp and Lint (as in Table 1) and a chosen R_H (the ‘Non-adapted strength of H-events’ in Figure 4E), we used Equation 6 to find the Hint value that satisfies the equation. Next, we ran simulations with the same Hint and Lint parameters, but adaptive Hamp. In Figure 4E, we fixed the inter-event intervals of both L- and H-events to their mean values instead of sampling them from distributions. We then numerically estimated the average amplitude of H-events with adaptation (the ‘Adapted strength of H-events’ in Figure 4E) at the end of the simulation (final 5% of the simulation time), when the dynamics were stationary.

Receptive field statistics

The following receptive field statistics were used to quantify properties of the weight matrix W after the developing weights became stable.

Receptive field size

The receptive field of a cortical neuron is the group of weights from thalamic cells for which wij>wmax/5. The lower threshold was chosen to make the measurement robust to small fluctuations around 0, which are present because of the soft bounds. Mathematically, we compute the receptive field size of cortical neuron j as:

(7) $RF(\boldsymbol{w}_j) = \frac{1}{N_u}\sum_{i=1}^{N_u} \mathbb{I}_i,$

with the indicator vector $\mathbb{I}$ given by:

(8) $\mathbb{I}_i = \begin{cases} 1, & w_{ij} > w_{max}/5 \\ 0, & \text{otherwise.} \end{cases}$

The normalized receptive field size ranges from 0, corresponding to total decoupling of the cortical cell from the input layer, to 1, corresponding to no selectivity due to the potentiation of all weights from the input layer onto that neuron. To compute the average receptive field size of the network, we include only the N* cortical neurons that have not decoupled:

(9) $RF(W) = \frac{1}{N^*}\sum_{j=1}^{N^*} RF(\boldsymbol{w}_j).$

If all the cortical cells have decoupled from the thalamus, we set RF(W)=0.
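Equations 7-9 can be sketched as follows (illustrative helper names; `w` is a list of per-cortical-neuron weight vectors):

```python
def rf_size(w_j, w_max):
    """Normalized receptive field size of one cortical neuron (Equations 7-8):
    the fraction of its thalamic weights above the threshold w_max / 5."""
    return sum(1 for wij in w_j if wij > w_max / 5.0) / len(w_j)

def mean_rf_size(w, w_max):
    """Average RF size over cortical neurons that have not decoupled
    (Equation 9); returns 0 if all neurons have decoupled."""
    sizes = [rf_size(row, w_max) for row in w]
    coupled = [s for s in sizes if s > 0]
    return sum(coupled) / len(coupled) if coupled else 0.0
```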

Topography 𝒯

The topography of the network measures how much of the initial weak topographic bias is preserved in the final receptive fields. Due to our biased initial conditions, neighboring thalamic cells are expected to project to neighboring cortical cells, yielding a diagonal weight matrix. For each cortical neuron, we calculated how far the center of its receptive field lies from this ideal diagonal. Mathematically, for each row j of W, we determined the center of the receptive field, c_j, and calculated the smallest distance (considering periodic boundary conditions) between the receptive field center and the diagonal element j. We then summed the squared distances and calculated the average topography error:

(10) $\xi = \frac{1}{N_v}\sum_{j=1}^{N_v} |c_j - j|^2.$

To normalize the topography, we compared ξ to the topography error Ξ of a column receptive field (Figure 3—figure supplement 1A), in which the centers of all cortical receptive fields coincide, c_j = c for some constant c. For such a column receptive field, $\Xi = N_u^2/12$. Therefore, we define the topography score $\mathcal{T}$ as:

(11) $\mathcal{T} = 1 - \frac{\xi}{\Xi}.$

The topography score is close to 1 if the weight matrix is perfectly diagonal and 0 if the final receptive field is a column (ξ = Ξ).
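A minimal sketch of Equations 10-11, assuming equal layer sizes (so the diagonal element of neuron j is j) and ignoring receptive fields that wrap around the periodic boundary when computing the center:

```python
def topography_score(w, w_max):
    """Topography score T (Equations 10-11): near 1 for a perfectly diagonal
    weight matrix, near 0 for a 'column' receptive field."""
    n_v, n_u = len(w), len(w[0])
    sq_err = 0.0
    for j, row in enumerate(w):
        rf = [i for i, wij in enumerate(row) if wij > w_max / 5.0]
        if not rf:
            continue                     # decoupled neurons are skipped
        c_j = sum(rf) / len(rf)          # receptive field center
        d = abs(c_j - j)
        d = min(d, n_u - d)              # periodic boundary conditions
        sq_err += d ** 2
    xi = sq_err / n_v                    # average topography error
    xi_col = n_u ** 2 / 12.0             # error of a column receptive field
    return 1.0 - xi / xi_col
```

For a column receptive field the score is only approximately 0, because $\Xi = N_u^2/12$ is the continuum limit of the discrete sum of squared distances.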

Proportion of cortical decoupling 𝒟

To quantify cortical decoupling, we use the indicator from Equation 8 to compute the fraction of cortical neurons that have decoupled from the thalamus, $\mathcal{D} = \frac{1}{N_v}\sum_{j=1}^{N_v}\prod_{i=1}^{N_u}(1 - \mathbb{I}_{ij})$, where the product equals 1 only if all weights onto neuron j lie below the receptive field threshold. If $\mathcal{D} = 0$, no cortical neuron has decoupled from the thalamus, while $\mathcal{D} = 1$ means that all cortical neurons have decoupled.
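A direct sketch of the decoupling measure (a neuron counts as decoupled when all of its weights fall below the receptive field threshold):

```python
def decoupling(w, w_max):
    """Proportion of cortical neurons whose thalamic weights all fall
    below the RF threshold, i.e. that have decoupled from the thalamus."""
    return sum(1 for row in w
               if all(wij <= w_max / 5.0 for wij in row)) / len(w)
```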

Quantifying adaptation in the data

We first investigated whether fluctuations in activity across recordings could generate spurious correlations. To identify possible fluctuations on a longer timescale, we analyzed consecutive recordings in the same animal (each ∼5 min long and separated by <5 min due to experimental constraints on data collection; between 3 and 14 recordings in each of 26 animals). We found that the average amplitude of all (L- and H-) events did not differ significantly across consecutive recordings of the same animal (Figure 5—figure supplement 1A; one-way ANOVA, p>0.05 in 23 out of 26 animals). Across different animals and ages, individual event amplitudes remained uncorrelated between successive recordings at this timescale, which we confirmed by plotting the difference in event amplitude as a function of the time between recordings (Figure 5—figure supplement 1B; Kruskal-Wallis test, p>0.05).

For our reanalysis of the spontaneous events (Figure 5), we only included events that recruited at least 20% of the cells in the imaging field of view, following Siegel et al., 2012. We computed the average amplitude of all events that occurred within a time window Tmax before an H-event (consecutive recordings were concatenated) and compared it to the amplitude of that H-event. We excluded animals with fewer than 12 H-events preceded by spontaneous activity within the time window Tmax (nine animals remained after exclusion). Next, we computed the correlation coefficient between H-event amplitude and the average amplitude of preceding activity within Tmax, weighted by a leaky accumulator with time constant τdecay. To estimate the 95% confidence interval, we performed a bootstrap analysis in which we generated 1000 bootstrap datasets by drawing without replacement from the valid pairs of H-event amplitude and average amplitude of preceding activity. We repeated this analysis with different thresholds for excluding data (Figure 5—figure supplement 3A,B), different values of the time window Tmax within which events are averaged (Figure 5—figure supplement 3C), and different decay time constants τdecay (Figure 5—figure supplement 3D). All data and analysis code can be found at github.com/comp-neural-circuits/LH-events.
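The analysis steps can be sketched as below. This is an illustrative reimplementation, not the published analysis code: `leaky_average` weights preceding event amplitudes with an exponential kernel of time constant `tau_decay`, and `bootstrap_ci` uses a standard with-replacement bootstrap, whereas the paper describes drawing without replacement:

```python
import math
import random

def leaky_average(times, amps, t_event, tau_decay, t_max):
    """Exponentially weighted average of event amplitudes preceding t_event
    within a window t_max (a leaky accumulator with time constant tau_decay)."""
    pairs = [(t, a) for t, a in zip(times, amps)
             if t_event - t_max <= t < t_event]
    if not pairs:
        return None
    ws = [math.exp(-(t_event - t) / tau_decay) for t, _ in pairs]
    return sum(w * a for w, (_, a) in zip(ws, pairs)) / sum(ws)

def pearson_r(x, y):
    """Pearson correlation coefficient of paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def bootstrap_ci(x, y, n_boot=1000, alpha=0.05, seed=0):
    """Bootstrap confidence interval for the correlation of paired samples."""
    rng = random.Random(seed)
    rs = []
    for _ in range(n_boot):
        idx = [rng.randrange(len(x)) for _ in range(len(x))]
        rs.append(pearson_r([x[i] for i in idx], [y[i] for i in idx]))
    rs.sort()
    return rs[int(alpha / 2 * n_boot)], rs[int((1 - alpha / 2) * n_boot) - 1]
```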

Identification of spontaneous events in the model

To quantify the properties of spontaneous activity in the cortical layer of our model, we used the time series of activity of all simulated cortical neurons (after weight stabilization) sampled at high temporal resolution (0.01 s; Figure 7B). We defined a global activity threshold ν = vmax/r, where vmax is the highest amplitude across cortical cells in the recording and r is a fixed scaling constant (r = 8 for all recordings). For each cortical cell j, we labeled the intervals where the cell was active (1) or inactive (0) based on:

(12) $x_j(t) = \begin{cases} 1, & v_j(t) \geq \nu \\ 0, & \text{otherwise.} \end{cases}$

We then used the trace $X(t) = \sum_{j=1}^{N_v} x_j(t)$ to define the number of active cortical cells at each time step t, that is, the participation rate. For each identified event, we averaged the amplitudes of the active cells to obtain the amplitude vs. participation rate relationship.
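Event thresholding and the participation rate (Equation 12 and the trace X(t)) can be sketched as follows; `participation_trace` is our own illustrative helper, with `v` a list of per-neuron activity time series:

```python
def participation_trace(v, r=8.0):
    """Binarize cortical activity traces v[j][t] with the global threshold
    v_max / r (Equation 12) and return the participation count X(t)."""
    v_max = max(max(row) for row in v)   # highest amplitude in the recording
    nu = v_max / r                       # global activity threshold
    n, t_len = len(v), len(v[0])
    return [sum(1 for j in range(n) if v[j][t] >= nu) for t in range(t_len)]
```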

Appendix 1

The weight dynamics under the Hebbian covariance rule

Since synaptic plasticity operates on a much slower time scale than the response dynamics of the output neuron, we make a steady state assumption and write Equation 1 for neuron j as:

(13) $v_j(t) = \boldsymbol{u}(t)\cdot\boldsymbol{w}_j(t) + R_H(t),$

where $\boldsymbol{w}_j$ is the j-th row of the weight matrix W, that is, the vector of weights from all thalamic neurons onto cortical neuron j. $R_H(t)$ is the rate gain from H-events relative to L-events, which depends on the durations, amplitudes, and inter-event intervals of both L- and H-events. Specifically, R_H is proportional to H̄amp, Hdur and ⟨Lint⟩, and inversely proportional to ⟨Hint⟩, Lamp and Ldur (where ⟨·⟩ denotes an ensemble average over the activity patterns), such that:

(14) $R_H = \frac{\langle L_{int}\rangle}{\langle H_{int}\rangle}\,\frac{\bar{H}_{amp}}{L_{amp}}\,\frac{H_{dur}}{L_{dur}}.$

Rewriting Equation 2 in vector form:

(15) $\tau_w \frac{d\boldsymbol{w}_j(t)}{dt} = v_j(t)\,\big(\boldsymbol{u}(t) - \theta_u\big).$

Inserting Equation 13 into the weight dynamics from Equation 15 yields (dependence on t is omitted for clarity):

(16) $\tau_w \frac{d\boldsymbol{w}_j}{dt} = \big\langle(\boldsymbol{u}\cdot\boldsymbol{w}_j + R_H)(\boldsymbol{u} - \theta_u)\big\rangle = \boldsymbol{C}\boldsymbol{w}_j - (\theta_u - \langle u\rangle)\big(\langle\boldsymbol{u}\rangle\cdot\boldsymbol{w}_j\big)\boldsymbol{n} - R_H\theta_u\boldsymbol{n} = \boldsymbol{C}(\theta_u)\,\boldsymbol{w}_j - R_H\theta_u\boldsymbol{n},$

where $\boldsymbol{C} = \boldsymbol{Q} - \langle u\rangle^2$ is the input covariance matrix, $\boldsymbol{Q} = \langle\boldsymbol{u}\boldsymbol{u}^\top\rangle$ is the input correlation matrix, and we have defined $\boldsymbol{C}(\theta_u) = \boldsymbol{C} - \langle u\rangle(\theta_u - \langle u\rangle) = \boldsymbol{Q} - \langle u\rangle\theta_u$ to be the ‘modified covariance matrix’ of the learning rule. We write $\langle\boldsymbol{u}\rangle$ to denote the vector with repeated element ⟨u⟩, the mean normalized size of an L-event (e.g. for L-events engaging 20-80% of input neurons, ⟨u⟩ = 0.5), and $\boldsymbol{n}$ for a constant vector of ones. We also used the fact that the occurrences of L- and H-events are uncorrelated (Siegel et al., 2012), such that $\langle R_H \boldsymbol{u}\rangle = 0$.

Normalization constraints

The unconstrained Hebbian covariance rule has the undesirable effect that all the synaptic inputs to an output cell are potentiated and no selectivity is achieved. However, the presence of the threshold and H-events in Equation 16 effectively implements a subtractive constraint in the weight dynamics (Miller and MacKay, 1994). In general, a subtractive constraint can be written as:

(17) $\tau_w \frac{d\boldsymbol{w}_j}{dt} = \boldsymbol{A}\boldsymbol{w}_j - \varepsilon(\boldsymbol{w}_j)\,\boldsymbol{n},$

where $\boldsymbol{A}$ is a symmetric matrix, $\boldsymbol{w}_j$ the weight vector, $\varepsilon(\boldsymbol{w}_j)$ a scalar function of the weights, and $\boldsymbol{n}$ a constant vector of ones. Comparing with the middle expression of Equation 16, we can identify $\boldsymbol{C} = \boldsymbol{A}$, since the covariance matrix is symmetric, and the function $\varepsilon(\boldsymbol{w}_j)$ as:

(18) $\varepsilon(\boldsymbol{w}_j) = (\theta_u - \langle u\rangle)\,\langle\boldsymbol{u}\rangle\cdot\boldsymbol{w}_j + R_H\theta_u.$

Due to the subtractive constraint in Equation 16, the synaptic weights saturate at either the upper or the lower bound, resulting in the sharpening of receptive fields. Notice that if R_H = 0, the decay term in the subtractive constraint is proportional to $\boldsymbol{w}_j(t)$; the subtractive component can then be adjusted to induce less depression and prevent the decoupling of cortical cells. With non-adaptive H-events, however, R_H ≠ 0 adds a constant decay rate to the weight dynamics that is independent of weight strength and therefore decouples cortical cells.

The input correlation matrix

The spontaneous L-events in our network model have useful mathematical properties that allowed us to derive an analytical expression for the elements of the input correlation matrix Q. The locality and periodicity of L-events and the long-term averaging of their stationary dynamics generate a symmetric and circulant matrix Q (Gray, 2005). In a circulant matrix, each row is rotated one element to the right relative to the preceding row, and thus Q can be completely defined by a vector.

Let ℓ be the fixed size of an L-event, between 0 and Nu. By computing the correlations among all possible L-events of size ℓ, we can write the elements of the vector $\boldsymbol{q} = [q_1, q_2, \ldots, q_{N_u}]$, which completely defines Q, as:

(19) $q_k = \frac{\ell - \min\big(\min(N_u - k + 1,\; k - 1),\; \min(N_u - \ell,\; \ell)\big)}{N_u},$

where min(a,b) returns the minimum of a and b. Since we wanted to explore how refinement depends on the size of L-events, defined by the minimum (ℓmin) and maximum (ℓmax) number of cells they activate in the input layer, we average over the size of a given event:

(20) $q_k = \frac{1}{\ell_{max} - \ell_{min}} \sum_{\ell=\ell_{min}}^{\ell_{max}} \frac{\ell - \min\big(\min(N_u - k + 1,\; k - 1),\; \min(N_u - \ell,\; \ell)\big)}{N_u}.$

Finally, using the emergent symmetry of Q, we can simplify the elements of this vector as follows:

(21) $q_k = \bar{q} - \min(k - 1,\; N_u - k + 1)\,\delta q, \quad \text{for all } k = 1, 2, \ldots, N_u,$

with $\bar{q} = (\ell_{max} + \ell_{min})/(2N_u)$ and $\delta q = 1/N_u$.
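Equation 19 can be checked against a brute-force enumeration of every possible event position on the ring; this is our own verification sketch, using contiguous blocks of ℓ active cells:

```python
def q_analytic(k, n_u, size):
    """Correlation <u_1 u_k> for L-events of fixed size on a ring of n_u
    thalamic cells (Equation 19)."""
    d = min(min(n_u - k + 1, k - 1), min(n_u - size, size))
    return (size - d) / n_u

def q_enumerated(k, n_u, size):
    """Exact <u_1 u_k> by enumerating every possible event start position."""
    count = 0
    for start in range(n_u):
        def covers(i):
            return (i - start) % n_u < size
        if covers(0) and covers(k - 1):
            count += 1
    return count / n_u
```

The two agree for every lag k, including events larger than half the ring, where the capping term min(Nu − ℓ, ℓ) accounts for overlap on both sides of the periodic boundary.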

Since $\boldsymbol{C}$ and $\boldsymbol{C}(\theta_u)$ are defined in terms of Q minus a constant, both are circulant matrices as well.

Fixed point of the weight dynamics

The fixed point of the weight dynamics under L- and H-events is obtained by setting Equation 16 to zero and solving the resulting equation for w:

(22) $\boldsymbol{w}^* = R_H\theta_u\,\boldsymbol{C}(\theta_u)^{-1}\boldsymbol{n},$

where $\boldsymbol{n}$ is a vector of ones. To study the nature of the fixed point $\boldsymbol{w}^*$, we need to investigate the eigenvalues of the circulant matrix $\boldsymbol{C}(\theta_u)^{-1}$ (the inverse of a circulant matrix is also circulant). A circulant matrix has the property that its eigenvectors can be written in terms of roots of unity, and its eigenvalues are real and can be written as the discrete Fourier transform of any row of the matrix (Gray, 2005). In particular, one of the eigenvalues is the sum of the elements of any row of the matrix, with a corresponding eigenvector that is constant. We call this special eigenvalue the ‘row-sum eigenvalue’. All eigenvalues except the row-sum eigenvalue are non-negative.

We can write the fixed point as $\boldsymbol{w}^* = (\lambda^*)^{-1}\theta_u R_H\,\boldsymbol{n}$, where λ* is the row-sum eigenvalue of $\boldsymbol{C}(\theta_u)$. If only L-events are present (R_H = 0), the fixed point is always at the origin, $\boldsymbol{w}^* = 0$. Consequently, adding H-events in the cortical layer moves the fixed point along the diagonal of the phase plane, that is, equally in all directions. The fixed point is positive (negative) when λ* is positive (negative). Furthermore, it is an unstable node when λ* > 0, since all the eigenvalues of the dynamical system are then positive, and a saddle node when λ* < 0, since the remaining eigenvalues are positive.
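The row-sum eigenvalue is easy to compute numerically: the eigenvalues of a circulant matrix are the discrete Fourier transform of its first row (Gray, 2005). A minimal sketch, taking real parts, which is exact for the symmetric circulant matrices considered here:

```python
import cmath

def circulant_eigenvalues(first_row):
    """Eigenvalues of a circulant matrix as the DFT of its first row.
    The m = 0 component is the row-sum eigenvalue, whose eigenvector
    is the constant vector."""
    n = len(first_row)
    return [sum(first_row[k] * cmath.exp(-2j * cmath.pi * m * k / n)
                for k in range(n)).real
            for m in range(n)]
```

Note that subtracting a constant from every matrix element (as in $\boldsymbol{C}(\theta_u) = \boldsymbol{Q} - \langle u\rangle\theta_u$) shifts only the row-sum eigenvalue and leaves the remaining eigenvalues fixed, since the DFT of a constant row vanishes for m ≠ 0.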

Calculation of receptive field size

The Hebbian covariance rule on its own has no mechanism to prevent the weights from growing infinitely large or negative. Thus, to generate receptive fields, we imposed a lower bound at 0 and an upper bound at wmax. This enabled us to calculate the receptive field size n as a function of L-event properties and the input threshold θu in the absence of H-events. Because the cortical cells are not recurrently connected, we examine the weight vector onto a single cortical neuron. Assuming the dynamical system has reached steady state, to get a receptive field of size n < Nu for this cortical neuron, we can write the fixed-point weight vector as $\boldsymbol{w}^* = (w_{max}, \ldots, w_{max}, 0, \ldots, 0)$, where n of the weights have reached the upper bound wmax and the remaining weights the lower bound 0. Which particular weights reach the upper bound is not important, as long as they are topographically near each other. To achieve this fixed point:

(23) $\dot{w}_i\big|_{\boldsymbol{w}=\boldsymbol{w}^*} \begin{cases} \geq 0, & 1 \leq i \leq n \\ < 0, & i > n. \end{cases}$

Using the structure of our circulant correlation matrix (Equation 21), and noting that R_H = 0 in the absence of H-events, we can rewrite Equation 16 as:

(24) $\tau_w\begin{pmatrix}\dot{w}_1\\ \vdots\\ \dot{w}_n\\ \dot{w}_{n+1}\\ \vdots\\ \dot{w}_{N_u}\end{pmatrix} = \begin{pmatrix} q_1 - \langle u\rangle\theta_u & \cdots & q_n - \langle u\rangle\theta_u & q_{n+1} - \langle u\rangle\theta_u & \cdots & q_{N_u} - \langle u\rangle\theta_u\\ \vdots & & \vdots & \vdots & & \vdots\\ q_n - \langle u\rangle\theta_u & \cdots & q_1 - \langle u\rangle\theta_u & q_2 - \langle u\rangle\theta_u & \cdots & q_{n+1} - \langle u\rangle\theta_u\\ q_{n+1} - \langle u\rangle\theta_u & \cdots & q_2 - \langle u\rangle\theta_u & q_1 - \langle u\rangle\theta_u & \cdots & q_{n+2} - \langle u\rangle\theta_u\\ \vdots & & \vdots & \vdots & & \vdots\\ q_2 - \langle u\rangle\theta_u & \cdots & q_{n+1} - \langle u\rangle\theta_u & q_{n+2} - \langle u\rangle\theta_u & \cdots & q_1 - \langle u\rangle\theta_u \end{pmatrix}\begin{pmatrix} w_{max}\\ \vdots\\ w_{max}\\ 0\\ \vdots\\ 0\end{pmatrix}.$

Now, we study the conditions to guarantee that Equation 23 is satisfied. Computing each equation explicitly, we obtain:

(25) $\begin{aligned} \tau_w\dot{w}_1 &= (q_1 + \cdots + q_n)\,w_{max} - n\langle u\rangle\theta_u w_{max}\\ &\;\;\vdots\\ \tau_w\dot{w}_n &= (q_n + \cdots + q_1)\,w_{max} - n\langle u\rangle\theta_u w_{max}\\ \tau_w\dot{w}_{n+1} &= (q_{n+1} + \cdots + q_2)\,w_{max} - n\langle u\rangle\theta_u w_{max}\\ &\;\;\vdots\\ \tau_w\dot{w}_{N_u} &= (q_2 + \cdots + q_{n+1})\,w_{max} - n\langle u\rangle\theta_u w_{max}. \end{aligned}$

Finally, using Equation 21, after some manipulation this can be written as:

(26) $\tau_w\dot{w}_i = \left\{ n\bar{q} - \left[\frac{i(i-1)}{2} + \frac{(n-i)(n-i+1)}{2}\right]\delta q - n\langle u\rangle\theta_u \right\} w_{max}, \quad i = 1, 2, \ldots, N_u.$

To determine the input threshold θu in Equation 25 that yields a receptive field of size n, we assume that $\dot{w}_i|_{\boldsymbol{w}=\boldsymbol{w}^*} = 0$ for i = n; this implies that $\dot{w}_i|_{\boldsymbol{w}=\boldsymbol{w}^*} > 0$ for i < n and $\dot{w}_i|_{\boldsymbol{w}=\boldsymbol{w}^*} < 0$ for i > n. We write θu as a linear combination of $\bar{q}$ and δq:

(27) $\theta_u = \alpha\bar{q} + \beta\,\delta q.$

Using this ansatz in Equation 26 and setting it to zero for i = n (since wmax > 0), we find α = 1/⟨u⟩ and β = −(n − 1)/(2⟨u⟩). Therefore, in the presence of only L-events, the input threshold at which the resulting receptive field size is n is:

(28) $\theta_u^n = \frac{\bar{q} - \frac{n-1}{2}\,\delta q}{\langle u\rangle}.$

Plugging this threshold into Equation 26 for all i, we get a quadratic polynomial in i:

(29) $\tau_w\dot{w}_i = \big({-i^2} + (n+1)i - n\big)\,\delta q\, w_{max}, \quad i = 1, 2, \ldots, N_u.$

Since δq, wmax > 0, and the polynomial factors as −(i − 1)(i − n), indeed 1 ≤ i ≤ n results in $\dot{w}_i \geq 0$ while i > n yields $\dot{w}_i < 0$, thus satisfying Equation 23.

In the absence of H-events, we computed the average size of receptive fields using Equation 28 for a range of input thresholds θu and maximum participation rates of L-events, while keeping the minimum participation rate at 20% (Appendix 1–figure 1A, contour lines). We verified our analytical predictions with Monte Carlo simulations for the same range of parameters (Appendix 1–figure 1A). We confirmed that the size of L-events, which depends on the range of participation rates, has a direct impact on receptive field size, with larger L-events resulting in larger receptive fields for a fixed input threshold (Appendix 1–figure 1B). Low input thresholds generate refined receptive fields only if the size of spontaneous events is small.
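Equations 26 and 28 can be verified numerically: choosing θu = θu^n should make the first n weight derivatives non-negative and the rest negative (Equation 23). A sketch with illustrative parameters (Nu = 50 and 20-80% L-events, so q̄ = 0.5, δq = 0.02, ⟨u⟩ = 0.5); the function names are ours:

```python
def theta_for_rf_size(n, q_bar, delta_q, u_mean):
    """Input threshold that yields a receptive field of size n (Equation 28)."""
    return (q_bar - (n - 1) / 2.0 * delta_q) / u_mean

def weight_derivatives(n, n_u, q_bar, delta_q, u_mean, theta_u, w_max=0.5):
    """Weight derivatives at the candidate fixed point (Equation 26)."""
    return [(n * q_bar
             - (i * (i - 1) / 2.0 + (n - i) * (n - i + 1) / 2.0) * delta_q
             - n * u_mean * theta_u) * w_max
            for i in range(1, n_u + 1)]
```

By Equation 29 the derivative vanishes at i = 1 and i = n, is positive in between, and is negative for i > n, so exactly n weights are driven to the upper bound.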

Appendix 1—figure 1
Receptive field size depends on L-event properties and learning rule input threshold in the absence of H-events.

(A) Receptive field sizes from 500 Monte Carlo simulations for combinations of L-event maximum participation rate and input threshold, θu. For all simulations, the L-event minimum participation rate was fixed at 20%. The contour plots of receptive field sizes were obtained using the analytical approach (Equation 23). (B) Example receptive fields for different L-event maximum participation rates and θu=0.5. Smaller events recruiting only 20–40% of the input neurons generate very refined receptive fields. As the upper bound of the participation rate progressively increases from 40% to 80%, receptive fields get larger.

The eigenspace of 𝑪(θu)

To gain intuition for the weight dynamics in Equation 16, we first investigated the eigenspace of 𝑪(θu), the vector space spanned by the eigenvectors of 𝑪(θu). Specifically, we focused on the conditions that enabled the robust formation of cortical receptive fields.

Using the fact that Q and $\boldsymbol{C}(\theta_u)$ are circulant matrices (Appendix 1–figure 2A,B), we identified two input thresholds, θ* and θ**, that define three dynamical regions that the row-sum eigenvalue λ* of $\boldsymbol{C}(\theta_u)$ can occupy (Appendix 1–figure 2C). To obtain the first critical input threshold θ*, which characterizes the transition from region (i) to region (ii), we set, for any row j, the row-sum eigenvalue equal to the largest fixed eigenvalue λ1 of $\boldsymbol{C}(\theta_u)$:

(30) $\sum_{i=1}^{N_u} Q_{ji} - N_u\theta^*\langle u\rangle = \lambda_1,$

and for the input statistics of experimentally measured L-events (Table 1), we obtained θ*=0.414. Similarly, the transition from region (ii) to region (iii) is achieved when the row-sum eigenvalue is set to zero:

(31) $\sum_{i=1}^{N_u} Q_{ji} - N_u\theta^{**}\langle u\rangle = 0,$

and the second critical input threshold is obtained as θ**=0.564.
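Equations 30-31 can be evaluated numerically from the first row of Q, since the non-row-sum eigenvalues of a symmetric circulant matrix are its cosine transforms and do not depend on θu. The sketch below uses the idealized q_k of Equation 21 with illustrative parameters, which is why its numbers differ from the θ* = 0.414 and θ** = 0.564 obtained from the experimentally measured L-event statistics:

```python
import math

def critical_thresholds(q_row, u_mean):
    """Critical input thresholds theta* and theta** (Equations 30-31) for a
    symmetric circulant input correlation matrix with first row q_row."""
    n = len(q_row)
    # eigenvalues of a symmetric circulant matrix: cosine transforms of q_row
    eigs = [sum(q_row[k] * math.cos(2 * math.pi * m * k / n) for k in range(n))
            for m in range(n)]
    lam1 = max(eigs[1:])     # largest fixed (non-row-sum) eigenvalue
    row_sum = eigs[0]        # row sum of Q
    return (row_sum - lam1) / (n * u_mean), row_sum / (n * u_mean)
```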

Appendix 1—figure 2
Eigenvalues and eigenvectors of weight dynamics predict receptive field refinement.

(A) The input correlation matrix $\boldsymbol{Q} = \langle\boldsymbol{u}\boldsymbol{u}^\top\rangle$. (B) The modified covariance matrices $\boldsymbol{C}(\theta_u) = \boldsymbol{Q} - \langle u\rangle\theta_u$ for different input thresholds θu. (C) Two thresholds, θ* = 0.414 and θ** = 0.564, define three different dynamical regions in the spectrum of $\boldsymbol{C}(\theta_u)$, delineated by the horizontal red dashed lines: (i) 0 < θu < θ*, (ii) θ* < θu < θ**, and (iii) θ** < θu < 1. The row-sum eigenvalue λ* in each case is given by the yellow star, while the remaining eigenvalues are shown as gray circles. (D) Dominant eigenvectors corresponding to each region in C. Inset: fixed points corresponding to each region: (i), (ii) unstable node (open circles); (iii) saddle node (half-open circle).

In region (i), 0 < θu < θ* and λ* > 0 is the dominant (largest) eigenvalue of $\boldsymbol{C}(\theta_u)$. Its corresponding eigenvector is constant (Appendix 1–figure 2D, (i)). This predicts that all synaptic weights onto a given cortical cell will potentiate, preventing the formation of a localized receptive field. Since all eigenvalues are non-negative, the fixed point is an unstable node of the linear dynamical system (Equation 16).

In region (ii), θ*<θu<θ** and λ*>0. However, λ* is no longer the dominant eigenvalue. In this setting, there is a pair of dominant eigenvalues with the corresponding eigenvectors taking the form of out-of-sync sine waves with positive and negative elements. The sign of these elements predicts that some weights will potentiate while others depress, thus enabling the formation of receptive fields (Appendix 1–figure 2D, (ii)). All eigenvalues in this second region remain non-negative and the fixed point is still an unstable node of the dynamical system.

Finally, in region (iii), θ** < θu < 1 and λ* < 0. While the dominant eigenvectors are similar to those in region (ii), enabling the formation of localized receptive fields (Appendix 1–figure 2D, (iii)), the dynamics differ because the fixed point is now a saddle node.

Analysis of the weight dynamics in two dimensions

We reduced the dimensionality of the weight dynamics by defining two distinct sets of weights. The first set, wRF, contains the n weights at the topographically biased locations of the receptive field; the complementary set wC contains the remaining weights. To classify the weights into wRF and wC, we first solved Equation 16 with the biased initial condition, limiting the sum to the dominant eigenvalues and their eigenvectors. In the case of selectivity, the eigenvectors have half their elements positive and half negative. Therefore, if the receptive field size is n < Nu/2, we subsampled the potentiating weights to achieve the smaller receptive field size by keeping only the n largest positive elements in wRF and moving the remaining ones to wC. If n > Nu/2, we downsampled wC by moving the n − Nu/2 least-negative weights to wRF. Due to the topographically biased initial conditions, wRF always contained the weights potentiating along the diagonal of the weight matrix.

We then regularly sampled initial conditions in [0, wmax] × [0, wmax]. For a given receptive field size n, we set θu = θu^n according to Equation 28. If wRF(0) > wC(0), wRF contains the n weights that form the receptive field by potentiating to the upper bound wmax, while wC contains the remaining Nu − n weights, which depress to 0. Similarly, if wRF(0) < wC(0), wC contains the n weights that potentiate to the upper bound, while wRF contains the remaining Nu − n weights, which depress to 0. At each initial condition, we solved the weight evolution (Equation 16) for each weight and averaged the weights in wRF and wC over a small time interval to obtain the direction of the phase-plane arrows. We computed the evolution trajectory by solving Equation 16 with a topographically biased initial condition.

Analytical solution of the two-dimensional weight dynamics with only L-events in the Hebbian covariance rule

We first examined the weight dynamics in the reduced two-dimensional phase plane wRF × wC with only L-events (R_H = 0 in Equation 16). The phase plane is symmetric about the diagonal wRF = wC due to the symmetry of the dominant eigenvectors (Appendix 1–figure 3A). As predicted by the eigenvectors (Appendix 1–figure 2D), in region (i) both wRF and wC converge to the upper bound, and the fixed point, located at the origin, is an unstable node (Appendix 1–figure 3A, left). Therefore, all weights potentiate and no receptive field can form. In regions (ii) and (iii), the eigenvectors predict the formation of receptive fields with wRF → wmax and wC → 0 (Appendix 1–figure 3A, middle and right), although the dynamics differ in each case because the origin is an unstable node or a saddle node, respectively.

Appendix 1—figure 3
Peripheral L-events generate robust receptive field refinement.

(A) The reduced two-dimensional weight dynamics in the phase plane, with the same θu as in Appendix 1–figure 2B and D. For each plane, the red trajectory depicts the weight evolution from an initial condition with wRF(0) > wC(0) until the weights reach their upper bound (wmax = 0.5). Left: wC → wmax and wRF → wmax, resulting in no selectivity. Middle and right: wC → 0 and wRF → wmax, resulting in selectivity and receptive field refinement. (B) Simulation results for the same input thresholds as in Appendix 1–figure 2B,D for the full 50-dimensional system.

Our analytical predictions of the reduced two-dimensional system with only L-events were confirmed in numerical simulations of the full N-dimensional system. In particular, in region (i) all weights potentiate with each cortical cell receiving input from all thalamic inputs, such that no receptive field forms (Appendix 1–figure 3B, left). In regions (ii) and (iii), receptive fields form with good topography (Appendix 1–figure 3B, middle and right). Therefore, consistent with the analytical prediction of receptive field size (Appendix 1–figure 1), higher input thresholds resulted in smaller receptive fields.

Thus, in the presence of only L-events originating from the sensory periphery, the Hebbian covariance rule can generate receptive fields whose size depends on the input threshold θu. This result agrees with previous findings on the emergence of other aspects of development, including topographic maps (Willshaw and von der Malsburg, 1976), ocular dominance (Miller et al., 1989; Miller, 1994) and other forms of selectivity (Mackay and Miller, 1990; Miller and MacKay, 1994; Lee et al., 2002), in the presence of correlated activity in the input layer of similar feedforward networks. We find that when the only input to the cortex is peripheral L-events, intrinsic properties of the learning rule, such as the threshold between potentiation and depression, control receptive field refinement.

Analytical solution of the two-dimensional weight dynamics with L- and H-events in the Hebbian covariance rule

We also studied how the addition of H-events affects network refinement. To investigate the role of H-events systematically, we repeated our analytical study of the weight dynamics of Equation 16, but with R_H ≠ 0. In the reduced two-dimensional phase plane wRF × wC, including spontaneous events in the cortical layer moves the fixed point of Equation 16 away from the origin to the coordinates wRF = wC = (λ*)^{-1}θu R_H. Nevertheless, the different dynamical regimes reported in Appendix 1–figure 2C remain valid. In region (i), the addition of cortical events moves the unstable node away from the origin and into the first quadrant (Appendix 1–figure 4A, top). As a result, a small region of selectivity emerges in the plane, in which initial conditions generate refined receptive fields (Appendix 1–figure 4A, top middle). Therefore, the addition of H-events enables the emergence of weight selectivity, but through a different mechanism than with only L-events. Rather than modulating the learning rule through the input threshold, changing the H-event statistics through the parameter R_H can generate different receptive field sizes for a fixed input threshold. However, the strength of H-events has to be fine-tuned to generate refined receptive fields: within a small range of R_H, the network transitions from no selectivity (where all weights potentiate, Appendix 1–figure 4A, top left) to complete decoupling (where all weights depress, Appendix 1–figure 4A, top right).

Appendix 1—figure 4
Spontaneous cortical H-events disrupt receptive field refinement.

(A) Top: Phase planes of the reduced two-dimensional system for input threshold θu = 0.4 (region i) and increasing strength of cortical events R_H, with an example trajectory (red). Selectivity can only be observed for fine-tuned R_H. The fixed point (open circle), an unstable node, has moved to the first quadrant. Bottom: Simulations of receptive field development with the same parameters, where Hint was progressively reduced (the same set of parameters as shown in Figure 3A). (B) Top: Phase planes for θu = 0.52 (region ii) and increasing R_H, with an example trajectory (red). The unstable node (open circle) moves from the origin to the first quadrant as R_H increases. Bottom: Simulations with the same parameters, where Hint was progressively reduced. (C) Top: Phase planes for θu = 0.6 (region iii) show the transition from selective receptive fields to cortical decoupling in response to increasing R_H. The fixed point (half-open circle), now a saddle node because λ* < 0, has moved away from the origin to the third quadrant. Bottom: Simulations with the same parameters and very infrequent H-events, where Hint was progressively reduced.

To relate the reduced two-dimensional phase planes to the simulation results, we used Equation 14 to obtain RH by taking into account the simulation parameters in Table 1. We next verified the predictions of the reduced two-dimensional system in numerical simulations of the full network with H-events (Appendix 1–figure 4A bottom). To capture the gradual increase of RH as in the reduced two-dimensional system, we decreased the average inter-event interval between H-events, Hint. As before, only a narrow range of Hint leads to refined receptive fields, albeit with some degree of decoupling (Appendix 1–figure 4A, bottom middle). Outside of this range, individual cortical neurons are either non-selective (Appendix 1–figure 4A, bottom left) or nearly completely decoupled from the thalamus (Appendix 1–figure 4A, bottom right).

In regions (ii) and (iii), the fixed point moves from the origin to the first and third quadrants, respectively (Appendix 1–figure 4B and Appendix 1–figure 4C). In both cases, only very weak H-events can sustain finite receptive fields because the high input threshold value already provides sufficient depression to the network. We confirmed our analytical results in regions (ii) and (iii) with numerical simulations of the full network (Appendix 1–figure 4B and Appendix 1–figure 4C, bottom).

Data availability

Sample codes and data are available at https://github.com/comp-neural-circuits/LH-events (copy archived at https://archive.softwareheritage.org/swh:1:rev:b90e189a9e1a4d0cdda097d435fa91b1236f1866/).

References

  1. Berkes P, White B, Fiser J (2009) No evidence for active sparsification in the visual cortex. In: Bengio Y, Schuurmans D, Lafferty JD, Williams CK, Culotta A, editors. Advances in Neural Information Processing Systems 22. Curran Associates, Inc. pp. 108–116.
  2. Chistiakova M, Bannon NM, Bazhenov M, Volgushev M (2014) Heterosynaptic plasticity: multiple mechanisms and multiple roles. The Neuroscientist 20:483–498. https://doi.org/10.1177/1073858414529829
  3. Cooper LN, Intrator N, Blais BS, Shouval HZ (2004) Theory of Cortical Plasticity. Singapore: World Scientific.
  4. Gray RM (2005) Toeplitz and circulant matrices: a review. Foundations and Trends in Communications and Information Theory 2:155–239. https://doi.org/10.1561/0100000006
  5. Willshaw DJ, von der Malsburg C (1976) How patterned neural connections can be set up by self-organization. Proceedings of the Royal Society of London, Series B, Biological Sciences 194:431–445. https://doi.org/10.1098/rspb.1976.0087

Decision letter

  1. Andrew J King
    Senior and Reviewing Editor; University of Oxford, United Kingdom
  2. József Fiser
    Reviewer; Central European University, Hungary
  3. Nicholas V Swindale
    Reviewer; University of British Columbia, Canada
  4. Jianhua Cang
    Reviewer; University of Virginia, United States

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

This computational modeling study examines how experimentally characterized patterns of spontaneous activity, thought to arise either locally or to be driven by events in the eye, can influence the developmental refinement of connectivity and receptive field properties in the visual cortex. The theoretical framework set out by the authors provides new insight into the role of spontaneous activity, which is present before vision starts, and will no doubt inspire future experimental studies.

Decision letter after peer review:

[Editors’ note: the authors submitted for reconsideration following the decision after peer review. What follows is the decision letter after the first round of review.]

Thank you for submitting your work entitled "Adaptation of spontaneous activity in the developing visual cortex" for consideration by eLife. Your article has been reviewed by two peer reviewers, and the evaluation has been overseen by a Reviewing Editor and a Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: József Fiser (Reviewer #3).

Our decision has been reached after consultation between the reviewers. Based on these discussions and the individual reviews below, we regret to inform you that your work will not be considered further for publication in eLife.

This manuscript addresses the important and interesting topic of the role of spontaneous activity at different levels of the visual pathway in the emergence of topographically-organized V1 receptive fields. Although the reviewers commended the modelling and data analyses, as well as the quality of the writing, they did not think that this study provides fundamental insights into the biological mechanisms responsible for establishing the connectivity and response properties of cortical neurons. In addition, it was felt that the study lacked novelty in some aspects and that some of the findings presented had been over-interpreted.

Reviewer #1:

This paper provides a detailed theoretical analysis of how different types of spontaneous activity influence receptive field development in the cortex. The model is well-grounded, the simulations are supported by nice mathematical analyses, and the paper is well written. Overall I think it's good work and certainly worthy of publication in a solid journal. However I don't think it provides the level of advance appropriate for eLife. My main concerns in this regard are as follows.

1) The paper shows how the deleterious effects of H events can be mitigated, but it doesn't explain why H events are present in the first place. Overall the results don't seem very surprising, and I don't think there's a huge amount of new insight into the underlying biological principles.

2) The modelling and analytical tools used have already been thoroughly investigated. For instance Equation 2 is Equation 8.9 in the Dayan and Abbott textbook, the eigenvector analysis is very similar to that in Mackay and Miller, 1990, and many of the principles relevant to this work were already established by Willshaw and von der Malsburg, 1976. While all this certainly supports the soundness of the work presented here, it does undermine its novelty.

3) I think the experimental finding that the amplitudes of L and H events preceding a given H event are correlated could be explained simply by long-term fluctuations in overall cortical excitability, rather than being a confirmation of the model.

Reviewer #3:

This manuscript aims at reconciling the roles of spontaneous activities of thalamic vs. cortical origin in the establishment of cortical connectivity and known neural characteristics of those activities. The authors propose the existence of an amplitude-adaptation mechanism for the cortically generated spontaneous activity and test this idea by re-analyzing previous in vivo data and performing simulations on a 1D model of cortical development.

I find the paper clearly written, the data analyses and the simulations adequately executed. However, I also think some additional effort to clarify the exact contribution of the paper, the link between analyses and the claims of the paper and its link to previous proposals would be necessary to better assess the significance of the proposed model. In addition, clear predictions justifying the insight of the work would further improve the value of the manuscript.

I had three issues when reading the manuscript, handling of which might help to increase the impact of the paper.

First, it seems to me that describing the exact contribution of the paper should be better elaborated. My understanding is that the course of action of the paper is this: (a) taking up the experimental findings of Siegel et al., 2012, about the two independent sources of activity generation in the developing visual system (thalamic and cortical), the authors tried to fold these constraints to the computational requirements of topographic pattern formation. (b) After choosing one particular implementation, they realized and demonstrated that an adaptive tuning of the global H-events' amplitude is needed for stable behavior in this model. Finally, (c) after reanalyzing the original Siegel et al., 2012, data, they found evidence for such an adaptive mechanism. In addition, they linked their work to the concept of sparsification of neural signal for efficient coding.

If this is correct, I would like to know the answer to the following set of questions.

1) Why the postulation of H-events acting homeostatically? It is clear that this assumption can step in for the missing normalization functionality necessary for the Hebbian plasticity rule to operate properly, but there are other options as well. Did the authors have a more established reason to go with this choice? Beyond being a heterosynaptic learning rule, in what way the resulting Hebbian input-threshold learning rule is different from or more adequate than other realizations (e.g. the ones based on pre-post synaptic activities)? And wrt other heterosynaptic rules?

2) When arguing for evidence in the Siegel et al., 2012, data based on the fact that the amplitude of an H-event and the amplitudes of H- and L-events in the preceding 100 msec are correlated, how can they rule out the alternative that the correlation is not a result of an active adaptation mechanism for H-events, but due to a general fluctuation of both L- and H- event amplitude magnitudes in time caused by other factors? Shouldn't the existence of active adaptation mechanism be supported by showing a causal proportionality based on some aggregate sum of amplitudes and frequencies of preceding events in a specified window rather than just a time-insensitive amplitude-to-amplitude correlation the authors demonstrate?

My second issue is related to the notion of sparsification, which has been widely but typically very loosely used in the literature, and the manuscript seems to follow this trend. For a sufficient treatment of sparsification, see Willmore and Tolhurst, 2001, Berkes et al., 2009 and Zylberberg et al., 2013. A full treatment of sparseness includes (a) proper definitions (i.e. distinguishing between population and lifetime sparseness of neural activity), (b) the clarification that sparsification can be interpreted as a principle either for energy conservation, in which case lifetime sparseness is the appropriate measure, and homeostasis is a possible implementation, or for efficient coding, for which population sparseness is the proper measurement, and regulation of the individual firing rates is an insufficient proxy, and (c) using the proper sparseness for the actual argument. The Siegel et al., 2012, paper carefully uses the term "event" sparsification, which refers to the occurrence of waves and bursts, and which has only an indirect connection to information processing capacity, as was implied (but not clearly spelled out) in Olshausen and Field, 1996. The Rochefort et al., 2009, paper does refer to sparsity of neural activity, but measuring the percentage of active neurons in any one event does not by itself capture the information processing capacity of the network either.
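The lifetime/population distinction raised here can be made concrete with the widely used Treves–Rolls measure, computed along either axis of a neurons-by-events response matrix. A minimal illustrative sketch on toy data (not part of either manuscript's actual analysis):

```python
import numpy as np

def treves_rolls(r):
    """Treves-Rolls activity ratio a = <r>^2 / <r^2> for a non-negative
    response vector r: a is small for sparse (peaked) responses and
    approaches 1 for uniform responses (a = 1/n for a one-hot vector)."""
    r = np.asarray(r, dtype=float)
    return r.mean() ** 2 / np.mean(r ** 2)

# Toy rate matrix: rows = neurons, columns = events.
rates = np.array([[0.0, 0.1, 5.0, 0.0],   # neuron 1 across 4 events
                  [4.8, 0.0, 0.2, 0.1]])  # neuron 2 across 4 events

lifetime = [treves_rolls(row) for row in rates]      # per neuron, across events
population = [treves_rolls(col) for col in rates.T]  # per event, across neurons
print(lifetime, population)
```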

3) My problem is that it is unclear what the present manuscript aims at: is it simply trying to match the Siegel et al. recorded data with their model (which they show in the manuscript) without any attempt to link that to information processing, or alternatively, are they giving their model a functional spin by trying to link the results to the concept of efficient coding, in which case a whole different set of analyses is required. According to the authors, "…we observe a progressive sparsification of the effective spontaneous events during ongoing development in our model." This would suggest that they are describing event statistics. But then in the Discussion, they refer to the efficient coding aspect of spike sparsification, for which there is no adequate analysis provided in the manuscript. Given that this is a computational modelling paper, it would help to get a clear statement about, and the corresponding analysis supporting, the goals the authors have with referring to sparsification. The proposed adaptation of the H-event amplitudes would have a significant effect on efficient coding, and this is a different issue from map formation. I wonder if the authors want to dive into that issue, but in any case, it would be necessary to clarify what type of sparsification the authors discuss in the manuscript.

My third comment is pointing out that the authors did not provide any testable prediction based on their new model, if I am not mistaken.

4) If the new idea is that amplitude-adaptation must exist in the cortex, proposing a test that can verify this directly would be invaluable.

[Editors’ note: further revisions were suggested prior to acceptance, as described below.]

Thank you for submitting your article "Adaptation of spontaneous activity in the developing visual cortex" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Andrew King as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: József Fiser (Reviewer #1); Nicholas V Swindale (Reviewer #2); Jianhua Cang (Reviewer #3).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, we are asking editors to accept without delay manuscripts, like yours, that they judge can stand as eLife papers without additional data, even if they feel that they would make the manuscript stronger. Thus the revisions requested below only address clarity and presentation.

Summary:

This modelling study shows that a set of rules involving adaptation can lead to developmental changes in the visual cortex that match those observed experimentally, including receptive field formation, synapse stabilization and a slow sparsification of spontaneous activity. The biological applications of this study are important and interesting. The proposed roles of local low-synchronicity events originating in the retina and global high-synchronicity events originating in the cortex, and the associated predictions from this research, will stimulate future experimental work.

Revisions:

The reviewers were generally positive, but raised several concerns that will need to be addressed satisfactorily before your paper can be considered to be acceptable for publication in eLife. While the assessment of this work took into account the revisions made to the previously rejected manuscript, it was considered as a new submission and has therefore attracted additional comments.

1) The direct comparison with the BCM learning rule has increased the value of the paper significantly. Nevertheless, questions were raised about more conventional normalization schemes. We appreciate that it would be excessive to test all existing alternatives. However, alternatives do need to be considered and the authors should explain how these compare to the non-adaptive-augmented version of your original model, and explain why, if only one contrasting alternative is to be tested, BCM is the right choice.

2) The manuscript has benefited from making the main text lighter and shifting much of the standard mathematical treatment to the Appendix. Nevertheless, the reviewers felt that the presentation of the manuscript could be improved further to more clearly explain the proposed links between model behaviour and real development and to make it easier for experimentalists to follow.

3) There was some discussion among the reviewers about the classification of spontaneous events into H and L types. Because this is critical for the study, further explanation of the previous evidence for this would be helpful.

4) The fact that the model itself produces L and H events seemed to throw a spanner into the works. One criticism of the model is that in the real brain, cortical spontaneous activity is not likely to be imposed from outside, as in the model, but is integral to the overall developmental dynamics and might involve circuits beyond the scope of the model that could regulate plasticity if needed. But now the significance of the imposed events seems to be called into question. It certainly does not help that H events can now be effectively identified as L events, resulting in confusion as to how to interpret what was going on and hence to evaluate the significance of the model.

5) Figure 2: the initial conditions were noisy but arguably not much different in other respects from the final state. Can you use much broader and more scattered initial receptive field widths? Is more than a bit of smoothing and weight thresholding going on?

6) The method used to show adaptation in the experimental data is unclear. Surely the exponential is used to scale the weights of the values that go into the running average, not their amplitude? Assuming that is the case, what is the point of having a time constant for the averaging of 1000s when recordings do not last longer than 300 s? Effectively it is an unweighted average, or close to it.

7) Can the correlation between the amplitude of an H event and the average amplitude of preceding events shown in Figure 5B be more parsimoniously explained by slow changes in the overall signal strength over time? This was felt to be a key issue.

https://doi.org/10.7554/eLife.61619.sa1

Author response

[Editors’ note: the authors resubmitted a revised version of the paper for consideration. What follows is the authors’ response to the first round of review.]

Reviewer #1:

This paper provides a detailed theoretical analysis of how different types of spontaneous activity influence receptive field development in the cortex. The model is well-grounded, the simulations are supported by nice mathematical analyses, and the paper is well written. Overall I think it's good work and certainly worthy of publication in a solid journal. However I don't think it provides the level of advance appropriate for eLife. My main concerns in this regard are as follows.

We thank the reviewer for the critical assessment of our work. However, we were puzzled by the reviewer’s evaluation that our work does not provide the appropriate advance for eLife after evaluating it as good and worthy of publication in a solid journal. Our work provides a balance between a mathematically-grounded theoretical framework and modeling applied to published experimental data, with some predictions already supported by the data and further predictions to be tested experimentally. In our view, this integrated approach is precisely what a journal like eLife aims to publish.

1) The paper shows how the deleterious effects of H events can be mitigated, but it doesn't explain why H events are present in the first place. Overall the results don't seem very surprising, and I don't think there's a huge amount of new insight into the underlying biological principles.

We acknowledge that our discussion about the biological principles underlying H-events and their potential role in developmental synaptic refinements were unclear.

Even though global spontaneous cortical events (like the H-events) have been observed in different developing sensory cortices (e.g., auditory cortex (Babola et al., 2018) and visual cortex (Gribizis et al., 2019)), their functional role remains elusive. We addressed this knowledge gap in our modeling with the hypothesis that they are a self-regulating homeostatic mechanism operating in parallel to network refinements driven by peripheral spontaneous activity (L-events). We note, however, that the goal of our work is not to propose a mechanism for the generation of H-events, which is a complex process involving inhibition and multiple cell types (see, e.g., Leighton et al., 2020). We propose three key reasons for the hypothesis about the homeostatic role of H-events, which we have included in the revised manuscript:

a) Since all cells are maximally active during H-events, these patterns likely do not carry much information that can be used for synaptic refinement.

b) Synaptic scaling, a well-known homeostatic mechanism, has been shown to induce global synaptic depression in response to highly-correlated network activity (like H-events) (Turrigiano and Nelson, 2004).

c) As postulated in the synaptic homeostasis hypothesis (SHY), during slow-wave sleep, which is characterized by highly synchronous activity (like H-events), synaptic strengths are downscaled to balance their net increase during wakefulness (Tononi and Cirelli, 2006). Since during development sleep patterns are not yet regular, we reasoned that refinement (by L-events) and homeostasis (by H-events) occur simultaneously instead of being separated into wake and sleep states.

As we further discuss in response to point 2 below, the proposed adaptation in H-event amplitudes has not been previously studied. Importantly, going beyond the resulting mitigation of deleterious effects caused by too-strong H-events, this adaptation surprisingly also explains the developmental sparsification of spontaneous activity on a much longer timescale, without any additional fine-tuning.

Additionally, we followed reviewer 3’s suggestion to consider alternative learning rules to the Hebbian rule (specifically, the Bienenstock, Cooper and Munro, or BCM, rule) (Bienenstock et al., 1982). We found that in the BCM rule the functional role of H-events is mechanistically different than in the Hebbian rule. In particular, H-events are not deleterious and do not cause cortical decoupling as in the Hebbian rule. Instead, they regulate the threshold between potentiation and depression and hence act homeostatically, as we originally postulated. We now include an extended set of results on the BCM rule in the revised manuscript, and compare the role of H-events in synaptic refinements when using both the BCM and Hebbian rules (updated Figure 3 and corresponding text).

2) The modelling and analytical tools used have already been thoroughly investigated. For instance Equation 2 is Equation 8.9 in the Dayan and Abbott textbook, the eigenvector analysis is very similar to that in Mackay and Miller, 1990, and many of the principles relevant to this work were already established by Willshaw and von der Malsburg, 1976. While all this certainly supports the soundness of the work presented here, it does undermine its novelty.

We respectfully disagree with the reviewer on this point. The Hebbian learning rule that we study is fundamental to studying synaptic plasticity. In fact, different forms of spike-timing-dependent plasticity (STDP) based on pairs and triplets of spikes can be reduced to the same type of equation (Kempter et al., 1999; Gjorgjieva et al., 2011). In our view, applying a fundamental theoretical framework to an interesting biological question does not undermine the novelty of our paper. For comparison, these rules have been used by numerous studies in different settings (see also a recent review by Zenke et al. (2017), Equation 2.2), including some published in eLife:

• (Sweeney and Clopath 2020) (eLife 2020) – Their Equation 2 for the study of population coupling and plasticity of cortical responses

• (Weber and Sprekeler 2018) (eLife 2018) – Their Equation 2 for the learning of place and grid cells

• (Bono and Clopath 2018) (PLoS Comp Biol 2019) – Their Equation 3 for the study of ocular dominance plasticity

• (Toyoizumi et al. 2013) (Neuron 2013) – Equation on line 18, pg. 54 (Equation is unlabeled) for the study of critical period plasticity

We used this theoretical framework to implement Hebbian plasticity and straightforwardly test the hypothesis of the homeostatic role of H-events (see point 1). Importantly, we derived that this framework with fixed properties of L- and H-events cannot generate stable and refined receptive fields, and hence proposed the adaptation of H-event amplitude. This advances the field by combining plasticity based on fixed activity statistics operating on a shorter timescale of a single postnatal day, and the long-term developmental modification of activity statistics on the timescale of several days. Such a combination of timescales is unique to the developmental setting of the biological problem that we consider and has not been previously addressed. At the same time, our work goes beyond the topographic map formation by correlated activity proposed by Willshaw and von der Malsburg (1976) in that it focuses on the generation of stable and robust receptive fields, using in vivo spontaneous activity patterns, which could not be recorded in 1976. We show that this refinement of receptive fields leads to the developmental sparsification of spontaneous events. To our knowledge, no previous work has demonstrated this feedback from connectivity to activity on a developmental timescale. We now highlight these advances in the revised manuscript and make several experimental predictions in the Discussion.

In response to the reviewer’s comment, we moved the mathematical analysis to an Appendix (Appendix 1: Figures 1–4 and corresponding text) in the revised manuscript because it is not the main result of our work. Given that our paper is not a mathematical paper about a new technique, we do not believe that the use of well-established methods should preclude publication in eLife. Moreover, the eigenvector analysis is a standard mathematical technique from dynamical systems theory to study the phase plane of the synaptic weights and has been used by multiple theoretical studies before us (Litwin-Kumar and Doiron, 2014, Gjorgjieva et al., 2011, Toyoizumi et al., 2014, among others). Including the full eigenvector analysis (Appendix 1: Figure 2) is necessary to establish a relationship between the abstract parameters in the model and in vivo measured spontaneous activity properties. Considering the broad readership of eLife, we believe that it is important to present the analysis as part of the Appendix in our revised manuscript to make it as clear and self-contained as possible. Even so, we extended the mathematical analysis of MacKay and Miller, 1990, in several novel ways:

a) the derivation of the entries of the input correlation matrix for L-events (Equations 17–19),

b) the calculation of the receptive field size (Equations 21–27, and comparison to simulations in Appendix 1: Figure 1),

c) the reduction into a 2D phase plane for the weights corresponding (or complementary) to the receptive field (Figure 4D and Appendix 1: Figures 3–4),

d) the analysis for how adaptive H-events successfully refine receptive fields without cortical decoupling (Figure 4E).

3) I think the experimental finding that the amplitudes of L and H events preceding a given H event are correlated could be explained simply by long-term fluctuations in overall cortical excitability, rather than being a confirmation of the model.

We thank the reviewer for raising this important point. To explore this possibility, we performed two additional analyses of our data.

First, we analyzed consecutive recordings (each ∼5 min long) in the same animal, of which we had between 3 and 14 in each of the 26 animals, to identify possible fluctuations on a longer timescale. We found that the average amplitude of all (L and H) events is not significantly different across consecutive recordings of the same animal (Figure 5—figure supplement 1A in the revised manuscript; one-way ANOVA tests, p > 0.05 in 23 out of 26 animals). Across different animals and ages, individual event amplitudes remained uncorrelated between successive recordings at this timescale, which we confirmed by plotting the difference in event amplitude as a function of the time between recordings (Figure 5—figure supplement 1B in the revised manuscript). Hence, our additional analysis shows that long-term fluctuations in cortical excitability are unlikely to account for the observed correlation.
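The per-animal comparison described above uses a standard one-way ANOVA across recordings. A minimal sketch on synthetic data (the amplitude arrays are placeholders for the real per-recording event amplitudes, here drawn from one distribution to mimic the absence of long-term excitability drift):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Placeholder event amplitudes for five consecutive ~5 min recordings
# of one animal, all drawn from the same stationary distribution.
recordings = [rng.normal(loc=1.0, scale=0.3, size=40) for _ in range(5)]

# One-way ANOVA: tests whether the mean amplitude differs across recordings.
stat, p = f_oneway(*recordings)
print(f"F = {stat:.2f}, p = {p:.3f}")
```

A value of p > 0.05 would indicate no significant difference in mean amplitude across recordings, as reported for 23 of 26 animals.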

Next, even though we did not find evidence for long-term fluctuations in the overall cortical excitability in the data, we used our model to investigate whether such fluctuations could be an alternative mechanism to adaptation for the stable and robust refinement of receptive fields under the Hebbian rule with L- and H-events. To simulate a top-down input that generates long-term fluctuations in L- and H-events, we sampled the event amplitudes from correlated Ornstein–Uhlenbeck processes, according to:

Lamp(t) = [αL + √ρ (X1(t) + X2(t))/√2 + √(1−ρ) X1(t)]+
Hamp(t) = [αH + √ρ (X1(t) + X2(t))/√2 + √(1−ρ) X2(t)]+

where αL and αH are constants, ρ is the correlation, and X1(t), X2(t) are sampled as (Gillespie, 1996): X(t+Δt) = X(t)μ + √(cτ(1−μ²)/2) n, where μ = e^(−Δt/τ), τ is the relaxation time of the process, c is the diffusion constant, and n is a random number drawn from a normal distribution with mean 0 and standard deviation 1. For all the simulations, we fixed αL = 1, αH = 6, τ = 1 s, c = 0.1 and Δt = 0.01 s, and explored correlation strengths of ρ = 0, 0.5 and 1. During L-events (H-events), the amplitudes of all participating cells are set to Lamp(t) (Hamp(t)). Apart from the time-dependent amplitudes, we followed the same simulation protocol from our manuscript and quantified the receptive field size (S) for different levels of correlation between L- and H-amplitudes (Figure 5—figure supplement 2A). We also quantified the proportion of simulations that resulted in selective, non-selective and decoupled receptive fields (Figure 5—figure supplement 2A, compare to Figure 3C in the revised manuscript). For the three values of correlation strength that we tested, the proportion of simulations that resulted in selective receptive fields is always smaller than 20%, which is in the same range as that for non-adaptive H-events (Figure 5—figure supplement 2A, compare to Figure 3C left in the revised manuscript). If the reviewer wants us to include these results in the revised manuscript, we would be happy to do so.
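As a concrete illustration, this sampling protocol can be sketched as follows. The shared-component construction for correlating the two amplitude traces is our reading of the formulas above, and for simplicity the amplitudes are evaluated at every time step rather than only at event times:

```python
import numpy as np

def ou_step(x, rng, dt=0.01, tau=1.0, c=0.1):
    """Exact Ornstein-Uhlenbeck update (Gillespie, 1996):
    X(t+dt) = X(t)*mu + sqrt(c*tau*(1 - mu^2)/2) * n, with mu = exp(-dt/tau)."""
    mu = np.exp(-dt / tau)
    return x * mu + np.sqrt(c * tau * (1.0 - mu**2) / 2.0) * rng.standard_normal()

def correlated_amplitudes(T=60.0, dt=0.01, rho=0.5, alpha_L=1.0, alpha_H=6.0, seed=0):
    """Rectified L- and H-event amplitude traces whose fluctuations share a
    common component, giving correlation strength rho between the traces."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x1 = x2 = 0.0
    L_amp, H_amp = np.empty(n), np.empty(n)
    for i in range(n):
        x1 = ou_step(x1, rng, dt=dt)
        x2 = ou_step(x2, rng, dt=dt)
        shared = np.sqrt(rho) * (x1 + x2) / np.sqrt(2.0)
        L_amp[i] = max(alpha_L + shared + np.sqrt(1.0 - rho) * x1, 0.0)  # [.]_+
        H_amp[i] = max(alpha_H + shared + np.sqrt(1.0 - rho) * x2, 0.0)
    return L_amp, H_amp

L_amp, H_amp = correlated_amplitudes(rho=0.5)
# The traces fluctuate around alpha_L = 1 and alpha_H = 6, respectively.
```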

As an alternative to these slowly-varying event amplitudes, we also considered whether a top-down signal that decays on a slow developmental timescale and affects only H-event amplitudes, without directly integrating ongoing activity in the network, might generate selectivity and prevent cortical decoupling. In this set of simulations, H-events are initially very strong but their amplitudes decay exponentially as a function of developmental time with different time constants (Author response image 1B). To determine the decay time constants used in this model, we fitted an exponential decay function to the initial decay of amplitude of the model with adaptive H-events (Author response image 1A). The only working solution in this scenario was to consider very fast decays, where H-events are effective only at the beginning of the simulation, which reduces to the case of pure L-events in our manuscript (Author response image 1C). We do not consider this a biologically plausible scenario since H-events are detected in animals as old as P14 (Figure 3A of Siegel et al., 2012). If the reviewer wants us to include these results in the manuscript, we would be happy to do so.

Author response image 1
A top-down signal that imposes exponential decay of H-event amplitudes during development (independent of ongoing network activity) enhances receptive field refinement only if the decay is very fast.

A. Average amplitudes of adaptive H-events as a function of simulation time for two input thresholds in the Hebbian learning rule, θu = 0.6 and θu = 0.65. To determine the decay time constants used in B and C, an exponential decay function was fitted to the initial decay of amplitude, with fitted decay time constants τH = 825 s and τH = 676 s, respectively. B. Using the range of time constants from A, we simulated the model with non-adaptive H-events. Their amplitudes decayed exponentially with different decay time constants (τH = {10,100,1000,10000} s). The inset shows a detail of the beginning of the simulation. C. Final receptive fields for non-adaptive H-events with exponentially decaying amplitudes as in B for the two different input thresholds.
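The fitting step in panel A can be reproduced with a standard nonlinear least-squares exponential fit. A minimal sketch on synthetic data (the noisy trace is a placeholder for the model's adaptive H-event amplitudes, not the actual simulation output):

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, a0, tau, baseline):
    """Exponential decay toward a baseline amplitude."""
    return baseline + a0 * np.exp(-t / tau)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4000.0, 200)                      # simulation time (s)
true = exp_decay(t, a0=5.0, tau=800.0, baseline=2.0)   # tau near the fitted 825 s
amp = true + rng.normal(scale=0.1, size=t.size)        # noisy amplitude trace

# Least-squares fit of the decay time constant tau_H.
popt, _ = curve_fit(exp_decay, t, amp, p0=(4.0, 500.0, 1.0))
a0_fit, tau_fit, base_fit = popt
print(f"fitted tau_H = {tau_fit:.0f} s")
```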

Reviewer #3:

This manuscript aims at reconciling the roles of spontaneous activities of thalamic vs. cortical origin in the establishment of cortical connectivity and known neural characteristics of those activities. The authors propose the existence of an amplitude-adaptation mechanism for the cortically generated spontaneous activity and test this idea by re-analyzing previous in vivo data and performing simulations on a 1D model of cortical development.

I find the paper clearly written, the data analyses and the simulations adequately executed. However, I also think some additional effort to clarify the exact contribution of the paper, the link between analyses and the claims of the paper and its link to previous proposals would be necessary to better assess the significance of the proposed model. In addition, clear predictions justifying the insight of the work would further improve the value of the manuscript.

I had three issues when reading the manuscript, handling of which might help to increase the impact of the paper.

First, it seems to me that describing the exact contribution of the paper should be better elaborated. My understanding is that the course of action of the paper is this: (a) taking up the experimental findings of Siegel et al., 2012, about the two independent sources of activity generation in the developing visual system (thalamic and cortical), the authors tried to fold these constraints to the computational requirements of topographic pattern formation. (b) After choosing one particular implementation, they realized and demonstrated that an adaptive tuning of the global H-events' amplitude is needed for stable behavior in this model. Finally, (c) after reanalyzing the original Siegel et al., 2012 data, they found evidence for such an adaptive mechanism. In addition, they linked their work to the concept of sparsification of neural signal for efficient coding.

Indeed, the reviewer is correct: this is a beautiful way of summarizing our work (apart from the link to the concept of sparsification for efficient coding, which we have addressed). We have revised the manuscript to better highlight its main contribution, following many of the reviewer’s suggestions.

If this is correct, I would like to know the answer to the following set of questions.

1) Why the postulation of H-events acting homeostatically? It is clear that this assumption can step in for the missing normalization functionality necessary for the Hebbian plasticity rule to operate properly, but there are other options as well. Did the authors have a more established reason to go with this choice? Beyond being a heterosynaptic learning rule, in what way the resulting Hebbian input-threshold learning rule is different from or more adequate than other realizations (e.g. the ones based on pre-post synaptic activities)? And wrt other heterosynaptic rules?

We agree with the reviewer that we did not sufficiently motivate this assumption, even though it inspired us to study the Hebbian learning rule in the first place. We propose three key reasons for this motivation, which we have included in the revised manuscript:

a) Since all cells are maximally active during H-events, these patterns likely do not carry much information that can be used for synaptic refinement.

b) Synaptic scaling, a well-known homeostatic mechanism, has been shown to induce global synaptic depression in response to highly-correlated network activity (like H-events) (Turrigiano and Nelson, 2004).

c) As postulated in the synaptic homeostasis hypothesis (SHY), during slow-wave sleep, which is characterized by highly synchronous activity (like H-events), synaptic strengths are downscaled to balance their net increase during wakefulness (Tononi and Cirelli, 2006). Since during development sleep patterns are not yet regular, we reasoned that refinement (by L-events) and homeostasis (by H-events) occur simultaneously instead of being separated into wake and sleep states.

Based on the reviewer’s comment, we implemented an alternative learning rule that has had a prominent application in previous theoretical models of synaptic plasticity: the Bienenstock, Cooper and Munro (BCM) rule (Bienenstock et al., 1982). Synaptic potentiation and depression in the BCM rule are still determined by Hebbian terms that require coincident pre- and postsynaptic activity, but with two important differences from the Hebbian rule: (1) the rule depends on the square of postsynaptic activity (in contrast to the Hebbian rule, which depends linearly on postsynaptic activity), and (2) the rule has the property of metaplasticity, whereby the threshold that determines the amount of potentiation vs. depression is modulated as a function of postsynaptic activity. We hypothesized that this adaptive way of regulating potentiation vs. depression might be sufficient to replace the requirement for our proposed adaptive H-events in the Hebbian rule. Both the Hebbian and BCM rules are rate-based learning rules and have been extensively analyzed in previous work (see Cooper and Bear, 2012 for a review). Importantly, however, they can be directly linked to different forms of spike-timing-dependent plasticity (STDP) that depend on the order and timing of pairs and triplets of spikes, respectively, and hence have even wider applicability. For instance, the pair-based STDP rule maps to the Hebbian rule (Kempter et al., 1999), while the triplet STDP rule maps to the BCM rule under certain conditions of Poisson spiking (Pfister and Gerstner, 2006, Gjorgjieva et al., 2011).
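For illustration, the two rate-based rules can be sketched as minimal single-synapse update steps. This is a sketch under stated assumptions: the function names, the learning rate, and the exact form of the sliding-threshold dynamics are our illustrative choices, not the manuscript's actual implementation.

```python
def hebbian_covariance_step(w, u, v, theta_u, eta=1e-3):
    """One step of a Hebbian rule with an input threshold theta_u:
    presynaptic input u above theta_u potentiates the weight,
    input below theta_u depresses it (illustrative form)."""
    return w + eta * (u - theta_u) * v

def bcm_step(w, u, v, theta_m, eta=1e-3, tau_theta=100.0, dt=1.0):
    """One step of a BCM-like rule: quadratic in postsynaptic activity v,
    with a sliding threshold theta_m that relaxes toward v**2
    (metaplasticity; time constant tau_theta is an assumed value)."""
    w_new = w + eta * u * v * (v - theta_m)
    theta_new = theta_m + (dt / tau_theta) * (v**2 - theta_m)
    return w_new, theta_new
```

In this sketch, a strong H-event (large v) raises theta_m, so a subsequent small L-event whose cortical response falls below the threshold drives depression, consistent with the behavior of small L-events described in the next paragraph.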

We found that the BCM learning rule generates refined receptive fields without cortical cells decoupling from the thalamus (Figure 3B, C in the revised manuscript). Indeed, under the BCM rule, H-events are homeostatic because they dynamically slide the threshold between potentiation and depression. However, when analyzing the receptive field properties, we found that the BCM rule generates receptive fields with much worse topography than the Hebbian rule (Figure 3D in the revised manuscript). The underlying reason is that small L-events, which carry precise information for topographic connectivity refinement, mostly cause LTD in the synaptic weights (Figure 3E, F). Indeed, the sliding threshold that determines the amount of potentiation vs. depression is systematically increased by both large L-events and H-events. Therefore, the cortical activity triggered by small L-events is often lower than the sliding threshold at event onset, resulting in LTD. We now provide an extensive discussion of the results with the BCM rule and specifically compare them to the Hebbian rule with adaptive H-events in the revised manuscript.

2) When arguing for evidence in the Siegel et al., 2012, data based on the fact that the amplitude of an H-event and the amplitudes of H- and L-events in the preceding 100 msec are correlated, how can they rule out the alternative that the correlation is not the result of an active adaptation mechanism for H-events, but due to a general fluctuation of both L- and H-event amplitude magnitudes in time caused by other factors? Shouldn't the existence of an active adaptation mechanism be supported by showing a causal proportionality based on some aggregate sum of amplitudes and frequencies of preceding events in a specified window, rather than just the time-insensitive amplitude-to-amplitude correlation the authors demonstrate?

In our answer to P1.3, we addressed the point of existing fluctuations in the data with a further analysis in which we compared the average amplitudes of L- and H-events across recordings in the same animal, and determined that long-term fluctuations in cortical excitability are unlikely to account for the observed correlation (Figure 5—figure supplement 1 in the revised manuscript). We also performed additional simulations in which we used such hypothetical correlated fluctuations between L- and H-events as input, or assumed a decay of H-event amplitude uncoupled from ongoing activity, and showed that neither can generate robust, refined receptive fields.

We followed the reviewer’s suggestion to perform a time-sensitive analysis of the correlation between H-event amplitudes and average preceding activity by re-analyzing the data with a time-dependent leak in the amplitudes of events preceding each H-event. The amplitude of each spontaneous event registered ∆ti seconds before an H-event was multiplied by an exponential kernel exp(−∆ti/τ), and all amplitudes were then averaged. Due to the leak, spontaneous events that are closer in time to the H-event contribute more to the average, which mimics the mechanism used in the implementation of H-event adaptation in the simulations. For this analysis, we concatenated several consecutive 5 min-long recordings in the data to obtain longer time windows for the aggregate analysis. We investigated a range of time constants τ and found that as long as the time constant is not too short (i.e., unless only a single recent event influences the amplitude of a new H-event), the correlation between the amplitude of H-events and the average amplitude of preceding activity with the leak remains significant. We present this analysis in Figure 5 and its supplement in the revised manuscript.
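A minimal sketch of this leaky averaging, assuming hypothetical arrays of event times and amplitudes; normalizing by the summed kernel weights is our choice for the sketch, not necessarily the normalization used in the published analysis:

```python
import numpy as np

def leaky_average(event_times, event_amps, t_H, tau=1000.0, t_max=300.0):
    """Exponentially weighted average amplitude of the events in the
    t_max-second window preceding an H-event at time t_H. Events closer
    in time to the H-event receive a larger weight exp(-dt/tau)."""
    dts = t_H - np.asarray(event_times, dtype=float)
    mask = (dts > 0) & (dts <= t_max)   # keep only preceding events in the window
    if not mask.any():
        return np.nan
    weights = np.exp(-dts[mask] / tau)
    amps = np.asarray(event_amps, dtype=float)[mask]
    return np.sum(weights * amps) / np.sum(weights)
```

The reported correlation would then be, for example, np.corrcoef between the H-event amplitudes and the corresponding leaky averages across all H-events.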

My second issue is related to the notion of sparsification, which has been widely but typically very loosely used in the literature, and the manuscript seems to follow this trend. For a sufficient treatment of sparsification, see Willmore and Tolhurst, 2001, Berkes et al., 2009 and Zylberberg et al., 2013. A full treatment of sparseness includes (a) proper definitions (i.e. distinguishing between population and lifetime sparseness of neural activity), (b) the clarification that sparsification can be interpreted as a principle either for energy conservation, in which case lifetime sparseness is the appropriate measure and homeostasis is a possible implementation, or for efficient coding, for which population sparseness is the proper measure and regulation of individual firing rates is an insufficient proxy, and (c) using the proper sparseness measure for the actual argument. The Siegel et al., 2012, paper carefully uses the term "event" sparsification, which refers to the occurrence of waves and bursts, and which has only an indirect connection to information processing capacity as was implied (but not clearly spelled out) in Olshausen and Field, 1996. The Rochefort et al., 2009, paper does refer to sparsity of neural activity, but it neglects the fact that the percentage of active neurons in any one event does not itself capture the information processing capacity of the network either.

We thank the reviewer for this important feedback on how we discuss the sparsification of spontaneous activity, and for the literature suggestions. We completely agree that our definition of sparsification was unclear and incomplete. Our notion of sparsification follows that of Siegel et al., 2012: it refers to an overall sparsification of network events (fewer active cells per event) in which cortically-generated H-events become replaced with peripherally-driven L-events. Hence, it has only an indirect connection to the information processing capacity of the network, which we incorrectly alluded to in our manuscript. We have clarified this in the Results and Discussion sections of the revised manuscript, and in the text we now refer to it as “event sparsification”. We also provide relevant citations for alternative notions of sparsification.

3) My third comment is pointing out that the authors did not provide any testable prediction based on their new model, if I am not mistaken.

4) If the new idea is that amplitude-adaptation must exist in the cortex, proposing a test that can verify this directly would be invaluable.

We agree with the reviewer’s suggestion to highlight testable predictions based on our modeling results, which were not clearly stated in our previous manuscript.

We included a new section in the Results (“Modulating spontaneous activity properties makes different predictions for receptive field refinements”) and also in the Discussion (“Predictions of the model”) where we elaborate these model predictions and propose experimental validations.

1) First, we predict that under the Hebbian learning rule, changing the frequency of adaptive H-events can affect the size of the resulting receptive fields (Figure 4 in the revised manuscript). The frequency of H-events can be experimentally manipulated, for instance, with a gap junction blocker (carbenoxolone) that reduces the frequency of H-events (Siegel et al., 2012). According to our prediction, cortical receptive fields will be broader after this manipulation.

2) Next, we predict how receptive field size changes under the Hebbian learning rule when the size of L-events is varied (Figure 6 in the revised manuscript). Specifically, larger L-events will generate larger receptive fields and worse topography. There are at least two ways to manipulate L-events experimentally.

(2.1) One way is to manipulate the source of L-events in the cortex, namely the retinal waves. Our predictions can be related to previous experimental studies that manipulated the size of retinal waves in the β2 knockout mouse and found less refined receptive fields in the cortex (Sun et al., 2008, Stafford et al., 2009, Cutts and Eglen, 2014).

(2.2) Another way is to directly manipulate L-events in the cortex. Recently it has been shown that the size of L-events in the developing visual cortex increases upon suppression of somatostatin-positive interneurons (Leighton et al., 2020). After such a manipulation, our work predicts larger receptive fields and worse topography.

3) Third, our model generates “event sparsification,” with H-events being gradually replaced by L-events during development. This can be tested by examining L- and H-event properties in older animals (e.g., P14) and comparing them to those in younger animals (e.g., P8-P10). While other experimental studies report such developmental event sparsification (Rochefort et al., 2009, Frye and MacLean, 2016, Smith et al., 2015, Ikezoe et al., 2012, Shen and Colonnese, 2016, Golshani et al., 2009), in many of these studies activity was not segregated into peripherally driven L-events and cortically generated H-events. Our model predicts that the frequency of L-events increases, while the frequency of H-events decreases, over development.

4) Finally, we propose that for a Hebbian rule to drive developmental refinements of receptive fields, the amplitude of H-events should adapt to the ongoing network activity. The confirmation of such a fast adaptation mechanism would require prolonged and detailed activity recordings in vivo, which are within reach of modern technology (Ackman and Crair, 2014, Gribizis et al., 2019). Our work also predicts that manipulations that affect overall activity levels of the network, such as activity reduction by eye enucleation, would correspondingly affect the amplitude of ongoing H-events.

[Editors’ note: what follows is the authors’ response to the second round of review.]

Revisions:

The reviewers were generally positive, but raised several concerns that will need to be addressed satisfactorily before your paper can be considered to be acceptable for publication in eLife. While the assessment of this work took into account the revisions made to the previously rejected manuscript, it was considered as a new submission and has therefore attracted additional comments.

1) The direct comparison with the BCM learning rule has increased the value of the paper significantly. Nevertheless, questions were raised about more conventional normalization schemes. We appreciate that it would be excessive to test all existing alternatives. However, alternatives do need to be considered and the authors should explain how these compare to the non-adaptive-augmented version of your original model, and explain why, if only one contrasting alternative is to be tested, BCM is the right choice.

We are happy that the reviewers value the new comparison with the BCM learning rule. We also appreciate the questions regarding alternative, more conventional normalization schemes for the original Hebbian covariance learning rule without our proposed adaptation of cortical H-events. A common approach to prevent unconstrained weight growth in previous theoretical work is to limit the total weight strength of a cell. Two methods of enforcing such a constraint have been proposed: subtractive and multiplicative normalization (Miller and MacKay, 1994).

Subtractive normalization constrains the sum of all weights to a constant value by subtracting from each weight a constant amount that depends on the sum of all weights. We now demonstrate that the input threshold in the learning rule, which determines whether weights are potentiated or depressed, together with the inclusion of H-events, implements a form of subtractive normalization (see Appendix section “Normalization constraints”). Previous theoretical work has shown that subtractive normalization induces strong weight competition and drives the emergence of weight selectivity and refined receptive fields (Miller and MacKay, 1994). However, our results in Figure 3 demonstrate that this normalization constraint is insufficient to stabilize the receptive fields (i.e., to prevent the weights from completely depressing) because cortical H-events are stronger than peripheral L-events.

Multiplicative normalization constrains the sum of all weights to a constant value by subtracting from each weight an amount that is proportional to the weight itself. We did not consider a multiplicative normalization constraint because previous work has already shown that such a constraint does not generate refined (sharpened) receptive fields (see Miller and MacKay, 1994; Dayan and Abbott, 2001). Rather, multiplicative normalization yields a graded receptive field in which most mutually correlated inputs are represented (Miller and MacKay, 1994).
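The two normalization schemes can be sketched in a few lines; the function names and the one-shot (rather than per-update) application are our illustrative assumptions, and in practice subtractively normalized weights would typically also be clipped at hard bounds:

```python
import numpy as np

def subtractive_norm(w, w_total=1.0):
    """Enforce sum(w) == w_total by subtracting the same amount from every
    weight; preserves weight *differences*, inducing strong competition."""
    return w - (w.sum() - w_total) / w.size

def multiplicative_norm(w, w_total=1.0):
    """Enforce sum(w) == w_total by rescaling every weight proportionally;
    preserves weight *ratios*, yielding graded rather than refined fields."""
    return w * (w_total / w.sum())
```

Because the subtractive form preserves differences while the multiplicative form preserves ratios, only the former sharpens the gap between strong and weak weights as the total shrinks.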

A third alternative to introduce weight competition (and hence emergence of selectivity and refined receptive fields) is the BCM rule. We contrasted the BCM rule with the Hebbian covariance learning rule in greatest detail for the following reasons (see text between Equations 2 and 3):

i) The BCM rule has been shown to generate selectivity in postsynaptic neurons that experience patterned inputs. Specifically, the framework can explain the emergence of ocular dominance (neurons in V1 being selective for input from one of the two eyes) and orientation selectivity in the visual system (Cooper et al., 2004).

ii) It implements homeostatic weight regulation by “sensing” ongoing activity in the network and adjusting weights in the network to maintain a target level of ongoing activity (as opposed to constraining the sum of the weights in the cases above). Hence, it is more comparable to our adaptive mechanism that “senses” ongoing activity and adjusts H-event amplitude. The difference is that in the BCM rule weights are adjusted, while in our model, cortically-generated activity is adjusted.

We now also include a discussion of these alternative normalization schemes and explain why we chose to directly contrast our results to those of the BCM rule (see Results section “A network model for connectivity refinements driven by spontaneous activity”, and third paragraph of the Discussion section “Assumptions in the model”).

2) The manuscript has benefited from making the main text lighter and shifting much of the standard mathematical treatment to the Appendix. Nevertheless, the reviewers felt that the presentation of the manuscript could be improved further to more clearly explain the proposed links between model behaviour and real development and to make it easier for experimentalists to follow.

We greatly value this feedback from the reviewers. Therefore, we took additional effort to explain the links between model behavior and real development throughout the manuscript. This is most prominent in our Introduction and Discussion, but also in the Results section “Adaptive H-events promote the developmental event sparsification of cortical activity.”

3) There was some discussion among the reviewers about the classification of spontaneous events into H and L types. Because this is critical for the study, further explanation of the previous evidence for this would be helpful.

Thank you – this is a great point. We used the classification of events into L and H based on Siegel et al., 2012. To classify spontaneous events observed in the visual cortex, Siegel et al., 2012, used a clustering analysis based on two features: the average amplitude and jitter (a measure related to synchrony) of events recorded in vivo using two-photon calcium imaging. This analysis revealed that events with 20-80% participation rate are statistically different from events with 80-100% participation rate (see their Figure 3D). Accordingly, 20-80% participation rate events have lower amplitudes and low synchronicity (hence called L-events), while 80-100% participation rate events have higher amplitudes and high synchronicity (hence called H-events). Based on additional experiments where the eyes were enucleated or retinal waves enhanced, Siegel et al., 2012, also reported that these events have different sources: retinal waves in the case of L-events and intra-cortical activity in the case of H-events.

The classification of events according to the participation rate of the cells in an event was also recently revisited in Leighton et al., 2020 based on new experiments in the Lohmann lab. Pairing two-photon calcium imaging of L2/3 with simultaneous whole-cell recordings in vivo in V1 neurons during development, the authors found that neurons participating in H-events fire significantly more action potentials than in L-events. The duration of spiking was also significantly longer during H- than L-events. An independent hierarchical clustering using the number of spikes and the duration of spiking confirmed an optimum of two event clusters with a split similar to the one reported in Siegel et al., 2012. A complementary analysis at the population level in wide-field recordings confirmed the existence of two types of events also at this larger scale, despite the different mean calcium amplitudes and event sizes. We now detail in the text how the original classification of events was implemented in Siegel et al., 2012 and the recent validation by Leighton et al., 2020 (second paragraph in the Results section “A network model for connectivity refinements driven by spontaneous activity”).
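For illustration, the participation-rate split described above reduces to a simple thresholding step. This is a sketch: the function name and the handling of events below the 20% participation rate are our assumptions, not part of the published classification pipeline.

```python
def classify_events(participation_rates, low=0.2, high=0.8):
    """Label spontaneous events by the fraction of active cells, following
    the split reported in Siegel et al., 2012: [low, high) -> 'L' and
    [high, 1.0] -> 'H'. Events below `low` are left unclassified (None)."""
    labels = []
    for p in participation_rates:
        if high <= p <= 1.0:
            labels.append('H')
        elif low <= p < high:
            labels.append('L')
        else:
            labels.append(None)
    return labels
```

Lowering `high` to 0.7 reproduces the robustness check with the 70% participation-rate threshold discussed below.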

Nonetheless, for our model, the exact point of division into L- and H-events is not critical. What matters is that we have (1) peripheral events originating in the first, input layer of the network (corresponding to the thalamus), which activate a smaller fraction of the neurons, and (2) cortical events originating in the second, output layer of the network (corresponding to the primary visual cortex), which activate a much larger fraction of the neurons. Indeed, our results persist when we repeat the analysis with Monte Carlo simulations in which L- and H-events are separated by a 70%, rather than 80%, participation rate threshold, so that L-events activate 20–70% of the thalamic neurons and H-events activate 70–100% of the cortical neurons (Author response image 2). All other parameters are the same as in Table 1 of the manuscript. The proportion of simulation outcomes classified as “selective” is comparable to the results presented in the main text (Figure 3C for the Hebbian covariance rule with non-adaptive H-events and Figure 4B for the Hebbian covariance rule with adaptive H-events).

We also explain this in the text (first paragraph of Results section “Spontaneous cortical H-events disrupt topographic connectivity refinement in the Hebbian covariance and BCM plasticity rules” and second paragraph of Results section “Adaptive H-events achieve robust selectivity”).

Author response image 2
Receptive field statistics for a different distribution of L- and H-event sizes (20–70% and 70–100% participation rates, respectively).

A. 300 Monte Carlo simulations with non-adaptive H-events: receptive field size (left) and proportion of simulations that resulted in selective, non-selective and decoupled receptive fields (right). Compare to Figure 3C. B. Same as A for adaptive H-events in the same range of parameters. Compare to Figure 4B.

4) The fact that the model itself produces L and H events seemed to throw a spanner into the works. One criticism of the model is that in the real brain, cortical spontaneous activity is not likely to be imposed from outside, as in the model, but is integral to the overall developmental dynamics and might involve circuits beyond the scope of the model that could regulate plasticity if needed. But now the significance of the imposed events seems to be called into question. It certainly does not help that H events can now be effectively identified as L events, resulting in confusion as to how to interpret what was going on and hence to evaluate the significance of the model.

We thank the reviewers for pointing out these critical issues. Indeed, the model does not include a mechanism for the generation of L- and H-events; instead, L-events in the input layer and H-events in the output layer of our network are imposed as external inputs. The reason for this gross simplification is our aim to focus solely on the plasticity of the synaptic connections between the sensory periphery (where L-events originate) and the cortex (where H-events originate) induced by these activity patterns (as we note in the Discussion section “Assumptions in the model”). We completely agree that a more complete network model of the developing cortex should generate its own spontaneous activity through developmental dynamics and additional circuit elements. In fact, how activity shapes the plasticity of synaptic connections (the question that we address) and how, in turn, the change of connectivity affects the generation of activity is a fascinating question that has inspired a few of our recent studies (Montangie et al., 2020, Wu et al., 2020), also in the context of cortical development (Kirchner et al., 2020). However, although recent studies are beginning to unravel the factors involved in the generation of L- and H-events (including inhibition, the role of different interneuron subtypes, and the structure of recurrent connectivity), see for example Leighton et al., 2020, the mechanistic underpinnings remain unknown and provide inspiration for future work.

We now realize that the re-classification of spontaneous events into L and H in the cortical layer of our model (Figure 7) is confusing. Since this re-classification is not necessary for the conclusions of our analysis in Figure 7, we removed it from the text. We now make the point that we do not know if the same criteria based on event participation rates and amplitude can be used to separate the spontaneous events into L and H at later developmental ages (first two paragraphs in Results section “Adaptive H-events promote the developmental event sparsification of cortical activity”). Therefore, we analyzed all spontaneous events of simulated cortical neurons during the process of receptive field refinement in the presence of adaptive H-events (Figure 7B, highlighted in gray). Due to the progressive receptive field refinements and the continued H-event adaptation in response to resulting activity changes, spontaneous events in our model progressively sparsify during ongoing development, meaning that spontaneous events become smaller in size with fewer participating cells.

We modified Figure 7 and the Results section “Adaptive H-events promote the developmental event sparsification of cortical activity” accordingly.

5) Figure 2: the initial conditions were noisy but arguably not much different in other respects from the final state. Can you use much broader and more scattered initial receptive field widths? Is more than a bit of smoothing and weight thresholding going on?

We thank the reviewers for this observation. In fact, by mistake we used a different colormap axis for the initial and final conditions in Figure 2C, hence giving the wrong impression that the initial bias was too strong. We now used the correct colormap in Figure 2 in the manuscript.

Additionally, we ran further simulations where we varied the spread and strength of the initial topographical bias (Figure 3—figure supplement 1). Unless the bias is very broad and weak, topographically organized receptive fields still form.

6) The method used to show adaptation in the experimental data is unclear. Surely the exponential is used to scale the weights of the values that go into the running average, not their amplitude? Assuming that is the case, what is the point of having a time constant for the averaging of 1000s when recordings do not last longer than 300 s? Effectively it is an unweighted average, or close to it.

The reviewers are correct; the exponential kernel was used to scale the amplitude of the events that are included in the running average.

We agree that the second point is confusing because we did not clearly explain our averaging procedure. Indeed, each individual recording in a given animal is ~5 min (300 s) long. For many of the animals, multiple such (mostly) consecutive recordings were obtained (see, e.g., Figure 5—figure supplement 1A). Therefore, we chose to concatenate such consecutive recordings, usually giving us 40 min and sometimes up to 70 min of continuous data. When computing the running average of amplitudes of all events preceding a given H-event, we considered only the Tmax seconds before that H-event. In this time window, we scaled the amplitude of the events with an exponentially decaying kernel with time constant τdecay. In Figure 5B in the manuscript, we showed the correlation for a single choice of these parameters: Tmax = 300 s and τdecay = 1000 s. In Figure 5—figure supplement 2C and D, we explored the correlation for other values of one of these parameters while fixing the other.

We added more details about our data analysis to the Results section “in vivo spontaneous cortical activity shows a signature of adaptation”.

7) Can the correlation between the amplitude of an H event and the average amplitude of preceding events shown in Figure 5B be more parsimoniously explained by slow changes in the overall signal strength over time? This was felt to be a key issue.

We thank the reviewers for raising this important point. In our manuscript, we included two supplementary figures that demonstrate the absence of such slow fluctuations in the recordings.

First, to identify possible fluctuations on a longer timescale, we analyzed consecutive recordings in the same animal (each ∼5 min long, between 3 and 14 per animal across all 26 animals). We found that the average amplitude of all (L- and H-) events is not significantly different across consecutive recordings of the same animal (Figure 5—figure supplement 1A in the manuscript; one-way ANOVA tests, p > 0.05 in 23 out of 26 animals). Second, we found that across different animals and ages, individual event amplitudes remained uncorrelated between successive recordings at this timescale, which we confirmed by plotting the difference in event amplitude as a function of the time between recordings (Figure 5—figure supplement 1B in the manuscript). Hence, our data analysis shows that slow fluctuations in cortical excitability are unlikely to account for the observed correlation. We now discuss this analysis in greater detail in the text (first paragraph of the Results section “in vivo spontaneous cortical activity shows a signature of adaptation”).

Although we did not find evidence for slow fluctuations in the overall data, we used our model to investigate whether such fluctuations could be an alternative mechanism to adaptation of H-event amplitude for the stable and robust refinement of receptive fields. We studied two different scenarios:

1) We first simulated a top-down input (of an unknown source) that generates slow correlated fluctuations in L- and H-events. To achieve this, we sampled amplitudes of L- and H-events from correlated Ornstein-Uhlenbeck processes, according to:

L_amp(t) = [α_L + ρ (X_1(t) + X_2(t))/2 + (1 − ρ) X_1(t)]_+
H_amp(t) = [α_H + ρ (X_1(t) + X_2(t))/2 + (1 − ρ) X_2(t)]_+

where α_L and α_H are constants, ρ is the correlation strength, [·]_+ denotes rectification, and X_1(t), X_2(t) are Ornstein-Uhlenbeck processes sampled with the exact update (Gillespie, 1996)

X(t + Δt) = X(t) μ + sqrt(c τ (1 − μ²)/2) n, with μ = exp(−Δt/τ),

where τ is the relaxation time of the process, c is the diffusion constant, and n is a random number drawn from a normal distribution with mean 0 and standard deviation 1. We fixed α_L = 1, α_H = 6, τ = 1 s, c = 0.1 and Δt = 0.01 s, and explored correlation strengths ρ = 0, 0.5 and 1. Using these slowly varying L- and H-event amplitudes, we quantified the receptive field size for different correlation strengths ρ between L- and H-event amplitudes (Figure 5—figure supplement 2A). We also quantified the proportion of simulations that resulted in selective, non-selective and decoupled receptive fields (Figure 5—figure supplement 2A, compare to Figure 3C in the manuscript). For the range of tested correlation strengths, less than 20% of simulations resulted in selective receptive fields, similar to the case of non-adaptive H-events (Figure 5—figure supplement 2A, compare to Figure 3C left in the manuscript).
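A sketch of this amplitude-generation step, assuming the exact Ornstein-Uhlenbeck update of Gillespie (1996) and a rectified mixture of a shared and a private process; the function names, seeds, and trajectory length are ours:

```python
import numpy as np

def ou_trajectory(n_steps, tau=1.0, c=0.1, dt=0.01, rng=None):
    """Exact Ornstein-Uhlenbeck update (Gillespie, 1996):
    X(t+dt) = X(t)*mu + sqrt(c*tau*(1 - mu**2)/2) * n, mu = exp(-dt/tau)."""
    if rng is None:
        rng = np.random.default_rng(0)
    mu = np.exp(-dt / tau)
    sigma = np.sqrt(c * tau * (1.0 - mu**2) / 2.0)
    x = np.zeros(n_steps)
    for i in range(1, n_steps):
        x[i] = x[i - 1] * mu + sigma * rng.standard_normal()
    return x

def amplitude(alpha, rho, x1, x2, private):
    """Rectified amplitude trace mixing a shared component (weight rho)
    with a private one (weight 1 - rho); `private` is x1 for L-event
    amplitudes and x2 for H-event amplitudes."""
    shared = rho * (x1 + x2) / 2.0
    return np.maximum(alpha + shared + (1.0 - rho) * private, 0.0)
```

For ρ = 1, the L- and H-amplitude traces share the same fluctuations around their baselines α_L = 1 and α_H = 6; for ρ = 0, they fluctuate independently.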

2) We also considered whether a top-down signal that decays on a slow developmental timescale and affects only H-event amplitudes (without directly integrating ongoing activity in the network) might generate selectivity and prevent cortical decoupling. In these simulations, H-event amplitudes decayed exponentially as a function of developmental time, with different time constants (Figure 5—figure supplement 2B). We quantified the receptive field size, as well as the proportion of simulations resulting in selective, non-selective, and decoupled receptive fields, for a range of decay time constants (Figure 5—figure supplement 2B; compare to Figure 3C in the manuscript). Only very fast decay time constants produced selective and refined receptive fields. This scenario corresponds to H-events being effective only at the beginning of the simulation, which reduces to the case of pure L-events in our manuscript (gray box in Figure 5—figure supplement 2B). We do not consider this a biologically plausible scenario, since H-events are detected in animals whose ages differ by several days (P8 to P10) (Siegel et al., 2012).
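Scenario 2 amounts to replacing activity-dependent adaptation with a fixed decay schedule. As a sketch (the time constant and units here are hypothetical, not the values used in the simulations):

```python
import math

def h_amplitude(t, alpha_h=6.0, tau_decay=100.0):
    """Hypothetical H-event amplitude at developmental time t: decays
    exponentially with time constant tau_decay, independent of ongoing
    network activity."""
    return alpha_h * math.exp(-t / tau_decay)

# With a fast decay, H-events are only effective early in the simulation,
# reducing the model to the pure L-event case.
early, late = h_amplitude(0.0), h_amplitude(500.0)
```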

This analysis is now included in the manuscript (Figure 5—figure supplement 3) and discussed in the second paragraph of the Results section “In vivo spontaneous cortical activity shows a signature of adaptation”.

Therefore, from our data analysis combined with the new simulations of receptive field refinement using alternative sources of slow activity fluctuations (either slowly varying, correlated L- and H-event amplitudes, or H-event amplitudes that decay slowly throughout development), we conclude that such mechanisms cannot generate the robust refinement of receptive fields that we find in our network model with the proposed adaptation of H-event amplitudes.

https://doi.org/10.7554/eLife.61619.sa2

Article and author information

Author details

  1. Marina E Wosniack

    1. Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
    2. School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
    Contribution
    Conceptualization, Software, Formal analysis, Methodology, Writing - original draft, Writing - review and editing, Performing simulations
    Competing interests
    No competing interests declared
ORCID iD: 0000-0003-2175-9713
  2. Jan H Kirchner

    1. Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
    2. School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
    Contribution
    Methodology, Writing - original draft
    Competing interests
    No competing interests declared
ORCID iD: 0000-0002-9126-0558
  3. Ling-Ya Chao

    Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
    Contribution
    Software, Methodology
    Competing interests
    No competing interests declared
ORCID iD: 0000-0002-6706-5939
  4. Nawal Zabouri

    Netherlands Institute for Neuroscience, Amsterdam, Netherlands
    Contribution
    Data curation, Investigation
    Competing interests
    No competing interests declared
  5. Christian Lohmann

    1. Netherlands Institute for Neuroscience, Amsterdam, Netherlands
    2. Center for Neurogenomics and Cognitive Research, Vrije Universiteit, Amsterdam, Netherlands
    Contribution
    Conceptualization, Resources, Writing - review and editing
    Competing interests
    No competing interests declared
ORCID iD: 0000-0002-1780-2419
  6. Julijana Gjorgjieva

    1. Computation in Neural Circuits Group, Max Planck Institute for Brain Research, Frankfurt, Germany
    2. School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
    Contribution
    Conceptualization, Resources, Supervision, Funding acquisition, Methodology, Writing - original draft, Project administration, Writing - review and editing
    For correspondence
    gjorgjieva@brain.mpg.de
    Competing interests
    No competing interests declared
ORCID iD: 0000-0001-7118-4079

Funding

Alexander von Humboldt-Stiftung

  • Marina E Wosniack

H2020 European Research Council (804824)

  • Jan H Kirchner
  • Julijana Gjorgjieva

Max-Planck-Gesellschaft

  • Marina E Wosniack
  • Jan H Kirchner
  • Ling-Ya Chao
  • Julijana Gjorgjieva

Brain and Behavior Research Foundation (26253)

  • Julijana Gjorgjieva

SMART START training program in computational neuroscience

  • Jan H Kirchner

Nederlandse Organisatie voor Wetenschappelijk Onderzoek (819.02.017)

  • Nawal Zabouri
  • Christian Lohmann

Stichting Vrienden van het Herseninstituut (805254845)

  • Nawal Zabouri
  • Christian Lohmann

Nederlandse Organisatie voor Wetenschappelijk Onderzoek (822.02.006)

  • Nawal Zabouri
  • Christian Lohmann

Nederlandse Organisatie voor Wetenschappelijk Onderzoek (ALWOP.216)

  • Nawal Zabouri
  • Christian Lohmann

Nederlandse Organisatie voor Wetenschappelijk Onderzoek (ALW Vici no. 865.12.001)

  • Nawal Zabouri
  • Christian Lohmann

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 804824 to JG). This work was further supported by the Max Planck Society (MW, JHK, LYC, JG), a NARSAD Young Investigator Grant from the Brain and Behavior Research Foundation (JG), a Capes-Humboldt Research fellowship (MW), the Smart Start joint training program in computational neuroscience (JHK), and by grants of the Netherlands Organization for Scientific Research (NWO, ALW Open Program grants, no. 819.02.017, 822.02.006 and ALWOP.216; ALW Vici, no. 865.12.001) and the “Stichting Vrienden van het Herseninstituut” (NZ, CL). We thank Stephen Eglen for discussions and ideas in the initial stages of the project.

Senior and Reviewing Editor

  1. Andrew J King, University of Oxford, United Kingdom

Reviewers

  1. József Fiser, Central European University, Hungary
  2. Nicholas V Swindale, University of British Columbia, Canada
  3. Jianhua Cang, University of Virginia, United States

Publication history

  1. Received: July 30, 2020
  2. Accepted: February 3, 2021
  3. Version of Record published: March 16, 2021 (version 1)

Copyright

© 2021, Wosniack et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


