Dendrites shape information flow in neurons. Yet, there is little consensus on the level of spatial complexity at which they operate. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models at any level of complexity. We show that (back-propagating) action potentials, Ca2+ spikes, and N-methyl-D-aspartate spikes can all be reproduced with few compartments. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. Furthermore, our methodology fits reduced models directly from experimental data, without requiring morphological reconstructions. We provide software that automatizes the simplification, eliminating a common hurdle toward including dendritic computations in network models.
Morphological neuron models have been instrumental in neuroscience (Segev and London, 2000). Major experimental discoveries, for instance that N-methyl-D-aspartate (NMDA) channels (MacDonald and Wojtowicz, 1982) produce local dendritic all-or-none responses (Schiller et al., 2000; Major et al., 2008), or that dendritic Ca2+ spikes mediate coincidence detection between distal inputs and somatic action potentials (APs) (Larkum et al., 1999), have been combined in morphological models to arrive at a consistent picture of dendritic integration: the dendrite is an intricate system of semi-independent subunits (Mel, 1993; Poirazi et al., 2003b; Poirazi et al., 2003a), amenable to dynamic regulation (Poleg-Polsky et al., 2018; Wybo et al., 2019), and able to distinguish specific input patterns (Branco and Häusser, 2010; Laudanski et al., 2014).
Nevertheless, morphological models are not without shortcomings. They are highly complex and consist of thousands of coupled compartments, each receiving multiple non-linear currents. The parameters of the models, typically fitted with evolutionary algorithms to electro-physiological recordings (Hay et al., 2011; Almog and Korngreen, 2014; Van Geit et al., 2016), number in the tens of thousands. Since recordings can only be obtained at a few dendritic sites, these fits are under-constrained, and thus susceptible to over-fitting. Accordingly, the single-neuron fitting challenge, where model performance was measured on unseen spike trains, was not won by a biophysical model, but by an abstract spiking model (Gerstner and Naud, 2009; DiLorenzo and Victor, 2013). Finally, many network-level observations can be explained without morphological models (Gerstner et al., 2012).
These shortcomings underline the need to find the essential computational repertoire of a neuron: the set of computations needed to understand brain functions such as learning and memory. Model simplification is crucial in this endeavor, as it elucidates the lowest level of complexity at which computational features are preserved. Conceptually, the simplification thus extracts the essential elements required for the computation from the underlying biophysics. Simulation-wise, the reduced model requires few resources and can thus be integrated in large-scale networks. Experimentally, the reduced model likely leads to a well-constrained fit.
Past simplification efforts can be grouped into two categories: approaches that use traditional compartments, but with adapted parameters (Pinsky and Rinzel, 1994; Destexhe, 2001; Tobin et al., 2006; Bahl et al., 2012; Marasco et al., 2013; Amsalem et al., 2020), and approaches that rely on more advanced mathematical techniques (Kellems et al., 2009; Kellems et al., 2010; Wybo et al., 2013; Wybo et al., 2015). Many authors apply ad hoc morphological simplifications and then adjust model parameters to conserve geometrical quantities, such as surface area (Davison et al., 2000; Hendrickson et al., 2011; Marasco et al., 2013), or electrical quantities, such as attenuation (Destexhe, 2001) and transfer impedance (Amsalem et al., 2020). Other authors propose two-compartment models whose parameters are adjusted to reproduce certain response properties (Pinsky and Rinzel, 1994; Naud et al., 2014). Nevertheless, these approaches often lack the flexibility to be adapted to a wide range of dendritic computations. Furthermore, particular afferent spatial connectivity motifs might be lost, so that essential computations are not captured by such a reduction. Hence, in a network model, the computational relevance of these motifs might unknowingly be ignored. Advanced mathematical techniques, on the other hand, are very flexible, as they can incorporate the response properties of the morphology implicitly. (Wybo et al., 2013 and Wybo et al., 2015 propose a system of convolutions whose kernels capture the passive properties of the compartmental model, whereas Kellems et al., 2009 and Kellems et al., 2010 linearly transform the compartmental model into a low-dimensional basis.) However, these techniques are not supported by standard simulation software, such as NEURON (Carnevale and Hines, 2004), a considerable hurdle toward their integration in the canonical neuroscience toolset.
Here, we introduce a simplification method with the flexibility of advanced mathematical techniques, but using traditional compartments. The approach accommodates any morphological model in public repositories (McDougal et al., 2017) and commonly used ion channels (Podlaski et al., 2017). The reduced models represent the optimal approximation to the dendritic resistance matrix, evaluated at any spatial resolution of choice. We fit ion channels by approximating the quasi-active resistance matrices (Koch and Poggio, 1985). All our fits are uniquely solvable linear least-squares problems. The obtained models extrapolate well to non-linear dynamics, and reproduce back-propagating APs (bAPs), Ca2+ spikes (Larkum et al., 1999), and NMDA spikes (Schiller et al., 2000; Major et al., 2008) with few compartments. Additionally, we investigate whether a dendritic tree with given afferent spatial connectivity motifs can be simplified by ablating subtrees or branches and grouping synapses in the next proximal compartment. We find that effective weight-rescale factors for synapses can be computed if temporal conductance fluctuations stay below a limit that depends on the difference in input resistance between the ablated branch and the next proximal dendrite. Under application of these factors, voltage responses in the simplified tree are preserved. Finally, we demonstrate that our approach can fit reduced models to experimental recordings, without the need to reconstruct full morphologies. We have created a Python toolbox (NEural Analysis Toolkit – NEAT – https://github.com/unibe-cns/NEAT) (copy archived at swh:1:rev:1cb15f36aa0a764105348541d046c85ef38e21ee) that implements this method together with an extension to NEURON to simulate the obtained models.
Our simplification strategy fits compartments with voltage-gated channels to a reduced set of dendritic locations of interest (Figure 1A, left-middle). The locations, together with the morphology, provide a tree structure for the reduced model (Figure 1A, middle-right) where each node corresponds to a traditional compartment and each edge to a coupling conductance. The fit requires that the reduced model responds similarly to perturbative current steps as the full neuron, at the set of chosen locations (Figure 1B). We consider perturbations around any spatially uniform holding potential $\overline{v}$. Experimentally, $\overline{v}$ may be reached by injecting a constant current $\overline{i}$, and thus is the effective equilibrium potential under this current injection. In models, we do not need to know $\overline{i}$; we simply assume that $\overline{v}$ is the equilibrium potential around which to linearize the model. Note that if $\overline{i} = 0$, $\overline{v}$ is the resting membrane potential.
Suppose we have chosen $N$ locations. Let $\widetilde{\mathbf{i}}(f)$ be a vector describing perturbative input currents to each of those locations. The linearized voltage response of the full neuron, at those locations, is given in the Fourier domain for an input of a particular frequency $f$ by

$\widetilde{\mathbf{v}}(f) = \mathbf{Z}(f)\,\widetilde{\mathbf{i}}(f)$, (Equation 1)

with $\mathbf{Z}(f)$ the quasi-active (Koch and Poggio, 1985) impedance matrix. In experiments, $\mathbf{Z}(f)$ is extracted from $\widetilde{\mathbf{v}}(f)$ measured in response to $\widetilde{\mathbf{i}}(f)$. In models, $\mathbf{Z}(f)$ is algorithmically computed based on the full morphology, the parameters of the passive currents, and the dynamics of the voltage-gated ion channels (Koch and Poggio, 1985). For our reduction method, we consider perturbative current steps $\overline{\mathbf{i}}$, so that after an equilibration period, we obtain a steady-state voltage response $\overline{\mathbf{v}}$. Note that this corresponds to evaluating (Equation 1) at zero frequency, and that the corresponding matrix $\mathbf{Z}(0)$ contains the input and transfer resistances. Throughout the manuscript, we maintain the notation that $z$ without argument signifies $z(0)$, representing an individual input or transfer resistance, and that $\mathbf{Z}$ signifies $\mathbf{Z}(0)$, representing the resistance matrix.
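The structure of the steady-state resistance matrix can be illustrated numerically. The sketch below assembles the conductance matrix of a hypothetical three-compartment passive chain and inverts it to obtain the zero-frequency resistance matrix; all conductance values are illustrative, not fitted quantities from the paper:

```python
import numpy as np

# Hypothetical passive chain of three compartments (soma - dend1 - dend2).
# g_leak: leak conductance per compartment (uS); g_c: coupling conductances (uS).
g_leak = np.array([0.05, 0.02, 0.01])
g_c = np.array([0.02, 0.01])  # soma-dend1, dend1-dend2

# Assemble the steady-state conductance matrix: diagonal entries hold the leak
# plus the couplings to all neighbors, off-diagonal entries the negative couplings.
G = np.diag(g_leak)
for n, g in enumerate(g_c):
    G[n, n] += g
    G[n + 1, n + 1] += g
    G[n, n + 1] -= g
    G[n + 1, n] -= g

# Steady state: v = Z i, with Z the inverse of G (Equation 1 at zero frequency).
Z = np.linalg.inv(G)

# Diagonal entries are input resistances, off-diagonal entries transfer
# resistances; a transfer resistance never exceeds the input resistance
# at either of its two sites.
assert np.all(np.diag(Z) >= Z.max(axis=1) - 1e-12)
```

The same inversion relationship underlies the fit below: the reduced model's conductance matrix is adjusted so that its inverse matches the measured or calculated resistance matrix.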
Linearizing the reduced compartmental model around a holding potential $\overline{v}$ (see Materials and methods — The conductance matrix) yields an $N \times N$ conductance matrix $\mathbf{G}$, with on its diagonal

$G_{nn} = g_{\mathrm{L},n} + \sum_{m \in \mathcal{N}_n} g_{nm} + \sum_{x \in \mathcal{X}_n} \overline{g}_{x,n}\, d_x(\overline{v})$, (Equation 2)

where $g_{\mathrm{L},n}$ is the unknown leak conductance of the $n$-th compartment, $\mathcal{N}_n$ the set of nearest neighbor compartments, and $g_{nm}$ the unknown coupling conductance between compartment $n$ and compartment $m$. The second sum runs over the set $\mathcal{X}_n$ of all linearized currents of ion channels present in compartment $n$, where $\overline{g}_{x,n}$ is the unknown maximal conductance of ion channel $x$ and $d_x(\overline{v})$ is a factor that follows from linearizing the dynamics of channel $x$ around the holding potential (see Materials and methods – Quasi-active channels). Thus, the unknowns $g_{\mathrm{L},n}$, $g_{nm}$ for $m \in \mathcal{N}_n$, and $\overline{g}_{x,n}$ for $x \in \mathcal{X}_n$ are to be fitted for each compartment, and the factors $d_x(\overline{v})$ are known and determined by the ion-channel dynamics. The $nm$-th off-diagonal element of $\mathbf{G}$ is $-g_{nm}$ – the negative of the coupling conductance between compartments $n$ and $m$ – if $n$ and $m$ are nearest neighbor compartments on the reduced tree structure, and zero otherwise. The conductance matrix relates the reduced model’s voltage response to the perturbative input current steps

$\overline{\mathbf{v}} = \mathbf{G}^{-1}\,\overline{\mathbf{i}}$. (Equation 3)
The full neuron and reduced model will behave similarly for all possible perturbative input steps if their responses match. From (Equation 1) and (Equation 3), it follows that $\mathbf{G}$ should be the inverse of $\mathbf{Z}$. Consequently, we require that multiplying the known (measured or calculated) $\mathbf{Z}$ by the parametric $\mathbf{G}$ yields the identity

$\mathbf{Z}\,\mathbf{G} = \mathbb{1}$. (Equation 4)
From (Equation 2), it can be seen that (Equation 4) is linear in the parameters that have to be fitted (leak, coupling, and maximal ion-channel conductances of the reduced compartments). By consequence, (Equation 4) can be cast into a least-squares problem and solved accurately (Figure 1C, Figure 1—figure supplement 1A).
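To make the least-squares construction concrete, the sketch below vectorizes the condition of (Equation 4) into a linear system in the unknown leak and coupling conductances and solves it with numpy. The three-compartment ground truth and the basis-matrix formulation are our own illustrative choices, not code from the accompanying toolbox:

```python
import numpy as np

# Unknowns p: three leak conductances, then two coupling conductances.
def basis_matrices(n_comp, edges):
    """One basis matrix per unknown, so that G(p) = sum_k p[k] * B[k]."""
    mats = []
    for n in range(n_comp):                  # leak of compartment n
        B = np.zeros((n_comp, n_comp))
        B[n, n] = 1.0
        mats.append(B)
    for n, m in edges:                       # coupling between n and m
        B = np.zeros((n_comp, n_comp))
        B[n, n] = B[m, m] = 1.0
        B[n, m] = B[m, n] = -1.0
        mats.append(B)
    return mats

mats = basis_matrices(3, [(0, 1), (1, 2)])

# "Measured" resistance matrix, here generated from a known ground truth.
p_true = np.array([0.05, 0.02, 0.01, 0.02, 0.01])
Z = np.linalg.inv(sum(p * B for p, B in zip(p_true, mats)))

# Vectorize Z @ G(p) = identity: unknown k contributes the column vec(Z @ B_k).
A = np.column_stack([(Z @ B).ravel() for B in mats])
p_fit, *_ = np.linalg.lstsq(A, np.eye(3).ravel(), rcond=None)
assert np.allclose(p_fit, p_true)
```

Because each unknown multiplies a fixed basis matrix, the system has a unique least-squares solution whenever the chosen locations constrain all conductances.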
Since the linearized ion channel activation depends on the holding potential, $\mathbf{Z}$ changes with $\overline{v}$ (Figure 1D). The fit must disentangle the changes in $\mathbf{Z}(\overline{v})$ induced by the various channels. In models, we first block all voltage-gated ion channels in the full and reduced models and fit $\mathbf{G}_{\mathrm{pas}}$ to $\mathbf{Z}_{\mathrm{pas}}$ according to (Equation 4), and thus obtain leak and coupling conductances for each compartment. Then, we unblock one ion channel at a time and decompose the conductance matrix into $\mathbf{G}(\overline{v}) = \mathbf{G}_{\mathrm{pas}} + \mathbf{G}_{\mathrm{chan}}(\overline{v})$, with $\mathbf{G}_{\mathrm{chan}}(\overline{v})$ a diagonal matrix containing the conductances of the unblocked channel, linearized around $\overline{v}$, at each compartment. Thus $\mathbf{G}_{\mathrm{chan}}(\overline{v})$ depends linearly on the unknown maximal ion channel conductance parameters. With $\mathbf{Z}(\overline{v})$ and $\mathbf{G}_{\mathrm{pas}}$ known, we optimize these maximal conductances, so that the left-hand side of

$\mathbf{Z}(\overline{v})\,\bigl(\mathbf{G}_{\mathrm{pas}} + \mathbf{G}_{\mathrm{chan}}(\overline{v})\bigr) = \mathbb{1}$ (Equation 5)

matches the right-hand side, and that it does so for multiple holding potentials (Figure 1D, Figure 1—figure supplement 1D). Thus, we obtain an overdetermined system of equations in the unknown maximal channel conductances at each compartment and compute its solution in the least-squares sense.
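The overdetermined channel fit can be sketched for a single compartment with one channel. The sigmoidal linearization factor below is a generic stand-in, not a specific channel model from the paper, and all values are illustrative:

```python
import numpy as np

# Generic sigmoidal linearization factor d(vh) -- a stand-in, not a real channel.
d = lambda vh: 1.0 / (1.0 + np.exp(-(vh + 50.0) / 10.0))

g_pas, gbar_true = 0.05, 0.3                # uS; gbar_true is the ground truth
vhs = np.array([-75.0, -55.0, -35.0])       # example holding potentials (mV)
z = 1.0 / (g_pas + gbar_true * d(vhs))      # "measured" input resistances

# One equation per holding potential, z(vh) * (g_pas + gbar * d(vh)) = 1,
# rearranged to be linear in the single unknown gbar; solved in the
# least-squares sense over all holding potentials at once.
A = (z * d(vhs))[:, None]
b = 1.0 - z * g_pas
gbar_fit, *_ = np.linalg.lstsq(A, b, rcond=None)
assert abs(gbar_fit[0] - gbar_true) < 1e-9
```

With several channels per compartment, each channel contributes one column to the system, but the problem remains linear in the maximal conductances.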
Having fixed the steady-state behavior of the reduced model, we are left with the capacitance of each compartment to match the temporal dynamics of the full model as closely as possible. In our reductions, the capacitance of each compartment is thus a parameter to be fitted. To do so, we found it sufficient to consider passive membrane dynamics. Blocking all voltage-gated ion channels in the reduced model ($\mathbf{G} = \mathbf{G}_{\mathrm{pas}}$), temporal voltage responses follow

$\mathbf{C}\,\dot{\mathbf{v}}(t) = -\mathbf{G}_{\mathrm{pas}}\,\mathbf{v}(t) + \mathbf{i}(t)$, (Equation 6)

with $\mathbf{C} = \mathrm{diag}(\mathbf{c})$ and $\mathbf{c}$ a vector containing the unknown capacitance of each compartment. We require that voltage decay back to rest, as described by (Equation 6), matches voltage decay in the full neuron with all active channels blocked, at the chosen locations. According to linear dynamical systems theory (Strogatz, 2000), this decay can be decomposed as a sum of exponentially decaying eigenmodes, each with an associated eigenvalue $\lambda_k$ and eigenvector $\boldsymbol{\phi}_k$. The former gives the exponential time scale of the decay and the latter the spatial profile of the mode. The most prominent of these modes has the largest time scale $\tau_0 = -1/\lambda_0$, and primarily models the voltage decay back to rest through trans-membrane currents (Holmes et al., 1992). In experiments, we extract this mode by fitting an exponential to the voltage decay. In full models, we compute the eigenvalue $\lambda_0$ and eigenvector $\boldsymbol{\phi}_0$ – restricted to the chosen locations – with the separation-of-variables method (Major et al., 1993). In the reduced model, eigenmodes are found as the eigenvalues and eigenvectors of the matrix $-\mathbf{C}^{-1}\mathbf{G}_{\mathrm{pas}}$. To fit the capacitances, we require that $-\mathbf{C}^{-1}\mathbf{G}_{\mathrm{pas}}$ has $\lambda_0$ as eigenvalue and $\boldsymbol{\phi}_0$ as corresponding eigenvector:

$-\mathbf{C}^{-1}\,\mathbf{G}_{\mathrm{pas}}\,\boldsymbol{\phi}_0 = \lambda_0\,\boldsymbol{\phi}_0$. (Equation 7)
This system of equations is linear in the reciprocals of the capacitances entering in $\mathbf{C}^{-1}$ and can thus be solved efficiently.
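A minimal sketch of this capacitance fit, assuming an illustrative passive conductance matrix and an illustrative target eigenmode (time scale tau0 with spatial profile phi0):

```python
import numpy as np

# Illustrative passive conductance matrix of a three-compartment reduction (uS).
G_pas = np.array([[0.07, -0.02, 0.00],
                  [-0.02, 0.05, -0.01],
                  [0.00, -0.01, 0.02]])

# Illustrative slowest mode of the full model at the chosen locations,
# e.g. obtained with the separation-of-variables method.
tau0 = 20.0                                # ms
phi0 = np.array([0.4, 0.7, 1.0])

# Row n of the eigenvalue condition reads (1/c_n) (G_pas phi0)_n = phi0_n / tau0,
# which is linear in the reciprocal capacitances and solvable per compartment.
c = tau0 * (G_pas @ phi0) / phi0           # capacitances (nF)
C_inv = np.diag(1.0 / c)

# Check: the target mode is now an eigenpair of C^-1 G_pas with rate 1/tau0.
assert np.allclose(C_inv @ G_pas @ phi0, phi0 / tau0)
```

The diagonal structure of the capacitance matrix makes the fit decouple across compartments, so each capacitance follows from one scalar equation.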
To accurately reproduce the spatio-temporal voltage responses of the full model, the reduced model must approximate the ‘impedance kernels’ (Wybo et al., 2015), the inverse Fourier transforms of the frequency-dependent elements of the impedance matrix $\mathbf{Z}(f)$. Despite only fitting the largest time scale $\tau_0$, we accurately reproduce kernels at all times (Figure 1E,F, Figure 1—figure supplement 1B).
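For a passive model, these kernels can be written as sums of exponentially decaying eigenmodes, and their time-integral recovers the resistance matrix. The sketch below illustrates this with an arbitrary two-compartment example; all parameter values are illustrative:

```python
import numpy as np

# Arbitrary two-compartment passive model: conductance (uS) and capacitance (nF).
G = np.array([[0.07, -0.02],
              [-0.02, 0.03]])
C = np.diag([1.0, 0.5])

# Symmetrize with S = C^(1/2); the eigenmodes of S^-1 G S^-1 give the kernel
# time scales, and phi_k = S^-1 u_k the mode shapes in voltage space.
S = np.sqrt(C)
Si = np.linalg.inv(S)
w, U = np.linalg.eigh(Si @ G @ Si)
taus = 1.0 / w                              # kernel time scales (ms)

# Kernel matrix K(t) as a sum of exponentially decaying modes.
t = np.linspace(0.0, 500.0, 50001)
K = np.zeros((t.size, 2, 2))
for k in range(2):
    phi = Si @ U[:, k]
    K += np.exp(-t / taus[k])[:, None, None] * np.outer(phi, phi)

# The time-integral of the kernel matrix recovers the resistance matrix G^-1.
dt = t[1] - t[0]
Z_num = ((K[:-1] + K[1:]) * 0.5).sum(axis=0) * dt   # trapezoidal rule
assert np.allclose(Z_num, np.linalg.inv(G), atol=1e-3)
```

This is why fitting the resistance matrix together with the slowest time scale constrains the kernels so effectively: the integral and the dominant decay of every kernel are then both matched.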
Unless a large number of closely spaced compartment sites is retained on the morphology, our reductions replace the true axial currents flowing along the dendritic tree, which are shaped by the distribution of ion channels in the dendritic membrane, with a strong abstraction: a single coupling conductance between compartments that may be far apart. Hence, it is not guaranteed that the resting voltage of the reduced model will equal the resting voltage of the full model at the chosen locations. We ensure that this is the case by fitting the leak reversals for each compartment. The product of leak conductance and reversal represents a constant current for each compartment, and thus does not influence model dynamics beyond fixing its resting voltage. To fit the leak reversals, we evaluate all ion-channel currents in the reduced model at the full model’s resting voltage and require that the temporal voltage derivatives are zero. The obtained equations are linear in the reduced model’s leak reversals, and hence can be solved efficiently. Note that our methodology does not guarantee that the obtained reversals are within physiological limits. Thus, they should be seen purely as fit parameters that determine the resting membrane potential.
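A one-channel, two-compartment sketch of this reversal fit; the channel activation curve and all parameter values are illustrative placeholders:

```python
import numpy as np

# Two coupled compartments; all values below are illustrative placeholders.
v_rest = np.array([-75.0, -72.0])           # full model's resting potentials (mV)
g_L = np.array([0.05, 0.02])                # fitted leak conductances (uS)
g_c = 0.01                                  # fitted coupling conductance (uS)
gbar = np.array([0.1, 0.3])                 # fitted channel conductances (uS)
e_chan = 50.0                               # channel reversal (mV)
m = lambda v: 1.0 / (1.0 + np.exp(-(v + 50.0) / 10.0))  # stand-in activation

# Require dv/dt = 0 at v_rest:
# 0 = -g_L (v - e_L) - gbar m(v) (v - e_chan) - g_c (v - v_neighbor),
# which is linear in the leak reversals e_L.
i_chan = gbar * m(v_rest) * (v_rest - e_chan)
i_coup = g_c * (v_rest - v_rest[::-1])
e_L = v_rest + (i_chan + i_coup) / g_L

# Check: with these reversals, the net current at v_rest vanishes. Note that
# e_L may land outside physiological limits -- it is purely a fit parameter.
net = -g_L * (v_rest - i_chan / g_L - i_coup / g_L) - i_chan - i_coup + g_L * (e_L - v_rest) - (e_L - v_rest) * g_L + (-g_L * (v_rest - e_L) - i_chan - i_coup) * 0
net = -g_L * (v_rest - e_L) - i_chan - i_coup
assert np.allclose(net, 0.0)
```

With strongly activating channels, the compensating reversals can indeed be far from typical leak reversals, consistent with the caveat above.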
We demonstrated the reduction on two computations that require accurate spatio-temporal interaction. We reproduced sequence discrimination in an L2/3 pyramidal cell model (Branco and Häusser, 2010), where a neuron responds more strongly to a centripetal sequence of inputs than to a centrifugal one, by only retaining compartments on a single branch (Figure 2A). We also reproduced input-order detection (Torben-Nielsen and Stiefel, 2009), where a neuron responds more strongly when one input arrives before the other, by only retaining compartments at the soma and the two input sites (Figure 2B).
We then tested our approach with non-linear currents concentrated at discrete points (hot-spots) along the morphology. In two biophysical models – an L5 neocortical pyramid (Hay et al., 2011; Figure 2C) and a Purkinje cell (Chen et al., 2013; Miyasho et al., 2001; Figure 2D) – we removed active dendritic channels, but added their total conductance at rest to the passive leak (we term this the 'passified’ model), while retaining all active channels at the soma. The soma, with its AP-generating channels, naturally forms such a non-linear hot-spot. We implemented dendritic hot-spots by clustering sets of excitatory (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid [AMPA] + NMDA) and inhibitory (γ-aminobutyric acid [GABA]) synapses at randomly selected points on the morphology. Both the reduced pyramidal cell model and the reduced Purkinje cell model accurately reproduced the somatic and dendritic membrane voltage traces of their full counterparts (Figure 2C,D, middle panels). Furthermore, 97% and 89% of APs coincided within a 6 ms window, respectively (Jolivet et al., 2008; Figure 2C,D, right panels).
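The spike-coincidence measure can be illustrated with a simplified fraction-of-matched-spikes proxy; the full coincidence factor of Jolivet et al., 2008 additionally corrects for chance coincidences, which we omit here, and the spike times below are illustrative:

```python
import numpy as np

def coincidence_fraction(t_full, t_red, window=6.0):
    """Fraction of full-model spikes with a reduced-model spike within
    window/2 ms (both spike-time lists assumed non-empty, times in ms)."""
    t_full = np.asarray(t_full, dtype=float)
    t_red = np.asarray(t_red, dtype=float)
    # distance from every full-model spike to its nearest reduced-model spike
    d = np.abs(t_full[:, None] - t_red[None, :]).min(axis=1)
    return float(np.mean(d <= window / 2.0))

# Illustrative spike trains: three of the four full-model spikes are matched.
print(coincidence_fraction([10.0, 55.0, 120.0, 200.0], [11.5, 57.0, 119.0]))  # 0.75
```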
Models with passive dendritic membranes but non-linear synapse clusters can already reproduce much of the canonical computational repertoire of dendrites. Clusters of AMPA+NMDA synapses at the distal tips of basal dendrites can make neurons function as two-layer networks (Mel, 1993; Poirazi et al., 2003b; Poirazi et al., 2003a). Such reduced models also exhibit the same robustness to input noise as the full models (Poleg-Polsky, 2019), and can include modulation of the co-operativity between AMPA+NMDA clusters through compartments with shunting conductances (Wybo et al., 2019). Furthermore, because our reduced models reproduce dendritic input impedance properties, long time-scale NMDA currents activate strongly in distal compartments as high distal input impedance helps to overcome their voltage-dependent Mg2+ block (Jahr and Stevens, 1990a; Jahr and Stevens, 1990b), whereas short time-scale AMPA currents dominate in more proximal compartments. Thus passive models with non-linear synapse clusters also capture the shift in computational strategy from a proximal temporal code to a distal rate code (Branco and Häusser, 2011).
What level of spatial complexity reproduces characteristic responses generated by somatic and dendritic ion channels? These responses arise through the interplay of integrative properties of the morphology – captured in the impedance kernels – and channel dynamics. If a full model (here the L2/3 pyramid) is reduced to a single somatic compartment (Figure 3A), the impedance kernel has only one time scale – the membrane time-scale (Figure 3B, top – purple). The kernel of the full model contains additional shorter time scales (Figure 3B, top – gray), leading to a faster AP onset than in the reduced model (Figure 3B, bottom). This effect is thus a fundamental limitation of the single-compartment reduction. Extending the compartmental model with four nearby dendritic sites (Figure 3A) adds fast components to the somatic impedance kernel (Figure 3B, top – green) that model the spread of charge to the other compartments. These components increase AP amplitude and decrease AP delay (Figure 3C), leading to a better match with the full model.
Dendritic ion channels support the back-propagation of APs (Stuart et al., 1997). For each basal branch of at least 150 μm in the L2/3 pyramid, we derived reduced models with three compartments (at 50, 100, and 150 μm from the soma) and with one compartment. We compared bAP amplitudes and delays relative to the somatic AP (Figure 3E) and found that even models with a single distal compartment support active back-propagation (Figure 3E,F). In the apical dendrite of the L5 pyramid (Figure 3G), we found that a distance step of 100 μm between compartments was required to support bAPs to the same degree as in the full model (Figure 3H,I).
Finally, we considered the pairing of a somatic current pulse with a dendritic post-synaptic potential waveform. We found that an 11-compartment reduction (Figure 3J) – with compartments spaced at 100 μm through the apical trunk to support bAPs until the Ca2+ hot-zone – reproduced the Ca2+-mediated coincidence detection mechanism (Larkum et al., 1999; Pérez-Garci et al., 2013).
Our method allows us to reduce morphological complexity by removing any branch or subtree without affecting the integrative properties of other branches or subtrees. Up until now, we have only considered reductions where the to-be-removed branch or subtree does not receive direct synaptic input. However, if this branch or subtree does receive synaptic input, the morphological reduction also simplifies the afferent spatial connectivity motifs targeting it, as synapses have to be grouped at the nearest proximal intact compartment. We assess whether the computational repertoire of the neuron remains intact under such a simplification, or which elements of the repertoire might be lost. We study two proxies for this repertoire: the voltage difference between full and reduced models at the intact compartments, to assess whether the output of local computations in the ablated branch or subtree can be recovered; and the difference between full and reduced models in NMDA-spike threshold at the nearest proximal intact compartment (quantified as the smallest number of AMPA+NMDA synapses with a weight of 1 nS that need to be activated simultaneously to elicit an NMDA spike), to assess whether local integrative properties at the intact compartments can also be retained under the shift of synapses. We allow ourselves the liberty of rescaling the weights of the shifted synapses by a temporally constant factor (Figure 4A). Which spatial synapse distributions admit simplification in this sense, and under which input conditions?
For a single, current-based synapse, shifted from its original location $s$ on the ablated branch to the next proximal compartment $c$, we analytically compute the weight-rescale factor and find $\lambda = z_{cs} / z_{cc}$, with $z_{cs}$ the transfer resistance from synapse site $s$ to compartment site $c$ and $z_{cc}$ the input resistance at $c$. Since $z_{cs} \leq z_{cc}$ (Koch, 1998), the weight-rescale factor weakens the shifted synapse, so that it matches the attenuation of its distal counterpart. Nevertheless, when $s$ is located more distally on the same branch or subtree as $c$, the transfer resistance is often close to the input resistance (Figure 1—figure supplement 1E). Thus, $\lambda$ is often close to one, and the rescaling hence negligible (Figure 4B,C). For a conductance-based synapse, rescaling weights by $z_{cs}/z_{cc}$ does not accurately fit the full model (Figure 4B). We decompose the synaptic conductance as $g(t) = \overline{g} + \delta g(t)$, with $\overline{g}$ the temporal average and $\delta g(t)$ the fluctuations around $\overline{g}$. We then analytically compute (see Materials and methods — Synaptic weight-rescale factors) that the weight-rescale factor for a conductance-based synapse

$\lambda = \dfrac{z_{cs}}{z_{cc}}\,\dfrac{1}{1 + \overline{g}\,\Delta z}$ (Equation 8)

recovers the full model’s voltage if

$|\delta g(t)| \ll \dfrac{1}{\Delta z} + \overline{g}$, (Equation 9)

where $\Delta z = z_{ss} - z_{cc}$ is the difference between the input resistance $z_{ss}$ at the original synapse site $s$ and the input resistance $z_{cc}$ at the nearest proximal compartment site $c$. Multiplying the synaptic weights by $\lambda$ weakens the synapse so that it matches the shunt effect of its distal counterpart, and so that the voltage at all compartments more proximal than $c$, and in their respective side branches, is reproduced (Figure 4D).
If $z_{ss}$ and $z_{cc}$ are close, $\Delta z$ is small and (Equation 9) is satisfied for plausible conductance fluctuations. Thus, shifting the synapse to the next proximal compartment keeps the voltage response intact. If the separation between $s$ and $c$ increases, $\Delta z$ increases accordingly, and the magnitude of tolerated conductance fluctuations shrinks (Equation 9). Thus, for a given magnitude of $\delta g(t)$, it is possible that a thick branch can be removed, but not a thin branch, as the input resistance increase in the former is much more gradual than in the latter. Nevertheless, (Equation 9) suggests that even for large $\Delta z$ the full model’s voltage can be recovered by downscaling synaptic weight by $\lambda$, but only if $|\delta g(t)| \ll \overline{g}$. A tonic level of AMPA and/or GABA activation, as in the high-conductance state (Destexhe et al., 2003), can implement this input condition. For NMDA synapses, this input condition does not hold, as NMDA spikes arise through a strong, transient conductance increase. However, when a cluster of such synapses – to be moved to a proximal site – is considered in isolation, a workaround can be found by applying constant rescale factors not just to synaptic weight, but also to threshold and width of the Mg2+ block, as well as to the reversal (see Materials and methods — Synaptic weight-rescale factors). These factors recover NMDA-spike shape (Figure 4E) and threshold (Figure 4F) when the cluster is moved from $s$ to $c$.
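The rescale factors and the fluctuation bound of (Equation 9) can be evaluated numerically; the sketch below uses the forms discussed above with illustrative resistance and conductance values (the exact analytical expressions are derived in Materials and methods):

```python
# Illustrative resistances (MOhm): synapse site s and proximal compartment c.
z_ss, z_cs, z_cc = 250.0, 90.0, 100.0
g_mean = 0.002                     # uS, temporal average synaptic conductance

# Current-based synapse: the factor matches the attenuation of the distal input.
lam_current = z_cs / z_cc          # 0.9

# Conductance-based synapse: additionally correct for the smaller shunt
# experienced at the proximal site (dz = input-resistance difference).
dz = z_ss - z_cc
lam_conductance = (z_cs / z_cc) / (1.0 + g_mean * dz)

# Magnitude of conductance fluctuations tolerated by the rescaling
# (the bound of Equation 9): large when dz is small.
dg_bound = 1.0 / dz + g_mean

assert lam_conductance < lam_current <= 1.0
```

As the text notes, for a thick branch dz stays small and the bound is permissive, whereas for a thin, high-impedance branch the tolerated fluctuations shrink toward the mean-conductance limit.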
Our analytical weight-rescale factor treated reductions where a single input site $s$ was moved to $c$. We now consider the ablation of a whole subtree and the shift of all its synapses to $c$ (Figure 4G). In our simulations, we distributed 100 AMPA and 100 GABA synapses on the to-be-ablated subtree, and activated them with a Poisson process with fixed rate. We studied how the reduction influenced the local integrative properties at the intact compartment $c$. As a proxy for those integrative properties, we checked the shape of the waveform elicited by activating AMPA+NMDA synapses at $c$ in a short burst (Figure 4H), and we quantified the threshold for NMDA-spike generation. We found that neither shape nor threshold were preserved by the analytical weight-rescale factors, derived for the case where only a single input site is present on the subtree. We numerically extend the derivation of the weight-rescale factors to multi-site inputs (see Materials and methods – Synaptic weight-rescale factors). These extended weight-rescale factors reproduced local voltage (Figure 4H) and NMDA-spike threshold (Figure 4I). We also investigate whether the accuracy of the reduction can still be conceptualized as depending on $\Delta\overline{z}\,\overline{g}$ – the product between (1) the average difference in input resistance between synapses on the to-be-ablated subtree and the compartment site $c$ and (2) the average conductance load exerted by those synapses. To do so, we activated the 100 AMPA and 100 GABA synapses on the to-be-ablated subtree at a wide range of input conditions (see Materials and methods – Simulation-specific parameters), and measured the error in threshold for NMDA-spike generation at $c$. We found that this error depended strongly on $\Delta\overline{z}\,\overline{g}$ and remained small as long as this product was small (Figure 4J).
In conclusion, we can now assess whether given spatial synapse motifs on branches or subtrees admit simplification by shifting the synapses to the nearest proximal compartment site and ablating the branch or subtree. We find that reduction in this sense is possible for any motif by applying temporally constant weight-rescale factors, provided the conductance fluctuations are small, so that $|\delta g(t)| \ll \overline{g}$. Otherwise, the synapses must be located sufficiently close to the compartment site, so that $\Delta\overline{z}$ is small and the product $\Delta\overline{z}\,\overline{g}$ remains small for all relevant activation levels.
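A sketch of how this ablation criterion might be evaluated for a set of synapses on a to-be-ablated subtree; the averaging choices below (mean resistance difference, summed conductance load) are our illustrative reading of the criterion, and all values are placeholders:

```python
import numpy as np

# Input resistances at the synapse sites on the to-be-ablated subtree (MOhm)
# and at the proximal compartment site c; all values illustrative.
z_syn = np.array([300.0, 420.0, 510.0])
z_cc = 120.0
g_mean = np.array([0.0010, 0.0015, 0.0008])   # uS, average conductance per synapse

dz_mean = float(np.mean(z_syn - z_cc))        # average resistance difference
g_load = float(np.sum(g_mean))                # total average conductance load
criterion = dz_mean * g_load                  # dimensionless product

# Small values of the product indicate that the subtree can be ablated and
# its synapses shifted to c without distorting local integration.
print(criterion)
```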
We next investigated how many compartments are needed to reproduce somatic and dendritic responses under a synaptic bombardment and quantified the contribution of active dendritic Na+, Ca2+, and K+ channels. We mimicked the in vivo state in the basal dendrites of the L2/3 pyramid by distributing 200 AMPA and 200 GABA synapses receiving Poisson inputs and 20 clusters of AMPA+NMDA synapses receiving bursts of inputs (Figure 5A). We sampled 10 different sets of meta-parameters governing synaptic activation (Poisson and burst rates and synaptic weights, see Materials and methods — Simulation-specific parameters), resulting in output spike rates between 0 and 8 Hz (Figure 5B). We derived reduced models, once based on the full model with all active channels (‘active reduced dendrite’ in Figure 5C) and once based on the passified full model (‘passive reduced dendrite’ in Figure 5C). We distributed increasing numbers of compartment sites on the basal branches and measured somatic and dendritic voltage responses for the active full model and for the reduced models with active and passive dendrites (Figure 5C, Figure 5—figure supplement 1 for dendritic traces). Spike coincidence (Figure 5D), subthreshold somatic voltage error (Figure 5E), and dendritic voltage error (Figure 5F) showed a significant improvement when compartment numbers increased from 0 (point neuron – top row in Figure 5C) to 2 per 100 μm. In the presence of active channels, the error continued to decrease up to three compartments per 100 μm (Figure 5E,F). Spike coincidence factors were consistently ∼10% higher than in the passive case (Figure 5D).
Traditionally, creating a morphological neuron model involves a reconstruction of the morphology and an evolutionary fit of the electrical parameters (Hay et al., 2011; Almog and Korngreen, 2014; Van Geit et al., 2016). Our method allows skipping these resource-intensive steps (Figure 6A). We adapt our method to a common experimental paradigm where hyper- and depolarizing current steps are injected under applications of various ion-channel blockers (Marti Mengual et al., 2020). We extract resistance matrices and holding potentials from voltage step amplitudes (Figure 6B), and the time scale of the largest eigenmode from the average voltage decay back to rest (Figure 6C). If the modeling goal is a reduced model, the traditional reconstruction and subsequent reduction both introduce errors (Figure 6D). Thus, we find that while our reduction of the full model is faithful within an expected accuracy margin (Figure 6E), fitting a reduced model directly to the experimental traces is more accurate (Figure 6F).
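The two extraction steps can be sketched on synthetic data: the input resistance follows from the steady-state deflection of a current step, and the slowest time scale from a log-linear fit to the decay back to rest. All amplitudes and the time constant below are illustrative:

```python
import numpy as np

# Synthetic step-response data; amplitudes and time constant are illustrative.
i_amp = -0.05                      # nA, injected current step
v_step = -6.0                      # mV, steady-state voltage deflection
r_in = v_step / i_amp              # MOhm; one diagonal entry of the resistance
                                   # matrix (off-diagonal entries follow from
                                   # deflections at the other recording sites)

t = np.arange(0.0, 100.0, 0.1)     # ms, after step offset
tau_true = 18.0
v_decay = v_step * np.exp(-t / tau_true)   # idealized decay back to rest

# Log-linear fit: the slope of log|v| against t gives -1/tau0.
slope, _ = np.polyfit(t, np.log(np.abs(v_decay)), 1)
tau0 = -1.0 / slope
assert abs(r_in - 120.0) < 1e-9 and abs(tau0 - tau_true) < 1e-6
```

On real recordings, averaging over repeated steps and restricting the fit window away from fast transients would be needed before the log-linear fit is reliable.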
We have presented a flexible yet accurate method to construct reduced neuron models from experimental data and morphological reconstructions. The method consists of linear parameter fits at holding potentials that are informative for the full non-linear dynamics. First, we derive leak and coupling conductances from the passified version of the full model. Second, we fit the maximal conductances of the active ion-channels, so that the reduced model, at various holding potentials, responds similarly to input perturbations as the full model. Third, we fit the capacitances to reproduce spatio-temporal integrative properties. Finally, we ensure that resting membrane potentials in the full and reduced models match. The resulting method can be adapted to a wide variety of use cases.
A fundamental use case is deriving reduced models that retain specific elements of the dendritic computational repertoire, to explore how those dendritic computations may improve network function. To that purpose, one can apply our method to compartments placed at input sites needed for the computation of interest. For instance, if a computation requires independent computational subunits (Poirazi et al., 2003b; Kastellakis et al., 2016), one would place compartments at the dendritic tips. If one wants to modulate co-operativity of excitatory inputs at distal sites, one would place compartments with shunting conductances at the bifurcations between the distal sites (Wybo et al., 2019). Detecting input sequences (Branco and Häusser, 2010) or implementing a dendritic integration gradient (Branco and Häusser, 2011) would require placing two or more compartments along the branches of interest. For many of these computations, a passive morphology with non-linear (NMDA) synapses suffices. Furthermore, our method can be combined with abstract models of AP generation (Pozzorini et al., 2013; Pozzorini et al., 2015).
A second use case is constructing models directly from electro-physiological recordings. Our method avoids the labor-intensive detour of reconstructing the morphology and optimizing model parameters with an evolutionary algorithm (Marti Mengual et al., 2020). In combination with advances in voltage-sensitive dye imaging, our method may thus form the basis of a high-throughput experiment-to-model paradigm.
A third use case is elucidating the effective complexity of a given dendritic tree. We assessed whether afferent spatial input motifs could be simplified by removing a branch or subtree and grouping the affected synapses at the next proximal compartment. We found that such simplification is possible when the input resistance of the affected synapse sites is close to the input resistance of the compartment site. If this condition does not hold, the simplification can still proceed if conductance fluctuations are small compared to the average conductance magnitude, but not otherwise. What is the simplest model for a given dendritic tree that captures its full computational repertoire? We find that spike coincidence and subthreshold voltage metrics reach satisfying accuracy at, but do not improve much beyond ∼3 compartments per 100 μm. Further research is required to tease apart mere numerical errors from potentially missed computational features.
The process of simplifying complex dynamical systems, such as morphological neuron models, is inherently approximate. Due to the vast variety of morphologies, voltage-gated ion-channel configurations and input patterns, it is beyond the scope of any single paper to give an exhaustive overview of possible inaccuracies. Rather, we propose three lines of inquiry if a reduction fails to reproduce a particular response pattern. First, we suggest checking the discrepancy between the impedance kernels of the full and reduced models. Often, response mismatches arise from slight inaccuracies in how ion-channel dynamics are integrated by the passive system (Figure 3A–C). Second, we suggest assessing to what degree afferent spatial input motifs, present in the full model, are simplified, and whether that simplification still admits a sufficient accuracy in reproducing local voltage-dependent responses, such as NMDA spikes (Figure 4). Finally, we suggest checking whether the distribution of compartments on the morphology admits ion-channel dynamics that are sufficiently rich to reproduce the response characteristic in question. To illustrate these lines of inquiry, we have further simplified the L5 pyramid to five compartments, by omitting the compartments in the apical trunk (Figure 3—figure supplement 1A vs Figure 3J). This reduction results in a change in AP shape, and in a failure to elicit AP bursts through the pairing of a somatic and a dendritic input, even though a Ca2+ spike can still be elicited (Figure 3—figure supplement 1B). Likely, the change in AP shape can be traced back to the difference in the somatic input impedance kernels (Figure 3—figure supplement 1C), while the failure to elicit an AP burst may be due to insufficient Na+-driven cross-talk between soma and apical tuft.
Computational tools have played a key role in neuroscience. Take NEURON (Carnevale and Hines, 2004), which accelerated our understanding of morphological neurons, or NEST (Gewaltig and Diesmann, 2007), which enabled simulation of large-scale spiking point-neuron networks. We have implemented a Python toolbox that automatizes the simplification process (NEural Analysis Toolkit – NEAT – https://github.com/unibe-cns/NEAT; copy archived at swh:1:rev:1cb15f36aa0a764105348541d046c85ef38e21ee). The toolbox reads any morphology in the standard ‘.swc’ format (Ascoli, 2006) and returns the parameters of the reduced models, while also providing tools to export the models to NEURON (Carnevale and Hines, 2004).
Our method and toolbox fill a void between the extremes of modeling large-scale networks with abstract models and modeling single cells in all their detail. By enabling the efficient, systematic derivation of simplified neurons, amenable to simulation at the network level, our work bridges the gap between two branches of neuroscience that historically have remained separate.
Three exemplar cell models were used for the analysis: a cortical L2/3 pyramidal cell (Branco and Häusser, 2010; Figure 2D), a cortical L5 pyramidal cell (Hay et al., 2011; Figure 2E), and a cerebellar Purkinje cell (Figure 2F). The latter morphology was retrieved from the NeuroMorpho.org repository (Ascoli, 2006) and the two others from the ModelDB repository (McDougal et al., 2017).
In our passive models, the physiological parameters (equilibrium potential, membrane conductance, capacitance, and intracellular resistivity) were set according to Major et al., 2008. For the active models, we took ion channels and parameters according to Branco and Häusser, 2010 for the L2/3 pyramid, Hay et al., 2011 for the L5 pyramid, and Miyasho et al., 2001 for the Purkinje cell.
AMPA and GABA synaptic input currents were implemented as the product of a conductance profile, here a double exponential (Rotter and Diesmann, 1999), with a driving force:
AMPA and GABA synapses each had their own rise time, decay time, and reversal potential. NMDA currents (Jahr and Stevens, 1990a) had the form:
with their own rise and decay times and reversal potential, while a voltage-dependent sigmoidal factor, modeling the channel's magnesium block, had the form (Behabadi and Mel, 2014):
The ‘conductance’ of an AMPA or GABA synapse signifies the maximum value of its conductance window. For an AMPA+NMDA synapse, the conductance is the maximal value of the AMPA conductance window, and the conductance of the NMDA component is determined by multiplying the AMPA conductance value with an NMDA ratio, set to either 2 or 3.
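To make the magnesium-block sigmoid concrete, the sketch below uses the classic Jahr and Stevens (1990) parametrization together with an NMDA ratio tied to the AMPA conductance; the numeric constants are standard literature values, not necessarily the exact ones used here (which follow Behabadi and Mel, 2014):

```python
import numpy as np

def mg_block(v, mg=1.0):
    """Sigmoidal magnesium unblock of the NMDA channel.

    Assumed Jahr-Stevens parametrization: 1 / (1 + exp(-0.062 v) [Mg] / 3.57),
    with v in mV and [Mg] in mM.
    """
    return 1.0 / (1.0 + np.exp(-0.062 * v) * mg / 3.57)

def nmda_current(v, g_ampa_max, nmda_ratio=2.0, e_rev=0.0):
    """NMDA current whose conductance is the AMPA maximum times an NMDA ratio."""
    g_nmda = nmda_ratio * g_ampa_max          # nS
    return g_nmda * mg_block(v) * (e_rev - v)  # driving-force formulation
```

The unblock factor grows sigmoidally with depolarization, which is what makes the NMDA conductance voltage-dependent.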
A voltage-dependent ion channel described by the Hodgkin–Huxley formalism can in general be written as follows:
where the parameters are the channel's maximal conductance, its reversal potential, and its state variables; a gating function depends on the channel type (e.g. m^3 h for a typical sodium channel), and a set of kinetic functions governs state-variable activation (each can usually be written as the difference between the state variable's steady-state activation and its current value, divided by its voltage-dependent time scale). To obtain the channel's quasi-active approximation (Mauro et al., 1970; Koch and Poggio, 1985) around a holding potential and a state-variable expansion point, we linearize (Equation 13):
To obtain the zero-frequency contribution of (Equation 14) to the resistance and conductance matrices, we set the state-variable time derivatives to zero, solve the linearized state-variable equations for the state-variable deflections, and substitute the result in the linearized current:
Introducing a shorthand for the factor in square brackets
results in the linearized ion-channel current as described in (Equation 2).
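This zero-frequency linearization can be checked numerically: for a hypothetical single-gate channel (all parameters invented for illustration), the quasi-active conductance at a holding potential equals minus the slope of the steady-state I-V curve:

```python
import numpy as np

# Illustrative single-gate channel: i = gbar * x * (e_rev - v), where the
# steady-state activation x_inf(v) is a sigmoid (parameters are made up).
GBAR, E_REV = 0.01, 50.0  # uS, mV (hypothetical values)

def x_inf(v):
    return 1.0 / (1.0 + np.exp(-(v + 30.0) / 9.0))

def i_steady(v):
    """Steady-state channel current: the gate has fully relaxed to x_inf(v)."""
    return GBAR * x_inf(v) * (E_REV - v)

def g_quasi_active(v0, dv=1e-4):
    """Zero-frequency quasi-active conductance at holding potential v0:
    minus the slope of the steady-state I-V curve (central differences)."""
    return -(i_steady(v0 + dv) - i_steady(v0 - dv)) / (2.0 * dv)

def g_analytic(v0):
    """Analytic counterpart: -gbar * [x_inf'(v)(e_rev - v) - x_inf(v)]."""
    s = x_inf(v0)
    ds = s * (1.0 - s) / 9.0  # derivative of the sigmoid
    return -GBAR * (ds * (E_REV - v0) - s)
```

For this activating inward current the quasi-active conductance is negative (regenerative), which is exactly the amplifying effect the linearization must capture.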
The voltage response at a given location along the neuron to an input current perturbation at another location can be computed as the convolution of the impedance kernel between the two locations with the input current:
The impedance kernel itself can be computed in the frequency domain from the quasi-active cable equation using Koch’s algorithm (Koch and Poggio, 1985). We may assume any a priori arbitrary set of holding potentials and ion-channel state variables to compute the quasi-active expansion (with Koch’s algorithm, the only constraint is that their distribution is uniform within each cylindrical segment). For our purpose, spatially uniform holding potentials and state-variable expansion points suffice. For a steady-state current, (Equation 17) simplifies to:
where we term the kernel’s time integral the ‘resistance’ (also known as the input resistance if input and measurement locations coincide, or the transfer resistance if they differ). For passive membranes, impedance kernels can also be computed as a sum of exponentials (Holmes et al., 1992; Major et al., 1993):
where we order the time scales from slowest to fastest. Essentially, this infinite sum can be thought of as the generalization of the eigenmodes of linear dynamical systems (Strogatz, 2000) to partial differential equations.
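A minimal numerical illustration with an invented two-exponential kernel: the steady-state resistance is the kernel's time integral, and the response to a unit step current converges to it:

```python
import numpy as np

# Hypothetical two-exponential impedance kernel (phi in MOhm/ms, tau in ms).
PHI = np.array([5.0, 1.0])
TAU = np.array([20.0, 2.0])

def kernel(t):
    """Impedance kernel z(t) = sum_k phi_k exp(-t / tau_k) for an array t."""
    return (PHI[None, :] * np.exp(-t[:, None] / TAU[None, :])).sum(axis=1)

# Steady-state resistance: the time integral of the kernel, sum_k phi_k tau_k.
z_bar = float((PHI * TAU).sum())

# Numerical check: the voltage response to a unit step current (the running
# integral of the kernel) converges to z_bar.
dt = 0.01
t = np.arange(0.0, 300.0, dt)
v_step = np.cumsum(kernel(t)) * dt
```

The same integral identity is what turns the temporal kernels into the entries of the steady-state resistance matrix used in the fits below.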
When a current perturbation is applied at multiple sites along the neuron, we collect the input currents in a vector and obtain the voltage responses at those sites from:
with a matrix of impedance kernels (the (i, j)’th element of this matrix is the impedance kernel between sites i and j). In the steady-state case, we call this matrix the resistance matrix.
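In the steady-state case this matrix relation is a plain matrix-vector product; a sketch with hypothetical resistance values for three sites:

```python
import numpy as np

# Hypothetical steady-state resistance matrix (MOhm) between three sites;
# reciprocity makes it symmetric, and the diagonal holds input resistances.
Z = np.array([[100.0,  40.0,  10.0],
              [ 40.0, 250.0,   8.0],
              [ 10.0,   8.0, 400.0]])

i_in = np.array([0.05, 0.0, -0.02])  # nA injected at each site
v = Z @ i_in                         # mV deviation from rest at each site
```

Injecting at one site thus perturbs all sites, weighted by the transfer resistances.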
The voltage in a compartment, connected to a set of nearest-neighbor compartments and equipped with a set of ion channels, is given by
where the parameters are the compartment’s capacitance, its leak conductance and leak reversal, its coupling conductances to the neighboring compartments, the ion-channel currents (Equation 13) of the channels present in the compartment, and an arbitrary input current to the compartment.
To obtain the conductance matrix of the compartmental model for steady-state inputs, (Equation 21) is linearized around a certain holding potential, and we assume that the capacitive term vanishes. After absorbing all constant terms into the input current, we obtain
where the total conductance of each compartment combines the leak and coupling conductances with the linearized ion-channel currents (Equation 15), and where we used the notation of (Equation 16). Summarizing the voltage responses for each compartment in a vector, (Equation 22) for each compartment can be recast as a matrix equation, yielding (Equation 3).
For any given set of sites on the morphology, we construct a simplified compartmental model whose connection structure follows a tree graph defined by the original morphology (Figure 1A). We do not allow triplet connections (e.g. sites 1–2, sites 2–3, and sites 3–1 all mutually connected) in our reduced models. Hence, to obtain accurate results, it is necessary to extend the original set of sites with the bifurcation points that lie in between (Figure 1—figure supplement 1C). With these sites, we thus define a tree graph that provides the scaffold for our fit (and our reduced model).
The fitting process proceeds in four steps: (1) fit the passive leak and coupling conductances, (2) fit the capacitances, (3) fit the maximal conductances of the ion channels, and (4) fit the reversal potentials to obtain the same resting membrane potentials as in the biophysical model. The order of steps 2 and 3 is interchangeable, but not of the other steps. We describe these steps in detail below:
If the biophysical model contains active conductances, we compute their opening probabilities at rest and add them to the leak. Otherwise, we simply take the leak as is. We then compute the resistance matrix for the compartment sites. In the passive case, this matrix is independent of the holding potential. In (Equation 21), we substitute a steady-state input and set the active channel conductances to zero. The remaining constant terms can be absorbed into the input current. We obtain (Equation 23):
which, when recast in matrix form, yields (Equation 3). (Equation 4) then yields a system of equations, linear in the leak and coupling conductances (the number of unknowns is fixed by the tree structure). We recast this system of equations in standard matrix form and solve it in the least-squares sense for the conductances.
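The structure of this least-squares fit can be sketched for a two-compartment reduction with invented target resistances; the unknown leak and coupling conductances enter linearly once the conductance matrix is written as a sum of basis matrices:

```python
import numpy as np

# Target steady-state resistance matrix of the "full" model at two sites (MOhm).
Z = np.array([[150.0,  60.0],
              [ 60.0, 300.0]])

# Parametrize the reduced conductance matrix as a sum of basis matrices,
# one per unknown: two leaks (g1, g2) and one coupling conductance (gc).
basis = [np.diag([1.0, 0.0]),                    # leak of compartment 1
         np.diag([0.0, 1.0]),                    # leak of compartment 2
         np.array([[1.0, -1.0], [-1.0, 1.0]])]   # coupling conductance

# Requirement G @ Z = identity, vectorized into A @ theta = b and solved
# in the least-squares sense.
A = np.stack([(B @ Z).ravel() for B in basis], axis=1)
b = np.eye(2).ravel()
theta, *_ = np.linalg.lstsq(A, b, rcond=None)
G = sum(t * B for t, B in zip(theta, basis))
```

For two compartments the fit is exact; for larger trees the same linear system is simply over-determined and the least-squares solution balances the residuals.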
To fit the capacitances, we set the channel conductances to zero and again absorb constant terms into the input current in (Equation 21) to obtain (Equation 24):
In matrix form, we obtain (Equation 6). We require that this matrix has the same smallest eigenvalue (corresponding to the largest time scale) and associated eigenvector as the biophysical model, as defined by (Equation 19). Hence, we obtain the system of (Equation 7) that can be solved for the capacitances.
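The capacitance fit can be illustrated on a toy two-compartment system (all values invented): matching the slowest eigenvalue and its eigenvector of the matrix C^-1 G fixes each capacitance by a direct division:

```python
import numpy as np

# A fitted conductance matrix (uS) and "true" capacitances (nF) to recover.
G = np.array([[0.02, -0.005], [-0.005, 0.01]])
C_true = np.array([0.1, 0.02])

# Slowest eigenmode of the system  C^-1 G v = lambda v.
lam, vecs = np.linalg.eig(np.diag(1.0 / C_true) @ G)
k = int(np.argmin(lam.real))
lam0, v0 = lam[k].real, vecs[:, k].real

# Matching that eigenvalue and eigenvector fixes each capacitance:
# (G v0)_i = lam0 * c_i * (v0)_i  =>  c_i = (G v0)_i / (lam0 * (v0)_i)
C_fit = (G @ v0) / (lam0 * v0)
```

In the toy case the recovery is exact because the target mode comes from the same system; in the actual fit, the target eigenpair stems from the biophysical model and the division becomes a least-squares-style match.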
We compute the maximal conductances of each ion-channel type separately. Thus, next to assuming a steady state, we set the conductances of all but one of the channels to zero and replace the current of the remaining, non-zero channel with its quasi-active, zero-frequency approximation (Equation 15). We choose a spatially uniform holding potential and expansion point for this channel. We absorb constant terms into the input current and obtain (Equation 21) in the form (Equation 25):
We again recast this equation in matrix form, but distinguish the known passive component from the unknown diagonal matrix whose diagonal elements are the channel terms (Equation 26):
From the biophysical model, we compute the corresponding resistance matrix by setting all ion-channel conductances except that of the channel under consideration to zero, to obtain the fit (Equation 5).
By consequence, we have a system of equations, linear in the unknown maximal conductances, that can be recast in standard matrix form, where the coefficients depend on the holding potential and the expansion points. On the one hand, we aim to compute these linearizations for a sufficiently large range of holding potentials and expansion points. On the other hand, the fit should be restricted to domains of the ion-channel phase space where the channel resides during normal input integration, so as to avoid over-fitting on domains of the phase space that are never reached. We choose four holding potentials around which to linearize, spanning the physiological voltage range. For channels with a single state variable, we choose the state variable’s steady-state activation at each of these four holding potentials as expansion points. For channels with two state variables, we compute the steady-state activations of both state variables at the four holding potentials, and choose sixteen expansion points as all possible combinations of these values. We then weigh the matrices and vectors with the inverse of the open probability for their respective expansion point, concatenate the matrices and vectors for each of these expansion points, and obtain a system of equations whose size grows with the number of expansion points, with the maximal conductances as unknowns. We solve this system in the least-squares sense and repeat this procedure for each voltage-dependent ion channel in the biophysical model.
Finally, we fit the leak reversal potentials to reproduce the resting membrane potential of the biophysical model evaluated at the compartment sites. We substitute this potential in (Equation 21), assume a steady state with zero input current, and obtain (Equation 27):
This is a system of equations, linear in the unknown reversal potentials, and can thus be solved with standard algebraic techniques.
To analytically compute the weight-rescale factor when a synapse with a time-varying conductance is moved from its original site to a compartment site on a dendritic tree, we use the steady-state approximation (Equation 18). Because we implicitly convolve the rescaled synaptic current with the temporal input impedance kernel at the compartment site when we simulate the reduced model, the weight-rescale factor still yields accurate temporal voltage responses (Figure 4B,E,H). In the original configuration, we have
with an arbitrary current injected at the compartment site, and the conductance and reversal potential of the synapse at its original site. Eliminating the synapse-site voltage from this system yields
In the reduced configuration we have:
From requiring that both configurations produce the same voltage at the compartment site, the rescaled conductance is found as follows:
When the transfer resistance between the synapse and compartment sites equals the input resistance at the compartment site, which is true in basal dendrites and a reasonable approximation in many apical dendrites if the compartment site is on the direct path from the synapse site to the soma, (Equation 31) reduces to
The weight-rescale factor hence depends on time. Decomposing the time-varying conductance into its temporal average plus the fluctuations around that average, we find (Equation 8) and (Equation 9) from requiring that the denominator of (Equation 32) be constant in time:
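The steady-state rescaling can be verified numerically. The sketch below uses invented resistances and assumes, as in (Equation 32), that the transfer resistance between synapse and compartment sites equals the compartment input resistance; the reduced configuration then reproduces the compartment voltage exactly:

```python
# Steady-state resistances (MOhm): synapse site s, compartment site c.
z_ss, z_cc = 400.0, 150.0  # input resistances (hypothetical values)
z_cs = z_cc                # assumption: c lies on the path from s to the soma

g, e_syn = 0.01, 0.0       # synaptic conductance (uS) and reversal (mV)
v_rest = -75.0

# Original configuration: conductance synapse at s, voltage read out at c.
v_s = (v_rest + z_ss * g * e_syn) / (1.0 + z_ss * g)
v_c_full = v_rest + z_cs * g * (e_syn - v_s)

# Reduced configuration: synapse moved to c with the rescaled weight
# g' = g / (1 + (z_ss - z_cc) g), the constant-conductance analog of Eq. 32.
g_resc = g / (1.0 + (z_ss - z_cc) * g)
v_c_red = (v_rest + z_cc * g_resc * e_syn) / (1.0 + z_cc * g_resc)
```

Because the synapse site has the larger input resistance, the rescaled weight is smaller than the original, reflecting the stronger local saturation at the distal site.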
In the case of an NMDA synapse, the conductance depends sigmoidally on the local voltage. Using the scale factor of (Equation 8), we obtain for the rescaled synaptic current at the compartment site:
This current still depends, through the sigmoid, on the voltage at the synaptic site. A current at the compartment site that causes a certain voltage there would have caused a different voltage
if it had been placed at the synapse site. Hence, to retain the same activation level of the NMDA synapse, we substitute (Equation 35) in the sigmoid. This amounts to a change in threshold and width of the sigmoid:
We will denote the result as the modified sigmoid. We have not yet arrived at a formulation of (Equation 34) that only rescales its parameters with constant factors, since the rescale factor still depends on time through the compartment voltage. To obtain constant rescale factors, we substitute (Equation 34) in (Equation 17):
Here, the current in square brackets is the synaptic current. The convolution to obtain the compartment voltage is implemented implicitly by the compartmental model. We then approximately eliminate the denominator:
to obtain for the synaptic current:
We thus find that the synaptic weight is rescaled by a constant factor and that the reversal potential is shifted. Note, however, that this reduction relies on two key assumptions: that there are no other inputs to the dendritic tree, and that there are no fluctuations in the compartment voltage generated by input currents at other sites, as these could lead to spurious activation or suppression of the magnesium-block sigmoid.
Suppose now we shift a number of conductance synapses to a smaller number of compartments. Let the voltage vector contain the voltages at the compartment sites and at the synapse sites. Similarly, let the conductance vector contain zeros at the compartment sites and the synaptic conductances at the synapse sites, while a further vector contains the synaptic reversals. In matrix form, (Equation 18) for this system becomes:
and its solution:
with a matrix given by:
In the reduced setting, we assign each synapse to one compartment site (a reasonable choice could be the closest site). We introduce a compartment assignment matrix whose elements are one if the corresponding synapse is assigned to the corresponding compartment and zero otherwise. We further introduce a vector of reduced compartment voltages, a vector of rescaled synaptic conductances, and a vector of synaptic reversals. In the reduced model, voltages are obtained from:
We then require that the reduced compartment voltages equal the corresponding components of the full solution. We denote the sub-matrix containing the rows for the compartment sites and the columns for the synapse sites, and note that (Equation 41) can be written in terms of this sub-matrix. Substituting this in (Equation 43) yields:
We then fit the rescaled parameters in the least-squares sense from:
which again is a linear fit.
Coronal slices (300 μm thick) containing the anterior cingulate cortex (ACC) were prepared from 10- to 12-week-old C57BL/6 mice using a vibratome on a block angled at 15 degrees to the horizontal, in ice-cold oxygenated artificial cerebrospinal fluid (ACSF), and then maintained in the same solution at 37°C for 15–120 min. Normal ACSF contained (in mM): NaCl, 125; NaHCO3, 25; KCl, 2.5; NaH2PO4, 1.25; MgCl2, 1; glucose, 25; CaCl2, 2; pH 7.4. Individual neurons were visualized with a Nikon Eclipse E600FN fitted with a combination of oblique infrared illumination optics and epifluorescence; the switch between optical configurations was software-triggered (Sieber et al., 2013). Pyramidal neurons were selected based on a clearly visible proximal apical dendrite. This selection criterion resulted in a homogeneous population of pyramidal neurons in terms of their firing properties and AP shape (i.e. all cells possessed a prominent after-hyperpolarization and a significant sag ratio at the soma). Dual somatic and dendritic whole-cell patch-clamp recordings were performed from identified L5 pyramidal neurons in the rostroventral ACC (1.1–1.4 mm below the pial surface, 1.1–0.2 mm rostral to the Bregma) using two Dagan BVC-700 amplifiers. During the experiments, the external recording solution (normal ACSF) was supplemented with 0.5 mM CNQX and 0.5 mM AP-5 to block excitatory glutamatergic synaptic transmission. Experiments were performed at physiological temperatures between 34 and 37°C. Whole-cell recording pipettes (somatic, 4–8 MΩ; dendritic, 12–32 MΩ) were pulled from borosilicate glass. The internal pipette solution consisted of (in mM): potassium gluconate, 135; KCl, 7; Hepes, 10; Na2-phosphocreatine, 10; Mg-ATP, 4; GTP, 0.3; 0.2% biocytin; pH 7.2 (with KOH); 291–293 mosmol l−1.
For somatic recordings, 10–20 μM Alexa 594 was added to the intracellular solution: first, the soma was patched (whole-cell configuration by negative pressure); after 5 min of intracellular perfusion, the fluorescent signal allowed for clear identification of the apical dendritic tree, after which the dendritic region of interest was patched with a smaller pipette. Compensation was performed in current-clamp mode by recovering the fast, initial square voltage response to a hyperpolarizing current injection (−100 pA, 50 ms). First, the pipette capacitance was compensated until the voltage response showed an immediate voltage drop due to the series resistance of the pipette, which was then compensated in turn. Compensation of dendritic series resistance yielded values between 30 and 60 MΩ for pipettes with a resistance between 17 and 21 MΩ. On average, series resistance was 2.3 times larger than the pipette resistance. The series resistance of both somatic and dendritic recording electrodes was monitored frequently, and experiments were terminated when proper compensation was no longer possible (i.e. when it reached values of more than four times the pipette resistance). All cells were filled with biocytin, and PFA-fixed slices were developed with the avidin–biotin–peroxidase method for Neurolucida reconstructions (Egger et al., 2008). Data analysis was performed using Igor software (Wavemetrics) and Excel (Microsoft).
For the sequence detection (Figure 1F), the synapse model is as in Branco and Häusser, 2010. The conductance window of the AMPA component is shaped as an alpha function (Rotter and Diesmann, 1999) with a time scale of 2 ms. The NMDA component is given by a kinetic model (Destexhe et al., 1998) with an external magnesium concentration of 1 mM, and receives an exponentially decaying neurotransmitter concentration with an amplitude of 5 mM. For the input-order detection (Figure 1G), AMPA synapses 1 and 2 use the standard parameters and have respective maximal conductances of 10 and 5 nS. For the simulation with the L5 pyramidal cell, excitatory synapses have AMPA+NMDA components, whereas in the Purkinje cell, excitatory synapses only have AMPA components; inhibitory synapses have GABA components in both cases. For the L2/3 pyramid, L5 pyramid, and Purkinje cell, excitatory Poisson rates are 2, 5, and 6 Hz, respectively, and inhibitory firing rates are 4, 1, and 2 Hz, respectively. Simulations were run for 10,000 ms.
APs are evoked with DC current pulses, whose amplitudes and durations differ per panel. For panels J–K, a somatic current pulse is paired with a dendritic current injection with a double-exponential waveform; onset of the somatic current pulse precedes onset of the dendritic current injection by 5 ms.
In panel B, we inject Ornstein–Uhlenbeck (OU) processes for current and for excitatory and inhibitory conductances. The reversal was 0 mV for the excitatory conductance and −80 mV for the inhibitory conductance. All OU processes had a time scale of 30 ms. In panels D–F, we use AMPA+NMDA synapses. Burst firing is mimicked by drawing synapse activation times from a Gaussian distribution with the burst time as mean and a standard deviation of 2 ms.
In panels G–J, AMPA+NMDA synapses are placed at the compartment site (green square). To determine the NMDA-spike threshold, we activate between 0 and 200 synapses in a burst. The burst is again modeled by drawing spike times from a Gaussian distribution with a 2 ms standard deviation. We then multiply the amplitude of the resulting waveform with its half-width, and average this quantity over five trials for each number of activated synapses. The number of synapses at which this quantity increases most is taken to be the NMDA-spike threshold.
We aim to test the synaptic weight-rescale factors for the AMPA and GABA synapses under a wide range of input conditions. To do so, we conduct simulations with different total time-averaged conductance loads exerted by the AMPA and GABA synapses on the subtree that is to be reduced. These synapses are activated with spike times drawn from a homogeneous Poisson process. We also change the size of the conductance fluctuations by conducting simulations with different firing rates for the homogeneous Poisson processes (since the total conductance load for a simulation is fixed, a small firing rate means that individual synaptic weights have to be increased to reach the desired total conductance load, hence resulting in larger conductance fluctuations). We furthermore change the number of locations on the subtree over which the conductance load is distributed; by consequence, the conductance load is divided equally among these locations. We also adapt these locations to a specified average difference in input resistance between the compartment site and the synapse locations. Finally, we change the average voltage level achieved by the combined AMPA and GABA input by changing the ‘nudging potential’ (Urbanczik and Senn, 2014):
where the time-averaged conductances of individual AMPA and GABA synapses enter, weighted by their respective reversal potentials.
We thus draw five meta-parameters for each simulation: the total conductance load, the Poisson rate, the number of locations, the input-resistance difference, and the nudging potential. The time-averaged conductance at each location is given by
Together with (Equation 46), this fixes the time-averaged conductance of each individual AMPA and GABA synapse. Since the temporal conductance profile of a synapse (e.g. the AMPA synapse) following the arrival of a single input AP is modeled as the product of a weight factor and a unitary conductance window, the time-averaged conductance of that synapse activated by a homogeneous Poisson input is
Thus, given the time-averaged conductance and the Poisson rate, the synaptic weight can be extracted (and similarly for the GABA synapse).
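This relation between weight, rate, and time-averaged conductance can be sketched for a peak-normalized double-exponential window (the time scales below are illustrative AMPA-like values, not necessarily the exact ones used here):

```python
import numpy as np

TAU_R, TAU_D = 0.2, 3.0  # ms, hypothetical rise and decay times

def window_integral(tau_r, tau_d):
    """Time integral of the peak-normalized double-exponential window
    g_unit(t) = [exp(-t/tau_d) - exp(-t/tau_r)] / peak."""
    t_peak = tau_r * tau_d / (tau_d - tau_r) * np.log(tau_d / tau_r)
    peak = np.exp(-t_peak / tau_d) - np.exp(-t_peak / tau_r)
    return (tau_d - tau_r) / peak

def weight_from_avg_conductance(g_avg, rate):
    """Synaptic weight giving a time-averaged conductance g_avg (uS) under
    homogeneous Poisson input:  g_avg = w * rate * integral(g_unit),
    with the rate in 1/ms."""
    return g_avg / (rate * window_integral(TAU_R, TAU_D))
```

Inverting the average-conductance relation in this way is what fixes the individual synaptic weights once the meta-parameters are drawn.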
To perform simulations across a wide range of input conditions, we draw 200 samples of these five meta-parameters from fixed intervals following the Latin hypercube (LH) method (Press et al., 2007). Two of the meta-parameters are sampled on a log scale, whereas for all other parameters, we use a linear scale.
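The LH sampling itself is straightforward to sketch with numpy (the function name and interface below are our own, not from Press et al., 2007): each dimension is divided into equal-probability strata, one sample is placed per stratum, and the strata are permuted independently across dimensions.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, log_scale, rng=None):
    """Latin-hypercube sample of len(bounds) parameters.

    bounds: list of (lo, hi) intervals; log_scale: list of bools indicating
    which dimensions are sampled uniformly in log10 space."""
    rng = np.random.default_rng(rng)
    d = len(bounds)
    # one uniform sample per stratum i: u in [i/n, (i+1)/n) for each dimension
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):  # decorrelate strata across dimensions
        u[:, j] = rng.permutation(u[:, j])
    out = np.empty_like(u)
    for j, ((lo, hi), logs) in enumerate(zip(bounds, log_scale)):
        if logs:
            out[:, j] = 10.0 ** (np.log10(lo) + u[:, j] * (np.log10(hi) - np.log10(lo)))
        else:
            out[:, j] = lo + u[:, j] * (hi - lo)
    return out
```

Compared with independent uniform draws, this guarantees that each parameter's full range is covered even with only 200 samples.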
To obtain compartment sites, we divide the longest branch in each basal subtree into parts of equal length, thus giving us a set of distances from the soma, where we increase the number of parts from 0 (point-neuron) to 10. In each subtree, we distribute compartments at all sites at these distances, and also add all bifurcation sites in between compartment sites. In this way, we obtain 11 reductions that we quantify according to ‘no. of compartments per 100 μm of dendrite’. We implement 400 ‘background’ synapses: 200 AMPA and 200 GABA synapses with fixed average conductances and firing rates. We implement 20 synapse clusters, consisting of AMPA+NMDA synapses. These clusters are activated with a Poissonian burst rate, the number of spikes per burst is drawn from a Poisson distribution, and spike times are drawn for each burst time according to a Gaussian distribution with a standard deviation of 5 ms. For each of the parameters not given previously, 10 LH samples were drawn from fixed intervals, resulting in the 10 different output spike rates shown in B. For each parameter set, a simulation was run for 10,000 ms.
Hyper- and depolarizing current steps of fixed amplitude and duration were applied. The full model was optimized with an evolutionary algorithm using the BluePyOpt library (Van Geit et al., 2016), where we ran 100 iterations with an offspring size of 100. Goodness of fit was evaluated in a multi-objective manner as the root mean square error of the resting voltage (average voltage 100 ms before each current step), of the final step voltage amplitudes after sag (average voltage during the last 100 ms of the DC current injection), and of the full voltage trace. We optimized the specific membrane capacitance and the conductance densities of the passive leak, the h-current, and the other active channels. The conductance densities followed an exponential distribution in the distance from the soma, with two parameters: the conductance at the soma and the length constant of the distribution.
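The exponential density profile can be written as a one-line function (a decaying profile is shown for illustration; in the actual fit the length constant is a free parameter, so the profile may equally increase with distance):

```python
import numpy as np

def channel_density(x, g_soma, lam):
    """Exponentially distributed conductance density along the dendrite:
    g_soma at the soma (x = 0 um), length constant lam (um)."""
    return g_soma * np.exp(-x / lam)
```

With this parametrization, the optimizer only has to fit two numbers per channel type instead of one density per segment.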
NEAT (NEural Analysis Toolbox), our open-source Python toolbox to obtain reduced models, is available on https://github.com/unibe-cns/NEAT (copy archived at swh:1:rev:1cb15f36aa0a764105348541d046c85ef38e21ee).
All data generated or analysed during this study are included in the manuscript and supporting files.
ModelDB ID 139653. L5b PC model constrained for BAC firing and perisomatic current step firing.
ModelDB ID 140828. Dendritic Discrimination of Temporal Input Sequences.
NeuroMorpho.org ID NMO_100072. Mature Purkinje Cells Require the Retinoic Acid-Related Orphan Receptor-α (RORα) to Maintain Climbing Fiber Mono-Innervation and Other Adult Characteristics.
An efficient analytical reduction of detailed nonlinear neuron models. Nature Communications 11:1–13. https://doi.org/10.1038/s41467-019-13932-6
Mobilizing the base of neuroscience data: the case of neuronal morphologies. Nature Reviews Neuroscience 7:318–324. https://doi.org/10.1038/nrn1885
Automated optimization of a reduced layer 5 pyramidal cell model based on experimental data. Journal of Neuroscience Methods 210:22–34. https://doi.org/10.1016/j.jneumeth.2012.04.006
The single dendritic branch as a fundamental functional unit in the nervous system. Current Opinion in Neurobiology 20:494–502. https://doi.org/10.1016/j.conb.2010.07.009
Book: Kinetic models of synaptic transmission. In: Koch C, Segev I, editors. Methods in Neuronal Modeling. MIT Press. pp. 1–26.
The high-conductance state of neocortical neurons in vivo. Nature Reviews Neuroscience 4:739–751. https://doi.org/10.1038/nrn1198
Book: Can we predict every spike? In: DiLorenzo PM, editors. Spike Timing. CRC Press. pp. 70–72.
The capabilities and limitations of conductance-based compartmental neuron models with reduced branched or unbranched morphologies and active dendrites. Journal of Computational Neuroscience 30:301–321. https://doi.org/10.1007/s10827-010-0258-z
Interpretation of time constant and electrotonic length estimates in multicylinder or branched neuronal structures. Journal of Neurophysiology 68:1401–1420.
A quantitative description of NMDA receptor-channel kinetic behavior. The Journal of Neuroscience 10:1830–1837. https://doi.org/10.1523/JNEUROSCI.10-06-01830.1990
Voltage dependence of NMDA-activated macroscopic conductances predicted by single-channel kinetics. The Journal of Neuroscience 10:3178–3182. https://doi.org/10.1523/JNEUROSCI.10-09-03178.1990
The quantitative single-neuron modeling competition. Biological Cybernetics 99:417–426. https://doi.org/10.1007/s00422-008-0261-x
Low-dimensional, morphologically accurate models of subthreshold membrane potential. Journal of Computational Neuroscience 27:161–176. https://doi.org/10.1007/s10827-008-0134-2
Morphologically accurate reduced order modeling of spiking neurons. Journal of Computational Neuroscience 28:477–494. https://doi.org/10.1007/s10827-010-0229-4
A simple algorithm for solving the cable equation in dendritic trees of arbitrary geometry. Journal of Neuroscience Methods 12:303–315. https://doi.org/10.1016/0165-0270(85)90015-9
Spatially distributed dendritic resonance selectively filters synaptic input. PLOS Computational Biology 10:e1003775. https://doi.org/10.1371/journal.pcbi.1003775
The effects of L-glutamate and its analogues upon the membrane conductance of central murine neurones in culture. Canadian Journal of Physiology and Pharmacology 60:282–296. https://doi.org/10.1139/y82-039
Spatiotemporally graded NMDA spike/plateau potentials in basal dendrites of neocortical pyramidal neurons. Journal of Neurophysiology 99:2584–2601. https://doi.org/10.1152/jn.00011.2008
Subthreshold behavior and phenomenological impedance of the squid giant axon. Journal of General Physiology 55:497–523. https://doi.org/10.1085/jgp.55.4.497
Twenty years of ModelDB and beyond: building essential modeling tools for the future of neuroscience. Journal of Computational Neuroscience 42:1–10. https://doi.org/10.1007/s10827-016-0623-7
Synaptic integration in an excitable dendritic tree. Journal of Neurophysiology 70:1086–1101.
Spike-timing prediction in cortical neurons with active dendrites. Frontiers in Computational Neuroscience 8:90. https://doi.org/10.3389/fncom.2014.00090
Intrinsic and network rhythmogenesis in a reduced Traub model for CA3 neurons. Journal of Computational Neuroscience 1:39–60. https://doi.org/10.1007/BF00962717
Dendritic spikes expand the range of well tolerated population noise structures. The Journal of Neuroscience 39:9173–9184. https://doi.org/10.1523/JNEUROSCI.0638-19.2019
Temporal whitening by power-law adaptation in neocortical neurons. Nature Neuroscience 16:942–948. https://doi.org/10.1038/nn.3431
Automated high-throughput characterization of single neurons by means of simplified spiking models. PLOS Computational Biology 11:e1004275. https://doi.org/10.1371/journal.pcbi.1004275
Book: Numerical Recipes 3rd Edition: The Art of Scientific Computing. Cambridge University Press.
Non-Hebbian long-term potentiation of inhibitory synapses in the thalamus. Journal of Neuroscience 33:15675–15685. https://doi.org/10.1523/JNEUROSCI.0247-13.2013
Book: Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry and Engineering. Westview Press.
Action potential initiation and backpropagation in neurons of the mammalian CNS. Trends in Neurosciences 20:125–131. https://doi.org/10.1016/S0166-2236(96)10075-8
Creation and reduction of a morphologically detailed model of a leech heart interneuron. Journal of Neurophysiology 96:2107–2120. https://doi.org/10.1152/jn.00026.2006
Systematic mapping between dendritic function and structure. Network: Computation in Neural Systems 20:69–105. https://doi.org/10.1080/09548980902984833
The Green's function formalism as a bridge between single- and multi-compartmental modeling. Biological Cybernetics 107:685–694. https://doi.org/10.1007/s00422-013-0568-0
Timothy O'Leary, Reviewing Editor; University of Cambridge, United Kingdom
Ronald L Calabrese, Senior Editor; Emory University, United States
Hugh PC Robinson, Reviewer; University of Cambridge, United Kingdom
Matthew F Nolan, Reviewer; University of Edinburgh, United Kingdom
In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.
A general principle in systems theory is that a complicated model often reduces to a much simpler one – a fact exploited by Rall in his classic work on reduction of passive dendritic trees. However, obtaining simplified models of nonlinear dendritic integration of distributed synaptic input has remained challenging. Wybo and colleagues develop methods to achieve such a reduction systematically, offering an approach that reduces the complexity of simulations while providing insight into the factors that allow lumping of dendritic compartments into a smaller equivalent system.
Decision letter after peer review:
Thank you for submitting your article "Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses" for consideration by eLife. Your article has been reviewed by two peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Ronald Calabrese as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Hugh PC Robinson (Reviewer #1); Matthew F Nolan (Reviewer #2).
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, we are asking editors to accept without delay manuscripts, like yours, that they judge can stand as eLife papers without additional data, even if they feel that they would make the manuscript stronger. Thus the revisions requested below only address clarity and presentation.
This manuscript proposes a plausible and promising tool for reducing model complexity of morphological conductance-based models, illustrated in a variety of situations including complex simulations and experimental data fitting. This methodology will likely be useful to a variety of neurophysiologists and theorists and can permit more efficient simulation while shedding light on the core parameters that matter in neural computation at the single cell level.
Reviewers agreed that the writing needs to be clearer throughout. Terms need to be defined. Implicit references to other work or to concepts need to be replaced with explicit, declarative sentences and explanations. On the technical side the paper also needs a fuller explanation of how fitting of voltage-gated conductances is performed. Finally, a fuller discussion and illustration of the limitations would be appropriate, e.g. explicitly showing cases where further reductions cannot be achieved without fundamentally changing the system. These necessarily exist in a nonlinear system so this is not a shortcoming of the method, rather an important point that some readers may not appreciate.
Full reviewer comments are included below for information.
This study addresses an important issue in computational neuroscience: how to reduce highly-detailed biophysical models which reproduce the full morphological complexity of neurons to models which have a degree of complexity permitting them to be understood conceptually, and simulated much more efficiently, while preserving the accuracy of the original model as far as possible, and in a well-defined way. While there is a large literature on this problem for passive neurons, it is really necessary to address this problem for active/nonlinear membranes, given what we now know about voltage-dependent ion channels in dendrites. This is a problem of fitting the reduced model parameters to the output of the highly-detailed model, and the authors take a systematic and theoretically-satisfying approach to it, by matching the impedance matrices including the phenomenological impedance of voltage-gated channels which describes the reduced model to the output of the detailed model recorded at a small number of locations. This was demonstrated for neuronal models of several key active dendritic mechanisms: back-propagating sodium spikes, calcium and NMDA spikes. The authors also provide a toolbox to facilitate this process.
I have no major concerns: I found that the approach was described and illustrated very carefully and clearly, and the method's accuracy and utility were demonstrated convincingly. The motivation was well established and the writing accurate and transparent.
The study introduces algorithms and software that aims to generate accurate reduced compartmental models of neurons. The approach appears elegant and is well validated by exemplar applications. I think it will be of interest to many neuroscientists. For the network modeler, the software provides a practical solution to the tradeoff involved in choosing neuronal models that are accurate and those that are computationally efficient. At a conceptual level the approach may be useful in addressing questions about which details of a neuron's morphology are critical for the computations that it contributes to. A caveat of my review is that I haven't had time to carefully check any of the math – it looks about right and the validation is generally good.
(1) The initial part of the Results section is important for understanding how the software works but is hard to follow. I think more care needs to be taken with defining terms clearly and explaining the logic of the approach in an accessible way.
(2) How the approach deals with voltage-gated ion channels could be made much clearer. E.g. in the subsection “A systematic simplification of complex neuron morphologies”, the text states that GVh,chan depends on the unknown maximal ion channel conductance parameters. How? Why maximal? Voltage-gated channels cannot be maximally activated at all of the voltages tested. I don't understand how to get from here to channel conductances. It could be helpful to make a figure illustrating how fits are obtained for exemplar voltage-gated conductances.
(3) The section 'Conditions under which afferent spatial connectivity motifs can be simplified' could also be written more clearly. Concerns are similar to in point 1 above.
https://doi.org/10.7554/eLife.60936.sa1
[…] Reviewers agreed that the writing needs to be clearer throughout. Terms need to be defined. Implicit references to other work or to concepts need to be replaced with explicit, declarative sentences and explanations. On the technical side the paper also needs a fuller explanation of how fitting of voltage-gated conductances is performed. Finally, a fuller discussion and illustration of the limitations would be appropriate, e.g. explicitly showing cases where further reductions cannot be achieved without fundamentally changing the system. These necessarily exist in a nonlinear system so this is not a shortcoming of the method, rather an important point that some readers may not appreciate.
We have rewritten the sections that were highlighted by the reviewers, in order to resolve the points that were unclear. We have now clearly defined the quantities that we use. We have also explained in more detail how the voltage-gated channels are fitted. To aid with this, we have added a further panel to Figure 1—figure supplement 1. Finally, we have added a paragraph to the Discussion, where we explain possible lines of inquiry if a reduction does not reproduce dynamics of the full model that need to be retained. To illustrate these lines of inquiry, we have added a new supplementary figure (Figure 3—figure supplement 1) in which we perform a simulation similar to that in Figure 3J, K, but leave out compartments in the apical trunk. This reduction is too strong, and the reduced model cannot mimic the generation of a spike burst.
Full reviewer comments are included below for information.
[…] (1) The initial part of the Results section is important for understanding how the software works but is hard to follow. I think more care needs to be taken with defining terms clearly and explaining the logic of the approach in an accessible way.
We have added substantial clarifications to this section.
(2) How the approach deals with voltage-gated ion channels could be made much clearer. E.g. In the subsection “A systematic simplification of complex neuron morphologies”, the text states that GVh,chan depends on the unknown maximal ion channel conductance parameters. How? Why maximal? Voltage-gated channels cannot be maximally activated at all of the voltages tested. I don't understand how to get from here to channel conductances. It could be helpful to make a figure illustrating how fits are obtained for exemplar voltage-gated conductances.
We have explicitly described the elements of the matrix Gvh in the manuscript, so that the parameters that are fitted are now clear. In the Materials and methods – “Quasi-active channels” subsection, we show how the linearized channel currents that go in the matrix Gvh are obtained. It can be seen that they are the product of a ‘maximal conductance parameter’ with a factor that follows from the linearization, and that determines the fraction of that maximal conductance that is open at a given vh. This factor thus changes with vh. Fitting our model simultaneously at a representative set of vh values allows the fit to find a best estimate for the maximal conductance parameter. An additional panel to Figure 1—figure supplement 1 now illustrates how the impedance matrix of the full model, and the inverse of the conductance matrix for the reduced model, changes under different vh. We have furthermore added a subsection to the Materials and methods, titled ”The conductance matrix”, where we explicitly describe how Equation 3 in the main text is obtained. We believe that with our added explanations, it is now clear what the matrix Gvh is and how the maximal conductance parameter is fitted.
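The core of this fitting step, recovering maximal conductance parameters by least squares from linearized responses recorded at a representative set of holding potentials, can be sketched in a few lines of numpy. This is an illustrative toy, not the paper's actual fitting code: the Boltzmann open-fraction functions, the holding potentials, and the conductance values are all assumptions made for the example.

```python
import numpy as np

def open_fraction(vh, v_half, slope):
    # Boltzmann activation curve as a stand-in for the linearization
    # factor: the fraction of the maximal conductance open at a given
    # holding potential vh (hypothetical channel kinetics).
    return 1.0 / (1.0 + np.exp(-(vh - v_half) / slope))

rng = np.random.default_rng(0)

# Representative set of holding potentials (mV) at which the fit is made.
vh_values = np.array([-75.0, -55.0, -35.0, -15.0])

# "True" maximal conductances of two hypothetical channel types (mS/cm^2),
# to be recovered by the fit.
g_max_true = np.array([0.8, 0.3])

# Design matrix: one row per holding potential, one column per channel
# type; each entry is the open fraction at that vh. The linearized
# conductance is the product g_max * open_fraction, so the total
# conductance is linear in the unknown g_max values.
F = np.column_stack([
    open_fraction(vh_values, v_half=-30.0, slope=6.0),
    open_fraction(vh_values, v_half=-50.0, slope=10.0),
])

# "Measured" total linearized conductance at each vh, with small noise.
g_target = F @ g_max_true + 1e-3 * rng.standard_normal(len(vh_values))

# Fitting simultaneously at all holding potentials yields an
# overdetermined linear system, solved in the least-squares sense.
g_max_fit, *_ = np.linalg.lstsq(F, g_target, rcond=None)
```

Because each holding potential weights the unknown maximal conductances differently, stacking several potentials disambiguates channel types that would be indistinguishable at any single voltage.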
(3) The section 'Conditions under which afferent spatial connectivity motifs can be simplified' could also be written more clearly. Concerns are similar to in point 1 above.
We have added additional explanations to this section, describing more precisely the points that may have been unclear. We now motivate this section more explicitly, by stating that up until this section we only considered reductions where the synaptic inputs (or electrode current inputs) were located at the compartment sites; here, we investigate what happens if this is not the case. We also changed the terminology from ‘impedances’ to ‘resistances’. We furthermore explicitly describe why βcurr is close to one (because the transfer impedance zcs is close to the input impedance zcc if the compartment site c is located more proximally than the synaptic input site s), with the help of an additional panel to Figure 1—figure supplement 1. Finally, we have substantially extended the second-to-last paragraph of this section, and were able to describe the precise nature and motivation of our simulation experiments (Figure 4G-J) more clearly.
https://doi.org/10.7554/eLife.60936.sa2
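The claim that the transfer resistance z_cs stays close to the input resistance z_cc when the compartment site c lies proximal to the input site s can be illustrated with a toy three-compartment passive chain; the conductance values below are hypothetical and chosen only to mimic a large somatic leak next to a thin dendrite.

```python
import numpy as np

# Toy passive chain: compartment 0 = soma, 1 = intermediate, 2 = distal tip.
# Leak and coupling conductances in nS (hypothetical values); the resulting
# resistance matrix Z = G^{-1} is then in GOhm.
g_leak = np.array([50.0, 1.0, 1.0])  # large somatic leak, thin dendrite
g_coup = 20.0                        # axial coupling between neighbors

# Conductance matrix of the chain: leak conductances on the diagonal plus
# a graph Laplacian for the axial couplings.
G = np.diag(g_leak) + g_coup * np.array([
    [1.0, -1.0, 0.0],
    [-1.0, 2.0, -1.0],
    [0.0, -1.0, 1.0],
])
Z = np.linalg.inv(G)

# Transfer resistance from the distal tip (s = 2) to the soma (c = 0) is
# close to the somatic input resistance, because the voltage attenuates
# little from the proximal site toward the sealed distal end ...
ratio_proximal = Z[0, 2] / Z[0, 0]   # ~0.87 here

# ... whereas it is far below the input resistance of the distal tip itself.
ratio_distal = Z[0, 2] / Z[2, 2]     # ~0.16 here
```

In this toy network the ratio z_cs/z_cc (and hence a quantity like βcurr) stays near one for a proximal compartment site, even though the distal input resistance is an order of magnitude larger than the somatic one.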
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
This work was supported by the Swiss National Science Foundation (Grant 159872 to TN; Grant 180316 to WS – with F Helmchen), the European Union’s Horizon 2020 Framework Programme (Grant 720270, 785907 and 945539 to WS – Human Brain Project SGA 1–3) and the European Research Council (Grant 682905 to TN). We thank all our lab colleagues for helpful discussions and critical comments on the figures.
- Received: July 10, 2020
- Accepted: January 4, 2021
- Version of Record published: January 26, 2021 (version 1)
© 2021, Wybo et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.