Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses
Abstract
Dendrites shape information flow in neurons. Yet, there is little consensus on the level of spatial complexity at which they operate. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models at any level of complexity. We show that (backpropagating) action potentials, Ca^{2+} spikes, and N-methyl-D-aspartate spikes can all be reproduced with few compartments. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. Furthermore, our methodology fits reduced models directly from experimental data, without requiring morphological reconstructions. We provide software that automates the simplification, eliminating a common hurdle toward including dendritic computations in network models.
Introduction
Morphological neuron models have been instrumental in neuroscience (Segev and London, 2000). Major experimental discoveries, for instance that N-methyl-D-aspartate (NMDA) channels (MacDonald and Wojtowicz, 1982) produce local dendritic all-or-none responses (Schiller et al., 2000; Major et al., 2008), or that dendritic Ca^{2+} spikes mediate coincidence detection between distal inputs and somatic action potentials (APs) (Larkum et al., 1999), have been combined in morphological models to arrive at a consistent picture of dendritic integration: the dendrite is an intricate system of semi-independent subunits (Mel, 1993; Poirazi et al., 2003b; Poirazi et al., 2003a), amenable to dynamic regulation (Poleg-Polsky et al., 2018; Wybo et al., 2019), and able to distinguish specific input patterns (Branco and Häusser, 2010; Laudanski et al., 2014).
Nevertheless, morphological models are not without shortcomings. They are highly complex and consist of thousands of coupled compartments, each receiving multiple nonlinear currents. The parameters of the models, typically fitted with evolutionary algorithms to electrophysiological recordings (Hay et al., 2011; Almog and Korngreen, 2014; Van Geit et al., 2016), number in the tens of thousands. Since recordings can only be obtained at a few dendritic sites, these fits are underconstrained, and thus susceptible to overfitting. Accordingly, the single-neuron fitting challenge, where model performance was measured on unseen spike trains, was not won by a biophysical model, but by an abstract spiking model (Gerstner and Naud, 2009; DiLorenzo and Victor, 2013). Finally, many network-level observations can be explained without morphological models (Gerstner et al., 2012).
These shortcomings underline the need to find the essential computational repertoire of a neuron: the set of computations needed to understand brain functions such as learning and memory. Model simplification is crucial in this endeavor, as it elucidates the lowest level of complexity at which computational features are preserved. Conceptually, the simplification thus extracts the essential elements required for the computation from the underlying biophysics. Simulation-wise, the reduced model requires few resources and can thus be integrated in large-scale networks. Experimentally, the reduced model likely leads to a well-constrained fit.
Past simplification efforts can be grouped in two categories: approaches that use traditional compartments, but with adapted parameters (Pinsky and Rinzel, 1994; Destexhe, 2001; Tobin et al., 2006; Bahl et al., 2012; Marasco et al., 2013; Amsalem et al., 2020), and approaches that rely on more advanced mathematical techniques (Kellems et al., 2009; Kellems et al., 2010; Wybo et al., 2013; Wybo et al., 2015). Many authors apply ad hoc morphological simplifications and then adjust model parameters to conserve geometrical quantities, such as surface area (Davison et al., 2000; Hendrickson et al., 2011; Marasco et al., 2013), or electrical quantities, such as attenuation (Destexhe, 2001) and transfer impedance (Amsalem et al., 2020). Other authors propose two-compartment models whose parameters are adjusted to reproduce certain response properties (Pinsky and Rinzel, 1994; Naud et al., 2014). Nevertheless, these approaches often lack the flexibility to be adapted to a wide range of dendritic computations. Furthermore, particular afferent spatial connectivity motifs might be lost, so that essential computations are not captured by such a reduction. Hence, in a network model, the computational relevance of these motifs might unknowingly be ignored. Advanced mathematical techniques, on the other hand, are very flexible, as they can incorporate the response properties of the morphology implicitly. (Wybo et al., 2013 and Wybo et al., 2015 propose a system of convolutions whose kernels capture the passive properties of the compartmental model, whereas Kellems et al., 2009 and Kellems et al., 2010 linearly transform the compartmental model into a low-dimensional basis.) However, these techniques are not supported by standard simulation software, such as NEURON (Carnevale and Hines, 2004), a considerable hurdle toward their integration in the canonical neuroscience toolset.
Here, we introduce a simplification method with the flexibility of advanced mathematical techniques, but using traditional compartments. The approach accommodates any morphological model in public repositories (McDougal et al., 2017) and commonly used ion channels (Podlaski et al., 2017). The reduced models represent the optimal approximation to the dendritic resistance matrix, evaluated at any spatial resolution of choice. We fit ion channels by approximating the quasi-active resistance matrices (Koch and Poggio, 1985). All our fits are uniquely solvable linear least-squares problems. The obtained models extrapolate well to nonlinear dynamics, and reproduce backpropagating APs (bAPs), Ca^{2+} spikes (Larkum et al., 1999), and NMDA spikes (Schiller et al., 2000; Major et al., 2008) with few compartments. Additionally, we investigate whether a dendritic tree with given afferent spatial connectivity motifs can be simplified by ablating subtrees or branches and grouping synapses in the next proximal compartment. We find that effective weight-rescale factors for synapses can be computed if temporal conductance fluctuations stay below a limit that depends on the difference in input resistance between the ablated branch and the next proximal dendrite. Under application of these factors, voltage responses in the simplified tree are preserved. Finally, we demonstrate that our approach can fit reduced models to experimental recordings, without the need to reconstruct full morphologies. We have created a Python toolbox (NEural Analysis Toolbox – NEAT – https://github.com/unibe-cns/NEAT) (copy archived at swh:1:rev:1cb15f36aa0a764105348541d046c85ef38e21ee) that implements this method together with an extension to NEURON to simulate the obtained models.
Results
A systematic simplification of complex neuron morphologies
Our simplification strategy fits compartments with voltage-gated channels to a reduced set of dendritic locations of interest (Figure 1A, left-middle). The locations, together with the morphology, provide a tree structure for the reduced model (Figure 1A, middle-right) where each node corresponds to a traditional compartment and each edge to a coupling conductance. The fit requires that the reduced model responds similarly to perturbative current steps as the full neuron, at the set of chosen locations (Figure 1B). We consider perturbations around any spatially uniform holding potential ${v}_{h}$. Experimentally, ${v}_{h}$ may be reached by injecting a constant current ${i}_{h}$, and thus ${v}_{h}$ is the effective equilibrium potential under this current injection. In models, we do not need to know ${i}_{h}$; we simply assume that ${v}_{h}$ is the equilibrium potential around which to linearize the model. Note that if ${i}_{h}=0$, ${v}_{h}$ is the resting membrane potential.
Suppose we have chosen $N$ locations. Let $\delta \mathbf{i}$ be a vector describing perturbative input currents to each of those $N$ locations. The linearized voltage response of the full neuron, at those locations, is given in the Fourier domain for an input of a particular frequency $\omega$ by

$$\delta \mathbf{v}(\omega )={Z}_{{v}_{h}}(\omega )\,\delta \mathbf{i}(\omega ), \qquad \text{(Equation 1)}$$
with ${Z}_{{v}_{h}}$ the quasi-active (Koch and Poggio, 1985) $N\times N$ impedance matrix. In experiments, ${Z}_{{v}_{h}}$ is extracted from $\delta \mathbf{v}$ measured in response to $\delta \mathbf{i}$. In models, ${Z}_{{v}_{h}}$ is computed algorithmically from the full morphology, the parameters of the passive currents, and the dynamics of the voltage-gated ion channels (Koch and Poggio, 1985). For our reduction method, we consider perturbative current steps, so that after an equilibration period, we obtain a steady-state voltage response $\delta \mathbf{v}$. Note that this corresponds to evaluating (Equation 1) at zero frequency, and that the corresponding matrix ${Z}_{{v}_{h}}(\omega =0)$ contains the input and transfer resistances. Throughout the manuscript, we maintain the notation that $z$ without argument signifies $z(\omega =0)$, representing an individual input or transfer resistance, and that $Z$ signifies $Z(\omega =0)$, representing the resistance matrix.
Linearizing the reduced compartmental model around a holding potential (see Materials and methods — The conductance matrix) yields an $N\times N$ conductance matrix ${G}_{{v}_{h}}$, with diagonal elements

$${[{G}_{{v}_{h}}]}_{ii}={g}_{Li}+\sum _{n\in {\mathcal{N}}_{i}}{g}_{C,in}+\sum _{d\in {\mathcal{I}}_{i}}{\overline{g}}_{di}\,{l}_{d}({v}_{h}), \qquad \text{(Equation 2)}$$
where ${g}_{Li}$ is the unknown leak conductance of the $i$th compartment, ${\mathcal{N}}_{i}$ the set of nearest-neighbor compartments, and ${g}_{C,in}$ the unknown coupling conductance between compartment $i$ and compartment $n\in {\mathcal{N}}_{i}$. The second sum runs over the set ${\mathcal{I}}_{i}$ of all linearized currents of ion channels present in compartment $i$, where ${\overline{g}}_{di}$ is the unknown maximal conductance of ion channel $d$ and ${l}_{d}({v}_{h})$ is a factor that follows from linearizing the dynamics of channel $d$ around the holding potential ${v}_{h}$ (see Materials and methods – Quasi-active channels). Thus, the unknowns ${g}_{Li}$, ${g}_{C,in}$ for $n\in {\mathcal{N}}_{i}$, and ${\overline{g}}_{di}$ for $d\in {\mathcal{I}}_{i}$ are to be fitted for each compartment, and the factors ${l}_{d}({v}_{h})$ are known and determined by the ion-channel dynamics. The $ij$th off-diagonal element of ${G}_{{v}_{h}}$ is $-{g}_{C,ij}$ – the negative of the coupling conductance between compartments $i$ and $j$ – if $i$ and $j$ are nearest-neighbor compartments on the reduced tree structure, and zero otherwise. The conductance matrix relates the reduced model’s voltage response to the perturbative input current steps through

$$\delta \mathbf{i}={G}_{{v}_{h}}\,\delta \mathbf{v}. \qquad \text{(Equation 3)}$$
The full neuron and reduced model will behave similarly for all possible perturbative input steps $\delta \mathbf{i}$ if their responses $\delta \mathbf{v}$ match. From (Equation 1) and (Equation 3), it follows that ${Z}_{{v}_{h}}$ should be the inverse of ${G}_{{v}_{h}}$. Consequently, we require that multiplying the known ${Z}_{{v}_{h}}$ (measured or calculated) by the parametric ${G}_{{v}_{h}}$ yields the identity

$${Z}_{{v}_{h}}\,{G}_{{v}_{h}}=\mathbb{1}. \qquad \text{(Equation 4)}$$
From (Equation 2), it can be seen that (Equation 4) is linear in the parameters to be fitted (leak, coupling, and maximal ion-channel conductances of the reduced compartments). Consequently, (Equation 4) can be cast into a least-squares problem and solved accurately (Figure 1C, Figure 1—figure supplement 1A).
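As a concrete illustration of this linearity, the passive fit can be written as a standard least-squares problem. The following minimal sketch is our own illustration (not code from the NEAT toolbox) and assumes a linear chain of compartments; each unknown conductance contributes one basis matrix to the parametric conductance matrix, and the requirement that the product with the resistance matrix equals the identity is solved with `numpy.linalg.lstsq`:

```python
import numpy as np

def fit_passive_chain(Z):
    """Fit leak and coupling conductances of an N-compartment chain so
    that the conductance matrix G satisfies Z @ G = identity.

    Hypothetical minimal sketch: assumes a chain topology (compartment
    i coupled to i+1) and a known N x N resistance matrix Z.
    Unknowns x = [g_L1..g_LN, g_C(1,2)..g_C(N-1,N)].
    """
    N = Z.shape[0]
    n_params = N + (N - 1)
    # Each entry of Z @ G is linear in the unknowns, so build one
    # column of the design matrix per unknown conductance.
    A = np.zeros((N * N, n_params))
    for p in range(n_params):
        G_p = np.zeros((N, N))
        if p < N:                      # leak conductance of compartment p
            G_p[p, p] = 1.0
        else:                          # coupling between k and k+1
            k = p - N
            G_p[k, k] += 1.0
            G_p[k + 1, k + 1] += 1.0
            G_p[k, k + 1] = G_p[k + 1, k] = -1.0
        A[:, p] = (Z @ G_p).ravel()
    b = np.eye(N).ravel()              # target: Z @ G = identity
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:N], x[N:]                # leak, coupling conductances
```

For a resistance matrix generated by an actual passive chain, this system is consistent and the solver recovers the underlying conductances exactly; for measured or truncated matrices it returns the least-squares optimum.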
Since the linearized ion channel activation ${l}_{d}({v}_{h})$ depends on the holding potential, ${Z}_{{v}_{h}}$ changes with ${v}_{h}$ (Figure 1D). The fit must disentangle the changes in ${Z}_{{v}_{h}}$ induced by the various channels. In models, we first block all voltage-gated ion channels in the full and reduced models and fit ${G}_{\text{pas}}$ to ${Z}_{\text{pas}}$ according to (Equation 4), and thus obtain leak and coupling conductances for each compartment. Then, we unblock one ion channel at a time and decompose the conductance matrix into ${G}_{{v}_{h}}={G}_{\text{pas}}+{G}_{{v}_{h},\text{chan}}$, with ${G}_{{v}_{h},\text{chan}}$ a diagonal matrix containing the conductances of the unblocked channel, linearized around ${v}_{h}$, at each compartment. Thus ${G}_{{v}_{h},\text{chan}}$ depends linearly on the unknown maximal ion-channel conductance parameters. With ${Z}_{{v}_{h}}$ and ${G}_{\text{pas}}$ known, we optimize these maximal conductances, so that the left-hand side of

$${Z}_{{v}_{h}}\,{G}_{{v}_{h},\text{chan}}=\mathbb{1}-{Z}_{{v}_{h}}\,{G}_{\text{pas}} \qquad \text{(Equation 5)}$$
matches the right-hand side, and that it does so for multiple holding potentials (we chose ${v}_{h}=-75,-55,-35$, and $-15$ mV, Figure 1D, Figure 1—figure supplement 1D). Thus, we obtain an overdetermined system of equations in the unknown maximal channel conductances at each compartment and compute its solution in the least-squares sense.
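The stacking over holding potentials can be sketched as follows. This is our own hypothetical single-channel illustration (not NEAT code): `Z_list` holds the resistance matrices at the different holding potentials, `l_list` the known linearization factors ${l}_{d}({v}_{h})$, and the unknowns are one maximal conductance per compartment:

```python
import numpy as np

def fit_channel_conductances(Z_list, l_list, G_pas):
    """Fit maximal channel conductances gbar (one per compartment) so
    that Z_vh @ (G_pas + diag(gbar * l(vh))) equals the identity,
    stacked over all holding potentials and solved in the
    least-squares sense."""
    N = G_pas.shape[0]
    A_blocks, b_blocks = [], []
    for Z, l in zip(Z_list, l_list):
        # (Z @ diag(gbar * l))[i, j] = Z[i, j] * l * gbar[j]:
        # linear in gbar, one design-matrix column per compartment.
        A = np.zeros((N * N, N))
        for i in range(N):
            for j in range(N):
                A[i * N + j, j] = Z[i, j] * l
        A_blocks.append(A)
        b_blocks.append((np.eye(N) - Z @ G_pas).ravel())
    gbar, *_ = np.linalg.lstsq(np.vstack(A_blocks), np.hstack(b_blocks),
                               rcond=None)
    return gbar
```

Because each holding potential contributes $N^2$ equations for only $N$ unknowns, the stacked system is overdetermined, exactly as described above.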
Having fixed the steady-state behavior of the reduced model, we are left with the capacitance of each compartment to match the temporal dynamics of the full model as closely as possible. In our reductions, the capacitance of each compartment is thus a parameter to be fitted. To do so, we found it sufficient to consider passive membrane dynamics. Blocking all voltage-gated ion channels in the reduced model (${G}_{{v}_{h},\text{chan}}=0$), temporal voltage responses follow

$$\text{diag}(\mathbf{c})\,\frac{\mathrm{d}}{\mathrm{d}t}\,\delta \mathbf{v}(t)=-{G}_{\text{pas}}\,\delta \mathbf{v}(t), \qquad \text{(Equation 6)}$$
with $\mathbf{c}$ a vector containing the unknown capacitance of each compartment. We require that the voltage decay back to rest, as described by (Equation 6), matches the voltage decay in the full neuron with all active channels blocked, at the $N$ chosen locations. According to linear dynamical systems theory (Strogatz, 2000), this decay can be decomposed into a sum of exponentially decaying eigenmodes, each with an associated eigenvalue ${\alpha}_{k}$ and eigenvector ${\mathit{\varphi}}_{k}$. The former gives the exponential time scale ${\tau}_{k}=-1/{\alpha}_{k}$ of the decay and the latter the spatial profile of the mode. The most prominent of these modes has the largest time scale ${\tau}_{0}=-1/{\alpha}_{0}$, and primarily models the voltage decay back to rest through transmembrane currents (Holmes et al., 1992). In experiments, we extract this mode by fitting an exponential to the voltage decay. In full models, we compute the eigenvalue ${\alpha}_{0}$ and eigenvector ${\mathit{\varphi}}_{0}$ – restricted to the $N$ chosen locations – with the separation-of-variables method (Major et al., 1993). In the reduced model, eigenmodes are found as the eigenvalues and eigenvectors of the matrix $S=-\text{diag}{(\mathbf{c})}^{-1}{G}_{\text{pas}}$. To fit the capacitances, we require that $S$ has ${\alpha}_{0}$ as eigenvalue and ${\mathit{\varphi}}_{0}$ as corresponding eigenvector:

$$S\,{\mathit{\varphi}}_{0}={\alpha}_{0}\,{\mathit{\varphi}}_{0}. \qquad \text{(Equation 7)}$$
This system of equations is linear in the reciprocals of the capacitances entering $S$ and can thus be solved efficiently.
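Under the sign convention that the passive system matrix is $S=-\text{diag}(\mathbf{c})^{-1}{G}_{\text{pas}}$, the eigenmode constraint decouples into one linear equation per compartment. A minimal sketch (our own hypothetical helper, assuming the target eigenvector has no zero entries):

```python
import numpy as np

def fit_capacitances(G_pas, alpha0, phi0):
    """Choose capacitances c so that S = -diag(c)^-1 @ G_pas has
    (alpha0, phi0) as an eigenpair.

    Rearranging S @ phi0 = alpha0 * phi0 gives, per compartment i,
        -(G_pas @ phi0)[i] = alpha0 * c[i] * phi0[i],
    which is linear in c (equivalently, in 1/c)."""
    return -(G_pas @ phi0) / (alpha0 * phi0)
```

Given the slowest eigenmode of a passive system, this recovers the capacitances that place that mode exactly in the reduced model's spectrum.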
To accurately reproduce the spatiotemporal voltage responses of the full model, the reduced model must approximate the ‘impedance kernels’ $z(t)$ (Wybo et al., 2015), the inverse Fourier transforms of the frequency-dependent elements of the impedance matrix. Despite only fitting the largest time scale $-1/{\alpha}_{0}$, we accurately reproduce kernels at all times (Figure 1E,F, Figure 1—figure supplement 1B).
Unless a large number of closely spaced compartment sites is retained on the morphology, our reductions replace the true axial currents flowing along the dendritic tree, which are shaped by the distribution of ion channels in the dendritic membrane, with a strong abstraction: a single coupling conductance between compartments that may be far apart. Hence, it is not guaranteed that the resting voltage of the reduced model equals the resting voltage of the full model, at the $N$ chosen locations. We ensure that it does by fitting the leak reversal of each compartment. The product of leak conductance and reversal represents a constant current for each compartment, and thus does not influence model dynamics beyond fixing its resting voltage. To fit the leak reversals, we evaluate all ion-channel currents in the reduced model at the full model’s resting voltage and require that the temporal voltage derivatives are zero. The obtained equations are linear in the reduced model’s leak reversals, and hence can be solved efficiently. Note that our methodology does not guarantee that the obtained reversals are within physiological limits. Thus, they should be seen purely as fit parameters that determine the resting membrane potential.
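The leak-reversal fit likewise amounts to one linear equation per compartment. A minimal sketch with hypothetical helper names (`i_chan` stands for the total ion-channel current into each compartment, evaluated at the full model's resting potentials `v_eq`):

```python
import numpy as np

def fit_leak_reversals(v_eq, g_L, G_coupling, i_chan):
    """Set each leak reversal e_L so that the reduced model rests at
    the full model's resting potentials v_eq.

    Requiring dv_i/dt = 0 at v_eq gives, per compartment i,
        g_L[i] * (e_L[i] - v_eq[i]) + i_axial[i] + i_chan[i] = 0,
    which is linear in e_L.  G_coupling holds the coupling
    conductances (symmetric, zero diagonal)."""
    # axial current into compartment i: sum_j g_C[i, j] * (v_j - v_i)
    i_axial = G_coupling @ v_eq - G_coupling.sum(axis=1) * v_eq
    return v_eq - (i_axial + i_chan) / g_L
```

As noted above, nothing constrains the returned reversals to a physiological range; they only pin the resting state.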
Reduced models match the voltage response of their full counterparts
We demonstrated the reduction on two computations that require accurate spatiotemporal interaction. We reproduced sequence discrimination in an L2/3 pyramidal cell model (Branco and Häusser, 2010), where a neuron responds more strongly to a centripetal sequence of inputs than to a centrifugal one, by only retaining compartments on a single branch (Figure 2A). We also reproduced input-order detection (Torben-Nielsen and Stiefel, 2009), where a neuron responds more strongly when one input arrives before the other, by only retaining compartments at the soma and the two input sites (Figure 2B).
We then tested our approach with nonlinear currents concentrated at discrete points (hotspots) along the morphology. In two biophysical models – an L5 neocortical pyramid (Hay et al., 2011; Figure 2C) and a Purkinje cell (Chen et al., 2013; Miyasho et al., 2001; Figure 2D) – we removed active dendritic channels, but added their total conductance at rest to the passive leak (we term this the ‘passified’ model), while retaining all active channels at the soma. The soma, with its AP-generating channels, naturally forms such a nonlinear hotspot. We implemented dendritic hotspots by clustering sets of excitatory (α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid [AMPA] + NMDA) and inhibitory (γ-aminobutyric acid [GABA]) synapses at randomly selected points on the morphology. Both the reduced pyramidal cell model and the reduced Purkinje cell model accurately reproduced the somatic and dendritic membrane voltage traces of their full counterparts (Figure 2C,D, middle panels). Furthermore, 97% and 89% of APs coincided within a 6 ms window, respectively (Jolivet et al., 2008; Figure 2C,D, right panels).
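Spike-timing agreement between full and reduced models can be quantified in several ways; the percentages above follow the coincidence measure of Jolivet et al., 2008. As a simplified stand-in (our own sketch, not the exact published measure), a greedy count of full-model spikes matched by a reduced-model spike within the 6 ms window might look like this:

```python
def coincidence_fraction(spikes_full, spikes_reduced, window=6.0):
    """Fraction of full-model spike times (sorted, in ms) matched by a
    reduced-model spike within +/- window ms, greedy left-to-right
    matching so each reduced spike is used at most once."""
    matched, j = 0, 0
    for t in spikes_full:
        # skip reduced spikes that are already too early to match t
        while j < len(spikes_reduced) and spikes_reduced[j] < t - window:
            j += 1
        if j < len(spikes_reduced) and abs(spikes_reduced[j] - t) <= window:
            matched += 1
            j += 1
    return matched / len(spikes_full) if spikes_full else 1.0
```

The full measure additionally corrects for chance coincidences at a given firing rate, which this sketch omits.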
Models with passive dendritic membranes but nonlinear synapse clusters can already reproduce much of the canonical computational repertoire of dendrites. Clusters of AMPA+NMDA synapses at the distal tips of basal dendrites can make neurons function as twolayer networks (Mel, 1993; Poirazi et al., 2003b; Poirazi et al., 2003a). Such reduced models also exhibit the same robustness to input noise as the full models (PolegPolsky, 2019), and can include modulation of the cooperativity between AMPA+NMDA clusters through compartments with shunting conductances (Wybo et al., 2019). Furthermore, because our reduced models reproduce dendritic input impedance properties, long timescale NMDA currents activate strongly in distal compartments as high distal input impedance helps to overcome their voltagedependent Mg^{2+} block (Jahr and Stevens, 1990a; Jahr and Stevens, 1990b), whereas short timescale AMPA currents dominate in more proximal compartments. Thus passive models with nonlinear synapse clusters also capture the shift in computational strategy from a proximal temporal code to a distal rate code (Branco and Häusser, 2011).
What level of spatial complexity reproduces characteristic responses generated by somatic and dendritic ion channels? These responses arise through the interplay of the integrative properties of the morphology – captured in the impedance kernels $z(t)$ – and the channel dynamics. If a full model (here the L2/3 pyramid) is reduced to a single somatic compartment (Figure 3A), the impedance kernel has only one time scale – the membrane time scale (Figure 3B, top – purple). The kernel of the full model contains additional shorter time scales (Figure 3B, top – gray), leading to a faster AP onset than in the reduced model (Figure 3B, bottom). This effect is thus a fundamental limitation of the single-compartment reduction. Extending the compartmental model with four nearby dendritic sites (Figure 3A) adds fast components to the somatic impedance kernel (Figure 3B, top – green) that model the spread of charge to the other compartments. These components increase AP amplitude and decrease AP delay (Figure 3C), leading to a better match with the full model.
Dendritic ion channels support the backpropagation of APs (Stuart et al., 1997). For each basal branch of at least 150 μm in the L2/3 pyramid, we derived reduced models with three compartments (at 50, 100, and 150 μm from the soma) and with one compartment. We compared bAP amplitudes and delays relative to the somatic AP (Figure 3E) and found that even models with a single distal compartment support active backpropagation (Figure 3E,F). In the apical dendrite of the L5 pyramid (Figure 3G), we found that a distance step of 100 μm between compartments was required to support bAPs to the same degree as in the full model (Figure 3H,I).
Finally, we considered the pairing of a somatic current pulse with a dendritic postsynaptic potential waveform. We found that an 11-compartment reduction (Figure 3J) – with compartments spaced at 100 μm along the apical trunk to support bAPs up to the Ca^{2+} hot zone – reproduced the Ca^{2+}-mediated coincidence detection mechanism (Larkum et al., 1999; Pérez-Garci et al., 2013).
Conditions under which afferent spatial connectivity motifs can be simplified
Our method allows us to reduce morphological complexity by removing any branch or subtree without affecting the integrative properties of other branches or subtrees. Up until now, we have only considered reductions where the to-be-removed branch or subtree does not receive direct synaptic input. However, if this branch or subtree does receive synaptic input, the morphological reduction also simplifies the afferent spatial connectivity motifs targeting it, as synapses have to be grouped at the nearest proximal intact compartment. We assess whether the computational repertoire of the neuron remains intact under such a simplification, or which elements of the repertoire might be lost. We study two proxies for this repertoire: the voltage difference between full and reduced models at the intact compartments – to assess whether the output of local computations in the ablated branch or subtree can be recovered – and the difference between full and reduced models in NMDA-spike threshold at the nearest proximal intact compartment (quantified as the smallest number of AMPA+NMDA synapses with a weight of 1 nS that need to be activated simultaneously to elicit an NMDA spike) – to assess whether local integrative properties at the intact compartments can also be retained under the shift of synapses. We allow ourselves the liberty of rescaling the weights of the shifted synapses by a temporally constant factor $\beta$ (Figure 4A). Which spatial synapse distributions admit simplification in this sense, and under which input conditions?
For a single, current-based synapse, shifted from its original location on the ablated branch to the next proximal compartment, we analytically compute the weight-rescale factor and find ${\beta}_{\text{curr}}={z}_{cs}/{z}_{cc}$, with ${z}_{cs}$ the transfer resistance from synapse site $s$ to compartment site $c$ and ${z}_{cc}$ the input resistance at $c$. Since ${z}_{cs}<{z}_{cc}$ (Koch, 1998), the weight-rescale factor weakens the shifted synapse, so that it matches the attenuation of its distal counterpart. Nevertheless, when $s$ is located more distally on the same branch or subtree as $c$, the transfer resistance ${z}_{cs}$ is often close to the input resistance ${z}_{cc}$ (Figure 1—figure supplement 1E). Thus, ${\beta}_{\text{curr}}$ is often close to one, and hence its effect is negligible (Figure 4B,C). For a conductance-based synapse, rescaling weights by ${\beta}_{\text{curr}}$ does not accurately fit the full model (Figure 4B). We decompose the synaptic conductance $g(t)$ as $g+\delta g(t)$, with $g$ the temporal average and $\delta g(t)$ the fluctuations around $g$. We then analytically compute (see Materials and methods — Synaptic weight-rescale factors) that the weight-rescale factor for a conductance-based synapse

$${\beta}_{\text{cond}}=\frac{{\beta}_{\text{curr}}}{1+\mathrm{\Delta}z\,g} \qquad \text{(Equation 8)}$$
recovers the full model’s voltage if

$$\mathrm{\Delta}z\,\delta g(t)\ll 1+\mathrm{\Delta}z\,g, \qquad \text{(Equation 9)}$$
where $\mathrm{\Delta}z={z}_{ss}-{z}_{cc}$ is the difference between input resistance ${z}_{ss}$ at the original synapse site $s$ and input resistance ${z}_{cc}$ at the nearest proximal compartment site $c$. Multiplying the synaptic weights by ${\beta}_{\text{cond}}$ weakens the synapse so that it matches the shunt effect of its distal counterpart, and so that the voltage at all compartments more proximal than $c$ and in their respective side branches is reproduced (Figure 4D).
If $s$ and $c$ are close, $\mathrm{\Delta}z$ is small and (Equation 9) is satisfied for plausible conductance fluctuations. Thus, shifting the synapse to the next proximal compartment keeps the voltage response intact. If the separation between $s$ and $c$ increases, $\mathrm{\Delta}z$ increases accordingly, and the magnitude of tolerated conductance fluctuations shrinks (Equation 9). Thus, for a given magnitude of $\delta g(t)$, it is possible that a thick branch can be removed, but not a thin branch, as the input resistance increase in the former is much more gradual than in the latter. Nevertheless, (Equation 9) suggests that even for large $\mathrm{\Delta}z$ the full model’s voltage can be recovered by downscaling synaptic weight by ${\beta}_{\text{cond}}$, but only if $\delta g(t)\ll g$. A tonic level of AMPA and/or GABA activation, as in the high-conductance state (Destexhe et al., 2003), can implement this input condition. For NMDA synapses, this input condition does not hold, as NMDA spikes arise through a strong, transient conductance increase. However, when a cluster of such synapses – to be moved to a proximal site – is considered in isolation, a workaround can be found by applying constant rescale factors not just to the synaptic weight, but also to the threshold and width of the Mg^{2+} block, as well as to the reversal (see Materials and methods — Synaptic weight-rescale factors). These factors recover NMDA-spike shape (Figure 4E) and threshold (Figure 4F) when the cluster is moved from $s$ to $c$.
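The effect of the rescale factor can be checked in a toy steady-state model. In the sketch below (our own illustration, with arbitrary parameter values), a leakless distal compartment $s$ is attached to $c$ by a coupling conductance, so that ${z}_{cs}={z}_{cc}$ and the conductance-based rescale factor reproduces the steady-state voltage at $c$ exactly:

```python
import numpy as np

# Two-compartment passive sketch: index 0 is the proximal compartment
# c (leak g_L), index 1 the leakless distal compartment s, coupled by
# the axial conductance g_a.  All values are arbitrary (uS, mV).
g_L, g_a = 0.05, 0.02
g, e = 0.01, 60.0              # synaptic conductance and reversal

G = np.array([[g_L + g_a, -g_a], [-g_a, g_a]])
Z = np.linalg.inv(G)
z_cc, z_cs, z_ss = Z[0, 0], Z[0, 1], Z[1, 1]

beta_curr = z_cs / z_cc        # current-based rescale factor
dz = z_ss - z_cc
beta_cond = beta_curr / (1.0 + dz * g)   # conductance-based factor

# full model: conductance synapse at s, solved at steady state
v_full = np.linalg.solve(G + np.diag([0.0, g]), np.array([0.0, g * e]))
# reduced model: branch ablated, rescaled synapse moved to c
v_red = beta_cond * g * e / (g_L + beta_cond * g)
```

With a finite leak at $s$, ${z}_{cs}<{z}_{cc}$ and the match at $c$ becomes approximate, consistent with the condition discussed above.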
Our analytical weight-rescale factor ${\beta}_{\text{cond}}$ treated reductions where a single site $s$ was moved to $c$. We now consider the ablation of a whole subtree and the shift of all its synapses to $c$ (Figure 4G). In our simulations, we distributed 100 AMPA and 100 GABA synapses on the to-be-ablated subtree, and activated them with a Poisson process with fixed rate. We studied how the reduction influenced the local integrative properties at the intact compartment $c$. As a proxy for those integrative properties, we checked the shape of the waveform elicited by activating AMPA+NMDA synapses at $c$ in a short burst (Figure 4H), and we quantified the threshold for NMDA-spike generation. We found that neither shape nor threshold were preserved by the analytical weight-rescale factors ${\beta}_{\text{cond}}$, derived for the case where only a single input site is present on the subtree. We therefore numerically extended the derivation of the weight-rescale factors to multi-site inputs (see Materials and methods – Synaptic weight-rescale factors). These extended weight-rescale factors ${\beta}_{\text{cond}}^{\text{ext}}$ reproduced local voltage (Figure 4H) and NMDA-spike threshold (Figure 4I). We also investigated whether ${\beta}_{\text{cond}}^{\text{ext}}$ can still be conceptualized as depending on $\mathrm{\Delta}{z}_{\text{avg}}\,{g}_{\text{avg}}$ – the product of (1) the difference in average input resistance between synapses on the to-be-ablated subtree and the compartment site and (2) the average conductance load exerted by those synapses. To do so, we activated the 100 AMPA and 100 GABA synapses on the to-be-ablated subtree under a wide range of input conditions (see Materials and methods – Simulation-specific parameters), and measured the error in threshold for NMDA-spike generation at $c$. We found that this error depended strongly on $\mathrm{\Delta}{z}_{\text{avg}}\,{g}_{\text{avg}}$ and remained small for $\mathrm{\Delta}{z}_{\text{avg}}\,{g}_{\text{avg}}\ll 1$ (Figure 4J).
In conclusion, we can now assess whether given spatial synapse motifs on branches or subtrees admit simplification by shifting the synapses to the nearest proximal compartment site and ablating the branch or subtree. We find that reduction in this sense is possible for any motif if fluctuations are small, so that $\delta g(t)\ll g$, by applying temporally constant weight-rescale factors. Otherwise, the synapses must be located sufficiently close to the compartment site, so that $\mathrm{\Delta}z$ is small and $\mathrm{\Delta}z\,\delta g$ remains below $1+\mathrm{\Delta}z\,g$ for all relevant activation levels.
Active and passive reduced dendrites under synaptic bombardment
We next investigated how many compartments are needed to reproduce somatic and dendritic responses under synaptic bombardment and quantified the contribution of active dendritic Na^{+}, Ca^{2+}, and K^{+} channels. We mimicked the in vivo state in the basal dendrites of the L2/3 pyramid by distributing 200 AMPA and 200 GABA synapses receiving Poisson inputs and 20 clusters of AMPA+NMDA synapses receiving bursts of inputs (Figure 5A). We sampled 10 different sets of metaparameters governing synaptic activation (Poisson and burst rates and synaptic weights, see Materials and methods — Simulation-specific parameters), resulting in output spike rates between 0 and 8 Hz (Figure 5B). We derived reduced models, once based on the full model with all active channels (‘active reduced dendrite’ in Figure 5C) and once based on the passified full model (‘passive reduced dendrite’ in Figure 5C). We distributed increasing numbers of compartment sites on the basal branches and measured somatic and dendritic voltage responses for the active full model and for the reduced models with active and passive dendrites (Figure 5C; Figure 5—figure supplement 1 for dendritic traces). Spike coincidence (Figure 5D), subthreshold somatic voltage error (Figure 5E), and dendritic voltage error (Figure 5F) all improved significantly when compartment numbers increased from 0 (point-neuron – top row in Figure 5C) to 2 per 100 μm. In the presence of active channels, the error kept decreasing up to three compartments per 100 μm (Figure 5E,F). Spike coincidence factors were consistently ∼10% higher than in the passive case (Figure 5D).
Fitting reduced models directly to experimental data
Traditionally, creating a morphological neuron model involves a reconstruction of the morphology and an evolutionary fit of the electrical parameters (Hay et al., 2011; Almog and Korngreen, 2014; Van Geit et al., 2016). Our method allows skipping these resource-intensive steps (Figure 6A). We adapt our method to a common experimental paradigm in which hyper- and depolarizing current steps are injected under application of various ion-channel blockers (Marti Mengual et al., 2020). We extract resistance matrices and holding potentials from the voltage step amplitudes (Figure 6B), and the time scale of the largest eigenmode from the average voltage decay back to rest (Figure 6C). If the modeling goal is a reduced model, the traditional reconstruction and subsequent reduction both introduce errors (Figure 6D). Thus, we find that while our reduction of the full model is faithful within the expected accuracy margin (Figure 6E), fitting a reduced model directly to the experimental traces is more accurate (Figure 6F).
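The two extraction steps can be sketched as follows. These are our own hypothetical helpers: `dv[i, j]` is the steady-state voltage deflection at recording site $i$ for a current step of size `di[j]` injected at site $j$, and the time-scale fit is a simple log-linear regression on the decay:

```python
import numpy as np

def extract_resistances(dv, di):
    """Steady-state deflections dv[i, j] (mV) in response to current
    steps di[j] (nA) give the resistance matrix entries
    z_ij = dv[i, j] / di[j]."""
    return dv / di[None, :]

def extract_slowest_timescale(t, v_decay):
    """Estimate the slowest time scale tau0 from the voltage decay
    back to rest (v_decay, baseline-subtracted and positive) via a
    log-linear least-squares fit of a single exponential."""
    mask = v_decay > 0
    slope, _ = np.polyfit(t[mask], np.log(v_decay[mask]), 1)
    return -1.0 / slope
```

In practice one would average the decay over trials and restrict the fit to its late phase, where the slowest eigenmode dominates.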
Discussion
We have presented a flexible yet accurate method to construct reduced neuron models from experimental data and morphological reconstructions. The method consists of linear parameter fits at holding potentials that are informative for the full nonlinear dynamics. First, we derive leak and coupling conductances from the passified version of the full model. Second, we fit the maximal conductances of the active ion channels, so that the reduced model, at various holding potentials, responds to input perturbations similarly to the full model. Third, we fit the capacitances to reproduce spatiotemporal integrative properties. Finally, we ensure that resting membrane potentials in the full and reduced models match. The resulting method can be adapted to a wide variety of use cases.
A fundamental use case is deriving reduced models that retain specific elements of the dendritic computational repertoire, to explore how those dendritic computations may improve network function. To that end, one can apply our method to compartments placed at input sites needed for the computation of interest. For instance, if a computation requires independent computational subunits (Poirazi et al., 2003b; Kastellakis et al., 2016), one would place compartments at the dendritic tips. If one wants to modulate cooperativity of excitatory inputs at distal sites, one would place compartments with shunting conductances at the bifurcations between the distal sites (Wybo et al., 2019). Detecting input sequences (Branco and Häusser, 2010) or implementing a dendritic integration gradient (Branco and Häusser, 2011) would require placing two or more compartments along the branches of interest. For many of these computations, a passive morphology with nonlinear (NMDA) synapses suffices. Furthermore, our method can be combined with abstract models of AP generation (Pozzorini et al., 2013; Pozzorini et al., 2015).
A second use case is constructing models directly from electrophysiological recordings. Our method avoids the labor-intensive detour of reconstructing the morphology and optimizing model parameters with an evolutionary algorithm (Marti Mengual et al., 2020). In combination with advances in voltage-sensitive dye imaging, our method may thus form the basis of a high-throughput experiment-to-model paradigm.
A third use case is elucidating the effective complexity of a given dendritic tree. We assessed whether afferent spatial input motifs could be simplified by removing a branch or subtree and grouping the affected synapses at the next proximal compartment. We found that such a simplification is possible when the input resistance of the affected synapse sites is close to the input resistance of the compartment site. If this condition does not hold, the simplification can still proceed if conductance fluctuations are small compared to the average conductance magnitude, but not otherwise. What is the simplest model for a given dendritic tree that captures its full computational repertoire? We find that spike coincidence and subthreshold voltage metrics reach satisfying accuracy at ∼3 compartments per 100 μm of dendrite and do not improve much beyond that. Further research is required to tease apart mere numerical errors from potentially missed computational features.
The process of simplifying complex dynamical systems, such as morphological neuron models, is inherently approximate. Due to the vast variety of morphologies, voltage-gated ion-channel configurations, and input patterns, it is beyond the scope of any single paper to give an exhaustive overview of possible inaccuracies. Rather, we propose three lines of inquiry if a reduction fails to reproduce a particular response pattern. First, we suggest checking the discrepancy between the impedance kernels $z(t)$ of the full and reduced models. Often, response mismatches arise from slight inaccuracies in how ion-channel dynamics are integrated by the passive system (Figure 3A–C). Second, we suggest assessing to what degree afferent spatial input motifs, present in the full model, are simplified, and whether that simplification still admits a sufficient accuracy in reproducing local voltage-dependent responses, such as NMDA spikes (Figure 4). Finally, we suggest checking whether the distribution of compartments on the morphology admits ion-channel dynamics that are sufficiently rich to reproduce the response characteristic in question. To illustrate these lines of inquiry, we have further simplified the L5 pyramid to five compartments, by omitting the compartments in the apical trunk (Figure 3—figure supplement 1A vs Figure 3J). This reduction results in a change in AP shape, and in a failure to elicit AP bursts through the pairing of a somatic and a dendritic input, even though a Ca^{2+} spike can still be elicited (Figure 3—figure supplement 1B). Likely, the change in AP shape can be traced back to the difference in the somatic input impedance kernels (Figure 3—figure supplement 1C), while the failure to elicit an AP burst may be due to insufficient Na^{+}-driven crosstalk between soma and apical tuft.
Computational tools have played a key role in neuroscience. Take NEURON (Carnevale and Hines, 2004), which accelerated our understanding of morphological neurons, or NEST (Gewaltig and Diesmann, 2007), which enabled simulation of large-scale spiking point-neuron networks. We have implemented a Python toolbox that automatizes the simplification process (NEural Analysis Toolkit – NEAT – https://github.com/unibe-cns/NEAT; copy archived at swh:1:rev:1cb15f36aa0a764105348541d046c85ef38e21ee). The toolbox reads any morphology in the standard ‘.swc’ format (Ascoli, 2006) and returns the parameters of the reduced models, while also providing tools to export the models to NEURON (Carnevale and Hines, 2004).
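As an illustration of the ‘.swc’ format mentioned above, the following minimal reader (our own sketch, not part of NEAT) parses the standard seven-column layout into a node dictionary:

```python
def read_swc(text):
    """Parse SWC morphology text into {index: node_dict}.

    Standard columns: index, type, x, y, z, radius, parent_index
    (parent -1 marks the root). Comment lines start with '#'.
    """
    nodes = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        idx, ntype, x, y, z, r, parent = line.split()[:7]
        nodes[int(idx)] = {
            'type': int(ntype),
            'xyz': (float(x), float(y), float(z)),
            'radius': float(r),
            'parent': int(parent),
        }
    return nodes

swc_example = """
# soma and one dendritic branch (illustrative coordinates)
1 1 0 0 0 5.0 -1
2 3 10 0 0 1.0 1
3 3 20 0 0 0.8 2
"""
morpho = read_swc(swc_example)
```

NEAT itself handles the import; this sketch only shows what information the file format carries.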
Our method and toolbox fill a void between the extremes of modeling large-scale networks with abstract models and modeling single cells in all their detail. By enabling the efficient, systematic derivation of simplified neurons, amenable to simulation at the network level, our work bridges the gap between two branches of neuroscience that historically have remained separate.
Materials and methods
Morphologies
Three exemplar cell models were used for the analysis: a cortical L2/3 pyramidal cell (Branco and Häusser, 2010; Figure 2D), a cortical L5 pyramidal cell (Hay et al., 2011; Figure 2E), and a cerebellar Purkinje cell (Figure 2F). The latter morphology was retrieved from the NeuroMorpho.org repository (Ascoli, 2006) and the two others from the ModelDB repository (McDougal et al., 2017).
Physiological parameters
In our passive models, physiological parameters were set according to Major et al., 2008: the equilibrium potential was $-75\,\text{mV}$, the membrane conductance $100\,\mu \text{S}/{\text{cm}}^{2}$, the capacitance $0.8\,\mu \text{F}/{\text{cm}}^{2}$, and the intracellular resistivity $100\,\mathrm{\Omega}\cdot \text{cm}$. For the active models, we took ion channels and parameters according to Branco and Häusser, 2010 for the L2/3 pyramid, Hay et al., 2011 for the L5 pyramid, and Miyasho et al., 2001 for the Purkinje cell.
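As a quick consistency check, these passive parameters imply a membrane time constant $\tau_m = c_m/g_m$ of 8 ms, independent of geometry:

```python
# Passive membrane time constant implied by the parameters above.
c_m = 0.8e-6    # membrane capacitance, F / cm^2
g_m = 100e-6    # membrane conductance, S / cm^2
tau_m = c_m / g_m
# 0.8e-6 F/cm^2 divided by 100e-6 S/cm^2 gives 8e-3 s, i.e. 8 ms.
```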
AMPA and GABA synaptic input currents were implemented as the product of a conductance profile, here a double exponential (Rotter and Diesmann, 1999), with a driving force:
AMPA rise resp. decay times were ${\tau}_{r}=0.2\,\text{ms}$, ${\tau}_{d}=3\,\text{ms}$ and the AMPA reversal potential was $e=0\,\text{mV}$. For GABA, we used ${\tau}_{r}=0.2\,\text{ms}$, ${\tau}_{d}=10\,\text{ms}$, and $e=-80\,\text{mV}$. NMDA currents (Jahr and Stevens, 1990a) had the form:
with rise resp. decay time ${\tau}_{r}=0.2\text{ms}$, ${\tau}_{d}=43\text{ms}$, and $e=0\text{mV}$, while $\sigma (v)$, modeling the channel’s magnesium block, had the form (Behabadi and Mel, 2014):
The ‘conductance’ of an AMPA or GABA synapse signifies the maximum value of its conductance window. For an AMPA+NMDA synapse, the conductance is the maximal value of the AMPA conductance window, and the conductance of the NMDA component is determined by multiplying the AMPA conductance value with an NMDA ratio ${R}_{\text{NMDA}}$, set to be either 2 or 3.
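This convention can be sketched as follows, assuming the double-exponential window is normalized to unit peak so that the synaptic ‘conductance’ equals its maximum (the time constants are the AMPA values above; the normalization formula is a standard property of a difference of exponentials):

```python
import numpy as np

def double_exp_window(t, tau_r, tau_d, g_max):
    """Double-exponential conductance window, normalized so its
    maximum equals g_max (the synapse's 'conductance')."""
    t_peak = tau_r * tau_d / (tau_d - tau_r) * np.log(tau_d / tau_r)
    norm = 1.0 / (np.exp(-t_peak / tau_d) - np.exp(-t_peak / tau_r))
    return g_max * norm * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

t = np.linspace(0.0, 30.0, 30001)              # ms
g_ampa = double_exp_window(t, 0.2, 3.0, 1.0)   # AMPA: tau_r=0.2, tau_d=3 ms

# NMDA component scaled by the NMDA ratio R_NMDA relative to the AMPA peak.
r_nmda = 2.0
g_nmda_max = r_nmda * 1.0
```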
Biophysical models
We used the NEURON simulator (Carnevale and Hines, 2004) to implement biophysical and reduced models. For the biophysical models, the distance step was set according to the lambda rule (Carnevale and Hines, 2004) or smaller.
Quasi-active channels
A voltage-dependent ion channel described by the Hodgkin–Huxley formalism can in general be written as follows:
where $\overline{g}$ is the channel’s maximal conductance, $e$ its reversal potential, ${y}_{1},\dots,{y}_{K}$ its state variables, $f(\cdot)$ a function that depends on the channel type (e.g. for a typical sodium channel $f(m,h)={m}^{3}h$), and ${g}_{k}({y}_{k},v)$ ($k=1,\dots,K$) the functions governing state-variable activation (${g}_{k}({y}_{k},v)$ can usually be written as $({y}_{\infty k}(v)-{y}_{k})/{\tau}_{k}(v)$ with ${y}_{\infty k}(v)$ the state variable’s activation and ${\tau}_{k}(v)$ its time scale). To obtain the channel’s quasi-active approximation (Mauro et al., 1970; Koch and Poggio, 1985) around a holding potential ${v}_{h}$ and a state-variable expansion point ${y}_{1}^{0},\dots,{y}_{K}^{0}$, we linearize (Equation 13):
To obtain the zero-frequency contribution of (Equation 14) to the resistance and conductance matrices, we set ${\dot{y}}_{k}=0$, solve the linearized state-variable equations for $({y}_{k}-{y}_{k}^{0})$, and substitute the result in ${i}_{\text{lin}}$:
Introducing a shorthand for the factor in square brackets
results in the linearized ion-channel current as described in (Equation 2).
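The zero-frequency linearization can be checked numerically: the linearized coefficient is the slope of the steady-state channel current at the holding potential. A sketch with an illustrative sodium-like channel (the activation curves and parameters below are made up; the coefficient is here folded together with $\overline{g}$):

```python
import numpy as np

# Illustrative sodium-like channel with f(m, h) = m^3 h.
def m_inf(v):
    return 1.0 / (1.0 + np.exp(-(v + 40.0) / 6.0))

def h_inf(v):
    return 1.0 / (1.0 + np.exp((v + 65.0) / 6.0))

def i_ss(v, g_bar=1.0, e_rev=50.0):
    """Steady-state channel current, conventions as in (Equation 13)."""
    return g_bar * m_inf(v)**3 * h_inf(v) * (e_rev - v)

def lin_coef(v_h, dv=1e-4):
    """Zero-frequency linearization coefficient (folded together with
    g_bar): the slope of the steady-state current at v_h, obtained by
    central finite differences."""
    return (i_ss(v_h + dv) - i_ss(v_h - dv)) / (2.0 * dv)

coef = lin_coef(-55.0)
# First-order prediction of the current for a small voltage excursion:
i_pred = i_ss(-55.0) + coef * 0.01
```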
The impedance matrix
The voltage response $\delta {v}_{x}(t)$ at a location $x$ along the neuron to an input current perturbation $\delta {i}_{{x}^{\prime}}(t)$ at location ${x}^{\prime}$ can be computed as the convolution of an impedance kernel with $\delta {i}_{{x}^{\prime}}(t)$:
The impedance kernel itself can be computed in the frequency domain from the quasi-active cable equation using Koch’s algorithm (Koch and Poggio, 1985). We may assume any a priori arbitrary set of holding potentials and ion-channel state variables to compute the quasi-active expansion (with Koch’s algorithm, the only constraint is that their distribution is uniform for each cylindrical segment). For our purpose, spatially uniform holding potentials and state-variable expansion points suffice. For a steady-state current $\delta {i}_{{x}^{\prime}}$, (Equation 17) simplifies to:
where we term ${z}_{x{x}^{\prime}}={\int}_{0}^{\mathrm{\infty}}dt{z}_{x{x}^{\prime}}(t)$ the ‘resistance’ (${z}_{x{x}^{\prime}}$ is also known as the input resistance if $x={x}^{\prime}$ or the transfer resistance if $x\ne {x}^{\prime}$). For passive membranes, impedance kernels can also be computed as a sum of exponentials (Holmes et al., 1992; Major et al., 1993):
where we adopt the ordering ${\tau}_{0}\ge {\tau}_{1}\ge {\tau}_{2}\ge \dots$. Essentially, this infinite sum can be thought of as the generalization of the eigenmodes for linear dynamical systems (Strogatz, 2000) to partial differential equations.
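For a truncated sum of exponential modes, the steady-state resistance is $\sum_k \phi_k^2 \tau_k$, which can be verified by numerically integrating the kernel (the amplitudes and time scales below are illustrative):

```python
import numpy as np

# Truncated sum of exponential modes for a passive impedance kernel
# (Equation 19-style; values are illustrative).
phi2 = np.array([40.0, 10.0, 2.0])   # modal products phi_k(x) phi_k(x')
tau = np.array([20.0, 4.0, 0.5])     # ordered tau_0 >= tau_1 >= tau_2 (ms)

t = np.linspace(0.0, 400.0, 400001)  # long enough for the slowest mode
z_t = (phi2[:, None] * np.exp(-t[None, :] / tau[:, None])).sum(axis=0)

# Steady-state resistance: the time integral of the kernel, which for a
# sum of exponentials equals sum_k phi2_k * tau_k.
dt = t[1] - t[0]
z_num = ((z_t[:-1] + z_t[1:]) * dt / 2.0).sum()   # trapezoid rule
z_analytic = (phi2 * tau).sum()                   # 800 + 40 + 1 = 841
```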
When a current perturbation is applied at multiple sites along the neuron (we write $\delta \mathbf{\mathbf{i}}(t)=(\delta {i}_{1}(t),\mathrm{\dots},\delta {i}_{n}(t))$), we obtain the voltage response $\delta \mathbf{\mathbf{v}}(t)=(\delta {v}_{1}(t),\mathrm{\dots},\delta {v}_{n}(t))$ at those sites from:
with $Z(t)$ the matrix of impedance kernels (the $ij$’th element of this matrix is the impedance kernel between sites $i$ and $j$). In the steady-state case, we call $Z$ the resistance matrix.
Compartmental models
The voltage ${v}_{i}$ in a compartment $i$, connected to a set ${\mathcal{N}}_{i}$ of nearest-neighbor compartments, and with a set of ion channels ${\mathcal{I}}_{i}$, is given by
where ${c}_{i}$ is the capacitance of compartment $i$, ${g}_{Li}$ resp. ${e}_{Li}$ its leak conductance resp. reversal, ${g}_{C,in}$ its coupling conductance to the neighboring compartment $n$, ${i}_{di}$ the ion-channel current (Equation 13) of channel $d$ in compartment $i$, and ${i}_{i}$ an arbitrary input current to compartment $i$.
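A minimal forward-Euler integration of this equation for two passive compartments, with the steady state checked against the conductance-matrix solution (all parameter values are illustrative):

```python
import numpy as np

# Two passive compartments (Equation 21 without active channels).
c   = np.array([100.0, 50.0])    # capacitances (pF)
g_l = np.array([10.0, 5.0])      # leak conductances (nS)
e_l = np.array([-75.0, -75.0])   # leak reversals (mV)
g_c = 20.0                       # coupling conductance (nS)

dt, t_end = 0.01, 200.0          # ms
v = e_l.copy()
i_inp = np.array([100.0, 0.0])   # constant input to compartment 0 (pA)

for _ in range(int(t_end / dt)):
    coupling = g_c * (v[::-1] - v)           # g_C (v_n - v_i), 2 comps only
    dv = (g_l * (e_l - v) + coupling + i_inp) / c   # pA / pF = mV / ms
    v = v + dt * dv

# Steady state from the conductance-matrix equation G v = i + g_l * e_l.
G = np.array([[g_l[0] + g_c, -g_c],
              [-g_c, g_l[1] + g_c]])
v_ss = np.linalg.solve(G, i_inp + g_l * e_l)
```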
The conductance matrix
To obtain the conductance matrix of the compartmental model for steady-state inputs, (Equation 21) is linearized around a certain holding potential ${v}_{h}$ and we assume that ${\dot{v}}_{i}=0$. After absorbing all constant terms into ${i}_{i}$, we obtain
where $\delta {v}_{i}={v}_{i}-{v}_{h}$, where ${\overline{g}}_{di}{l}_{d}({v}_{h})$ is the linearized ion-channel current (Equation 15), and where we used the notation (Equation 16). Summarizing the voltage responses for each compartment in a vector $\delta \mathbf{v}=(\delta {v}_{1},\dots,\delta {v}_{N})$, (Equation 22) for each compartment $i=1,\dots,N$ can be recast as a matrix equation, yielding (Equation 3).
Simplification method details
For any given set of $M$ sites on the morphology, we construct a simplified compartmental model whose connection structure follows a tree graph defined by the original morphology (Figure 1A). We do not allow triplet connections (e.g. sites 1–2, sites 2–3, and sites 3–1 all mutually connected) in our reduced models. Hence, to obtain accurate results, it is necessary to extend the original set of $M$ sites with the $B$ bifurcation points that lie in between (Figure 1—figure supplement 1C). With these $N=M+B$ sites, we thus define a tree graph that provides the scaffold for our fit (and our reduced model).
The fitting process proceeds in four steps: (1) fit the passive leak and coupling conductances, (2) fit the capacitances, (3) fit the maximal conductances of the ion channels, and (4) fit the reversal potentials to obtain the same resting membrane potentials as in the biophysical model. The order of steps 2 and 3 is interchangeable; the other steps must be performed in the order given. We describe these steps in detail below:
If the biophysical model contains active conductances, we compute their opening probabilities at rest and add them to the leak. Otherwise, we simply take the leak as is. We then compute $Z$ for the $N$ compartment sites. In the passive case, $Z$ is independent of the holding potential. In (Equation 21), we substitute ${\dot{v}}_{i}=0$ and ${i}_{c}=0$. The terms ${g}_{Li}{e}_{Li}$ are a constant contribution, and can thus be absorbed in ${i}_{i}$. We obtain:
(23) ${g}_{Li}{v}_{i}+\sum _{n\in {\mathcal{N}}_{i}}{g}_{C,in}({v}_{i}-{v}_{n})={i}_{i},\quad i=1,\dots,N$ which, when recast in matrix form, yields (Equation 3). (Equation 4) then yields a system of ${N}^{2}$ equations, linear in ${g}_{Li}$ and ${g}_{C,in}$ (note that this system always has $2N-1$ unknowns). We recast this system of equations in the form $A\mathbf{g}=\mathbf{b}$ and solve it in the least-squares sense for $\mathbf{g}$.
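This least-squares fit can be sketched for a three-compartment chain: each unknown conductance multiplies a fixed structure matrix, so requiring $G Z = \mathbb{1}$ becomes a linear system in the $2N-1$ unknowns. All values below are illustrative; since the target $Z$ is itself generated by a chain, the fit must recover the ground truth exactly.

```python
import numpy as np

# Ground-truth conductances of a chain 0-1-2: gL0, gL1, gL2, gC01, gC12 (nS).
g_true = np.array([10.0, 5.0, 5.0, 20.0, 15.0])

def conductance_matrix(g):
    gL0, gL1, gL2, gC01, gC12 = g
    return np.array([
        [gL0 + gC01, -gC01, 0.0],
        [-gC01, gL1 + gC01 + gC12, -gC12],
        [0.0, -gC12, gL2 + gC12],
    ])

Z = np.linalg.inv(conductance_matrix(g_true))   # "measured" resistance matrix

# Each unknown multiplies a fixed structure matrix M_p, so G(g) Z = I
# becomes the linear system A g = b with N^2 = 9 equations.
def structure_matrices():
    mats = []
    for i in range(3):                           # leak terms
        m = np.zeros((3, 3)); m[i, i] = 1.0; mats.append(m)
    for i, n in [(0, 1), (1, 2)]:                # coupling terms
        m = np.zeros((3, 3))
        m[i, i] = m[n, n] = 1.0; m[i, n] = m[n, i] = -1.0
        mats.append(m)
    return mats

A = np.stack([(m @ Z).ravel() for m in structure_matrices()], axis=1)
b = np.eye(3).ravel()
g_fit = np.linalg.lstsq(A, b, rcond=None)[0]
```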
To fit the capacitances, we set ${i}_{c}=0$ and again absorb ${g}_{Li}{e}_{Li}$ in ${i}_{i}$ in (Equation 21) to obtain:
(24) ${\dot{v}}_{i}=\frac{1}{{c}_{i}}\left(-{g}_{Li}{v}_{i}+\sum _{n\in {\mathcal{N}}_{i}}{g}_{C,in}({v}_{n}-{v}_{i})\right)+\frac{{i}_{i}}{{c}_{i}},\quad i=1,\dots,N.$ In matrix form, we obtain (Equation 6) with $S=\text{diag}{(\mathbf{c})}^{-1}G$. We require that this matrix has the same smallest eigenvalue ${\alpha}_{0}=1/{\tau}_{0}$ (corresponding to the largest time scale ${\tau}_{0}$) and eigenvector ${\varphi}_{0}=({\varphi}_{0}({x}_{1}),\dots,{\varphi}_{0}({x}_{N}))$ as the biophysical model, as defined by (Equation 19). Hence, we obtain the system of (Equation 7) that can be solved for $\mathbf{c}$.
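This step can be sketched numerically: requiring that the reduced system has the biophysical model's slowest mode gives one linear equation per compartment. In the sketch below the target mode is generated from known ground-truth capacitances, so the fit must recover them; values are illustrative, and we assume dynamics of the form $\dot{\mathbf{v}} = -\text{diag}(\mathbf{c})^{-1}G\,\mathbf{v} + \dots$

```python
import numpy as np

# Conductance matrix G (nS) of a two-compartment model and ground-truth
# capacitances (pF); the slowest mode (tau_0, phi_0) is computed from them.
G = np.array([[30.0, -20.0],
              [-20.0, 25.0]])
c_true = np.array([100.0, 50.0])

S = np.diag(1.0 / c_true) @ G
eigval, eigvec = np.linalg.eig(S)
k = np.argmin(eigval)                 # smallest rate <-> largest time scale
tau_0 = 1.0 / eigval[k]
phi_0 = eigvec[:, k]

# Capacitance fit: diag(c)^{-1} G phi_0 = (1/tau_0) phi_0 gives one
# linear equation per compartment, solved directly here.
c_fit = tau_0 * (G @ phi_0) / phi_0
```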
We compute the maximal conductances of each ion-channel type separately. Thus, in addition to setting ${\dot{v}}_{i}=0$, we set ${i}_{c}=0$ for all but one of the channels – call the nonzero channel $d$ – and replace ${i}_{d}$ with its quasi-active, zero-frequency approximation (Equation 15). We choose a spatially uniform holding potential ${v}_{h}$ and expansion point $({y}_{1}^{0},\dots,{y}_{K}^{0})$ for ion channel $d$. We absorb constant terms in ${i}_{i}$ and obtain (Equation 21) in terms of $\delta {v}_{i}={v}_{i}-{v}_{h}$:
(25) ${g}_{Li}\delta {v}_{i}+\sum _{n\in {\mathcal{N}}_{i}}{g}_{C,in}(\delta {v}_{i}-\delta {v}_{n})+{\overline{g}}_{di}{l}_{d}({v}_{h})\delta {v}_{i}={i}_{i}.$ We again recast this equation in matrix form, but distinguish the known passive component ${G}_{\text{pas}}$ from the unknown diagonal matrix ${G}_{{v}_{h},\text{chan d}}$ containing as its $i$’th diagonal element the channel term ${\overline{g}}_{di}{l}_{d}({v}_{h})$:
(26) $\left({G}_{\text{pas}}+{G}_{{v}_{h},\text{chan d}}\right)\delta \mathbf{v}=\mathbf{i}.$ From the biophysical model, we compute $Z$ by setting all ion-channel conductances except channel $d$ to zero, to obtain the fit (Equation 5).
By consequence, we have a system of ${N}^{2}$ equations, linear in the $N$ unknown maximal conductances ${\overline{\mathbf{g}}}_{d}=({\overline{g}}_{d1},\dots,{\overline{g}}_{dN})$, that can be recast in the form $A{\overline{\mathbf{g}}}_{d}=\mathbf{b}$, where $A$ and $\mathbf{b}$ depend on the holding potential and expansion points. On the one hand, we aim to compute these linearizations for a sufficiently large range of holding potentials and expansion points. On the other hand, the fit should be restricted to domains of the ion-channel phase space where the channel resides during normal input integration, so as to avoid overfitting on domains of the phase space that are never reached. We choose four holding potentials around which to linearize: ${v}_{h}=-75,-55,-35$, and $-15$ mV. For channels with a single state variable $y$, we choose ${y}^{0}={y}_{\infty}({v}_{h})$ for these four holding potentials. For channels with two state variables, we compute ${y}_{1}^{0}={y}_{\infty 1}({v}_{h})$ and ${y}_{2}^{0}={y}_{\infty 2}({v}_{h})$ for the four holding potentials, and choose sixteen expansion points as all possible combinations of these two state variables. We then weigh the matrices $A$ and vectors $\mathbf{b}$ with the inverse of the open probability $f(\cdot)$ at their respective expansion points, concatenate the matrices $A$ and vectors $\mathbf{b}$ for each of these expansion points, and obtain the system ${A}_{\text{ext}}{\overline{\mathbf{g}}}_{d}={\mathbf{b}}_{\text{ext}}$ of $E{N}^{2}$ equations (with $E$ the number of expansion points) and $N$ unknowns. We solve this system in the least-squares sense for ${\overline{\mathbf{g}}}_{d}$ and repeat this procedure for each voltage-dependent ion channel in the biophysical model.
Finally, we fit ${e}_{Li}$ ($i=1,\mathrm{\dots},N$) to reproduce the resting membrane potential ${\mathbf{\mathbf{v}}}_{\text{eq}}=({v}_{\text{eq}1},\mathrm{\dots},{v}_{\text{eq}N})$ of the biophysical model evaluated at the compartment sites. We substitute this potential in (Equation 21), set ${\dot{v}}_{i}=0$ and ${i}_{i}=0$ for $i=1,\mathrm{\dots},N$ and obtain:
(27) ${g}_{Li}({v}_{\text{eq}i}-{e}_{Li})+\sum _{n\in {\mathcal{N}}_{i}}{g}_{C,in}({v}_{\text{eq}i}-{v}_{\text{eq}n})+\sum _{c\in {\mathcal{C}}_{i}}{i}_{c}({v}_{\text{eq}i})=0,\quad i=1,\dots,N.$ This is a system of $N$ equations, linear in the $N$ unknowns ${e}_{Li}$ ($i=1,\dots,N$), and can thus be solved with standard algebraic techniques.
Synaptic weight-rescale factors
To analytically compute the weight-rescale factor when a synapse with temporal conductance $g(t)$ is moved from its original site $s$ to a compartment site $c$ on a dendritic tree, we use the steady-state approximation (Equation 18). Because we implicitly convolve the rescaled synaptic current with the temporal input impedance kernel ${z}_{cc}(t)$ at the compartment site when we simulate the reduced model, the weight-rescale factor still yields accurate temporal voltage responses (Figure 4B,E,H). In the original configuration, we have
with ${i}_{c}$ an arbitrary current at the compartment site and $g$ resp. $e$ the synaptic conductance resp. reversal. Eliminating ${v}_{s}$ from this yields
In the reduced configuration we have:
From requiring that ${v}_{c}={v}_{c}^{\prime}$, $\beta$ is found as follows:
When ${z}_{cc}\approx {z}_{cs}$, which is true in basal dendrites and a reasonable approximation in many apical dendrites if $c$ is on the direct path from $s$ to the soma, (Equation 31) reduces to
The weight-rescale factor hence depends on time. Decomposing the time-varying $g(t)$ into $g+\delta g(t)$, with $g$ the temporal average and $\delta g(t)$ the fluctuations around $g$, we find (Equation 8) and (Equation 9) from requiring that the denominator of (Equation 32) be constant in time:
where $\mathrm{\Delta}z={z}_{ss}-{z}_{cc}$.
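The steady-state relations above can be checked numerically: with $z_{cs}\approx z_{cc}$ and voltages measured relative to rest, a rescale factor of the form $\beta = 1/(1+\Delta z\, g)$ (which follows from eliminating $v_s$ as above) makes the reduced configuration reproduce the compartment voltage exactly. All values below are illustrative.

```python
# Resistances at the synapse site s and compartment site c (MOhm),
# mean synaptic conductance (uS), and reversal relative to rest (mV).
z_ss, z_cc = 200.0, 120.0
g, e_syn = 0.01, 60.0

# Original configuration: synapse at s, voltage read out at c.
v_s = z_ss * g * e_syn / (1.0 + z_ss * g)
v_c = z_cc * g * e_syn / (1.0 + z_ss * g)      # uses z_cs ~= z_cc

# Reduced configuration: synapse moved to c with rescaled weight beta * g.
beta = 1.0 / (1.0 + (z_ss - z_cc) * g)         # Delta_z = z_ss - z_cc
v_c_red = z_cc * beta * g * e_syn / (1.0 + z_cc * beta * g)
```

Note that MΩ times μS is dimensionless, so the products above need no unit conversion.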
In the case of an NMDA synapse, the conductance depends sigmoidally on the local voltage. Using scale factor (Equation 8), we obtain for the rescaled synaptic current at the compartment site:
This current still depends on the voltage at the synaptic site ${v}_{s}$ in the sigmoid. A current at the compartment site that causes a voltage ${v}_{c}$ there would have caused a voltage
if it was placed at the synapse site. Hence, to retain the same activation level of the NMDA synapse, we substitute (Equation 35) in the sigmoid. This amounts to a change in threshold and width of the sigmoid:
We will denote this modified sigmoid as ${\sigma}^{\prime}(v)$. We have not yet succeeded in reformulating (Equation 34) by only rescaling its parameters with constant rescale factors, since $1/(1+({z}_{ss}-{z}_{cc})\,g\,{\sigma}^{\prime}({v}_{c}))$ still depends on time through its dependence on ${v}_{c}$. To obtain constant rescale factors, we substitute (Equation 34) in (Equation 17):
Here, the current in square brackets is the synaptic current. The convolution to obtain ${v}_{c}$ is implemented implicitly by the compartmental model. We then approximately eliminate the denominator:
to obtain for the synaptic current:
We thus find that the synaptic weight is rescaled by a constant factor $\frac{{z}_{ss}}{{z}_{cc}}$ and the reversal potential is shifted to $\frac{{z}_{cc}}{{z}_{ss}}(e-{v}_{eq})+{v}_{eq}$. Note however that this reduction relies on two key assumptions: that there are no other inputs to the dendritic tree and that there are no fluctuations in ${v}_{c}$ generated by input currents not at site $s$, as this could lead to spurious activation or suppression of ${\sigma}^{\prime}({v}_{c})$.
Suppose now we shift $M$ conductance synapses to $N$ compartments. Let $\mathbf{\mathbf{v}}=({v}_{1},\mathrm{\dots},{v}_{N},{v}_{N+1},\mathrm{\dots},{v}_{N+M})$ be the vector with $K=N+M$ components containing the voltages at the $N$ compartment sites and at the $M$ synapse sites. Similarly $\mathbf{\mathbf{g}}=(0,\mathrm{\dots},0,{g}_{1},\mathrm{\dots},{g}_{M})$ is a vector of $K$ components containing $N$ zeros and the $M$ synaptic conductances, while $\mathbf{\mathbf{e}}=(0,\mathrm{\dots},0,{e}_{1},\mathrm{\dots},{e}_{M})$ contains the synaptic reversals. In matrix form, (Equation 18) for this system becomes:
and its solution:
with $\mathrm{\Gamma}$ a matrix given by:
In the reduced setting, we assign each synapse to one compartment site (a reasonable choice could be the closest site). We introduce an $N\times M$ compartment assignment matrix $C$ where an element ${c}_{nm}$ is 1 if synapse $m$ is assigned to compartment $n$ and zero otherwise. We further introduce a vector of reduced compartment voltages ${\mathbf{\mathbf{v}}}^{\prime}=({v}_{1}^{\prime},\mathrm{\dots},{v}_{N}^{\prime})$, a vector of rescaled synaptic conductances ${\mathbf{\mathbf{g}}}_{\beta}^{\prime}=({\beta}_{1}{g}_{1},\mathrm{\dots},{\beta}_{M}{g}_{M})$ and a vector of synaptic reversals ${\mathbf{\mathbf{e}}}^{\prime}=({e}_{1},\mathrm{\dots},{e}_{M})$. In the reduced model, voltages are obtained from:
We then require that ${\mathbf{\mathbf{v}}}_{N}={\mathbf{\mathbf{v}}}^{\prime}$, with ${\mathbf{\mathbf{v}}}_{N}$ a vector containing the first $N$ components of $\mathbf{\mathbf{v}}$. We denote by ${\mathrm{\Gamma}}_{NM}$ the matrix containing the first $N$ rows and last $M$ columns of $\mathrm{\Gamma}$, and note that we can write (Equation 41) as ${\mathbf{\mathbf{v}}}_{N}={\mathrm{\Gamma}}_{NM}{\mathbf{\mathbf{e}}}^{\prime}$. Substituting this in (Equation 43) yields:
We then fit the parameters ${\beta}_{1},\dots,{\beta}_{M}$ (here absorbed in ${\mathbf{g}}_{\beta}^{\prime}$) in the least-squares sense from:
which again is a linear fit.
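The matrix fit can be sketched end-to-end for two compartment sites and two synapse sites; the resistance matrix and conductances are illustrative, and we take the steady-state relation in the form implied by (Equation 18), so that requiring $\mathbf{v}' = \mathbf{v}_N$ makes the equation linear in the rescaled conductances.

```python
import numpy as np

# Full model: sites 0,1 are compartments; sites 2,3 carry synapses.
# Z is the 4x4 resistance matrix (MOhm), symmetric, illustrative values.
Z = np.array([[100., 60., 70., 40.],
              [ 60., 90., 45., 65.],
              [ 70., 45., 250., 35.],
              [ 40., 65., 35., 220.]])
g = np.array([0.0, 0.0, 0.004, 0.006])   # synaptic conductances (uS)
e = np.array([0.0, 0.0, 60.0, 60.0])     # reversals relative to rest (mV)

# Steady state: v = Z diag(g)(e - v)  =>  (I + Z diag(g)) v = Z (g * e).
v = np.linalg.solve(np.eye(4) + Z @ np.diag(g), Z @ (g * e))
v_target = v[:2]                          # voltages at the compartment sites

# Reduced model: synapse 2 -> compartment 0, synapse 3 -> compartment 1.
# Requiring v' = v_target makes the reduced steady-state equation linear
# in the rescaled conductances g_beta, fitted by least squares.
Z_red = Z[:2, :2]
e_red = np.array([60.0, 60.0])
A = Z_red @ np.diag(e_red - v_target)
g_beta = np.linalg.lstsq(A, v_target, rcond=None)[0]

# Check: the reduced model with rescaled weights reproduces v_target.
v_red = np.linalg.solve(np.eye(2) + Z_red @ np.diag(g_beta),
                        Z_red @ (g_beta * e_red))
```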
Experimental recordings
Coronal slices (300 μm thick) containing the anterior cingulate cortex (ACC) were prepared from 10- to 12-week-old C57BL/6 mice using a vibratome on a block angled at 15 degrees to the horizontal in ice-cold oxygenated artificial cerebrospinal fluid (ACSF) and then maintained in the same solution at 37°C for 15–120 min. Normal ACSF contained (in mM) NaCl, 125; NaHCO_{3}, 25; KCl, 2.5; NaH_{2}PO_{4}, 1.25; MgCl_{2}, 1; glucose, 25; CaCl_{2}, 2; pH 7.4. Individual neurons were visualized with a Nikon Eclipse E600FN fitted with a combination of oblique infrared illumination optics and epifluorescence; the switch between optical configurations was software-triggered (Sieber et al., 2013). Pyramidal neurons were selected based on a clearly visible proximal apical dendrite. This selection criterion resulted in a homogeneous population of pyramidal neurons based on their firing properties and the shape of the AP (i.e. all cells possessed a prominent afterhyperpolarization and a significant sag ratio at the soma). Dual somatic and dendritic whole-cell patch-clamp recordings were performed from identified L5 pyramidal neurons in the rostroventral ACC (1.1–1.4 mm below the pial surface, 1.1–0.2 mm rostral to Bregma) using two Dagan BVC-700 amplifiers. During the experiments, the external recording solution (normal ACSF) was supplemented with 0.5 mM CNQX and 0.5 mM AP5 to block excitatory glutamatergic synaptic transmission. Experiments were performed at physiological temperatures between 34 and 37°C. Whole-cell recording pipettes (somatic, 4 to 8 MΩ; dendritic, 12 to 32 MΩ) were pulled from borosilicate glass. The internal pipette solution consisted of (in mM) potassium gluconate, 135; KCl, 7; HEPES, 10; Na_{2}-phosphocreatine, 10; MgATP, 4; GTP, 0.3; 0.2% biocytin; pH 7.2 (with KOH); 291–293 mosmol l^{−1}.
For somatic recordings, 10–20 μM Alexa 594 was added to the intracellular solution: first, the soma was patched (whole-cell configuration by negative pressure); after 5 min of intracellular perfusion, the fluorescent signal allowed for the clear identification of the apical dendritic tree, and the dendritic region of interest was then patched with a smaller pipette. Compensation was performed in current-clamp mode by recovering the fast, initial square voltage response to a hyperpolarizing current injection (−100 pA, 50 ms). First, the pipette capacitance was compensated until the voltage response showed an immediate voltage drop due to the series resistance of the pipette, which was adjusted subsequently. Compensation of dendritic series resistance yielded values between 30 and 60 MΩ for pipettes with a resistance between 17 and 21 MΩ. On average, series resistance was 2.3 times larger than the pipette resistance. Series resistance of both somatic and dendritic recording electrodes was monitored frequently, and experiments were terminated when proper compensation was no longer possible (i.e. reached values of more than four times the pipette resistance). All cells were filled with biocytin, and PFA-fixed slices were developed with the avidin–biotin–peroxidase method for Neurolucida reconstructions (Egger et al., 2008). Data analysis was performed using Igor software (Wavemetrics) and Excel (Microsoft).
Simulation-specific parameters
Parameters Figure 2
For the sequence detection (Figure 1F), the synapse model is as in Branco and Häusser, 2010. The maximal conductance of the AMPA component is ${\overline{g}}_{\text{AMPA}}=0.5$ nS and its conductance window shape is given by an alpha function (Rotter and Diesmann, 1999) with a time scale of 2 ms. The NMDA component is given by a kinetic model (Destexhe et al., 1998) with ${\overline{g}}_{\text{NMDA}}=8$ nS and an external magnesium concentration of 1 mM, and receives an exponentially decaying ($\tau =0.5$ ms) neurotransmitter concentration with an amplitude of 5 mM. For the input-order detection (Figure 1G), AMPA synapses 1 and 2 use the standard parameters and have respective maximal conductances of 10 and 5 nS. For the simulation with the L5 pyramidal cell, excitatory synapses have AMPA+NMDA components with ${R}_{\text{NMDA}}=2$ and $\overline{g}=3$ nS. For inhibitory synapses, $\overline{g}=2$ nS. In the Purkinje cell, excitatory synapses only have AMPA components with $\overline{g}=10$ nS, while for inhibitory synapses, $\overline{g}=5$ nS. For the L2/3 pyramid, L5 pyramid, resp. Purkinje cell, excitatory Poisson rates are 2, 5, resp. 6 Hz and inhibitory firing rates are 4, 1, resp. 2 Hz. Simulations were run for 10,000 ms.
Parameters Figure 3
APs are evoked with a DC current pulse. For panels A–C, pulse amplitude is ${i}_{\text{amp}}=0.5$ nA and pulse duration is ${t}_{\text{dur}}=5$ ms. For panels D–F, we have ${i}_{\text{amp}}=3$ nA and ${t}_{\text{dur}}=1$ ms. For panels G–I, ${i}_{\text{amp}}=1.5$ nA and ${t}_{\text{dur}}=5$ ms. For panels J–K, the somatic current pulse had ${i}_{\text{amp}}=1.9$ nA and ${t}_{\text{dur}}=5$ ms, while the dendritic current injection had a double exponential waveform, with ${\tau}_{r}=0.5$ ms and ${\tau}_{d}=5$ ms and amplitude ${i}_{\text{amp}}=0.5$ nA. Onset of the somatic current pulse precedes onset of the dendritic current injection by 5 ms.
Parameters Figure 4
In panel B, we inject Ornstein–Uhlenbeck (OU) processes for current (mean $\mu =0.08$ nA and standard deviation $\sigma =0.025$ nA) and conductance ($\mu =0.005$ μS, $\sigma =0.0025$ μS). Reversal was 0 mV for the excitatory conductance and −80 mV for the inhibitory conductance. All OU processes had a time scale of 30 ms. In panels D–F, we use AMPA+NMDA synapses with $\overline{g}=0.5$ nS and ${R}_{\text{NMDA}}=3$. Burst firing is mimicked by drawing synapse activation times from a Gaussian distribution with as mean the burst time and a standard deviation of 2 ms.
In panels G–J, AMPA+NMDA synapses at the compartment site (green square) have $\overline{g}=1$ nS and ${R}_{\text{NMDA}}=2$. To determine the NMDA-spike threshold, we activate between 0 and 200 synapses in a burst. The burst is again modeled by drawing spike times from a Gaussian distribution with a 2 ms standard deviation. We then multiply the amplitude of the resulting waveform by its half-width, and average this quantity over five trials for each number of activated synapses. The number of synapses at which this quantity increases most is taken to be the NMDA-spike threshold.
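The threshold criterion (largest increase of the amplitude × half-width measure between consecutive synapse counts) can be sketched as follows; the measure values below are synthetic, with an abrupt jump placed at 80 synapses:

```python
import numpy as np

# Synapse counts and a synthetic trial-averaged (amplitude x half-width)
# measure with a jump at the NMDA-spike threshold.
n_syn = np.arange(0, 201, 10)
measure = np.where(n_syn < 80, 0.1 * n_syn, 100.0 + 0.5 * (n_syn - 80.0))

# Threshold: the synapse count at which the measure increases most.
jumps = np.diff(measure)
threshold = n_syn[1:][np.argmax(jumps)]
```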
We aim to test the synaptic weight-rescale factors ${\beta}_{\text{cond}}^{\text{ext}}$ for the AMPA and GABA synapses under a wide range of input conditions. To do so, we conduct simulations with different total time-averaged conductance loads ${g}_{\text{avg}}$, exerted by the AMPA and GABA synapses on the subtree that is to be reduced. These synapses are activated with spike times drawn from a homogeneous Poisson process. We also change the size of the conductance fluctuations, by conducting simulations with different firing rates for the homogeneous Poisson processes (since the total conductance load for a simulation is fixed, a small firing rate means that individual synaptic weights have to be increased to reach the desired total conductance load, hence resulting in larger conductance fluctuations). We furthermore change the number of locations ${n}_{\text{loc}}$ on the subtree over which the conductance load is distributed. Note that, by consequence, the conductance load at each location is ${g}_{\text{avg}}/{n}_{\text{loc}}$. We also adapt these locations to a specified average difference in input resistance $\mathrm{\Delta}{z}_{\text{avg}}$ between the compartment site and the location. Finally, we change the average voltage level achieved by the combined AMPA and GABA input by changing the ‘nudging potential’ ${e}_{n}$ (Urbanczik and Senn, 2014)
$${e}_{n}=\frac{{g}_{\text{AMPA}}\,{e}_{\text{AMPA}}+{g}_{\text{GABA}}\,{e}_{\text{GABA}}}{{g}_{\text{AMPA}}+{g}_{\text{GABA}}},$$
where ${g}_{\text{AMPA}}$ resp. ${g}_{\text{GABA}}$ are the time-averaged conductances of individual AMPA resp. GABA synapses, and ${e}_{\text{AMPA}}=0$ mV resp. ${e}_{\text{GABA}}=-80$ mV their respective reversal potentials.
We thus draw five metaparameters for each simulation: ${g}_{\text{avg}}$, ${r}_{\text{avg}}$, ${n}_{\text{loc}}$, $\mathrm{\Delta}{z}_{\text{avg}}$, and ${e}_{n}$. The time-averaged conductance at each location is given by $${g}_{\text{loc}}={g}_{\text{avg}}/{n}_{\text{loc}}.$$
Together with (Equation 46), this fixes the time-averaged conductance of each individual AMPA and GABA synapse. Since the temporal conductance profile of a synapse (e.g. the AMPA synapse) following the arrival of a single input AP is modeled as the product of a weight factor ${w}_{\text{AMPA}}$ and a unitary conductance window ${g}_{u\text{AMPA}}(t)$, the time-averaged conductance of that synapse, when activated by a homogeneous Poisson input, is $${g}_{\text{AMPA}}={w}_{\text{AMPA}}\,{r}_{\text{avg}}\int_{0}^{\infty}{g}_{u\text{AMPA}}(t)\,dt.$$
Thus, given ${r}_{\text{avg}}$ and ${g}_{\text{AMPA}}$, the synaptic weight ${w}_{\text{AMPA}}$ can be extracted (and similarly for the GABA synapse).
To perform simulations across a wide range of input conditions, we draw 200 samples of these five metaparameters from the intervals ${e}_{n}\in [-80\,\text{mV},-50\,\text{mV}]$, ${g}_{\text{avg}}\in [0.01\,\text{nS},300\,\text{nS}]$, ${r}_{\text{avg}}\in [1\,\text{Hz},100\,\text{Hz}]$, ${n}_{\text{loc}}\in [1,20]$, and $\mathrm{\Delta}{z}_{\text{avg}}\in [0\,\text{M}\mathrm{\Omega},1023.2\,\text{M}\mathrm{\Omega}]$, following the Latin hypercube (LH) method (Press et al., 2007). ${g}_{\text{avg}}$ and ${r}_{\text{avg}}$ are sampled on a log scale, whereas all other parameters are sampled on a linear scale.
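The sampling pipeline above, from metaparameters down to per-synapse weights, might look like the following sketch. The unitary-window integral is a placeholder (its value depends on the synapse kinetics), and the function names are ours:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Latin hypercube sample of n points in [0,1]^d: one point per
    stratum along each dimension, randomly paired across dimensions."""
    strata = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
    return (strata + rng.random((n, d))) / n

rng = np.random.default_rng(42)
u = latin_hypercube(200, 5, rng)

# metaparameter ranges; g_avg and r_avg are sampled on a log scale
lo = np.array([-80.0, np.log10(0.01), np.log10(1.0), 1.0, 0.0])
hi = np.array([-50.0, np.log10(300.0), np.log10(100.0), 20.0, 1023.2])
e_n, g_avg, r_avg, n_loc, dz_avg = (lo + u * (hi - lo)).T
g_avg, r_avg = 10.0 ** g_avg, 10.0 ** r_avg      # back to linear units
n_loc = np.round(n_loc).astype(int)

# per-location load, then per-synapse weight via the time-averaged
# conductance of a Poisson-driven synapse: g_syn = w * r * int g_u(t) dt
g_loc = g_avg / n_loc
g_unit_integral = 1.0   # nS*ms placeholder for the unitary-window integral
w_ampa = g_loc / (r_avg * 1e-3 * g_unit_integral)  # rates in Hz, time in ms
```

Each row of `u` yields one simulation condition; the per-synapse weights follow from the drawn rate and the target conductance load.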
Parameters Figure 5
To obtain compartment sites, we divide the longest branch in each basal subtree into $n$ parts of equal length, giving us $n$ distances from the soma, where we increase $n$ from 0 (point neuron) to 10. In each subtree, we distribute compartments at all sites at these distances, and also add all bifurcation sites in between compartment sites. In this way, we obtain 11 reductions that we quantify by the number of compartments per 100 μm of dendrite. We implement 400 ‘background’ synapses: 200 AMPA and 200 GABA synapses with average conductances ${g}_{\text{AMPA}}$ resp. ${g}_{\text{GABA}}=4{g}_{\text{AMPA}}$ and firing rates ${r}_{\text{AMPA}}={r}_{\text{avg}}$ resp. ${r}_{\text{GABA}}={r}_{\text{avg}}$. We implement 20 synapse clusters, consisting of AMPA+NMDA synapses with maximal conductance ${\overline{g}}_{\text{A+N}}$ and ${R}_{\text{NMDA}}=2$. These clusters are activated with a Poissonian burst rate ${r}_{\text{burst}}$; the number of spikes per burst is drawn from a Poisson distribution with parameter ${n}_{\text{avg}}$, and spike times are drawn for each burst time from a Gaussian distribution with a standard deviation of 5 ms. For each of the parameters not given previously, 10 LH samples were drawn from ${g}_{\text{AMPA}}\in [0.4\,\text{nS},0.8\,\text{nS}]$, ${r}_{\text{avg}}\in [1.5\,\text{Hz},3.0\,\text{Hz}]$, ${\overline{g}}_{\text{A+N}}\in [0.5\,\text{nS},1.5\,\text{nS}]$, ${n}_{\text{avg}}\in [10,30]$, and ${r}_{\text{burst}}\in [0.25\,\text{Hz},0.6\,\text{Hz}]$, resulting in the 10 different output spike rates shown in panel B. For each parameter set, a simulation was run for 10,000 ms.
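The clustered burst input can be sketched as follows (a hypothetical generator, not part of NEAT's API):

```python
import numpy as np

def cluster_spike_times(t_max, r_burst, n_avg, jitter, rng):
    """Spike times for one synapse cluster: burst onsets follow a
    homogeneous Poisson process with rate r_burst (Hz) over t_max (ms);
    each burst carries Poisson(n_avg) spikes, Gaussian-jittered around
    the burst onset with standard deviation `jitter` (ms)."""
    n_bursts = rng.poisson(r_burst * 1e-3 * t_max)
    onsets = rng.uniform(0.0, t_max, n_bursts)
    spikes = [rng.normal(t0, jitter, rng.poisson(n_avg)) for t0 in onsets]
    return np.sort(np.concatenate(spikes)) if spikes else np.array([])

rng = np.random.default_rng(7)
times = cluster_spike_times(t_max=10_000.0, r_burst=0.4, n_avg=20,
                            jitter=5.0, rng=rng)
```

With the mid-range parameters above, one 10 s trial produces on the order of $0.4\,\text{Hz}\times 10\,\text{s}\times 20 = 80$ cluster spikes.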
Parameters Figure 6
Hyper- resp. depolarizing current steps have ${i}_{\text{amp}}=-0.3$ nA resp. ${i}_{\text{amp}}=0.1$ nA and ${t}_{\text{dur}}=500$ ms. The full model was optimized with an evolutionary algorithm using the BluePyOpt library (Van Geit et al., 2016), where we ran 100 iterations with an offspring size of 100. Goodness of fit was evaluated in a multi-objective manner as the root mean square error of the resting voltage (average voltage 100 ms before each current step), of the final step voltage amplitudes after sag (average voltage during the last 100 ms of the DC current injection), and of the full voltage trace. We optimized the specific membrane capacitance and the conductance densities of the passive leak and of the h, ${K}_{\text{ir}}$, and ${K}_{\text{m}}$ channels. The membrane currents followed an exponential distribution $g(x)={g}_{0}{e}^{x/{d}_{x}}$, with $x$ the distance from the soma, and as parameters ${g}_{0}$ (the conductance at the soma) and ${d}_{x}$ (the length constant of the distribution).
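The exponential channel distribution and the three error measures can be sketched as below (illustrative only; BluePyOpt's actual objective and feature classes differ):

```python
import numpy as np

def channel_density(x, g0, d_x):
    """Exponential conductance density g(x) = g0 * exp(x / d_x),
    with x the path distance from the soma."""
    return g0 * np.exp(x / d_x)

def step_objectives(t, v_full, v_red, t_on, t_off):
    """Multi-objective errors for one current step: RMSE of the resting
    voltage (100 ms before the step), of the post-sag plateau (last
    100 ms of the step), and of the full voltage trace."""
    rest = slice(*np.searchsorted(t, [t_on - 100.0, t_on]))
    plateau = slice(*np.searchsorted(t, [t_off - 100.0, t_off]))
    rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    return (rmse(v_full[rest], v_red[rest]),
            rmse(v_full[plateau], v_red[plateau]),
            rmse(v_full, v_red))
```

An evolutionary optimizer then minimizes the three errors jointly over the capacitance, ${g}_{0}$, and ${d}_{x}$ parameters of each channel.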
Data and software availability
NEAT (NEural Analysis Toolbox), our open-source Python toolbox for obtaining reduced models, is available at https://github.com/unibe-cns/NEAT (copy archived at swh:1:rev:1cb15f36aa0a764105348541d046c85ef38e21ee).
Data availability
All data generated or analysed during this study are included in the manuscript and supporting files.

ModelDB ID 139653. L5b PC model constrained for BAC firing and perisomatic current step firing.

ModelDB ID 140828. Dendritic Discrimination of Temporal Input Sequences.

NeuroMorpho.org ID NMO_100072. Mature Purkinje Cells Require the Retinoic Acid-Related Orphan Receptor-α (RORα) to Maintain Climbing Fiber Mono-Innervation and Other Adult Characteristics.
References

An efficient analytical reduction of detailed nonlinear neuron models. Nature Communications 11:1–13. https://doi.org/10.1038/s41467019139326

Mobilizing the base of neuroscience data: the case of neuronal morphologies. Nature Reviews Neuroscience 7:318–324. https://doi.org/10.1038/nrn1885

Automated optimization of a reduced layer 5 pyramidal cell model based on experimental data. Journal of Neuroscience Methods 210:22–34. https://doi.org/10.1016/j.jneumeth.2012.04.006

The single dendritic branch as a fundamental functional unit in the nervous system. Current Opinion in Neurobiology 20:494–502. https://doi.org/10.1016/j.conb.2010.07.009

Book: Kinetic models of synaptic transmission. In: Koch C, Segev I, editors. Methods in Neuronal Modeling. MIT Press. pp. 1–26.

The high-conductance state of neocortical neurons in vivo. Nature Reviews Neuroscience 4:739–751. https://doi.org/10.1038/nrn1198

Book: Can we predict every spike? In: DiLorenzo P. M, editors. Spike Timing. CRC Press. pp. 70–72.

The capabilities and limitations of conductance-based compartmental neuron models with reduced branched or unbranched morphologies and active dendrites. Journal of Computational Neuroscience 30:301–321. https://doi.org/10.1007/s108270100258z

Interpretation of time constant and electrotonic length estimates in multicylinder or branched neuronal structures. Journal of Neurophysiology 68:1401–1420. https://doi.org/10.1152/jn.1992.68.4.1401

A quantitative description of NMDA receptor-channel kinetic behavior. The Journal of Neuroscience 10:1830–1837. https://doi.org/10.1523/JNEUROSCI.100601830.1990

Voltage dependence of NMDA-activated macroscopic conductances predicted by single-channel kinetics. The Journal of Neuroscience 10:3178–3182. https://doi.org/10.1523/JNEUROSCI.100903178.1990

The quantitative single-neuron modeling competition. Biological Cybernetics 99:417–426. https://doi.org/10.1007/s004220080261x

Low-dimensional, morphologically accurate models of subthreshold membrane potential. Journal of Computational Neuroscience 27:161–176. https://doi.org/10.1007/s1082700801342

Morphologically accurate reduced order modeling of spiking neurons. Journal of Computational Neuroscience 28:477–494. https://doi.org/10.1007/s1082701002294

A simple algorithm for solving the cable equation in dendritic trees of arbitrary geometry. Journal of Neuroscience Methods 12:303–315. https://doi.org/10.1016/01650270(85)900159

Spatially distributed dendritic resonance selectively filters synaptic input. PLOS Computational Biology 10:e1003775. https://doi.org/10.1371/journal.pcbi.1003775

The effects of L-glutamate and its analogues upon the membrane conductance of central murine neurones in culture. Canadian Journal of Physiology and Pharmacology 60:282–296. https://doi.org/10.1139/y82039

Spatiotemporally graded NMDA spike/plateau potentials in basal dendrites of neocortical pyramidal neurons. Journal of Neurophysiology 99:2584–2601. https://doi.org/10.1152/jn.00011.2008

Subthreshold behavior and phenomenological impedance of the squid giant axon. Journal of General Physiology 55:497–523. https://doi.org/10.1085/jgp.55.4.497

Twenty years of ModelDB and beyond: building essential modeling tools for the future of neuroscience. Journal of Computational Neuroscience 42:1–10. https://doi.org/10.1007/s1082701606237

Synaptic integration in an excitable dendritic tree. Journal of Neurophysiology 70:1086–1101. https://doi.org/10.1152/jn.1993.70.3.1086

Spike-timing prediction in cortical neurons with active dendrites. Frontiers in Computational Neuroscience 8:90. https://doi.org/10.3389/fncom.2014.00090

Intrinsic and network rhythmogenesis in a reduced Traub model for CA3 neurons. Journal of Computational Neuroscience 1:39–60. https://doi.org/10.1007/BF00962717

Dendritic spikes expand the range of well tolerated population noise structures. The Journal of Neuroscience 39:9173–9184. https://doi.org/10.1523/JNEUROSCI.063819.2019

Temporal whitening by power-law adaptation in neocortical neurons. Nature Neuroscience 16:942–948. https://doi.org/10.1038/nn.3431

Automated high-throughput characterization of single neurons by means of simplified spiking models. PLOS Computational Biology 11:e1004275. https://doi.org/10.1371/journal.pcbi.1004275

Book: Numerical Recipes 3rd Edition: The Art of Scientific Computing. Cambridge University Press.

Non-Hebbian long-term potentiation of inhibitory synapses in the thalamus. Journal of Neuroscience 33:15675–15685. https://doi.org/10.1523/JNEUROSCI.024713.2013

Book: Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry and Engineering. Westview Press.

Action potential initiation and backpropagation in neurons of the mammalian CNS. Trends in Neurosciences 20:125–131. https://doi.org/10.1016/S01662236(96)100758

Creation and reduction of a morphologically detailed model of a leech heart interneuron. Journal of Neurophysiology 96:2107–2120. https://doi.org/10.1152/jn.00026.2006

Systematic mapping between dendritic function and structure. Network: Computation in Neural Systems 20:69–105. https://doi.org/10.1080/09548980902984833

The Green's function formalism as a bridge between single- and multi-compartmental modeling. Biological Cybernetics 107:685–694. https://doi.org/10.1007/s0042201305680

Electrical compartmentalization in neurons. Cell Reports 26:1759–1773. https://doi.org/10.1016/j.celrep.2019.01.074
Decision letter

Timothy O'LearyReviewing Editor; University of Cambridge, United Kingdom

Ronald L CalabreseSenior Editor; Emory University, United States

Hugh PC RobinsonReviewer; University of Cambridge, United Kingdom

Matthew F NolanReviewer; University of Edinburgh, United Kingdom
In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.
Acceptance summary:
A general principle in systems theory is that a complicated model often reduces to a much simpler one – a fact exploited by Rall in his classic work on reduction of passive dendritic trees. However, obtaining simplified models of nonlinear dendritic integration of distributed synaptic input has remained challenging. Wybo and colleagues develop methods to achieve such a reduction systematically, offering an approach that reduces the complexity of simulations while providing insight into the factors that allow lumping of dendritic compartments into a smaller equivalent system.
Decision letter after peer review:
Thank you for submitting your article "Datadriven reduction of dendritic morphologies with preserved dendrosomatic responses" for consideration by eLife. Your article has been reviewed by two peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Ronald Calabrese as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Hugh PC Robinson (Reviewer #1); Matthew F Nolan (Reviewer #2).
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
We would like to draw your attention to changes in our revision policy that we have made in response to COVID19 (https://elifesciences.org/articles/57162). Specifically, we are asking editors to accept without delay manuscripts, like yours, that they judge can stand as eLife papers without additional data, even if they feel that they would make the manuscript stronger. Thus the revisions requested below only address clarity and presentation.
This manuscript proposes a plausible and promising tool for reducing model complexity of morphological conductancebased models, illustrated in a variety of situations including complex simulations and experimental data fitting. This methodology will likely be useful to a variety of neurophysiologists and theorists and can permit more efficient simulation while shedding light on the core parameters that matter in neural computation at the single cell level.
Reviewers agreed that the writing needs to be clearer throughout. Terms need to be defined. Implicit references to other work or to concepts need to be replaced with explicit, declarative sentences and explanations. On the technical side the paper also needs a fuller explanation of how fitting of voltagegated conductances is performed. Finally, a fuller discussion and illustration of the limitations would be appropriate, e.g. explicitly showing cases where further reductions cannot be achieved without fundamentally changing the system. These necessarily exist in a nonlinear system so this is not a shortcoming of the method, rather an important point that some readers may not appreciate.
Full reviewer comments are included below for information.
Reviewer #1:
This study addresses an important issue in computational neuroscience: how to reduce highlydetailed biophysical models which reproduce the full morphological complexity of neurons to models which have a degree of complexity permitting them to be understood conceptually, and simulated much more efficiently, while preserving the accuracy of the original model as far as possible, and in a welldefined way. While there is a large literature on this problem for passive neurons, it is really necessary to address this problem for active/nonlinear membranes, given what we now know about voltagedependent ion channels in dendrites. This is a problem of fitting the reduced model parameters to the output of the highlydetailed model, and the authors take a systematic and theoreticallysatisfying approach to it, by matching the impedance matrices including the phenomenological impedance of voltagegated channels which describes the reduced model to the output of the detailed model recorded at a small number of locations. This was demonstrated for neuronal models of several key active dendritic mechanisms: backpropagating sodium spikes, calcium and NMDA spikes. The authors also provide a toolbox to facilitate this process.
I have no major concerns: I found that the approach was described and illustrated very carefully and clearly, and the method's accuracy and utility were demonstrated convincingly. The motivation was well established and the writing accurate and transparent.
Reviewer #2:
The study introduces algorithms and software that aims to generate accurate reduced compartmental models of neurons. The approach appears elegant and is well validated by exemplar applications. I think it will be of interest to many neuroscientists. For the network modeler, the software provides a practical solution to the tradeoff involved in choosing neuronal models that are accurate and those that are computationally efficient. At a conceptual level the approach may be useful in addressing questions about which details of a neuron's morphology are critical for the computations that it contributes to. A caveat of my review is that I haven't had time to carefully check any of the math – it looks about right and the validation is generally good.
(1) The initial part of the Results section is important for understanding how the software works but is hard to follow. I think more care needs to be taken with defining terms clearly and explaining the logic of the approach in an accessible way.
(2) How the approach deals with voltagegated ion channels could be made much clearer. E.g. in the subsection “A systematic simplification of complex neuron morphologies”, the text states that G_{Vh,chan} depends on the unknown maximal ion channel conductance parameters. How? Why maximal? Voltagegated channels cannot be maximally activated at all of the voltages tested. I don't understand how to get from here to channel conductances. It could be helpful to make a figure illustrating how fits are obtained for exemplar voltagegated conductances.
(3) The section 'Conditions under which afferent spatial connectivity motifs can be simplified' could also be written more clearly. Concerns are similar to in point 1 above.
https://doi.org/10.7554/eLife.60936.sa1
Author response
[…] Reviewers agreed that the writing needs to be clearer throughout. Terms need to be defined. Implicit references to other work or to concepts need to be replaced with explicit, declarative sentences and explanations. On the technical side the paper also needs a fuller explanation of how fitting of voltagegated conductances is performed. Finally, a fuller discussion and illustration of the limitations would be appropriate, e.g. explicitly showing cases where further reductions cannot be achieved without fundamentally changing the system. These necessarily exist in a nonlinear system so this is not a shortcoming of the method, rather an important point that some readers may not appreciate.
We have rewritten the sections that were highlighted by the reviewers in order to resolve the points that were unclear. We have now clearly defined the quantities that we use. We have also explained more elaborately how the voltage-gated channels are fitted. To aid with this, we have added a further panel to Figure 1—figure supplement 1. Finally, we have added a paragraph to the Discussion, where we explain possible lines of inquiry if a reduction does not reproduce dynamics of the full model that need to be retained. To illustrate these lines of inquiry, we have added a new supplementary figure (Figure 3—figure supplement 1) where we perform a similar simulation as in Figure 3J, K, but where we left out compartments in the apical trunk. This reduction is too strong, and the generation of a spike burst cannot be mimicked by the reduced model.
Full reviewer comments are included below for information.
Reviewer #2:
[…] (1) The initial part of the Results section is important for understanding how the software works but is hard to follow. I think more care needs to be taken with defining terms clearly and explaining the logic of the approach in an accessible way.
We have added substantial clarifications to this section.
(2) How the approach deals with voltagegated ion channels could be made much clearer. E.g. In the subsection “A systematic simplification of complex neuron morphologies”, the text states that G_{Vh,chan} depends on the unknown maximal ion channel conductance parameters. How? Why maximal? Voltagegated channels cannot be maximally activated at all of the voltages tested. I don't understand how to get from here to channel conductances. It could be helpful to make a figure illustrating how fits are obtained for exemplar voltagegated conductances.
We have explicitly described the elements of the matrix G_{vh} in the manuscript, so that the parameters that are fitted are now clear. In the Materials and methods – “Quasi-active channels” subsection, we show how the linearized channel currents that go in the matrix G_{vh} are obtained. It can be seen that they are the product of a ‘maximal conductance parameter’ with a factor that follows from the linearization, and that determines the fraction of that maximal conductance that is open at a given vh. This factor thus changes with vh. Fitting our model simultaneously at a representative set of vh values allows the fit to find a best estimate for the maximal conductance parameter. An additional panel to Figure 1—figure supplement 1 now illustrates how the impedance matrix of the full model, and the inverse of the conductance matrix for the reduced model, changes under different vh. We have furthermore added a subsection to the Materials and methods, titled ”The conductance matrix”, where we explicitly describe how Equation 3 in the main text is obtained. We believe that with our added explanations, it is now clear what the matrix G_{vh} is and how the maximal conductance parameter is fitted.
(3) The section 'Conditions under which afferent spatial connectivity motifs can be simplified' could also be written more clearly. Concerns are similar to in point 1 above.
We have added additional explanations to this section, more accurately describing things that may have been unclear. We now more explicitly motivate this section, by stating that up until this section we only considered reductions where the synaptic inputs (or electrode current inputs) were located at the compartment sites. Here, we investigate what happens if this is not the case. We also changed the terminology from ‘impedances’ to ‘resistances’. We furthermore explicitly describe why β_{curr} is close to one (because the transfer impedance z_{cs} is close to the input impedance z_{cc} if the compartment site c is located more proximal than the synaptic input site s), with help of an additional panel to Figure 1—figure supplement 1. Finally, we have substantially extended the secondtolast paragraph of this section, and were able to describe the precise nature and motivation of our simulation experiments (Figure 4GJ) more clearly.
https://doi.org/10.7554/eLife.60936.sa2
Article and author information
Author details
Funding
H2020 European Research Council (720270)
 Walter Senn
Swiss National Science Foundation (180316)
 Walter Senn
Swiss National Science Foundation (159872)
 Thomas Nevian
H2020 European Research Council (785907)
 Walter Senn
H2020 European Research Council (945539)
 Walter Senn
H2020 European Research Council (682905)
 Thomas Nevian
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
This work was supported by the Swiss National Science Foundation (Grant 159872 to TN; Grant 180316 to WS – with F Helmchen), the European Union’s Horizon 2020 Framework Programme (Grant 720270, 785907 and 945539 to WS – Human Brain Project SGA 1–3) and the European Research Council (Grant 682905 to TN). We thank all our lab colleagues for helpful discussions and critical comments on the figures.
Senior Editor
 Ronald L Calabrese, Emory University, United States
Reviewing Editor
 Timothy O'Leary, University of Cambridge, United Kingdom
Reviewers
 Hugh PC Robinson, University of Cambridge, United Kingdom
 Matthew F Nolan, University of Edinburgh, United Kingdom
Publication history
 Received: July 10, 2020
 Accepted: January 4, 2021
 Version of Record published: January 26, 2021 (version 1)
Copyright
© 2021, Wybo et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.