
Bayesian inference for biophysical neuron models enables stimulus optimization for retinal neuroprosthetics

  1. Jonathan Oesterle
  2. Christian Behrens
  3. Cornelius Schröder
  4. Thoralf Hermann
  5. Thomas Euler
  6. Katrin Franke
  7. Robert G Smith
  8. Günther Zeck
  9. Philipp Berens  Is a corresponding author
  1. Institute for Ophthalmic Research, University of Tübingen, Germany
  2. Naturwissenschaftliches und Medizinisches Institut an der Universität Tübingen, Germany
  3. Center for Integrative Neuroscience, University of Tübingen, Germany
  4. Bernstein Center for Computational Neuroscience, University of Tübingen, Germany
  5. Department of Neuroscience, University of Pennsylvania, United States
  6. Institute for Bioinformatics and Medical Informatics, University of Tübingen, Germany
Research Article
Cite this article as: eLife 2020;9:e54997 doi: 10.7554/eLife.54997

Abstract

While multicompartment models have long been used to study the biophysics of neurons, it is still challenging to infer the parameters of such models from data, including uncertainty estimates. Here, we performed Bayesian inference for the parameters of detailed neuron models of a photoreceptor and an OFF- and an ON-cone bipolar cell from the mouse retina based on two-photon imaging data. We obtained multivariate posterior distributions specifying plausible parameter ranges consistent with the data and allowing us to identify parameters poorly constrained by the data. To demonstrate the potential of such mechanistic data-driven neuron models, we created a simulation environment for external electrical stimulation of the retina and optimized stimulus waveforms to target OFF- and ON-cone bipolar cells, a major open problem of retinal neuroprosthetics.

Introduction

Mechanistic models have been extensively used to study the biophysics underlying information processing in single neurons and small networks in great detail (Gerstner and Kistler, 2002; Koch, 2004). In contrast to phenomenological models used for neural system identification, such models try to preserve certain physical properties of the studied system to facilitate interpretation and a causal understanding. For example, biophysical models can incorporate the detailed anatomy of a neuron (Golding et al., 2001; Poirazi et al., 2003; Hay et al., 2011), its ion channel types (Hodgkin and Huxley, 1952; Fohlmeister and Miller, 1997) and the distributions of these channels (Rattay et al., 2017) as well as synaptic connections to other cells (O'Leary et al., 2014). For all these properties, the degree of realism can be adjusted as needed. While increased realism may enable models to capture the highly non-linear dynamics of neural activity more effectively, it usually also increases the number of model parameters. While the classical Hodgkin-Huxley model with one compartment already has 10 free parameters (Hodgkin and Huxley, 1952), detailed multicompartment models of neurons can have dozens or even hundreds of parameters (Taylor et al., 2009; Hay et al., 2011).

Constraining many of these model parameters such as channel densities requires highly specialized and technically challenging experiments, and hence it is usually not viable to measure every single parameter for a neuron model of a specific neuron type. Rather, parameters for mechanistic simulations are often aggregated over different neuron types and even across species. Even though this may be justified in specific cases, it likely limits our ability to identify mechanistic models of individual cell types. Alternatively, parameter search methods have been proposed to identify the parameters of mechanistic neuron models from standardized patch-clamp protocols based on exhaustive grid-searches (Goldman et al., 2001; Prinz et al., 2003; Stringer et al., 2016) or evolutionary algorithms (Gerken et al., 2006; Keren et al., 2005; Achard and De Schutter, 2006; Rossant et al., 2011). Such methods are often inefficient and identify only a single point estimate consistent with the data (for discussion, see Gonçalves et al., 2020).

Here, we built on recent advances in Bayesian simulation-based inference to fit multicompartment models of neurons with realistic anatomy in the mouse retina. We used a framework called Sequential Neural Posterior Estimation (SNPE) (Lueckmann et al., 2017; Gonçalves et al., 2020) to identify model parameters based on high-throughput two-photon measurements of these neurons’ responses to light stimuli. SNPE is a Bayesian simulation-based inference algorithm that allows parameter estimation for simulator models for which the likelihood cannot be evaluated easily. The algorithm estimates the distribution of model parameters consistent with specified target data by evaluating the model for different sets of parameters and comparing the model output to the target data. To this end, parameters are drawn from a prior distribution, which is an initial guess about which parameters are likely to produce the desired model output. For example, the choice of prior distribution can be informed by the literature, without constraining the model to specific values. The model output for the sampled parameter sets can then be used to refine the distribution over plausible parameters given the data. This updated distribution, containing information from both the prior and the observed simulations, is known as the posterior. For high-dimensional parameter spaces, many samples are necessary to obtain an informative posterior estimate. Therefore, to make efficient use of simulation time, SNPE iteratively updates its sampling distribution, such that only in the first round samples are drawn from the prior, while in subsequent rounds samples are drawn from intermediate posteriors. This procedure increases the fraction of samples leading to simulations close to the target data. 
Since this approach for parameter estimation not only returns a point-estimate but also a posterior distribution over parameters consistent with the data, it allows one to straightforwardly determine how well the parameters are constrained. While the method has been used previously to fit simple neuron models (Lueckmann et al., 2017; Gonçalves et al., 2020), it has so far not been applied to models as complex and realistic as the ones presented here.
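The round-by-round refinement idea behind this sampling scheme can be illustrated with a toy example. The sketch below is a deliberate simplification: it replaces SNPE's neural density estimation with a plain Gaussian fit to the best samples, and the multicompartment simulator with a hypothetical one-parameter model, so it only conveys the sequential proposal-updating idea, not the actual algorithm.

```python
import numpy as np

def simulator(theta, rng):
    # Hypothetical one-parameter stand-in for the multicompartment model.
    return 2.0 * theta + rng.normal(0.0, 0.05)

def sequential_inference(target, n_rounds=3, n_samples=2000, keep_frac=0.1, seed=0):
    """Round 1 samples from the prior; later rounds sample from an
    intermediate 'posterior' (here just a Gaussian fitted to the samples
    whose simulations were closest to the target data)."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 2.0  # broad Gaussian prior over theta
    for _ in range(n_rounds):
        thetas = rng.normal(mu, sigma, n_samples)
        outputs = np.array([simulator(t, rng) for t in thetas])
        discrepancy = np.abs(outputs - target)
        # Keep the samples whose simulations are closest to the target data.
        kept = thetas[np.argsort(discrepancy)[: int(keep_frac * n_samples)]]
        mu, sigma = kept.mean(), kept.std() + 1e-6
    return mu, sigma

# With target = 2 * 0.5, the refined distribution concentrates near theta = 0.5.
mu, sigma = sequential_inference(target=1.0)
```

Each round narrows the proposal around parameter regions whose simulations resemble the target, which is exactly why simulation time is spent more efficiently than with a single pass of prior sampling.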

We estimated the posterior parameter distribution of multicompartment models of three retinal neurons: a cone photoreceptor (cone), an OFF- and an ON-bipolar cell (BC). The structure of the BC models was based on high-resolution electron microscopy reconstructions (Helmstaedter et al., 2013) and divided into six independently parameterized regions. We performed parameter inference based on the responses of these neurons to standard light stimuli measured with two-photon imaging of glutamate release using iGluSnFR as an indicator (Franke et al., 2017). Our analysis shows that many of the model parameters can be constrained well, yielding simulation results consistent with the observed data. After validating our model, we show that the inferred models and the inference algorithm can be used to efficiently guide the design of electrical stimuli for retinal neuroprosthetics to selectively activate OFF- or ON-BCs. This is an important step toward solving a long-standing question in the quest to provide efficient neuroprosthetic devices for the blind.

Materials and methods

Key resources table

| Reagent type (species) or resource | Designation | Source or reference | Identifiers | Additional information |
| --- | --- | --- | --- | --- |
| Genetic reagent (mouse) | B6;129S6-Chattm2(cre)LowlJ | Jackson Laboratory | JAX 006410; RRID:IMSR_JAX:006410 | |
| Genetic reagent (mouse) | Gt(ROSA)26Sortm9(CAG-tdTomato)Hze | Jackson Laboratory | JAX 007905; RRID:IMSR_JAX:007905 | |
| Strain (mouse, female) | B6.CXB1-Pde6brd10 | Jackson Laboratory | JAX 004297; RRID:IMSR_JAX:004297 | |
| Strain (adeno-associated virus) | AAV2.7m8.hSyn.iGluSnFR | Virus facility, Institut de la Vision, Paris | | |
| Software, algorithm | NeuronC | https://retina.anatomy.upenn.edu/rob/neuronc.html | RRID:SCR_014148 | Version 6.3.14 |
| Software, algorithm | SNPE | https://github.com/mackelab/delfi | | See Inference algorithm |
| Software, algorithm | COMSOL Multiphysics | COMSOL Multiphysics | RRID:SCR_014767 | |

Biophysical neuron models

We created detailed models of three retinal cell types: a cone, an ON- (Figure 1A,Bi) and an OFF-BC (Figure 1Bii). From the different OFF- and ON-BC types, we chose to model the types 3a and 5o, respectively, because these were the retinal cone bipolar cell (CBC) types in mice for which we could gather the most information. To model the light response, the OFF-BC model received input from five and the ON-BC from three cones (Behrens et al., 2016). Every cone made two synaptic connections with each BC. The postsynaptic conductances were set to 0.25 nS per connection.

From serial block-face electron microscopy (EM) data of retinal BCs to multicompartment models.

(A) Raw morphology extracted from EM data of an ON-BC of type 5o. (Bi) Processed morphology connected to three presynaptic cones (red) and several postsynaptic dummy compartments that are generated to create the synapses in the model (yellow). The cone and BC morphologies are divided into color-coded regions with a legend shown on the right. (Bii) Same as (Bi) but for an OFF-BC of type 3. (C) Three cylindrical compartments of a multicompartment model. Every compartment $n$ (blue) consists of a membrane capacitance $c_m^n$, a membrane resistance $r_m^n$, a leak conductance voltage source $V_r^n$, an extracellular voltage source $V_{ex}^n$ and at least one axial resistor $r_i^n$ that is connected to a neighboring compartment. $V_{ex}^n$ is only used to simulate electrical stimulation and is otherwise replaced by a short circuit. Compartments may have one or more further voltage- or ligand-dependent resistances $r_e^n$ with respective voltage sources $V_e^n$ to simulate ion channels (indicated in gray).

Multicompartment models

Request a detailed protocol

We used NeuronC (Smith, 1992) to implement multicompartment models of these neurons. A multicompartment model subdivides a neuron into a finite number of compartments. Every compartment is modeled as an electrical circuit, has a position in space, a spatial shape and is connected to at least one neighboring compartment (Figure 1C). The voltage in a compartment n, connected to the compartments n-1 and n+1 is described by:

(1) $$\frac{\delta}{\delta t} V_m^n = \frac{1}{c_m^n}\left(\frac{V_m^{n+1} - V_m^n}{r_i^{n+1} + r_i^n} + \frac{V_m^{n-1} - V_m^n}{r_i^{n-1} + r_i^n} + \frac{V_r^n + V_{ex}^n - V_m^n}{r_m^n} + \sum_e \frac{V_e^n + V_{ex}^n - V_m^n}{r_e^n}\right) + \frac{\delta}{\delta t} V_{ex}^n$$

Here, compartments are modeled either as cylinders or spheres. The membrane capacitance $c_m^n$, membrane resistance $r_m^n$ and axial resistance $r_i^n$ of a compartment $n$ are assumed to depend on the compartment surface area $A_m^n$ and/or the compartment length $l_c^n$:

(2) $$r_m^n = R_m / A_m^n, \qquad c_m^n = C_m\, A_m^n, \qquad r_i^n = R_i\, l_c^n / A_m^n.$$

We assumed the specific membrane resistance $R_m$, the specific membrane capacitance $C_m$, and the axial resistivity $R_i$ to be constant over all compartments within a cell model. We used $R_i = 132\,\Omega\,\mathrm{cm}$ for all cell types and informed our priors for $C_m$ and $R_m$, which we estimated for every cell type individually, based on estimates for rod bipolar cells of rats (Oltedal et al., 2009). Parameters of NeuronC and the used values are summarized in Appendix 1—table 3.
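The membrane terms of Equation 2 can be checked numerically with a minimal sketch for a cylindrical compartment; the values below are illustrative, not the fitted ones. A useful sanity check is that the product $r_m^n c_m^n$ recovers the membrane time constant $R_m C_m$ regardless of compartment size.

```python
import math

def membrane_params(Rm_ohm_cm2, Cm_uF_cm2, diameter_um, length_um):
    """Per-compartment membrane terms of Equation 2 for a cylinder:
    r_m = Rm / A and c_m = Cm * A, with A the lateral surface area."""
    d_cm = diameter_um * 1e-4            # convert um to cm
    l_cm = length_um * 1e-4
    area = math.pi * d_cm * l_cm         # lateral surface area in cm^2
    r_m = Rm_ohm_cm2 / area              # membrane resistance in Ohm
    c_m = Cm_uF_cm2 * area               # membrane capacitance in uF
    return r_m, c_m

# Illustrative values: Rm = 20 kOhm*cm^2, Cm = 1 uF/cm^2, a 1 um x 10 um cylinder.
r_m, c_m = membrane_params(20e3, 1.0, 1.0, 10.0)
tau_ms = r_m * c_m * 1e-3  # Ohm * uF gives us; * 1e-3 gives ms
```

The compartment-size terms cancel in the time constant, which is why $R_m$ and $C_m$ can be treated as cell-wide constants while the per-compartment $r_m^n$ and $c_m^n$ still scale correctly with geometry.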

Anatomy

Request a detailed protocol

We used a simplified cone morphology consisting of four compartments: one cone-shaped compartment for the outer segment, one spherical compartment for the combination of inner segment and soma, one cylindrical compartment for the axon, and another spherical one for the axonal terminals (Figure 1). The light collecting area in the outer segment was set to 0.2 µm² (Nikonov et al., 2006). The diameter of the soma $d_S^c$, the axon $d_A^c$ and the axonal terminals $d_{AT}^c$, the length of the axon $l_A^c$ and the length of the outer segment $l_{OS}^c$ were based on electron microscopy data (Carter-Dawson and LaVail, 1979):

(3) $$d_S^c = 5.13\,\mu\mathrm{m}, \quad d_A^c = 1.3\,\mu\mathrm{m}, \quad d_{AT}^c = 6\,\mu\mathrm{m}, \quad l_A^c = 15\,\mu\mathrm{m}, \quad l_{OS}^c = 14.4\,\mu\mathrm{m}.$$

The BC morphologies in this study were based on serial block-face electron microscopy data of mouse bipolar cells (Helmstaedter et al., 2013). We extracted the raw voxel-based morphologies from the segmentation of the EM dataset and transformed them into a skeleton-plus-diameter representation using Vaa3D-Neuron2 auto tracing (Xiao and Peng, 2013). These were then manually refined using Neuromantic (Myatt et al., 2012) to correct small segmentation errors (Figure 1).

The ON-BC morphology we chose was classified as type 5o, matching the functional type of the model. For the OFF-BC, we chose a morphology classified as type 3b, although we functionally modeled a type 3a cell, because the chosen reconstruction was of higher quality than all available type 3a reconstructions and because type 3a and 3b BCs have very similar morphologies. Type 3a and 3b mostly differ in the average axonal field size (Wässle et al., 2009; Helmstaedter et al., 2013), with that of type 3a being larger than that of type 3b. The selected morphology had the largest axonal field among all cells classified as 3b in the dataset, well within the range of type 3a cells.

Because the computational time scales approximately linearly with the number of BC compartments, using the full number of compartments of the EM reconstructions (>1000) during parameter inference was computationally infeasible. Therefore, we utilized the compartment condensation algorithm of NeuronC, which iteratively reduces the number of compartments while preserving biophysical properties (Smith, 1992). To be able to draw a sufficient number of samples, we reduced the number of compartments during parameter inference to 22 and 19 for the OFF- and ON-BC, respectively (requiring approximately 3 min per simulation for a 31 s light stimulus). To simulate electrical stimulation, more compartments are necessary to capture the effect of the electrical field on the neurites of the BC models. Therefore, we increased the number of compartments to 139 and 152 for the OFF- and ON-BC, respectively, which is sufficient to accurately represent all major neurites without becoming computationally too expensive (requiring approximately 20 min and 13 min per simulation for a 31 s light stimulus for the OFF- and ON-BC, respectively).

Ion channels and synapses - literature review

Request a detailed protocol

The complement and distribution of voltage- or ligand-gated ion channels shapes the response of neurons. Here, ion channels are modeled as additional electrical elements in the compartments’ membrane with conductances dependent on time-varying quantities, such as the membrane potential and the calcium concentration within the cell. In addition to the equations that govern a channel’s kinetics, its location in the cell has to be defined. After a literature review of retinal cone bipolar cell types in mice, we decided to model the OFF- and ON-type for which we could gather the most information, namely BC3a and BC5. Currently, there are three accepted subtypes of BC5: 5o, 5i and 5t (Greene et al., 2016). Here, we modeled the BC5 subtype that expresses voltage-gated sodium channels (Hellmer et al., 2016), which probably also corresponds to the more transient BC5 subtype reported in Ichinose et al., 2014. The TTX sensitivity observed in Matsumoto et al., 2019 suggests that both 5o and 5i express voltage-gated sodium channels. To make our model consistent, we used data from the same BC5 subtype (5o) for the morphology, the target data and the number of cone contacts. A summary of all used channels, their location within the models and the respective references can be found in Table 1. The following paragraphs describe which channels were included in the models and why. Note, however, that for all channels (except the L-type calcium channel in the axon terminals, as calcium channels are necessary in the model for neurotransmitter release) channel densities of zero were included in the prior distributions, thereby allowing the parameter inference to effectively remove ion channels from the model.

Table 1
Ion channels of biophysical models.
  1. Regions of ion channels and the respective abbreviations as in Figure 1.

    D refers to the combination of DD and PD. All refers to the combination of IS/S, A, and AT.

  2. If multiple regions are stated for a neuron, the ion channel density differs between them.

In their axon terminals, cones express L-type calcium (CaL) channels that mediate release of the transmitter glutamate (Morgans et al., 2005; Mansergh et al., 2005; Ingram et al., 2020). We modeled calcium extrusion purely with calcium pumps (CaP) since other mechanisms such as sodium-calcium-exchangers probably only play a minor functional role in cones (Morgans et al., 1998). Additionally, there is evidence that cones express hyperpolarization-activated cyclic nucleotide-gated cation (HCN) channels of type 1, mostly in the inner segment but also in the axon (Knop et al., 2008; Van Hook et al., 2019). The presence of HCN3 channels in mouse cones is more controversial. These channels have been observed in rat cones (Müller et al., 2003), and a more recent study also found evidence for HCN3 channels at the synaptic terminals of mouse cones, but could not observe any functional differences between wild-type and HCN3-knockout mice. To restrict the number of model parameters, we did not include HCN3 in our cone model. We did, however, add calcium-activated chloride (ClCa) channels to the axon terminals (Yang et al., 2008; Caputo et al., 2015) and voltage-gated potassium (KV) channels at the inner segment (Van Hook et al., 2019).

Our BC5 type expresses voltage-gated sodium (NaV) channels at the axon shaft (Hellmer et al., 2016). Another study found inward-rectifier potassium (Kir) channels at the soma of BC5 (Knop et al., 2008), which were also found in the homologous type in rat (Cui and Pan, 2008). Additionally, BC5 express HCN channels at the axon terminal, the soma and the dendrites (Knop et al., 2008; Hellmer et al., 2016). Of the four HCN subtypes, BC5 seem to almost exclusively express HCN1. In the rat, there is also evidence for the expression of HCN4 channels in BC5 (Müller et al., 2003; Ivanova and Müller, 2006), but this could not be verified for mice. Data from rat suggest that BCs with NaV channels also express KV channels (Ma et al., 2005). We therefore added KV channels at the dendrites and the axon.

Similar to BC5, BC3a express HCN channels at the axon terminals, the soma and the dendrites. However, instead of HCN1 they express HCN4 (Hellmer et al., 2016; Knop et al., 2008). There is also evidence that BC3a express NaV channels at the axon shaft (Hellmer et al., 2016), which were also found in the homologous type in rat (Cui and Pan, 2008). Just like for BC5, we therefore also added KV channels. Kir channels in BC3a were only reported for rat so far (Cui and Pan, 2008). As we could not find any evidence for the lack of Kir channels in mouse BC3a and the channel repertoires of BC3a in mouse and rat are overall very consistent, we included them in our model.

The distribution of calcium channels in mouse CBCs is largely unknown (Van Hook et al., 2019). In the rat retina, there is evidence for T-type calcium (CaT) channels in BC3a (Ivanova and Müller, 2006). Calcium currents of unspecified type were observed in BC5 (Cui and Pan, 2008). Generally, L-type calcium (CaL) channels are believed to mediate neurotransmitter release in almost all BCs across types and species (Van Hook et al., 2019). Therefore, we included them in both BC models. The literature review in Van Hook et al., 2019 suggests that T-type calcium channels might be exclusively expressed in BC3. In mouse BC3b, the simultaneous expression of both CaT and CaL has been described (Cui et al., 2012). Furthermore, the latter and other studies (Hu et al., 2009; Satoh et al., 1998) suggest that voltage-gated calcium channels might not be located in the axon terminals only, but also in the soma and might play a role in signal transmission within the cell. Based on the studies mentioned, we assumed that BC3a and BC5 express CaL in the axon terminals and potentially also at the soma. The BC3a model may additionally use CaT channels, both at the soma and at the axon terminals. For calcium extrusion, we added calcium pumps (Morgans et al., 1998).

BC5 receive input from cones via the metabotropic glutamate receptor 6 (mGluR6) (Van Hook et al., 2019). BC3a receive input from cones via kainate receptors (Ichinose and Hellmer, 2016). We modeled the kainate receptors by modifying the inactivation time constant τγ of the AMPA receptors included with NeuronC.

Ion channels and synapses - implementation

Request a detailed protocol

All ion channels in this study were based on the models available in NeuronC. We used both Hodgkin-Huxley (HH) and Markov-Sequential-State (MS) channel implementations. Since we did not add channel noise to our model, every HH channel could have also been described as an equivalent MS channel. However, since HH channels are computationally less expensive, we used HH implementations wherever possible. Implementation details and references are listed in Table 2. The L-type calcium channel, for example, was based on the HH model defined by the following equations:

(4) $$\frac{1}{r_e} := g_e = c^3\, g_{max}, \qquad \frac{\delta}{\delta t} c = (1 - c)\,\alpha(V) - c\,\beta(V),$$
(5) $$\alpha(V) = \eta_T\,\frac{0.04\,(V + 15)}{1 - \exp\!\left(-0.04\,(V + 15)\right)}\,\frac{1}{\mathrm{ms}}, \qquad \beta(V) = \eta_T\, 5\,\exp\!\left(-\frac{V + 38}{18}\right)\frac{1}{\mathrm{ms}},$$ with $V$ in mV.

Here, $\eta_T$ corrects for differences between the temperature of the simulated cell $T_{sim}$ and the temperature for which the channel equations were defined, $T_{eq}$, based on a temperature sensitivity $Q_{10}$, which can vary between ion channels and state transitions:

(6) $$\eta_T = \exp\!\left(\log(Q_{10})\,\frac{T_{sim} - T_{eq}}{10\,\mathrm{K}}\right).$$
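For example, Equation 6 doubles all transition rates of a channel with $Q_{10} = 2$ when the simulation runs 10 K warmer than the reference temperature; a minimal sketch:

```python
import math

def temperature_factor(q10, t_sim, t_eq):
    """eta_T from Equation 6: rate scaling for the difference between the
    simulation temperature and the temperature the kinetics were defined at."""
    return math.exp(math.log(q10) * (t_sim - t_eq) / 10.0)

# A Q10 of 2 with a 10 K temperature increase doubles the rates.
eta = temperature_factor(q10=2.0, t_sim=37.0, t_eq=27.0)
```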

There are several sources of model uncertainty about the exact channel kinetics. First, not all channel models used here were developed based on mouse data, resulting in species-dependent differences. Second, we do not always know the exact subtypes of ion channels, for example in the case of the T-type calcium channel. Third, the exact temperature sensitivities $Q_{10}$ are not known. Therefore, we estimated transition rates and thresholds for state transitions during the parameter inference. For this, we allowed for offsets $\Delta V$ relative to $V$ in the rate equations and, additionally, we estimated relative time constants $\tau$ for the rates. For example, Equation 4 was changed to:

(7) $$\frac{\delta}{\delta t} c = (1 - c)\,\frac{1}{\tau_\alpha}\,\alpha(V - \Delta V_\alpha) - c\,\frac{1}{\tau_\beta}\,\beta(V - \Delta V_\beta).$$
Table 2
Ion channel implementation details and optimized channel parameters.

| Channel | NeuronC | Type | States | Parameters | Channel remarks and references |
| --- | --- | --- | --- | --- | --- |
| Kainate rec. | AMPA1 | MS | 7 | STC, τγ | Based on Jonas et al., 1993. |
| mGluR6 | mGluR | | | STC | See NeuronC documentation. |
| CaL | CA0 | HH | (4) | ΔVα, τα | Based on Karschin and Lipton, 1989. |
| CaT | CA7 | MS | 12 | ΔVα, τα | Modification of Lee et al., 2003. |
| CaP | | | | CaPK | See NeuronC documentation. |
| HCN1/2/4 | K4 | MS | 10 | | Based on Altomare et al., 2001. |
| KV | K0 | HH | (5) | ΔVα, τα | Based on Hodgkin and Huxley, 1952. |
| Kir | K5 | MS | 3 | ΔVα | Modification of Dong and Werblin, 1995. |
| ClCa | CLCA1 | MS | 12 | | Modification of Hirschberg et al., 1998. |
| NaV | NA5 | MS | 9 | ΔVα, ΔVγ, τall | Based on Clancy and Kass, 2004. |

To keep the parameter space as small as possible, we only optimized the kinetics of ion channels with high uncertainty (e.g. KV) or with high relevance for the exact timing of the neurotransmitter release (e.g. CaL and CaT). Additionally, we constrained the channel parameters to physiologically plausible ranges. Table 2 summarizes which channel parameters were estimated during parameter optimization. Time constants τ and voltage offsets ΔV not optimized were set to one and zero, respectively. For the NaV, a single time constant τall was used to modify all time constants proportionally. The calcium pump dynamics were modified by changing the calcium concentration CaPK that causes half of the maximum calcium extrusion velocity. The BC glutamate receptors were optimized by allowing for a change in the synaptic transmitter concentration at the receptors by a factor of STC, which might be smaller for the OFF-BC than for the ON-BC given the greater distance between the release sites of the cones and the dendritic tips of the BCs (Behrens et al., 2016). The simulated cell temperature Tsim was set to 37°C if not stated otherwise. For further information we refer to the NeuronC documentation (Smith, 1992).

Neurotransmitter release

Request a detailed protocol

The glutamate release of cones and BCs is mediated by ribbon synapses that release vesicles in response to calcium influx in a nonlinear way (Matthews and Fuchs, 2010; tom Dieck and Brandstätter, 2006; Baden et al., 2013). We modeled the ribbon synapses with a standard model (Smith, 1992) including a readily releasable pool (RRP) from which vesicles can be released (Lagnado and Schmitz, 2015). The presence of multiple release pools shapes the dynamics of release at the ribbon synapse and makes it state dependent, allowing for rapid adaptational processes at the synaptic site (Baden et al., 2013). In the model, the current release rate depends on the number of vesicles $v_{RRP}$ currently available in the RRP, the maximum number of vesicles $v_{RRP}^{max}$ in the RRP and the intracellular calcium concentration $[Ca]$. In NeuronC, calcium is modeled in radial shells through which calcium can diffuse deeper into the neuron. For the release of neurotransmitter, only the calcium concentration in the first shell $[Ca]_0$ (equivalent to the concentration at the membrane) is considered. The release rate $r$ is computed as:

(8) $$r(t) = \left(\frac{[Ca]_0(t)}{10^{-6}\,\mathrm{mol}}\right)^2 \frac{v_{RRP}(t)}{v_{RRP}^{max}}\; g_l\; \frac{\text{vesicles}}{\mathrm{s}},$$

where $g_l$ is a linear gain factor. $g_l$ and $v_{RRP}^{max}$ were optimized for every cell type individually. The RRP is replenished at a constant rate equal to the maximum sustainable release rate $r_{msr}$. At a time $t$, for a simulation time step $\Delta t$, the vesicles in the pool are updated as follows:

(9) $$v_{RRP}(t + \Delta t) = v_{RRP}(t) - r(t)\,\Delta t + r_{msr}\,\Delta t \left(1 - \frac{v_{RRP}(t)}{v_{RRP}^{max}}\right).$$

For the cone model, $r_{msr}$ was set to 100 vesicles per second based on Berntson and Taylor, 2003. The prior for $v_{RRP}^{max}$ was based on RRP sizes reported for salamander (Thoreson et al., 2016; Bartoletti et al., 2010). For the BCs, $r_{msr}$ was set to eight vesicles per second based on the reported value for rat rod bipolar cells in Singer and Diamond, 2006. The prior for $v_{RRP}^{max}$ was based on Wan and Heidelberger, 2011.

Bayesian inference for model parameters

To estimate the free parameters of the multicompartment models, we used a Bayesian likelihood-free inference framework called Sequential Neural Posterior Estimation (SNPE) (Lueckmann et al., 2017; Gonçalves et al., 2020). The goal of the parameter estimation was to find parameter regions for which the model outputs match the experimentally observed glutamate release in response to a light stimulus. Details of the target data, the stimulus, the comparison between experimental and simulated data and the inference algorithm are described below. To be able to simulate the light response of the BC models, we inferred the parameters of the cone model first.

Target data of neuron models

Request a detailed protocol

As target data, we used two-photon imaging data recorded with an intensity-based glutamate-sensing fluorescent reporter (iGluSnFR) (Marvin et al., 2013). All animal procedures were approved by the governmental review board (Regierungspräsidium Tübingen, Baden-Württemberg, Konrad-Adenauer-Str. 20, 72072 Tübingen, Germany) and performed according to the laws governing animal experimentation issued by the German Government.

To constrain the cone models, we used glutamate traces of two cone axon terminals (Figure 5—figure supplement 1A) in response to a full-field chirp light stimulus (Figure 5A). The traces were recorded in one transgenic mouse (B6;129S6-Chattm2(cre)LowlJ, JAX 006410, crossbred with Gt(ROSA)26Sortm9(CAG-tdTomato)Hze, JAX 007905) that expressed the glutamate biosensor iGluSnFR ubiquitously across all retinal layers after intravitreal injection of the viral vector AAV2.7m8.hSyn.iGluSnFR (provided by D. Dalkara, Institut de la Vision, Paris). The cone glutamate release in the outer plexiform layer was recorded in x-z scans (64 × 56 pixels at 11.16 Hz; Zhao et al., 2019). Regions-of-interest (ROIs) were drawn manually and traces of single ROIs were then normalized and upsampled to 500 Hz as described previously (Franke et al., 2017; Szatko et al., 2019). For each axon terminal, we computed the mean over five traces. Both means were then aligned by minimizing the mean squared error between them, and the mean of the two aligned means was used as target data for the cone model (Figure 5—figure supplement 1B).

For the BC models, we used mean glutamate traces of BC3a (n = 19 ROIs) and BC5o (n = 13 ROIs) (Figure 5—figure supplement 1C–F) in response to a chirp light stimulus (Figure 6A) from a recently published dataset (Franke et al., 2017). In that study, glutamate responses were recorded from BC terminals at different depths of the inner plexiform layer (x-y scans, 64 × 16 pixels at 31.25 Hz). ROIs were drawn automatically based on local image correlation and traces of single ROIs were normalized and upsampled to 500 Hz (see above). Since we simulated isolated BCs (except for the cone input), we used the responses to a local ‘chirp’ light stimulus recorded with the glycine receptor blocker strychnine, which means that the target data is less affected by inhibition from small-field amacrine cells. We did not consider input from GABAergic, wide-field amacrine cells, because these are not strongly activated by the local chirp stimulus (Franke et al., 2017). The shape of the BC stimulus differed from the cone stimulus as contrast was not linearized for the BC recordings and therefore intensity modulations below 20% brightness were weakly rectified.

Light stimulus and cell response

Request a detailed protocol

We first matched the experimental with the simulated stimulus. For this, we used the digital stimuli and corrected both timing and amplitude (using a sigmoid function) to minimize the mean squared error with respect to the experimentally recorded stimuli, correcting for delays and non-linearities in the displaying process. Then we linearly transformed the light stimulus such that the simulated photon absorption rates were $10 \times 10^3$ P/(s·cone) for the lowest and $31 \times 10^3$ P/(s·cone) for the highest stimulus intensity, including the background illumination, approximating the values reported in Franke et al., 2017. In NeuronC, the photon absorption rate acts as input to a phototransduction model (Nikonov et al., 1998), which provides the hyperpolarizing current entering the inner segment. The membrane potential in the axon terminal compartment regulates the calcium influx into the cell, which in turn influences the glutamate release rate. This glutamate release from the simulated cones modifies the opening probability (the fraction of open channels in the deterministic case) of postsynaptic receptors, which drive the BC models.

Discrepancy function

Request a detailed protocol

To compare model outputs to the experimentally observed target data, we defined a discrepancy function 𝜹. Since the target traces were relative fluorescence intensities, the absolute number of released glutamate vesicles could not be directly inferred from the target data, and the data only constrained relative variations in the release rate during simulation. Because we also wanted to constrain our models to plausible membrane potentials and release rates, we combined the following seven discrepancy measures:

  • δiGluSnFR: The mean squared error between the experimental and simulated iGluSnFR trace.

  • δRateRest: A penalty for implausibly high resting release rates.

  • δVRest: A penalty for implausibly low or implausibly high resting membrane potentials.

  • δRateΔ: A penalty for implausibly low release rate changes.

  • δVΔ: A penalty for implausibly low membrane potential changes.

  • δVmin: A penalty for implausibly low membrane potentials.

  • δVmax: A penalty for implausibly high membrane potentials.

The discrepancy between a model output m and the target data 𝝂t was computed as:

(10) 𝜹(𝝂t,m)=[δiGluSnFR(𝝂t,m),δRateRest(m),δVRest(m),δRateΔ(m),δVΔ(m),δVmin(m),δVmax(m)].

To identify the overall ‘best’ samples, we computed the total discrepancy as the ℓ1-norm (the sum of absolute values) of the discrepancy vector: δtot(𝝂t,m) = ‖𝜹(𝝂t,m)‖₁.

The discrepancy function δiGluSnFR (Equation 10) computes the distance between a simulated iGluSnFR trace 𝝂m and an iGluSnFR target 𝝂t. To estimate the simulated iGluSnFR trace, we convolved the glutamate release rate 𝐫m with an iGluSnFR kernel κ. The time-dependent kernel function κ was approximated by an exponential decay, based on iGluSnFR intensity changes in response to spontaneous vesicle release reported in Marvin et al., 2013:

(11) 𝝂m = 𝐫m ∗ κ,   κ(t) = exp(−t / 60 ms).
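The convolution with the exponential iGluSnFR kernel can be sketched as follows (a minimal sketch, not the NeuronC implementation; the time step `dt` and the truncation of the kernel at five time constants are our choices):

```python
import numpy as np

def iglusnfr_trace(release_rate, dt, tau=0.06):
    """Convolve a glutamate release rate with a causal exponential-decay
    iGluSnFR kernel (tau = 60 ms), keeping the output aligned with the input."""
    t = np.arange(0.0, 5.0 * tau, dt)            # truncate kernel at 5*tau
    kernel = np.exp(-t / tau)
    return np.convolve(release_rate, kernel, mode="full")[: len(release_rate)] * dt
```

A unit impulse in the release rate then produces the expected exponential decay with ratio exp(−dt/τ) between successive samples.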

The discrepancy was then computed as the Euclidean distance between the simulated and the target iGluSnFR trace with respect to a distance-minimizing linear transformation of the simulated trace. This linear transformation was necessary because the target traces only reflect relative fluorescence changes. The discrepancy was normalized to lie between zero and one by dividing by ‖𝝂t − μt‖², where μt is the mean of the target data.

(12) δiGluSnFR(𝝂t,𝝂m) = min a,b ‖𝝂t − (a + b·𝝂m)‖² / ‖𝝂t − μt‖²,   with b ≥ 0.
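Since the objective is linear in a and b, this distance reduces to a least-squares fit; a minimal sketch (the fallback to b = 0 handles the non-negativity constraint, since the constrained optimum then lies on the boundary; the function name is ours):

```python
import numpy as np

def d_iglusnfr(target, sim):
    """Normalized squared distance after a distance-minimizing affine
    transformation a + b*sim of the simulated trace, with b >= 0."""
    A = np.column_stack([np.ones_like(sim), sim])
    (a, b), *_ = np.linalg.lstsq(A, target, rcond=None)
    if b < 0.0:                      # enforce b >= 0 (boundary optimum)
        a, b = target.mean(), 0.0
    resid = target - (a + b * sim)
    centered = target - target.mean()
    return float(resid @ resid) / float(centered @ centered)
```

By construction, a simulated trace that is an affine transformation of the target (with positive slope) has a distance of zero, and a trace carrying no usable information (b forced to zero) has a distance of one.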

For all other discrepancies, specific values of the glutamate release rate (in the case of the BCs, the mean release rate over all synapses) or the somatic membrane potential were compared to a lower and an upper bound of target values tl and tu, such that values within these bounds were assigned a discrepancy of 0.0. Outside this range, the discrepancy was defined by additional bounds pl and pu. Given a specific value of the simulation ym, the respective discrepancy δ(ym) was computed as:

(13) δ(ym) = { 1 − exp(−2(ym − tl)² / (tl − pl)²)   if ym ∈ (−∞, tl),
               0                                     if ym ∈ [tl, tu],
               1 − exp(−2(ym − tu)² / (tu − pu)²)   if ym ∈ (tu, ∞).
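A direct transcription of this piecewise penalty (a sketch; the function name and scalar inputs are our choices):

```python
import math

def bound_penalty(y, tl, tu, pl, pu):
    """Zero inside [tl, tu]; outside, grows smoothly towards one,
    reaching 1 - exp(-2) at the additional bounds pl and pu."""
    if tl <= y <= tu:
        return 0.0
    if y < tl:
        return 1.0 - math.exp(-2.0 * (y - tl) ** 2 / (tl - pl) ** 2)
    return 1.0 - math.exp(-2.0 * (y - tu) ** 2 / (tu - pu) ** 2)
```

For example, with the BC resting-potential bounds from Table 3 (pl = −80, tl = −65, tu = −45, pu = −30), a resting potential of −55 mV incurs no penalty, while −80 mV is penalized with 1 − exp(−2) ≈ 0.86.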

To compute δRateRest and δVRest, the resting release rate rm0 and resting membrane potential vm0 for the background light adapted state were extracted. For the BC models, the resting membrane potential was not penalized for values between tl=−65 mV and tu=−45 mV based on reported values for mouse (Ichinose et al., 2014; Ichinose and Hellmer, 2016) and rat retina (Ma et al., 2005). For the cone model, the expected resting membrane potential was more depolarized, between tl=−55 mV and tu=−40 mV (Cangiano et al., 2012).

The discrepancy of the resting release rate δRateRest was computed similarly. For the BC models, the lower bound tl was set to zero. As mentioned earlier, we limited our BC models to a maximum sustainable release rate of 8 vesicles per second based on Singer and Diamond, 2006. We allowed non-zero resting release rates due to the background light and spontaneous vesicle fusion but constrained them to values lower than the maximum sustainable release rate (Kavalali, 2015; Baden et al., 2014). For the OFF-BC, we chose an upper bound of 4 vesicles per second (half the maximum sustainable release rate). For the ON-BC, we chose a slightly smaller value of 3 vesicles per second. This difference was based on the observation that the ON-BC target never falls significantly below its resting value, indicating that the resting release rate is probably close to zero and cannot decrease further. In contrast, the OFF-BC target falls below its resting value right after stimulus onset, indicating a small but non-zero resting release rate. For the cone model, we assumed a comparably high resting release rate between tl=50 and tu=80 vesicles per second based on the assumed higher maximum sustainable release rate and the fact that cones show steady release in darkness (Choi et al., 2005; Sheng et al., 2007).

For the penalty on implausible release changes δRateΔ, we computed the largest absolute difference Δr between the resting release rate rm0 and release rates 𝐫m after stimulus onset. δVΔ was computed analogously but for the membrane potential 𝐯m and the resting membrane potential vm0:

(14) Δr = max |𝐫m − rm0|   and   Δv = max |𝐯m − vm0|.

δRateΔ(ym) and δVΔ(ym) were then computed by using the differences ym=Δr and ym=Δv, respectively, in Equation 13. For the BC release rate, we did not penalize differences larger than tl=5 vesicles per second. For the cone, we expected much larger differences between tl=50 and tu=65 vesicles per second due to its larger maximum sustainable release rate. For the membrane potential, we expected a difference of at least tl=5 mV based on light step responses recorded with patch clamp in mouse BCs (Ichinose et al., 2014; Ichinose and Hellmer, 2016). Since the stimulus contrast was higher here, we used the reported values only as lower bounds and allowed the model larger variation, namely up to tu=25 mV for the OFF- and tu=15 mV for the ON-BC. We allowed greater membrane potential variation in the OFF-BC because it receives input from more cones.

For the discrepancy measures δVmin and δVmax, we computed the minimum and maximum of the membrane potential 𝐯m after stimulus onset and again used Equation 13. For δVmin, we chose tl=−80 mV for the BCs and tl=−60 mV for the cone model, and in both cases tu=∞. For δVmax, we chose tu=−10 mV for the BCs and tu=−35 mV for the cone, and in both cases we set tl=−∞. The BC values are based on data from rat (Ma et al., 2005) and ground squirrel (Saszik and DeVries, 2012); the cone values are based on Cangiano et al., 2012.

All values for pl and pu were based on pilot simulations, with the goal of distributing the penalties where they mattered most. All discrepancies (except for δiGluSnFR) and their respective values pl, pu, tl and tu are illustrated in Figure 2 for clarity and summarized in Table 3.

Discrepancy measures based on Equation 13 for the cone (red dashed line), the OFF- (blue solid line) and ON-BC (green dotted line).

The parameters defining the discrepancy measures are listed in Table 3. All discrepancy measures lie between zero and one by definition.

Table 3
Parameters of discrepancy measures. Rates are given in vesicles/s and voltages in mV; ‘−’ indicates no bound. Where the OFF- (type 3a) and ON-BC (type 5o) differ, values are given as 3a/5o.

δ          | Cone: pl, tl, tu, pu   | BC: pl, tl, tu, pu    | References
δRateRest  | 0, 50, 80, 100         | 0, 0, 4/3, 7          | Choi et al., 2005; Sheng et al., 2007; Berntson and Taylor, 2003; Singer and Diamond, 2006
δVRest     | −80, −55, −40, −20     | −80, −65, −45, −30    | Cangiano et al., 2012; Ichinose et al., 2014; Ichinose and Hellmer, 2016; Ma et al., 2005
δRateΔ     | 0, 50, 65, 100         | 0, 5, −, −            |
δVΔ        | 0, 5, 10, 20           | 0, 5, 25/15, 40       | Ichinose et al., 2014; Ichinose and Hellmer, 2016
δVmin      | −75, −60, −, −         | −100, −80, −, −       | Cangiano et al., 2012; Ma et al., 2005; Saszik and DeVries, 2012
δVmax      | −, −, −35, −20         | −, −, −10, 0          | Cangiano et al., 2012; Ma et al., 2005; Saszik and DeVries, 2012

Priors

SNPE is a Bayesian inference method and therefore requires a prior distribution p(𝜽) over the parameters 𝜽 to estimate the posterior. We chose truncated normal distributions for all priors because they allow weighting of more plausible parameters (in contrast to, e.g., uniform distributions) while restricting the parameters to plausible ranges (in contrast to, e.g., normal distributions). A d-dimensional truncated normal distribution 𝒩T is defined by a mean 𝝁=(μ1,…,μd)ᵀ, a d×d covariance matrix Σ and a d-dimensional box W=[a1,b1]×…×[ad,bd]:

(15) 𝒩T(𝜽|𝝁,Σ,W) = { exp(−0.5(𝜽−𝝁)ᵀΣ⁻¹(𝜽−𝝁)) / ∫W exp(−0.5(𝝎−𝝁)ᵀΣ⁻¹(𝝎−𝝁)) d𝝎   if 𝜽 ∈ W,
                      0                                                                 otherwise.

The prior means μi and truncation bounds [ai,bi] were based on experimental data wherever possible (see Appendix 1—table 1 and 2), including data from rat and from different cell types such as rod bipolar cells, as well as on pilot simulations. For parameter inference, we normalized the parameter space such that the truncation bounds were [0, 1] in all dimensions. The diagonal entries of the prior covariance matrix Σ were set to 0.32. Because it is difficult to obtain prior knowledge about the dependencies between parameters, we set all non-diagonal entries to zero. To sample from 𝒩T, we implemented a rejection sampler that samples from a normal distribution with the same mean 𝝁 and covariance matrix Σ and resamples all 𝜽 not in W.
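The rejection sampler can be sketched as follows (a sketch, assuming scalar truncation bounds shared by all dimensions, as is the case after normalization to [0, 1]; the function name is ours):

```python
import numpy as np

def sample_truncated_normal(mu, cov, lower, upper, n, rng=None):
    """Draw n samples from a multivariate normal, rejecting and redrawing
    any sample that falls outside the box [lower, upper]^d."""
    rng = np.random.default_rng(rng)
    mu = np.asarray(mu, dtype=float)
    out = np.empty((0, mu.size))
    while out.shape[0] < n:
        draws = rng.multivariate_normal(mu, cov, size=n)
        inside = np.all((draws >= lower) & (draws <= upper), axis=1)
        out = np.vstack([out, draws[inside]])
    return out[:n]
```

Rejection sampling is exact here because discarding out-of-bounds draws renormalizes the Gaussian density over W, which is precisely the definition in Equation 15.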

Inference algorithm

SNPE estimates a posterior parameter distribution represented by a mixture-density network, based on sampling, that is, model evaluations for randomly drawn parameters. Inference is performed in several rounds. In every round j, the algorithm draws N parameters from a sampling distribution p~j(𝜽) to estimate the posterior distribution p(𝜽|𝐱target), where 𝐱target is a summary statistic of the target data.

In the first round, parameter samples 𝜽n are drawn from the prior, that is, p~1(𝜽)=p(𝜽), and the multicompartment model is evaluated for all 𝜽n. From each simulated response, a summary statistic 𝐱n is computed, resulting in N pairs of parameters and summary statistics (𝜽n,𝐱n). At the end of the round, a mixture-density network is trained with summary statistics 𝐱 as input and the parameters ϕ of a mixture-of-Gaussians distribution qϕ(𝜽|𝐱) as output. The network is trained by minimizing the loss function ℒ:

(16) ℒ(ϕ) = −(1/N) ∑n=1…N [p(𝜽n)/p~(𝜽n)] · K(𝐱n) · log(qϕ(𝜽n|𝐱n)),

where K is a kernel function with values between zero and one that weights the influence of samples on the network training. K(𝐱n) is close to one for samples with summary statistics 𝐱n close to the target summary statistic 𝐱target and becomes smaller with increasing discrepancy between 𝐱n and 𝐱target. This means that the network tries to find parameter distributions qϕ(𝜽|𝐱) that describe the distribution of samples for any given summary statistic 𝐱. In other words, the network is trained to find a mapping from summary statistics to parameter distributions. K ensures that the network focuses its capacity on summary statistics close to the target summary statistic. After training, the network can be evaluated at a summary statistic 𝐱* to obtain the posterior parameter distribution for that summary statistic. Evaluating at 𝐱*=𝐱target yields an approximation of the true posterior distribution p(𝜽|𝐱target)≈qϕ(𝜽|𝐱target). This posterior can either be used as the sampling distribution of the next round p~j+1(𝜽) or, if the algorithm is stopped, as the final posterior distribution. The relative probability p(𝜽n)/p~(𝜽n) in Equation 16 weights samples not drawn from the prior, which ensures that Bayes’ rule is not violated. A detailed proof that this yields an approximation of the true posterior in the Bayesian sense can be found in Lueckmann et al., 2017.

We based our algorithm on the Python code available at https://github.com/mackelab/delfi, version 0.5.1 (Macke Lab, 2020), with the following settings and modifications. We modeled qϕ as a single Gaussian, because we noticed that mixtures of Gaussians almost always collapsed to a single component after a few rounds. Both intermediate and final posteriors were truncated using the truncation bounds of the prior. The truncation was performed after network training. For every neuron model, we drew N=2,000 samples per round and stopped the algorithm after the fourth round. Two hundred additional samples were drawn from the posterior for further analysis. Since we wanted to use the posterior samples to simulate the effects of electrical stimulation on the BCs, the number of compartments was increased in this last step to 139 and 152 for the OFF- and ON-BC, respectively.

As summary statistics of model outputs mn, we used the discrepancy function δ(𝝂t,mn)=𝐱n (see Equation 10), which describes the discrepancy between model outputs and the target data. The target summary statistic was set to the zero-vector 𝐱target=[0,…,0], since the target should have a discrepancy of zero with respect to itself. The first dimension of 𝜹, δiGluSnFR, computes the distance between the simulated and experimentally observed iGluSnFR trace. Considering the noise in the target data, observing a discrepancy of zero is virtually impossible. Evaluating the network at 𝐱target=[0,…,0] therefore amounts to extrapolation, that is, the mixture-density network is evaluated for an input on which it was not trained. As we observed during pilot experiments, this often led to posterior estimates of poor quality or endless loops of resampling. Instead of evaluating the network at 𝐱target=[0,…,0] to obtain the posterior estimate, we therefore evaluated it at 𝐱target=[xminiGluSnFR,0,…,0], where xminiGluSnFR is the smallest δiGluSnFR sampled during this round. This is roughly equivalent to assuming that the best strategy for extrapolation is to simply use the estimate at the boundary. For the weighting function K, we used zero-centered Gaussian kernels k with a bandwidth of σ=0.25 in all dimensions but the first. In the first dimension, that is, the weighting kernel for δiGluSnFR, we used an adaptive strategy, and both the mean μiGluSnFR and the bandwidth σiGluSnFR of the kernel were updated in every round:

(17) μiGluSnFR=xminiGluSnFR,σiGluSnFR=q20iGluSnFR-xminiGluSnFR,

where q20iGluSnFR is the 20th percentile of all sampled iGluSnFR discrepancies of the same round. K was computed as the product of all scalar kernels k.
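The product kernel K can be sketched as follows (a sketch; allowing `center` and `sigma` to be per-dimension arrays accommodates the adaptive first-dimension kernel described above):

```python
import numpy as np

def calibration_kernel(x, center, sigma):
    """Product of (per-dimension) Gaussian kernels, yielding a
    weight in (0, 1] for a summary-statistic vector x."""
    x = np.asarray(x, dtype=float)
    return float(np.prod(np.exp(-0.5 * ((x - center) / sigma) ** 2)))
```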

Some parameter combinations caused the neuron simulation to become numerically unstable. If a simulation could not terminate successfully for this reason, the sample was ignored during training of the mixture-density network by setting the kernel weight to zero. In other cases, the BC models had a second, strongly depolarized and therefore biologically implausible equilibrium state. To test for this, we simulated a somatic voltage clamp to 30 mV for 100 ms and checked whether the membrane potential would recover to a value of −30 mV or lower within an additional 300 ms. Samples not recovering to ≤ −30 mV were also ignored during training.

Data analysis of simulated traces

The distance function δiGluSnFR(𝝂t,𝝂m) (see Equation 12) was used not only to compute the discrepancy between simulations and the respective targets but also more generally to compare different experimental and simulated iGluSnFR traces. The distance between two iGluSnFR traces 𝝂1 and 𝝂2 was computed as δiGluSnFR(𝝂1,𝝂2).

To quantify the timing precision of our neuron models, we estimated peak times in simulated and target iGluSnFR traces to compute pairwise peak time differences. For every peak in the simulated trace, we computed the time difference to the closest peak of the same polarity (positive or negative) in the target. We did not consider peaks between 16 s and 23 s of the stimulation for the cone and between 16 s and 21 s for the BC models, because the targets were too noisy for precise peak detection in these time windows. This resulted in approximately 35 positive and negative peak time differences per trace.

Simulation of electrical stimulation

To simulate external electrical stimulation of our BC models, we implemented a two-step procedure. In the first step, the electrical field is estimated as a function of space and time across the whole retina for a given stimulation current. By setting a position of the BC multicompartment models within the retina, the extracellular voltage for every compartment can be extracted. In the second step, the extracellular voltages are applied to the respective compartments (Figure 1C) to simulate the neural response in NeuronC. To be able to perform the first step, we first had to estimate the electrical properties of retinal tissue. For this, we utilized the same algorithm that was used for parameter inference of the neuron models. To validate the framework, we simulated the electrical stimulation in Corna et al., 2018 and compared experimental and simulated neural responses. Finally, we utilized the framework to find electrical stimuli for selective stimulation of OFF- and ON-bipolar cells. Details of the implementation and the experimental data are described in the following.

Computing the extracellular voltage

We estimated the electrical field in the retina for a given electrical stimulus with the finite-element method using the software COMSOL Multiphysics (Comsol, 2019). We modeled the photoreceptor degenerated retina as a cylinder with a radius of 2 mm and a height of 105 µm (Pennesi et al., 2012). The stimulation electrodes were modeled as flat disks on the bottom of the retina. Above the retina, an additional cylinder with the same radius and a height of 2 mm was placed to model the electrolyte. The top of this cylinder was assumed to be the return electrode. The implementation of such a model with the subdivision into finite elements is shown in Figure 3. For a single circular stimulation electrode, the model was radially symmetric and could therefore be reduced to a half cross-section as shown in Figure 3 to increase the simulation speed without altering the results. The following initial and boundary conditions were applied to the model. The initial voltage was set to zero at every point V(x,y,z,t=0)=0. The surface normal current density jstim of stimulation electrodes was always spatially homogeneous and dependent on the total stimulation current istim and the total surface area of all electrodes Aelectrode:

(18) jstim = istim / Aelectrode.
Model for the external electrical stimulation of the retina.

(A) Schematic figure of the experimental setup for subretinal stimulation of ex vivo retina combined with epiretinal recording of retinal ganglion cells. Schematic modified from Corna et al., 2018. (B,C) Model for simulating the electrical field potential in the retina in 3D and 2D, respectively. The retina (darker blue) and the electrolyte above (lighter blue) are modeled as cylinders. The shown 3D model is radially symmetric with respect to the central axis (red dashed line). Therefore, the 3D and 2D implementations are equivalent, except that the computational costs for the 2D model are much lower. The 2D implementation is annotated with parameters that were either taken from the literature or inferred from experimental data. (D) Electrical field potential in the retina for a constant stimulation current of 0.5 µA for a single stimulation electrode with a diameter of 30 µm. Additionally, the compartments (black circles with white filling) of the ON-BC model are shown. The stimulation is subretinal meaning that the dendrites are facing the electrode (horizontal black line on bottom).

The potential of the return electrode was kept constant Vreturn(t)=0. At all other boundaries, the model was assumed to be a perfect insulator jother=0. We assumed a spatially and temporally homogeneous conductivity and permittivity in both the retina and the electrolyte. The conductivity of the electrolyte was set to σames=1.54S/m based on Eickenscheidt and Zeck, 2014 and its relative permittivity was assumed to be εames=78, based on the value for water. The conductivity σretina and relative permittivity εretina of the retina were optimized with respect to experimental target data as described below.

Target data to infer the electrical parameters of the retina

To estimate the electrical properties of the retina, we first recorded target data. All procedures were approved by the governmental review board (Regierungspräsidium Tübingen, Baden-Württemberg, Konrad-Adenauer-Str. 20, 72072 Tübingen, Germany, AZ 35/9185.82–7) and performed according to the laws governing animal experimentation issued by the German Government. We applied different sinusoidal stimulation voltages vstim and recorded the evoked currents. Currents were recorded with (iretinarec) and without (iamesrec) retinal tissue placed on the micro-electrode array. In both cases, the recording chamber was filled with an electrolyte (Ames’ medium, A 1420, Sigma, Germany). A single Ag/AgCl pellet (E201ML, Science Products) was used as a reference electrode and located approximately 1 cm above a customized micro-electrode array. The electrodes, made of sputtered iridium oxide, had diameters of 30 µm and a center-to-center distance of 70 µm. The stimulation current was calculated from the voltage drop across a 10 Ω resistor in series with the Ag/AgCl electrode (Corna et al., 2018). The voltage drop was amplified using a commercial voltage amplifier (DLPVA, Femto Messtechnik GmbH, Berlin, Germany) and recorded using the analog input (ME 2100, Multi Channel Systems MCS GmbH, Germany). Stimulation currents were measured across an ex vivo retina of a rd10 mouse (female; post-natal day 114; strain: Pde6brd10, JAX Stock No: 004297).

We applied sinusoidal voltages of 25 and 40 Hz. For 25 Hz, we applied amplitudes from 100 to 600 mV with steps of 100 mV. For 40 Hz all amplitudes were halved.

Procedure to infer the electrical parameters of the retina

We estimated the conductivity σretina and relative permittivity εretina of the retina in three steps based on the experimental voltages vstim and the respective recorded currents iretinarec and iamesrec. To facilitate the following steps, we fitted sinusoids iretina and iames to the slightly skewed recorded currents and used these fits in all subsequent steps (Figure 8C). To fit the sinusoids, we minimized the mean squared error between recorded currents and idealized sinusoidal currents of the same frequency f, resulting in estimates of the phase ϕ(iames) and the amplitude A(iames) of the currents:

(19) ϕ(x), A(x) = argmin ϕ,A ∫ (x(t) − A·sin(2πft + ϕ))² dt.
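Because A·sin(2πft + ϕ) = A·cos(ϕ)·sin(2πft) + A·sin(ϕ)·cos(2πft), this fit is linear in the two unknowns A·cos(ϕ) and A·sin(ϕ) and reduces to ordinary least squares on sine and cosine regressors (a minimal sketch for uniformly sampled data; the function name is ours):

```python
import numpy as np

def fit_sinusoid(x, t, f):
    """Recover phase and amplitude of A*sin(2*pi*f*t + phi) via linear least squares."""
    # x = a*sin(wt) + b*cos(wt), with a = A*cos(phi), b = A*sin(phi)
    M = np.column_stack([np.sin(2.0 * np.pi * f * t), np.cos(2.0 * np.pi * f * t)])
    (a, b), *_ = np.linalg.lstsq(M, x, rcond=None)
    return float(np.arctan2(b, a)), float(np.hypot(a, b))
```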

During parameter inference, we only used two voltage amplitudes per frequency, resulting in four voltage and eight current traces. The other amplitudes were used for model validation. First, we estimated the electrical properties of the electrode. Here, ‘electrode’ is meant to include the electrical double layer and all parasitic resistances and capacitances in the electrical circuit. We simulated the voltage vames across the electrolyte without retinal tissue by applying the currents iames as stimuli (Figure 8Ai). Since this setup does not contain anything besides the electrolyte and the electrode, the difference between the experimental stimulus vstim, which was applied to record iames, and the simulated voltage vames was assumed to have dropped over the electrode:

velectrode = vstim − vames.

Based on that assumption, we could estimate the electrical properties of the electrode. We modeled the electrode as an RC parallel circuit (Figure 8Aii). Having both the sinusoidal voltage velectrode across and the sinusoidal current iames through the electrode, we analytically computed the values for Re and Ce as follows. We assumed Re and Ce to depend on vstim and therefore on the stimulus frequency and amplitude. From the data, we derived the phase ϕZ and amplitude |Z| of the impedance formed by the RC circuit. For every velectrode, we estimated ϕ(velectrode) and A(velectrode) using Equation 19. ϕZ and |Z| were then computed as:

(20) ϕZ=ϕ(velectrode)-ϕ(iames),|Z|=A(velectrode)/A(iames).

Then, knowing the frequency f, ϕZ and |Z| are sufficient to compute Re and Ce:

(21) Re = |Z|·√(1 + tan(ϕZ)²),   Ce = −tan(ϕZ) / (2πf·Re).
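A sketch of Equation 21, which can be verified by a round trip against the impedance of a parallel RC circuit, Z = R / (1 + j·2πf·RC):

```python
import math

def rc_from_impedance(abs_z, phi_z, f):
    """Recover parallel-RC values from impedance magnitude and phase at frequency f."""
    r = abs_z * math.sqrt(1.0 + math.tan(phi_z) ** 2)
    c = -math.tan(phi_z) / (2.0 * math.pi * f * r)
    return r, c
```

For example, computing Z for R = 1 kΩ and C = 1 µF at 25 Hz and feeding |Z| and its phase back into the function recovers R and C.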

With the estimated values of the RC circuit, we created a model with only two unknowns, the conductivity σretina and the relative permittivity εretina of the retina (Figure 8Aiii). To estimate the unknown parameters of this model, we used the same inference algorithm as for the neuron models but with a different discrepancy function. Here, the discrepancy δR(vstim) for a stimulus vstim was computed as the mean squared error between the respective experimental current (now with retinal tissue) iretina and the simulated current iretinasim:

(22) δR(vstim) = ∫ (iretina − iretinasim)² dt.

The total discrepancy was computed as the sum of all discrepancies δR(vstim) for the four different vstim stimuli that were used. To cover a wider range of possible parameters, we first estimated the parameters in a logarithmic space by sampling the exponents pσ and pε of the parameters:

(23) σretina = 2^pσ · 0.1 S/m,   εretina = 2^pε · 10⁶.

We used normal distributions (without truncation) as priors for pσ and pε and set the means to 1.0 and the standard deviations to 2.0. After three rounds with 50 samples each, we computed the minimum (aσ, aε), maximum (bσ, bε) and mean (μσ, με) of both parameters σretina and εretina from the 10% best samples. Then we ran the parameter inference algorithm again, but now in a linear parameter space around the best samples observed in the logarithmic space. For the priors of σretina and εretina, we used truncated normal priors bound to [aσ,bσ] and [aε,bε] with means μσ and με, respectively. As for the cell parameter inference, we normalized the parameter space to [0, 1]. The diagonal entries of the prior covariance matrix were set to 0.32, with non-diagonal entries of zero. The parameters resulting in the lowest sampled discrepancy during optimization are referred to as the optimized parameters and were used to simulate the neural responses to electrical stimulation.

Simulation of the neural response to electrical stimulation

With the optimized parameters for the electrical properties of the retina, we were able to compute the BC responses for any given stimulation current. Note that for this, we used the model illustrated in Figure 3 as described earlier, but with the optimized parameters for σretina and εretina. To simulate the neural response, we first used the stimulation current to simulate the extracellular voltage over time within the retina. After defining the relative position of the multicompartment model with respect to the retinal cylinder, we extracted the extracellular voltage for each compartment at its central position (Figure 3C). Finally, these extracellular voltages were applied to the compartment models in NeuronC to simulate their response (Figure 1C). To estimate the uncertainty of the BC responses to electrical stimulation, we simulated different cell parametrizations in every stimulation setting. For this, we used the five best posterior samples, that is, the five (out of 200) samples with the smallest δtot, for both BC models. In all simulations, we modeled subretinal stimulation of photoreceptor degenerated retina (Zrenner, 2002). For this, we removed all cone input from the BCs and virtually placed the multicompartment models in the retinal cylinder such that the dendrites faced the electrode. The z-position of BC somata, that is, the distance to the bottom of the retinal cylinder, was set to 30 µm.

Model validation

To validate the model for electrical stimulation, we compared simulated BC responses to experimentally recorded retinal ganglion cell (RGC) thresholds for 4 ms biphasic current pulses reported in Corna et al., 2018. In this study, the RGC thresholds were recorded epiretinally under subretinal stimulation of photoreceptor degenerated (rd10) mouse retina using a micro-electrode array (Figure 3A). The stimulation threshold was defined as the charge delivered during the anodic stimulation phase evoking 50% of the firing rate of a specific RGC. On the micro-electrode array, the 30 µm diameter electrodes were arranged on a regular grid with a center-to-center spacing of 70 µm. The RGC thresholds were measured for different configurations of N×N active electrodes.

We simulated the electrical field in the retina for the configurations with 1×1, 2×2, 4×4 and 10×10 active electrodes using the respective currents from the experimental data. The electrodes were centered with respect to the retina. For every stimulation current, we simulated the response of the OFF- and ON-BC at six xy-positions with distances from 0 to 500 µm relative to the center. Simulation temperature Tsim was set to 33.5°C to match experimental conditions. For every 40 ms simulation, we computed the mean number of vesicles released per synapse.

Optimizing electrical stimulation to separately activate ON- and OFF-BCs

To find stimuli for selective stimulation of ON- and OFF-BCs, we simulated the response of the BC models to different electrical stimuli. For this, we used a single 30 µm diameter electrode and centered the dendrites of the simulated BCs above this electrode. To find stimuli that stimulate the OFF-BC without stimulating the ON-BC or vice versa, we utilized the same algorithm used for estimating the BC parameters. Here, the inference algorithm was used to estimate parameters of a 40 ms stimulation current istim parametrized by four free parameters p1,,p4. The current was defined as a cubic spline fit through the knot vector 𝐚=(0,p1,,p4,p*,0) spaced equidistantly in time between zero and 40 ms, where p* is chosen such that the stimulus is charge neutral (i.e. the integral over the current is zero). For all stimuli, the maximum stimulus amplitude was normalized to 0.5 µA. An illustration is shown in Figure 10.
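The stimulus parametrization can be sketched with SciPy's cubic-spline interpolation. Since the integral of the spline is linear in p*, two evaluations suffice to solve for the charge-neutral knot (a sketch under our own choices of knot spacing and sampling resolution; the source does not specify the spline boundary conditions):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def stimulus(params, t_end=0.04, i_max=0.5e-6, n_samples=400):
    """Charge-neutral cubic-spline stimulus through the knots (0, p1..p4, p*, 0),
    normalized to a maximum absolute amplitude of i_max (in A)."""
    knot_t = np.linspace(0.0, t_end, 7)

    def charge(p_star):
        knots = np.array([0.0, *params, p_star, 0.0])
        return CubicSpline(knot_t, knots).integrate(0.0, t_end)

    q0, q1 = charge(0.0), charge(1.0)      # charge is linear in p_star
    p_star = -q0 / (q1 - q0)               # solve for zero total charge
    spline = CubicSpline(knot_t, np.array([0.0, *params, p_star, 0.0]))
    t = np.linspace(0.0, t_end, n_samples)
    i = spline(t)
    return t, i * (i_max / np.abs(i).max())
```

Rescaling to the maximum amplitude preserves charge neutrality, since multiplying the trace by a constant leaves a zero integral at zero.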

Here, the priors over p1,…,p4 were Gaussian with zero means and standard deviations of 0.3. For every sampled stimulus istimn, we simulated the response of the BCs for Δt=60 ms starting at stimulus onset. For parameter estimation, we defined the discrepancy measure δstimt as the ratio between the relative releases Rn of the OFF- and ON-BC, where Rn was defined as:

(24) Rn = ((μ(𝐫n) − μ(𝐫base)) · Δt) / (vRRPmax + (rmsr − μ(𝐫base)) · Δt),

where μ(𝐫n) is the evoked mean release and μ(𝐫base) is the base release rate in the absence of electrical stimulation; that is, the numerator is equal to the number of released vesicles (as a mean over all synapses) caused by the stimulation. The denominator is equal to the theoretical maximum of releasable vesicles per synapse (see Equation 9). δstimt was computed as:

(25) δstimt(istimn)=Rn(other BC) / Rn(target BC).
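Equations 24 and 25 amount to only a few lines (a sketch; the argument and function names are ours):

```python
def relative_release(mean_rate, base_rate, dt, v_rrp_max, r_msr):
    """Evoked vesicles released per synapse, relative to the theoretical
    maximum of releasable vesicles (Equation 24)."""
    return ((mean_rate - base_rate) * dt) / (v_rrp_max + (r_msr - base_rate) * dt)

def stim_discrepancy(r_other, r_target):
    """Selectivity discrepancy: relative release of the non-target BC
    divided by that of the target BC (Equation 25)."""
    return r_other / r_target
```

A small discrepancy thus means the stimulus drives the target BC strongly while leaving the other BC type comparatively silent.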

We ran the parameter inference twice (each with one round only), once with the ON- and once with the OFF-BC as the target. We drew 400 samples from the prior, which were reused for the second run of inference, and 100 more samples from the posterior. Here, the posteriors were two-component mixtures of Gaussians without truncation.

Code and data availability

Models and simulation code are available at https://github.com/berenslab/CBC_inference (Oesterle, 2020; copy archived at swh:1:rev:2b8ec4ac0ca916d42cba0404229298f8ff79c3a3). Experimental and inference data are available at https://zenodo.org/record/4185955.

Results

We used a high-resolution electron microscopy data set (Helmstaedter et al., 2013) to create biophysically realistic multicompartment models of three neuron types from the mouse retina: cones, an OFF-, and an ON-bipolar cell (BC) type. These neurons form the very beginning of the visual pathway, with cones converting light into electrochemical signals and providing input via sign-preserving and sign-reversing synapses to OFF- and ON-BCs, respectively. The parameters of these models include the basic electrical properties of the cells as well as the densities of different ion channel types in different compartments. Given a set of parameters, simulations from the model can easily be generated; however, it is not possible to evaluate the likelihood of a given set of parameters, which would be required for standard Bayesian inference procedures, for example via MCMC.

To overcome the challenge of choosing the resulting 20 to 32 parameters of these models, we adapted a recently developed technique called Sequential Neural Posterior Estimation (SNPE) (Lueckmann et al., 2017) (for details, see Materials and methods). Starting from prior assumptions about the parameters, the algorithm compared the result of simulations from the model to data obtained by two-photon imaging of the glutamate release from the neurons (Franke et al., 2017) and measured a discrepancy value between the simulation and the data. Based on this information, the algorithm used a neural network to iteratively find a distribution over parameters consistent with the measured data. This yielded optimized biophysically realistic models for the considered neuron types.

Inference of cone parameters

We first estimated the posterior distribution over the parameters of a cone based on the glutamate release of a cone stimulated with a full-field chirp light stimulus, consisting of changes in brightness, temporal frequency, and temporal contrast (Figure 4A and Figure 5). The cone model had a simplified morphology and consisted of four compartments (Figure 1, see Materials and methods). We included a number of ion channels in the model reported to exist in the cones of mice or closely related species (see Table 1). Prior distributions were chosen based on the literature. For inference, we drew 2000 samples of different parameter settings per round and stopped the algorithm after the fourth round. Then, 200 more parameter samples were drawn from the respective posteriors for further analysis. The chosen discrepancy functions penalized discrepancies between the target and simulated iGluSnFR trace δiGluSnFR, implausible membrane potentials, and implausible release rates. To compare different model fits, the discrepancy measures were added to yield a total discrepancy δtot. We found that the total discrepancy δtot of the cone model was relatively high for most samples drawn from the prior but decreased over four rounds of sampling (Figure 4A). The discrepancy measuring the fit quality to the glutamate recording δiGluSnFR was already relatively small in the first round for most, but not all, samples. In the following rounds, the number of samples with large δiGluSnFR was strongly reduced (Figure 4A).
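The combination of the discrepancy terms into δtot can be sketched as a sum of a trace term and penalties that vanish for plausible simulations. The trace metric, voltage bounds, and rate cap below are illustrative stand-ins, not the exact definitions from Materials and methods:

```python
import numpy as np

def total_discrepancy(sim_trace, target_trace, v_m, release_rate,
                      v_bounds=(-90.0, -30.0), rate_max=500.0):
    """Sketch of the combined discrepancy: a trace term (delta_iGluSnFR)
    plus penalty terms that are zero for plausible membrane potentials and
    release rates. Bounds and scaling are illustrative assumptions."""
    # Trace discrepancy: mean squared error between normalized traces.
    d_iglu = float(np.mean((np.asarray(sim_trace) - np.asarray(target_trace)) ** 2))
    # Penalize membrane potentials outside a plausible range (in mV).
    lo, hi = v_bounds
    d_v = float(np.mean(np.clip(lo - v_m, 0, None) + np.clip(v_m - hi, 0, None)))
    # Penalize implausibly high release rates (vesicles/s).
    d_rate = float(np.mean(np.clip(release_rate - rate_max, 0, None)))
    return d_iglu + d_v + d_rate
```

A simulation that matches the target trace and stays within the plausible voltage and rate ranges thus attains a total discrepancy of zero, as observed for the best posterior samples below.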

Discrepancies of samples from the cone and the BC models during and after parameter estimation.

(A, B, C) Sampled discrepancies for the cone (A), the OFF- (B), and ON-BC (C), respectively. For every model, the total discrepancy δtot (left) and the discrepancy between the simulated and target iGluSnFR trace δiGluSnFR (right) are shown. For every model, four optimization rounds with 2000 samples each were performed (indicated by gray vertical lines). After the last round (indicated by dashed vertical lines, ‘p.’), 200 more samples were drawn from the posteriors. For the BCs, the number of compartments was increased in this last step to 139 and 152 for the OFF- and ON-BC, respectively. Additionally, 200 samples were drawn assuming independent posterior marginals for comparison (indicated by blue vertical lines, ‘m.’). For every round, the discrepancy distribution (horizontal histograms), the median discrepancies (red vertical lines), the 25th to 75th percentile (blue shaded area) and the 5th to 95th percentile (gray-shaded area) are shown. (D) Discrepancies between different iGluSnFR traces of BCs to demonstrate the high precision of the model fit. The pairwise discrepancy computed with Equation 12 between eight iGluSnFR traces is depicted in a heat map. The column and row labels indicate which 𝝂t and 𝝂m were used in Equation 12, respectively. The traces comprise the outputs of the optimized BC models (‘Model’), the targets used during optimization (‘Target’), experimental data from the same cell type without the application of any drug (‘No drug’) and experimental data from another retinal CBC type with the application of strychnine (‘BC4’ and ‘BC7’). Note that strychnine was also applied to record the targets.

Figure 5 with 2 supplements see all
Optimized cone model.

(A) Normalized light stimulus. (B) Somatic membrane potential relative to the resting potential for the best parameters (blue line) and for 200 samples from the posterior shown as the median (gray dashed line) and 10th to 90th percentile (gray shaded area). A histogram over all resting potentials is shown on the right. (C) Release rate relative to the resting rate. Otherwise as in (B). (D) Simulated iGluSnFR trace (as in (B)) compared to target trace (orange). Three regions (indicated by gray dashed boxes) are shown in more detail below without samples from the posterior. Estimates of positive and negative peaks are highlighted (up- and downward facing triangles respectively) in the target (brown) and in the simulated trace (blue and cyan respectively). Pairwise time differences between target and simulated peaks (indicated by triangle pairs connected by a black line) are shown as histograms for positive (blue) and negative (cyan) peaks on the right. The median over all peak time differences is shown as a black vertical line.

The parameter setting with the lowest discrepancy (δtot=0.10) accurately modeled the response of the cone to full-field stimulation with the chirp light stimulus (Figure 5). The simulated iGluSnFR signal closely matched the data both on a coarse timescale and in the millisecond regime (Figure 5D). Indeed, for this sample, all discrepancies besides δiGluSnFR were zero or almost zero (δtot-δiGluSnFR<0.0001) and most of the remaining discrepancy could be attributed to the noisy target data.

As our inference algorithm returned not only a single best set of parameters, but also a posterior distribution, we could obtain additional parameter samples from the model which should produce simulations consistent with the data. Almost all samples from the posterior yielded simulations that matched the target data well (median δiGluSnFR: 0.12) and the overall total discrepancy was small (median δtot: 0.21). Besides the discrepancy between the experimental and simulated glutamate trace δiGluSnFR, most of the remaining discrepancy in the posterior samples was caused by rate variation (mean δRateΔ: 0.18) and resting rates (mean δRateRest: 0.14) that were too low in some of the models. While in principle we could propagate the remaining uncertainty about the model parameters provided by the posterior to the inference about BC models, we used only the parameter set with the smallest total discrepancy δtot for efficiency and refer to this as the optimized cone model. To analyze the role of active ion channels, we removed ion channels individually (except for the CaL channel, which is necessary to simulate the vesicle release) from the optimized cone and simulated the light response (Figure 5—figure supplement 2). We found that the HCN channel contributed most, while the contribution of ClCa was negligible. Since ClCa did not alter the light response for either the cone or the BC light stimulus, we removed it in the following steps for computational efficiency.
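The knockout analysis can be sketched generically. Here `simulate` and the parameter naming scheme (channel densities keyed as 'XY@Z', following the convention used in Figure 7) are placeholders for the actual model interface:

```python
def channel_knockout_effects(simulate, best_params, channels):
    """Sketch of the knockout analysis: zero one channel type's density
    parameters at a time and compare the resulting light response to the
    intact model. `simulate` is a placeholder returning a response trace."""
    reference = simulate(best_params)
    effects = {}
    for ch in channels:
        params = dict(best_params)
        # Zero every density parameter belonging to this channel type,
        # e.g. 'HCN@S', 'HCN@A' for the HCN channel at soma and axon.
        for key in params:
            if key.startswith(ch + "@"):
                params[key] = 0.0
        response = simulate(params)
        # Larger deviation from the intact response = larger contribution.
        effects[ch] = sum((a - b) ** 2 for a, b in zip(response, reference))
    return effects
```

Ranking the channels by this deviation is one way to quantify statements such as "the HCN channel contributed most, while the contribution of ClCa was negligible".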

Inference of bipolar cell parameters

We next turned to anatomically detailed multicompartment models of two BC types. We chose to model type 3a and type 5o because these were the OFF- and ON-CBC types for which we could gather most prior information from the literature. The anatomy of the cells was extracted from a 3D reconstruction of examples of these cell types based on electron microscopy data (Helmstaedter et al., 2013) and divided into regions sharing ion channel parameters (Figure 1). As for the cone model, the channels included in the model and the prior distributions were chosen based on the literature (see Table 1). This yielded 32 and 27 free parameters for the OFF- and ON-BC, respectively.

We fitted the BC type models to published two-photon glutamate imaging data reflecting the glutamate release from the BC axon terminals (Franke et al., 2017). In this case, we used responses to a local chirp light stimulus activating largely the excitatory center of the cells. In addition, the responses were measured in the presence of the drug strychnine to block locally acting inhibitory feedback through small-field amacrine cells (Franke et al., 2017) (see Materials and methods for details). Similar to what we observed in cones, the total discrepancy δtot for parameter sets sampled for the OFF- and ON-BC model decreased over the four rounds of optimization (Figure 4B and C). In contrast to the cone model, the discrepancy measure penalizing deviations from the glutamate trace δiGluSnFR was relatively large for prior samples and declined approximately as fast as the total discrepancy δtot.

We found that simulations generated with the parameter set with minimal total discrepancy or parameters sampled from the posterior matched the target traces very well for both OFF- and ON-BC models (Figure 6). For these parameters, the cells were relatively isopotential units throughout the light stimulus (Figure 6—figure supplement 1 and Figure 6—figure supplement 2), with larger voltage gradients from the dendrites to the axon in the ON-BC. The optimized BC models, that is the BC samples with the lowest total discrepancies δtot, had discrepancies of zero except for the iGluSnFR discrepancy δiGluSnFR. To get a more quantitative impression of the quality of the model fits, we compared the pairwise iGluSnFR discrepancies δiGluSnFR between the optimized BC models, the experimentally measured response traces as used during optimization, traces recorded from the same cell type without application of strychnine and responses of another OFF- and ON-BC. For both optimized cell model outputs, the discrepancy was smallest for the targets used during optimization. This shows that the optimized models were able to reproduce cell-type specific details in light response properties that go beyond the simple distinction of ON and OFF responses. While the discrepancies between traces of different ON-BC types were overall relatively small for local stimulation (Franke et al., 2017), the discrepancies between traces from OFF cells were larger, likely due to network modulation of the target cell type by amacrine cells (indicated by the difference between the target and the no-drug condition) and larger response differences between the two compared OFF-BC types. The posterior samples of both BC models had a low discrepancy, except for a few samples (median δtot: 0.29 and 0.26 for the OFF- and ON-BC, respectively).
The only discrepancy measure with a non-zero median of the absolute values was δiGluSnFR, which also accounted for 88% and 82% of the mean total discrepancy for the OFF- and ON-BC, respectively.

Figure 6 with 6 supplements see all
Optimized BC models.

(A) Normalized light stimulus. Responses of the OFF- and ON-BC are shown in (B–D) and (E–G), respectively. (B, E) Somatic membrane potential relative to the resting potential for the best parameters (blue line) and for 200 samples from the posterior shown as the median (gray dashed line) and 10th to 90th percentile (gray-shaded area). A histogram over all resting potentials is shown on the right. (C, F) Mean release rate over all synapses relative to the mean resting rate. Otherwise as in (B). (D, G) Simulated iGluSnFR trace (as in (B)) compared to respective target trace (orange). Three regions (indicated by gray dashed boxes) are shown in more detail below without samples from the posterior. Estimates of positive and negative peaks are highlighted (up- and downward facing triangles, respectively) in the target (brown) and in the simulated trace (blue and cyan, respectively). Pairwise time differences between target and simulated peaks (indicated by triangle pairs connected by a black line) are shown as histograms for positive (blue) and negative (cyan) peaks on the right. The median over all peak time differences is shown as a black vertical line.

Despite the overall high resemblance between optimized model outputs and targets, there were some visible systematic differences. For the ON-BC, the target showed a skewed sinusoidal response with faster rise than fall times during the frequency and amplitude increasing sinusoidal light stimulation between 10 s and 18 s and between 20 s and 27 s, respectively. In contrast, the optimized model output showed approximately equal rise and fall times, resulting in a systematic delay of positive and negative peaks (median delay of all peaks: 15.6 ms) in the simulated iGluSnFR trace relative to the target (Figure 6G). Additionally, some of the positive peaks of the optimized ON-BC model during sinusoidal light stimulation were too small (e.g. at 11.5 s). This effect might have been a side effect of the peak timing difference between target and model: amplitude increases were inefficient in reducing the discrepancy as long as the peaks were not precisely aligned. In contrast, the peak timing precision of the OFF-BC model (Figure 6D) was much higher (median delay of all peaks: 0.0 ms). In this case, the main difficulty for the model appeared to be its inability to reproduce the non-linearity in the cell response to the increasing amplitude sinusoidal light stimulation between 20 s and 27 s.
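The peak-timing comparison can be sketched as follows: detect peaks in the target and simulated traces, pair each target peak with the nearest simulated peak, and report the median time difference. The prominence threshold and nearest-neighbor pairing are illustrative choices:

```python
import numpy as np
from scipy.signal import find_peaks

def median_peak_delay(target, simulated, dt, prominence=0.1):
    """Sketch of the peak-timing analysis: find positive peaks in both
    traces, pair each target peak with the nearest simulated peak, and
    return the median delay in seconds (positive = model lags target)."""
    pt, _ = find_peaks(target, prominence=prominence)
    ps, _ = find_peaks(simulated, prominence=prominence)
    if len(pt) == 0 or len(ps) == 0:
        return np.nan
    delays = [(ps[np.argmin(np.abs(ps - i))] - i) * dt for i in pt]
    return float(np.median(delays))
```

Applied to a sinusoid and a copy shifted by 15 ms, this recovers the shift, analogous to the 15.6 ms median delay reported for the ON-BC.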

After having verified that the posterior over parameters provided a good fit to the experimental data, we inspected the one-dimensional marginal distributions to learn more about the resulting cellular models (Figure 7). For most parameters, the marginal posteriors had smaller variances than the priors, indicating that the parameter bounds were not chosen too narrowly. For some parameters, the posterior mean differed substantially from the prior mean (e.g. the CaT channel density at the axon terminal of the OFF-BC) while it was largely unchanged for others (e.g. the CaL channel density at the soma for the OFF-BC). The algorithm also inferred the dependencies of some parameters, visible in the two-dimensional marginals (Figure 7—figure supplement 1 and Figure 7—figure supplement 2). Because of these correlations, the full posterior in the high-dimensional parameter space led to simulations which were on average better (median: 0.29 vs. 0.31 and 0.26 vs. 0.33 for the OFF- and ON-BC, respectively) and less variable in their quality (95%-CIs: 0.53 vs. 1.01 and 0.64 vs. 1.42 for the OFF- and ON-BC, respectively) than parameters drawn from a posterior obtained by assuming independent marginal distributions. In most cases, the parameters resulting in the lowest total discrepancy were close to the means of the respective posteriors. For some parameters there was a strong difference between the marginal posteriors of the OFF- and ON-BC. For example, the two parameters controlling the leak conductance, Vr and Rm, were much lower for the OFF-BC, consistent with the strong variation of membrane resistances reported in Oltedal et al., 2009. The membrane conductance was lower for the ON-BC, which could increase signal transduction speed in the longer axon. Even though the posteriors were narrower than the priors, they still covered a wide range of different parameters.
To some extent, this may reflect the fact that we fit the model parameters solely on the cell's output, and, for example, dendritic parameters may be underconstrained by such data; in addition, it may also reflect variability between cells of the same type seen in the experimental data, which has also been reported in other studies (Franke et al., 2017).
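The comparison between the full posterior and independent marginals can be illustrated with a simple permutation construction: permuting each parameter column separately preserves every 1D marginal but destroys the cross-parameter dependencies. This is a toy illustration, not the paper's sampling procedure:

```python
import numpy as np

def sample_independent_marginals(posterior_samples, n, rng):
    """Sketch of the 'independent marginals' comparison: permute each
    parameter column of the posterior samples separately, keeping the 1D
    marginals but discarding correlations, then resample n rows."""
    cols = [rng.permutation(posterior_samples[:, j])
            for j in range((posterior_samples).shape[1])]
    shuffled = np.column_stack(cols)
    return shuffled[rng.integers(0, len(shuffled), size=n)]
```

In a toy model whose output quality depends on two anti-correlated parameters, samples from the full (correlated) distribution stay consistent with the data while independent-marginal samples do not, mirroring the higher median discrepancy and wider confidence intervals reported above.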

Figure 7 with 2 supplements see all
Parameter distributions of the BC models.

1D-marginal prior (dashed gray line) and posterior distributions (solid lines) are shown for the OFF- (blue) and ON-BC (green). The parameters of the posterior samples with the lowest total discrepancy are shown as dashed vertical lines in the respective color. XY@Z refers to the channel density of channel XY at location Z. Locations are abbreviated; S: soma, A: axon, D: dendrite and AT: axon terminals (see Figure 1 and main text for details). Note that although these 1D-marginal distributions seem relatively wide in some cases, the full high-dimensional posterior has much more structure than a posterior distribution obtained by assuming independent marginals (see Figure 4). Not all parameter distributions are shown.

After the fourth optimization round, 200 samples were drawn from the posterior distribution with an increased number of compartments to find model parameters to simulate electrical stimulation (see Materials and methods). For comparison, we also ran simulations with the same parameters but the original number of compartments (data not shown). Interestingly, for both BCs, more than 85% of the samples had a lower discrepancy when the models were simulated with more compartments. For the best 20% (i.e. 40 samples) of the posterior samples (sorted with respect to the discrepancies obtained with fewer compartments), the samples with more compartments had lower discrepancies with only one exception per cell. This indicates that, given enough computational power, the same parameter inference approach with more compartments could further improve the model outputs. From these samples, we used the five samples with the smallest total discrepancies δtot for the simulation of electrical stimulation.

Additionally, we used these five samples to analyze the effect of active ion conductances on the light response by removing individual ion channel types from the BCs (Figure 6—figure supplement 3 and Figure 6—figure supplement 4). Similar to the optimized cone model, the HCN channels played the most important role in shaping the light response. For both cells, the NaV and somatic calcium channels barely had any influence on the membrane voltage or the vesicle release rate.

Simulating electrical stimulation of the retina

To provide an exemplary use-case for our data-driven biophysical models of retinal neurons, we asked whether we could use our simulation framework to optimize the stimulation strategy for retinal neuroprosthetic implants. These implants have been developed for patients who lose sight due to degeneration of their photoreceptors (Zrenner, 2002). While existing implants have been reported to provide basic vision (Zrenner, 2002; Edwards et al., 2018; Luo and da Cruz, 2016), they are far from perfect. For example, most current stimulation strategies likely activate OFF- and ON-pathways at the same time (Barriga-Rivera et al., 2017). To address this, we created a simulation framework for subretinal electrical stimulation of retinal tissue with micro-electrode arrays. We estimated the conductivity and relative permittivity of the retina based on experimentally measured currents evoked by sinusoidal voltages and then validated simulations of the electrical stimulation of our fitted BC models with standard pulse-like stimuli against responses measured in retinal ganglion cells (RGCs) (Corna et al., 2018). Finally, we used the simulation framework to find stimuli that can separately stimulate OFF- and ON-BCs.

Our framework for simulating the effect of external electrical stimulation using the inferred BC models consisted of two steps: we first estimated the electric field resulting from a stimulation protocol as a function of space and time across the whole retina (Figure 3). The corresponding extracellular voltages were then applied to the respective compartments to simulate the neural response. To do so, we needed a model of the electrical properties of the electrode and the retinal tissue. We assumed disk electrodes and a simplified model with homogeneous electrical properties of the retina and the surrounding medium (see Materials and methods). This model contained two free parameters that needed to be estimated from data: the conductivity and relative permittivity of the retinal tissue.
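The two-step procedure can be sketched with a point-source approximation of the electrode field. This is a deliberate simplification: the paper models disk electrodes, and the function names and the compartment dictionary below are illustrative:

```python
import numpy as np

def point_source_potential(i_stim, sigma, elec_xyz, comp_xyz):
    """Step one, sketched: extracellular potential of a point current source
    in a homogeneous medium, V = I / (4*pi*sigma*r). A far-field
    approximation of the disk electrodes used in the paper."""
    r = np.linalg.norm(np.asarray(comp_xyz, float) - np.asarray(elec_xyz, float))
    return i_stim / (4.0 * np.pi * sigma * r)

def apply_extracellular(compartments, i_stim, sigma, elec_xyz):
    """Step two, sketched: evaluate the potential at every compartment
    position; these values would then drive each compartment's
    extracellular mechanism in the neuron simulation."""
    return {name: point_source_potential(i_stim, sigma, elec_xyz, xyz)
            for name, xyz in compartments.items()}
```

With the retinal conductivity estimated below (σ ≈ 0.076 S/m), a 0.5 µA point source produces an extracellular potential on the order of 10 mV at 50 µm distance, and the potential falls off as 1/r.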

To estimate these parameters we recorded electrical currents resulting from sinusoidal voltage stimulation with different frequencies in a dish with and without a retina (Figure 8B,C). We used the data without a retina to estimate the properties of the stimulation electrode (Figure 8A,D,E and Materials and methods). Based on the estimates of the electrode properties and the data recorded with a retina, we estimated the conductivity and relative permittivity of the retina with the same parameter inference method as for the neuron models.

Estimation of the conductivity σretina and the relative permittivity εretina of the retina for simulating electrical stimulation.

(A) Electrical circuits used during parameter estimation. (B) Stimulation voltages vstim at 25 (left) and 40 Hz (right). From the six experimentally applied amplitudes, only the amplitudes used for parameter inference are shown. (C) Experimentally measured currents iames through the electrolyte (Ames’ solution) without retinal tissue for the stimulus voltages vstim in (B). The mean traces over all (but the first two) repetitions are shown (black dashed lines). Sine waves were fitted to the mean traces (solid lines) with colors referring to the voltages in (B). (D) Simulated voltages over the electrolyte vames using the fitted currents in (C) as stimuli applied to the circuit in (Ai). (Aii) Electrical circuit used to model the electrode plus interface. (E) Stimulus frequency- and amplitude-dependent estimates of Re (i) and Ce (ii) based on the electrical circuit shown in (Aii) for 25 (black) and 40 Hz (gray). Note that the values were derived analytically (see main text). The values corresponding to the stimulus voltages shown in (B) are highlighted in the respective colors. (Aiii) Electrical circuit used to estimate σretina and εretina. The respective values for Re and Ce are shown in (E) and depend on vstim. The current iretina through the model is measured for a given stimulus voltage vstim. (F) Sampled parameters εretina and σretina and the respective sample losses. First, samples were drawn in a wide logarithmic space (gray dots) and then in a narrower linear parameter space. The best sample (lowest discrepancy) is highlighted in red. (G) Simulated currents iretina (solid lines) through the circuit in (Aiii) with optimized parameters (red dot in (F)) and the respective experimentally measured currents (dashed lines). Here, results for all six stimulus amplitudes are shown for both frequencies.
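The analytical derivation of the electrode estimates from the fitted sine waves can be sketched for the simplest case. The block below assumes a plain series-RC electrode model, which may simplify the actual circuit in (Aii): from the amplitude and phase of the fitted current, the impedance magnitude gives |Z| = v0/i0, and Re and Ce follow from its real and imaginary parts:

```python
import numpy as np

def series_rc_from_sine(v0, i0, phase, f):
    """Sketch: recover Re and Ce of a series-RC electrode model from the
    amplitude and phase of a fitted sinusoidal current. For a series RC,
    Z = Re - j/(w*Ce), and the current leads the voltage by `phase`
    (radians). The series-RC simplification is an assumption here."""
    w = 2.0 * np.pi * f
    z = v0 / i0                          # impedance magnitude |Z|
    re = z * np.cos(phase)               # resistive part Re
    ce = 1.0 / (w * z * np.sin(phase))   # capacitive part Ce
    return re, ce
```

Because |Z| and the phase vary with stimulus frequency and amplitude, the recovered Re and Ce are stimulus-dependent, as shown in panel (E).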

We found that both parameters were very well constrained by the measured data (Figure 8F). The parameters resulting in the lowest discrepancy were σretina = 0.076 S/m and εretina = 1.1 × 10⁷, in accordance with the conductivity of 0.077 S/m reported for rabbit by Eickenscheidt et al., 2012 and the relative permittivity of gray matter estimated in Gabriel et al., 1996. With these parameters, we simulated currents for all stimulus amplitudes we recorded experimentally. The simulated and experimental currents matched not only for the amplitudes used during parameter inference but also for amplitudes reserved for model validation (Figure 8G). Therefore, we used these parameters in all following simulations.

To validate our simulation framework, we compared simulated BC responses to experimentally measured RGC thresholds (Corna et al., 2018). We simulated BCs at different positions for four different electrode configurations (Figure 9A) and nine stimulation current waveforms (Figure 9B). For small stimulus charge densities, the BCs barely responded, whereas for very high charge densities the cells released all glutamate vesicles available in the readily releasable pool (Figure 9C and D). In between, the response increased from no response to saturation, depending on the distance of the simulated cell from the active electrodes. Across stimulation conditions, these threshold regions coincided with the measured RGC thresholds to the same stimulation, when assuming that the stimulated RGCs were not too far away from the stimulation electrode. Since the reported RGC thresholds likely result from indirect stimulation via BCs, the consistency between the RGC and simulated BC thresholds can be interpreted as evidence that our model was well calibrated to simulate electrical stimulation.

Threshold of electrical stimulation for experimentally measured RGCs and simulated BCs of photoreceptor degenerated mouse retina.

(A) xy-positions of BCs (crosses) and electrodes (red dots) for 1×1, 2×2, 4×4, 10×10 stimulation electrodes, respectively. Every electrode is modeled as a disc with a 30 µm diameter. Except for the electrode configuration, the models were as in Figure 8. (B) Stimulation currents measured experimentally and used as stimuli in the simulations. (C, D) Mean synaptic glutamate release relative to the size of the readily releasable pool vRRPmax of simulated OFF- and ON-BCs, respectively. Values can be greater than 1, because the pool is replenished during simulation. Error bars indicate the standard deviation between simulations of BCs with different parametrizations; for both BCs, the five best posterior samples were simulated. Glutamate release is shown for different charge densities (x-axis) and cell positions (colors correspond to xy-positions in (A); for example, the darkest blue corresponds to the central BC). Experimentally measured RGC thresholds (gray dashed lines) plus-minus one standard deviation (gray-shaded areas) are shown in the same plots.

Optimized electrical stimulation for selective OFF- and ON-BC stimulation

We finally used our framework for electrical stimulation to find stimuli that excite OFF- or ON-BCs selectively. To this end, we performed Bayesian inference over an electrical charge-neutral stimulus (Figure 10A) with the SNPE algorithm, using the response ratio between the two BC types (Figure 10B) as a discrepancy function to minimize. Using this procedure, we found that triphasic, anodic-first stimuli with a cathodic middle phase of high amplitude (Figure 10C) evoked substantial neurotransmitter release in the OFF-BC (Figure 10Di) while evoking almost no response in the ON-BC (Figure 10Dii). The stimuli optimized to target the ON-BC were biphasic, with no clear preference for anodic- or cathodic-first currents (Figure 10E). In contrast to the stimuli optimized for the OFF-BC, these stimuli did not exclusively stimulate the targeted ON-BC (Figure 10Fii) but also the OFF-BC (Figure 10Fi). We did not find stimuli evoking stronger release (defined as in Equation 24) in the ON-BC than in the OFF-BC. This lower threshold of the OFF-BC, which we also observed for the biphasic current pulses (Figure 9), was partially caused by the shorter axon of the OFF-BC resulting in slightly larger changes of the extracellular voltage at the axon terminals during stimulation (Figure 10—figure supplement 1A,B). While the morphologies and ion channel distributions contributed to differences in the membrane voltage at the axon terminals (Figure 10—figure supplement 1C), the decisive factor for the differences between the OFF and ON responses presumably lies in the presence or absence of CaT channels at the axon terminals. Removing these channels from the OFF-BC had almost no effect on the membrane voltage, but resulted in qualitatively similar responses for both cell types (Figure 10—figure supplement 1D,E). Removing the CaL channels at the axon terminals, on the other hand, left the OFF response for the triphasic and biphasic cathodic-first stimuli largely unchanged (Figure 10—figure supplement 1F).

Figure 10 with 1 supplement see all
Electrical currents optimized for selective ON- and OFF-BC stimulation.

(A) Illustration of the stimulus parametrization. A stimulation current was computed by fitting a cubic spline through points defined by p1–p4 and p* (black dots) that were placed equidistantly in time. p1–p4 were free parameters and p* was chosen such that the stimulus was charge neutral, that is, the integral over the current (gray-shaded area) was zero. Currents were normalized such that the maximum amplitude was always 0.5 µA. (B) Illustration of the electrical stimulation of an OFF- (i) and ON-BC (ii) multicompartment model. (C, E) Stimuli optimized for selective OFF- and ON-BC stimulation, respectively. The two best stimuli observed during optimization are shown. (D, F) Cumulative vesicle release (as a mean over all synapses) relative to the size of the readily releasable pool vRRPmax in response to the electrical stimuli shown in (C) and (E), respectively. Stimuli and responses are shown in matching colors. Release is shown for the five best posterior cell parametrizations for the OFF- (i) and ON-BC (ii) as a mean (line) plus-minus one standard deviation (shaded area).
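The stimulus parametrization in panel (A) can be sketched as follows. Since cubic-spline interpolation is linear in the data values, the integral of the current is an affine function of p*, so the charge-neutral value can be solved for directly. Placing p* as the last point and the spline boundary conditions are assumptions made for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def charge_neutral_stimulus(p, duration=1.0, i_max=0.5):
    """Sketch of the stimulus parametrization: fit a cubic spline through
    the free points p1..p4 plus a dependent point p*, chosen so that the
    integral of the current is zero (charge neutrality), then scale the
    maximum absolute amplitude to i_max (in uA)."""
    t = np.linspace(0.0, duration, len(p) + 1)

    def integral(p_star):
        cs = CubicSpline(t, list(p) + [p_star])
        return cs.integrate(0.0, duration)

    # The integral is affine in p_star: solve a + b * p_star = 0.
    a = integral(0.0)
    b = integral(1.0) - a
    p_star = -a / b

    cs = CubicSpline(t, list(p) + [p_star])
    tt = np.linspace(0.0, duration, 1001)
    wave = cs(tt)
    return tt, wave * (i_max / np.max(np.abs(wave)))
```

The four free values p1–p4 are then the quantities the SNPE-based stimulus optimization varies, while charge neutrality and the amplitude cap are enforced by construction.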

Discussion

In this study, we showed how the recently developed Bayesian likelihood-free parameter inference method called Sequential Neural Posterior Estimation (SNPE) (Lueckmann et al., 2017) can be used to estimate the parameters of multicompartment models of retinal neurons based on light-response measurements. In addition, we built a model for electrical stimulation of the retina, and optimized electrical stimulation protocols for retinal neuroprosthetic devices designed to support vision in the blind.

Performing Bayesian inference for mechanistic models is difficult, as they typically do not come with an analytical solution of the likelihood function. The SNPE algorithm — like many approximate Bayesian computation (ABC) methods (Sisson et al., 2018) — does not require such an analytical formulation, as it builds on simulations of the model. In contrast to standard ABC methods, the SNPE algorithm makes efficient use of simulation time by using all simulated data to train a mixture density network to update the parameter distributions (Lueckmann et al., 2017; Gonçalves et al., 2020). Moreover, SNPE makes minimal assumptions about the simulator, giving full flexibility to use it with different simulation environments and software. Like all Bayesian methods, SNPE allows the incorporation of literature knowledge in the form of priors, which can be used to constrain parameters and to put more weight on more plausible parameters. Finally, it not only yields a point estimate of the best parameters — like exhaustive grid-search techniques (Goldman et al., 2001; Prinz et al., 2003; Stringer et al., 2016) or evolutionary algorithms (Gerken et al., 2006; Keren et al., 2005; Achard and De Schutter, 2006; Rossant et al., 2011) — but also returns a posterior distribution that reflects the remaining parameter uncertainty and allows one to detect dependencies between parameters.

Recently, there has been a surge of interest in Bayesian simulator-based inference with many recently published algorithms (Gutmann and Corander, 2016; Papamakarios and Murray, 2016; Lueckmann et al., 2017; Lintusaari et al., 2017; Papamakarios et al., 2018; Wood, 2010; Durkan et al., 2018; Sisson et al., 2018; Gonçalves et al., 2020; Bittner et al., 2019). While we initially evaluated different algorithms, we did not perform a systematic comparison or benchmarking effort, which is beyond the scope of this project. Much of the literature on simulator-based inference evaluates the proposed algorithms on fairly simple models. In contrast, we used SNPE here to perform parameter inference of comparatively complex multicompartment models of neurons. Importantly, we did not need targeted experiments to constrain these models, but based our framework on two-photon imaging data of glutamate release in response to simple light stimuli using a genetically encoded indicator called iGluSnFR (Marvin et al., 2013; Franke et al., 2017). This method allows direct experimental access to the neurotransmitter release of all excitatory neurons of the retina (Euler et al., 2019). Using this data, we inferred the distributions of model parameters relevant for all the intermediate steps from light stimulation of cones to the glutamate release from synaptic ribbons. While we optimized many parameters in the models using SNPE, we chose to keep some parameters at sensible default values to avoid issues with computational complexity. Of course, it is possible that optimization of the full parameter space would have yielded slightly better results or that some parameters would have been set to slightly different values, as a mechanism whose parameter was allowed to vary compensated for the one whose parameter was held fixed.
As an alternative to our approach, one can combine classical systems identification approaches with inference for only some of the biophysical mechanisms such as the ribbon synapse (Schröder et al., 2019). Our approach, however, allows the exploration of mechanisms within neurons which are not or only barely experimentally accessible. For example, in BCs, it is currently difficult to obtain direct voltage measurements from any part of the cell but the soma. If one is interested in how the electrical signal propagates through the axon or the axon terminal, our simulations may help to obtain mechanistic insights and develop causal hypotheses.

Because our inference framework works with experimental measurements that can be performed in a high-throughput manner, it scales comparatively easily to inferring the parameters of a larger number of multicompartment models, for example for different neuron types. In principle, it could even be possible to infer the parameters of one neuron by imaging another. For example, one could attempt to infer parameters of an amacrine cell by observing the neurotransmitter release of a connected BC, although such an indirect inference would likely result in larger uncertainties. Ideally, such a large-scale approach would also include realistic morphologies, for example from electron microscopy as shown here. In fact, BCs are anatomically relatively ‘simple’ neurons, and it will be interesting to test our inference methods on neurons with more complicated structure, such as some amacrine cells (Masland, 2012). While we did not aim at an exhaustive analysis of the effect of morphology on the neuron responses, one could easily explore how details of the morphology influence the distribution of optimal biophysical parameters.

Further, we extended our model to simulate and optimize external electrical stimulation of the retina. For blind patients suffering from photoreceptor degeneration, for example caused by diseases like Retinitis Pigmentosa, neuroprosthetic devices electrically stimulating the remaining retinal neurons can restore basic visual abilities (Edwards et al., 2018; Luo and da Cruz, 2016). The spatial and temporal resolution of such retinal implants is, however, still very limited (Weitz et al., 2015), with the highest reported restored visual acuity being 20/546 (Chuang et al., 2014). While many experimental studies have explored different strategies of stimulation (Jensen et al., 2005; Jensen and Rizzo, 2008; Tsai et al., 2009; Eickenscheidt et al., 2012), most of them are restricted to very specific stimulus types such as current or voltage pulses.
As a consequence, retinal implants cannot yet target specific cell types, for example to stimulate the ON and OFF pathways of the retina independently (Lee and Im, 2019; Barriga-Rivera et al., 2017; Twyford et al., 2014). To facilitate a systematic stimulus optimization in silico, we developed a simulation framework for electrical stimulation of the retina. While the idea of simulating BC responses to electrical stimuli is not new, previous studies restricted their models to point/ball source electrodes (Resatz and Rattay, 2004; Rattay et al., 2017), simplified BCs to passive cables (Gerhardt et al., 2010), or used simplified BC models expressing only L- or T-type channels (Werginz et al., 2015). Our framework combines the simulation of micro-electrode arrays used in neuroprosthetic devices (Edwards et al., 2018; Luo and da Cruz, 2016) with detailed models of an OFF- and an ON-BC. This allowed us to test a large number of stimulus waveforms, optimizing for stimuli that selectively target either OFF- or ON-BCs, which could help to better encode visual scenes into the electrical signals of retinal implants. We found stimuli selectively targeting the OFF-BC without stimulating the ON-BC, but not vice versa. Likely, the main reason for the differential response of the two BC types was that only the OFF-BC had T-type calcium channels at the axon terminals. These channels are more sensitive to the transient changes in membrane voltage evoked by the stimuli optimized to selectively target the OFF-BC. The ON-BC, having no T-type calcium channels and an overall higher threshold, did not respond to these stimuli. It could, however, be stimulated with longer anodic stimulus phases activating the L-type calcium channels. Since we modeled the cells in isolation, network effects through synaptic activation of amacrine cells might further modulate the activity of the BCs.
However, the neurites and somata of amacrine cells are substantially farther away from the stimulation electrode than those of the BCs, so these network effects are likely comparatively small. That notwithstanding, simulations including network effects and more diverse BC types will be required in the future. Ideally, in silico optimized stimulation strategies would first be verified in ex vivo experiments before being implemented in neuroprosthetic devices to improve the quality of visual impressions.
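The in silico stimulus optimization described above can be caricatured as a simple search over stimulus parameters. The two response functions below are purely hypothetical surrogates standing in for the multicompartment simulations; they are chosen only to mimic the qualitative behavior described in the text (a transient-sensitive OFF-BC and a higher-threshold, slower ON-BC), and the parameter ranges are invented.

```python
import numpy as np

def off_bc_response(duration, amplitude):
    # Toy model: T-type channels favor short, transient stimulation.
    return amplitude * np.exp(-duration / 2.0)

def on_bc_response(duration, amplitude):
    # Toy model: L-type channels need longer phases and a higher threshold.
    return amplitude * (1 - np.exp(-duration / 8.0)) * (amplitude > 0.5)

def selectivity_search(durations, amplitudes):
    """Grid search for the stimulus maximizing OFF response minus ON response."""
    best, best_score = None, -np.inf
    for d in durations:
        for a in amplitudes:
            score = off_bc_response(d, a) - on_bc_response(d, a)
            if score > best_score:
                best, best_score = (d, a), score
    return best, best_score

# Durations in ms, amplitudes in arbitrary units (both hypothetical grids).
stim, score = selectivity_search(np.linspace(0.5, 20, 40), np.linspace(0.1, 1.0, 10))
```

With these surrogates, the search settles on a short, strong stimulus, mirroring the finding that transient waveforms preferentially drive the OFF-BC; in the actual study, each evaluation would be a full multicompartment simulation drawn from the posterior.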

To be able to simulate the electrical stimulation of the retina, we first inferred the conductivity and relative permittivity of the rd10 retina from the currents evoked by sinusoidal stimulation voltages. While the estimated conductivity (σ_retina = 0.076 S/m) is consistent with the value (σ_retina = 0.077 S/m) reported in Eickenscheidt et al., 2012, both smaller (0.025 S/m, Karwoski and Xu, 1999) and larger (≈0.75 S/m, Wang and Weiland, 2015) conductivities have been reported for the retina. These differences may be due to differences in tissue handling and preparation. Comparing the estimated relative permittivity (ε_retina = 1.1 × 10^7) to the literature is more difficult, and most simulation studies choose to ignore its effects. The relative permittivity of retinal tissue has been reported only for very high frequencies (10 MHz to 10 GHz), and the strong frequency dependence makes a direct comparison to frequencies several orders of magnitude smaller (e.g. 40 Hz) uninformative. However, data from gray matter suggest a relative permittivity of 1.5 × 10^7 at 40 Hz, very close to our estimate (Gabriel et al., 1996). This raises the question of whether the common assumption of ignoring the relative permittivity in other simulations (Gerhardt et al., 2010; Werginz et al., 2015; Rattay et al., 2017) is valid.
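To illustrate this kind of tissue-parameter inference: a homogeneous slab driven by a sinusoidal voltage behaves like a parallel RC element whose complex admittance separates conductivity (real part) from permittivity (imaginary part). The sketch below recovers σ and ε_r from synthetic, noise-free admittance "measurements" of such a toy slab; the geometry factor and frequencies are hypothetical, and the actual study inferred these values from recorded currents with a detailed model of the setup.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def admittance(sigma, eps_r, omega, cell_constant):
    """Complex admittance of a homogeneous slab (parallel R-C approximation).

    cell_constant = area / thickness (m); sigma in S/m; eps_r dimensionless.
    """
    return cell_constant * (sigma + 1j * omega * eps_r * EPS0)

# Synthetic "measurement" at several stimulation frequencies, using values
# close to those inferred in the paper (sigma = 0.076 S/m, eps_r = 1.1e7).
cell_constant = 1e-3
freqs = np.array([10.0, 25.0, 40.0, 100.0])
omega = 2 * np.pi * freqs
y_meas = admittance(0.076, 1.1e7, omega, cell_constant)

# Real part yields the conductivity, imaginary part the relative permittivity;
# averaging across frequencies suffices here because the toy data are noise-free.
sigma_hat = np.mean(y_meas.real) / cell_constant
eps_hat = np.mean(y_meas.imag / (omega * EPS0)) / cell_constant
```

With noisy recordings, the averages would be replaced by a least-squares fit across frequencies, but the separation of σ and ε_r into real and imaginary parts carries over.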

In summary, mechanistic models in neuroscience, such as biophysically realistic multicompartment models, have long been regarded as requiring many manual parameter choices or highly specific experiments to constrain them. We showed here how relatively standard, high-throughput imaging data in combination with likelihood-free inference techniques can be used to perform Bayesian inference on such models, opening up unprecedented possibilities for their efficient optimization and analysis. Importantly, this allows us to understand which parameters of such models are well constrained and which are not, and to determine which parameter combinations lead to similar simulation outcomes (Gonçalves et al., 2020; Alonso and Marder, 2019), questions that have hindered progress in the field for years.

Appendix 1

Cone prior parameters

Appendix 1—table 1
Cone model parameter priors.
Parameter | Unit | a_i | µ_i | b_i | References: direct and additional
Cm | μF/cm² | 0.9 | 1 | 1.3 | Oltedal et al., 2009
Rm | kΩ·cm² | 1 | 5 | 100 | Oltedal et al., 2009
Vr | mV | −90 | −70 | −50 |
CaL | mS/cm² | 0.1 | 2 | 10 | Morgans et al., 2005; Mansergh et al., 2005; Cui et al., 2012
KV | mS/cm² | 0 | 0.1 | 3 | Van Hook et al., 2019
HCN@S | mS/cm² | 0 | 3 | 10 | Knop et al., 2008; Hellmer et al., 2016; Van Hook et al., 2019
HCN@A | mS/cm² | 0 | 1 | 10 | Knop et al., 2008; Hellmer et al., 2016; Van Hook et al., 2019
HCN@AT | mS/cm² | 0 | 0.1 | 10 | Knop et al., 2008; Hellmer et al., 2016; Van Hook et al., 2019
ClCa | mS/cm² | 0 | 0.1 | 10 | Yang et al., 2008; Caputo et al., 2015
CaP | μS/cm² | 0.1 | 10 | 100 | Morgans et al., 1998
τα(CaL) | | 0.75 | 1 | 1.5 |
τα(KV) | | 0.1 | 1 | 10 |
ΔVα(CaL) | mV | −5 | 0 | 5 |
ΔVα(KV) | mV | −10 | 0 | 10 |
CaPK | μmol | 0.01 | 5 | 100 |
vRRPmax | vesicles | 10 | 20 | 30 | Thoreson et al., 2016; Bartoletti et al., 2010
gl | | 0.3 | 1 | 3 |
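In the table above, a_i, µ_i and b_i denote the lower bound, mean and upper bound of each parameter's prior. Assuming these define a normal distribution truncated to [a_i, b_i] (an assumption for illustration; see the Materials and methods for the exact prior family and width), samples could be drawn as follows:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_truncated_prior(a, mu, b, n=1000):
    """Draw samples from a normal centered at mu, truncated to [a, b].

    The width is set so that [a, b] spans roughly four standard deviations
    (an assumption; the paper's exact prior parametrization may differ).
    """
    sd = (b - a) / 4.0
    out = np.empty(n)
    filled = 0
    while filled < n:
        cand = rng.normal(mu, sd, n)
        cand = cand[(cand >= a) & (cand <= b)]  # reject out-of-bounds draws
        take = min(len(cand), n - filled)
        out[filled:filled + take] = cand[:take]
        filled += take
    return out

# Example: the membrane capacitance prior Cm with (a, µ, b) = (0.9, 1, 1.3) μF/cm².
cm_samples = sample_truncated_prior(0.9, 1.0, 1.3)
```

Draws like these would then be passed to the multicompartment simulator, and the inference algorithm gradually concentrates them on the region consistent with the recordings.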

BC prior parameters

Appendix 1—table 2
BC model parameter priors.
Parameter | Unit | a_i (3a; 5) | µ_i (3a; 5) | b_i (3a; 5) | References: direct and additional
Cm | μF/cm² | 0.9 | 1.18 | 1.3 | Oltedal et al., 2009
Rm | kΩ·cm² | 1 | 26 | 100 | Oltedal et al., 2009
Vr | mV | −90 | −70 | −50 |
CaL@S | mS/cm² | 0 | 0.5 | 3 | Cui et al., 2012; Van Hook et al., 2019
CaL@AT | mS/cm² | 0.1 | 0.5 | 3 | Cui et al., 2012; Van Hook et al., 2019
CaT@S | mS/cm² | 0; n/a | 0.5; n/a | 3; n/a | Cui et al., 2012; Van Hook et al., 2019; Hu et al., 2009; Satoh et al., 1998
CaT@AT | mS/cm² | 0; n/a | 0.5; n/a | 3; n/a | Cui et al., 2012; Van Hook et al., 2019; Hu et al., 2009; Satoh et al., 1998
KV@D | mS/cm² | 0 | 0.4 | 2 | Ma et al., 2005
KV@PA | mS/cm² | 0 | 0.4 | 2 | Ma et al., 2005
KV@A | mS/cm² | 0 | 0.4 | 2 | Ma et al., 2005
Kir@S | mS/cm² | 0 | 1 | 2 | Cui and Pan, 2008; Knop et al., 2008
HCN@D | mS/cm² | 0 | 0.2 | 2 | Hellmer et al., 2016; Knop et al., 2008
HCN@S | mS/cm² | 0 | 0.2 | 2 | Hellmer et al., 2016; Knop et al., 2008
HCN@AT | mS/cm² | 0 | 0.2 | 2 | Hellmer et al., 2016; Knop et al., 2008
NaV@DA | mS/cm² | 0 | 20 | 100 | Hellmer et al., 2016
CaP@S | μS/cm² | 0.1 | 10 | 100 | Morgans et al., 1998
CaP@AT | μS/cm² | 0.1 | 10 | 100 | Morgans et al., 1998
τγ(Kainate) | | 1; n/a | 5; n/a | 20; n/a |
τα(CaL) | | 0.5 | 1 | 2 |
τα(CaT) | | 0.5; n/a | 1; n/a | 2; n/a |
τα(KV) | | 0.1 | 1 | 10 |
τall(NaV) | | 0.5 | 1 | 2 |
ΔVα(CaL) | mV | −10 | 0 | 10 |
ΔVα(CaT) | mV | −10; n/a | 0; n/a | 10; n/a |
ΔVα(KV) | mV | −10 | 0 | 10 |
ΔVα(Kir) | mV | −5 | 0 | 5 |
ΔVα(NaV) | mV | −5 | 0 | 5 |
ΔVγ(NaV) | mV | −5 | 0 | 5 |
CaPK | μmol | 0.1; 0.01 | 20 | 100 |
STC | mmol | 0.05; 1 | 0.5; 1.5 | 1; 3 |
vRRPmax | vesicles | 4 | 8 | 15 | Singer and Diamond, 2006
gl | | 0.5 | 1 | 3 |

NeuronC parameters

Appendix 1—table 3
Other NeuronC parameters.
Parameter | Unit | Value | Remarks
vna | mV | 65 | Reversal potential sodium
vk | mV | −89 | Reversal potential potassium
vcl | mV | −70 | Reversal potential chloride
dnao | mmol | 151.5 | Extracellular sodium concentration
dko | mmol | 2.5 | Extracellular potassium concentration
dclo | mmol | 133.5 | Extracellular chloride concentration
dcao | mmol | 2 | Extracellular calcium concentration
dica | frac | 1 | Fraction of calcium pump current that is added to total current
use_ghki | | 1 | Use Goldman-Hodgkin-Katz equation
cone_timec | | 0.2 | Time constant of cone phototransduction
cone_loopg | | 0.0 | Gain of calcium feedback loop in cones
cone_maxcond | nS | 0.2 | Max. conductance of outer segment
timinc | ms | 0.001; 0.01 | Simulation time step (Figure 9)
ploti | ms | 0.01; 1 | Recording time step (Figure 9 and Figure 10; otherwise)
stiminc | ms | 0.01; 0.1 | Synaptic time step (Figure 9 and Figure 10; otherwise)
srtimestep | ms | 0.001; 0.01; 0.1 | Stimulus update time step (Figure 9; Figure 10; otherwise)
predur | s | 5; ≥10 | Simulation time to reach equilibrium potential (Cone; BCs)

References

  17. Comsol AB (2019). Comsol Multiphysics, version 5.4. Stockholm, Sweden.
  37. Gutmann MU, Corander J (2016). Bayesian optimization for likelihood-free inference of simulator-based statistical models. The Journal of Machine Learning Research 17:4256–4302.
  56. Koch C (2004). Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press.
  61. Lueckmann J-M, Goncalves PJ, Bassetto G, Öcal K, Nonnenmacher M, Macke JH (2017). Flexible statistical inference for mechanistic models of neural dynamics. Advances in Neural Information Processing Systems, pp. 1289–1299.
  80. Papamakarios G, Murray I (2016). Fast ε-free inference of simulation models with Bayesian conditional density estimation. Advances in Neural Information Processing Systems, pp. 1028–1036.
  85. Resatz S, Rattay F (2004). A model for the electrically stimulated retina. Mathematical and Computer Modelling of Dynamical Systems 10:93–106. https://doi.org/10.1080/13873950412331318080
  89. Schröder C, James B, Lagnado L, Berens P (2019). Approximate Bayesian inference for a mechanistic model of vesicle release at a ribbon synapse. Advances in Neural Information Processing Systems.
  92. Sisson SA, Fan Y, Beaumont M (2018). Handbook of Approximate Bayesian Computation. Chapman and Hall/CRC.
  103. Wang B, Weiland JD (2015). Resistivity profiles of wild-type, rd1, and rd10 mouse retina. 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, pp. 1650–1653. https://doi.org/10.1109/EMBC.2015.7318692

Decision letter

  1. Alexander Borst
    Reviewing Editor; Max Planck Institute of Neurobiology, Germany
  2. John R Huguenard
    Senior Editor; Stanford University School of Medicine, United States
  3. Alexander Borst
    Reviewer; Max Planck Institute of Neurobiology, Germany
  4. Adrienne L Fairhall
    Reviewer; University of Washington, United States

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

In your paper, you apply Bayesian inference to estimate parameters of multi-compartmental models of cone and cone bipolar cells from 2P-glutamate imaging data. You next use the resulting compartmental models to optimize electrical stimulation of the retina for neuroprosthetics.

As a major advantage of the parameter estimation procedure, the result is not just a singular point in parameter space resulting in an optimal fit, but instead a likelihood distribution showing how well each parameter is constrained by the data. I see this as a significant step forward compared to previous model parameter estimation procedures.

Decision letter after peer review:

Thank you for submitting your article "Bayesian inference for biophysical neuron models enables stimulus optimization for retinal neuroprosthetics" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Alexander Borst as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by John Huguenard as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Adrienne L Fairhall (Reviewer #3).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, we are asking editors to accept without delay manuscripts, like yours, that they judge can stand as eLife papers without additional data, even if they feel that they would make the manuscript stronger. Thus the revisions requested below only address clarity and presentation.

Summary:

In this paper, Oesterle and colleagues apply Bayesian inference to estimate parameters of multi-compartmental models of cone and cone ON and OFF bipolar cells from light stimulation and 2P-glutamate imaging as well as EM reconstructions. The method employed (Sequential Neural Posterior Estimation) makes use of a mixture density network, which is an approach that sounds very promising for work in this domain. Importantly, the resulting model parameters are not only given as points to result in an optimal fit, but are given as a likelihood distribution showing how well a specific parameter is constrained by the data. I see this as a significant step forward compared to previous model parameter estimation procedures.

The authors then extended the work to model the effects of electrical stimulation in view of building a modeling test bed for electrical retinal prostheses. This required data constraints from electrical stimulation and current recording. Using the model, the authors were able to design stimuli that selectively targeted different bipolar cells. This work stands as a useful general contribution as a method and the authors have done a thorough and careful job in their modeling. The extension to predictions for retinal prostheses demonstrates the power of the approach.

Essential revisions:

1) This is a systematic study, primarily aimed to demonstrate the advantage of Bayesian simulation-based inference (BSI) for estimating the biophysical parameters of compartmental models. While BSI is convincingly demonstrated as a powerful tool for this purpose, it is not clear whether it is superior in any way over the present-day most-popular alternative, the Multiple Objective Optimization (MOO) approach (Drukmann et al., 2007) which is presently used for building data-constrained compartmental modelling for a large variety of neuron types, in hundreds of labs. worldwide, including at the Allen Inst. the Blue Brain projects etc. One key missing aspect in the present study is a systematic comparison (at least in the Discussion) between BSI and MOO methods.

2) Another key missing aspect in this work is the lack of biophysical intuitions emerging from the compartmental models built. Specifically, how does the synaptic input from the cones propagate along the ON and the OFF BP cells' model that we see in Figure 1? We actually do not see any signal (synaptic potential) in this work neither its propagation along the different compartments – from the distal dendrites to soma, to axon. Does voltage attenuate significantly along these BP – compartments or are the modeled cell close to isopotentiality? What is the role of active ion channels during signals propagation in these models? What is the synaptic conductances (between Cone and BP cells) in these models (and what is the justification to use such a complicated model for transmitter release with Ca-dependent pool-release, rather than transient (double exponential?) conductance change as synaptic inputs)? What are the key differences between the ON and OFF compt. models that make them respond differentially to extracellular stimulus? The authors write in the Discussion: "Likely, the different density for some ion channels contributed to the differential response of the two BC types". This is clearly an unsounded claim which needs to be shown and discussed. After all – an important usage of compartmental modelling is to gain insights into the interaction between structure/membrane channels and synaptic/input-output properties of the modeled cells and for calibrating the model against experiments. This key aspect is missing in this work and, therefore, it is impossible for the reader to grasp the underlying mechanisms responsible for the emergent properties of the modeled cells in response to light stimulus.

3) Another query is whether the response of the modeled ON and OFF BP cells will not be very different when they are immersed in actual retina circuit, with electrical field generated also by other cell types (AC, GC) when the retina is electrically-stimulated. This point should be discussed.

4) It is essential to provide the readers with Neuron models of the 3-cell types as well as with the respective data that was used to constrain these models

5) While the paper is generally well written and clear, the model exposition (Section entitled "Inference algorithm") leaves considerable room for improvement; ideally the paper should be self-contained. The presentation is a little confusing with respect to the status of p(θ), the "proposal prior" p ~, the posterior p(θ|x) and the "auxiliary" distribution q (which when equated to the posterior is no longer written as a conditional distribution as it appears in the cost function). It would be good to explain the form of the cost function that is minimized-it looks like it is based on a KL divergence but is a bit unclear of what. This exposition could do a much better job of walking pedagogically through the goal of the algorithm and how the goal is achieved by the variables defined and the cost function. Also, one should shorten this part of the paper and shift many of the figures to Appendices – as it is now standing with 10 figures.

6) It is difficult to find a quantitative reporting of the variation between data from the same cell type. I took the method to be applied to fit distributions over parameters for models accounting for each experimental trace separately; and for the (beautiful!) results in Figures 5 and 6 to be from one example cell, but maybe this is not true. Could this be clarified? Are the distributions of Figure 7 over models that fit all the experimental data for that cell type? If not, it would be good to show the measured responses with an error bar, and show variations between models. Understanding the extent of intercellular variability seems important in the design of isolating stimuli.

https://doi.org/10.7554/eLife.54997.sa1

Author response

Essential revisions:

1) This is a systematic study, primarily aimed to demonstrate the advantage of Bayesian simulation-based inference (BSI) for estimating the biophysical parameters of compartmental models. While BSI is convincingly demonstrated as a powerful tool for this purpose, it is not clear whether it is superior in any way over the present-day most-popular alternative, the Multiple Objective Optimization (MOO) approach (Drukmann et al., 2007) which is presently used for building data-constrained compartmental modelling for a large variety of neuron types, in hundreds of labs. worldwide, including at the Allen Inst. the Blue Brain projects etc. One key missing aspect in the present study is a systematic comparison (at least in the Discussion) between BSI and MOO methods.

We thank the reviewers for giving us the chance to clarify this point. Our contribution in this paper is to show that Bayesian inference for biophysical multicompartment models of neurons is feasible using fairly standard light stimuli and recordings and one can obtain meaningful posterior distributions even for such high-dimensional inference problems.

Indeed, many other labs use MOO algorithms for similar models. However, recent parallel work by Lueckmann et al., also under review at eLife (https://www.biorxiv.org/content/10.1101/838383v3.full.pdf), has shown that already for single-compartment Hodgkin-Huxley models MOO algorithms do not in general perform proper statistical inference – in particular, the best simulations obtained by such algorithms are often more concentrated in the parameter space, and one cannot easily get a good sense of the uncertainty in the parameters from such algorithms. We expanded our discussion of alternative approaches, highlighting also the comparative analysis in the abovementioned paper. Also, we expanded our analysis (e.g. in Figures 9 and 10) to explicitly take the posterior into account.

2) Another key missing aspect in this work is the lack of biophysical intuitions emerging from the compartmental models built. Specifically, how does the synaptic input from the cones propagate along the ON and the OFF BP cells' model that we see in Figure 1? We actually do not see any signal (synaptic potential) in this work neither its propagation along the different compartments – from the distal dendrites to soma, to axon. Does voltage attenuate significantly along these BP – compartments or are the modeled cell close to isopotentiality? What is the role of active ion channels during signals propagation in these models?

To address this point, we created animated heatmaps overlaid on the BC’s morphology to show the voltage, calcium and glutamate signals as a function of space and time for the chirp stimulus, available as video files. Snapshots from these videos at discrete time points are available as Figure 6—figure supplements 1 and 2. We discuss this analysis in the Results section. It shows that BCs are relatively isopotential units. We checked for the importance of active ion channels by removing them from the optimized model (calcium channels in the axon terminals were left in, as they are required for simulating release and do not contribute strongly to the voltage). We found that the calcium-activated chloride channels of the cone model contributed almost nothing to the light response in the optimized model, neither for the cone nor for the BC stimulus. We therefore removed them in the following steps to save computational resources. For the BCs, we extended this analysis to the five best posterior samples (and not only the best sample). We found that, in both cells, the sodium channel (NaV) and the somatic calcium channels could be removed from the model without altering the light response. Apart from the calcium channels in the axon terminals, the HCN channels were most significant for shaping the light response. We discuss all these findings in the paper now.

What is the synaptic conductances (between Cone and BP cells) in these models?

This parameter was missing from the manuscript. We now provide it in the Materials and methods section.

What is the justification to use such a complicated model for transmitter release with Ca-dependent pool-release, rather than transient (double exponential?) conductance change as synaptic inputs?

Ribbon synapses are a hallmark of glutamate release in cones and bipolar cells (for review, see Baden et al., 2011, https://doi.org/10.1016/j.tins.2013.04.006)​. The dynamics of release at this type of synapse cannot be easily matched with a simple transient conductance change, as the state of the pools determines the kinetics of the release. For example, if all pools are filled, ribbon synapses can release a large amount of glutamate very rapidly, emphasizing transient signals. We added a sentence justifying this choice in the Materials and methods section.

What are the key differences between the ON and OFF compt. models that make them respond differentially to extracellular stimulus? The authors write in the Discussion: "Likely, the different density for some ion channels contributed to the differential response of the two BC types". This is clearly an unsounded claim which needs to be shown and discussed.

We investigated this finding in more detail by varying additionally the cell position relative to the electrode and sampling cell parameters from the posterior. We found that the optimal stimulus waveform for the OFF cell was robust against changes in the cell position relative to the electrode and the precise choice model parameters, while the ON cell could less reliably be activated exclusively, with typical stimuli good for ON cells also evoking sizable OFF cell responses (similar to what could be seen in the original Figure 10). This different behavior was due to the T-type calcium channel in the OFF-BC, which strongly reacts to transient stimulation. The ON-BC can only be selectively activated if the threshold for the OFF-BC is larger, which depends on the cell position and the exact cell parametrization. These new results are now discussed in the paper.

After all – an important usage of compartmental modelling is to gain insights into the interaction between structure/membrane channels and synaptic/input-output properties of the modeled cells and for calibrating the model against experiments. This key aspect is missing in this work and, therefore, it is impossible for the reader to grasp the underlying mechanisms responsible for the emergent properties of the modeled cells in response to light stimulus.

As detailed above, we added several new figures and performed new analysis following the feedback of the reviewer. We feel that these have improved the paper significantly.

3) Another query is whether the response of the modeled ON and OFF BP cells will not be very different when they are immersed in actual retina circuit, with electrical field generated also by other cell types (AC, GC) when the retina is electrically-stimulated. This point should be discussed.

The reviewers are right that being able to selectively stimulate OFF/ON BCs in isolation is not the same as being able to stimulate them selectively when embedded in the retinal network. Of course, network effects through synaptic activation of ACs will further modulate the activity evoked by the stimulation. It is unlikely that ACs or RGCs get directly activated by the electrical stimulation protocol suggested here, as their neurites and somata are substantially farther away from the stimulation electrode than those of BCs, and Figures 3D​ and 10 illustrate that even for BCs mostly the dendrites are directly affected by the electric​ field. That notwithstanding, simulations including network effects and also more diverse BC types will be required in the future. We added this to the Discussion.

4) It is essential to provide the readers with Neuron models of the 3-cell types as well as with the respective data that was used to constrain these models

We will make the code and the data available upon publication at our github account https://github.com/berenslab and will provide the version used to create the figures for the​ paper as version 1.0 archived on zenodo.

5) While the paper is generally well written and clear, the model exposition (Section entitled "Inference algorithm") leaves considerable room for improvement; ideally the paper should be self-contained. The presentation is a little confusing with respect to the status of p(θ), the "proposal prior" p ~, the posterior p(θ|x) and the "auxiliary" distribution q (which when equated to the posterior is no longer written as a conditional distribution as it appears in the cost function). It would be good to explain the form of the cost function that is minimized-it looks like it is based on a KL divergence but is a bit unclear of what. This exposition could do a much better job of walking pedagogically through the goal of the algorithm and how the goal is achieved by the variables defined and the cost function. Also, one should shorten this part of the paper and shift many of the figures to Appendices – as it is now standing with 10 figures.

We reordered this part of the Materials and methods section. Now the section starts with the target data and we explain the discrepancy measure between simulations and target data. We then explained the inference algorithm step-by-step and changed the wording to make it clearer. We feel that moving figures from the Materials and methods section to the Appendix would rather make the paper harder to read.

6) It is difficult to find a quantitative reporting of the variation between data from the same cell type. I took the method to be applied to fit distributions over parameters for models accounting for each experimental trace separately; and for the (beautiful!) results in Figures 5 and 6 to be from one example cell, but maybe this is not true. Could this be clarified? Are the distributions of 7 over models that fit all the experimental data for that cell type? If not, it would be good to show the measured responses with an error bar, and show variations between models. Understanding the extent of intercellular variability seems important in the design of isolating stimuli.

Indeed, the models were fit to the BC type means, as individual recordings can be quite noisy – we added Figure 5—figure supplement 1​ to illustrate the variability and mention this in the text.

https://doi.org/10.7554/eLife.54997.sa2

Article and author information

Author details

  1. Jonathan Oesterle

    Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
    Contribution
    Software, Formal analysis, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-8919-1445
  2. Christian Behrens

    Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
    Contribution
    Supervision, Investigation, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-3623-352X
  3. Cornelius Schröder

    Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
    Contribution
    Methodology, Writing - review and editing
    Competing interests
    No competing interests declared
  4. Thoralf Hermann

    Naturwissenschaftliches und Medizinisches Institut an der Universität Tübingen, Reutlingen, Germany
    Contribution
    Data curation, Investigation
    Competing interests
    No competing interests declared
  5. Thomas Euler

    1. Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
    2. Center for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
    3. Bernstein Center for Computational Neuroscience, University of Tübingen, Tübingen, Germany
    Contribution
    Supervision, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-4567-6966
  6. Katrin Franke

    1. Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
    2. Bernstein Center for Computational Neuroscience, University of Tübingen, Tübingen, Germany
    Contribution
    Data curation, Investigation
    Competing interests
    No competing interests declared
  7. Robert G Smith

    Department of Neuroscience, University of Pennsylvania, Philadelphia, United States
    Contribution
    Conceptualization, Resources, Funding acquisition, Investigation, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-5703-1324
  8. Günther Zeck

    Naturwissenschaftliches und Medizinisches Institut an der Universität Tübingen, Reutlingen, Germany
    Contribution
    Conceptualization, Supervision, Funding acquisition, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-3998-9883
  9. Philipp Berens

    1. Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
    2. Center for Integrative Neuroscience, University of Tübingen, Tübingen, Germany
    3. Bernstein Center for Computational Neuroscience, University of Tübingen, Tübingen, Germany
    4. Institute for Bioinformatics and Medical Informatics, University of Tübingen, Tübingen, Germany
    Contribution
    Conceptualization, Supervision, Funding acquisition, Writing - original draft, Writing - review and editing
    For correspondence
    philipp.berens@uni-tuebingen.de
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-0199-4727

Funding

Bundesministerium für Bildung und Forschung (01GQ1601)

  • Philipp Berens

Bundesministerium für Bildung und Forschung (01IS18052)

  • Philipp Berens

Deutsche Forschungsgemeinschaft (EXC 2064 - 390727645)

  • Philipp Berens

Deutsche Forschungsgemeinschaft (BE5601/4-1)

  • Philipp Berens

Baden-Württemberg Stiftung (NEU013)

  • Günther Zeck
  • Philipp Berens

National Institutes of Health (EY022070)

  • Robert G Smith

National Institutes of Health (EY023766)

  • Robert G Smith

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

We thank Yves Bernaerts for preliminary work on electrical stimulation of biophysical models, Timm Schubert for support with animal protocols and Gordon Eske and Adam vor Daranyi for technical support. Additionally, we thank Jakob Macke, Philipp Hennig and Lara Hoefling for discussion. This work was funded by the Federal Ministry of Education and Research (BMBF, FKZ 01GQ1601 and 01IS18052 to PB), the German Science Foundation through the Excellence Cluster 2064 ‘Machine Learning - New Perspectives for Science’ (390727645), a Heisenberg Professorship (BE5601/4-1 to PB), the Baden-Württemberg Stiftung (NEU013 to PB and GZ), and the National Institutes of Health (NIH) (EY022070 and EY023766 to RGS).

Ethics

Animal experimentation: All procedures were approved by the governmental review board (Regierungspräsidium Tübingen, Baden-Württemberg, Konrad-Adenauer-Str. 20, 72072 Tübingen, Germany, AZ 35/9185.82-7) and performed according to the laws governing animal experimentation issued by the German Government.

Senior Editor

  1. John R Huguenard, Stanford University School of Medicine, United States

Reviewing Editor

  1. Alexander Borst, Max Planck Institute of Neurobiology, Germany

Reviewers

  1. Alexander Borst, Max Planck Institute of Neurobiology, Germany
  2. Adrienne L Fairhall, University of Washington, United States

Publication history

  1. Received: January 8, 2020
  2. Accepted: October 26, 2020
  3. Accepted Manuscript published: October 27, 2020 (version 1)
  4. Version of Record published: November 18, 2020 (version 2)

Copyright

© 2020, Oesterle et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


