Multivariate analysis of electrophysiological diversity of Xenopus visual neurons during development and plasticity
Abstract
Biophysical properties of neurons become increasingly diverse over development, but the mechanisms underlying and constraining this diversity are not fully understood. Here we investigate electrophysiological characteristics of Xenopus tadpole midbrain neurons across development and during homeostatic plasticity induced by patterned visual stimulation. We show that during development, tectal neuron properties not only change on average, but also become increasingly diverse. After sensory stimulation, both the electrophysiological diversity and the functional differentiation of cells are reduced. At the same time, the number of cross-correlations between cell properties increases after patterned stimulation as a result of homeostatic plasticity. We show that tectal neurons with similar spiking profiles often have strikingly different electrophysiological properties, and demonstrate that changes in intrinsic excitability during development and in response to sensory stimulation are mediated by different underlying mechanisms. Overall, this analysis and the accompanying dataset provide a unique framework for further studies of network maturation in Xenopus tadpoles.
https://doi.org/10.7554/eLife.11351.001

eLife digest
Brains consist of many cells called neurons: billions of them in a human brain, and hundreds of thousands in the brain of a small fish or a frog tadpole. Many of these neurons are very much alike, and work together to process information in the brain. Yet while they are similar, they are not exactly identical. One of the reasons for these differences seems to be to allow each neuron to contribute something unique to the overall working of the brain. By looking at how individual neurons within a specific type differ from each other, it is possible to understand more about how they work together.
Ciarleglio, Khakhalin et al. have now compared the properties of the neurons in a part of the brain of a developing frog tadpole that processes sensory information. This showed that these neurons appear relatively similar to each other in young tadpoles. However, as the tadpoles grow and their brains become more elaborate the neurons become increasingly diverse, and their properties become more unique and nuanced.
One possible explanation is that this diversity reflects new types of neurons being formed; another is that the differences between the neurons reflect how these cells have adapted to the different patterns of sensory input they may have experienced. To distinguish between these two possibilities, Ciarleglio, Khakhalin et al. provided a group of older tadpoles with strobe-like visual stimulation and observed that this caused the neurons to become more similar once again. This suggests that neurons can change their response properties to adapt to the type of sensory input they receive, which would allow the animal to better process different types of sensory information. The data collected through these experiments could now be used to build computational models of this part of the tadpole brain.
https://doi.org/10.7554/eLife.11351.002

Introduction
Electrophysiological properties of neurons become increasingly diverse over development in ways that are critical for proper nervous system function and maturation (Turrigiano and Nelson, 2004; Marder and Goaillard, 2006). Perturbation of these processes can have broad and devastating consequences leading to neurodevelopmental disorders such as mental retardation, autism, and schizophrenia (Rice and Barone, 2000; Belmonte et al., 2004; Pratt and Khakhalin, 2013). It remains unclear, however, to what degree this diversity in electrophysiological tuning reflects intrinsic developmental differentiation, and how much it reflects the particular activation history of a given neuron, as well as the constraints that shape how well neurons adapt to changes in their input patterns.
The adaptability of electrophysiological properties is central for allowing developing neural circuits to maintain functional stability, while simultaneously providing flexibility for accommodating developmental changes. One mechanism that contributes to this balance is homeostatic plasticity, whereby neurons adjust their synaptic and intrinsic properties based on the activity of the circuit in which they are embedded (Daoudal and Debanne, 2003; Desai, 2003; Turrigiano and Nelson, 2004; Ibata et al., 2008; Turrigiano, 2008; Marder, 2011). Homeostatic plasticity allows developing circuits to function stably by maximizing their dynamic range as new inputs become incorporated (Bucher et al., 2005; Marder and Goaillard, 2006; Pratt and Aizenman, 2007). This is particularly relevant to developing animals: their nervous system must be functional and able to interact with its environment even as nascent circuitry is still developing.
One place where this adaptability in synaptic and intrinsic properties is particularly salient is the optic tectum of Xenopus laevis tadpoles—a midbrain area that processes inputs from the visual, auditory, and mechanosensory systems (Cline, 1991; Ewert, 1997; Cline, 2001; Ruthazer and Cline, 2004; Ruthazer and Aizenman, 2010). Sensory inputs to the tectum are strengthened over development, resulting in increasingly robust synaptic responses, yet this strengthening is accompanied by decreases in intrinsic excitability that may function to maintain a stable dynamic range in this circuit (Pratt and Aizenman, 2007). As a consequence, visually guided behaviors, such as collision avoidance, improve and become more tuned to specific stimuli (Dong et al., 2009). Changes in the sensory environment can also elicit homeostatic plasticity in tectal cells, resulting in adjustment of both synaptic and intrinsic properties (Aizenman et al., 2003; Deeg and Aizenman, 2011).
Since homeostatic plasticity coordinates changes of different cellular properties, over time it is expected to constrain these properties, limiting ways in which they can covary within the population of cells (O'Leary et al., 2013): for example, strong excitatory synaptic drive results in lower intrinsic excitability. Coordinated changes in different physiological properties may contribute to diversification of cell tuning that happens as networks mature, creating and shaping differences in cell phenotypes both between cell types as they emerge (Ewert, 1974; Frost and Sun, 2004; Kang and Li, 2010; Nakagawa and Hongjian, 2010; Liu et al., 2011), and within each cell type in a functional network (Tripathy et al., 2013; Elstrott et al., 2014). These considerations suggest that multivariate distributions of different physiological properties sampled across many cells in a network may contain unique information both about current tuning of this network, and the mechanisms behind this tuning that may act through local recalibration of properties in individual cells (O'Leary et al., 2013). Yet relatively few studies have attempted this kind of analysis on a large scale so far.
Here we perform a large-scale electrophysiological census of retinorecipient neurons in the developing Xenopus laevis tectum to better understand the electrophysiological variability of tectal neurons in development and in response to a need for homeostatic change. Using a comprehensive suite of tests, we describe relationships between 33 electrophysiological variables, and show that both the variability and the predictability of multivariate cell tuning increase over development and undergo changes in response to sensory stimulation. By analyzing groups of neurons that produce similar spike trains, we also show that similar spiking behaviors may be supported by different combinations of underlying electrophysiological properties.
Results
Main dataset and correlation analysis
We recorded from 155 deep-layer, retinorecipient tectal cells across developmental stages 43 to 49 (Nieuwkoop and Faber, 1994) from 42 animals, measuring from 9 to 33 different electrophysiological variables in each cell (median of 26 variables per cell; see Figure 1 and Supplementary file 1 for a graphical description and a concise table of variables, and the Materials and methods for a detailed description of each variable). Of the 155 cells, 35 contributed to all 33 variables, 62 contributed to at least 30 variables, 124 to at least 20, and 154 to at least 10. Across different variables, the least-covered variable had measurements from 64 cells, while the most-covered one had measurements from 154 cells (median of 134 cells per variable); in total, 18% of all possible observations were missing. The dataset containing analysis parameters is available online as Supplementary file 2. The entire dataset, including raw electrophysiology files, has also been made available and can be accessed with the following doi: 10.5061/dryad.18kk6.
With 33 different cell parameters, we had 528 potential pairwise correlations to assess. Ninety of these correlations were significant after FDR adjustment (α = 0.05, corresponding to a Pearson correlation p-value threshold of about 8e−3; Figure 2A); all correlations with r > 0.5 (n = 11) were also significant for the Spearman correlation after the same FDR adjustment. We found that there were at least three distinct clusters of tightly correlated variables representing different neural characteristics: tectal cell spikiness (a tendency to produce more spikes); measurements of spike shape and dynamics that strongly anticorrelated with spikiness; and intrinsic neuronal properties, such as membrane capacitance and amplitudes of intrinsic ionic currents. Some of the more salient correlations are shown in Figure 2B.
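The pairwise screen described above can be sketched as follows; this is a minimal version, assuming the data sit in a cells × variables array with NaN marking missing observations (the function name and array layout are illustrative, not from the paper):

```python
import numpy as np
from scipy import stats

def correlation_screen(data, alpha=0.05):
    """Return (i, j, r, p) for every variable pair surviving BH-FDR at alpha."""
    n_vars = data.shape[1]
    pairs = []
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            # use only cells where both variables were measured
            mask = ~np.isnan(data[:, i]) & ~np.isnan(data[:, j])
            if mask.sum() < 3:
                continue
            r, p = stats.pearsonr(data[mask, i], data[mask, j])
            pairs.append((i, j, r, p))
    # Benjamini-Hochberg step-up procedure over all tested pairs
    pairs.sort(key=lambda t: t[3])
    m = len(pairs)
    k = 0
    for rank, (_, _, _, p) in enumerate(pairs, start=1):
        if p <= alpha * rank / m:
            k = rank
    return pairs[:k]
```

With 33 variables this loop tests the same 528 pairs mentioned in the text; the Spearman check is the same call with `stats.spearmanr`.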
The presence of correlations in our dataset suggested that not all variables were independent, which is expected for sets of variables that describe different aspects of common underlying cellular qualities, such as spikiness or synaptic effectiveness (McGarry et al., 2010). To make sure that none of the variables were redundant (bringing no new information to the set) or too noisy (having no interactions with other variables in the set), we ran the so-called Principal Variable Analysis, quantifying the total amount of variance in the dataset explained by each variable (McCabe, 1984). We found that the most informative, and thus least independent, variables were related to the number of spikes the cell was able to generate (N spikes to cosine and step injections explained 15% and 14% of total variance, respectively), or to different aspects of spike shape (9–13% of total variance; Figure 2C, top of the list). Conversely, some variables did not serve as good linear predictors for other properties in the set (bottom of Figure 2C), suggesting that they were more independent: the lowest-ranked variable explained 4% of total variance, which was still more than the 3% expected for a fully uncorrelated variable. In summary, it can be seen from Figure 2C that no single variable was 'too good' at explaining overall variance in the dataset, but also that none of them fell at or below the predicted noise level; there was no clear division of variables into distinct groups, but rather a smooth decline across the list. Finally, variables of different biological nature were diversely distributed across the parts of the plot corresponding to higher and lower explanatory power. From this we concluded that our dataset was well balanced for exploratory analysis, offering a healthy mix of independent and interacting variables (Guyon and Elisseeff, 2003).
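McCabe's full principal-variable selection is more involved, but the per-variable 'share of total variance explained' has a simple linear reading that reproduces the 1/33 ≈ 3% noise floor mentioned above: the mean squared correlation of each variable with every variable in the set (including itself). A sketch, assuming complete, standardized data (whether the paper computed the score exactly this way is an assumption):

```python
import numpy as np

def explained_variance_shares(data):
    """data: cells x variables, no missing values; returns one share per variable."""
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    corr = (z.T @ z) / len(z)          # correlation matrix
    return (corr ** 2).mean(axis=1)    # mean squared correlation per variable
```

A variable uncorrelated with all others scores exactly 1/p (here 1/33), which is why 3% serves as the noise level in Figure 2C.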
Since the maximal number of spikes in response to current step injections could alone explain 14% of total variance in the dataset, we wondered which of the protocols we used was the most informative, in the sense of best capturing the electrophysiological identity of each tectal cell. To answer this question, we ran the Principal Variable Analysis on groups of variables coming from different protocols. We found that 9 variables from the step current injection protocol together explained 42% of total variance in the set; 7 variables quantifying I-V curves explained 34%; 6 variables from cosine current injections explained 33%; 5 synaptic variables and 4 passive membrane properties each accounted for 21%; and 2 variables describing miniature postsynaptic currents (mEPSCs) predicted 11% of total variance (shares of explained variance do not have to add up to 100%). Based on this analysis, we conclude that the step current injection protocol (see Figure 1E–G) is one of the most informative, and should be recommended for fast profiling of cell types in the future.
Changes with development
We next asked which electrophysiological characteristics changed with development as tadpoles matured from stage 43, a time when axons from the eye make their first functional connections in the brain (Holt and Harris, 1983), to stage 49, when tectal networks become sufficiently refined to support spatially coordinated visual behaviors and multisensory integration (Deeg et al., 2009; Dong et al., 2009; Xu et al., 2011; Khakhalin et al., 2014). To compensate for the unevenness of data availability across cell properties and developmental stages, we combined data from stages 43–44, 45–46, and 48–49, as these groupings have proven useful in previous studies (Pratt et al., 2008; Dong et al., 2009; Sharma and Cline, 2010). We did not group stage 47 with the others, however, as there were indications that this transient stage in development may be unique in terms of tadpole behavior, network excitability, and average tectal cell properties (Pratt and Aizenman, 2007; Bell et al., 2011). We found that 12 cell properties changed significantly across these developmental periods (P_{ANOVA} < 0.05), with 5 values decreasing overall, and 6 variables increasing (Figure 3; see also Supplementary file 3 for a summary table).
While average values changed in both directions during development, the effect on the variability of cellular properties was much more consistent: 14 variables out of 33 increased their variability from stages 45–46 to 48–49 (P_{V} < 0.05, corresponding to increases in standard deviation of 40% and higher; see Supplementary file 3). Among others, spiking inactivation (as measured by 'Wave decay'), maximal amplitudes of sodium and slow potassium ionic currents, the frequency of minis, and the total synaptic charge all experienced an almost twofold increase in variability. Only two properties out of 33 became less variable over development: synaptic resonance and synaptic resonance width. These data suggest that by stages 48–49, neurons became more electrophysiologically diverse than they were at stages 45–46.
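The variability comparisons (P_V) can be sketched with a standard equal-variance test; Levene's test centered on the median is one robust choice, but which exact test produced the P_V values is not restated here, so treat this implementation as an assumption:

```python
import numpy as np
from scipy import stats

def variance_increased(young, old, alpha=0.05):
    """True if `old` is both significantly and directionally more variable than `young`."""
    # Levene's test (median-centered, i.e. Brown-Forsythe) for unequal spread
    stat, p = stats.levene(young, old, center='median')
    return p < alpha and np.std(old, ddof=1) > np.std(young, ddof=1)
```

The directional check matters: the equal-variance test alone is two-sided, while the claim in the text is specifically about variability increasing with age.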
Factor analysis
To visualize and explore the patterns behind the codependence and covariance of variables in our dataset, and to better measure the common features that underlie these correlations, we used principal component analysis (PCA). As not every variable was measured in every cell, we used an iterative Bayesian version of PCA known as 'PCA with missing values' (Ilin and Raiko, 2010), followed by a promax oblique rotation. We extensively verified the validity of our PCA analysis, comparing it to standard PCA on restricted and imputed data, PCA on rank-transformed data, as well as the two most common nonlinear dimensionality reduction approaches: Isomap and Local Linear Embedding (see Materials and methods). We concluded that our PCA analysis was the most appropriate for this dataset, and performed better than local nonlinear approaches, with the first two principal components explaining 15% and 8% of total variance, respectively (this total of 23% of variance explained would have corresponded to ~35% of variance if we had every type of observation in every cell; see Materials and methods for details).
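The core idea of PCA with missing values can be sketched as an expectation-maximization-style loop: alternate between filling missing entries from the current low-rank fit and re-estimating the components by SVD. The Bayesian variant the paper cites (Ilin and Raiko, 2010) adds priors and uncertainty estimates on top of this scheme, so the version below is a simplified assumption, not the authors' implementation:

```python
import numpy as np

def pca_missing(data, n_components=2, n_iter=100):
    """EM-style PCA for a cells x variables array with NaNs.

    Returns (scores, components, completed matrix)."""
    X = data.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(data, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # start from column means
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        # rebuild the matrix from the leading components and re-impute
        recon = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components] + mu
        X[miss] = recon[miss]
    scores = (X - X.mean(axis=0)) @ Vt[:n_components].T
    return scores, Vt[:n_components], X
```

On exactly low-rank data the loop converges to the true completion; on real data it converges to the maximum-likelihood low-rank fit given the observed entries.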
A loading plot (Figure 4A) shows contributions of individual variables from the dataset to rotated PCA components. Points on the plot are colored according to the biological nature of each variable (Figure 4, see legend); variables shown on the right contributed positively to the first component (C1), while those on the left contributed negatively to this component; variables in the upper part of the cloud contributed positively to the second component (C2), while those at the bottom contributed negatively to it. Consistent with the high predictive value of individual variables related to spiking, we found that C1 describes the overall 'Spikiness' of each cell. Cells with large values of C1 generated many sharp and narrow spikes in response to current injections (Figure 4A, green points describing spike shape, left; red points describing number of spikes, right). These cells spiked strongly in response to prolonged current injections (Figure 4A, red point for 'Spiking resonance width,' upper right quadrant); had low accommodation, both in spike amplitude (Figure 4A, green, left bottom corner) and interspike interval (Figure 4A, blue, left top corner); and had high trial-to-trial spike-timing precision (low jitter; Figure 4A, blue point, left top corner). Properties of cells with different C1 and C2 values can also be illustrated in a modified score plot, in which traces of membrane potential responses to a 100 pA step current injection are arranged within the C1-C2 principal component space (Figure 4B). Note the difference in spiking responses between cells on the right (high C1) and left (low C1) sides of Figure 4B. Cells with large C1 also tended to be involved in polysynaptic networks (Figure 4A, purple point for 'Monosynapticity coefficient,' lower left quadrant) and did not exhibit short-term facilitation of synaptic inputs during repeated stimulation (Figure 4A, purple point for 'Synaptic PPF,' lower left quadrant).
Cells with low values of C1 exhibited opposite traits: they produced few broad, squat, quickly accommodating spikes (Figure 4B, left), were not recruited in recurrent networks, and tended to have strong synaptic facilitation that could potentially indicate high plasticity of synaptic inputs (Kleschevnikov et al., 1997).
The second component (C2) can be loosely dubbed 'Current density': cells with high values of C2 had large intrinsic ionic currents (voltage-gated sodium I_{Na} and slow potassium I_{KS} currents), high membrane capacitance (Cm) and low membrane resistance (Rm), consistent with a larger membrane surface, and received strong synaptic inputs, in terms of both frequency and amplitude of mEPSCs. These cells produced frequent and sharp spikes (Figure 4A, blue point for 'Spike ISI' and green points for 'Spike risetime' and 'Spike width,' lower part of the plot), but also tended to have higher values of spike-timing jitter and interspike interval accommodation. Conversely, cells with low values of C2 behaved electrophysiologically as smaller cells (low Cm, high Rm), and had weak intrinsic and spontaneous synaptic currents. As principal neurons in the optic tectum have relatively uniform geometrical cell body sizes (Lazar, 1973), these differences in electrophysiological properties may indicate different levels of electrical coupling between the cell bodies, where the recording was performed, and both dendritic and axonal compartments, where the currents are generated.
After classifying all cells according to their position in the PCA space (Figure 5A), we were able to visualize the developmental maturation of tectal cells as movements of point clouds within the C1-C2 plane, and as changes of these clouds' shapes (Figure 5B). It can be seen from Figure 5B that as cells matured, their representations in the PCA space migrated from the left to the right side of the plot. Indeed, the Spikiness (C1) changed with age (P_{ANOVA} = 1e−5), first increasing from stages 43–44, through 45–46, and up to stage 47 (P_{MW} < 4e−3 at each transition, N = 11, 64 and 24, respectively), and then decreasing again at stages 48–49 (P_{MW} = 0.02, N = 24, 56; Figure 5B). The value of C2, or 'Current density', did not change much over development (P_{ANOVA} = 0.08), except for a slight decrease between stage 47 and stages 48–49 (P_{MW} = 0.02, N = 24, 56).
The clouds of points shown in Figure 5B also differ in size and structure, with the stages 45–46 cloud being more compact and simple, and the stages 48–49 cloud being more sprawled, with an uneven distribution of points. To quantify cloud size, we compared medians of pairwise Euclidean distances between points in the PCA space and found that they were larger at stages 48–49 than in any other developmental group (P_{MW} < 5e−7 for all comparisons). Likewise, the value of 'Current density' (C2) was more variable at stages 48–49 than at stages 45–46 (P_{V} = 0.004, N = 64, 56), and the average pairwise difference between cells in the original 33-dimensional space was 22% larger at stages 48–49 than at stages 45–46 (P_{TT} < 1e−11; see Materials and methods for details). This suggests that from stages 43 to 47, neurons changed their properties consistently, gradually getting more spiky, but then scaled their average spikiness back at stages 48–49, while simultaneously expanding to the whole range of possible C1-C2 values, increasing the diversity of electrophysiological tuning across individual tectal cells.
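The 'cloud size' statistic used here is simple to state precisely: the median Euclidean distance over all point pairs in the component space. A sketch (the function name is illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist

def cloud_size(points):
    """Median pairwise Euclidean distance within a cells x components array."""
    return np.median(pdist(points))
```

The median, rather than the mean, keeps the statistic robust to the few outlying cells visible in the sprawled stages 48–49 cloud; the group comparison itself would then be a Mann-Whitney test on the pairwise distances, as the P_{MW} notation above suggests.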
We then quantified the presence of internal structure within the original 33-dimensional datasets, separately for younger and older cells, by performing multiple imputation on the data, and then running two different types of unsupervised feature analyses that identify structure in multivariate distributions: hierarchical cluster analysis, which looks for sub-clouds of points within a larger cloud, and local PCA (as opposed to the previously described global PCA that was run on the full set of data) to quantify linear interdependencies between cell properties within each age group. For cluster analysis, we used the agglomerative nesting coefficient AGNES (Struyf et al., 1996) and found (Figure 6A) that the degree of clustering was 36% larger at stages 48–49 compared to stages 45–46 (0.67 ± 0.04 and 0.49 ± 0.03, respectively, P_{TT} < 1e−11), reflecting a substantial increase in within-group heterogeneity. For local PCA analyses, we looked at the amount of total variance explained by the first 2 components within each stage group as a measure of internal structure and linear interactions between different variables. We found (Figure 6B) that despite an increase in total variance and heterogeneity in older cells, locally run PCA explained a slightly higher share of variance in these cells (0.35 ± 0.04) than in younger cells at stages 45–46 (0.31 ± 0.02; P_{TT} < 1e−11), suggesting that in more mature networks, cell properties are more coordinated with each other.
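The agglomerative coefficient (Kaufman and Rousseeuw's statistic, reported by the AGNES algorithm) has a compact definition: for each observation, take the dissimilarity at which it is first merged into any cluster, divide by the height of the final merge, and average one minus that ratio; values near 1 indicate strong clustering structure. A sketch on top of scipy's linkage output (the choice of average linkage here is an assumption):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

def agglomerative_coefficient(points):
    """Kaufman-Rousseeuw agglomerative coefficient for a cells x features array."""
    Z = linkage(pdist(points), method='average')
    n = len(points)
    final_height = Z[-1, 2]
    first_merge = np.full(n, np.nan)
    members = {i: [i] for i in range(n)}      # cluster id -> original points
    for step, (a, b, h, _) in enumerate(Z):
        merged = members.pop(int(a)) + members.pop(int(b))
        for p in merged:
            if np.isnan(first_merge[p]):      # p joins a cluster for the first time
                first_merge[p] = h
        members[n + step] = merged
    return np.mean(1 - first_merge / final_height)
```

Two tight, well-separated sub-clouds produce an early first merge for every point relative to the final merge, hence a coefficient close to 1.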
Effects of sensory stimulation
While our analysis of neuronal maturation demonstrated that several cellular properties changed over development, both in terms of their average values and their variability, we were not able to tell whether these phenomena represented genetically determined developmental programs, or whether they reflected experience-dependent adaptations of electrophysiological properties resulting from the cumulative sensory experience of each cell (Dong et al., 2009; Munz et al., 2014). It is known from previous studies that tadpoles exposed for several hours to strongly patterned visual stimulation show synaptic (Aizenman et al., 2002; Aizenman and Cline, 2007), intrinsic (Aizenman et al., 2003; Dong et al., 2009) and morphological (Sin et al., 2002) changes in individual tectal cells, as well as in tectal network activity (Pratt et al., 2008) and animal behavior (Dong et al., 2009). We provided four hours of strongly patterned visual stimulation to an experimental group of stage 49 tadpoles and then measured and analyzed the same 33 variables in 65 cells (across 19 animals).
Of 33 electrophysiological properties, 8 changed significantly between naïve and visually stimulated cells (see Supplementary file 3). As previously described (Aizenman et al., 2003), visually stimulated cells spiked more in response to step current injections, producing on average 7.1 ± 4.7 spikes per injection, compared to only 4.6 ± 3.3 spikes in naïve cells (P_{MW} = 5e−3, N = 51, 60; Figure 7A). Interestingly, while for naïve cells the number of spikes produced in response to cosine injections strongly correlated with the number of spikes in response to step injections (r = 0.82, P_{corr} = 2e−27, N = 108), visually stimulated cells did not change their spiking response to cosine injections (0.8 ± 0.5 for naïve, 0.9 ± 0.5 for stimulated cells; p = 0.3, N = 38, 60), indicating that changes in spikiness in development and after visual experience may be implemented by different biophysical mechanisms. Other variables that increased after stimulation included the absolute value of the holding current and the spike threshold potential (Figure 7B). Three properties decreased after visual stimulation: membrane capacitance (Figure 7C), the speed of response buildup during cosine stimulation ('Wave buildup'), and jitter.
Figure 7D shows η^{2} effect sizes and marks statistical significance for changes during development and in response to sensory stimulation (Ferguson, 2009). It can be seen that the overlap between normal changes in development and changes after visual stimulation is very small: only 3 electrophysiological properties changed significantly both in development and after stimulation — of these, one (Spike threshold) changed in opposite directions, and two others changed in the same direction. These results suggest that while visually stimulated cells from stage 49 animals seem to spike similarly to naïve stage 47 animals, the biophysical mechanisms that underlie high cellular excitability in these groups are not likely to be shared.
Curiously, the only electrophysiological property that was significantly affected by both developmental stage and sensory history of the animal, and that could conceivably contribute to differences in cell excitability, was membrane capacitance — a variable that is usually interpreted as an estimate of cell size, and thus is not an obvious first candidate as an underlying mechanism for short-term intrinsic cellular plasticity. Significant changes in membrane capacitance (Figure 7D, lower left corner) and spike threshold potential (Figure 7D, lower right corner) may indicate a change in electrical coupling between the cell soma, from which recordings were performed, and the spike-generating axon hillock, similar to what has been described in other preparations (Grubb and Burrone, 2010; Kuba et al., 2010).
To better compare changes after visual stimulation to those occurring over development, we projected electrophysiological data from sensory-stimulated cells into the original PCA space defined by data from naïve tadpoles, and compared the resulting point cloud to points from naïve stage 48–49 animals (Figure 7E). Compared to the naïve group (Figure 7E, blue), the 'Spikiness' (C1) of neurons from visually stimulated animals (Figure 7E, black) was higher (P_{MW} = 0.02, N = 56 and 60 for naïve and visually stimulated groups), while the average 'Current density' (C2 value) of stimulated cells did not change (P_{MW} = 0.06). The variances of C1 and C2 did not change significantly (P_{V} > 0.05); however, the size of the cloud, as measured by the median pairwise Euclidean distance between points in PCA space, decreased in stimulated cells (P_{MW} < 1e−8). Likewise, the average pairwise difference between cells in the original 33-dimensional space decreased by 11% after visual stimulation (29 ± 1 for stimulated, compared to 33 ± 1 for naïve stage 48–49 cells; P_{TT} < 1e−15). This change in cloud size almost certainly reflects the fact that 8 out of 33 electrophysiological properties became significantly less variable after visual stimulation: namely, maximal Na^{+} and slow K^{+} voltage-gated currents, transient K^{+} current activation potential, spiking threshold, jitter, frequency and amplitude of miniature EPSCs, and synaptic resonance interstimulus interval (P_{V} < 0.05 for each of these comparisons). Only two properties were more variable in visually stimulated cells: the mean number of spikes in response to step injections and membrane resistance Rm (P_{V} < 0.01 for both).
We then analyzed the internal structure of the naïve and stimulated datasets from stage 48–49 animals in the same way as was done for the different developmental stages. We found that the amount of internal heterogeneity, as quantified by the agglomerative nesting coefficient, was reduced in stimulated neurons (0.62 ± 0.02) compared to naïve cells (0.67 ± 0.04; P_{TT} < 1e−15; Figure 6A), while the share of total variance explained by the first two components of local PCA was higher in stimulated cells (0.41 ± 0.02) than in naïve cells (0.35 ± 0.04; P_{TT} < 1e−15; Figure 6B). Together, these findings suggest that visual stimulation of tectal neurons increased their propensity to spike and lowered overall cell-to-cell variability and within-group heterogeneity, making cells more alike. Visual stimulation also made different cell properties more interdependent and predictable, consistent with the consequences of a strong homeostatic constraint experienced by these neurons.
Convergence of spiking phenotypes
Our data suggest that spiking properties of tectal neurons can be modulated in several different ways during development and in response to sensory stimulation. We used our dataset to try to infer biological processes that could underlie adjustment of spike output. To deduce these processes, we looked for basic intrinsic properties that could serve as good predictors of cell spiking output across all experimental groups (220 cells). We ran a set of competitive sequential linear regression models (Calcagno and de Mazancourt, 2010) to test whether we could predict the number of spikes produced by cells in response to current step injections from their low-level electrophysiological properties, namely: membrane resistance (Rm), membrane capacitance (Cm), and both ionic current activation potentials and maximal amplitudes for sodium (I_{Na}), slow potassium (I_{KS}) and transient potassium (I_{KT}) voltage-gated currents (8 variables total). We found that across all cells, the pair of properties that explained the most spike-output variance was the pair of activation potentials for sodium and slow potassium voltage-gated currents (I_{Na} and I_{KS}; 18% of variance without interaction, 19% with interaction). After activation potentials were considered, the next most informative variable was membrane resistance (Rm, 7% of variance; 13% with interactions), followed by voltage-gated sodium channel amplitude (2% of explained variance, 4% with interactions), and membrane capacitance (5% without, 15% with interactions). Together these 5 variables explained 33% of variation in the number of spikes if taken without interactions, and as much as 51% if multiple-order interactions were included.
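The distinction between variance explained 'without interaction' and 'with interaction' reduces to comparing ordinary least-squares fits with and without a product term. The paper's competitive model search (Calcagno and de Mazancourt, 2010) explores many such models automatically; the pared-down two-predictor version below is an assumption meant only to show the comparison itself:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on X with an intercept."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

def interaction_gain(x1, x2, y):
    """(base R^2, extra variance explained by adding the x1*x2 interaction)."""
    base = r_squared(np.column_stack([x1, x2]), y)
    full = r_squared(np.column_stack([x1, x2, x1 * x2]), y)
    return base, full - base
```

A large gain, as in the membrane-capacitance case above (5% without vs 15% with interactions), signals that the predictor matters mostly in combination with other properties rather than on its own.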
Unfortunately, linear models did not help to differentiate between variables underlying tuning of spike output during development and after sensory stimulation, as none of the effects disappeared (based on P_{F} < 0.05 criterion) when both developmental stage and experimental manipulation were factored in.
At the same time, the relative abundance of significant interactions (7 out of 26 investigated in the model) and the high share of variance in spiking output explained by these interactions (18%, compared to main linear effects of 33%) indicated that the simple electrophysiological properties included in our analysis do not regulate cell spiking independently, but are likely to be constrained (O'Leary et al., 2013). For example, the cell property that predicted the most spiking-output variance, the voltage-gated sodium current activation potential, interacted significantly (P_{F} < 0.05) with the slow potassium current activation potential, membrane resistance and capacitance, as well as several second- and third-order combinations of these variables. In practical terms, this means that to keep the spike output of a tectal cell constant, a change in any of these variables should be accompanied by a balancing correction of the sodium current activation potential, and vice versa. To further investigate this point, we analyzed distributions of low-level electrophysiological properties in a subset of cells that had similar spiking output in response to current injections (Figure 8).
To objectively identify cells with similar spike outputs, we combined each cell's responses to consecutive step injections of increasing current amplitude into one trace and extracted spike-timing data from this trace. We then applied a commonly used cost-based metric, sensitive to both the number of spikes and their timing, to quantify similarity between the spike trains of different cells (Victor and Purpura, 1996, 1997), and used multidimensional scaling to represent the matrix of pairwise cell-to-cell distances in a 2D plot (Figure 8A). We used this analysis to select groups of cells (6 in each group) that generated similar spike trains in response to current step injections (Figure 8B) by pulling the 5 nearest neighbors of arbitrarily selected reference cells. Although the spike trains produced by cells within each group were very similar to each other, we found that these cells did not form compact clusters when projected onto the C1-C2 PCA space, but resembled sparse constellations that partially overlapped between groups (Figure 8C). These results suggest that although spiking output was similar between these cells, they differed drastically in other electrophysiological respects.
Finally, we combined these two approaches and labeled groups of similarly spiking cells in correlograms of the electrophysiological values that were found to be the best linear predictors of cell spiking output, as described above. These variables included activation potentials for sodium and slow potassium voltage-gated ionic currents (Figure 8D); passive properties, such as membrane resistance and capacitance (Figure 8E); and ionic current amplitudes (Figure 8F). Although the spiking responses of cells (Figure 8B) were similar within each group and strikingly different between groups, the corresponding markers formed neither clusters nor layered structures indicative of low-dimensional constraints that could link different properties together (Figure 8D,E,F). This suggests that in our system, cells with similar spiking phenotypes may have very diverse underlying electrophysiological properties, and conversely, cells that are strikingly different in their spiking output can have very similar low-level physiological properties (Figure 8, black and red points respectively).
Discussion
In this study we systematically assessed cell-to-cell electrophysiological variability of primary neurons in the optic tectum of Xenopus tadpoles across several developmental periods and in response to sensory stimulation. Our results indicate that during development cells in the deep layer of the tectum become more diverse — although at the stages we studied they do not split into the distinct non-overlapping cell types that are reported in the tecta of other species and at later stages of development in frogs (Lazar, 1973; Ewert, 1974; Grüsser and Grüsser-Cornehls, 1976; Frost and Sun, 2004; Kang and Li, 2010; Nakagawa and Hongjian, 2010; Liu et al., 2011). We also found that several key electrophysiological properties of tectal cells change over development. We confirmed previously described changes in the average intrinsic excitability of tectal cells with age (Pratt and Aizenman, 2007), and showed that at these stages most physiological differences between cells are linked to their overall spikiness (based on the results of Principal Variable Analysis, Principal Component Analysis, and the comparison of statistical efficiency of different protocols).
More importantly, we report an increased diversification of cell phenotypes at later developmental stages, and a shrinkage of this diversity in response to strong sensory stimulation. Cell-to-cell variability remained relatively low at stages 43–47, and different electrophysiological parameters were more random with respect to each other, both in terms of clustering and in terms of linear interdependencies between variables. By stages 48–49 cell variability in the tectum increased, and some internal structure in the PCA cloud began to emerge, with patterns of cell properties agglomerating into clusters which, although poorly resolved in the PCA plot, were noticeable through quantitative clustering analysis. Complementing previously described receptive field refinement (Dong et al., 2009) and temporal decorrelation of spiking activity (Xu et al., 2011), this tuning and differentiation of cell properties likely reflects maturation of tectal networks. The increase in cell tuning variability is reminiscent of reports from other experimental models, including mammalian sensory cortex (Jadhav et al., 2009; Yassin et al., 2010), where non-uniformity of neuronal recruitment thresholds was shown to be a common feature of developed, functional networks (Elstrott et al., 2014).
This emerging structure and differentiation of cell properties was, however, reduced by strongly patterned visual stimulation, which decreased cell-to-cell variability, making neurons more similar to each other electrophysiologically. At the same time, the amount of variance explained by linear correlations between different variables increased after visual stimulation. This suggests that sensory stimulation, and the associated homeostatic plasticity (Aizenman et al., 2003; Dong et al., 2009), left a predictable trace in the mutual arrangement of different physiological properties within each cell (Turrigiano et al., 1994; Dong et al., 2009; Munz et al., 2014). These predictable traces and correlations were then picked up by local factor analysis, making our results similar to reports from the stomatogastric ganglion model (O'Leary et al., 2013). We also show that the shift in neuronal excitability induced by visual stimulation was supported by different underlying electrophysiological properties than were the similar changes in excitability observed during development.
Among the practical consequences of this study, we point to developmental stage 47 as a likely candidate for a critical tuning period in tectal network maturation. We describe a previously undocumented sharp, transient increase in the excitability of tectal cells during stage 47, providing an explanation for the previously unarticulated practice of aggregating developmental data over stages 45–46 and 48–49 while avoiding pooling stage 47 neurons with other stages (Pratt et al., 2008; Deeg et al., 2009; Dong et al., 2009; Sharma and Cline, 2010; Xu et al., 2011; Khakhalin and Aizenman, 2012; Spawn and Aizenman, 2012). This transient developmental stage, which lasts only about 12–18 hr and is traditionally defined solely on the basis of embryonic morphology (Nieuwkoop and Faber, 1994), was accompanied by rapid changes in cell tuning variability and a powerful (almost two-fold) increase in cell excitability. This intriguing developmental pattern could be explored in the future as a model for a critical period in development.
Finally, our analysis of neurons with similar spiking outputs demonstrated that strikingly different combinations of underlying low-level electrophysiological properties can lead to similar spiking phenotypes, and conversely, that the predictability of spiking phenotype from any small set of cell properties is low. This reinforces the notion that the conflicting biological goals of developmental flexibility and stability in response to perturbations rely on the redundancy of parameters underlying the dynamic behavior of these systems (Marder and Taylor, 2011; Marder et al., 2014). The consequence of this redundancy is that multiple parameter configurations can produce phenomenologically identical patterns of network activation (Goaillard et al., 2009; Caplan et al., 2014).
Altogether, our results provide a promising framework for studying mechanisms of network maturation and calibration in Xenopus tadpoles, as well as a unique dataset that will be helpful to inform computational modeling of the optic tectum. In future studies we plan to combine electrophysiological identification of single cells with the transcriptional mapping of relevant genes (Nelson et al., 2006; Schulz et al., 2006) to further advance our understanding of the molecular biology underlying development and plasticity in dynamic systems.
Materials and methods
Animals and housing
Wild-type Xenopus laevis adults were bred overnight through natural mating in the Brown University animal care facility. Females were primed with 800 U human chorionic gonadotropin (hCG); males were primed with 300 U hCG ([1000 U/mL]; Sigma-Aldrich; St. Louis, MO). Embryos were collected the following day, cleaned by removal of unhealthy/unfertilized oocytes, and kept in a variant of 10% Steinberg's Solution (also known as ½x MR) [in mM: 5.8 NaCl, 0.067 KCl, 0.034 Ca(NO_{3})_{2} · 4H_{2}O, 0.083 MgSO_{4} · 7H_{2}O, 5 HEPES; pH 7.4] in incubators at 18–21°C under a 12:12 light:dark cycle. Developmental stages were determined according to Nieuwkoop and Faber (Nieuwkoop and Faber, 1994). Under our rearing conditions, tadpoles reach stages 44–46 at 9–12 days post-fertilization (dpf), and stage 48/49 at 18–20 dpf. Animals between stages 43 and 49 were used in experiments. Tadpoles used to characterize the development of tectal electrophysiological properties were taken directly from the 18–21°C incubators, while the stage 49 tadpoles used to assess homeostatic changes in the tectum were first placed in a custom black acrylic box with four rows of four green LEDs flashing in sequence at 1 Hz for 4 hr.
Tadpole brains were prepared as described in (Aizenman et al., 2003). All experiments were performed between ZT 3–9 (10:00–16:00 EST), where ZT 0 is lights-on for a diurnal animal. In brief, tadpoles were anesthetized with 0.02% (w/v) tricaine methanesulfonate (MS-222) in 10% Steinberg's solution, and brains were then dissected out in HEPES-buffered extracellular medium (containing, in mM: 115 NaCl, 4 KCl, 3 CaCl_{2}, 3 MgCl_{2}, 5 HEPES, 10 glucose, 10 µM glycine; pH 7.2 at 255 mOsm/kg). To access the soma layer of the tectum, brains were filleted along the dorsal midline and extracted for pinning to a submerged block of Sylgard 184 Silicone Elastomer (Dow Corning; Midland, MI) in a custom recording chamber at room temperature (23°C). Using a large-bore glass electrode, the ventricular membrane was suctioned away to reveal the tectal cell body layer.
Electrophysiology
Tectal cells were visualized using a Nikon (Tokyo, Japan) FN1 light microscope with a 60x water-immersion objective. While a visually heterogeneous population of tectal neurons was selected, care was taken to patch only those principal tectal neurons that looked healthy (clear, no granulation) and to avoid particularly large cells whose size and shape suggested they might be mesencephalic trigeminal neurons (Pratt and Aizenman, 2009).
To ensure valid comparisons across stages of development, we restricted our recordings to the middle third of the tectum, thus reducing developmental variability along the rostrocaudal axis (Wu et al., 1996; Khakhalin and Aizenman, 2012; Hamodi and Pratt, 2014). All cells were recorded within 3 hr of dissection. Drugs and chemicals were obtained from Sigma (SigmaAldrich; St. Louis, MO).
Glass electrodes were pulled on a Sutter P97 or P1000 puller (Sutter Instruments; Novato, CA) from either Corning 7056 thin wall capillary glass tubing (G75165T4, Warner Instruments; Hamden, CT) or Sutter thick wall capillary glass tubing (B1508610) to a tip resistance of 8–12 MΩ. The electrodes were then filled with a K-gluconate-based intracellular saline (containing, in mM: 100 K-gluconate, 5 NaCl, 8 KCl, 1.5 MgCl_{2}, 20 HEPES, 10 EGTA, 2 ATP, 0.3 GTP; pH 7.2 at 255 mOsm/kg) through a 200 nm syringe filter (#171, Nalgene-Thermo; Waltham, MA). Filled electrodes were placed in an Axon headstage containing an AgCl wire and controlled by a motorized micromanipulator (MX7600R and MC1000eR, Siskiyou; Grants Pass, OR). The electrode was lowered into the chamber with slight positive pressure in the pipette until the target cell was contacted, at which point negative pressure was applied to form a high-resistance seal (>1 GΩ), and then again to break through the neuronal membrane. Whole-cell patch clamp electrophysiological signals were measured with an Axon Instruments MultiClamp 700B amplifier, filtered with a 5 kHz bandpass filter, digitized at 10 kHz by an Axon Instruments DigiData 1440A, and acquired with pCLAMP 10 software (Molecular Devices; Sunnyvale, CA).
Upon initial whole-cell patch, a seal test and membrane test measured the following parameters: cell membrane capacitance (Cm), cell membrane resistance (Rm), access resistance (Ra), and the holding current (I hold) necessary to keep the membrane at −65 mV. Each clamped neuron was subjected to a series of voltage clamp and current clamp protocols to assess its characteristics. First, cells were held at −65 mV in voltage clamp to ensure that Na^{+} channels were fully de-inactivated. Cells were then stepped for 150 ms to membrane potentials from −65 mV to +115 mV in 20 mV increments to obtain the data for IV curves. Cells were then switched to current clamp and subjected to ten square pulses of current from 0 pA to 180 pA in 20 pA increments. We next explored potential resonances in tectal neurons (Hutcheon and Yarom, 2000) by probing their ability to fire in response to cosine current injections of varying frequencies. Each sweep contained five separate 200 ms bouts of cosine injections with frequencies of 100, 50, 30, 25, and 20 Hz, and a peak amplitude of 135 pA. To assess the health of the neurons, they were switched back to voltage clamp and subjected to the IV-curve voltage step protocol once again. After that, spontaneous activity was continuously recorded for one minute at a −45 mV holding potential. Continuing to hold at −45 mV, cells were then subjected to synaptic stimulation protocols, in which the optic chiasm (OCh) was stimulated with a bipolar stimulating electrode (FHC, Bowdoin, ME) to activate retinal ganglion cell axons. Five stimuli of 150 µA to 800 µA and 180 µs duration were provided at varying frequencies, with interstimulus intervals of 10, 20, 30, 40, 50, 100, 150, 200, 250, and 300 ms; the protocol was repeated 5 times. Finally, the IV-curve voltage step protocol was repeated a third time to ensure the health of the cell and the stability of the data.
The microscope was then switched to the 10x objective, and the cell location was recorded (Khakhalin and Aizenman, 2012). The cell was suctioned away from the tissue and the process was repeated for up to 6 neurons. Some cells did not survive the entire series of protocols, but as long as the nearest IV-curve recording from these cells was stable, they were included in the dataset. No potentials reported in this paper were corrected for the expected junction potential of +12 mV.
Data processing and analysis
Here we introduce and enumerate the variables that were included in the analysis and are presented below. In total, up to 33 different variables were measured for each cell: 4 variables from a standard seal test; 6 from the IV-curve protocol; 10 from the step current injection protocol; 6 from the protocol of cosine-shaped current injections of different frequencies; 5 from the synaptic stimulation protocol; and 2 from the recording of spontaneous postsynaptic potentials. Data analysis was performed in MATLAB (Mathworks, MA) and RStudio.
Passive electric parameters
Based on the standard seal test, as implemented in pClamp 10, cell capacitance (variable #1: Cm, pF), membrane resistance (variable #2: Rm, GΩ), and access resistance (#3: Ra, MΩ) were measured. We did not measure the resting membrane potential Em of each cell, but recorded the holding current (#4: I hold, pA) required to keep the cell membrane at −65 mV. This value is linked to Em by a simple linear equation, I hold = (−65 − Em)/Rm, and thus can be used as a substitute for Em in exploratory analysis. Based on our data, Em in naïve cells was −49 ± 13 mV and did not change over development (P_{ANOVA}>0.05); in stimulated cells, Em increased to −35 ± 27 mV, a significant change (P_{ANOVA} = 1e−3) that follows from the significant changes in I hold (see Results). It is also important to note that tectal cells are relatively small, so the value of Em is dominated by the ionic concentrations of the internal and external solutions; the original resting membrane potential is disrupted within 2–3 seconds after the whole-cell patch is established (Khakhalin and Aizenman, 2012).
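The linear relation above can be inverted to recover an Em estimate from the recorded holding current; a minimal sketch (the function name is ours). Note that pA × GΩ = mV, so the units cancel conveniently:

```python
def em_from_hold(i_hold_pA, rm_GOhm, v_hold_mV=-65.0):
    """Resting potential Em (mV) implied by I hold = (v_hold - Em) / Rm.
    Inputs: holding current in pA, membrane resistance in GΩ."""
    return v_hold_mV - i_hold_pA * rm_GOhm

# A cell with Em = -49 mV and Rm = 1 GΩ needs I hold = (-65 + 49)/1 = -16 pA:
print(em_from_hold(-16.0, 1.0))  # -> -49.0
```

A cell that requires zero holding current is, by this relation, resting exactly at the −65 mV holding potential.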
IV protocol
It was previously shown (Aizenman et al., 2003) that in Xenopus tadpole tectal neurons the Na^{+} and K^{+} ionic currents are temporally isolated enough to allow their simultaneous measurement from traces produced by brief membrane depolarization in voltage-clamp mode (Figure 1A). A time window from 0 to 18.5 ms after the membrane potential change was used to estimate peak sodium (I_{Na}) and potassium currents (Figure 1A, right); an average current over the window from 116.5 to 136.5 ms after the potential change was used as an estimate of the steady-state potassium current (I_{KS}; Figure 1A, left). The amplitude of the transient potassium current I_{KT} was estimated as the difference between the peak potassium current within the first window and the steady-state potassium current. To quantify the IV curves for these currents, for each cell the curves were fit with a smooth function of membrane potential, and the parameters of this fit function were reported. For the I_{Na} and I_{KT} currents (Figure 1B,D) fit curves were defined as
where v is the depolarization step potential and a–d are parameters. The potential at which each of these ionic currents reached ½ of its maximal value, and the maximal current (maximum of the fit curve), were used as variables #5 (I_{Na} activation, mV); #6 (I_{Na}, pA); #9 (I_{KT} activation, mV); and #10 (I_{KT}, pA). The IV curve for the I_{KS} current (Figure 1C) was fit with a model
where v is the potential, and a–d are parameters. The first potential at which I_{KS} was activated and the fit curve maximum were reported as variables #7 (I_{KS} activation, mV) and #8 (I_{KS}, pA). Note that 'activation potentials' are empirical potentials at which macroscopic ionic currents were activated; they are likely to differ from the threshold potentials of individual channels, both because of the different mathematical definition of these potentials, and because macroscopic activation potentials are also affected by the geometry and cable characteristics of each tectal cell.
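The paper's exact fit functions are not reproduced in this extract; purely as an illustration of the fitting step, here is a sketch that fits synthetic I-V readings with a generic Boltzmann-style activation curve (a stand-in we chose, not the paper's actual model) and reads off a maximal current and a half-activation potential:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, i_max, v_half, k):
    """Generic sigmoid activation curve: current vs step potential (mV)."""
    return i_max / (1.0 + np.exp(-(v - v_half) / k))

# Synthetic I-V readings at 10 mV steps, roughly as in the protocol
v = np.arange(-65.0, 56.0, 10.0)
i_true = boltzmann(v, 400.0, -20.0, 8.0)               # "true" curve, pA
i_obs = i_true + np.random.default_rng(1).normal(0, 5, v.size)

popt, _ = curve_fit(boltzmann, v, i_obs, p0=[300.0, -10.0, 10.0])
i_max, v_half, k = popt
print(f"max current ~{i_max:.0f} pA, half-activation ~{v_half:.1f} mV")
```

The same recipe — fit a smooth parametric curve, then report the curve maximum and a half- or first-activation potential — applies to each of the three currents described above.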
Because the IV-curve experiments tested holding potentials at 10 mV increments, and because activation of voltage-gated currents was sharp, the distribution of ionic current activation potential estimates had modes around discrete values separated by 10 mV. Raw data were used for all calculations, but in Figure 8D, to avoid overplotting, we homeomorphically transformed the data by first rank-transforming it, and then scaling it back from ranks to original readings in mV using a least-squares best-fit cubic polynomial. This mapping strictly preserved the relative arrangement of points, but made the local density of points in Figure 8D less banded.
Step current injection protocol
For each step current injection (Figure 1E), the number of evoked spikes, their amplitudes, and their latencies at peak were measured. Spikes were detected automatically through adaptive filtering with subsequent thresholding, which discriminated against spikelet shapes that were either too small or too broad. All results of spike detection were also visually verified by two people blinded to neuronal identity. The amplitude of each spike was measured as the difference between the peak potential during the spike and the potential at the kink point, defined as the point at which the 2nd derivative of potential over time goes through a maximum (Figure 1F). For every spike, rise time (10% to 90% of the potential increase from kink point to peak) and width at half-height (measured at the potential halfway between the kink point and the peak) were measured automatically (Figure 1F). As the current injection ended and the neuron repolarized, the shape of the repolarization potential curve was fit with an exponential.
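The kink-point definition can be illustrated with a short sketch on a synthetic trace (the function, trace, and thresholds are ours; the real pipeline also used adaptive filtering and manual verification):

```python
import numpy as np

def spike_features(trace_mV, peak_idx, dt_ms=0.1):
    """Locate the 'kink point' (maximum of the 2nd derivative of potential
    before the spike peak) and measure spike amplitude from it, as in the text."""
    seg = trace_mV[:peak_idx + 1]
    d2 = np.gradient(np.gradient(seg, dt_ms), dt_ms)  # 2nd derivative, mV/ms^2
    kink = int(np.argmax(d2[:-2]))                    # skip samples at the peak
    return {"kink_idx": kink,
            "amplitude": trace_mV[peak_idx] - trace_mV[kink],
            "half_height": (trace_mV[peak_idx] + trace_mV[kink]) / 2.0}

# Toy 'spike': a slow depolarizing ramp followed by a sharp upstroke at t = 7 ms
t = np.arange(0, 10, 0.1)                 # ms, sampled at 10 kHz
v = -65 + 0.5 * t                         # slow ramp
v[70:] = -60 + 25 * (t[70:] - 7.0)        # abrupt upstroke
peak = int(np.argmax(v))
feats = spike_features(v, peak)
```

On this toy trace the curvature maximum lands at the onset of the upstroke, which is exactly the point from which amplitude and width at half-height are measured.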
For every cell the following properties were reported: the median time constant of repolarization after current injection as variable #11 'Tail' (Figure 1E); the potential of the kink point of the first spike generated at the smallest current injection as #12 'Spike threshold' (Figure 1E, blue traces); the amplitude of the first spike at the smallest current injection as #13 'Spike amplitude'; and its rise time and width at half-height as #14 'Spike risetime' and #15 'Spike width', respectively (Figure 1F).
The number of evoked spikes as a function of injected current (Figure 1G) was fitted with a curve:
where i is the current, and a–d are fit parameters. From this fit two variables were estimated: the current that generated maximal spiking (defined as the x-coordinate of the fit curve maximum) as #16 'I best', and the weighted maximal spiking, estimated as the maximum of the fit curve, as #17 'N spikes, step'.
For the trace that produced the highest number of spikes and was closest to the inferred 'optimal current' (Figure 1E, black trace), and if more than one spike was generated, we reported the interspike interval in ms (#18, 'Spike ISI'). If more than two spikes were generated, we also reported the ratio between the 2nd and the 1st interspike intervals (#19, 'Spike ISI accommodation'). For cells that generated at least 2 spikes, the amplitude ratio of the 1st and the 2nd spikes in the train was reported as #20 'Spike accommodation' (Figure 1E).
To identify cells that produced similar spike trains in response to step current injections (as presented in Figure 8), we used a standard cost-based metric of spike-train similarity (Victor and Purpura, 1996, 1997) with a cost of 0.1/ms for spike-timing adjustments; this procedure is sensitive to both the number of spikes and their latencies. For this calculation, responses to step current injections of all amplitudes were combined and treated as one long recording (similar to that shown in Figure 1H, but for step, rather than cosine, injections).
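The Victor-Purpura metric has a standard dynamic-programming formulation; a minimal sketch, with the cost q = 0.1/ms used in the text and unit cost for inserting or deleting a spike:

```python
import numpy as np

def victor_purpura(t1, t2, q=0.1):
    """Victor-Purpura distance between two spike trains (times in ms).
    Moving a spike by dt costs q*dt; adding or deleting a spike costs 1."""
    n, m = len(t1), len(t2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)   # delete all spikes of train 1
    D[0, :] = np.arange(m + 1)   # insert all spikes of train 2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1,                       # delete
                          D[i, j - 1] + 1,                       # insert
                          D[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return D[n, m]

print(victor_purpura([10.0], [12.0]))        # move one spike by 2 ms -> 0.2
print(victor_purpura([10.0, 20.0], [10.0]))  # one unmatched spike   -> 1.0
```

A matrix of such pairwise distances between cells is exactly what was fed into multidimensional scaling to produce the 2D map in Figure 8A.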
Cosine current injections of different frequencies
To assess dynamic resonances in tectal neurons (Tan and Borst, 2007), we injected them with cosine-shaped currents of different frequencies (Figure 1H). Spikes were detected in MATLAB and verified visually, as for step injections. The number of spikes as a function of wave period (Figure 1J) was fit with the formula:
where T stands for the wave period, and a–d are optimization parameters. From this fit, the optimal wave period in ms was estimated (variable #22, 'Spiking resonance'). The mean number of spikes averaged across waves of all frequencies was reported as #21 'N spikes, cosine'. The time constant of spike-output buildup with increasing cosine injection period (parameter b from the formula above) was reported as a measure of spike non-inactivation in response to low-frequency currents (#23: 'Spiking resonance width').
For the highest frequency of cosine current injections (100 Hz), the number of spikes in response to each current wave, taken as a function of wave number ns(x), was fit with the formula:
where x is a continuous independent variable interpolating integer wave numbers, and a–e are fit parameters (Figure 1K). From this fit we inferred properties of spiking activation and inactivation in each cell, and reported the wave number that was expected to produce the highest spiking in this set based on the fit (#24, 'Wave buildup'), and the speed of spike number adaptation, given by the c parameter from the formula above (#25, 'Wave decay').
Cosine current injections of different frequencies were also used to estimate an index of spiking unpredictability, or jitter (#26, 'Jitter'). To estimate this value, the 10 different spike trains generated in response to cosine current injections (Figure 1I) were represented by δ-functions in a 10 kHz trace, convolved with a Gaussian (σ = 2 ms), and normalized. Pairwise scalar products (correlations) were calculated for each suitable pair of traces; these correlations were then averaged across all sweep pairs. The resulting value represents a measure of spike-time consistency, as it would equal 1 for identical trains and approach 0 for perfectly unmatched trains. To move from a 'consistency index' to a 'jitter index' we took the reciprocal of this value and calculated its natural logarithm, so that jitter = −ln(C̄), where C̄ is the average pairwise correlation between the smoothed, normalized spike trains.
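A sketch of this jitter computation on synthetic spike trains (times in ms; sampling rate and kernel width as described in the text, all other names ours):

```python
import numpy as np

def jitter_index(trains, dur_ms=1000.0, fs_khz=10.0, sigma_ms=2.0):
    """Jitter = -ln(mean pairwise correlation of Gaussian-smoothed,
    unit-normalized spike trains), per the procedure described above."""
    n = int(dur_ms * fs_khz)
    t = np.arange(-5 * sigma_ms, 5 * sigma_ms, 1.0 / fs_khz)
    kernel = np.exp(-t**2 / (2 * sigma_ms**2))
    smoothed = []
    for spikes in trains:
        trace = np.zeros(n)                              # delta functions
        idx = (np.asarray(spikes) * fs_khz).astype(int)
        trace[idx[idx < n]] = 1.0
        s = np.convolve(trace, kernel, mode="same")      # Gaussian smoothing
        smoothed.append(s / (np.linalg.norm(s) + 1e-12)) # normalize
    sims = [smoothed[i] @ smoothed[j]                    # pairwise products
            for i in range(len(smoothed)) for j in range(i + 1, len(smoothed))]
    return -np.log(np.mean(sims) + 1e-12)

same = jitter_index([[100, 300, 500]] * 3)                          # ~0
shifted = jitter_index([[100, 300, 500], [110, 310, 510], [120, 320, 520]])
```

Identical trains give a consistency of 1 and hence a jitter near 0; trains shifted by much more than σ give near-zero correlations and a large positive jitter.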
Synaptic stimulation protocol
For responses to synaptic stimulation (Figure 1J), we removed stimulation artifacts, averaged trials with the same interstimulus intervals (ISIs), and calculated the total synaptic charge for each of the ISIs. The total charge Q as a function of ISI (Figure 1M) was fit with a function:
where τ stands for the ISI value in ms, and a–e are fit parameters. From this fit we reported the estimated optimal interstimulus interval for producing maximal synaptic input as #27 'Synaptic resonance' (ms); the parameter c from the fit formula as #28 'Synaptic resonance width' (a measure of the sharpness of synaptic frequency tuning); and the maximal total synaptic charge produced in a cell as #29 'Synaptic charge' (pA·s). We also calculated the ratio between the maximal total synaptic charge observed in each cell and the projected total charge in response to infinitely slow stimulation (max Q/e from the fit formula above). We dubbed this variable #30 'Synaptic PPF' and interpreted it as a measure of non-linear synaptic summation, even though, unlike actual paired-pulse facilitation (PPF), our measure was based on five, not two, stimuli; involved a ratio of total charges; and was reported for the best interstimulus interval out of the 10 different intervals we tried for each cell.
Finally, traces with longer interstimulus intervals (100, 150, 200, 250 and 300 ms) were used to compare the average amplitude of the early monosynaptic component of the response (measured between 5 and 14 ms after the shock at the optic chiasm) to that of the polysynaptic recurrent responses that occurred later after the stimulus (observed from 15 to 145 ms after the shock; Figure 1N). The ratio of average currents over these two time windows was interpreted as a measure of the relative strength of monosynaptic and polysynaptic (recurrent) inputs to each cell; it was reported as variable #31, 'Monosynapticity'.
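The window-ratio computation might look like this (a sketch on a synthetic current trace; the window boundaries are those given in the text, sampling at 10 kHz, and the function name is ours):

```python
import numpy as np

def monosynapticity(trace_pA, stim_idx, fs_khz=10.0):
    """Mean current in the early monosynaptic window (5-14 ms post-stimulus)
    divided by the mean in the late polysynaptic window (15-145 ms)."""
    early = trace_pA[stim_idx + int(5 * fs_khz): stim_idx + int(14 * fs_khz)]
    late = trace_pA[stim_idx + int(15 * fs_khz): stim_idx + int(145 * fs_khz)]
    return np.mean(early) / (np.mean(late) + 1e-12)

# Synthetic response: strong early component, weaker recurrent tail
trace = np.zeros(2000)
trace[50:140] = 10.0    # 5-14 ms at 10 kHz: monosynaptic component
trace[150:1450] = 2.0   # 15-145 ms: polysynaptic tail
ratio = monosynapticity(trace, stim_idx=0)  # -> ~5.0
```

A large ratio indicates a cell dominated by direct retinal input; a ratio near or below 1 indicates strong recurrent drive.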
Spontaneous synaptic events
For 64 cells we recorded spontaneous synaptic activity, and calculated the average frequency (#32, 'Minis frequency') and amplitude (#33, 'Minis amplitude') of spontaneous excitatory postsynaptic currents. While spiking activity in the preparation was not blocked, and some of these spontaneous events could therefore have been influenced by background spiking in the network, our previously published results show that these data can be used as a good proxy for 'true' miniature postsynaptic potentials (Pratt and Aizenman, 2007).
Potential sources of variability
To better understand the sources of variability in cell properties, we checked whether the values of any of the 33 variables we measured correlated with cell location within the tectum, as previously described (Wu et al., 1996; Khakhalin and Aizenman, 2012; Hamodi and Pratt, 2014), despite our attempts to consistently sample from the middle third of the tectum. For tadpoles at stages 48–49 from the main dataset (see below), only two variables significantly correlated with rostrocaudal distance in our preparation (after FDR adjustment of P_{corr} with α = 0.05): membrane capacitance Cm (r = −0.41, N = 46) and membrane resistance Rm (r = −0.42, N = 46). While comparison of rostrocaudal distances between different developmental stages is problematic, there was no difference in the rostrocaudal distances of sampled cells between the main (naïve) and visually stimulated experimental sets at stages 48–49 (P_{MW} = 0.3, N = 20 and 36 respectively). We also checked whether the age of the preparation (time since dissection, in hours) or the time of day (assuming the potential presence of circadian or diurnal rhythmic modulation in the tectum) affected any of our variables (based on significant correlation after FDR adjustment with α = 0.05). For naïve tadpoles at stages 48–49, preparation age did not affect any of the variables we measured, and none of the variables were affected by the time of day at which cells were recorded.
Statistical procedures
As many of the variables analyzed in this paper were not normally distributed, and to stay consistent, we preferred Mann-Whitney two-sample tests for comparing data between groups; p-values of this test are reported as P_{MW}. At the same time, to make referencing and subsequent meta-analysis possible, we always report means and standard deviations in the text. Other statistical tests used in this study include Pearson correlation (P_{corr}), one-way ANOVA (P_{ANOVA}), the multiple linear regression test (P_{F}), the Welch generalization of Student's t-test (P_{TT}), and the F-test for equality of variance (P_{V}). When exploring potential correlations between variables we used the False Discovery Rate (FDR) procedure to adjust for multiple testing, keeping the false discovery rate at 0.05 (Benjamini and Hochberg, 1995). The FDR procedure was performed separately for each massive set of comparisons, as indicated in the text. In post-hoc pairwise comparisons of data groups after ANOVA tests we used the more conservative Bonferroni correction, but report uncorrected p-values. To illustrate correlation levels between variables (Figure 2A) we used a custom MATLAB script inspired by the circular diagram from the 'Circos' visualization package (Krzywinski et al., 2009). As not every pair of variables was available in every cell, different correlation tests were run on different numbers of points (from 38 to 154; median of 108). We verified our Pearson correlation-based analysis by calculating Spearman correlation coefficients; after FDR, 112 Spearman correlations were significant (as opposed to 90 for Pearson), and all Pearson correlations with r > 0.5 (n = 11), including all shown in Figure 2B, remained significant in the Spearman calculation.
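The Benjamini-Hochberg step-up procedure used for the FDR adjustment is straightforward to implement; a minimal sketch (our own implementation, for illustration — the analysis itself could equally use a library routine):

```python
import numpy as np

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: boolean mask of rejected hypotheses.
    Reject the k smallest p-values, where k is the largest rank i
    such that p_(i) <= alpha * i / m."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject

print(fdr_bh([0.01, 0.02, 0.03, 0.5]))  # first three rejected
```

Note the step-up property: a p-value that fails its own threshold is still rejected if some larger p-value passes a later one, which is why BH is less conservative than Bonferroni.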
We ran the Principal Variables Analysis in R — package 'subselect' (McCabe, 1984; Cadima and Jolliffe, 2001; Cadima et al., 2004) — using the correlation matrix of the original non-imputed data, which was computed on all available pairwise observations for each pair of variables. We then compared and presented squared RM coefficients (Cadima and Jolliffe, 2001), to make the results of this analysis comparable to those of the factor analysis below.
As not all measurements were available for every cell in the dataset, for factor analysis we used a generalization of the standard Principal Component Analysis (PCA) procedure, called Variational Bayesian PCA, or PCA with missing values (PCA-MV; Ilin and Raiko, 2010; MATLAB m-files are available at the website of the Bayesian Group, Aalto University, Finland). To verify that the PCA-MV procedure is applicable to our data, we selected a subset of 52 cells and 27 variables to form a full matrix free of missing values. A standard PCA was then run on this reduced data set, and its results were compared to the results of PCA-MV run on the full data set. The scree plot of a standard PCA on the restricted data suggested that the first two components (explaining 20% and 16% of variation respectively) could be interpreted meaningfully, with the remaining components representing individual variability of the data, or 'noise'; see (Shabalin and Nobel, 2013) for references. We therefore only present the first two PCA-MV components in this paper (explaining 15% and 8% of total variance respectively). Component scores found by standard PCA and PCA-MV on the same subset of cells (N = 52) were highly correlated (r = 0.90 and 0.85 for components 1 and 2 respectively; p < 3e−15), suggesting that PCA-MV is indeed applicable to our data set.
As 18% of all possible measurements in our dataset were missing, the total share of explained variance (23%) was almost certainly underestimated, and cannot be compared to the variance explained by a standard PCA. This follows from the fact that the estimate of total variance in a dataset with randomly missing values is unbiased, whereas explained variance is biased downward, as missing values cannot contribute to its calculation: while predictions for them are available, the values themselves are not. As described above, a standard PCA run on a subset of data with full representation of all cells and variables explained 36% of total variance. Similarly, when we performed multiple imputation of missing values using the R package 'mi' (Su et al., 2011) (see below for details), a standard PCA procedure explained on average 35 ± 5% of total variance.
Some of the variables we assessed were distributed non-normally, so we also ran PCA-MV on renormalized rank-transformed variables and compared the results of this analysis to those of PCA-MV run on raw variables. Rank-based normalization improved the amount of variance explained by the first component (from 15% on raw data to 21% on rank-transformed data), but did not improve the explanatory value of higher components. Upon visual comparison of score-plots and loading-plots, we concluded that the relative arrangement of individual cells (score-plot), as well as of contributing variables within the 2D plane of the first two components (loading-plot), did not change enough to justify the use of rank-transformation. All analyses reported in the paper were therefore performed on raw variables.
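Rank-based normalization of a skewed variable can be sketched as a rank-to-normal-quantile map. The Blom-style offset below is a common generic choice, assumed here for illustration; the paper does not specify the exact renormalization used:

```python
import numpy as np
from scipy import stats

def rank_normalize(x):
    """Map values to standard-normal quantiles by rank (Blom offsets),
    yielding an approximately normal marginal distribution."""
    x = np.asarray(x, dtype=float)
    ranks = stats.rankdata(x)                       # average ranks for ties
    return stats.norm.ppf((ranks - 0.375) / (len(x) + 0.25))

# A strongly right-skewed toy variable becomes approximately normal
skewed = np.random.default_rng(2).lognormal(size=500)
z = rank_normalize(skewed)
```

The transform is monotone, so it preserves the ordering of cells while removing skew, which is why it mainly reshapes, rather than rearranges, the PCA score-plot.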
While linear approaches to factor analysis, such as PCA or Multidimensional Scaling, are usually considered safe and preferable methods for noisy and weakly correlated data (Nowak et al., 2003; Sobie, 2009; McGarry et al., 2010), we compared the performance of PCA to the two most popular nonlinear 2D ordination approaches: Isomap and Local Linear Embedding. To quantify the quality of a 2D ordination we looked at how well the 2D map preserved pairwise differences between points in the original 33-dimensional space, using the squared correlation coefficient R² between 2D and 33D distances as the output measure (Pedhazur, 1982). By this metric, PCA preserved 51 ± 2% of variance in pairwise differences (5 alternative Bayesian imputations of missing data using the R package 'mi'). The quality of the Isomap projection improved as the projection became less local, from 17 ± 4% for Isomap based on the 3 closest neighbors of each point to 36 ± 7% based on the 20 closest neighbors, but it remained substantially lower than for PCA. The Local Linear Embedding approach (R package 'lle', based on Kouropteva et al., 2005) also produced better results as more neighbors were considered, with the best local solution, achieved at 19 neighbors, explaining 27 ± 9% of variance in pairwise differences (as opposed to 51 ± 2% for PCA). Based on these results we concluded that for our data the linear factor analysis approach is not only adequate, but also the most appropriate. In all cases testing was performed on centered and normalized data.
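The ordination-quality metric is simply the squared Pearson correlation between the vector of pairwise distances in the original space and the corresponding vector in the 2D projection. A self-contained sketch on synthetic 33-dimensional data (the anisotropic toy cloud is invented for illustration):

```python
import numpy as np
from scipy.spatial.distance import pdist

def ordination_quality(X_high, X_low):
    """Squared Pearson correlation between pairwise distances in the original
    space and in the low-dimensional projection."""
    d_high, d_low = pdist(X_high), pdist(X_low)
    return np.corrcoef(d_high, d_low)[0, 1] ** 2

# Toy anisotropic 33-dimensional cloud, projected to 2D by PCA
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 33)) @ np.diag(np.linspace(3, 0.2, 33))
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pca2d = Xc @ Vt[:2].T                       # scores on the first two components
q = ordination_quality(Xc, pca2d)
```

An identity "projection" scores exactly 1 by this metric, and projections onto low-variance directions score near 0, which brackets the 17–51% values reported above.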
We also attempted restricting the number of variables included in the PCA by prescreening them based on their Principal Variables rank, leaving only variables that explained high amounts of total variance in the dataset. At 17 variables (one half of the original set, corresponding to a total-explained-variance threshold of 6%), PCA-MV explained 40% of total variance in the set (as opposed to 23% for full-data PCA-MV), and some of the effects we describe in the paper became more prominent (for example, the F-value for changes in PCA cloud size across developmental stages increased from 75 for the full set to 118 for the restricted set). However, we decided not to present the PCA of restricted data in the paper, as thinning out a multivariate dataset is generally not recommended for exploratory analysis when there is no objective post-hoc test to justify the use of one restricted model over another (Guyon and Elisseeff, 2003). We therefore report it here only as another validation of the method.
To simplify interpretation of loading- and score-plots, we performed 'promax' oblique rotation of the first two PCA-MV components using the standard 'rotatefactors' routine from the MATLAB Statistics Toolbox. This approach first maximizes the varimax criterion using orthogonal rotation, and then further simplifies the projection by applying Procrustes oblique rotation, with the orthogonal rotation from the first step as the target.
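The varimax step of this rotation can be sketched with the standard SVD-based iteration. The subsequent Procrustes oblique step of 'promax' is omitted; this is an illustration on an invented loadings matrix, not MATLAB's 'rotatefactors':

```python
import numpy as np

def varimax(loadings, tol=1e-8, max_iter=100):
    """Orthogonal varimax rotation of a p-by-k loadings matrix
    (the first step of a 'promax' rotation)."""
    p, k = loadings.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion, expressed in the original frame
        G = loadings.T @ (L ** 3 - L @ np.diag((L ** 2).sum(axis=0)) / p)
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt                           # nearest orthogonal rotation
        if s.sum() < crit_old * (1 + tol):   # converged: criterion stalled
            break
        crit_old = s.sum()
    return loadings @ R

# Toy loadings: two items per factor, deliberately mixed
A = np.array([[0.8, 0.5], [0.7, 0.6], [0.1, 0.9], [0.2, 0.8]])
B = varimax(A)
```

Because the rotation is orthogonal, it preserves each variable's communality (row sum of squared loadings) while concentrating loadings onto fewer components, which is what makes the rotated plots easier to read.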
To compare cloud sizes in the PCA space, we calculated all possible pairwise 2D Euclidean distances between the cells and compared their medians. To illustrate the positions, shapes and spreads of clusters on the PCA score-plots in Figure 5 and Figure 7 we used the Kernel Density Estimation procedure (Botev et al., 2010). To compare cluster sizes in the original 33-dimensional space, we centered and normalized the data, performed 50 alternative Bayesian imputations of missing values using the R package 'mi' (Su et al., 2011), and for each imputation subsampled 10 different sets of 50 points to compensate for small differences in original dataset sizes (n = 64, 56 and 60 for naïve stage 44–45, naïve stage 48–49 and visually stimulated stage 48–49 cells, respectively). For each sampled subset of points we computed city-block ('Manhattan') pairwise distances between all cells in the subset and calculated the median of these values. Finally, we used a t-test to compare the sets of medians between data groups (n = 500 values for each group).
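The subset-and-median computation can be sketched as follows; the two synthetic clouds stand in for the real cell groups, and the subset sizes mirror those in the text:

```python
import numpy as np
from scipy.spatial.distance import pdist

def median_cityblock(X, subset_size=50, n_subsets=10, seed=None):
    """Median pairwise city-block ('Manhattan') distance, computed over
    repeated random subsets of equal size to equalize group sizes."""
    rng = np.random.default_rng(seed)
    medians = []
    for _ in range(n_subsets):
        idx = rng.choice(len(X), size=subset_size, replace=False)
        medians.append(np.median(pdist(X[idx], metric='cityblock')))
    return np.array(medians)

# Toy groups in 33 dimensions: a tight cloud and a broad one
rng = np.random.default_rng(4)
tight = rng.normal(scale=0.5, size=(64, 33))
broad = rng.normal(scale=2.0, size=(60, 33))
m_tight = median_cityblock(tight, seed=5)
m_broad = median_cityblock(broad, seed=6)
```

Subsampling to a common size before computing distances removes the dependence of the median pairwise distance on group size, so the resulting sets of medians can be compared directly.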
To quantify the amount of internal heterogeneity within datasets, we ran agglomerative nesting analysis (AGNES, from the R package 'cluster') and used the agglomerative nesting coefficient as a measure of heterogeneity (Struyf et al., 1996). In practice, we used the same imputation procedure as described above (50 imputations, each contributing to 10 subsets of 50 points each), applied the AGNES clustering procedure to these data, saved the agglomerative clustering coefficients, and finally compared them across data groups. Similarly, for 'local within-group PCA' we used the same imputation and subsetting procedure, ran PCA on each set, and calculated the share of variance explained by the first two components.
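The agglomerative coefficient of AGNES can be recovered from a standard hierarchical-clustering linkage matrix: for each observation, divide the height at which it first merges into a cluster by the height of the final merger, and average one minus these ratios. A sketch assuming average linkage (the R 'cluster' package computes this internally; the toy data are invented):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def agglomerative_coefficient(X):
    """Agglomerative coefficient in the sense of AGNES (Struyf et al., 1996).
    Values near 1 indicate clear cluster structure; lower values indicate a
    more diffuse, heterogeneous cloud."""
    Z = linkage(X, method='average')
    n = X.shape[0]
    first = np.full(n, np.nan)
    for a, b, height, _ in Z:
        for c in (int(a), int(b)):
            if c < n and np.isnan(first[c]):   # first merger of this leaf
                first[c] = height
    return float(np.mean(1.0 - first / Z[-1, 2]))

rng = np.random.default_rng(7)
clustered = np.vstack([rng.normal(0, 0.1, (30, 5)),
                       rng.normal(5, 0.1, (30, 5))])   # two tight clusters
diffuse = rng.normal(0, 1, (60, 5))                    # one diffuse cloud
```

Here `agglomerative_coefficient(clustered)` is close to 1, while the diffuse cloud scores noticeably lower, matching the interpretation of the coefficient as a heterogeneity measure.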
To statistically link the number of spikes produced in response to step injections to the basic electrophysiological properties of each cell, we built a family of general linear models using non-marginal sequential GLM statistics in the R package 'glmulti' (Calcagno and de Mazancourt, 2010).
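Exhaustive information-criterion model selection of the kind 'glmulti' automates can be sketched for the Gaussian case: score every subset of predictors by AIC and keep the best. The predictor names below are invented placeholders; the study's actual models were fit in R:

```python
import itertools
import numpy as np

def ols_aic(X, y):
    """Gaussian-likelihood AIC for an ordinary least-squares fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    n, k = len(y), Xd.shape[1]
    return n * np.log(rss / n) + 2 * (k + 1)   # +1 for the noise variance

def best_subset(X, y, names):
    """Exhaustively score every predictor subset by AIC and return the best."""
    best = (np.inf, ())
    for r in range(len(names) + 1):
        for combo in itertools.combinations(range(len(names)), r):
            aic = ols_aic(X[:, combo], y)
            if aic < best[0]:
                best = (aic, tuple(names[i] for i in combo))
    return best

# Synthetic data: only the 1st and 3rd predictors actually drive the response
rng = np.random.default_rng(8)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=200)
aic, chosen = best_subset(X, y, ['Rm', 'Cm', 'Na_peak', 'K_steady'])
```

With only a handful of candidate predictors, exhaustive search is feasible; 'glmulti' extends the same idea to generalized linear models and much larger model families.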
References
3. Enhanced visual activity in vivo forms nascent synapses in the developing retinotectal projection. Journal of Neurophysiology 97:2949–2957. https://doi.org/10.1152/jn.00452.2006

4. A neuroprotective role for polyamines in a Xenopus tadpole model of epilepsy. Nature Neuroscience 14:505–512. https://doi.org/10.1038/nn.2777

5. Autism and abnormal development of brain connectivity. The Journal of Neuroscience 24:9228–9231. https://doi.org/10.1523/JNEUROSCI.3340-04.2004

6. Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, Series B (Methodological) 57:289–300. https://doi.org/10.2307/2346101

7. Kernel density estimation via diffusion. The Annals of Statistics 38:2916–2957. https://doi.org/10.1214/10-AOS799

8. Animal-to-animal variability in motor pattern production in adults and during growth. The Journal of Neuroscience 25:1611–1619. https://doi.org/10.1523/JNEUROSCI.3679-04.2005

9. Computational aspects of algorithms for variable selection in the context of principal components. Computational Statistics & Data Analysis 47:225–236. https://doi.org/10.1016/j.csda.2003.11.001

10. Glmulti: an R package for easy automated model selection with (generalized) linear models. Journal of Statistical Software 34:1–29. https://doi.org/10.18637/jss.v034.i12

11. Many parameter sets in a multicompartment model oscillator are robust to temperature perturbations. The Journal of Neuroscience 34:4963–4975. https://doi.org/10.1523/JNEUROSCI.0280-14.2014

12. Activity-dependent plasticity in the visual systems of frogs and fish. Trends in Neurosciences 14:104–111. https://doi.org/10.1016/0166-2236(91)90071-2

13. Dendritic arbor development and synaptogenesis. Current Opinion in Neurobiology 11:118–126. https://doi.org/10.1016/S0959-4388(00)00182-3

14. Long-term plasticity of intrinsic excitability: learning rules and mechanisms. Learning & Memory 10:456–465. https://doi.org/10.1101/lm.64103

15. Development of multisensory convergence in the Xenopus optic tectum. Journal of Neurophysiology 102:3392–3404. https://doi.org/10.1152/jn.00632.2009

16. Sensory modality-specific homeostatic plasticity in the developing optic tectum. Nature Neuroscience 14:548–550. https://doi.org/10.1038/nn.2772

17. Homeostatic plasticity in the CNS: synaptic and intrinsic forms. Journal of Physiology, Paris 97:391–402. https://doi.org/10.1016/j.jphysparis.2004.01.005

18. Visual avoidance in Xenopus tadpoles is correlated with the maturation of visual responses in the optic tectum. Journal of Neurophysiology 101:803–815. https://doi.org/10.1152/jn.90848.2008

19. Cellular mechanisms for response heterogeneity among L2/3 pyramidal cells in whisker somatosensory cortex. Journal of Neurophysiology 112:233–248. https://doi.org/10.1152/jn.00848.2013

20. The neural basis of visually guided behavior. Scientific American 230:34–42. https://doi.org/10.1038/scientificamerican0374-34

21. Neural correlates of key stimulus and releasing mechanism: a case study and two concepts. Trends in Neurosciences 20:332–339.

22. An effect size primer: a guide for clinicians and researchers. Professional Psychology 40:532–538. https://doi.org/10.1037/a0015808

23. The biological bases of time-to-collision computation. Advances in Psychology 135:13–37. https://doi.org/10.1016/S0166-4115(04)80004-9

24. Functional consequences of animal-to-animal variation in circuit parameters. Nature Neuroscience 12:1424–1430. https://doi.org/10.1038/nn.2404

26. Neurophysiology of the anuran visual system. In: Frog Neurobiology, pp. 297–385. Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-66316-1_10

27. An introduction to variable and feature selection. The Journal of Machine Learning Research 3:1157–1182.

28. Region-specific regulation of voltage-gated intrinsic currents in the developing optic tectum of the Xenopus tadpole. Journal of Neurophysiology 112:1644–1655. https://doi.org/10.1152/jn.00068.2014

30. Resonance, oscillation and the intrinsic frequency preferences of neurons. Trends in Neurosciences 23:216–222.
32. Practical approaches to principal component analysis in the presence of missing values. Journal of Machine Learning Research 11:1957–2000.

33. Variable selection and the interpretation of principal subspaces. The Journal of Agricultural, Biological, and Environmental Statistics 6:62–79.

34. Sparse temporal coding of elementary tactile features during active whisker sensation. Nature Neuroscience 12:792–800. https://doi.org/10.1038/nn.2328

37. Excitation and inhibition in recurrent networks mediate collision avoidance in Xenopus tadpoles. The European Journal of Neuroscience 40:2948–2962. https://doi.org/10.1111/ejn.12664

39. Incremental locally linear embedding. Pattern Recognition 38:1764–1767. https://doi.org/10.1016/j.patcog.2005.04.006

40. Circos: an information aesthetic for comparative genomics. Genome Research 19:1639–1645. https://doi.org/10.1101/gr.092759.109

42. The development of the optic tectum in Xenopus laevis: a Golgi study. Journal of Anatomy 116:347–355.

43. Neuronal responses to looming objects in the superior colliculus of the cat. Brain, Behavior and Evolution 77:193–205. https://doi.org/10.1159/000327045

44. Variability, compensation and homeostasis in neuron and network function. Nature Reviews Neuroscience 7:563–574. https://doi.org/10.1038/nrn1949

45. Variability, compensation, and modulation in neurons and circuits. Proceedings of the National Academy of Sciences of the United States of America 108(Suppl 3):15542–15548. https://doi.org/10.1073/pnas.1010674108

46. Multiple models to capture the variability in biological neurons and networks. Nature Neuroscience 14:133–138. https://doi.org/10.1038/nn.2735

47. Robust circuit rhythms in small circuits arise from variable circuit components and mechanisms. Current Opinion in Neurobiology 31C:156–163.

48. Quantitative classification of somatostatin-positive neocortical interneurons identifies three interneuron subtypes. Frontiers in Neural Circuits 4. https://doi.org/10.3389/fncir.2010.00012

51. Collision-sensitive neurons in the optic tectum of the bullfrog, Rana catesbeiana. Journal of Neurophysiology 104:2487–2499. https://doi.org/10.1152/jn.01055.2009

52. Probing the transcriptome of neuronal cell types. Current Opinion in Neurobiology 16:571–576. https://doi.org/10.1016/j.conb.2006.08.006

53. Normal Table of Xenopus laevis (Daudin): A Systematical and Chronological Survey of the Development From the Fertilized Egg Till the End of Metamorphosis. New York: Garland Publishing.

54. Electrophysiological classes of cat primary visual cortical neurons in vivo as revealed by quantitative analyses. Journal of Neurophysiology 89:1541–1566. https://doi.org/10.1152/jn.00580.2002

55. Correlations in ion channel expression emerge from homeostatic tuning rules. Proceedings of the National Academy of Sciences of the United States of America 110:E2645. https://doi.org/10.1073/pnas.1309966110

56. Multiple Regression in Behavioral Research: Explanation and Prediction. Fort Worth, TX: Holt, Rinehart and Winston.

57. Homeostatic regulation of intrinsic excitability and synaptic transmission in a developing visual circuit. The Journal of Neuroscience 27:8268–8277. https://doi.org/10.1523/JNEUROSCI.1738-07.2007

59. Multisensory integration in mesencephalic trigeminal neurons in Xenopus tadpoles. Journal of Neurophysiology 102:399–412. https://doi.org/10.1152/jn.91317.2008

60. Modeling human neurodevelopmental disorders in the Xenopus tadpole: from mechanisms to therapeutic targets. Disease Models & Mechanisms 6:1057–1065. https://doi.org/10.1242/dmm.012138

61. Critical periods of vulnerability for the developing nervous system: evidence from humans and animal models. Environmental Health Perspectives 108:511–533. https://doi.org/10.1289/ehp.00108s3511

63. Learning to see: patterned visual activity and the development of visual function. Trends in Neurosciences 33:183–192. https://doi.org/10.1016/j.tins.2010.01.003

65. Reconstruction of a low-rank matrix in the presence of Gaussian noise. Journal of Multivariate Analysis 118:67–76. https://doi.org/10.1016/j.jmva.2013.03.005

70. Clustering in an object-oriented environment. Journal of Statistical Software 1:1–30. https://doi.org/10.18637/jss.v001.i04

71. Multiple imputation with diagnostics (mi) in R: opening windows into the black box. Journal of Statistical Software 45:1–31.

73. Intermediate intrinsic diversity enhances neural population coding. Proceedings of the National Academy of Sciences of the United States of America 110:8248–8253. https://doi.org/10.1073/pnas.1221214110

75. Homeostatic plasticity in the developing nervous system. Nature Reviews Neuroscience 5:97–107. https://doi.org/10.1038/nrn1327

77. Nature and precision of temporal coding in visual cortex: a metric-space analysis. Journal of Neurophysiology 76:1310–1326.

80. Visual experience-dependent maturation of correlated neuronal activity patterns in a developing visual system. The Journal of Neuroscience 31:8025–8036. https://doi.org/10.1523/JNEUROSCI.5802-10.2011
Decision letter

Mark CW van Rossum, Reviewing Editor; University of Edinburgh, United Kingdom
eLife posts the editorial decision letter and author response on a selection of the published articles (subject to the approval of the authors). An edited version of the letter sent to the authors after peer review is shown, indicating the substantive concerns or comments; minor concerns are not usually shown. Reviewers have the opportunity to discuss the decision before the letter is sent (see review process). Similarly, the author response typically shows only responses to the major concerns raised by the reviewers.
[Editors’ note: a previous version of this study was rejected after peer review, but the authors submitted for reconsideration. The first decision letter after peer review is shown below.]
Thank you for choosing to send your work entitled "On the difficulty of predicting neuronal output from underlying electrophysiological parameters" for consideration at eLife. Your full submission has been evaluated by Eve Marder (Senior editor) and three peer reviewers, and the decision was reached after discussions between the reviewers. Based on our discussions and the individual reviews below, we regret to inform you that your work will not be considered further for publication in eLife.
It has taken us a little time to put together some sort of consensus decision about your manuscript because each person who looks at it seems to see (and not see) somewhat different strengths and weaknesses. I believe that this is a measure of the fact that it is an ambitious undertaking and only partially successful in its present form. Consequently, we are forced to reject this version of the manuscript. Nonetheless, if you find the criticisms, comments here and below helpful, and you feel that you can produce a different version of this work that takes into consideration these reviews, eLife would be willing to consider a new version, which we would consider as a new submission. That manuscript might be evaluated by the same or different reviewers, and there is no assurance that it would be successful in review.
Major Strength:
We are all impressed that you have made a serious effort to collect as many measures as possible from each of a large number of neurons in the frog tectum. In principle, this should constitute an important data set, and could be used to answer a number of interesting questions. And one of the reviewers was, and remains positive about the paper, precisely for this reason. Clearly, the attempt to connect the intrinsic properties and electrophysiological behaviors of the neurons to their underlying component processes is deeply important. But, the devil is in the details, and that is where other reviewers had serious reservations, some of which are expressed in the original reviews and others that surfaced during an extensive set of discussions among the reviewers.
Weaknesses/Limitations:
1) The reviewers were unclear about whether your starting assumption was that all of the recorded neurons were of the same cell type: principal neurons of that tectal region? You made it clear that you were excluding neurons that should have been of a different type, but what evidence is there that all of these neurons should be considered similar? In other words, was it your goal to cluster neurons into reliably different subtypes, or was it your goal to study the range of properties in one type of neuron? The manuscript seems to slide back and forth between these two positions.
2) Having 33 attributes sounds impressive, but the question is how many of these attributes are independent, and how many are derived properties? To be more specific, we would all agree that the Na and K conductance densities are independent attributes. But, spike frequency and interspike interval are two measures of the same property. Obviously, this latter case is trivial, but there are instances in which two attributes that on first glance may appear to be independent are not. So for example, membrane capacitance and synaptic current might together give you the envelope of the synaptic events? If so, then is it fair to use all three as independent attributes? It is not clear how many truly independent measurements there are among the 33?
3) We understand that there is an inherent problem when one is trying to measure many properties from the same neurons, and that it may not be possible to make all measurements perfectly. That said, we don't fully understand why particular choices were made. For example, you do not report the resting potential. And we understand why you made some measurements after bringing all the neurons to −65 mV. But that by itself was a decision not to measure the voltage difference between the resting potential and threshold. There are a number of choices of this sort which were not adequately justified in the manuscript, but which might change significantly the take-home messages. You report the number of spikes in response to cosine drive, but from −65 mV. These data might look totally different if you had started at the neurons' resting potential (which also has its difficulties of course).
4) A relatively minor consideration that however influences how people approach the paper: the title promises more than the paper delivers and sets it up for criticism.
The full individual reviews are below.
Reviewer #1:
In the present article, Ciarleglio and colleagues investigate the changes in electrical phenotype of neurons of the optic tectum of Xenopus laevis tadpole occurring during development and after imposed sensory stimulation. One of the specificities of the current article is that the authors use different types of multidimensional analyses (mostly PCA) to analyze variations in electrical phenotype defined by 33 intrinsic electrophysiological parameters (passive properties, spiking response, and measurements of specific voltage-dependent ion currents). The main conclusion drawn from this analysis is that a high degree of degeneracy underlies the electrical phenotype of tectal neurons, as neurons with similar spiking can display strikingly different underlying electrophysiological properties and neurons with similar underlying electrophysiological properties may display strikingly different electrical phenotype. Moreover, the authors suggest that the differences in electrophysiological properties found between visually-stimulated and early stages of development for similar spiking profiles also demonstrate degeneracy of electrical phenotype. Although the approach is interesting, I have several concerns about the measurements, the analysis and the conclusions drawn from the results.
1) My main concern is about the poor representativeness of the total variance of the observations by the two principal components used for the PCA (presented in Figure 4, Figure 5, Figure 6, and Figure 7). As the authors mention, the 2 principal components account for 23% of the variance of the observations (15% for PC1 and 8% for PC2). This means that the main part of the variance in the observations (77%) is unaccounted for in this analysis. The choice of analyzing neuronal output by measuring a large number of electrophysiological parameters, and using dimensionality-reduction techniques such as PCA to visualize it is a valid approach if dimensionality reduction preserves most of the information contained in the original measurements. In the present case, 77% of the variance (most likely including variance in parameters that are essential for defining neuronal output) is lost in the analysis. This problem may arise from several different sources: i) the high number of parameters used as an input for the PCA (33) ii) the fact that a linear model such as PCA might not be adequate to describe the variance of the parameters analyzed here. Independent of the source of confusion, this means that PCA may not be the adequate model to use in the present study to draw conclusions on the relationships between neuronal output and underlying parameters. In particular, the concept of degeneracy requires that both the target phenotype (here neuronal output) and the underlying parameters are accurately measured (see main point 2) and defined, and that they have relevant relationships. The fact that PCA accounts for only 23% of the variance in "electrical phenotype" is a real concern that hinders the purpose and the conclusions of this study.
2) My second main concern is about the relevance of some of the electrophysiological parameters measured: some of the parameters are flawed with measurement errors, whilst others lack obvious physiological relevance. The thresholds of the 3 ion currents (Na, KT, and KS) are not measured properly as they are contaminated by reversal potential variations (Na, KT) or are based on a non-sigmoidal fragment of the I–V curve (KS). Ion current "thresholds" are usually characterized by precisely defining the half-activation voltages based on conductance vs voltage curves. The measurements of action potential properties in these cells seem to be strongly influenced by variations in the remote location of the spike initiation site (as indicated by the small amplitude of the action potential, around 15–20 mV). In this context, it is difficult to understand whether variations in the kinetics and amplitude of the action potential are mainly attributable to variations in Na and K currents, or variations in the passive properties (i.e. rather related to Cm and Rm) of the compartment located in between the recording site and the spike initiation site. I do not understand the monosynapticity factor.
3) Because of the first two main concerns, it is difficult to really trust the conclusion about the degeneracy of neuronal output in these neurons. Although there is no doubt that biological systems are "degenerate", demonstrating degeneracy first requires to demonstrate that the variable parameters have a strong influence on the output of the system. As an example, action potential shape is degenerate if i) it relies on variable sodium current density and ii) sodium current density has been demonstrated to have a causal influence on action potential shape. The degeneracy argument is irrelevant when the second condition is not fulfilled: as an example, the fact that synaptic transmission at axon terminals located far away from the soma tolerates large variations in soma size does not tell us anything about the degeneracy of synaptic transmission, until we prove that soma size has a significant influence on synaptic transmission.
Reviewer #2:
Ciarleglio, Khakhalin et al. have analyzed input-output tuning (spiking) and the underlying electrophysiological properties of neurons in the optic tectum of Xenopus tadpoles over a series of stages of development and following visual stimulation at a single stage. Principal component analysis identified properties of spikiness and robustness. Some properties were found to change during development, while others changed with stimulation. Neuronal properties were observed to become more diverse with age. The authors observe substantial degeneracy in that multiple combinations of electrophysiological properties lead to similar spiking behavior, so that electrical properties cannot predict spiking activity.
The project is very well conceptualized and executed. The paper is clearly and very well written and ties to the figures nicely. The figures in turn are very well assembled and easy to follow. Figure 7 is particularly informative.
My only suggestion is to change the title to something more positive. Perhaps something like Quantitative analysis of the relation of spiking output to electrophysiological properties reveals substantial degeneracy. I think this will motivate more readers to dip into it.
The authors have addressed an important question about structure (component) – function relationships in the vertebrate brain.
Reviewer #2 (Additional data files and statistical comments (optional)):
The data files and statistical analyses seem appropriate.
Reviewer #3:
The authors have provided a substantial data set of electrophysiological properties of tectal neurons of Xenopus tadpoles, towards a goal of better understanding the changes in spiking properties of these neurons over development. An additional, and perhaps more striking, point of the paper is the apparent variability of properties that underlie "similar" spiking patterns. In its current form, I'm not sure it has accomplished these goals. There is also a much richer literature in which to root these ideas, particularly in computational arenas, which are largely overlooked.
There is inherent value in large electrophysiology data sets, and the authors provide a substantial amount of data in this regard. Ultimately this paper is about analysis choices within this data set. I have comments about some of these analysis choices, organized by figures:
Figure 2: The authors seem to have chosen to combine a large dataset of heterogeneous cell types and then perform post-hoc correlation analyses. The rationale for this is unclear, and potentially confounding. The value of the "network"-style plot in Figure 2A is minimal. This kind of diagram could potentially be valuable for comparing correlation patterns across groups, but as a standalone, it is difficult to follow any one pair of parameters that the reader would be interested in. The thickness of the lines has no interpretive value without an associated scale. For example, R-values that range from 0.1 to 0.3 would still have a threefold range of thickness, but show minimal correlation regardless.
Figure 5: Using a PCA, despite the difficulties inherent in not all data being present for all cell types, can be a powerful approach. However, while one can somewhat impute characterizing "spikiness", a meaningful definition of "robustness" is not provided. The authors then go on to make the point that the PCA cloud is moving across developmental "space", predominantly in the "spikiness" domain. This is seemingly a bit of an overwrought analysis, when a much more direct measure of "spikiness" (i.e. the spiking of the cells) is provided in Figure 3 and demonstrates in much less convoluted terms that spiking propensity does indeed change in these groups.
Figure 7: This is a major thrust of the paper, in support of the notion that cells that are grouped by similar output have variable physiological parameters. This crucial aspect of the manuscript is not well justified. The criterion for determining cells of similar output is only mentioned in passing with a reference to the work of Victor and Purpura without enough detail to evaluate. Since then, many computational studies have been published with highly effective means of identifying cells with highly convergent output. This is underscored by the fact that in Figure 7B, the representative traces from these cells are too small to really get a proper feel for this, but the case could easily be made counter to the authors' assertion that "spiking outputs are… closely matched within each group". In other words, they don't look that similar to me. If this is based simply on spike number, then it is ignoring the pattern of firing and seems to arbitrarily decide that 2–3 and 4–7 are different groups (why not 2–4 and 5–7?). A much more thorough analysis and justification for "similar outputs" would greatly strengthen the argument of the authors. As this figure is potentially a centerpiece of the study, this is one of the most critical aspects to address.
Reviewer #3 (Additional data files and statistical comments (optional)): I do have comments about statistical approaches for this paper, as the manuscript is highly dependent on the analyses for its interpretation.
Figure 2: A rationale for Pearson analysis, which is wedded to a linear relationship, as opposed to Spearman or Kendall Tau, would be appreciated (Pearson does not require normality, but most of the data set is non-normally distributed and rank-based correlation may be more appropriate). The panels in Figure 2B reveal the susceptibility of correlation analysis to outliers and nonlinear relationships. In particular, "Wave decay vs. N spikes (steps)" is a highly suspect correlation, yet is presented as one of the strongest. The power of large sample sizes and the logic of correlation allow for an analysis pathway that can avoid these problems. Specifically, with this level of sample size the authors could remove the tails of the data distribution (perhaps below the 5th and above the 95th percentiles), reanalyze, and see if the correlations remain. If a true correlation exists, it will persist in any reasonable subset of the data points.
Figure 3: Nonparametric analyses are performed on these data, but the data are represented with parametric variance. It is unclear what the shaded area represents, or its statistical derivation or value.
Figure 5: If I understand correctly, the authors are making the conclusion that the PCA space changes across developmental time by doing statistics on statistics (an ANOVA of Principal Components). I'm wondering if there are other examples of this approach and whether this derived level of analysis is appropriate. Citations would be appreciated.
Figure 6: It is not clear what the origins of the shaded parts of Figure 6E are. Are these the result of a statistical measure?
[Editors’ note: what now follows is the decision letter after the authors submitted for further consideration.]
Thank you for submitting your work entitled "Multivariate analysis of electrophysiological diversity of Xenopus visual neurons during development and plasticity" for consideration by eLife. Your article has been reviewed by four peer reviewers, one of whom has agreed to reveal his identity: Nicolas Spitzer. The evaluation has been overseen by a Reviewing Editor and Eve Marder as the Senior Editor.
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
Summary:
Ciarleglio et al. have followed the changes in an exhaustive list of electrophysiological properties of visual neurons in Xenopus during development. This reveals that properties typically diverge over development, but also that visual stimulation can reduce variability.
Essential revisions:
The consensus reached was that this is a valuable study well worth publishing, and it was strongly felt that the data set is an important part of the study and should be made public.
There are various options for this. If the data are subject-specific then we strongly recommend that the data be deposited in the appropriate location. There is a nice list here: http://www.nature.com/sdata/datapolicies/repositories. If the data are heterogeneous and not well suited to a specific repository then one can work with datacite, zenodo, imeji or figshare.
We can of course assist you in this process.
Reviewer #1:
Ciarleglio and colleagues have gone to a lot of trouble to record as many physiological properties as they can from many neurons in a population of Xenopus tectum neurons. They compare datasets across developmental periods and a stimulation condition. There appear to be three main conclusions. First, some neural properties change during development and with stimulation, becoming more diverse. Second, neural properties are highly variable, to the extent that it is hard to predict physiological behavior from 'low level' properties. Third, stimulation appears to decrease variability in the sense that certain features become more correlated.
The first conclusion agrees with a large body of existing data, but does not say anything specific about the possible function of such diversification. For example, do the properties that change make sense from the perspective of how the circuit might function? This kind of question is not really addressed in the manuscript, which instead attempts a more data-driven/assumption-free approach. Such an approach needs to have a very transparent analysis and should steer clear of jumping to conclusions. Unfortunately, I found the analysis relied heavily on quite opaque methods that in one case reported clustering (Figure 6A) with minute p-values even though no clustering is evident when all the data are plotted together. This reliance on very complicated analysis methods to find patterns in the data made it hard to interpret the findings and assess their veracity.
Another concern which is common to all experimental data that show high variability is the possibility that measurement error and the experimental procedure itself corrupt the readings. The authors chose quite a punishing experimental protocol, and a substantial fraction of cells did not survive. Those that did presumably had stable properties, but the only criterion I could find for including a set of recordings was "as long as the nearest IV curve recording from these cells was stable". This is unacceptably vague – how, for instance, would someone repeat this study using the same criterion? This makes me worry about the quality of the dataset, quite independently of whether it has been overanalyzed. If the dataset (i.e. raw recordings) were to be made available this would help a lot.
Reviewer #1 (Additional data files and statistical comments):
I think the dataset should be made available (raw recordings) and a qualified statistician should look at the manuscript.
Reviewer #2:
The authors have made significant improvements to the manuscript. They have provided better justifications for their conclusions that a population of neurons becomes electrophysiologically more variable during development, that similar spiking behaviors are generated by different constellations of membrane properties and that sensory input reduces electrophysiological diversity.
They have justified their data analysis more fully, in particular checking the main PCA analysis with two standard methods. The case for retaining all the variables makes sense to me. The variance that can be accounted by PCA is now validated through several procedures. The Methods provides a better justification and explanation of the analysis.
They strike a fair balance between clustering neurons into different subtypes and investigating the range of properties of a single type of neurons: the distinction cannot be drawn from the available data. Choosing between the two would be assisted but not resolved by knowing transcriptomes since there is more to differentiation than RNA synthesis, but this is an issue for another day.
Overall the paper is clearer on both experimental and analytical procedures and is more accessible. I am enthusiastic about publication.
Reviewer #3:
The authors have done a fairly thorough and convincing job of repackaging these results into a more tractable and consistent narrative, and this has greatly enhanced the paper. There is substantial value in a large and thorough electrophysiological parameter investigation in a vertebrate system, as this is a rare contribution to the literature. In particular what has emerged is an interesting story not so much about degeneracy in mechanisms of output of a single cell type (I believe a flaw in the initial conception of the original manuscript), but rather a thorough and convincing revelation of a continuum of output as revealed by shifting relations among electrophysiological parameters.
A majority of the rebuttal deals with the analysis, and I will admit that I do not have the depth of experience with PCA to contribute much more to this discussion. It seems well conceived and justified to my nonexpert eye. The remaining concerns are appropriately addressed.
While the sum total content and conclusion of the paper lacks that clear and concise punchline, in some ways this adds to the charm of the paper. Biology is messy, and in some ways this paper helps to quantify this "messiness" and put it into multiple appropriate and interesting contexts.
I think this study makes a contribution to this area that is worthy of publication.
Reviewer #4:
The authors have taken the reviewers’ comments into account, and have modified the manuscript and added significant analysis accordingly. I have no further comments.
https://doi.org/10.7554/eLife.11351.017
Author response
[Editors’ note: the author responses to the first round of peer review follow.]
In the revised version we have incorporated the reviewers’ suggestions, and done a better job of justifying the various experimental decisions that were made during this study. We have also altered the emphasis of the paper to focus on a different finding, mainly that the diversity of electrophysiological phenotypes increases over development but can be modulated by experience. We have added additional analyses that address the multivariate distribution of cell electrophysiological properties in the original 33-dimensional space, and further support this point, while also allowing us to justify various experimental details. With the altered emphasis and the new data presented we also changed the title of the paper, to better reflect its new message.
For your reference, below is the list of key differences between this new manuscript and the previous version:
• New main point, title, Abstract, and Discussion.
• An in-depth analysis of the dataset validity using the Principal Variables approach was introduced. We now make an informed argument that our dataset presents a productive balance between the independence of individual variables, and the representation of potential interactions between different physiological properties of tectal cells. This new analysis also produced a new panel in Figure 2.
• The main Principal Component Analysis (PCA) is now verified using two different nonlinear factorization techniques: Isomapping and Local Linear Embedding. The total share of variance explained by the main PCA is cross-checked and numerically justified in several different ways, including the aforementioned nonlinear factorization techniques, Bayesian imputation (bootstrapping) of missing variables, and PCA on restricted sets of variables pre-sorted by their explanatory power.
• We introduce two new analyses of multivariate distributions of physiological properties in the original 33-dimensional space: quantitative clustering analysis (Agglomerative Clustering coefficients) and within-group PCA as a tool to quantify the strength of multivariate correlations between neural cell properties under different biological conditions. These results are presented in a new figure (Figure 6).
• A clarification of a distinction between true cell types and functional subtypes within the tectal cell population.
• All claims for the "system degeneracy" are removed, while the findings that originally prompted us to make these claims are retained. While we no longer emphasize this finding, we nevertheless put a considerable effort into making this part of the paper more convincing, including a better justification of our methods.
• Reported Pearson correlations are verified by Spearman correlations where necessary.
There are also multiple minor changes in variable names, descriptions, and numbering that together contribute to the consistency of the text, and the ease of reading. We also explicitly address several concerns the reviewers had about the choice of variables to include in the dataset, and the quantification approaches we used.
Major Strength:
[…] We are all impressed that you have made a serious effort to collect as many measures as possible from each of a large number of neurons in the frog tectum. In principle, this should constitute an important data set, and could be used to answer a number of interesting questions. And one of the reviewers was, and remains positive about the paper, precisely for this reason. Clearly, the attempt to connect the intrinsic properties and electrophysiological behaviors of the neurons to their underlying component processes is deeply important. But, the devil is in the details, and that is where other reviewers had serious reservations, some of which are expressed in the original reviews and others that surfaced during an extensive set of discussions among the reviewers.
1) The reviewers were unclear about whether your starting assumption was that all of the recorded neurons were of the same cell type: principal neurons of that tectal region?
We now tried to make it clearer in the text that even at the latest of the stages we describe in this paper (stages 48–49, as per (Nieuwkoop and Faber, 1994)), almost all neurons in the deep layer of the optic tectum of Xenopus tadpoles are considered to belong to one broad cell type, which we refer to as "principal tectal neurons". From studies in adult frogs it is known that eventually, after metamorphosis, the optic tectum harbors several types of neurons that differ both in their morphology (Lazar and Szekely, 1967; Lazar, 1973) and function (Grüsser and Grüsser-Cornehls, 1976; Yamamoto et al., 2003; Nakagawa and Hongjian, 2010). In pre-metamorphic tadpoles, however, all principal neurons seem to be comparable both in terms of their basic physiological properties (Wu et al., 1996; Khakhalin and Aizenman, 2012; Hamodi and Pratt, 2014) and morphology (Chen et al., 2010; Dong and Aizenman, 2012; Marshak et al., 2012), despite the diversity of their responses to visual stimulation (Podgorski et al., 2012; Khakhalin et al., 2014) and differences in underlying gene expression (Miraucourt et al., 2012). The only obvious non-principal cell type known so far in the developing optic tectum is the MT-neurons (Pratt and Aizenman, 2009), which we did not record from.
You made it clear that you were excluding neurons that should have been of a different type, but what evidence is there that all of these neurons should be considered similar? In other words, was it your goal to cluster neurons into reliably different subtypes, or was it your goal to study the range of properties in one type of neuron? The manuscript seems to slide back and forth between these two positions.
In the new version of the manuscript we address this question more directly when discussing clustering of cell properties within experimental groups. Ultimately we cannot tell from our data whether the emerging diversity and clustering of electrophysiological properties at stages 48–49 is due to differential tuning within a single cell type (sometimes referred to as "neuronal subtypes" (McGarry et al., 2010)), or a sign of emergent cell differentiation that would only become obvious at post-metamorphic stages. We now better address this ambiguity in the paper.
In the Introduction section:
“Coordinated changes in different physiological properties may contribute to diversification of cell tuning that happens as networks mature, creating and shaping differences in cell phenotypes both between cell types as they emerge (Ewert, 1974; Frost and Sun, 2004; Kang and Li, 2010; Nakagawa and Hongjian, 2010; Liu et al., 2011), and within each cell type in a functional network (Tripathy et al., 2013; Elstrott et al., 2014).”
And in the Discussion:
“Our results indicate that during development, cells in the deep layer of the tectum become more diverse, although at the stages we studied they do not split into the distinct non-overlapping cell types that are reported in the tecta of other species and at later stages of development in frogs (Lazar, 1973; Ewert, 1974; Grüsser and Grüsser-Cornehls, 1976; Frost and Sun, 2004; Kang and Li, 2010; Nakagawa and Hongjian, 2010; Liu et al., 2011).”
The best practical way to answer this question would be to match electrophysiological properties of tectal neurons with their transcriptomes, which is something we hope to do in the future as a follow-up to this study, as we indicate in the last paragraph of the Discussion.
2) Having 33 attributes sounds impressive, but the question is how many of these attributes are independent, and how many are derived properties? To be more specific, we would all agree that the Na and K conductance densities are independent attributes. But spike frequency and interspike interval are two measures of the same property. Obviously, this latter case is trivial, but there are instances in which two attributes that at first glance may appear to be independent are not. So for example, membrane capacitance and synaptic current might together give you the envelope of the synaptic events? If so, then is it fair to use all three as independent attributes? It is not clear how many truly independent measurements there are among the 33.
This is a very important question, and arguably the main reservation the board of reviewers had about our paper, so in the new version of the paper we added a considerable piece of analysis (Principal Variables analysis) entirely dedicated to this problem. The rationale and significance of this analysis as pertains to your point is as follows:
There are four important considerations we need to keep in mind while working with exploratory analysis (Guyon and Elisseeff, 2003). First, there is no clear-cut line between "independent" and "derived" variables in a biological system, especially when complex integrative properties, such as quantification of responses to some type of stimulation, are concerned. Most practical variables one could come up with lie somewhere on the spectrum between "fully dependent" (as in a trivial case of frequency and period) and "fully independent".
Second, from the data-analysis point of view these "gray zone" semi-independent variables are precisely the variables that are of interest. Obviously, variables that are "too dependent" (in the extreme case, identical) bring no new information into the dataset, and so should be excluded, but the same can be said about fully independent variables. If a cell property is truly statistically independent from all other properties in the set, then except for the very fact of this independence it has nothing new to tell the researcher; it cannot point to an underlying physiological principle (as it does not interact with any other variables), and it cannot help cell classification (for the same reason). From the statistical point of view a truly independent variable is as useless for multivariate analysis as pure noise, which is by definition the epitome of statistical independence.
Third, there is a nontrivial interplay between prior (common sense) and posterior (phenomenological) concepts of variable dependency. For example, some pairs of variables can be assumed to be independent prior to the study (as in your example of Na and K conductances), but shown to be dependent in the dataset (for example in our dataset peak Na and K currents correlate positively, and so are strongly codependent). On the contrary, some variables can seem to be very similar based on our limited prior knowledge of the system, but behave surprisingly independently, as it happened for the number of spikes in response to step and cosine injections in our dataset that responded differently to visual stimulation, or two different attempts to quantify overall strength of synaptic inputs to the neuron (through total charge and spontaneous activity) that in practice did not seem to share anything in common.
Finally, the original impetus behind the development of principal component analysis as a method, by Galton and Pearson in the late 19th and early 20th century, was to extract a single underlying parameter (what later became known as the first principal component) out of a set of highly correlated variables that were each expected to reflect this underlying parameter without fully representing it. Not only can PCA be extremely useful when applied to datasets of interdependent variables; it actually relies on the assumption that variables are co-dependent (specifically, linearly correlated). The actual pitfall of applying PCA to diverse multivariate datasets is not the potential dependency of variables per se, but rather the overrepresentation of some aspects of this dependency at the expense of others. For example, when creating our dataset we had to be mindful of including about the same number of variables to quantify each of the different aspects of cell physiology (3 for spiking, 3 for spike shape, 2 for spontaneous activity, 2 for synaptic strength, 3 for temporal synaptic properties, etc.), lest one of them overwhelm the rest and skew the results in its favor.
Based on these considerations, and owing to your clear and constructive feedback, we now include an analysis of the overall involvement of different variables in linear correlations with all other variables in the set (the method known as Principal Variables analysis (McCabe, 1984)) before moving to the factor analysis (Figure 2C). We show that our dataset presents a healthy range of variables that each individually explain from 4% to 15% of total variance in the set. None of the variables is too good at predicting the variance; all variables are above the predicted noise level; there is no clear dividing "cliff" in terms of variance explained, but rather a smooth decline across the list; and there is a good mix of variables of different biological nature in the parts of the plot with higher and lower explanatory power. Overall, this analysis convinced us that our dataset is appropriate for the question, and we hope it helped us to better carry this point across to the reader. This explanation of our rationale for selecting our variables and regarding the Principal Variables analysis can be found in the text:
“In summary, it can be seen from Figure 2C that no single variable was "too good" in explaining overall variance in the dataset, […] offering a healthy mix of independent and interacting variables (Guyon and Elisseeff, 2003).”
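The flavor of such a selection can be illustrated with a generic greedy forward-selection scheme in the spirit of McCabe's principal variables (this is a sketch of the general idea, not the authors' exact implementation):

```python
import numpy as np

def principal_variables(X, k):
    """Greedy forward selection of 'principal variables': at each step,
    pick the column that explains the largest total sum of squares
    across all (residual) columns by linear regression, then project
    that column out and repeat. Returns column indices in order."""
    X = np.asarray(X, float)
    residual = (X - X.mean(0)) / X.std(0)   # standardize first
    chosen = []
    for _ in range(k):
        scores = []
        for j in range(residual.shape[1]):
            v = residual[:, j]
            vv = v @ v
            if j in chosen or vv < 1e-12:
                scores.append(-np.inf)      # already chosen or fully deflated
                continue
            # variance of every column explained by regressing on column j
            scores.append(float(((residual.T @ v) ** 2).sum() / vv))
        best = int(np.argmax(scores))
        chosen.append(best)
        v = residual[:, best].copy()
        residual -= np.outer(v, residual.T @ v / (v @ v))   # deflate
    return chosen
```

A variable entangled with many others scores high early; a purely independent variable can only ever explain its own variance, which is the "as useless as pure noise" point made above.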
3) We understand that there is an inherent problem when one is trying to measure many properties from the same neurons, and that it may not be possible to make all measurements perfectly. That said, we don't fully understand why particular choices were made. For example, you do not report the resting potential. And we understand why you made some measurements after bringing all the neurons to −65 mV. But that by itself was a decision not to measure the voltage difference between the resting potential and threshold. There are a number of choices of this sort which were not adequately justified in the manuscript, but which might significantly change the take-home messages. You report the number of spikes in response to cosine drive, but from −65 mV. These data might look totally different if you had started at the neurons' resting potential (measurement error difficulties, of course).
As tectal neurons are fairly small, the concentration of ions inside the cell matches that in the internal pipette within about 2–3 seconds after the whole-cell patch is established (Khakhalin and Aizenman, 2012). Because of that, a measurement of the "resting membrane potential" in whole-cell mode in our system does not represent the actual resting membrane potential the cell used to have before the patch was established. At the same time, we did indirectly quantify Em in the paper by recording the holding current required to bring the cell to −65 mV. From the data analysis point of view, the holding current is linked to resting membrane potential by a simple formula: Ih = (Em−Ehold)/Rm, and therefore indirectly the value of Em (with the reservations above) was accounted for in our dataset. We now included a corresponding clarification passage in the Methods section of the paper (subheading “Data processing and analysis”); also see more detailed comments on this topic below.
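As a units check on the relation quoted above, note that pA × GΩ = mV, so the implied resting potential can be recovered directly from the holding current and input resistance; a toy illustration (the specific numbers are hypothetical, and the sign convention follows the formula as written):

```python
def resting_potential_mV(i_hold_pA, r_m_GOhm, e_hold_mV=-65.0):
    """Invert Ih = (Em - Ehold)/Rm to get Em = Ehold + Ih * Rm.
    With current in pA and input resistance in GOhm the product
    comes out directly in mV (1 pA * 1 GOhm = 1 mV)."""
    return e_hold_mV + i_hold_pA * r_m_GOhm

# e.g. a cell with Ih = +10 pA and Rm = 1 GOhm implies Em near -55 mV,
# 10 mV depolarized relative to the -65 mV holding potential
```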
4) A relatively minor consideration that however influences how people approach the paper: the title promises more than the paper delivers and sets it up for criticism.
We now changed both the main message of the paper and its title to better represent our findings. While the fact that the spiking phenotype of each tectal cell is not easily predictable is still presented in the paper, we now think that a more interesting finding is an increase in cell-to-cell tuning complexity in development, and the reshaping of this multivariate tuning after sensory stimulation. We therefore added new analysis of these effects (including quantitative cluster analysis and local PCA analysis), featured these results in a new figure (Figure 6), and put this aspect of our data forward.
The full individual reviews are below.
Reviewer #1:
1) My main concern is about the poor representativeness of the total variance of the observations by the two principal components used for the PCA (presented in Figure 4, Figure 5, Figure 6, and Figure 7). As the authors mention, the 2 principal components account for 23% of the variance of the observations (15% for PC1 and 8% for PC2). This means that the main part of the variance in the observations (77%) is unaccounted for in this analysis.
We now included a clarification in the paper, as 23% of variance explained for an incomplete dataset that misses 18% of all possible observations would translate into a larger explained variance for a full dataset, had it been available. We tried to estimate this value by analyzing restricted subsets of data with better representation, applying standard PCA to imputed data, and using alternative approaches to factor analysis, such as Multidimensional Scaling. We conclude that in a dataset without missing values our analysis would have explained about 35 ± 5% of total variance.
The corresponding statement in the Results section reads:
“We concluded that our PCA analysis was adequate for the data, and performed better than local nonlinear approaches, with first two principal components explaining 15% and 8% of total variance respectively (this total of 23% of variance explained would have corresponded to about 35% of variance if we had every type of observation in every cell…”
In the Methods section, the following paragraph was added:
“As our dataset was not complete, and 18% of all possible measurements were missing, […] a standard PCA procedure explained on average 35 ± 5% of total variance.”
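The imputation-then-PCA estimate described here can be caricatured with a simple resampling scheme (a crude column-wise stand-in for the Bayesian imputation the authors describe; function and parameter names are mine):

```python
import numpy as np

def pca_variance_with_imputation(X, n_components=2, n_boot=20, rng=None):
    """Rough estimate of PCA variance explained on an incomplete dataset:
    fill each missing (NaN) entry by resampling the observed values of
    its column, standardize, run PCA via SVD, repeat, and average."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, float)
    fracs = []
    for _ in range(n_boot):
        Xi = X.copy()
        for j in range(Xi.shape[1]):
            col = Xi[:, j]                      # view into Xi, edited in place
            miss = np.isnan(col)
            col[miss] = rng.choice(col[~miss], size=miss.sum())
        Xi = (Xi - Xi.mean(0)) / Xi.std(0)
        s = np.linalg.svd(Xi, compute_uv=False)
        fracs.append((s[:n_components] ** 2).sum() / (s ** 2).sum())
    return float(np.mean(fracs))
```

Because random imputation dilutes correlations, this tends to give a conservative estimate, which is consistent with the 23%-observed versus ~35%-imputed gap reported above.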
The choice of analyzing neuronal output by measuring a large number of electrophysiological parameters, and using dimensionalityreduction techniques such as PCA to visualize it is a valid approach if dimensionality reduction preserves most of the information contained in the original measurements.
We would like to respectfully disagree, as PCA and related techniques can be used to find patterns behind interacting values even when the eigenvalues of individual components are relatively small. For example, consider an analysis in which the first 2 components of PCA explain 80% of variance in a 10-variable dataset, which would constitute a very robust analysis. Adding 10 more variables of pure noise to this dataset would reduce the amount of total variance explained by the first 2 components to approximately 40%, and the components would get noisier, yet the general pattern of the analysis would be preserved, as the original variables would still have higher loading coefficients than the noisy variables. There are several criteria for the rejection of eigenvalues that are too small, and these criteria are routinely applied to higher-order principal components as researchers choose the dimensionality of the PCA projection. At the very least, they include a requirement of having a value above 1 (that is, explaining more variance than a single normalized column of data would explain), and lying above the "break point" on a scree plot, which we also used in this study (Shabalin and Nobel, 2013).
In our dataset, the first two components explained 15% and 8% of variance on incomplete data respectively, which is larger than the 5% and 4% expected for the first two components of a similarly incomplete dataset that lacks internal structure (consists of uncorrelated normally distributed variables). On a scree plot, the eigenvalue for the 2nd component forms the "break point", and thus can be included in the analysis.
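The dilution argument above is easy to reproduce numerically; a sketch (the factor structure, noise levels, and seed are invented purely for illustration):

```python
import numpy as np

def pca_explained(X, n_components=2):
    """Fraction of total variance captured by the first n principal
    components of standardized data (computed via SVD)."""
    X = np.asarray(X, float)
    X = (X - X.mean(0)) / X.std(0)
    s = np.linalg.svd(X, compute_uv=False)
    return float((s[:n_components] ** 2).sum() / (s ** 2).sum())

rng = np.random.default_rng(1)
n = 500
f = rng.normal(size=(n, 2))                       # two latent factors
loadings = rng.normal(size=(2, 10))
signal = f @ loadings + 0.5 * rng.normal(size=(n, 10))   # 10 structured variables
noise = rng.normal(size=(n, 10))                         # 10 pure-noise variables

frac_signal = pca_explained(signal)                      # large
frac_mixed = pca_explained(np.hstack([signal, noise]))   # roughly halved, yet
# the structured variables still dominate the loadings of the first components
```

Diluting a well-structured 10-variable set with 10 noise variables roughly halves the variance explained by the first two components without destroying the underlying pattern, which is the point being made in the reply.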
In the present case, 77% of the variance (most likely including variance in parameters that are essential for defining neuronal output) is lost in the analysis. This problem may arise from several different sources: i) the high number of parameters used as an input for the PCA (33)
While working on this updated version of the paper, we considered reducing the number of variables included in the PCA analysis, but after considerable internal discussions decided against it. A pre-selection of input variables may make our results seem more convincing, as formally it increases the amount of variance explained. However, we feel that in the absence of objective validation, such as a comparison of the electrophysiological profile of each cell to its morphological type, pre-sorting and picking of individual variables may not be fully justified. We could conceivably argue in favor of one type of analysis and against the other if one of them proved to be more "productive", and helped to uncover new effects, or hidden structure in the dataset. We found, however, that pre-selection of variables did not change any of our main findings qualitatively, and thus opted for the inclusion of the original dataset.
The following is a segment about this validation of our PCA approach that we now include in the Methods section of the paper:
“We also attempted restricting the number of variables included in PCA, by pre-screening them based on their Principal Variables rank […] there is no objective post-hoc test to justify the use of one restricted model over another (Guyon and Elisseeff, 2003). We therefore only report it here as another validation of the method.”
ii) the fact that a linear model such as PCA might not be adequate to describe the variance of the parameters analyzed here. Independent of the source of confusion, this means that PCA may not be the adequate model to use in the present study to draw conclusions on the relationships between neuronal output and underlying parameters.
In this revised version of the paper we report two new attempts at using local nonlinear approaches to two-dimensional ordination of our data. We concluded that these approaches are less effective than a simple linear PCA. This is the corresponding statement that is now included in the Methods section of the paper:
“While linear approaches to factor analysis, such as PCA or Multidimensional Scaling, are usually considered to be safe go-to methods when noisy, weakly correlated data is concerned (Nowak et al., 2003; Sobie, 2009; McGarry et al., 2010), […] these results we concluded that for our data a linear factor analysis approach is not only adequate, but also the most appropriate. In all cases testing was performed on centered and normalized data.”
A shorter reference to this validation is also placed in the Results section, prior to the discussion of PCA results:
“We extensively verified the validity of our PCA analysis, comparing it to standard PCA on restricted […] that our PCA analysis was adequate for the data, and performed better than local nonlinear approaches.”
In particular, the concept of degeneracy requires that both the target phenotype (here neuronal output) and the underlying parameters are accurately measured (see main point 2) and defined, and that they have relevant relationships. The fact that PCA accounts for only 23% of the variance in "electrical phenotype" is a real concern that hinders the purpose and the conclusions of this study.
Even though our prior conclusions about the degeneracy of spiking phenotype did not rely on the results of our PCA analysis, we now remove all mentions of degeneracy from this paper.
2) My second main concern is about the relevance of some of the electrophysiological parameters measured: some of the parameters are flawed by measurement errors, whilst others lack obvious physiological relevance. The thresholds of the 3 ion currents (Na, KT, and KS) are not measured properly, as they are contaminated by reversal potential variations (Na, KT) or are based on a non-sigmoidal fragment of the IV curve (KS). Ion current "thresholds" are usually characterized by precisely defining the half-activation voltages based on conductance vs. voltage curves. The measurements of action potential properties in these cells seem to be strongly influenced by variations in the remote location of the spike initiation site (as indicated by the small amplitude of the action potential, around 15–20 mV). In this context, it is difficult to understand whether variations in the kinetics and amplitude of the action potential are mainly attributable to variations in Na and K currents, or variations in the passive properties (i.e. rather related to Cm and Rm) of the compartment located in between the recording site and the spike initiation site.
In our study we were indeed working with the value of membrane potential at which the respective active currents became prominent and affected cell spiking dynamics. This potential depended both on the kinetics of the ionic channels and on the distribution of these channels relative to the cell soma and the central process of the cell, where integration occurs, and from which both the axon initial segment and the dendritic tree originate. To avoid terminological conflicts, in this new version of the paper we renamed these variables "I_{Na} activation potential", "I_{Ks} activation potential" and "I_{Kt} activation potential" respectively.
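To illustrate the convention the reviewer refers to, a half-activation voltage can be read off a conductance–voltage curve by fitting (or, minimally, interpolating) a Boltzmann sigmoid. The sketch below uses synthetic G–V points with made-up parameters, not values from the paper's dataset:

```python
import math

def boltzmann(v, g_max, v_half, k):
    """Boltzmann activation curve: G(V) = g_max / (1 + exp(-(V - v_half)/k))."""
    return g_max / (1.0 + math.exp(-(v - v_half) / k))

def estimate_v_half(voltages, conductances):
    """Crude half-activation estimate: linearly interpolate the voltage at which
    a monotonically increasing G-V curve crosses half of its maximum."""
    g_max = max(conductances)
    half = g_max / 2.0
    for (v0, g0), (v1, g1) in zip(zip(voltages, conductances),
                                  zip(voltages[1:], conductances[1:])):
        if g0 <= half <= g1:
            return v0 + (half - g0) * (v1 - v0) / (g1 - g0)
    return None

# Synthetic G-V data generated from a known curve (v_half = -30 mV, k = 5 mV)
vs = list(range(-60, 11, 5))
gs = [boltzmann(v, 10.0, -30.0, 5.0) for v in vs]
print(round(estimate_v_half(vs, gs), 1))  # ~ -30.0
```

In practice one would fit all three Boltzmann parameters with nonlinear least squares rather than interpolate, but the recovered half-activation voltage is the quantity of interest either way.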
I do not understand the monosynapticity factor.
We changed the description of this factor:
“Traces with longer inter-stimulus intervals (100, 150, 200, 250 and 300 ms) were used to compare the average amplitude […] it was reported as variable #31, "Monosynapticity".”
3) Because of the first two main concerns, it is difficult to really trust the conclusion about the degeneracy of neuronal output in these neurons. Although there is no doubt that biological systems are "degenerate", demonstrating degeneracy first requires demonstrating that the variable parameters have a strong influence on the output of the system. As an example, action potential shape is degenerate if i) it relies on variable sodium current density and ii) sodium current density has been demonstrated to have a causal influence on action potential shape. The degeneracy argument is irrelevant when the second condition is not fulfilled: for example, the fact that synaptic transmission at axon terminals located far away from the soma tolerates large variations in soma size does not tell us anything about the degeneracy of synaptic transmission, until we prove that soma size has a significant influence on synaptic transmission.
While we feel that the original version of the paper contained at least some justification of the second condition you formulate, as we analyzed which low-level factors affect spiking of tectal cells through a set of competitive linear models, we agree that our case for the true "degeneracy" of physiological properties of tectal cells is rather weak. We therefore remove all mentions of degeneracy from this new version of the paper.
Reviewer #2:
My only suggestion is to change the title to something more positive. Perhaps something like "Quantitative analysis of the relation of spiking output to electrophysiological properties reveals substantial degeneracy". I think this will motivate more readers to dip into it.
After additional analysis of cell clustering in the original 33-dimensional space we feel that it would be beneficial to change the main pitch of the paper away from system degeneracy, and towards the changes observed in multivariate variability of cell properties in development, and after sensory stimulation. We therefore have changed the title of the paper to: “Diversity of multivariate tuning of visual neurons in Xenopus tadpoles increases in development but is reduced by strong patterned stimulation”
Reviewer #3:
The authors have provided a substantial data set of electrophysiological properties of tectal neurons of Xenopus tadpoles, towards a goal of better understanding the changes in spiking properties of these neurons over development. An additional, and perhaps more striking, point of the paper is the apparent variability of properties that underlie "similar" spiking patterns. In its current form, I'm not sure it has accomplished these goals. There is also a much richer literature in which to root these ideas, particularly in computational arenas, which are largely overlooked.
We have made every effort to cite the literature relevant to the points raised in this study, but if the reviewer has any specific suggestions we are happy to incorporate them.

Figure 2: The authors seem to have chosen to combine a large dataset of heterogeneous cell types and then perform post-hoc correlation analyses. The rationale for this is unclear, and potentially confounding. The value of the "network" style plot in Figure 2A is minimal. This kind of diagram could potentially be valuable for comparing correlation patterns across groups, but as a standalone, it is difficult to follow any one pair of parameters that the reader would be interested in. The thickness of the lines has no interpretive value without an associated scale. For example, R-values that range from 0.1 to 0.3 would still have a threefold range of thickness, but show minimal correlation regardless.
The main reason we present this diagram in the paper is to show some of the key correlations in the dataset without creating an extensive list of them, or an additional appendix. With this many variables, only the strongest correlations (those with the highest r values) are relevant anyway, and we think that the diagram helps to see these large-scale patterns, although they become really obvious later, as we present the PCA results. In our opinion, the circular diagram sends the message that the variables form 3 distinct clusters that dominate the dataset in terms of the correlations involved, and that compared to these clusters other correlations are weak. This point is emphasized in the Results section of the paper.
Figure 5: Using a PCA, despite the difficulties inherent in not all data being present for all cell types, can be a powerful approach. However, while one can somewhat impute characterizing "spikiness", a meaningful definition of "robustness" is not provided.
We now renamed "Robustness", as a nickname for the second principal component, into "Current density", as it provides a better summary of those variables that are associated with it.
The authors then go on to make the point that the PCA cloud is moving across developmental "space", predominantly in the "spikiness" domain. This is seemingly a bit of an overwrought analysis, when a much more direct measure of "spikiness" (i.e. the spiking of the cells) is provided in Figure 3 and demonstrates in much less convoluted terms that spiking propensity does indeed change in these groups.
While several individual measurements of spikiness changed with development as well, the benefit of PCA as a method is that it can extract the common "signal" from the noisy data, and thus provide a better estimation of the actual phenotypical "Spikiness" of the cell than any single related variable taken alone. That said, we report changes of individual variables in development as well, both in terms of their mean values, and their variability. The PCA analysis further allows us to better understand variability across the cell population, as shown in the new analysis presented in subheading “Factor analysis” Figure 5 and Figure 6.
Figure 7: This is a major thrust of the paper, in support of the notion that cells that are grouped by similar output have variable physiological parameters. This crucial aspect of the manuscript is not well justified. The criterion for determining cells of similar output is only mentioned in passing with a reference to the work of Victor and Purpura, without enough detail to evaluate. Since then, many computational studies have been published with highly effective means of identifying cells with highly convergent output. This is underscored by the fact that in Figure 7B, the representative traces from these cells are too small to really get a proper feel for this, but the case could easily be made counter to the authors' assertion that "spiking outputs are… closely matched within each group". In other words, they don't look that similar to me. If this is based simply on spike number, then it is ignoring the pattern of firing and seems to arbitrarily decide that 2–3 and 4–7 are different groups (why not 2–4 and 5–7?). A much more thorough analysis and justification for "similar outputs" would greatly strengthen the argument of the authors. As this figure is potentially a centerpiece of the study, this is one of the most critical aspects to address.
While in this new version of the paper we are now somewhat downplaying the centrality of this figure for the study as a whole, we still think that it is a potentially interesting finding. We used the simple computational approach to spike-train ordination that is described in the work of Victor and Purpura to calculate "distances" between spike trains of different cells, and to represent these spike trains on a 2D plane. We then used the same set of distances to pull the closest neighbors of several selected reference cells to form clusters of similarly spiking cells. The number of cells per cluster (6) is indeed arbitrary. The clusters we selected are not supposed to represent all possible spiking "phenotypes" we observed, but we used them to test the compactness and noisiness of projections mapping the 2D spiking phenotype space onto 2D spaces of low-level physiological properties of tectal cells.
Here is the updated segment that describes this approach in the paper:
“We then applied a commonly used standard cost-based metric, sensitive to both the number of spikes and their timing, to quantify similarity between spike trains of different cells (Victor and Purpura, 1996, 1997), and used multidimensional scaling to represent a matrix of pairwise cell-to-cell distances in a 2D plot (Figure 8A). We used this analysis to select groups of cells (6 in each group) that generated similar spike trains in response to current step injections (Figure 8B) by pulling 5 nearest neighbors to randomly selected reference cells.”
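For readers unfamiliar with the metric quoted above, the Victor–Purpura spike-train distance can be computed with a short dynamic program over the two spike trains; the cost parameter q and the spike times below are illustrative only, not taken from the paper:

```python
def vp_distance(t1, t2, q):
    """Victor-Purpura spike-train distance (Victor & Purpura, 1996).
    Edit costs: inserting or deleting a spike = 1; shifting a spike by dt = q*|dt|.
    t1, t2 are sorted lists of spike times; q sets the timing sensitivity."""
    n, m = len(t1), len(t2)
    # d[i][j] = distance between the first i spikes of t1 and first j spikes of t2
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = float(i)
    for j in range(1, m + 1):
        d[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(
                d[i - 1][j] + 1,                                    # delete spike i
                d[i][j - 1] + 1,                                    # insert spike j
                d[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]))   # shift a spike
    return d[n][m]

# Identical trains have zero distance; one extra spike costs exactly 1.
print(vp_distance([10, 50, 90], [10, 50, 90], q=0.1))  # 0.0
print(vp_distance([10, 50, 90], [10, 50], q=0.1))      # 1.0
```

The resulting pairwise distance matrix can then be passed to any off-the-shelf multidimensional scaling routine to produce the 2D ordination described in the quoted passage.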
Reviewer #3 (Additional data files and statistical comments (optional)):
Figure 2: A rationale for Pearson analysis, which is wedded to a linear relationship, as opposed to Spearman or Kendall Tau, would be appreciated (Pearson assumes normality, and because most of the data set is non-normally distributed, rank-based correlation may be more appropriate). The panels in Figure 2B reveal the susceptibility of correlation analysis to outliers and nonlinear relationships. In particular, "Wave decay vs. N spikes (steps)" is a highly suspect correlation, yet is presented as one of the strongest. The power of large sample sizes and the logic of correlation allow for an analysis pathway that can avoid these problems. Specifically, with this level of sample size the authors could remove the tails of the data distribution (perhaps below the 5th and above the 95th percentiles), reanalyze, and see if the correlations remain. If a true correlation exists, it will persist in any reasonable subset of the data points.
We agree that in principle Spearman correlation analysis could be more applicable to our dataset, yet we also believe that it is not critical which definition of correlation is used for Figure 2A, as we are not interpreting or commenting on any of the weak correlations found in the set, while all strong correlations were significant regardless of whether the Spearman or Pearson test was used. As usually happens when two similar statistical tests are compared to each other, the majority of variable pairs were either significantly correlated according to both the Pearson and Spearman definitions (n=73 out of 528, after FDR with α=0.05 for both tests) or were not correlated according to either definition (n=399 of 528). There were, however, pairs that were correlated in the Pearson sense but not the Spearman sense (n=17), and even more that were correlated under the Spearman definition but not the Pearson one (n=39). Because of this, we cannot really use the Spearman correlation to "verify" all Pearson correlations. We can use it to verify several top correlations, and we did so, showing that for all Pearson r>0.5 (n=11) the Spearman correlations were also significant (see changes in the text below).
Conceivably, we could also migrate all our correlation analysis from the Pearson definition to that of Spearman; however, we found that this brings only trivial changes to the look of the correlogram in Figure 2A, as all the main "clusters of correlated variables" remain in place. More importantly, all subsequent analyses in the paper (including principal variable analysis and principal component analysis) rely on the assumption of normality of the data, and utilize standard total variance calculations that are based on Pearson-like products of non-rank-transformed data. As we were aware of this potential issue early on, even before performing any extensive analysis of the data we separately ran a validation test by comparing PCA on rank-transformed variables to PCA on the original data. The validation showed that the results of PCA on rank-transformed variables were not qualitatively different from those of standard PCA, which is reported in the paper (subheading “Statistical Procedures”). Moving to the Spearman definition for the sake of correlation analysis would therefore introduce a statistical inconsistency, as it would be equivalent to moving to rank-transformed data for the preliminary analysis, yet staying with the original data for the final analysis; a choice that may be hard to justify. For similar reasons we did not use tail removal to verify our correlations, as strong correlations remained significant under the Spearman definition, which is arguably preferable to data range restriction where outliers are concerned (Linn et al., 1981), while weak correlations were not used for any inference.
Because of these considerations, we decided to keep Pearson correlations in Figure 2A. We did, however, remove some of the correlations from Figure 2B, as they were not easily interpretable, even though they remain significant after rank transformation. We also include the following statement in the text (Results):
“We verified our Pearson correlation-based analysis by calculating Spearman correlation coefficients; after FDR, 112 Spearman correlations were significant (as opposed to 90 for Pearson), and all Pearson correlations with r>0.5 (n=11), including all shown in Figure 2B, remained significant under the Spearman calculation.”
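The comparison described above — Pearson vs. Spearman correlations with Benjamini–Hochberg FDR control — can be sketched in a few lines of plain Python; the toy data here are hypothetical and only demonstrate that the two definitions can disagree for monotone but nonlinear relations:

```python
import math

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    """Mid-ranks, with ties averaged."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        mid = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = mid
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank-transformed data."""
    return pearson(ranks(x), ranks(y))

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: return which hypotheses are rejected."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k_max = rank
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            rejected[i] = True
    return rejected

# A monotone but nonlinear relation: Spearman is exactly 1, Pearson is lower.
x = [1, 2, 3, 4, 5]
y = [1, 8, 27, 64, 125]  # y = x**3
print(round(spearman(x, y), 3))  # 1.0
print(pearson(x, y) < 1.0)       # True
```

This is exactly the pattern behind the n=17 and n=39 discrepant pairs reported above: neither definition strictly dominates the other, which is why the two cannot be used interchangeably for verification.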
Figure 3: Nonparametric analyses are performed on these data, but the data are represented with parametric variance.
The following statement is now added to the paper:
“As many variables analyzed in this paper were not normally distributed, and to stay consistent, we preferred Mann–Whitney two-sample tests for comparing data between groups; P-values of this test are reported as P_{MW}. At the same time, to make referencing and subsequent meta-analysis possible, we always report means and standard deviations in the text.”
That said, if the reviewer still thinks it would be appropriate to report medians and IQR as well, we will be happy to make this change.
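As a reference for the statistic named in the quoted passage, a minimal Mann–Whitney U computation (counting, over all cross-sample pairs, how often one sample exceeds the other, with ties counted as half) might look like this; in practice one would use a library routine that also supplies the p-value:

```python
def mann_whitney_u(a, b):
    """U statistic for sample `a` vs sample `b`: the number of pairs (x, y),
    x from a and y from b, with x > y, counting ties as half a win.
    Significance is then assessed from U via tables or, for larger samples,
    a normal approximation (omitted here)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# U ranges from 0 (every value of `a` below every value of `b`)
# to len(a)*len(b) (the reverse).
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # 0.0
print(mann_whitney_u([4, 5, 6], [1, 2, 3]))  # 9.0
```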
It is unclear what the shaded area represents, or its statistical derivation or value.
Corrected description:
“All cell properties that significantly changed with development are shown here as mean values (central line) and standard deviations (whiskers and shading). Transitions between points are shown as shape-preserving piecewise cubic interpolations.”
Figure 5: If I understand correctly, the authors are making the conclusion that the PCA space changes across developmental time by doing statistics on statistics (an ANOVA of Principal Components). I'm wondering if there are other examples of this approach and whether this derived level of analysis is appropriate. Citations would be appreciated.
By definition, the principal components are linear combinations of the original variables, and as such they can be analyzed with any statistical methods, in the same way as any other transformations of the original variables can be analyzed. In fact, the very idea of the dimensionality reduction approach is to replace the original dataset with its projection into a useful space of lower dimensionality, and to perform all subsequent analysis there. This was and remains the main goal of this family of methods, linear and nonlinear alike, and in most modern applications dimensionality reduction is used not as an exploratory technique, but rather as a preliminary step before the main analyses occur. As sample citations, please refer to the introductory sections of these texts: (Ghodsi, 2006; Sorzano et al., 2014), page 348 of (McKillup, 2011), or page 245 of (Dytham, 2011).
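As a concrete example of this workflow: principal-component scores are ordinary numbers that can be fed to an ANOVA or any other downstream test. The sketch below computes first-PC scores for a two-variable toy dataset using the closed-form eigendecomposition of the 2×2 covariance matrix (all numbers hypothetical):

```python
import math

def pc1_scores(xs, ys):
    """First-principal-component scores for a two-variable dataset, via the
    closed-form eigendecomposition of the 2x2 covariance matrix."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cx = [x - mx for x in xs]
    cy = [y - my for y in ys]
    a = sum(v * v for v in cx) / (n - 1)              # var(x)
    c = sum(v * v for v in cy) / (n - 1)              # var(y)
    b = sum(u * v for u, v in zip(cx, cy)) / (n - 1)  # cov(x, y)
    # Largest eigenvalue of [[a, b], [b, c]] and its eigenvector:
    lam = ((a + c) + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2
    ex, ey = (b, lam - a) if b != 0 else ((1.0, 0.0) if a >= c else (0.0, 1.0))
    norm = math.hypot(ex, ey)
    ex, ey = ex / norm, ey / norm
    # Project centered data onto the unit eigenvector -> PC1 scores
    return [u * ex + v * ey for u, v in zip(cx, cy)]

# Perfectly correlated variables: PC1 captures all the variance, and the
# resulting scores are just numbers, ready for any downstream test.
scores = pc1_scores([1, 2, 3, 4], [2, 4, 6, 8])
print([round(s, 3) for s in scores])  # [-3.354, -1.118, 1.118, 3.354]
```

For the 33-variable dataset of the paper, the same logic applies with a general-purpose eigendecomposition in place of the 2×2 closed form.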
Figure 6: It is not clear what the origins of the shaded parts of Figure 6E are? Are these a result of a statistical measure?
This has been corrected (“Projection of cells from visually stimulated stage 48–49 animals (black) into PCA space defined by the analysis of the naïve dataset, with naïve cells from stage 48–49 animals shown in blue. Shading shows estimated density kernels for the respective groups”).
References:
Chen SX, Tari PK, She K, Haas K (2010) Neurexin–neuroligin cell adhesion complexes contribute to synaptotropic dendritogenesis via growth stabilization mechanisms in vivo. Neuron 67:967–983.
Dong W, Aizenman CD (2012) A competition-based mechanism mediates developmental refinement of tectal neuron receptive fields. J Neurosci 32:16872–16879.
Dytham C (2011) Choosing and using statistics: a biologist's guide, 3rd Edition. Hoboken, NJ: Wiley-Blackwell.
Ghodsi A (2006) Dimensionality Reduction: A Short Tutorial. Waterloo, Ontario, Canada: Department of Statistics and Actuarial Science, University of Waterloo.
Lazar G, Szekely G (1967) Golgi studies on the optic center of the frog. J Hirnforsch 9:329–344.
Linn RL, Harnisch DL, Dunbar SB (1981) Corrections for range restriction: An empirical investigation of conditions resulting in conservative corrections. Journal of Applied Psychology 66:655.
Marshak S, Meynard MM, De Vries YA, Kidane AH, Cohen-Cory S (2012) Cell-autonomous alterations in dendritic arbor morphology and connectivity induced by overexpression of MeCP2 in Xenopus central neurons in vivo. PLoS One 7:e33153.
McKillup S (2011) Statistics explained: an Introductory guide for life scientists, 2nd Edition. Cambridge; New York: Cambridge University Press.
Miraucourt LS, Silva JS, Burgos K, Li J, Abe H, Ruthazer ES, Cline HT (2012) GABA expression and regulation by sensory experience in the developing visual system. PLoS One 7:e29086.
Podgorski K, Dunfield D, Haas K (2012) Functional clustering drives encoding improvement in a developing brain network during awake visual learning. PLoS Biol 10:e1001236.
Sorzano COS, Vargas J, Pascual-Montano A (2014) A survey of dimensionality reduction techniques. arXiv preprint arXiv:1403.2877.
Yamamoto K, Nakata M, Nakagawa H (2003) Input and output characteristics of collision avoidance behavior in the frog Rana catesbeiana. Brain Behav Evolut 62:201–211.
[Editors' note: the author responses to the re-review follow.]
Essential revisions:The consensus reached was this is a valuable study well worth publishing, and it was strongly felt that the data set is an important part of the study and should be made public.
In our updated submission we have now included all the analysis data as supplementary files. Furthermore, in response to the consensus statement of the reviewers we have uploaded our entire data set to Dryad, including the raw electrophysiological traces of every cell, and the table of extracted parameters and data index file. These data can be accessed at doi:10.5061/dryad.18kk6.
This extensive dataset can be further used to generate physiologically based computational models of the optic tectum, as well as for further analysis of the multiple relationships between various electrophysiological parameters in developing neurons.
https://doi.org/10.7554/eLife.11351.018
Article and author information
Author details
Funding
National Institutes of Health (Institutional T32)
 Christopher M Ciarleglio
National Science Foundation (IOS 1353044)
 Arseny S Khakhalin
 Carlos D Aizenman
Brown University (Fox postdoctoral fellowship, UTRA Program)
 Christopher M Ciarleglio
 Arseny S Khakhalin
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
The authors would like to thank Irina Sears and Phouangmaly Mimi Oupravanh for animal and lab care; Aaron Wetzler (Israel Institute of Technology) for sharing useful MATLAB scripts; Andrey Shabalin (Virginia Commonwealth University) for his advice on statistical methods used in this paper. This work was supported by T32 MH01911819 (B. Connors for CMC), the Sidney A. Fox and Dorothea Doctors Fox Postdoctoral Fellowships for Vision Research (CMC, ASK), and NSF IOS 1353044 (CDA).
Ethics
Animal experimentation: All handling of animals was approved by Brown University IACUC in accordance with NIH guidelines. The animal protocol used for these experiments is "Regulation of Neural Excitability and Synaptic Function by Experience in the Developing Visual System (#1308000008C002)".
Reviewing Editor
 Mark CW van Rossum, University of Edinburgh, United Kingdom
Publication history
 Received: September 3, 2015
 Accepted: November 12, 2015
 Accepted Manuscript published: November 14, 2015 (version 1)
 Version of Record published: December 16, 2015 (version 2)
Copyright
© 2015, Ciarleglio et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.