Population codes enable learning from few examples by shaping inductive bias
Abstract
Learning from a limited number of experiences requires suitable inductive biases. To identify how inductive biases are implemented in and shaped by neural codes, we analyze sample-efficient learning of arbitrary stimulus-response maps from arbitrary neural codes with biologically plausible readouts. We develop an analytical theory that predicts the generalization error of the readout as a function of the number of observed examples. Our theory illustrates in a mathematically precise way how the structure of population codes shapes inductive bias, and how a match between the code and the task is crucial for sample-efficient learning. It elucidates a bias to explain observed data with simple stimulus-response maps. Using recordings from the mouse primary visual cortex, we demonstrate the existence of an efficiency bias towards low-frequency orientation discrimination tasks for grating stimuli and low spatial frequency reconstruction tasks for natural images. We reproduce the discrimination bias in a simple model of primary visual cortex, and further show how invariances in the code to certain stimulus variations alter learning performance. We extend our methods to time-dependent neural codes and predict the sample efficiency of readouts from recurrent networks. We observe that many different codes can support the same inductive bias. By analyzing recordings from the mouse primary visual cortex, we demonstrate that biological codes have lower total activity than other codes with identical bias. Finally, we discuss implications of our theory in the context of recent developments in neuroscience and artificial intelligence. Overall, our study provides a concrete method for elucidating inductive biases of the brain and promotes sample-efficient learning as a general normative coding principle.
Editor's evaluation
This important study presents a theory of generalization in neural population codes and proposes sample efficiency as a new normative principle. The theory can be used to identify the set of 'easily learnable' stimulus-response mappings from neural data and makes strong behavioral predictions that can be evaluated experimentally. Overall, the new method for elucidating inductive biases of the brain is highly compelling and will be of interest to theoretical and experimental neuroscientists working towards understanding how the cortex works.
https://doi.org/10.7554/eLife.78606.sa0

Introduction
The ability to learn quickly is crucial for survival in a complex and ever-changing environment, and the brain effectively supports this capability. Often, only a few experiences are sufficient to learn a task, whether acquiring a new word (Carey and Bartlett, 1978) or recognizing a new face (Peterson et al., 2009). Despite the importance and ubiquity of sample-efficient learning, our understanding of the brain’s information encoding strategies that support this faculty remains poor (Tenenbaum et al., 2011; Lake et al., 2017; Sinz et al., 2019).
In particular, when learning and generalizing from past experiences, and especially from few experiences, the brain relies on implicit assumptions it carries about the world, or its inductive biases (Wolpert, 1996; Sinz et al., 2019). Reliance on inductive bias is not a choice: inferring a general rule from finite observations is an ill-posed problem which requires prior assumptions, since many hypotheses can explain the same observed experiences (Hume, 1998). Consider learning a rule that maps photoreceptor responses to a prediction of whether an observed object is a threat or is neutral. Given a limited number of visual experiences of objects and their threat status, many threat-detection rules are consistent with these experiences. By choosing one of these threat-detection rules, the nervous system reveals an inductive bias. Without the right biases that suit the task at hand, successful generalization is impossible (Wolpert, 1996; Sinz et al., 2019). In order to understand why we can quickly learn to perform certain tasks accurately but not others, we must understand the brain’s inductive biases (Tenenbaum et al., 2011; Lake et al., 2017; Sinz et al., 2019).
In this paper, we study sample-efficient learning and inductive biases in a general neural circuit model which comprises a population of sensory neurons and a readout neuron learning a stimulus-response map with a biologically plausible learning rule (Figure 1A). For this circuit and learning rule, inductive bias arises from the nature of the neural code for sensory stimuli, specifically its similarity structure. While different population codes can encode the same stimulus variables and allow learning of the same output with perfect performance given infinitely many samples, learning performance can depend dramatically on the code when restricted to a small number of samples, where the reliance on and the effect of inductive bias are strong (Figure 1B, C and D). Given the same sensory examples and their associated response values, the readout neuron may make drastically different predictions depending on the inductive bias set by the nature of the code, leading to successful or failed generalization (Figure 1C and D). We say that a code and a learning rule, together, have a good inductive bias for a task if the task can be learned from a small number of examples.
In order to understand how population codes shape inductive bias and allow fast learning of certain tasks over others with a biologically plausible learning rule, we develop an analytical theory of the readout neuron’s learning performance as a function of the number of sampled examples, or sample size. We find that the readout’s performance is completely determined by the code’s kernel, a function which takes in pairs of population response vectors and outputs a representational similarity defined by the inner product of these vectors. We demonstrate that the spectral properties of the kernel introduce an inductive bias toward explaining sampled data with simple stimulus-response maps and determine the compatibility of the population code with the learning task, and hence the sample efficiency of learning. We apply this theory to data from the mouse primary visual cortex (V1) (Stringer et al., 2021; Pachitariu et al., 2019; Stringer et al., 2018a; Stringer et al., 2018b), and show that mouse V1 responses support sample-efficient learning of low frequency orientation discrimination and low spatial frequency reconstruction tasks over high frequency ones. We demonstrate the discrimination bias in a simple model of V1 and show how response nonlinearity, sparsity, and the relative proportion of simple and complex cells influence the code’s bias and performance on learning tasks, including ones that involve invariances. We extend our theory to temporal population codes, including codes generated by recurrent neural networks learning a delayed response task. We observe that many codes can support the same kernel function; however, by analyzing data from mouse V1 (Stringer et al., 2021; Pachitariu et al., 2019; Stringer et al., 2018a; Stringer et al., 2018b), we find that the biological code is metabolically more efficient than others.
Overall, our results demonstrate that for a fixed learning rule, the neural sensory representation imposes an inductive bias over the space of learning tasks, allowing some tasks to be learned by a downstream neuron more sample-efficiently than others. Our work provides a concrete method for elucidating inductive biases of populations of neurons and suggests sample-efficient learning as a novel functional role for population codes.
Results
Problem setup
We denote vectors with bold lowercase symbols $\mathbf{r}$ and matrices with bold uppercase symbols $\mathbf{K}$. We denote an average of a function $g(\mathit{\theta})$ over a random variable $\mathit{\theta}$ as ${\langle g(\mathit{\theta})\rangle}_{\mathit{\theta}}$. Euclidean inner products between vectors are denoted either as $\mathbf{x}\cdot \mathbf{y}$ or ${\mathbf{x}}^{\top}\mathbf{y}$, and real Euclidean $n$-space is denoted ${\mathbb{R}}^{n}$. Sets of variables are represented with $\{\cdot \}$.
We consider a population of $N$ neurons whose responses, $\{{r}_{1}(\mathit{\theta}),{r}_{2}(\mathit{\theta}),\mathrm{\dots},{r}_{N}(\mathit{\theta})\}$, vary with the input stimulus, which is parameterized by a vector variable $\mathit{\theta}\in {\mathbb{R}}^{d}$, such as the orientation and the phase of a grating (Figure 1A). These responses define the population code. Throughout this work, we will mostly assume that this population code is deterministic: that identical stimuli generate identical neural responses.
From the population responses, a readout neuron learns its weights $\mathbf{w}$ to approximate a stimulus-response map, or a target function $y(\mathit{\theta})$, such as one that classifies stimuli as appetitive ($y=+1$) or aversive ($y=-1$), or a smoother one that attaches intermediate values of valence. We emphasize that in our model only the readout neuron performs learning, and the population code is assumed to be static through learning. Our theory is general in its assumptions about the structure of the population code and the stimulus-response map considered (Methods Theory of generalization), and can apply to many scenarios.
The readout neuron learns from $P$ stimulus-response examples with the goal of generalizing to previously unseen ones. Example stimuli ${\mathit{\theta}}^{\mu}$, ($\mu =1,\mathrm{\dots},P$) are sampled from a probability distribution describing stimulus statistics $p(\mathit{\theta})$. This distribution can be natural or artificially created, for example, for a laboratory experiment (Appendix Discrete stimulus spaces: finding eigenfunctions with matrix eigendecomposition). From the set of learning examples, $\mathcal{D}=\{{\mathit{\theta}}^{\mu},y({\mathit{\theta}}^{\mu})\}_{\mu =1}^{P}$, the readout weights are learned with the local, biologically plausible delta-rule, $\mathrm{\Delta}{w}_{j}=\eta {\sum}_{\mu}{r}_{j}({\mathit{\theta}}^{\mu})\left(y({\mathit{\theta}}^{\mu})-\mathbf{r}({\mathit{\theta}}^{\mu})\cdot \mathbf{w}\right)$, where $\eta$ is a learning rate (Figure 1A). Learning with weight decay, which privileges readouts with smaller norm, can also be accommodated in our theory as we discuss in (Appendix Weight decay and ridge regression). With or without weight decay, the learning rule converges to a unique set of weights ${\mathbf{w}}^{\ast}(\mathcal{D})$ (Appendix Convergence of the delta-rule without weight decay). Generalization error with these weights is given by

$${E}_{g}(\mathcal{D})={\left\langle {\left(y(\mathit{\theta})-{\mathbf{w}}^{\ast}(\mathcal{D})\cdot \mathbf{r}(\mathit{\theta})\right)}^{2}\right\rangle}_{\mathit{\theta}},$$
which quantifies the expected error of the trained readout over the entire stimulus distribution $p(\mathit{\theta})$. This quantity will depend on the population code $\mathbf{r}(\mathit{\theta})$, the target function $y(\mathit{\theta})$ and the set of training examples $\mathcal{D}$. Our theoretical analysis of this model provides insights into how populations of neurons encode information and allow sampleefficient learning.
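As a minimal, hypothetical sketch of this setup (a toy code with cosine tuning curves, not the recordings analyzed in this study; all names and parameter values are illustrative), the delta-rule can be simulated directly and its generalization error estimated on held-out stimuli:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 50, 20  # neurons, training examples

# Hypothetical population code: cosine tuning curves with random preferred angles
prefs = rng.uniform(0, 2 * np.pi, N)

def r(theta):
    """Population response vector for a stimulus angle theta."""
    return np.cos(theta - prefs)

thetas = rng.uniform(0, 2 * np.pi, P)   # sampled training stimuli
y = np.sin(thetas)                      # toy target function y(theta)
R = np.stack([r(t) for t in thetas])    # P x N matrix of responses

# Delta-rule: Delta w_j = eta * sum_mu r_j(theta^mu) (y(theta^mu) - r(theta^mu) . w)
w = np.zeros(N)
eta = 1e-3
for _ in range(20000):
    w += eta * R.T @ (y - R @ w)

train_error = np.mean((R @ w - y) ** 2)
test_thetas = np.linspace(0, 2 * np.pi, 100)
gen_error = np.mean([(r(t) @ w - np.sin(t)) ** 2 for t in test_thetas])
```

Because this toy target lies in the span of the cosine code, the rule drives both the training error and the generalization error to (numerically) zero; for targets outside the code's span this would not be the case.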
Kernel structure of population codes controls learning performance
First, we note that the generalization performance of the learned readout on a given task depends entirely on the inner product kernel, defined by $K(\mathit{\theta},{\mathit{\theta}}^{\prime})=\frac{1}{N}{\sum}_{i=1}^{N}{r}_{i}(\mathit{\theta}){r}_{i}({\mathit{\theta}}^{\prime})$, which quantifies the similarity of population responses to two different stimuli $\mathit{\theta}$ and ${\mathit{\theta}}^{\prime}$. The kernel, or similarity matrix, encodes the geometry of the neural responses. Concretely, distances (in neural space) between population vectors for stimuli $\mathit{\theta},{\mathit{\theta}}^{\prime}$ can be computed from the kernel: $\frac{1}{N}{\left\Vert \mathbf{r}(\mathit{\theta})-\mathbf{r}({\mathit{\theta}}^{\prime})\right\Vert}^{2}=K(\mathit{\theta},\mathit{\theta})+K({\mathit{\theta}}^{\prime},{\mathit{\theta}}^{\prime})-2K(\mathit{\theta},{\mathit{\theta}}^{\prime})$ (Edelman, 1998; Kriegeskorte et al., 2008; Laakso and Cottrell, 2000; Kornblith et al., 2019; Cadieu et al., 2014; Pehlevan et al., 2018). The fact that the solution to the learning problem only depends on the kernel is due to the convergence of the learning rule to a unique solution ${\mathbf{w}}^{\ast}(\mathcal{D})$ for the training set $\mathcal{D}$ (Neal, 1994; Girosi et al., 1995). The dataset-dependent fixed point ${\mathbf{w}}^{\ast}(\mathcal{D})$ of the learning rule is a linear combination of the population vectors on the dataset, ${\mathbf{w}}^{\ast}(\mathcal{D})=\frac{1}{N}\sum _{\mu =1}^{P}{\alpha}^{\mu}\mathbf{r}({\mathit{\theta}}^{\mu})$. Thus, the learned function computed by the readout neuron is

$$f(\mathit{\theta})={\mathbf{w}}^{\ast}(\mathcal{D})\cdot \mathbf{r}(\mathit{\theta})=\sum _{\mu =1}^{P}{\alpha}^{\mu}K(\mathit{\theta},{\mathit{\theta}}^{\mu}),$$
where the coefficient vector satisfies $\mathit{\alpha}={\mathbf{K}}^{+}\mathbf{y}$ (Appendix Convergence of the delta-rule without weight decay), the matrix $\mathbf{K}$ has entries ${K}_{\mu \nu}=K({\mathit{\theta}}^{\mu},{\mathit{\theta}}^{\nu})$ and ${y}_{\mu}=y({\mathit{\theta}}^{\mu})$, and ${\mathbf{K}}^{+}$ is the pseudoinverse of $\mathbf{K}$. In these expressions the population code only appears through the kernel $K$, showing that the kernel alone controls the learned response pattern. This result applies also to nonlinear readouts (Appendix Convergence of delta-rule for nonlinear readouts), showing that the kernel can control the learned solution in a variety of cases.
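The equivalence between the weight-space fixed point and the kernel form of the prediction can be checked numerically. This is a hypothetical sketch with a random code (not the V1 data); the identity $f(\mathit{\theta})=\sum_{\mu}{\alpha}^{\mu}K(\mathit{\theta},{\mathit{\theta}}^{\mu})$ with $\mathit{\alpha}={\mathbf{K}}^{+}\mathbf{y}$ is from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 40, 10

R_train = rng.standard_normal((P, N))   # rows: population vectors r(theta^mu)
y = rng.standard_normal(P)              # target values on the training set

# Weight-space view: minimum-norm fixed point of the learning rule
w_star = np.linalg.pinv(R_train) @ y

# Kernel view: K_{mu nu} = (1/N) r(theta^mu) . r(theta^nu), alpha = K^+ y
K = R_train @ R_train.T / N
alpha = np.linalg.pinv(K) @ y

r_test = rng.standard_normal(N)         # population vector for a new stimulus
f_weight = r_test @ w_star                  # f = w* . r(theta)
f_kernel = alpha @ (R_train @ r_test) / N   # f = sum_mu alpha^mu K(theta, theta^mu)
```

The two predictions agree to machine precision, illustrating that only the kernel, not the individual tuning curves, enters the learned function.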
Since predictions depend only on the kernel, a large set of codes achieve identical performance on learning tasks. This is because the kernel is invariant with respect to rotations of the population code. An orthogonal transformation $\mathbf{Q}$ applied to a population code $\mathbf{r}(\mathit{\theta})$ generates a new code $\stackrel{~}{\mathbf{r}}(\mathit{\theta})=\mathbf{Qr}(\mathit{\theta})$ with an identical kernel (Appendix Alternative neural codes with same kernel), since $\frac{1}{N}\stackrel{~}{\mathbf{r}}(\mathit{\theta})\cdot \stackrel{~}{\mathbf{r}}({\mathit{\theta}}^{\prime})=\frac{1}{N}\mathbf{r}{(\mathit{\theta})}^{\top}{\mathbf{Q}}^{\top}\mathbf{Qr}({\mathit{\theta}}^{\prime})=\frac{1}{N}\mathbf{r}(\mathit{\theta})\cdot \mathbf{r}({\mathit{\theta}}^{\prime})$. Codes $\mathbf{r}(\mathit{\theta})$ and $\stackrel{~}{\mathbf{r}}(\mathit{\theta})$ will therefore have identical readout performance on all possible learning tasks. We illustrate this degeneracy in Figure 2 using a publicly available dataset consisting of activity recorded from ∼20,000 neurons in the primary visual cortex of a mouse while it was shown static gratings (Stringer et al., 2021; Pachitariu et al., 2019). An original code $\mathbf{r}(\mathit{\theta})$ is rotated to generate $\stackrel{~}{\mathbf{r}}(\mathit{\theta})$ (Figure 2A); the two codes have the same kernel (Figure 2B) and the same performance on a learning task (Figure 2C).
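This rotation invariance is easy to verify on a synthetic response matrix (a hypothetical stand-in for recorded activity; the QR construction of a random orthogonal $\mathbf{Q}$ is our own illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 30, 12
R = rng.standard_normal((N, M))   # columns: responses to M discrete stimuli

# A random orthogonal matrix Q (a rotation/reflection of neural space)
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
R_rot = Q @ R                     # rotated code r~(theta) = Q r(theta)

K = R.T @ R / N                   # kernel of the original code
K_rot = R_rot.T @ R_rot / N       # kernel of the rotated code
max_diff = np.abs(K - K_rot).max()
```

The two kernels match up to floating-point error, so any readout trained on either code makes the same predictions.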
Codetask alignment governs generalization
We next examine how the population code affects the generalization performance of the readout. We calculated analytical expressions for the average generalization error in a task defined by the target response $y(\mathit{\theta})$ after observing $P$ stimuli, using methods from statistical physics (Methods Theory of generalization). Because the relevant quantity in learning performance is the kernel, we leveraged results from our previous work studying generalization in kernel regression (Bordelon et al., 2020; Canatar et al., 2021), and approximated the generalization error averaged over all possible realizations of the training dataset composed of $P$ stimuli, ${E}_{g}={\left\langle {E}_{g}(\mathcal{D})\right\rangle}_{\mathcal{D}}$. As $P$ increases, the variance in ${E}_{g}$ due to the composition of the dataset decreases, and our expressions become descriptive of the typical case. Our final analytical result is given in Equation (11) in Methods Theory of generalization. We provide details of our calculations in Methods Theory of generalization and Appendix Theory of generalization, and focus on their implications here.
One of our main observations is that given a population code $\mathbf{r}(\mathit{\theta})$, the singular value decomposition (SVD) of the code gives the appropriate basis to analyze the inductive biases of the readouts (Figure 3A). The tuning curves for individual neurons ${r}_{i}(\mathit{\theta})$ form an $N$-by-$M$ matrix $\mathbf{R}$, where $M$, possibly infinite, is the number of all possible stimuli. We discuss the SVD for continuous stimulus spaces in Appendix Singular value decomposition of continuous population responses. The left-singular vectors (or principal axes) and singular values of this matrix have been used in neuroscience for describing lower dimensional structure in the neural activity and estimating its dimensionality, see e.g. (Stopfer et al., 2003; Kato et al., 2015; Bathellier et al., 2008; Gallego et al., 2017; Sadtler et al., 2014; Stringer et al., 2018b, Stringer et al., 2021; Litwin-Kumar et al., 2017; Gao et al., 2017; Gao and Ganguli, 2015). We found that the function approximation properties of the code are controlled by the singular values, or rather their squares $\{{\lambda}_{k}\}$, which give variances along principal axes, indexed in decreasing order, and the corresponding right singular vectors $\{{\psi}_{k}(\mathit{\theta})\}$, which are also the kernel eigenfunctions (Methods Theory of generalization and Appendix Singular value decomposition of continuous population responses).
This follows from the fact that learned response (Equation (2)) is only a function of the kernel $K$, and the eigenvalues ${\lambda}_{k}$ and orthonormal (uncorrelated) eigenfunctions ${\psi}_{k}(\mathit{\theta})$ collectively define the code’s innerproduct kernel $K(\mathit{\theta},{\mathit{\theta}}^{\prime})$ through an eigendecomposition $K(\mathit{\theta},{\mathit{\theta}}^{\prime})=\frac{1}{N}{\sum}_{i=1}^{N}{r}_{i}(\mathit{\theta}){r}_{i}({\mathit{\theta}}^{\prime})={\sum}_{k}{\lambda}_{k}{\psi}_{k}(\mathit{\theta}){\psi}_{k}({\mathit{\theta}}^{\prime})$ (Mercer, 1909) (Methods Theory of generalization and Appendix Theory of generalization).
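For a discrete stimulus set with uniform sampling, this correspondence between the SVD of the response matrix and the Mercer decomposition of the kernel can be checked in a few lines (a hypothetical random response matrix stands in for data):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 60, 8                      # neurons, discrete stimuli (uniform measure)
R = rng.standard_normal((N, M))   # hypothetical response matrix

# SVD of the code: rows of PsiT are the kernel eigenfunctions psi_k
U, S, PsiT = np.linalg.svd(R, full_matrices=False)
lams = S ** 2 / N                 # kernel eigenvalues lambda_k, in decreasing order

# Mercer decomposition: K(theta, theta') = sum_k lambda_k psi_k(theta) psi_k(theta')
K = R.T @ R / N
K_mercer = PsiT.T @ np.diag(lams) @ PsiT
```

The reconstructed kernel `K_mercer` equals the inner-product kernel `K` exactly, and the squared singular values give the variances $\{\lambda_k\}$ along principal axes.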
Our analysis shows the existence of a bias in the readout towards learning certain target responses faster than others. The target response $y(\mathit{\theta})={\sum}_{k}{v}_{k}{\psi}_{k}(\mathit{\theta})$ and the learned readout response $f(\mathit{\theta})=\sum _{k}{\hat{v}}_{k}(\mathcal{D}){\psi}_{k}(\mathit{\theta})$ can be expressed in terms of these eigenfunctions ${\psi}_{k}$. Our theory shows that the readout’s generalization is better if the target function $y(\mathit{\theta})$ is aligned with the top eigenfunctions ${\psi}_{k}$, equivalent to ${v}_{k}^{2}$ decaying rapidly with $k$ (Appendix Spectral bias and code-task alignment). We formalize this notion by the following metric. Mathematically, generalization error can be decomposed into normalized estimation errors ${E}_{k}$ for the coefficients of these eigenfunctions ${\psi}_{k}$, ${\left\langle {E}_{g}\right\rangle}_{\mathcal{D}}=\sum _{k}{v}_{k}^{2}{E}_{k}$, where ${E}_{k}={\left\langle ({\hat{v}}_{k}(\mathcal{D})-{v}_{k}{)}^{2}\right\rangle}_{\mathcal{D}}/{v}_{k}^{2}$. We found that the ordering of the eigenvalues ${\lambda}_{k}$ controls the rates at which these mode errors ${E}_{k}$ decrease as $P$ increases (Methods Theory of generalization, Appendix Spectral bias and code-task alignment), (Bordelon et al., 2020): ${\lambda}_{k}>{\lambda}_{\ell}\implies {E}_{k}<{E}_{\ell}$. Hence, larger eigenvalues mean lower normalized mode errors ${E}_{k}$, and thus lower generalization error. We term this phenomenon the spectral bias of the readout. Based on this observation, we propose code-task alignment as a principle for good generalization. To quantify code-task alignment, we use a metric which was introduced in Canatar et al., 2021 to measure the compatibility of a kernel with a learning task.
This is the cumulative power distribution $C(k)$, which measures the total power of the target function in the top $k$ eigenmodes, normalized by the total power (Canatar et al., 2021):

$$C(k)=\frac{\sum _{l=1}^{k}{v}_{l}^{2}}{\sum _{l}{v}_{l}^{2}}.$$
Stimulus-response maps that have high alignment with the population code’s kernel will have quickly rising cumulative power distributions $C(k)$, since a large proportion of power is placed in the top modes. Target responses with high $C(k)$ can be learned with fewer training samples than target responses with low $C(k)$, since the mode errors ${E}_{k}$ are ordered for all $P$ (Appendix Spectral bias and code-task alignment).
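The contrast between a well-aligned and a poorly aligned target is easy to see from the definition of $C(k)$. Here is a minimal sketch with two hypothetical coefficient vectors $\{v_k\}$ (illustrative values, not fit to data):

```python
import numpy as np

# Hypothetical target coefficients v_k in the kernel eigenbasis
v_aligned = 2.0 ** -np.arange(1, 11)   # power concentrated in the top modes
v_flat = np.ones(10)                   # power spread evenly across modes

def cumulative_power(v):
    """C(k) = sum_{l<=k} v_l^2 / sum_l v_l^2, for k = 1..len(v)."""
    p = v ** 2
    return np.cumsum(p) / p.sum()

C_aligned = cumulative_power(v_aligned)
C_flat = cumulative_power(v_flat)
```

`C_aligned` rises quickly toward 1 while `C_flat` grows only linearly in $k$, mirroring the aligned versus misaligned tasks discussed in the text.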
Probing learning biases in neural data
Our theory can be used to probe the learning biases of neural populations. Here, we provide various examples of this using publicly available calcium imaging recordings from mouse primary visual cortex (V1). Our examples illustrate how our theory can be used to analyze neural data.
We first analyzed population responses to static grating stimuli oriented at an angle $\theta $ (Stringer et al., 2021; Pachitariu et al., 2019). We found that the kernel eigenfunctions have sinusoidal shape with differing frequency. The ordering of the eigenvalues and eigenfunctions in Figure 3A (and Figure 3—figure supplement 1) indicates a frequency bias: lower frequency functions of $\theta $ are easier to estimate at small sample sizes.
We tested this idea by constructing two different orientation discrimination tasks, shown in Figure 3B and C, where we assign static grating orientations to positive or negative valence with square-wave functions of $\theta$ of different frequencies. We trained the readout using a subset of the experimentally measured neural responses, and measured the readout’s generalization performance. We found that the low-frequency task has a more rapidly rising cumulative power distribution $C(k)$ (Figure 3B). Using our theory of generalization, we predicted learning curves for these two tasks, which express the generalization error as a function of the number of sampled stimuli $P$. The error for the low-frequency task is lower at all sample sizes than that for the harder, high-frequency task. The theoretical predictions and numerical experiments show perfect agreement (Figure 3B). More intuition can be gained by visualizing the projection of the neural response along the top principal axes (Figure 3C). For the low-frequency task, the two target values are well separated along the top two axes. However, the high-frequency task is not well separated along even the top four axes (Figure 3C).
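The frequency bias can be reproduced in miniature with kernel regression on a smooth toy kernel (a hypothetical von Mises-like kernel standing in for the measured V1 kernel; the square-wave tasks follow the construction in the text, but all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def kernel(a, b):
    """Toy smooth kernel on orientation (period pi), a stand-in for the V1 kernel."""
    return np.exp(np.cos(2 * (a[:, None] - b[None, :])) - 1)

def square_wave(theta, freq):
    """+1 / -1 valence assigned by a square wave of the given frequency."""
    return np.sign(np.sin(2 * freq * theta) + 1e-12)

def gen_error(freq, P, n_test=500, n_trials=20):
    """Average generalization error of kernel regression with P random samples."""
    test = np.linspace(0, np.pi, n_test, endpoint=False)
    errs = []
    for _ in range(n_trials):
        tr = rng.uniform(0, np.pi, P)
        K = kernel(tr, tr) + 1e-8 * np.eye(P)   # small jitter for stability
        alpha = np.linalg.solve(K, square_wave(tr, freq))
        f = kernel(test, tr) @ alpha
        errs.append(np.mean((f - square_wave(test, freq)) ** 2))
    return float(np.mean(errs))

err_low = gen_error(freq=1, P=20)    # low-frequency discrimination task
err_high = gen_error(freq=4, P=20)   # high-frequency discrimination task
```

Because the smooth kernel places most of its spectral mass on low-frequency modes, the low-frequency task is learned with far fewer samples, just as in Figure 3B.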
Using the same ideas, our theory can also give insight into tasks which the V1 population code is ill-suited to learn. For the task of identifying mice and birds (Stringer et al., 2018b, Stringer et al., 2018a), the linear rise in cumulative power indicates that there is roughly equal power along all kernel eigenfunctions, and hence that the representation is poorly aligned to this task (Figure 3D).
To illustrate how our approach can be used for different learning problems, we evaluate the ability of linear readouts to reconstruct natural images from neural responses to those images (Figure 4). The ability to reconstruct sensory stimuli from a neural code is an influential normative principle for primary visual cortex (Olshausen and Field, 1997). Here, we ask which aspects of the presented natural scene stimuli are easiest to learn to reconstruct. Since mouse V1 neurons tend to be selective towards low spatial frequency bands (Niell and Stryker, 2008; Bonin et al., 2011; Vreysen et al., 2012), we consider reconstruction of bandpass filtered images with spatial frequency wavevector $\mathbf{k}\in {\mathbb{R}}^{2}$ constrained to an annulus $|\mathbf{k}|\in \left[\sqrt{\max({s}_{max}^{2}-{r}^{2},0)},{s}_{max}\right]$ for $r=0.2$ (in units of ${\text{pixels}}^{-1}$) and plot the cumulative power $C(k)$ associated with each choice of the upper limit ${s}_{max}$ (Figure 4C and D). The frequency cutoffs were chosen in this way to fix the volume in Fourier space at ${V}_{\mathbf{k}}=\pi {r}^{2}$ for $r<{s}_{max}$, which quantifies the dimension of the function space. We see that the lower frequency band-limited images are easier to reconstruct, as evidenced by their cumulative power $C(k)$ and learning curves ${E}_{g}$ (Figure 4D and E). This reflects the fact that the population preferentially encodes low spatial frequency content in the image (Figure 4F). Experiments with additional values of $r$ are provided in Figure 4—figure supplement 1, with additional details found in the Appendix Visual scene reconstruction task.
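The equal-volume annulus construction above can be sketched as a Fourier-domain mask (a hypothetical minimal implementation; the white-noise "image" and array sizes are placeholders, not the natural scenes used in the study):

```python
import numpy as np

def bandpass_mask(shape, s_max, r=0.2):
    """Annular Fourier mask keeping |k| in [sqrt(max(s_max^2 - r^2, 0)), s_max].

    Frequencies are in cycles/pixel; for r < s_max the annulus always has
    area pi * r^2, so every band occupies the same volume in Fourier space."""
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    kmag = np.sqrt(kx ** 2 + ky ** 2)
    lo = np.sqrt(max(s_max ** 2 - r ** 2, 0.0))
    return (kmag >= lo) & (kmag <= s_max)

def bandpass_filter(img, s_max, r=0.2):
    """Keep only the Fourier content of img inside the annulus."""
    F = np.fft.fft2(img) * bandpass_mask(img.shape, s_max, r)
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(5)
img = rng.standard_normal((64, 64))           # placeholder for a natural image
low_band = bandpass_filter(img, s_max=0.2)    # lowest band: |k| <= 0.2
high_band = bandpass_filter(img, s_max=0.5)   # a higher band of equal Fourier area
```

Each choice of `s_max` then defines a separate reconstruction target of matched dimensionality, as in Figure 4C and D.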
Mechanisms of spectral bias and codetask alignment in a simple model of V1
How do population-level inductive biases arise from properties of single neurons? To illustrate that a variety of mechanisms may be involved in a complex manner, we study a simple model of V1 and elucidate neural mechanisms that lead to the low-frequency bias at the population level. In particular, we focus on neural nonlinearities and selectivity profiles.
We model responses of V1 neurons as photoreceptor inputs passed through Gabor filters and a subsequent, experimentally motivated power-law nonlinearity (Adelson and Bergen, 1985; Olshausen and Field, 1997; Rumyantsev et al., 2020), modeling a population of orientation-selective simple cells (Figure 5A) (see Appendix A simple feedforward model of V1). In this model, the kernel for static gratings with orientation $\theta \in [0,\pi ]$ is of the form $K(\theta ,{\theta}^{\prime})=\kappa (\theta -{\theta}^{\prime})$, and, as a consequence, the eigenfunctions of the kernel in this setting are Fourier modes. The eigenvalues, and hence the strength of the spectral bias, are determined by the nonlinearity, as we discuss in Appendix Gabor model spectral bias and fit to V1 data. We numerically fit the parameters of the nonlinearity to the V1 responses and use these parameters in our investigations in Figure 5—figure supplement 1.
Next, to further illustrate the importance of codetask alignment, we study how invariances in the code to stimulus variations may affect the learning performance. We introduce complex cells in addition to simple cells in our model with proportion $s\in [0,1]$ of simple cells (Appendix Gabor model spectral bias and fit to V1 data; Figure 5A), and allow phase, $\varphi $, variations in static gratings. We use the energy model (Adelson and Bergen, 1985; Simoncelli and Heeger, 1998) to capture the phase invariant complex cell responses (Appendix Phase variation, complex cells and invariance, complex cell populations are phase invariant). We reason that in tasks that do not depend on phase information, complex cells should improve sample efficiency.
In this model, the kernel for the V1 population is a convex combination of the kernels for the simple and complex cell populations, ${K}_{V1}(\theta ,{\theta}^{\prime},\varphi ,{\varphi}^{\prime})=s{K}_{s}(\theta ,{\theta}^{\prime},\varphi ,{\varphi}^{\prime})+(1-s){K}_{c}(\theta ,{\theta}^{\prime})$, where ${K}_{s}$ is the kernel for a pure simple cell population, which depends on both orientation and phase, and ${K}_{c}$ is the kernel of a pure complex cell population, which is invariant to phase (Appendix Complex cell populations are phase invariant). Figure 5C shows top kernel eigenfunctions for various values of $s$, elucidating the inductive bias of the readout.
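A toy version of this convex combination makes the phase-invariance structure concrete. The specific kernel forms below are hypothetical stand-ins (simple cosine kernels, not the Gabor-derived kernels of the appendix); only the mixture $K_{V1}=sK_s+(1-s)K_c$ follows the text:

```python
import numpy as np

def K_simple(th, thp, ph, php):
    """Toy simple-cell kernel: depends on orientation and phase differences."""
    return np.cos(2 * (th - thp)) * np.cos(ph - php)

def K_complex(th, thp):
    """Toy complex-cell kernel: invariant to phase."""
    return np.cos(2 * (th - thp))

def K_V1(th, thp, ph, php, s):
    """Convex combination K_V1 = s * K_s + (1 - s) * K_c from the text."""
    return s * K_simple(th, thp, ph, php) + (1 - s) * K_complex(th, thp)

# Pure complex-cell population (s = 0): the kernel ignores phase entirely
k_complex_a = K_V1(0.3, 0.3, 0.0, 0.0, s=0.0)
k_complex_b = K_V1(0.3, 0.3, 0.0, 1.5, s=0.0)

# Pure simple-cell population (s = 1): the kernel changes with phase
k_simple_a = K_V1(0.3, 0.3, 0.0, 0.0, s=1.0)
k_simple_b = K_V1(0.3, 0.3, 0.0, 1.5, s=1.0)
```

At $s=0$ the kernel assigns identical similarity regardless of phase, so no phase-dependent target can be represented; at $s=1$ phase differences move the kernel, enabling phase tasks at the cost of orientation-task sample efficiency.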
Figure 5D and E show generalization performance on tasks with varying levels of dependence on phase and orientation. On pure orientation discrimination tasks, increasing the proportion of complex cells by decreasing $s$ improves generalization: increasing the sensitivity to the nuisance phase variable, $\varphi$, only degrades performance. The cumulative power curve is also maximized at $s=0$. However, on a task which depends only on the phase, a pure complex cell population cannot generalize, since variation in the target function due to changes in phase cannot be explained by the code’s responses. In this setting, a pure simple cell population attains optimal performance, and the cumulative power curve is maximized at $s=1$. Lastly, in a nontrivial hybrid task which requires utilization of both variables $\theta ,\varphi$, an optimal mixture $s$ exists for each sample budget $P$ which minimizes the generalization error. The cumulative power curve is maximized at different $s$ values depending on $k$, the component of the target function. This is consistent with an optimal heterogeneous mix, because components of the target are learned successively with increasing sample size. V1 must code for a variety of possible tasks, so we can expect a nontrivial optimal simple cell fraction $s$. We conclude that the degree of invariance required for the set of natural tasks, together with the number of samples, determines the optimal mix of simple and complex cells. We also considered a more realistic model where the relative selectivity of each visual cortex neuron to phase $\varphi$, measured by the F1/F0 ratio, takes on a continuum of possible values, with some cells more invariant to phase and some less invariant. In (Appendix Energy model with partially phase-selective cells, Figure 5—figure supplement 3) we discuss a simple adaptation of the energy model which can interpolate between a population of entirely simple cells and a population of entirely complex cells, giving diverse selectivity in the intermediate regime. We show that this model reproduces the inductive bias of Figure 5.
Small and large sample size behaviors of generalization
Recently, Stringer et al., 2018b argued that the input-output differentiability of the code, governed by the asymptotic rate of spectral decay, may enable better generalization. Our results provide a more nuanced view of the relation between generalization and kernel spectra. First, generalization at low sample sizes crucially depends on the top eigenvalues and eigenfunctions of the code’s kernel, not the tail. Second, generalization requires alignment of the code with the task of interest; non-differentiable codes can generalize well if there is such an alignment. To illustrate these points, here we provide examples where asymptotic conditions on the kernel spectrum are insufficient to describe generalization performance for small sample sizes (Figure 6, Figure 6—figure supplement 1 and Appendix Asymptotic power law scaling of learning curves), and where non-differentiable kernels generalize better than differentiable kernels (Figure 6—figure supplement 2).
Our example demonstrates how a code allowing good generalization for large sample sizes can be disadvantageous for small sample sizes. In Figure 6A, we plot three different populations of neurons with smooth (infinitely differentiable) tuning curves that tile a periodic stimulus variable, such as the direction of a moving grating. The tuning width, $\sigma$, of the tuning curves strongly influences the structure of these codes: narrower widths have more high frequency content, as we illustrate in a random 3D projection of the population code for $\theta \in [0,2\pi ]$ (Figure 6A). Visualizations of the corresponding (von Mises) kernels and their spectra are provided in Figure 6B. The width of the tuning curves controls the bandwidth of the kernel spectrum (Figure 6B), with narrower curves having a slower-decaying spectrum and larger high frequency eigenvalues. These codes can have dramatically different generalization performance, which we illustrate with a simple “bump” target response (Figure 6C). In this example, for illustration purposes, we let the network learn with a delta-rule with weight decay, leading to a regularized kernel regression solution (Appendix Weight decay and ridge regression). For a sample size of $P=10$, we observe that codes with too wide or too narrow tuning curves (and kernels) do not perform well, and there is a well-performing code with an optimal tuning curve width $\sigma$, which is compatible with the width of the target bump, ${\sigma}_{T}$. We found that the optimal $\sigma$ is different for each $P$ (Figure 6C). In the large-$P$ regime, the ordering of the performance of the three codes is reversed (Figure 6C). In this regime generalization error scales as a power law (Appendix Asymptotic power law scaling of learning curves), and the narrow code, which performed worst for $P\sim 10$, performs the best.
This example demonstrates that asymptotic conditions on the tail of the spectrum are insufficient to understand generalization in the small sample size limit. The bulk of the kernel’s spectrum needs to match the spectral structure of the task to generalize efficiently in the low-sample-size regime. However, for large sample sizes, the tail of the eigenvalue spectrum becomes important. We repeat the same exercise and draw the same conclusions for a non-differentiable (Laplace) kernel (Figure 6—figure supplement 1), showing that these results are not an artifact of the infinite differentiability of von Mises kernels. We further provide examples where non-differentiable kernels generalize better than differentiable kernels in Figure 6—figure supplement 2.
Time-dependent neural codes
Our framework can be directly extended to the learning of arbitrary time-varying functions of time-varying inputs from an arbitrary spatiotemporal population code (Methods RNN experiment, Appendix Time dependent neural codes). In this setting, the population code $\mathbf{r}(\{\mathit{\theta}(t)\},t)$ is a function of an input stimulus sequence $\mathit{\theta}(t)$, possibly its entire history, and time $t$. A downstream linear readout $f(\{\mathit{\theta}\},t)=\mathbf{w}\cdot \mathbf{r}(\{\mathit{\theta}\},t)$ learns a target sequence $y(\{\mathit{\theta}\},t)$ from a total of $\mathcal{P}$ examples that can come at any time during any sequence.
As a concrete example, we focus on readout from a temporal population code generated by a recurrent neural network in a task motivated by a delayed reach task (Ames et al., 2019; Figure 7A and B). In this task, the network is presented, for a short time, with an input cue sequence coding an angular variable drawn randomly from a distribution (Figure 7C). The recurrent neural network must remember this angle and reproduce an output sequence: a simple step function whose height depends on the angle, which begins after a time delay following the cessation of the input stimulus and lasts for a short time (Figure 7D).
The kernel induced by the spatiotemporal code is shown in Figure 7E. The high-dimensional nature of the activity in the recurrent network introduces complex and rich spatiotemporal similarity structure. Figure 7F shows the kernel’s eigensystem, which consists of stimulus-dependent time series ${\psi}_{k}(\{\mathit{\theta}\};t)$ for each eigenvalue ${\lambda}_{k}$. An interesting link can be made between this eigensystem and the linear low-dimensional manifold dynamics observed in several cortical areas (Stopfer et al., 2003; Kato et al., 2015; Gallego et al., 2017; Cunningham and Yu, 2014; Sadtler et al., 2014; Gao and Ganguli, 2015; Gallego et al., 2018; Chapin and Nicolelis, 1999; Bathellier et al., 2008). The kernel eigenfunctions also define the latent variables obtained through a singular value decomposition of the neural activity $\mathbf{r}(\{\mathit{\theta}\};t)={\sum}_{k}\sqrt{{\lambda}_{k}}{\mathbf{u}}_{k}{\psi}_{k}(\{\mathit{\theta}\};t)$ (Gallego et al., 2017). With enough samples, the readout neuron can learn to output the desired angle with high fidelity (Figure 7G). Unsurprisingly, tasks involving long time delays are more difficult and exhibit lower cumulative power curves. Consequently, the generalization error for small-delay tasks drops much more quickly with increasing samples $\mathcal{P}$.
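The eigensystem referenced here can be obtained with a single SVD of the response matrix. The sketch below uses random stand-in activity in place of the recurrent network’s rates, purely to show the bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, T = 50, 8, 40      # neurons, input sequences, time points per sequence

# Stand-in spatiotemporal code: in the actual experiment, each column of R
# holds the recurrent network's rates for one (stimulus, time) pair.
R = rng.standard_normal((N, P * T))

# One SVD gives the kernel eigensystem of the text:
# r({theta}; t) = sum_k sqrt(lambda_k) u_k psi_k({theta}; t)
U, s, Vt = np.linalg.svd(R, full_matrices=False)
lam = s**2 / N               # kernel eigenvalues (for K = R^T R / N)
psi = np.sqrt(N) * Vt        # psi[k]: a stimulus-dependent time series

# The decomposition reconstructs the code exactly...
assert np.allclose((U * np.sqrt(lam)) @ psi, R)
# ...and lam matches direct diagonalization of the kernel gram matrix.
K = R.T @ R / N
assert np.allclose(np.linalg.eigvalsh(K)[::-1][:len(lam)], lam)
```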
Biological codes are metabolically more efficient and more selective than other codes with identical kernels
Although the performance of linear readouts may be invariant to rotations that preserve kernels (Figure 2), metabolic efficiency may favor certain codes over others (Barlow, 1961; Atick and Redlich, 1992; Attneave, 1954; Olshausen and Field, 1997; Simoncelli and Olshausen, 2001), reducing degeneracy in the space of codes with identical kernels. To formalize this idea, we define $\mathit{\delta}$ to be the vector of spontaneous firing rates of a population of neurons, and ${\mathbf{s}}^{\mu}=\mathbf{r}({\mathit{\theta}}^{\mu})+\mathit{\delta}$ to be the spiking rate vector in response to a stimulus ${\mathit{\theta}}^{\mu}$. The vector $\mathit{\delta}$ ensures that neural responses are non-negative. The modulation with respect to the spontaneous activity, $\mathbf{r}({\mathit{\theta}}^{\mu})$, gives the population code and defines the kernel, $K({\mathit{\theta}}^{\mu},{\mathit{\theta}}^{\nu})=\frac{1}{N}\mathbf{r}({\mathit{\theta}}^{\mu})\cdot \mathbf{r}({\mathit{\theta}}^{\nu})$. To avoid confusion with $\mathbf{r}({\mathit{\theta}}^{\mu})$, we will refer to ${\mathbf{s}}^{\mu}$ as the total spiking activity. We propose that population codes prefer smaller spiking activity subject to a fixed kernel. In other words, because the kernel is invariant to any change of the spontaneous firing rates and to left rotations of $\mathbf{r}(\mathit{\theta})$, the orientation and shift of the population code $\mathbf{r}(\mathit{\theta})$ should be chosen such that the resulting total spike count ${\sum}_{i=1}^{N}{\sum}_{\mu =1}^{P}{s}_{i}^{\mu}$ is small.
We tested whether biological codes exhibit lower total spiking activity than other codes with the same kernel on mouse V1 recordings, using deconvolved calcium activity as a proxy for spiking events (Stringer et al., 2021; Pachitariu et al., 2019; Pachitariu et al., 2018) (Methods Data analysis; Figure 8). To compare the experimental total spiking activity to that of other codes with identical kernels, we computed random rotations of the neural responses around spontaneous activity, $\stackrel{~}{\mathbf{r}}({\mathit{\theta}}^{\mu})=\mathbf{Qr}({\mathit{\theta}}^{\mu})$, and added the $\mathit{\delta}$ that minimizes total spiking activity while maintaining non-negativity (Methods Generating RROS codes). We refer to such an operation as RROS (random rotation and optimal shift), and to a code generated by an RROS operation as an RROS code. The matrix $\mathbf{Q}$ is a randomly sampled orthogonal matrix (Anderson et al., 1987). In other words, we compare the true code to the most metabolically efficient realizations of its random rotations. This procedure may result in an increased or decreased total spike count and is illustrated on a synthetic dataset in Figure 8A. We conducted this procedure on subsets of various sizes of the mouse V1 neuron population, as our proposal should hold for any subset of neurons (Methods Generating RROS codes), and found that the true V1 code is much more metabolically efficient than randomly rotated versions of the code (Figure 8B and C). This finding holds for responses to both static gratings and natural images, as we show in Figure 8B and C, respectively.
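A minimal sketch of one RROS operation on synthetic data (the sparse stand-in code and the QR-based random rotation are assumptions of this illustration, not the authors’ exact procedure, which uses the V1 recordings):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in sparse non-negative code (neurons x stimuli).
R = np.maximum(rng.standard_normal((40, 100)) - 1.0, 0.0)

# RROS: random rotation Q, then the smallest non-negative per-neuron shift
# delta that keeps every response non-negative.
Q, _ = np.linalg.qr(rng.standard_normal((40, 40)))  # random orthogonal matrix
Rq = Q @ R
delta = np.maximum(-Rq.min(axis=1), 0.0)
S = Rq + delta[:, None]            # total spiking activity s = Q r + delta

# The kernel depends only on the modulation Q r, so it is unchanged...
assert np.allclose(Rq.T @ Rq, R.T @ R)
# ...but the rotated code needs far more total activity than the sparse one.
assert S.min() >= 0 and S.sum() > R.sum()
```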
To further explore metabolic efficiency, we posed an optimization problem that identifies the most efficient code with the same kernel as the biological V1 code. This problem searches over rotation matrices $\mathbf{Q}$ and finds the $\mathbf{Q}$ matrix and offset vector $\mathit{\delta}$ that give the lowest cost ${\sum}_{i\mu}{s}_{i}^{\mu}$ (Methods Comparing sparsity of population codes) (Figure 8). Although the local optimum identified by the algorithm is lower in cost than the biological code, both the optimal and biological codes are significantly displaced from the distribution of random codes with the same kernel. Our findings do not change when the data are preprocessed with an alternative strategy, when an upper bound on neural responses is imposed on rotated codes, or when subsets of stimuli are considered (Figure 8—figure supplement 1). We further verified these results on electrophysiological recordings of mouse visual cortex from the Allen Institute Brain Observatory (de Vries et al., 2020) (Figure 8—figure supplement 2). Overall, the large disparity in total spiking activity between the true code and randomly generated codes with identical kernels suggests that metabolic constraints may favor the biological code over others that realize the same kernel.
The disparity between the true biological code and RROS codes is manifested not only in total activity level, but also in single-neuron and single-stimulus sparseness measures, specifically lifetime and population sparseness distributions (Methods Lifetime and population sparseness) (Willmore and Tolhurst, 2001; Lehky et al., 2005; Treves and Rolls, 1991; Pehlevan and Sompolinsky, 2014). In Figure 8D, we compare the lifetime and population sparseness distributions of the true biological code with an RROS version of the same code, revealing that biological neurons have significantly higher lifetime sparseness. In Appendix Necessary conditions for optimally sparse codes, we provide analytical arguments suggesting that the tuning curves of optimally sparse non-negative codes with full-rank kernels will be selective.
Discussion
Elucidating the inductive biases of the brain is fundamentally important for understanding natural intelligence (Tenenbaum et al., 2011; Lake et al., 2017; Sinz et al., 2019; Zador, 2019). These biases are coded into the brain by the dynamics of its neurons, the architecture of its networks, its representations, and its plasticity rules. Finding ways to extract inductive biases from neuroscience datasets requires a deeper theoretical understanding of how all these factors shape the biases, and is an open problem. In this work, we attempted to take a step towards filling this gap by focusing on how the structure of static neural population codes shapes the inductive biases of a linear readout neuron learning under a biologically plausible rule. If the readout neuron’s output is correlated with behavior, and that correlation is known, then our theory could possibly be modified to predict which behavioral tasks can be learned faster.
Under the delta rule, the generalization performance of the readout is entirely dependent on the code’s inner product kernel; the kernel is a determinant of inductive bias. In its finite-dimensional form, the kernel is an example of a representational similarity matrix and is a commonly used tool for studying neural representations (Edelman, 1998; Kriegeskorte et al., 2008; Laakso and Cottrell, 2000; Kornblith et al., 2019; Cadieu et al., 2014; Pehlevan et al., 2018). Our work elucidates a concrete link between this experimentally measurable mathematical object and sample-efficient learning.
We derived an analytical expression for the generalization error as a function of sample size under very general conditions: an arbitrary stimulus distribution, an arbitrary population code, and an arbitrary target stimulus-response map. We used our findings in both theoretical and experimental analyses of the primary visual cortex, and of temporal codes in a delayed reach task. This generality is a particular strength of our theory.
Our analysis elucidated two principles that define the inductive bias. The first is spectral bias: kernel eigenfunctions with large eigenvalues can be estimated using a smaller number of samples. The second is code-task alignment: target functions with most of their power in the top kernel eigenfunctions can be estimated efficiently and are compatible with the code. The cumulative power distribution, $C(k)$ (Canatar et al., 2021), provides a measure of this alignment. These findings define a notion of ‘simplicity’ bias in learning from examples and provide an answer to the question of which stimulus-response maps are easier to learn. A similar simplicity bias has also been observed in training deep neural networks (Rahaman et al., 2019; Xu et al., 2019; Kalimeris et al., 2019). Due to a correspondence between gradient-descent-trained neural networks in the infinite-width limit and kernel machines (Jacot et al., 2018), results on the spectral bias of kernel machines may shed light on these findings (Bordelon et al., 2020; Canatar et al., 2021). Though our present analysis focused on learning a single-layer weight vector with the biologically plausible delta-rule, future work could explore the learning curves of other learning rules for deep networks (Bordelon and Pehlevan, 2022a), such as feedback alignment (Lillicrap et al., 2016) or perturbation methods (Jabri and Flower, 1992). Such analysis could explore how inductive bias is also shaped by the choice of learning rule, as well as by the structure of the initial population code.
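The cumulative power distribution $C(k)$ can be estimated from any response matrix. A minimal sketch, with an illustrative circular toy code and targets standing in for data:

```python
import numpy as np

def cumulative_power(R, y):
    """Sketch of the code-task alignment measure C(k): the fraction of the
    target's power captured by the top-k kernel eigenfunctions."""
    K = R.T @ R / R.shape[0]             # kernel gram matrix on the stimuli
    lam, Psi = np.linalg.eigh(K)
    Psi = Psi[:, np.argsort(lam)[::-1]]  # order eigenfunctions by eigenvalue
    v = Psi.T @ y                        # target coefficients in eigenbasis
    return np.cumsum(v**2) / np.sum(v**2)

# Toy example: smooth circular tuning curves make a low-frequency target
# well aligned (C(k) rises early) and a high-frequency target misaligned.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
centers = np.linspace(0, 2 * np.pi, 60, endpoint=False)
R = np.exp((np.cos(theta[None, :] - centers[:, None]) - 1.0) / 0.5)
C_low = cumulative_power(R, np.cos(theta))
C_high = cumulative_power(R, np.cos(8 * theta))
assert C_low[4] > C_high[4]   # low-frequency target aligns with top modes
```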
We applied our findings in both theoretical and experimental analyses of mouse primary visual cortex. We demonstrated a bias of neural populations towards low-frequency orientation discrimination and low spatial frequency reconstruction tasks. The latter finding is consistent with the observation that mouse visual cortex neurons are selective for low spatial frequencies (Niell and Stryker, 2008; Vreysen et al., 2012). The toy model of the visual cortex as a mixture of simple and complex cells demonstrated how invariances in the population code, specifically the phase invariance of the complex cells, can facilitate learning of tasks involving phase-invariant responses at the expense of performance on others. The role of invariances in learning with kernel methods and deep networks has recently been investigated in the machine learning literature, showing that invariant representations can improve capacity (Farrell et al., 2021) and sample efficiency for invariant learning problems (Mei et al., 2021; Li et al., 2019; Xiao and Pennington, 2022).
A recent proposal considered the possibility that the brain acts as an overparameterized interpolator (Hasson et al., 2020). Suitable inductive biases are crucial for preventing overfitting and generalizing well in such a regime (Belkin et al., 2019). Our theory can explain these inductive biases: when the kernel is full-rank, which is typically the case when there are more neurons in the population than learning examples, the delta rule without weight decay converges to an interpolator of the learning examples. Modern deep learning architectures also operate in an overparameterized regime yet generalize well (Zhang et al., 2016; Belkin et al., 2019), and an inductive bias towards simple functions has been proposed as an explanation (Bordelon et al., 2020; Canatar et al., 2021; Kalimeris et al., 2019; Valle-Perez et al., 2018). However, we also showed that interpolation can be harmful to prediction accuracy when the target function has some variance unexplained by the neural code or when the neural responses are significantly noisy, motivating the use of explicit regularization.
Our work promotes sample efficiency as a general coding principle for neural populations, relating neural representations to the kinds of problems they are well suited to solve. These codes may be shaped through evolution or themselves be learned through prior experience (Zador, 2019). Prior related work in this area demonstrated the dependence of sample-efficient learning of a two-angle estimation task on the width of the individual neural tuning curves (Meier et al., 2020) and on the additive function approximation properties of sparsely connected random networks (Harris, 2019).
A sample efficiency approach to population coding differs from classical efficient coding theories (Attneave, 1954; Barlow, 1961; Atick and Redlich, 1992; Srinivasan et al., 1982; van Hateren, 1992; Rao and Ballard, 1999; Olshausen and Field, 1997; Chalk et al., 2018), which postulate that populations of neurons optimize the information content of their code subject to metabolic constraints or noise. While these theories emphasize different aspects of the code’s information content (such as reduced redundancy, predictive power, or sparsity), they do not address the sample efficiency demands of learning. Further, recent studies demonstrated hallmarks of redundancy and correlation in population responses (Chapin and Nicolelis, 1999; Bathellier et al., 2008; Pitkow and Meister, 2012; Gao and Ganguli, 2015; Abbasi-Asl et al., 2016; Gallego et al., 2018; Stringer et al., 2018b), violating a generic prediction of efficient coding theories: that, in high signal-to-noise regimes, responses of different neurons should be uncorrelated across input stimuli to reduce redundancy in the code and maximize information content (Barlow, 1961; Atick and Redlich, 1992; Srinivasan et al., 1982; van Hateren, 1992; Haft and van Hemmen, 1998; Huang and Rao, 2011). In our theory, the structured correlations of neural responses correspond to the decay in the kernel’s spectrum and play a key role in biasing learned readouts towards simple functions.
In recent related studies, the asymptotic decay rate of the kernel’s eigenspectrum was argued to be important for generalization (Stringer et al., 2018b) and robustness (Nassar et al., 2020). The spectral decay rate in mouse V1 was found to be consistent with a high-dimensional (power-law) but smooth (differentiable) code, and smoothness was argued to be an enabler of generalization (Stringer et al., 2018b). While we also identify power-law spectral decays, we show that sample-efficient learning requires more than smoothness conditions in the form of asymptotic decay rates on the kernel’s spectrum. The interplay between the stimulus distribution, the target response, and the code gives rise to sample-efficient learning. Because of spectral bias, the top eigenvalues govern behavior at small sample sizes; the tail of the spectrum becomes important at large sample sizes.
Though the kernel is degenerate with respect to rotations of the code in the neural activity space, we demonstrated that the true V1 code has much lower average activity than random codes with the same kernel, suggesting that evolution and learning may select neural codes with low average spike rates that preserve the sample-efficiency demands of downstream learning tasks. We predict that metabolic efficiency may be a determinant of the orientation and placement of the ubiquitously observed low-dimensional coding manifolds in neural activity space in other parts of the brain (Gallego et al., 2018). The demand of metabolic efficiency is consistent with prior sparse coding theories (Niven and Laughlin, 2008; Olshausen and Field, 1997; Simoncelli and Olshausen, 2001; Hromádka et al., 2008); however, our theory emphasizes sample-efficient learning as the primary normative objective of the code. As a note of caution, while our analysis holds under the assumption that the neural code is deterministic, real neurons exhibit variability in their responses to repeated stimuli. Such noisy population codes do not generally achieve identical generalization performance under RROS transformations. For example, if each neuron is constrained to produce i.i.d. Poisson noise, then simple shifts of the baseline firing rate reduce the information content of the code. However, if the neural noise is Gaussian (even with stimulus-dependent noise covariance), then the generalization error is conserved under RROS operations (Appendix Effect of noise on RROS symmetry). Further studies could focus on revealing the space of codes with equivalent inductive biases under realistic noise models.
Our work constitutes a first step towards understanding inductive biases in neuronal circuits. To achieve this, we focused on a linear, delta-rule readout of a static population code. More work is needed to study other factors that affect inductive bias. Importantly, sensory neuron tuning curves can adapt during perceptual learning tasks (Gilbert, 1994; Goltstein et al., 2021; Ghose et al., 2002; Schoups et al., 2001), with the strength of adaptation dependent on brain area (Yang and Maunsell, 2004; Adab et al., 2014; Op de Beeck et al., 2007). In many experiments, these changes to tuning in sensory areas are small (Schoups et al., 2001; Ghose et al., 2002), satisfying the assumptions of our theory. For example, monkeys trained on noisy visual motion detection exhibit changes in sensory-motor (LIP) but not sensory areas (MT), consistent with a model of readout from a static sensory population code (Law and Gold, 2008; Shadlen and Newsome, 2001). However, other perceptual learning tasks and other brain areas can exhibit significant changes in neural tuning (Recanzone et al., 1993; Pleger et al., 2003; Furmanski et al., 2004). This diversity of results motivates a more general analysis of learning in multilayer networks, where representations in each layer can adapt flexibly to task structure (Shan and Sompolinsky, 2021; Mastrogiuseppe et al., 2022; Bordelon and Pehlevan, 2022b; Ahissar and Hochstein, 2004). Alternatively, our current analysis of inductive bias can still be consistent with multilayer learning if the network is sufficiently overparameterized and tuning curves change very little (Jacot et al., 2018; Lee et al., 2018; Shan and Sompolinsky, 2021). In this case, network training is equivalent to kernel learning with a kernel that depends on the learning rule and architecture (Bordelon and Pehlevan, 2022a).
However, in the regime of neural network training where tuning curves change significantly, more sophisticated analytical tools are needed to predict generalization (Flesch et al., 2021; Yang and Hu, 2021; Bordelon and Pehlevan, 2022b). Although our work focused on linear readouts, arbitrary nonlinear readouts which generate convex learning objectives have been recently studied in the high dimensional limit, giving qualitatively similar learning curves which depend on kernel eigenvalues and task model alignment (Loureiro et al., 2021b; Cui et al., 2022) (see Appendix Typical case analysis of nonlinear readouts).
Our work focused on how signal correlations influence inductive bias (Averbeck et al., 2006; Cohen and Kohn, 2011). However, since real neurons do exhibit variability in their responses to identical stimuli, one should consider the effects of neural noise and noise correlations on learning. We provide a preliminary analysis of learning with neural noise in Appendix Impact of neural noise and unlearnable targets on learning, where we show that neural noise can lead to an irreducible asymptotic error that depends on the geometry of the signal and noise correlations. Further, if the target function is not fully expressible as a linear combination of neural responses, overfitting peaks in the learning curves are possible, but these can be mitigated with regularization implemented by a weight decay in the learning rule (see Appendix 1—figure 1). Future work could extend our analysis to study how signal and noise correlations interact to shape inductive bias and generalization performance in the case where the noise correlation matrices are non-isotropic, including the role of differential correlations (Moreno-Bote et al., 2014). Overall, future work could build on the present analysis to incorporate a greater degree of realism in a theory of inductive bias.
Finally, we discuss possible applications of our work to experimental neuroscience. Our theory has potential implications for experimental studies of task learning. First, in cases where the population selective to the stimuli can be measured directly, an experimenter could design easy or difficult tasks for an animal to learn from few examples, under the hypothesis that the behavioral output is a linear function of the observed neurons. Second, in cases where it is unclear which neural population contributes to learning, one could use our theory to solve the inverse problem of inferring the relevant kernel from observed learning curves on different tasks (Wilson et al., 2015). The experimenter could then compare the inferred kernel to those of different recorded populations. For instance, one could compare the kernels of separate populations to the inferred kernel obtained from learning curves on certain visual learning tasks. This could provide new ways to test theories of perceptual learning (Gilbert, 1994). Lastly, extensions of our framework could quantify the role of neural variability in task learning and the limitations it imposes on accuracy and sample efficiency.
Methods
Generating example codes (Figure 1)
The two codes in Figure 1 were constructed to produce two different kernels for $\theta \in {S}^{1}$:
An infinite number of codes could generate either of these kernels. After diagonalizing the kernels into their eigenfunctions on a grid of 120 points, ${\mathbf{K}}_{1}={\mathbf{\Psi}}_{1}{\mathbf{\Lambda}}_{1}{\mathbf{\Psi}}_{1}^{\top},\ {\mathbf{K}}_{2}={\mathbf{\Psi}}_{2}{\mathbf{\Lambda}}_{2}{\mathbf{\Psi}}_{2}^{\top}$, we used a random rotation matrix $\mathbf{Q}\in {\mathbb{R}}^{N\times N}$ (which satisfies ${\mathbf{QQ}}^{\top}={\mathbf{Q}}^{\top}\mathbf{Q}=\mathbf{I}$) to generate a valid code
This construction guarantees that ${\mathbf{R}}_{1}^{\top}{\mathbf{R}}_{1}={\mathbf{K}}_{1}$ and ${\mathbf{R}}_{2}^{\top}{\mathbf{R}}_{2}={\mathbf{K}}_{2}$. We plot the tuning curves for the first three neurons. The target function in the first experiment is $y=\mathrm{cos}(\theta )-0.6\mathrm{cos}(4\theta )$, while the second experiment used $y=\mathrm{cos}(6\theta )-\mathrm{cos}(8\theta )$.
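The construction can be sketched as follows, with an arbitrary stand-in kernel in place of the two kernels used in Figure 1:

```python
import numpy as np

# Diagonalize a kernel on a stimulus grid and left-rotate sqrt(Lambda) Psi^T
# by a random orthogonal Q, yielding one of infinitely many codes R with
# R^T R = K. The kernel here is an arbitrary stand-in.
theta = np.linspace(0, 2 * np.pi, 120, endpoint=False)
K = np.exp(np.cos(theta[:, None] - theta[None, :]))   # example kernel on S^1

lam, Psi = np.linalg.eigh(K)
lam = np.clip(lam, 0.0, None)      # clip tiny negative eigenvalues (numerics)
N = len(theta)
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((N, N)))

R = Q @ (np.sqrt(lam)[:, None] * Psi.T)   # code: N neurons x 120 stimuli
assert np.allclose(R.T @ R, K)            # the code realizes the kernel
```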
Theory of generalization
Recent work has established analytic results that predict the average-case generalization error of kernel regression
where ${E}_{g}(\mathcal{D})={\langle [f(\mathit{\theta},\mathcal{D})-y(\mathit{\theta})]^{2}\rangle}_{\mathit{\theta}}$ is the generalization error for a given sample $\mathcal{D}$ of size $P$ and $f(\mathit{\theta},\mathcal{D})=\mathbf{w}\cdot \mathbf{r}(\mathit{\theta})$ is the kernel regression solution for $\mathcal{D}$ (Appendix Convergence of the delta-rule without weight decay) (Bordelon et al., 2020; Canatar et al., 2021). The typical or average-case error ${E}_{g}$ is obtained by averaging over all possible datasets of size $P$. This average-case generalization error is determined solely by the decomposition of the target function $y(\mathit{\theta})$ along the eigenbasis of the kernel and the eigenspectrum of the kernel. This continuous diagonalization again takes the form (Appendix Singular value decomposition of continuous population responses) (Rasmussen and Williams, 2005)
Our theory is also applicable to discrete stimuli if $p(\mathit{\theta})$ is a Dirac measure, as we describe in Appendix Discrete stimulus spaces: finding eigenfunctions with matrix eigendecomposition. Since the eigenfunctions form a complete set of square integrable functions (Rasmussen and Williams, 2005), we expand both the target function $y(\mathit{\theta})$ and the learned function $f(\mathit{\theta},\mathcal{D})$ in this basis: $y(\mathit{\theta})=\sum _{k}{v}_{k}{\psi}_{k}(\mathit{\theta}),\ f(\mathit{\theta},\mathcal{D})=\sum _{k}{\hat{v}}_{k}{\psi}_{k}(\mathit{\theta})$, where the ${\hat{v}}_{k}$ are understood to be functions of the dataset $\mathcal{D}$. The eigenfunctions are orthonormal, $\int d\mathit{\theta}\,p(\mathit{\theta}){\psi}_{k}(\mathit{\theta}){\psi}_{\mathrm{\ell}}(\mathit{\theta})={\delta}_{k,\mathrm{\ell}}$, which implies that the generalization error for any set of coefficients $\hat{\mathbf{v}}$ is
We now introduce the equivalent training error, or empirical loss, written directly in terms of eigenfunction coefficients $\widehat{\mathbf{v}}$, which depends on the random dataset $\mathcal{D}=\{({\mathit{\theta}}^{\mu},{y}^{\mu}){\}}_{\mu =1}^{P}$
This loss function is minimized by delta-rule updates with weight decay constant $\lambda $. It is straightforward to verify that the $H$-minimizing coefficients are ${\hat{\mathbf{v}}}^{\ast}={(\mathbf{\Psi}{\mathbf{\Psi}}^{\top}+\lambda {\mathbf{\Lambda}}^{-1})}^{-1}\mathbf{\Psi}{\mathbf{\Psi}}^{\top}\mathbf{v}$, giving the learned function $f(\mathit{\theta},\mathcal{D})={\hat{\mathbf{v}}}^{\ast}\cdot \mathit{\psi}(\mathit{\theta})=\mathbf{k}{(\mathit{\theta})}^{\top}{(\mathbf{K}+\lambda \mathbf{I})}^{-1}\mathbf{y}$, where the vectors $\mathbf{k}$ and $\mathbf{y}$ have entries ${[\mathbf{k}(\mathit{\theta})]}_{\mu}=K(\mathit{\theta},{\mathit{\theta}}^{\mu})$ and ${[\mathbf{y}]}_{\mu}=y({\mathit{\theta}}^{\mu})$ for each training stimulus ${\mathit{\theta}}^{\mu}\in \mathcal{D}$. The $P\times P$ kernel gram matrix $\mathbf{K}$ has entries ${[\mathbf{K}]}_{\mu \nu}=K({\mathit{\theta}}^{\mu},{\mathit{\theta}}^{\nu})$. The $\lambda \to 0$ limit of the minimizer of $H$ coincides with kernel interpolation. This allows us to characterize generalization without reference to the learned readout weights $\mathbf{w}$. The generalization error for this optimal function is
We note that the dependence on the randomly sampled dataset $\mathcal{D}$ appears only through the matrix $\mathbf{G}(\mathcal{D})$. Thus, to compute the typical generalization error, we need to average this matrix over realizations of datasets, i.e. ${\langle \mathbf{G}(\mathcal{D})\rangle}_{\mathcal{D}}$. There are multiple strategies to perform such an average; we study one here based on a partial differential equation, which was introduced in Sollich, 1998; Sollich, 2002 and studied further in Bordelon et al., 2020. We describe this method in detail in Appendix Computation of learning curves. After this computation, we find that the generalization error can be approximated at large $P$ as
where $\gamma =P{\sum}_{k}\frac{{\lambda}_{k}^{2}}{{({\lambda}_{k}P+\kappa )}^{2}}$, giving the desired result. We note that (11) defines the function $\kappa $ implicitly in terms of the sample size $P$. Taking $\lambda \to 0$ gives the generalization error of the minimum-norm interpolant, which describes the generalization error of the solution. This result was recently reproduced using the replica method from statistical mechanics in an asymptotic limit where the number of neurons and samples are large (Bordelon et al., 2020; Canatar et al., 2021). Other recent works have verified our theoretical expressions on a variety of kernels and datasets (Loureiro et al., 2021b; Simon et al., 2021).
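For concreteness, the kernel-space solution above, $f(\mathit{\theta})=\mathbf{k}{(\mathit{\theta})}^{\top}{(\mathbf{K}+\lambda \mathbf{I})}^{-1}\mathbf{y}$, can be implemented in a few lines; the von Mises kernel, target, and sample size below are illustrative choices, not the paper’s experiments:

```python
import numpy as np

def krr_predict(K_train, k_test, y, lam):
    # f(theta) = k(theta)^T (K + lam I)^{-1} y, the regularized
    # delta-rule solution written in kernel form
    alpha = np.linalg.solve(K_train + lam * np.eye(len(y)), y)
    return k_test @ alpha

rng = np.random.default_rng(0)
kern = lambda a, b: np.exp(np.cos(a[:, None] - b[None, :]) - 1.0)

theta_tr = rng.uniform(0, 2 * np.pi, 30)           # P = 30 training stimuli
theta_te = np.linspace(0, 2 * np.pi, 200, endpoint=False)
y_tr = np.cos(theta_tr)                            # smooth target response

f = krr_predict(kern(theta_tr, theta_tr), kern(theta_te, theta_tr),
                y_tr, lam=1e-4)
gen_err = np.mean((f - np.cos(theta_te))**2)
assert gen_err < 0.05   # low-frequency target is learned from few samples
```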
Additional intuition for the spectral bias phenomenon can be gained from the expected learned function ${\langle f(\mathit{\theta},\mathcal{D})\rangle}_{\mathcal{D}}=\sum _{k}\frac{{\lambda}_{k}P}{{\lambda}_{k}P+\kappa}{v}_{k}{\psi}_{k}(\mathit{\theta})$, which is the average readout prediction over possible datasets $\mathcal{D}$. The function $\kappa (P)$ is defined implicitly as $\kappa =\lambda +\kappa {\sum}_{k}\frac{{\lambda}_{k}}{{\lambda}_{k}P+\kappa}$ and decreases with $P$ from $\kappa (0)=\lambda +{\sum}_{k}{\lambda}_{k}$ to its asymptotic value ${\mathrm{lim}}_{P\to \mathrm{\infty}}\kappa (P)=\lambda $. The coefficient for the $k$-th eigenfunction, $\frac{{\lambda}_{k}P}{{\lambda}_{k}P+\kappa}{v}_{k}$, approaches the true coefficient ${v}_{k}$ as $P\to \mathrm{\infty}$. The $k$-th eigenfunction ${\psi}_{k}$ is effectively learned when $P\gg \frac{\kappa}{{\lambda}_{k}}$. For large eigenvalues ${\lambda}_{k}$, fewer samples $P$ are needed to satisfy this condition, while small-eigenvalue modes require more samples.
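The implicit definition of $\kappa (P)$ can be solved numerically and combined with the theoretical error in the form $E_g=\frac{\kappa^2}{1-\gamma}\sum_k\frac{v_k^2}{({\lambda}_k P+\kappa)^2}$ (an assumption of this sketch, taken as the standard form of the result in the references above) to evaluate learning curves for a toy power-law spectrum:

```python
import numpy as np

def solve_kappa(lams, P, lam_reg, iters=500):
    # Fixed-point iteration for kappa = lam + kappa * sum_k lam_k / (lam_k P + kappa)
    kappa = lam_reg + lams.sum()      # start from kappa(0), an upper bound
    for _ in range(iters):
        kappa = lam_reg + kappa * np.sum(lams / (lams * P + kappa))
    return kappa

def learning_curve(lams, v2, Ps, lam_reg=1e-3):
    # v2[k] = v_k^2, the target's power along the k-th eigenfunction
    Eg = []
    for P in Ps:
        kappa = solve_kappa(lams, P, lam_reg)
        gamma = P * np.sum(lams**2 / (lams * P + kappa)**2)
        Eg.append(kappa**2 / (1 - gamma) * np.sum(v2 / (lams * P + kappa)**2))
    return np.array(Eg)

# Spectral bias: with a power-law spectrum, a target on the top mode is
# learned from far fewer samples than one on a low-eigenvalue mode.
lams = 1.0 / np.arange(1, 200)**2
easy = np.zeros(199); easy[0] = 1.0      # target on the top eigenfunction
hard = np.zeros(199); hard[50] = 1.0     # target on the 51st eigenfunction
Eg_easy = learning_curve(lams, easy, [10, 100])
Eg_hard = learning_curve(lams, hard, [10, 100])
assert Eg_easy[0] < Eg_hard[0]           # spectral bias at P = 10
assert Eg_easy[1] < Eg_easy[0]           # error falls with more samples
```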
RNN experiment
For the simulations in Figure 7, we integrated a rate-based recurrent network model with $N=6000$ neurons, time constant $\tau =0.05$, and gain $g=1.5$. Each of the $P=80$ randomly chosen angles ${\gamma}^{\mu}$ generates a trajectory over $T=100$ equally spaced points in $t\in [0,3]$. The two-dimensional input sequence is simply $\mathit{\theta}(t)=H(t)H(1-t){[\mathrm{cos}({\gamma}^{\mu}),\mathrm{sin}({\gamma}^{\mu})]}^{\top}\in {\mathbb{R}}^{2}$. The target function for delay $d$ is $\mathbf{y}({\gamma}^{\mu},t)=H(1.5+d-t)H(t-d-1){[\mathrm{cos}({\gamma}^{\mu}),\mathrm{sin}({\gamma}^{\mu})]}^{\top}$, which is nonzero for times $t\in [1+d,1.5+d]$. In each simulation, the activity in the network is initialized to $\mathbf{u}(0)=\mathbf{0}$. The kernel gram matrix $\mathbf{K}\in {\mathbb{R}}^{PT\times PT}$ is computed by taking inner products of the time-varying code for different inputs ${\gamma}^{\mu}$ and at different times. Learning curves represent the generalization error obtained by randomly sampling $\mathcal{P}$ time points from the $PT$ total time points generated in the simulation and training readout weights $\mathbf{w}$ to convergence with gradient descent.
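A reduced version of this simulation can be sketched as follows (a smaller $N$ and a simple Euler integration with an assumed step size; $\tau $, $g$, and the input timing follow the description above):

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau, g = 300, 0.05, 1.5             # smaller N than the paper's 6000
T, dt = 100, 3.0 / 100                 # 100 steps covering t in [0, 3]
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights
W_in = rng.standard_normal((N, 2))                 # input weights

def run_trial(gamma):
    u = np.zeros(N)                     # state, initialized to u(0) = 0
    rates = np.zeros((N, T))
    for i in range(T):
        t = i * dt
        cue = np.array([np.cos(gamma), np.sin(gamma)]) * (t < 1.0)
        u = u + (dt / tau) * (-u + J @ np.tanh(u) + W_in @ cue)
        rates[:, i] = np.tanh(u)
    return rates

# Spatiotemporal code: one column per (angle, time) pair, as in the text.
gammas = rng.uniform(0, 2 * np.pi, 8)
R = np.hstack([run_trial(gm) for gm in gammas])    # N x (P*T)
K = R.T @ R / N                                    # kernel gram matrix
assert K.shape == (8 * T, 8 * T) and np.isfinite(K).all()
```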
Data analysis
Data source and processing
Mouse V1 neuron responses to orientation gratings were obtained from a publicly available dataset (Stringer et al., 2021; Pachitariu et al., 2019). Two-photon calcium microscopy fluorescence traces were deconvolved into spike trains and spikes were counted for each stimulus, as described in Stringer et al., 2021. The presented grating angles were distributed uniformly over $[0,2\pi ]$ radians. Data preprocessing, which included z-scoring against the mean and standard deviation of null-stimulus responses, utilized the code provided for this experiment, which is also publicly available at https://github.com/MouseLand/stringer-et-al-2019 (Stringer, 2019). This preprocessing technique was used in all figures in the paper. To reduce corruption of the estimated kernel by neural noise (trial-to-trial variability), we first trial-average responses, binning the grating stimuli oriented at different angles $\theta $ into a collection of 100 bins over the interval $[0,2\pi ]$ and averaging all of the available responses within each bin. Since grating angles were sampled uniformly, there is a roughly even distribution of about 45 responses in each bin. After trial averaging, SVD was performed on the response matrix $\mathbf{R}$, generating the eigenspectrum and kernel eigenfunctions illustrated in Figure 3. Figures 2, 3 and 8 all use this data wherever responses to grating stimuli are mentioned.
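The binning, trial-averaging, and SVD steps can be sketched on synthetic data (the cosine-tuned responses, noise level, and all variable names below are stand-ins for the actual recordings, which must be obtained from the repository above):

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_trials, n_bins = 300, 4500, 100
angles = rng.uniform(0, 2 * np.pi, n_trials)
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)
# noisy cosine-tuned responses standing in for deconvolved, z-scored spikes
resp = np.cos(angles[None, :] - prefs[:, None]) \
       + 0.5 * rng.standard_normal((N, n_trials))

# bin the gratings into 100 angle bins and trial-average within each bin
bins = np.floor(angles / (2 * np.pi) * n_bins).astype(int)
R = np.stack([resp[:, bins == b].mean(axis=1) for b in range(n_bins)], axis=1)

# SVD of the trial-averaged response matrix gives spectrum and eigenfunctions
U, s, Vt = np.linalg.svd(R / np.sqrt(n_bins), full_matrices=False)
eigenvalues = s ** 2
eigenfunctions = Vt
```

With cosine tuning the spectrum is dominated by two eigenvalues (the $\cos$ and $\sin$ modes), with trial-averaged noise contributing only a small bulk.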
In Figures 3D, 4 and 8C, the responses of mouse V1 neurons to ImageNet images (Deng et al., 2009) were obtained from a different publicly available dataset (Stringer et al., 2018a). The images were taken from 15 different classes of the ImageNet dataset with ethological relevance to mice (birds, cats, flowers, hamsters, holes, insects, mice, mushrooms, nests, pellets, snakes, wildcats, other animals, other natural, other man-made). In the experiment in Figure 3D, we use all images from the mice and birds categories for which responses were recorded. The preprocessing code and image category information were obtained from the publicly available code base at https://github.com/MouseLand/stringer-pachitariu-et-al-2018b (Stringer, 2018c). Again, spike counts were obtained from deconvolved and z-scored calcium fluorescence traces. In the reconstruction experiment shown in Figure 4, we use the entire set of images for which neural responses were recorded.
Generating RROS codes
In Figure 8, the randomly rotated codes are generated by sampling a matrix $\mathbf{Q}$ from the Haar measure on the set of $N$-by-$N$ orthogonal matrices (Anderson et al., 1987), and choosing a $\mathit{\delta}$ by solving the following optimization problem:
which minimizes the total spike count subject to the kernel constraint and the nonnegativity of firing rates. The solution to this problem is given by ${\delta}_{i}^{*}=-{\mathrm{min}}_{\mu =1,\mathrm{\dots},P}{[\mathbf{Qr}({\mathit{\theta}}^{\mu})]}_{i}$.
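A minimal sketch of generating one such randomly rotated and shifted code (sampling $\mathbf{Q}$ via QR decomposition of a Gaussian matrix is a standard construction for the Haar measure; the stand-in nonnegative code `r` is ours):

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Sample Q from the Haar measure on O(n): QR of a Gaussian matrix,
    with column signs fixed by the diagonal of R (standard construction)."""
    A = rng.standard_normal((n, n))
    Q, Rq = np.linalg.qr(A)
    return Q * np.sign(np.diag(Rq))

rng = np.random.default_rng(2)
N, P = 50, 40
r = np.abs(rng.standard_normal((N, P)))   # stand-in nonnegative code r(theta^mu)

Q = haar_orthogonal(N, rng)
Qr = Q @ r
delta = -Qr.min(axis=1)                   # delta_i* = -min_mu [Q r(theta^mu)]_i
s_code = Qr + delta[:, None]              # rotated and shifted code
```

The shift makes every neuron's smallest response exactly zero (minimal total spike count), while the rotation leaves all inner products of the rotated responses unchanged.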
Comparing sparsity of population codes
To explore the metabolic cost among the set of codes with the same inductive biases, we estimate the distribution of average spike counts of codes with the same inner product kernel as the biological code. These codes are generated in the form ${\mathbf{s}}^{\mu}={\mathbf{Qr}}^{\mu}+\mathit{\delta}$ where $\mathit{\delta}$ solves the optimization problem
To quantify the distribution of such codes, we randomly sample $\mathbf{Q}$ from the invariant (Haar) measure for $N\times N$ orthogonal matrices and compute the optimal $\mathit{\delta}$ as described above. This generates the aqua colored distribution in Figure 8B and C.
We also attempt to characterize the most efficient code with the same inner product kernel
Since this optimization problem is nonconvex in $\mathbf{Q}$, there is no theoretical guarantee that minima are unique. Nonetheless, we attempt to optimize the code by initializing $\mathbf{Q}$ at the identity matrix and conducting gradient descent over orthogonal matrices (Plumbley, 2004). Such updates take the form
where $\mathrm{exp}(\cdot )$ is the matrix exponential. To make the loss function differentiable, we incorporate the nonnegativity constraint with a soft-minimum:
where $Z=\sum _{\nu}\mathrm{exp}(-\beta {a}^{\nu})$ is a normalizing constant and $\mathbf{Q}=[{\mathbf{q}}_{1},\mathrm{\dots}{\mathbf{q}}_{N}]$. In the $\beta \to \mathrm{\infty}$ limit, this cost function converges to the exact optimization problem with the nonnegativity constraint. Finite $\beta $, however, allows learning with gradient descent. Gradients are computed with automatic differentiation in JAX (Bradbury et al., 2018). This optimization routine is run until convergence and the optimal value is plotted as dashed red lines labeled ‘opt.’ in Figure 8.
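The soft-minimum and its $\beta \to \mathrm{\infty}$ behavior can be sketched as follows (the exact form, a weighted average with weights $\mathrm{exp}(-\beta a^{\nu})$, is our reading of the construction above):

```python
import numpy as np

def soft_min(a, beta):
    """Soft minimum: sum_nu a_nu exp(-beta a_nu) / Z with
    Z = sum_nu exp(-beta a_nu); recovers min(a) as beta -> infinity."""
    w = np.exp(-beta * (a - a.min()))   # shift by min for numerical stability
    return float(np.sum(a * w) / np.sum(w))

a = np.array([0.3, 1.2, 0.7, 2.0])
approx = [soft_min(a, b) for b in (1.0, 10.0, 100.0)]
```

Unlike the hard minimum, this surrogate has nonzero gradients with respect to every entry of `a`, which is what allows gradient descent over $\mathbf{Q}$ to make progress.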
We show that our result is robust to different preprocessing techniques and to imposing bounds on neural firing rates in Figure 8—figure supplement 1. To demonstrate that our result is not an artifact of z-scoring the deconvolved signals against the spontaneous baseline activity level, we also conduct the random rotation experiment on the raw deconvolved signals. In addition, we show that imposing realistic constraints on the upper bound of each neuron’s responses does not change our findings. We used a subset of $N=100$ neurons and computed random rotations. However, we only accepted a code as valid if its maximum value was less than some upper bound ${u}_{b}$. Subsets of $N=100$ neurons in the biological code achieve maxima in the range between 3.2 and 4.7. We performed this experiment for ${u}_{b}\in \{3,4,5\}$ so that the artificial codes would have maxima that lie in the same range as the biological code.
Lifetime and population sparseness
We compute two more refined measures of sparseness in a population code. For each neuron $i$ we compute the lifetime sparseness $L{S}_{i}$ (also known as selectivity) and for each stimulus $\mathit{\theta}$ we compute the population sparseness $P{S}_{\mathit{\theta}}$ which are defined as the following two ratios (Willmore and Tolhurst, 2001; Lehky et al., 2005; Treves and Rolls, 1991; Pehlevan and Sompolinsky, 2014)
The normalization factors ensure that these quantities lie in the interval $(0,1)$. Intuitively, lifetime sparseness quantifies the variability of each neuron’s responses over the full set of stimuli, whereas population sparseness quantifies the variability of responses across the code for a given stimulus $\mathit{\theta}$.
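As one concrete instance, these ratios can be computed in the Treves–Rolls form cited above (whether this matches the paper's exact normalization convention is our assumption):

```python
import numpy as np

def treves_rolls(x):
    """Treves-Rolls sparseness of a nonnegative response vector x:
    (1 - <x>^2/<x^2>) / (1 - 1/M). Higher values indicate sparser responses.
    Matching the paper's exact normalization is an assumption."""
    M = x.size
    a = x.mean() ** 2 / np.mean(x ** 2)
    return (1.0 - a) / (1.0 - 1.0 / M)

rng = np.random.default_rng(3)
R = np.abs(rng.standard_normal((100, 200)))      # neurons x stimuli stand-in
lifetime = np.array([treves_rolls(row) for row in R])         # LS_i per neuron
population = np.array([treves_rolls(col) for col in R.T])     # PS_theta per stimulus
```

A vector with a single active entry scores near 1, while a uniform response vector scores 0, matching the intuition that the measures quantify response variability.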
Fitting a Gabor model to mouse V1 kernel
Under the assumption of translation symmetry in the kernel $K(\theta ,{\theta}^{\prime})$, we averaged the elements of the empirical mouse V1 kernel over rows (Pachitariu et al., 2019)
where angular addition is taken mod $\pi $. This generates the black dots in Figure 5B. We aimed to fit a threshold-power-law nonlinearity of the form ${g}_{q,a}(z)=\mathrm{max}{\{0,z-a\}}^{q}$ to the kernel. Based on the Gabor model discussed above, we parameterized tuning curves as
where ${\theta}_{i}$ is the preferred angle of the $i$th neuron’s tuning curve. Rather than attempting to fit ${\sigma}^{2},a,q,{\{{\theta}_{i}\}}_{i=1}^{N}$ of this form to the responses of each of the $\sim 20$k neurons, we instead simply fit the population kernel by optimizing over $({\sigma}^{2},a,q)$ under the assumption of uniform tiling of ${\theta}_{i}$. However, we noticed that two of these variables, ${\sigma}^{2}$ and $a$, are constrained by the sparsity level of the code. If each neuron, on average, fires for only a fraction $f$ of the uniformly sampled angles $\theta $, then the following relationship holds between ${\sigma}^{2}$ and $a$
Calculation of the coding level $f$ for the recorded responses allowed us to infer $a$ from ${\sigma}^{2}$ during optimization. This reduced the free parameter set to $({\sigma}^{2},q)$. We then solve the following optimization problem
where integration over ${\theta}_{i}$ is performed numerically. Using the SciPy trust-region constrained optimizer, we found $(q,{\sigma}^{2},a)=(1.7,5.0,0.2)$, which we use as the fit parameters in Figure 5.
Lead contact
Requests for information should be directed to the lead contact, Cengiz Pehlevan (cpehlevan@seas.harvard.edu).
Data and code availability
Mouse V1 neuron responses to orientation gratings and preprocessing code were obtained from a publicly available dataset: https://github.com/MouseLand/stringer-et-al-2019 (Stringer et al., 2021; Pachitariu et al., 2019).
Responses to ImageNet images and preprocessing code were obtained from another publicly available dataset: https://github.com/MouseLand/stringer-pachitariu-et-al-2018b (Stringer et al., 2018a).
The code generated by the authors for this paper is also available at https://github.com/PehlevanGroup/sample_efficient_pop_codes (PehlevanGroup, 2022).
Appendix 1
Singular value decomposition of continuous population responses
SVD of population responses is usually evaluated with respect to a discrete and finite set of stimuli. In the main paper, we implicitly assumed a generalization of SVD to a continuum of stimuli. In this section, we provide an explicit construction of this generalized SVD using techniques from functional analysis. Our construction is an example of the quasimatrix SVD defined in Townsend and Trefethen, 2015 and justifies our use of SVD in the main text.
For our construction, we note that Mercer’s theorem guarantees the existence of an eigendecomposition of any inner product kernel $K(\mathit{\theta},{\mathit{\theta}}^{\prime})$ in terms of a complete orthonormal set of functions ${\{{\psi}_{k}\}}_{k=1}^{\mathrm{\infty}}$ (Rasmussen and Williams, 2005). In particular, there exists a set of nonnegative (but possibly zero) summable eigenvalues ${\{{\lambda}_{k}\}}_{k=1}^{\mathrm{\infty}}$ and a corresponding set of orthonormal eigenfunctions such that
For a stimulus distribution $p(\mathit{\theta})$, the set of functions ${\{{\psi}_{k}\}}_{k=1}^{\mathrm{\infty}}$ is orthonormal and forms a complete basis for the space of square integrable functions ${L}_{2}$, which means
Next, we use this basis to construct the SVD. Each of the tuning curves ${r}_{i}\in {L}_{2}$ (assumed to be square integrable) can be expressed in this basis using the top $N$ functions in the set
where we introduced a matrix $\mathbf{A}\in {\mathbb{R}}^{N\times N}$ of expansion coefficients. Note that $\text{rank}(\mathbf{A})\le N$. We compute the singular value decomposition of the finite matrix $\mathbf{A}$
We note that the signal correlation matrix for this population code can be computed in closed form
due to the orthonormality of $\{{\psi}_{k}\}$. Thus the principal axes ${\mathbf{u}}_{k}$ of the neural correlations are the left singular vectors of $\mathbf{A}$. We may similarly express the inner product kernel in terms of the eigenfunctions
The kernel eigenvalue problem demands (Rasmussen and Williams, 2005)
The ${\mathbf{v}}_{k}$ vectors must be identical to $\pm {\mathbf{e}}_{k}$, the Cartesian unit vectors, if the eigenvalues are nondegenerate. From this exercise, we find that the SVD for $\mathbf{A}$ has the form $\mathbf{A}=\sqrt{N}{\sum}_{k=1}^{\text{rank}(\mathbf{A})}\sqrt{{\lambda}_{k}}{\mathbf{u}}_{k}{\mathbf{e}}_{k}^{\top}$. With this choice, the population code admits a singular value decomposition
This singular value decomposition demonstrates the connection between neural manifold structure (principal axes ${\mathbf{u}}_{k}$) and function approximation (kernel eigenfunctions ${\psi}_{k}$). This singular value decomposition can be verified by computing the inner product kernel and the correlation matrix, utilizing the orthonormality of $\{{\mathbf{u}}_{k}\}$ and $\{{\psi}_{k}\}$. This exercise has important consequences for the space of learnable functions, which is at most $\text{rank}(\mathbf{A})$ dimensional since linear readouts lie in $\text{span}{\{{r}_{i}(\mathit{\theta})\}}_{i=1}^{N}$.
Discrete stimulus spaces: finding eigenfunctions with matrix eigendecomposition
In our discussion so far, our notation suggested that $\mathit{\theta}$ takes a continuum of values. Here we want to point out that our theory still applies if $\mathit{\theta}$ takes a discrete set of values. In this case, we can think of a Dirac measure $p(\mathit{\theta})={\sum}_{i=1}^{\stackrel{~}{P}}{p}_{i}\delta (\mathit{\theta}-{\mathit{\theta}}^{i})$, where $i$ indexes all the $\stackrel{~}{P}$ values $\mathit{\theta}$ can take. With this choice
Demanding this equality for ${\mathit{\theta}}^{\prime}={\mathit{\theta}}^{i},\ i=1,\mathrm{\dots},\stackrel{~}{P}$ generates a matrix eigenvalue problem
where ${\mathbf{B}}_{ij}={\delta}_{ij}{p}_{i}$. The eigenfunctions over the stimuli are identified as the columns of $\mathbf{\Psi}$ while the eigenvalues are the diagonal elements of ${\mathbf{\Lambda}}_{k\ell}={\lambda}_{k}{\delta}_{k\ell}$.
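This discrete eigenproblem can be solved numerically by symmetrizing $\mathbf{KB}$ with ${\mathbf{B}}^{1/2}$ (a standard trick so that a symmetric eigensolver applies; the von Mises-type kernel and the Dirichlet-sampled probabilities below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
P_tilde = 60
thetas = np.linspace(0, 2 * np.pi, P_tilde, endpoint=False)
p = rng.dirichlet(np.ones(P_tilde))                    # stimulus probabilities
K = np.exp(np.cos(thetas[:, None] - thetas[None, :]))  # example PSD kernel

# symmetrize K B (B = diag(p)) as B^(1/2) K B^(1/2) so eigh applies
B_half = np.sqrt(p)
lam, W = np.linalg.eigh(B_half[:, None] * K * B_half[None, :])
lam, W = lam[::-1], W[:, ::-1]                         # descending eigenvalues
Psi = W / B_half[:, None]                              # eigenfunction columns
```

The recovered columns of `Psi` are orthonormal under the $p$-weighted inner product and satisfy the matrix eigenvalue problem $\mathbf{KB\Psi}=\mathbf{\Psi\Lambda}$.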
Experimental considerations
In an experimental setting, a finite number of stimuli are presented and the SVD is calculated over this finite set regardless of the support of $p(\mathit{\theta})$. This raises the question of the interpretation of this SVD and its relation to the inductive bias theory we presented. Here we provide two interpretations.
In the first interpretation, we think of the empirical SVD as providing an estimate of the SVD over the full distribution $p(\mathit{\theta})$. To formalize this notion, we can introduce a Monte Carlo estimate of the integral eigenvalue problem
For this interpretation to work, the experimenter must sample the stimuli from $p(\mathit{\theta})$, which could be the natural stimulus distribution. Measuring responses to a larger number of stimuli gives a more accurate approximation of the integral above, which will provide a better estimate of generalization performance on the true distribution $p(\mathit{\theta})$.
In the second interpretation, we construct an empirical measure on $\stackrel{~}{P}$ experimental stimulus values $\widehat{p}(\mathit{\theta})=\frac{1}{\stackrel{~}{P}}{\sum}_{\mu =1}^{\stackrel{~}{P}}\delta (\mathit{\theta}-{\mathit{\theta}}^{\mu})$, and consider learning and generalization over this distribution. This allows the application of our theory to an experimental setting where $\widehat{p}(\mathit{\theta})$ is designed by an experimenter. For example, the experimenter could procure a complicated set of $\stackrel{~}{P}$ videos, for which an associated function $y(\theta )$ must be learned. After showing these videos to the animal and measuring neural responses, the experimenter could compute, with our theory, generalization error for a uniform distribution over this full set of $\stackrel{~}{P}$ videos. Our theory would predict generalization over this distribution after providing supervisory feedback for only a strict subset of $P<\stackrel{~}{P}$ videos. Under this interpretation, the relationship between the integral eigenvalue problem and the matrix eigenvalue problem is exact rather than approximate
Demanding either of (32) or (33) equalities for ${\mathit{\theta}}^{\prime}={\mathit{\theta}}^{\nu},\nu =1,\mathrm{\dots},P$ generates a matrix eigenvalue problem
The eigenfunctions restricted to $\{{\mathit{\theta}}^{\mu}\}$ are identified as the columns of $\mathbf{\Psi}$ while the eigenvalues are the diagonal elements of ${\mathbf{\Lambda}}_{k\ell}={\lambda}_{k}{\delta}_{k\ell}$. For the case where $N$ and $P$ are finite, the spectrum obtained through eigendecomposition of the kernel $\mathbf{K}$ is the same as would be obtained through the finite-$N$ signal correlation matrix ${\mathbf{\Sigma}}_{s}$, since they are inner and outer products of the trial-averaged population response matrix $\mathbf{R}$.
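The claimed equivalence of the two spectra is easy to verify numerically (the Gaussian stand-in for $\mathbf{R}$ is ours):

```python
import numpy as np

rng = np.random.default_rng(5)
N, P = 30, 12
R = rng.standard_normal((N, P))          # trial-averaged response matrix

K = R.T @ R                              # inner products (P x P kernel)
Sigma_s = R @ R.T                        # outer products (N x N correlations)

spec_K = np.sort(np.linalg.eigvalsh(K))[::-1]
spec_S = np.sort(np.linalg.eigvalsh(Sigma_s))[::-1]
```

The $P$ eigenvalues of the kernel coincide with the top $P$ eigenvalues of the correlation matrix, and the remaining $N-P$ eigenvalues of ${\mathbf{\Sigma}}_{s}$ vanish.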
Translation invariant kernels
For the special case where the data distribution $p(\mathit{\theta})=\frac{1}{V}$ is uniform over a volume $V$ and the kernel is translation invariant, $K(\mathit{\theta},{\mathit{\theta}}^{\prime})=\kappa (\mathit{\theta}-{\mathit{\theta}}^{\prime})$, the kernel can be diagonalized in the basis of plane waves
The eigenvalues are the Fourier components of the kernel, ${\lambda}_{\mathbf{k}}=\frac{1}{V}\widehat{\kappa}(\mathbf{k})=\frac{1}{V}\int d\mathit{\theta}{e}^{-i\mathbf{k}\cdot \mathit{\theta}}\kappa (\mathit{\theta})$, while the eigenfunctions are plane waves ${\psi}_{\mathbf{k}}(\mathit{\theta})={e}^{i\mathbf{k}\cdot \mathit{\theta}}$. The set of admissible momenta ${\mathcal{S}}_{\mathbf{k}}=\{{\mathbf{k}}_{0},\pm {\mathbf{k}}_{1},\pm {\mathbf{k}}_{2},\mathrm{\dots}\}$ is determined by the boundary conditions. The diagonalized representation of the kernel is therefore
For example, if the space is the torus ${\mathbb{T}}^{n}={S}^{1}\times {S}^{1}\times \mathrm{\dots}\times {S}^{1}$, then the space of admissible momenta is the integer lattice ${\mathcal{S}}_{\mathbf{k}}={\mathbb{Z}}^{n}=\{\mathbf{k}\in {\mathbb{R}}^{n}\mid {k}_{i}\in \mathbb{Z}\ \mathrm{\forall}i=1,\mathrm{\dots},n\}$. Reality and symmetry of the kernel demand that $\text{Im}({\lambda}_{\mathbf{k}})=0$ and ${\lambda}_{-\mathbf{k}}={\lambda}_{\mathbf{k}}\ge 0$. Most of the models in this paper consider $\theta \sim \text{Unif}\left({S}^{1}\right)$, where the kernel has the following Fourier/Mercer decomposition
where we invoked the simple trigonometric identity $\mathrm{cos}(a-b)=\mathrm{cos}(a)\mathrm{cos}(b)+\mathrm{sin}(a)\mathrm{sin}(b)$. By recognizing that ${\{\sqrt{2}\mathrm{cos}(k\theta ),\sqrt{2}\mathrm{sin}(k\theta )\}}_{k=0}^{\mathrm{\infty}}$ form a complete orthonormal set of functions with respect to $\text{Unif}\left({S}^{1}\right)$, we have identified this as the collection of kernel eigenfunctions.
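On a uniform grid over ${S}^{1}$, a translation-invariant kernel yields a circulant Gram matrix, so this Fourier diagonalization can be checked directly against a matrix eigendecomposition (the von Mises-type profile $\kappa(\theta)=e^{\cos\theta}$ is an arbitrary example):

```python
import numpy as np

M = 128
thetas = np.arange(M) * 2 * np.pi / M
kappa = np.exp(np.cos(thetas))                 # first row of the Gram matrix
K = np.exp(np.cos(thetas[:, None] - thetas[None, :]))

# eigenvalues of the (circulant) normalized Gram matrix ...
eig_direct = np.sort(np.linalg.eigvalsh(K / M))[::-1]
# ... equal the Fourier components of the kernel profile
eig_fourier = np.sort(np.real(np.fft.fft(kappa)) / M)[::-1]
```

The spectrum also comes in degenerate $\pm k$ pairs, as required by ${\lambda}_{-\mathbf{k}}={\lambda}_{\mathbf{k}}$.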
Invariant kernels possess invariant eigenfunctions
Suppose the kernel $K(\mathit{\theta},{\mathit{\theta}}^{\prime})$ is invariant to some set of transformations $t\in \mathcal{T}$, by which we mean that
We will now show that any eigenfunction of such a kernel with nonzero eigenvalue must be an invariant function. Let ${\psi}_{k}(\mathit{\theta})$ be an eigenfunction with eigenvalue ${\lambda}_{k}>0$, then
This establishes that all functions which vary under $\mathcal{T}$ transformations must necessarily lie in the nullspace of $K$.
Theory of generalization
Convergence of the delta rule without weight decay
In this section, we discuss delta-rule convergence when the weight decay parameter is set to $\lambda =0$. The next section considers the simpler case where $\lambda >0$. Gradient descent training of readout weights $\mathbf{w}$ on a finite sample of size $P$ converges to the kernel regression solution (Bartlett et al., 2020; Hastie et al., 2020). Let $\mathcal{D}=\{{\mathit{\theta}}^{\mu},{y}^{\mu}{\}}_{\mu =1}^{P}$ be the dataset with samples ${\mathit{\theta}}^{\mu}$ and target values ${y}^{\mu}$. We introduce the shorthand ${\mathbf{r}}^{\mu}=\mathbf{r}({\mathit{\theta}}^{\mu})$ for convenience. The empirical loss we aim to minimize is a sum of the squared losses over the data points in the training set
Performing gradient descent updates
recovers the delta rule that we discussed in the main text (Widrow and Hoff, 1960; Hertz et al., 1991). Letting the empirical response matrix $\mathbf{R}=[{\mathbf{r}}^{1},\mathrm{\dots},{\mathbf{r}}^{P}]\in {\mathbb{R}}^{N\times P}$ have an SVD $\mathbf{R}=\sum _{k}\sqrt{{\hat{\lambda}}_{k}}{\hat{\mathbf{u}}}_{k}{\hat{\mathit{\psi}}}_{k}^{\mathrm{\top}}$, and expanding the weights ${\mathbf{w}}_{t}={\sum}_{k}{w}_{t,k}{\widehat{\mathbf{u}}}_{k}$ and labels $\mathbf{y}=\sum _{k}{\hat{v}}_{k}{\hat{\mathit{\psi}}}_{k}$ in their respective SVD bases, we find
For all directions with ${\hat{\lambda}}_{k}>0$, the dynamics converge to the unique fixed point ${w}_{k}^{*}=\frac{{\widehat{v}}_{k}}{\sqrt{{\widehat{\lambda}}_{k}}}$, while for all modes with ${\widehat{\lambda}}_{k}=0$, the weights remain at ${w}_{k}^{*}=0$. Thus
where ${\mathbf{K}}^{+}$ is the Moore–Penrose inverse of the kernel matrix ${K}_{\mu ,\nu}=K({\mathit{\theta}}^{\mu},{\mathit{\theta}}^{\nu})$. The predictions of the learned function are given by $f={\mathbf{w}}^{*}\cdot \mathbf{r}(\mathit{\theta})$, which can be expressed as
The fact that the solution can be written in terms of a linear combination of ${\{K(\mathit{\theta},{\mathit{\theta}}^{\mu})\}}_{\mu =1}^{P}$ is known as the representer theorem (Schölkopf et al., 2001; Rasmussen and Williams, 2005). A similar analysis for nonlinear readouts, where $f(\mathit{\theta})=g\left(\mathbf{w}\cdot \mathbf{r}(\mathit{\theta})\right)$, is provided in the Appendix section ‘Convergence of delta rule for nonlinear readouts’.
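A minimal numerical check that the delta rule converges to this kernel (pseudoinverse) regression solution (the sizes, step size, and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
N, P = 40, 15
R = rng.standard_normal((N, P))          # columns are r(theta^mu)
y = rng.standard_normal(P)

w = np.zeros(N)
eta = 1.0 / np.linalg.norm(R @ R.T, 2)   # stable learning rate
for _ in range(20000):
    w -= eta * R @ (R.T @ w - y)         # delta-rule gradient step

K = R.T @ R
w_star = R @ np.linalg.pinv(K) @ y       # w* = R K^+ y
```

Starting from $\mathbf{w}=\mathbf{0}$, the iterates never leave the span of the training responses, so gradient descent lands exactly on the minimum-norm interpolating solution.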
Weight decay and ridge regression
We can introduce a regularization term in our learning problem which penalizes the size of the readout weights. This leads to a modified learning objective of the form
Inclusion of this regularization alters the learning rule through weight decay, ${\mathbf{w}}_{t+1}=(1-\eta \lambda ){\mathbf{w}}_{t}-\eta \sum _{\mu}{\mathbf{r}}^{\mu}({\mathbf{r}}^{\mu}\cdot {\mathbf{w}}_{t}-{y}^{\mu})$, which multiplies the existing weight value by a factor of $1-\eta \lambda $ before adding the data-dependent update. The fixed point of these dynamics is $\mathbf{w}={({\mathbf{RR}}^{\top}+\lambda \mathbf{I})}^{-1}\mathbf{Ry}$. This learning problem and gradient descent dynamics have a closed form solution
The generalization benefits of explicit regularization through weight decay are known to be related to the noise statistics of the learning problem (Canatar et al., 2021). This is visible in Appendix 1—figure 1, where unlearnable target functions demand nonzero optimal regularization. We simulate weight decay only in Figure 6C, where we use $\lambda =0.01{\sum}_{k}{\lambda}_{k}$ to improve numerical stability at large $P$.
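The weight-decay fixed point can be checked numerically in the same way as the ridgeless case (all sizes and the value of $\lambda$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
N, P, lam = 30, 20, 0.1
R = rng.standard_normal((N, P))
y = rng.standard_normal(P)

w = np.zeros(N)
eta = 1.0 / (np.linalg.norm(R @ R.T, 2) + lam)
for _ in range(20000):
    w = (1 - eta * lam) * w - eta * R @ (R.T @ w - y)   # weight-decay update

# ridge regression fixed point w* = (R R^T + lam I)^(-1) R y
w_star = np.linalg.solve(R @ R.T + lam * np.eye(N), R @ y)
```

With $\lambda > 0$ the objective is strongly convex, so the iteration converges to this unique fixed point from any initialization.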
Computation of learning curves
Recent work has established analytic results that predict the average case generalization error for kernel regression
where ${E}_{g}(\mathcal{D})={\u27e8(f(\mathit{\theta},\mathcal{D})-y(\mathit{\theta}){)}^{2}\u27e9}_{\mathit{\theta}}$ is the generalization error for a particular sample $\mathcal{D}$ of size $P$ and $f(\mathit{\theta},\mathcal{D})$ is the kernel regression solution for $\mathcal{D}$ (Bordelon et al., 2020; Canatar et al., 2021). The typical or average case error ${E}_{g}$ is obtained by averaging over all possible datasets of size $P$. This average case generalization error is determined solely by the decomposition of the target function $y(\mathit{\theta})$ along the eigenbasis of the kernel and the eigenspectrum of the kernel. This diagonalization takes the form
Since the eigenfunctions form a complete set of square integrable functions, we expand both the target function $y(\mathit{\theta})$ and the learned function $f(\mathit{\theta})$ in this basis
Due to the orthonormality of the kernel eigenfunctions $\{{\psi}_{k}\}$, the generalization error for any set of coefficients $\widehat{v}$ is
We now introduce training error, or empirical loss, which depends on the disorder in the dataset $\mathcal{D}=\{({\mathit{\theta}}^{\mu},{y}^{\mu}){\}}_{\mu =1}^{P}$
It is straightforward to verify that the optimal ${\widehat{\mathbf{v}}}^{*}$ which minimizes $H(\hat{\mathbf{v}},\mathcal{D})$ is the kernel regression solution for the kernel with eigenvalues $\{{\lambda}_{k}\}$ when $\lambda \to 0$. The optimal weights $\widehat{\mathbf{v}}$ can be identified through the first order condition $\mathrm{\nabla}H(\hat{\mathbf{v}},\mathcal{D})=0$, which gives
where ${\mathrm{\Psi}}_{k,\mu}={\psi}_{k}({\mathit{\theta}}^{\mu})$ are the eigenfunctions evaluated on the training data and ${\mathrm{\Lambda}}_{k,\mathrm{\ell}}={\delta}_{k,\mathrm{\ell}}{\lambda}_{k}$ is a diagonal matrix containing the kernel eigenvalues. The generalization error for this optimal solution is
We note that the dependence on the randomly sampled dataset $\mathcal{D}$ only appears through the matrix $\mathbf{G}(\mathcal{D})$. Thus, to compute the typical generalization error, we need to average this matrix: ${\u27e8\mathbf{G}(\mathcal{D})\u27e9}_{\mathcal{D}}$. There are multiple strategies to perform such an average, and we will pursue one here based on a partial differential equation, which was introduced in Sollich, 1998; Sollich, 2002 and studied further in Bordelon et al., 2020; Canatar et al., 2021. In this setting, we denote the average matrix $\mathbf{G}(P)={\u27e8\mathbf{G}(\mathcal{D})\u27e9}_{\mathcal{D}=P}$ for a dataset of size $P$. We will first derive a recursion relation using the Sherman–Morrison formula for a rank-1 update to an inverse matrix. We imagine adding a new sampled feature vector $\psi $ to a dataset of size $P$. The average matrix $\mathbf{G}(P+1)$ at $P+1$ samples can be related to $\mathbf{G}(P)$ through the Sherman–Morrison rule
where in the last step we approximated the average of the ratio with the ratio of averages. This operation is, of course, theoretically unjustified, but has been shown to produce accurate learning curves (Sollich, 2002; Bordelon et al., 2020). Since the chosen basis of kernel eigenfunctions is orthonormal, the average over the new sample is trivial: ${\u27e8\psi {\psi}^{\mathrm{\top}}\u27e9}_{\psi}=\mathbf{I}$. We thus arrive at the following recursion relation for $\mathbf{G}$
By introducing an additional source $J$ so that $\mathbf{G}(\mathcal{D},J{)}^{-1}=\frac{1}{\lambda}\mathbf{\Psi}{\mathbf{\Psi}}^{\mathrm{\top}}+{\mathbf{\Lambda}}^{-1}+J\mathbf{I}$, we can relate $\mathbf{G}(\mathcal{D},J)$’s first and second moments through differentiation
Thus the recursion relation simplifies to
where we approximated the finite difference in $P$ as a derivative, treating $P$ as a continuous variable. Taking the trace of both sides and defining $\kappa (P,J)=\lambda +\text{Tr}\mathbf{G}(P,J)$ we arrive at the following quasilinear PDE
with the initial condition $\kappa (0,J)=\lambda +\text{Tr}({\mathbf{\Lambda}}^{-1}+J\mathbf{I}{)}^{-1}$. Using the method of characteristics, we arrive at the solution $\kappa (P,J)=\lambda +\text{Tr}{\left({\mathbf{\Lambda}}^{-1}+(J+\frac{P}{\kappa (P,J)})\mathbf{I}\right)}^{-1}$. Using this solution for $\kappa $, we can identify the solution for $\mathbf{G}$
The generalization error, therefore can be written as
where $\gamma =P{\sum}_{k}\frac{{\lambda}_{k}^{2}}{{({\lambda}_{k}P+\kappa )}^{2}}$, giving the desired result. Note that $\kappa $ depends on $J$ implicitly, which is the source of the $\frac{1}{1-\gamma}$ factor. This result was recently reproduced using techniques from statistical mechanics (Bordelon et al., 2020; Canatar et al., 2021).
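The resulting learning-curve formula, ${E}_{g}(P)=\frac{{\kappa}^{2}}{1-\gamma}\sum_{k}\frac{{v}_{k}^{2}}{({\lambda}_{k}P+\kappa)^{2}}$, is straightforward to evaluate numerically (the power-law spectrum, target, and small ridge below are hypothetical choices):

```python
import numpy as np

def learning_curve(P, lambdas, v2, lam=1e-4, n_iter=200):
    """E_g(P) = kappa^2/(1 - gamma) * sum_k v_k^2/(lambda_k P + kappa)^2,
    with kappa solved self-consistently by fixed-point iteration."""
    kappa = lam + lambdas.sum()                     # kappa(0)
    for _ in range(n_iter):
        kappa = lam + kappa * np.sum(lambdas / (lambdas * P + kappa))
    gamma = P * np.sum(lambdas ** 2 / (lambdas * P + kappa) ** 2)
    return kappa ** 2 / (1 - gamma) * np.sum(v2 / (lambdas * P + kappa) ** 2)

k = np.arange(1, 501, dtype=float)
lambdas, v2 = k ** -2.0, k ** -2.0      # hypothetical spectrum and target power
Eg = np.array([learning_curve(P, lambdas, v2) for P in (1, 4, 16, 64, 256)])
```

For a noiseless, learnable target of this kind the predicted error decreases monotonically with the number of samples.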
Spectral bias and codetask alignment
Through implicit differentiation, it is straightforward to verify that the ordering of the mode errors ${E}_{k}=\frac{{\kappa}^{2}}{1-\gamma}{\left({\lambda}_{k}P+\kappa \right)}^{-2}$ matches the ordering of the eigenvalues (Canatar et al., 2021). Let ${\lambda}_{k}>{\lambda}_{\ell}$, then we have
Since ${\lambda}_{\ell}<{\lambda}_{k}$, the first bracket must be negative and the second bracket must be positive. Further, it is straightforward to compute that ${\kappa}^{\mathrm{\prime}}(P)=-\frac{\kappa \gamma}{P(1-\gamma )}<0$. Therefore ${\lambda}_{k}>{\lambda}_{\ell}$ implies $\frac{d}{dP}\mathrm{log}\left(\frac{{E}_{k}}{{E}_{\ell}}\right)<0$ for all $P$. Since $\mathrm{log}\left(\frac{{E}_{k}}{{E}_{\mathrm{\ell}}}\right)=0$ at $P=0$, we therefore have $\mathrm{log}({E}_{k}/{E}_{\ell})<0$ for all $P$ and consequently ${E}_{k}<{E}_{\ell}$. Modes with larger eigenvalues ${\lambda}_{k}$ have lower normalized mode errors ${E}_{k}$. This observation can be used to prove that target functions acting on the same data distribution with higher cumulative power distributions $C(k)$ for all $k$ will have lower generalization error, normalized by total target power ${E}_{g}(P)/{E}_{g}(0)$, for all $P$. The proof can be found in Canatar et al., 2021.
Asymptotic power law scaling of learning curves
Exponential spectral decays:
First, we will study the setting relevant to the von Mises kernel, where ${\lambda}_{k}\sim {\beta}^{k}$ and ${v}_{k}^{2}\sim {\alpha}^{k}$ with $\alpha ,\beta <1$. This exponential behavior accounts for differences in bandwidth between kernels, which modulates the base $\beta $ of the exponential scaling of ${\lambda}_{k}$ with $k$. We approximate the sum over all mode errors with an integral
If we include a regularization parameter $\lambda $, then $\kappa \sim \lambda $ as $P\to \mathrm{\infty}$. Making the change of variables $u=P{\beta}^{k}/\lambda$, we transform the above integral into
The remaining integral over $u$ is either dominated near $u\approx 0$ if $\frac{\mathrm{ln}(1/\alpha )}{\mathrm{ln}(1/\beta )}<2$, in which case it behaves as a $P$-independent constant, or else is dominated near $u\approx P/\lambda $, in which case the integral scales as $\sim {P}^{\frac{\mathrm{ln}(1/\alpha )}{\mathrm{ln}(1/\beta )}-2}$. Multiplying these resulting functions with the prefactor, we find the following scaling laws for generalization.
Thus, we obtain a power law scaling of the learning curve ${E}_{g}$ which is dominated at large $P$ by ${E}_{g}\sim {P}^{-\mathrm{min}(2,\frac{\mathrm{ln}(1/\alpha )}{\mathrm{ln}(1/\beta )})}$. For the von Mises kernel, we can approximate the spectra with ${\lambda}_{k}\sim {\sigma}^{2k}$ and ${v}_{k}^{2}\sim {\sigma}_{T}^{2k}$, giving rise to a generalization scaling ${E}_{g}\sim {P}^{-\mathrm{min}(2,\frac{\mathrm{ln}{\sigma}_{T}}{\mathrm{ln}\sigma})}$.
Power law spectral decays
The same arguments can be applied to power law kernels ${\lambda}_{k}\sim {k}^{-b}$ and power law targets ${v}_{k}^{2}\sim {k}^{-a}$, which are of interest due to their connection to nonlinear rectified neural populations. In this setting, the generalization error is
We see that there are two possible power law scalings for ${E}_{g}$, with exponents $(a-1)/b$ and 2. At large $P$, this formula is dominated by the term with the minimum exponent, so ${E}_{g}\sim {P}^{-\mathrm{min}(a-1,2b)/b}$.
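This asymptotic exponent can be checked against the self-consistent theory itself (the choices $a=b=2$, the fixed ridge, the mode cutoff, and the $P$ range are ours; with these values $\mathrm{min}(a-1,2b)/b=1/2$):

```python
import numpy as np

def theory_Eg(P, lambdas, v2, lam, n_iter=100):
    """Learning-curve theory: solve for kappa, then evaluate E_g(P)."""
    kappa = lam + lambdas.sum()
    for _ in range(n_iter):
        kappa = lam + kappa * np.sum(lambdas / (lambdas * P + kappa))
    gamma = P * np.sum(lambdas ** 2 / (lambdas * P + kappa) ** 2)
    return kappa ** 2 / (1 - gamma) * np.sum(v2 / (lambdas * P + kappa) ** 2)

a, b, lam = 2.0, 2.0, 0.5          # exponents and a fixed ridge (our choices)
k = np.arange(1, 100001, dtype=float)
lambdas, v2 = k ** -b, k ** -a

Eg1 = theory_Eg(256, lambdas, v2, lam)
Eg2 = theory_Eg(4096, lambdas, v2, lam)
slope = np.log(Eg1 / Eg2) / np.log(4096 / 256)   # near min(a-1, 2b)/b = 0.5
```

The measured local slope approaches the predicted exponent only asymptotically, so a generous tolerance is appropriate at these moderate sample sizes.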
Laplace kernel generalization
We calculate similar learning curves as we did for the von Mises kernel but with Laplace kernels, to show that our results are not an artifact of the infinite differentiability of the von Mises kernel. Each of these Laplace kernels has the same asymptotic power law spectrum ${\lambda}_{k}\sim {k}^{-2}$, exhibiting a discontinuous first derivative (Figure 6A). Despite having the same spectral scaling at large $k$, these kernels can give dramatically different performance in learning tasks, again indicating the influence of the top eigenvalues on generalization at small $P$ (Figure 6). Again, the trend for which kernels perform best at low $P$ can be reversed at large $P$. In this case, all generalization errors scale as ${E}_{g}\sim {P}^{-2}$ (Figure 6B). More generally, our theory shows that if the task power spectrum and kernel eigenspectrum both fall as power laws with exponents $a$ and $b$, respectively, then the generalization error asymptotically falls as a power law, ${E}_{g}\sim {P}^{-\mathrm{min}(a-1,2b)/b}$ (Methods) (Bordelon et al., 2020). This decay is fastest when $b\le \frac{a-1}{2}$, for which ${E}_{g}\sim {P}^{-2}$. Therefore, the tail of the kernel’s eigenvalue spectrum determines the large sample size behavior of the generalization error for power law kernels. The small sample size limit is still governed by the bulk of the spectrum.
Learning with multiple output channels
Our theory is not limited to scalar target functions but rather can be easily extended to multiple output functions ${y}_{1},\mathrm{\dots},{y}_{C}$ from the same data, if for example the task requires computing class membership for $C$ categories. In this setting, each data point has the form $({\mathit{\theta}}^{\mu},{\mathbf{y}}^{\mu})$ with ${\mathbf{y}}^{\mu}\in {\mathbb{R}}^{C}$. For these $C$ classes, the generalization error takes the form
We therefore find that the generalization error in the multiclass setting is the same as the ${E}_{g}$ obtained for a single scalar target function with power spectrum ${v}_{k}^{2}=\sum _{c}{\u27e8{y}_{c}(\mathit{\theta}){\varphi}_{k}(\mathit{\theta})\u27e9}^{2}$ (Bordelon et al., 2020; Canatar et al., 2021). The relevant cumulative power distribution measures the fraction of total output variance captured by the first $k$ eigenfunctions of the population code
Convergence of the delta rule for nonlinear readouts
In this section, we consider gradient descent dynamics of an arbitrary convex loss function. For instance, we can consider a binary classification problem where $y\in \{\pm 1\}$, with prediction $\widehat{y}=\text{sign}(\mathbf{w}\cdot \mathbf{r})$. We could, for example, train the model with the hinge loss $\mathrm{\ell}(\mathbf{w}\cdot \mathbf{r},y)=\text{max}(0,1-y\,\mathbf{w}\cdot \mathbf{r})$ so that the classifier converges to a kernel support vector machine (SVM) (Schölkopf et al., 2002). The generalization error of the classifier would be the error rate of $\widehat{y}(\mathit{\theta})=\text{sign}(\mathbf{w}\cdot \mathbf{r}(\mathit{\theta}))$ compared to the ground truth $y(\mathit{\theta})$.
Let $\mathcal{D}=\{{\mathit{\theta}}^{\mu},{y}^{\mu}{\}}_{\mu =1}^{P}$ be the dataset with samples ${\mathit{\theta}}^{\mu}$ and target values ${y}^{\mu}$. We introduce a shorthand ${\mathbf{r}}^{\mu}=\mathbf{r}({\mathit{\theta}}^{\mu})$ for convenience. The loss we aim to minimize is the sum of the losses of each data point in the training set with an additional weight decay parameter
For convex $\mathrm{\ell}$ and nonzero $\lambda $, the above objective is strongly convex, indicating the existence of a unique minimizer which can be found from simple first order learning rules. For $\lambda >0$ the initial condition for $\mathbf{w}$ does not influence the final result.
We will now show that the dynamics will converge to a function which only depends on the code $\mathbf{r}(\mathit{\theta})$ through the kernel $K(\mathit{\theta},{\mathit{\theta}}^{\prime})$. To simplify the argument, we consider starting from an initial condition of ${\mathbf{w}}_{t=0}=\mathbf{0}$ and performing gradient descent updates. Under such an assumption, the weights ${\mathbf{w}}_{t}$ will always be in the span of the population vectors on the training set $\{{\mathbf{r}}^{\mu}{\}}_{\mu =1}^{P}$ since
The derivative in the final term is taken with respect to the first argument, ${\mathrm{\ell}}^{\prime}(f,y)=\frac{\partial \mathrm{\ell}(f,y)}{\partial f}$. This update is still local and recovers the delta rule that we discussed in the main text for $\mathrm{\ell}(\mathbf{w}\cdot \mathbf{r},y)=\frac{1}{2}{(\mathbf{w}\cdot \mathbf{r}-y)}^{2}$ (Widrow and Hoff, 1960; Hertz et al., 1991). We can express ${\mathbf{w}}_{t}$ in terms of the population vectors, ${\mathbf{w}}_{t}=\sum _{\mu =1}^{P}{\alpha}_{t}^{\mu}{\mathbf{r}}^{\mu}=\mathbf{R}{\mathit{\alpha}}_{t}$, so that ${\mathit{\alpha}}_{t}\in {\mathbb{R}}^{P}$ defines the linear weighting of each sample. The dynamics of these coefficients are
where $\mathbf{K}={\mathbf{R}}^{\top}\mathbf{R}\in {\mathbb{R}}^{P\times P}$ is the kernel Gram matrix evaluated on the training points. We multiply both sides of this equation by ${\mathbf{R}}^{\top}$, and define ${\mathit{\beta}}_{t}=\mathbf{K}{\mathit{\alpha}}_{t}$, which satisfies the following simplified dynamics
where ${\mathbf{K}}^{+}$ is the pseudoinverse of $\mathbf{K}$. The nonlinear fixed point condition is $\mathit{\beta}=-\frac{1}{\lambda}\mathbf{K}{\mathrm{\ell}}^{\prime}(\mathit{\beta},\mathbf{y})$, which transparently depends only on the kernel $\mathbf{K}$ rather than the full code $\mathbf{R}$. The above equation recovers the correct linear equation $\mathit{\beta}=\mathbf{K}{\left(\mathbf{K}+\lambda \mathbf{I}\right)}^{-1}\mathbf{y}$ for square loss. For an arbitrary test point $\mathit{\theta}$, the model makes predictions using $f(\mathit{\theta})=\mathbf{r}(\mathit{\theta})\cdot \mathbf{w}=\mathbf{r}(\mathit{\theta})\cdot [{\mathbf{RK}}^{+}\mathit{\beta}]=\mathbf{k}(\mathit{\theta})\cdot \mathit{\alpha}$, which also depends only on the kernel $[\mathbf{k}(\mathit{\theta}){]}_{\mu}=K(\mathit{\theta},{\mathit{\theta}}^{\mu})$ evaluated on the test point $\mathit{\theta}$ and the training points ${\mathit{\theta}}^{\mu}$, as well as the kernel Gram matrix $[\mathbf{K}{]}_{\mu \nu}=K({\mathit{\theta}}^{\mu},{\mathit{\theta}}^{\nu})$.
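For the square loss, the convergence of the delta rule with weight decay to the kernel ridge solution can be checked numerically; the sketch below uses a random Gaussian code with illustrative sizes and step size.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 50, 20
R = rng.standard_normal((N, P))          # columns are population vectors r^mu
y = rng.standard_normal(P)
lam, eta = 0.1, 1e-3

# delta rule with weight decay, starting from w = 0
w = np.zeros(N)
for _ in range(20000):
    f = R.T @ w                          # current predictions on training set
    w -= eta * (R @ (f - y) + lam * w)   # local update plus decay

# kernel ridge solution, expressed through the Gram matrix K = R^T R alone
K = R.T @ R
alpha = np.linalg.solve(K + lam * np.eye(P), y)
w_kernel = R @ alpha                     # w* = R (K + lam I)^{-1} y
```

The two weight vectors agree, illustrating that the converged readout depends on the code only through $\mathbf{K}$.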
To illustrate a specific case with square error and a nonlinear readout, consider output neurons which produce activity $g(\mathbf{w}\cdot \mathbf{r}(\mathit{\theta}))$ for an invertible nonlinear function $g$ with nonvanishing gradient, and gradient based learning on $L={\sum}_{\mu}{(g(\mathbf{w}\cdot {\mathbf{r}}^{\mu})-y({\mathit{\theta}}^{\mu}))}^{2}$. This gives $\mathrm{\Delta}\mathbf{w}\propto {\sum}_{\mu}{\mathbf{r}}^{\mu}{g}^{\prime}(\mathbf{w}\cdot {\mathbf{r}}^{\mu})({y}^{\mu}-g(\mathbf{w}\cdot {\mathbf{r}}^{\mu}))\in \text{span}{\{{\mathbf{r}}^{\mu}\}}_{\mu =1}^{P}$, which is still a local learning rule. Thus the weights at convergence can be written as $\mathbf{w}=\sum _{\mu}{\alpha}^{\mu}{\mathbf{r}}^{\mu}$, and the learned function $f(\mathit{\theta})=g\left(\sum _{\mu}{\alpha}^{\mu}K(\mathit{\theta},{\mathit{\theta}}^{\mu})\right)$ is given by $\mathit{\alpha}={\mathbf{K}}^{+}{g}^{-1}(\mathbf{y})$. To see this, first note that ${\mathbf{w}}_{t}\in \text{span}{\{{\mathbf{r}}^{\mu}\}}_{\mu =1}^{P}$ for all $t$ so that ${\mathbf{w}}^{*}=\mathbf{R}{\mathit{\alpha}}^{*}$. The interpolation condition can be expressed as $g\left({\mathbf{R}}^{\top}{\mathbf{w}}^{*}\right)=g\left(\mathbf{K}\mathit{\alpha}\right)=\mathbf{y}$, giving the desired result ${\mathit{\alpha}}^{*}={\mathbf{K}}^{+}{g}^{-1}(\mathbf{y})$. The predictions of the model on a test stimulus $\mathit{\theta}$ are given by $f(\mathit{\theta})=g\left(\sum _{\mu =1}^{P}{\alpha}^{\ast \mu}K(\mathit{\theta},{\mathit{\theta}}^{\mu})\right)$. We see that this solution depends only on the kernel (directly, and indirectly through ${\mathit{\alpha}}^{*}$), rather than the full code.
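A sketch of this construction with $g=\tanh$ (an arbitrary invertible choice) and a random code matrix: the dual coefficients ${\mathbf{K}}^{+}{g}^{-1}(\mathbf{y})$ interpolate the training targets through the kernel alone.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 40, 15
R = rng.standard_normal((N, P))            # random illustrative code
K = R.T @ R                                # Gram matrix on the training set
y = rng.uniform(-0.9, 0.9, P)              # targets inside the range of tanh

# interpolating dual coefficients: alpha* = K^+ g^{-1}(y), here g = tanh
alpha = np.linalg.pinv(K) @ np.arctanh(y)

# readout on the training stimuli: f^mu = g(sum_nu alpha_nu K_{mu nu})
f_train = np.tanh(K @ alpha)
```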
Typical case analysis of nonlinear readouts
The analysis of typical case generalization can be extended to nonlinear predictors and loss functions described by (68) which depend on the scalar prediction variable $\mathbf{w}\cdot \mathbf{r}(\mathit{\theta})$ (Loureiro et al., 2021a). Further, if $\mathbf{r}(\mathit{\theta})$ is well approximated as a Gaussian process, then the generalization performance can still be characterized using statistical mechanics methods (Loureiro et al., 2021a). Many qualitative features of our results continue to hold, including that the kernel’s diagonalization governs training and generalization and that improvements in code-task alignment lead to improvements in generalization.
In a later work by Cui et al., 2022, SVM and ridge classifiers trained on codes and tasks with power law spectra were analyzed asymptotically, showing power law generalization error decay rates ${E}_{g}\sim {P}^{-\beta}$. These classification learning curves for power law spectra were shown to follow power laws with exponents $\beta $ which are qualitatively similar to the exponents obtained with the square loss which we describe in our section titled Small and Large Sample Size Behaviors of Generalization. Just as in our theory, decay rate exponents $\beta $ are larger for codes which are well aligned to the task and are smaller for codes which are nonaligned.
Visual scene reconstruction task
Reconstruction of natural scenes from neural responses
Using the mouse V1 responses to natural scenes, we attempt to reconstruct the original images from the neural code using varying numbers of training images. The presented natural scenes are taken from ten classes of ImageNet and can be downloaded from https://github.com/MouseLand/stringer-pachitariu-et-al-2018b. Let ${\mathit{\theta}}^{\mu}\in {\mathbb{R}}^{D}$ be a $D$-dimensional flattened vector containing the pixel values of the $\mu $th image and let ${\mathbf{r}}^{\mu}\in {\mathbb{R}}^{N}$ represent the neural response to the $\mu $th image. The goal is to learn a collection of weights $\mathbf{W}\in {\mathbb{R}}^{D\times N}$ which map neural responses ${\mathbf{r}}^{\mu}$ to images ${\mathit{\theta}}^{\mu}$
The generalization error ${E}_{g}$ again measures the average error on all points, averaged over all possible datasets $\mathcal{D}=\{({\mathit{\theta}}^{\mu},{\mathbf{r}}^{\mu}){\}}_{\mu =1}^{P}$ of size $P$. If the optimal weights for dataset $\mathcal{D}$ are $\mathbf{W}(\mathcal{D})$, then the generalization error is
After identifying eigenfunctions ${\psi}_{k}(\mathit{\theta})$, we expand the images in this basis, $\mathit{\theta}=\sum _{k}{\mathbf{v}}_{k}{\psi}_{k}(\mathit{\theta})$, where ${\mathbf{v}}_{k}\in {\mathbb{R}}^{D}$. The generalization error is therefore ${E}_{g}={\sum}_{k}|{\mathbf{v}}_{k}{|}^{2}{E}_{k}(P)$ and the cumulative power is $C(k)=\frac{\sum _{\mathrm{\ell}<k}|{\mathbf{v}}_{\mathrm{\ell}}{|}^{2}}{\sum _{\mathrm{\ell}=1}^{\mathrm{\infty}}|{\mathbf{v}}_{\mathrm{\ell}}{|}^{2}}$. We perform this reconstruction task on many filtered versions of the natural scenes. To construct a filter, we first compute the Fourier transform of the image. Let $\mathbf{M}(\mathit{\theta})\in {\mathbb{R}}^{\sqrt{D}\times \sqrt{D}}$ represent the nonflattened image and let $\widehat{\mathbf{M}}(\mathit{\theta})\in {\mathbb{R}}^{\sqrt{D}\times \sqrt{D}}$ represent the Fourier transform of the image, computed explicitly as
To develop the bandpass filter, we calculate the frequency magnitude $|\mathbf{k}|=\sqrt{{k}^{2}+{\left({k}^{\prime}\right)}^{2}}$ for each of the indices in the matrix. For a bandpass filter with parameters ${s}_{max},r$, we simply zero out the entries in $\widehat{\mathbf{M}}$ which correspond to frequencies outside the appropriate band: for any $k,{k}^{\prime}$ with $|\mathbf{k}|\notin [\sqrt{{s}_{max}^{2}-{r}^{2}},{s}_{max}]$, we set ${\widehat{M}}_{k,{k}^{\prime}}\to 0$. We then perform the inverse Fourier transform on $\widehat{\mathbf{M}}$ to obtain a filtered version of the original image.
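A minimal implementation of this filtering step, assuming square images and the band $[\sqrt{{s}_{max}^{2}-{r}^{2}},{s}_{max}]$ described above (image size and band edges are illustrative):

```python
import numpy as np

def bandpass(image, s_max, r):
    """Keep only Fourier components with radial frequency inside
    [sqrt(s_max^2 - r^2), s_max]; zero everything else."""
    F = np.fft.fft2(image)
    k = np.fft.fftfreq(image.shape[0]) * image.shape[0]   # integer frequencies
    kmag = np.sqrt(k[:, None] ** 2 + k[None, :] ** 2)
    band = (kmag >= np.sqrt(max(s_max ** 2 - r ** 2, 0.0))) & (kmag <= s_max)
    # the mask depends only on |k|, so F * band stays conjugate-symmetric
    return np.real(np.fft.ifft2(F * band))

rng = np.random.default_rng(3)
img = rng.standard_normal((32, 32))
low = bandpass(img, s_max=4.0, r=4.0)     # low-pass example: band [0, 4]
```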
A simple feedforward model of V1
Linear neurons
We consider a simplified but instructive model of the V1 population code as a linear-nonlinear map from photoreceptor responses through Gabor filters and then a nonlinearity (Adelson and Bergen, 1985; Olshausen and Field, 1997; Rumyantsev et al., 2020). Let $\mathbf{x}\in {\mathbb{R}}^{2}$ represent the two-dimensional retinotopic position of a photoreceptor. The firing rate of the photoreceptor at position $\mathbf{x}$ in response to a static grating stimulus oriented at angle $\theta $ is
We model each V1 neuron’s receptive field as a Gabor filter of the receptor responses $h(\mathbf{x},\theta )$. The $i$th V1 neuron has preferred wavevector ${\mathbf{k}}_{i}$, generating the following set of weights between photoreceptors and the $i$th V1 neuron
The V1 population code is obtained by filtering the photoreceptor responses. Approximating the resulting sum over all retinal photoreceptors with an integral, we find that the response of neuron $i$ to a grating stimulus with wavevector $\mathbf{k}$ is
The response of neuron $i$ is computed through nonlinear rectification of this input current ${r}_{i}(\theta )=g(\mathbf{w}({\theta}_{i})\cdot h(\theta ))$. For a linear neuron $g(z)=z$, the kernel has the following form
where the kernel is normalized to have maximum value of 1. Note that this normalization of the kernel is completely legitimate since it merely rescales each eigenvalue by a constant and does not change the learning curves.
Since the kernel depends only on the difference between angles, $\theta -{\theta}^{\prime}$, it is said to possess translation invariance. Such translation invariant kernels admit a Mercer decomposition in terms of Fourier modes, $K(\theta )={\sum}_{n}{\lambda}_{n}\mathrm{cos}(n\theta )$, since the Fourier modes diagonalize shift invariant integral operators on ${S}^{1}$. For the linear neuron, the kernel eigenvalues scale like ${\lambda}_{n}\sim \frac{{\beta}^{n}}{{2}^{n}n!}$, indicating infinite differentiability of the tuning curves. Since ${\lambda}_{n}$ decays rapidly with $n$, we find that this Gabor code has an inductive bias that favors low frequency functions of the orientation $\theta $.
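This decay can be checked numerically. As an assumed concrete form consistent with the stated ${\beta}^{n}/({2}^{n}n!)$ scaling, take an exponential-of-cosine kernel $K(\theta )={e}^{\beta \mathrm{cos}\theta}$, whose Fourier coefficients are the modified Bessel functions ${I}_{n}(\beta )\approx {(\beta /2)}^{n}/n!$ (this explicit kernel choice is an illustration, not the exact model kernel):

```python
import numpy as np

beta, n_grid = 2.0, 4096
theta = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
K = np.exp(beta * np.cos(theta))

# Fourier coefficients lambda_n = <K(theta) cos(n theta)>_theta; the
# trapezoid rule is spectrally accurate for smooth periodic integrands
lam = np.array([np.mean(K * np.cos(n * theta)) for n in range(15)])

# for I_n(beta), successive ratios approach beta / (2 (n + 1)) at large n,
# matching the stated factorial decay of the eigenvalues
ratios = lam[1:] / lam[:-1]
```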
Nonlinear simple cells
Introducing nonlinear functions $g(z)$ that map input currents $z$ to the V1 population into firing rates, we can obtain a nonlinear kernel ${K}_{g}(\theta )$ which has the following definition
In this setting, it is convenient to restrict ${\mathbf{k}}_{i},\mathbf{k},{\mathbf{k}}^{\prime}\in {S}^{1}$ and assume that the preferred wavevectors ${\mathbf{k}}_{i}$ are uniformly distributed over the circle. In this case, it suffices to identify a decomposition of the composed function $g({\mathbf{w}}_{i}\cdot \mathbf{h}(\theta ))$ in the basis of Chebyshev polynomials ${T}_{n}(z)$ which satisfy ${T}_{n}(\mathrm{cos}(\theta ))=\mathrm{cos}(n\theta )$
which can be computed efficiently with an appropriate quadrature scheme. Once the coefficients ${a}_{n}$ are determined, we can compute the kernel by letting ${\theta}_{i}$ be the angle between $\mathbf{k}$ and ${\mathbf{k}}_{i}$ and letting $\theta $ be the angle between $\mathbf{k}$ and ${\mathbf{k}}^{\prime}$
Thus the kernel eigenvalues are ${\lambda}_{n}=\frac{1}{2}{a}_{n}^{2}(\psi )$.
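These relations can be verified numerically under the simplifying assumption (made for illustration) that the input current to neuron $i$ is $\mathrm{cos}(\theta -{\theta}_{i})$ with preferred angles uniform on the circle: the population-averaged kernel then matches the spectral formula with ${\lambda}_{n}={a}_{n}^{2}/2$.

```python
import numpy as np

g = lambda z: np.maximum(0.0, z)          # ReLU simple-cell nonlinearity
n_grid = 2048
x = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)

# Chebyshev/Fourier coefficients of g(cos x) = a0 + sum_n a_n cos(n x)
a0 = np.mean(g(np.cos(x)))
a = np.array([2 * np.mean(g(np.cos(x)) * np.cos(n * x)) for n in range(1, 20)])

# kernel two ways: direct average over preferred angles vs. spectral formula
theta = 1.0                                # an arbitrary angle difference
K_direct = np.mean(g(np.cos(x)) * g(np.cos(x - theta)))
K_spectral = a0 ** 2 + np.sum(a ** 2 / 2 * np.cos(np.arange(1, 20) * theta))
```

The small residual difference comes only from truncating the mode sum at $n=19$ and from quadrature error.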
Asymptotic scaling of spectra
Activation functions that encourage sparsity have slower eigenvalue decays. If the nonlinear activation function has the form ${g}_{q,a}(z)=\mathrm{max}{\{0,z-a\}}^{q}$, then the spectrum decays like ${\lambda}_{n}\sim {n}^{-2q-2}$. A simple argument justifies this scaling: if the function $g({e}^{-{\sigma}^{2}}\mathrm{cosh}({\sigma}^{2}z))$ is only $q-1$ times differentiable, then ${a}_{n}{n}^{q}\sim {n}^{-1}$ since ${\sum}_{n}{a}_{n}{n}^{q}$ must diverge. Therefore ${\lambda}_{n}={a}_{n}^{2}\sim {n}^{-2q-2}$. Note that this scaling is independent of the threshold. Examples of these scalings can be found in Figure 5—figure supplements 1 and 2.
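For $q=1$ (plain ReLU with zero threshold), the implied coefficient scaling ${a}_{n}\sim {n}^{-q-1}={n}^{-2}$ can be checked directly; the sketch below uses an illustrative grid size and checks only the even modes, since the odd coefficients beyond $n=1$ vanish for this symmetric case.

```python
import numpy as np

n_grid = 8192
x = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
gx = np.maximum(0.0, np.cos(x))           # q = 1, a = 0

ns = np.arange(10, 40, 2)                 # even modes only
an = np.array([2 * np.mean(gx * np.cos(n * x)) for n in ns])

# if a_n ~ n^{-2}, then |a_n| n^2 should flatten to a constant,
# so that lambda_n = a_n^2 ~ n^{-4} = n^{-2q-2}
scaled = np.abs(an) * ns.astype(float) ** 2
```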
Phase variation, complex cells and invariance
We can consider a slightly more complicated model where Gabors and stimuli have phase shifts
The simple cells are generated by nonlinearity
The input currents into the simple V1 cells can be computed exactly
When $|\mathbf{k}|=|{\mathbf{k}}_{i}|=1$, the simple cell tuning curves ${r}_{i}=g({\mathbf{w}}_{i}\cdot \mathbf{h})$ only depend on $\mathrm{cos}(\theta -{\theta}_{i})$ and $\varphi $, allowing a Fourier decomposition
The simple cell kernel ${K}_{s}$ therefore decomposes into Fourier modes over $\theta $
where ${b}_{n}(\varphi ,{\varphi}^{\prime})={\langle {a}_{n}(\varphi ,{\varphi}_{i}){a}_{n}({\varphi}^{\prime},{\varphi}_{i})\rangle}_{{\varphi}_{i}}$. It therefore suffices to solve the infinite sequence of integral eigenvalue problems over $\varphi $
With this choice it is straightforward to verify that the kernel eigenfunctions are ${v}_{n,k}(\theta ,\varphi )={e}^{in\theta}{v}_{n,k}(\varphi )$ with corresponding eigenvalue ${\lambda}_{n,k}$. Since ${b}_{n}$ is not translation invariant in $\varphi -{\varphi}^{\prime}$, the eigenfunctions ${v}_{n,k}$ are not necessarily Fourier modes. These eigenvalue problems for ${b}_{n}$ must be solved numerically for an arbitrary nonlinearity $g$. The top eigenfunctions of the simple cell kernel depend heavily on the phase $\varphi $ of the grating stimuli. Thus, a pure orientation discrimination task, which is independent of phase, requires a large number of samples to learn with the simple cell population.
Complex cell populations are phase invariant
V1 also contains complex cells which possess invariance to the phase $\varphi $ of the stimulus.
Again using Gabor filters we model the complex cell responses with a quadratic nonlinearity and sum over two squared filters which are phase shifted by $\pi /2$
which we see is independent of the phase $\varphi $ of the grating stimulus. Integrating over the set of possible Gabor filters $({\mathbf{k}}_{i},{\varphi}_{i})$ with $|{\mathbf{k}}_{i}|=1$ again gives the following kernel for the complex cells
Remarkably, this kernel is independent of the phase $\varphi $ of the grating stimulus. Thus, complex cell populations possess a good inductive bias for vision tasks where the target function depends only on the orientation of the stimulus rather than its phase. In reality, V1 contains a mixture of simple and complex cells. Let $s\in [0,1]$ represent the relative proportion of neurons which are simple cells and $(1-s)$ the relative proportion of complex cells. The kernel for the mixed V1 population is given by a simple convex combination of the simple and complex cell kernels
where $n$ denotes neuron type (simple vs complex, tuning etc) and ${p}_{V1}(n),{p}_{s}(n),{p}_{c}(n)$ are probability distributions over the V1 neuron identities, the simple cell identities and the complex cell identities respectively. Increasing $s$ increases the phase dependence of the code by giving greater weight to the simple cell population. Decreasing $s$ gives weight to the complex cell population, encouraging phase invariance of readouts.
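A toy quadrature-pair version of the energy model makes the phase invariance explicit; the filter geometry below (a separable orientation-tuning factor times a phase factor) is a deliberate simplification of the Gabor construction above, chosen only to illustrate the cancellation.

```python
import numpy as np

# assumed toy simple-cell current: z = f(theta - theta_i) * cos(phi - phi_i)
f = lambda d: np.exp(np.cos(d) - 1.0)      # arbitrary orientation tuning

def complex_cell(theta, phi, theta_i, phi_i):
    # sum of squares of two filters phase shifted by pi/2
    z1 = f(theta - theta_i) * np.cos(phi - phi_i)
    z2 = f(theta - theta_i) * np.cos(phi - phi_i - np.pi / 2)
    return z1 ** 2 + z2 ** 2               # = f(theta - theta_i)^2, phase-free

rng = np.random.default_rng(4)
theta_i, phi_i = 0.3, 1.1
phis = rng.uniform(0, 2 * np.pi, 100)      # many stimulus phases
responses = complex_cell(0.8, phis, theta_i, phi_i)
```

Because $\mathrm{cos}^{2}+\mathrm{sin}^{2}=1$, the response is constant across stimulus phase, so any kernel built from such units depends only on orientation.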
Visualization of feedforward Gabor V1 model and induced kernels
Examples of the induced kernels for the Gabor-bank V1 model are provided in Figure 5. We show how the choice of rectifying nonlinearity $g(z)$ and sparsifying threshold $a$ influences the kernels and their spectra. Learning curves for simple orientation tasks are also provided.
Gabor model spectral bias and fit to V1 data
Motivated by findings in the primary visual cortex (Hansel and van Vreeswijk, 2002; Miller and Troyer, 2002; Priebe et al., 2004; Priebe and Ferster, 2008), we studied the spectral bias induced by rectified power-law nonlinearities of the form $g(z)=\mathrm{max}{\{0,z-a\}}^{q}$. From theory, such a power-law activation function arises in a spiking neuron when firing is driven by input fluctuations (Hansel and van Vreeswijk, 2002; Miller and Troyer, 2002). Further, this activation is observed in intracellular recordings over the dynamic range of neurons in primary visual cortex (Priebe and Ferster, 2008). For example, in cats, the power, $q$, ranges from 2.7 to 3.9 (Priebe et al., 2004). We fit parameters of our model to the mouse V1 kernel and compared to other parameter sets in Figure 5—figure supplement 1. Our best fit value of $q=1.7$ is lower than but on par with the estimates from the cat and reproduces the observed kernel. Computation of the kernel and its eigenvalues (Appendix Nonlinear simple cells) indicates a low frequency bias: the eigenvalues for low frequency modes are higher than those for high frequency modes, indicating a strong inductive bias to learn functions of low frequency in the orientation. Decreasing sparsity (lower $a$) leads to a faster decay in the spectrum (but similar asymptotic scaling at the tail, see Figure 5—figure supplements 1 and 2) and a stronger bias towards lower frequency functions (Figure 5). The effect of the power of the nonlinearity $q$ is more nuanced: increasing the power may increase spectra at lower frequencies, but may also lead to a faster decay at the tail (Figure 5—figure supplements 1 and 2). In general, an exponent $q$ implies a power-law asymptotic spectral decay ${\lambda}_{k}\sim {k}^{-2q-2}$ as $k\to \mathrm{\infty}$ (Appendix Nonlinear simple cells). The behavior at low frequencies may have significant impact for learning with few samples.
Overall, our findings show that the spectral bias of a population code can be determined in nontrivial ways by its biophysical parameters, including neural thresholds and nonlinearities.
Energy model with partially phaseselective cells
The model of the V1 population as a mixture of purely simple and purely complex cells is an idealization which fails to capture the variability in phase selectivity of cells observed in experiment. In this section, we describe a simple model which can interpolate between an invariant code and a code which has high alignment with phase-dependent eigenfunctions. Further, a single scalar parameter $\alpha $ will control how strongly the population is biased towards invariance. We define ${r}_{i}(\theta ,\varphi )=g\left({z}_{i}(\theta ,\varphi )\right)$ for a nonlinear function $g$ and a scalar ${z}_{i}$ which is constructed as follows
This linear combination is inspired by the construction of simple cells in Dayan & Abbott, Chapter 2 (Dayan and Abbott, 2001). If all ${\beta}_{j}$ are equal, then this tuning curve is invariant to the phase $\varphi $. To generate variability in selectivity to phase $\varphi $, we draw $\mathit{\beta}$ from a Dirichlet distribution on the simplex with concentration parameter $\alpha \mathbf{1}$, so that $p(\mathit{\beta})\propto {\prod}_{j=1}^{4}{\beta}_{j}^{\alpha -1}$ with ${\sum}_{j=1}^{4}{\beta}_{j}=1$. In the $\alpha \to \mathrm{\infty}$ limit, the probability density concentrates on $\frac{1}{4}\mathbf{1}$, leading to a code comprised entirely of complex cells which are invariant to phase $\varphi $. In the $\alpha \to 0$ limit, the density concentrates around the corners of the simplex such as $(1,0,0,0)$ and $(0,1,0,0)$, where only one preferred phase is present per neuron. For intermediate values, neurons are partially selective to phase. As before, the selectivity or invariance to phase is manifested in the kernel decomposition and leads to similar learning curves for the three tasks of the main paper (Orientation, Phase, Hybrid). We provide an illustration of tuning curves, F1/F0 distributions, eigenfunctions, and learning curves in Figure 5—figure supplement 3.
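The effect of the concentration parameter can be sketched directly by sampling mixing weights and measuring their distance from the uniform point $\frac{1}{4}\mathbf{1}$ (which corresponds to a fully phase-invariant, complex-cell-like neuron); the sample sizes and $\alpha $ values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_dist_from_uniform(alpha, n=2000):
    # draw beta ~ Dirichlet(alpha * 1) on the 4-simplex and measure how far
    # the weights sit from the phase-invariant point (1/4, 1/4, 1/4, 1/4)
    betas = rng.dirichlet(alpha * np.ones(4), size=n)
    return np.mean(np.linalg.norm(betas - 0.25, axis=1))

d_small = mean_dist_from_uniform(0.1)    # mass near simplex corners: simple-like
d_large = mean_dist_from_uniform(100.0)  # mass near (1/4) 1: complex-like
```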
Time dependent neural codes
RNN model and decomposition
In this setting, the population code $\mathbf{r}(\{\mathit{\theta}(t)\},t)$ is a function of an input stimulus sequence $\mathit{\theta}(t)$ and time $t$. In general the neural code $\mathbf{r}$ at time $t$ can depend on the entire history of the stimulus input $\mathit{\theta}({t}^{\prime})$ for ${t}^{\prime}\le t$, as is the case for recurrent neural networks. We denote dependence of a function $f$ on $\mathit{\theta}(t)$ in this causal manner with the notation $f(\{\mathit{\theta}\},t)$. In a learning task, a set of readout weights $\mathbf{w}$ is chosen so that a downstream linear readout $f(\{\mathit{\theta}\},t)=\mathbf{w}\cdot \mathbf{r}(\{\mathit{\theta}\},t)$ approximates a target sequence $y(\{\mathit{\theta}\},t)$ which maps input stimulus sequences to output scalar sequences. The quantity of interest is the generalization error ${E}_{g}$, which in this case is an average over both input sequences and time, ${E}_{g}={\langle {(y(\{\mathit{\theta}\},t)-f(\{\mathit{\theta}\},t))}^{2}\rangle}_{\mathit{\theta}(t),t}$. The average is computed over a distribution of input stimulus sequences $p(\mathit{\theta}(t))$. To train the readout $\mathbf{w}$, the network is given a sample of $P$ stimulus sequences ${\mathit{\theta}}^{\mu}(t),\mu =1,\mathrm{\dots},P$. For the $\mu $th training input sequence, the target system $y$ is evaluated at a set of discrete time points ${\mathcal{T}}_{\mu}=\{{t}_{1},{t}_{2},\mathrm{\dots},{t}_{|{\mathcal{T}}_{\mu}|}\}$, giving a collection of target values $\{{y}_{t}^{\mu}{\}}_{t\in {\mathcal{T}}_{\mu}}$ and a total dataset of size $\mathcal{P}=\sum _{\mu =1}^{P}|{\mathcal{T}}_{\mu}|$. The average case generalization computes a further average of the generalization error ${E}_{g}$ over randomly sampled datasets of size $\mathcal{P}$.
Learning is again achieved through iterated weight updates of delta-rule form, but now with contributions from both sequence index and time: $\mathrm{\Delta}\mathbf{w}=\eta \sum _{\mu}\sum _{t\in {\mathcal{T}}_{\mu}}{\mathbf{r}}_{t}^{\mu}({y}_{t}^{\mu}-{f}_{t}^{\mu})$. As before, optimization of the readout weights is equivalent to kernel regression with a kernel that computes inner products of neural population vectors at different times $t,{t}^{\prime}$ for different input sequences $\{\mathit{\theta}\},\{{\mathit{\theta}}^{\mathrm{\prime}}\}$. This kernel depends on details of the time-varying population code, including its intrinsic recurrent dynamics as well as its encoding of the time-varying input stimuli. The optimization problem and delta rule described above converge to the kernel regression solution for the kernel Gram matrix ${K}_{t,{t}^{\prime}}^{\mu ,{\mu}^{\prime}}=\frac{1}{N}{\mathbf{r}}_{t}^{\mu}\cdot {\mathbf{r}}_{{t}^{\prime}}^{{\mu}^{\prime}}$ (Dong et al., 2020; Yang, 2019; Yang, 2020). The learned function has the form $f(\{\mathit{\theta}\},t)=\sum _{\mu ,{t}^{\mathrm{\prime}}\in {\mathcal{T}}_{\mu}}{\alpha}_{{t}^{\mathrm{\prime}}}^{\mu}K(\{\mathit{\theta}\},\{\mathit{\theta}{\}}^{\mu},t,{t}^{\mathrm{\prime}})$, where $\mathit{\alpha}={\mathbf{K}}^{+}\mathbf{y}$ for the kernel Gram matrix $\mathbf{K}\in {\mathbb{R}}^{\mathcal{P}\times \mathcal{P}}$ computed over the entire set of training sequences, and $\mathbf{y}\in {\mathbb{R}}^{\mathcal{P}}$ is the vector containing the desired target outputs for each sequence. Assuming a probability distribution over sequences $\mathit{\theta}(t)$, the kernel can be diagonalized with orthonormal eigenfunctions ${\psi}_{k}(\{\mathit{\theta}\},t)$. Our theory carries over from the static case: kernels whose top eigenfunctions have high alignment with the target dynamical system $y(\{\mathit{\theta}\},t)$ will achieve the best average case generalization performance.
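The equivalence between training the readout on the raw time-varying code and kernel regression with the temporal Gram matrix can be checked on a toy recurrent network; the architecture, sizes, and ridge parameter below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(6)
N, P, T = 100, 8, 5                      # neurons, sequences, time steps
lam = 1e-3                               # small ridge for numerical stability

# toy recurrent network driven by scalar input sequences, a stand-in for
# the time-dependent code r({theta}, t)
W = 0.5 * rng.standard_normal((N, N)) / np.sqrt(N)
u = rng.standard_normal(N)
inputs = rng.standard_normal((P, T))

states = np.zeros((P, T, N))
for mu in range(P):
    h = np.zeros(N)
    for t in range(T):
        h = np.tanh(W @ h + u * inputs[mu, t])
        states[mu, t] = h

# flatten (sequence, time) pairs into one design matrix of size PT x N
R = states.reshape(P * T, N)
y = inputs.reshape(P * T)                # target: read out the current input

# primal ridge readout on the raw code vs. dual (kernel) regression with
# the Gram matrix K[(mu,t),(mu',t')] = r_t^mu . r_{t'}^{mu'}
w = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ y)
K = R @ R.T
alpha = np.linalg.solve(K + lam * np.eye(P * T), y)

f_primal = R @ w
f_dual = K @ alpha                       # identical predictions
```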
Alternative neural codes with same kernel
Orthogonal transformations are sufficient for linear kernelpreserving transformations
We will now show that for any linear transformation $\stackrel{~}{\mathbf{r}}=\mathbf{Ar}$ which preserves the inner product kernel $K(\mathit{\theta},{\mathit{\theta}}^{\prime})$, there exists an orthogonal matrix $\mathbf{Q}$ such that $\stackrel{~}{\mathbf{r}}=\mathbf{Qr}$.
Proof.
Let $\stackrel{~}{\mathbf{r}}(\mathit{\theta})=\mathbf{Ar}(\mathit{\theta})$ for all stimuli $\mathit{\theta}$. To preserve the kernel, we must have
Taking projections against each of the orthonormal eigenfunctions ${\psi}_{k}(\mathit{\theta})$ (see Appendix Singular value decomposition of continuous population responses), we define vectors ${\mathbf{u}}_{k}$ as $\sqrt{{\lambda}_{k}}{\mathbf{u}}_{k}={\langle \mathbf{r}(\mathit{\theta}){\psi}_{k}(\mathit{\theta})\rangle}_{\mathit{\theta}}$, allowing us to express the SVD of the population code as $\mathbf{r}(\mathit{\theta})={\sum}_{k}\sqrt{{\lambda}_{k}}{\mathbf{u}}_{k}{\psi}_{k}(\mathit{\theta})$. These vectors $\{{\mathbf{u}}_{k}\}$ are orthonormal, ${\mathbf{u}}_{k}\cdot {\mathbf{u}}_{\mathrm{\ell}}={\delta}_{k\mathrm{\ell}}$, since, by the definition of the kernel eigenfunctions ${\psi}_{k}$,
Since $\mathbf{r}(\mathit{\theta})$ and $\stackrel{~}{\mathbf{r}}(\mathit{\theta})$ have the same inner product kernel, they must possess the same kernel eigenfunctions ${\psi}_{k}$ and kernel eigenvalues ${\lambda}_{k}$, which are identified through the eigenvalue problem
We therefore have the following two singular value decompositions for $\mathbf{r}$ and $\stackrel{~}{\mathbf{r}}$
where ${\{{\mathbf{u}}_{k}\}}_{k=1}^{N}$ and ${\{{\stackrel{~}{\mathbf{u}}}_{k}\}}_{k=1}^{N}$ are both complete sets of orthonormal vectors (the sums above run over possible zero eigenvalues). Taking the equation $\stackrel{~}{\mathbf{r}}(\mathit{\theta})=\mathbf{Ar}(\mathit{\theta})$, we multiply both sides of the equation by ${\psi}_{k}(\mathit{\theta})$ and average over $\mathit{\theta}$ giving
For an eigenmode $k$ with positive eigenvalue ${\lambda}_{k}>0$, this implies ${\stackrel{~}{\mathbf{u}}}_{k}={\mathbf{Au}}_{k}$, while there is no corresponding constraint for the null modes with ${\lambda}_{k}=0$. However, the action of $\mathbf{A}$ on the nullspace of the code has no influence on $\stackrel{~}{\mathbf{r}}$ so there is no loss in generality to restrict consideration to transformations $\mathbf{A}$ which satisfy ${\stackrel{~}{\mathbf{u}}}_{k}={\mathbf{Au}}_{k}$ for all $k\in [N]$ (rather than just the ${\lambda}_{k}>0$ modes). This choice gives $\mathbf{A}={\sum}_{k=1}^{N}{\stackrel{~}{\mathbf{u}}}_{k}{\mathbf{u}}_{k}^{\top}$. Thus, the space of codes $\stackrel{~}{\mathbf{r}}(\mathit{\theta})$ with equivalent kernels to $\mathbf{r}(\mathit{\theta})\cdot \mathbf{r}({\mathit{\theta}}^{\prime})$ generated through linear transformations is equivalent to all possible orthogonal transformations of the original code $\{\mathbf{Qr}(\mathit{\theta}):{\mathbf{QQ}}^{\top}={\mathbf{Q}}^{\top}\mathbf{Q}=\mathbf{I}\}$. ∎
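A quick numerical illustration of this conclusion with a random code and a random orthogonal transformation (sizes illustrative): the rotated code has exactly the same Gram matrix, hence the same kernel and inductive bias.

```python
import numpy as np

rng = np.random.default_rng(7)
N, P = 30, 12
R = rng.standard_normal((N, P))          # original code; columns r(theta^mu)

# random orthogonal matrix via QR decomposition of a Gaussian matrix
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
R_tilde = Q @ R                          # transformed code

K = R.T @ R
K_tilde = R_tilde.T @ R_tilde            # identical Gram (kernel) matrix
```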
Effect of noise on RROS symmetry
The random rotation and optimal shift (RROS) operations introduced in the main text preserve generalization performance under the assumption of a deterministic neural code. However, for noisy codes, the presence of RROS symmetry is dependent on the noise distribution. Below we discuss two commonly analyzed distributions: the Gaussian distribution and the Poisson distribution. For Gaussian noise, the RROS operations preserve the generalization performance and the local Fisher information. However, if noise is constrained to be Poisson then RROS operations do not preserve generalization or Fisher information.
First, we will analyze stimulus-dependent Gaussian noise, for which generalization performance is preserved under rotations and baseline shifts. Note that if the code at $\mathit{\theta}$ obeys $\mathbf{r}(\mathit{\theta})\sim \mathcal{N}(\overline{\mathbf{r}}(\mathit{\theta}),{\mathbf{\Sigma}}_{n}(\mathit{\theta}))$, then the rotated and shifted code follows $\mathbf{Q}\mathbf{r}(\mathit{\theta})+\mathit{\delta}\sim \mathcal{N}(\mathbf{Q}\overline{\mathbf{r}}(\mathit{\theta})+\mathit{\delta},\mathbf{Q}{\mathbf{\Sigma}}_{n}(\mathit{\theta}){\mathbf{Q}}^{\mathrm{\top}})$. This rotated and shifted code $\mathbf{Qr}(\mathit{\theta})+\mathit{\delta}$, when centered, will exhibit identical generalization performance to the original code. This is true for learning from either a trial-averaged or a non-trial-averaged code. In the case of Gaussian noise on a centered code, the dataset transforms under a rotation as $\mathcal{D}=\{{\mathbf{r}}_{\mu},{y}_{\mu}\}\to {\mathcal{D}}^{\mathrm{\prime}}=\{\mathbf{Q}{\mathbf{r}}_{\mu},{y}_{\mu}\}$. The optimal weights for a linear model similarly transform as $\mathbf{w}(\mathcal{D})\to \mathbf{Q}\mathbf{w}(\mathcal{D})$. Under these transformations the predictor on a test point $\mathit{\theta}$ is unchanged since
Further, the local Fisher information matrix $\mathbf{I}(\mathit{\theta})=\frac{\partial \overline{\mathbf{r}}{(\mathit{\theta})}^{\top}}{\partial \mathit{\theta}}{\mathbf{\Sigma}}_{n}^{-1}(\mathit{\theta})\frac{\partial \overline{\mathbf{r}}(\mathit{\theta})}{\partial {\mathit{\theta}}^{\top}}+\frac{1}{2}\text{Tr}\left[{\mathbf{\Sigma}}_{n}^{-1}(\mathit{\theta})\frac{\partial {\mathbf{\Sigma}}_{n}(\mathit{\theta})}{\partial \mathit{\theta}}{\mathbf{\Sigma}}_{n}^{-1}(\mathit{\theta})\frac{\partial {\mathbf{\Sigma}}_{n}(\mathit{\theta})}{\partial {\mathit{\theta}}^{\top}}\right]$ is unchanged under the transformation $\mathbf{r}\to \mathbf{Qr}+\mathit{\delta}$. Under this transformation, the covariance simply transforms as ${\mathbf{\Sigma}}_{n}\to \mathbf{Q}{\mathbf{\Sigma}}_{n}(\mathit{\theta}){\mathbf{Q}}^{\top}$ and the $\mathbf{Q}$ matrices cancel under the trace. This shows that, for some noise models, our assumption that rotations and baseline shifts preserve generalization performance is valid.
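For the Gaussian case, the invariance of the learned readout under a random rotation can be verified directly with ridge regression on a centered random code; the sizes and ridge parameter below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
N, P = 30, 12
R = rng.standard_normal((N, P))          # centered code on training stimuli
r_test = rng.standard_normal(N)          # code on a test stimulus
y = rng.standard_normal(P)
lam = 0.1

Q, _ = np.linalg.qr(rng.standard_normal((N, N)))   # random rotation

def ridge_prediction(R, r_test, y, lam):
    # w(D) = R (R^T R + lam I)^{-1} y ; prediction is f = w . r(test)
    w = R @ np.linalg.solve(R.T @ R + lam * np.eye(R.shape[1]), y)
    return w @ r_test

f_orig = ridge_prediction(R, r_test, y, lam)
f_rot = ridge_prediction(Q @ R, Q @ r_test, y, lam)   # rotated dataset
```

Because the optimal weights transform as $\mathbf{w}\to \mathbf{Qw}$ while the test code transforms as $\mathbf{r}\to \mathbf{Qr}$, the inner product, and hence the prediction, is unchanged.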
However, for Poisson noise, where the variance is tied to the mean firing rate, the RROS operations will not preserve the noise structure or information content. The Fisher information at a scalar stimulus $\theta $ for a Poisson neuron is $I(\theta )=\frac{{\overline{r}}^{\prime}{(\theta )}^{2}}{\overline{r}(\theta )}$. A baseline shift $r\to r+\delta $ of the tuning curve will not change the numerator, since the derivative of the tuning curve is invariant to this transformation, but it will increase the denominator and therefore decrease the Fisher information.
Necessary conditions for optimally sparse codes
Next we argue why optimally sparse codes should be lifetime and population selective. We consider the following optimization problem: find nonnegative neural responses $\mathbf{S}\in {\mathbb{R}}^{N\times P}$ and a baseline vector $\mathit{\delta}\in {\mathbb{R}}^{N}$ so that the baseline-subtracted responses $\mathbf{R}=\mathbf{S}-\mathit{\delta}{\mathbf{1}}^{\top}$ realize a desired inner product kernel $\mathbf{K}\in {\mathbb{R}}^{P\times P}$ and have minimal total firing. This is equivalent to finding the most metabolically efficient code among the space of codes with equivalent inductive bias. Mathematically, we formulate this problem as
To enforce the constraints for the definition of the kernel and the nonnegativity of the responses, we introduce the following Lagrangian

$$\mathcal{L}=\sum_{i,\mu}{S}_{i\mu}-\frac{1}{2}\text{Tr}\left[\mathbf{A}\left({(\mathbf{S}-\mathit{\delta}{\mathbf{1}}^{\top})}^{\top}(\mathbf{S}-\mathit{\delta}{\mathbf{1}}^{\top})-\mathbf{K}\right)\right]-\text{Tr}\left[{\mathbf{V}}^{\top}\mathbf{S}\right]$$
where $\mathbf{1}$ is the vector containing all ones, the Lagrange multiplier matrix $\mathbf{A}$ enforces the definition of the kernel, and the KKT multiplier matrix $\mathbf{V}$ enforces the nonnegativity constraints for each element of $\mathbf{S}$. The KKT conditions require that any local optimum of the objective satisfy the following equations (Kuhn and Tucker, 2014)

$$\frac{\partial \mathcal{L}}{\partial \mathbf{S}}=\mathbf{0},\quad \frac{\partial \mathcal{L}}{\partial \mathit{\delta}}=\mathbf{0},\quad \mathbf{S}\ge \mathbf{0},\quad \mathbf{V}\ge \mathbf{0},\quad \mathbf{S}\odot \mathbf{V}=\mathbf{0}$$
where $\odot$ denotes the elementwise Hadamard product. Using the complementary slackness condition $\mathbf{S}\odot \mathbf{V}=\mathbf{0}$ and the first optimality condition $\frac{\partial \mathcal{L}}{\partial \mathbf{S}}=\mathbf{0}$, we have

$$\mathbf{S}\odot \left(\mathbf{1}{\mathbf{1}}^{\top}-(\mathbf{S}-\mathit{\delta}{\mathbf{1}}^{\top})\mathbf{A}\right)=\mathbf{0}$$
Therefore, for any neuron-stimulus pair $(i,\mu )$, either ${S}_{i\mu}=0$ or ${\sum}_{\nu \in [P]}({S}_{i\nu}-{\delta}_{i}){A}_{\nu \mu}=1$. Further, under the condition that $\mathbf{K}$ is full rank, we conclude that for any stimulus $\mu$, ${\sum}_{\nu \in [P]}{A}_{\mu \nu}=0$ from the equation $\frac{\partial \mathcal{L}}{\partial \mathit{\delta}}=\mathbf{0}$. Let ${\mathcal{I}}_{i}=\{\mu \in [P]:{S}_{i\mu}>0\}$ represent the set of stimuli for which neuron $i$ fires. We will call this the receptive field set for neuron $i$. Let ${\mathbf{B}}_{(i)}\in {\mathbb{R}}^{P\times P}$ have entries

$${B}_{(i),\mu \nu}=\begin{cases}{[{\mathbf{A}}_{(i)}^{+}]}_{\mu \nu} & \mu ,\nu \in {\mathcal{I}}_{i}\\ 0 & \text{otherwise}\end{cases}$$
where the matrix ${\mathbf{A}}_{(i)}$ is the ${\mathcal{I}}_{i}\times {\mathcal{I}}_{i}$ minor of $\mathbf{A}$ obtained by taking all rows and columns with indices $\mu ,\nu \in {\mathcal{I}}_{i}$, and ${\mathbf{A}}^{+}$ denotes the pseudoinverse of $\mathbf{A}$. Then the $i$-th neuron's tuning curve is a function of the index set ${\mathcal{I}}_{i}$, the baseline ${\delta}_{i}$, and the neuron-independent $P\times P$ matrix $\mathbf{A}$. The nonnegativity constraint for neuron $i$'s tuning curve implies that ${S}_{i\mu}=\sum _{\nu \in {\mathcal{I}}_{i}}{B}_{(i),\mu \nu}[{\delta}_{i}\sum _{\gamma \in [P]}{A}_{\nu \gamma}+1]>0$ for all $\mu \in {\mathcal{I}}_{i}$. To satisfy the definition of the kernel, we have the following constraint on the matrix $\mathbf{A}$, the index sets ${\mathcal{I}}_{i}$, and baselines ${\delta}_{i}$

$$\mathbf{K}=\sum _{i=1}^{N}\left(\mathbf{s}({\mathcal{I}}_{i},{\delta}_{i},\mathbf{A})-{\delta}_{i}\mathbf{1}\right){\left(\mathbf{s}({\mathcal{I}}_{i},{\delta}_{i},\mathbf{A})-{\delta}_{i}\mathbf{1}\right)}^{\top}$$

where $\mathbf{s}({\mathcal{I}}_{i},{\delta}_{i},\mathbf{A})\in {\mathbb{R}}^{P}$ is neuron $i$'s tuning curve determined by the equations above.
This equation implicitly defines the index sets ${\mathcal{I}}_{i}$, the baselines ${\delta}_{i}$, and the KKT matrix $\mathbf{A}$. We see that, in order to fit an arbitrary kernel, the receptive field sets $\{{\mathcal{I}}_{i}\}$ and baselines ${\delta}_{i}$ must be sufficiently diverse, since otherwise only a low-rank kernel matrix can be achieved by the optimally sparse code. As a concrete example, suppose that ${\mathcal{I}}_{i}=\mathcal{I}$, so that ${\mathbf{B}}_{(i)}=\mathbf{B}$, and ${\delta}_{i}=\delta$ for all $i$. For example, this could occur if each neuron fired for every possible stimulus. In this case, the kernel would be rank one: $\mathbf{K}=N(\mathbf{s}(\mathcal{I},\delta ,\mathbf{A})-\delta \mathbf{1}){(\mathbf{s}(\mathcal{I},\delta ,\mathbf{A})-\delta \mathbf{1})}^{\top}$. To achieve a higher rank code, there must be sufficient diversity in the receptive field sets ${\mathcal{I}}_{i}$. Thus, the only way for optimally sparse codes to realize high-rank kernels $\mathbf{K}$ is for neurons to have different receptive field sets ${\mathcal{I}}_{i}$. The necessary optimality conditions thus reveal a preference for sparse neural tuning curves with high lifetime sparseness: to achieve diverse index sets ${\mathcal{I}}_{i}$, any given neuron will fire only for a unique subset of the possible stimuli.
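The rank argument above can be illustrated directly: a shared receptive field set and baseline yields a rank-one kernel, whereas diverse, lifetime-sparse receptive fields produce a higher-rank kernel. A small sketch with arbitrarily chosen responses (all numerical values are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 8, 12  # neurons, stimuli

# Identical tuning: every neuron shares one tuning vector s and baseline delta
s = rng.random(P) + 0.5   # hypothetical shared tuning curve over the P stimuli
delta = 0.25
R_same = np.tile(s - delta, (N, 1))        # baseline-subtracted responses, N x P
K_same = R_same.T @ R_same                 # = N (s - delta 1)(s - delta 1)^T
assert np.linalg.matrix_rank(K_same) == 1  # only a rank-one kernel is achievable

# Diverse receptive fields: each neuron fires on its own random subset of stimuli
S_div = rng.random((N, P)) * (rng.random((N, P)) < 0.4)  # lifetime-sparse code
K_div = (S_div - delta).T @ (S_div - delta)
assert np.linalg.matrix_rank(K_div) > 1    # diversity yields a higher-rank kernel
```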
Impact of neural noise and unlearnable targets on learning
While our analysis so far has focused on deterministic population codes, our theory can be extended to neural populations which exhibit variability in responses to identical stimuli. For each stimulus $\mathit{\theta}$, we let the population response $\mathbf{r}(\mathit{\theta})$ be a random vector with mean $\overline{\mathbf{r}}(\mathit{\theta})={\langle \mathbf{r}(\mathit{\theta})\rangle}_{\mathbf{r}|\mathit{\theta}}$ and covariance ${\mathbf{\Sigma}}_{n}(\mathit{\theta})={\langle (\mathbf{r}(\mathit{\theta})-\overline{\mathbf{r}}(\mathit{\theta})){(\mathbf{r}(\mathit{\theta})-\overline{\mathbf{r}}(\mathit{\theta}))}^{\top}\rangle}_{\mathbf{r}|\mathit{\theta}}$.
The (deterministic) target function can be decomposed in terms of the mean response as $y(\mathit{\theta})={\mathbf{w}}^{*}\cdot \overline{\mathbf{r}}(\mathit{\theta})$ (the usual decomposition $y={\mathbf{w}}^{*}\cdot \mathbf{r}(\mathit{\theta})$ would give an unphysical target function which fluctuates with the variability in neural responses). For a given configuration of weights $\mathbf{w}$, the generalization error (an average over the joint distribution of $\mathbf{r},\mathit{\theta}$) is determined only by the signal ${\mathbf{\Sigma}}_{s}={\langle \overline{\mathbf{r}}(\mathit{\theta})\overline{\mathbf{r}}{(\mathit{\theta})}^{\top}\rangle}_{\mathit{\theta}}$ and noise ${\mathbf{\Sigma}}_{n}={\langle {\mathbf{\Sigma}}_{n}(\mathit{\theta})\rangle}_{\mathit{\theta}}$ correlation matrices:

$${E}_{g}(\mathbf{w})={\langle {(y(\mathit{\theta})-\mathbf{w}\cdot \mathbf{r}(\mathit{\theta}))}^{2}\rangle}_{\mathbf{r},\mathit{\theta}}={(\mathbf{w}-{\mathbf{w}}^{*})}^{\top}{\mathbf{\Sigma}}_{s}(\mathbf{w}-{\mathbf{w}}^{*})+{\mathbf{w}}^{\top}{\mathbf{\Sigma}}_{n}\mathbf{w}$$
where we utilized the fact that ${\langle \mathbf{r}(\mathit{\theta})-\overline{\mathbf{r}}(\mathit{\theta})\rangle}_{\mathbf{r}|\mathit{\theta}}=0$ to eliminate the cross-term. The two terms in the final expression can be thought of as a bias-variance decomposition over the noise in neural responses. The minimum achievable loss can be obtained by differentiating the generalization error with respect to $\mathbf{w}$, giving ${E}_{g}^{*}={\mathbf{w}}^{*\top}{\mathbf{\Sigma}}_{n}{({\mathbf{\Sigma}}_{s}+{\mathbf{\Sigma}}_{n})}^{-1}{\mathbf{\Sigma}}_{s}{\mathbf{w}}^{*}$. We note that any noise correlation matrix with noise orthogonal to the coding direction, ${\mathbf{\Sigma}}_{n}{\mathbf{w}}^{*}=0$, gives the minimal (zero) asymptotic error. Alignment of the noise ${\mathbf{\Sigma}}_{n}$ with ${\mathbf{w}}^{*}$ gives higher asymptotic error.
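These expressions can be verified numerically. The sketch below (with randomly generated signal and noise correlation matrices standing in for real data) checks the closed form for ${E}_{g}^{*}$ against direct evaluation, and confirms that noise orthogonal to the coding direction gives zero asymptotic error:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6  # neurons

def rand_psd(n):
    """Random positive-definite matrix to stand in for a correlation matrix."""
    A = rng.standard_normal((n, n))
    return A @ A.T / n

Sigma_s, Sigma_n = rand_psd(N), rand_psd(N)  # signal and noise correlations
w_star = rng.standard_normal(N)              # target readout weights

def E_g(w):
    """Generalization error: bias term plus variance term."""
    d = w - w_star
    return d @ Sigma_s @ d + w @ Sigma_n @ w

# Setting the gradient of E_g to zero gives the optimal readout
w_opt = np.linalg.solve(Sigma_s + Sigma_n, Sigma_s @ w_star)
E_star = w_star @ Sigma_n @ np.linalg.solve(Sigma_s + Sigma_n, Sigma_s @ w_star)
assert np.isclose(E_g(w_opt), E_star)  # closed form matches direct evaluation
assert E_g(w_opt) <= E_g(w_star)       # shrunk readout beats w* under noise

# Noise orthogonal to the coding direction (Sigma_n w* = 0) gives zero error
u = Sigma_n @ w_star
Sigma_n_perp = Sigma_n - np.outer(u, u) / (w_star @ u)  # projects out w* direction
E_star_perp = w_star @ Sigma_n_perp @ np.linalg.solve(
    Sigma_s + Sigma_n_perp, Sigma_s @ w_star)
assert np.isclose(E_star_perp, 0)
```

Note that the optimal readout is a shrunk version of the noise-free weights ${\mathbf{w}}^{*}$, regularized by the noise correlations.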
In addition to the irreducible error, the presence of neural noise can alter the learning curve at finite $P$. An analytical study of this effect is difficult, and we leave it for future work. We numerically study the effect of neural variability on generalization performance in the orientation discrimination tasks for the non-trial-averaged mouse V1 code in Appendix 1—figure 1. We note that the generalization error is worse at each finite value of $P$ when compared to trial-averaged (noise-free) learning curves. We varied the regularization parameter and did not find an obvious nonzero optimal weight decay $\lambda$, consistent with small noise levels.
Neural noise is not the only phenomenon that can degrade task learning. Codes which are incapable of expressing the target function through linear readouts are also susceptible to overfitting. As explained in Canatar et al., 2021, the components of the target function that are inexpressible act as a source of noise, which the learning process can overfit. Such a scenario can occur, for example, when the readout neuron only receives input from a sparse subset of the coding neural population (Seeman et al., 2018). We show in Appendix 1—figure 1C,D that using subsampled populations of size $N$ can indeed lead to a regime where more data hurts performance, producing an overfitting error peak, a subsequent nonvanishing asymptotic error, and an optimal weight decay parameter $\lambda$. This phenomenon is known as double descent in the machine learning literature (Belkin et al., 2019; Mei and Montanari, 2020; Canatar et al., 2021). At small $N$, these codes are not sufficiently expressive to learn the target function through a linear readout. The overfitting peak occurs near the interpolation threshold, the largest value of $P$ at which all training sets can be perfectly fit in the $\lambda \to 0$ limit (Canatar et al., 2021). As $P\to \infty$, the generalization error asymptotes to the amount of unexplained variance in the target function.
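A minimal simulation reproduces this double-descent behavior. Here a readout sees only a subset of a hypothetical Gaussian random code (all sizes and statistics are illustrative, not matched to the V1 data), and the ridgeless readout exhibits an error peak at the interpolation threshold:

```python
import numpy as np

rng = np.random.default_rng(3)
N_full, N_sub, M = 200, 20, 500  # full code, subsampled readout, stimulus pool

def avg_test_error(p, trials=40):
    """Average ridgeless test error after training on p examples."""
    errs = []
    for _ in range(trials):
        R = rng.standard_normal((M, N_full)) / np.sqrt(N_full)  # full code
        w_star = rng.standard_normal(N_full)
        y = R @ w_star               # target is linear in the *full* population
        X = R[:, :N_sub]             # readout only sees a subset of neurons
        idx = rng.permutation(M)
        tr, te = idx[:p], idx[p:]
        # Minimum-norm least squares = the lambda -> 0 (ridgeless) readout
        w = np.linalg.lstsq(X[tr], y[tr], rcond=None)[0]
        errs.append(np.mean((X[te] @ w - y[te]) ** 2))
    return np.mean(errs)

errors = {p: avg_test_error(p) for p in (N_sub // 2, N_sub, 4 * N_sub)}
# Generalization error peaks at the interpolation threshold p = N_sub
assert errors[N_sub] > errors[N_sub // 2]
assert errors[N_sub] > errors[4 * N_sub]
```

The inexpressible target component (contributed by the unobserved neurons) plays the role of noise, so the error spikes where the number of samples matches the number of readout features and then descends again.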
Data availability
Mouse V1 neuron responses to orientation gratings and preprocessing code were obtained from a publicly available dataset: https://github.com/MouseLand/stringer-et-al-2019. Responses to ImageNet images and preprocessing code were obtained from another publicly available dataset: https://github.com/MouseLand/stringer-pachitariu-et-al-2018b. The code generated by the authors for this paper is available at https://github.com/PehlevanGroup/sample_efficient_pop_codes (copy archived at swh:1:rev:6cd4f0fe7043ae214dd682a9dc035a497ffa2d61).

Figshare. Recordings of ten thousand neurons in visual cortex in response to 2,800 natural images. https://doi.org/10.25378/janelia.6845348.v4

Figshare. Recordings of ~20,000 neurons from V1 in response to oriented stimuli. https://doi.org/10.25378/janelia.8279387.v3
References

Conference: Do retinal ganglion cells project natural scenes to their principal subspace and whiten them? 2016 50th Asilomar Conference on Signals, Systems and Computers. https://doi.org/10.1109/ACSSC.2016.7869658

Perceptual learning of simple stimuli modifies stimulus representations in posterior inferior temporal cortex. Journal of Cognitive Neuroscience 26:2187–2200. https://doi.org/10.1162/jocn_a_00641

Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America. A, Optics and Image Science 2:284–299. https://doi.org/10.1364/josaa.2.000284

The reverse hierarchy theory of visual perceptual learning. Trends in Cognitive Sciences 8:457–464. https://doi.org/10.1016/j.tics.2004.08.011

Generation of random orthogonal matrices. SIAM Journal on Scientific and Statistical Computing 8:625–629. https://doi.org/10.1137/0908055

What does the retina know about natural scenes? Neural Computation 4:196–210. https://doi.org/10.1162/neco.1992.4.2.196

Some informational aspects of visual perception. Psychological Review 61:183–193. https://doi.org/10.1037/h0054663

Neural correlations, population coding and computation. Nature Reviews Neuroscience 7:358–366. https://doi.org/10.1038/nrn1888

Book: Possible Principles Underlying the Transformation of Sensory Messages. Cambridge University.

Benign overfitting in linear regression. PNAS 117:30063–30070. https://doi.org/10.1073/pnas.1907378117

Local diversity and fine-scale organization of receptive fields in mouse visual cortex. The Journal of Neuroscience 31:18506–18521. https://doi.org/10.1523/JNEUROSCI.2974-11.2011

Conference: Spectrum dependent learning curves in kernel regression and wide neural networks. Proceedings of the 37th International Conference on Machine Learning, Proceedings of Machine Learning Research. pp. 1024–1034.

Conference: Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks. Advances in Neural Information Processing Systems.

Deep neural networks rival the representation of primate IT cortex for core visual object recognition. PLOS Computational Biology 10:e1003963. https://doi.org/10.1371/journal.pcbi.1003963

Principal component analysis of neuronal ensemble activity reveals multidimensional somatosensory representations. Journal of Neuroscience Methods 94:121–140. https://doi.org/10.1016/s0165-0270(99)00130-2

Measuring and interpreting neuronal correlations. Nature Neuroscience 14:811–819. https://doi.org/10.1038/nn.2842

Dimensionality reduction for large-scale neural recordings. Nature Neuroscience 17:1500–1509. https://doi.org/10.1038/nn.3776

Book: Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. The MIT Press.

Conference: ImageNet: a large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition.

Conference: Reservoir computing meets recurrent kernels and structured transforms. NeurIPS Proceedings.

Representation is representation of similarities. The Behavioral and Brain Sciences 21:449–467. https://doi.org/10.1017/s0140525x98001253

On simplicity and complexity in the brave new world of large-scale neuroscience. Current Opinion in Neurobiology 32:148–155. https://doi.org/10.1016/j.conb.2015.04.003

Physiological correlates of perceptual learning in monkey V1 and V2. Journal of Neurophysiology 87:1867–1888. https://doi.org/10.1152/jn.00690.2001

Regularization theory and neural networks architectures. Neural Computation 7:219–269. https://doi.org/10.1162/neco.1995.7.2.219

How noise contributes to contrast invariance of orientation tuning in cat visual cortex. The Journal of Neuroscience 22:5118–5128. https://doi.org/10.1523/JNEUROSCI.22-12-05118.2002

Predictive coding. Wiley Interdisciplinary Reviews: Cognitive Science 2:580–593. https://doi.org/10.1002/wcs.142

Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks. IEEE Transactions on Neural Networks 3:154–157. https://doi.org/10.1109/72.105429

Neural tangent kernel: convergence and generalization in neural networks. Advances in Neural Information Processing Systems. Curran Associates, Inc.

Conference: SGD on neural networks learns functions of increasing complexity. Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019.

Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience 2:4. https://doi.org/10.3389/neuro.06.004.2008

Book: Nonlinear programming. In: Kuhn HW, editors. Traces and Emergence of Nonlinear Programming. Springer. pp. 1–4. https://doi.org/10.1007/978-3-0348-0439-4

Content and cluster analysis: assessing representational similarity in neural systems. Philosophical Psychology 13:47–76. https://doi.org/10.1080/09515080050002726

Building machines that learn and think like people. The Behavioral and Brain Sciences 40:e253. https://doi.org/10.1017/S0140525X16001837

Conference: Deep neural networks as Gaussian processes. International Conference on Learning Representations.

Random synaptic feedback weights support error backpropagation for deep learning. Nature Communications 7:13276. https://doi.org/10.1038/ncomms13276

Learning curves of generic features maps for realistic datasets with a teacher-student model. Advances in Neural Information Processing Systems 34:18137–18151. https://doi.org/10.1088/1742-5468/ac9825

The generalization error of random features regression: precise asymptotics and the double descent curve. Communications on Pure and Applied Mathematics 75:667–766. https://doi.org/10.1002/cpa.22008

Adaptive tuning curve widths improve sample efficient learning. Frontiers in Computational Neuroscience 14:12. https://doi.org/10.3389/fncom.2020.00012

XVI. Functions of positive and negative type, and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character 209:415–446. https://doi.org/10.1098/rsta.1909.0016

Neural noise can explain expansive, power-law nonlinearities in neural response functions. Journal of Neurophysiology 87:653–659. https://doi.org/10.1152/jn.00425.2001

Conference: On 1/n neural representation and robustness. Advances in Neural Information Processing Systems 33.

Highly selective receptive fields in mouse visual cortex. The Journal of Neuroscience 28:7520–7536. https://doi.org/10.1523/JNEUROSCI.0623-08.2008

Energy limitation as a selective pressure on the evolution of sensory systems. The Journal of Experimental Biology 211:1792–1804. https://doi.org/10.1242/jeb.017574

Robustness of spike deconvolution for neuronal calcium imaging. The Journal of Neuroscience 38:7976–7985. https://doi.org/10.1523/JNEUROSCI.3339-17.2018

Book: Recordings of 20,000 Neurons from V1 in Response to Oriented Stimuli. American Physiological Society.

Why do similarity matching objectives lead to Hebbian/anti-Hebbian networks? Neural Computation 30:84–124. https://doi.org/10.1162/neco_a_01018

Decorrelation and efficient coding by retinal ganglion cells. Nature Neuroscience 15:628–635. https://doi.org/10.1038/nn.3064

Conference: Lie group methods for optimization with orthogonality constraints. International Conference on Independent Component Analysis and Signal Separation.

The contribution of spike threshold to the dichotomy of cortical simple and complex cells. Nature Neuroscience 7:1113–1122. https://doi.org/10.1038/nn1310

Conference: On the spectral bias of neural networks. International Conference on Machine Learning.

Conference: A generalized representer theorem. Proceedings of the 14th Annual Conference on Computational Learning Theory and 5th European Conference on Computational Learning Theory, COLT '01/EuroCOLT '01.

Book: Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press.

Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. Journal of Neurophysiology 86:1916–1936. https://doi.org/10.1152/jn.2001.86.4.1916

Conference: Neural tangent kernel eigenvalues accurately predict generalization. ICLR 2022 Conference.

A model of neuronal responses in visual area MT. Vision Research 38:743–761. https://doi.org/10.1016/s0042-6989(97)00183-1

Natural image statistics and neural representation. Annual Review of Neuroscience 24:1193–1216. https://doi.org/10.1146/annurev.neuro.24.1.1193

Conference: Approximate learning curves for Gaussian processes. 9th International Conference on Artificial Neural Networks. https://doi.org/10.1049/cp:19991148

Book: Gaussian process regression with mismatched models. In: Dietterich T, Becker S, Ghahramani Z, editors. Advances in Neural Information Processing Systems. MIT Press. pp. 1–2.

Predictive coding: a fresh view of inhibition in the retina. Proceedings of the Royal Society of London. Series B, Biological Sciences 216:427–459. https://doi.org/10.1098/rspb.1982.0085

Continuous analogues of matrix factorizations. Proceedings of the Royal Society A 471:20140585. https://doi.org/10.1098/rspa.2014.0585

Conference: Deep learning generalizes because the parameter-function map is biased towards simple functions. International Conference on Learning Representations.

Dynamics of spatial frequency tuning in mouse visual cortex. Journal of Neurophysiology 107:2937–2949. https://doi.org/10.1152/jn.00022.2012

Characterizing the sparseness of neural codes. Network: Computation in Neural Systems 12:255. https://doi.org/10.1080/net.12.3.255.270

The lack of a priori distinctions between learning algorithms. Neural Computation 8:1341–1390. https://doi.org/10.1162/neco.1996.8.7.1341

The effect of perceptual learning on neuronal responses in monkey visual area V4. The Journal of Neuroscience 24:1617–1626. https://doi.org/10.1523/JNEUROSCI.4442-03.2004

Conference: Tensor Programs IV: feature learning in infinite-width neural networks. International Conference on Machine Learning.

Conference: Understanding deep learning requires rethinking generalization. 5th International Conference on Learning Representations (ICLR 2017).
Article and author information
Author details
Funding
National Science Foundation (DMS-2134157)
 Blake Bordelon
 Cengiz Pehlevan
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank Jacob Zavatone-Veth and Abdulkadir Canatar for useful comments and discussions about this manuscript. BB acknowledges the support of the NSF-Simons Center for Mathematical and Statistical Analysis of Biology at Harvard (award #1764269) and the Harvard Q-Bio Initiative. CP and BB were also supported by NSF grant DMS-2134157.
Version history
 Preprint posted: March 31, 2021 (view preprint)
 Received: March 14, 2022
 Accepted: December 15, 2022
 Accepted Manuscript published: December 16, 2022 (version 1)
 Version of Record published: January 13, 2023 (version 2)
Copyright
© 2022, Bordelon and Pehlevan
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.