(A) Mossy fiber inputs (blue) project to granule cells (green), which send parallel fibers that contact a Purkinje cell (black). (B) Diagram of the neural network model. Task variables are embedded, …
(A) A random categorization task in which inputs are mapped to one of two categories (+1 or –1). Gray plane denotes the decision boundary of a linear classifier separating the two categories. (B) A …
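A minimal sketch of such a task, assuming illustrative sizes and a simple least-squares readout (stand-ins, not the paper's exact training procedure, which is described in Methods):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 50, 200                       # input dimension and number of patterns (illustrative)
X = rng.standard_normal((P, N))      # random input patterns
y = rng.choice([-1.0, 1.0], size=P)  # random binary category labels (+1 or -1)

# A linear readout fit by least squares; its weight vector w defines the
# decision boundary (the "gray plane"): the hyperplane where w . x = 0.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
error = np.mean(np.sign(X @ w) != y)
print(f"training error: {error:.3f}")
```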
(A) Error as a function of coding level for networks trained to perform random categorization tasks (as in Figure 2E but with a wider range of associations). Performance is measured for noisy …
Error as a function of coding level for networks with (A) Heaviside and (B) rectified power-law (with power 2) nonlinearities in the expansion layer. Networks learned Gaussian process targets. …
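As a sketch of the two nonlinearities compared here (the threshold value and input statistics below are illustrative assumptions): both share a threshold, so they produce the same coding level, but the rectified power-law yields graded rather than binary activity.

```python
import numpy as np

def heaviside(x, theta):
    """Binary activation: 1 if the input exceeds the threshold, else 0."""
    return (x > theta).astype(float)

def rect_power(x, theta, p=2):
    """Rectified power-law activation with power p: max(x - theta, 0) ** p."""
    return np.maximum(x - theta, 0.0) ** p

rng = np.random.default_rng(1)
inputs = rng.standard_normal(100_000)  # stand-in for expansion-layer input currents
theta = 1.0
# The coding level f is the fraction of neurons with nonzero activity; it is
# identical for both nonlinearities because they share the same threshold.
f = np.mean(heaviside(inputs, theta) > 0)
print(f"coding level at theta = {theta}: {f:.3f}")  # ~0.16 for standard normal inputs
```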
Error as a function of coding level for networks learning Gaussian process targets, with a different input dimension in (A) and (B). Dashed lines indicate the performance of a readout of the input layer. …
Dots denote performance of a readout of the expansion layer in simulations. Thin lines denote performance of a readout of the input layer in simulations. Thick lines denote theory for expansion …
(A) Effect of activation threshold on coding level. A point on the surface of the sphere represents a neuron with its effective weight vector. The blue region represents the set of neurons activated by a given input pattern, …
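The geometric picture in panel A can be checked numerically. In this sketch (unit-norm random weights and a standard normal input are assumptions), the coding level is the measure of the spherical cap of weight vectors whose overlap with the input exceeds the threshold, which for large input dimension approaches a Gaussian tail:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
N, M = 100, 50_000                    # input dimension, number of expansion neurons
J = rng.standard_normal((M, N))
J /= np.linalg.norm(J, axis=1, keepdims=True)  # weight vectors on the unit sphere
x = rng.standard_normal(N)                      # a random input pattern
theta = 0.5 * np.linalg.norm(x) / np.sqrt(N)    # illustrative threshold

# A neuron is active when its weight vector lies in the spherical cap
# J_i . x > theta; for large N the projections J_i . x are approximately
# Gaussian with standard deviation ||x|| / sqrt(N), so the coding level
# is a Gaussian tail probability.
f_empirical = np.mean(J @ x > theta)
f_theory = 1 - norm.cdf(theta * np.sqrt(N) / np.linalg.norm(x))
print(f"empirical f = {f_empirical:.3f}, Gaussian-tail prediction = {f_theory:.3f}")
```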
(A) Geometry of high-dimensional categorization tasks where input patterns are drawn from random, noisy clusters (light regions). Performance depends on overlaps between training patterns from …
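A small sketch of this geometry under assumed scalings: patterns drawn from the same cluster have large mutual overlaps, while patterns from different clusters are nearly orthogonal.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_clusters, per_cluster, noise = 200, 10, 20, 0.3
centers = rng.standard_normal((n_clusters, N)) / np.sqrt(N)  # unit-scale cluster centers
# Noisy patterns: each cluster center plus Gaussian noise.
patterns = np.repeat(centers, per_cluster, axis=0) \
    + noise * rng.standard_normal((n_clusters * per_cluster, N)) / np.sqrt(N)
labels = np.repeat(np.arange(n_clusters), per_cluster)

overlaps = patterns @ patterns.T
same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(len(labels), dtype=bool)
print("mean within-cluster overlap: ", overlaps[same & off_diag].mean())
print("mean between-cluster overlap:", overlaps[~same].mean())
```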
Error as a function of frequency. Errors are calculated analytically using Equation 4 and represent the predictions of the theory for an infinitely large expansion. Curves are symmetric around …
Power as a function of frequency for random categorization tasks (colors) and for a Gaussian process task (black). Power is averaged over realizations of target functions.
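A one-dimensional analogue illustrates the claim (the FFT-based spectrum and the Gaussian-smoothed stand-in for a Gaussian process sample are illustrative, not the paper's frequency decomposition): random labels have a flat spectrum, while a smooth target concentrates power at low frequencies.

```python
import numpy as np

rng = np.random.default_rng(4)
P = 512                                            # samples along one input dimension
random_target = rng.choice([-1.0, 1.0], size=P)    # random categorization labels

# Smooth target: white noise convolved with a Gaussian bump, a stand-in for
# a Gaussian process sample with a finite length scale.
t = np.arange(P)
kernel = np.exp(-0.5 * ((t - P // 2) / 10.0) ** 2)
smooth_target = np.convolve(rng.standard_normal(P), kernel, mode="same")
smooth_target /= smooth_target.std()

for name, y in [("random", random_target), ("smooth", smooth_target)]:
    power = np.abs(np.fft.rfft(y)) ** 2
    low = power[: len(power) // 8].sum() / power.sum()
    print(f"{name}: fraction of power in lowest 1/8 of frequencies = {low:.2f}")
```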
(A) Top: Fully connected network. Bottom: Sparsely connected network with in-degree K and excitatory weights, with global inhibition onto expansion layer neurons. (B) Error as a function of coding …
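A sketch of the sparse architecture, under the assumption that global inhibition is modeled by subtracting the population-mean input before thresholding (the paper's exact implementation is in Methods):

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, K = 50, 1000, 4                # inputs, expansion neurons, in-degree (illustrative)

# Each expansion neuron receives K excitatory (nonnegative) weights from a
# random subset of the inputs.
J = np.zeros((M, N))
for i in range(M):
    idx = rng.choice(N, size=K, replace=False)
    J[i, idx] = rng.random(K)        # excitatory weights

x = rng.standard_normal(N)
drive = J @ x
# Global inhibition: subtract the population-mean drive before applying the
# threshold, one simple way to model shared inhibition onto the layer.
theta = 0.5
h = np.maximum(drive - drive.mean() - theta, 0.0)
print(f"coding level: {np.mean(h > 0):.3f}")
```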
(A) Error as a function of in-degree K for networks learning Gaussian process targets. Curves represent different values of the length scale of the Gaussian process. The total number of …
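A Gaussian process target with an explicit length scale can be sampled as below; the squared-exponential kernel, one-dimensional inputs, and the name `ell` (standing in for the length-scale symbol lost in this caption) are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(6)
P, ell = 200, 0.5          # number of inputs; 'ell' is the assumed length-scale name
x = np.sort(rng.uniform(-1, 1, size=P))   # one-dimensional inputs for illustration

# Squared-exponential (RBF) kernel: nearby inputs get correlated targets,
# with correlations decaying over the length scale ell.
K_mat = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell**2)
L = np.linalg.cholesky(K_mat + 1e-6 * np.eye(P))  # jitter for numerical stability
target = L @ rng.standard_normal(P)               # one GP sample: a smooth target
print(f"target std: {target.std():.2f}; smaller ell gives rougher targets")
```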
(A) Left: Schematic of a two-joint arm. Center: Cerebellar cortex model in which sensorimotor task variables at the current time are used to predict hand position at a later time. Right: Error as a function of …
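For concreteness, the forward kinematics of a planar two-joint arm, which maps joint angles to hand position; the link lengths and angle conventions here are illustrative assumptions:

```python
import numpy as np

def hand_position(theta1, theta2, l1=0.3, l2=0.3):
    """Forward kinematics of a planar two-joint arm: shoulder angle theta1,
    elbow angle theta2 (relative to the upper arm), link lengths in meters."""
    elbow = np.array([l1 * np.cos(theta1), l1 * np.sin(theta1)])
    hand = elbow + np.array([l2 * np.cos(theta1 + theta2),
                             l2 * np.sin(theta1 + theta2)])
    return hand

# Task variables at the current time (angles and, in the full model, their
# velocities) are the network inputs; the supervised target is the hand
# position at a later time.
print(hand_position(np.pi / 4, np.pi / 6))
```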
(A) Error as a function of coding level in a spiking model. The firing rate of neuron i (in Hz) is given by a gain term, which adjusts the amplitude of the activity, multiplied by a nonlinear function of the …
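The caption's rate expression did not survive extraction; one common form consistent with the description is a thresholded response scaled by a gain chosen to reach a target mean rate, sketched below as an assumption:

```python
import numpy as np

rng = np.random.default_rng(7)
M = 1000
inputs = rng.standard_normal(M)        # stand-in input currents to expansion neurons
theta = 1.0

# Assumed form: rate_i = g * max(input_i - theta, 0), with the gain g
# calibrated so the mean rate of active neurons hits a target value in Hz.
rates_unscaled = np.maximum(inputs - theta, 0.0)
target_mean_hz = 10.0
g = target_mean_hz / rates_unscaled[rates_unscaled > 0].mean()
rates = g * rates_unscaled
print(f"coding level: {np.mean(rates > 0):.3f}, "
      f"mean active rate: {rates[rates > 0].mean():.1f} Hz")
```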
During each epoch of training, the network is presented with all patterns in a randomized order, and the learned weights are updated with each pattern (see Methods). Networks were presented with 30 …
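A sketch of the training loop as described, assuming a simple delta-rule update for the readout weights (the paper's exact update is given in Methods):

```python
import numpy as np

rng = np.random.default_rng(8)
M, P, epochs, lr = 100, 50, 30, 0.01  # illustrative sizes and learning rate
H = rng.standard_normal((P, M))       # expansion-layer activity for each pattern
y = rng.choice([-1.0, 1.0], size=P)   # targets
w = np.zeros(M)                       # learned readout weights

for epoch in range(epochs):
    for p in rng.permutation(P):      # all patterns, in a randomized order
        err = y[p] - H[p] @ w         # prediction error on this pattern
        w += lr * err * H[p]          # delta-rule update after each pattern

print(f"final training error: {np.mean(np.sign(H @ w) != y):.3f}")
```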
M: number of expansion layer neurons. N: number of input layer neurons. K: number of connections from input layer to a single expansion layer neuron. KM: total number of connections from input to …
| Figure panel | Network parameters | Task parameters |
|---|---|---|
| Figure 2E | | |
| Figures 2F, 4G and 5B (full) | | |
| Figures 5B and E | | |
| Figure 6A | | |
| Figure 6B | | |
| Figure 6C | | |
| Figure 6D | | |
| Figure 7A | see Methods | |
| Figure 7B | | |
| Figure 7C | see Methods | |
| Figure 7D | see Methods | |
| Figure 2—figure supplement 1 | see figure | |
| Figure 2—figure supplement 2 | | |
| Figure 2—figure supplement 3 | | |
| Figure 2—figure supplement 4 | | |
| Figure 7—figure supplement 1 | | |
| Figure 7—figure supplement 2 | | |