Learning: How the cerebellum learns to build a sequence

Rabbits can learn the biological analogue of a simple recursive function by relying only on the neurons of the cerebellum.
  1. Reza Shadmehr (corresponding author), Johns Hopkins School of Medicine, United States

Sequential patterns like the Fibonacci numbers, as well as the movements that produce a tied shoelace, are examples of recursion: the process begins with a seed that a system uses to generate an output. That output is then fed back to the system as a self-generated input, which the system uses to produce the next output. The result is a recursive function that uses a seed (external input) at time t to generate outputs at times t, t + D, t + 2D and so on (where D is a constant interval of time). Now, in eLife, Andrei Khilkevich, Juan Zambrano, Molly-Marie Richards and Michael Mauk of the University of Texas at Austin report the results of experiments on rabbits that shed light on how the brain learns the biological analogue of a recursive function (Khilkevich et al., 2018).
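
To make the idea concrete, the short sketch below implements this kind of recursion in code; the mapping f, the seed and the interval D are placeholders chosen for illustration, not quantities from the study.

```python
# A minimal sketch (not from the paper) of the recursion described above:
# a seed arriving at time t produces an output, and that output is fed back
# as the self-generated input that drives the outputs at t + D, t + 2D, ...
# The function f, the seed and D are illustrative assumptions.

def run_recursion(f, seed, D, n_steps):
    """Apply f repeatedly, feeding each output back in as the next input."""
    t, x = 0.0, seed              # external seed arrives at time t
    sequence = []
    for _ in range(n_steps):
        y = f(x)                  # the system maps its current input to an output
        sequence.append((t, y))
        x = y                     # the output becomes the next self-generated input
        t += D                    # the next output occurs one interval D later
    return sequence

# Example with doubling as a stand-in for the system's input-output rule
print(run_recursion(lambda x: 2 * x, seed=1, D=0.6, n_steps=4))
# [(0.0, 2), (0.6, 4), (1.2, 8), (1.8, 16)]
```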

To present the seed that started a sequence of motor outputs, Khilkevich et al. electrically stimulated the mossy fibers that provided inputs to the cerebellum. Near the end of the period of mossy fiber stimulation, they electrically stimulated the skin near the eyelid, which caused the rabbits to blink. The blink was the motor output. With repeated exposure to the mossy fiber input and the eyelid stimulation, the cerebellum learned to predict that the mossy fiber stimulation would be followed by the eyelid stimulation (Krupa et al., 1993), which then led to an anticipatory blink at time t. That is, given an input to the cerebellum at time t, the animals learned to produce an output at the same time. The technical term for this kind of learning is classical conditioning.
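
The logic of this kind of associative learning can be illustrated with the textbook Rescorla-Wagner rule; the sketch below is only a stand-in for illustration, not the cerebellar mechanism that Khilkevich et al. studied, and the learning rate and blink threshold are assumed values.

```python
# Hedged illustration of classical conditioning with the textbook
# Rescorla-Wagner rule; NOT the cerebellar mechanism itself. The learning
# rate (alpha) and the blink threshold are assumptions for illustration.

def associative_strength(n_paired_trials, alpha=0.2, us_strength=1.0):
    """Strength V of the mossy-fiber-input -> eyelid-stimulation association."""
    V = 0.0
    for _ in range(n_paired_trials):
        V += alpha * (us_strength - V)   # prediction error drives learning
    return V

BLINK_THRESHOLD = 0.5   # assumed: anticipatory blink once V exceeds this value
for trials in (1, 5, 20):
    V = associative_strength(trials)
    print(f"{trials:2d} paired trials: V = {V:.2f}, anticipatory blink at t: {V > BLINK_THRESHOLD}")
```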

However, the goal for the rabbits was to learn to blink not just at time t, but also at times t + D, t + 2D and so on. That is, the challenge for the animal was to learn to use its own motor output at time t (the eye blink) as the cue needed to produce a second blink at t + D. To do this, Khilkevich et al. measured the eyelid response at time t. If the eye was less than 50% closed, they stimulated the eyelid as usual. However, if the eye was more than 50% closed, they stimulated it 600 milliseconds later (that is, at t + D). The critical point is that there was no input to the mossy fibers at t + D. Although earlier experiments had shown that the cerebellum was not able to associate a mossy fiber input with stimulation of the eyelid when the delay between them was longer than 400 milliseconds (Kalmbach et al., 2010), Khilkevich et al. found that the animals learned to blink not only at time t, but also at time t + D.
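
The closed-loop rule is simple enough to write down directly. In the sketch below, the 50% closure criterion and the 600 millisecond delay come from the experiment, while the function and variable names are illustrative.

```python
# Sketch of the closed-loop stimulation rule described above. The 50%
# eyelid-closure criterion and the 600 ms delay (D) come from the text;
# everything else is an illustrative assumption.

D = 0.6  # seconds between the blink at t and the delayed eyelid stimulation

def eyelid_stimulus_time(eyelid_closure_at_t, t):
    """Time at which the eyelid stimulus is delivered on a given trial."""
    if eyelid_closure_at_t < 0.5:
        return t        # eye less than 50% closed: stimulate as usual, at t
    return t + D        # eye more than 50% closed: stimulate 600 ms later, at t + D

print(eyelid_stimulus_time(0.2, t=1.0))   # 1.0 -> stimulus delivered at t
print(eyelid_stimulus_time(0.8, t=1.0))   # 1.6 -> stimulus delivered at t + D
```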

Their hypothesis was that the sequence was learned through recursion: a copy of the commands for the blink at t was sent as input to the cerebellum, allowing it to associate these commands with the eyelid stimulation at t + D and thereby learn to blink at t + D. An elegant experiment confirmed this hypothesis: Khilkevich et al. found that after training was completed and the rabbits blinked at times t and t + D, omitting the eyelid stimulation at time t resulted in the extinction of the blinks at times t and t + D. Moreover, and rather remarkably, even if the eyelid was subsequently stimulated at time t + D, there was still no blink. This established the fundamental feature of the recursive function: without the blink at time t, which was generated because of the mossy fiber input at t, the animal could not produce a blink at time t + D.
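
The dependency established by this experiment can be summarized as a small piece of boolean bookkeeping; the sketch below is a schematic of the logic only, not a model of the circuit.

```python
# Schematic of the dependency revealed by the extinction experiment: the
# blink at t + D is cued by the animal's own blink at t, so anything that
# removes the blink at t (here, extinction of the learned response at t)
# also removes the blink at t + D. Purely illustrative boolean logic.

def trial(seed_at_t, response_at_t_learned, response_at_t_plus_D_learned):
    blink_at_t = seed_at_t and response_at_t_learned
    # the only cue for the second blink is the first blink itself
    blink_at_t_plus_D = blink_at_t and response_at_t_plus_D_learned
    return blink_at_t, blink_at_t_plus_D

# After training: the seed at t yields blinks at both t and t + D
print(trial(seed_at_t=True, response_at_t_learned=True, response_at_t_plus_D_learned=True))
# After extinction of the response at t: no blink at t, hence none at t + D
print(trial(seed_at_t=True, response_at_t_learned=False, response_at_t_plus_D_learned=True))
```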

Under normal conditions, the principal cells of the cerebellum, Purkinje cells, produce a steady stream of simple spikes. As the animal learns to associate the mossy fiber input with the eyelid stimulation, the Purkinje cells reduce their simple spike discharge just before the blink at time t, and again before the second blink at t + D (Jirenhed et al., 2017). Khilkevich et al. found that the modulation of the spikes before t + D appeared to be causal, because there was no blink response at t + D if there was no modulation around time t + D. The timing of the modulation at t and t + D also appeared consistent with a role for the cerebellum in generating the recursive function.

The results of Khilkevich and co-workers expand the range of learning behaviors that have been ascribed to the cerebellum. Earlier work had shown that Purkinje cells learn to associate motor commands with their sensory consequences (Herzfeld et al., 2015), forming 'forward models' that enable animals to control their movements with precision and accuracy (Heiney et al., 2014; Herzfeld et al., 2018). The new results demonstrate that Purkinje cells can also learn recursive functions, using a seed plus feedback from the animal’s own actions to construct a sequence of movements.

Article and author information

Author details

  1. Reza Shadmehr

    Reza Shadmehr is at the Laboratory for Computational Motor Control, Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, United States

    For correspondence: shadmehr@jhu.edu
    Competing interests: No competing interests declared
    ORCID: 0000-0002-7686-2569

Copyright

© 2018, Shadmehr

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Cite this article

Reza Shadmehr (2018) Learning: How the cerebellum learns to build a sequence. eLife 7:e40660. https://doi.org/10.7554/eLife.40660