Learning: How the cerebellum learns to build a sequence
Sequential patterns like the Fibonacci numbers, as well as the movements that produce a tied shoelace, are examples of recursion: the process begins with a seed that a system uses to generate an output. That output is then fed back to the system as a self-generated input, which in turn becomes a new output. The result is a recursive function that uses a seed (external input) at time t to generate outputs at times t, t + D, t + 2D and so on (where D is a constant interval of time). Now, in eLife, Andrei Khilkevich, Juan Zambrano, Molly-Marie Richards and Michael Mauk of the University of Texas at Austin report the results of experiments on rabbits that shed light on how the brain learns the biological analogue of a recursive function (Khilkevich et al., 2018).
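The recursive scheme described above can be written as a short sketch. The function and parameter names here are illustrative, not from the paper; the point is simply that a seed at time t, plus feedback of each output as the next input, yields outputs at t, t + D, t + 2D and so on:

```python
def recursive_outputs(seed, step, n_outputs, D=1):
    """Feed each output back as the next self-generated input.

    `seed` is the external input at time t; `step` maps an input to an
    output; D is the fixed interval between successive outputs.
    All names are illustrative.
    """
    outputs = []
    x = seed
    for k in range(n_outputs):
        y = step(x)                 # output at time t + k*D
        outputs.append((k * D, y))
        x = y                       # output becomes the next input
    return outputs

# Fibonacci-like example: feed back (a, b) -> (b, a + b)
print(recursive_outputs((0, 1), lambda p: (p[1], p[0] + p[1]), 4))
# [(0, (1, 1)), (1, (1, 2)), (2, (2, 3)), (3, (3, 5))]
```

Remove the feedback line (`x = y`) and the sequence collapses: every output is just a function of the seed, which is the distinction the experiments below exploit.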
To present the seed that started a sequence of motor outputs, Khilkevich et al. electrically stimulated the mossy fibers that provided inputs to the cerebellum. Near the end of the period of mossy fiber stimulation, they electrically stimulated the skin near the eyelid, which caused the rabbits to blink. The blink was the motor output. With repeated exposure to the mossy fiber input and the eyelid stimulation, the cerebellum learned to predict that the mossy fiber stimulation would be followed by the eyelid stimulation (Krupa et al., 1993), which then led to an anticipatory blink at time t. That is, given an input to the cerebellum at time t, the animals learned to produce an output at the same time. The technical term for this kind of learning is classical conditioning.
However, the goal for the rabbits was to learn to blink not just at time t, but also at times t + D, t + 2D and so on. That is, the challenge for the animal was to learn to use its own motor output at time t (the eye blink) as the cue needed to produce a second blink at t + D. To do this, Khilkevich et al. measured the eyelid response at time t. If the eye was less than 50% closed, they stimulated the eyelid as usual. However, if the eye was more than 50% closed, they stimulated it 600 milliseconds later (that is, at t + D). The critical point is that there was no input to the mossy fibers at t + D. Although earlier experiments had shown that the cerebellum was not able to associate a mossy fiber input with stimulation of the eyelid when the delay between them was longer than 400 milliseconds (Kalmbach et al., 2010), Khilkevich et al. found that the animals learned to blink not only at time t, but also at time t + D.
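The closed-loop rule in this paradigm can be sketched as follows (a hypothetical encoding of the protocol as described in the text; the function name and the representation of eyelid closure as a fraction are illustrative):

```python
D = 600  # ms, the delay used by Khilkevich et al.

def eyelid_stim_time(t, eye_closure):
    """Return when to stimulate the eyelid, given closure at time t.

    If the eye is less than 50% closed, stimulate at t as usual.
    If the blink has already begun (>= 50% closed), delay the
    stimulus to t + D, a time at which there is no mossy fiber input.
    """
    if eye_closure < 0.5:
        return t          # usual stimulation at t
    return t + D          # delayed stimulation at t + D
```

The rule makes the animal's own blink at t the only available cue for the stimulus at t + D, which is what forces the recursive solution.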
Their hypothesis was that the sequence was learned through recursion: a copy of the commands for the blink at t was sent as input to the cerebellum, allowing it to associate these commands with the eyelid stimulation at t + D, and thereby learn to blink at t + D. An elegant experiment confirmed this hypothesis: Khilkevich et al. found that after training was completed and the rabbits blinked at times t and t + D, omitting the eyelid stimulation at time t resulted in the extinction of the blinks at times t and t + D. Moreover, and rather remarkably, even if the eyelid was subsequently stimulated at time t + D, there was still no blink. This established the fundamental feature of the recursive function: without the blink at time t, which was generated because of the mossy fiber input at t, the animal could not produce a blink at time t + D.
Under normal conditions, the principal cells of the cerebellum, Purkinje cells, produce a steady stream of simple spikes. As the animal learns to associate the mossy fiber input with the eyelid stimulation, the Purkinje cells reduce their simple spike discharge just before the blink at time t, and again before the second blink at t + D (Jirenhed et al., 2017). Khilkevich et al. found that the modulation of the spikes before t + D appeared to be causal, because there was no blink response at t + D if there was no modulation around time t + D. The timing of the modulation at t and t + D also appeared consistent with a role for the cerebellum in generating the recursive function.
The results of Khilkevich and co-workers expand the range of learning behaviors that have been ascribed to the cerebellum. Earlier work had shown that Purkinje cells learn to associate motor commands with their sensory consequences (Herzfeld et al., 2015), forming 'forward models' that enable animals to control their movements with precision and accuracy (Heiney et al., 2014; Herzfeld et al., 2018). The new results demonstrate that Purkinje cells can also learn recursive functions, using a seed plus feedback from the animal’s own actions to construct a sequence of movements.
References
- Precise control of movement kinematics by optogenetic inhibition of Purkinje cell activity. Journal of Neuroscience 34:2321–2330. https://doi.org/10.1523/JNEUROSCI.4547-13.2014
- Temporal patterns of inputs to cerebellum necessary and sufficient for trace eyelid conditioning. Journal of Neurophysiology 104:627–640. https://doi.org/10.1152/jn.00169.2010
Copyright
© 2018, Shadmehr
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Download links
Downloads (link to download the article as PDF)
Open citations (links to open the citations from this article in various online reference manager services)
Cite this article (links to download the citations from this article in formats compatible with various reference manager tools)
Further reading
-
- Neuroscience
- Physics of Living Systems
Neurons generate and propagate electrical pulses called action potentials which annihilate on arrival at the axon terminal. We measure the extracellular electric field generated by propagating and annihilating action potentials and find that on annihilation, action potentials expel a local discharge. The discharge at the axon terminal generates an inhomogeneous electric field that immediately influences target neurons and thus provokes ephaptic coupling. Our measurements are quantitatively verified by a powerful analytical model which reveals excitation and inhibition in target neurons, depending on position and morphology of the source-target arrangement. Our model is in full agreement with experimental findings on ephaptic coupling at the well-studied Basket cell-Purkinje cell synapse. It is able to predict ephaptic coupling for any other synaptic geometry as illustrated by a few examples.
-
- Neuroscience
Detecting causal relations structures our perception of events in the world. Here, we determined for visual interactions whether generalized (i.e. feature-invariant) or specialized (i.e. feature-selective) visual routines underlie the perception of causality. To this end, we applied a visual adaptation protocol to assess the adaptability of specific features in classical launching events of simple geometric shapes. We asked observers to report whether they observed a launch or a pass in ambiguous test events (i.e. the overlap between two discs varied from trial to trial). After prolonged exposure to causal launch events (the adaptor) defined by a particular set of features (i.e. a particular motion direction, motion speed, or feature conjunction), observers were less likely to see causal launches in subsequent ambiguous test events than before adaptation. Crucially, adaptation was contingent on the causal impression in launches as demonstrated by a lack of adaptation in non-causal control events. We assessed whether this negative aftereffect transfers to test events with a new set of feature values that were not presented during adaptation. Processing in specialized (as opposed to generalized) visual routines predicts that the transfer of visual adaptation depends on the feature similarity of the adaptor and the test event. We show that the negative aftereffects do not transfer to unadapted launch directions but do transfer to launch events of different speeds. Finally, we used colored discs to assign distinct feature-based identities to the launching and the launched stimulus. We found that the adaptation transferred across colors if the test event had the same motion direction as the adaptor. In summary, visual adaptation allowed us to carve out a visual feature space underlying the perception of causality and revealed specialized visual routines that are tuned to a launch’s motion direction.