Learning: How the cerebellum learns to build a sequence
Sequential patterns like the Fibonacci numbers, as well as the movements that produce a tied shoelace, are examples of recursion: the process begins with a seed that a system uses to generate an output. That output is then fed back to the system as a self-generated input, from which the system generates the next output. The result is a recursive function that uses a seed (an external input) at time t to generate outputs at times t, t + D, t + 2D and so on (where D is a constant interval of time). Now, in eLife, Andrei Khilkevich, Juan Zambrano, Molly-Marie Richards and Michael Mauk of the University of Texas at Austin report the results of experiments on rabbits which shed light on how the brain learns the biological analogue of a recursive function (Khilkevich et al., 2018).
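The scheme described above can be sketched in a few lines of code. This is only an illustration of the abstract idea, not a model of the cerebellum; the function names are invented for the example. The seed enters once, and every later output is computed from the system's own previous output:

```python
def recursive_sequence(seed, f, n_steps):
    """Generate a sequence in which each output is fed back as the next input.

    The seed (external input) arrives once at time t; every later output at
    t + D, t + 2D, ... is produced from the system's own previous output.
    """
    outputs = []
    x = seed
    for _ in range(n_steps):
        x = f(x)           # the system maps its current input to an output
        outputs.append(x)  # that output becomes the next self-generated input
    return outputs

# Fibonacci as a recursive sequence: the state is the last two numbers.
fib = recursive_sequence((0, 1), lambda s: (s[1], s[0] + s[1]), 8)
print([a for a, _ in fib])  # [1, 1, 2, 3, 5, 8, 13, 21]
```

After the seed `(0, 1)` is consumed, no further external input is needed: the sequence sustains itself, which is the property the rabbits' blinks were shown to share.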
To present the seed that started a sequence of motor outputs, Khilkevich et al. electrically stimulated the mossy fibers that provided inputs to the cerebellum. Near the end of the period of mossy fiber stimulation, they electrically stimulated the skin near the eyelid, which caused the rabbits to blink. The blink was the motor output. With repeated exposure to the mossy fiber input and the eyelid stimulation, the cerebellum learned to predict that the mossy fiber stimulation would be followed by the eyelid stimulation (Krupa et al., 1993), which then led to an anticipatory blink at time t. That is, given an input to the cerebellum at time t, the animals learned to produce an output at the same time. The technical term for this kind of learning is classical conditioning.
However, the goal for the rabbits was to learn to blink not just at time t, but also at times t + D, t + 2D and so on. That is, the challenge for the animal was to learn to use its own motor output at time t (the eye blink) as the cue needed to produce a second blink at t + D. To do this, Khilkevich et al. measured the eyelid response at time t. If the eye was less than 50% closed, they stimulated the eyelid as usual. However, if the eye was more than 50% closed, they stimulated it 600 milliseconds later (that is, at t + D). The critical point is that there was no input to the mossy fibers at t + D. Although earlier experiments had shown that the cerebellum was not able to associate a mossy fiber input with stimulation of the eyelid when the delay between them was longer than 400 milliseconds (Kalmbach et al., 2010), Khilkevich et al. found that the animals learned to blink not only at time t, but also at time t + D.
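The closed-loop rule used in this stage of the experiment can be sketched as a simple threshold on eyelid closure. This is a simplified sketch for illustration (the function name and the 0-to-1 closure scale are assumptions, not the authors' code); the 600 ms delay D is taken from the text:

```python
D = 0.6  # seconds: the 600 ms delay used by Khilkevich et al.

def stimulation_time(t, eyelid_closure):
    """Closed-loop rule (simplified sketch).

    eyelid_closure is the fraction the eye is closed at time t (0 = open,
    1 = fully closed). If the eye is less than 50% closed, the eyelid is
    stimulated at t as usual; if it is more than 50% closed (the animal has
    produced an anticipatory blink), the stimulation is delayed to t + D.
    Crucially, no mossy fiber input accompanies the stimulation at t + D.
    """
    if eyelid_closure < 0.5:
        return t       # usual pairing: stimulate at t
    return t + D       # learned blink detected: stimulate at t + D

print(stimulation_time(0.0, 0.2))  # naive animal: stimulated at t
print(stimulation_time(0.0, 0.8))  # trained animal: stimulated at t + D
```

The rule makes the animal's own blink at time t the only available cue for the stimulation at t + D, which is what forces learning of the second step in the sequence.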
Their hypothesis was that the sequence was learned through recursion: a copy of the commands for the blink at t was sent as an input to the cerebellum, allowing it to associate these commands with the eyelid stimulation at t + D and thereby learn to blink at t + D. An elegant experiment confirmed this hypothesis: Khilkevich et al. found that after training was completed and the rabbits blinked at times t and t + D, omitting the eyelid stimulation at time t resulted in the extinction of the blinks at times t and t + D. Moreover, and rather remarkably, even if the eyelid was subsequently stimulated at time t + D, there was still no blink. This established the fundamental feature of the recursive function: without the blink at time t, which was generated because of the mossy fiber input at t, the animal could not produce a blink at time t + D.
Under normal conditions, the principal cells of the cerebellum, Purkinje cells, produce a steady stream of simple spikes. As the animal learns to associate the mossy fiber input with the eyelid stimulation, the Purkinje cells reduce their simple spike discharge just before the blink at time t, and again before the second blink at t + D (Jirenhed et al., 2017). Khilkevich et al. found that the modulation of the spikes before t + D appeared to be causal, because there was no blink response at t + D if there was no modulation around time t + D. The timing of the modulation at t and t + D also appeared consistent with a role for the cerebellum in generating the recursive function.
The results of Khilkevich and co-workers expand the range of learning behaviors that have been ascribed to the cerebellum. Earlier work had shown that Purkinje cells learn to associate motor commands with their sensory consequences (Herzfeld et al., 2015), forming 'forward models' that enable animals to control their movements with precision and accuracy (Heiney et al., 2014; Herzfeld et al., 2018). The new results demonstrate that Purkinje cells can also learn recursive functions, using a seed plus feedback from the animal’s own actions to construct a sequence of movements.
Heiney et al. (2014) Precise control of movement kinematics by optogenetic inhibition of Purkinje cell activity. Journal of Neuroscience 34:2321–2330. https://doi.org/10.1523/JNEUROSCI.4547-13.2014
Kalmbach et al. (2010) Temporal patterns of inputs to cerebellum necessary and sufficient for trace eyelid conditioning. Journal of Neurophysiology 104:627–640. https://doi.org/10.1152/jn.00169.2010
Article and author information
- Version of Record published: August 23, 2018 (version 1)
© 2018, Shadmehr
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.