Brainstem and cerebellar neurons implement an internal model to accurately estimate self-motion during externally generated (‘passive’) movements. However, these neurons show reduced responses during self-generated (‘active’) movements, indicating that predicted sensory consequences of motor commands cancel sensory signals. Remarkably, the computational processes underlying sensory prediction during active motion and their relationship to internal model computations during passive movements remain unknown. We construct a Kalman filter that incorporates motor commands into a previously established model of optimal passive self-motion estimation. The simulated sensory error and feedback signals match experimentally measured neuronal responses during active and passive head and trunk rotations and translations. We conclude that a single sensory internal model can combine motor commands with vestibular and proprioceptive signals optimally. Thus, although neurons carrying sensory prediction error or feedback signals show attenuated modulation, the sensory cues and internal model are both engaged and critically important for accurate self-motion estimation during active head movements.
https://doi.org/10.7554/eLife.28074.001
When seated in a car, we can detect when the vehicle begins to move even with our eyes closed. Structures in the inner ear called the vestibular, or balance, organs enable us to sense our own movement. They do this by detecting head rotations, accelerations and gravity. They then pass this information on to specialized vestibular regions of the brain.
Experiments using rotating chairs and moving platforms have shown that passive movements – such as car journeys and rollercoaster rides – activate the brain’s vestibular regions. But recent work has revealed that voluntary movements – in which individuals start the movement themselves – activate these regions far less than passive movements. Does this mean that the brain ignores signals from the inner ear during voluntary movements? Another possibility is that the brain predicts in advance how each movement will affect the vestibular organs in the inner ear. It then compares these predictions with the signals it receives during the movement. Only mismatches between the two activate the brain’s vestibular regions.
To test this theory, Laurens and Angelaki created a mathematical model that compares predicted signals with actual signals in the way the theory proposes. The model accurately predicts the patterns of brain activity seen during both active and passive movement. This reconciles the results of previous experiments on active and passive motion. It also suggests that the brain uses similar processes to analyze vestibular signals during both types of movement.
These findings can help drive further research into how the brain uses sensory signals to refine our everyday movements. They can also help us understand how people recover from damage to the vestibular system. Most patients with vestibular injuries learn to walk again, but have difficulty walking on uneven ground. They also become disoriented by passive movement. Using the model to study how the brain adapts to loss of vestibular input could lead to new strategies to aid recovery.
https://doi.org/10.7554/eLife.28074.002
For many decades, research on vestibular function has used passive motion stimuli generated by rotating chairs, motion platforms or centrifuges to characterize the responses of the vestibular motion sensors in the inner ear and the subsequent stages of neuronal processing. This research has revealed elegant computations by which the brain uses an internal model to overcome the dynamic limitations and ambiguities of the vestibular sensors (Figure 1A; Mayne, 1974; Oman, 1982; Borah et al., 1988; Glasauer, 1992; Merfeld, 1995; Glasauer and Merfeld, 1997; Bos et al., 2001; Zupan et al., 2002; Laurens, 2006; Laurens and Droulez, 2007; Laurens and Droulez, 2008; Laurens and Angelaki, 2011; Karmali and Merfeld, 2012; Lim et al., 2017). These computations are closely related to internal model mechanisms that underlie motor control and adaptation (Wolpert et al., 1995; Körding and Wolpert, 2004; Todorov, 2004; Chen-Harris et al., 2008; Berniker et al., 2010; Berniker and Kording, 2011; Franklin and Wolpert, 2011; Saglam et al., 2011; 2014). Neuronal correlates of the internal model of self-motion have been identified in brainstem and cerebellum (Angelaki et al., 2004; Shaikh et al., 2005; Yakusheva et al., 2007, 2008, 2013, Laurens et al., 2013a, 2013b).
In the past decade, a few research groups have also studied how brainstem and cerebellar neurons modulate during active, self-generated head movements. Strikingly, several types of neurons, well known for responding to vestibular stimuli during passive movement, lose or reduce their sensitivity during self-generated movement (Gdowski et al., 2000; Gdowski and McCrea, 1999; Marlinski and McCrea, 2009; McCrea et al., 1999; McCrea and Luan, 2003; Roy and Cullen, 2001; 2004; Brooks and Cullen, 2009; 2013; 2014; Brooks et al., 2015; Carriot et al., 2013). In contrast, vestibular afferents respond indiscriminately to active and passive stimuli (Cullen and Minor, 2002; Sadeghi et al., 2007; Jamali et al., 2009). These properties resemble sensory prediction errors in other sensorimotor functions, such as fish electrosensation (Requarth and Sawtell, 2011; Kennedy et al., 2014) and motor control (Tseng et al., 2007; Shadmehr et al., 2010). Yet, a consistent quantitative interpretation has been lacking. Initial experiments and reviews implicated proprioceptive switches (Figure 1B; Roy and Cullen, 2004; Cullen et al., 2011; Cullen, 2012; Carriot et al., 2013; Brooks and Cullen, 2014). More recently, elegant experiments by Brooks and colleagues (Brooks and Cullen, 2013; Brooks et al., 2015) suggested that the brain predicts how self-generated motion activates the vestibular organs and subtracts these predictions from afferent signals to generate sensory prediction errors (Figure 1C). However, the computational processes underlying this sensory prediction have remained unclear.
Juxtaposing the findings of studies using passive and active motion stimuli leads to a paradox: during passive motion, central vestibular neurons encode self-motion signals computed by feeding vestibular signals through an internal model (Figure 1A), yet during active motion, efference copies of motor commands, also transformed by an internal model (Figure 1C), attenuate the responses of these same neurons. Thus, a highly influential interpretation is that the elaborate internal model characterized with passive stimuli would only be useful in situations that involve unexpected (passive) movements, but would be unused during normal activities, because either its input or its output (Figure 1—figure supplement 1) would be suppressed during active movement. Here, we propose an alternative: that the internal model that processes vestibular signals (Figure 1A) and the internal model that generates sensory predictions during active motion (Figure 1C) are identical. In support of this theory, we show that the processing of motor commands must involve an internal model of the physical properties of the vestibular sensors, identical to the computations described during passive motion; otherwise, accurate self-motion estimation would be severely compromised during actively generated movements.
The essence of the theory developed previously for passive movements is that the brain uses an internal representation of the laws of physics and of sensory dynamics (elegantly formalized as forward internal models of the sensors) to process vestibular signals. In contrast, although it is understood that transforming head motor commands into sensory predictions is also likely to involve internal models, no explicit mathematical implementation has ever been proposed to explain the response attenuation in central vestibular areas. A survey of the many studies by Cullen and colleagues even raises questions about the origin and function of the sensory signals that cancel vestibular afferent activity: early studies emphasized a critical role of neck proprioception in gating the cancellation signal (Figure 1B; Roy and Cullen, 2004), whereas follow-up studies proposed that the brain computes sensory prediction errors, without ever specifying whether the implicated forward internal models involve vestibular or proprioceptive cues (Figure 1C; Brooks et al., 2015). This lack of quantitative analysis has obscured a simple solution: transforming motor commands into sensory predictions requires exactly the same forward internal model that has been used to model passive motion. We show that all previous experimental findings during both active and passive movements can be explained by a single sensory internal model that is used to generate optimal estimates of self-motion (Figure 1D, ‘Kalman filter’). Because we focus on sensory predictions and self-motion estimation, we do not model in detail the motor control aspects of head movements, and we treat the proprioceptive gating mechanism as a switch external to the Kalman filter, similar to previous studies (Figure 1D, black dashed lines and red switch).
We use the framework of the Kalman filter (Figure 1D; Figure 1—figure supplement 2; Kalman, 1960), which represents the simplest and most commonly used mathematical technique to implement statistically optimal dynamic estimation and explicitly computes sensory prediction errors. We build a quantitative Kalman filter that integrates motion signals originating from motor, canal, otolith, vision and neck proprioceptor signals during active and passive rotations, tilts and translations. We show how the same internal model must process both active and passive motion stimuli, and we provide quantitative simulations that reproduce a wide range of behavioral and neuronal responses, while simultaneously demonstrating that the alternative models (Figure 1—figure supplement 1) do not. These simulations also generate testable predictions, in particular which passive stimuli should induce sensory errors and which should not, that may motivate future studies and guide interpretation of experimental findings. Finally, we summarize these internal model computations into a schematic diagram, and we discuss how various populations of brainstem and cerebellar neurons may encode the underlying sensory error or feedback signals.
The structure of the Kalman filter in Figure 1D is shown in greater detail in Figure 1—figure supplement 2 and described in Materials and methods. In brief, a Kalman filter (Kalman, 1960) is based on a forward model of a dynamical system, defined by a set of state variables X that are driven by their own dynamics, motor commands and internal or external perturbations. A set of sensors, grouped in a variable S, provides sensory signals that reflect a transformation of the state variables. Note that S may provide ambiguous or incomplete information, since some sensors may measure a mixture of state variables, and some variables may not be measured at all.
The Kalman filter uses the available information to track an optimal internal estimate of the state variable X. At each time t, the Kalman filter computes a preliminary estimate (also called a prediction, X_p) and a corresponding predicted sensory signal S_p. In general, the resulting state prediction and the predicted sensory signal may differ from the real values X and S. These errors are reduced using sensory information, as follows (Figure 1—figure supplement 2B): first, the prediction S_p and the sensory input S are compared to compute a sensory error δS = S − S_p. Second, sensory errors are transformed into a feedback K·δS, where K is a matrix of feedback gains, whose dimensionality depends on both the state variable and the sensory inputs. Thus, the improved estimate at time t is X_f = X_p + K·δS. The feedback gain matrix K determines how sensory errors improve the final estimate (see Supplementary methods, ‘Kalman filter algorithm’ for details).
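The prediction–error–feedback cycle just described can be sketched in a few lines of linear algebra (a minimal illustration of the generic algorithm, not the paper's actual code; the matrix names A, B, H and the fixed gain K are our own):

```python
import numpy as np

def kalman_step(x_prev, u, z, A, B, H, K):
    """One cycle of the estimator described above (illustrative notation).

    x_prev : previous state estimate
    u      : motor command
    z      : sensory signal S
    A, B   : internal (forward) model of the state dynamics
    H      : internal model of the sensors (state -> sensory prediction)
    K      : matrix of feedback (Kalman) gains
    """
    x_pred = A @ x_prev + B @ u      # preliminary estimate X_p
    s_pred = H @ x_pred              # predicted sensory signal S_p
    delta_s = z - s_pred             # sensory error dS = S - S_p
    x_final = x_pred + K @ delta_s   # improved estimate X_f = X_p + K*dS
    return x_final, delta_s
```

With an accurate motor command and no perturbation, the sensory error term is zero and the prediction alone carries the estimate, mirroring the active-motion simulations below.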
Figure 2 applies this framework to the problem of estimating self-motion (rotation, tilt and translation) using vestibular sensors, with two types of motor commands: angular velocity (Ω_m) and translational acceleration (A_m), with corresponding unpredicted inputs, Ω_u and A_u (Figure 2A), that represent passive motion or motor error (see Discussion: ‘Role of the vestibular system during active motion: fundamental, ecological and clinical implications’). The sensory signals (S) we consider initially encompass the semicircular canals (rotation sensors that generate a sensory signal V) and the otolith organs (linear acceleration sensors that generate a sensory signal F) – proprioception is also added in subsequent sections. Each of these sensors has distinct properties, which can be accounted for by the internal model of the sensors. The semicircular canals exhibit high-pass dynamic properties, which are modeled by another state variable C, such that V = Ω − C (see Supplementary methods, ‘Model of head motion and vestibular sensors’). The otolith sensors exhibit negligible dynamics, but are fundamentally ambiguous: they sense gravitational as well as linear acceleration – a fundamental ambiguity resulting from Einstein’s equivalence principle (Einstein, 1907; modeled here as F = G + A, where G is tilt position relative to gravity; note that G and A are expressed in comparable units; see Materials and methods, 'Simulation parameters'). Thus, in total, the state variable X has 4 degrees of freedom (Figure 2A): angular velocity Ω and linear acceleration A (which are the input/output variables directly controlled), as well as C (a hidden variable that must be included to model the dynamics of the semicircular canals) and tilt position G (another hidden variable that depends on rotations Ω, necessary to model the sensory ambiguity of the otolith organs).
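The two sensor models can be sketched in discrete time as follows (a toy version; the 4 s canal time constant is an illustrative value, not necessarily the one fitted in the paper):

```python
DT = 0.01          # simulation time step (s)
TAU_CANAL = 4.0    # assumed canal time constant (illustrative)

def canal_update(c, omega):
    """Low-pass state C tracks head velocity Omega; the canal afferent
    signal V = Omega - C is therefore a high-pass version of Omega."""
    c += DT / TAU_CANAL * (omega - c)
    return c, omega - c

def otolith_signal(g, a):
    """Otoliths sense gravito-inertial acceleration: F = G + A. Tilt G and
    translation A are indistinguishable (equivalence principle)."""
    return g + a
```

A constant-velocity rotation drives C toward the head velocity, so the canal signal decays toward zero; the otolith signal confounds tilt and translation by construction.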
The Kalman filter computes optimal estimates Ω_f, A_f, C_f and G_f based on motor commands and sensory signals. Note that we do not introduce any tilt motor command, as tilt is assumed to be controlled only indirectly through rotation commands (Ω_m). For simplicity, we restrict self-motion to a single axis of rotation (e.g. roll) and a single axis of translation (inter-aural). The model can simulate either rotations in the absence of head tilt (e.g. rotations around an earth-vertical axis: EVAR, Figure 2B) or tilt (Figure 2C, where tilt is the integral of rotation velocity, G = ∫Ω·dt) using a switch (but see Supplementary methods, ‘Three-dimensional Kalman filter’ for a 3D model). Sensory errors are used to correct internal motion estimates through the Kalman gain matrix, such that the Kalman filter as a whole performs optimal estimation. In theory, the Kalman filter includes a total of eight feedback signals, corresponding to the combinations of two sensory errors (canal, δV, and otolith, δF) and four internal states (Ω, A, C and G). Of those eight feedback signals, two are always negligible (Table 2; see also Supplementary methods, ‘Kalman feedback gains’).
We will show how this model performs optimal estimation of self-motion using motor commands and vestibular sensory signals in a series of increasingly complex simulations. We start with a very short (0.2 s) EVAR stimulus, where canal dynamics are negligible (Figure 3), followed by a longer EVAR that highlights the role of an internal model of the canals (Figure 4). Next, we consider the more complex tilt and translation movements that require all four state variables, to demonstrate how canal and otolith errors interact to disambiguate otolith signals (Figures 5 and 6). Finally, we extend our model to simulate independent movement of the head and trunk by incorporating neck proprioceptive sensory signals (Figure 7). For each motion paradigm, identical active and passive motion simulations are shown side by side to demonstrate how the internal model integrates sensory information and motor commands. We show that the Kalman feedback plays a preeminent role, which explains why substantial neural machinery is devoted to its implementation (see Discussion). For convenience, all mathematical notations are summarized in Table 1. For Kalman feedback gain nomenclature and numerical values, see Table 2.
In Figure 3, we simulate rotations around an earth-vertical axis (Figure 3A) with a short duration (0.2 s, Figure 3B), chosen to minimize canal dynamics (C ≈ 0, Figure 3B, cyan) such that the canal response matches the velocity stimulus (V ≈ Ω, compare magenta curve in Figure 3C with blue curve in Figure 3B). We simulate active motion (Figure 3D–K, left panels), where Ω_m = Ω (Figure 3D) and Ω_u = 0 (not shown), as well as passive motion (Figure 3D–K, right panels), where Ω_m = 0 (Figure 3D) and Ω_u = Ω (not shown). The rotation velocity stimulus (Ω, Figure 3E, blue) and canal activation (V, Figure 3F, magenta) are identical in both active and passive stimulus conditions. As expected, the final velocity estimate Ω_f (output of the filter, Figure 3G, blue) is equal to the stimulus (Figure 3E, blue) during both passive and active conditions. Thus, this first simulation is meant to emphasize differences in the flow of information within the Kalman filter, rather than differences in performance between passive and active motion (which is identical).
The fundamental difference between active and passive motion resides in the prediction of head motion (Figure 3H) and sensory canal signals (Figure 3I). During active motion, the motor command (Figure 3D) is converted into a predicted rotation Ω_p (Figure 3H) by the internal model, and in turn into a predicted canal signal V_p (Figure 3I). Of course, in this case, we have purposely chosen the rotation stimulus to be so short (0.2 s) that canal afferents reliably encode the rotation stimulus (V ≈ Ω; compare Figure 3F and E, left panels) and the internal model of canal dynamics has a negligible contribution; that is, V_p ≈ Ω_p (compare Figure 3I and H, left panels). Because the canal sensory error is null, that is δV = V − V_p ≈ 0 (Figure 3K, left panel), the Kalman feedback pathway remains silent (not shown) and the net motion estimate is unchanged compared to the prediction, that is, Ω_f = Ω_p. In conclusion, during active rotation (and in the absence of perturbations, motor or sensory noise), motion estimates are generated entirely by an accurate predictive process, which in turn leads to an accurate prediction of canal afferent signals. In the absence of sensory mismatch, these estimates don’t require any further adjustment.
In contrast, during passive motion the predicted rotation is null (Ω_p = 0, Figure 3H, right panel), and therefore the predicted canal signal is also null (V_p = 0, Figure 3I, right panel). Consequently, canal signals during passive motion generate a sensory error δV = V (Figure 3K, right panel). This sensory error is converted into a feedback signal (Figure 3J) with a Kalman gain k_ΩV (feedback from canal error δV to angular velocity estimate Ω_f) that is close to 1 (Table 2; note that this value represents an optimum and is computed by the Kalman filter algorithm). The final motion estimate is generated by this feedback, that is Ω_f = k_ΩV·δV ≈ Ω.
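The two routes to the final estimate can be caricatured with scalars (a toy sketch under the short-stimulus assumption V = Ω; the unity feedback gain stands in for the near-unity optimal gain in Table 2):

```python
def estimate_rotation(omega, motor_command, k_gain=1.0):
    """Toy scalar version of the short-rotation case (no canal dynamics).

    Active motion:  motor_command == omega -> the prediction carries the
                    estimate and the sensory error node is silent.
    Passive motion: motor_command == 0    -> the feedback carries the
                    estimate, driven by the canal error.
    """
    v = omega                   # canal signal (short stimulus: V = Omega)
    omega_pred = motor_command  # predicted rotation from the motor command
    v_pred = omega_pred         # predicted canal signal
    delta_v = v - v_pred        # canal sensory error
    omega_final = omega_pred + k_gain * delta_v
    return omega_final, delta_v
```

In both cases the final estimate equals the true velocity; only the node that carries it differs.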
These results illustrate the fundamental rules of how active and passive motion signals are processed by the Kalman filter (and, as hypothesized, the brain). During active movements, motion estimates are generated by a predictive mechanism, where motor commands are fed into an internal model of head motion. During passive movement, motion estimates are formed based on feedback signals that are themselves driven by sensory canal signals. In both cases, specific nodes in the network are silent (e.g. predicted canal signal during passive motion, Figure 3I; canal error signal during active motion, Figure 3K), but the same network operates in unison under all stimulus conditions. Thus, depending on whether the neuron recorded by a microelectrode in the brain carries predicted, actual or error sensory signals, differences in neural response modulation are expected between active and passive head motion. For example, if a cell encodes canal error exclusively, it will show maximal modulation during passive rotation, and no modulation at all during active head rotation. If a cell encodes mixtures of canal sensory error and actual canal sensory signals (e.g. through a direct canal afferent input), then there will be non-zero, but attenuated, modulation during active, compared to passive, head rotation. Indeed, a range of response attenuation has been reported in the vestibular nuclei (see Discussion).
We emphasize that in Figure 3 we chose a very short-duration (0.2 s) motion profile, for which semicircular canal dynamics are negligible and the sensor can accurately follow the rotation velocity stimulus. We now consider more realistic rotation durations, and demonstrate how predictive and feedback mechanisms interact for accurate self-motion estimation. Specifically, canal afferent signals attenuate (because of their dynamics) during longer duration rotations – and this attenuation is already sizable for rotations lasting 1 s or longer. We next demonstrate that the internal model of canal dynamics must be engaged for accurate rotation estimation, even during purely actively generated head movements.
We now simulate a longer head rotation, lasting 2 s (Figure 4A,B, blue). The difference between the actual head velocity and the average canal signal is modeled as an internal state variable C, which follows low-pass dynamics (see Supplementary methods, ‘Model of head motion and vestibular sensors’). At the end of the 2 s rotation, the value of C reaches its peak at ~40% of the rotation velocity (Figure 4B, cyan); accordingly, the afferent canal signal V = Ω − C decreases by a corresponding amount (Figure 4C). Note that C persists when the rotation stops, matching the canal aftereffect (V = −C after t > 2 s). Next, we demonstrate how the Kalman filter uses the internal variable C to compensate for canal dynamics.
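The ~40% figure is what a first-order low-pass state with a time constant of about 4 s (an assumed, illustrative value) reaches after a 2 s velocity step: 1 − e^(−2/4) ≈ 0.39. A quick check:

```python
from math import exp

TAU = 4.0   # assumed canal time constant (s), for illustration only

def canal_state_fraction(t, tau=TAU):
    """Fraction of the rotation velocity reached by the low-pass state C
    after a velocity step of duration t (first-order dynamics)."""
    return 1.0 - exp(-t / tau)

# After a 2 s rotation, C has reached ~39% of Omega, so the canal signal
# V = Omega - C has decayed by the same amount; when the rotation stops,
# V = -C (the aftereffect).
```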
During active motion, the motor command Ω_m (Figure 4D) is converted into an accurate prediction of head velocity Ω_p (Figure 4H, blue). Furthermore, Ω_p is also fed through the internal model of the canals to predict C_p (Figure 4H, cyan). By combining the predicted internal state variables Ω_p and C_p, the Kalman filter computes a canal prediction V_p = Ω_p − C_p that follows the same dynamics as V (compare Figure 4F and I, left panels). Therefore, as in Figure 3, the resulting sensory mismatch is δV = 0 and the final estimates (Figure 4G) are identical to the predicted estimates (Figure 4H). Thus, the Kalman filter maintains an accurate rotation estimate by feeding motor commands through an internal model of the canal dynamics. Note, however, that because in this case V ≠ Ω (compare magenta curve in Figure 4F and blue curve in Figure 4E, left panels), V_p ≠ Ω_p (compare magenta curve in Figure 4I and blue curve in Figure 4H, left panels). Thus, the sensory mismatch can only be null under the assumption that motor commands have been processed through the internal model of the canals. But before we elaborate on this conclusion, let’s first consider passive stimulus processing.
During passive motion, the motor command Ω_m is equal to zero. First, note that the final estimate Ω_f is accurate (Figure 4G), as in Figure 3G, although canal afferent signals don’t encode Ω accurately. Second, note that the internal estimate of canal dynamics C_f (Figure 4G) and the corresponding prediction (C_p; Figure 4H) are both accurate (compare with Figure 4E). This occurs because the canal error δV (Figure 4K) is converted into a second feedback, k_CV·δV (Figure 4J, cyan), which updates the internal estimate C_f (see Supplementary methods, ‘Velocity Storage’). Finally, in contrast to Figure 3, the canal sensory error δV (Figure 4K) does not follow the same dynamics as V (Figure 4C,F), but is (as it should be) equal to Ω (Figure 4B). This happens because, through a series of steps (V_p = −C_p in Figure 4I and δV = V − V_p in Figure 4K), C_p is added to the vestibular signal to compute δV = V + C_p ≈ Ω. This leads to the final estimate Ω_f = k_ΩV·δV ≈ Ω (Figure 4G). Model simulations during even longer duration rotations and visual-vestibular interactions are illustrated in Figure 4—figure supplement 1. Thus, the internal model of canal dynamics improves the rotation estimate during passive motion. Remarkably, this is important not only during very long duration rotations (as is often erroneously presumed), but also during short stimuli lasting 1–2 s, as illustrated by the simulations in Figure 4.
We now return to actively generated head rotations to ask an important question: what would happen if the brain didn’t use an internal model of canal dynamics? We simulated motion estimation where canal dynamics were removed from the internal model used by the Kalman filter (Figure 4—figure supplement 2). During both active and passive motion, the net estimate Ω_f is inaccurate, as it parallels V, exhibiting a decrease over time and an aftereffect. In particular, during active motion, the motor commands provide accurate signals Ω_p, but the internal model of the canals fails to convert them into a correct prediction V_p, resulting in a sensory mismatch. This mismatch is converted into a feedback signal that degrades the accurate prediction, such that the final estimate is inaccurate. These simulations highlight the role of the internal model of canal dynamics, which continuously integrates rotation information in order to anticipate canal afferent activity during both active and passive movements. Without this sensory internal model, active movements would result in sensory mismatch, and the brain could either transform this mismatch into sensory feedback, resulting in inaccurate motion estimates, or ignore it and lose the ability to detect externally generated motion or movement errors. Note that the impact of canal dynamics is significant even during natural short-duration and high-velocity head rotations (Figure 4—figure supplement 3). Thus, even though particular nodes (neurons) in the circuit (e.g. vestibular and rostral fastigial nuclei cells, presumably reflecting either δV or k_ΩV·δV in Figures 3 and 4; see Discussion) are attenuated or silent during active head rotations, efference copies of motor commands must always be processed through the internal model of the canals – motor commands cannot directly drive appropriate sensory prediction errors.
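The cost of dropping the internal canal model during an active rotation can be sketched numerically (an illustrative first-order canal with an assumed 4 s time constant and a feedback gain of 1; not the paper's simulation code):

```python
DT, TAU = 0.01, 4.0   # time step (s) and assumed canal time constant (s)

def simulate_active_rotation(duration=2.0, omega=1.0, model_canals=True):
    """Active rotation at constant velocity; returns the final velocity
    estimate at the end of the movement.

    model_canals=True : the motor command is passed through the internal
        canal model, so the canal prediction matches the afferent signal
        and the feedback stays silent.
    model_canals=False: the canal prediction equals the predicted velocity,
        so the mismatch is fed back and corrupts the (correct) prediction.
    """
    c = 0.0        # real canal low-pass state
    c_hat = 0.0    # internal estimate of the canal state
    est = 0.0
    for _ in range(int(duration / DT)):
        c += DT / TAU * (omega - c)
        v = omega - c                 # real canal afferent signal
        omega_pred = omega            # accurate motor-based prediction
        if model_canals:
            c_hat += DT / TAU * (omega_pred - c_hat)
            v_pred = omega_pred - c_hat
        else:
            v_pred = omega_pred       # no internal canal dynamics
        est = omega_pred + 1.0 * (v - v_pred)   # feedback gain ~ 1
    return est
```

With the internal canal model, the sensory prediction cancels the afferent signal exactly and the estimate stays at the true velocity; without it, the mismatch feeds back and the estimate decays along with the canal signal.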
This insight has remained largely unappreciated by studies comparing how central neurons modulate during active and passive rotations – a misunderstanding that has created an artificial dichotomy, obscuring important insights gained from decades of studies using passive motion stimuli (see Discussion).
Next, we study the interactions between rotation, tilt and translation perception. We first simulate a short duration (0.2 s) roll tilt (Figure 5A; with a positive tilt velocity Ω, Figure 5B, blue). Tilt position (G, Figure 5B, green) ramps during the rotation and then remains constant. As in Figure 3, canal dynamics are negligible (V ≈ Ω; Figure 5F, magenta) and the final rotation estimate is accurate (Figure 5G, blue). Also similar to Figure 3, the rotation estimate Ω_f is carried by the predicted head velocity node during active motion (Ω_f = Ω_p; δV ≈ 0) and by the Kalman feedback node during passive motion (Ω_p = 0; Ω_f = k_ΩV·δV). That is, the final rotation estimate, which is accurate during both active and passive movements, is carried by different nodes (thus, likely different cell types; see Discussion) within the neural network.
When rotations change head orientation relative to gravity, another internal state (tilt position G, not included in the simulations of Figures 3 and 4) and another sensor (otolith organs; F = G since A = 0 in this simulation; Figure 5F, black) are engaged. During actively generated tilt movements, the rotation motor command (Ω_m) is temporally integrated by the internal model (see Supplementary methods, ‘Kalman filter algorithm developed’), generating an accurate prediction of head tilt G_p (Figure 5H, left panel, green). This results in a correct prediction of the otolith signal F_p (Figure 5I, grey) and therefore, as in previous simulations of active movement, the sensory mismatches for both the canal and otolith signals (δV and δF; Figure 5L, magenta and gray, respectively) and the feedback signals (not shown) are null, and the final estimates, driven exclusively by the prediction, are accurate: Ω_f = Ω and G_f = G.
During passive tilt, the canal error, δV, is converted into Kalman feedback that updates Ω_f (Figure 5K, blue) and C_f (not shown here; but see Figure 5—figure supplement 1 for 2 s tilt simulations), as well as the two other state variables (G and A). Specifically, the feedback from δV to G (k_GV·δV) updates the predicted tilt and is temporally integrated by the Kalman filter (see Supplementary methods, ‘Passive Tilt’; Figure 5K, green). The feedback signal from δV to A has a minimal impact, as illustrated in Figure 5K, red (see also Supplementary methods, ‘Kalman feedback gains’ and Table 2).
Because k_GV·δV efficiently updates the tilt estimate G_f, the otolith error δF is close to zero during passive tilt (Figure 5L, gray; see Supplementary methods, ‘Passive Tilt’) and therefore all feedback signals originating from δF (Figure 5J) play a minimal role (see Supplementary methods, ‘Passive Tilt’) during pure tilt (this is the case even for longer duration stimuli; Figure 5—figure supplement 1). This simulation highlights that, although tilt is sensed by the otoliths, passive tilt doesn’t induce any sizeable otolith error. Thus, unlike neurons tuned to canal error, the model predicts that cells tuned to otolith error will not modulate during either passive or actively generated head tilt. Therefore, cells tuned to otolith error would respond primarily during translation, and not during tilt; thus, they would be identified as ‘translation-selective’. Furthermore, the model predicts that neurons tuned to passive tilt (e.g. Purkinje cells in the caudal cerebellar vermis; Laurens et al., 2013b) likely reflect a canal error that has been transformed into a tilt velocity error (Figure 5L, magenta). Thus, the model predicts that tilt-selective Purkinje cells should encode tilt velocity, and not tilt position, a prediction that remains to be tested experimentally (see Discussion).
Next, we simulate a brief translation (Figure 6). During active translation, we observe, as in previous simulations of active movements, that the predicted head motion matches the sensory signals (otolith, in this case: F_p = F and therefore δF = 0). Therefore, as in previous simulations of active motion, the sensory prediction error is zero (Figure 6L) and the final estimate is equal to, and driven by, the prediction (A_f = A_p; Figure 6G, red).
During passive translation, the predicted acceleration is null (A_p = 0, Figure 6H, red), similar to passive rotation in Figures 3 and 4. However, a sizeable tilt signal (G_f and G_p, Figure 6G,H, green) develops over time. This (erroneous) tilt estimate can be explained as follows: soon after translation onset (vertical dashed lines in Figure 6B–J), G_p is close to zero. The corresponding predicted otolith signal is also close to zero (F_p = G_p + A_p ≈ 0), leading to an otolith error δF ≈ A (Figure 6L, right, gray). Through the Kalman feedback gain matrix, this otolith error, δF, is converted into: (1) an acceleration feedback k_AF·δF (Figure 6J, red), with a gain k_AF close to unity (indicating that otolith errors are interpreted as acceleration: A_f ≈ δF; note, however, that the otolith error vanishes over time, as explained next); and (2) a tilt feedback k_GF·δF (Figure 6J, green), with a small gain k_GF. This tilt feedback, although too weak to have any immediate effect, is integrated over time (see Figure 5 and Supplementary methods, ‘Somatogravic effect’), generating the rising tilt estimates G_f (Figure 6G, green) and G_p (Figure 6H, green).
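The build-up of the erroneous tilt estimate can be sketched as a leaky integration of the otolith error through a small tilt gain (the gain values are illustrative, not the optimal ones computed by the Kalman filter):

```python
DT = 0.01
K_AF = 0.9     # acceleration feedback gain (close to unity; illustrative)
K_GF = 0.002   # tilt feedback gain per step (small; illustrative)

def simulate_passive_translation(duration=20.0, accel=1.0):
    """Sustained passive linear acceleration with no rotation cue.
    The otolith error dF is first read as acceleration (K_AF ~ 1), but the
    small tilt feedback K_GF integrates it into a growing tilt estimate;
    as the tilt estimate grows, the predicted otolith signal grows and dF
    (hence the acceleration estimate) fades: the somatogravic effect."""
    g_hat, a_hat = 0.0, 0.0
    for _ in range(int(duration / DT)):
        f = accel              # otolith signal F = G + A, with true G = 0
        f_pred = g_hat         # predicted otolith signal (A_p = 0)
        delta_f = f - f_pred   # otolith error
        a_hat = K_AF * delta_f # acceleration feedback
        g_hat += K_GF * delta_f  # tilt feedback, integrated over time
    return g_hat, a_hat
```

After 20 s of sustained acceleration, the tilt estimate dominates and the acceleration estimate has nearly vanished, qualitatively matching the somatogravic illusion.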
The fact that the Kalman feedback from the otolith error δF to the internal tilt state G generates the somatogravic effect is illustrated in Figure 6—figure supplement 1, where a longer acceleration (20 s) is simulated. At the level of final estimates (perception), these simulations predict the occurrence of tilt illusions during sustained translation (somatogravic illusion; Graybiel, 1952; Paige and Seidman, 1999). Further simulations show how activation of the semicircular canals without a corresponding activation of the otoliths (e.g. during combinations of tilt and translation; Angelaki et al., 2004; Yakusheva et al., 2007) leads to an otolith error (Figure 6—figure supplement 2), and how signals from the otoliths (which sense indirectly whether or not the head rotates relative to gravity) can also influence the rotation estimate at low frequencies (Figure 6—figure supplement 3; this property has been extensively evaluated by Laurens and Angelaki, 2011). These simulations demonstrate that the Kalman filter model efficiently reproduces the previously described properties of both perception and neural responses during passive tilt and translation stimuli (see Discussion).
The model analyzed so far has considered only vestibular sensors. However, active head rotations often also activate neck proprioceptors, whenever the head rotates relative to the trunk. Indeed, a number of studies (Kleine et al., 2004; Brooks and Cullen, 2009; 2013; Brooks et al., 2015) have identified neurons in the rostral fastigial nuclei that encode the rotation velocity of the trunk. These neurons receive convergent signals from the semicircular canals and neck muscle proprioceptors and, accordingly, are named ‘bimodal neurons’, in contrast to ‘unimodal neurons’, which encode passive head velocity. Because bimodal neurons do not respond to active head and trunk movements (Brooks and Cullen, 2013; Brooks et al., 2015), they likely encode feedback signals related to trunk velocity. We developed a variant of the Kalman filter to model both unimodal and bimodal neuron types (Figure 7; see also Supplementary methods and Figure 7—figure supplements 1–3).
The model tracks the velocity of the trunk in space, the velocity of the head on the trunk, and neck position. Sensory inputs are provided by the canals, which sense total head-in-space velocity, and by proprioceptive signals from the neck musculature, which are assumed to encode neck position (Chan et al., 1987).
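This state and observation structure can be written down compactly. The matrices below are our own small discrete-time sketch (names and values are illustrative, not the paper's notation): the state holds trunk-in-space velocity, head-on-trunk (neck) velocity and neck position; the canals observe total head-in-space velocity, and the proprioceptors observe neck position.

```python
import numpy as np

dt = 0.01
# State x = [trunk-in-space velocity, neck (head-on-trunk) velocity, neck position]
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, dt,  1.0]])   # neck position integrates neck velocity

# Observations: canals sense total head velocity (trunk + neck);
# neck proprioceptors are assumed to sense neck position.
H = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

x = np.array([0.5, -0.2, 0.0])   # example: trunk rotates, head counter-rotates
for _ in range(100):             # propagate 1 s of constant-velocity motion
    x = A @ x
y = H @ x                        # predicted canal and proprioceptive signals
```

In a full filter, `A` and `H` would be used both to propagate the internal state and to compute the sensory predictions from which the errors are formed.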
In line with the simulations presented above, we find that, during active motion, the predicted sensory signals are accurate. Consequently, the Kalman feedback pathways are silent (Figure 7—figure supplement 1–3; active motion is not shown in Figure 7). In contrast, passive motion induces sensory errors and Kalman feedback signals. The velocity feedback signals (elaborated in Figure 7—figure supplement 1–3) have been re-plotted in Figure 7, where we illustrate head in space (blue), trunk in space (gray), and head on trunk (red) velocity (neck position feedback signals are only shown in Figure 7—figure supplement 1–3).
During passive rotation of the whole head and trunk, where the trunk rotates in space (Figure 7A, Real motion, gray) and the head moves together with the trunk (head-on-trunk velocity, red; head-in-space velocity, blue), we find that the resulting feedback signals accurately encode these rotation components (Figure 7A, Velocity Feedback; see also Figure 7—figure supplement 1). During head-on-trunk rotation (Figure 7B, Figure 7—figure supplement 2), the Kalman feedback signals accurately encode the head-on-trunk (red) and head-in-space (blue) rotation, as well as the absence of trunk-in-space rotation (gray). Finally, during trunk-under-head rotation, which simulates a rotation of the trunk while the head remains fixed in space (resulting in a neck counter-rotation), the various motion components are again accurately encoded by the Kalman feedback (Figure 7C, Figure 7—figure supplement 3). We propose that the unimodal and bimodal neurons reported by Brooks and Cullen (2009; 2013) encode feedback signals about the velocity of the head in space (Figure 7, blue) and of the trunk in space (Figure 7, gray), respectively. Furthermore, in line with experimental findings (Brooks and Cullen, 2013), these feedback pathways are silent during self-generated motion.
The Kalman filter makes further predictions that are entirely consistent with experimental results. First, it predicts that proprioceptive error signals during passive neck rotation encode velocity (Figure 7—figure supplement 3L; see Supplementary methods, ‘Feedback signals during neck movement’). Thus, the Kalman filter explains the striking result that the proprioceptive responses of bimodal neurons encode trunk velocity (Brooks and Cullen, 2009; 2013), even though neck proprioceptors encode neck position. Note that neck proprioceptors likely encode a mixture of neck position and velocity at high frequencies (Chan et al., 1987; Mergner et al., 1991); additional simulations (not shown) based on this hypothesis yield results similar to those shown here. For simplicity, we used a model in which neck proprioceptors encode position only, in order to demonstrate that Kalman feedback signals encode trunk velocity even when the underlying proprioceptive signals encode position.
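Why a position-coding sensor yields a velocity-coding error signal can be illustrated with a toy tracking loop (our simplification, not the paper's filter; the gain `k` is an arbitrary near-unity per-step value): when the internal model corrects its neck-position estimate almost completely at each time step, the residual error at the next step is dominated by the unpredicted change in position, which is proportional to neck velocity.

```python
import math

dt = 0.01
k = 0.9                  # assumed near-unity per-step position correction gain
p_hat = 0.0              # internal estimate of neck position
errors, velocities = [], []

for i in range(1000):                                     # 10 s of 1 Hz motion
    t = i * dt
    v = math.sin(2 * math.pi * t)                         # passive neck velocity
    p = (1 - math.cos(2 * math.pi * t)) / (2 * math.pi)   # neck position (integral of v)
    err = p - p_hat                                       # proprioceptive (position) error
    errors.append(err)
    velocities.append(v)
    p_hat += k * err                                      # feedback update of the estimate

# Cosine similarity between the error trace and the velocity trace:
corr = sum(e * v for e, v in zip(errors, velocities)) / (
    math.sqrt(sum(e * e for e in errors)) * math.sqrt(sum(v * v for v in velocities)))
# corr is close to 1: the residual position error effectively encodes velocity.
```

The same logic holds whatever the stimulus waveform, which is why bimodal neurons can encode trunk velocity even if their proprioceptive input encodes position.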
Second, the model predicts another important property of bimodal neurons: their response gains to both vestibular (during sinusoidal motion of the head and trunk together) and proprioceptive (during sinusoidal motion of the trunk when the head is stationary) stimulation vary identically if a constant rotation of the head relative to the trunk is added, as an offset, to the sinusoidal motion (Brooks and Cullen, 2009). We propose that this offset head rotation extends or contracts individual neck muscles and affects the signal-to-noise ratio of neck proprioceptors. Indeed, simulations shown in Figure 7—figure supplement 4 reproduce the effect of head rotation offset on bimodal neurons. In agreement with experimental findings, we also find that simulated unimodal neurons are not affected by these offsets (Figure 7—figure supplement 4).
Finally, the model also predicts the dynamics of trunk and head rotation perception during long-duration rotations (Figure 7—figure supplement 5), which has been established by behavioral studies (Mergner et al., 1991).
The theoretical framework of the Kalman filter asserts that the brain uses a single internal model to process copies of motor commands and sensory signals. But could alternative computational schemes, involving distinct internal models for motor and sensory signals, explain neuronal and behavioral responses during active and passive motions? Here, we consider three possibilities, illustrated in Figure 1—figure supplement 1. First, that the brain computes head motion based on motor commands only and suppresses vestibular sensory inflow entirely during active motion (Figure 1—figure supplement 1A). Second, that a ‘motor’ internal model and a ‘sensory’ internal model run in parallel, and that central neurons encode the difference between their outputs – which would represent a motion prediction error instead of a sensory prediction error, as proposed by the Kalman filter framework (Figure 1—figure supplement 1B). Third, that the brain computes sensory prediction errors based on sensory signals and the output of the ‘motor’ internal model, and then feeds these errors into the ‘sensory’ internal model (Figure 1—figure supplement 1C).
We first consider the possibility that the brain simply suppresses vestibular sensory inflow. Experimental evidence against this alternative comes from recordings performed when passive motion is applied concomitantly with an active movement (Brooks and Cullen, 2013; 2014; Carriot et al., 2013). Indeed, neurons that respond during passive but not active motion have been found to encode the passive component of combined passive and active motions, as expected under the Kalman framework. We present corresponding simulation results in Figure 8. We simulate a rotation movement (Figure 8A), in which an active rotation (Gaussian velocity profile) is combined with a passive rotation (trapezoidal profile); a tilt movement (Figure 8B, using similar velocity inputs for the active and passive tilt components); and a translation movement (Figure 8C). We find that, in all simulations, the final motion estimate (Figure 8D–F) matches the combined active and passive motion. In contrast, the Kalman feedback signals (Figure 8G–I) encode specifically the passive motion components: the rotation feedback (Figure 8G) is identical to the passive rotation (Figure 8A); as in Figure 5, the tilt feedback (Figure 8H) encodes the passive tilt velocity (Figure 8A); and the linear acceleration feedback (Figure 8I) follows the passive acceleration component, although it decreases slightly over time because of the somatogravic effect. Thus, Kalman filter simulations confirm that neurons that encode sensory mismatch or Kalman feedback should selectively follow the passive component of combined passive and active motions.
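The isolation of the passive component by the sensory error reduces to simple arithmetic once the sensors are idealized. The sketch below uses made-up velocity profiles and perfect canals, so it omits canal dynamics, velocity storage and the somatogravic effect; it only illustrates the principle:

```python
import numpy as np

t = np.arange(0.0, 4.0, 0.01)
# Active rotation: Gaussian velocity profile (deg/s); passive: trapezoid.
active = 30.0 * np.exp(-((t - 1.0) ** 2) / 0.05)
passive = 20.0 * np.clip(np.minimum(t - 2.0, 3.5 - t) / 0.25, 0.0, 1.0)

canal = active + passive      # idealized canal signal: total head velocity
predicted = active            # sensory prediction from motor efference copy
feedback = canal - predicted  # sensory error, relayed as Kalman feedback

# The feedback isolates the passive component exactly in this idealization.
```

Because the motor-based prediction accounts only for the active component, whatever remains in the sensory error is the passive component, which is what passive-motion-selective neurons are observed to encode.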
What would happen if, instead of computing sensory prediction errors, the brain simply discarded vestibular sensory (or feedback) signals during active motion? We repeat the simulations of Figure 8A–I after removing the vestibular sensory inputs from the Kalman filter. We find that the net motion estimates encode only the active movement components (Figure 8J–L) and thus fail to estimate the true movement accurately. Furthermore, because the sensory signals are discarded, all sensory errors and Kalman feedback signals are null. These simulations indicate that suppressing vestibular signals during active motion would prevent the brain from detecting passive motion occurring during active movement (see Discussion, ‘Role of the vestibular system during active motion: ecological, clinical and fundamental implications’), in contradiction with experimental results.
Next, we simulate (Figure 9) the alternative model of Figure 1—figure supplement 1B, in which motor commands are used to predict head motion (Figure 9, first row) while sensory signals are used to compute a self-motion estimate (second row). According to this model, these two signals would be compared to compute a motion prediction error instead of a sensory prediction error (third row; presumably represented in the responses of central vestibular neurons). We first simulate short active and passive rotations (Figure 9A,B; same motion as in Figure 3). During active rotation (Figure 9A), both the motor prediction and the sensory self-motion estimate are close to the real motion, and the motion prediction error is therefore null (Figure 9A, third row). In contrast, the sensory estimate is not cancelled during passive rotation, leading to a non-zero motion prediction error (Figure 9B, third row). Thus, the motion prediction errors in Figure 9A,B resemble the sensory prediction errors predicted by the Kalman filter in Figure 3 and may explain neuronal responses recorded during brief rotations.
However, this similarity breaks down when simulating a long-duration active or passive rotation (Figure 9C,D; same motion as in Figure 4—figure supplement 1A,B). The motor prediction of rotation velocity would remain constant during 1 min of active rotation (Figure 9C, first row), whereas the sensory estimate would decrease over time and exhibit an aftereffect (Figure 9C, second row). This would result in a substantial difference between the motor prediction and the sensory estimate (Figure 9C, third row) during active motion. This contrasts with Kalman filter simulations, where no sensory prediction errors occur during active motion.
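The growing mismatch in this parallel-model scheme follows directly from canal dynamics. Below is a first-order sketch (our own; the 6 s time constant is a commonly cited canal afferent value, and the scheme's two outputs are caricatured as 'pure motor' versus 'pure sensory'):

```python
dt = 0.01
tau = 6.0                 # assumed canal time constant (s)
omega = 30.0              # constant-velocity active rotation (deg/s)

canal = omega             # at rotation onset the canal signal jumps to omega
for _ in range(int(60 / dt)):     # 60 s of constant-velocity rotation
    canal += dt * (-canal / tau)  # high-pass decay during constant velocity

motor_prediction = omega          # motor-based estimate stays constant
mismatch = motor_prediction - canal

# After 60 s the canal signal has decayed almost completely, so the
# parallel scheme predicts a large motion prediction error during
# active rotation, unlike the Kalman filter.
```

A sensory estimate built on this decaying signal fades and produces an after-effect, whereas the motor prediction does not, hence the substantial difference between the two during sustained active rotation.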
A similar difference would also be seen during active translation (Figure 9E; same motion as in Figure 6). While the motion prediction (first row) would encode the active translation, the sensory estimate (second row) would be affected by the somatogravic effect (as in Figure 6), which causes the linear acceleration signal (red) to be replaced by a tilt illusion (green), also leading to motion prediction errors (third row). In contrast, the Kalman filter predicts that no sensory prediction error should occur during active translation.
These simulations indicate that processing motor and vestibular information independently would lead to prediction errors that the Kalman filter avoids. Beyond theoretical arguments, this scheme can be rejected based on behavioral responses: both rotation perception and the vestibulo-ocular reflex (VOR) decrease during sustained passive rotations, but persist indefinitely during active rotation (macaques: Solomon and Cohen, 1992; humans: Guedry and Benson, 1983; Howard et al., 1998; Jürgens et al., 1999). In fact, this scheme cannot account for experimental findings under any weighting of the independent motor and sensory estimates into a net self-motion signal (Figure 9, bottom row). For example, if the sensory estimate is weighted 100%, rotation perception would decay during active motion (Figure 9C, bottom, dark blue), inconsistent with experimental results. If the motor prediction is weighted 100%, passive rotations would not be detected at all (Figure 9B,D, light blue). Finally, intermediate solutions (e.g. 50%/50%) would result in undershooting of the steady-state rotation percept during both active (Figure 9C) and passive (Figure 9B,D) rotations. Note also that, in all cases, the rotation after-effect would be identical during active and passive motion (Figure 9C,D, bottom), in contradiction with experimental findings (Solomon and Cohen, 1992; Guedry and Benson, 1983; Howard et al., 1998).
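The weighting argument can be made concrete with idealized steady-state values. The sketch assumes (as the text argues) that the sensory rotation estimate has fully decayed during sustained active rotation, and that the motor prediction is zero during passive rotation:

```python
omega = 30.0   # steady-state rotation velocity (deg/s)

def percepts(w_motor):
    """Steady-state rotation percepts under a fixed motor/sensory weighting."""
    w_sensory = 1.0 - w_motor
    # Sustained active rotation: motor prediction = omega, sensory estimate ~ 0.
    active = w_motor * omega + w_sensory * 0.0
    # Passive rotation: no motor prediction, sensory estimate = omega at onset.
    passive = w_motor * 0.0 + w_sensory * omega
    return active, passive

# No fixed weighting yields veridical perception in both conditions:
results = {w: percepts(w) for w in (0.0, 0.5, 1.0)}
```

Whatever `w_motor` is chosen, at least one of the two percepts undershoots the true velocity, which is the failure mode illustrated in the bottom row of Figure 9.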
Finally, the third alternative scheme (Figure 1—figure supplement 1C), where the sensory prediction error is used to cancel the input of a sensory internal model, is in fact a more complicated version of the Kalman filter. This is because an internal model that processes motor commands to predict sensory signals must necessarily include an internal model of the sensors. Thus, simulations of the model in Figure 1—figure supplement 1C would be identical to those of the Kalman filter, merely re-organizing the sequence of operations and redundantly duplicating some elements, ultimately producing the same results.
We have tested the hypothesis that the brain uses, during active motion, exactly the same sensory internal model computations already discovered using passive motion stimuli (Mayne, 1974; Oman, 1982; Borah et al., 1988; Merfeld, 1995; Zupan et al., 2002; Laurens, 2006; Laurens and Droulez, 2007; Laurens and Droulez, 2008; Laurens and Angelaki, 2011; Karmali and Merfeld, 2012; Lim et al., 2017). The simulations presented here confirm the hypothesis that the same internal model (consisting of forward internal models of the canals, otoliths and neck proprioceptors) can reproduce behavioral and neuronal responses to both active and passive motions. The formalism of the Kalman filter allows predictions of internal variables during both active and passive motions, with a strong focus on sensory error and feedback signals, which we hypothesize are realized in the response patterns of central vestibular neurons.
Perhaps most importantly, this work resolves an apparent paradox in neuronal responses between active and passive movements (Angelaki and Cullen, 2008), by placing them into a unified theoretical framework in which a single internal model tracks head motion based on motor commands and sensory feedback signals. Although particular cell types that encode sensory errors or feedback signals may not modulate during active movements, because the corresponding sensory prediction error is negligible, the internal models of canal dynamics and otolith ambiguity operate continuously to generate the correct sensory prediction during both active and passive movements. Thus, the model presented here should eliminate the misinterpretation that vestibular signals are ignored during self-generated motion, and that internal model computations during passive motion are unimportant for everyday life. We hope that this realization also highlights the relevance and importance of passive motion stimuli, as critical experimental paradigms that can efficiently interrogate the network and unravel computational principles of natural motor activities, which cannot easily be disentangled during active movements.
We have developed the first model that simulates self-motion estimates during both actively generated and passive head movements. This model, summarized schematically in Figure 10, transforms motor commands and Kalman filter feedback signals into internal estimates of head motion (rotation and translation) and predicted sensory signals. There are two important take-home messages: (1) Because of the physical properties of the two vestibular sense organs, the predicted motion generated from motor commands is not equal to the predicted sensory signals (for example, the predicted rotation velocity is processed to account for canal dynamics in Figure 4). Instead, the predicted rotation, tilt and translation signals generated by efference copies of motor commands must be processed by the corresponding forward models of the sensors in order to generate accurate sensory predictions. This important insight about the nature of these internal model computations has not been appreciated in the qualitative schematic diagrams of previous studies. (2) In an environment devoid of externally generated passive motion, motor errors and sensory noise, the resulting sensory predictions would always match sensory afferent signals accurately. In a realistic environment, however, unexpected head motion occurs due to both motor errors and external perturbations (see ‘Role of the vestibular system during active motion: ecological, clinical and fundamental implications’). Sensory vestibular signals are then used to correct internal motion estimates through the computation of sensory errors and their transformation into Kalman feedback signals. Given two sensory errors (one originating from the semicircular canals and one from the otoliths) and four internal state variables (rotation, internal canal dynamics, tilt and linear acceleration), eight feedback signals must be constructed.
However, in practice, two of these signals (the canal-error feedback to the linear acceleration estimate and the otolith-error feedback to the internal canal dynamics state) have negligible influence for all movements (see Table 2 and Supplementary methods, ‘Kalman Feedback Gains’); thus, only six elements are summarized in Figure 10.
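This feedback structure can be organized as a small gain matrix. The numbers below are placeholders chosen only to reflect the qualitative pattern described in the text (near-unity direct gains, small integrative gains, two negligible entries); they are not the paper's fitted Kalman gains.

```python
import numpy as np

# Rows: internal states; columns: sensory errors (canal, otolith).
states = ["rotation", "canal_dynamics", "tilt", "acceleration"]
K = np.array([
    [0.9,  0.05],  # rotation       <- canal (direct pathway) and otolith (low freq.)
    [0.1,  0.0 ],  # canal dynamics <- canal (velocity storage); otolith negligible
    [0.05, 0.03],  # tilt           <- canal (tilt velocity) and otolith (somatogravic)
    [0.0,  0.95],  # acceleration   <- otolith (near unity); canal negligible
])

canal_err, otolith_err = 1.0, 0.0    # example: unexpected passive rotation
feedback = K @ np.array([canal_err, otolith_err])
# feedback[i] is the correction applied to states[i]
```

In this layout, the six non-negligible entries correspond to the six feedback pathways enumerated next, and the two zeroed entries to the negligible ones.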
The non-negligible feedback signals originating from the canal error are as follows (Figure 10, left):
The feedback to the rotation estimate represents the traditional ‘direct’ vestibular pathway (Raphan et al., 1979; Laurens and Angelaki, 2011). It is responsible for rotation perception during high-frequency (unexpected) vestibular stimulation, and has a gain close to unity.
The feedback to the internal canal state feeds into the internal model of the canals, thus allowing compensation for canal dynamics. This pathway corresponds to ‘velocity storage’ (Raphan et al., 1979; Laurens and Angelaki, 2011). Importantly, the contribution of this signal is significant for movements lasting longer than ~1 s, particularly during high-velocity rotations.
The feedback to tilt converts canal errors into a tilt velocity signal, which is subsequently integrated by the internal model of head tilt.
The non-negligible feedback signals originating from the otolith error are as follows (Figure 10, right):
The feedback to linear acceleration converts unexpected otolith activation into an acceleration signal and is responsible for acceleration perception during passive translations (as well as during experimentally generated otolith errors; Merfeld et al., 1999; Laurens et al., 2013a).
The feedback to tilt implements the somatogravic effect, which biases the internal estimate of gravity toward the net otolith signal so as to reduce the otolith error.
The feedback to the rotation estimate plays a role similar to that of the feedback to tilt, that is, it reduces the otolith error; but it acts indirectly, by biasing the internal estimate of rotation in a direction which, after integration, drives the internal model of tilt to match the otolith signal (this feedback was called ‘velocity feedback’ in Laurens and Angelaki, 2011). Behavioral studies (and model simulations) indicate that this phenomenon has low-frequency dynamics and underlies the ability of otolith signals to contribute to rotation velocity estimates (Angelaki and Hess, 1996; Hess and Angelaki, 1993). Lesion studies have demonstrated that this feedback depends on an intact nodulus and ventral uvula, that is, the vermal vestibulo-cerebellum (Angelaki and Hess, 1995a; Angelaki and Hess, 1995b).
The model in Figure 10 is entirely compatible with previous models based on optimal passive self-motion computations (Oman, 1982; Borah et al., 1988; Merfeld, 1995; Laurens, 2006; Laurens and Droulez, 2007; Laurens and Droulez, 2008; Laurens and Angelaki, 2011; Karmali and Merfeld, 2012; Lim et al., 2017; Zupan et al., 2002). The present model is, however, distinct in two very important aspects: First, it takes into account active motor commands and integrates these commands with the vestibular sensory signals. Second, because it is formulated as a Kalman filter, it makes specific predictions about the feedback error signals, which constitute the most important nodes in understanding the neural computations underlying head motion sensation. Indeed, as will be summarized next, the properties of most cell types in the vestibular and cerebellar nuclei, as well as the vestibulo-cerebellum, appear to represent either sensory error or feedback signals.
Multiple studies have reported that vestibular-only (an erroneous term used to describe ‘non-eye-movement-sensitive’) neurons in the VN selectively encode passive head rotation (McCrea and Luan, 2003; Roy and Cullen, 2001; 2004; Brooks and Cullen, 2014) or passive translation (Carriot et al., 2013), but suppress this activity during active head movements. In addition, a group of rostral fastigial nuclei neurons (unimodal rFN neurons; Brooks and Cullen, 2013; Brooks et al., 2015) also selectively encodes passive (but not active) rotations. These rotation-responding VN/rFN neurons likely encode either the semicircular canal error itself or its Kalman feedback to the rotation estimate (blue in Figure 10, dashed and solid ovals ‘VN, rFN’, respectively). The translation-responding neurons likely encode either the otolith error or its feedback to the linear acceleration estimate (Figure 10, solid and dashed red lines ‘VN, trans PC’). Because error and feedback signals are proportional to each other in the experimental paradigms considered here, whether VN/rFN neurons encode sensory errors or feedback signals cannot easily be distinguished using vestibular stimuli alone. Nevertheless, it is important to emphasize that, although the large majority of VN and rFN neurons exhibit reduced responses during active head movements, this suppression is rarely complete (McCrea et al., 1999; Roy and Cullen, 2001; Brooks and Cullen, 2013; Carriot et al., 2013). Thus, neuronal responses likely encode mixtures of error/feedback and sensory motion signals (e.g. those conveyed by direct afferent inputs).
During large-amplitude passive rotations (Figure 4—figure supplement 3), the rotation estimate persists longer than the vestibular signal (Figure 4, blue; a property called velocity storage). Because the canal error reflects this internal estimate, VN neurons that encode the canal error should exhibit dynamics different from those of canal afferents, having incorporated velocity storage signals. This has indeed been demonstrated in VN neurons during optokinetic stimulation (Figure 4—figure supplement 1; Waespe and Henn, 1977; Yakushin et al., 2017) and rotation about tilted axes (Figure 6—figure supplement 3; Reisine and Raphan, 1992; Yakushin et al., 2017).
Based on the work summarized above, the final estimates of rotation (Figure 4G) and translation (Figure 6G), which are the desirable signals for driving both perception and spatial navigation, do not appear to be encoded by most VN/rFN cells. Thus, one may assume that they are reconstructed downstream, perhaps in thalamic (Marlinski and McCrea, 2008; Meng et al., 2007; Meng and Angelaki, 2010) or cortical areas. Interestingly, more than half (57%) of ventral thalamic neurons (Marlinski and McCrea, 2008) and an identical fraction (57%) of VN cells projecting to the thalamus (Marlinski and McCrea, 2009) respond similarly during passive and actively generated head rotations. The authors emphasized that VN neurons with attenuated responses during actively generated movements constitute only a small fraction (14%) of those projecting to the thalamus. Thus, although abundant in the VN, these passive-motion-selective neurons may carry sensory error/feedback signals to the cerebellum, spinal cord or even other VN neurons (e.g. those coding the final estimates; Marlinski and McCrea, 2009). Note that Dale and Cullen (2016) reported contrasting results, in which a large majority of ventral thalamus neurons exhibit attenuated responses during active motion. Even if not present in the ventral posterior thalamus, this signal should exist in the spatial perception/spatial navigation pathways. Thus, future studies should search for the neural correlates of the final self-motion signal. VN neurons identified physiologically as projecting to the cervical spinal cord do not modulate during active rotations, so they could encode either passive head rotation or active and passive trunk rotation (McCrea et al., 1999).
Furthermore, the dynamics of the thalamus-projecting VN neurons with similar responses to passive and active stimuli were not measured (Marlinski and McCrea, 2009). Recall that the model predicts that final estimates of rotation differ from canal afferent signals only in their response dynamics (Figure 4, compare panels F and G). It would make functional sense that these VN neurons projecting to the thalamus follow the final estimate dynamics (i.e., they are characterized by a prolonged time constant compared to canal afferents) – and future experiments should investigate this hypothesis.
Another class of rFN neurons (and possibly VN neurons projecting to the thalamus, Marlinski and McCrea, 2009, or to the spinal cord, McCrea et al., 1999) specifically encodes passive trunk velocity in space, independently of head velocity (bimodal neurons; Brooks and Cullen, 2009; 2013; Brooks et al., 2015). These neurons likely encode Kalman feedback signals about trunk velocity (Figure 7, blue). Importantly, these neurons respond equivalently to passive whole-trunk rotation, when the trunk and the head rotate together (Figure 7A), and to passive trunk rotation when the head is space-fixed (Figure 7C). The first protocol activates the semicircular canals and induces a canal error, whereas the latter activates neck proprioceptors and generates a proprioceptive error. From a physiological point of view, this indicates that bimodal neurons respond to semicircular canal as well as neck proprioceptive inputs (hence their name). Note that several other studies identified VN (Anastasopoulos and Mergner, 1982), rFN (Kleine et al., 2004) and anterior suprasylvian gyrus (Mergner et al., 1985) neurons that encode trunk velocity during passive motion, but did not test their responses to active motion.
The Kalman filter also predicts that neck proprioceptive signals that encode neck position should be transformed into error signals that encode neck velocity. In line with model predictions, bimodal neurons encode velocity signals that originate from neck proprioception during passive sinusoidal (1 Hz, Brooks and Cullen, 2009) and transient (Gaussian velocity profile, Brooks and Cullen, 2013) movements. Remarkably, although short-duration rotation of the trunk while the head is stationary in space leads to a veridical perception of trunk rotation, long duration trunk rotation leads to an attenuation of the perceived trunk rotation and a growing illusion of head rotation in the opposite direction (Mergner et al., 1991). These experimental findings are also predicted by the Kalman filter model (Figure 7—figure supplement 5).
The simple spike modulation of two distinct types of Purkinje cells in the caudal cerebellar vermis (lobules IX-X, Uvula and Nodulus) encodes tilt (tilt-selective cells) and translation (translation-selective cells) during three-dimensional motion (Yakusheva et al., 2007, 2008, 2013; Laurens et al., 2013a; Laurens et al., 2013b). Therefore, it is possible that tilt- and translation-selective cells encode tilt and acceleration feedback signals (Figure 10, green and red lines, respectively). If so, we hypothesize that their responses are suppressed during active motion (Figures 5 and 6). How Purkinje cells modulate during active motion is currently unknown. However, one study (Lee et al., 2015), performed while rats learned to balance on a swing, indicates that Purkinje cell responses that encode trunk motion are reduced during predictable movements, consistent with the hypothesis that they encode sensory errors or Kalman feedback signals.
Model simulations have also revealed that passive tilt does not induce any significant otolith error (Figure 5J). In contrast, passive tilt elicits a significant canal error (Figure 5K). Thus, we hypothesize that the tilt signal present in the responses of Purkinje cells originates from the canal-error feedback onto the tilt internal state variable. If it is indeed a canal, rather than an otolith, error signal, it should be proportional to tilt velocity rather than tilt position (or linear acceleration). Accordingly, we observed (Laurens et al., 2013b) that tilt-selective Purkinje cell responses were on average close to velocity (average phase lag of 36° during sinusoidal tilt at 0.5 Hz). However, since sinusoidal stimuli are not well suited for establishing dynamics (Laurens et al., 2017), further experiments are needed to confirm that tilt-selective Purkinje cells indeed encode tilt velocity.
Model simulations have also revealed that passive translation, unlike passive tilt, should elicit an otolith error. This otolith error also feeds into the tilt internal variable (Figure 10, somatogravic feedback) and is responsible for the illusion of tilt during sustained passive linear acceleration (somatogravic effect; Graybiel, 1952). Therefore, as summarized in Figure 10 (green lines), both canal and otolith errors should feed back onto the tilt internal variable. The canal error should drive modulation during tilt, whereas the otolith error should drive modulation during translation. In support of these predictions, we have demonstrated that tilt-selective Purkinje cells also modulate during translation, with a gain and phase consistent with the simulated otolith-driven feedback (Laurens et al., 2013b). Thus, both of these feedback error signals might be carried by caudal vermis Purkinje cells, and future experiments should address these predictions.
Note that semicircular canal errors must be spatially transformed in order to produce an appropriate tilt feedback. Indeed, converting a rotation into head tilt requires taking into account the angle between the rotation axis and earth-vertical. This transformation is represented by a block marked ‘3D’ in Figure 10 (see Supplementary methods, ‘Three-Dimensional Kalman filter’). Importantly, we have established (Laurens et al., 2013b) that tilt-selective Purkinje cells encode spatially transformed rotation signals, as predicted by theory. In fact, we have demonstrated that tilt-selective Purkinje cells do not simply modulate during vertical canal stimulation, but also carry the tilt signal during off-vertical axis yaw rotations (Laurens et al., 2013b).
In this respect, it is important to emphasize that truly tilt-selective neurons exclusively encode changes in orientation relative to gravity, rather than being generically activated by vertical canal inputs. It is therefore critical that this distinction be made experimentally using three-dimensional motion (see Laurens et al., 2013b; Laurens and Angelaki, 2015). Whereas 3D rotations have indeed been used to identify tilt-selective Purkinje cells in the vermis (Laurens et al., 2013b; Yakusheva et al., 2007), this is not true of other studies. For example, Siebold et al. (1997; 1999; 2001), Laurens and Angelaki (2015) and Zhou et al. (2006) have reported tilt-modulated cells in the rFN and VN, but because these neurons were not tested in three dimensions, the signals they carry remain unclear.
As summarized above, the simple spike responses of tilt-selective Purkinje cells during passive motion have already revealed many details of the internal model computations. Thus, we have proposed that tilt-selective Purkinje cells encode the feedback signals about tilt, which include scaled and processed (i.e. spatially transformed, green ‘3D’ box in Figure 10) versions of both canal and otolith sensory errors (Figure 10, green oval, ‘tilt PC?’). However, there could be alternative implementations of the Kalman filter, in which tilt-selective Purkinje cells would not encode only feedback signals, as proposed next:
We note that motor commands must also be spatially processed (black ‘3D’ box in Figure 10) to contribute to the tilt prediction. One may question whether two distinct neuronal networks transform motor commands and canal errors independently (resulting in two ‘3D’ boxes in Figure 10). An alternative (Figure 10—figure supplement 1) would be that the brain merges motor commands and canal error to produce a final rotation estimate prior to performing this transformation. From a mathematical point of view, this alternative would only require a re-arrangement of the Kalman filter equations, which would not alter any of the model’s conclusions. However, tilt-selective Purkinje cells, which encode a spatially transformed signal, would then carry a mixture of predictive and feedback signals and would therefore respond identically to active and passive tilt velocity. Therefore, the brain may perform a spatial transformation of predictive and feedback rotation signals independently (Figure 10); or may merge them before transforming them (Figure 10—figure supplement 1). Recordings from tilt-selective Purkinje cells during active movements will distinguish between these alternatives.
In summary, many of the response properties described by previous studies for vestibular nuclei and cerebellar neurons can be assigned a functional ‘location’ within the Kalman filter model. Interestingly, most of the central neurons fit well with the properties of sensory errors and/or feedback signals. That an extensive neural machinery has been devoted to feedback signals is not surprising, given their functional importance for self-motion estimation. For many of these signals, a distinction between sensory errors and feedback signals is not easily made. That is, rotation-selective VN and rFN neurons can encode either canal error (Figure 10, bottom, dashed blue oval) or rotation feedback (Figure 10, bottom, solid blue oval). Similarly, translation-selective VN, rFN and Purkinje cells can encode either otolith error (Figure 10, bottom, dashed red oval) or translation feedback (Figure 10, bottom, solid red oval). The only feedback that is easily distinguished based on currently available data is the tilt feedback (Figure 10, green lines).
Although the blue, green and red feedback components of Figure 10 can be assigned to specific cell groups, this is not the case with the cyan feedback components. First, note that, like the tilt variable, the canal internal model variable receives non-negligible feedback contributions from both the canal and otolith sensory errors (Figure 10, cyan lines). The canal feedback error changes the time constant of the rotation estimate (Figure 4 and Figure 4—figure supplements 1 and 3), whereas the otolith feedback error may suppress (post-rotatory tilt) or create (horizontal axis rotation) a rotation estimate (Figure 6—figure supplement 3). The neuronal implementations of the internal model of the canals, and of its associated feedback pathways, are currently unknown. However, lesion studies clearly indicate that the caudal cerebellar vermis, lobules X and IX, may influence the canal internal model state variable (Angelaki and Hess, 1995a; Angelaki and Hess, 1995b; Wearne et al., 1998). In fact, it is possible that the simple-spike output of the translation-selective Purkinje cells also carries the otolith sensory error feedback to the canal internal model state variable (Figure 10, bottom, cyan arrow passing through the dashed red ellipse). Similarly, the canal error feedback to the canal internal model state variable (Figure 10, bottom, cyan arrow originating from the dashed blue ellipse) can originate from VN or rFN cells that selectively encode passive, not active, head rotation (Figure 4J; note that this feedback is but a scaled-down version of the rotation feedback).
Thus, although the feedback error signals to the canal internal model variable can be linked to known neural correlates, cells coding exclusively for the state variable have not been identified. It is possible that this hidden variable may be coded in a distributed fashion. After all, as already stated above, VN and rFN neurons have also been shown to carry mixed signals - they can respond to both rotation and translation, and they may carry both feedback/error and actual sensory signals. Thus, it is important to emphasize that these Kalman variables and error signals may be represented in a multiplexed way, where single neurons manifest mixed selectivity to more than one internal state and/or feedback signal. This appears to be an organizational principle both in central vestibular areas (Laurens et al., 2017) and throughout the brain (Rigotti et al., 2013; Fusi et al., 2016). It has been proposed that mixed selectivity has an important computational advantage: high-dimensional representations with mixed selectivity allow a simple linear readout to generate a diverse array of potential responses (Fusi et al., 2016). In contrast, representations based on highly specialized neurons are low dimensional and may preclude a linear readout from generating several responses that depend on multiple task-relevant variables.
In this treatment, we have considered primarily the importance of the internal models of the sensors, to emphasize their necessity for both self-generated motor commands and unpredicted, external perturbations. It is important to point out that self-generated movements involve internal model computations that have been studied extensively in the field of motor control and motor adaptation (Wolpert et al., 1995; Körding and Wolpert, 2004; Todorov, 2004; Chen-Harris et al., 2008; Berniker et al., 2010; Berniker and Kording, 2011; Franklin and Wolpert, 2011; Saglam et al., 2011; 2014). While the question of motor adaptation is not addressed directly in the present study, experiments in which resistive or assistive torques are applied to the head (Brooks et al., 2015) or in which active movements are entirely blocked (Roy and Cullen, 2004; Carriot et al., 2013) reveal how central vestibular pathways respond in situations that cause motor adaptation. Under these conditions, central neurons have been shown to encode net head motion (i.e. active and passive indiscriminately) with a similar gain as during passive motion (Figure 7—figure supplements 6 and 7). This may be interpreted and modeled by assuming that central vestibular pathways cease to integrate copies of motor commands (Figure 7—figure supplement 6) whenever active head motion is perturbed, until the internal model of the motor plant recalibrates to anticipate this perturbation (Brooks et al., 2015). Further analysis of these experimental results (Figure 7—figure supplement 7) indicates that these responses are fundamentally non-linear, cannot be reproduced by the Kalman filter (which is limited to linear operations), and therefore require the addition of an external gating mechanism (black pathway in Figure 1D).
Notably, this nonlinearity is triggered by proprioceptive mismatch, that is, when there is a discrepancy between the intended head position and proprioceptive feedback. Note that perturbing head motion also induces a vestibular mismatch, since it causes the head velocity to differ from the motor plan. However, central vestibular neurons still specifically encode passive head movement during vestibular mismatch, as can be shown by superimposing passive whole-body rotations on active movements (Brooks and Cullen, 2013; 2014; Carriot et al., 2013) and as illustrated in the model predictions of Figure 8. Remarkably, the elementary and fundamental difference between these different types of computations has never before been presented in a single theoretical framework.
Proprioceptive mismatch is likely a specific indication that the internal model of the motor plant (necessary for accurate motor control; Figure 1D, red) needs to be recalibrated. Applying resistive head torques (Brooks et al., 2015) or increasing head inertia (Saglam et al., 2011; 2014) does indeed induce motor adaptation, which is not modeled in the present study (but see Berniker and Kording, 2008). Interestingly, the studies by Saglam et al. (2011; 2014) indicate that healthy subjects use a re-calibrated model of the motor plant to restore optimal motor performance, but that vestibular-deficient patients fail to do so, indicating that vestibular error signals participate in motor adaptation (Figure 1D, broken blue arrow).
The internal model framework has been widely used to simulate optimal motor control strategies (Todorov, 2004; Chen-Harris et al., 2008; Saglam et al., 2011; 2014) and to create Kalman filter models of reaching movements (Berniker and Kording, 2008) and postural control (van der Kooij et al., 2001). The present model, however, is to our knowledge the first to apply these principles to optimal head movement perception during active and passive motion. As such, it makes explicit links between sensory dynamics (i.e. the canals), ambiguities (i.e. the otoliths), priors and motor efference copies. Perhaps most importantly, the focus of this study has been to explain neuronal response properties. By simulating and explaining neuronal responses during active and passive self-motion in the light of a quantitative model, this study advances our understanding of how theoretical principles about optimal combinations of motor signals, multiple sensory modalities with distinct dynamic properties and ambiguities and Bayesian priors map onto brainstem and cerebellar circuits.
To simplify the main framework and associated predictions, as well as the in-depth mathematical analyses of the model’s dynamics (Supplementary methods), we have presented a linearized one-dimensional model. This model was used to simulate either rotations around an earth-vertical axis or combinations of translation and rotations around an earth-horizontal axis. A more natural and general way to simulate self-motion information processing is to design a three-dimensional Kalman filter model. Such models have been used in previous studies, either by programming Kalman filters explicitly (Borah et al., 1988; Lim et al., 2017), or by building models based on the Kalman filter framework (Glasauer, 1992; Merfeld, 1995; Glasauer and Merfeld, 1997; Bos et al., 2001; Zupan et al., 2002). We show in Supplementary methods, ‘Three-dimensional Kalman filter’, how to generalize the model to three dimensions.
The passive motion components of the model presented here are to a large extent identical to the Particle filter Bayesian model in (Laurens, 2006; Laurens and Droulez, 2007; Laurens and Droulez, 2008; Laurens and Angelaki, 2011), which we have re-implemented as a Kalman filter, and into which we incorporated motor commands. One fundamental aspect of previous Bayesian models (Laurens, 2006; Laurens and Droulez, 2007; Laurens and Droulez, 2008) is the explicit use of two Bayesian priors that prevent sensory noise from accumulating over time. These priors encode the natural statistics of externally generated motion or motion resulting from motor errors and unexpected perturbations. Because, on average, rotation velocities and linear accelerations are close to zero, these Bayesian priors are responsible for the decrease of rotation estimates during sustained rotation (Figure 4—figure supplement 2) and for the somatogravic effect (Figure 6—figure supplement 2) (see Laurens and Angelaki, 2011 for further explanations). The influence of the priors is higher when the statistical distributions of externally generated rotation and acceleration are narrower (Figure 10—figure supplement 2), that is, when their standard deviations are smaller. Stronger priors reduce the gain and time constant of rotation and acceleration estimates (Figure 10—figure supplement 2B,D). Importantly, the Kalman filter model predicts that the priors affect only the passive, but not the active, self-motion final estimates. Indeed, the rotation and acceleration estimates last indefinitely during simulated active motion (Figure 4—figure supplement 2, Figure 6—figure supplement 2, Figure 10—figure supplement 2). In this respect, the Kalman filter may explain why the time constant of the vestibulo-ocular reflex is reduced in figure skaters (Tanguy et al., 2008; Alpini et al., 2009): the range of head velocities experienced in these activities is wider than normal.
In previous Bayesian models, we found that widening the rotation prior should increase the time constant of vestibular responses, apparently in contradiction with these experimental results. However, these models did not consider the difference between active and passive stimuli. The formalism of the Kalman filter reveals that Bayesian priors should reflect the distribution of passive motion or motor errors. In athletes that are highly trained to perform stereotypic movements, this distribution likely narrows, resulting in stronger priors and reduced vestibular responses.
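The effect of prior width on the gain of a velocity estimate can be sketched with a one-shot Bayesian combination (an illustrative simplification, not the paper's code; the full Kalman filter applies this principle recursively and with dynamics):

```python
# Illustrative sketch: a zero-mean Gaussian prior on externally generated
# velocity shrinks a noisy sensory measurement by the gain
#   g = var_prior / (var_prior + var_noise),
# so a narrower prior (smaller var_prior) yields a smaller gain.

def velocity_gain(sd_prior, sd_noise):
    """Gain applied to the sensory measurement after combining it with a
    zero-mean Gaussian prior of width sd_prior."""
    return sd_prior ** 2 / (sd_prior ** 2 + sd_noise ** 2)

gain_wide = velocity_gain(0.7, 0.175)    # prior and canal-noise widths used in the model
gain_narrow = velocity_gain(0.1, 0.175)  # hypothetical narrower prior (e.g. a trained athlete)
```

With the model's values the gain stays close to 1, whereas the hypothetical narrower prior of a highly trained athlete reduces it substantially, in line with the reduced vestibular responses discussed above.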
One of the predictions of the Kalman filter is that motion illusions, such as the disappearance of rotation perception during long-duration rotation and the ensuing post-rotatory response (Figure 4—figure supplement 1B), should not occur during active motion (Figure 4—figure supplement 1A). This has indeed been observed in monkeys (Solomon and Cohen, 1992) and humans, where steady-state per-rotatory responses plateau at 10°/s and post-rotatory responses are decreased by a similar amount (Guedry and Benson, 1983; Howard et al., 1998; see also Brandt et al., 1977a). The fact that post-rotatory responses are reduced following active, as compared to passive, rotations is of particular interest, because it demonstrates that motor commands influence rotation perception even after the movement has stopped. As shown in Figure 4, the Kalman filter reproduces this effect by feeding motor commands through an internal model of the canals. As shown in Figure 4—figure supplement 1, this process is equivalent to the concept of ‘velocity storage’ (Raphan et al., 1979; see Laurens, 2006, Laurens and Droulez, 2008, MacNeilage et al., 2008, and Laurens and Angelaki, 2011 for a Bayesian interpretation of the concept of velocity storage). Therefore, the functional significance of this network, including velocity storage, is found during natural active head movements (see also Figure 4—figure supplement 3), rather than during the passive low-frequency rotations with which it has traditionally been associated (but see Laurens and Angelaki, 2011).
A recent study (MacNeilage and Glasauer, 2017) evaluated how motor noise varies across locomotor activities and within gait cycles when walking. They found that motor noise peaks shortly before heel strike and after toe off, and is minimal during swing periods. They interpreted these experimental findings using principles of sensory fusion, an approach that uses the same principles of optimal cue combination as the Kalman filter but does not include dynamics. Interestingly, this analysis showed that vestibular cues should have a maximal effect when motor noise peaks, in agreement with experimental observations (Brandt et al., 1999; Jahn et al., 2000).
To avoid further complicating the computation of the Kalman filter gains, the presented model does not consider how the brain generates motor commands in response to vestibular stimulation, e.g. to stabilize the head in response to passive motion or to correct motor commands using vestibular signals. This would require an additional feedback pathway - the reliance of motor command generation on sensory estimates (Figure 1D, blue broken arrow). For example, a passive head movement could result in a stabilizing active motor command. Or an active head movement could be smaller than desired because of noise, requiring an adjustment of the motor command to compensate. Such feedback pathways have been included in a previous Kalman filter model (van der Kooij et al., 2001), a study that focused specifically on postural control and reproduced human postural sway under a variety of conditions. Thus, the Kalman filter framework may be extended to model neuronal computations underlying postural control as well as the vestibulo-collic reflex.
Neuronal recordings (Brooks and Cullen, 2013; 2014; Carriot et al., 2013) and the present modeling unambiguously demonstrate that central neurons respond to unexpected motion during active movement (a result that we reproduced in Figure 8G–I). Beyond experimental manipulations, a number of processes may cause unpredictable motion to occur in natural environments. When walking on tree branches, boulders or soft ground, the support surface may move under the feet, leading to unexpected trunk motion. A more dramatic example of unexpected trunk motion, which requires immediate correction, occurs when slipping or tripping. Complex locomotor activities involve a variety of correction mechanisms, among which spinal mechanisms and vestibular feedback play preeminent roles (Keshner et al., 1987; Black et al., 1988; Horstmann and Dietz, 1988).
The contribution of the vestibular system to stabilizing posture is readily demonstrated by considering the impact of chronic bilateral vestibular deficits. While most patients retain an ability to walk on firm ground and even perform some sports (Crawford, 1964; Herdman, 1996), vestibular deficits lead to an increased incidence of falls (Herdman et al., 2000), difficulties in walking on uneven terrain and deficits in postural responses to perturbations (Keshner et al., 1987; Black et al., 1988; Riley, 2010). This confirms that vestibular signals are important during active motion, especially in challenging environments. In this respect, the Kalman filter framework appears particularly well suited for understanding the effects of vestibular lesions.
As mentioned earlier, vestibular sensory errors also occur when the internal model of the motor apparatus is incorrect (Brooks et al., 2015) and these errors can lead to recalibration of internal models. This suggests that vestibular error signals during self-generated motion may play two fundamental roles: (1) updating self-motion estimates and driving postural or motor corrections, and (2) providing teaching signals to internal models of motor control (Wolpert et al., 1995) and therefore facilitating motor learning. This latter point is supported by the finding that patients with vestibular deficits fail to recalibrate their motor strategies to account for changes in head inertia (Sağlam et al., 2014).
But perhaps most importantly, the model presented here should eliminate the misinterpretation that vestibular signals are ignored during self-generated motion – and that passive motion stimuli are old-fashioned and should no longer be used in experiments. Regarding the former conclusion, the presented simulations highlight the role of the internal models of canal dynamics and otolith ambiguity, which operate continuously to generate the correct sensory prediction during both active and passive movements. Without these internal models, the brain would be unable to correctly predict sensory canal and otolith signals, and everyday active movements would lead to sensory mismatch (e.g. for rotations, see Figure 4—figure supplements 2 and 3). Thus, even though particular nodes (neurons) in the circuit show attenuated or no modulation during active head rotations, vestibular processing remains the same - the internal model is both engaged and critically important for accurate self-motion estimation, even during actively-generated head movements. Regarding the latter conclusion, it is important to emphasize that passive motion stimuli have been, and continue to be, extremely valuable in revealing salient computations that would have been missed if the brain’s intricate wisdom was interrogated only with self-generated movements.
Furthermore, a quantitative understanding of how efference copies and vestibular signals interact for accurate self-motion sensation is fundamental to our understanding of many other brain functions, including balance and locomotor control. As stated in Berniker and Kording (2011): ‘A crucial first step for motor control is therefore to integrate sensory information reliably and accurately’, and practically any locomotor activity beyond reaching movements in seated subjects will affect posture and therefore recruit the vestibular sensory modality. It is thus important for both motor control and spatial navigation functions (for which intact vestibular cues appear to be critical; Taube, 2007) to correct the misconception that vestibular signals are cancelled, and thus not useful, during actively generated movements. By providing a state-of-the-art model of self-motion processing during active and passive motion, we are bridging several noticeable gaps between the vestibular and motor control/navigation fields.
‘A good model has a delightful way of building connections between phenomena that never would have occurred to one’ (Robinson, 1977). Four decades later, this beautifully applies here, where the mere act of considering how the brain should process self-generated motion signals in terms of mathematical equations (instead of schematic diagrams) immediately revealed a striking similarity with models of passive motion processing and, by motivating this work, opened an avenue to resolve a standing paradox in the field.
The internal model framework and the series of quantitative models it has spawned have explained and simulated behavioral and neuronal responses to self-motion using a long list of passive motion paradigms, and with a spectacular degree of accuracy (Mayne, 1974; Oman, 1982; Borah et al., 1988; Glasauer, 1992; Merfeld, 1995; Glasauer and Merfeld, 1997; Bos et al., 2001; Zupan et al., 2002; Laurens, 2006; Laurens and Droulez, 2007; Laurens and Droulez, 2008; Laurens and Angelaki, 2011; Karmali and Merfeld, 2012; Lim et al., 2017). Internal models also represent the predominant theoretical framework for studying motor control (Wolpert et al., 1995; Körding and Wolpert, 2004; Todorov, 2004; Chen-Harris et al., 2008; Berniker et al., 2010; Berniker and Kording, 2011; Franklin and Wolpert, 2011; Saglam et al., 2011; 2014). The vestibular system shares many common questions with the motor control field, such as that of 3D coordinate transformations and dynamic Bayesian inference, but, being considerably simpler, can be modeled and studied using relatively few variables. As a result, head movements represent a valuable model system for investigating the neuronal implementation of computational principles that underlie motor control. The present study thus offers the theoretical framework which will likely assist in understanding neuronal computations that are essential to active self-motion perception, spatial navigation, balance and motor activity in everyday life.
In a Kalman filter (Kalman, 1960), state variables X are driven by their own dynamics (matrix Φ), motor commands M and unpredictable perturbations E resulting from motor noise and external influences, through the equation (Figure 1—figure supplement 2A):

X(t) = Φ·X(t−1) + B·M(t) + C·E(t)

where matrices B and C reflect the response to motor inputs and perturbations, respectively.

A set of sensors, grouped in a variable S, measure state variables transformed by a matrix H, and are modeled as:

S(t) = H·X(t) + N(t)

where N is Gaussian sensory noise (Figure 1—figure supplement 2A, right). The model assumes that the brain has an exact knowledge of the forward model, that is, of Φ, B, C and H, as well as the variances of E and N. Furthermore, the brain knows the values of the motor inputs M and sensory signals S, but doesn’t have access to the actual values of E and N.

At each time t, the Kalman filter computes a preliminary estimate X*(t) = Φ·X̂(t−1) + B·M(t) (also called a prediction) and a corresponding predicted sensory signal S*(t) = H·X*(t) (Figure 1—figure supplement 2B). In general, the resulting state estimate and the predicted sensory signal may differ from the real values X(t) and S(t) because: (1) the brain cannot predict the perturbation E(t), (2) the brain does not know the value of the sensory noise N(t), and (3) the previous estimate X̂(t−1) used to compute X*(t) could be incorrect. These errors are reduced using sensory information, as follows (Figure 1—figure supplement 2B). First, the prediction and the sensory input are compared to compute a sensory error δS(t) = S(t) − S*(t). Second, sensory errors are transformed into a feedback K·δS(t), where K is a matrix of feedback gains. Thus, the improved estimate at time t is X̂(t) = X*(t) + K·δS(t). The value of the feedback gain matrix K determines how sensory errors (and therefore sensory signals) are used to compute the final estimate, and is computed based on Φ, H and on the variances of E and N (see Supplementary methods, ‘Kalman filter algorithm’).
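The cycle just described can be sketched for a one-dimensional system (a minimal illustration with generic placeholder symbols and made-up variances, not the paper's four-dimensional model):

```python
import random

# Minimal scalar sketch of the Kalman cycle: prediction, predicted sensory
# signal, sensory error, feedback. Parameters a, b, h, q, r are illustrative.

def kalman_step(x_est, p_est, u, s, a=1.0, b=1.0, h=1.0, q=0.01, r=0.04):
    """One cycle for the scalar model x(t) = a*x(t-1) + b*u(t) + e(t),
    s(t) = h*x(t) + n(t), with var(e) = q and var(n) = r."""
    x_pred = a * x_est + b * u              # preliminary estimate (prediction)
    p_pred = a * a * p_est + q              # variance of the prediction
    s_pred = h * x_pred                     # predicted sensory signal
    ds = s - s_pred                         # sensory error
    k = p_pred * h / (h * h * p_pred + r)   # feedback gain
    x_new = x_pred + k * ds                 # improved estimate
    p_new = (1.0 - k * h) * p_pred
    return x_new, p_new

random.seed(1)
x_true, x_est, p_est = 0.0, 0.0, 1.0
for t in range(500):
    u = 0.1                                        # motor command, known to the filter
    x_true = x_true + u + random.gauss(0.0, 0.1)   # true state, perturbed by e(t)
    s = x_true + random.gauss(0.0, 0.2)            # sensory signal, corrupted by n(t)
    x_est, p_est = kalman_step(x_est, p_est, u, s)
```

Despite never observing the perturbations or the sensory noise directly, the estimate tracks the true state closely, because motor commands drive the prediction and sensory errors correct it.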
In the case of the self-motion model, the motor commands Ω_M and A_M are inputs to the Kalman filter (Figure 2). Note that, while the motor system may actually control other variables (such as forces or accelerations), we consider that these variables are converted into Ω_M and A_M. We demonstrate in Supplementary methods, ‘Model of motor commands’, that altering these assumptions does not alter our conclusions. In addition to motor commands, a variety of unpredictable factors, such as motor noise and external (passive) motion, also affect head rotation and acceleration (MacNeilage and Glasauer, 2017). The total rotation and acceleration components resulting from these factors are modeled as variables Ω_E and A_E. Similar to (Laurens, 2006; Laurens and Droulez, 2007; Laurens and Droulez, 2008), we modeled the statistical distribution of these variables as Gaussians, with standard deviations σ_Ω and σ_A.
Excluding vision and proprioception, the brain senses head motion through the semicircular canals (which generate a signal V) and the otolith organs (which generate a signal F). Thus, in initial simulations (Figures 3–6), the sensory variable S encompasses V and F (neck proprioceptors are added in Figure 7).
The semicircular canals are rotation sensors that, due to their mechanical characteristics, exhibit high-pass filter properties. These dynamics may be neglected for rapid movements of small amplitude (such as in Figure 3) but can have a significant impact during natural movements (Figure 4—figure supplement 3). They are modeled using a hidden state variable D. The canals are also subject to sensory noise N_V. Taking both the noise and the dynamics into account, the canal signal is modeled as V(t) = Ω(t) − D(t) + N_V(t).
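The canals' high-pass behavior can be sketched as follows (an illustrative discretization, noise omitted): a hidden state low-pass filters head velocity, and the afferent signal is velocity minus that state.

```python
# Sketch of the canals' high-pass dynamics: the hidden state d low-pass
# filters head velocity omega, and the afferent signal is v = omega - d.
TAU_C = 4.0   # canal time constant (s), as used in the model
DT = 0.01     # time step (s)

def canal_signal(omega_trace):
    d, out = 0.0, []
    for omega in omega_trace:
        d += (omega - d) * DT / TAU_C   # low-pass dynamics of the hidden state
        out.append(omega - d)           # high-pass canal afferent signal
    return out

v = canal_signal([1.0] * 3000)   # 30 s of constant rotation at 1 rad/s
```

During sustained rotation the simulated canal signal starts near the true velocity and decays toward zero with the canal time constant, which is why canal dynamics matter for long-duration movements but can be neglected for brief ones.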
The otolith organs are acceleration sensors. They exhibit negligible temporal dynamics in the range of motion considered here, but are fundamentally ambiguous: they sense gravitational as well as linear acceleration – a fundamental ambiguity resulting from Einstein’s equivalence principle (Einstein, 1907). Gravitational acceleration along the inter-aural axis depends on head roll position, modeled here as G. The otoliths encode the sum of G and the linear acceleration A, and are also affected by sensory noise N_F, such that the net otolith signal is F(t) = G(t) + A(t) + N_F(t).
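The ambiguity can be illustrated with a minimal sketch of the otolith signal (noise omitted; signal expressed in g units, with the angles and accelerations below chosen purely for illustration):

```python
from math import sin, radians

# Minimal sketch of the tilt/translation ambiguity: the net inter-aural
# otolith signal is gravitational plus linear acceleration (in g units).
def otolith_signal(tilt_deg, accel_g):
    """Net inter-aural otolith signal for a given roll tilt and translation."""
    return sin(radians(tilt_deg)) + accel_g

# A small roll tilt with no translation...
tilted = otolith_signal(5.7, 0.0)
# ...produces exactly the same signal as an upright translation of ~0.1 g.
translating = otolith_signal(0.0, sin(radians(5.7)))
```

Because the two conditions yield identical otolith signals, disambiguating them requires the rotation information supplied by the canals and motor commands, which is precisely the role of the internal model.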
How sensory errors are used to correct motion estimates depends on the Kalman gain matrix, which is computed by the Kalman algorithm such that the Kalman filter as a whole performs optimal estimation. In theory, the Kalman filter includes a total of 8 feedback signals, corresponding to the combinations of two sensory errors (canal and otolith errors) and four internal states (see Supplementary methods, ‘Kalman feedback gains’).
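How such gains follow from the model's dynamics and noise variances can be sketched for a one-dimensional system by iterating the standard discrete Riccati recursion (illustrative numbers; the paper's model is four-dimensional and its gains form a matrix):

```python
# Sketch: the steady-state Kalman gain of a scalar system, obtained by
# iterating the discrete Riccati recursion until convergence.
def steady_state_gain(a, h, q, r, iters=1000):
    """a: state dynamics, h: sensory mapping, q: perturbation variance,
    r: sensory noise variance."""
    p = 1.0
    for _ in range(iters):
        p_pred = a * a * p + q                  # predicted error variance
        k = p_pred * h / (h * h * p_pred + r)   # Kalman gain
        p = (1.0 - k * h) * p_pred              # updated error variance
    return k

k = steady_state_gain(a=1.0, h=1.0, q=0.01, r=0.04)
```

The gain depends only on the model matrices and the perturbation and sensory noise variances, which is why, in the self-motion model, the feedback gains are fully determined once the priors and sensor noise levels are fixed.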
It is important to emphasize that the Kalman filter model is closely related to previous models of vestibular information processing. Indeed, simulations of long-duration rotation and visuo-vestibular interactions (Figure 4—figure supplement 2), as well as mathematical analysis (Laurens, 2006), demonstrate that the model’s low-frequency rotation processing is equivalent to the ‘velocity storage’ (Raphan et al., 1979; Laurens and Angelaki, 2011). These low-frequency dynamics, as well as visuo-vestibular interactions, were previously simulated and interpreted in the light of optimal estimation theory, and accordingly are reproduced by the Kalman filter model.
The model presented here is to a large extent identical to the Particle filter Bayesian model in (Laurens, 2006; Laurens and Droulez, 2007; Laurens and Droulez, 2008; Laurens and Angelaki, 2011). It should be emphasized that: (1) transforming the model into a Kalman filter didn’t alter the assumptions upon which the Particle filter was built; (2) introducing motor commands into the Kalman filter was a textbook process that did not require any additional assumptions or parameters; and (3) we used exactly the same parameter values as in Laurens, 2006 and Laurens and Droulez, 2008 (with the exception of one parameter whose impact is, however, negligible, and of the model of head-on-trunk rotation, which required additional parameters; see next section).
The parameters of the Kalman filter model are directly adapted from previous studies (Laurens, 2006; Laurens and Droulez, 2008). Tilt angles are expressed in radians, rotation velocities in rad/s, and accelerations in g (1 g = 9.81 m/s2). Note that a small linear acceleration in a direction perpendicular to gravity will rotate the gravito-inertial force vector around the head by an angle approximately equal to the ratio of that acceleration to gravity. For this reason, tilt and small-amplitude linear accelerations are expressed, in one dimension, in equivalent units that may be added or subtracted. The standard deviations of the unpredictable rotations (σ_Ω) and accelerations (σ_A) are set to the standard deviations of the Bayesian a priori in Laurens, 2006 and Laurens and Droulez, 2008, that is, 0.7 rad/s and 0.3 g. The standard deviation of the noise affecting the canals (σ_V) was set to 0.175 rad/s (as in Laurens, 2006 and Laurens and Droulez, 2008; see Figure 10—figure supplement 2 for simulations with different parameters). The standard deviation of the otolith noise (σ_F) was set to 0.002 g (2 cm/s2). We verified that values ranging from 0 to 0.01 had no effect on simulation results. The time constant of the canals was set to τ_c = 4 s. Simulations used a time step of δt = 0.01 s. We verified that changing the value of the time step without altering other parameters had no effect on the results.
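For convenience, the parameter values listed above can be collected in a single configuration block (a sketch; the variable names are ours, not the paper's):

```python
# Convenience summary of the model parameters listed above (names are ours).
PARAMS = {
    "sigma_rotation_prior": 0.7,   # s.d. of unpredictable rotations (rad/s)
    "sigma_accel_prior": 0.3,      # s.d. of unpredictable accelerations (g)
    "sigma_canal_noise": 0.175,    # s.d. of semicircular canal noise (rad/s)
    "sigma_otolith_noise": 0.002,  # s.d. of otolith noise (g, i.e. ~2 cm/s^2)
    "tau_canal": 4.0,              # canal time constant (s)
    "dt": 0.01,                    # simulation time step (s)
}
```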
We ran simulations using a variant of the model that included visual information encoding rotation velocity. The visual velocity signals were affected by sensory noise with a standard deviation of 0.12 rad/s, as in Laurens and Droulez, 2008.
Another variant modeled trunk-in-space velocity and head-on-trunk velocity independently. The standard deviations of unpredictable rotations were set to 0.7 rad/s for the trunk in space (identical to σ_Ω) and 3.5 rad/s for the head on trunk. The standard deviation of the sensory noise affecting neck afferents was set manually to 0.0017 rad. We found that increasing the neck afferent noise reduces the gain of the head-on-trunk and trunk-in-space velocity estimates (Figure 7C) (e.g. by 60% for a tenfold increase in afferent noise). Reducing the value of this noise has little effect on the simulations.
For simplicity, all simulations were run without adding the sensory noise N_V and N_F. These noise-free simulations are representative of the results that would be obtained by averaging several simulation runs performed with sensory noise (e.g. as in Laurens and Droulez, 2007). We chose to present noise-free results here in order to facilitate the comparison between simulations of active and passive motions.
A Matlab implementation of the Kalman model is available at: https://github.com/JeanLaurens/Laurens_Angelaki_Kalman_2017 (Laurens, 2017; copy archived at https://github.com/elifesciences-publications/Laurens_Angelaki_Kalman_2017).
Here we describe the Kalman model in more detail. We present the model of head motion and vestibular information processing, first as a set of linear equations (‘Model of head motion and vestibular sensors’), and then in matrix form (‘Model of head motion in matrix form’). Next we present the Kalman filter algorithm, in the form of matrix computations (‘Kalman filter algorithm’) and then as a series of equations (‘Kalman filter algorithm developed’).
Next, we derive a series of properties of the internal model computations (‘Velocity Storage during EVAR’, ‘Passive Tilt’, ‘Kalman feedback gains’, ‘Time constant of the somatogravic effect’, ‘Model of motor commands’).
We then present some variations of the Kalman model (‘Visual rotation signals’, ‘Model of head and neck rotations’, ‘Feedback signals during neck movement’, ‘Three-dimensional Kalman filter’).
The first state equation states that head velocity is the sum of the self-generated rotation and of unexpected rotations resulting from motor errors and passive motion. In the absence of motor commands, head velocity is expected to be zero on average, independently of all previous events.
The next equation describes the first-order low-pass dynamics of the canals, with coefficients determined by the canal time constant τc and the time step δt.
The following equation integrates rotation velocity into tilt. A switch variable is set to 1 during tilt and to 0 during EVAR (in which case the tilt remains equal to zero, independently of the rotation velocity).
Finally, the equation that describes linear acceleration resembles the rotation equation in form and properties.
The system of these equations is rewritten in order to eliminate the head velocity from the right-hand side (which is needed so that the system fits the standard state-space form used below).
The model of sensory transduction states that the semicircular canals encode rotation velocity minus the dynamic low-pass component, and that the otolith organs encode the sum of tilt and acceleration.
This system of equations can be rewritten in matrix form.
Similarly, the model of sensory transduction is rewritten in matrix form.
Given the standard deviations of the unpredictable rotations and accelerations and of the canal and otolith noise, the covariance matrices of the process and sensory noise (which are needed to perform the Kalman filter computations) are diagonal matrices constructed from the corresponding variances.
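As an illustrative sketch of this state-space construction (not the authors' Matlab implementation: the state ordering x = [Omega, C, G, A], the input partitioning, and the assumption that motor commands and unpredictable components enter through the same input matrix are our own), the model can be assembled and stepped forward as follows:

```python
import numpy as np

dt, tau, s = 0.01, 4.0, 1.0  # time step (s), canal time constant (s), tilt switch

# Assumed state ordering x = [Omega, C, G, A]: Omega = rotation velocity,
# C = canal dynamics state, G = tilt, A = linear acceleration. The motor
# input u = [Omega_u, A_u] and the unpredictable component eps enter through
# the same matrix, so Omega never appears on the right-hand side.
D = np.array([[0.0, 0.0,          0.0, 0.0],
              [0.0, 1.0 - dt/tau, 0.0, 0.0],
              [0.0, 0.0,          1.0, 0.0],
              [0.0, 0.0,          0.0, 0.0]])
M = np.array([[1.0,    0.0],
              [dt/tau, 0.0],
              [s * dt, 0.0],   # switch s: 1 during tilt, 0 during EVAR
              [0.0,    1.0]])
# Sensory model: canals encode Omega - C, otoliths encode G + A.
H = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0,  0.0, 1.0, 1.0]])

def step(x, u, eps):
    """One generative-model step: x_t = D @ x_{t-1} + M @ (u + eps)."""
    return D @ x + M @ (u + eps)

# Noise-free active rotation at 1 rad/s for 1 s: C charges towards Omega
# with time constant tau while G integrates the rotation into tilt.
x = np.zeros(4)
for _ in range(int(1.0 / dt)):
    x = step(x, np.array([1.0, 0.0]), np.zeros(2))
V, F = H @ x   # canal and otolith afferents predicted by the model
```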
Note that the covariance matrix used here assumes that unexpected rotations and accelerations are independent. However, it could easily be adapted to represent more complex covariance structures resulting, for instance, from motor noise.
The Kalman filter algorithm (Kalman, 1960) computes optimal state estimates in any model that follows this state-space structure (Figure 1—figure supplement 2). The optimal estimate is computed in two stages (Figure 1—figure supplement 2B): a prediction stage, in which the state estimate and its covariance are propagated through the dynamics model, and an update stage, in which the predicted estimate is corrected by the Kalman gain applied to the sensory prediction error.
The Kalman gain matrix is computed from the covariances of the predicted and updated estimates, the covariance matrices of the process and sensory noise, and an identity matrix, following the standard recursion. These covariance equations are not shown in Figure 1—figure supplement 2.
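A minimal, generic version of these prediction, gain, and update computations can be written as follows (a toy scalar random-walk example for illustration, not the paper's model matrices):

```python
import numpy as np

# Generic Kalman filter iteration (Kalman, 1960): predict with the dynamics
# model, compute the gain, update with the sensory prediction error.
def kalman_step(x, P, y, u, D, M, H, Q, R):
    x_pred = D @ x + M @ u                     # prediction stage
    P_pred = D @ P @ D.T + Q                   # predicted covariance
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)      # update with sensory error
    P_new = (np.eye(len(x)) - K @ H) @ P_pred  # updated covariance
    return x_new, P_new, K

# Toy scalar example: estimate a random-walk state from noisy measurements.
D = np.array([[1.0]]); M = np.array([[0.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])  # process and sensory noise
x, P = np.zeros(1), np.eye(1)
for _ in range(500):   # iterate to steady state, as done for initialization
    x, P, K = kalman_step(x, P, np.array([0.0]), np.array([0.0]),
                          D, M, H, Q, R)
# K and P have now converged to their steady-state values.
```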
The initial conditions of the state estimate are set according to the initial head position in the simulated motion, and the initial values of the covariance and Kalman gain matrices are set to their steady-state values, which are computed by running 500 iterations of the Kalman filter algorithm beforehand.
Here we analyze the Kalman filter equations to derive the dynamics of rotation perception during passive EVAR and compare it to existing models. During passive EVAR, in the absence of motor commands, the dynamics of the rotation estimate depend on the internal estimate of the canal dynamics, which is governed by the filter's update equation.
Based on these equations, the internal estimate of the canal dynamics follows a first-order differential equation.
This equation is characteristic of a leaky integrator that integrates the canal signal with a fixed gain and whose time constant is obtained by solving the corresponding characteristic equation.
The final rotation estimate is the sum of this leaky-integrator state and the canal signal.
These equations reproduce the standard velocity storage model of Raphan et al. (1979). Note that the gains and the time constant of 16.5 s are similar to the values in Raphan et al. (1979) and to model fits to experimental data in Laurens et al. (2011). The dynamics of the leaky integrator can be observed in simulations, e.g. in Figure 4—figure supplement 2B, where the integrator is charged by vestibular signals at t = 0–10 s and t = 60–70 s, and subsequently discharges with a time constant of 16.5 s. The discharge of the integrator is also observed in Figure 4—figure supplement 2C when t > 60 s and in Figure 6—figure supplement 3C when t > 120 s.
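The discharge of such a leaky integrator can be illustrated in a few lines (a purely illustrative standalone sketch using the 16.5 s time constant quoted above, not the full filter):

```python
# Illustrative discharge of the velocity-storage leaky integrator: with no
# input, the stored state decays exponentially with a 16.5 s time constant.
dt, T = 0.01, 16.5          # time step (s) and velocity-storage time constant (s)
vs = 1.0                    # integrator state after charging (arbitrary units)
trace = []
for _ in range(int(T / dt)):        # simulate one time constant of discharge
    vs += dt * (-vs / T)            # d(vs)/dt = -vs / T with no input
    trace.append(vs)
# After one time constant the state has decayed to ~exp(-1) ≈ 0.37 of its
# initial value, matching the discharge seen in the figure supplements.
```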
Here we provide additional mathematical analyses of motion estimation during passive tilt. During passive tilt (i.e. without motor commands), the internal tilt estimate is driven entirely by feedback signals.
First, we note that these feedback terms combine into a single integrated quantity. In other words, the tilt estimate during passive tilt is computed by integrating the feedback signals.
Also, to a first approximation, the corresponding Kalman gain is close to the value required for perfect temporal integration, the canal error encodes tilt velocity, and the otolith error is approximately null. Therefore, during passive tilt (Figure 5, Figure 5—figure supplement 1), the internal model integrates tilt velocity signals that originate from the canals and are conveyed by feedback pathways.
Note, however, that the Kalman gain is slightly lower than the value required for perfect integration (Table 2). Yet, the final tilt estimate remains accurate due to an additional feedback originating from the otolith error, which can be analyzed as follows. Because the gain is slightly too low, the tilt estimate lags behind the real tilt, resulting in a small otolith error that contributes to the feedback signal. This error stabilizes at a steady-state value (gain values in Table 2).
Thus, a feedback signal originating from the otolith error complements the canal error. This effect is nevertheless too small to be appreciated in Figure 5.
Here we provide additional information about the Kalman gains and justify why some feedback signals are considered negligible.
First, we note that some values of the Kalman gain matrix (those involved in a temporal integration) scale with the time step δt (Table 2). This is readily explained by the following example. Consider, for instance, the gain of the vestibular feedback to the tilt estimate. During passive tilt, the Kalman filter tracks the tilt estimate by adding this gain-weighted canal error at each time step. Since tilt is computed by integrating canal signals over time, we would expect this update to approximate an exact temporal integration, that is, an increment of δt times the tilt velocity at each time step.
Therefore, we expect this gain to be approximately equal to δt. When simulations are performed with δt = 0.01 s, we indeed find a gain of 0.9·δt, and if simulations are performed again with a different time step, the gain scales proportionally. In other words, the Kalman gain computed by the filter is scaled as a function of δt in order to perform the operation of temporal integration (albeit with a gain of 0.9). For this reason, we express this gain as a multiple of δt in Table 2, which is more informative than its raw numerical value. Similarly, the other Kalman gain values that are computed by temporal integration of Kalman feedback are shown as a function of δt in Table 2.
Furthermore, the values of these feedback signals are divided by δt in all figures, for the same reason. If, for example, δt = 0.01 s, the corresponding gain would be 0.009 (0.9·δt). This value is correct but would cause the feedback to appear disproportionately small. In order to compensate for this, we plot the feedback divided by δt in the figures. A related feedback signal is scaled in a similar manner. In contrast, the remaining feedback signals are not scaled.
Note that the feedback gain from the canal error to the acceleration estimate is negative (Table 2). It compensates for part of the tilt error during tilt (see previous section), which generates an erroneous acceleration feedback. This component has a negligible magnitude and is not discussed in the text or included in the model of Figure 9.
Note also that one Kalman filter gain is practically equal to zero (Table 2). In practice, this means that the otoliths affect rotation perception only indirectly, through the tilt estimate. Accordingly, otolith-generated rotation signals (e.g. Figure 6—figure supplement 3C, from t = 60 s to t = 120 s) exhibit low-pass dynamics.
Because these gains are practically null and have no measurable effect on behavioral or neuronal responses, the corresponding feedback pathways are excluded from Figure 9.
Here we analyze the dynamics of the somatogravic effect. During passive linear acceleration, the otolith error determines a feedback that aligns the tilt estimate with the gravito-inertial acceleration and therefore minimizes the otolith error. This process can be modeled as a first-order low-pass filter.
The resulting equation shows that the tilt estimate behaves as a low-pass filter that converges towards the gravito-inertial acceleration with a time constant of 1.3 s (Table 2).
Note that an additional feedback adds, indirectly, a second component to the differential equation above, leading the estimate to transiently overshoot in Figure 6—figure supplement 1. Nonetheless, describing the somatogravic effect as a first-order low-pass filter is accurate enough for practical purposes.
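This first-order low-pass behavior can be illustrated as follows (an illustrative simulation using the 1.3 s time constant from Table 2; the 0.1 g acceleration value is our own example):

```python
# Illustrative somatogravic dynamics: the tilt estimate g_hat converges
# towards a sustained gravito-inertial acceleration F as a first-order
# low-pass filter with the ~1.3 s time constant quoted in the text.
dt, T_som = 0.01, 1.3
F = 0.1                                  # constant forward acceleration (g)
g_hat = 0.0                              # tilt estimate (equivalent g units)
for _ in range(int(5 * T_som / dt)):     # simulate ~5 time constants
    g_hat += dt * (F - g_hat) / T_som
# The sustained acceleration is now almost fully reinterpreted as tilt.
```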
In this model, we assume that the Kalman filter receives copies of motor commands that encode rotation velocity and linear acceleration. We assume that motor noise and external perturbations are two independent Gaussian processes that sum to generate the total unpredictable rotation and acceleration components. It should be noted that the internal model computations that underlie motor control, and in particular the transformation of muscle activity into rotations and accelerations, require a series of coordinate transformations that are not modelled here, and that these transformations may affect the covariance matrix of the motor noise and therefore of the unpredictable motion components.
We found that this simplistic description of motor inputs to the Kalman filter was adequate in this study for two reasons. First, the motor inputs appear only in the prediction stage of the Kalman filter. In all simulations presented in this study, we have observed that motor commands were transformed into accurate predictions of the resulting motion. We reason that, if the model of motor commands were changed, it would still lead to accurate predictions of the self-generated motion component, as long as the motor inputs are unbiased and sufficient to compute all motion variables, either directly or indirectly through the internal model. Under these hypotheses, simulation results would remain unchanged.
Regarding the covariance matrix of motor noise, we find that assuming that the unpredictable rotation and acceleration components are independent is sufficient to model the experimental studies considered here. Furthermore, we note that the Kalman filter could readily accommodate a more sophisticated noise covariance matrix, should it be necessary.
In the variant with visual rotation signals, the visual signal encodes rotation velocity corrupted by Gaussian noise with a standard deviation of 0.12 rad/s (Laurens and Droulez, 2008). This signal is incorporated into the sensory model by appending a corresponding row to the observation matrices and a corresponding entry to the sensory noise covariance.
The model of head motion and the matrix equations of the Kalman filter remain unchanged.
We created a variant of the Kalman filter in which trunk velocity in space and head velocity relative to the trunk are two independent variables. We assumed that head position relative to the trunk is sensed by neck proprioceptors. To model this, we added an additional variable N (for ‘neck’) that encodes the position of the head relative to the trunk, as well as a sensory modality that represents neck proprioception.
Total head velocity (which is not an explicit variable in the model but may be computed as the sum of trunk-in-space and head-on-trunk velocities) is sensed through the semicircular canals, which were modeled as previously.
The model of head and trunk motion is based on equations analogous to those of the main model.
Note that the first two equations are analogous to the rotation equation of the main model and imply that the trunk-in-space and head-on-trunk velocities are each the sum of self-generated and unpredictable components. The neck variable N encodes the integral of head-on-trunk velocity. The canal model is identical to that of the main model, its input being the velocity of the head in space, i.e. the sum of trunk-in-space and head-on-trunk velocities.
The sensory model includes the canal signal and a neck proprioceptive signal that encodes neck position. The canal equation is identical to that of the main model, and the proprioceptive signal is subject to its own sensory noise.
As in the main model, these equations are written in matrix form, and the model of sensory transduction is rewritten accordingly. Given the standard deviations of the unpredictable trunk and neck rotations and of the canal and proprioceptive noise, the covariance matrices of the process and sensory noise are constructed as in the main model.
Simulations were performed using the Kalman filter algorithm, as in the main model.
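A sketch of this head/neck generative model (with our own state ordering and labels, and with noise omitted as in the noise-free simulations described above) is:

```python
import numpy as np

# Assumed state ordering x = [Os, Oh, C, N]: Os = trunk-in-space velocity,
# Oh = head-on-trunk velocity, C = canal dynamics state, N = neck position
# (the integral of Oh). Motor input u = [trunk command, neck command].
dt, tau = 0.01, 4.0
D = np.array([[0.0, 0.0, 0.0,          0.0],
              [0.0, 0.0, 0.0,          0.0],
              [0.0, 0.0, 1.0 - dt/tau, 0.0],
              [0.0, 0.0, 0.0,          1.0]])
M = np.array([[1.0,    0.0],
              [0.0,    1.0],
              [dt/tau, dt/tau],   # canal state is driven by Os + Oh
              [0.0,    dt]])      # neck position integrates Oh
H = np.array([[1.0, 1.0, -1.0, 0.0],   # canal signal V = (Os + Oh) - C
              [0.0, 0.0,  0.0, 1.0]])  # proprioceptive signal P = N

# Noise-free active head-on-trunk rotation at 1 rad/s for 0.5 s:
x = np.zeros(4)
u = np.array([0.0, 1.0])
for _ in range(int(0.5 / dt)):
    x = D @ x + M @ u
V, P = H @ x   # neck position has integrated to 0.5 rad; the trunk is still
```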
In Figure 7—figure supplements 2 and 3, we note that passive neck motion induces a proprioceptive feedback that encodes neck velocity, although proprioceptive signals are assumed to encode neck position. This dynamic transformation is explained by analyzing the filter's update equations during passive motion.
Because the other gains onto the neck position estimate are negligible (Table 3), neck position is updated exclusively by its own proprioceptive feedback, and the update equation can be rearranged accordingly.
In a steady state, the update terms balance, so the proprioceptive error settles to a value proportional to neck velocity (with the gain values of Table 3).
Thus, for the reasons already pointed out in the section ‘Kalman feedback gains’, this feedback signal is scaled by δt in Figure 7—figure supplements 1–3. Also, because the corresponding gain (which is close to 1, see Table 3) does not scale with δt, the associated feedback is also scaled by δt in Figure 7—figure supplements 1–3.
Importantly, the equations above indicate that the neck proprioceptive error should encode neck velocity even though the proprioceptive signals are assumed to encode neck position.
Next, we note that, during passive neck rotation, the estimate of head velocity relative to the trunk is determined by the proprioceptive feedback. We therefore expect the corresponding Kalman gains to scale with δt, and simulations performed with different values of δt confirm this scaling. Accordingly, these Kalman gains are expressed as a function of δt in Table 3.
Note that the considerations above can equally explain the amplitude and dynamics of the predicted neck position and of the neck proprioceptive error in Figure 7—figure supplement 6. The simulation in Figure 7—figure supplement 6C is identical (with half the amplitude) to a passive rotation of the head relative to the trunk (Figure 7—figure supplement 2). Mathematically, it is equivalent to an active head motion superimposed on a passive rotation of the head with a gain of 0.5 in the opposite direction.
For the sake of simplicity, we have restricted our model to one dimension in this study. However, generalizing the model to three dimensions may be useful for further studies and is relatively easily accomplished by (1) replacing one-dimensional variables with three-dimensional vectors and (2) locally linearizing a non-linearity that arises from a vectorial cross-product, as shown in this section.
Generalization of the Kalman filter to three dimensions requires replacing each motion and sensory parameter with a 3D vector. For instance, the rotation velocity is replaced by three components that encode the three-dimensional rotation vector in a head-fixed reference frame. Sensory variables are likewise replaced by triplets that encode afferent signals from the canals and otoliths in three dimensions.
With one exception, all variables along one axis are governed by the same set of equations as the main model. Therefore, the full 3D model can be thought of as three independent Kalman filters operating along the three head axes. The only exception is the three-dimensional computation of tilt, which involves a vectorial cross-product.
Here the tilt and rotation variables are vectorial representations of the corresponding one-dimensional variables, and the tilt update is the cross-product of the rotation vector with the tilt vector. In matrix form, this non-linearity is implemented by placing the rotation components, with positive and negative signs, in the matrices that integrate rotation inputs into tilt, as shown below.
The implementation of the 3D Kalman filter is best explained by demonstrating how the matrices of the 1D filter are replaced by scaled-up matrices.
First, we replace all motion variables and inputs by triplets of variables along the three head axes.
Next, we scale the transition matrix up, each non-zero element of the 1D version being repeated for each of the three axes.
We build the motor input matrix in a similar manner. Furthermore, the element that encodes the integration of rotation into tilt is replaced by a set of terms that encode the cross-product, arranged as a skew-symmetric pattern of the rotation components.
As previously, the remaining matrices are built in the same manner. Note that the cross-product terms must be recomputed at each iteration, since the relevant state estimates change continuously.
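The cross-product terms can be checked against NumPy's cross product; the helper below builds the skew-symmetric pattern of rotation components that is inserted into the tilt rows (the sign convention, +Ω×G versus −Ω×G, depends on the frame definition and is an assumption here):

```python
import numpy as np

def skew(omega):
    """Skew-symmetric matrix S with S @ g == np.cross(omega, g). These are
    the +/- rotation terms inserted into the tilt rows of the 3D matrices,
    and they must be rebuilt at every iteration as omega changes."""
    wx, wy, wz = omega
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

omega = np.array([0.1, 0.0, 0.0])   # example roll rotation (rad/s), our choice
g = np.array([0.0, 0.0, 1.0])       # gravity along the head's z axis
dg = np.cross(omega, g)             # cross-product term of the tilt update
assert np.allclose(skew(omega) @ g, dg)
```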
Similarly, we build the sensor model by replacing each sensory variable and its noise term with a triplet along the three axes, scaling the observation and sensory noise covariance matrices up in the same manner.
Figure ice skating induces vestibulo-ocular adaptation specific to required athletic skills. Sport Sciences for Health 5:129–134. https://doi.org/10.1007/s11332-009-0088-4
Canal-neck interaction in vestibular nuclear neurons of the cat. Experimental Brain Research 46:269–280. https://doi.org/10.1007/BF00237185
Vestibular system: the many facets of a multimodal sense. Annual Review of Neuroscience 31:125–150. https://doi.org/10.1146/annurev.neuro.31.060407.125555
Estimating the sources of motor errors for adaptation and generalization. Nature Neuroscience 11:1454–1461. https://doi.org/10.1038/nn.2229
Bayesian approaches to sensory integration for motor control. Wiley Interdisciplinary Reviews: Cognitive Science 2:419–428. https://doi.org/10.1002/wcs.125
Velocity storage contribution to vestibular self-motion perception in healthy human subjects. Journal of Neurophysiology 105:209–223. https://doi.org/10.1152/jn.00154.2010
Abnormal postural control associated with peripheral vestibular disorders. Progress in Brain Research 76:263–275. https://doi.org/10.1016/S0079-6123(08)64513-6
Optimal estimator model for human spatial orientation. Annals of the New York Academy of Sciences 545:51–73. https://doi.org/10.1111/j.1749-6632.1988.tb19555.x
Modeling human spatial orientation and motion perception. AIAA Modeling and Simulation Technologies Conference. pp. 6–9.
Multimodal integration in rostral fastigial nucleus provides an estimate of body movement. Journal of Neuroscience 29:10499–10511. https://doi.org/10.1523/JNEUROSCI.1937-09.2009
Multimodal integration of self-motion cues in the vestibular system: active versus passive translations. Journal of Neuroscience 33:19555–19566. https://doi.org/10.1523/JNEUROSCI.3051-13.2013
Adaptive control of saccades via internal feedback. Journal of Neuroscience 28:2804–2813. https://doi.org/10.1523/JNEUROSCI.5300-07.2008
Internal models of self-motion: computations that suppress vestibular reafference in early vestibular processing. Experimental Brain Research 210:377–388. https://doi.org/10.1007/s00221-011-2555-9
The vestibular system: multimodal integration and encoding of self-motion for motor control. Trends in Neurosciences 35:185–196. https://doi.org/10.1016/j.tins.2011.12.001
Sensory coding in the vestibular thalamus discriminates passive from active self-motion. Program No. 181.10/JJJ22, Neuroscience Meeting Planner. Society for Neuroscience Meeting.
Über das Relativitätsprinzip und die aus demselben gezogenen Folgerungen. Jahrbuch der Radioaktivität und Elektronik 4:411–462.
Why neurons mix: high dimensionality for higher cognition. Current Opinion in Neurobiology 37:66–74. https://doi.org/10.1016/j.conb.2016.01.010
Modelling three-dimensional vestibular responses during complex motion stimulation. In: Three-Dimensional Kinematics of Eye, Head and Limb Movements. pp. 387–398.
Das Zusammenspiel von Otolithen und Bogengängen im Wirkungsgefüge der subjektiven Vertikale. Lehrstuhl für Nachrichtentechnik der Technischen Universität München, Munich, Germany.
Oculogravic illusion. Archives of Ophthalmology 48:605–615. https://doi.org/10.1001/archopht.1952.00920010616007
Modification of per- and postrotational responses by voluntary motor activity of the limbs. Experimental Brain Research 52:190–198. https://doi.org/10.1007/BF00236627
Vestibular Rehabilitation. In: R. W Baloh, G. M Halmagyi, editors. Disorders of the Vestibular System. Oxford: Oxford University Press. pp. 583–597.
Angular velocity detection by head movements orthogonal to the plane of rotation. Experimental Brain Research 95:77–83. https://doi.org/10.1007/BF00229656
Post-rotatory nystagmus and turning sensations after active and passive turning. Journal of Vestibular Research 8:299–312. https://doi.org/10.1016/S0957-4271(97)00079-7
Response of vestibular nerve afferents innervating utricle and saccule during passive and active translations. Journal of Neurophysiology 101:141–149. https://doi.org/10.1152/jn.91066.2008
Estimation of self-turning in the dark: comparison between active and passive rotation. Experimental Brain Research 128:491–504. https://doi.org/10.1007/s002210050872
A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82:35–45. https://doi.org/10.1115/1.3662552
A distributed, dynamic, parallel computational model: the role of noise in velocity storage. Journal of Neurophysiology 108:390–405. https://doi.org/10.1152/jn.00883.2011
Trunk position influences vestibular responses of fastigial nucleus neurons in the alert monkey. Journal of Neurophysiology 91:2090–2100. https://doi.org/10.1152/jn.00849.2003
The functional significance of velocity storage and its dependence on gravity. Experimental Brain Research 210:407–422. https://doi.org/10.1007/s00221-011-2568-4
How the vestibulocerebellum builds an internal model of self-motion. In: The Neuronal Codes of the Cerebellum. pp. 97–115. https://doi.org/10.1016/B978-0-12-801386-1.00004-6
Bayesian modelling of visuo-vestibular interactions. In: Probabilistic Reasoning and Decision Making in Sensory-Motor Systems. Heidelberg: Springer. pp. 279–300. https://doi.org/10.1007/978-3-540-79007-5_12
Computation of linear acceleration through an internal model in the macaque cerebellum. Nature Neuroscience 16:1701–1708. https://doi.org/10.1038/nn.3530
Processing of angular motion and gravity information through an internal model. Journal of Neurophysiology 104:1370–1381. https://doi.org/10.1152/jn.00143.2010
Experimental parameter estimation of a visuo-vestibular interaction model in humans. Journal of Vestibular Research: Equilibrium & Orientation 21:251–266. https://doi.org/10.3233/VES-2011-0425
Modélisation Bayésienne des interactions visuo-vestibulaires. Doctoral Dissertation, Paris, France.
Plasticity of cerebellar Purkinje cells in behavioral training of body balance control. Frontiers in Systems Neuroscience 9:113. https://doi.org/10.3389/fnsys.2015.00113
Perceptual precision of passive body tilt is consistent with statistically optimal cue integration. Journal of Neurophysiology 117:2037–2052. https://doi.org/10.1152/jn.00073.2016
Computational approaches to spatial orientation: from transfer functions to dynamic Bayesian inference. Journal of Neurophysiology 100:2981–2996. https://doi.org/10.1152/jn.90677.2008
Quantification of head movement predictability and implications for suppression of vestibular input during locomotion. Frontiers in Computational Neuroscience 11:47. https://doi.org/10.3389/fncom.2017.00047
Coding of self-motion signals in ventro-posterior thalamus neurons in the alert squirrel monkey. Experimental Brain Research 189:463–472. https://doi.org/10.1007/s00221-008-1442-5
Self-motion signals in vestibular nuclei neurons projecting to the thalamus in the alert squirrel monkey. Journal of Neurophysiology 101:1730–1741. https://doi.org/10.1152/jn.90904.2008
A systems concept of the vestibular organs. In: Vestibular System Part 2: Psychophysics, Applied Aspects and General Interpretations. Heidelberg: Springer. pp. 493–580.
Signal processing of semicircular canal and otolith signals in the vestibular nuclei during passive and active head movements. Annals of the New York Academy of Sciences 1004:169–182. https://doi.org/10.1196/annals.1303.015
Responses of ventral posterior thalamus neurons to three-dimensional vestibular and optic flow stimulation. Journal of Neurophysiology 103:817–826. https://doi.org/10.1152/jn.00729.2009
Vestibular signals in primate thalamus: properties and origins. Journal of Neuroscience 27:13590–13602. https://doi.org/10.1523/JNEUROSCI.3931-07.2007
Modeling the vestibulo-ocular reflex of the squirrel monkey during eccentric rotation and roll tilt. Experimental Brain Research 106:123–134. https://doi.org/10.1007/BF00241362
Canal-neck interaction in vestibular neurons of the cat's cerebral cortex. Experimental Brain Research 61:94–108. https://doi.org/10.1007/BF00235625
Human perception of horizontal trunk and head rotation in space during vestibular and neck stimulation. Experimental Brain Research 85:389–404. https://doi.org/10.1007/BF00229416
Characteristics of the VOR in response to linear acceleration. Annals of the New York Academy of Sciences 871:123–135. https://doi.org/10.1111/j.1749-6632.1999.tb09179.x
Velocity storage in the vestibulo-ocular reflex arc (VOR). Experimental Brain Research 35:229–248. https://doi.org/10.1007/BF00236613
Neural basis for eye velocity generation in the vestibular nuclei of alert monkeys during off-vertical axis rotation. Experimental Brain Research 92:209–226. https://doi.org/10.1007/BF00227966
Neural mechanisms for filtering self-generated sensory signals in cerebellum-like circuits. Current Opinion in Neurobiology 21:602–608. https://doi.org/10.1016/j.conb.2011.05.031
Neuromuscular adaptations during perturbations in individuals with and without bilateral vestibular loss. Doctoral Dissertation, University of Iowa, United States.
Vestibular and optokinetic symbiosis: an example of explaining by modelling. In: Control of Gaze by Brain Stem Neurons. Amsterdam: Elsevier. pp. 49–58.
Optimal control of natural eye-head movements minimizes the impact of noise. Journal of Neuroscience 31:16185–16193. https://doi.org/10.1523/JNEUROSCI.3721-11.2011
Error correction, sensory prediction, and adaptation in motor control. Annual Review of Neuroscience 33:89–108. https://doi.org/10.1146/annurev-neuro-060909-153135
Properties of cerebellar fastigial neurons during translation, rotation, and eye movements. Journal of Neurophysiology 93:853–863. https://doi.org/10.1152/jn.00879.2004
Canal-otolith interaction in the fastigial nucleus of the alert monkey. Experimental Brain Research 136:169–178. https://doi.org/10.1007/s002210000575
Vestibulo-ocular reflex and motion sickness in figure skaters. European Journal of Applied Physiology 104:1031–1037. https://doi.org/10.1007/s00421-008-0859-7
The head direction signal: origins and sensory-motor integration. Annual Review of Neuroscience 30:181–207. https://doi.org/10.1146/annurev.neuro.29.051605.112854
Sensory prediction errors drive cerebellum-dependent adaptation of reaching. Journal of Neurophysiology 98:54–62. https://doi.org/10.1152/jn.00266.2007
Neuronal activity in the vestibular nuclei of the alert monkey during vestibular and optokinetic stimulation. Experimental Brain Research 27:523–538. https://doi.org/10.1007/BF00239041
Frequency-selective coding of translation and tilt in macaque cerebellar nodulus and uvula. Journal of Neuroscience 28:9997–10009. https://doi.org/10.1523/JNEUROSCI.2232-08.2008
Spatiotemporal properties of optic flow and vestibular tuning in the cerebellar nodulus and uvula. Journal of Neuroscience 33:15145–15160. https://doi.org/10.1523/JNEUROSCI.2118-13.2013
Coding of velocity storage in the vestibular nuclei. Frontiers in Neurology 8:386. https://doi.org/10.3389/fneur.2017.00386
Responses of monkey vestibular-only neurons to translation and angular rotation. Journal of Neurophysiology 96:2915–2930. https://doi.org/10.1152/jn.00013.2006
Stefan Glasauer, Reviewing Editor; Ludwig-Maximilian University of Munich, Germany
In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.
Thank you for submitting your article "A unified internal model theory to resolve the paradox of active versus passive self-motion sensation" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by a Senior Editor and a Reviewing Editor. The following individual involved in review of your submission has agreed to reveal his identity: Laurence Harris (Reviewer #3).
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
This very timely and comprehensive theory paper studies the way self-motion may be estimated by the brain and mainly concerns the vestibular system during active and passive head movements. In particular, it suggests a unified theory for how active and passive motion can be estimated by a single internal model. This extends previous, well-established work by the authors and others applying internal models to the problem of estimation of passive motion. A Kalman filter framework was used to determine optimal (Bayesian) solutions. The paper draws on a considerable amount of previously published experimental evidence from a rich series of elegant experiments to argue the hypothesis and to consolidate a large amount of previously collected data into an explanatory model. The internal forward model proposed here is used to make a series of specific experimental predictions (mainly found in the supplementary information). The model's mathematical development and execution are very thoroughly motivated, as it draws on a solid body of previous computational work. The manuscript did an excellent job of simplifying the description of such complex models. Model simulation results are shown for different scenarios and compared descriptively with neurophysiologic evidence. A particularly strong point about the paper is that it makes multiple testable predictions and could be used, for example, to predict the consequences of experience in microgravity. The principle of this type of model may find application beyond the situation considered here, for example, in looking at how the sensory consequences of an eye movement are combined with retinal information to provide the perception of an apparently stable world (e.g., Bridgeman et al. 1994).
The reviewers appreciate the structure of the paper and the systematic testing using different conditions building in complexity. The unusual interactive connection with the supplemental material to try and keep the main paper shorter is also appreciated.
1) Novelty: A major concern for the paper is the nature of the advancement of our understanding of computational principles in the brain: internal forward models, priors and their impact on error-based movement correction have been extensively studied in the human reaching / motor control literature (from Wolpert et al. to Bernicker et al.; some of this should be mentioned to make a connection to this body of literature) and as such their ability to explain active and passive motion (of arms) is not especially novel. Thus, the main advance must lie in the domain of vestibular information processing, which needs to be motivated better.
2) Cancellation mechanism: The authors state that "we have tested the hypothesis that this postulated cancellation mechanism uses exactly the same sensory internal model computations…." While the authors have put forward one framework that explains a variety of phenomena, the reviewers wonder if other models could also explain these phenomena. Perhaps (at least for some of the scenarios) the motor command could be subtracted from the sensory estimate of a "passive only" internal model and yield the same response. This should be addressed by a comparison to an alternative "subtraction model". The simple model is probably what people naively think might be the case and so demonstrating its failings might give the paper more impact.
3) Reliance of motor command generation: The model does not include an additional feedback pathway – the reliance of motor command generation on sensory estimates. For example, a passive head movement could result in a stabilizing active motor command. Or an active head movement could be less than desired because of noise, requiring an adjustment of the motor command to compensate. Obviously these additional pathways complicate the solution to the Kalman filter gains. While the model is still an important contribution without these pathways, the limitations of the model without these pathways should be addressed.
4) Motor commands: The authors assume that the motor command is a noiseless signal that is perfectly executed. This apparently doesn't account for motor error, since the effect of motor commands is not deterministic and noiseless, as assumed by injecting the motor command as Xu (see Figure 1B) into the system model. It seems that this ignores motor noise and execution noise, even though these sources of noise are extremely important (see work by Wolpert and others). In the second paragraph of the subsection “Model of motor commands” the authors claim that 𝑋𝜀 includes motor noise, but this is not evident and needs to be discussed better. Motor noise would, in this reviewer's opinion, enter before applying M and therefore would have a different noise spectrum and/or covariance matrix than the system noise 𝑋𝜀.
An alternative view would be that processed efference copy signals (i.e. predicted sensory signals) have to be treated the same way as other sensory signals, because 1) the efference copy, just like any neural signal, cannot be assumed to be noiseless, and 2) this would take motor noise into account appropriately (instead of treating it as an external perturbation). Is that view wrong? Or would it provide an alternative to your present model? Approaches such as those described in papers by Wolpert et al. would consider efference copy input as containing noise. For example, MacNeilage and Glasauer (Front Comput Neurosci 2017) assume that vestibular and efference copy signals are combined like two sensory signals, weighted by their reliability.
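The reviewer's point about where the noise enters can be sketched numerically (scalar case; the gain and noise magnitudes are hypothetical): noise injected *before* the plant gain M contributes variance (M·σ_motor)² to the state, which in general differs from the covariance of system noise Xε added after the plant.

```python
# Sketch: motor noise entering before the plant gain M yields a state
# contribution with variance (m * sigma_motor)**2, distinct from the
# additive system noise. All values are illustrative assumptions.
import random
random.seed(0)

m = 0.1             # plant gain M (scalar stand-in)
sigma_motor = 0.5   # motor/execution noise std, enters BEFORE M
sigma_process = 0.05  # system noise (Xε analogue), enters AFTER M

samples = []
for _ in range(100_000):
    u_noisy = 1.0 + random.gauss(0.0, sigma_motor)       # noisy command
    x_next = m * u_noisy + random.gauss(0.0, sigma_process)
    samples.append(x_next)

mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
predicted = (m * sigma_motor) ** 2 + sigma_process ** 2  # 0.0025 + 0.0025
print(var, predicted)
```

The empirical variance matches (M·σ_motor)² + σ_process², illustrating that lumping motor noise into Xε requires rescaling it through M; with a matrix M the two covariances can differ in structure, not just magnitude.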
5) Reference frames: a theoretical concern is how the motor signals (and proprioceptive signals) arrive in the correct form and frame of reference to be compared with the sensory information.
6) Active vs. passive movements: As mentioned in the text (subsection “Interruption of internal model computations during proprioceptive mismatch”), the current model cannot reproduce some important results concerning active movements: when active head movement is blocked or torque is applied, "central neurons were shown to encode net head motion". The authors suggest that this "result may be reproduced by the Kalman filter by switching off the internal model of the motor plant.…". However, this actually means that the Kalman filter model fails to explain the result, because it needs an additional element. What the authors propose here sounds like an ad-hoc extension of the model (a switch), which could equally be interpreted as evidence against the model. It is important to discuss this more appropriately.
In the second paragraph of the aforementioned subsection, the authors suggest that a "proprioceptive mismatch" would cause the brain to stop using the internal model. However, that seems oversimplified: proprioceptive mismatch arises under a variety of natural conditions (e.g. increased head inertia when carrying food, or fatigue) in which it would make no sense at all to stop using the internal model entirely.
7) Parameter variations: Technically, systematic parameter variations testing the robustness of the model predictions are missing (this can easily be addressed).
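Such a robustness check could be as simple as sweeping one parameter and verifying that the key prediction degrades gracefully. A hypothetical sketch (scalar plant, illustrative values) perturbs the internal model's plant gain by ±50% and measures how much the active-movement cancellation is disrupted:

```python
# Sketch: robustness sweep over the internal model's plant gain.
# True plant uses the nominal gain; the internal model uses a scaled copy.
a_true, b_true = 0.9, 0.1   # assumed first-order plant parameters
peaks = {}
for scale in [0.5, 0.75, 1.0, 1.25, 1.5]:
    b_model = b_true * scale
    x = x_hat = 0.0
    peak = 0.0
    for t in range(50):
        y = x                                   # sensory signal
        peak = max(peak, abs(y - x_hat))        # residual sensory error
        x = a_true * x + b_true * 1.0           # true plant, nominal gain
        x_hat = a_true * x_hat + b_model * 1.0  # internal model, perturbed
    peaks[scale] = peak
print({k: round(v, 3) for k, v in peaks.items()})
```

The residual is zero for the matched model and grows smoothly with the mismatch, which is the kind of graded degradation a systematic parameter study would document across the model's full parameter set.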
8) One reviewer would also like to see more quantitative results showing time courses of adaptation in the model and how they match up with, e.g., measured firing rates. This raises the question of whether the model adapts at all or whether all Kalman filters operate with steady-state gains (as suggested by another reviewer).
https://doi.org/10.7554/eLife.28074.037
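The steady-state-gain question in point 8 can be sketched directly (scalar case, hypothetical noise variances): iterating the Riccati recursion shows the Kalman gain converging to a fixed value, so a stationary filter exhibits an initial transient but no ongoing adaptation once the gain has settled.

```python
# Sketch: scalar Riccati recursion converging to a steady-state Kalman gain.
# a = state transition, q = process noise var, r = measurement noise var
# (all values assumed for illustration; observation matrix H = 1).
a, q, r = 0.9, 0.01, 0.04
p, gains = 1.0, []
for _ in range(50):
    p_pred = a * p * a + q        # predicted error covariance
    k = p_pred / (p_pred + r)     # Kalman gain
    p = (1 - k) * p_pred          # updated error covariance
    gains.append(k)
print(gains[0], gains[-1])        # large initial gain, settled final gain
```

Whether the paper's filter runs this recursion online or uses the converged gains throughout is exactly the point the reviewer asks the authors to clarify.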
- Jean Laurens
- Dora E Angelaki
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
- Stefan Glasauer, Reviewing Editor, Ludwig-Maximilian University of Munich, Germany
© 2017, Laurens et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.