Abstract
Goal-directed movements can fail due to errors in our perceptual and motor systems. While these errors may arise from random noise within these sources, they also reflect systematic motor biases that vary with the location of the target. The origin of these systematic biases remains controversial. Drawing on data from an extensive array of reaching tasks conducted over the past 30 years, we evaluated the merits of various computational models regarding the origin of motor biases. Contrary to previous theories, we show that motor biases do not arise from systematic errors associated with the sensed hand position during motor planning or from the biomechanical constraints imposed during motor execution. Rather, motor biases are primarily caused by a misalignment between eye-centric and body-centric representations of position. This model can account for motor biases across a wide range of contexts, encompassing movements with the right versus left hand, proximal and distal effectors, visible and occluded starting positions, as well as before and after sensorimotor adaptation.
Introduction
Accurate movements are crucial for everyday activities, affecting whether a glass is filled or spilled, or whether a dart hits or misses the target. Some of these movement errors arise from sensorimotor noise, including visual noise regarding the location of targets and effectors1–3, planning noise introduced when issuing a motor command, and neuromuscular noise when executing a movement4–6.
In addition, some of these errors arise from systematic biases that vary across the workspace7–10. The origin of these biases remains controversial (Fig 1a): Whereas some studies postulate that motor biases stem from systematic distortions in perception7,10–13, others posit that biases originate from inaccurate motor planning and/or biomechanical constraints associated with motor execution14–16. In the following section, we provide an overview of current models of systematic motor biases as well as outline a novel hypothesis, setting the stage for a re-analysis of published data and presentation of new experimental results.
Starting at the input side, motor biases may arise from systematic distortions in the representation of the perceived target position (Fig 1b). A prominent finding in the visual cognition literature is that the remembered location of a visual stimulus is biased towards diagonal axes12,13,17. That is, the reported visual location of a stimulus is shifted towards the centroid of each quadrant. This perceptual bias does not depend on the method of response, as this phenomenon can be observed when participants point to a cued location or press a key to indicate the remembered location of a briefly presented visual target12,13. While this literature has emphasized that this form of perceptual bias arises from processing within visual working memory, it is an open question whether it contributes to goal-directed reaching when the visual target remains visible.
Another perceptual cause of motor biases stems from proprioception (Fig 1c). Systematic distortions in the perceived position of the hand18–20 and/or joint position21,22 can influence motor planning. For example, if the perceived starting position of the hand is leftwards of its true location, a reaching movement to a forward visual target would exhibit a rightward bias and a reaching movement to a rightward visual target would fall short7. A proprioceptive perceptual bias at the starting position has been reported to be the major source of bias in many previous studies18–22.
Whereas the preceding models have considered how distortions of visual or proprioceptive space might, on their own, lead to reaching biases, motor biases could also originate from a misalignment in the mapping between perceptual and motor reference frames. Based on classic theories of motor planning23, the start position and the target position are initially encoded in an eye-centric visual coordinate frame, and then transformed to representations in a body-centric proprioceptive coordinate frame within which the movement is planned (Fig 1d). Motor biases could arise from systematic distortions that occur during this visuo-proprioceptive transformation process24–26. Indeed, when participants are required to match the position of their unseen hand with a visual target, systematic transformation biases are observed across the workspace (Fig 1d; also see Methods)27,28. In the current study, we develop a novel computational model to capture how these transformation biases should result in systematic biases during reaching.
On the output side, it has been posited that reaching biases could arise from biomechanical factors that impact movement execution8. Specifically, movements may be biased toward trajectories that minimize inertial resistance and/or energetic costs15,16,29,30. For example, minimizing energy expenditure would result in biases towards trajectories that minimize resistive forces or changes in joint angles14,31. Moreover, inaccuracies in the internal model of limb dynamics could produce systematic execution biases. For example, underestimating the weight of the limb would result in reaches that overshoot the target14,32.
To determine the origin of motor biases, we formalized four computational models to capture the potential sources described above. As detailed in the Results section, the models predict distinct motor bias patterns in a center-out reaching task (Fig 1f-h). While prior research has focused on the impact of individual sources (e.g., vision or proprioception) on the pattern of motor errors, these studies often entail a limited set of contexts (e.g., reaching behavior only when the start position is visible or only with the right hand). However, looking across studies, the task context can result in dramatically different motor bias patterns; indeed, when plotted in polar coordinates across the workspace, the bias functions range from having a single peak7,10,21 to having four peaks12,13,17. This diversity underscores the absence of a comprehensive explanation for motor bias phenomena across different experimental designs and setups. Additionally, a notable limitation of earlier work is the reliance on small participant cohorts (n<10) and a restricted number of targets (typically 8). The sensitivity of such experiments is limited in terms of their capacity to discriminate between models.
To better evaluate sources of motor bias during reaching, we report a series of experiments involving a range of contexts, designed to test predictions of the different models. We compared movements performed with the right or left hand, proximal or distal effectors, under conditions in which the start position was either visible or not visible, and before and after implicit sensorimotor adaptation. To increase the power of model comparisons, we measured the motor bias function at a higher resolution (24 targets) and with a larger sample size (n > 50 per experiment).
Results
Motor biases across the workspace
To examine the pattern of motor biases during goal-directed movements, participants performed a center-out reaching task with their right hand (Fig 2a). We ran two versions of the study in Experiment 1. In Exp 1a, we used an 8-target version similar to that used in most previous studies7–10,21. To obtain better resolution of the motor bias pattern, we also conducted a 24-target version in Exp 1b. Within each experiment, participants first performed the task without visual feedback to establish their baseline bias and then completed a block with veridical continuous feedback to examine how feedback influences their biases. Motor biases were calculated as the angular difference between the target and hand when the movement amplitude reached the target distance (Fig 2b), with a positive error indicating a counterclockwise bias and a negative error indicating a clockwise bias.
Across the workspace, the pattern of motor biases exhibited a function with two peaks and two troughs (Fig 2c). From the 8-target experiment, larger biases were observed for the diagonal targets (45°, 135°, 225°, 315°) compared to the cardinal targets (0°, 90°, 180°, 270°)33,34. In terms of direction, reaches to diagonal targets were biased toward the vertical axis, and reaches for cardinal targets were biased in the counterclockwise direction. This pattern is similar to what has been observed in previous studies for right-handed movements8,9. With the higher resolution in the 24-target experiment, we see that the peaks of the motor bias function are not strictly aligned with the diagonal targets but are shifted towards the horizontal axis. Moreover, the upward shift of the motor bias function relative to the horizontal line suggests that clockwise biases are more prevalent compared to counterclockwise biases across the workspace.
Motor biases primarily emerge from a misalignment between visual and hand reference frames
We developed a series of models to capture how systematic distortions at different sensorimotor processing stages would cause systematic motor biases (Fig 3a). Here we consider processing associated with the perceived position of the target, the perceived position of the arm/hand, and planning processes required to transform a target defined in visual space to a movement defined in arm/proprioceptive space. Biases could also arise from biomechanical constraints. Given that biomechanical biases are not easily simulated, we will evaluate this hypothesis experimentally (see below).
We implemented four single-source models to simulate the pattern of the motor bias that would be predicted in a center-out reaching task (Fig 1f-h; see Methods). For the Visual Bias model, we assumed that the representation of the visual targets was biased towards the diagonals13. This model predicts a four-peaked motor bias function (Fig 1f). For the Proprioceptive Bias model, we considered two variants building on the core idea that the perceived starting position of the hand is distorted: A Vector-Based model in which the motor plan is a vector pointing from the perceived hand position to the target7,10 and a Joint-Based model in which the movement is encoded as changes in the shoulder and elbow joint angles to move the limb from a start position to a desired target location21,22 (see Fig S1). Importantly, both models predict a motor bias function with a single peak (Fig 1g). Taken together, models that focus on systematic distortions of perceptual information do not qualitatively capture the observed two-peaked motor bias function (Fig 2c).
The fourth model, the Transformation model, is based on the idea that the start and target positions are initially encoded in visual space and transformed into proprioceptive space for motor planning23. Motor biases may arise from a transformation error between these coordinate systems. Two prominent features stand out when this transformation error is empirically measured18–20,27. First, the direction of the transformation error is similar across the workspace (e.g., a leftward and downward matching error for right-handers). Second, the magnitude of the error increases with distance from the body (Fig 1d). As such, we simulated a visuo-proprioceptive error map by using a leftward and downward error vector with the magnitude scaled across the workspace based on the distance of each location to a reference point (Fig 1h Top). This model predicts a two-peaked motor bias function (Fig 1h Bottom), a shape strikingly similar to that observed in Exps 1a and 1b.
To quantitatively compare the models, we fit each model to the data in Exp 1b. The Transformation Bias model provided a good fit of the two-peaked motor bias function (R2=0.84, Fig 3a left, see Table S1 for parameters). Fig 1h (top) shows the recovered visual-proprioceptive bias map based on the parameters of the Transformation Bias model when fit to the reaching data in Exp 1b. The simulated results are very similar to the map measured empirically in a previous study20 (Fig 1e). In contrast, the Visual Bias and Proprioceptive Bias models provide poor fits (all R2<0.18, Fig 3a right). Thus, the simulations suggest that motor biases observed in reaching primarily originate from a transformation between visual and proprioceptive space.
A second way to evaluate the models is to compare the motor bias functions for the left and right hands. The Proprioceptive and Transformation Bias models predict that the bias function will be mirror-reversed for the two hands whereas the Visual Bias model predicts that the functions will be superimposed. We compared the functions for right and left hand reaches in Exp 2 using the 8-target layout. We found that the dissimilarity (RMSE) between the pattern of motor biases across two hands significantly decreased when the left-hand data are mirror-reversed compared to when the bias patterns are compared without mirror reversal (t(78) = 2.7, p = 0.008, Fig 2d-e). These results are consistent with the Transformation Bias model and provide further confirmation that the Visual Bias model, at least as conceptualized here, fails to provide a comprehensive account of reaching biases.
While the overall pattern of biases in the visuo-proprioceptive map are similar across individuals, there are subtle individual differences18,20. As such, we would anticipate that the motor bias function would also exhibit stable individual differences due to idiosyncrasies in the visuo-proprioceptive map. To examine this, we correlated the bias functions obtained from blocks in which we either provided no visual feedback or veridical endpoint feedback. The magnitude of the biases was attenuated when endpoint feedback was provided, likely because the feedback reduced the visuo-proprioceptive mismatch. Nonetheless, the overall pattern of motor bias was largely preserved, with the within-participant correlations (Exp 1a: rnorm = 0.999, Exp 1b: rnorm = 0.974) significantly higher than the averaged between-participant correlation in both Exp 1a and Exp 1b (Fig 2f).
Visual bias also contributes to the motor bias
In the preceding section, we considered each model in isolation, testing the idea that motor biases arise from a single source. However, the bias might originate from multiple sources. For example, there could be a distortion in both vision and proprioception, or a visuo-proprioceptive transformation that operates on distorted inputs. To address this, we evaluated hybrid models by combining the Visual Bias model with the Proprioceptive or Transformation Bias models. Although theoretically plausible, we did not consider a hybrid of the Proprioceptive and Transformation Bias models since they conflict in terms of whether the start position is perceived visually or proprioceptively.
The hybrid model that combines the Transformation and Visual Bias models (T+V model) provided an excellent fit of the motor bias pattern in Exp 1b (R2=0.973, Fig 3a). Based on a comparison of BIC values, this model not only outperformed the other hybrid models, but also significantly improved the fit compared to the Transformation Bias model alone. These results are especially interesting in that the assumed visual bias towards the diagonal axes has only been shown in studies in which perception was tested after the target had been extinguished. The current results suggest that this bias is also operative when the target remains visible, suggesting that the visual bias may reflect a general distortion in how space is represented, rather than a distortion that arises as information is processed in visual working memory.
To further evaluate the T+V model, we examined its performance in explaining the motor bias function obtained in an online study (Exp 3) in which participants performed the center-out task by moving a finger across a trackpad. One major difference between the in-person and online setups is that the workspace is much smaller and closer to the body when participants use a trackpad (Fig 2g). As such, the magnitude of the motor biases generated by transformation errors should be smaller in the online compared to the in-person setup (Fig 2h).
Consistent with the prediction of the T+V model, we found markedly smaller motor biases with the online setup (Exp 3) compared to the in-person setup (Exp 1) (Fig 2c). While the motor bias functions were similar across experiments, we observed two small peaks between 20° and 200° in Exp 3 that were not apparent in Exp 1. When we fit this function to single source models, the Visual Bias model outperformed the Transformation Bias model. This suggests that, when the movements are close to the body, visual biases make a relatively stronger contribution to the motor biases compared to transformation biases. Nonetheless, the T+V model again provides the best fit to the motor bias function (R2=0.857, Fig 3e, see Table S1 for parameters), significantly outperforming the other alternatives including the Visual Bias model (Fig 3f).
Transformation model accounts for qualitative changes in the motor bias function
The Transformation Bias model assumes that, for normal reaching, both the start and the target positions are encoded in visual space before being transformed into proprioceptive space for motor planning. However, if the start position is not visible, then the sensed start position would be directly encoded in proprioceptive space (i.e., where the hand is positioned), bypassing the need for a transformation between coordinate frames. As such, biases arising from the transformation process would be limited to those associated with the perceived position of the visual target. When we simulated the scenario in which the start position is not visible, the Transformation Bias model predicts a single-peaked function (Fig 4a right), a qualitative change from the two-peaked function predicted when both the start position and target position are visible (Fig 4a left).
To test this idea, we re-examined data from previous studies in which the participant’s hand was passively moved to a start position with no visual information given about the start location or hand position7,10,21. Strikingly, the motor bias function under this condition has only one peak. Thus, the Transformation Bias model provides a novel account of the difference in motor biases observed when the start position is visible (Exp 1-3) compared to when it is not visible.
We note that the one-peaked motor bias function has previously been interpreted as evidence in support of a Proprioceptive Bias model (Fig 1f)7,10,21. We performed a model comparison on the data from one of these studies10 and the T+V model outperformed the Proprioceptive Bias model, as well as the P+V models (ΔBIC=10.9). In addition, only the T+V model is able to account for the asymmetry between clockwise and counterclockwise biases. In summary, these results suggest that motor biases when reaching from an unseen start position arise when the target position is transformed from visual to proprioceptive coordinates rather than from a proprioceptive bias impacting the sensed start position. Moreover, the T+V model provides a parsimonious account of the bias functions, independent of the visibility of the start position.
Another way to evaluate the Transformation model is to perturb the position of the visual start position relative to the real hand position. Under this manipulation, a single peaked motor bias function is observed (Fig S2) 21,22. Interestingly, the functions exhibit opposing phase shifts when the starting position is perturbed to the left versus to the right (Fig S2). This qualitative change in the motor bias function can again be successfully captured by the Transformation model (Fig S2c). Taken together, these data provide strong evidence favoring the notion that motor biases originate primarily from a misalignment between visuo-proprioceptive reference frames.
Biomechanical models fail to account for motor biases
An alternative account of motor biases is that they arise from biomechanical constraints associated with upper limb movements. For example, movement kinematics have been explained in terms of cost functions that minimize energy and/or jerk35, constraints that may result in an increase in endpoint error15,16,29. One argument against a biomechanical model comes from evidence provided in the previous section: Biomechanical constraints associated with movement execution would not predict qualitative changes in the motor bias function in response to a visual manipulation of the start position.
As a second comparison of the T+V and Biomechanical Bias models, we examined how motor biases change after the sensorimotor map is recalibrated following a form of motor learning, implicit sensorimotor adaptation. Here we re-analyzed the data from previous experiments that had used a perturbation technique in which the visual feedback was always rotated by 15° from the target, independent of the hand position (Fig 5a, non-contingent clamped feedback36). Participants adapt to this perturbation, with subsequent reaches to the same target shifted in the opposite direction (Fig 5b), reaching an asymptote of around 20° and showing a robust aftereffect when the perturbation is removed. Participants are unaware of their change in hand angle in response to clamped feedback, reporting their perceived hand position to be close to the target37.
For the T+V model, the transformation between visual and proprioceptive space depends on the perceived positions of the start and target locations in a visual-based reference space, one that remains unchanged before and after adaptation. We assume that adaptation has changed a sensorimotor map that is referenced after the transformation from visual to proprioceptive space. As such, the heading angle after adaptation for each target location is obtained by summing the motor biases for that target location and the extent of implicit adaptation. This would result in a vertical shift of the motor bias function (Fig 5c top).
In contrast, the biomechanical model predicts that motor biases will be dependent on the actual movement direction rather than the target location (e.g., a bias towards a movement that is energetically efficient). Since the mapping between a target location and its corresponding reach direction is rotated after adaptation, the motor bias pattern would also be rotated (Fig 5c bottom). As such, the biomechanical model predicts that the motor bias function will be shifted along both the horizontal and vertical axes.
To arbitrate between these models, we analyzed the data from two previous studies, looking at the bias function from no-feedback trials performed before (baseline) and after adaptation (washout)36,38. Consistent with the prediction of the T+V model, the motor bias function shifted vertically after adaptation (Fig 5d) but did not shift horizontally.
To quantitatively evaluate these results, we first fit the motor bias function during the baseline phase with the T+V model and fixed the parameters. We then examined the heading angles during the aftereffect phase by fitting two additional parameters, one that allowed the function to shift vertically (v) and the other to allow the function to shift horizontally (h). The T+V model predicts that only v will be different than zero; in contrast, the Biomechanical model predicts that h and v will both be different than zero and should be of similar magnitude. The results clearly favored the T+V model (Fig 5e). The vertical shift in the bias functions was of a similar magnitude as the aftereffect, with the shift direction depending on the direction of the clamped feedback (v: CW: 12.5°; CCW: –12.2°, p<0.001). In contrast, the best fitting value for h was not significantly different from zero in either condition (Fig 5g). These results are consistent with the hypothesis that visual representations are first transformed into proprioceptive space for motor planning, with the recalibrated sensorimotor map altering the trajectory selected to achieve the desired movement outcome.
Discussion
While motor biases are ubiquitous in goal-directed reaching movements, the origin of these biases has been a subject of considerable debate. We addressed this issue by characterizing these biases across a range of experimental conditions and evaluated a set of computational models derived to capture different possible sources of bias. Contrary to previous theories, our results indicate that motor biases do not stem from a distortion in the sensed position of the hand7,10,21,22 or from biomechanical constraints during movement execution15,16,29. Instead, motor biases appear to arise from systematic distortions in perceiving the location of the visual target and the transformation required to translate a perceived visual target into a movement described in proprioceptive coordinates.24–26 Strikingly, our model successfully accounts for sensorimotor biases across a wide range of contexts, encompassing movements performed with either hand as well as with proximal and distal effectors. Our model also accounts for the qualitative changes in the motor bias function that are observed when vision of the starting position of the hand is occluded, and when the sensorimotor map is perturbed following implicit adaptation.
While motor biases have been hypothesized to reflect a mismatch across perceptual and motor coordinate systems,25,26 it is unclear what information is transformed and what reference frame is employed for motor planning. Interestingly, many previous studies posit that movement is planned in an eye-centric visual reference frame39–41. While the target can be directly perceived in this reference space, the start position of the hand would need to be transformed from a proprioceptive reference frame to a visual one. Systematic error in this transformation would mean that the start position of the hand is inaccurately represented in visual space, resulting in motor biases7,25. This idea underlies the Proprioceptive Bias models described in this paper.
In contrast to these models, our Transformation Bias model posits that movement is planned in a hand-centric proprioceptive reference frame. By this view, when both the target and start position are provided in visual coordinates, the sensorimotor system transforms these positions from visual space to proprioceptive space. Systematic error in this transformation process will result in motor biases. When vision of the start position is available, the Transformation Bias model successfully accounts for the two-peaked motor bias function (Exp 1). Even more compelling, the Transformation Bias model accounts for how the pattern of motor biases changes when the visibility of the start position is manipulated. When the start position is occluded, the transformation from visual to proprioceptive space is only relevant for the target position since the start position of the hand is already represented in proprioceptive space. Here the model predicts a motor bias function with a single peak, a function that has been observed in previous studies7,10.
We note that there is a third scenario, one in which both the start position and target position are provided in proprioceptive space. We predict that under this condition, motor biases originating from the visuo-proprioceptive transformation would completely disappear. Indeed, when the hand is passively moved first to the target location and then to the start position, subsequent reaches to the target do not show the signature of bias from a visuo-proprioceptive transformation12. Instead, the reaches exhibit a bias towards the diagonal axes, consistent with the predicted pattern if the sole source of bias is visual.
Why would a sensorimotor system exhibit inherent biases during the transformation process? We propose that these biases arise from two interrelated factors. First, these systems are optimally tuned for distinct purposes: A body-centric system predominantly uses proprioceptive and vestibular inputs to determine the orientation and position of the body in space, while an eye-centric system relies on visual inputs to interpret the layout of objects in the external world, representations that should remain stable even as the agent moves about in this environment42,43. Second, these sensory systems consistently receive information with very different statistical distributions2,44, perhaps because of these distinct functions. For example, visual inputs tend to cluster around the principal axes (horizontal and vertical)45,46, whereas proprioceptive information during reaching is clustered around diagonal axes47, likely because movements along these directions are often the least effortful and the most frequently produced48. The differences in computational goals and input distributions might have led to natural divergences in how each system represents space49, consequently resulting in a misalignment between the reference frames.
The Transformation Bias model addresses how biases arise when the information is passed along from a visual to a proprioceptive reference frame. However, the results indicate that another source of bias originates from a distortion within the visual reference frame itself, manifesting as an attractive bias towards the diagonal axes. Thus, the best fitting model posits two sources of bias, one related to the representation of the visual target and a second associated with the transformation process. This hybrid Transformation + Visual Bias model outperformed all single-source and hybrid models, providing an excellent fit of the behavioral data across a wide variety of contexts.
There are several hypotheses concerning the origin of this bias. One account has focused on the idea that these biases arise from distortions introduced in visual working memory. The biases are observed when participants report the location of a remembered visual target13,17,50, independent of the reporting method (pointing or keypresses)17. However, as mentioned above, similar biases are observed even when participants report the location of a proprioceptive target (i.e., participants match the position of their unseen hand).12 Moreover, as shown in our study, diagonal target biases make a sizable contribution even when the visual target is always present, imposing no demands on working memory. As such, we postulate that those visual biases may reflect a more domain-general distortion of spatial representations.
In addition to the diagonal bias, there may be other eye-centric, visual biases that impact movement. For instance, when required to fixate away from the target during motor planning, participants show systematic distortions in reach amplitude, with the movement undershooting or overshooting the target depending on the relative position of the eyes and target51,52. While these results have been taken as evidence for eye-centric motor planning, redirecting fixation will shift the target and/or start position into the periphery, introducing distortions of radial distance. These distortions will influence motor planning even if the planning occurs in proprioceptive space.
Our data suggest that biomechanical factors do not significantly impact motor biases. While we did not formalize a biomechanical model, we provided several lines of evidence suggesting that biomechanical factors have minimal influence on the pattern of motor biases. For example, it is hard to envision a biomechanical model that would account for the qualitative change in the bias function from when the start position was visible (two-peaked function) to when it was hidden (one-peaked function).
Empirically, we evaluated biomechanical contributions to motor biases by examining the bias pattern observed before and after implicit sensorimotor adaptation. We assume that adaptation mainly modifies a sensorimotor map53 but has a relatively smaller influence on a visuo-proprioceptive map28,37,54. That is, adaptation may change the mapping between a target represented in the proprioceptive space and the motor commands required to reach that location. Given that a biomechanical model assumes that motor biases are associated with the direction of a movement, this model would predict that the pattern of motor biases would be distorted by implicit motor adaptation. At odds with this prediction, the pattern of motor biases remained unchanged after adaptation, a result consistent with the Transformation Bias model.
Nonetheless, the current study does not rule out the possibility that biomechanical factors may cause motor biases in different contexts. In our study, biomechanical constraints may not be influential since the extent of the required movements was relatively modest and involved minimal interaction torques. Moreover, we focused on examining biases that manifest at the movement endpoint rather than in the movement trajectory where biomechanics are known to play a greater role.15,16 Future studies are necessary to explore whether biomechanical biases are more pronounced in contexts in which the limb dynamics are crucial and/or the movements are energetically costly.
Methods
Participants
For the lab-based studies (Exps 1 and 2), 206 undergraduate students (age: 18-24) were recruited from the University of California, Berkeley. For the online study (Exp 3), 183 young adult participants (age: 18-30) were recruited via Prolific, a website designed to recruit participants for online behavioral testing. All participants were right-handed, as assessed by the Edinburgh handedness test55, and had normal or corrected-to-normal vision. Each participant was paid $15/h. The protocol was approved by the institutional review board at the University of California, Berkeley.
Procedure
Experiments 1a, 1b, and 2
Experiments 1a, 1b, and 2 were conducted in the lab. Participants performed a center-out reaching task, holding a digitizing pen in the right or left hand to make horizontal movements on a digitizing tablet (49.3 cm x 32.7 cm, sampling rate = 100 Hz; Wacom, Vancouver, WA). The stimuli were displayed on a 120 Hz, 17-in. monitor (Planar Systems, Hillsboro, OR), which was mounted horizontally 25 cm above the tablet to preclude vision of the limb. The experiment was controlled by custom software coded in MATLAB (The MathWorks, Natick, MA), using Psychtoolbox extensions, and run on a Dell OptiPlex 7040 computer (Dell, Round Rock, TX) with the Windows 7 operating system (Microsoft Co., Redmond, WA).
Participants made reaches from the center of the workspace to targets positioned at a radial distance of 8 cm. The start position and target location were indicated by a white annulus (1.2 cm diameter) and a filled blue circle (1.6 cm), respectively. Vision of the hand was occluded by the monitor, and the lights were extinguished in the room to minimize peripheral vision of the arm. Feedback, when provided, was in the form of a 4 mm white cursor that appeared on the computer monitor, aligned with the position of the digitizing pen.
To start each trial, the participant moved the cursor to the start circle (5mm diameter). After maintaining the cursor within the start circle for 500 ms, a target appeared at one of the target locations. The participant was instructed to make a rapid slicing movement through the target. We did not impose any reaction time guidelines, allowing the participant to set their own pace to initiate the movement. On no-feedback trials, the cursor was blanked when the hand left the start circle, and the target was extinguished once the radial distance of the movement reached the target distance (8 cm). On feedback trials, the cursor was visible throughout the movement until the movement amplitude reached 8 cm; at that point, its position was frozen for 1 s, providing feedback of the accuracy of the movement (angular position with respect to the target). After this interval, the target and cursor were extinguished.
At the end of both the no-feedback and feedback trials, a white ring appeared denoting the participant’s radial distance from the start position. This ring was displayed to guide the participant back to the start position without providing angular information about hand position. Once the participant moved within 2 cm of the start position, the ring was extinguished, and a veridical cursor appeared to allow the participant to move their hand to the start position. If the amplitude of the hand movement did not reach the target (<8 cm radial distance) within 300 ms, the message “too slow” would be displayed for 500 ms before the white ring appeared.
For Exp 1a and Exp 2, there were 8 target locations, evenly spaced in 45° increments around the workspace (primary axes and main diagonals). For Exp 1b, there were 24 target locations, evenly spaced in 15° increments. Each experiment consisted of a no-feedback block followed by a feedback block. There were 5 trials per target (40 trials total) for each block in Exp 1a. There were 4 trials per target (96 trials total) for each block in Exp 1b.
Experiments 3a and 3b
Exps 3a and 3b were conducted using our web-based experimental platform (Tsay et al., 2021). Participants made center-out movements by controlling a cursor with the trackpad on their personal computers. It was not possible to occlude vision of the hand. However, since the visual stimulus was presented on a vertical monitor and the hand movement was in the horizontal plane, we assume vision of the hand was limited to the periphery (based on observations that the eyes remain directed to the screen during the trial). The size and position of visual stimuli were scaled based on each participant’s screen size (height = 239.6 ± 37.7 mm, width = 403.9 ± 69.5 mm). The experiment was controlled by custom software written with JavaScript and presented on Google Chrome. Data were collected and stored using Google Firebase.
The procedure was designed to mimic the lab-based experiments. On each trial, the participant made a center-out planar movement from the start position to a visual target. A white annulus (1% of screen height in diameter, 0.4 cm on average) indicated the start position, and a blue circle (1% of screen height in diameter) indicated the target location. The radial distance of the target from the start position was 40% of the screen height (5 cm on average). At the beginning of each trial, participants moved the cursor (0.6% of the screen height in diameter) to the start position, located at the center of their screen. The cursor was only visible when its distance from the start position was within 20% of the screen height. After maintaining the cursor at the start position for 500 ms, the target appeared. The participant made a rapid slicing movement through the blue target. As in the lab-based experiments, there were feedback and no-feedback trials. For feedback trials, the cursor was visible until it reached the target distance, and then froze for 1 s at the target distance. On no-feedback trials, the cursor was extinguished after the hand exited the start position and the target disappeared once the radial distance of the movement reached the target distance. 500 ms after the end of the trial, the cursor became visible, repositioned at a random location within 10% of the screen height from the start position. The participant then moved the cursor to the start position to trigger the next trial.
There were 8 target locations in Exp 3a and 24 target locations in Exp 3b. As with the lab-based experiments, each experiment included a no-feedback block followed by a feedback block. We obtained larger data sets in the online studies: For each block, there were 20 trials/target (160 total trials for Exp 3a and 480 total trials for Exp 3b).
Reanalysis of prior data sets
Vindras et al (2005). This study used a design in which the participant did not see the start position of the movement. This was achieved by not including start position information in the visual display and by passively moving the participant’s hand to a start position prior to each reach. Once the hand was positioned, a visual target appeared and the participant reached to that location. Across trials, there were two start positions, 12 target positions (spaced evenly by 30° around the workspace), and two target distances (6 and 12 cm). In modeling these data, we used the movement endpoint averaged across start positions and target distances.
Morehead et al (2017) & Kim et al (2018). We re-analyzed the data from the 15° conditions of Exp 4 in Morehead et al (2017) and Exps 1 and 2 in Kim et al (2018). These three experiments examined visuomotor adaptation using non-contingent clamped feedback. On perturbation trials, the feedback cursor was presented at the radial position of the hand but with a fixed 15° angular offset relative to the target. Participants were informed that the angular position was not contingent on their hand position and instructed to move directly to the target, ignoring the feedback. This method results in robust implicit adaptation, with the heading direction of the movement gradually shifting away from the target in the opposite direction of the cursor. Participants are unaware of this change in behavior (Tsay et al, 2020). In each experiment, there were three blocks: A no-feedback baseline block (10 trials/target), a clamped feedback block (60 trials/target), and a no-feedback washout block (10 trials/target).
Data analyses
Motor bias refers to the angular difference between the position of the hand and target when the hand reaches the endpoint target distance. Angular errors were plotted as a function of the target position with 0° corresponding to the rightward target (3 o’clock location) and 90° corresponding to the forward target. Positive bias values indicate a counterclockwise error, and negative values indicate a clockwise error.
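For reference, this computation reduces to a wrapped angular difference. The following is a minimal MATLAB sketch; handXY and targetAng are placeholder names for the hand position at the target radius and the target angle in degrees:

```matlab
handAng = atan2d(handXY(:,2), handXY(:,1));        % hand angle when amplitude reaches 8 cm
bias = mod(handAng - targetAng + 180, 360) - 180;  % wrap to (-180, 180]; positive = CCW
```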
To assess the similarity of the motor bias functions across different conditions, we calculated the normalized correlation coefficient:

rnorm = r / rmax

where r is the Pearson correlation coefficient between the two motor bias functions and rmax estimates the correlation coefficient between the recorded motor bias function and the true (but unknown) underlying motor bias function from that condition. To calculate rmax, we used a method developed to measure the noise ceiling for EEG/fMRI data56:

rmax = sqrt(2·rhalf / (1 + rhalf))

where rhalf is determined by splitting the data set (based on participants) into random halves and calculating the correlation coefficient between the first half and the second half of the data. We bootstrapped rhalf by resampling the data 2000 times and used the average value. rmax is calculated separately for each condition in a pair, and the smaller value is applied as the normalizer for rnorm.
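As a concrete illustration, the noise ceiling can be computed as follows (a minimal sketch of our reading of this procedure; biasMat is a hypothetical participants × targets matrix of motor biases):

```matlab
nBoot = 2000;                          % bootstrap resamples of the split
n = size(biasMat, 1);                  % number of participants
rHalfBoot = zeros(nBoot, 1);
for b = 1:nBoot
    idx = randperm(n);                 % random split of participants
    f1 = mean(biasMat(idx(1:floor(n/2)), :), 1);
    f2 = mean(biasMat(idx(floor(n/2)+1:end), :), 1);
    c = corrcoef(f1, f2);
    rHalfBoot(b) = c(1, 2);            % split-half correlation
end
rHalf = mean(rHalfBoot);
rMax = sqrt(2*rHalf / (1 + rHalf));    % noise ceiling (Spearman-Brown)
```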
Models
To examine the source of motor bias, we considered four single-source models and three multiple-source models.
Visual Bias model
The Visual Bias model postulates that movement biases arise because the perceived position of the visual target is systematically distorted (Fig 1b). Here we draw on the work of Huttenlocher et al13. In their study, a visual target was presented for 1 s at a location on an invisible circle and then blanked. The participant then indicated the remembered position of the target by pointing to a position on a circular digitizing pad. The results showed a bias towards the four diagonal directions (45°, 135°, 225°, 315°), with the magnitude of this bias increasing linearly as a function of the distance from the diagonals. As such, the maximum bias was observed for targets close to the four cardinal target locations (0°, 90°, 180°, 270°), and the sign of the bias flipped at the four cardinal target locations.
We used the shape of this function to model bias associated with the perception of the location of the visual targets. To obtain a continuous function, we assumed a transition zone around each cardinal target with a half-width represented by the parameter a (Fig 1a); the peak bias is represented by the parameter b. Letting d denote the signed angular distance from a target at x° to its nearest cardinal axis, the angular bias (y) can be formalized as:

y = b·d/a, for |d| ≤ a
y = sign(d)·b·(45° − |d|)/(45° − a), for a < |d| ≤ 45°
This model has two free parameters (a and b). If participants directly reach to the perceived target location, their motor biases will directly reflect their visual biases.
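The function above can be simulated directly. Here is a minimal MATLAB sketch with illustrative, unfitted parameter values:

```matlab
a = 10; b = 2;                          % transition half-width and peak bias (deg)
x = 0:1:359;                            % target angles (deg)
d = mod(x + 45, 90) - 45;               % signed distance to the nearest cardinal axis
y = zeros(size(x));
inZone = abs(d) <= a;                   % transition zone straddling each cardinal axis
y(inZone) = b * d(inZone) / a;          % bias ramps through zero at the cardinal axes
y(~inZone) = sign(d(~inZone)) .* b .* (45 - abs(d(~inZone))) / (45 - a);
plot(x, y); xlabel('Target angle (deg)'); ylabel('Predicted visual bias (deg)');
```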
Vector-based Proprioceptive Bias model
Vindras et al.7,10 proposed a model in which movement biases result from a misperception in estimating the initial position of the hand (Fig 1c). Specifically, it has been shown that the perceived position of the hand when placed near the center of the workspace is biased towards the ipsilateral side and away from the body18,19,27. Assuming that the planned movement is formed by a vector pointing from the sensed hand position to the visual target position, this proprioceptive distortion will result in systematic motor biases around the workspace. For example, for the target at 90°, misperceiving the initial position of the right hand to the right of the start position will result in a movement that is biased in the counterclockwise (leftward) direction.
To simulate this Proprioceptive Bias model, we assumed that the perceived start position deviates from the true start position (0, 0) by a proprioceptive error vector (xe, ye) (e.g., biased rightward, away from the midline). For a target i at [xi, yi], the motor plan is a vector [xi − xe, yi − ye]. From this, we calculated the angular difference between the motor plan vector and the target position to generate the motor bias for each target. The two free parameters in this model are [xe, ye].
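For example (a minimal sketch; the error vector below is illustrative rather than fitted):

```matlab
xe = 1; ye = -0.5;                               % assumed misperception of the start (cm)
targAng = 0:15:345;                              % 24 target directions (deg)
tx = 8*cosd(targAng); ty = 8*sind(targAng);      % targets at 8 cm
planAng = atan2d(ty - ye, tx - xe);              % vector from perceived hand to target
bias = mod(planAng - targAng + 180, 360) - 180;  % positive = CCW
```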
Joint-based Proprioceptive Bias model
Reaching movements may also be planned in joint coordinates rather than the hand (endpoint) position21,22. Based on this hypothesis, motor biases could come about if there is a misperception of the initial elbow and shoulder joint angles. To implement a Joint-Based Proprioceptive Bias model, we represent the lengths of the forearm and upper arm as l1 and l2, respectively. We denote the initial angles of the shoulder and elbow joints as θ0 and φ0, respectively, and their associated perceptual errors as θe and φe (see Fig S1).
By setting the origin of the coordinate system at the right shoulder, P0 = (0, 0), the hand position for shoulder and elbow angles θ and φ can be represented as:

P(θ, φ) = [l2·cos(θ) + l1·cos(θ + φ), l2·sin(θ) + l1·sin(θ + φ)]

For a fixed position in the workspace, there will be a unique solution pair for θ and φ (π > φ > θ > 0), should a solution exist. To calculate the required change in joint angle to reach a visual target, we assumed that the system plans a movement based on the perceived hand position:

P̂ = P(θ0 + θe, φ0 + φe)

We then solve the following equation to determine the joint-angle changes Δθi and Δφi that would transfer the perceived hand position from the start position to a target i at [xi, yi]:

[xi, yi] = P(θ0 + θe + Δθi, φ0 + φe + Δφi)

We calculated the real movement direction based on the real hand position:

Δhi = P(θ0 + Δθi, φ0 + Δφi) − P(θ0, φ0)

We compare the direction of Δhi with the target direction to calculate the motor bias. For simplicity, we assume l1 = l2 = 24 cm57. The four free parameters in this model are θ0, φ0, θe, and φe.
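A compact numerical version of this model follows (a sketch under our assumptions; the start position, the parameter values, and the planar two-joint geometry conventions are illustrative rather than the fitted values):

```matlab
l2 = 0.24; l1 = 0.24;                      % upper arm (l2) and forearm (l1) lengths (m)
thetaE = deg2rad(3); phiE = deg2rad(-2);   % assumed joint misperceptions (free params)
fk = @(th,ph) [l2*cos(th) + l1*cos(th+ph), l2*sin(th) + l1*sin(th+ph)];
start = [0 0.35];                          % assumed start position, 35 cm from shoulder
[th0, ph0] = ik(start, l2, l1);            % true initial joint angles
targAng = 0:15:345; bias = zeros(size(targAng));
for k = 1:numel(targAng)
    targ = start + 0.08*[cosd(targAng(k)) sind(targAng(k))];  % 8 cm reach
    [thP, phP] = ik(targ, l2, l1);         % angles placing the *perceived* hand on target
    dTh = thP - (th0 + thetaE); dPh = phP - (ph0 + phiE);     % planned joint changes
    hand = fk(th0 + dTh, ph0 + dPh);       % execute the changes from the *true* angles
    move = hand - start;
    bias(k) = mod(atan2d(move(2), move(1)) - targAng(k) + 180, 360) - 180;
end

function [th, ph] = ik(p, l2, l1)
    % Analytic inverse kinematics for a planar two-joint arm.
    c = (p(1)^2 + p(2)^2 - l2^2 - l1^2) / (2*l2*l1);
    ph = acos(min(max(c, -1), 1));                       % elbow angle in [0, pi]
    th = atan2(p(2), p(1)) - atan2(l1*sin(ph), l2 + l1*cos(ph));
end
```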
Transformation Bias model
The Transformation Bias model attributes motor biases to systematic errors that arise during the transformation from a visual-based to a proprioceptive-based reference frame. To implement this model, we refer to an empirically derived visuo-proprioceptive error map from a data set that sampled most of the reachable space (Fig 1d)20. Specifically, in that study, participants were asked to move their unseen hand from a random start position to a visual target. Rather than require a discrete reaching movement, they were told to continuously adjust their hand position, focusing on accuracy in aligning the hand with the target. The direction of the error was relatively consistent across targets, with the final hand position shifted to the right and undershooting the target. The magnitude of these biases increased as the radial extent of the limb increased. This basic pattern has been observed across studies using different visuo-proprioceptive matching methods18,19,27,28,54,58.
The matching errors provide an empirical measure of the transformation from a visual reference frame to a proprioceptive reference frame. To model these data, we defined a transformation error vector, [xe, ye], whose direction is fixed across space. We then defined a “reference position” with a coordinate of [xr, yr]. For upper-limb movements, this reference position is often considered to be positioned around the shoulder59. The transformation error vector at position i is scaled by its Euclidean distance (di) to the reference position:

Ti = di·[xe, ye], with di = ‖[xi, yi] − [xr, yr]‖
Movement towards a target i is planned via the vector connecting the start position to the target in proprioceptive space, denoted as:

Mi = ([xi, yi] + Ti) − (P0 + T0)

where T0 is the transformation vector at the start position P0, which is set as [0, 0]. Motor bias is calculated as the angular difference between the motor plan and the target. The four free parameters in the Transformation Bias model are xe, ye, xr, and yr.
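For illustration, the model can be simulated as follows (a minimal sketch; the error vector and reference position are illustrative rather than fitted values):

```matlab
xe = -0.02; ye = -0.03;                  % transformation error per meter (illustrative)
xr = 0.05; yr = -0.45;                   % reference position near the shoulder (m)
T = @(p) norm(p - [xr yr]) * [xe ye];    % error scales with distance to the reference
start = [0 0]; T0 = T(start);
targAng = 0:15:345; bias = zeros(size(targAng));
for k = 1:numel(targAng)
    targ = start + 0.08*[cosd(targAng(k)) sind(targAng(k))];  % 8 cm targets
    M = (targ + T(targ)) - (start + T0);                      % plan in proprioceptive space
    bias(k) = mod(atan2d(M(2), M(1)) - targAng(k) + 180, 360) - 180;
end
```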
Hybrid models
The four models described above each attribute motor biases to a single source. However, the bias might originate from multiple processes. To formalize this hypothesis, we considered three hybrid models, combining the Visual Bias model with the two versions of the Proprioceptive Bias model and with the Transformation Bias model. We did not create a hybrid of the Proprioceptive and Transformation Bias models since they make different assumptions about the information used to derive the motor plan.
Proprioceptive Bias + Visual Bias (P+V) model
We created two hybrid models, combining the Visual Bias model with the Vector-Based and Joint-Based Proprioceptive Bias models. The Visual Bias model is used to estimate the systematic error in the perceived location of the target, and the Proprioceptive Bias models are used to estimate the systematic error in the perceived position of the hand at the start position. For these models, we calculated the biases from the two models separately and then added them together:

bi = bP,i + bV,i
where bi refers to the total bias at target i, with bP,i and bV,i the biases predicted by the Proprioceptive and Visual Bias models, respectively.
Transformation Bias + Visual Bias (T+V) model
The Transformation Bias model attributes motor biases to error that arises during the transformation of spatial information from visual space to proprioceptive space. However, the perceived location of the visual target may itself be biased. While we recognize that the misperception of the visual target may influence the transformation bias, the visual bias is usually very small in our experiments, with a peak < 2°. As such, we simplified the calculation by computing the biases from the Transformation Bias and Visual Bias models separately and then adding the values together:

bi = bT,i + bV,i
Model comparison
To compare the models, we fit each model with the data from Experiments 1b and 3b, in which reaches were made to 24 targets. We evaluated our models on the average data across participants. We used the fminsearchbnd function in MATLAB to minimize the sum of squared errors (SSE) and used the Bayesian information criterion (BIC) for model comparison:

BIC = k·ln(n) − 2·LL

where k is the number of parameters of the model, n is the total number of trials, and LL is the sum of the log-likelihood across all trials of all participants. Smaller BIC values correspond to better fits.
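Assuming i.i.d. Gaussian residuals (our assumption for converting the minimized SSE into a log-likelihood), the computation reduces to the following sketch; allErrors, SSE, and k are placeholder names:

```matlab
n = numel(allErrors);                  % total number of trials entering the fit
sigma2 = SSE / n;                      % ML estimate of the residual variance
LL = -n/2 * (log(2*pi*sigma2) + 1);    % maximized Gaussian log-likelihood
BIC = k*log(n) - 2*LL;                 % smaller is better
```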
Modeling motor bias after implicit sensorimotor adaptation
To examine how the motor bias function changes after visuomotor adaptation, we first used the T+V model to fit the motor bias function from the no-feedback baseline block tested prior to the introduction of the perturbation. We then used the best-fitted baseline model (TVb) to estimate the shift in the motor bias function from data obtained in the no-feedback aftereffect block following adaptation:

b(θi) = TVb(θi − h) + v

where b(θi) is the motor bias at target θi in the aftereffect block, and v and h indicate the vertical and horizontal shifts, respectively. To estimate the distributions of v and h, we bootstrapped the participants with replacement 200 times and fit v and h to the group average of each bootstrapped sample.
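A minimal sketch of this fit (TVb, theta, and aftereffect are placeholder names for the frozen baseline model, the target angles in degrees, and the mean aftereffect biases):

```matlab
shifted = @(p) TVb(theta - p(2)) + p(1);           % p = [v, h]
loss = @(p) sum((aftereffect - shifted(p)).^2);    % sum of squared errors
pHat = fminsearch(loss, [0 0]);                    % v = pHat(1), h = pHat(2)
```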
Funding
RBI is funded by the NIH (grants NS116883 and NS105839). JST is funded by the NIH (F31NS120448).
Acknowledgements
We thank Zixuan Wang and Anisha Chandy for helpful discussions. We thank Anisha Chandy for data collection.
Competing interests
RI is a co-founder with equity in Magnetic Tides, Inc.
Data availability statement
Data and code are available at https://github.com/shion707/Motor-Bias
References
- 1. The statistical determinants of adaptation rate in human reaching. J. Vis 8.
- 2. Combining priors and noisy visual cues in a rapid pointing task. J. Neurosci 26:10154–10163.
- 3. A sensory source for motor variation. Nature 437:412–416.
- 4. Motor unit recruitment strategies and muscle properties determine the influence of synaptic noise on force steadiness. J. Neurophysiol 107:3357–3369.
- 5. The scaling of motor noise with muscle strength and motor unit number in humans. Exp. Brain Res 157:417–430.
- 6. The Role of Variability in Motor Learning. Annu. Rev. Neurosci 40:479–498.
- 7. Pointing Errors Reflect Biases in the Perception of the Initial Hand Position. J. Neurophysiol 79:3290–3294.
- 8. Accuracy of planar reaching movements. II. Systematic extent errors resulting from inertial anisotropy. Exp. Brain Res 99:112–130.
- 9. Learning a visuomotor transformation in a local area of work space produces directional biases in other areas. J. Neurophysiol 73:2535–2539.
- 10. Error parsing in visuomotor pointing reveals independent processing of amplitude and direction. J. Neurophysiol 94:1212–1224.
- 11. Categorical biases in spatial memory: the role of certainty. J. Exp. Psychol. Learn. Mem. Cogn 41:473–481.
- 12. A common format for representing spatial location in visual and motor working memory. Psychon. Bull. Rev. https://doi.org/10.3758/s13423-023-02366-3.
- 13. Spatial categories and the estimation of location. Cognition 93:75–97.
- 14. Directional biases reveal utilization of arm’s biomechanical properties for optimization of motor behavior. J. Neurophysiol 98:1240–1252.
- 15. A minimum energy cost hypothesis for human arm trajectories. Biol. Cybern 76:97–105.
- 16. Evaluation of trajectory planning models for arm-reaching movements based on energy cost. Neural Comput 21:2634–2647.
- 17. Stable individual signatures in object localization. Curr. Biol 27:R700–R701.
- 18. The Proprioceptive Map of the Arm Is Systematic and Stable, but Idiosyncratic. PLoS One 6.
- 19. The precision of proprioceptive position sense. Exp. Brain Res 122:367–377.
- 20. Accuracy of hand localization is subject-specific and improved without performance feedback. Sci. Rep 10.
- 21. Flexible strategies for sensory integration during motor planning. Nat. Neurosci 8:490–497.
- 22. Multisensory integration during motor planning. J. Neurosci 23:6982–6992.
- 23. Direct visuomotor transformations for reaching. Nature 416:632–636.
- 24. Errors in pointing are due to approximations in sensorimotor transformations. J. Neurophysiol 62:595–608.
- 25. A coordinate system for the synthesis of visual and kinesthetic information. J. Neurosci 11:770–778.
- 26. Frames of reference for hand orientation. J. Cogn. Neurosci 7:182–195.
- 27. Proprioceptive localization of the left and right hands. Exp. Brain Res 204:373–383.
- 28. Reach adaptation and proprioceptive recalibration following exposure to misaligned sensory input. J. Neurophysiol 103:1888–1895.
- 29. Task performance is prioritized over energy reduction. IEEE Trans. Biomed. Eng 56:1310–1317.
- 30. Slowing of movements in healthy aging as a rational economic response to an elevated effort landscape. J. Neurosci. https://doi.org/10.1523/JNEUROSCI.1596-23.2024.
- 31. Moving effortlessly in three dimensions: does Donders’ law apply to arm movement? J. Neurosci 15:6271–6280.
- 32. Impairments of reaching movements in patients without proprioception. I. Spatial errors. J. Neurophysiol 73:347–360.
- 33. Statistics predict kinematics of hand movements during everyday activity. J. Mot. Behav 41:3–9.
- 34. Alignment to natural and imposed mismatches between the senses. J. Neurophysiol 109:1890–1899.
- 35. The coordination of arm movements: an experimentally confirmed mathematical model. J. Neurosci 5:1688–1703.
- 36. Characteristics of Implicit Sensorimotor Adaptation Revealed by Task-irrelevant Clamped Feedback. J. Cogn. Neurosci 29:1061–1074.
- 37. Continuous reports of sensed hand position during sensorimotor adaptation. J. Neurophysiol 124:1122–1130.
- 38. Invariant errors reveal limitations in motor correction rather than constraints on error sensitivity. Commun Biol 1.
- 39. Reach plans in eye-centered coordinates. Science 285:257–260.
- 40. Spatial Transformations for Eye–Hand Coordination. In: Encyclopedia of Neuroscience. Elsevier: 203–211.
- 41. Gaze-centered remapping of remembered visual space in an open-loop pointing task. J. Neurosci 18:1583–1594.
- 42. The proprioceptive senses: their roles in signaling body shape, body position and movement, and muscle force. Physiol. Rev 92:1651–1697.
- 43. Proprioception: a new look at an old concept. J. Appl. Physiol 132:811–814.
- 44. Human representation of visuo-motor uncertainty as mixtures of orthogonal basis distributions. Nat. Neurosci 18:1152–1158.
- 45. Variability in encoding precision accounts for visual short-term memory limitations. Proc. Natl. Acad. Sci. U. S. A 109:8780–8785.
- 46. A unifying theory explains seemingly contradictory biases in perceptual estimation. Nat. Neurosci. https://doi.org/10.1038/s41593-024-01574-x.
- 47. Movement Repetition Facilitates Response Preparation. Cell Rep 24:801–808.
- 48. A representation of effort in decision-making and motor control. Curr. Biol 26:1929–1934.
- 49. When Feeling Is More Important Than Seeing in Sensorimotor Adaptation. Curr. Biol 12:834–837.
- 50. Distinguishing response from stimulus driven history biases. bioRxiv https://doi.org/10.1101/2023.01.11.523637.
- 51. Behavioral reference frames for planning human reaching movements. J. Neurophysiol 96:352–362.
- 52. Updating target distance across eye movements in depth. J. Neurophysiol 99:2281–2290.
- 53. Understanding implicit sensorimotor adaptation as a process of proprioceptive re-alignment. Elife 11.
- 54. Sensory recalibration of hand position following visuomotor adaptation. J. Neurophysiol 102:3505–3518.
- 55. The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia 9:97–113.
- 56. Measuring the Performance of Neural Models. Front. Comput. Neurosci. 10. https://doi.org/10.3389/fncom.2016.00010.
- 57. Anthropometric reference data for children and adults: United States, 2007-2010. Vital Health Stat 11:1–48.
- 58. Functional neuroanatomy of proprioception. J. Surg. Orthop. Adv 17:159–164.
- 59. The perceived position of the hand in space. Percept. Psychophys 62:363–377.
Copyright
© 2024, Wang et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.