Different Causes of Motor Biases.

(a) Motor biases may originate from biases in perceiving the initial hand position (proprioceptive bias), perceiving the location of the visual target (visual bias), transforming positional information from visual to proprioceptive space (transformation bias), and/or biomechanical constraints during motor execution. Previous models attribute motor biases to errors originating from the distinct contributions of visual (b) and proprioceptive (c) biases. (d) Our model attributes motor biases to a transformation error between visual and proprioceptive coordinate systems. (e) A visuo-proprioceptive map showing the matching error between proprioceptive and visual space (Wang et al., 2020). Participants matched the position of their hand (tip of the arrow), starting from a random location, to the position of a visual target (end of the arrow). The blue dot depicts an example of a visual target in the workspace, and the red arrow indicates the corresponding matched hand position. Participants were asked to maximize spatial accuracy rather than focus on speed. (f-h) Motor bias functions predicted by the four models. Top: Illustration of how each model is applied to a center-out reaching task. As an example, the predicted motor plan and the corresponding real movement are provided for the 100° target in f and the 135° target in g and h. Bottom: The predicted motor bias functions qualitatively differ in terms of the number of peaks and troughs. Note that the middle panel depicts two variants of a proprioception model.
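
For readers who prefer a compact formalism, the single-source accounts in (b-d) can be summarized schematically as follows (our shorthand, not the exact parameterization used in the modeling): with $x_S$, $x_T$, and $x_H$ denoting the start, target, and hand positions, $\varepsilon_p$ a proprioceptive bias, $\varepsilon_v$ a visual bias, and $T(\cdot)$ the (distorted) mapping from visual to proprioceptive coordinates, the planned movement vector is approximately

\[
p_{\mathrm{prop}} = x_T - (x_H + \varepsilon_p), \qquad
p_{\mathrm{vis}} = (x_T + \varepsilon_v) - x_S, \qquad
p_{\mathrm{trans}} = T(x_T) - T(x_S),
\]

so that errors arise from a misperceived hand position, a misperceived target position, or a distorted coordinate transformation, respectively.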

Motor biases across different experimental contexts.

(a) Lab-based experimental apparatus for Exps 1-2. (b) Vectors linking the start position to the average endpoint position when the reach amplitude equaled the target radius (pink lines; Exp 1a). (c) Motor biases as a function of target location. The dots indicate the mean angular error across participants during the no-feedback block (pink) and the veridical-feedback block (grey). The pattern of motor bias was similar in Exp 1a (8 targets; left panel) and Exp 1b (24 targets; right panel), characterized by two peaks and two troughs. Error bars denote standard error. (d) Motor biases generated during left-hand reaches (left), left-hand results when the data are mirror-reversed across the vertical meridian (middle), and right-hand reaches (right). (e) Left: The motor bias generated by right-hand reaches was similar to that of mirror-reversed left-hand reaches. Right: Difference in RMSE when the right-hand map is compared to the original left-hand map relative to when it is compared to the mirror-reversed left-hand map. Positive values indicate better alignment when the left-hand data are mirror-reversed. (f) The correlation of the motor bias function between the no-feedback and feedback blocks is higher in the within-participant condition than in the between-participant condition. Gray bars indicate the 25th and 75th percentiles. White dots indicate the median and horizontal lines indicate the mean. (g) Experimental setup for Exp 3. Participants were asked to make center-out reaching movements using a trackpad or mouse. These movements predominantly involve finger and wrist movements. (h) The workspace is presumed to be closer to the reference point (e.g., the left shoulder) for finger/wrist movements (Exp 3) than for arm movements (Exps 1-2). (i) The pattern of motor biases in finger/wrist movements for 8 targets (left) and 24 targets (right).
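
As a rough illustration of the mirror-reversal comparison in panels (d-e), the sketch below mirrors a left-hand bias map across the vertical meridian and computes the RMSE difference described in (e). The angle convention (a target at θ maps to 180° − θ, with the sign of the angular error flipped) is our assumption; the authors' exact procedure may differ.

```python
import numpy as np

def mirror_bias_map(target_deg, bias_deg):
    """Mirror a motor-bias map across the vertical meridian.

    Assumed convention (ours, not necessarily the authors'): a target at
    angle theta maps to 180 - theta (mod 360), and the sign of the
    angular error flips.
    """
    mirrored_targets = (180.0 - np.asarray(target_deg, dtype=float)) % 360.0
    mirrored_bias = -np.asarray(bias_deg, dtype=float)
    order = np.argsort(mirrored_targets)  # re-sort by target angle
    return mirrored_targets[order], mirrored_bias[order]

def rmse(a, b):
    """Root-mean-square error between two bias maps sampled at the same targets."""
    return np.sqrt(np.mean((np.asarray(a, dtype=float) - np.asarray(b, dtype=float)) ** 2))

# Panel (e), right: positive values mean the right-hand map aligns better with
# the mirror-reversed left-hand map than with the original left-hand map.
# delta_rmse = rmse(right_bias, left_bias) - rmse(right_bias, mirrored_left_bias)
```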

The pattern of motor biases is best explained by assuming systematic distortions in the perceived location of the target and in the transformation between visual and proprioceptive coordinate frames.

(a) For single-source models, the pattern of motor biases in the no-feedback block of Exp 1a (pink dots) is best fit by the Transformation Bias model (left) compared to the other models (right). (b) A mixed model with transformation and visual biases (T+V) provides the best fit to the motor bias function in both Exp 1b (top) and Exp 3b (bottom). (c) Model comparison using BIC. ΔBIC values were computed by subtracting the BIC of the best-performing model (i.e., the T+V model) from the BIC of each model. A smaller ΔBIC signifies better model performance.
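
A minimal sketch of the ΔBIC computation in (c), assuming a Gaussian-error BIC of the common least-squares form (the paper may use a different likelihood):

```python
import numpy as np

def bic(n_obs, n_params, sse):
    """BIC for a least-squares fit with Gaussian errors (assumed form):
    BIC = n * ln(SSE / n) + k * ln(n)."""
    return n_obs * np.log(sse / n_obs) + n_params * np.log(n_obs)

# Illustrative placeholder BIC values only -- not results from the paper.
bics = {"T+V": 120.0, "Transformation": 128.0, "Visual": 145.0}
best = min(bics.values())                                 # best-performing model
delta_bic = {name: b - best for name, b in bics.items()}  # smaller = better
```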

Motor bias pattern changes when the start position is not visible.

(a) Schematic showing the planned movement under the Transformation Bias model when the start position is either visible (left) or not visible (right). In the latter case, only the target position has to be transformed from visual to proprioceptive coordinates, with the start position directly encoded in proprioceptive space. The T+V model now predicts a single-peaked motor bias function (lower row). (b) Consistent with this prediction, a two-peaked function is predicted when the start position is visible (as in Exp 1) and a single-peaked function is predicted when the start position is not displayed. Data (pink dots) are from Vindras et al. (2005).
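
In the shorthand introduced above for Fig. 1 (ours, not the fitted model's exact form), the two cases in (a) amount to

\[
p_{\mathrm{visible\ start}} = T(x_T) - T(x_S), \qquad
p_{\mathrm{unseen\ start}} = T(x_T) - x_H,
\]

where the unseen start position $x_H$ is read out directly in proprioceptive coordinates; with only one transformed term contributing, the predicted bias function has a single peak.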

The pattern of motor bias is preserved after implicit sensorimotor adaptation, consistent with the Transformation + Visual Bias model.

(a) Illustration of the clamped perturbation. The feedback cursor is offset by a fixed angle from the target, independent of the participant’s heading direction. (b) Time course of hand angle in response to clockwise or counterclockwise clamped feedback. Vertical lines demarcate the perturbation block, which was preceded by a no-feedback baseline phase and followed by a no-feedback washout phase (gray areas). Shaded area indicates standard error. (c) Predicted bias functions after adaptation for the T+V (top) and Biomechanical (bottom) models. See text for details. The right column shows the predicted motor bias functions following adaptation in response to a clockwise (CW) or counterclockwise (CCW) clamp. (d) Motor bias functions before and after training with a CW (left) or a CCW (right) clamp. Data taken from Morehead et al. (2017) and Kim et al. (2018); the height of the colored bars indicates the standard error for each data point. The best-fit lines for the T+V model are shown. (e) Parameter estimates capturing vertical and horizontal shifts in the motor bias functions from before to after training. The CW and CCW conditions both showed a significant vertical shift but no horizontal shift.
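
One way to formalize the shift analysis in (e), under an assumed parameterization rather than the authors' exact regression, is to relate the post-adaptation bias function to the baseline function through a horizontal shift $\delta_h$ and a vertical shift $\delta_v$:

\[
b_{\mathrm{post}}(\theta) \approx b_{\mathrm{pre}}(\theta - \delta_h) + \delta_v .
\]

The reported pattern, a significant vertical shift with no horizontal shift, corresponds to $\delta_v \neq 0$ with $\delta_h \approx 0$.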

Schematic of the vector-based and joint-based Proprioceptive Bias models.

Previous studies have considered two variants of the Proprioceptive Bias model. (a) A vector-based model in which the motor plan is a vector pointing from the perceived hand position to the target7,10. (b) A joint-based model in which the movement is encoded as the changes in shoulder and elbow joint angles required to move the limb from the start position to the desired location21,22. See Methods, Models, for details.
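
To make the distinction concrete, below is a minimal sketch (our own illustration, with assumed link lengths) of how the two plan types could be computed for a planar two-joint arm: the vector-based plan lives in hand space, whereas the joint-based plan is a difference of inverse-kinematics solutions.

```python
import numpy as np

def two_link_ik(x, y, l1=0.30, l2=0.33):
    """Joint angles (shoulder, elbow, in radians) that place the hand of a
    planar two-link arm at (x, y), with the shoulder at the origin.
    Link lengths are illustrative assumptions, not values from the paper."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2.0 * l1 * l2)
    elbow = np.arccos(np.clip(c2, -1.0, 1.0))
    shoulder = np.arctan2(y, x) - np.arctan2(l2 * np.sin(elbow), l1 + l2 * np.cos(elbow))
    return shoulder, elbow

# Vector-based plan (a): a hand-space vector from the (mis)perceived hand to the target.
#   plan_vec = target_xy - perceived_hand_xy
# Joint-based plan (b): the change in joint angles needed to carry the limb from the
# (mis)perceived hand position to the target.
#   plan_joint = np.subtract(two_link_ik(*target_xy), two_link_ik(*perceived_hand_xy))
```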

The Transformation Bias model can explain the motor bias functions when the visual information is shifted.

(a) In Sober and Sabes (2003)22, participants performed center-out reaches to a visual target. To perturb the visual information, the start position was presented 6 cm to the left or right of the actual start position of the hand. (b) Participants showed one-peaked motor bias functions, with the shift-left and shift-right functions in antiphase to one another. (c) These bias functions are quantitatively captured by the Transformation Bias model.
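
Read in the same shorthand, displacing the displayed start position by a vector $d$ (6 cm leftward or rightward) would change the planned vector under the Transformation Bias model to approximately

\[
p = T(x_T) - T(x_S + d),
\]

so the two displacement conditions sample different regions of the distorted visual-to-proprioceptive map, which is one way to understand why the shift-left and shift-right bias functions move in antiphase. This is a qualitative restatement, not the fitted model.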

Parameter estimates from best fits of the T+V model for the data from Exps 1b and 3b.

See Methods for description of each parameter.

a. Participants moved on the trackpad in Exp 3b. We assumed a movement distance of 1 cm and scaled the parameters accordingly.

b. The estimate of y_r is much smaller in Exp 3b than in Exp 1b, suggesting that the workspace in Exp 3b was closer to the body. This attenuates the average magnitude of the bias.