Figures and data

Different Causes of Motor Biases.
(a) Motor biases may originate from biases in perceiving the initial hand position (proprioceptive bias), perceiving the location of the visual target (target bias), transforming positional information from visual to proprioceptive space (transformation bias), and/or biomechanical constraints during motor execution. Previous models attribute motor biases to errors originating from the distinct contributions of visual (b) and proprioceptive biases (c). (d) Our model attributes motor biases to a transformation error between visual and proprioceptive coordinate systems. (e) A visuo-proprioceptive map showing the matching error between proprioceptive and visual space (Wang et al., 2020). Participants matched the position of their hand (tip of the arrow) from a random starting location to the position of a visual target (end of the arrow). The blue dot depicts an example of a visual target in the workspace, and the red arrow indicates the corresponding matched hand position. Participants were asked to maximize spatial accuracy rather than focus on speed. (f-h) Simulated motor bias functions predicted by four models. Top: Illustration of how each model yields a biased movement, with the example shown for a movement to the 135° target in panels g and h and for the 100° target in panel f (as there is no target bias at 135°). Grey bars in panels f-h indicate the predicted bias for all targets and/or start positions based on previous measurements of visual bias (f)13 and proprioceptive/transformation bias (g-h)20. Bottom: The simulated motor bias functions differ qualitatively in the number of peaks and troughs. Note that the middle panel depicts two variants of the Proprioceptive Bias model.
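The intuition for why a visual-to-proprioceptive transformation error yields a two-peaked bias function can be sketched numerically. The snippet below is a toy, not the paper's model: it assumes a purely anisotropic scaling mismatch between the two frames (hypothetical factors `sx`, `sy`) and counts the peaks in the resulting angular bias across the workspace.

```python
import numpy as np

# Toy illustration (not the paper's model): assume the visual-to-proprioceptive
# transformation mis-scales the two axes by hypothetical factors sx and sy.
sx, sy = 1.0, 0.85
theta = np.deg2rad(np.arange(0, 360))                      # target directions
perceived = np.arctan2(sy * np.sin(theta), sx * np.cos(theta))
# Wrap the angular bias into (-180, 180] degrees
bias = np.rad2deg(np.angle(np.exp(1j * (perceived - theta))))

# Count strict local maxima around the circle: axis anisotropy alone already
# produces the two-peak / two-trough pattern emphasized in the caption.
peaks = sum(bias[i] > bias[i - 1] and bias[i] > bias[(i + 1) % 360]
            for i in range(360))
```

Because the distortion has period 180°, the bias function completes two cycles over the 360° workspace, giving exactly two peaks and two troughs.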

Motor biases across different experimental contexts.
(a) Lab-based experimental apparatus for Exps 1-2. (b) Vectors linking the start position to the average endpoint position when reach amplitude equaled the target radius (pink lines; Exp 1a). (c) Motor biases as a function of target location. The dots indicate the mean angular error across participants during the no-feedback block (pink) and the veridical feedback block (grey). The pattern of motor bias was similar in Exp 1a (8 targets; left panel) and Exp 1b (24 targets; right panel), characterized by two peaks and two troughs. Error bars denote standard error. (d) Motor biases generated during left-hand reaches (left), the left-hand results when the data are mirror-reversed across the vertical meridian (middle), and right-hand reaches (right). (e) Left: The mirror-reversed biases observed during left-hand reaches are similar to the biases observed during right-hand reaches. Right: Difference in RMSE when the right-hand map is compared to the original left-hand map relative to when the right-hand map is compared to the mirror-reversed left-hand map. Positive values indicate better alignment when the left-hand data are mirror-reversed. (f) The correlation of the motor bias function between the no-feedback and feedback blocks is higher in the within-participant condition than in the between-participant condition. Grey bars indicate the 25th and 75th percentiles. White dots indicate the median and horizontal lines indicate the mean. (g) Experimental setup for Exp 3. Participants were asked to make center-out reaching movements using a trackpad or mouse. These movements predominantly involve finger and wrist movements. (h) The workspace is presumed to be closer to the reference point (e.g., left shoulder) for finger/wrist movements (Exp 3) than for arm movements (Exps 1-2). The transformation maps for the in-person and online workspaces were simulated from the best-fit models in Exp 1 and Exp 2, respectively.
(i) The pattern of motor biases in finger/wrist movements for 8 targets (left) and 24 targets (right).

The pattern of motor biases is best explained by assuming systematic distortions in the perceived location of the target and the transformation between visual and proprioceptive coordinate frames.
(a) For the single-source models, the pattern of motor biases in the no-feedback block of Exp 1a (pink dots) is best fit by the Transformation Bias model. (b) The three input-based models cannot explain the two-peaked motor bias function. (c) Considering only the four single-source models, the data overwhelmingly favored the Transformation Bias model (48 out of 56 participants). (d) A mixture model combining transformation and target biases (TR+TG) provides the best fit to the motor bias function in Exp 1b (top). (e) Model comparison using BIC in Exp 1b. ΔBIC values are computed by subtracting the BIC of the best-performing model (i.e., the TR+TG model) from each model's BIC; a smaller ΔBIC signifies better model performance. (f) For the mixture models, the data for almost all individuals were best explained by the TR+TG model (50 out of 56). (g-i) Same as panels d-f, but for Exp 3b. Fig. S2 shows representative individuals whose data are best captured by different models.
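For readers reproducing the model comparison, the ΔBIC computation can be sketched as follows. This is a minimal illustration with invented log-likelihoods, parameter counts, and trial counts; only the TR+TG label corresponds to a model named in the paper, and the alternative labels are placeholders.

```python
import numpy as np

def bic(log_likelihood, n_params, n_obs):
    # BIC = k * ln(n) - 2 * ln(L); lower values indicate a better model
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

# Hypothetical fit results for illustration only (not the paper's numbers)
log_liks = {"TR+TG": -120.0, "alt-1": -135.0, "alt-2": -160.0}
n_params = {"TR+TG": 6, "alt-1": 6, "alt-2": 5}
n_obs = 24 * 15   # e.g., 24 targets x 15 no-feedback reaches (made up)

bics = {m: bic(ll, n_params[m], n_obs) for m, ll in log_liks.items()}
best = min(bics, key=bics.get)
# Delta-BIC: each model's BIC minus the best model's BIC (0 for the winner)
delta_bic = {m: bics[m] - bics[best] for m in bics}
```

The penalty term `k * ln(n)` is what lets the comparison favor a simpler mixture when the likelihood gain from extra parameters is small.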

Motor biases in both angular and distance dimensions originate from a misalignment between visuo-proprioceptive reference frames.
(a) KINARM apparatus for Exp 4. (b) Assuming that participants viewed the display from a fixed angle, we would expect a perceptual bias in depth perception35,36. (c) Model simulations of motor biases in movement extent. The Transformation Bias model predicts a two-peaked function for distance bias, whereas the Proprioceptive Bias and Target Depth Bias (TGD) models predict a one-peaked function. (d) Participants exhibited a two-peaked bias function for both reach angle and extent. (e) The hybrid Transformation Bias + Target Depth Bias (TR+TGD) model provides a good fit to the data in both dimensions. (f-g) Model comparisons. The TR+TGD model outperformed the alternative models in terms of average ΔBIC (f) and model frequency (g).

Motor bias pattern changes when the start position is not visible.
(a) Schematic showing the planned movement under the Transformation Bias model when the start position is either visible (left) or not visible (right). In the latter case, only the target position is transformed from visual to proprioceptive coordinates; the start position is directly encoded in proprioceptive space. The TR+TG model now predicts a single-peaked motor bias function (lower row). (b) Consistent with this prediction, a two-peaked function is predicted when the start position is visible (as in Exp 1) and a single-peaked function is predicted when the start position is not displayed. Data (pink dots) are from Vindras et al. (2005).

Biomechanical constraints are unlikely to be a primary source of motor biases.
(a) Schematic of the two-skeleton, 60-muscle effector used in MotorNet and how predictions concerning reaching biases were simulated. (b) The model predicts a four-peaked motor bias function for a center-out reaching task, at odds with the two-peaked functions observed in Exps 1-3. Grey lines denote single simulations. Black line denotes the group average across runs.

The pattern of motor bias is preserved after implicit sensorimotor adaptation, consistent with the Transformation + Target Bias model.
(a) Illustration of the clamped perturbation. The feedback cursor is offset by a fixed angle from the target, independent of the participant’s heading direction. (b) Time course of hand angle in response to clockwise or counterclockwise clamped feedback. Vertical lines demarcate the perturbation block, which was preceded and followed by no-feedback baseline and washout phases, respectively (grey areas). Shaded area indicates standard error. (c) Predictions for the bias functions after adaptation under the TR+TG (top) and Biomechanical (bottom) models. See text for details. The right column shows the predicted motor bias functions following adaptation in response to a clockwise (CW) or counterclockwise (CCW) clamp. (d) Motor bias functions before and after training with a CW (left) or CCW (right) clamp. Data taken from Morehead et al. 39 and Kim et al. 41; the height of the colored bars indicates the standard error for each data point. The best-fit lines for the TR+TG model are shown. (e) Parameter values capturing vertical and horizontal shifts in the motor bias functions before and after training. Both the CW and CCW conditions showed a significant vertical shift but no horizontal shift.
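The vertical/horizontal shift analysis in panel e can be sketched as follows. This is a toy reconstruction rather than the paper's fitting code: it assumes the post-adaptation bias function equals the baseline function displaced by a horizontal offset h and a vertical offset v, simulates data with a purely vertical 2° shift (the signature of adaptation reported above), and recovers both offsets by grid search.

```python
import numpy as np

# Hypothetical two-peaked baseline bias function (degrees of angular error)
theta = np.deg2rad(np.arange(0, 360, 15))
baseline = lambda t: 4.0 * np.sin(2 * t)
post = baseline(theta) + 2.0        # simulated post-adaptation: vertical shift only

def fit_shifts(post, theta):
    # Grid search over the horizontal shift h; for each h, the best vertical
    # shift v has the closed form v = mean(post - shifted baseline).
    best_h, best_v, best_sse = 0.0, 0.0, np.inf
    for h in np.deg2rad(np.arange(-30.0, 30.5, 0.5)):
        shifted = baseline(theta - h)
        v = np.mean(post - shifted)
        sse = np.sum((post - shifted - v) ** 2)
        if sse < best_sse:
            best_h, best_v, best_sse = h, v, sse
    return np.rad2deg(best_h), best_v

h_deg, v_deg = fit_shifts(post, theta)
```

With noiseless synthetic data the search recovers h near 0° and v near 2°, the pattern the caption describes for both clamp directions.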

Schematic of a vector based and joint-based Proprioceptive Bias model.
Previous studies have considered two variants of the Proprioceptive Bias model. (a) A vector-based model in which the motor plan is a vector pointing from the perceived hand position to the target7,10. (b) A joint-based model in which the movement is encoded as the changes in shoulder and elbow joint angles required to move the limb from the start position to the desired location21,22. See Methods, Models for details.

Data and fits for four individuals.
These participants were selected to represent cases in which the best fit was provided by each of the four single source models.

The Transformation Bias model can explain the motor bias functions when the visual information is shifted.
(a) In Sober and Sabes (2003)22, participants performed center-out reaches to a visual target. To perturb the visual information, the start position was displayed 6 cm to the left or right of the actual start position of the hand. (b) Participants showed one-peaked motor bias functions, with the shift-left and shift-right functions in an antiphase relationship to one another. (c) These bias functions are quantitatively captured by the Transformation Bias model.

Model recovery.
The parameters in the three mixture models are recoverable. We simulated each model 50 times, with all parameters randomly sampled from uniform distributions, and then fit each simulated agent 200 times with each of the three models. (a-c) The fitted parameters are very close to the ground truth. (d) Log-likelihood as a function of fitting iterations. Based on this curve, we determined that 150 iterations were sufficient, given that the log-likelihood values had asymptoted by this point. (e) In most cases, the model fits recovered the simulated model, with minimal confusion across the three models.
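A stripped-down version of this simulate-and-refit procedure illustrates how the confusion matrix in panel e is built. It uses two hypothetical one-parameter bias models (a one-peaked and a two-peaked sinusoid) standing in for the paper's three mixture models, and invented noise levels.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.deg2rad(np.arange(0, 360, 15))       # 24 target directions

# Two toy bias models standing in for the candidates (hypothetical forms)
one_peak = lambda a: a * np.sin(theta)          # single-peaked bias
two_peak = lambda a: a * np.sin(2 * theta)      # two-peaked bias

def best_sse(model, data):
    # Coarse 1-D grid search over the amplitude parameter
    return min(np.sum((data - model(a)) ** 2)
               for a in np.arange(-5.0, 5.05, 0.05))

confusion = np.zeros((2, 2), dtype=int)         # rows: generator, cols: winner
for row, gen in enumerate((one_peak, two_peak)):
    for _ in range(20):
        data = gen(2.0) + rng.normal(0.0, 0.5, theta.size)
        sses = [best_sse(one_peak, data), best_sse(two_peak, data)]
        confusion[row, int(np.argmin(sses))] += 1
```

Mass concentrated on the diagonal of `confusion` is what "the model fits recovered the simulated model" means; off-diagonal entries quantify model confusion.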

Illustration of the TR+TG model.
(a) We assumed that participants are biased in their representation of the target position, following the Target Bias model. (b) The biased target position is transformed into proprioceptive space, following the Transformation Bias model. (c) The movement is planned in proprioceptive space.

Parameter estimates from best fits using the group-level data for the TR+TG model from Exps 1b and 3b.
See Methods for a description of each parameter. a. Participants moved on the trackpad in Exp 3b. We assumed a movement distance of 1 cm and scaled the parameters accordingly. b. The estimate of yr is much smaller in Exp 3b than in Exp 1b, suggesting the workspace in Exp 3b is closer to the body. This attenuates the average magnitude of the bias.