Figures and data

Methods, pRF modeling and attention effect.
a. Participants fixated a bull’s-eye in full screen (top left panel), gaze center (top center), gaze left (bottom left) and gaze right runs (bottom center). They reported the orientation of pink noise patterns (+45° or –45°) presented either within the fixation bull’s-eye (attend-fix) or within the bar (attend-bar). b. We titrated the difficulty of the orientation discrimination task by varying the dispersion coefficient of the orientation filter applied to the pink noise pattern. The figure presents the probability distributions of orientations contained within the pink noise patterns as a function of the dispersion coefficient. Right insets show three levels of difficulty, from non-oriented random noise (bottom) to the most strongly oriented pattern used (top). c. Group performance (n = 8) in the attend-bar (black) and attend-fix (gray) conditions. Error bars show ±SEM and the dashed line shows the staircase convergence level (∼79% correct). d. In the full screen and gaze center/left/right conditions, a bar moved in different directions (black rectangles with arrows), interleaved with periods in which only the fixation bull’s-eye was shown (empty black rectangles). e. To define regions of interest (ROIs), we averaged the full screen runs and fit a linear pRF model. f. Example V1 timeseries with its best-fitting pRF model prediction and parameters. g. Single-participant (sub-03) ROIs and pRF angle and eccentricity maps projected on the inflated brain. Visualizations of all participants’ brains are available online (invibe.nohost.me/gazeprf). h. Map of the effect of spatial attention obtained by comparing the explained variance obtained separately in the attend-bar and attend-fix conditions. Reddish colors on the inflated cortex illustrate the effect of spatial attention. i. Model explained variance (R2) for the best-fitting voxels of each ROI in the attend-bar (black) and attend-fix (gray) conditions.
Error bars show ±SEM; asterisks indicate a significant difference between conditions (two-sided p < 0.05).
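The orientation-filtered pink noise described in panel b can be sketched as follows. This is a minimal illustration, assuming a von Mises weighting over spectral orientation as the "orientation filter"; the study's exact filter, its parameterization of the dispersion coefficient, and the function name are assumptions, not the authors' code.

```python
import numpy as np

def oriented_pink_noise(size=128, theta=np.deg2rad(45), kappa=2.0, seed=0):
    """Pink (1/f) noise whose orientation content is concentrated around
    theta by a von Mises filter with concentration kappa.
    kappa = 0 yields isotropic (non-oriented) noise; larger kappa yields
    a narrower orientation distribution (i.e., an easier discrimination).
    Illustrative sketch only; not the authors' implementation."""
    rng = np.random.default_rng(seed)
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.hypot(fx, fy)
    f[0, 0] = 1.0                       # avoid division by zero at DC
    amp = 1.0 / f                       # 1/f amplitude spectrum
    ori = np.arctan2(fy, fx)            # orientation of each frequency component
    # orientation is pi-periodic, hence the factor 2 in the von Mises kernel
    ori_filt = np.exp(kappa * np.cos(2.0 * (ori - theta)))
    phase = np.exp(2j * np.pi * rng.random((size, size)))
    img = np.real(np.fft.ifft2(amp * ori_filt * phase))
    return (img - img.mean()) / img.std()

patch = oriented_pink_noise(kappa=5.0)
```

Varying `kappa` while holding `theta` at ±45° reproduces the difficulty manipulation: the orientation distribution widens as `kappa` decreases, down to fully random noise.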

Out-of-sample retinotopic and spatiotopic predictions.
a. Retinotopic predictions are obtained by adding the change in gaze direction to the full screen pRFx parameter (e.g., pRFxgaze left = pRFxfull screen – 4°). The spatiotopic hypothesis dictates that pRFx should remain identical irrespective of the gaze direction (i.e., pRFxgaze left = pRFxgaze right = pRFxgaze center = pRFxfull screen). b-c. Example retinotopic (red) and spatiotopic (blue) timeseries predictions for V1 (b) and hMT+ voxels (c) in the gaze center (top row), gaze left (middle row) and gaze right (bottom row) attend-bar runs of one participant (sub-03). Leftmost inset values show the corresponding model R2. d-e. Change in explained variance between the gaze left/right and gaze center conditions for the retinotopic (red) and spatiotopic (blue) predictions across ROIs. Note that only the spatiotopic predictions of V1, V2 and V3 voxels display a significant change from zero in both attention conditions (V1/V2/V3: 0.0156 > ps > 0.0001; two-sided p values), as do V3AB (two-sided p = 0.0156) and VO (two-sided p = 0.0078) in the attend-bar condition. Error bars show ±SEM; asterisks indicate a significant difference between the retinotopic and spatiotopic predictions (two-sided p < 0.05).
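The two out-of-sample predictions in panel a reduce to simple arithmetic on the full screen pRFx, which can be scored against held-out data with ordinary R2. A sketch, with illustrative function names (the signed `gaze_shift_deg` is –4 for a 4° leftward gaze shift, matching the example in the legend):

```python
import numpy as np

def predict_prf_x(prf_x_fullscreen, gaze_shift_deg, hypothesis):
    """Out-of-sample pRFx prediction for a gaze-shifted run.
    Retinotopic: pRFs move with the eye, so the screen-referenced pRFx
    shifts by the (signed) gaze displacement. Spatiotopic: pRFx stays
    fixed in screen coordinates regardless of gaze."""
    if hypothesis == "retinotopic":
        return prf_x_fullscreen + gaze_shift_deg
    if hypothesis == "spatiotopic":
        return prf_x_fullscreen
    raise ValueError(f"unknown hypothesis: {hypothesis}")

def r2(data, model):
    """Fraction of variance in `data` explained by the `model` timeseries."""
    ss_res = np.sum((data - model) ** 2)
    ss_tot = np.sum((data - data.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Each hypothesis generates a predicted timeseries from its predicted pRFx, and the change in R2 relative to the gaze center condition (panels d-e) quantifies how well each reference frame generalizes.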

Fitting retinotopic and spatiotopic pRF models.
a. Fitting procedure. We used a coarse-to-fine optimization procedure in which we kept the pRFy and pRFsize parameters from the corresponding full screen runs. To avoid biasing the fit toward either model, we first used two different sets of parameter starting points, based on the retinotopic (red) and spatiotopic (blue) hypotheses, respectively. For the next optimization stage, we selected the parameters producing the highest fit quality. b. Retinotopic and spatiotopic predictions. Plotted as a function of the gaze left/right pRFx, the gaze center pRFx will either shift (left panel, retinotopic prediction) or remain at the same position (right panel, spatiotopic prediction) when comparing the gaze center with the gaze left (green) or gaze right conditions (purple). c-d. Each panel shows the group average of 16 equal bins of the pRFx obtained in the gaze center runs as a function of the corresponding group-averaged bins of the pRFx obtained in the gaze left (purple) and gaze right runs (green), for the attend-bar (c, dark colors) and attend-fix conditions (d, light colors). Error areas show ±SEM. e. Group average reference frame index (RFI) in the attend-bar (black) and attend-fix (gray) conditions for the best-fitting voxels of each ROI. Error bars show ±SEM; white asterisks indicate an RFI significantly different from zero (two-sided p < 0.05); black asterisks indicate a significant difference between the attend-bar and attend-fix conditions (two-sided p < 0.05). f-g. Reference frame index maps obtained by projecting the RFI from the attend-bar (f) and attend-fix (g) conditions on the inflated cortex using a unimodal color scale (blue: spatiotopic vs. red: retinotopic) for one participant (sub-05, same as in Fig. 1).
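One plausible formulation of a reference frame index is the voxel-wise pRFx displacement normalized by the gaze displacement, so that fully retinotopic shifts score 1 and fully spatiotopic (no) shifts score 0. This is an illustrative sketch only; the study may define the RFI differently (e.g., from model fit quality, or rescaled to a signed range):

```python
import numpy as np

def reference_frame_index(prf_x_center, prf_x_shifted, gaze_shift_deg):
    """Hypothetical RFI: mean slope of the pRFx displacement relative to
    the (signed) gaze displacement. 1 -> fully retinotopic (pRFs shift
    with gaze); 0 -> fully spatiotopic (pRFs fixed on the screen).
    Not necessarily the authors' exact definition."""
    displacement = np.asarray(prf_x_shifted) - np.asarray(prf_x_center)
    return float(np.mean(displacement / gaze_shift_deg))
```

Intermediate values would then indicate partial shifts, i.e., a mixture of the two reference frames within a voxel population.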

Reference frame index as a function of model overlap and explained variance.
We ordered and binned voxels along two dimensions: spatial relevance (vertical dimension, quantified as the pRF overlap with the gaze center aperture) and explained variance (horizontal dimension). A strong gradient with higher RFI values emerges for the more informative voxels (upper right quadrant) across all visual ROIs, indicating that voxels contributing more strongly to the inference show increasingly retinotopic reference frames.
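The two-dimensional binning can be sketched as follows. The quantile-based bin edges and the function name are assumptions for illustration; the study's binning scheme may differ:

```python
import numpy as np

def bin_rfi_2d(overlap, r2, rfi, n_bins=8):
    """Average an index (e.g., RFI) over a 2-D grid of voxel bins:
    rows index pRF/aperture overlap, columns index explained variance.
    Quantile edges give roughly equal voxel counts per bin.
    Cells with no voxels are left as NaN."""
    overlap, r2, rfi = map(np.asarray, (overlap, r2, rfi))
    o_edges = np.quantile(overlap, np.linspace(0, 1, n_bins + 1))
    r_edges = np.quantile(r2, np.linspace(0, 1, n_bins + 1))
    o_idx = np.clip(np.searchsorted(o_edges, overlap, side="right") - 1, 0, n_bins - 1)
    r_idx = np.clip(np.searchsorted(r_edges, r2, side="right") - 1, 0, n_bins - 1)
    grid = np.full((n_bins, n_bins), np.nan)
    for i in range(n_bins):
        for j in range(n_bins):
            sel = (o_idx == i) & (r_idx == j)
            if sel.any():
                grid[i, j] = rfi[sel].mean()
    return grid
```

The gradient described above would appear as increasing values toward the corner of the grid where both overlap and explained variance are highest.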

Bayesian decoding.
a. Our encoding procedure consists of determining a likelihood p(b|s) over time, defined as the probability of observing the fMRI BOLD signal (b) given the stimulus (s) presented in the full screen condition. For this analysis, we use the spatial tuning and noise correlations of the 250 best-fitting voxels per ROI. b. The decoding procedure illustrated here computes the posterior probability over time, p(s|b), using independently recorded BOLD signals, for example from the gaze center condition. This procedure allows a direct comparison of the decoded and actual bar positions over time (see comparison of posterior distribution sample and ground truth). c. Decoded bar position as a function of time in the gaze center (top, orange), gaze left (middle, green) and gaze right (bottom, purple) conditions. Graphs present decoding from individual runs (from the second session) using V1 voxel activity (attend-bar condition) of one participant (sub-05). Retinotopic and spatiotopic predictions are illustrated as red and blue dashed lines, respectively. Error areas show the standard deviation of the posterior distribution. d-e. Average decoded bar position across bar passes and sessions for the best-fitting voxels of V1 (d) and hMT+ (e), for the gaze center (top), gaze left (middle) and gaze right conditions (bottom), in the attend-bar (left panels, dark colors) and attend-fix conditions (right panels, light colors). Error areas show ±SEM. f. Group-averaged correlation between the decoded bar position and the ground truth. Conventions are as in Fig. 3e. g. Group average reference frame index in the attend-bar (black) and attend-fix (gray) conditions for the best-fitting voxels of each ROI. Conventions are as in Fig. 3e.
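Under a Gaussian noise model with a flat prior over bar position, the decoding step of panel b can be sketched as follows. All names are illustrative: `tuning[v, i]` stands for voxel v's encoded response to a bar at `positions[i]`, and the covariance matrix `cov` stands in for the voxel noise correlations estimated during encoding; this is not the authors' implementation.

```python
import numpy as np

def decode_bar_position(b, tuning, cov, positions):
    """Posterior p(s|b) over a grid of candidate bar positions, assuming
    the BOLD pattern b is Gaussian around each candidate's predicted
    response with covariance cov, and a flat prior over s.
    Returns the normalized posterior, its mean (decoded position) and
    its standard deviation (decoded uncertainty)."""
    cov_inv = np.linalg.inv(cov)
    log_like = np.empty(len(positions))
    for i in range(len(positions)):
        resid = b - tuning[:, i]                # pattern mismatch at candidate i
        log_like[i] = -0.5 * resid @ cov_inv @ resid
    log_post = log_like - log_like.max()        # flat prior; stabilize the exp
    post = np.exp(log_post)
    post /= post.sum()
    mean = post @ positions                     # posterior mean = decoded position
    sd = np.sqrt(post @ (positions - mean) ** 2)  # posterior SD = uncertainty
    return post, mean, sd
```

Applying this at every timepoint of a run yields the decoded bar trajectories of panels c-e, and the posterior SD is the decoding uncertainty analyzed in the following figure.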

Decoding uncertainty.
a. Average decoded uncertainty across ROIs and participants for the gaze center (left column), gaze left (middle column) and gaze right (right column) conditions. Black asterisks indicate a significant difference between the attend-bar and attend-fix conditions (two-sided p < 0.05). b. Same data grouped by attention condition: attend-bar (left column) and attend-fix (right column). Black asterisks indicate a significant difference between the gaze center, gaze left and gaze right conditions (two-sided p < 0.05).