Preparatory attentional templates in prefrontal and sensory cortex encode target-associated information

  1. Zhiheng Zhou (corresponding author)
  2. Joy Geng (corresponding author)
  1. College of Psychology, Sichuan Normal University, China
  2. Center for Mind and Brain, University of California, Davis, United States
  3. Department of Psychology, University of California, Davis, United States
5 figures and 5 additional files

Figures

Figure 1 with 2 supplements
Associative learning task.

(A) The four target face stimuli and their associated scenes. (B) Associative learning task to test memory for face-scene pairs. Participants viewed a series of face-scene pairs and judged whether each face and scene were a matching pair. (C) Memory performance was high in both match and nonmatch conditions, suggesting that strong associations were formed before the face search task. Error bars refer to 95% CIs. Facial images from The Chicago Face Database (Ma et al., 2015).

Figure 1—figure supplement 1
The four target face stimuli and their three distractor face counterparts.

The four target faces are shown in accordance with the Chicago Face Database (CFD; https://www.chicagofaces.org/) copyright rules, which approve their release to the public. The image file names of the twelve distractor faces are listed for reference. Facial images from The Chicago Face Database (Ma et al., 2015).

Figure 1—figure supplement 2
The four scene categories used in the search task.

Each category consisted of 16 exemplars.

Figure 2 with 1 supplement
fMRI tasks and analysis procedure.

(A) Face search task. Each trial started with a 1-second search cue indicating the target face for that trial. This was followed by an 8-second blank search delay period and then the search display for 0.25 seconds. Participants pressed a button to indicate whether the target appeared on the left or right. (B) Illustration of the separate 1-back task used for cross-task classifier testing. Consistent with the main visual search task, each image was presented for 1 second followed by an 8-second delay period. (C) Classification scheme. Classifiers were trained on the neural response patterns from the search cue face stimulus and delay periods of the visual search task and tested on the face or scene sample stimulus and delay periods of the 1-back task. (D) Visualization of the twelve functional regions of interest (ROIs) on the cortical surface of a representative participant. All ROIs were defined in each individual's native space. Facial images from The Chicago Face Database (Ma et al., 2015).

Figure 2—figure supplement 1
Illustration of overlap between the HCP-MMP1 atlas (Glasser et al., 2016) and the Schaefer resting-state 17-network atlas (Schaefer et al., 2018) in the inferior frontal junction regions.

Black outlines correspond to the PEF, IFJp, and IFJa from the HCP-MMP1 atlas. The red ROI corresponds to the 17-network atlas precentral label from the dorsal attention network in the left hemisphere and the ventral attention network in the right.

Figure 3
Decoding of face and scene information during the search cue period.

(A) Evidence of face information in a priori defined regions of interest (ROIs). Above-chance classification accuracies were found in the dorsolateral prefrontal cortices (dLPFC), superior parietal lobule (SPL), and fusiform face area (FFA). Evidence for scene information was found in the ventrolateral prefrontal cortices (vLPFC). N=26 participants. †p<0.05 uncorrected, *p<0.05, **p<0.005. (B, C) Significant brain regions revealed by a whole-brain searchlight procedure carrying information about the face cue (B) or the associated scene (C). Note that scene information was never shown during the cue period; decoding of scene information therefore reflects memory-evoked responses to the cued face.

Figure 4 with 1 supplement
Decoding of face and scene information during the search delay period.

(A) Evidence of face information in a priori defined regions of interest (ROIs) was only found in the left intraparietal sulcus (IPS). However, scene information was decoded in both inferior frontal junction (IFJ) and parahippocampal place area (PPA), reflecting memory-evoked target-associated information in a network that encodes the target template. N=26 participants. †p<0.05 uncorrected, *p<0.05, **p<0.005. (B, C) Whole-brain searchlight analyses showed no additional brain regions carried significant information about the face (B), but additional scene information was found in the retrosplenial cortex (C).

Figure 4—figure supplement 1
Decoding of face and scene information during the search cue (A), search delay1 (B), and search delay2 (C) periods.

An exploratory analysis was conducted to examine possible differences in decoding of target-associated information in the earlier versus later portions of the delay period. New GLMs were run that were identical to the main analysis, except that the delay period was split into two 4-second regressors (i.e., the first and second half of the 8-second delay period) in both the face search task and the face and scene 1-back task. The decoding schemes were identical to the main analysis but were conducted separately for the delay1 and delay2 time periods. The ROI decoding results for target faces during the search cue period were similar under the new GLM and the main analysis, as expected, since the model of the search cue period was unchanged. However, decoding of scenes and faces during the delay1 and delay2 periods was not reliably found with the new GLMs; the only significant effect was in LH PPA during delay2. These null results could reflect insufficient power when the data are divided, individual differences in when preparatory activation is strongest, or a genuine absence of change in activation over the delay period. Methods with higher temporal resolution may be better suited to determining exactly when preparatory activation of target-associated information is initiated and how long it lasts. LH, left hemisphere. N=26 participants. †p<0.05 uncorrected, *p<0.05, **p<0.005.

Figure 5 with 1 supplement
Behavioral and brain results from the face search period.

(A) Behavioral accuracy and RT both showed a scene-validity effect, suggesting scene information was used to guide attention during search. N=26 participants. **p<0.005, ***p<0.001. Error bars refer to 95% CIs. (B) Whole-brain group-level univariate contrast results showing significantly greater activations for the scene-invalid than scene-valid conditions are illustrated in blue (cold colors), and the reverse contrast in red (hot colors). (C) Contrast betas from the scene-invalid minus scene-valid conditions within each of the a priori regions of interest (ROIs). *p<0.05, **p<0.005.

Figure 5—figure supplement 1
Univariate contrast results from the search period shown in a volumetric MNI standard brain.

(A) Scene-invalid minus scene-valid trials. (B) Scene-valid minus scene-invalid trials. Both activation maps are shown with correction at the cluster level, pTFCE<0.005.

Additional files

Supplementary file 1

Whole-brain searchlight results of brain regions showing significant decoding during the search cue period.

https://cdn.elifesciences.org/articles/104041/elife-104041-supp1-v1.docx
Supplementary file 2

Whole-brain searchlight results of brain regions showing significant decoding during the search delay period.

https://cdn.elifesciences.org/articles/104041/elife-104041-supp2-v1.docx
Supplementary file 3

Univariate contrasts related to scene validity during the search period.

https://cdn.elifesciences.org/articles/104041/elife-104041-supp3-v1.docx
Supplementary file 4

Mean (SE) number of voxels in each ROI.

https://cdn.elifesciences.org/articles/104041/elife-104041-supp4-v1.docx
MDAR checklist
https://cdn.elifesciences.org/articles/104041/elife-104041-mdarchecklist1-v1.docx


Citation

Zhiheng Zhou, Joy Geng (2025) Preparatory attentional templates in prefrontal and sensory cortex encode target-associated information. eLife 14:RP104041. https://doi.org/10.7554/eLife.104041.3