(a) Workflow of the AI model for gaze estimation (Chong et al., 2020). The model takes individual frames, paired with a binary mask indicating the gazer’s head location within the scene and a cropped image of the gazer’s head, and produces a probability heatmap. The pixel location with the highest probability was taken as the final estimated gazed location and the endpoint of the gazer vector (orange arrow in the final estimation image). We computed various frame-to-frame gaze features from the gazer vectors and related them to the dynamics of observers’ eye movements during gaze-following. (b) Examples of the initial gazer vector, the gazer vector distance, the gazer goal vector, the angular displacement, and the angular error. The gazer vector distance is the length of the gazer vector, indicating how far the estimated gazed location is from the gazer. The gazer goal vector is the vector whose start point is the gazer’s head centroid and whose endpoint is the gazer goal location. The angular displacement is the angle between the current gazer vector and the initial gazer vector. The angular error is the angle between the current gazer/saccade vector and the gazer goal vector. (c) Estimation of typical head velocities in the 200 ms interval immediately before the gazer’s head stops moving. Velocities were obtained by aligning all videos to the gaze stop time and averaging the head velocities; head velocity = 0 at time = 0. (d) The first saccade vectors (teal lines) and the corresponding gazer vectors (orange lines) at the saccade initiation times, for all observers and trials for the same video (top: gaze goal present condition; bottom: gaze goal absent condition). (e) Histogram of angular errors for the first saccade vectors and the gazer vectors at the saccade initiation times, across all trials/videos and observers. All vectors were registered relative to the gazer goal vector (the horizontal direction to the right represents 0° angular error).
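The geometric features defined in panel (b) reduce to a few vector operations. The following is a minimal sketch, not the authors' implementation: the coordinates are invented for illustration, and the function names (`gazer_vector`, `angle_between`) are hypothetical. It assumes 2D pixel coordinates with the head centroid as the common origin of all vectors.

```python
import numpy as np

def gazer_vector(head_centroid, endpoint):
    """Vector from the gazer's head centroid to a target point (e.g., the
    estimated gazed location or the gazer goal location)."""
    return np.asarray(endpoint, dtype=float) - np.asarray(head_centroid, dtype=float)

def vector_length(v):
    """Gazer vector distance: Euclidean length of the gazer vector."""
    return float(np.linalg.norm(v))

def angle_between(v1, v2):
    """Unsigned angle in degrees between two vectors, via the dot product."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Illustrative pixel coordinates (not taken from the study).
head = (100, 100)            # gazer's head centroid
initial_gaze = (200, 100)    # initial estimated gazed location
current_gaze = (200, 200)    # current estimated gazed location
goal = (100, 200)            # gazer goal location

v_init = gazer_vector(head, initial_gaze)
v_curr = gazer_vector(head, current_gaze)
v_goal = gazer_vector(head, goal)

dist = vector_length(v_curr)                         # gazer vector distance
angular_displacement = angle_between(v_curr, v_init) # vs. initial gazer vector
angular_error = angle_between(v_curr, v_goal)        # vs. gazer goal vector
```

With these example coordinates, both the angular displacement and the angular error come out to 45°; the same `angle_between` computation applies to saccade vectors when they replace the current gazer vector.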