Abstract
Studies of social cognition examine how organisms process and act on the presence, intentions, actions, and behavioural outcomes of others in social contexts. Many real-life social interactions unfold during direct face-to-face contact and rely on immediate, time-continuous feedback about mutual behaviour and changes in the shared environment. Yet, essential aspects of these naturalistic conditions are often lacking in experimental laboratory settings for direct dyadic interactions, i.e., interactions between two people. Here, we describe a novel experimental setting, the Dyadic Interaction Platform (DIP), designed to investigate the behavioural and neural mechanisms of real-time social interactions. Based on a transparent, touch-sensitive, bi-directional visual display, this design allows two participants to observe visual stimuli and each other simultaneously, enabling face-to-face interaction in a shared vertical workspace. Different implementations of the DIP facilitate interactions between two human adults, adults and children, two children, or two nonhuman primates, as well as in mixed nonhuman-human dyads. The platforms allow for diverse manipulations of interactive contexts and synchronized recordings of both participants’ behavioural, physiological, and neural measures. This approach enables us to integrate economic game theory with time-continuous sensorimotor and perceptual decision-making, social signalling and learning, in an intuitive and socially salient setting that affords precise control over stimuli, task timing, and behavioural responses. We demonstrate the applications and advantages of DIPs in several classes of transparent interactions, ranging from value-based strategic coordination games and dyadic foraging to social cue integration, information seeking, and social learning.
Introduction
Humans and other primates are social beasts. Our identity, beliefs, thoughts, actions, and speech are grounded in the context of our social interactions with others. Decades of research on social cognition in carefully controlled laboratory studies have provided ample evidence that unidirectional, non-interactive social cues, e.g., still faces, gestures, emotional facial expressions, and spoken words, can shape our perception and subsequent behaviour. Natural social interactions are, however, rarely controlled and quintessentially non-static and reciprocal. Even the most basic social exchange between two individuals provides a kaleidoscope of rapidly changing cues regarding the social partners’ facial expressions, gazes, actions, and words. Understanding the extent to which we are capable of attending to and incorporating such dynamic, multidimensional cues in our everyday interactions requires examining behaviour in similarly rich situations. Therefore, experimental settings are needed that strike the desired balance between natural interactions and controlled stimulus presentation while monitoring continuous behavioural and neural data of interacting agents. Here, we present a novel experimental platform that allows two individuals - either two adult humans, a human adult and child, two nonhuman primates, or a nonhuman primate and a human confederate - to interact face-to-face with one another while observing the same stimuli and manipulating a common, shared workspace. During these interactions, a wide range of behavioural and neural indices of the interaction and the shared environment can be collected, allowing analysis of how social cues and actions are perceived and processed, and how they influence dynamic social interactions.
In what follows, we first describe the state of the art on how the presentation of social information in static and dynamic contexts influences cognitive processing and decision-making. Next, we describe prior setups that have sought to combine naturalistic social settings with experimental control to examine the integration of social information in social interactions before describing the platform and exemplary use cases in further detail.
Social cognition
As in other domains of cognition, social information processing relies on attending to, perceiving, and learning from relevant cues in the individual’s environment. Many earlier studies examining social information processing have employed static unidirectional paradigms where a participant is presented with social stimuli, such as words or still images of faces, while the participant’s behavioural, physiological, or neural responses to these stimuli are recorded. These paradigms have yielded considerable insight into the factors that shape the processing and learning of social information in such static designs. We will not describe this literature in detail here, but direct the reader to excellent reviews on these topics (e.g., Birmingham and Kingstone, 2009; Deen et al., 2023).
Moving on from such static designs, a critical step in the social cognition literature was to use more realistic dynamic stimuli, such as movies, animated avatars, or real individuals, albeit in a unidirectional fashion, i.e., focusing on assessing perception and behaviour in a single subject. Such studies suggest that embedding cognition in our lived social experience can impact how participants respond across various situations. For instance, the face recognition literature abounds with studies suggesting that individuals prefer looking at people’s faces, especially their eyes, relative to objects in situations where participants are presented with static images of objects and faces (see Birmingham and Kingstone (2009) for a review). More recent studies have, however, allowed participants to view or interact with real people as stimuli, e.g., sit across from a stranger (Laidlaw et al., 2011), walk across a university campus (Foulsham et al., 2011), follow an individual’s gaze (Gallup et al., 2012), or make transparent judgments about people, i.e., these judgments would be visible to the person being judged (Gobel et al., 2015). In stark contrast to the face perception literature, these studies found that participants moderate their looking at people’s faces and eyes in real social interactions and pay more attention to nonsocial information. Similarly, the developmental literature suggests that infants prefer dynamic faces as opposed to static patterns (Courage et al., 2006) and robustly follow an adult’s gaze when presented with videos of adults directing their gaze toward an object (Senju and Csibra, 2008; Bohn et al., 2024). In contrast, recent findings examining infants’ behaviour in naturalistic social interactions with their caregivers, recorded with head-mounted eyetrackers, suggest that children’s looking toward faces may be impacted by the motor costs associated with fixating on someone’s face (Franchak et al., 2018) and that infants do not always reliably follow their caregiver’s gaze in such interactions (Madhavan et al., 2025; Yu and Smith, 2013). Thus, embedding cognitive science research in transparent social interactions reveals that individual responses in realistic social situations may differ dramatically from findings from static and unidirectional laboratory studies.

Timescales of dyadic interactions.
Classical economic game theory mostly focuses on trial-by-trial decisions: each agent learns about the mutual outcomes of dyadic choices at the end of each round, or trial, e.g., both select option “square” in trial T3 (row 2). Across trials, decision strategies, reflecting recent history of interactions and predictions of future choices of the other agent, can emerge and transition gradually or abruptly, e.g., from S1 to S2 (row 3). Our approach aims to expand the classical games, enabling “transparency” in dyadic decision-making paradigms so that each agent can monitor the other agent’s ongoing social cues and actions continuously in real-time (row 1). Instantaneously coordinated actions may give rise to new strategies, e.g., leader-follower dynamics that emerge spontaneously, based on time- and space-continuous actions. Of course, decision-making in a social context requires agents to integrate longer-term experiences and predict consequences beyond situational strategies, for example, adapting to partners with different levels of competence or cooperative dispositions (row 4). Immediate partner visibility in naturalistic face-to-face interactions allows for efficient partner-specific learning and behavioural adjustments.
Dynamic social interactions may occur spontaneously without a concrete goal or a joint task. However, understanding the full spectrum of social mechanisms also requires examining goal-directed contexts in which interactions are purposeful for solving joint tasks or attaining rewards. In particular, game-theoretical approaches have been extensively used to study social value-based decisions during bidirectional goal-directed interactions. Such “games” typically provide information about the discrete choices of both players at the end of each game round or trial, or impose a predetermined order of actions, e.g., turn-taking (sequential games, or actor-observer paradigms). Thus, players can base their decisions on the history of their and their partner’s choices and outcomes and their predictions about the future (Figure 1). While such paradigms capture an important aspect of social interactions, especially at longer timescales, many real-world interactions typically unfold continuously in real-time within a direct face-to-face sensorimotor context (van Doorn et al., 2014; Yoo et al., 2021a). To study the effects of such within-trial short-term dynamics, we have recently introduced probabilistic action visibility (“transparency”) and found that giving participants dynamic and continuous access to a partner’s choices changes evolutionarily successful strategies in iterative non-zero-sum games (Unakafov et al., 2019, 2020). Along the same lines, recent work in humans and nonhuman primates has focused on continuous dynamic strategies in purely competitive dyadic settings (Hosokawa and Watanabe, 2012; McDonald et al., 2019) and non-zero-sum games (Brosnan et al., 2012, 2017; Ferrari-Toniolo et al., 2019; Hawkins and Goldstone, 2016; Ong et al., 2021; Pisauro et al., 2022). These studies revealed how individuals maximize their rewards by dynamically and continuously integrating the actions of their social partners into strategic choices.
Beyond economic strategic games, the social context also strongly influences basic perceptual processes. Studies examining perceptual judgements (i.e., judgements about ambiguous sensory stimuli) in cooperative contexts typically have individuals complete serial decision tasks, first individually and then jointly with their interaction partners (Bahrami et al., 2010; Bang et al., 2017; Bang and Frith, 2017; Baumgart et al., 2019). This work suggests that, under certain conditions, joint performance is better than the best individual performance and that partners exhibit mutual alignment of their perceptual judgements or associated confidence. Importantly, the manner and the timing of social exchange have a strong impact on individual responses: for instance, information about others’ choices that is presented serially (e.g., at the end of a trial or within structured turn-taking) influences decisions differently compared to when the information exchange is more dynamic or continuous (Pescetelli and Yeung, 2020, 2022).
This brief overview makes clear that static and dynamic designs - both unidirectional and bidirectional - tap into different levels of cognitive processing that are key to understanding social behaviour. Equally clear, however, is that our understanding of how individuals perceive, react, attend to, and learn from social interactions is likely to change based on the extent to which experimental designs consider the continuous dynamics between social partners in an interaction (Hadley et al., 2022). There is, therefore, a need for experimental platforms that allow us to incorporate and study such dynamics readily. Below, we briefly describe the experimental approaches that have, thus far, enabled studying primate social cognition at different levels of interactivity and transparency (see also Hari et al. (2015) and Fan et al. (2021) for reviews on levels of naturalism and interactivity in social neuroscience research).
Prior dyadic setups
Experimental dyadic settings range from naturalistic free-flowing interactions “in a room”, which are inherently difficult to control and analyse, to highly specialized configurations with shared or separate visual displays and workspaces for each agent, providing varying access to multifaceted socio-affective signals (e.g., eye gaze, facial expressions, or gestures). Dyadic experiments with human participants are often conducted in separate booths or rooms, using two individual computer monitors to present a task and exchange information between participants (Rollwage et al., 2020; Steixner-Kumar et al., 2022; Schneider et al., 2024). In such tasks, researchers typically manipulate the information presented to individuals or substitute a real human partner with a computer agent or confederate. Furthermore, this arrangement allows recording the brain activity of one or both participants using EEG or in an MEG or MRI scanner (Park et al., 2019; van Baar et al., 2019; Levy et al., 2021; Philippe et al., 2024). However, under such conditions, the immediacy of the real social context and the availability of socio-affective cues from the social partner are limited. Human studies have also utilised side-by-side positioning of the players in front of two separate screens in the same room (Buidze et al., 2024, 2025). Participants cannot observe each other directly, but see the outcome of the other player’s responses on their respective screen. While the immediacy of the task-related interaction on the screen is preserved, the information about the other player is strictly controlled by the experimenter and does not include facial or gestural responses.
In more naturalistic settings that facilitated the seminal discovery of mirror neurons, Rizzolatti and colleagues recorded single neurons from the frontal cortex of macaque monkeys as they either observed the experimenters reaching for different objects or reached for the same objects (di Pellegrino et al., 1992). These findings spurred the development of primate social neurophysiology and advanced our understanding of social interactions at the neuronal level (Chang, 2017; Isoda et al., 2018; Nougaret et al., 2019; Baez-Mendoza et al., 2021). For instance, in pioneering studies by Fujii and colleagues (Fujii et al., 2007, 2008), macaque monkeys sat at a table at a 90° angle or opposite to each other and reached for food morsels. Although the timing of the reaching behaviour could not be carefully controlled, such studies allowed researchers, for the first time, to test the neuronal representations of self and other’s reaching space in the parietal cortex.
Since then, many variations of dyadic setups have been implemented. A common variant used both in nonhuman primate conspecific dyads and in mixed human confederate-monkey dyads is based on a side-by-side or at 90° sitting configuration with visual access to separate (touch)screens and workspaces, or a shared screen with individual response devices (Chang et al., 2013, 2015; Haroush and Williams, 2015; Falcone et al., 2017; Brosnan et al., 2017, 2012; Cirillo et al., 2018; Ferrari-Toniolo et al., 2019; Dal Monte et al., 2020; Formaux et al., 2023; Meisner et al., 2024). Such arrangements facilitate social proximity, but do not allow direct face-to-face interactions and social gaze monitoring. Furthermore, they often require perspective switching from one’s own actions and their consequences to the partner’s actions. Thus, many studies employing this approach use sequential turn-taking between one actor who determines the outcome in the current trial and one passive observer/reward recipient at a time. Not only do such tasks enforce a less natural serial interaction, they also require participants to realize when they are the actor and when they are not, which can be a challenging process for nonhuman primates or younger children.
As an alternative, many studies have presented macaque subjects with a horizontal computer display or a horizontal touchscreen placed between the two partners, allowing them to observe each other’s actions and ensuring controlled presentation of visual stimuli (Yoshida et al., 2011; Azzi et al., 2012; Baez-Mendoza et al., 2013; Baez-Mendoza and Schultz, 2016; Noritake et al., 2018; Grabenhorst et al., 2019; Ong et al., 2021). In such setups, the gaze and the focus of attention have to be shifted from the screen below to the partner and back, while the action space is separated into own and other’s parts. Along the same lines, several studies had human participants face each other while interacting with objects on a table between partners (Hamilton and Holler, 2023; Madhavan et al., 2025; Yu and Smith, 2013). Such tasks, however, similarly necessitate considerable vertical gaze and attention shifts between the object stimuli and the partner. Furthermore, such interactions allow limited experimental control over the timing and presentation of the stimuli.
As a compromise between side-based and vis-à-vis approaches, in some studies two players faced a shared computer display arranged at an angle, so that both players could observe the stimuli and to a certain extent each other’s actions and faces (Hosokawa and Watanabe, 2012). Human studies, in turn, have used back-to-back computer screens between opposing partners (Jahng et al., 2017; Yu et al., 2020). To enable naturalistic gaze interactions, some recent human and macaque studies also utilized accessible conditions in which the dividing computer screen or a shutter can be lowered (Dal Monte et al., 2022), arranged laterally (Tang et al., 2016), or rendered transparent (Pryluk et al., 2020; Hirsch et al., 2023), exposing the face of the partner. Likewise, some across-the-table experiments with face-to-face or opaque divider conditions relied on manipulating separate response devices and auditory feedback (Behrens et al., 2020). But in all these approaches, task-related stimuli are shifted away from the face and the body signals, and workspaces are divided.
Finally, three studies have implemented a transparent face-to-face arrangement. The innovative setup of Ballesta and Duhamel (Ballesta and Duhamel, 2015) relied upon semitransparent mirrors to project visual stimuli onto a touchscreen plane, and required painstaking alignment and head-fixed subjects (personal communication, J-R Duhamel). Vaziri-Pashkam and colleagues used an ingenious but limited solution consisting of a Plexiglass divider with pieces of foam attached to it as targets, and an electromagnetic hand-tracking device, with no possibility of displaying or acting on visual stimuli. Recently, Ninomiya and colleagues (Ninomiya et al., 2021) implemented face-to-face interaction using large illuminated buttons on each side, visible to both agents but only operable by one agent, and no other visual stimuli. These approaches enable monitoring the partner’s face and actions without the need for large gaze shifts, but limit stimulus and response options.
Taken together, the approaches summarized above suggest that most research on cognition in dynamic social interactions faces a trade-off. Participants often have limited access to the continuously changing behaviour of their social partners, requiring them to split their attention between the workspace and their social partners. Paradigms that provide more access, such as studying free-flowing face-to-face interactions between two partners, have limitations in terms of experimental control, stimulus presentation and behavioural recordings. Therefore, there is a need for an approach that allows controlled stimulus presentation and experimental manipulation while still affording direct access to the social partner’s face and actions.
Dyadic Interaction Platform
In this paper, we present a dyadic interaction platform (DIP) that unifies many advantages of prior setups and allows studying real-time social interactions between two partners. The key feature of the DIP - contrasting with the commonly used configurations reviewed above - is a transparent bidirectional screen that presents stimuli and allows participants to see each other, while also serving as a shared workspace where both partners can interact. Thus, the experimental setup enables the presentation of tightly controlled visual (in combination with auditory) stimuli, which participants can simultaneously manipulate, trigger, or selectively attend to. The availability of both the social interaction partner and the stimuli in the same line of sight ensures that participants can readily attend to both sources of information. A range of different recording devices can be flexibly integrated into the platform to capture rich multi-dimensional behavioural, physiological, and neural data, allowing one to evaluate task-driven and spontaneously occurring contingencies in social interaction. Taken together, the platform uniquely embeds tight experimental control in naturalistic face-to-face social interactions, thereby allowing researchers to examine social information processing in bidirectional, dynamic social interactions.
Two technical conference reports have introduced a concept similar to the projection-based dyadic interaction platform variant we are presenting here (Ishii and Kobayashi, 1992; Heo et al., 2014). We advance this approach by demonstrating flexible implementations adaptable to diverse experimental designs and target groups, and presenting empirical validation data from four distinct classes of interaction paradigms.
Materials and methods
We begin by summarizing the main platform components and evaluating the advantages and disadvantages of different options. We then describe several DIP variants, and in the Results section showcase compelling use cases where DIPs have been applied to address specific research questions across four example paradigm classes.
Platform components
Here, we describe the variety of visual displays, interactive components, and recording devices that can be flexibly integrated and synchronized within the DIP.
Visual displays
The mutual visibility of the interaction partners is a hallmark of the DIP. Thus, the display, whilst acting as the shared visual workspace, should maintain high transparency, allow stimuli to be displayed precisely in time and space, and offer similar flexibility and ease of use as a computer monitor. The desired optical properties of the display are a low haze, high transmittance, and low reflectivity. A hazy display would blur everything behind the display and low transmittance lowers contrast and brightness. Both, therefore, reduce the salience of the interaction partner’s face and body. Reflections create ghost images of the stimuli and the participants which can be distracting. Furthermore, the optical properties of transparent panels should stay constant over the visible spectrum so that the display does not appear tinted.
We found that using low-iron, anti-reflective glass (sometimes referred to as museum glass) or attaching anti-reflective film to the glass panel can help reduce reflections (see Figure 2). Importantly, anti-reflective coating or film should be placed on all pane-to-air interfaces, if there are gaps between the layers (e.g., between the visual display and the protective glass). However, participants may still see a faint reflected self-image of themselves on the screen, especially if the two sides of the DIP are not equally illuminated. Thus, maintaining equal illumination on both sides of the DIP is crucial. In our experience, multiple independently controllable diffuse light sources on the two sides - instead of directed spot-lights - allow for suitable ‘titration’ of partner visibility and self-reflection. Thus, by manipulating the degree of illumination on the two sides of the DIP, studies can either ensure that the visual experience is similar across the two participants or create “asymmetric dyads”, where one participant, typically the real subject, can see the other partner and their facial and posture cues well, but not vice versa.
Crucially, a transparent screen cannot display black colour. Thus, black areas of an image are perceived as transparent, i.e., the participant and backdrop on the other side act as the background for the stimuli. If the scene behind such a display is dark, the display will appear similar to a conventional non-transparent monitor. If the scene is illuminated, the presented stimuli will be superimposed on the scene. Thus, both sides of the DIP need to be illuminated appropriately to ensure transparency and access to the partner’s face and actions. While this, in turn, can result in some degree of self-reflection, the distracting effect of self-reflection can be alleviated by offsetting the two partners in depth (distance from the display) and vertical/horizontal directions. We note that, in our experience, the self-reflection typically becomes less distracting once partners engage in interaction. Since the scene behind the display is typically illuminated, the backdrop should be as homogeneous as possible. We found that a dark backdrop (curtains or a painted wall) allows high contrast for the stimuli, displays darker colours with higher fidelity, and avoids the reflections on the display that bright walls and objects create.
The contrast of presented stimuli is also influenced by the brightness of the display. Bright stimuli will occlude the scene behind. Therefore, the placement of stimuli on the screen should ensure that they minimize occlusions of the partner’s face. For example, stimuli can be placed on the sides or in a ring-like arrangement that leaves enough space in the centre to see the partner’s face. Furthermore, stimuli that work well on standard monitors might not work well on a transparent screen given that, as noted above, black areas are displayed as transparent. Dark parts of an image might still be discernible when enclosed by bright outlines, provided the contrast costs incurred by the illuminated background are taken into account. Given these limitations, we found that bright, simple stimuli such as geometric shapes or cartoons in bold colours were preferable to photo-realistic images.
Finally, studies may need to include transparent and non-transparent settings within a single experiment. A non-transparent setting is one where the two partners independently interact with the stimuli without seeing each other or each other’s responses. A simple solution we employed was to install blinds that block half of the screen on each side, thereby completely occluding the partner, while allowing participants to use the remaining half of the display on their side. An alternative solution would be to use electric switchable glass, which allows opacity to be adjusted continuously and less arduously. Next, we describe the different displays we have used for various instantiations of the DIP and their advantages and disadvantages.
OLED displays
Organic LED (OLED) screens are available in 4-sub-pixel configurations that allow light to pass between the two sides of the screen. This results in a display that is transparent when the stimulus is dark and increasingly opaque as the brightness of the stimulus is increased. Thus, bright objects presented on a dark background are clearly visible from both sides of the screen, while the dark background allows good visibility of the scene on the other side of the screen. One consequence of the 4-sub-pixel structure is that transparency is around 40-45%.

Overview and schematic of the composite panels for two variants of DIP visual displays.
The top row presents the configuration and the panels that are combined with the OLED display (implemented in DIP1 and DIP2); note that in the macaque/human DIP1, the eyetracker and the hand/body cameras are illustrated on only one side, and only one side of the panel composition is shown, for clarity. The bottom row presents the configuration and the panels that are combined in the projector-based design (implemented in DIP3 and DIPc).
Due to OLED design asymmetry, there are noticeable differences in luminosity between the front and the back side of the screen, which can reach ratios of up to 80:1. While this results in a considerable difference in brightness across the two sides of the screen, we note that the brightness on the dimmer side is still well above perceptual thresholds (see Table 1). To avoid giving participants on the brighter side a perceptual advantage, stimuli should be chosen to be clearly above the visibility threshold as judged from the dimmer side. All OLED displays we have tested so far (Table 1) have provided a sufficiently bright image on the dimmer side. Generally, this asymmetry is undesirable but is negligible in most cases, and it can be alleviated by tuning the background lighting to balance the contrast on both sides.
Light projector and projection film
An alternative to the OLED display described above is a projection film that displays an image projected from a light projector. The film can be mounted on any smooth, transparent surface, such as window glass, turning the surface into a projection screen (see Figure 2). The light transmittance and clarity of the film can be high (both > 90%) and, in combination with anti-reflective glass, allows for excellent transparency (see Table 1). The projector can be placed relatively freely in space depending on its optics, which decouples the image generation from the screen. This allows flexibility concerning the display size and resolution. A projector with a steep projection angle is favourable because it reduces the possibility of the participant’s body occluding the image from the screen. We, therefore, used ultra-short throw projectors that offer the steepest projection angles and can be placed directly above the screen, thereby reducing the distance from the screen where the occlusion can happen. However, when participants interact with the screen by touch, the hand tapping on the screen will block the light from the projector, thereby obstructing the stimulus on both sides. Usually, the participant performing the tapping is unaware of the occlusion because their hand obstructs the view. For the other participant, however, the stimulus partially or wholly disappears when touched on the other side. To mitigate such occlusions, we used a second projector on the other side, projecting a mirrored image so that occlusion from one side only results in a slight drop in brightness. This has the additional advantage that the brightness of stimuli is equal on both sides, independent of the projection film (see Table 1).
Including a second projector into the system requires careful alignment of the two projection planes onto one another to ensure that stimuli on both sides precisely overlap. We achieved this as follows: We presented a test image (e.g., a grid of bright lines on a dark background) and performed coarse mechanical alignment of the second projector (height, position, angle, focus). The remaining distortion was then removed with the image shape correction of the projector firmware. The “point correction” allows one to shift individual grid points so that the test images overlap. When the correction is applied, the projector interpolates between the grid points so that the presented stimuli overlap.
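For the coarse alignment step, a minimal sketch for generating such a grid test image in Python (assuming the Pillow imaging library; resolution and grid spacing are arbitrary placeholders):

    # Generate a bright grid on a dark background for two-projector alignment.
    from PIL import Image, ImageDraw

    def make_alignment_grid(width=1920, height=1080, step=120):
        img = Image.new("RGB", (width, height), "black")
        draw = ImageDraw.Draw(img)
        for x in range(0, width + 1, step):   # vertical grid lines
            draw.line([(x, 0), (x, height)], fill="white", width=2)
        for y in range(0, height + 1, step):  # horizontal grid lines
            draw.line([(0, y), (width, y)], fill="white", width=2)
        return img

    make_alignment_grid().save("alignment_grid.png")

The same image is presented through both projectors; residual offsets between the two grids are then removed with the firmware point correction described above.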
Luminance and transmittance of DIP displays
We examined the brightness and overall light transmittance of the OLED and projector-based displays used in current instantiations of the DIP. A luminance meter (LS-100 with close-up lens No 135, Minolta) was mounted on a tripod about 50 cm from the screen. The device pointed towards the display at a right angle and received light from a field of view of 1 degree. During the measurements, all room lighting was turned off to only capture the light of the displays. The luminance of a homogeneous white light source placed directly before and behind the display was measured to determine the transmittance. The transmittance was then calculated as the ratio of these luminances. To assess the brightness difference of the two sides of the display, we measured the luminance from each side of the display while the display showed a homogeneous white screen. The average luminances per side and the respective ratio are shown in Table 1.
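The underlying arithmetic is straightforward; a minimal sketch with hypothetical luminance readings (not the values reported in Table 1):

    def transmittance(lum_through, lum_direct):
        # Luminance of the light source measured through the panel,
        # divided by the luminance of the bare source.
        return lum_through / lum_direct

    def side_ratio(lum_side_a, lum_side_b):
        # Brightness asymmetry between the two sides of the display.
        return lum_side_a / lum_side_b

    print(transmittance(152.0, 190.0))  # 0.8, i.e., 80% for these hypothetical readings
    print(side_ratio(300.0, 295.0))     # close to 1, as expected for dual projection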
We found that the light projector-based DIPs (DIP3, DIPc) have a luminance ratio close to one, as expected from the symmetric projector setup. The OLED displays have one side that is much brighter than the other, while the transmittance is also reduced as compared to the projection-based systems. On the other hand, OLED-based displays present crisper, non-blurred stimuli, and do not require complex setup and alignment.

Characterization of transparent displays
Auditory stimulation
Auditory speakers directed to both sides of the DIP allow researchers to provide instructions and auditory feedback to both participants simultaneously. For example, for non-human primate experiments, information about the amount of reward received by the partner can be encoded in the audio stream (Moeller et al., 2023), while task instructions, e.g., for human child participants, can be pre-recorded and played back as part of the task progression, ensuring reproducibility across dyads (Bothe et al., 2024). The combination of engaging visual and auditory feedback, for instance during successful performance on a task, can render the task more “game-like” (Allen et al., 2024; Lewen et al., 2025), helping evoke and sustain interest and attention.
Interactive components
Human interface devices (HID) - such as computer mice, joysticks, or touch panels - are integral to the DIP concept and functionality. They can be used to steer the progress of the task; e.g., participants can trigger subsequent stimulus presentation by touching, moving to, or clicking on a specific point on the screen. At the same time, such devices allow continuous read-out of information as participants actively respond to the task, which can be analysed later as a continuous record of behaviour.
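As an illustration of such continuous read-out, the sketch below logs time-stamped cursor positions for later trajectory analysis (assuming the pygame library; window size and sampling interval are placeholders):

    import time
    import pygame

    pygame.init()
    screen = pygame.display.set_mode((800, 600))
    trajectory = []  # (timestamp, x, y) samples

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        # Sample the cursor position together with a high-resolution timestamp.
        trajectory.append((time.perf_counter(), *pygame.mouse.get_pos()))
        pygame.time.wait(4)  # roughly 250 Hz sampling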
Touchscreens
The touchscreen capitalizes on the primary form of visually-guided dexterous manipulations of the environment characteristic of primates. It has been extensively used in sensorimotor neuroscience to study visually-guided reach movements (Battaglia-Mayer et al., 2003; Gail and Andersen, 2006; Chang and Snyder, 2012; Lehmann and Scherberger, 2013; Suriya-Arunroj and Gail, 2019). A standard touchscreen-based paradigm, however, typically registers only endpoints of a movement, e.g., when participants tap on their selected target on the screen. Here, the transparency of the DIP yields a significant advantage, since participants can continuously observe the hand and arm movements of their partner and infer where they are going to tap on the screen. This has, for example, allowed us to uncover behavioural dependencies based on the visibility of the social partner’s ongoing actions (Moeller et al., 2023). Researchers interested in continuous dynamics of such behavioural dependencies may consider supplementing the touchscreen recordings with marker- or video-based motion capture (see below) to digitise continuous behaviour in 3D (Gallivan and Chapman, 2014). Analysing behaviour in 3D could be particularly important given studies showing that reach dynamics can differ depending on the depth of the reaching plane (Ferrea et al., 2022). Transparent touchscreens also ensure that the visual stimuli, the social partner and the shared workspace are all in the same view, thus allowing participants to devote full attention to the task space and to observing the partner. Touchscreens may also provide participants with a greater sense of agency in interacting with the stimuli and the partner’s hand on the other side, compared to more abstract cursor tasks controlled by mouse or joystick. However, touchscreens incur costs in terms of the participants’ reaching arm obstructing the view of the stimuli or the partner’s actions. Vertical touchscreen tasks may also not be suitable for longer studies or certain types of interactions due to the physical effort required to raise the arm to provide a response.
Mouse and joystick
In contrast to touchscreens, computer mice and joysticks (or other types of manipulanda) allow the participant’s continuous behaviour to be represented by a moving cursor on the screen. Mouse-tracking tasks combined with analyses of the movement trajectories have been successfully employed to study economic decision-making in solo settings (Spivey et al., 2005; Freeman and Ambady, 2010; Scherbaum and Dshemuchadse, 2020; Boschet-Lange et al., 2024). Recent methodological advances in trajectory quantification, furthermore, allow analysis of space-time continuous data at the level of individual movements in single trials (Gallivan et al., 2018; Ulbrich and Gail, 2021, 2023) without having to rely on stereotyped movements averaged across trials.
In dyadic settings, information about each participant’s task-related behaviour (and hence cognitive state) can be made mutually available via mouse- or joystick-controlled cursors on the transparent screen without significant occlusions by the arm. Overall body motion during mouse or joystick response is reduced, allowing participants to complete longer experiments and respond across the entire screen with minimal effort. Furthermore, minimizing body motions is advantageous for stable recordings of physiological and neural data, e.g., reducing muscle and movement artifacts in EEG and MEG.
Another advantage is that mice and joysticks allow experimenters to create dissociations between a participant’s physical manipulation of the joystick and the observable consequences of this manipulation on screen, for example to introduce increased action costs by limiting the movement speed of the cursor (Lewen et al., 2025). For certain experimental designs, joysticks or haptic manipulanda may also offer advantages over computer mice. First, joysticks provide two response dimensions, such as direction and tilt, which can be used to measure distinct aspects of task response (see Perceptual decision-making in dyadic context). Second, the origin of the behavioural response can be normalized to the centre position of the joystick, allowing reproducible positioning of the hand. Third, force-feedback joysticks or robotic manipulanda allow resistive forces important for effort-based decision paradigms (Morel et al., 2017). Computer mice and joysticks, however, may require additional consideration related to differences in users’ proficiency with such devices, e.g., when working with infants and younger children.
Data acquisition components
The DIP can be equipped with recording devices to continuously monitor spontaneous and goal-directed behaviour such as facial and vocal expressions, body movements, gaze, as well as peripheral physiological and neural signals. In what follows, we briefly describe the devices that can be integrated, the behaviour they can capture and how such data can be efficiently analysed, to allow for a comprehensive view of dyadic interactions in social settings.
Facial expressions
One of the hallmarks of the DIP is the availability of the social partner and their dynamic facial signalling in the same line of sight as the visual stimuli. Thus, the partner’s face and changes to the partner’s facial expressions can be incorporated in the experimental paradigm, allowing an unprecedented level of precision in examining how changes to one participant’s facial expressions impact the behaviour and performance of the other participant. Videos of the faces of both participants can then be analysed with specialized tools such as Affectiva, face recognition software from iMotions, or other machine learning tools (Ballesteros et al., 2024), to automate recognition of specific facial expressions. Importantly, we note that for accurate identification of the facial expressions, the cameras must be positioned approximately in front of the face (see Figure 3, DIP3).
Non-linguistic vocalizations
Non-linguistic vocalizations like laughter, sighs, grunts, or grumbling are important socio-emotional cues in social interactions. These non-linguistic vocalisations can occur between spoken content or even in the absence of verbal communication. They are recorded with microphones attached to both sides of the DIP setup. Software such as Hume AI (Hume AI Inc., 2024) can automatically identify non-verbal utterances in longer recordings and classify them along 24 distinct emotional dimensions. Emotional vocalisations can then be analysed with regard to the extent to which participants respond to each other’s performance and whether such vocalisations influence the behaviour of the partners in the task. For instance, non-linguistic vocalisations may be one way for a member of the dyad to express either satisfaction with or a desire to change the interaction strategy currently displayed by the dyad.
Linguistic vocalizations
With human participants, language is one of the primary media for exchanging social cues. The microphones attached to the DIP setup can be used to capture the linguistic vocalisations exchanged between two participants working together and to examine how such exchanges influence the processing of the presented visual and auditory stimuli. This could be expanded to include conversation analysis with varied foci, e.g., understanding the benefit of face-to-face interaction relative to asynchronous interaction in processing linguistic input, examining the linguistic structures that enable participants to coordinate efforts towards the completion of a shared objective, how turn-taking is organised and impacts subsequent decision-making, or how repairs are perceived or initiated when participants detect a mismatch in the strategies employed by the social partners. Recent advances in AI, e.g., OpenAI Whisper (Radford et al., 2023), can be incorporated to automate speech transcription, allowing researchers to time-lock speech to the temporal properties of stimulus presentation and response timing.
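A minimal transcription sketch, assuming the openai-whisper Python package and a hypothetical recording file; the segment timestamps (in seconds from recording start) can then be aligned to stimulus and response events via a shared synchronization marker:

    import whisper  # pip install openai-whisper

    model = whisper.load_model("base")
    result = model.transcribe("dyad_session_mic1.wav")  # file name hypothetical
    for seg in result["segments"]:
        # Each segment carries onset, offset, and text for time-locking
        # speech to the temporal structure of the task.
        print(f"{seg['start']:7.2f}-{seg['end']:7.2f} s  {seg['text']}")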
3D hand tracking
As with facial expressions, ongoing visibility of a partner’s actions is a critical source of information to guide one’s own behaviour in a dyadic interaction. Thus, the timing and direction of the participants’ hand movements need to be integrated into the analysis of such interactions. One way of tracking the 3D movement of the hand in real-time is to let subjects use a haptic manipulandum, as introduced above (Morel et al., 2017), and register the position and even the force applied to its handle. As an alternative to the manipulandum, free movements can also be tracked using video-based motion capture, at least for offline analyses. Multiple cameras with different views of the same action can be used to calculate the 3D position of objects during a task. Thanks to advances in machine learning, it is possible to automatically detect the presence and position of body parts, e.g., hands or arm position, in such digital images with remarkable accuracy (Mathis et al., 2018). Increasing the number of cameras ameliorates the occlusion problem, where some camera perspectives cannot capture all features. Modern systems reconstruct occluded parts very well and can be trained on limb or hand models of different species (Hüser, 2024). One successfully tested setup included three cameras on each side of the screen, i.e., six cameras in total (macaque/human DIP1, see Figure 2 and Table 2). One camera was mounted centrally on top of the setup looking down: this view covered the 2D movements of the hands and also allowed us to monitor the overall experiment. One camera was mounted on the top left (looking to the bottom right), and one was mounted on the top right (looking to the bottom left). Both cameras were mounted on a plane parallel to the screen with their central axes orthogonal to each other. These cameras allowed estimation of the 3D locations of the arm, providing valuable information on when participants begin to interact with the screen and how their hand movements influence their social partner’s behaviour.
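For the 3D reconstruction step itself, a minimal sketch assuming the cameras have been stereo-calibrated beforehand and 2D keypoints come from a tracker such as DeepLabCut (all variable names are placeholders):

    import cv2
    import numpy as np

    def triangulate_hand(P1, P2, pts_cam1, pts_cam2):
        # P1, P2: 3x4 projection matrices obtained from stereo calibration.
        # pts_cam1, pts_cam2: 2xN arrays of corresponding 2D keypoints.
        pts_h = cv2.triangulatePoints(P1, P2,
                                      pts_cam1.astype(np.float32),
                                      pts_cam2.astype(np.float32))  # 4xN homogeneous
        return pts_h[:3] / pts_h[3]  # 3xN Euclidean hand coordinates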
Gaze behaviour
Eyetracking devices allow continuous monitoring of participants’ gaze behaviour and quantification of participants’ attention to the content on the screen and to their social partner during their mutual interaction. The DIP can be combined with eyetrackers mounted either below or above the screen, with head-mounted eyetrackers, or, indeed, with cameras whose output is later coded with regard to participants’ gaze behaviour.
If no physical interaction between the participants and the screen is required and participants remain seated and relatively still throughout the experiment, eyetrackers can be mounted below the screen. In such cases, simple cameras capturing head and eye movements may also provide a cost-efficient alternative to commercial eyetrackers. Cameras, however, come with lower spatial resolution and are only recommended for use when limited content with well-defined areas of interest is presented on screen.
If a touch response or arm movements toward the screen are intrinsic to the task, eyetrackers can be mounted either above the screen, or participants can wear head-mounted eyetrackers (e.g., Tobii glasses, Pupil Labs Core or Neon). Modern binocular head-mounted eyetrackers allow researchers to collect participants’ gaze behaviour in the real world, i.e., beyond the limitations of content displayed on the screen. For example, they make it possible to track gaze to the social partner’s hand movements towards the screen before the actual on-screen touch happens, or to assess the extent to which participants track their social partners’ eye or head movements before following their gaze (see Section Attention and social learning). Such eyetrackers can also be used in studies where participants move freely during the task. Head-mounted eyetrackers do, however, change the appearance of the social partner, which might be undesirable in paradigms with young children or where unobstructed facial expressions or gaze cues are critical to task performance.
In the DIP, it can often be necessary to differentiate between gazing at an object on the screen and gazing at the partner located behind the screen. There are straightforward and more complicated solutions to this problem. One simple solution is to ensure that the centre of the screen is left free so that the partner’s face is distinct from task-related objects, e.g., with stimuli arranged in a circle around the partner’s face. For instance, in Figure 7, we present data captured using Pupil Labs eyetrackers worn by children who were allowed to move freely during the task. The data were fed through an object detection model (YOLOv8, Jocher et al., 2023). This allowed automated detection of the partner’s face and the different objects presented on screen, which were then mapped onto the gaze data to estimate children’s gaze towards the partner and those objects. However, manipulating stimulus presentation in this manner could result in the object on the screen occluding the partner’s face, depending on where the participant stands. Binocular models that estimate eyetracking coordinates in 3D may provide a more suitable alternative, making it possible to calculate vergence and discriminate between fixations on objects on the screen and fixations on the partner’s face beyond it.
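A minimal sketch of such detection-to-gaze mapping, assuming the ultralytics package with pretrained weights (in practice the model would be fine-tuned on the task stimuli and faces; file name and gaze coordinates are placeholders):

    from ultralytics import YOLO  # pip install ultralytics

    model = YOLO("yolov8n.pt")                     # pretrained detection weights
    detections = model("scene_camera_frame.png")[0]
    gaze_x, gaze_y = 612.0, 344.0                  # gaze point in scene-camera pixels

    # Label the gaze sample with whichever detected object it falls on.
    for box, cls in zip(detections.boxes.xyxy, detections.boxes.cls):
        x1, y1, x2, y2 = box.tolist()
        if x1 <= gaze_x <= x2 and y1 <= gaze_y <= y2:
            print("gaze on:", model.names[int(cls)])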
Automated recognition of different areas on the screen and gaze tracking during free head movements can also be achieved using surface detection, as implemented within the Pupil Labs system. Surface detection relies on the position and orientation of predefined markers, e.g., AprilTags, in the visual scene (Olson, 2011) (see Figure 3, DIP2). The eyetracking software can be set up to detect where a participant’s gaze falls relative to these AprilTags throughout the study, thereby enabling automatic detection of participants’ fixations to specific parts of the screen.
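The sketch below illustrates the underlying idea, mapping scene-camera gaze into screen coordinates via a homography estimated from detected corner tags, assuming the pupil-apriltags and OpenCV packages (tag IDs, screen geometry, and gaze coordinates are placeholders):

    import cv2
    import numpy as np
    from pupil_apriltags import Detector  # pip install pupil-apriltags

    detector = Detector(families="tag36h11")
    gray = cv2.imread("scene_camera_frame.png", cv2.IMREAD_GRAYSCALE)
    centers = {t.tag_id: t.center for t in detector.detect(gray)}

    # Known screen positions of four corner tags (IDs and layout hypothetical).
    screen_pos = {0: (0, 0), 1: (1920, 0), 2: (1920, 1080), 3: (0, 1080)}
    src = np.float32([centers[i] for i in sorted(screen_pos)])
    dst = np.float32([screen_pos[i] for i in sorted(screen_pos)])
    H, _ = cv2.findHomography(src, dst)

    gaze_scene = np.float32([[[612.0, 344.0]]])            # scene-camera pixels
    gaze_screen = cv2.perspectiveTransform(gaze_scene, H)  # screen pixels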
Peripheral physiology
Social decision making and cooperative behaviour in real-life interactions are accompanied by strong changes in the physiological and psycho-emotional states of the participant, regulated by autonomic physiological processes (Behrens et al., 2020). Such changes can influence self-perception (e.g., noticing an increase in heart rate) and can also be perceived by the interaction partner, e.g., blushing (Prochazkova and Kret, 2017); for a review of physiological synchrony in dyads, see Palumbo et al. (2016). There is, therefore, real value in recording peripheral physiological measures such as electrocardiography (ECG), electromyography (EMG), and electrodermal activity (EDA) in the DIP. For instance, EMG recordings of facial muscle movement serve as indicators of expressions of positive and negative valence. Physiological and emotional state changes, mediated by sympathetic and parasympathetic systems, can be reflected in electrodermal activity (EDA) and the heart rate, measured by ECG or pulse-oximetry. Yet, in the context of behavioural experiments with freely moving subjects, the probe of a pulse oximeter can be distracting or unwieldy. Image-based photoplethysmography (iPPG) overcomes these challenges, allowing the extraction of pulse information from videos of a subject’s face. Indeed, this has been successfully used in human subjects with low-cost webcams as sensors (Poh et al., 2011). Furthermore, we have shown that iPPG with low-cost video cameras can also be successfully used in head-fixed rhesus macaques in spite of their smaller size and more pigmented skin (Unakafov et al., 2018).
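A minimal sketch of the iPPG principle, band-pass filtering the mean green-channel signal of a face region and taking the dominant spectral peak (a simplified stand-in for the full method; numpy and scipy assumed):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def ippg_heart_rate(green_means, fs):
        # green_means: one mean green-channel value of the face ROI per frame.
        # fs: video frame rate in Hz.
        b, a = butter(3, [0.7, 4.0], btype="band", fs=fs)  # ~42-240 bpm passband
        filtered = filtfilt(b, a, green_means - np.mean(green_means))
        spectrum = np.abs(np.fft.rfft(filtered))
        freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)
        return freqs[np.argmax(spectrum)] * 60.0  # pulse rate in beats per minute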
Electrophysiology and hyperscanning
The most direct way to examine the neural correlates of the interaction between individuals is through hyperscanning, a method that involves the simultaneous recording of brain activity from two or more individuals to determine how covariation in their neural activity relates to their behaviour and social interactions (Hakim et al., 2023). One method that is particularly well suited for investigating the rapidly changing neural processes that occur in dyadic social interactions is EEG. EEG is a non-invasive and cost-effective method that can be used to map neural processes with very high temporal resolution and is easily extendable to a hyperscanning setup (Czeszumski et al., 2020). The integration of hyperscanning EEG into the DIP further allows the monitoring of neural changes based on the shared task environment (visual and auditory cues controlled by the experimenter) and the representation of the partner and their actions (socio-emotional cues, decisions). Hyperscanning can be achieved through different approaches: integration via linked stationary amplifiers, synchronization via trigger signals sent from the same PC, or via Lab Streaming Layer (LSL; Kothe et al. (2024), see https://labstreaminglayer.org/). Recording both subjects with a single data acquisition system simplifies time synchronisation, since only the clock of the electrophysiological recording has to be synchronized with the clock of the computer running the paradigm, facilitating inter-brain synchronization analyses. Mobile EEG systems allow for more flexibility in the tasks to be performed and allow participants to move around during the session with comparatively little loss of data quality. The concurrent registration of peripheral physiological measures and neural signals is especially crucial for the investigation of neural correlates of dynamic social behaviours, such as social mimicry (Achaibou et al., 2008). It may also facilitate artifact correction of noise sources that are common to multiple measurement channels (Li et al., 2021), which might be more pronounced in dynamic social interactions.
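A minimal LSL acquisition sketch, assuming the pylsl package and two EEG streams already published on the network (stream properties and the termination condition are placeholders):

    from pylsl import StreamInlet, resolve_byprop

    # Wait for the two participants' EEG streams to appear on the network.
    streams = resolve_byprop("type", "EEG", minimum=2, timeout=30.0)
    inlets = [StreamInlet(s) for s in streams]
    offsets = [inlet.time_correction() for inlet in inlets]  # per-stream clock offsets

    while True:  # acquisition loop; terminate via task logic
        for inlet, offset in zip(inlets, offsets):
            sample, timestamp = inlet.pull_sample(timeout=0.0)
            if sample is not None:
                t_common = timestamp + offset  # timestamp on the shared LSL time base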
Much like traditional single-subject experimental platforms, the DIP is also well suited for intracranial electrophysiology in one or both subjects, enabling targeted high temporal and spatial resolution recordings of neuronal activity such as single neuron and local field potential recordings. This approach can be used in nonhuman primate experiments (Chang, 2017; Isoda et al., 2018), and in human epilepsy patients undergoing intracranial EEG (iEEG) monitoring with subdural grid electrocorticography (ECoG) or stereotactic depth electrodes (Parvizi and Kastner, 2018).
An important consideration in DIP-based experimental designs, relevant to all neural data recording modalities, is the vis-à-vis arrangement and transparency, which result in participants representing the same lateralised stimulus/action space in opposite hemispheres. For example, a stimulus appearing on the left for one participant (processed in the right hemisphere) would be on their partner’s “subjective” right side (processed in the left hemisphere). This is particularly crucial for early visual processing in humans and even more so for the highly contralateral cortical representations, including the frontoparietal network, in macaques (Kagan et al., 2010). These considerations should be taken into account when designing DIP-based dual-brain analyses, relative to common hyperscanning approaches where participants are in separate rooms or seated side-by-side and observe and act on identical visual displays.
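In practice, this means that horizontal stimulus coordinates should be mirrored into the partner's reference frame before pooling lateralised measures across the two brains; a minimal sketch (screen width in pixels is a placeholder):

    def to_partner_frame(x_own, screen_width=1920):
        # Through the transparent screen, the partner sees the shared workspace
        # left-right reversed: a stimulus at horizontal position x for one
        # participant appears at screen_width - x for the other, and is
        # initially processed in the opposite hemisphere.
        return screen_width - x_own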
MEG and hyperscanning
Magnetoencephalography (MEG) has found applications in neuroscience despite its considerably higher cost compared to EEG, because it offers certain advantages over EEG with respect to source localization of neural activity. This is because (i) MEG measures the magnetic field, and is thus reference-free, and (ii) the physics of the magnetic fields induced by neural currents allow for an easier calculation of high-precision forward solutions to the electromagnetic inverse problem. Until very recently, MEG could be considered a less-than-ideal method for neurophysiological studies of transparent interaction due to the high cost of superconducting-based MEG devices and their size, which precluded operating two MEG devices in a standard-sized single magnetically shielded room. A critical restriction was the fact that subjects had to refrain from any head movement to avoid relative motion between the stationary sensors of the device and the head.
This situation has changed with the advent of usable optically-pumped magnetometer (OPM) MEG systems of sufficient sensitivity (Brookes et al., 2022). OPM-MEG sensors need no cooling with liquid helium and are lightweight enough to allow for a sensor montage in the form of a helmet that subjects can wear on their heads (Boto et al., 2018). Moving MEG sensors on a subject’s head through space, however, comes at a cost: as OPM-MEG sensors usually have a low dynamic range, any background magnetic field in the room has to be compensated locally at the sensor before the measurement to ensure proper sensor function. Therefore, background fields need to be much more tightly controlled by the magnetically shielded room (MSR) than for stationary, superconducting-based MEGs. Moreover, dynamic compensation of residual fields is possible for experiments with head movements (Holmes et al., 2023), resulting in a setup better suited to studies of naturalistic dyadic interactions. An important additional advantage brought about by the switch to OPM-MEG is that larger sensor configurations, e.g., for hyperscanning, can be built out gradually, whereas sensor number and configuration in a superconducting-based system are fixed. Additional challenges for integrating MEG with the DIP concern magnetic fields produced by DIP components. For instance, projectors for presenting stimuli on the transparent screen have to be mounted outside the MSR (e.g., above its ceiling). Thus, an optic path of considerable length has to be traversed through a relatively narrow opening in the MSR wall, and entirely non-magnetic materials have to be used inside the MSR to realize the necessary short-throw optics.
Description of DIP instantiations
Table 2 describes four instantiations of the DIP varying in terms of the target subject population, the kind of data being collected, the visual displays, and the combination of recording devices that have been successfully incorporated into the platform. The first instantiation of the DIP targeted interactions between two macaques, two humans, or a human and a macaque (macaque/human DIP, DIP1). Key features of this setup are flexibility in terms of how responses could be recorded, i.e., a touchscreen, a mouse or a joystick, head- or frame-mounted eyetracking, intracranial electrophysiology, as well as computer-controlled fluid dispenser pump systems to provide reward to the macaques (Figure 3, top row, left and middle panels). For human participants, the second instantiation of the DIP (human DIP2; Figure 3, top row, right panel) included mice or joysticks as the recording devices for task-related responses, and wearable eyetrackers. The next update to the DIP (DIP3), based on shielded projectors instead of OLED displays, allowed for the collection of a wider range of behavioural and neurophysiological data, e.g., recording peripheral physiology and EEG, as well as eyetracking data from a mounted eyetracker on each side of the frame (Figure 3, bottom row, left panel). Finally, the DIPc extended the setup to human child-adult and child-child interactions, requiring additional changes in terms of dimensions, integrating head- or frame-mounted eyetrackers, object and face detection using machine learning models, as well as vital cosmetic changes, e.g., differently coloured frames on the two sides, to indicate to the child when and how they could respond (Figure 3, bottom row, middle and right panels).
Ethics statement
Experiments with human participants were performed in accordance with institutional guidelines and adhered to the principles of the Declaration of Helsinki. Human participants in all experiments presented here provided written informed consent for their (or their child’s) participation in the study. In particular, all adults depicted in Figure 3 gave explicit written consent for themselves and/or their children for photographs to be depicted in the figure and agreed to its publication. All studies were approved by the ethics committee of the Georg-Elias-Müller-Institute for Psychology, University of Göttingen.
The experimental procedures with macaque monkeys were approved by the responsible regional government office (Niedersächsisches Landesamt für Verbraucherschutz und Lebensmittelsicherheit (LAVES), permits 3392-42502-04-13/1100 and 3319-42502-04-18/2823), and were conducted in accordance with the European Directive 2010/63/EU, the corresponding German law governing animal welfare, and German Primate Center institutional guidelines.

Description of the four DIPs

Dyadic Interaction Platforms in action.
Top row: OLED-based DIP1 and DIP2; bottom row: double projection-based DIP3 and DIPc. See Table 2 for descriptions.
Results
Designing one- and two-way interactive paradigms for a DIP differs from designing traditional paradigms in which a single participant is tested. On the one hand, including two agents in trial-based experiments is straightforward: participants can be allowed or instructed to take turns. On the other hand, dynamic decision-making, in contrast to discrete, regulated trial-based paradigms, involves navigating a continuous stream of choices, actions, and outcomes, mirroring the fluid nature of decision-making in real-life scenarios. Such dynamic decision-making tasks allow for a more ecologically valid exploration of behaviour and cognition. Next, we describe four classes of example paradigms that we realized with the DIP - which vary in terms of the dynamic and continuous nature of the interaction between the two partners - and the exciting possibilities that such approaches open up for cognitive science research.
Transparent economic games
Dyadic economic games are a cornerstone of experimental economics and social science, offering a powerful framework to explore the decision-making processes between two participants in competitive and cooperative contexts (Sanfey, 2007; Rilling and Sanfey, 2011; Tremblay et al., 2017). These games are intended to simulate real-world scenarios where individuals must make choices that affect not only their own outcomes but also those of their partners (von Neumann and Morgenstern, 1944). Such choices can be presented as one-shot games or as iterated games with repeated interactions that encourage tracking the interaction history and the formulation of predictions regarding the other agent’s decisions. Classical examples include the Prisoner’s Dilemma, the Stag Hunt, the Hawk-Dove / Chicken Game, the Ultimatum Game, and the Trust Game, each offering unique insights into altruism, reciprocity, fairness, and a conflict over a shared resource (Smith, 1997; Brosnan et al., 2017).

Dynamic coordination in the transparent Bach-or-Stravinsky game.
(A) Top panel: “fraction choosing own”, i.e., the fraction of trials in which agent A (human confederate, red) and agent B (monkey, blue) chose their individually preferred target, in one session (running average over eight trials). Visual access to the other’s actions was occluded in the middle part of the session (opaque). The confederate (red) switched between own and the monkey’s preferred targets in blocks of 20 trials. Bottom panel: the average joint reward. Dashed green line: the maximal attainable average joint reward, given the payoff matrix used. (B) Human vs. monkey reaction time difference histograms for the three prevalent outcomes: coordinated selection of the human’s preferred target (red), coordinated selection of the monkey’s preferred target (blue), and selection of their own preferred target by each agent (magenta), in the two action visibility conditions. Modified from Moeller et al. (2023).
While, in classical dyadic games, individual choices are made either “simultaneously” (neither player knows the choice of the other before making their own decision) or sequentially in a predetermined order, real interactions often unfold continuously with the partner’s actions in direct sight (Dugatkin et al., 1992; van Doorn et al., 2014). In this “transparent” context, the timing of one’s own and the other’s actions becomes part of the strategy (Noe, 2006; McDonald et al., 2019; Unakafov et al., 2020). Moreover, coordinating based on mutual choice history might be more demanding than coordinating based on the immediately observable behaviour of others, especially for children and nonhuman species. For example, visual feedback about the partner’s choices improves coordination in the iterated Stag Hunt in humans, capuchins, and rhesus macaques (Brosnan et al., 2012), and such coordination in chimpanzees is facilitated if one of the agents consistently acts faster than the partner (Bullinger et al., 2011). Similarly, there are substantial differences in capuchins’ and rhesus monkeys’ behaviour in the Chicken game when they have access to the current choice of a partner (Brosnan et al., 2017; Ong et al., 2021). In humans, a real-time anti-coordination game revealed that action visibility and the ability to change an already initiated action increased efficiency and fairness (Hawkins and Goldstone, 2016).
To study dynamic value-based interactions in humans and rhesus macaques, we implemented a transparent, face-to-face version of the iterated Bach-or-Stravinsky (BoS) game (Moeller et al., 2023). In the BoS paradigm, each player has an individually preferred option, but coordinating on either option increases the reward for both players (Kilgour and Fraser, 1988). While any coordinated choice yields better rewards than non-coordinated choices, one coordinated choice results in greater rewards for the first player and the other for the second player. Thus, while the rational choice is to coordinate, BoS entails an inherent conflict about who profits the most. In our DIP implementation, the players used visually-guided manual reaches to a shared vertical workspace on a dual touchscreen to indicate their choice between the two targets representing the two options. We found that both species learned to use mutual action visibility for efficient coordination. Human dyads mainly adopted dynamic cooperative turn-taking to equalize the payoffs. All macaque dyads initially converged on a simpler, more static coordination driven by unilateral reward maximization or effort minimization. However, macaques paired with a turn-taking human confederate developed dynamic coordination, provided they were able to observe the confederate’s actions (Figure 4A). The incorporation of action timing into strategic behaviour was evident from the analysis of reaction times: the macaque subjects waited for the partner to commit to their non-preferred choice (human colour, Figure 4B, left panel), but this behaviour broke down when they could not observe the partner’s movements (Figure 4B, right panel). Remarkably, when such confederate-trained macaques were paired together, they exhibited dynamic turn-taking driven by temporal competition, unlike the prosocial turn-taking in humans. Underscoring the importance of sensorimotor dynamics, reaction time differences between the two players strongly predicted the joint choice on a trial-by-trial basis (Moeller et al., 2023): the monkey that was faster on a given trial led to its preferred option, and the slower monkey followed. This study demonstrates that dynamic coordination is not limited to humans, but that it can be subserved by different social attitudes and cognitive capacities. More generally, these experiments emphasize the importance of action visibility, within-trial dynamics, and the immediate sensorimotor context for studying the emergence and maintenance of naturalistic coordination and embodied decision-making, grounded in real-world constraints such as effort and movement biomechanics.
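To illustrate the kind of trial-by-trial analysis this enables, the following minimal sketch relates signed reaction time differences to the resulting joint choice with a logistic regression; the data are synthetic and all variable names are hypothetical, not part of the published analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic per-trial data: reaction times (s) of players A and B, and the
# joint outcome (1 = dyad converged on A's preferred target). The toy
# generative rule simply lets the faster player lead, as observed in the
# macaque dyads described above.
rng = np.random.default_rng(0)
rt_a = rng.normal(0.45, 0.08, size=200)
rt_b = rng.normal(0.50, 0.08, size=200)
joint_choice = (rt_a < rt_b).astype(int)

# Does the signed RT difference predict the joint choice trial-by-trial?
dt = (rt_b - rt_a).reshape(-1, 1)   # positive => A committed first
model = LogisticRegression().fit(dt, joint_choice)
print(f"slope: {model.coef_[0, 0]:.2f}, "
      f"in-sample accuracy: {model.score(dt, joint_choice):.2f}")
```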
Continuous strategic interactions
The recent emphasis on continuous interactions reflects a paradigm shift from discrete, trial-based choices between a limited set of fixed response options to the more naturalistic, dynamic behaviours that characterize most real-world scenarios (Gordon et al., 2021), such as collective foraging and hunting (McDonald et al., 2019; Yoo et al., 2021a,b; Pisauro et al., 2022). In contrast, the action component in many classical decision paradigms is trivial (e.g. a button press or a simple eye movement) and bears no consequence for the subsequent perceptual inputs, breaking the recurrent perception-action loop inherent in natural behaviours. Therefore, it is necessary to employ experimental paradigms that realize continuous, embodied, dynamic interactions under controlled conditions (Cisek and Green, 2024).
The transparent DIP is ideally suited to studying such interactions. The visually-guided reach coordination in the BoS game described above already exemplifies sensorimotor interactions where action timing and effort become integral parts of the strategy space (Moeller et al., 2023; McDonald et al., 2019). In each trial, however, the choice was limited to two options equidistant from the central starting position. To implement strategic interactions in a more variable, continuously evolving action space, the two interacting “agents” can be represented by virtual avatars (cursors) controlled by a joystick or a computer mouse on a shared 2D playing field. Spatial targets can be placed flexibly, and both players see their own and their partner’s cursors in real time, mimicking real-world scenarios where individuals must make rapid decisions based on the positions and actions of others. Crucially, face-to-face visibility ensures a salient social context despite the interaction via virtual agents. A degree of embodied realism is achieved through the continuous spatiotemporal trajectories of the agents and access to the face, hand, and body movements of the two players. At the same time, the participants do not move excessively, which facilitates physiological and neural recordings. In the following, we present two kinds of real-time dyadic foraging tasks to illustrate the richness of the co-evolving coordination and decision processes that can be studied in a time-resolved manner.
In a purely competitive foraging task, we tested adult human subjects to explore how the proximity of a competitor to a target influences their spatial choices. Participants engaged in a series of trials in which they had to quickly navigate to one of several targets to collect points. The placement of targets and the starting locations of the agents were varied to emulate various foraging scenarios. Four potential target locations were shown (Figure 5A). To prevent the players from occupying targets in advance, a no-go zone surrounded the targets, and the trial was aborted if either player entered this zone during the trial start phase; after 1-3 s, two randomly selected targets lit up as active targets, and players could collect them by hovering over them. The trial ended when both targets were collected, whether by one player or by both.
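To make the trial logic concrete, here is a minimal sketch of the state sequence described above; all names and parameters are illustrative simplifications (e.g., dwell-time requirements for collection are omitted), not the actual task code:

```python
import numpy as np

def run_trial(get_positions, targets, start_dur=2.0, no_go_radius=0.15,
              collect_radius=0.05, dt=0.01, timeout=10.0):
    """Sketch of one competitive-foraging trial.

    get_positions() -> (pos_a, pos_b): current 2D cursor positions;
    targets: (4, 2) array of potential target locations.
    """
    # Start state (1-3 s): abort if either cursor enters a target's no-go zone.
    for _ in range(int(start_dur / dt)):
        for pos in get_positions():
            if np.min(np.linalg.norm(targets - pos, axis=1)) < no_go_radius:
                return "aborted"
    # Two randomly selected targets light up as active.
    active = set(np.random.choice(len(targets), size=2, replace=False))
    collected = set()
    # Players collect active targets by hovering over them; the trial ends
    # when both targets are collected (by one player or by both).
    for _ in range(int(timeout / dt)):
        for pos in get_positions():
            for i in list(active - collected):
                if np.linalg.norm(targets[i] - pos) < collect_radius:
                    collected.add(i)
        if collected == active:
            return "completed"
    return "timed out"
```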
Subjects were paired with a confederate player in the first experiment and with another naive player in the second experiment. In the first experiment (Figure 5B, left), we tested the effect of the competitor’s proximity on the subjects’ choices by instructing the confederate to take a randomized position along the horizontal axis at the beginning of each trial. We computed the distance between each cursor and each active target. Qualitatively, the subjects chose the left target when the confederate’s cursor was closer to the right target than to the left target, and vice versa (Figure 5B, right). In the second experiment, we paired subjects to play against each other. All players placed their cursor initially in the middle of the screen and as close as possible to the no-go zone, maximizing their chance of winning the race to the active targets (cf. “space dilemma” in Pisauro et al. (2022)). Analysing the 2D trajectories, we could also identify trials in which one of the players did not take a straight path to the chosen target (Figure 5C). For example, the green player in Figure 5C, left, moved to their right but then made a sharp turn to collect the left target, and the purple player in Figure 5C, right, first headed to the close left target but then collected the far left target. Such trajectories demonstrate change-of-mind scenarios in which a player tracks the competitor and adjusts accordingly, revealing the complexity of decision-making in a transparent, competitive task.
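One way to quantify such change-of-mind trials is to track, at each time step, which target the instantaneous movement direction points towards and to flag trials where this "aimed-at" target switches. The sketch below is one possible heuristic, with hypothetical thresholds, not the analysis used for Figure 5C:

```python
import numpy as np

def aimed_target(traj, targets):
    """Index of the target best aligned with the instantaneous velocity
    (cosine similarity between velocity and cursor-to-target vectors).

    traj: (T, 2) cursor trajectory; targets: (n, 2) active target positions.
    """
    v = np.diff(traj, axis=0)                        # (T-1, 2) velocities
    to_t = targets[None, :, :] - traj[:-1, None, :]  # (T-1, n, 2)
    cos = np.einsum('td,tnd->tn', v, to_t)
    cos /= (np.linalg.norm(v, axis=1, keepdims=True)
            * np.linalg.norm(to_t, axis=2) + 1e-9)
    return np.argmax(cos, axis=1)

def is_change_of_mind(traj, targets, min_run=10):
    """Flag a trial if the aimed-at target switches after a sustained run
    of samples and the new aim is itself sustained (a simple heuristic)."""
    aims = aimed_target(traj, targets)
    switches = np.flatnonzero(np.diff(aims)) + 1
    return any(s >= min_run and len(aims) - s >= min_run for s in switches)
```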
To expand the study of continuous transparent interactions beyond zero-sum scenarios that focus on competition, we developed a dynamic dyadic foraging paradigm that enables the emergence of both cooperative and competitive strategies and preserves the continuity of social signals and actions across multiple interaction cycles (Lewen et al., 2025). In this Cooperation-Competition Foraging (CCF) game, across many continuous interaction cycles, dyads decide between collecting “joint targets” together or “single targets” alone, allowing us to elucidate the behavioural mechanisms that arbitrate between cooperative and competitive strategies. We found that most human dyads converged on their own specific ratio of collecting single versus joint targets, exhibiting dyad-specific stable strategies that nevertheless spanned the entire range from pure cooperation to pure competition. These results show the flexibility and richness of interactions that can emerge in well-balanced foraging games and demonstrate how incorporating sensorimotor variables such as movement speed, curvature, effort minimization, and skill differences shapes optimal strategies.
Perceptual decision-making in dyadic contexts
A perceptual decision, or judgment, is a process of converting sensory inputs into discrete categorical variables. Perceptual decisions are influenced by behavioural relevance (Treue, 2003), attentional deployment (Treue, 2001), stimulus/choice history (Witthoft et al., 2018), reward contingencies (Cicmil et al., 2015), and perceptual confidence (Kiani and Shadlen, 2009; Moreira et al., 2018). The main emphasis of perceptual decision studies is the judgment of ambiguous, noisy stimuli under conditions of perceptual uncertainty. Crucially, perceptual decisions are profoundly shaped by social influences (Bahrami et al., 2010; Bang and Frith, 2017; Baumgart et al., 2019; Takagaki and Krug, 2020; Pescetelli and Yeung, 2022). Most work on the interaction between individual perceptual choices and social information has focused on cooperative tasks, in which participants first make individual judgments and then exchange their opinions, often together with the associated confidence, before a joint decision is made. Under certain conditions, joint performance can exceed the best individual performance, depending on the perceptual similarity between the partners and the mode of social exchange (Bang and Frith, 2017; Wahn et al., 2018). However, social influences often adversely affect perceptual accuracy or lead to a disconnect between accuracy and confidence in perceptual choices. For instance, considering a partner’s choices can be motivated by a desire to be correct, especially when one has low confidence in one’s own judgment. On the other hand, social modulation might be driven by reasons unrelated to accuracy, such as social conformity. Therefore, one of the central questions in understanding flexible decision-making in social contexts is how to dissociate and quantify reliability-weighted, adaptive, informative influences (such as Bayes-optimal cue integration) from normative, conformity-driven biases (Toelch and Dolan, 2015; Mahmoodi et al., 2018, 2022).
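For reference, the Bayes-optimal integration benchmark mentioned above has a simple closed form under textbook Gaussian assumptions (a generic formulation, not the specific model of any study cited here):

```latex
% Reliability-weighted combination of own and partner's estimates,
% assuming independent Gaussian noise with known variances:
\hat{s}_{\mathrm{dyad}}
  = w_{\mathrm{self}}\,\hat{s}_{\mathrm{self}}
  + w_{\mathrm{partner}}\,\hat{s}_{\mathrm{partner}},
\qquad
w_{i} = \frac{1/\sigma_{i}^{2}}
             {1/\sigma_{\mathrm{self}}^{2} + 1/\sigma_{\mathrm{partner}}^{2}}
% The combined variance,
% \sigma_{\mathrm{dyad}}^{2}
%   = \left( 1/\sigma_{\mathrm{self}}^{2} + 1/\sigma_{\mathrm{partner}}^{2} \right)^{-1},
% is smaller than either individual variance, which is why a dyad that
% weights social information by reliability can outperform its better member.
```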

Competitive foraging task and example behaviour.
(A) Flow of the competitive foraging task. Left: two human players in the DIP, each using a joystick to collect targets, in the starting position. Middle: in each trial, two out of four targets are activated. Right: the active targets may be collected by the same subject or by different subjects. (B) Effect of initial proximity. Left: a confederate chose pseudorandom locations along the horizontal axis at the beginning of each trial. Right: the subject’s choice (left or right active target) as a function of their own and the confederate’s initial distances from the two active targets. To combine results across trials with various combinations of active targets, we labelled the two active targets as the left and right targets and pooled each cursor’s distances to the left and right targets across trials. (C) Dynamic decisions of two subjects during example trials, revealed by their cursors’ trajectories. Left: green player’s change of mind. Right: purple player’s change of mind.

Example of a perceptual decision-making paradigm on DIP, with corresponding behavioural data.
Left: Human subjects watched a 100% coherent random dot pattern (RDP) on both sides of the transparent OLED screen (DIP2). Using a joystick, they indicated whether the stimulus was moving leftward or rightward of the vertical midline. The stimulus direction changed instantaneously after pseudorandom time intervals. Right: Psychometric curves of two example subjects, measured on both sides of the DIP screen (dark/bright). Data points indicate the percentage of rightward-direction reports as a function of stimulus difficulty (deviation from the vertical direction) and direction (positive = rightward). The data are fitted with a bounded logistic function.
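For illustration, a fit of the kind shown in Figure 6 can be sketched as follows; the particular parameterization (a logistic with a symmetric lapse rate) and the toy data are assumptions, and the exact function used for the figure may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def bounded_logistic(x, x0, k, lam):
    """Logistic psychometric function bounded away from 0 and 1 by a
    symmetric lapse rate `lam` (one common parameterization)."""
    return lam + (1 - 2 * lam) / (1 + np.exp(-k * (x - x0)))

# x: signed deviation from vertical (deg); y: fraction of 'rightward' reports.
x = np.array([-12, -8, -4, -2, 2, 4, 8, 12], dtype=float)
y = np.array([0.05, 0.10, 0.25, 0.40, 0.62, 0.78, 0.92, 0.96])

popt, _ = curve_fit(bounded_logistic, x, y, p0=[0.0, 0.5, 0.02],
                    bounds=([-10, 0, 0], [10, 10, 0.5]))
print("PSE = %.2f deg, slope = %.2f, lapse = %.3f" % tuple(popt))
```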
We have recently developed a powerful new approach to studying social information integration using a continuous perceptual report (CPR) paradigm (Schneider et al., 2024). Its primary advantage is the simultaneous tracking of both perceptual accuracy and confidence in real time, in individual and social contexts. This paradigm does not separate perceptual decision-making and the social exchange of choices and associated confidence into distinct, imposed stages of the task. Instead, participants continuously track and indicate the perceived direction of a noisy random dot pattern, together with their confidence in their percept. In the dyadic condition, both partners’ ongoing reports and occasional feedback are added to the visual task display, so that each participant can see where their partner thinks the stimulus is moving, how certain they are, and how successful they are. With this approach, we derived a nuanced view of the relationship between individual expertise and dyadic effects, demonstrating how the bidirectional modulation by social information lawfully depends on the solo performance differences between dyadic partners.
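In such a paradigm, time-resolved accuracy and confidence can be read out directly from the continuous report. A minimal sketch, assuming the report is a joystick angle plus eccentricity and treating eccentricity as the confidence report (one plausible operationalization, used here purely for illustration):

```python
import numpy as np

def cpr_metrics(true_dir, report_dir, report_ecc):
    """Time-resolved accuracy and confidence from a continuous report.

    true_dir, report_dir: angles (radians) over time;
    report_ecc: joystick eccentricity in [0, 1], read as confidence.
    """
    err = np.angle(np.exp(1j * (report_dir - true_dir)))  # wrap to [-pi, pi]
    accuracy = np.cos(err)      # 1 = perfectly aligned, -1 = opposite
    confidence = report_ecc
    return accuracy, confidence
```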
The CPR experiments described above have so far been conducted with each dyadic player in a separate booth, using joystick-controlled cursors incorporated into a shared stimulus display to continuously indicate the responses of both participants. A highly promising next step is to utilize the transparent DIP to exploit the immediacy of direct face-to-face interaction, as well as additional social cues such as facial and postural signals. Our pilot data demonstrate that continuous perceptual reports of human participants can be reliably measured on both sides of the transparent DIP (Figure 6). The immediate visual access would also allow more embodied response modalities, such as hand gestures, capturing the bidirectional sensorimotor link between perception and action. Thus, by tightly controlling the shared perceptual evidence and closely monitoring individual responses, while at the same time embedding the interaction in a naturalistic social context, the behavioural and neural mechanisms of social influences on sensory processing in the individual brain can be elucidated.
Attention and social learning
The developmental psychology literature has long emphasised the role of the input provided in caregiver-child interactions in driving learning. Indeed, development is contingent on the quality of such social interactions, in which caregivers, on the one hand, guide their children in terms of what aspects of the environment to attend to (Csibra and Gergely, 2009), and children, on the other hand, actively engage in these learning situations by choosing what, when, and from whom they want to learn (Mani and Ackermann, 2018; Ruggeri et al., 2019; Smith et al., 2018; Pelz and Kidd, 2020). Consequently, the information children engage with, attend to, and learn from depends on their ability to observe and navigate not just their own actions but also those of others. There is, therefore, a critical need to investigate learning in the context of the social interactions from which children learn.
The DIP provides a unique opportunity to study how children explore the world around them and how such exploration drives selective attention and learning in social interactions (Bothe et al., 2024). While previous studies highlight the factors (e.g., uncertainty, novelty) that drive exploration and learning in isolated contexts, more recent work suggests that exploration and attention to, e.g., faces or objects in more social contexts differ dramatically from isolated contexts. For instance, adults spend only around a fifth of the time looking at people’s faces in natural settings, e.g., when walking around a university campus, relative to tasks in which participants are presented with faces on a screen (Foulsham et al., 2011; see Risko et al., 2016, for a review). Similarly, while the developmental literature has touted the importance of gaze following in early infancy, recent tasks examining natural caregiver-child interactions found that children rarely follow the gaze of their caregivers and tend to be more egocentric in their interactions with others (Madhavan et al., 2025; Yu and Smith, 2013).
While such studies offer valuable insights into the dynamics of social interactions, they offer little possibility of controlling the environment presented to participants, especially regarding the timing and presentation of visual and auditory stimuli related to participants’ attention to the world around them. In contrast, the DIP allows researchers to continuously monitor (i) children’s attention to and exploration of varying visual and auditory stimuli presented on the screen during interaction with others, (ii) the extent to which one partner’s attention to an object on the screen influences the other’s exploration of and sustained attention towards the same object, and (iii) children’s learning of information provided in such social contexts. Moreover, the transparent and dynamic nature of the DIP facilitates the study of individual gaze patterns within social contexts and across development. For example, we integrated mobile eyetracking with the DIP (here, DIPc) in children between 4 and 5 years of age, enabling the assessment of the timing and accuracy of children’s visual attention to stimuli on the screen or to their interaction partner behind the screen, as well as the real-time modulation of their visual attention in response to visual stimuli on the screen or auditory stimuli presented via a loudspeaker. Preliminary data from such tasks show that children do fixate on their social partners’ faces during the task but spend less than a fifth of the time attending to their partner’s face relative to the objects on the screen (see Figure 7). This mirrors findings with adults walking around a university campus (Foulsham et al., 2011). The parallel between findings with adults in natural settings and children in the DIP speaks to the similar availability of, and access to, the social partner’s face across the two settings.
Furthermore, the DIPc also records when participants interact with particular objects on the screen by tapping on them. In other work, we allowed children and their social partners to tap on objects of their choice on the screen. We tracked children’s fixations and sustained attention to these objects as a function of whether they or their partner had tapped on them. We found that children fixated objects before tapping on them, highlighting the timeline of their decision to tap. They also fixated objects that their partner tapped on before the partner actually did so, likely cued by the partner’s hand and arm movements and gaze towards the to-be-tapped object (see Figure 7, where children fixate the tapped object more than other objects even before the object is tapped, as indicated by the vertical line, regardless of whether the child or their social partner tapped on it). This observation speaks to the accessibility of the manual actions, and consequently the intentions, of social partners in such paradigms.
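A minimal sketch of the corresponding tap-aligned analysis, averaging the proportion of gaze samples on the to-be-tapped object across trials (as in Figure 7B); the array names and the area-of-interest preprocessing are hypothetical:

```python
import numpy as np

def tap_aligned_looking(gaze_aoi, tap_idx, window=(-120, 120)):
    """Mean proportion of gaze on the to-be-tapped object, aligned to the tap.

    gaze_aoi: list of 1D boolean arrays (one per trial), True where gaze
    falls on the chosen object; tap_idx: sample index of the tap per trial.
    """
    lo, hi = window
    aligned = np.full((len(gaze_aoi), hi - lo), np.nan)
    for t, (aoi, tap) in enumerate(zip(gaze_aoi, tap_idx)):
        src = np.arange(tap + lo, tap + hi)
        valid = (src >= 0) & (src < len(aoi))
        aligned[t, valid] = aoi[src[valid]]
    return np.nanmean(aligned, axis=0)  # time course around the tap
```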

Eyetracking data collected using the DIPc.
(A) The proportion of time children spent looking at their social partner’s face relative to the images on the screen across the trial. The lines depict the mean, and the shaded areas the SE, across the trial. The vertical line indicates the point at which children or their partner tapped on one of the images on the screen. (B) Children’s proportion of looking at the image that was chosen, i.e., tapped on, across trials, depending on whether they or their social partner chose the image. The vertical line indicates the point at which children or their partner tapped on one of the images on the screen. Across trials in which they or their social partner tapped on an image, children fixated this image before it was chosen, showcasing the extent to which children were able to pre-empt their partner’s choice, given the transparency of the DIPc setup.
In summary, the DIP allows precise experimental control in more naturalistic social settings, where children have continuous access to a range of cues from the social partner’s face and manual actions, while researchers similarly continuously monitor whom and what children are attending to and learning from. Embedding future studies in platforms like the DIP, we believe, has real promise of transforming our understanding of early sociocognitive development.
Discussion
We are social beings, and our interactions with the world present a constantly changing, dynamic exchange of social cues, facial expressions, actions, and language. Understanding how behaviour and cognition play out in such rich settings requires capturing the complexity of real interactions in our experimental paradigms. At the same time, to examine complex social interactions with the degree of detail available in research to date, there is a need to integrate experimental control into social settings and to enable researchers to collect a range of behavioural and neurophysiological indices of cognitive processing in natural social interactions.
Advantages of the Dyadic Interaction Platform
To meet this need, we developed the Dyadic Interaction Platform (DIP) - an innovative experimental environment that allows researchers to study interactions between two human or non-human primate participants. The platform features a shared transparent workspace that both participants can manipulate, enabling real-time, interactive engagement. Participants face each other, separated by a transparent touchscreen where visual stimuli can be presented with high temporal precision within the view of a social partner so that participants can easily attend to both sources of information. The DIP offers several key advantages over previous dyadic setups: (i) precise experimental control over stimuli and interactions, (ii) salient face-to-face engagement and social gaze monitoring, (iii) continuous access to the partner’s actions, decisions and behaviour, and (iv) joint manipulation of a shared workspace for cooperative or competitive tasks. The platform, therefore, presents a significant step forward in studying social behaviour in more naturalistic yet controlled settings (Fan et al., 2021), and enables engaging game-like paradigms that are increasingly utilized to study cognition (Allen et al., 2024).
We presented four different instantiations of the platform, which integrate a variety of recording devices to capture rich, multi-dimensional behavioural, physiological, and neural data simultaneously from two participants. The instantiations vary in terms of the visual displays and the kinds of behavioural responses that can be recorded (touchscreens, computer mice, joysticks, head-mounted or wall-mounted eyetrackers, video and audio recordings), and allow a diverse range of participant groups to be examined (human adults and children, nonhuman primates, and mixed-species dyads). Some instantiations also record intracranial neural signals, EEG, and EMG data synced to the behavioural devices listed above. The four instantiations showcase the remarkable flexibility in design choices for future studies, which can be tailored to the particular requirements of the study population, research question, and available resources. For instance, while studies with younger, less dexterous participants may rely more on touchscreens than on mice or joysticks, such studies may be constrained in duration given the motor costs involved in touchscreen responses. For older study participants, joysticks or mice may provide a more suitable and easy-to-use interface while allowing participants to continuously follow each other’s behaviour. Along the same lines, research questions that depend on realistic movement costs might require real arm reaches to a touchscreen (or a spring-loaded joystick), whereas studying effects of agency or sensorimotor adaptation necessitates a dissociation between the real movement and its sensory consequences, achievable via a mouse or joystick interface. Similarly, researchers can choose between wearable or mounted eyetrackers, depending on the degree of precision, flexibility, analysis effort, and participant comfort desired while monitoring participants’ eye movements during the task. Finally, we highlight recent advancements in machine learning that we have integrated into the preprocessing pipeline to make the analysis of such complex datasets more efficient, especially regarding face and object detection and posture estimation during different tasks.
To illustrate the capabilities of the DIP, we also presented example data collected from different DIP instantiations. The breadth of paradigms and research questions outlined above highlights the range of possibilities open to researchers interested in social cognition - from social learning in young human children to dynamic cooperation and competitive turn-taking in macaque dyads. In what follows, we briefly list the key insights into dynamic dyadic behaviour from each paradigm that would not have been possible without the features of the DIP highlighted above.
First, we examined how dyads coordinate their decisions in the transparent version of the classical Bach-or-Stravinsky economic game. In this game, both players receive larger rewards for converging on one choice, with the partner whose option they converged on benefiting more. Since players could observe each other in this task, they quickly learned to coordinate their choices based on their partner’s action cues. Thus, players could dynamically integrate their partners’ actions into their own choices, with macaques converging on a choice based on which of the two monkeys was the first to reach for its preferred option. In a second paradigm, we examined how participants collected points in a competitive foraging task based on the proximity of their competitor to one of several possible targets. We showed that participants could use their knowledge of their competitor’s position to navigate to targets further away from the competitor, once again demonstrating how participants can integrate a variety of cues online in their social decision-making. Indeed, this study showed how participants changed their decisions online, based on information about their competitor’s path across the screen. Taken together, the results showcase the advantages of including transparency and dynamicity in the platform, where continuous access to the actions and decisions of their partners helped players optimize their strategies (cf. Moeller et al., 2023; Lewen et al., 2025). Fine-grained analysis of reaction times and movement trajectories recorded continuously throughout the task helps to elucidate the underlying dynamics of cognitive processes and mutual dependencies, rather than only discrete decision endpoints (Ferrari-Toniolo et al., 2019; Hawkins and Goldstone, 2016; McDonald et al., 2019; Yoo et al., 2021a,b, 2020).
We also proposed extending the platform to incorporate a truly transparent, face-to-face version of the novel continuous perceptual report paradigm (Schneider et al., 2024), in which participants track and indicate how confident they are of the perceived direction of a noisy sensory pattern. We demonstrated that the DIP makes it possible to present and measure continuous perceptual reports of each participant on both sides of the transparent display. This will be an important step forward in terms of experimental design, especially given previous findings that continuous access to information about others’ choices in decision tasks influences individual decisions differently from when such information is presented serially (Pescetelli and Yeung, 2020, 2022). We, therefore, see enormous potential in such an extension to examine how social information can influence even the very early stages of sensory processing. Indeed, our study of young children’s allocation of attention to their social partners and objects in the shared visual environment targets precisely this question. We found - first - that children fixate on their social partner’s face during the task, albeit less than previously assumed in studies presenting children with static, unidirectional face stimuli (Madhavan et al., 2025). At the same time, we found that continuous access to the actions and behaviour of their social partner meant that children could preempt their partner’s focus of attention - by following their gaze and hand movements - and fixate on the object that their partner would subsequently tap on, even before the partner tapped on this object. Taken together, findings such as these highlight (a) the advantages to be gained from examining behaviour in transparent settings and (b) the potential pitfalls when drawing conclusions about, e.g., children’s prioritised attention to faces from studies using unidirectional stimuli (see also Foulsham et al. (2011) for similar findings with adults).
One key advantage of the DIP’s mutual face-to-face and action visibility is its suitability for comparative research between humans and nonhuman primates. Like many natural interactions (Dugatkin et al., 1992; Noe, 2006; van Doorn et al., 2014), most DIP-based paradigms rely, at least in part, on real-time observation of a partner’s actions and outcomes, allowing for direct social interaction without requiring abstract representations of the partner’s presence or intentions, or complex inferences based on past experiences. This is particularly important for studying nonhuman primates, which may underperform in tasks that require predicting actions and outcomes based solely on interaction history or abstract cues (Brosnan et al., 2010, 2012, 2017; Ong et al., 2021; Formaux et al., 2022). By enabling immediate, visually grounded social exchanges, the DIP provides a more ecologically valid and accessible framework for investigating shared and divergent mechanisms of social cognition and coordination across species.
Limitations and outlook
The different versions of the DIP demonstrate the flexibility of the platform in terms of experimental designs and the complexity of data that can be acquired, and the results document how behaviour is shaped by the social context in which tasks are presented. Our findings stress the need to examine behaviour and cognition in controlled social settings. At the same time, we acknowledge the limitations of the platform as it is currently conceptualised and implemented, which we discuss in detail next.
Perhaps the first criticism that can be levelled at the platform concerns the ecological validity of the experimental designs and social settings outlined above and possible within the platform. Indeed, unless we are on the command deck of a futuristic spaceship, real social settings do not usually include floating transparent screens on which information is presented to both participants simultaneously. To what extent can we assume that the DIP captures processing in natural social settings? Real-world situations often present individuals with objects in their environment that their social partner can choose to attend to or not. Consider two individuals reaching for a door handle at the same time: each needs to consider the intentions and actions of the other. The DIP emulates such situations, presenting participants with a very similar context to the one described above, with the bonus that researchers can manipulate the timing and contingencies of the setting. Indeed, some of our past studies have observed young infants and their caregivers interacting with one another in a play setting (Madhavan et al., 2025). While studies in real play settings may afford higher ecological validity than the paradigms described above, they are limited in terms of the timing and presentation of stimuli, e.g., the number of objects that can be presented or when during the task such objects appear. We see such experimental control in social settings as the primary advantage of the DIP, while acknowledging that the platform’s ecological validity remains limited relative to completely natural, free-flowing social interactions.
The DIP and modern augmented reality approaches are similar in that both allow overlaid digital interaction while maintaining visual contact with the environment and the interaction partner(s). In comparison to augmented reality environments, the biggest limitation of the DIP is the non-expandability of its digital space: unlike augmented reality devices, which can display digital content in three dimensions overlaid on the physical environment, the DIP is limited to the two dimensions of the vertical screen plane. At the same time, augmented reality requires attaching additional devices to the participant’s head, while the DIP eliminates the need for bulky goggles that affect head movement or face visibility. This advantage, combined with the fact that the digital space provided by the DIP is inherently synchronised between participants, makes the DIP much more accessible.
Close face-to-face real-time interactions, such as reaching for and manipulating the same targets on the shared touchscreen (and almost touching the partner’s hand), create a highly salient social context. If the experimental paradigm is configured such that facial signals and other subtle postural cues are relevant, the advantage of seeing both the task stimuli and the interaction partner in the same line of sight is undeniable. However, in tasks that use more abstract, mouse- or joystick-driven cursors or avatars, where the actual interaction takes place on a “virtual plane” on the screen, the relevance of face visibility becomes less clear. Indeed, our experience suggests that in such tasks, virtual avatars are quickly imbued with agency, and when the task load is high, socio-emotional signals of the actual partner receive less attention. Nevertheless, we argue that the physical presence of the actual interactive partner on the other side provides a highly salient social context, even when most of the task-related interaction takes place on a virtual plane. Future studies leveraging advanced video or EMG analysis will further clarify the role of facial socio-emotional cues in such settings.
We also note that the transparent display offers the advantage of immediate face-to-face and action visibility but comes with some inherent limitations. Independent of the specific implementation or technology used, the visibility (contrast) of the stimuli depends on the background. Although this is also true for non-transparent modes of presentation, control over the background of a transparent display is more limited, because typically at least part of the background will be the other participant’s face, arms, and clothes. Thus, the backdrop, as well as the illumination of the room, may become an important experimental parameter that can influence replicability. Furthermore, given the high transmissivity of the projection film, a second image can be formed behind the screen, either on the opposing participant or on the floor. These images are usually dim, distorted, and blurry, but can be distracting to participants. We note above several workarounds to these issues; e.g., a patterned carpet can reduce contrast on the floor and make the reflected images less distracting. We also see high future potential in glass that can switch between transparent and opaque states, allowing researchers to flexibly manipulate transmissivity as required by their paradigms. Such switchable glass could also be used to move flexibly between independent and joint stimulus presentation, or to create asymmetries in the information available to the two participants, opening up exciting possibilities for future research.
The DIP has the potential to elucidate the impact of individual differences on social interaction behaviours. Psychometric personality measures and additional cognitive tests can be used to predict indices of accumulated interaction behaviours, which in our initial experience show substantial reliability. Due to the continuous stream of behaviour and the perceptibility of the interaction partner, the DIP might have an advantage over other laboratory paradigms for studying social behaviour, such as turn-based economic games, which show only weak relationships with personality traits (Zhao and Smillie, 2015). To decompose dyadic behaviours in the DIP into actor, partner, and relationship effects, groups of participants can be tested in a round-robin design, which can then be analysed via Social Relations Modelling (Back et al., 2023). Hormonal measures taken in the context of a DIP task can also be used to explain individual differences in interaction behaviours. For example, depending on the strategy of the interaction partner, the competitive foraging task can be expected to trigger reactive increases in hormones like testosterone, which responds to social contests, or cortisol, which responds to stressful challenges. These endocrine responses are accessible non-invasively via saliva sampling (Botzet et al., 2024).
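To illustrate the round-robin logic, a naive descriptive decomposition of a round-robin score matrix into actor, partner, and relationship components could look as follows; full Social Relations Modelling relies on dedicated variance-component estimators (see Back et al., 2023), so this is a sketch only:

```python
import numpy as np

def srm_effects(scores):
    """Descriptive actor/partner/relationship decomposition.

    scores[i, j]: behaviour of participant i towards participant j
    (diagonal set to NaN, since self-directed behaviour is undefined).
    """
    grand = np.nanmean(scores)
    actor = np.nanmean(scores, axis=1) - grand    # row (actor) effects
    partner = np.nanmean(scores, axis=0) - grand  # column (partner) effects
    relationship = scores - grand - actor[:, None] - partner[None, :]
    return actor, partner, relationship
```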
The continuous, dynamic interactions afforded by the DIP result in rich and heterogeneous data. This is a huge advantage, but also a challenge compared to classical paradigms, in which only one or a few bits of information (e.g. a simple button press corresponding to a discrete choice) are acquired per trial. Instead of relying on the simple and precise timing of stimuli or events, to which neural data such as ERPs or neuronal firing can be time-locked, the researcher now needs to extract the time points or periods of interest from continuous, often non-stationary data, and to classify spontaneously emerging interactions into meaningful classes. Typically, there is no unique solution, and one needs to find a suitable compromise between generality and specificity in defining these time points or periods. Here, Bayesian inference can be especially informative, because it allows defining a precise model of the feature one is searching for (e.g., a change of movement direction, or a slowing down that indicates decision uncertainty). Bayesian inference can then, even with relatively sparse data, provide not only a single maximum-likelihood estimate but the full posterior probability with its uncertainty. Depending on the analysis, one can then concentrate on the “clear” time points or periods, or accept larger uncertainty in the classification. Complementarily, machine learning approaches could be adapted to parse the rich behaviour into classes. Overall, these approaches allow full use of the rich data provided by the continuous recording of dyadic, mutually coupled behaviour in a shared environment.
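As a minimal illustration of this idea, the sketch below computes the full posterior over a single change point in a movement-heading trace, under deliberately simple assumptions (known Gaussian noise, a flat prior over change points, and segment means plugged in rather than marginalized); an actual analysis would likely use richer models:

```python
import numpy as np
from scipy.stats import norm

def changepoint_posterior(heading, sigma=5.0):
    """Posterior over a single change point in a 1D heading trace (deg)."""
    n = len(heading)
    log_post = np.full(n, -np.inf)
    for tau in range(2, n - 1):  # candidate change points
        m1, m2 = heading[:tau].mean(), heading[tau:].mean()
        log_post[tau] = (norm.logpdf(heading[:tau], m1, sigma).sum()
                         + norm.logpdf(heading[tau:], m2, sigma).sum())
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()  # full posterior, not just the argmax
```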
One promising approach to analysing hyperscanning data from a DIP setup is to measure informational alignment rather than direct brain-to-brain synchrony. Informational alignment estimates the similarity of perceptual and cognitive representations and can be detected using “inter-brain representational similarity analysis” (IRSA; Varlet and Grootswagers (2024)). In this approach, time-frequency profiles evoked by same and different objects are submitted to an RSA using data from different participants. Compared with inter-brain synchrony (IBS), such inter-brain RSA detected significantly more same- versus different-object effects, in both amplitude and phase, providing first evidence that IRSA is more sensitive to informational alignment between hyperscanned participants.
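To make the logic concrete, a minimal sketch of an inter-brain RSA: build each participant's representational dissimilarity matrix (RDM) over stimuli from, e.g., time-frequency feature vectors, then correlate the two RDMs across brains. The feature extraction and the distance/correlation measures are assumptions; Varlet and Grootswagers (2024) describe the actual method:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def inter_brain_rsa(feat_a, feat_b):
    """Correlate two participants' RDMs over the same set of stimuli.

    feat_a, feat_b: (n_stimuli, n_features) arrays, one row per object.
    """
    rdm_a = pdist(feat_a, metric='correlation')  # condensed upper triangle
    rdm_b = pdist(feat_b, metric='correlation')
    rho, p = spearmanr(rdm_a, rdm_b)
    return rho, p
```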
Finally, while the DIP has been designed and used primarily for studying dyadic interactions, it is not inherently limited to two-way interactions. The setup allows for the inclusion of multiple participants on each side of the screen or the integration of observers, making it a viable platform for investigating group interactions in future research.
Conclusion
We present a novel dyadic interaction platform that allows researchers to study naturalistic, dynamic social interactions in different subject populations and diverse tasks while collecting a rich, multi-dimensional array of behavioural, physiological, and neural data. The DIP’s exceptional versatility comes from integrating different stimulus presentation and recording devices, thereby allowing researchers to flexibly tailor the platform to their research question and to the measures of cognitive processing they are particularly interested in. We include several example tasks that document both the transparency and flexibility of the DIP, as well as the more fine-grained understanding of the dynamics of social behaviour gained by providing individuals with continuous access to the decisions, social signals, and actions of their partners. Indeed, the examples outlined here document the real need to study rich, dynamic, and complex settings, showcasing how performance is shaped at the millisecond level when behaviour is embedded in such settings. We eagerly look forward to the advances in our understanding of primate social behaviour that such platforms can provide.
Availability of data, materials, and code
The BoS dataset described in the section on Transparent economic games, together with links to the GitHub code repositories, is available at the public OSF repository https://osf.io/f5u8z/.
The data and code related to the Competitive Foraging task described in the section on Continuous strategic interactions will be available at the public OSF repository https://osf.io/8r6e2/.
The data and code related to the Cooperation-Competition Foraging study described in the section on Continuous strategic interactions will be available at the public OSF repository https://osf.io/56hw7.
The data and code related to the CPR DIP dataset described in the section on Perceptual decision-making in dyadic contexts will be available at the public OSF repository https://osf.io/8r6e2/.
The dataset and code described in the section on Attention and social learning are available at the public OSF repository https://osf.io/6dven/.
Acknowledgements
We thank Dr. Chris Schloegl for efficient scientific coordination of the Leibniz ScienceCampus Primate Cognition and the Collaborative Research Center SFB 1528 “Cognition of Interaction”. We also thank Dr. Aleksandra Bovt, scientific coordinator of the RTG 2906 “Curiosity”, and the members of the “Cognition of Interaction” and “Curiosity” consortia for stimulating discussions.
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project ID 454648639: SFB 1528 (Collaborative Research Center) “Cognition of Interaction” and Project ID 50280717: RTG 2906 “Curiosity”, the Leibniz ScienceCampus Primate Cognition, and Leibniz Collaborative Excellence grant K265/2019 “Neurophysiological mechanisms of primate interactions in dynamic sensorimotor settings”.
Additional information
Author contributions
Sebastian Isbaner: Conceptualization (DIP), Conceptualization, Methodology, Software, Formal analysis, Investigation, Data Curation, Writing—original draft, Writing—review & editing, Visualisation. Raymundo Baez-Mendoza: Writing—original draft, Writing—review & editing. Ricarda Bothe: Methodology, Formal analysis, Investigation, Data Curation, Writing—original draft, Writing—review & editing, Visualisation. Sarah Eiteljörge: Writing—original draft, Writing—review & editing. Anna Fischer: Methodology, Writing—original draft, Writing—review & editing. Alexander Gail: Conceptualization (DIP), Methodology, Writing—original draft, Writing—review & editing, Supervision of individual projects, Funding acquisition. Jan Gläscher: Writing—original draft, Writing—review & editing, Funding acquisition. Hannah Lüschen: Methodology, Investigation. Sebastian Moeller: Conceptualization (DIP), Methodology, Software, Formal analysis, Investigation, Writing—original draft, Writing—review & editing, Visualisation. Lars Penke: Writing—original draft, Writing—review & editing, Supervision of individual projects, Funding acquisition. Viola Priesemann: Methodology, Writing—original draft, Writing—review & editing, Supervision of individual projects, Funding acquisition. Johannes Ruß: Writing—original draft, Writing—review & editing. Anne Schacht: Methodology, Supervision of individual projects, Funding acquisition. Felix Schneider: Methodology, Software, Formal analysis, Investigation, Writing—original draft, Writing—review & editing, Visualisation. Neda Shahidi: Methodology, Software, Formal analysis, Investigation, Writing—original draft, Writing—review & editing, Visualisation, Supervision of individual projects. Stefan Treue: Conceptualization (DIP), Methodology, Writing—review & editing, Supervision of individual projects, Funding acquisition. Michael Wibral: Writing—original draft, Writing—review & editing, Funding acquisition. Annika Ziereis: Methodology, Writing—original draft, Writing—review & editing. Julia Fischer: Writing—original draft, Writing—review & editing, Funding acquisition. Igor Kagan: Conceptualization (DIP), Conceptualization, Methodology, Formal analysis, Data Curation, Writing—original draft, Writing—review & editing, Visualisation, Supervision of individual projects, Supervision, Project administration, Funding acquisition. Nivedita Mani: Conceptualization (DIP), Conceptualization, Methodology, Formal analysis, Writing—original draft, Writing—review & editing, Visualisation, Supervision of individual projects, Supervision, Project administration, Funding acquisition.
References
- Simultaneous recording of EEG and facial muscle reactions during spontaneous emotional mimicry. Neuropsychologia 46:1104–1113. https://doi.org/10.1016/j.neuropsychologia.2007.10.019
- Using games to understand the mind. Nature Human Behaviour 8:1035–1043. https://doi.org/10.1038/s41562-024-01878-9
- Modulation of value representation by social context in the primate orbitofrontal cortex. Proceedings of the National Academy of Sciences 109:2126–2131. https://doi.org/10.1073/pnas.1111715109
- The computational and neural substrates of moral strategies in social decision-making. Nature Communications 10:1483. https://doi.org/10.1038/s41467-019-09161-6
- Personality and social relationships: What do we know and where do we go? Personality Science 4:e7505. https://doi.org/10.5964/ps.7505
- Optimally Interacting Minds. Science 329:1081–1085. https://doi.org/10.1126/science.1185718
- Rudimentary empathy in macaques’ social decision-making. Proceedings of the National Academy of Sciences:201504454. https://doi.org/10.1073/pnas.1504454112
- Facial emotion recognition through artificial intelligence. Frontiers in Computer Science 6. https://doi.org/10.3389/fcomp.2024.1359471
- Confidence matching in group decision-making. Nature Human Behaviour 1:0117. https://doi.org/10.1038/s41562-017-0117
- Making better decisions in groups. Royal Society Open Science 4:170193. https://doi.org/10.1098/rsos.170193
- Multiple Levels of Representation of Reaching in the Parieto-frontal Network. Cerebral Cortex 13:1009–1022. https://doi.org/10.1093/cercor/13.10.1009
- Neurophysiological correlates of collective perceptual decision-making. European Journal of Neuroscience. https://doi.org/10.1111/ejn.14545
- Physiological synchrony is associated with cooperative success in real-life interactions. Scientific Reports 10:19609. https://doi.org/10.1038/s41598-020-76539-8
- Human Social Attention. Annals of the New York Academy of Sciences 1156:118–140. https://doi.org/10.1111/j.1749-6632.2009.04468.x
- Children from 17 communities process gaze in similar ways: A universal of human social cognition. OSF. https://doi.org/10.31234/osf.io/z3ahv
- Temporal dynamics of costly avoidance in naturalistic fears: Evidence for sequential-sampling of fear and reward information. Journal of Anxiety Disorders 103:102844. https://doi.org/10.1016/j.janxdis.2024.102844
- Little scientists & social apprentices: Active word learning in dynamic social contexts using a transparent dyadic interaction platform. https://doi.org/10.31234/osf.io/fx2bg
- Moving magnetoencephalography towards real-world applications with a wearable system. Nature 555:657–661
- Behavioural endocrinology in the social sciences. Kölner Zeitschrift für Soziologie und Sozialpsychologie 76:649–680. https://doi.org/10.1007/s11577-024-00945-3
- Magnetoencephalography with optically pumped magnetometers (OPM-MEG): the next generation of functional neuroimaging. Trends in Neurosciences 45:621–634. https://doi.org/10.1016/j.tins.2022.05.008
- Human and monkey responses in a symmetric game of conflict with asymmetric equilibria. Journal of Economic Behavior & Organization 142:293–306. https://doi.org/10.1016/j.jebo.2017.07.037
- The interplay of cognition and cooperation. Philosophical Transactions of the Royal Society B: Biological Sciences 365:2699–2710. https://doi.org/10.1098/rstb.2010.0154
- Old World monkeys are more similar to humans than New World monkeys when playing a coordination game. Proceedings of the Royal Society B: Biological Sciences 279:1522–1530. https://doi.org/10.1098/rspb.2011.1781
- Communication with Surprise - Computational and Neural Mechanisms for Non-Verbal Human Interactions. bioRxiv. https://doi.org/10.1101/2024.02.20.581193
- Expectation violations signal goals in novel human communication. Nature Communications 16:1989. https://doi.org/10.1038/s41467-025-57025-z
- Coordination of chimpanzees (Pan troglodytes) in a stag hunt game. International Journal of Primatology 32:1296–1310
- Activity of striatal neurons reflects social action and own reward. Proceedings of the National Academy of Sciences 110:16634–16639. https://doi.org/10.1073/pnas.1211342110
- Performance error-related activity in monkey striatum during social interactions. Scientific Reports 6:37199. https://doi.org/10.1038/srep37199
- Neuronal Circuits for Social Decision-Making and Their Clinical Implications. Frontiers in Neuroscience 15. https://doi.org/10.3389/fnins.2021.720294
- The representations of reach endpoints in posterior parietal cortex depend on which hand does the reaching. Journal of Neurophysiology 107:2352–2365. https://doi.org/10.1152/jn.00852.2011
- An Emerging Field of Primate Social Neurophysiology: Current Developments. eNeuro 4:ENEURO.0295-17.2017. https://doi.org/10.1523/ENEURO.0295-17.2017
- Neural mechanisms of social decision-making in the primate amygdala. Proceedings of the National Academy of Sciences 112:16012–16017. https://doi.org/10.1073/pnas.1514761112
- Neuronal reference frames for social decisions in primate frontal cortex. Nature Neuroscience 16:243–250. https://doi.org/10.1038/nn.3287
- Reward modulates the effect of visual cortical microstimulation on perceptual decisions. eLife 4:e07832. https://doi.org/10.7554/eLife.07832
- Coding of Self and Other’s Future Choices in Dorsal Premotor Cortex during Social Interaction. Cell Reports 24:1679–1686. https://doi.org/10.1016/j.celrep.2018.07.030
- Toward a neuroscience of natural behavior. Current Opinion in Neurobiology 86:102859. https://doi.org/10.1016/j.conb.2024.102859
- Infants’ Attention to Patterned Stimuli: Developmental Change From 3 to 12 Months of Age. Child Development 77:680–695. https://doi.org/10.1111/j.1467-8624.2006.00897.x
- Natural pedagogy. Trends in Cognitive Sciences 13:148–153. https://doi.org/10.1016/j.tics.2009.01.005
- Hyperscanning: A Valid Method to Study Neural Inter-brain Underpinnings of Social Interaction. Frontiers in Human Neuroscience 14. https://doi.org/10.3389/fnhum.2020.00039
- Specialized medial prefrontal-amygdala coordination in other-regarding decision preference. Nature Neuroscience 23:565–574. https://doi.org/10.1038/s41593-020-0593-y
- Widespread implementations of interactive social gaze neurons in the primate prefrontal-amygdala networks. Neuron 110:2183–2197. https://doi.org/10.1016/j.neuron.2022.04.013
- Specialized Networks for Social Cognition in the Primate Brain. Annual Review of Neuroscience 46:381–401. https://doi.org/10.1146/annurev-neuro-102522-121410
- Coaction versus reciprocity in continuous-time models of cooperation. Journal of Theoretical Biology 356:1–10. https://doi.org/10.1016/j.jtbi.2014.03.019
- Beyond the prisoner’s dilemma: Toward models to discriminate among mechanisms of cooperation in nature. Trends in Ecology & Evolution 7:202–205. https://doi.org/10.1016/0169-5347(92)90074-L
- Neural activity in macaque medial frontal cortex represents others’ choices. Scientific Reports 7. https://doi.org/10.1038/s41598-017-12822-5
- Levels of Naturalism in Social Neuroscience Research. iScience:102702. https://doi.org/10.1016/j.isci.2021.102702
- Two Brains in Action: Joint-Action Coding in the Primate Frontal Cortex. Journal of Neuroscience 39:3514–3528. https://doi.org/10.1523/JNEUROSCI.1512-18.2019
- Statistical determinants of visuomotor adaptation along different dimensions during naturalistic 3D reaches. Scientific Reports 12:10198. https://doi.org/10.1038/s41598-022-13866-y
- The experimental emergence of convention in a non-human primate. Philosophical Transactions of the Royal Society B: Biological Sciences 377:20200310. https://doi.org/10.1098/rstb.2020.0310
- Guinea baboons are strategic cooperators. Science Advances 9:eadi5282. https://doi.org/10.1126/sciadv.adi5282
- The where, what and when of gaze allocation in the lab and the natural environment. Vision Research 51:1920–1931. https://doi.org/10.1016/j.visres.2011.07.002
- See and be seen: Infant-caregiver social looking during locomotor free play. Developmental Science 21:e12626. https://doi.org/10.1111/desc.12626
- MouseTracker: Software for studying real-time mental processing using a computer mouse-tracking method. Behavior Research Methods 42:226–241. https://doi.org/10.3758/BRM.42.1.226
- Dynamic Social Adaptation of Motion-Related Neurons in Primate Parietal Cortex. PLoS ONE 2:e397. https://doi.org/10.1371/journal.pone.0000397
- Social cognition in premotor and parietal cortexSocial Neuroscience 3:250–260https://doi.org/10.1080/17470910701434610Google Scholar
- Neural Dynamics in Monkey Parietal Reach Region Reflect Context-Specific Sensorimotor TransformationsJournal of Neuroscience 26:9376–9384https://doi.org/10.1523/JNEUROSCI.1570-06.2006Google Scholar
- Three-dimensional reach trajectories as a probe of real-time decision-making between multiple competing targetsFrontiers in Neuroscience :8https://doi.org/10.3389/fnins.2014.00215Google Scholar
- Decision-making in sensorimotor controlNature Reviews Neuroscience 19:519–534https://doi.org/10.1038/s41583-018-0045-9Google Scholar
- Visual attention and the acquisition of information in human crowdsProceedings of the National Academy of Sciences 109:7245–7250https://doi.org/10.1073/pnas.1116141109Google Scholar
- The dual function of social gazeCognition 136:359–364https://doi.org/10.1016/j.cognition.2014.11.040Google Scholar
- The road towards understanding embodied decisionsNeuroscience & Biobehavioral Reviews 131:722–736https://doi.org/10.1016/j.neubiorev.2021.09.034Google Scholar
- Primate Amygdala Neurons Simulate Decision Processes of Social PartnersCell 177:986–998https://doi.org/10.1016/j.cell.2019.02.042Google Scholar
- A review of theories and methods in the science of face-to-face social interactionNature Reviews Psychology 1:42–54https://doi.org/10.1038/s44159-021-00008-wGoogle Scholar
- Quantification of inter-brain coupling: A review of current methods used in haemodynamic and electrophysiological hyperscanning studiesNeuroImage :120354https://doi.org/10.1016/j.neuroimage.2023.120354Google Scholar
- Face2face: advancing the science of social interactionPhilosophical Transactions of the Royal Society B: Biological Sciences 378:20210470https://doi.org/10.1098/rstb.2021.0470Google Scholar
- Centrality of Social Interaction in Human Brain FunctionNeuron 88:181–193https://doi.org/10.1016/j.neuron.2015.09.022Google Scholar
- Neuronal Prediction of Opponent’s Behavior during Cooperative Social Interchange in PrimatesCell 160:1233–1245https://doi.org/10.1016/j.cell.2015.01.045Google Scholar
- The Formation of Social Conventions in Real-Time EnvironmentsPLOS One 11:e0151670https://doi.org/10.1371/journal.pone.0151670Google Scholar
- Transwall: a transparent double-sided touch display facilitating co-located face-to-face interactionsIn: CHI ‘14 Extended Abstracts on Human Factors in Computing Systems CHI EA’14 pp. 435–438https://doi.org/10.1145/2559206.2574828Google Scholar
- Neural mechanisms for emotional contagion and spontaneous mimicry of live facial expressionsPhilosophical Transactions of the Royal Society B: Biological Sciences 378:20210472https://doi.org/10.1098/rstb.2021.0472Google Scholar
- Enabling ambulatory movement in wearable magnetoencephalography with matrix coil active magnetic shieldingNeuroImage 274:120157Google Scholar
- Prefrontal Neurons Represent Winning and Losing during Competitive Video Shooting Games between MonkeysJournal of Neuroscience 32:7662–7671https://doi.org/10.1523/JNEUROSCI.6479-11.2012Google Scholar
- JARVIS-MoCap/JARVlS-AcquisitionToolGitHub https://github.com/JARVIS-MoCap/JARVIS-AcquisitionToolGoogle Scholar
- ClearBoard: a seamless medium for shared drawing and conversation with eye contactIn: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems CHI ‘92 New York, NY, USA: Association for Computing Machinery pp. 525–532https://doi.org/10.1145/142750.142977Google Scholar
- Development of social systems neuroscience using macaquesProceedings of the Japan Academy, Series B 94:305–323https://doi.org/10.2183/pjab.94.020Google Scholar
- Neural dynamics of two players when using nonverbal cues to gauge intentions to cooperate during the Prisoner’s Dilemma GameNeuroImage 157:263–274https://doi.org/10.1016/j.neuroimage.2017.06.024Google Scholar
- Ultralytics YOLOGitHub https://github.com/ultralytics/ultralyticsGoogle Scholar
- Space representation for eye movements is more contralateral in monkeys than in humansProceedings of the National Academy of Sciences 107:7933–7938https://doi.org/10.1073/pnas.1002825107Google Scholar
- Representation of Confidence Associated with a Decision by Neurons in the Parietal CortexScience 324:759–764https://doi.org/10.1126/science.1169405Google Scholar
- A taxonomy of all ordinal 2x 2 gamesTheory and decision 24:99–117Google Scholar
- The Lab Streaming Layer for Synchronized Multimodal RecordingbioRxiv https://doi.org/10.1101/2024.02.13.580071Google Scholar
- Potential social interactions are important to social attentionProceedings of the National Academy of Sciences 108:5548–5553https://doi.org/10.1073/pnas.1017022108Google Scholar
- Reach and Gaze Representations in Macaque Parietal and Premotor Grasp AreasJournal of Neuroscience 33:7038–7049https://doi.org/10.1523/JNEUROSCI.5568-12.2013Google Scholar
- The integration of social and neural synchrony: a case for ecologically valid research using MEG neuroimagingSocial Cognitive and Affective Neuroscience 16:143–152https://doi.org/10.1093/scan/nsaa061Google Scholar
- Continuous dynamics of cooperation and competition in social foragingGoogle Scholar
- Electromyogram (EMG) Removal by Adding Sources of EMG (ERASE)—A Novel ICA-Based Algorithm for Removing Myoelectric Artifacts From EEGFrontiers in Neuroscience :14https://doi.org/10.3389/fnins.2020.597941Google Scholar
- Egocentricity in infants’ play with familiar objects in caregiverchild interactionsPsyArXiv https://doi.org/10.31234/osf.io/2ewqc_v2Google Scholar
- Reciprocity of social influenceNature Communications 9:2474https://doi.org/10.1038/s41467-018-04925-yGoogle Scholar
- Distinct neurocomputational mechanisms support informational and socially normative conformityPLOS Biology 20:e3001565https://doi.org/10.1371/journal.pbio.3001565Google Scholar
- Why do children learn the words they do?Child Development Perspectives 12:253–257https://doi.org/10.1111/cdep.12295Google Scholar
- DeepLabCut: markerless pose estimation of user-defined body parts with deep learningNature Neuroscience https://www.nature.com/articles/s41593-018-0209-y
- Bayesian nonparametric models characterize instantaneous strategies in a competitive dynamic gameNature Communications 10:1–12https://doi.org/10.1038/s41467-019-09789-4Google Scholar
- Development of a Marmoset Apparatus for Automated Pulling to study cooperative behaviorseLife 13:RP97088https://doi.org/10.7554/eLife.97088Google Scholar
- Human and macaque pairs employ different coordination strategies in a transparent decision gameeLife 12:e81641https://doi.org/10.7554/eLife.81641Google Scholar
- Post-decision wagering after perceptual judgments reveals bi-directional certainty readoutsCognition 176:40–52https://doi.org/10.1016/j.cognition.2018.02.026Google Scholar
- What makes a reach movement effortful? Physical effort discounting supports common minimization principles in decision making and motor controlPLOS Biology 15:e2001323https://doi.org/10.1371/journal.pbio.2001323Google Scholar
- Theory of Games and Economic BehaviorPrinceton University Press Google Scholar
- Live agent preference and social action monitoring in the macaque mid-superior temporal sulcus regionProceedings of the National Academy of Sciences 118:e2109653118https://doi.org/10.1073/pnas.2109653118Google Scholar
- Social reward monitoring and valuation in the macaque brainNature Neuroscience 21:1452–1462https://doi.org/10.1038/s41593-018-0229-7Google Scholar
- Role of the social actor during social interaction and learning in humanmonkey paradigmsNeuroscience & Biobehavioral Reviews 102:242–250https://doi.org/10.1016/j.neubiorev.2019.05.004Google Scholar
- Cooperation experiments: coordination through communication versus acting apart togetherAnimal Behaviour 71:1–18https://doi.org/10.1016/j.anbehav.2005.03.037Google Scholar
- AprilTag: A robust and flexible visual fiducial systemIn: 2011 IEEE International Conference on Robotics and Automation pp. 3400–3407https://doi.org/10.1109/ICRA.2011.5979561Google Scholar
- Neuronal correlates of strategic cooperation in monkeysNature Neuroscience 24:116–128https://doi.org/10.1038/s41593-020-00746-9Google Scholar
- Interpersonal Autonomic Physiology: A Systematic Review of the LiteraturePersonality and Social Psychology Review 21:99–141https://doi.org/10.1177/1088868316628405Google Scholar
- Neural computations underlying strategic social decision-making in groupsNature Communications 10:1–12https://doi.org/10.1038/s41467-019-12937-5Google Scholar
- Promises and limitations of human intracranial electroencephalographyNature Neuroscience 21:474–483https://doi.org/10.1038/s41593-018-0108-2Google Scholar
- Understanding motor events: a neurophysiological studyExperimental Brain Research 91:176–180https://doi.org/10.1007/BF00230027Google Scholar
- The elaboration of exploratory playPhilosophical Transactions of the Royal Society B: Biological Sciences 375:20190503https://doi.org/10.1098/rstb.2019.0503Google Scholar
- The effects of recursive communication dynamics on belief updatingProceedings of the Royal Society B: Biological Sciences 287:20200025https://doi.org/10.1098/rspb.2020.0025Google Scholar
- Benefits of spontaneous confidence alignment between dyad membersCollective Intelligence 1https://doi.org/10.1177/26339137221126915Google Scholar
- Neurocomputational mechanisms involved in adaptation to fluctuating intentions of othersNature Communications 15:3189https://doi.org/10.1038/s41467-024-47491-2Google Scholar
- Neural implementation of computational mechanisms underlying the continuous trade-off between cooperation and competitionNature Communications 13:6873https://doi.org/10.1038/s41467-022-34509-wGoogle Scholar
- Advancements in Noncontact, Multiparameter Physiological Measurements Using a WebcamIEEE Transactions on Biomedical Engineering 58:7–11https://doi.org/10.1109/TBME.2010.2086456Google Scholar
- Connecting minds and sharing emotions through mimicry: A neurocognitive model of emotional contagionNeuroscience & Biobehavioral Reviews 80:99–114https://doi.org/10.1016/j.neubiorev.2017.05.013Google Scholar
- Shared yet dissociable neural codes across eye gaze, valence and expectationNature 586:95–100https://doi.org/10.1038/s41586-020-2740-8Google Scholar
- Robust Speech Recognition via Large-Scale Weak SupervisionIn: Proceedings of the 40th International Conference on Machine Learning PMLR pp. 28492–28518https://proceedings.mlr.press/v202/radford23a.htmlGoogle Scholar
- The Neuroscience of Social Decision-MakingAnnual Review of Psychology 62:23–48https://doi.org/10.1146/annurev.psych.121208.131647Google Scholar
- Breaking the Fourth Wall of Cognitive Science: Real-World Social Attention and the Dual Function of GazeCurrent Directions in Psychological Science 25:70–74https://doi.org/10.1177/0963721415617806Google Scholar
- Judgments of effort exerted by others are influenced by received rewardsScientific Reports 10:1868https://doi.org/10.1038/s41598-020-58686-0Google Scholar
- Memory enhancements from active control of learning emerge across developmentCognition 186:82–94https://doi.org/10.1016/j.cognition.2019.01.010Google Scholar
- Social Decision-Making: Insights from Game Theory and NeuroscienceScience 318:598–602https://doi.org/10.1126/science.1142996Google Scholar
- Psychometrics of the continuous mind: Measuring cognitive sub-processes via mouse trackingMemory & Cognition 48:436–454https://doi.org/10.3758/s13421-019-00981-xGoogle Scholar
- Confidence over competence: Real-time integration of social information in human continuous perceptual decision-makingbioRxiv :2024.08.19.608609https://doi.org/10.1101/2024.08.19.608609Google Scholar
- Gaze Following in Human Infants Depends on Communicative SignalsCurrent Biology 18:668–671https://doi.org/10.1016/j.cub.2008.03.059Google Scholar
- Game theory and the evolution of behaviourProceedings of the Royal Society of London Series B Biological Sciences 205:475–488https://doi.org/10.1098/rspb.1979.0080Google Scholar
- The development and impact of active learning strategies on self-confidence in a newly designed first-year self-care pharmacy course-outcomes and experiencesCurrents in Pharmacy Teaching and Learning 10:499–504https://doi.org/10.1016/j.cptl.2017.12.008Google Scholar
- Critical Viewing Conditions for Evaluation of Color Television Pictureshttps://doi.org/10.5594/SMPTE.RP166.1995Google Scholar
- Screen Luminance Level, Chromaticity and Uniformityhttps://doi.org/10.5594/SMPTE.ST431-1.2006Google Scholar
- Continuous attraction toward phonological competitorsProceedings of the National Academy of Sciences 102:10393–10398https://doi.org/10.1073/pnas.0503903102Google Scholar
- Humans depart from optimal computational models of interactive decision-making during competition under partial informationScientific Reports 12:289https://doi.org/10.1038/s41598-021-04272-xGoogle Scholar
- Complementary encoding of priors in monkey frontoparietal network supports a dual process of decision-makingeLife 8:e47581https://doi.org/10.7554/eLife.47581Google Scholar
- The effects of reward and social context on visual processing for perceptual decisionmakingCurrent Opinion in Physiology 16:109–117https://doi.org/10.1016/j.cophys.2020.08.006Google Scholar
- Interpersonal brain synchronization in the right temporoparietal junction during face-to-face economic exchangeSocial Cognitive and Affective Neuroscience 11:23–32https://doi.org/10.1093/scan/nsv092Google Scholar
- Informational and Normative Influences in Conformity from a Neurocomputational PerspectiveTrends in Cognitive Sciences 19:579–589https://doi.org/10.1016/j.tics.2015.07.007Google Scholar
- Social Decision-Making and the Brain: A Comparative PerspectiveTrends in Cognitive Sciences 21:265–276https://doi.org/10.1016/j.tics.2017.01.007Google Scholar
- Neural correlates of attention in primate visual cortexTrends in Neurosciences 24:295–300https://doi.org/10.1016/S0166-2236(00)01814-2Google Scholar
- Visual attention: the where, what, how and why of saliencyCurrent Opinion in Neurobiology 13:428–432https://doi.org/10.1016/S0959-4388(03)00105-3Google Scholar
- The cone method: Inferring decision times from single-trial 3D movement trajectories in choice behaviorBehavior Research Methods 53:2456–2472https://doi.org/10.3758/s13428-021-01579-5Google Scholar
- Deciding While Acting—Mid-Movement Decisions Are More Strongly Affected by Action Probability than Reward AmounteNeuro 10https://doi.org/10.1523/ENEURO.0240-22.2023Google Scholar
- Using imaging photoplethysmography for heart rate estimation in non-human primatesPLOS One 13:1–22https://doi.org/10.1371/journal.pone.0202581Google Scholar
- Emergence and suppression of cooperation by action visibility in transparent gamesPLoS Computational Biology 16https://doi.org/10.1371/journal.pcbi.1007588Google Scholar
- Evolutionary Successful Strategies in a Transparent iterated Prisoner’s DilemmaIn:
- Kaufmann P
- Castillo PA
- Measuring information alignment in hyperscanning research with representational analyses: moving beyond interbrain synchronyFrontiers in Human Neuroscience :18https://doi.org/10.3389/fn-hum.2024.1385624Google Scholar
- Group benefits in joint perceptual tasks—a reviewAnnals of the New York Academy of Sciences 1426:166–178https://doi.org/10.1111/nyas.13843Google Scholar
- Sensory and decision-making processes underlying perceptual adaptationJournal of Vision 18:10https://doi.org/10.1167/18.8.10Google Scholar
- Continuous decisionsPhilosophical Transactions of the Royal Society B: Biological Sciences 376:20190664https://doi.org/10.1098/rstb.2019.0664Google Scholar
- Multicentric tracking of multiple agents by anterior cingulate cortex during pursuit and evasionNature Communications 12:1985https://doi.org/10.1038/s41467-021-22195-zGoogle Scholar
- The neural basis of predictive pursuitNature Neuroscience 23:252–259https://doi.org/10.1038/s41593-019-0561-6Google Scholar
- Representation of Others’ Action by Neurons in Monkey Medial Frontal CortexCurrent Biology 21:249–253https://doi.org/10.1016/j.cub.2011.01.004Google Scholar
- Joint Attention without Gaze Following: Human Infants and Their Parents Coordinate Visual Attention to Objects through Eye-Hand CoordinationPLOS One 8:e79659https://doi.org/10.1371/journal.pone.0079659Google Scholar
- Communication speeds up but impairs the consensus decision in a dyadic colour estimation taskRoyal Society Open Science 7:191974https://doi.org/10.1098/rsos.191974Google Scholar
- The Role of Interpersonal Traits in Social Decision Making: Exploring Sources of Behavioral Heterogeneity in Economic GamesPersonality and Social Psychology Review 19:277–302https://doi.org/10.1177/1088868314553709Google Scholar
Cite all versions
You can cite all versions using the DOI https://doi.org/10.7554/eLife.106757. This DOI represents all versions, and will always resolve to the latest one.
Copyright
© 2025, Isbaner et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.