Individuals physically interacting in a group rapidly coordinate their movement by estimating the collective goal
Abstract
How can a human collective coordinate, for example to move a banquet table, when each person is influenced by the inertia of others who may be inferior at the task? We hypothesized that large groups cannot coordinate through touch alone, leading to a zero-sum scenario where individuals inferior at the task hinder superior ones. We tested this hypothesis by examining how dyads, triads and tetrads, whose right hands were physically coupled together, followed a common moving target. Surprisingly, superior individuals followed the target accurately even when coupled to an inferior group, and the interaction benefits increased with the group size. A computational model shows that these benefits arose as each individual uses their respective interaction force to infer the collective’s target and enhance their movement planning, which permitted coordination in seconds independent of the collective’s size. By estimating the collective’s movement goal, its individuals make physical interaction beneficial, swift and scalable.
https://doi.org/10.7554/eLife.41328.001

Introduction
A recent social experiment involving the widely acclaimed Pokemon video game ignited enormous public interest: tens of thousands of players simultaneously controlled the protagonist of the game together and successfully finished the game (Zhang and Liu, 2015). Such collective behavior in humans has been researched when a collective makes a decision verbally (Webb, 1991; Hastie and Kameda, 2005; Bahrami et al., 2010). However, many great human accomplishments, such as carrying stone blocks to construct the Great Pyramids, were enabled by many individuals coordinating the forces they applied on an object, for example to guide a stone on top of wooden rollers and move it. Such physical coordination has been investigated in pairs or dyads in the past decade (Basdogan et al., 2000; Sebanz et al., 2006; Reed and Peshkin, 2008; van der Wel et al., 2011; Malysz and Sirouspour, 2013).
Previous studies that investigated dyads found evidence of improved task performance (Basdogan et al., 2000; Reed and Peshkin, 2008; Malysz and Sirouspour, 2013), but the underlying mechanism of physical coordination was unknown. In a recent study, we tested dyads interacting in a continuous tracking task, and found that the tracking performance of both partners improved, even when the partner was worse at the task (Ganesh et al., 2014). This mutual improvement during continuous interaction is explained by a mechanism where individuals estimate the partner’s target from the interaction force to improve their prediction of the target’s motion (Takagi et al., 2017). In a second study, we showed that a stronger connection yields a better estimate of the partner’s target, enabling partners to improve more from the interaction (Takagi et al., 2018). We speak of the partner’s target for a tracking task, but this can be generalized to estimating a partner’s movement goal, which we define as the partner’s desired state, for example a position and velocity in time.
Although the mechanism of estimating the partner’s movement goal explains coordination in dyads, it is not known whether this interaction mechanism holds for an interactive tracking task with more than one partner. The connection dynamics to multiple partners may help inferior partners in a group but will likely hinder superior partners’ task performance. The dynamics may interfere with the coordination mechanism, which in dyads enabled even the superior partner to improve during the interactive tracking task (Ganesh et al., 2014). It is therefore unclear whether the interaction remains mutually beneficial for large groups.
To elucidate this question and investigate how collectives negotiate common actions, we examined a task inspired by dancing in which two, three and four partners have to control their motion while feeling forces from the soft interaction with others. We hypothesized that the stochastic summation of every partner’s actions, yielding the interaction force, would produce a noisier and poorer haptic estimate of the target as the group size increases. We also expected the connection dynamics and the collective’s inertia to have a detrimental effect on the superior partners’ performance. In such a scenario, the dynamics of being physically connected to a collective of partners may characterize the interaction behavior, similar to what was observed in joint reaching movements (Takagi et al., 2016). Could the coordination mechanism proposed in earlier studies (Takagi et al., 2017; Takagi et al., 2018) fully explain the tracking performance observed in collective interaction, or would the dynamics of the collective’s inertia outweigh the benefits of the coordination mechanism in larger groups?
Results
We tested interaction in dyads, triads and tetrads who tracked a common moving target together using their right hands, which were all joined together with virtual elastic bands of stiffness 100 N/m (Figure 1A and B). Twelve groups of four carried out the experiment, providing data for 12 triads and 12 tetrads, and 12 dyads were tested separately (see Figure 1D and the Materials and methods for details on the protocol). Individuals in the collective controlled a robotic handle with their right hand, which moved a cursor on their own monitor, to track the moving target (Figure 1A). The same target was used for all individuals of the collective. Each individual saw, on their own monitor, the positions of the target and of their hand, but not the partners’ cursor positions. Individual performance at the task was calculated for each 15 s trial by measuring the average distance between their cursor and the target, defined as the tracking error. Two types of trials were tested: in solo trials, each individual tracked the target alone; in connected trials, the individuals’ right hands were coupled together by the elastic bands.
To test how interaction with inferior or superior partners influenced tracking performance, we manipulated the tracking ability of subjects by applying visual noise to the target (Körding and Wolpert, 2004) as described in the Materials and methods (see Figure 1C and Videos 1 and 2 for the visual noise during the tracking task). The tracking error of subjects was tightly and linearly related to the standard deviation of this visual noise, such that greater visual noise resulted in larger tracking errors (see Figure 2B for sample subjects). A different amount of visual noise, randomly selected but fixed within each connected trial, was applied to each member of a collective on every trial. This enabled us to test the influence of interacting with participants of different tracking abilities. As the visual noise was linearly related to the tracking error, we could calculate the change in each subject’s tracking error during the interaction relative to the visual noise that we applied. We dispersed the solo trials throughout the entire experiment to verify that the relationship between visual noise level and tracking error did not drift with time, for example due to fatigue, which is the rationale for the relatively complicated protocol for twos and fours (see Figure 1D).
Figure 2A shows raw data of the x-axis positions and forces experienced by a sample tetrad in solo and connected trials. The subjects’ positions lagged behind the target’s motion due to visual feedback delays. For each subject, we assessed the performance improvement $e_s/e_c$, where $e_c$ was an individual’s tracking error in a connected trial and $e_s$ was the same subject’s solo error, which was estimated from the visual noise applied during the connected trial. This ratio quantifies an individual’s tracking ability during the interaction relative to tracking alone. We analyzed this performance improvement as a function of the partners’ relative error, $\bar{e}_p/e_s$, where $\bar{e}_p$ was the mean of the partners’ solo errors, also estimated from the visual noise applied to the partners in the connected trial. This ratio is a measure of how the partners’ average tracking ability compared with the individual’s. This enabled us to study how each individual’s tracking ability changed when they interacted with ‘superior’ or ‘inferior’ partners.
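As a concrete sketch of these two ratios, the helper below computes them from an individual's connected-trial error, their solo error, and the partners' solo errors. The function and variable names are ours, chosen for illustration:

```python
import numpy as np

def improvement_and_relative_error(e_connected, e_solo, partner_solo_errors):
    """Performance improvement (solo error / connected error) and the
    partners' relative error (mean partner solo error / own solo error)."""
    improvement = e_solo / e_connected                      # >1: tracking improved
    relative_error = np.mean(partner_solo_errors) / e_solo  # >1: partners worse
    return improvement, relative_error
```

For example, an individual whose error halves during interaction while coupled to partners twice as error-prone has an improvement of 2 and a partners' relative error of 2.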
The results of the collective physical interaction are plotted in Figure 3 (the data in Figure 3—source data 1 was used for all subsequent analysis). First, we assessed how the collective as a whole improved from the physical interaction (Figure 3A) by taking the mean performance improvement of all individuals in the collective in every connected trial, and averaging over all trials for each collective. Two-sample t-tests revealed that the collective’s mean improvement increased with its size (between dyads and triads: t(22)=2.53, p<0.02; between dyads and tetrads: t(22)=6.07, p<10−5), revealing the benefits of interacting in larger collectives. To observe how each individual’s improvement changed as a function of the partners’ performance, we plotted each individual’s performance improvement as a function of the partners’ mean relative error for dyads (red trace), triads (green) and tetrads (blue) in Figure 3B. The data was fit using a linear mixed-effects model, where each recruited group of twos or fours was treated as a random factor to control for individual differences in their inherent ability to improve from the interaction (see Equation 2 in the Materials and methods for details). A mixed-effects analysis showed that the collective’s size modulated the individual’s performance improvement (χ2(2)=412, p<10−15, see Materials and methods for details).
Figure 3—source data 1: https://doi.org/10.7554/eLife.41328.007
We split the data into the superior ($\bar{e}_p/e_s > 1$) and inferior ($\bar{e}_p/e_s < 1$) individuals of the collective for dyads, triads and tetrads to examine how they were affected by physically interacting with superior or inferior partners. One-sample t-tests were carried out on the inferior and superior individuals’ improvements using a Bonferroni-corrected significance level of 0.05/6. Inferior individuals improved when coupled to a superior collective, regardless of its size (dyads: t(11)=10.8, p<10−6; triads: t(11)=24.0, p<10−10; tetrads: t(11)=23.0, p<10−9). Surprisingly, superior individuals in dyads, triads and tetrads maintained their performance with respect to their solo error (dyads: t(11)=-2.22, p>0.05; triads: t(11)=-1.53, p>0.15; tetrads: t(11)=3.01, p>0.012). A superior individual could thus sustain their tracking performance even when physically coupled to an inferior collective, regardless of how many inferior individuals it contained.
An individual’s improvement depended on the performance of the others in the collective, but did the performance improvement change within the 15 s trial? We examined the improvement plot of Figure 3B for each collective as a function of time by calculating the improvement from the start of the trial to a specific trial time in increments of 0.5 s. Figure 3C shows the evolution of the improvement of a sample tetrad, where each trace is a second-order polynomial fitted to the data. The improvement changed significantly over time. To study the evolution of the interaction’s beneficial effect on performance, we analyzed the improvement curve’s deviation from the final improvement, defined as the improvement at the end of the 15 s trial, that is the improvement during the entire trial, for dyads, triads and tetrads. Figure 3D shows the Euclidean distance between the second-order polynomial fits on the data at different times and the final improvement as a function of time for dyads, triads and tetrads. The improvement increased rapidly during the 15 s trials for all collectives. To compare the rate of convergence between dyads, triads and tetrads, we fitted an exponential function of the form $a\,e^{-\lambda t} + b$ to each collective’s curve, where $a$ and $b$ are parameters, $\lambda$ is the decay constant and $t$ is the trial time. Mann-Whitney U-tests revealed that the decay constant was similar between dyads and triads (U=122, n1=12, n2=12, p>0.11 two-tailed), and between dyads and tetrads (U=136, n1=12, n2=12, p>0.44 two-tailed). Thus, the time constant for the collective’s improvement did not depend on its size. Remarkably, it took only 7.4±0.9 s (mean ± standard error) for the collective to reach 90% of the final improvement.
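The convergence analysis above can be sketched numerically. The snippet below uses invented, noiseless values: it fits the exponential decay of the deviation from the final improvement and reads off the time at which 90% of the final improvement is reached. With a zero offset, the fit reduces to a least-squares line on the log of the deviation:

```python
import numpy as np

# Deviation from the final improvement, modeled as d(t) = a*exp(-lam*t)
# (offset b taken as 0 here). All values are illustrative, not the study's data.
t = np.arange(0.5, 15.5, 0.5)          # trial time sampled every 0.5 s
a_true, lam_true = 0.3, 0.3
d = a_true * np.exp(-lam_true * t)

# log(d) is linear in t, so a least-squares line recovers the decay constant.
slope, intercept = np.polyfit(t, np.log(d), 1)
lam, a = -slope, np.exp(intercept)

# Time for the deviation to fall to 10% of its initial value,
# that is when 90% of the final improvement has been reached.
t90 = np.log(10) / lam
```

With a decay constant of 0.3 s⁻¹ this gives t90 ≈ 7.7 s, of the same order as the measured 7.4 s.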
The empirical data shows that the collective physical interaction was beneficial for most individuals in the collective. How could individuals cause this performance improvement during collective interaction? To determine the behavioral strategy that individuals employed, we compared the empirical data with simulations using the control models represented in Figure 4A and C, which predict the outcome of the collective interaction experiment. In the simulation, we assumed that each individual sent motor commands to their arm to minimize the distance between their hand and the moving target. Simulated individuals relied on proprioception and vision for feedback of their hand and target positions, respectively. The simulated individuals had two free parameters that controlled the jerkiness of their movement and the strength of the controller, that is the control gain to bring the hand to the target. We carried out a sensitivity analysis to find values for these parameters that best explained the empirical data for each interaction model proposed in this study (see Supplementary material for details). Two, three and four such individuals were simulated in parallel, with and without the elastic coupling, to measure their tracking performance during interaction and solo practice.
We first tested whether the performance improvements observed in groups larger than dyads can be explained by a model in which the physical connection to a superior or inferior collective with greater inertia dominates the interaction outcome. This model also tests whether the averaging of multiple partners’ trajectories during the tracking task helped to reduce tracking errors through their mutual cancellation. In this no exchange model (Figure 4A), individuals track a target estimated from vision whilst under the influence of the forces from the elastic bands. This model predicted an improvement that was linearly dependent on the partners’ relative error, which differed from the data (Figure 4B). Importantly, the model predicted that a superior individual in the collective was hindered by inferior partners, and that the hindrance grew with more inferior individuals, in contrast to the data (Figure 3B). The mismatch between the experimental data and the no exchange model’s prediction for triads and tetrads suggests that individuals interacting in large groups use the interaction force to exchange information that is relevant to the task, as was found in dyads in our previous study (Takagi et al., 2017).
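A toy version of this no exchange model for a dyad can be sketched as follows: two simulated hands servo toward their own noisy visual reading of the shared target while a 100 N/m spring couples them, with no haptic inference at all. The gains, mass, noise levels and target are illustrative choices of ours, not the paper's fitted parameters:

```python
import numpy as np

dt, T = 0.005, 15.0                     # 15 s trial, 5 ms steps
t = np.arange(0.0, T, dt)
target = 0.05 * np.sin(2 * np.pi * 0.4 * t)   # a simple 0.4 Hz target (m)

k, m, kp, kv = 100.0, 1.0, 400.0, 40.0  # spring, mass, P and D gains
vis_noise = {"good": 0.002, "bad": 0.02}  # visual noise std per subject (m)
rng = np.random.default_rng(4)

p = {s: 0.0 for s in vis_noise}         # hand positions
v = {s: 0.0 for s in vis_noise}         # hand velocities
err = {s: [] for s in vis_noise}
for pt in target:
    # Spring force each hand feels from the other (no information extracted).
    f = {s: k * (p[o] - p[s]) for s, o in (("good", "bad"), ("bad", "good"))}
    for s in vis_noise:
        seen = pt + vis_noise[s] * rng.standard_normal()  # noisy visual target
        a = (kp * (seen - p[s]) - kv * v[s] + f[s]) / m
        v[s] += a * dt                  # semi-implicit Euler integration
        p[s] += v[s] * dt
        err[s].append(abs(p[s] - pt))

rms = {s: float(np.sqrt(np.mean(np.square(err[s])))) for s in err}
```

In this sketch, the noisier hand is pulled by the spring while injecting disturbance into the partner; nothing about the target is inferred from the force, which is the defining feature of the no exchange model.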
What kind of information did the individuals in triads and tetrads estimate from the interaction with their partners during the tracking task? In earlier studies (Takagi et al., 2017; Takagi et al., 2018), we showed that partners in dyads estimated each other’s target through the interaction force to improve their prediction of the target’s motion. Individuals in triads and tetrads may also extract useful information from haptics to improve tracking performance. We hypothesized that individuals interpret the summed interaction force as originating from one entity that tracks a collective target. According to this hypothesis, the individuals’ central nervous system (CNS) recognizes some correlation between the interaction force and the target motion (Parise and Ernst, 2016), then builds a representation of the entity that tracks the target. We assume that every individual’s CNS in the group estimates one collective target from the summed interaction force regardless of the number of partners in the group. In this extended neuromechanical goal sharing model, we propose that individuals track the optimally weighted average of the collective target and one’s own target from vision (see Figure 4C for a schematic of the model).
As an example, for tetrads, we simulated four connected individuals who each estimated a collective target from the three other partners, and who then integrated this haptic estimate of the target with their own visual estimate of the target’s position (see Equation 9 in the Materials and methods). The weights between vision and haptics were assumed to be known by every partner as we were interested in comparing the steady-state predictions of the model with the data. Furthermore, we accounted for the additional haptic noise that arises due to the compliance of the spring connection. In our earlier study (Takagi et al., 2018), we found that a stiffer spring reduces the haptic noise when estimating the partner’s target. Mechanics tells us that an individual in a triad who is connected to two partners by a total of two springs, each of stiffness 100 N/m, feels an equivalent force to being connected to the average of the two partners’ positions by a spring of stiffness 200 N/m (see Equation 6 in Materials and methods) (Burdet et al., 2013). In other words, individuals in larger groups effectively feel like they are connected to the group’s average position by a stronger spring. The error due to a specific compliant connection was accounted for in the simulations as additive noise (see Materials and methods for details on the haptic tracking experiment to measure this additional error due to the compliance in the spring in dyads, triads and tetrads, and Figure 4—figure supplement 1 for the results of the haptic tracking experiment).
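The spring-equivalence claim can be checked numerically: a tetrad member's three 100 N/m couplings sum to a single 300 N/m coupling to the partners' mean position. The positions below are arbitrary examples of ours:

```python
import numpy as np

k = 100.0                                  # stiffness of each band (N/m)
p_i = 0.02                                 # own hand position (m), illustrative
partners = np.array([0.05, 0.01, -0.03])   # three partners' positions (tetrad)

# Sum of the three individual spring forces on hand i.
f_sum = np.sum(k * (partners - p_i))

# One equivalent, stiffer spring to the partners' average position.
f_equiv = (len(partners) * k) * (partners.mean() - p_i)
```

The two force computations agree exactly, illustrating why individuals in larger groups effectively feel connected to the group's average position by a stronger spring.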
The simulation of the neuromechanical goal sharing model predicted a performance improvement that captured the curvature of the improvement as a function of the partners’ relative error with minimal deviation from the data when tested in a sensitivity analysis (Figure 4D and Figure 4—figure supplement 2). The performance improvement increased supralinearly for inferior individuals, and superior individuals retained their performance even when coupled to an inferior collective. Furthermore, the improvement was correctly modulated by the collective’s size, such that tetrads improved the most, followed by triads, and then dyads. These results suggest that individuals in collectives of different sizes use the same coordination strategy of extracting a haptic estimate of the collective target position from the interaction force.
Discussion
This study tested physically interacting dyads, triads and tetrads in a tracking task to assess the effect of the group’s size and its skill on the participating individuals’ tracking performance. We found that the total group’s performance increased with the group size, where inferior individuals in the group improved incrementally more in larger groups, and superior individuals were capable of sustaining their superior tracking performance even when connected to a group of individuals with inferior performance. Contrary to the results of our previous study (Ganesh et al., 2014), the superior individuals of the dyads in the current study did not improve. This discrepancy in the results is likely due to the high amount of visual noise added to the target in order to manipulate each individual’s tracking performance.
In our experiment, the performance improvements observed in dyads, triads and tetrads did not arise instantaneously, but emerged continuously during the trial, such that 90% of the group’s final performance improvement (calculated over the entire trial) was reached after about 7 s. As Figure 3C shows, the partners’ movements initially depend only on the connecting spring dynamics (compare with Figure 4A and B); the partners then gradually acquire a model of the interaction dynamics that enables them to benefit from the interaction. The similarity in the adaptation rates of dyads, triads and tetrads in reaching their final performance improvement may indicate that the same coordination mechanism is utilized regardless of the size of the interacting group. This similarity stands in contrast with verbal or gestural communication, where significantly more time is needed with more participants. It highlights the advantage of the simultaneity of haptic communication relative to the sequential exchanges of verbal and gestural communication.
In order to identify the coordination mechanism that explained the improvements from collective physical interaction, we used a computational model to test the determinants of interaction, to predict their effect on the performance improvement, and to compare the predictions with the empirical data. The neuromechanical goal sharing model, which captured the improvements in the empirical data, suggests a mechanism whereby individuals extract task-relevant sensory information from haptics and integrate it with their own visual information of the target’s motion to improve tracking performance during interaction. In this model, we assumed that each individual extracts a haptic estimate of the target from the interaction force. As this haptic estimate is combined with the individual’s visual estimate of the target in a stochastically optimal manner, it improves their tracking performance even when they are connected to partners with a collectively inferior performance. The haptic estimate of the target arises from the summed interaction force, which is composed of the elastic couplings to multiple partners and is equivalent to one elastic coupling to the average partner (see Equation 6 in the Materials and methods).
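The stochastically optimal combination of the two cues can be sketched as a small function implementing the textbook inverse-variance weighting rule; this is a sketch of the principle, not the paper's code, and the names are ours:

```python
def fuse(p_vision, var_vision, p_haptic, var_haptic):
    """Inverse-variance weighted fusion of the visual and haptic target
    estimates: the less noisy cue gets the larger weight, and the fused
    variance is never larger than either cue's variance."""
    w = var_haptic / (var_vision + var_haptic)        # weight on vision
    p = w * p_vision + (1 - w) * p_haptic
    var = (var_vision * var_haptic) / (var_vision + var_haptic)
    return p, var
```

Because the fused variance is always below that of the better cue alone, adding even a noisy haptic estimate cannot degrade the tracking of a superior individual, consistent with the behavior reported above.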
If the haptic estimate of the target is extracted from the summed interaction force, which is a function of the average partner’s movements, then intuitively the performance improvement from integrating this haptic signal should depend only on the average partner’s tracking error, and not on the number of partners in the collective. If so, why did the simulation of the neuromechanical goal sharing model in Figure 4D predict improvements that depended on both the average partner’s error and the size of the collective? There are two main candidate explanations for the graded performance improvement with group size. First, the connection dynamics alone could have graded the improvement, since the no exchange model (Figure 4A and B) also predicted improvements graded by group size. Second, the additional noise in the haptic estimate of the target, due to the compliance of the elastic coupling, may explain the graded improvement (see Equation 16 in the Materials and methods). To assess the impact of these two factors on the predicted performance improvement, we simulated the neuromechanical goal sharing model (see Figure 5A) without the interaction spring dynamics (by setting the interaction force of Equation 6 in the Materials and methods to zero) and without the additional noise from the elastic compliance (by setting the compliance noise of Equation 16 in the Materials and methods to zero). As the results still exhibit improvements graded as a function of group size (see Figure 5B), the graded improvement was caused neither by the connection dynamics nor by the additional haptic noise due to the elastic compliance.
What explains the grading of the improvement as a function of both the average partner’s error and the collective’s size? The original intuitive premise must be questioned: is the improvement from interacting with a collective of partners, whose mean tracking error is $\bar{e}_p$, the same as the improvement from interacting with one average partner who has the error $\bar{e}_p$? In previous studies (Takagi et al., 2017; Takagi et al., 2018), the noise in the haptic measurement of the target was equivalent to the partner’s visual tracking noise. So what is the noise in the haptic measurement from the interaction force during collective interaction? Although the interaction force is equivalent to one stiffer elastic coupling to the average partner (as Equation 6 shows), the noise in this interaction force is not the average of the partners’ visual tracking noise. Instead, the stochastically summed interaction force has noise that is inversely proportional to the squared number of partners (see Equation 11 in the Materials and methods), which is different from the mean of the partners’ visual tracking noise. To illustrate this difference, we simulated the neuromechanical goal sharing model where the haptic noise was set to the mean of the partners’ visual tracking noise (see Figure 5C). To isolate the effects of the haptic measurement noise, we again removed the connection dynamics and the noise from the elastic coupling in this simulation. This model predicted similar improvements irrespective of group size (see Figure 5D), showing that the graded improvement as a function of group size is indeed explained by a reduction in the variance of haptics due to the averaging of the partners’ positions in the interaction force.
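The variance reduction from averaging over partners' positions, as opposed to averaging their noise variances, can be illustrated with a quick Monte Carlo sketch (unit noise level and sample counts are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, trials = 1.0, 200_000   # each partner's tracking noise std (illustrative)

variances = {}
for n in (1, 2, 3):            # number of partners in dyads, triads, tetrads
    # The interaction force reflects the partners' *average* position, so the
    # noise it carries is the average of n independent noise samples...
    avg = rng.normal(0.0, sigma, (trials, n)).mean(axis=1)
    variances[n] = avg.var()
# ...whose variance shrinks as sigma^2 / n, whereas the mean of the partners'
# individual noise variances would stay at sigma^2 for any group size.
```

The estimated variances come out close to 1, 1/2 and 1/3, illustrating why the haptic estimate becomes more, not less, reliable in larger groups.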
Does the elucidated mechanism provide the maximum possible performance improvement from haptic feedback? Maximum information transfer during collective interaction can be estimated in a limiting case of the neuromechanical goal sharing model where the central nervous system is able to extract every partner’s contribution to the interaction force (instead of modeling the interaction force as coming from a single entity). This would be similar to the cocktail party effect in audition, where one can isolate a conversation in a room of people talking at the same time (Hawley et al., 2004). Each individual might extract multiple streams of information from the interaction force (one, two and three streams for dyads, triads and tetrads, respectively) using each partner’s individual spectral characteristics, yielding the maximum possible information transfer through haptics (see Figure 5E for the schematic of this model). However, the predictions of this source separation limiting case (see Figure 5F) consistently overestimated the improvement in comparison to the data, showing that our subjects could not break down each individual source in the interaction force. Instead, the average behavior of all other individuals was identified, and their collective target was inferred. This reveals a limit in the ability to share and estimate information through haptics.
In summary, this paper presented experiments and computational modeling to understand how physically interacting human individuals coordinate their movements during the collective tracking of a common target. The results elucidate the coordination mechanism in a collective by systematically analyzing how the information from the interaction dynamics is processed by its individuals. As the interaction force is the sum of all partners’ forces, it is not possible to identify each partner’s specific contribution to it. Instead, the individuals estimate a collective target from the interaction force, which they combine with their own visual target. The performance improvement resulting from this mechanism is suboptimal relative to that allowed by a putative source separation mechanism, but it still enables the collective’s individuals to improve their tracking when interacting with superior partners, and to not be hindered by inferior ones. This neuromechanical coordination mechanism is also scalable, as the time required to adapt to the group’s skill is independent of the group size, and a group’s total performance improvement increases with its size. The surprising result that the collective’s mean improvement increases in larger groups is explained by the stochastic properties of the collective target extracted from the summed interaction force.
Materials and methods
Experiments
The study was conducted according to the Declaration of Helsinki and was approved by the ethics committee of the Graduate School of Education at the University of Tokyo (reference number 14-75). Each of the 72 subjects gave written consent prior to starting the experiments. The sample size of 12 per group of dyads, triads and tetrads was determined by a prior power analysis for a repeated-measures ANOVA with within- and between-interaction factors consisting of three groups, the error detection parameters α = 0.05 and 1 − β = 0.8, and a medium effect size of f = 0.25.
Each subject held onto the robotic handle of the Phantom 1.5HF (Geomagic), which constrained the handle’s movement within a horizontal plane via software. The individual monitors displayed a cursor of the handle position and the target, which was composed of a dynamic cloud around the multi-sine function
where the target’s trajectory was randomized through the selection of the initial time according to a uniform distribution in the interval between 0 and 10 s.
The dynamic cloud consisted of five circular spots that were updated every millisecond (as shown in Videos 1 and 2). Each spot was regenerated, one at a time, every 400 ms by picking a new position and velocity. These position and velocity parameters were determined at the start of a trial from normal random distributions with a standard deviation of 0.005 m for the position, and from a set of ten equally spaced values from 0.005 m/s to 0.3 m/s for the velocity. The wider the spots were spread, the more difficult it was to follow the target, as its true position was hard to guess (Körding and Wolpert, 2004). Spots with low velocity noise were easy to track, but high velocity noise spots spread out rapidly like fireworks. Each time a velocity noise level was selected for a subject, it was removed from that subject’s set. The random selection ensured that an individual’s own performance and the others’ tracking skill were unknown a priori.
The velocity parameter enabled us to control the tracking error of each individual in a trial, which was measured as the root-mean-squared distance between the target and the cursor. For each subject, the tracking error on trials without interaction was regressed against the target spot velocity noise using data from three trials per velocity noise level, giving a fit with R2 = 0.80 ± 0.01 (mean ± standard error over all subjects). The spot velocity noise was then used as an estimate of each individual’s tracking error (see Figure 2B for fits from a sample tetrad).
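This per-subject calibration can be sketched as a simple linear regression. The numbers below are invented for illustration, not the study's data:

```python
import numpy as np

# Ten velocity noise levels, three solo trials per level (values are invented).
noise_levels = np.repeat(np.linspace(0.005, 0.3, 10), 3)
rng = np.random.default_rng(2)
# Hypothetical linear relation with small measurement noise.
tracking_error = 0.01 + 0.05 * noise_levels \
    + 0.002 * rng.standard_normal(noise_levels.size)

# Linear fit of tracking error against the applied velocity noise.
slope, intercept = np.polyfit(noise_levels, tracking_error, 1)
predicted = intercept + slope * noise_levels
r2 = 1 - np.sum((tracking_error - predicted) ** 2) / np.sum(
    (tracking_error - tracking_error.mean()) ** 2)
```

The fitted line then serves as a lookup: given the noise level applied on a connected trial, it yields an estimate of what that subject's solo error would have been.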
Subjects were instructed to follow the target as accurately as possible and were told that they would experience forces on their hand. At the end of the experiment, subjects were asked about the nature of the forces. Although some guessed that the forces originated from a partner, none of them could tell how many partners they were connected to.
The experimental protocol is described in Figure 1D. Twos experienced 80 trials in total and fours completed 100 trials in total. Both twos and fours first experienced 10 solo training trials to become acquainted with the task. After this training phase, twos and fours encountered a series of solo and connected trials. Twos carried out 60 trials in an alternating series of 30 connected and 30 solo trials, followed by 10 connected trials; this ensured that solo trials were interspersed throughout the experiment. Interaction data from both triads and tetrads were collected during the experiment with fours. We collected as much data as possible from triads by testing all four combinations of triads possible from the tetrad, and collected the tetrad interaction data in the last 30 trials. Thirty solo trials were interspersed such that 10 were tested after training, 10 prior to tetrad interaction, and another 10 in each triad block, where the excluded individual experienced solo trials instead of triad interaction trials. In total, there were 40 connected trials for dyads and triads, and 30 connected trials for tetrads.
Analysis
A linear mixed-effects model was employed to fit the improvement $i = (e_s - e_c)/e_s$, where $e_s$ is the error of an individual in a solo trial (estimated from the linear regression with the visual noise) and $e_c$ is the same subject's error on a connected trial, as a function of the partners' relative error $r = e_p/e_s$, where $e_p$ was the partners' mean error estimated from solo trials, and of the collective's size $N$:

$$i = \beta_0 + \beta_1\, r + \beta_2\, r^2 + \beta_3\, r^3 + \beta_4\, N + \epsilon_k .$$

In this model, $\beta_0$ is the intercept, $\beta_1$ to $\beta_4$ are the parameters for each predictor and $\epsilon_k$ is the unexplained variance of the improvement for each collective $k$.
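The fixed-effects part of such a fit can be sketched with ordinary least squares on synthetic data; the per-collective random intercept would require a dedicated mixed-model routine (e.g. in statsmodels or lme4), and all coefficients below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
r = rng.uniform(0.5, 2.0, n)           # partners' relative error e_p / e_s
N = rng.choice([2, 3, 4], n)           # collective size
# Synthetic improvement with polynomial and group-size effects
improvement = (0.10 + 0.15 * r - 0.03 * r**2 + 0.005 * r**3
               + 0.02 * N + rng.normal(0.0, 0.02, n))

# Design matrix: intercept, r, r^2, r^3, N (fixed effects only;
# a true mixed model would add a random intercept per collective)
X = np.column_stack([np.ones(n), r, r**2, r**3, N])
beta, *_ = np.linalg.lstsq(X, improvement, rcond=None)
```

With enough data the group-size coefficient (here the last entry of `beta`) is recovered close to its generating value of 0.02.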
Simulation model
A model was developed in discrete time to simulate how the members of a collective connected by elastic bands plan their movement to track a randomly moving target in two dimensions. The Cartesian product of two one-dimensional models as described below was used in simulation. At every time index $k$ the target with position $q^*_k$ must be estimated, then a motor command is generated to move the hand's position to the target. First, we describe the state equation that governs the movement of the target, then that of the hand, and combine these two equations to formulate a single state equation of the full system. The movement of the target, which is assumed to be moving randomly via Gaussian noise in its velocity $\dot q^*$, is described by the first-order system

$$x^*_{k+1} = \begin{bmatrix} 1 & \delta t \\ 0 & 1 \end{bmatrix} x^*_k + \mu_k, \qquad \mu_k \sim \mathcal{N}(0, Q), \qquad (3)$$

where $x^* \equiv [q^*, \dot q^*]^T$ is the target state and $Q$ is the covariance matrix.
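A minimal simulation of such a target process, with Gaussian noise driving only the velocity component, can be sketched as follows (time step, noise variance and duration are illustrative):

```python
import numpy as np

def simulate_target(steps=1000, dt=0.01, q_var=1.0, seed=0):
    """Simulate a target state [position, velocity] whose velocity is
    perturbed by Gaussian noise at every discrete time step."""
    rng = np.random.default_rng(seed)
    A = np.array([[1.0, dt], [0.0, 1.0]])
    x = np.zeros(2)                    # start at rest at the origin
    trace = np.empty((steps, 2))
    for k in range(steps):
        noise = np.array([0.0, rng.normal(0.0, np.sqrt(q_var))])
        x = A @ x + noise              # position integrates the noisy velocity
        trace[k] = x
    return trace

trace = simulate_target()
```

Running two such one-dimensional processes independently gives the two-dimensional target motion used in the simulation.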
The control of the hand is modelled as a point-mass $m$ driven by the motor command $u$ and the force $f$ from one or several elastic bands, that is $m\ddot q = u + f$. In state-space format, this yields

$$x_{k+1} = \begin{bmatrix} 1 & \delta t \\ 0 & 1 \end{bmatrix} x_k + \frac{\delta t}{m}\begin{bmatrix} 0 \\ 1 \end{bmatrix}(u_k + f_k), \qquad x \equiv \begin{bmatrix} q \\ \dot q \end{bmatrix}, \qquad (4)$$

where the control command to move the hand towards the target is described by

$$u_k = \begin{bmatrix} L_1 & L_2 \end{bmatrix}(x^*_k - x_k), \qquad (5)$$

with $L_1$ and $L_2$ describing the position and velocity control gains, respectively. In a collective of $N$ individuals $i \in \{1, \ldots, N\}$, each individual's right hand with state $x_i$ is connected to the other individuals' right hands through elastic bands of stiffness $\kappa$ and damping $\beta$ that produce the force

$$f_i = \sum_{j \neq i}\left[\kappa\,(q_j - q_i) + \beta\,(\dot q_j - \dot q_i)\right] = (N-1)\left[\kappa\,(\bar q_i - q_i) + \beta\,(\dot{\bar q}_i - \dot q_i)\right], \qquad (6)$$

where $\bar q_i \equiv \frac{1}{N-1}\sum_{j \neq i} q_j$ is the average partners' position. Importantly, as we see in the right side of this equation, the sum of the elastic coupling to all partners is equivalent to the interaction force when coupled through a stiffer and more damped elastic interaction with the average of the partners' hands. Solo trials, where the subjects are not connected, are characterized by zero force $f_i = 0$ for all $i$. A subject using the motor command of Equation 5 to move their hand according to Equation 4 in order to follow the target, whose motion is described by Equation 3, is described by the full state equation

$$z_{k+1} = \begin{bmatrix} 1 & \delta t \\ 0 & 1 \end{bmatrix} z_k + \frac{\delta t}{m}\begin{bmatrix} 0 \\ 1 \end{bmatrix}(u_k + f_k) - \mu_k, \qquad z \equiv x - x^*, \qquad (7)$$

which is equivalent to the difference of Equation 4 minus Equation 3.
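The stated equivalence between the sum of pairwise elastic forces and a single stiffer band attached to the partners' average position can be checked numerically; the per-band stiffness and damping values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N, kappa, beta = 4, 100.0, 2.0          # tetrad; per-band stiffness and damping
q = rng.normal(size=N)                  # hand positions
v = rng.normal(size=N)                  # hand velocities
i = 0                                   # individual of interest

# Left side: sum of the forces from the N-1 individual elastic bands
f_sum = sum(kappa * (q[j] - q[i]) + beta * (v[j] - v[i])
            for j in range(N) if j != i)

# Right side: one band of stiffness (N-1)*kappa and damping (N-1)*beta
# attached to the partners' average position and velocity
q_bar = np.mean([q[j] for j in range(N) if j != i])
v_bar = np.mean([v[j] for j in range(N) if j != i])
f_avg = (N - 1) * (kappa * (q_bar - q[i]) + beta * (v_bar - v[i]))
```

The two expressions agree to machine precision for any hand configuration, since averaging and summing over the same partners are related by the factor N−1.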
Models of interaction
Two models of interaction are described from the sensory information exchange between the partners. First, we describe the solo strategy of one subject tracking the target alone using only visual feedback. To generate the motor command according to Equation 5, the state $z$ describing the difference between the target and the hand is observed through

$$y_k = \begin{bmatrix} 1 & 0 \end{bmatrix} z_k + \omega_k, \qquad \omega_k \sim \mathcal{N}(0, \sigma_v^2), \qquad (8)$$

where the observation $y_k$ is corrupted by Gaussian visual noise $\omega_k$ with variance $\sigma_v^2$. The linear quadratic estimate is computed in discrete time using an iterative Kalman filter algorithm (Kalman, 1960). Sensory delay in vision and proprioception is compensated for by integrating Equation 7.
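An iterative Kalman filter of this kind can be sketched as follows. The dynamics assume a position-velocity state with noise on the velocity, and all variances are illustrative rather than the study's fitted values:

```python
import numpy as np

def kalman_track(observations, dt=1.0, q_var=1e-4, r_var=0.04):
    """Iterative Kalman filter estimating position and velocity from
    noisy position observations (a minimal sketch)."""
    A = np.array([[1.0, dt], [0.0, 1.0]])
    Q = np.array([[0.0, 0.0], [0.0, q_var]])   # process noise on velocity
    H = np.array([[1.0, 0.0]])                 # only position is observed
    x = np.zeros(2)
    P = np.eye(2)
    estimates = []
    for y in observations:
        # Predict
        x = A @ x
        P = A @ P @ A.T + Q
        # Update with the scalar position measurement
        S = float(H @ P @ H.T) + r_var
        K = (P @ H.T / S)                      # Kalman gain, shape (2, 1)
        innov = y - x[0]
        x = x + K[:, 0] * innov
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)

# Demo: a slowly drifting hidden target observed through heavy visual noise
rng = np.random.default_rng(3)
true_pos = np.cumsum(rng.normal(0.0, 0.01, 500))
obs = true_pos + rng.normal(0.0, 0.2, 500)
est = kalman_track(obs)
```

The filtered estimate tracks the hidden position with substantially less error than the raw noisy observations.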
Now that we have described how to visually track a target, what motion planning model could be used to track the randomly moving target whilst being physically coupled to multiple partners? In the no exchange model, each individual ignores the interaction forces and tracks the target using the visual information of the target's position, as in Equation 8, under the influence of the dynamics of the elastic bands described in Equation 6. The neuromechanical goal sharing model (Takagi et al., 2017; Takagi et al., 2018) proposes that, in dyads, both individuals extract a haptic estimate of the target's position from the interaction force with the partner, and optimally combine it with their own visual estimate of the target. Similarly, we propose that in collective interaction each individual uses the interaction force to extract a haptic estimate of the target position, referred to from here on as the collective target $\bar q^*_i$, such that the difference between the hand of individual $i$ and the target is observed using

$$y_{i,k} = \begin{bmatrix} q_{i,k} - q^*_k + \omega_{v,k} \\ q_{i,k} - \bar q^*_{i,k} + \omega_{h,k} \end{bmatrix}, \qquad (9)$$

where $\omega_v$ and $\omega_h$ denote the visual and haptic measurement noise, which extends the corresponding law of previous studies (Takagi et al., 2017; Takagi et al., 2018).
What is the variance of the noise that corrupts the haptic measurement of the collective target in the extended neuromechanical goal sharing model? In previous studies (Takagi et al., 2017; Takagi et al., 2018), the interaction force was linearly dependent on the partner's hand position, and so the noise in the haptic measurement of the partner's target was the partner's visual tracking noise. Similarly, in collective interaction, the collective target is estimated from the interaction force, which is shown in Equation 6 to be linearly dependent on the partners' average hand position $\bar q_i$. Let every $j$th partner's visual measurement of the target be corrupted by Gaussian visual tracking noise $\omega_j$ with variance $\sigma_j^2$. Then the difference between the hand and the collective target's position suffers Gaussian noise with variance

$$\mathrm{var}\!\left(\frac{1}{N-1}\sum_{j \neq i} \omega_j\right) = \frac{1}{(N-1)^2}\left(\sum_{j \neq i} \sigma_j^2 + \sum_{j \neq i}\,\sum_{l \neq i,\, l \neq j} \mathrm{cov}(\omega_j, \omega_l)\right). \qquad (10)$$

We can assume that these measurements between partners are independent, thus $\mathrm{cov}(\omega_j, \omega_l) = 0$ and $\sigma_j^2 = \bar\sigma_p^2$, and the variance in the measurement of the collective target

$$\sigma_h^2 = \frac{\bar\sigma_p^2}{N-1} \qquad (11)$$

is inversely proportional to the number of partners in the collective. Therefore, the measurement noise on the collective target will reduce in larger collectives even if the partners' average tracking noise $\bar\sigma_p^2$ is equivalent.
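This inverse scaling of the collective-target noise with the number of partners can be verified with a quick Monte Carlo check (the per-partner noise variance is illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2 = 0.09                     # each partner's visual tracking noise variance
trials = 200_000

# Empirical variance of the partners' averaged noise for dyads, triads, tetrads
var_of_mean = {}
for n_partners in (1, 2, 3):      # N - 1 partners for N = 2, 3, 4
    noise = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n_partners))
    var_of_mean[n_partners] = float(noise.mean(axis=1).var())
```

Each empirical variance matches the prediction sigma2 / (N - 1): averaging over more independent partners shrinks the haptic measurement noise proportionally.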
We further tested a modified version of the neuromechanical goal sharing model (see Figure 5C and 5D) with the intuitive, but incorrect, expectation that the interaction with multiple partners whose average error is $\bar e_p$ is identical to interacting with one partner whose error is equivalent to $\bar e_p$. In this scenario, the variance of the noise in the haptic measurement of the collective target would be equal to the average of the partners' visual tracking noise, that is $\sigma_h^2 = \bar\sigma_p^2$, with a denominator different from Equation 11. If this were the noise in the haptic estimate of the collective target, the improvement would not change with the group's size (as Figure 5D shows), in contrast to what is observed in the data.
How does an individual estimate the collective target of Equation 9 from the interaction force in Equation 6? The average of the partners would use a motor command similar to Equation 5,

$$\bar u_k = \bar L\,(\bar x^*_k - \bar x_k), \qquad (12)$$

where the average partner's state $\bar x$ is estimated through the force and the state of one's own hand, whilst the average partner's control law $\bar L$ from Equation 12 is identified by letting it evolve with noise according to

$$\bar L_{k+1} = \bar L_k + \xi_k, \qquad \xi_k \sim \mathcal{N}(0, \Sigma_L). \qquad (13)$$

Thus, the representation of the partner includes the state of their hand, their target, their control law, one's own hand and the elastic force to yield

$$\zeta_k = \left[\bar x_k,\; \bar x^*_k,\; \bar L_k,\; x_{i,k},\; f_{i,k}\right]^T. \qquad (14)$$

This state is described by the non-linear function $\zeta_{k+1} = \varphi(\zeta_k)$ that is linearized at every time step to be used for linear quadratic estimation (Ljung, 1979). $\bar L$ is identified by minimizing the squared estimation error of the observations of one's own target position $q^*$, hand position $q_i$ and force $f_i$. Once $\bar L$ is identified, the collective target $\bar q^*_i$ is estimated by minimizing the squared estimation error of the observations of the average partner's estimated control $\bar u$, one's own hand position $q_i$ and the force $f_i$.
The source separation limiting case is where every partner's target can be estimated and combined with one's own visual estimate of the target,

$$y_{i,k} = \begin{bmatrix} q_{i,k} - q^*_k + \omega_{i,k} \\ q_{i,k} - q^*_k + \omega_{1,k} \\ \vdots \\ q_{i,k} - q^*_k + \omega_{N-1,k} \end{bmatrix}. \qquad (15)$$

In this limiting case, observations of the partners' target position are directly provided to each individual in the collective, who integrates the partners' targets with their own visual estimate of the target, providing $N$ total observations of the target.
How compliance changes the quality of haptic information
In a previous study (Takagi et al., 2018), we found that the strength of the elastic coupling influenced the quality of the haptic information. With a weaker elastic band, the amplitude of the interaction force is smaller, reducing the signal-to-noise ratio when measuring it through haptics. Dyads, triads and tetrads experienced different magnitudes of force due to the increasing number of elastic bands that coupled them together. The dynamics experienced by dyads, triads and tetrads can be modelled as a single elastic band of 100 N/m, 200 N/m and 300 N/m, respectively, which connects each individual to the average position of the partners as shown in Equation 6.
Another eight subjects were recruited individually to carry out a haptic tracking control experiment. The target movement was the same as in Equation 1, but without visual feedback, that is, the target was invisible to the subject. Subjects tracked the haptic target for 15 s, and experienced five consecutive trials of each coupling stiffness in the order of 300 N/m, 200 N/m and 100 N/m. Figure 4—figure supplement 1 shows the results of this experiment, revealing that a stronger stiffness resulted in lower tracking errors. The values found in this experiment were used to alter the source separation and neuromechanical goal sharing models by changing the sensory noise in the haptic estimate of the collective target. The haptic noise from Equation 11 has some additive noise $\sigma_c^2$ due to the compliance of the elastic connection,

$$\sigma_h^2 = \frac{\bar\sigma_p^2}{N-1} + \sigma_c^2. \qquad (16)$$
In the haptic tracking experiment, we measured the additional error $e_\kappa$ in the tracking task that arises due to the softness of the elastic spring. These error values can be used to estimate $\sigma_c^2$, and so the haptic noise is described by

$$\sigma_h^2 = \frac{\bar\sigma_p^2}{N-1} + g(e_\kappa)^2, \qquad (17)$$

where $g$ is a function that converts tracking error to an equivalent sensory noise and $\kappa$ is the stiffness of the elastic spring. This function changes with respect to the process noise $Q$, as the performance of the linear quadratic estimator is directly related to this value, and with the controller strength, which affects how closely one can follow the estimated target trajectory.
To determine $g$, we simulated only the solo trials of the tracking task for each unique pair of process noise $Q$ and control gain $L$, and fitted a second-order polynomial that related the standard deviation of an individual's visual tracking noise $\sigma_v$ and tracking error $e$,

$$e = a\,\sigma_v^2 + b\,\sigma_v + c, \qquad (18)$$

where $a$, $b$ and $c$ are fitted parameters. Since we assume that the softness of the interaction results in additive sensory noise, the haptic noise of Equation 16 is modified to

$$\sigma_h^2 = \frac{g(\bar e_p)^2}{N-1} + g(e_\kappa)^2, \qquad (19)$$

where $\bar e_p$ is the average of the partners' tracking error and $e_\kappa$ is the additional error from the interaction stiffness $\kappa$, whose values were taken from the haptic tracking experiment.
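The conversion function g can be obtained by fitting the second-order polynomial on simulated (noise, error) pairs and then inverting it numerically. The calibration values below are illustrative stand-ins for the simulation output:

```python
import numpy as np

# Hypothetical calibration pairs from simulated solo trials:
# visual-noise std sigma versus resulting tracking error e
sigma = np.linspace(0.05, 0.5, 10)
e = 0.4 * sigma**2 + 0.8 * sigma + 0.02      # stand-in for simulation output

# Second-order polynomial fit e = a*sigma^2 + b*sigma + c
a, b, c = np.polyfit(sigma, e, 2)

def g(error):
    """Convert a tracking error to an equivalent sensory-noise std
    by taking the positive root of the fitted polynomial."""
    roots = np.roots([a, b, c - error])
    real = roots[np.isreal(roots)].real
    return float(real[real > 0].min())

# Example: equivalent noise for the partners' error plus the additional
# error due to a soft elastic coupling (values hypothetical)
e_partner, e_stiffness = 0.15, 0.03
sigma_equiv = g(e_partner + e_stiffness)
```

Because the polynomial is monotonically increasing over the calibrated range, the positive root gives a unique noise level for every achievable error.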
To remove the effects of the additional noise due to the elastic coupling on the predicted performance improvement (as described in the Discussion), the compliance noise in Equation 16 was set to $\sigma_c^2 = 0$.
Appendix 1
Sensitivity analysis
For each proposed model, we conducted a sensitivity analysis to compare their predictive power over a parameter space. As the length of the trial is sufficiently long and the tracking task is continuous, we used an infinite-horizon optimal controller with quadratic cost. Two parameters of the model are adjustable: $\alpha$, a multiplier for the Gaussian noise in the velocity, and $\rho$, a multiplier for the state cost in the controller. The control gain $L$ in Equation 5 will minimize the cost functional

$$J = \sum_{k=0}^{\infty}\left( z_k^T\, W\, z_k + u_k^T\, R\, u_k \right),$$

where $z$ is the combined state vector, $u$ is the motor command, the state cost $W$ (scaled by $\rho$) is positive semi-definite and the control cost $R$ is positive definite.
$\alpha$ is a scaling factor of the process noise that determines the frequency content of a simulated individual's movement. $\rho$ modulates the strength of the individual controllers. $\alpha$ was bound within a range such that the trajectories were not jerky, and $\rho$ was varied within a stable controllable range. These parameters were kept the same for all individuals in a collective.
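An infinite-horizon discrete-time LQ gain of this kind can be computed by Riccati iteration; the system matrices and cost weights below are illustrative, and production code would typically call scipy's `solve_discrete_are` instead:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=10_000, tol=1e-12):
    """Infinite-horizon discrete LQR gain via fixed-point Riccati iteration
    (a minimal sketch)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_next = Q + A.T @ P @ (A - B @ K)
        if np.max(np.abs(P_next - P)) < tol:
            P = P_next
            break
        P = P_next
    return K

dt, m = 0.01, 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])        # point-mass dynamics
B = np.array([[0.0], [dt / m]])
rho = 1.0                                    # state-cost multiplier
Q = rho * np.eye(2)                          # state cost
R = np.array([[1e-4]])                       # control cost
K = dlqr(A, B, Q, R)

# The optimal closed loop must be stable: spectral radius below one
eigs = np.linalg.eigvals(A - B @ K)
```

Varying `rho` here plays the role described in the text: a larger state-cost multiplier yields stiffer control gains, while the closed loop stays stable throughout the admissible range.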
The sum of squared errors (SSE) between the fit from the data and from the simulation was used as a metric of predictive power. The neuromechanical goal sharing model exhibits less error than the no exchange model at the parameters best fitting the experimental data (Figure 4—figure supplement 2). The no exchange model appears to fit the data best for large $\alpha$, where trajectories were less smooth. However, it always yielded linear, first-order improvement curves as a function of the partners' relative error, in contrast to the data, which exhibited second- and third-order components. Thus, the neuromechanical goal sharing model, which explained both the curvature of the improvement curve and the effect of the collective's size on the improvement, best explained the empirical data.
Data availability
All data generated or analysed during this study are included in the manuscript and supporting files.
References
- Basdogan et al. (2000). An experimental study on the role of touch in shared virtual environments. ACM Transactions on Computer-Human Interaction 7:443–460. https://doi.org/10.1145/365058.365082
- Hastie and Kameda (2005). The robust beauty of majority rules in group decisions. Psychological Review 112:494–508. https://doi.org/10.1037/0033-295X.112.2.494
- Hawley et al. (2004). The benefit of binaural hearing in a cocktail party: effect of location and type of interferer. The Journal of the Acoustical Society of America 115:833–843. https://doi.org/10.1121/1.1639908
- Kalman (1960). A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82:35–45. https://doi.org/10.1115/1.3662552
- Ljung (1979). Asymptotic behavior of the extended Kalman filter as a parameter estimator for linear systems. IEEE Transactions on Automatic Control 24:36–50. https://doi.org/10.1109/TAC.1979.1101943
- Malysz and Sirouspour (2013). Task performance evaluation of asymmetric semiautonomous teleoperation of mobile twin-arm robotic manipulators. IEEE Transactions on Haptics 6:484–495. https://doi.org/10.1109/TOH.2013.23
- Parise and Ernst (2016). Correlation detection as a general mechanism for multisensory integration. Nature Communications 7:11543. https://doi.org/10.1038/ncomms11543
- Reed and Peshkin (2008). Physical collaboration of human-human and human-robot teams. IEEE Transactions on Haptics 1:108–120. https://doi.org/10.1109/TOH.2008.13
- Sebanz et al. (2006). Joint action: bodies and minds moving together. Trends in Cognitive Sciences 10:70–76. https://doi.org/10.1016/j.tics.2005.12.009
- Takagi et al. (2018). Haptic communication between humans is tuned by the hard or soft mechanics of interaction. PLOS Computational Biology 14:e1005971. https://doi.org/10.1371/journal.pcbi.1005971
- van der Wel et al. (2011). Let the force be with us: dyads exploit haptic coupling for coordination. Journal of Experimental Psychology: Human Perception and Performance 37:1420–1431. https://doi.org/10.1037/a0022337
- Webb (1991). Task-related verbal interaction and mathematics learning in small groups. Journal for Research in Mathematics Education 22:366–389. https://doi.org/10.2307/749186
Article and author information
Author details
Funding
Horizon 2020 Framework Programme (ICT-644727)
- Atsushi Takagi
- Etienne Burdet
Japan Society for the Promotion of Science (17H00874)
- Masaya Hirashima
- Daichi Nozaki
Japan Society for the Promotion of Science (JP18K1813)
- Masaya Hirashima
- Daichi Nozaki
Seventh Framework Programme (ICT-601003)
- Etienne Burdet
Seventh Framework Programme (ICT-611626)
- Etienne Burdet
Engineering and Physical Sciences Research Council (EP/NO29003/1)
- Etienne Burdet
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
This work was partially supported by JSPS KAKENHI grant numbers JP18K18130 and 17H00874, and by EU-FP7 grants ICT-601003 BALANCE, ICT-611626 SYMBITRON, EU-H2020 ICT-644727 COGIMON, UK EPSRC MOTION grant EP/NO29003/1.
Ethics
Human subjects: The study was conducted according to the Declaration of Helsinki, and was approved by the ethics committee of the Graduate School of Education at the University of Tokyo (reference number 14-75). Each of the 72 subjects gave a written consent prior to starting with the experiments.
Copyright
© 2019, Takagi et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.