IMS 8, 10, and 5, together with control participants, were asked to imitate pictures of an actor’s face expressing one of six facial expressions (anger, disgust, fear, happiness, sadness, and surprise). Their imitations were video-recorded and then analyzed offline with OpenFace 2.1.0, an open-source deep-learning facial recognition system that automatically detects the presence and intensity of facial action units (AUs) (Amos et al., 2016; Baltrusaitis et al., 2016). For each participant and facial expression, we first computed the average intensity of 12 AUs relevant to facial expressions (AU01, 02, 04, 05, 06, 07, 09, 12, 15, 20, 23, and 26). Then, to quantify the similarity between the facial expressions executed by each participant, we correlated the 12-AU intensity profiles of the different facial expressions with one another. This yielded an objective measure of how similar the intensities of the relevant AUs were across the facial expressions executed by each IMS. The results indicate that AU intensities during the execution of the different facial expressions were highly correlated in the three IMS, and much more so than in typical control participants. This supports the claim that although IMS 8 and 10 could execute some subtle facial movements, these movements were largely unspecific.
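The similarity analysis described above can be sketched as follows. This is a minimal illustration, not the authors' actual analysis script: it assumes a participant's data are arranged as a 6 × 12 matrix of average AU intensities (one row per expression, one column per AU), with random placeholder values standing in for real OpenFace output.

```python
import numpy as np

# Hypothetical data layout: rows are the six facial expressions,
# columns are the 12 action units (AU01 ... AU26). The random values
# are placeholders for a participant's average OpenFace AU intensities.
rng = np.random.default_rng(0)
expressions = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
au_intensity = rng.random((6, 12))

# Pearson correlation between every pair of expression profiles:
# each cell of the 6x6 matrix compares one expression's 12-AU intensity
# profile with another expression's profile.
corr = np.corrcoef(au_intensity)

# The mean off-diagonal correlation summarizes how (un)specific the
# participant's expressions are: values near 1 indicate that the AU
# profiles barely differ across expressions.
off_diag = corr[~np.eye(len(expressions), dtype=bool)]
mean_similarity = off_diag.mean()
print(corr.shape, mean_similarity)
```

On this logic, a higher mean inter-expression correlation for the IMS than for controls would indicate less differentiated facial configurations, which is the pattern the paragraph reports.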