Peer review process
Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, public reviews, and a provisional response from the authors.
Read more about eLife’s peer review process.

Editors
- Reviewing Editor: Gordon Berman, Emory University, Atlanta, United States of America
- Senior Editor: Kate Wassum, University of California, Los Angeles, Los Angeles, United States of America
Reviewer #1 (Public review):
Summary:
In this manuscript, Nührenberg et al. describe vassi, a Python package for mutually exclusive classification of social behaviors. This package imports and organizes trajectory data and manual behavior labels, and then computes feature representations for use with available Python machine learning-based classification tools. These representations include all possible dyadic interactions within an animal group, enabling classification of social behaviors between pairs of animals at a distance. The authors validate the package by reproducing behavior classification performance on a previously published dyadic mouse dataset, and they demonstrate its use on a novel cichlid group dataset. The authors have created a package that is agnostic to the tracking method and reduces the barrier of data preparation for machine learning, which can be a stumbling block for non-experts. The package also evaluates classification performance with helpful visualizations and provides a tool for inspecting behavior classification results.
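As a rough sketch of the dyadic representation idea (this is not vassi's API; the individual IDs and the single distance feature are hypothetical), computing features for every directed actor-receiver pair in a group might look like the following.

```python
# Hypothetical sketch (not vassi's actual API; individual IDs and the single
# distance feature are invented): per-frame dyadic features for every
# directed actor-receiver pair in a group.
from itertools import permutations

import numpy as np

# positions: individual ID -> array of shape (n_frames, 2)
positions = {
    "fish_a": np.random.rand(1000, 2),
    "fish_b": np.random.rand(1000, 2),
    "fish_c": np.random.rand(1000, 2),
}

dyadic_features = {}
for actor, receiver in permutations(positions, 2):
    # one feature for brevity: frame-wise distance from actor to receiver
    dist = np.linalg.norm(positions[actor] - positions[receiver], axis=1)
    dyadic_features[(actor, receiver)] = dist[:, None]  # (n_frames, n_features)

print(len(dyadic_features))  # 6 directed pairs for 3 individuals
```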
Strengths:
(1) A major contribution of this paper is a framework that extends social behavior classification to groups of animals, such that the actor and receiver can be any members of the group, regardless of distance. To implement this framework, the authors created a Python package and an extensive documentation site, which is greatly appreciated. The package should be useful to researchers with knowledge of Python, virtual environments, and machine learning, as it relies on scripts rather than a GUI, and it may facilitate the development of new machine learning algorithms for behavior classification.
(2) The authors include modules for correctly creating training and test sets and for evaluating classifier performance, which is extremely useful (a generic illustration of the splitting principle follows this list). Beyond evaluation, they have created a tool for manual review and correction of annotations, and they demonstrate its utility for rare behaviors, where correct classification is difficult but the number of examples to review is manageable.
(3) The authors provide well-commented step-by-step instructions for the use of the package in the documentation.
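The splitting principle referenced above can be illustrated generically (this is not vassi's implementation; the array shapes and group IDs below are made up): held-out recordings, rather than random frames, should form the test set so that no trial contributes to both sides of the split.

```python
# Generic illustration of group-aware splitting (not vassi's implementation;
# shapes and group IDs are made up): rows from the same recording never
# appear in both the training and the test set.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

X = np.random.rand(6000, 10)             # dyadic feature rows
y = np.random.randint(0, 4, size=6000)   # behavior labels
groups = np.repeat(np.arange(6), 1000)   # recording ID for each row

splitter = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups))
assert set(groups[train_idx]).isdisjoint(groups[test_idx])
```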
Weaknesses:
(1) The classification algorithm itself was not the subject of the paper: the authors used off-the-shelf methods, and they only reproduced, rather than improved upon, previously published results on the CALMS21 dyadic dataset. Furthermore, the results from the novel cichlid fish dataset, including a macro F1 score of 0.45, did not compellingly show that the workflow described in the paper produces useful behavioral classifications for groups of interacting animals performing rare social behaviors. I commend the authors for transparently reporting the results with both the macro F1 scores and the confusion matrices for the classifiers. The mutually exclusive, all-vs-all annotation scheme for rare behaviors results in extremely imbalanced datasets, such that categorical classification becomes a difficult problem. To try to address this performance limitation, the authors built a validation tool that allows the user to manually review the behavior predictions.
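A toy example with made-up labels and standard scikit-learn metrics shows why imbalance makes macro F1 so punishing: every class contributes equally to the average, so poor recall on one rare behavior drags the score down even when the dominant background class is handled almost perfectly.

```python
# Toy illustration (made-up labels) of why macro F1 is punishing on imbalanced
# data: each class contributes equally, so missing most instances of one rare
# behavior pulls the macro average down even when the dominant background
# class is classified almost perfectly.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
y_true = np.array([0] * 950 + [1] * 50)          # 0 = background, 1 = rare behavior
y_pred = y_true.copy()
y_pred[rng.choice(950, 30, replace=False)] = 1   # a few false alarms
y_pred[950:990] = 0                              # most rare events missed

print(f1_score(y_true, y_pred, average="macro"))     # dragged down by the rare class
print(f1_score(y_true, y_pred, average="weighted"))  # looks much better
```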
(2) The pipeline makes a few strong assumptions that should be made more explicit in the paper.
First, the behavioral classifiers are mutually exclusive and one-to-one. An individual animal can only be performing one behavior at any given time, and that behavior has only one recipient. These assumptions are implicit in how the package creates the data structure, and should be made clearer to the reader. Additionally, the authors emphasize that they have extended behavior classification to animal groups, but more accurately, they have extended behavioral classification to all possible pairs within a group.
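To make the one-to-one assumption concrete, here is a hypothetical sketch (not the package's code) of how scores over recipients and behaviors collapse to a single prediction per actor and frame under a mutually exclusive scheme.

```python
# Hypothetical sketch (not the package's code) of what "mutually exclusive and
# one-to-one" implies: classifier scores over recipients and behaviors collapse
# to a single (recipient, behavior) prediction per actor per frame.
import numpy as np

n_frames, n_recipients, n_behaviors = 100, 3, 4
scores = np.random.rand(n_frames, n_recipients, n_behaviors)  # classifier outputs

best = scores.reshape(n_frames, -1).argmax(axis=1)
recipient_idx, behavior_idx = np.unravel_index(best, (n_recipients, n_behaviors))
# one recipient and one behavior per frame; concurrent or multi-recipient
# behaviors cannot be represented under this scheme
```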
Second, the package expects comprehensive behavior labeling of the tracking data as input. Any frames not manually labeled are assumed to be the background category. Additionally, the package will interpolate through any missing segments of tracking data and assign the background behavioral category to those trajectory segments as well. The effects of these assumptions are not explored in the paper, which may limit the utility of this workflow for naturalistic environments.
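A minimal illustration of what these defaults imply, using made-up data and plain pandas rather than the package itself: interpolated tracking gaps and unlabeled frames both end up contributing background examples.

```python
# Illustration only (made-up data): tracking gaps are filled by interpolation
# and unlabeled frames default to "background", so both kinds of missing
# information end up as background training examples.
import numpy as np
import pandas as pd

frames = pd.DataFrame({
    "x": [0.0, np.nan, np.nan, 3.0, 4.0],        # tracking gap at frames 1-2
    "label": [None, None, "chase", None, None],  # sparse manual annotations
})

frames["x"] = frames["x"].interpolate()                 # gap filled by interpolation
frames["label"] = frames["label"].fillna("background")  # unlabeled -> background
print(frames)
```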
(3) Finally, the authors describe the package as a tool for biologists and ethologists, but the level of Python and machine learning expertise required to develop a novel behavior classification workflow with the package may be beyond the ability of many biologists. More accessible example notebooks would help address this problem.
Reviewer #2 (Public review):
Summary:
The authors present a novel supervised behavioral analysis pipeline (vassi), which extends beyond previously available packages with its built-in support for groups of any number of organisms. Importantly, this program also allows models to be iteratively improved through revised behavioral annotation.
Strengths:
vassi's support of groups of any number of animals is a major advancement for those studying collective social behavior. Additionally, the built-in ability to choose different base models and iteratively train them is an important advancement beyond current pipelines. vassi is also producing behavioral classifiers with similar precision/recall metrics for dyadic behavior as currently published packages using similar algorithms.
Weaknesses:
vassi's performance on group behaviors is potentially too low to proceed with (F1 roughly 0.2 to 0.6). Different sources have slightly different definitions, but an F1 score of 0.7 or 0.8 is often considered good, while anything lower than 0.5 can typically be considered bad. There has been no published consensus within behavioral neuroscience (that I know of) on a minimum F1 score for use. Collective behavioral research is extremely challenging to perform due to hand annotation times, and there needs to be a discussion in the field as to the trade-off between throughput and accuracy before these scores can be either used or thrown out the door. It would also be useful to see the authors perform a few rounds of iterative corrections on these classifiers to see if performance is improved.
While the interaction networks in Figure 2b-c look visually similar in terms of which pairs interact, the weights of the interactions appear to be quite different between hand and automated annotations. This could lead to incorrect social network metrics, which are increasingly popular in collective social behavior analysis. It would be very helpful to see calculated social network analysis (SNA) metrics for hand versus machine scoring to determine whether vassi is reliable for these datasets.
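The comparison suggested here could be sketched roughly as follows (the edge weights are invented for illustration and not taken from the paper): build weighted directed networks from the hand and automated annotation counts and compare a standard metric, such as weighted out-degree, across individuals.

```python
# Sketch of the suggested check (edge weights invented for illustration, not
# taken from the paper): build weighted directed networks from hand and
# automated annotation counts, then compare a standard metric per individual.
import networkx as nx
from scipy.stats import spearmanr

hand = {("a", "b"): 12, ("a", "c"): 3, ("b", "c"): 7}
auto = {("a", "b"): 20, ("a", "c"): 1, ("b", "c"): 9}

def weighted_out_degree(edges):
    g = nx.DiGraph()
    g.add_weighted_edges_from((u, v, w) for (u, v), w in edges.items())
    return [g.out_degree(n, weight="weight") for n in sorted(g.nodes)]

rho, p = spearmanr(weighted_out_degree(hand), weighted_out_degree(auto))
print(rho, p)  # agreement in weighted out-degree between hand and automated networks
```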