A consensus guide to capturing the ability to inhibit actions and impulsive behaviors in the stop-signal task
Abstract
Response inhibition is essential for navigating everyday life. Its derailment is considered integral to numerous neurological and psychiatric disorders, and more generally, to a wide range of behavioral and health problems. Response-inhibition efficiency furthermore correlates with treatment outcome in some of these conditions. The stop-signal task is an essential tool to determine how quickly response inhibition is implemented. Despite its apparent simplicity, there are many features (ranging from task design to data analysis) that vary across studies in ways that can easily compromise the validity of the obtained results. Our goal is to facilitate a more accurate use of the stop-signal task. To this end, we provide twelve easy-to-implement consensus recommendations and point out the problems that can arise when these are not followed. Furthermore, we provide user-friendly open-source resources intended to inform statistical-power considerations, facilitate the correct implementation of the task, and assist in proper data analysis.
Data availability
The code used for the simulations and all simulated data can be found on Open Science Framework (https://osf.io/rmqaw/).
- Race model simulations to determine estimation bias and reliability of SSRT estimates. Open Science Framework, DOI 10.17605/OSF.IO/JWSF9.
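The dataset above concerns estimation bias and reliability of stop-signal reaction time (SSRT) estimates. For readers unfamiliar with how SSRT is typically computed, the following is a minimal sketch of the widely used integration-based estimator under the independent race model, with go omissions replaced by the maximum go RT. The function name and interface are illustrative, not taken from the article's own code:

```python
import numpy as np

def estimate_ssrt(go_rts, p_respond_signal, mean_ssd):
    """Illustrative integration-method SSRT estimate (not the article's code).

    go_rts            : array of go-trial RTs; omissions coded as NaN
    p_respond_signal  : proportion of stop-signal trials with a response
    mean_ssd          : mean stop-signal delay (same units as RTs)
    """
    rts = np.asarray(go_rts, dtype=float)
    # Replace go omissions with the maximum observed RT before integration
    rts = np.where(np.isnan(rts), np.nanmax(rts), rts)
    rts = np.sort(rts)
    # The nth fastest go RT marks the point where p(respond | signal)
    # of the go-RT distribution has been "integrated"
    n = max(1, int(np.ceil(p_respond_signal * len(rts))))
    nth_rt = rts[n - 1]
    return nth_rt - mean_ssd
```

For example, with go RTs of 400–600 ms, a response rate of 0.5 on stop trials, and a mean SSD of 200 ms, the estimate is the median go RT minus 200 ms.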
Article and author information
Funding
H2020 European Research Council (769595)
- Frederick Verbruggen
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Reviewing Editor
- David Badre, Brown University, United States
Publication history
- Received: February 22, 2019
- Accepted: April 9, 2019
- Accepted Manuscript published: April 29, 2019 (version 1)
- Version of Record published: May 23, 2019 (version 2)
Copyright
© 2019, Verbruggen et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.