A human subcortical network underlying social avoidance revealed by risky economic choices
Abstract
Social interactions have a major impact on well-being. While many individuals actively seek social situations, others avoid them, at great cost to their private and professional lives. The neural mechanisms underlying individual differences in social approach or avoidance tendencies are poorly understood. Here we estimated people's subjective value of engaging in a social situation. On each trial, participants with varying degrees of social anxiety chose between an interaction with a human partner providing social feedback and a monetary amount. With increasing social anxiety, the subjective value of social engagement decreased; amygdala BOLD response during decision-making and when experiencing social feedback increased; ventral striatum BOLD response to positive social feedback decreased; and connectivity between these regions during decision-making increased. Amygdala response was negatively related to the subjective value of social engagement. These findings suggest a relation between trait social anxiety/social avoidance and activity in a subcortical network during social decision-making.
Data availability
Data are freely available on Dryad, doi:10.5061/dryad.jq44b1r
- Data from: A human subcortical network underlying social avoidance revealed by an econometric task. Dryad Digital Repository, doi:10.5061/dryad.jq44b1r.
Article and author information
Author details
Funding
The authors declare that there was no funding for this work.
Reviewing Editor
- Christian Büchel, University Medical Center Hamburg-Eppendorf, Germany
Ethics
Human subjects: All subjects gave written informed consent and the ethics committee of the Medical Faculty of the University of Bonn, Germany approved all studies (Approval number: 098/18).
Version history
- Received: January 16, 2019
- Accepted: July 21, 2019
- Accepted Manuscript published: July 22, 2019 (version 1)
- Version of Record published: August 21, 2019 (version 2)
Copyright
© 2019, Schultz et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 2,343 views
- 300 downloads
- 11 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.