Serotonergic neurons signal reward and punishment on multiple timescales
Abstract
Serotonin's function in the brain is unclear. One challenge in testing the numerous hypotheses about serotonin's function has been observing the activity of identified serotonergic neurons in animals engaged in behavioral tasks. We recorded the activity of dorsal raphe neurons while mice experienced a task in which rewards and punishments varied across blocks of trials. We 'tagged' serotonergic neurons with the light-sensitive protein channelrhodopsin-2 and identified them based on their responses to light. We found three main features of serotonergic neuron activity: (1) a large fraction of serotonergic neurons modulated their tonic firing rates over the course of minutes during reward versus punishment blocks; (2) most were phasically excited by punishments; and (3) a subset was phasically excited by reward-predicting cues. By contrast, dopaminergic neurons did not show firing rate changes across blocks of trials. These results suggest that serotonergic neurons signal information about reward and punishment on multiple timescales.
Article and author information
Ethics
Animal experimentation: All surgical and experimental procedures were in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and approved by the Harvard or Johns Hopkins Institutional Animal Care and Use Committees.
Copyright
© 2015, Cohen et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 9,994 views
- 2,881 downloads
- 290 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Further reading
-
- Neuroscience
During rest and sleep, memory traces replay in the brain. The dialogue between brain regions during replay is thought to stabilize labile memory traces for long-term storage. However, because replay is an internally driven, spontaneous phenomenon, it does not have a ground truth - an external reference that can validate whether a memory has truly been replayed. Instead, replay detection is based on the similarity between the sequential neural activity comprising the replay event and the corresponding template of neural activity generated during active locomotion. If the statistical likelihood of observing such a match by chance is sufficiently low, the candidate replay event is inferred to be replaying that specific memory. However, without the ability to evaluate whether replay detection methods are successfully detecting true events and correctly rejecting non-events, the evaluation and comparison of different replay methods is challenging. To circumvent this problem, we present a new framework for evaluating replay, tested using hippocampal neural recordings from rats exploring two novel linear tracks. Using this two-track paradigm, our framework selects replay events based on their temporal fidelity (sequence-based detection), and evaluates the detection performance using each event’s track discriminability, where sequenceless decoding across both tracks is used to quantify whether the track replaying is also the most likely track being reactivated.
-
- Neuroscience
Organizing the continuous stream of visual input into categories like places or faces is important for everyday function and social interactions. However, it is unknown when neural representations of these and other visual categories emerge. Here, we used steady-state evoked potential electroencephalography to measure cortical responses in infants at 3–4 months, 4–6 months, 6–8 months, and 12–15 months, when they viewed controlled, gray-level images of faces, limbs, corridors, characters, and cars. We found that distinct responses to these categories emerge at different ages. Reliable brain responses to faces emerge first, at 4–6 months, followed by limbs and places around 6–8 months. Between 6 and 15 months response patterns become more distinct, such that a classifier can decode what an infant is looking at from their brain responses. These findings have important implications for assessing typical and atypical cortical development as they not only suggest that category representations are learned, but also that representations of categories that may have innate substrates emerge at different times during infancy.