Cold-induced hyperphagia requires AgRP-neuron activation in mice
Abstract
To maintain energy homeostasis during cold exposure, the increased energy demands of thermogenesis must be counterbalanced by increased energy intake. To investigate the neurobiological mechanisms underlying this cold-induced hyperphagia, we asked whether agouti-related peptide (AgRP) neurons are activated when animals are placed in a cold environment and, if so, whether this response is required for the associated hyperphagia. We report that AgRP-neuron activation occurs rapidly upon acute cold exposure, as do increases in both energy expenditure and energy intake, suggesting that the mere perception of cold is sufficient to engage each of these responses. We further report that silencing of AgRP neurons selectively blocks the increase in food intake induced by cold exposure but has no effect on energy expenditure. Together, these findings establish a physiologically important role for AgRP neurons in the hyperphagic response to cold exposure.
Data availability
Photometry data have been deposited in Dryad (DOI: https://doi.org/10.5061/dryad.0p2ngf208). Individual source data files are associated with individual figures.
Article and author information
Author details
Funding
National Institutes of Health (DK089056)
- Gregory J Morton
National Institutes of Health (T32 GM095421)
- Chelsea L Faber
National Institutes of Health (T32 HL007028)
- Jennifer Deem
Diabetes Research Center
- Jennifer Deem
American Diabetes Association (ADA 1-19-PDF-103)
- Jennifer Deem
National Institutes of Health (DK083042)
- Michael W Schwartz
National Institutes of Health (DK101997)
- Michael W Schwartz
National Institutes of Health (R37 DA033396)
- Michael Bruchas
National Institutes of Health (R01DA24908)
- Richard D Palmiter
National Institutes of Health (P30 DA048736)
- Michael Bruchas
National Institutes of Health (DK035816)
- Gregory J Morton
Diabetes Research Center (DK17047)
- Gregory J Morton
National Institutes of Health (F31 DK113673)
- Chelsea L Faber
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All of the animals were handled according to a protocol approved by the institutional animal care and use committee (IACUC) of the University of Washington (#2456-06). All surgery was performed under isoflurane anesthesia, and every effort was made to minimize suffering.
Copyright
© 2020, Deem et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 3,102 views
- 487 downloads
- 35 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Further reading
-
- Neuroscience
Biological memory networks are thought to store information by experience-dependent changes in the synaptic connectivity between assemblies of neurons. Recent models suggest that these assemblies contain both excitatory and inhibitory neurons (E/I assemblies), resulting in co-tuning and precise balance of excitation and inhibition. To understand computational consequences of E/I assemblies under biologically realistic constraints we built a spiking network model based on experimental data from telencephalic area Dp of adult zebrafish, a precisely balanced recurrent network homologous to piriform cortex. We found that E/I assemblies stabilized firing rate distributions compared to networks with excitatory assemblies and global inhibition. Unlike classical memory models, networks with E/I assemblies did not show discrete attractor dynamics. Rather, responses to learned inputs were locally constrained onto manifolds that ‘focused’ activity into neuronal subspaces. The covariance structure of these manifolds supported pattern classification when information was retrieved from selected neuronal subsets. Networks with E/I assemblies therefore transformed the geometry of neuronal coding space, resulting in continuous representations that reflected both relatedness of inputs and an individual’s experience. Such continuous representations enable fast pattern classification, can support continual learning, and may provide a basis for higher-order learning and cognitive computations.
-
- Neuroscience
Data-driven models of neurons and circuits are important for understanding how the properties of membrane conductances, synapses, dendrites, and the anatomical connectivity between neurons generate the complex dynamical behaviors of brain circuits in health and disease. However, the inherent complexity of these biological processes makes the construction and reuse of biologically detailed models challenging. A wide range of tools have been developed to aid their construction and simulation, but differences in design and internal representation act as technical barriers to those who wish to use data-driven models in their research workflows. NeuroML, a model description language for computational neuroscience, was developed to address this fragmentation in modeling tools. Since its inception, NeuroML has evolved into a mature community standard that encompasses a wide range of model types and approaches in computational neuroscience. It has enabled the development of a large ecosystem of interoperable open-source software tools for the creation, visualization, validation, and simulation of data-driven models. Here, we describe how the NeuroML ecosystem can be incorporated into research workflows to simplify the construction, testing, and analysis of standardized models of neural systems, and supports the FAIR (Findability, Accessibility, Interoperability, and Reusability) principles, thus promoting open, transparent and reproducible science.