The precise pattern of motor neuron (MN) activation is essential for the execution of motor actions; however, the molecular mechanisms that give rise to specific patterns of MN activity are largely unknown. Phrenic MNs integrate multiple inputs to mediate inspiratory activity during breathing and are constrained to fire in a pattern that drives efficient diaphragm contraction. We show that Hox5 transcription factors shape phrenic MN output by connecting phrenic MNs to inhibitory pre-motor neurons. Hox5 genes establish phrenic MN organization and dendritic topography through the regulation of phrenic-specific cell adhesion programs. In the absence of Hox5 genes, phrenic MN firing becomes asynchronous and erratic due to loss of phrenic MN inhibition. Strikingly, mice lacking Hox5 genes in MNs exhibit abnormal respiratory behavior throughout their lifetime. Our findings support a model where MN-intrinsic transcriptional programs shape the pattern of motor output by orchestrating distinct aspects of MN connectivity.
Sequencing data have been deposited in GEO under accession code GSE138085
Gene expression changes in cervical motor neuron transcriptomes after loss of Hox5 transcription factors. NCBI Gene Expression Omnibus, GSE138085.
- Polyxeni Philippidou
- Alicia N Vagnozzi
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Animal experimentation: All animal procedures performed in this study were in strict accordance with the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Case Western Reserve University School of Medicine Institutional Animal Care and Use Committee (Animal Welfare Assurance Number A3145-01, protocol #: 2015-0180).
- Anne E West, Duke University School of Medicine, United States
© 2020, Vagnozzi et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
The computational principles underlying attention allocation in complex goal-directed tasks remain elusive. Goal-directed reading, that is, reading a passage to answer a question in mind, is a common real-world task that strongly engages attention. Here, we investigate what computational models can explain attention distribution in this complex task. We show that the reading time on each word is predicted by the attention weights in transformer-based deep neural networks (DNNs) optimized to perform the same reading task. Eye tracking further reveals that readers separately attend to basic text features and question-relevant information during first-pass reading and rereading, respectively. Similarly, text features and question relevance separately modulate attention weights in shallow and deep DNN layers. Furthermore, when readers scan a passage without a question in mind, their reading time is predicted by DNNs optimized for a word prediction task. Therefore, we offer a computational account of how task optimization modulates attention distribution during real-world reading.
Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time lag. This allows two questions to be asked about the predictor variables: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels.
We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different cognitive levels to brain responses through computational models with appropriate linking hypotheses.
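The "multiple regression with a time dimension" idea above can be sketched in a few lines of NumPy. The toy below is a minimal illustration, not the Eelbrain implementation (which uses a boosting estimator): it builds a time-lagged design matrix from two hypothetical predictors, simulates a response from known TRFs, and recovers the multivariate TRF by ordinary least squares. All variable names and the synthetic "true" TRF shapes are invented for the example.

```python
# Minimal sketch of mTRF estimation as time-lagged multiple regression.
# Toy data only; real analyses (e.g. with Eelbrain) use regularized
# estimators, but the underlying linear model is the same.
import numpy as np

def lag_matrix(x, n_lags):
    """Stack time-lagged copies of predictor x into a design matrix.

    Column k holds x delayed by k samples, so the fitted weights form
    a TRF: the response to the predictor as a function of time lag."""
    n = len(x)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:n - k]
    return X

rng = np.random.default_rng(0)
n_samples, n_lags = 2000, 20

# Two hypothetical time-continuous predictors (e.g. an acoustic
# envelope and a word-level feature) with different "true" TRFs.
pred1 = rng.standard_normal(n_samples)
pred2 = rng.standard_normal(n_samples)
trf1 = np.exp(-np.arange(n_lags) / 5.0)   # fast-decaying response
trf2 = np.sin(np.arange(n_lags) / 3.0)    # oscillatory response

# Simulated brain signal: sum of both responses plus noise.
X = np.hstack([lag_matrix(pred1, n_lags), lag_matrix(pred2, n_lags)])
y = X @ np.concatenate([trf1, trf2]) + 0.1 * rng.standard_normal(n_samples)

# Estimate the multivariate TRF jointly for both predictors.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
est1, est2 = w[:n_lags], w[n_lags:]
```

Because both predictors enter one design matrix, their TRFs are estimated jointly, which is what lets correlated predictor variables be disentangled and compared, as described above.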