Audiovisual task switching rapidly modulates sound encoding in mouse auditory cortex

  1. Ryan James Morrill
  2. James Bigelow
  3. Jefferson DeKloe
  4. Andrea R Hasenstaub (corresponding author)
  1. University of California, San Francisco, United States

Abstract

In everyday behavior, sensory systems are in constant competition for attentional resources, but the cellular and circuit-level mechanisms of modality-selective attention remain largely uninvestigated. We conducted translaminar recordings in mouse auditory cortex (AC) during an audiovisual (AV) attention shifting task. Attending to sound elements in an AV stream reduced both pre-stimulus and stimulus-evoked spiking activity, primarily in deep layer neurons and neurons without spectrotemporal tuning. Despite reduced spiking, stimulus decoder accuracy was preserved, suggesting improved sound encoding efficiency. Similarly, task-irrelevant mapping stimuli during intertrial intervals evoked fewer spikes without impairing stimulus encoding, indicating that attentional modulation generalized beyond training stimuli. Importantly, spiking reductions predicted trial-to-trial behavioral accuracy during auditory attention, but not visual attention. Together, these findings suggest auditory attention facilitates sound discrimination by filtering sound-irrelevant background activity in AC, and that the deepest cortical layers serve as a hub for integrating extramodal contextual information.

Data availability

Physiology and behavior data supporting all figures in this manuscript have been submitted to Dryad with DOI: 10.7272/Q6BV7DVM.

Article and author information

Author details

  1. Ryan James Morrill

    Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-8592-4549
  2. James Bigelow

    Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
    Competing interests
    The authors declare that no competing interests exist.
  3. Jefferson DeKloe

    Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
    Competing interests
    The authors declare that no competing interests exist.
  4. Andrea R Hasenstaub

    Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
    For correspondence
    andrea.hasenstaub@ucsf.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-3998-5073

Funding

National Institutes of Health (R01NS116598)

  • Andrea R Hasenstaub

National Institutes of Health (R01DC014101)

  • Andrea R Hasenstaub

National Science Foundation (GRFP)

  • Ryan James Morrill

Hearing Research Incorporated

  • Andrea R Hasenstaub

Klingenstein Foundation

  • Andrea R Hasenstaub

Coleman Memorial Fund

  • Andrea R Hasenstaub

National Institutes of Health (F32DC016846)

  • James Bigelow

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Animal experimentation: All experiments were approved by the Institutional Animal Care and Use Committee at the University of California, San Francisco.

Copyright

© 2022, Morrill et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 1,506 views
  • 331 downloads
  • 4 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

  1. Ryan James Morrill
  2. James Bigelow
  3. Jefferson DeKloe
  4. Andrea R Hasenstaub
(2022)
Audiovisual task switching rapidly modulates sound encoding in mouse auditory cortex
eLife 11:e75839.
https://doi.org/10.7554/eLife.75839
