Long-term implicit memory for sequential auditory patterns in humans

  1. Roberta Bianco (corresponding author)
  2. Peter M C Harrison
  3. Mingyue Hu
  4. Cora Bolger
  5. Samantha Picken
  6. Marcus T Pearce
  7. Maria Chait
  1. University College London, United Kingdom
  2. Max-Planck-Institut für empirische Ästhetik, Germany
  3. Queen Mary University of London, United Kingdom

Abstract

Memory, on multiple timescales, is critical to our ability to discover the structure of our surroundings and to interact efficiently with the environment. We combined behavioural manipulation and modelling to investigate the dynamics of memory formation for rarely reoccurring acoustic patterns. In a series of experiments, participants detected the emergence of regularly repeating patterns within rapid tone-pip sequences. Unbeknownst to them, a few patterns reoccurred every ~3 minutes. All sequences consisted of the same 20 frequencies and were distinguishable only by the order of tone-pips. Despite this, reoccurring patterns were associated with a rapidly growing detection-time advantage over novel patterns. This effect was implicit, robust to interference, and persisted up to 7 weeks. The results implicate an interplay between short (a few seconds) and long-term (over many minutes) integration in memory formation and demonstrate the remarkable sensitivity of the human auditory system to sporadically reoccurring structure within the acoustic environment.
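
To make the paradigm concrete, the following is a minimal Python/NumPy sketch of the kind of stimulus construction the abstract describes: sequences drawn from a fixed pool of 20 tone-pip frequencies, some of which transition from random ordering into a regularly repeating pattern, with a small set of patterns reoccurring sporadically across trials. This is not the authors' stimulus code; the pip duration, frequency range, and sequence lengths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FREQS = 20                                       # fixed pool shared by all sequences (stated in the abstract)
FREQ_POOL = np.geomspace(200.0, 2000.0, N_FREQS)   # assumed log-spaced frequency range (Hz)
PIP_DUR = 0.05                                     # assumed tone-pip duration (s)
SR = 44100                                         # audio sample rate (Hz)


def make_pattern():
    """A 'pattern' is one particular ordering of all 20 pool frequencies."""
    return rng.permutation(FREQ_POOL)


def make_transition_sequence(pattern, n_random_pips=60, n_cycles=3):
    """Random draws from the pool, followed by the pattern repeating cyclically.

    Detecting the onset of the repeating part is the participants' task.
    """
    random_part = rng.choice(FREQ_POOL, size=n_random_pips)
    regular_part = np.tile(pattern, n_cycles)
    return np.concatenate([random_part, regular_part])


def synthesise(freq_seq):
    """Concatenate pure-tone pips (amplitude ramps omitted for brevity)."""
    t = np.arange(int(PIP_DUR * SR)) / SR
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in freq_seq])


# A small set of patterns is re-presented sporadically among trials built
# from novel patterns (roughly every ~3 minutes in the study).
reoccurring_patterns = [make_pattern() for _ in range(3)]
novel_trial = make_transition_sequence(make_pattern())
reoccurring_trial = make_transition_sequence(reoccurring_patterns[0])
audio = synthesise(reoccurring_trial)
```

Because every sequence uses the same 20 frequencies, a novel and a reoccurring trial differ only in the order of the pips in the regular part, which is the property the memory effect hinges on.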

Data availability

The datasets for this study can be found in the OSF repository: https://osf.io/dtzs3/ (DOI: 10.17605/OSF.IO/DTZS3).
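
The repository can be browsed at the URL above. As a convenience, the sketch below lists and downloads the project's top-level files via OSF's public JSON API using the `requests` library; the endpoint layout (`api.osf.io/v2/nodes/<id>/files/osfstorage/`) and the response fields used here are assumptions based on OSF's v2 API and should be verified against the OSF documentation.

```python
import os
import requests

# Assumed OSF v2 API endpoint for listing files in the project's default storage.
PROJECT_ID = "dtzs3"
LIST_URL = f"https://api.osf.io/v2/nodes/{PROJECT_ID}/files/osfstorage/"


def download_osf_files(out_dir="osf_dtzs3"):
    """Download the top-level files of the OSF project (folders are skipped)."""
    os.makedirs(out_dir, exist_ok=True)
    resp = requests.get(LIST_URL, timeout=30)
    resp.raise_for_status()
    for item in resp.json()["data"]:
        attrs = item["attributes"]
        if attrs.get("kind") != "file":     # skip folders in this simple sketch
            continue
        url = item["links"]["download"]     # assumed per-file download link
        with requests.get(url, stream=True, timeout=60) as r:
            r.raise_for_status()
            with open(os.path.join(out_dir, attrs["name"]), "wb") as fp:
                for chunk in r.iter_content(chunk_size=1 << 20):
                    fp.write(chunk)


if __name__ == "__main__":
    download_osf_files()
```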

Article and author information

Author details

  1. Roberta Bianco

    UCL Ear Institute, University College London, London, United Kingdom
    For correspondence
    r.bianco@ucl.ac.uk
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0001-9613-8933
  2. Peter M C Harrison

    Computational Auditory Perception Research Group, Max-Planck-Institut für empirische Ästhetik, Frankfurt am Main, Germany
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-9851-9462
  3. Mingyue Hu

    UCL Ear Institute, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  4. Cora Bolger

    UCL Ear Institute, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  5. Samantha Picken

    UCL Ear Institute, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  6. Marcus T Pearce

    School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  7. Maria Chait

    UCL Ear Institute, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-7808-3593

Funding

Biotechnology and Biological Sciences Research Council (BB/P003745/1)

  • Maria Chait

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Jonas Obleser, University of Lübeck, Germany

Ethics

Human subjects: The research ethics committee of University College London approved the experiment, and written informed consent was obtained from each participant (Project ID number: 1490/009).

Version history

  1. Received: February 16, 2020
  2. Accepted: May 18, 2020
  3. Accepted Manuscript published: May 18, 2020 (version 1)
  4. Version of Record published: July 6, 2020 (version 2)

Copyright

© 2020, Bianco et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 3,027 views
  • 389 downloads
  • 24 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

Roberta Bianco, Peter M C Harrison, Mingyue Hu, Cora Bolger, Samantha Picken, Marcus T Pearce, Maria Chait (2020) Long-term implicit memory for sequential auditory patterns in humans. eLife 9:e56073. https://doi.org/10.7554/eLife.56073
