Neuroscience

Temporal chunking as a mechanism for unsupervised learning of task-sets

  1. Flora Bouchacourt
  2. Stefano Palminteri
  3. Etienne Koechlin
  4. Srdjan Ostojic (corresponding author)
  1. Ecole Normale Superieure Paris, France
Research Article
Cite this article as: eLife 2020;9:e50469 doi: 10.7554/eLife.50469

Abstract

Depending on environmental demands, humans can learn and exploit multiple concurrent sets of stimulus-response associations. Mechanisms underlying the learning of such task-sets remain unknown. Here we investigate the hypothesis that task-set learning relies on unsupervised chunking of stimulus-response associations that occur in temporal proximity. We examine behavioral and neural data from a task-set learning experiment using a network model. We first show that task-set learning can be achieved provided the timescale of chunking is slower than the timescale of stimulus-response learning. Fitting the model to behavioral data on a subject-by-subject basis confirmed this expectation and led to specific predictions linking chunking and task-set retrieval that were borne out by behavioral performance and reaction times. Comparing the model activity with BOLD signal allowed us to identify neural correlates of task-set retrieval in a functional network involving ventral and dorsal prefrontal cortex, with the dorsal system preferentially engaged when retrievals are used to improve performance.
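The core mechanism described in the abstract, fast stimulus-response learning combined with slower Hebbian chunking of associations that occur in temporal proximity, can be illustrated with a minimal toy simulation. This is an illustrative sketch only, not the authors' actual network model: the update rules, parameter values (`ALPHA_FAST`, `ALPHA_SLOW`), and variable names are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stimuli, n_actions = 3, 4
n_assoc = n_stimuli * n_actions            # one unit per stimulus-response pair

# Fast weights: stimulus -> action preferences (stimulus-response learning)
W_sr = np.zeros((n_stimuli, n_actions))
# Slow lateral weights chunking associations that occur in temporal proximity
W_chunk = np.zeros((n_assoc, n_assoc))

ALPHA_FAST = 0.4    # stimulus-response learning rate (fast timescale)
ALPHA_SLOW = 0.05   # chunking learning rate (must be slower)

prev_unit = None

def trial(stimulus, correct_action):
    """Run one trial: act, update S-R weights quickly, chunk slowly."""
    global prev_unit
    # Noisy greedy choice over the learned action preferences
    action = int(np.argmax(W_sr[stimulus] + 0.01 * rng.standard_normal(n_actions)))
    reward = 1.0 if action == correct_action else 0.0
    # Fast reward-driven update of the tried stimulus-response association
    W_sr[stimulus, action] += ALPHA_FAST * (reward - W_sr[stimulus, action])
    # Slow Hebbian chunking: bind this association to the preceding one
    unit = stimulus * n_actions + action
    if prev_unit is not None and reward > 0:
        W_chunk[unit, prev_unit] += ALPHA_SLOW
        W_chunk[prev_unit, unit] += ALPHA_SLOW
    prev_unit = unit

# One task-set: stimuli 0, 1, 2 map to actions 1, 2, 0, presented in random order
task_set = {0: 1, 1: 2, 2: 0}
for _ in range(300):
    s = int(rng.integers(n_stimuli))
    trial(s, task_set[s])

# Units belonging to the task-set end up mutually chunked
in_set = [s * n_actions + a for s, a in task_set.items()]
```

Because `ALPHA_SLOW` is well below `ALPHA_FAST`, the lateral weights accumulate appreciably only between associations that are rewarded in close succession, so the units of a task-set become mutually linked and could later cue one another; this is the kind of retrieval effect the article tests against behavioral and BOLD data.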

Data availability

Code has been uploaded to https://github.com/florapython/TemporalChunkingTaskSets. Statistical maps corresponding to the human subjects' data have been uploaded to Neurovault (https://neurovault.org/collections/6754/).

The following previously published data sets were used
    1. A Collins
    2. E Koechlin
    (2012) Human behaviour data
    https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001293.

Article and author information

Author details

  1. Flora Bouchacourt

    Laboratoire de Neurosciences Cognitives et Computationelles, Ecole Normale Superieure Paris, Paris, France
    Competing interests
    The authors declare that no competing interests exist.
  2. Stefano Palminteri

    Laboratoire de Neurosciences Cognitives et Computationelles, Ecole Normale Superieure Paris, Paris, France
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-5768-6646
  3. Etienne Koechlin

    Laboratoire de Neurosciences Cognitives et Computationelles, Ecole Normale Superieure Paris, Paris, France
    Competing interests
    The authors declare that no competing interests exist.
  4. Srdjan Ostojic

    Laboratoire de Neurosciences Cognitives et Computationelles, Ecole Normale Superieure Paris, Paris, France
    For correspondence
    srdjan.ostojic@ens.fr
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-7473-1223

Funding

Ecole de Neuroscience de Paris (Doctoral Fellowship)

  • Flora Bouchacourt

Agence Nationale de la Recherche (ANR-16-CE37-0016-01)

  • Srdjan Ostojic

Agence Nationale de la Recherche (ANR-17-ERC2-0005-01)

  • Srdjan Ostojic

Inserm (R16069JS)

  • Stefano Palminteri

Agence Nationale de la Recherche (ANR-16-NEUC-0004)

  • Stefano Palminteri

Fondation Fyssen

  • Stefano Palminteri

Fondation Schlumberger

  • Stefano Palminteri

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: Participants provided written informed consent; the study was approved by the French National Ethics Committee.

Reviewing Editor

  1. Mark CW van Rossum, University of Nottingham, United Kingdom

Publication history

  1. Received: July 23, 2019
  2. Accepted: February 24, 2020
  3. Accepted Manuscript published: March 9, 2020 (version 1)
  4. Version of Record published: March 31, 2020 (version 2)

Copyright

© 2020, Bouchacourt et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 1,648
    Page views
  • 269
    Downloads
  • 3
    Citations

Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.


Further reading

    Neuroscience
    Linda M Amarante, Mark Laubach
    Research Article Updated

    This study examined how the medial frontal (MFC) and orbital frontal (OFC) cortices process reward information. We simultaneously recorded local field potentials in the two areas as rats consumed liquid sucrose rewards. Both areas exhibited a 4–8 Hz ‘theta’ rhythm that was phase-locked to the lick cycle. The rhythm tracked shifts in sucrose concentrations and fluid volumes, demonstrating that it is sensitive to differences in reward magnitude. The coupling between the rhythm and licking was stronger in MFC than OFC and varied with response vigor and absolute reward value in the MFC. Spectral analysis revealed zero-lag coherence between the cortical areas and evidence for a directionality of the rhythm, with MFC leading OFC. Our findings suggest that consummatory behavior generates simultaneous theta range activity in the MFC and OFC that encodes the value of consumed fluids, with the MFC having a top-down role in the control of consumption.

    Neuroscience
    James P Bohnslav et al.
    Tools and Resources Updated

    Videos of animal behavior are used to quantify researcher-defined behaviors of interest to study neural function, gene mutations, and pharmacological therapies. Behaviors of interest are often scored manually, which is time-consuming, limited to few behaviors, and variable across researchers. We created DeepEthogram: software that uses supervised machine learning to convert raw video pixels into an ethogram, the behaviors of interest present in each video frame. DeepEthogram is designed to be general-purpose and applicable across species, behaviors, and video-recording hardware. It uses convolutional neural networks to compute motion, extract features from motion and images, and classify features into behaviors. Behaviors are classified with above 90% accuracy on single frames in videos of mice and flies, matching expert-level human performance. DeepEthogram accurately predicts rare behaviors, requires little training data, and generalizes across subjects. A graphical interface allows beginning-to-end analysis without end-user programming. DeepEthogram’s rapid, automatic, and reproducible labeling of researcher-defined behaviors of interest may accelerate and enhance supervised behavior analysis. Code is available at: https://github.com/jbohnslav/deepethogram.