Adaptive behavior requires the separation of current from future goals in working memory. We used fMRI of object-selective cortex to determine the representational (dis)similarities of memory representations serving current and prospective perceptual tasks. Participants remembered an object drawn from three possible categories as the target for one of two consecutive visual search tasks. A cue indicated whether the target object should be looked for first (currently relevant), second (prospectively relevant), or if it could be forgotten (irrelevant). Prior to the first search, representations of current, prospective and irrelevant objects were similar, with strongest decoding for current representations compared to prospective (Experiment 1) and irrelevant (Experiment 2). Remarkably, during the first search, prospective representations could also be decoded, but revealed anti-correlated voxel patterns compared to currently relevant representations of the same category. We propose that the brain separates current from prospective memories within the same neuronal ensembles through opposite representational patterns.
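The abstract's key finding is that prospective memories showed voxel patterns anti-correlated with currently relevant ones. A minimal illustrative sketch of what "anti-correlated voxel patterns" means at the analysis level (simulated data only; the voxel values, pattern size, and noise level are arbitrary assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical activity pattern across 50 voxels in object-selective
# cortex for a "currently relevant" object category (simulated values).
current = rng.standard_normal(50)

# A pattern for the same category when "prospectively relevant": here
# modeled as sign-flipped activity plus noise, mimicking the
# anti-correlation the abstract describes.
prospective = -current + 0.3 * rng.standard_normal(50)

# Pearson correlation between the two voxel patterns is strongly negative.
r = np.corrcoef(current, prospective)[0, 1]
print(f"pattern correlation: {r:.2f}")
```

In the actual study such pattern (dis)similarities were assessed with multivariate decoding of fMRI responses; this toy example only illustrates the sign-flip relationship between the two representations.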
All data generated or analyzed during this study are included in the manuscript and supporting files. Source data and source code files have been provided for Figures 2, 3, 4, 5, S1, S2, and S3, and the fMRI data are available via the Open Science Framework: "Current and Future Goals Are Represented in Opposite Patterns in Object-Selective Cortex." Open Science Framework. May 31. osf.io/hcp47. For the newly added Experiment 2, the data and scripts have also been provided.
Current and Future Goals Are Represented in Opposite Patterns in Object-Selective Cortex. Open Science Framework, osf.io/hcp47.
- Christian N L Olivers
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Human subjects: Twenty-four healthy volunteers participated in Experiment 1 and twenty-five healthy volunteers participated in Experiment 2. The experiments were approved by the Ethical Committee of the Faculty of Social and Behavioral Sciences, University of Amsterdam, and conformed to the Declaration of Helsinki. All subjects provided written informed consent and consent to publish.
- Floris de Lange, Donders Institute for Brain, Cognition and Behaviour, Netherlands
© 2018, van Loon et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Touch sensation is primarily encoded by low-threshold mechanoreceptors (LTMRs), whose cell bodies reside in the dorsal root ganglia. Because of their great diversity in molecular signature, terminal-ending morphology, and electrophysiological properties, mirroring the complexity of tactile experience, LTMRs are a model of choice for studying the molecular cues that differentially control neuronal diversification. While the transcriptional codes that define different LTMR subtypes have been extensively studied, the molecular players that participate in their late maturation, and in particular in the striking diversity of their end-organ morphological specialization, are largely unknown. Here we identified the TALE homeodomain transcription factor Meis2 as a key regulator of LTMR target-field innervation in mice. Meis2 is specifically expressed in cutaneous LTMRs, and its expression depends on target-derived signals. While LTMRs lacking Meis2 survive and are normally specified, their end-organ innervation, electrophysiological properties, and transcriptome are markedly and differentially affected, resulting in impaired sensory-evoked behavioral responses. These data establish Meis2 as a major transcriptional regulator controlling the orderly formation of the sensory neuron innervation of peripheral end organs required for light touch.
Blindness affects millions of people around the world. A promising solution for restoring a form of vision to some individuals is the cortical visual prosthesis, which bypasses part of the impaired visual pathway by converting camera input into electrical stimulation of the visual system. The artificially induced visual percept (a pattern of localized light flashes, or ‘phosphenes’) has limited resolution, and much of the field’s research is devoted to optimizing the efficacy, efficiency, and practical usefulness of the encoding of visual information. A commonly exploited method is non-invasive functional evaluation in sighted subjects, or with computational models, using simulated prosthetic vision (SPV) pipelines. An important challenge in this approach is to balance perceptual realism, biological plausibility, and real-time performance in the simulation of cortical prosthetic vision. We present a biologically plausible, PyTorch-based phosphene simulator that runs in real time and uses differentiable operations to allow gradient-based computational optimization of phosphene encoding models. The simulator integrates a wide range of clinical results with neurophysiological evidence in humans and non-human primates. The pipeline includes a model of the retinotopic organization and cortical magnification of the visual cortex. Moreover, the quantitative effects of stimulation parameters and temporal dynamics on phosphene characteristics are incorporated. Our results demonstrate the simulator’s suitability both for computational applications, such as end-to-end deep learning-based prosthetic vision optimization, and for behavioral experiments. The modular and open-source software provides a flexible simulation framework for computational, clinical, and behavioral neuroscientists working on visual neuroprosthetics.
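The abstract mentions a model of retinotopic organization and cortical magnification. A minimal sketch of the standard monopole (log-polar) model of V1 retinotopy is given below; the functional form w = k·log(z + a) and the parameter values are common published fits for human V1, not necessarily those used by this particular simulator:

```python
import numpy as np

# Monopole model of V1 retinotopy: cortical position w = k * log(z + a),
# where z encodes the visual-field location as a complex number
# (eccentricity in degrees, polar angle in radians).
# K and A below are illustrative values from the literature (assumptions,
# not the simulator's actual parameters).
K = 17.3   # cortical scaling factor (mm)
A = 0.75   # foveal offset (deg)

def visual_to_cortex(ecc_deg, angle_rad):
    """Map a visual-field point to a 2-D cortical coordinate (mm)."""
    z = ecc_deg * np.exp(1j * angle_rad)
    w = K * np.log(z + A)
    return w.real, w.imag

def cortical_magnification(ecc_deg):
    """Linear cortical magnification M = K / (ecc + A), in mm/deg."""
    return K / (ecc_deg + A)

# Magnification falls off with eccentricity: foveal stimuli occupy far
# more cortex per degree (hence denser, smaller phosphenes) than
# peripheral stimuli.
print(f"M(0 deg)  = {cortical_magnification(0.0):.1f} mm/deg")
print(f"M(10 deg) = {cortical_magnification(10.0):.1f} mm/deg")
```

Because the mapping is a smooth analytic function, it can be expressed with differentiable tensor operations (e.g., in PyTorch), which is what makes gradient-based optimization of phosphene encoders possible.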