Rapid learning in visual cortical networks
Abstract
Although changes in brain activity during learning have been extensively examined at the single-neuron level, the coding strategies employed by cell populations remain poorly understood. We examined neuronal populations in macaque area V4 during a rapid form of perceptual learning that emerges within tens of minutes. Multiple single units and local field potential (LFP) responses were recorded as monkeys improved their performance in an image discrimination task. We show that the increase in behavioral performance during learning is predicted by a tight coordination of spike timing with local population activity. Stronger spike-LFP synchronization in the theta band correlated with greater improvement in performance, whereas high-frequency synchronization was unrelated to changes in performance. These changes were absent once learning had stabilized and stimuli became familiar, and also in the absence of learning. These findings reveal a novel mechanism of plasticity in visual cortex whereby elevated low-frequency synchronization between individual neurons and local population activity accompanies the improvement in performance during learning.
Article and author information
Author details
Ethics
Animal experimentation: This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All of the animals were handled according to approved institutional animal care and use committee (IACUC) protocols (AWC-14-0114) of the Texas Health Science Center at Houston. The protocol was approved by the Committee on the Ethics of Animal Experiments of the Texas Health Science Center at Houston. All surgery was performed under isoflurane anesthesia, and every effort was made to minimize suffering.
Copyright
© 2015, Wang & Dragoi
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 1,664 views
- 385 downloads
- 7 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Further reading
-
- Neuroscience
Outcomes can vary even when choices are repeated. Such ambiguity necessitates adjusting how much to learn from each outcome by tracking its variability. The medial prefrontal cortex (mPFC) has been reported to signal the expected outcome and its discrepancy from the actual outcome (prediction error), two variables essential for controlling the learning rate. However, the source of signals that shape these coding properties remains unknown. Here, we investigated the contribution of cholinergic projections from the basal forebrain because they carry precisely timed signals about outcomes. One-photon calcium imaging revealed that as mice learned different probabilities of threat occurrence on two paths, some mPFC cells responded to threats on one of the paths, while other cells gained responses to threat omission. These threat- and omission-evoked responses were scaled to the unexpectedness of outcomes, some exhibiting a reversal in response direction when encountering surprising threats as opposed to surprising omissions. This selectivity for signed prediction errors was enhanced by optogenetic stimulation of local cholinergic terminals during threats. The enhanced threat-evoked cholinergic signals also made mice erroneously abandon the correct choice after a single threat that violated expectations, thereby decoupling their path choice from the history of threat occurrence on each path. Thus, acetylcholine modulates the encoding of surprising outcomes in the mPFC to control how much they dictate future decisions.
-
- Neuroscience
Quantitative information about synaptic transmission is key to our understanding of neural function. Spontaneously occurring synaptic events carry fundamental information about synaptic function and plasticity. However, their stochastic nature and low signal-to-noise ratio present major challenges for the reliable and consistent analysis. Here, we introduce miniML, a supervised deep learning-based method for accurate classification and automated detection of spontaneous synaptic events. Comparative analysis using simulated ground-truth data shows that miniML outperforms existing event analysis methods in terms of both precision and recall. miniML enables precise detection and quantification of synaptic events in electrophysiological recordings. We demonstrate that the deep learning approach generalizes easily to diverse synaptic preparations, different electrophysiological and optical recording techniques, and across animal species. miniML provides not only a comprehensive and robust framework for automated, reliable, and standardized analysis of synaptic events, but also opens new avenues for high-throughput investigations of neural function and dysfunction.