State-dependent representations of mixtures by the olfactory bulb
Abstract
Sensory systems are often tasked with analysing complex signals from the environment, separating relevant from irrelevant parts. This process of decomposing signals is challenging when a mixture of signals does not equal the sum of its parts, leading to an unpredictable corruption of signal patterns. In olfaction, nonlinear summation is prevalent at various stages of sensory processing. Here, we investigate how the olfactory system deals with binary mixtures of odours under different brain states, using two-photon imaging of olfactory bulb (OB) output neurons. Unlike previous studies using anaesthetised animals, we found that mixture summation is more linear in the early phase of evoked responses in awake, head-fixed mice performing an odour detection task, due to dampened responses. Despite this, and despite responses being more variable, decoding analyses indicated that the data from behaving mice were well discriminable. Curiously, the time course of decoding accuracy did not correlate strictly with the linearity of summation. Further, a comparison with naïve mice indicated that learning to accurately perform the mixture detection task is not accompanied by more linear mixture summation. Finally, using a simulation, we demonstrate that, while saturating sublinearity tends to degrade discriminability, the extent of the impairment may depend on other factors, including pattern decorrelation. Altogether, our results demonstrate that the mixture representation in the primary olfactory area is state-dependent, but that analytical perception may not strictly correlate with linearity in summation.
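The saturating sublinearity discussed in the abstract can be illustrated with a minimal sketch. This is not the paper's actual model: the response patterns are randomly generated stand-ins for glomerular activity, and the exponential saturation is a generic choice of compressive nonlinearity, used only to show how a sublinear mixture response compresses the dynamic range relative to the linear sum of the components.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 100  # hypothetical population of OB output neurons

# Hypothetical single-odour response patterns (arbitrary units)
resp_a = rng.gamma(shape=2.0, scale=1.0, size=n_cells)
resp_b = rng.gamma(shape=2.0, scale=1.0, size=n_cells)

# Linear prediction: the mixture response equals the sum of the parts
linear_sum = resp_a + resp_b

# Saturating sublinear summation: responses compressed toward a ceiling
ceiling = 4.0
mixture = ceiling * (1.0 - np.exp(-linear_sum / ceiling))

# The observed mixture sits at or below the linear prediction everywhere
assert np.all(mixture <= linear_sum + 1e-9)

# Saturation shrinks the response range, reducing the pattern
# separation that a downstream linear decoder could exploit
print(np.ptp(linear_sum), np.ptp(mixture))
```

In this toy setting, cells whose summed input exceeds the ceiling are clipped together, so distinct mixture patterns become more similar; this is one intuition for why saturating sublinearity can degrade discriminability, although, as the study notes, other factors such as pattern decorrelation can offset the impairment.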
Data availability
The files consist of individual data to compare the linear sum vs. observed mixture responses (300–1000 ms after odour onset).
- Figure 6C: Dryad Digital Repository, doi:10.5061/dryad.p2ngf1vrh.
Article and author information
Author details
Funding
Okinawa Institute of Science and Technology Graduate University
- Aliya Mari Adefuin
- Sander Lindeman
- Janine K Reinert
- Izumi Fukunaga
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: All procedures described in this study were approved by the OIST Graduate University's Animal Care and Use Committee (Protocols 2016-151 and 2020-310).
Reviewing Editor
- Naoshige Uchida, Harvard University, United States
Version history
- Preprint posted: September 24, 2021
- Received: January 7, 2022
- Accepted: March 5, 2022
- Accepted Manuscript published: March 7, 2022 (version 1)
- Version of Record published: March 21, 2022 (version 2)
Copyright
© 2022, Adefuin et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- Page views: 1,281
- Downloads: 183
- Citations: 1
Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.