First-order visual interneurons distribute distinct contrast and luminance information across ON and OFF pathways to achieve stable behavior
Abstract
The accurate processing of contrast is the basis for all visually guided behaviors. Visual scenes with rapidly changing illumination challenge contrast computation, because photoreceptor adaptation is not fast enough to compensate for such changes. Yet human perception of contrast is stable even when the visual environment changes quickly, suggesting rapid post-receptor luminance gain control. Similarly, in the fruit fly Drosophila, such gain control leads to luminance-invariant behavior for moving OFF stimuli. Here we show that behavioral responses to moving ON stimuli also utilize a luminance gain, and that ON-motion-guided behavior depends on inputs from three first-order interneurons, L1, L2, and L3. Each of these neurons encodes contrast and luminance differently and distributes information asymmetrically across both ON and OFF contrast-selective pathways. Behavioral responses to both ON and OFF stimuli rely on a luminance-based correction provided by L1 and L3, wherein L1 supports contrast computation linearly and L3 non-linearly amplifies dim stimuli. Therefore, L1, L2, and L3 are not specific inputs to the ON and OFF pathways; rather, the lamina serves as a separate processing layer that distributes distinct luminance and contrast information across both pathways to support behavior under varying conditions.
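The computational idea in the abstract can be sketched in a few lines. This is an illustrative toy model of our own construction, not the paper's published analysis: contrast is taken as a luminance-normalized deviation (Weber contrast), and a hypothetical compressive gain stands in for an L3-like nonlinearity that boosts responses on dim backgrounds. The function names and the exponent are assumptions for illustration only.

```python
def weber_contrast(intensity, background):
    """Weber contrast: stimulus deviation normalized by background luminance."""
    return (intensity - background) / background

def l3_like_gain(luminance, exponent=0.5):
    """Hypothetical compressive gain: larger when background luminance is low,
    loosely analogous to L3's non-linear amplification of dim stimuli."""
    return luminance ** -exponent

# The same absolute increment (0.2) yields a much larger contrast
# on a dim background than on a bright one:
print(weber_contrast(1.2, 1.0))  # bright background -> 0.2
print(weber_contrast(0.3, 0.1))  # dim background   -> 2.0
print(l3_like_gain(0.25))        # dim scene gets extra gain -> 2.0
```

The point of the normalization is that, without a luminance-dependent gain, rapid illumination changes would make the same object appear to have different contrast; the gain term restores stability, as the behavioral experiments in the paper demonstrate.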
Data availability
Analysis code is available at https://github.com/silieslab/Ketkar-Gur-MolinaObando-etal2022, and source data can be found on Zenodo: https://doi.org/10.5281/zenodo.6335347.
Article and author information
Author details
Funding
European Commission (ERC Starting Grant, No. 716512)
- Marion Silies
Deutsche Forschungsgemeinschaft (CRC 1080, project C06)
- Marion Silies
Deutsche Forschungsgemeinschaft (MA 7804/2-1)
- Carlotta Martelli
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Copyright
© 2022, Ketkar et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 1,386 views
- 207 downloads
- 18 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.