Visual and motor signatures of locomotion dynamically shape a population code for feature detection in Drosophila

Abstract

Natural vision is dynamic: as an animal moves, its visual input changes dramatically. How can the visual system reliably extract local features from an input dominated by self-generated signals? In Drosophila, diverse local visual features are represented by a group of projection neurons with distinct tuning properties. Here we describe a connectome-based volumetric imaging strategy to measure visually evoked neural activity across this population. We show that local visual features are jointly represented across the population, and that a shared gain factor improves trial-to-trial coding fidelity. A subset of these neurons, tuned to small objects, is modulated by two independent signals associated with self-movement, a motor-related signal and a visual motion signal associated with rotation of the animal. These two inputs adjust the sensitivity of these feature detectors across the locomotor cycle, selectively reducing their gain during saccades and restoring it during intersaccadic intervals. This work reveals a strategy for reliable feature detection during locomotion.

Data availability

All software and code are available on GitHub. Main analysis, modeling, and figure generation code can be found here: https://github.com/mhturner/glom_pop; visual stimulus code can be found here: https://github.com/ClandininLab/visanalysis and here: https://github.com/ClandininLab/flystim. Extracted ROI responses and associated stimulus metadata, along with raw imaging data, can be found in a Dryad repository here: https://doi.org/10.5061/dryad.h44j0zpp8.
Article and author information

Author details

  1. Maxwell H Turner

    Department of Neurobiology, Stanford University, Stanford, United States
    Competing interests
    The authors declare that no competing interests exist.
ORCID iD: 0000-0002-4164-9995
  2. Avery Krieger

    Department of Neurobiology, Stanford University, Stanford, United States
    Competing interests
    The authors declare that no competing interests exist.
  3. Michelle M Pang

    Department of Neurobiology, Stanford University, Stanford, United States
    Competing interests
    The authors declare that no competing interests exist.
  4. Thomas R Clandinin

    Department of Neurobiology, Stanford University, Stanford, United States
    For correspondence
    trc@stanford.edu
    Competing interests
    The authors declare that no competing interests exist.
ORCID iD: 0000-0001-6277-6849

Funding

National Institutes of Health (F32-MH118707)

  • Maxwell H Turner

National Institutes of Health (K99-EY032549)

  • Maxwell H Turner

National Institutes of Health (R01-EY022638)

  • Thomas R Clandinin

National Institutes of Health (R01NS110060)

  • Thomas R Clandinin

National Science Foundation (GRFP)

  • Avery Krieger

National Defense Science and Engineering Graduate (Fellowship)

  • Michelle M Pang

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Copyright

© 2022, Turner et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 2,299 views
  • 263 downloads
  • 25 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

  1. Maxwell H Turner
  2. Avery Krieger
  3. Michelle M Pang
  4. Thomas R Clandinin
(2022)
Visual and motor signatures of locomotion dynamically shape a population code for feature detection in Drosophila
eLife 11:e82587.
https://doi.org/10.7554/eLife.82587
