Computations underlying Drosophila photo-taxis, odor-taxis, and multi-sensory integration

Abstract

To better understand how organisms make decisions on the basis of temporally varying multi-sensory input, we identified computations made by Drosophila larvae responding to visual and optogenetically induced fictive olfactory stimuli. We modeled the larva's navigational decision to initiate turns as the output of a Linear-Nonlinear-Poisson cascade. We used reverse correlation to fit parameters to this model; the parameterized model predicted larvae's responses to novel stimulus patterns. For multi-modal inputs, we found that larvae linearly combine olfactory and visual signals upstream of the decision to turn. We verified this prediction by measuring larvae's responses to coordinated changes in odor and light. We studied other navigational decisions and found that larvae integrated odor and light according to the same rule in all cases. These results suggest that photo-taxis and odor-taxis are mediated by a shared computational pathway.
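
As a rough illustration of the analysis described above, the sketch below (Python/NumPy) implements a Linear-Nonlinear-Poisson (LNP) model of turn initiation, a reverse-correlation (turn-triggered average) estimate of its linear filter, and the linear summation of odor and light channels ahead of a shared nonlinearity and Poisson stage. The bin size, filter length, exponential nonlinearity, and every function and variable name (turn_triggered_average, lnp_turn_rate, and so on) are illustrative assumptions, not the implementation used in the study.

```python
# Minimal sketch of an LNP turn-decision model and a reverse-correlation fit.
# Bin size, filter length, the exponential nonlinearity, and all names are
# assumptions for illustration only.
import numpy as np

DT = 0.05          # time bin in seconds (assumed)
FILTER_LEN = 60    # kernel length in bins, i.e. 3 s of stimulus history (assumed)


def turn_triggered_average(stimulus, turn_bins, filter_len=FILTER_LEN):
    """Reverse correlation: average the stimulus history preceding each turn."""
    snippets = [stimulus[t - filter_len + 1:t + 1]
                for t in turn_bins if t >= filter_len - 1]
    return np.mean(snippets, axis=0)


def lnp_turn_rate(stimulus, kernel, baseline=-2.0):
    """Linear stage: causally filter the stimulus (kernel[-1] weights the most
    recent bin); nonlinear stage: a static exponential maps the filtered
    signal to an instantaneous turn rate in Hz."""
    drive = np.convolve(stimulus, kernel[::-1], mode="full")[:len(stimulus)]
    return np.exp(baseline + drive)


def simulate_turns(rate, dt=DT, rng=None):
    """Poisson stage: draw turn events bin by bin from the time-varying rate."""
    rng = np.random.default_rng() if rng is None else rng
    return np.flatnonzero(rng.random(len(rate)) < rate * dt)


def multisensory_turn_rate(odor, light, k_odor, k_light, baseline=-2.0):
    """Combination rule inferred in the study: the two sensory channels are
    summed linearly *before* the shared nonlinearity and Poisson stage."""
    drive = (np.convolve(odor, k_odor[::-1], mode="full")[:len(odor)]
             + np.convolve(light, k_light[::-1], mode="full")[:len(light)])
    return np.exp(baseline + drive)


if __name__ == "__main__":
    # Toy check: simulate turns driven by a known kernel, then recover its
    # shape by turn-triggered averaging of the white-noise input.
    rng = np.random.default_rng(0)
    stim = rng.standard_normal(200_000)                      # white-noise stimulus
    true_kernel = 0.5 * np.exp(-np.arange(FILTER_LEN)[::-1] / 10.0)
    turns = simulate_turns(lnp_turn_rate(stim, true_kernel), rng=rng)
    estimated_kernel = turn_triggered_average(stim, turns)   # proportional to true_kernel
```

For a Gaussian white-noise stimulus passed through an exponential nonlinearity, the turn-triggered average is proportional to the underlying linear filter, which is why the toy check above should recover the shape of true_kernel up to a scale factor.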

Article and author information

Author details

  1. Ruben Gepner

    Department of Physics, New York University, New York, United States
  2. Mirna Mihovilovic Skanata

    Department of Physics, New York University, New York, United States
  3. Natalie M Bernat

    Department of Physics, New York University, New York, United States
  4. Margarita Kaplow

    Center for Neural Science, New York University, New York, United States
  5. Marc Gershow

    Department of Physics, New York University, New York, United States
    For correspondence: mhg4@nyu.edu

Competing interests

The authors declare that no competing interests exist.

Reviewing Editor

  1. Ronald L Calabrese, Emory University, United States

Version history

  1. Received: December 22, 2014
  2. Accepted: May 5, 2015
  3. Accepted Manuscript published: May 6, 2015 (version 1)
  4. Version of Record published: June 16, 2015 (version 2)

Copyright

© 2015, Gepner et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Cite this article

Ruben Gepner, Mirna Mihovilovic Skanata, Natalie M Bernat, Margarita Kaplow, Marc Gershow (2015) Computations underlying Drosophila photo-taxis, odor-taxis, and multi-sensory integration. eLife 4:e06229. https://doi.org/10.7554/eLife.06229

Further reading

    Computational and Systems Biology, Neuroscience
    Ronald L Calabrese
    Insight

    Three recent studies use optogenetics, virtual ‘odor-scapes’ and mathematical modeling to study how the nervous system of fruit fly larvae processes sensory information to control navigation.
