Neuroscience

Spherical arena reveals optokinetic response tuning to stimulus location, size, and frequency across entire visual field of larval zebrafish

  1. Florian Alexander Dehmelt
  2. Rebecca Meier
  3. Julian Hinz
  4. Takeshi Yoshimatsu
  5. Clara A Simacek
  6. Ruoyu Huang
  7. Kun Wang
  8. Tom Baden
  9. Aristides B Arrenberg  Is a corresponding author
  1. University of Tuebingen, Germany
  2. University of Sussex, United Kingdom
Research Article
Cite this article as: eLife 2021;10:e63355 doi: 10.7554/eLife.63355

Abstract

Many animals have large visual fields, and sensory circuits may sample those regions of visual space most relevant to behaviours such as gaze stabilisation and hunting. Despite this, relatively small displays are often used in vision neuroscience. To sample stimulus locations across most of the visual field, we built a spherical stimulus arena with 14,848 independently controllable LEDs. We measured the optokinetic response gain of immobilised zebrafish larvae to stimuli of different steradian size and visual field locations. We find that the two eyes are less yoked than previously thought and that spatial frequency tuning is similar across visual field positions. However, zebrafish react most strongly to lateral, nearly equatorial stimuli, consistent with previously reported spatial densities of red, green and blue photoreceptors. Upside-down experiments suggest further extra-retinal processing. Our results demonstrate that motion vision circuits in zebrafish are anisotropic, and preferentially monitor areas with putative behavioural relevance.
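The abstract's central measurement is optokinetic response (OKR) gain. As a rough illustration only (this is not the authors' analysis pipeline, whose details live in the deposited code), gain is conventionally defined as slow-phase eye velocity divided by stimulus velocity, so a gain of 1.0 means the eye fully compensates the stimulus motion:

```python
def okr_gain(eye_velocity_deg_s: float, stimulus_velocity_deg_s: float) -> float:
    """Optokinetic response gain: slow-phase eye velocity over stimulus velocity.

    A gain of 1.0 means perfect gaze stabilisation; values below 1.0
    indicate the eye lags the moving stimulus.
    """
    if stimulus_velocity_deg_s == 0:
        raise ValueError("stimulus must be moving for gain to be defined")
    return eye_velocity_deg_s / stimulus_velocity_deg_s

# Hypothetical example: the eye tracks at 7.5 deg/s while a grating
# drifts at 10 deg/s, giving a gain of 0.75.
print(okr_gain(7.5, 10.0))  # -> 0.75
```

The per-stimulus-location gains reported in the paper are, in effect, many such ratios compared across positions in the visual field.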

Data availability

Analysis code, pre-processed data, and examples of raw data have been deposited in GIN by G-Node and published under Digital Object Identifier 10.12751/g-node.qergnn.


Article and author information

Author details

  1. Florian Alexander Dehmelt

    Werner Reichardt Centre for Integrative Neuroscience and Institute of Neurobiology, University of Tuebingen, Tuebingen, Germany
    Competing interests
    The authors declare that no competing interests exist.
  2. Rebecca Meier

    Werner Reichardt Centre for Integrative Neuroscience and Institute of Neurobiology, University of Tuebingen, Tuebingen, Germany
    Competing interests
    The authors declare that no competing interests exist.
  3. Julian Hinz

    Werner Reichardt Centre for Integrative Neuroscience and Institute of Neurobiology, University of Tuebingen, Tuebingen, Germany
    Competing interests
    The authors declare that no competing interests exist.
  4. Takeshi Yoshimatsu

    School of Life Sciences, University of Sussex, Brighton, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  5. Clara A Simacek

    Werner Reichardt Centre for Integrative Neuroscience and Institute of Neurobiology, University of Tuebingen, Tuebingen, Germany
    Competing interests
    The authors declare that no competing interests exist.
  6. Ruoyu Huang

    Werner Reichardt Centre for Integrative Neuroscience and Institute of Neurobiology, University of Tuebingen, Tuebingen, Germany
    Competing interests
    The authors declare that no competing interests exist.
  7. Kun Wang

    Werner Reichardt Centre for Integrative Neuroscience and Institute of Neurobiology, University of Tuebingen, Tuebingen, Germany
    Competing interests
    The authors declare that no competing interests exist.
  8. Tom Baden

    School of Life Sciences, University of Sussex, Brighton, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-2808-4210
  9. Aristides B Arrenberg

    Werner Reichardt Centre for Integrative Neuroscience and Institute of Neurobiology, University of Tuebingen, Tuebingen, Germany
    For correspondence
    aristides.arrenberg@uni-tuebingen.de
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-8262-7381

Funding

Deutsche Forschungsgemeinschaft (EXC307 (Werner-Reichardt-Centrum))

  • Aristides B Arrenberg

Human Frontier Science Program (Young Investigator Grant RGY0079)

  • Aristides B Arrenberg

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Animal experimentation: Animal experiments were performed in accordance with licenses granted by local government authorities (Regierungspräsidium Tübingen) in accordance with German federal law and Baden-Württemberg state law. Approval of this license followed consultation of both in-house animal welfare officers and an external ethics board appointed by the local government.

Reviewing Editor

  1. Kristin Tessmar-Raible, University of Vienna, Austria

Publication history

  1. Received: September 22, 2020
  2. Accepted: June 7, 2021
  3. Accepted Manuscript published: June 8, 2021 (version 1)

Copyright

© 2021, Dehmelt et al.

This article is distributed under the terms of the Creative Commons Attribution License, permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 108
    Page views
  • 24
    Downloads
  • 0
    Citations

Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.


Further reading

    1. Neuroscience
    Xiaoxuan Jia et al.
    Research Article

    Temporal continuity of object identity is a feature of natural visual input, and is potentially exploited, in an unsupervised manner, by the ventral visual stream to build the neural representation in inferior temporal (IT) cortex. Here we investigated whether plasticity of individual IT neurons underlies human core-object-recognition behavioral changes induced with unsupervised visual experience. We built a single-neuron plasticity model combined with a previously established IT population-to-recognition-behavior linking model to predict human learning effects. We found that our model, once constrained by neurophysiological data, largely predicted the mean direction, magnitude, and time course of human performance changes. We also found a previously unreported dependency of the observed human performance change on the initial task difficulty. This result adds support to the hypothesis that tolerant core object recognition in human and non-human primates is instructed, at least in part, by naturally occurring unsupervised temporal contiguity experience.

    1. Neuroscience
    Nick Taubert et al.
    Research Article

    Dynamic facial expressions are crucial for communication in primates. Due to the difficulty of controlling the shape and dynamics of facial expressions across species, it is unknown how species-specific facial expressions are perceptually encoded and interact with the representation of facial shape. While popular neural network models predict a joint encoding of facial shape and dynamics, the neuromuscular control of faces evolved more slowly than facial shape, suggesting a separate encoding. To investigate these alternative hypotheses, we developed photo-realistic human and monkey heads that were animated with motion capture data from monkeys and humans. Exact control of expression dynamics was accomplished by a Bayesian machine-learning technique. Consistent with our hypothesis, we found that human observers learned cross-species expressions very quickly, with facial dynamics represented largely independently of facial shape. This result supports the co-evolution of the visual processing and motor control of facial expressions, while it challenges appearance-based neural network theories of dynamic expression recognition.