Disparate substrates for head gaze following and face perception in the monkey superior temporal sulcus

  1. Karolina Marciniak (corresponding author)
  2. Artin Atabaki
  3. Peter W Dicke
  4. Peter Thier
  1. Hertie Institute for Clinical Brain Research, University of Tuebingen, Germany

Abstract

Primates use gaze cues to follow peer gaze to an object of joint attention. Gaze following of monkeys is largely determined by head or face orientation. We used fMRI in rhesus monkeys to identify brain regions underlying head gaze following and to assess their relationship to the 'face patch' system, the latter being the likely source of information on face orientation. We trained monkeys to locate targets by either following head gaze or using a learned association of face identity with the same targets. Head gaze following activated a distinct region in the posterior STS, close to, albeit not overlapping with, the medial face patch delineated by passive viewing of faces. This 'gaze following patch' may be the substrate of the geometrical calculations needed to translate information on head orientation from the face patches into precise shifts of attention, taking the spatial relationship of the two interacting agents into account.
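
The condition contrast described above (activity during head gaze following versus activity during the learned identity-to-target mapping, with face patches localized separately by passive face viewing) can be illustrated with a minimal, hypothetical GLM sketch. The code below uses synthetic data, invented block timing, and a single simulated voxel; it is not the authors' analysis pipeline, only an outline of how a "gaze following > identity mapping" contrast yields a voxelwise statistic.

```python
# Minimal sketch of a block-design condition contrast (hypothetical; not the authors' pipeline).
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

# Boxcar regressors: 10-scan task blocks separated by rest within each 40-scan cycle.
gaze = np.zeros(n_scans)      # head-gaze-following blocks
identity = np.zeros(n_scans)  # face-identity-mapping blocks
for start in range(0, n_scans, 40):
    gaze[start:start + 10] = 1.0
    identity[start + 20:start + 30] = 1.0

# Design matrix: [gaze, identity, constant].
X = np.column_stack([gaze, identity, np.ones(n_scans)])

# Synthetic voxel that responds more strongly during gaze following.
y = 1.2 * gaze + 0.4 * identity + rng.normal(0.0, 1.0, n_scans)

# Ordinary least-squares GLM fit.
beta, res_ss, *_ = np.linalg.lstsq(X, y, rcond=None)

# Contrast "gaze following > identity mapping" and its t statistic.
c = np.array([1.0, -1.0, 0.0])
dof = n_scans - np.linalg.matrix_rank(X)
sigma2 = res_ss[0] / dof
t = (c @ beta) / np.sqrt(sigma2 * (c @ np.linalg.pinv(X.T @ X) @ c))
print(f"contrast estimate = {c @ beta:.2f}, t({dof}) = {t:.2f}")
```

In a whole-brain analysis this statistic would be computed per voxel and thresholded; voxels surviving the contrast but lying outside the passively defined face patches would correspond, in spirit, to the 'gaze following patch' reported here.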

Article and author information

Author details

  1. Karolina Marciniak

    Hertie Institute for Clinical Brain Research, University of Tuebingen, Tuebingen, Germany
    For correspondence
    marciniak.kar@gmail.com
    Competing interests
    The authors declare that no competing interests exist.
  2. Artin Atabaki

    Hertie Institute for Clinical Brain Research, University of Tuebingen, Tuebingen, Germany
    Competing interests
    The authors declare that no competing interests exist.
  3. Peter W Dicke

    Hertie Institute for Clinical Brain Research, University of Tuebingen, Tuebingen, Germany
    Competing interests
    The authors declare that no competing interests exist.
  4. Peter Thier

    Hertie Institute for Clinical Brain Research, University of Tuebingen, Tuebingen, Germany
    Competing interests
    The authors declare that no competing interests exist.

Ethics

Animal experimentation: This study was performed in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. All of the animals were handled according to the guidelines of the German law regulating the use of experimental animals and the protocols approved by the local institution in charge of animal experiments (Regierungspraesidium Tuebingen, Abteilung Tierschutz, permit number N1/08). All surgery was performed under combined isoflurane and remifentanil anesthesia, and every effort was made to minimize discomfort and suffering.

Copyright

© 2014, Marciniak et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 5,099 views
  • 947 downloads
  • 34 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

  1. Karolina Marciniak
  2. Artin Atabaki
  3. Peter W Dicke
  4. Peter Thier
(2014)
Disparate substrates for head gaze following and face perception in the monkey superior temporal sulcus
eLife 3:e03222.
https://doi.org/10.7554/eLife.03222
