Gain, not concomitant changes in spatial receptive field properties, improves task performance in a neural network attention model

  1. Kai J Fox (corresponding author)
  2. Daniel Birman (corresponding author)
  3. Justin L Gardner

Affiliations:

  1. Stanford University, United States
  2. University of Washington, United States

Abstract

Attention allows us to focus sensory processing on behaviorally relevant aspects of the visual world. One potential mechanism of attention is a change in the gain of sensory responses. However, changing gain at early stages could have multiple downstream consequences for visual processing. Which, if any, of these effects can account for the benefits of attention for detection and discrimination? Using a model of primate visual cortex, we document how a Gaussian-shaped gain modulation results in changes to spatial tuning properties. Forcing the model to use only these changes failed to produce any benefit in task performance. Instead, we found that gain alone was both necessary and sufficient to explain category detection and discrimination during attention. Our results show how gain can give rise to changes in receptive fields which are not necessary for enhancing task performance.
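The core idea in the abstract can be illustrated with a toy sketch: a multiplicative Gaussian gain applied to a unit's spatial response profile pulls the response-weighted centroid (a proxy for the receptive field center) toward the attended location. This is a minimal illustration of the mechanism, not the paper's model, which is a deep neural network of primate visual cortex; all names, sizes, and parameter values below are illustrative assumptions.

```python
import numpy as np

# Illustrative 64x64 spatial grid.
H = W = 64
y, x = np.mgrid[0:H, 0:W]

def gaussian_gain(cy, cx, sigma=8.0, amplitude=1.5):
    """Multiplicative gain peaking at the attended location (cy, cx)."""
    return 1.0 + amplitude * np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))

def centroid(m):
    """Response-weighted center of mass, a simple proxy for RF center."""
    return (np.sum(y * m) / m.sum(), np.sum(x * m) / m.sum())

# A stand-in receptive field: a unit whose responses peak at (20, 20).
feature_map = np.exp(-((y - 20) ** 2 + (x - 20) ** 2) / (2 * 6.0 ** 2))

# Apply Gaussian gain centered on an attended location at (32, 32).
attended = gaussian_gain(32, 32) * feature_map

# The centroid of the gain-modulated responses shifts toward (32, 32),
# even though the unit's inputs are unchanged: gain alone produces an
# apparent change in spatial receptive field properties.
print(centroid(feature_map), centroid(attended))
```

The shift emerges purely from the multiplication; this is the sense in which receptive-field changes can be a concomitant of gain rather than an independent mechanism.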

Data availability

The images and composite grids used in this study as well as the code necessary to replicate our analyses are available in the Open Science Framework with the identifier 10.17605/OSF.IO/AGHQK.


Article and author information

Author details

  1. Kai J Fox

    Department of Psychology, Stanford University, Stanford, United States
    For correspondence
    kaifox@stanford.edu
    Competing interests
    The authors declare that no competing interests exist.
  2. Daniel Birman

    Department of Biological Structure, University of Washington, Seattle, United States
    For correspondence
    dbirman@uw.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-3748-6289
  3. Justin L Gardner

    Department of Psychology, Stanford University, Stanford, United States
    Competing interests
    The authors declare that no competing interests exist.

Funding

Washington Research Foundation (Postdoctoral Fellowship)

  • Daniel Birman

Research to Prevent Blindness

  • Justin L Gardner

Lions Club International

  • Justin L Gardner

Hellman Fellows Fund

  • Justin L Gardner

National Eye Institute (T32EY07031)

  • Daniel Birman

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: Procedures were approved in advance by the Stanford Institutional Review Board for human participants research (Protocol IRB-32120), and all observers gave written informed consent before participating.

Copyright

© 2023, Fox et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 725 views
  • 148 downloads
  • 11 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

  1. Kai J Fox
  2. Daniel Birman
  3. Justin L Gardner
(2023)
Gain, not concomitant changes in spatial receptive field properties, improves task performance in a neural network attention model
eLife 12:e78392.
https://doi.org/10.7554/eLife.78392


Further reading

    1. Neuroscience
    Moritz F Wurm, Doruk Yiğit Erigüç
    Research Article

    Recognizing goal-directed actions is a computationally challenging task, requiring not only the visual analysis of body movements, but also analysis of how these movements causally impact, and thereby induce a change in, those objects targeted by an action. We tested the hypothesis that the analysis of body movements and the effects they induce relies on distinct neural representations in superior and anterior inferior parietal lobe (SPL and aIPL). In four fMRI sessions, participants observed videos of actions (e.g. breaking stick, squashing plastic bottle) along with corresponding point-light-display (PLD) stick figures, pantomimes, and abstract animations of agent–object interactions (e.g. dividing or compressing a circle). Cross-decoding between actions and animations revealed that aIPL encodes abstract representations of action effect structures independent of motion and object identity. By contrast, cross-decoding between actions and PLDs revealed that SPL is disproportionally tuned to body movements independent of visible interactions with objects. Lateral occipitotemporal cortex (LOTC) was sensitive to both action effects and body movements. These results demonstrate that parietal cortex and LOTC are tuned to physical action features, such as how body parts move in space relative to each other and how body parts interact with objects to induce a change (e.g. in position or shape/configuration). The high level of abstraction revealed by cross-decoding suggests a general neural code supporting mechanical reasoning about how entities interact with, and have effects on, each other.

    1. Neuroscience
    Gyeong Hee Pyeon, Hyewon Cho ... Yong Sang Jo
    Research Article Updated

    Recent studies suggest that calcitonin gene-related peptide (CGRP) neurons in the parabrachial nucleus (PBN) represent aversive information and signal a general alarm to the forebrain. If CGRP neurons serve as a true general alarm, their activation would modulate both passive and active defensive behaviors depending on the magnitude and context of the threat. However, most prior research has focused on the role of CGRP neurons in passive freezing responses, with limited exploration of their involvement in active defensive behaviors. To address this, we examined the role of CGRP neurons in active defensive behavior using a predator-like robot programmed to chase mice. Our electrophysiological results revealed that CGRP neurons encode the intensity of aversive stimuli through variations in firing durations and amplitudes. Optogenetic activation of CGRP neurons during robot chasing elevated flight responses in both conditioning and retention tests, presumably by amplifying the perception of the threat as more imminent and dangerous. In contrast, animals with inactivated CGRP neurons exhibited reduced flight responses, even when the robot was programmed to appear highly threatening during conditioning. These findings expand the understanding of CGRP neurons in the PBN as a critical alarm system, capable of dynamically regulating active defensive behaviors by amplifying threat perception, and ensuring adaptive responses to varying levels of danger.