Neural Processing: Looking into the future

  1. Vincent D Costa (corresponding author)
  2. Bruno B Averbeck
  Laboratory of Neuropsychology, National Institute of Mental Health, United States

In daily life, we carry out numerous tasks that require a high level of visual awareness. For example, we can reach for a cup of coffee without looking directly at it, we can walk down the street without bumping into other people, and we can drive a car without thinking about it. Because we can perform these tasks so easily, it seems as though our brain can work out the positions of objects with very little effort. In fact, to do this the brain must process a lot of complex information.

We see things because receptors on the retina are excited by photons of light, and our brain represents this information in the visual cortex. However, if we move our eyes, receptors in a different part of the retina are excited, and the new information is represented in a different part of the visual cortex, yet we still know that the objects we can see are in the same place. How, then, does the brain ensure that we can continue to perform tasks that require us to know exactly where objects are, while all these changes are going on?

It has been proposed that a mechanism called gain field coding makes this possible (Zipser and Andersen, 1988). This is a form of population coding: that is, it involves many neurons firing in response to a given visual image, rather than just one. Neurons with gain field coding represent both the location of objects on the retina and the angle of gaze (i.e., where we are looking in space). Computational models have shown that the location of objects in space can be calculated from this combined information (Pouget and Sejnowski, 1997). However, for this mechanism to work effectively, the angle of gaze must be reliably represented and rapidly updated after each eye movement. Now, in eLife, Arnulf Graf and Richard Andersen of the California Institute of Technology show that the neural population code for eye movements and eye position in a region of the brain called the parietal cortex is accurate, and is updated rapidly as eye movements are planned and executed (Graf and Andersen, 2014).
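
The computational idea can be sketched in a few lines of code. The example below is our own illustration, not taken from any of the studies cited here: it simulates a population of hypothetical gain-field neurons whose responses are Gaussian functions of the target's retinal position multiplied by a gain that depends linearly on eye position, and then fits a simple linear readout that recovers the target's location in head-centred space. All tuning parameters, neuron counts and noise levels are invented for illustration.

```python
# Minimal sketch of gain field coding (illustrative parameters only).
# Each model neuron: Gaussian tuning to retinal target position, scaled by a
# linear eye-position gain. A linear readout of the population recovers the
# target's head-centred location (retinal position + eye position).
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 200

# Random preferred retinal positions and eye-position gain slopes per neuron.
pref_retinal = rng.uniform(-20, 20, n_neurons)    # degrees
gain_slope = rng.uniform(-0.03, 0.03, n_neurons)  # gain change per degree of eye position
gain_offset = 1.0

def population_response(retinal_pos, eye_pos):
    """Firing rates: Gaussian retinal tuning scaled by an eye-position gain field."""
    tuning = np.exp(-0.5 * ((retinal_pos - pref_retinal) / 8.0) ** 2)
    gain = gain_offset + gain_slope * eye_pos
    return tuning * gain

# Simulate trials with random retinal target positions and eye positions.
n_trials = 2000
retinal = rng.uniform(-20, 20, n_trials)
eye = rng.uniform(-15, 15, n_trials)
head_centred = retinal + eye                      # the quantity we want to read out

rates = np.array([population_response(r, e) for r, e in zip(retinal, eye)])
rates += rng.normal(0, 0.05, rates.shape)         # response variability

# Linear least-squares readout of head-centred target position from the population.
X = np.column_stack([rates, np.ones(n_trials)])
weights, *_ = np.linalg.lstsq(X, head_centred, rcond=None)
decoded = X @ weights

print("decoding error (deg, RMS):", np.sqrt(np.mean((decoded - head_centred) ** 2)))
```

The sketch also makes clear why the speed of updating matters: if the eye-position gain lagged behind an eye movement, a readout of this kind would transiently point to the wrong location in space.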

To demonstrate this, Graf and Andersen had monkeys carry out a task in which they made saccades (rapid eye movements; Figure 1), while the responses of a population of neurons in an area of the parietal cortex called LIP (the lateral intraparietal area) were recorded. Area LIP has previously been associated with behaviour related to eye movements (Gnadt and Andersen, 1988).

Figure 1. How the brain represents information about the locations of objects can be revealed through memory-guided saccade tasks, performed in the dark.

To find out how the neurons in area LIP of the parietal cortex respond to eye movements and eye position, Graf and Andersen trained monkeys to rapidly move (saccade) their eyes to the remembered location of a target, while the response of their neurons was monitored. The monkey initially fixated on one of nine target positions (top). Then, one of the surrounding target locations was flashed before disappearing (middle). The animals had to remember the target location for a short period of time and then move their eyes to look at this location when the fixation point disappeared (bottom). The experiments were carried out in the dark to eliminate the possibility that the recorded neural response was caused by any other visual information.

The task performed by the monkeys had to be carefully designed to eliminate a range of possible confounding factors. If the targets had been visible when the eye movements were made towards them, any detected neural activity might have represented the locations of those targets rather than the eye movements themselves. Therefore, the eye movements were made in darkness and the monkey had to remember the planned movement. The task also separated the direction of the eye movement from the position of the eye before and after the movement, which allowed Graf and Andersen to examine unambiguously whether the neural population code represented the current eye position, the future eye position, or both.

To determine whether the population code in the parietal cortex contained information about eye position and eye movements, Graf and Andersen used statistical models to analyse the neural activity and estimate these two variables at specific points in time. The logic is that if these variables can be read out of the recorded activity by a decoding analysis, the same information is available to the brain and could therefore be used to drive behaviour.
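
As a concrete illustration of this decoding logic (not of the authors' actual analysis), the sketch below simulates spike counts from a hypothetical population of neurons tuned to one of nine fixation positions, and asks how well a cross-validated classifier can recover the position from the population response. The tuning model, the number of neurons and the choice of classifier are all assumptions made purely for illustration.

```python
# Minimal decoding sketch (illustrative, not the authors' analysis): can eye
# position be read out from simulated population spike counts?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_neurons, n_positions, trials_per_pos = 60, 9, 40

# Each neuron has a preferred position; its mean spike count falls off with
# distance from that position.
preferred = rng.integers(0, n_positions, n_neurons)
positions = np.repeat(np.arange(n_positions), trials_per_pos)

mean_counts = 2.0 + 8.0 * np.exp(
    -0.5 * ((positions[:, None] - preferred[None, :]) / 1.5) ** 2
)
counts = rng.poisson(mean_counts)                  # trials x neurons spike-count matrix

# Cross-validated decoding of eye position from the population spike counts.
decoder = LogisticRegression(max_iter=2000)
accuracy = cross_val_score(decoder, counts, positions, cv=5)
print("decoding accuracy: %.2f (chance = %.2f)" % (accuracy.mean(), 1 / n_positions))
```

Applying a decoder of this kind at successive points in time is what makes it possible to ask when each signal, such as the initial or the final eye position, becomes available in the population.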

Graf and Andersen found that the initial eye position was represented well by the population throughout the eye movement. In contrast, coding of the final eye position began after the target location was flashed (the point in the task at which the animals were told where to move their eyes) and peaked after the eye movement was completed. This finding builds on existing evidence that eye position can be decoded from area LIP before and after a saccade to a visual target (Morris et al., 2013).

Contrary to recent suggestions (Xu et al., 2012), Graf and Andersen show that population coding of the post-saccadic eye position signal was updated quickly after the saccade target was shown. There are two possible reasons for the discrepancies between these studies. First, Xu et al. assumed that eye position information could be characterised by a single number called the gain field index. Second, Xu et al. examined only single neurons. Even though individual neurons represent fixation location and eye movements, different combinations of eye positions and target locations can cause a single neuron to respond in the same way. Looking at a population of neurons removes this ambiguity, making it clear what the neurons are actually responding to, and this can be achieved with decoding analyses (Georgopoulos et al., 1986).
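
To see why single neurons can be ambiguous where a population is not, consider the toy example below (hypothetical tuning parameters, not data from either study): two different combinations of eye position and target location evoke exactly the same firing rate in one model neuron, yet the pattern of activity across even two neurons tells the conditions apart.

```python
# Toy illustration of single-neuron ambiguity (hypothetical tuning parameters).
import numpy as np

def neuron_rate(target, eye, pref_target, gain_slope):
    """Gaussian target tuning (sigma = 5 deg) scaled by a linear eye-position gain."""
    return np.exp(-0.5 * ((target - pref_target) / 5.0) ** 2) * (1.0 + gain_slope * eye)

# Condition A: fixate straight ahead, target at 0 deg.
# Condition B: fixate at 10 deg; target chosen so neuron 1 fires at exactly the same rate.
target_b = 5.0 * np.sqrt(2.0 * np.log(1.5))   # ~4.5 deg, solves the matching condition

for name, pref, slope in [("neuron 1", 0.0, 0.05), ("neuron 2", 10.0, -0.02)]:
    rate_a = neuron_rate(0.0, 0.0, pref, slope)
    rate_b = neuron_rate(target_b, 10.0, pref, slope)
    print(f"{name}: condition A = {rate_a:.3f}, condition B = {rate_b:.3f}")

# Neuron 1 alone cannot tell condition A from condition B; the pattern across the
# two neurons can, which is what a population decoding analysis exploits.
```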

It is well recognised that the brain computes and produces behaviour on the basis of distributed representations of neural activity, in which patterns of activity across many neurons represent one action, and each neuron is involved in more than one action (Fetz, 1992; Rigotti et al., 2013). Distributed representations can be non-intuitive, but they are the way the brain represents and processes information. Graf and Andersen illustrate the use of decoding to extract information from a distributed representation across a population of neurons, and show that this approach can resolve debates about neural coding. Their study also points to the importance of recording from large neural populations when investigating how complex tasks are performed, so that the space in which population coding operates is fully explored.

References

Gnadt JW, Andersen RA (1988) Memory related motor planning activity in posterior parietal cortex of macaque. Experimental Brain Research 70:216–220.

Article and author information

Author details

  1. Vincent D Costa

    Laboratory of Neuropsychology, National Institute of Mental Health, Maryland, United States
    For correspondence
    vincent.costa@nih.gov
    Competing interests
    The authors declare that no competing interests exist.
  2. Bruno B Averbeck

    Laboratory of Neuropsychology, National Institute of Mental Health, Maryland, United States
    Competing interests
    The authors declare that no competing interests exist.


Copyright

© 2014, Costa and Averbeck

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 1,069
    views
  • 35
    downloads
  • 2
    citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Download links

A two-part list of links to download the article, or parts of the article, in various formats.

Downloads (link to download the article as PDF)

Open citations (links to open the citations from this article in various online reference manager services)

Cite this article (links to download the citations from this article in formats compatible with various reference manager tools)

Cite this article

Costa VD, Averbeck BB (2014) Neural Processing: Looking into the future. eLife 3:e03146. https://doi.org/10.7554/eLife.03146
