Neural Processing: Looking into the future
In daily life, we carry out numerous tasks that require a high level of visual awareness. For example, we can reach for a cup of coffee without looking directly at it, we can walk down the street without bumping into other people, and we can drive a car without thinking about it. Because we can perform these tasks so easily, it seems as though our brain can work out the positions of objects with very little effort. In fact, to do this the brain must process a lot of complex information.
We see things because receptors on the retina are excited by photons of light, and our brain represents this information in the visual cortex. However, if we move our eyes, receptors in a different part of the retina are excited, and the new information is stored in a different part of the visual cortex—but we still know that the objects we can see are in the same place. How, then, does the brain ensure that we can continue to perform tasks that require us to know exactly where objects are, while all these changes are going on?
It has been proposed that a mechanism called gain field coding makes this possible (Zipser and Andersen, 1988). This is a form of population coding: that is, it involves many neurons firing in response to a given visual image, rather than just one neuron firing. Neurons with gain field coding represent both the location of objects on the retina and the angle of gaze (i.e., where we are looking in space). From this information, computational models have shown that the location of objects in space can be calculated (Pouget and Sejnowski, 1997). However, for this mechanism to work effectively, the angle of gaze must be reliably represented and rapidly updated after an eye movement. Now, in eLife, Arnulf Graf and Richard Andersen of the California Institute of Technology show that the neural population code for eye movements and eye position in a region of the brain called the parietal cortex is accurate, and is updated rapidly when eye movements are planned and executed (Graf and Andersen, 2014).
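To make the idea of gain field coding concrete, the following toy simulation (our own illustrative sketch, not code or parameters from any of the studies discussed) builds a population of neurons whose firing rates are the product of Gaussian tuning for retinal location and a gain that varies linearly with eye position. A simple linear read-out of this population recovers the head-centred location of a target (retinal location plus eye position) to a good approximation, in the spirit of the basis-function models of Pouget and Sejnowski (1997).

```python
# Toy gain-field population (all parameter values are invented for illustration).
import numpy as np

rng = np.random.default_rng(0)

n_neurons = 200
pref_retinal = rng.uniform(-40, 40, n_neurons)    # preferred retinal location (deg)
gain_slope = rng.uniform(-0.03, 0.03, n_neurons)  # eye-position gain slope
gain_offset = rng.uniform(1.0, 2.0, n_neurons)    # baseline gain
sigma = 10.0                                      # retinal tuning width (deg)

def population_response(retinal_pos, eye_pos):
    """Firing rates: Gaussian retinal tuning x linear eye-position gain."""
    tuning = np.exp(-(retinal_pos[:, None] - pref_retinal) ** 2 / (2 * sigma ** 2))
    gain = gain_offset + gain_slope * eye_pos[:, None]
    return tuning * gain

# Simulate trials with random retinal locations and eye positions.
n_trials = 5000
retinal_pos = rng.uniform(-30, 30, n_trials)
eye_pos = rng.uniform(-20, 20, n_trials)
rates = population_response(retinal_pos, eye_pos)
rates += 0.05 * rng.standard_normal(rates.shape)  # response variability

# Head-centred target location = retinal location + eye position.
target = retinal_pos + eye_pos

# Fit a linear read-out on half the trials, test on the other half.
X = np.column_stack([rates, np.ones(n_trials)])
train = np.arange(n_trials) < n_trials // 2
test = ~train
w, *_ = np.linalg.lstsq(X[train], target[train], rcond=None)
pred = X[test] @ w
print(f"mean absolute error of the read-out: {np.mean(np.abs(pred - target[test])):.2f} deg")
```

The key point of the sketch is that no single neuron signals where the target is in space; the spatial location only becomes available when the multiplicative retinal and eye-position signals are combined across the population.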
To demonstrate this, Graf and Andersen trained monkeys to carry out a task in which they had to make saccades—rapid movements of the eyes (Figure 1)—while the responses of a population of neurons were recorded in an area of the parietal cortex called LIP (the lateral intraparietal area). Area LIP has previously been associated with behaviour related to eye movements (Gnadt and Andersen, 1988).
The task performed by the monkeys had to be carefully designed to eliminate a range of possible confounding factors. If the targets had been visible when the eye movements were made towards them, any detected neural activity might have represented the locations of those targets rather than the eye movements. Therefore, the eye movements were made in darkness and the monkey had to remember the planned movement. The task also separated the direction of the eye movement from the position of the eye before and after the movement. This allowed Graf and Andersen to examine, unambiguously, whether the neural population code represented the current eye position, the future eye position, or both.
To determine whether the population code in the parietal cortex contained information about eye position and eye movements, Graf and Andersen used statistical models to analyse the neural activity and estimate these two variables at specific points in time. If these variables can be read out by such a decoding analysis, then the information is present in the neural activity and is therefore, in principle, also available to the brain to drive behaviour.
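As a rough illustration of what a time-resolved decoding analysis involves (a hypothetical sketch on simulated data, not the authors' analysis or parameters), one can train a classifier on population spike counts in each time bin and ask when a variable such as the final eye position becomes readable:

```python
# Hypothetical time-resolved decoding sketch on simulated spike counts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_trials, n_neurons, n_bins = 400, 80, 20
final_eye_pos = rng.integers(0, 4, n_trials)       # 4 possible final eye positions

# Toy data: spike counts carry position information only from bin 8 onwards
# (standing in for the moment the target is flashed).
counts = rng.poisson(5.0, size=(n_trials, n_neurons, n_bins)).astype(float)
tuning = rng.normal(0, 1, size=(4, n_neurons))
for b in range(8, n_bins):
    counts[:, :, b] = np.maximum(counts[:, :, b] + 2.0 * tuning[final_eye_pos], 0.0)

# Decode the final eye position in each time bin with 5-fold cross-validation.
decoder = LogisticRegression(max_iter=1000)
accuracy = [
    cross_val_score(decoder, counts[:, :, b], final_eye_pos, cv=5).mean()
    for b in range(n_bins)
]
print(np.round(accuracy, 2))   # near chance (~0.25) early, high after bin 8
```

The time course of the cross-validated accuracy plays the same role as the decoding curves in the study: it shows when, during the trial, a given variable can be read out from the population.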
Graf and Andersen found that the initial eye position was well represented in the population code throughout the eye movement. In contrast, the coding of the final eye position began after the target location was flashed—the point in the task at which the animals were told where to move their eyes—and peaked after the eye movement was completed. This finding builds on existing evidence that eye position can be decoded from area LIP before and after a saccade to a visual target (Morris et al., 2013).
Contrary to recent suggestions (Xu et al., 2012), Graf and Andersen show that the population coding of the post-saccadic eye position signal was updated quickly after the saccade target was shown. There are two possible reasons for the discrepancy between these studies. First, Xu et al. assumed that eye position information could be characterised by a single number, the gain field index. Second, Xu et al. examined only single neurons. Although individual neurons represent fixation location and eye movements, different combinations of eye positions and target locations can cause some neurons to respond in the same way. Looking at a population of neurons removes this ambiguity, so that it is clear what the neurons are actually responding to—and this can be achieved with decoding analyses (Georgopoulos et al., 1986).
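A small numerical example (with invented tuning parameters, not values from any of these studies) shows why a single neuron can be ambiguous while the population is not: one multiplicative gain-field neuron can respond identically to two different combinations of eye position and target location, but a second neuron with different tuning separates them.

```python
# Toy demonstration of single-neuron ambiguity vs. population disambiguation.
import numpy as np

def rate(gain_slope, pref_target, eye_pos, target, sigma=10.0):
    """One neuron: linear eye-position gain x Gaussian target tuning."""
    gain = 1.0 + gain_slope * eye_pos
    tuning = np.exp(-(target - pref_target) ** 2 / (2 * sigma ** 2))
    return gain * tuning

# Condition A: eye at +10 deg, target at 0 deg.
# Condition B: eye at +20 deg, target chosen so neuron 1 fires identically.
target_B = np.sqrt(-2 * 10.0 ** 2 * np.log(1.5 / 2.0))

r1_A = rate(0.05, 0.0, 10.0, 0.0)
r1_B = rate(0.05, 0.0, 20.0, target_B)
print(f"neuron 1: {r1_A:.3f} vs {r1_B:.3f}  (identical: ambiguous on its own)")

# A second neuron with different gain slope and target preference differs.
r2_A = rate(-0.02, 15.0, 10.0, 0.0)
r2_B = rate(-0.02, 15.0, 20.0, target_B)
print(f"neuron 2: {r2_A:.3f} vs {r2_B:.3f}  (population pattern distinguishes A from B)")
```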
It is well recognised that the brain computes and produces behaviour on the basis of distributed representations of neural activity, in which patterns of activity across many neurons represent one action, and each neuron is involved in more than one action (Fetz, 1992; Rigotti et al., 2013). Distributed representations can be non-intuitive, but they are the way the brain represents and processes information. Graf and Andersen illustrate how decoding can extract information from a distributed representation across a population of neurons, and show that this approach can resolve debates about neural coding. Their study also points to the importance of recording from large neural populations when investigating how complex tasks are performed, so that the space of variables encoded by the population is fully explored.
References
- Fetz EE (1992) Are movement parameters recognizably coded in the activity of single neurons? Behavioral and Brain Sciences 15:679–690. https://doi.org/10.1017/CBO9780511529788.008
- Gnadt JW, Andersen RA (1988) Memory related motor planning activity in posterior parietal cortex of macaque. Experimental Brain Research 70:216–220.
- Morris AP, Bremmer F, Krekelberg B (2013) Eye-position signals in the dorsal visual system are accurate and precise on short timescales. Journal of Neuroscience 33:12395–12406. https://doi.org/10.1523/JNEUROSCI.0576-13.2013
- Pouget A, Sejnowski TJ (1997) Spatial transformations in the parietal cortex using basis functions. Journal of Cognitive Neuroscience 9:222–237. https://doi.org/10.1162/jocn.1997.9.2.222
Copyright
© 2014, Costa and Averbeck
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.