Vision: Framing orientation selectivity

The debate on the neural basis of orientation selectivity in the primary visual cortex continues.
  1. Floris P de Lange (corresponding author)
  2. Matthias Ekman (corresponding author)
  1. Radboud University, Netherlands

Color, contrast and motion are only some of the many things our brain needs to process when it receives information about our surroundings. From the moment light hits our eyes, the visual input is encoded and relayed through a myriad of processing steps and networks.

In a region of the brain called the primary visual cortex or V1, the neurons are arranged in a specific way that allows the visual system to calculate where objects are in space. That is, neurons are organized ‘retinotopically’, meaning that neighboring areas in the retina correspond to neighboring areas in V1. Moreover, in humans, neurons sensitive to the same orientation are located in so-called orientation columns. For example, in one column, all neurons only respond to a horizontal stimulus, but not to diagonal or vertical ones (Figure 1A). Different orientation columns sit next to each other, repeating every 0.5–1 mm, and together cover the entire visual field.

How can we discern orientation selectivity from fMRI measurements?

(A) Differently oriented gratings (here vertical and horizontal) elicit different activity patterns in the primary visual cortex (illustrated by the 3x3 voxel matrix, where the color of each voxel is proportional to its activity: red is very active, blue is inactive). Multivariate pattern analysis techniques are used to decode the orientation of the grating from the voxel pattern. Roth et al. argue that variations in activity patterns are caused not by differences in the orientation of the stimuli per se, but by ‘vignetting’ – a term they use to describe the interaction between the orientation of the stimulus and a change in light intensity that occurs, for instance, at the edge of the frame within which the stimulus is presented. (B) Applying different frames to the same vertical stimulus (left), e.g. a radial frame (top) or an angular frame (bottom), modulates the fMRI activity pattern in a way that is predicted by their computational model. (C) The same oriented grating can give rise to an opposite activity pattern, depending on whether it is projected through a radial frame (top) or an angular frame (bottom).

Scientists often use a technique called functional magnetic resonance imaging, or fMRI for short, to study brain circuits. In 2005, two research groups managed to read out the orientation of a visual stimulus from fMRI activity patterns, a development that was met with a lot of excitement – but also some skepticism (Haynes and Rees, 2005; Kamitani and Tong, 2005). The resolution of fMRI is usually insufficient to image the orientation columns in V1: an ‘unbiased’ sample at the resolution of ~2–3 mm would capture neurons with all possible orientation preferences. How, then, was it possible to draw detailed conclusions from such a coarse-scale measure as fMRI?

Initially, it was speculated that even though every fMRI voxel contains columns of all orientations (because a voxel is much larger than the spatial scale of orientation columns), there are subtle differences between voxels in terms of the proportion of the different orientation columns within each voxel. For example, in one voxel, more horizontal than vertical columns may be found (Boynton, 2005). This is referred to as fine-scale bias. These subtle differences are random in origin but stable over time. Therefore, a machine learning algorithm can read out the orientation on the basis of these small differences.
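The fine-scale bias account lends itself to a toy simulation. The sketch below is purely illustrative and is not Roth et al.'s model: all numbers (voxel count, columns per voxel, tuning curve, noise level) are arbitrary assumptions. It builds voxels that each average the responses of many randomly tuned orientation columns, and shows that a simple nearest-centroid readout can still recover the stimulus orientation from the resulting small but stable biases:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers: 50 voxels, each averaging 1000 orientation columns
# whose preferred orientations are drawn at random (the "fine-scale bias").
n_voxels, n_columns = 50, 1000
prefs = rng.uniform(0, 180, size=(n_voxels, n_columns))

def voxel_response(stim_deg, noise=0.05):
    """Each column responds via a 180°-periodic tuning curve; a voxel's
    signal is the mean over its columns, plus measurement noise."""
    d = np.deg2rad(prefs - stim_deg)
    tuning = np.exp(np.cos(2 * d))
    return tuning.mean(axis=1) + rng.normal(0, noise, n_voxels)

# Average "training" patterns for vertical (90°) and horizontal (0°) gratings.
train_v = np.mean([voxel_response(90) for _ in range(20)], axis=0)
train_h = np.mean([voxel_response(0) for _ in range(20)], axis=0)

def decode(resp):
    """Nearest-centroid classifier on the multivoxel pattern."""
    if np.linalg.norm(resp - train_v) < np.linalg.norm(resp - train_h):
        return 90
    return 0

test_trials = ([(90, voxel_response(90)) for _ in range(50)]
               + [(0, voxel_response(0)) for _ in range(50)])
accuracy = np.mean([decode(r) == label for label, r in test_trials])
```

Even though every voxel samples all orientations, the random imbalance in column proportions leaves each voxel with a slight, reproducible orientation preference, which is all the classifier needs.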

Later studies suggested that the ability to decode orientation from fMRI patterns instead originates from activity differences at multiple spatial scales (Swisher et al., 2010), or even exclusively at a coarse spatial scale, i.e. at the level of retinotopic maps (Freeman et al., 2011). For example, clockwise orientation columns may be over-represented in neurons encoding the upper right part of our surrounding visual space, whereas counter-clockwise orientation columns may be over-represented in neurons encoding the upper left part of visual space (Sasaki et al., 2006). This is an example of coarse-scale bias. Now, in eLife, Zvi Roth, David Heeger and Elisha Merriam of the National Institutes of Health and New York University add a new twist to this debate (Roth et al., 2018).

According to previous research, the edges of a visual stimulus (e.g., the outer and inner contours of a disc with horizontal or vertical stripes) create coarse-scale differences in visual activity (Carlson, 2014). These edges generate decodable activation patterns, and indeed stimuli with blurred edges are harder to decode than those with sharper edges. Roth et al. show, in an elegant combination of computational modeling and empirical fMRI work, that orientation decoding is indeed sensitive to such ‘edge effects’. However, the decoding does not depend on the edge per se, but on the interaction (which Roth et al. term ‘vignetting’) between the orientation of the stimulus and the frame within which the stimulus is presented (for example, a vertical pattern presented within a circular frame with a hole in the middle).

The researchers presented the stimuli within a radial or angular frame, to create different vignettes (Figure 1B). Their computational model predicted that a pattern oriented in the same way would create an opposite coarse-scale bias under these two different sets of vignettes. Indeed, their empirical data confirmed the model’s predictions: a vertical pattern within a radial frame showed an opposite bias to a vertical pattern within an angular frame, but the same bias as a horizontal pattern within an angular frame. This suggests that the two vignettes shifted the response pattern by 90° (Figure 1C).
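The core idea that the frame reshapes the coarse-scale response can be illustrated with a toy stimulus model. The sketch below is a minimal illustration, not Roth et al.'s actual computational model: all image sizes, spatial frequencies, and the choice of pooled contrast energy as a stand-in for the fMRI response are assumptions made for the example. It applies a radial (spoke-shaped) and an angular (ring-shaped) modulator to the same vertical grating and pools local contrast energy into coarse "voxels", showing that the two frames impose clearly different coarse-scale patterns:

```python
import numpy as np

def make_grid(n=128):
    # Image coordinates spanning [-1, 1] in both dimensions
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    return x, y

def grating(x, y, orientation_deg, cycles=8):
    # Sinusoidal luminance grating; 0° = vertical stripes (varies along x)
    t = np.deg2rad(orientation_deg)
    return np.sin(2 * np.pi * cycles * (x * np.cos(t) + y * np.sin(t)))

def radial_modulator(x, y, spokes=8):
    # Mask varying with polar angle: spoke-shaped aperture
    theta = np.arctan2(y, x)
    return (1 + np.cos(spokes * theta)) / 2

def angular_modulator(x, y, rings=4):
    # Mask varying with eccentricity: ring-shaped aperture
    r = np.hypot(x, y)
    return (1 + np.cos(2 * np.pi * rings * r)) / 2

def coarse_energy(img, block=8):
    # Pool local contrast energy into coarse "voxels" (crude stand-in
    # for a coarse-scale fMRI measurement)
    n = img.shape[0] // block
    return (img**2).reshape(n, block, n, block).mean(axis=(1, 3))

x, y = make_grid()
annulus = (np.hypot(x, y) > 0.2) & (np.hypot(x, y) < 0.9)

# Same vertical grating, two different frames
v_rad = coarse_energy(grating(x, y, 0) * radial_modulator(x, y) * annulus)
v_ang = coarse_energy(grating(x, y, 0) * angular_modulator(x, y) * annulus)

# The two frames yield distinct coarse-scale patterns
r = np.corrcoef(v_rad.ravel(), v_ang.ravel())[0, 1]
```

The identical grating thus produces systematically different coarse-scale maps depending only on the frame through which it is viewed, which is the essence of the vignetting confound (Roth et al.'s model additionally uses orientation-selective filters to predict the sign of the bias).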

Does this have any implications for previous studies using ‘vignetted’ stimuli (e.g., Kamitani and Tong, 2005; Haynes and Rees, 2005)? Fortunately, the conclusions of these studies do not directly depend on the relative contribution of coarse-scale and fine-scale bias in activity patterns. However, the study by Roth et al. serves as a cautionary tale that multivariate pattern analyses – when used to identify activity patterns in the brain – have their limitations (Naselaris and Kay, 2015). Vignetting could produce activity patterns that resemble orientation tuning even in neurons that do not process orientation. This also applies to other techniques, such as electrophysiological recordings, if they use ‘vignetted’ stimuli.

While Roth et al. find no evidence for fine-scale biases, the strength of the correlation between the predicted and measured orientation preference is arguably modest and leaves room for other sources of orientation information. Some scientists argue that biases related to the frame within which stimuli are presented are not the sole contributor to orientation decoding in the visual cortex and that other sources of orientation selectivity might co-exist alongside vignetting (Wardle et al., 2017).

In conclusion, Roth et al. make a compelling case for how the frame in which a stimulus is presented can dramatically change the measured orientation preference, uncovering an important source of measured orientation information in brain recordings.

References

Article and author information

Author details

  1. Floris P de Lange

    Floris P de Lange is at the Donders Institute, Radboud University Nijmegen, Nijmegen, The Netherlands

    For correspondence
    floris.delange@donders.ru.nl
    Competing interests
    No competing interests declared
ORCID iD: 0000-0002-6730-1452
  2. Matthias Ekman

    Matthias Ekman is at the Donders Institute, Radboud University Nijmegen, Nijmegen, The Netherlands

    For correspondence
    matthias.ekman@gmail.com
    Competing interests
    No competing interests declared
ORCID iD: 0000-0003-1254-1392

Publication history

  1. Version of Record published: August 14, 2018 (version 1)

Copyright

© 2018, de Lange et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

de Lange FP, Ekman M (2018) Vision: Framing orientation selectivity. eLife 7:e39762. https://doi.org/10.7554/eLife.39762