Visual Cognition: In sight, in mind

A region of the brain called the perirhinal cortex represents both what things look like and what they mean.
Mariam Aly (corresponding author)
Columbia University, United States

When we look around at the world, we can appreciate what things look like and also what they are used for. For example, when we look at a couch, we see its long flat surface, its cushions, and its back. We also know that a couch is a good place to sit or nap. How does the brain represent, and integrate, these different kinds of information? This is a tricky question because these details are often related. A futon and a couch have similar functions and they look similar too. Because of this, it can be difficult to tell whether a given brain region codes for an object’s appearance (known as a percept) or its function (a concept).

Now, in eLife, Chris Martin, Morgan Barense and colleagues – who are based at the University of Toronto, Mount Allison University, the Rotman Research Institute, and Queen's University in Kingston – report how they have been able to tease out percepts and concepts in the brain (Martin et al., 2018). Their ingenious approach involved using the names of pairs of objects that look similar but have different functions, and other pairs with similar functions but different looks. For example, a tennis ball and a lemon are both roundish and yellow, but serve different purposes; a tennis ball and a tennis racket, on the other hand, do not look alike but are both involved in playing tennis.

Martin et al. asked over a thousand people to rate how much each pair of named objects looked alike, and another equally large group to describe conceptual features of those objects, for example, their function, or where they are typically found. For each pair of objects, these experiments gave one number that indicated the perceptual similarity of the objects, and a second number that indicated their conceptual similarity. Equipped with this information, Martin et al. could test different hypotheses of how percepts and concepts are represented in the brain.
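The key output of these rating experiments is a pair of similarity matrices, one perceptual and one conceptual, with one entry per object pair. The sketch below, in Python, shows the general shape of such data; the object names and numbers are invented for illustration, and the real stimulus set, rating scales, and aggregation procedure are those described by Martin et al. (2018).

```python
import numpy as np

# Hypothetical stimulus names; Martin et al. used a much larger set.
objects = ["lemon", "tennis ball", "tennis racket", "couch", "futon"]
n = len(objects)

# One similarity matrix per kind of judgment: entry [i, j] holds the
# similarity of object i to object j, averaged across raters.
perceptual = np.eye(n)  # every object is maximally similar to itself
conceptual = np.eye(n)

def set_pair(matrix, a, b, ratings):
    """Store the mean of one pair's ratings, symmetrically."""
    i, j = objects.index(a), objects.index(b)
    matrix[i, j] = matrix[j, i] = float(np.mean(ratings))

# Invented numbers on a 0-1 scale: a lemon and a tennis ball look alike
# but share few conceptual features; a ball and a racket are the reverse.
set_pair(perceptual, "lemon", "tennis ball", [0.90, 0.80, 0.85])
set_pair(conceptual, "lemon", "tennis ball", [0.10, 0.20, 0.15])
set_pair(perceptual, "tennis ball", "tennis racket", [0.15, 0.20, 0.10])
set_pair(conceptual, "tennis ball", "tennis racket", [0.90, 0.95, 0.85])
```

Because the two matrices are estimated independently, pairs like lemon/tennis ball and tennis ball/tennis racket pull them apart, which is what makes the competing hypotheses below testable.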

One possibility was that some brain regions represent visual form (Martin and Chao, 2001) while others represent the function or meaning of objects (Patterson et al., 2007). Another possibility, not mutually exclusive with the first, was that some brain regions simultaneously represent both (Barense et al., 2012a; Clarke and Tyler, 2014; Murray and Bussey, 1999).

Functional magnetic resonance imaging (fMRI) measures brain activity on a moment-by-moment basis. Martin et al. used fMRI to observe how activity in different brain regions changed when individuals were shown the names of the objects and performed one of two tasks: in one, they judged what the object looked like; in the other, they judged its conceptual features (e.g., what it is used for). Martin et al. could then take the patterns of activity in different brain regions during these two tasks and relate them to the ratings of perceptual and conceptual similarity obtained earlier, an approach known as representational similarity analysis (Kriegeskorte et al., 2008).
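The logic of that comparison can be sketched in a few lines. Everything below is a placeholder rather than the authors' pipeline: the fMRI patterns are random numbers, and the two behavioral matrices stand in for the similarity ratings described above.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects, n_voxels = 5, 200

# Placeholder data: one fMRI activity pattern per object name for a single
# brain region, plus stand-in perceptual and conceptual similarity matrices.
roi_patterns = rng.standard_normal((n_objects, n_voxels))
perceptual = np.corrcoef(rng.standard_normal((n_objects, 10)))
conceptual = np.corrcoef(rng.standard_normal((n_objects, 10)))

# Neural representational dissimilarity: 1 - correlation between the
# activity patterns evoked by each pair of objects (a condensed vector).
neural_rdm = pdist(roi_patterns, metric="correlation")

def behavioral_rdm(similarity):
    """Turn a square similarity matrix into a pairwise dissimilarity vector."""
    return 1.0 - squareform(similarity, checks=False)

# A region 'represents' a kind of similarity to the extent that its neural
# dissimilarities rank-correlate with the behavioral ones.
rho_p, _ = spearmanr(neural_rdm, behavioral_rdm(perceptual))
rho_c, _ = spearmanr(neural_rdm, behavioral_rdm(conceptual))
print(f"perceptual fit: {rho_p:.2f}   conceptual fit: {rho_c:.2f}")
```

Running this kind of comparison region by region, and separately for each task, is what lets one ask whether a given area tracks perceptual structure, conceptual structure, or both.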

Martin et al. hypothesized that a region of the brain called the perirhinal cortex would represent both what things look like and what they mean. Prior studies had separately linked this region to each of these functions (e.g., Barense et al., 2012b; Wright et al., 2015), but could not disentangle perceptual and conceptual similarity. Having overcome that challenge with their experimental design, Martin et al. found that activity patterns in the perirhinal cortex did indeed reflect both perceptual and conceptual similarity. This result held whether individuals were judging what objects looked like or what they meant, suggesting that this region of the brain may integrate percepts and concepts relatively automatically. Other regions represented either what things looked like or what they meant, but only the perirhinal cortex integrated the two (Figure 1).

Figure 1. How visual and conceptual similarity are represented in different regions of the brain.

Objects that are represented similarly in a given brain region are shown close together, with thick solid lines connecting them. Objects that are somewhat similar are shown at intermediate distance, with thin solid lines connecting them. Objects that are represented distinctly are shown further apart, with thin dashed lines between them. (A) A region of the brain called the lateral occipital cortex, shown in blue, represents objects that look alike – like a lemon and a tennis ball – in similar ways. (B) The temporal pole and parahippocampal cortex, shown in green, represent objects that are conceptually related – like a tennis ball and tennis racket – in similar ways. (C) The perirhinal cortex, shown in red, integrates these different kinds of information such that objects that are conceptually related or that look alike are represented in similar ways.

Image credit: object images courtesy of Bainbridge and Oliva (2015).

Martin et al. have furthered our understanding of how we can perceive and understand objects, and their findings open some exciting avenues for future research. It remains unclear whether the exact same neurons in the perirhinal cortex represent both percepts and concepts at the same time, or if they are represented by distinct, but intermingled, populations of neurons. fMRI allows researchers to see at a general level which brain regions are active, but it cannot identify exactly which neurons are responding or how. Future studies that record from individual neurons will provide a complementary picture to this latest work.

Article and author information

Author details

Mariam Aly is in the Department of Psychology, Columbia University, New York, United States

For correspondence: ma3631@columbia.edu
Competing interests: No competing interests declared
ORCID: 0000-0003-4033-6134

Publication history

Version of Record published: March 1, 2018 (version 1)

Copyright

© 2018, Aly

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Cite this article: Mariam Aly (2018) Visual Cognition: In sight, in mind. eLife 7:e35663. https://doi.org/10.7554/eLife.35663
