Object Recognition: Do rats see like we see?
In our eyes, cells called photoreceptors convert the world around us into a pixel-like representation. Our brains must then reorganize this into a representation that reflects the identities of the objects we are looking at. The same object can be represented by very different pixel patterns, depending on its distance from us, the viewing angle and the lighting conditions. Conversely, different objects can be represented by pixel patterns that are similar. This is what makes object recognition such a tremendously challenging problem, and we still do not fully understand how our brains solve it.
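To make this ambiguity concrete, here is a minimal sketch in Python (a toy illustration, not drawn from any of the studies discussed here): two invented "objects" are rendered onto a small pixel grid, and the pixel-wise distance between two views of the same object comes out larger than the distance between two different objects seen at the same position.

```python
import numpy as np

def render(obj, shift):
    """Place a small binary 'object' on a blank 16x16 image at a horizontal offset."""
    img = np.zeros((16, 16))
    img[6:6 + obj.shape[0], 6 + shift:6 + shift + obj.shape[1]] = obj
    return img.ravel()

square = np.ones((4, 4))                                    # invented object A
tee = np.array([[1., 1., 1.], [0., 1., 0.], [0., 1., 0.]])  # invented object B

# Pixel-wise (Euclidean) distances between the flattened images.
same_object_different_view = np.linalg.norm(render(square, 0) - render(square, 5))
different_objects_same_view = np.linalg.norm(render(square, 0) - render(tee, 0))
print(same_object_different_view > different_objects_same_view)  # True: raw pixels mislead
```

Any recognition scheme that compared raw pixels directly would therefore confuse object identity with viewing conditions.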
Nonhuman primates (such as rhesus monkeys) are routinely used to study object recognition because their brains are similar to ours in many ways. However, there are advantages to working with mice and rats, including access to an array of modern biotechnological tools that have been optimized for these species. These tools include sophisticated ways to measure neural activity (Svoboda and Yasuda, 2006), to manipulate neural activity (Fenno et al., 2011), and to map how neurons are connected together within and between brain areas (Oh et al., 2014).
Skepticism about using rodents to gain insight into object recognition has largely focused on the ways in which their visual systems deviate from our own. For example, the retinae of mice and rats are specialized for seeing in the dark, and they lack a region called the fovea that allows humans to see objects in great detail at the center of the gaze. The visual cortex is also organized differently in primates and rodents with regard to how neurons with similar preferences for visual stimuli are clustered together within each brain area, and a much smaller fraction of the rodent cortex is devoted to visual processing. In light of all of these differences, can we really learn much about how our brains recognize objects by studying how rodents see?
In an earlier study, Davide Zoccolan and colleagues presented behavioral evidence that rats are capable of identifying objects under variations in viewing conditions (Zoccolan et al., 2009). Now, in eLife, Zoccolan and co-workers at SISSA in Trieste, the Istituto Italiano di Tecnologia and Harvard Medical School – including Sina Tafazoli and Houman Safaai as joint first authors – present evidence that this behavior is supported by four visual areas of the brain that are arranged in a functional hierarchy (Tafazoli et al., 2017). This is analogous to how object processing happens in the primate brain (DiCarlo et al., 2012).
Researchers had previously relied on anatomical evidence to argue that visual brain areas in rats are organized in a hierarchical fashion (Coogan and Burkhalter, 1993). Tafazoli et al. recorded the activity of neurons in four of these areas – termed V1, LM, LI and LL – in response to different objects while systematically varying variables such as the position, size and luminance of each object. With these data, they quantified how much information about the identity of each object was present in each brain area, as well as how that information was formatted.
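In spirit, such measurements resemble a cross-validated decoding analysis. The sketch below is purely schematic, using simulated responses rather than the authors' data or code (the population size, noise levels and variable names are all assumptions): a linear classifier is trained to report object identity from population responses recorded under one set of viewing conditions and tested on held-out conditions, so above-chance accuracy indicates identity information that survives the variation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_neurons, n_cond = 60, 40  # hypothetical population size and number of viewing conditions

# Each object gets a fixed "identity" tuning vector; each viewing condition
# (position, size, luminance, ...) adds its own shared variability on top.
tuning = {0: rng.standard_normal(n_neurons), 1: rng.standard_normal(n_neurons)}
X, y, cond = [], [], []
for c in range(n_cond):
    nuisance = rng.standard_normal(n_neurons)  # condition-specific component
    for obj in (0, 1):
        X.append(tuning[obj] + 0.8 * nuisance + 0.3 * rng.standard_normal(n_neurons))
        y.append(obj)
        cond.append(c)
X, y, cond = np.array(X), np.array(y), np.array(cond)

# Train on half the conditions, test on the held-out half: above-chance
# accuracy means identity can be read out linearly despite the variation.
train, test = cond < n_cond // 2, cond >= n_cond // 2
clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
print("generalization accuracy:", clf.score(X[test], y[test]))
```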
A key insight came from analyzing the degree to which changes in the neural responses to different objects could be attributed to differences in object luminance as opposed to object shape. Compared to the other brain areas, the firing rates of neurons in V1 (the first area in the hierarchy) depended more strongly on the amount of luminance within the region of the visual field that each neuron was sensitive to. Moving up the hierarchy, an increasingly large proportion of the neurons' responses reflected information about the shape of the object. At the same time, information about object identity was increasingly formatted in a manner that makes it easy for higher brain areas to access (DiCarlo and Cox, 2007).
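One way to picture the luminance analysis (again a hypothetical sketch, not the authors' actual method) is to regress each neuron's firing rate on the luminance falling within its receptive field and ask how much of the response variance that regression explains; in the recordings this dependence was strongest in V1 and weakened along the hierarchy. All of the simulated tuning below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stimuli = 200
luminance = rng.uniform(0, 1, n_stimuli)       # luminance inside the receptive field
shape_signal = rng.standard_normal(n_stimuli)  # stand-in for shape-driven input

def luminance_r2(rate):
    """Fraction of response variance explained by a linear fit to luminance."""
    slope, intercept = np.polyfit(luminance, rate, 1)
    residual = rate - (slope * luminance + intercept)
    return 1 - residual.var() / rate.var()

# A "V1-like" cell driven mostly by luminance versus an "LL-like" cell driven
# mostly by shape; the luminance R^2 falls from the first to the second.
v1_rate = 0.9 * luminance + 0.1 * shape_signal + 0.05 * rng.standard_normal(n_stimuli)
ll_rate = 0.1 * luminance + 0.9 * shape_signal + 0.05 * rng.standard_normal(n_stimuli)
print(luminance_r2(v1_rate), luminance_r2(ll_rate))  # high vs. near zero
```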
In the face of considerable evidence that object processing in rats and primates is different, Tafazoli et al. have uncovered a compelling similarity. By design, their study has strong parallels with the studies that established a hierarchy for object processing in the primate brain, and their results suggest that rats and primates may perform object recognition in broadly similar ways. Future work will be required to determine the degree to which the nuts and bolts of object processing are in fact the same in the two species.
References
- Coogan TA, Burkhalter A (1993) Hierarchical organization of areas in rat visual cortex. Journal of Neuroscience 13:3749–3772.
- DiCarlo JJ, Cox DD (2007) Untangling invariant object recognition. Trends in Cognitive Sciences 11:333–341. https://doi.org/10.1016/j.tics.2007.06.010
- Fenno L, Yizhar O, Deisseroth K (2011) The development and application of optogenetics. Annual Review of Neuroscience 34:389–412. https://doi.org/10.1146/annurev-neuro-061010-113817
Article and author information
Publication history
- Version of Record published: April 12, 2017 (version 1)
Copyright
© 2017, Rust
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.