A compositional neural code in high-level visual cortex can explain jumbled word reading
Abstract
We read jubmled wrods effortlessly, but the neural correlates of this remarkable ability remain poorly understood. We hypothesized that viewing a jumbled word activates a visual representation that is compared to known words. To test this hypothesis, we devised a purely visual model in which neurons tuned to letter shape respond to longer strings in a compositional manner by linearly summing letter responses. We found that dissimilarities between letter strings in this model can explain human performance on visual search, and responses to jumbled words in word reading tasks. Brain imaging revealed that viewing a string activates this letter-based code in the lateral occipital (LO) region and that subsequent comparisons to stored words are consistent with activations of the visual word form area (VWFA). Thus, a compositional neural code potentially contributes to efficient reading.
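To make the compositional scheme in the abstract concrete, the sketch below models each neuron's response to a letter string as a position-weighted linear sum of its single-letter responses, and measures dissimilarity between strings as the distance between the resulting population response vectors. This is a minimal illustration only: the random letter tuning, the exponential position weighting, and all names are assumptions, not the authors' implementation from the OSF repository.

```python
import numpy as np

rng = np.random.default_rng(0)
letters = "abcdefghijklmnopqrstuvwxyz"
n_neurons = 100

# Hypothetical single-letter tuning: one response value per neuron per letter.
letter_tuning = {ch: rng.random(n_neurons) for ch in letters}

def string_response(word, decay=0.7):
    """Population response to a string: a linear sum of single-letter
    responses, weighted by letter position (an assumed weighting scheme)."""
    weights = np.array([decay ** i for i in range(len(word))])
    weights /= weights.sum()
    return sum(w * letter_tuning[ch] for w, ch in zip(weights, word))

def dissimilarity(a, b):
    """Dissimilarity between two strings: Euclidean distance between
    their model population response vectors."""
    return np.linalg.norm(string_response(a) - string_response(b))

# A jumbled word lands closer to its parent word than to an unrelated word,
# which is the kind of model dissimilarity compared against human behaviour.
print(dissimilarity("jumbled", "jubmled"))  # relatively small
print(dissimilarity("jumbled", "letters"))  # relatively large
```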
Data availability
Data and code necessary to reproduce the results are available in an Open Science Framework repository at https://osf.io/384zw/.
Article and author information
Author details
Funding
- Wellcome Trust/DBT India Alliance (IA/S/17/1/503081): SP Arun
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: All subjects gave informed consent to an experimental protocol approved by the Institutional Human Ethics Committee of the Indian Institute of Science (IHEC # 6-15092017).
Copyright
© 2020, Agrawal et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 4,151 views
- 465 downloads
- 28 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Further reading
- Neuroscience
The circadian clock enables organisms to synchronize biochemical and physiological processes over a 24 hr period. Natural changes in lighting conditions, as well as artificial disruptions like jet lag or shift work, can advance or delay the clock phase to align physiology with the environment. Within the suprachiasmatic nucleus (SCN) of the hypothalamus, circadian timekeeping and resetting rely on both membrane depolarization and intracellular second-messenger signaling. Voltage-gated calcium channels (VGCCs) facilitate calcium influx in both processes, activating intracellular signaling pathways that trigger Period (Per) gene expression. However, the precise mechanism by which these processes are concertedly gated remains unknown. Our study in mice demonstrates that cyclin-dependent kinase 5 (Cdk5) activity is modulated by light and regulates phase shifts of the circadian clock. We observed that knocking down Cdk5 in the SCN of mice affects phase delays but not phase advances. This is linked to uncontrolled calcium influx into SCN neurons and an unregulated protein kinase A (PKA)-calcium/calmodulin-dependent kinase (CaMK)-cAMP response element-binding protein (CREB) signaling pathway. Consequently, genes such as Per1 are not induced by light in the SCN of Cdk5 knock-down mice. Our experiments identified Cdk5 as a crucial light-modulated kinase that influences rapid clock phase adaptation. This finding elucidates how light responsiveness and clock phase coordination adapt activity onset to seasonal changes, jet lag, and shift work.
- Neuroscience
Recognizing goal-directed actions is a computationally challenging task, requiring not only the visual analysis of body movements, but also analysis of how these movements causally impact, and thereby induce a change in, those objects targeted by an action. We tested the hypothesis that the analysis of body movements and the effects they induce relies on distinct neural representations in superior and anterior inferior parietal lobe (SPL and aIPL). In four fMRI sessions, participants observed videos of actions (e.g. breaking stick, squashing plastic bottle) along with corresponding point-light-display (PLD) stick figures, pantomimes, and abstract animations of agent–object interactions (e.g. dividing or compressing a circle). Cross-decoding between actions and animations revealed that aIPL encodes abstract representations of action effect structures independent of motion and object identity. By contrast, cross-decoding between actions and PLDs revealed that SPL is disproportionally tuned to body movements independent of visible interactions with objects. Lateral occipitotemporal cortex (LOTC) was sensitive to both action effects and body movements. These results demonstrate that parietal cortex and LOTC are tuned to physical action features, such as how body parts move in space relative to each other and how body parts interact with objects to induce a change (e.g. in position or shape/configuration). The high level of abstraction revealed by cross-decoding suggests a general neural code supporting mechanical reasoning about how entities interact with, and have effects on, each other.
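For readers unfamiliar with the cross-decoding logic used in the second study, the sketch below shows the generic procedure on synthetic data: a linear classifier is trained to discriminate conditions in one stimulus format and tested on another, so above-chance transfer indicates a format-independent representation. The synthetic voxel patterns, the scikit-learn classifier, and all names are illustrative assumptions, not the study's actual fMRI pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 200

# A shared signal component makes both formats carry a common code, so a
# classifier trained on one format can generalize to the other.
shared_signal = rng.standard_normal(n_voxels)

def make_patterns():
    """Synthetic voxel patterns for two conditions in one stimulus format."""
    labels = rng.integers(0, 2, n_trials)
    noise = rng.standard_normal((n_trials, n_voxels))
    return noise + np.outer(2 * labels - 1, shared_signal), labels

videos_X, videos_y = make_patterns()   # e.g. naturalistic action videos
anims_X, anims_y = make_patterns()     # e.g. abstract animations

clf = LinearSVC().fit(videos_X, videos_y)                        # train on one format
print("cross-decoding accuracy:", clf.score(anims_X, anims_y))   # test on the other
```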