Cognitive Neuroscience: Memorable first impressions
Look out the window and see what stands out. Perhaps you notice some red and pink azaleas in full bloom. Now close your eyes and picture that scene in your mind. Initially, the colors and silhouettes linger vividly, but the details wither rapidly, leaving only a faded version of the image. As time passes, the accuracy with which an image can be recalled drops abruptly.
Memory is a critical, wonderful, multifaceted mental capacity that relies on many structures and mechanisms throughout the brain (Baddeley, 2003; Squire and Wixted, 2011; Schacter et al., 2012). This is not surprising, given the diversity of timescales and data types – such as images, words, facts and motor skills – that we can remember. Studies have shown that our visual memories are strongest immediately after an image disappears, remaining reliable for about half a second. This has traditionally been attributed to ‘iconic’ memory, which is thought to rely on a direct readout of stimulus-driven activity in visual circuits in the brain. In this case, the memory remains vivid because, after the stimulus (i.e., the image) has been removed, the visual activity takes some time to decay (Sperling, 1960; Pratte, 2018; Teeuwen et al., 2021).
In contrast, recalling an image a second or so after it has disappeared engages a different type of memory – visual working memory – that relies on information stored in different circuits in the frontal lobe (Pasternak and Greenlee, 2005; D’Esposito and Postle, 2015). Although not as vivid, the stored image remains stable for much longer. This robustness comes at a cost, however: the storage capacity of visual working memory is limited, so fewer items and less detail can be recalled from a remembered image. Together, these findings led to the idea that there are two distinct short-term memory mechanisms. Now, in eLife, Ivan Tomić and Paul Bays report strong evidence indicating that iconic memory and visual working memory are part of the same recall mechanism (Tomić and Bays, 2023).
Tomić and Bays – who are based at the University of Zagreb and the University of Cambridge – first constructed a detailed computational model to describe how sensory information is passed to a visual working memory circuit for storage and later recall (Figure 1). In this model, visual neurons respond to the presentation of an image containing a few items. This stimulus causes sensory activity to rise smoothly while the input lasts, and to decay once the stimulus ceases, consistent with previous experiments (Teeuwen et al., 2021). This sensory response then drives a population of visual working memory neurons that can sustain their activity in the absence of a stimulus, although this activity will eventually be corrupted due to noise (Wimmer et al., 2014; DePasquale et al., 2018). An important feature of the model is that each remembered item is allocated an equal fraction of the maximum possible working memory activity.

Figure 1. Timeline of events during stimulus presentation and storage.
A visual stimulus (grey box containing pattern) with N items is presented for a period of time (pale blue region). Sensory activity increases to a maximum value during this period, and then decays when the stimulus disappears. For each item, VWM activity also increases towards an effective saturation limit, which is the maximum possible value divided by the number of items presented: here N=2, so the effective saturation limit is half the maximum possible value. When the target item is cued (black arrow; top) at a later time (yellow region), the non-target item(s) are removed from memory (grey trace), and activity associated with the target item (green trace) increases towards the maximum possible value. The level of activity (and hence the accuracy of memory recall) will vary more and more over time due to noise. VWM: visual working memory.
Image credit: Adapted from Figure 2 in the paper by Tomić and Bays, 2023.
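To make these dynamics concrete, the short Python sketch below simulates the kind of trial shown in Figure 1. It is only a toy illustration under stated assumptions – not the authors’ implementation – with exponential rise and decay of the sensory trace, diffusive noise, and arbitrary parameter values (the function simulate_trial and all of its arguments are hypothetical names). The only features carried over from the description above are that the sensory response drives the working memory trace, that each of the N remembered items is limited to an equal share (the maximum divided by N) of the activity ceiling, and that cueing the target releases the non-targets’ share.

```python
# A minimal sketch of the model dynamics described above (assuming simple
# exponential sensory dynamics and diffusive noise; not the authors' code,
# and all names and parameter values are illustrative assumptions).
import numpy as np

def simulate_trial(n_items=2, stim_duration=0.2, cue_time=1.0,
                   total_time=1.5, dt=0.001, tau_sens=0.1,
                   tau_wm=0.15, a_max=1.0, noise_sd=0.02, seed=0):
    """Simulate sensory activity and the target item's working-memory (WM) trace."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, total_time, dt)
    sensory = np.zeros_like(t)
    wm_target = np.zeros_like(t)

    for i in range(1, len(t)):
        # Sensory activity rises toward 1 while the stimulus is on and decays afterwards.
        drive = 1.0 if t[i] < stim_duration else 0.0
        sensory[i] = sensory[i - 1] + (dt / tau_sens) * (drive - sensory[i - 1])

        # Before the cue, the N items share the activity ceiling equally (a_max / N);
        # after the cue, the non-targets are dropped and the target can use all of it.
        ceiling = a_max / n_items if t[i] < cue_time else a_max

        # The WM trace is pushed up by whatever sensory activity remains,
        # saturates at the current ceiling, and drifts randomly due to noise.
        growth = (dt / tau_wm) * sensory[i] * (ceiling - wm_target[i - 1])
        noise = noise_sd * np.sqrt(dt) * rng.standard_normal()
        wm_target[i] = np.clip(wm_target[i - 1] + growth + noise, 0.0, ceiling)

    return t, sensory, wm_target
```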
The model constructed by Tomić and Bays can make specific testable predictions. For example, it predicts that if an item is cued for later recall while the sensory signal is still present, the working memory activity associated with the non-targets will decay rapidly, freeing up resources and thus increasing the working memory activity associated with the cued item. This leads to more accurate recall of the item. In contrast, if an item is cued for later recall once the sensory signal has approached zero, this ‘boost’ does not happen, and the item is not recalled as accurately. In addition, the working memory activity should increase with longer exposure to the stimulus and should decrease as the number of remembered items increases.
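Under the same illustrative assumptions, the sketch above reproduces these qualitative predictions: cueing the target while the sensory trace is still appreciable lets its working memory activity climb toward the full ceiling, whereas a late cue leaves it near the shared limit, and larger set sizes lower the pre-cue activity available to each item. For example:

```python
# Cue timing: an early cue (while the sensory trace persists) boosts the target;
# a late cue (after the sensory trace has faded) does not.
for cue_time in (0.1, 1.0):
    _, _, wm = simulate_trial(n_items=2, cue_time=cue_time)
    print(f"cue at {cue_time:.1f} s -> final target WM activity: {wm[-1]:.2f}")

# Set size: more items means a smaller equal share of the ceiling before the cue.
for n in (1, 2, 4, 8):
    _, _, wm = simulate_trial(n_items=n, cue_time=10.0)  # cue falls after the simulated window
    print(f"{n} items (no cue) -> target WM activity: {wm[-1]:.2f}")
```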
These predictions were confirmed through experiments with humans. Participants were shown visual stimuli while several factors were varied, including the number of items to be remembered, the duration of the stimulus, the time at which the item to be recalled was identified, and the time of the actual recall. The results of these experiments are consistent with the notion that, during recall, visual information is always read out from the same population of neurons.
The findings of Tomić and Bays are satisfying for their simplicity; what seemed to require two separate mechanisms is explained by a single framework aligned with many prior studies. However, models always require simplifications and shortcuts. For instance, much evidence indicates that both frontal lobe circuits and sensory areas contribute to the self-sustained maintenance of activity that underlies the short-term memory of sensory events (Pasternak and Greenlee, 2005). Therefore, visual working memory is likely the result of continuous recurrent dynamics across areas (DePasquale et al., 2018; Stroud et al., 2024). Furthermore, there is still debate about the degree to which visual working memory implies equal sharing of resources, as opposed to some items receiving larger or smaller shares (Ma et al., 2014; Pratte, 2018). Nevertheless, the proposed model is certainly an important advance that future studies can build upon.
References
- Baddeley (2003) Working memory: looking back and looking forward. Nature Reviews Neuroscience 4:829–839. https://doi.org/10.1038/nrn1201
- D’Esposito and Postle (2015) The cognitive neuroscience of working memory. Annual Review of Psychology 66:115–142. https://doi.org/10.1146/annurev-psych-010814-015031
- Pasternak and Greenlee (2005) Working memory in primate sensory systems. Nature Reviews Neuroscience 6:97–107. https://doi.org/10.1038/nrn1603
- Pratte (2018) Iconic memories die a sudden death. Psychological Science 29:877–887. https://doi.org/10.1177/0956797617747118
- Sperling (1960) The information available in brief visual presentations. Psychological Monographs 74:1–29. https://doi.org/10.1037/h0093759
- Squire and Wixted (2011) The cognitive neuroscience of human memory since H.M. Annual Review of Neuroscience 34:259–288. https://doi.org/10.1146/annurev-neuro-061010-113720
- Stroud et al. (2024) The computational foundations of dynamic coding in working memory. Trends in Cognitive Sciences. https://doi.org/10.1016/j.tics.2024.02.011
- Teeuwen et al. (2021) A neuronal basis of iconic memory in macaque primary visual cortex. Current Biology 31:5401–5414. https://doi.org/10.1016/j.cub.2021.09.052