Memory: When neurons split the load

Various aspects of olfactory memory are represented as modulated responses across different classes of neurons in C. elegans.
  1. Itamar Lev
  2. Manuel Zimmer (corresponding author)
  1. Department of Neuroscience and Developmental Biology, Vienna Biocenter, University of Vienna, Austria
  2. Research Institute of Molecular Pathology, Vienna Biocenter, Austria

Memories are composed of the molecular and cellular traces that an event leaves in the nervous system. In turn, these neuronal changes enable the brain to weave together different features of the experience – for example, its outcome – with certain properties of the environment at the time.

Even the small worm Caenorhabditis elegans, a tractable and well-studied model organism with 302 neurons, can form such associations. Through conditioning, these animals can ‘learn’ to prefer a stimulus – for instance a smell – that is associated with food being present. This requires neurons to encode information so that an experience (e.g. smelling a specific odor) is correctly linked to valence (whether the situation was positive or negative, depending on the presence or absence of food).

Previous studies have already implicated specific genes and neurons in these processes (see, for example, Jin et al., 2016; Tomioka et al., 2006). However, this reductionist framework cannot fully capture how different aspects of a memory, such as experience and valence, are represented amongst an entire network of neurons. In C. elegans, it is possible to identify many of the neurons in these networks, and to record their activity simultaneously at single-cell resolution. This offers a unique opportunity to directly measure the features of memory traces during perception. Now, in eLife, Alon Zaslaver and colleagues at the Hebrew University of Jerusalem – including Christian Pritz as first author – report the results of an extensive series of experiments which examined how olfactory memory modulates neuronal responses in this model organism (Pritz et al., 2023).

First, the team trained groups of worms to associate a conditioning odor, butanone (diluted in a solvent), with the presence or the absence of food (appetitive vs. aversive conditioning; Colbert and Bargmann, 1995; Kauffman et al., 2010). The protocol was adapted for the animals to form either short- or long-term memories of these associations.

A choice assay experiment then confirmed that in both short- and long-term conditions, appetitive and aversive conditioning respectively increased and decreased the worms’ preference for butanone over another smell (diacetyl). Two control groups were also tested: naive animals that had not been experimented on, and worms that had been through a ‘mock’ training identical to the one received during conditioning, but in the absence of butanone (only the solvent was present).

This experimental design allowed Pritz et al. to systematically isolate and investigate the different factors that influence behavior and neuronal activity. For instance, comparing mock-treated and naive individuals helped to capture the impact of experimental parameters other than smell and valence, such as the worms experiencing starvation.
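The behavioral readout of such choice assays is typically summarized as a preference (or choice) index comparing how many animals accumulate at each odor. The sketch below is a generic, illustrative version of this kind of index; the counts are hypothetical and the exact formula used by Pritz et al. may differ.

```python
def choice_index(n_butanone, n_diacetyl):
    """Two-odor choice index: +1 means all worms chose butanone,
    -1 means all chose diacetyl, 0 means no preference.
    (Illustrative formula, not necessarily the authors' exact index.)"""
    total = n_butanone + n_diacetyl
    if total == 0:
        raise ValueError("no worms counted at either odor spot")
    return (n_butanone - n_diacetyl) / total

# Hypothetical counts: appetitively trained worms favour butanone.
print(choice_index(80, 20))   # positive index: preference for butanone
print(choice_index(50, 50))   # zero: no preference
```

Comparing such indices between conditioned, mock-treated and naive groups is what isolates the contribution of the odor-food association itself.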

Next, Pritz et al. used calcium imaging to record the activity of the same set of 24 sensory neurons in conditioned, naive and mock-treated animals exposed to butanone or diacetyl. This revealed that, for these classes of cells, modulation of neuronal activity in response to the odors mainly occurred for short-term rather than long-term memories. Overall, a large proportion of the sensory neurons studied showed fine changes in activity following conditioning, with a few neuron classes exhibiting a stronger response. Detailed analyses highlighted that each class could encode one or several features of the memories, such as the presence of the odor, valence or a specific aspect of the training process. Mock treatment also impacted the activity of a large proportion of sensory neurons, shedding light on how parameters such as starvation can affect neuronal responses. Overall, these results suggest that the neuronal changes associated with short-term memories are distributed across multiple types of sensory neurons, rather than one class being solely dedicated to capturing a specific element of the response.
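Odor responses in calcium imaging are commonly quantified as a fractional change in fluorescence relative to a pre-stimulus baseline (ΔF/F0). The following is a minimal sketch of that generic computation with made-up numbers; the authors' actual processing pipeline is described in their paper and may differ.

```python
import numpy as np

def delta_f_over_f(trace, baseline_frames):
    """Return the ΔF/F0 trace for one neuron: fluorescence change
    relative to the mean of the first `baseline_frames` samples."""
    f0 = np.mean(trace[:baseline_frames])
    return (trace - f0) / f0

# Hypothetical fluorescence trace: baseline of 10, peak of 15 after odor onset.
trace = np.array([10.0, 10.0, 10.0, 12.0, 15.0, 11.0])
resp = delta_f_over_f(trace, baseline_frames=3)
print(resp.max())  # 0.5, i.e. a 50% peak increase over baseline
```

Conditioning-induced modulation then appears as a change in such response amplitudes between trained and control groups.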

To further explore this possibility, Pritz et al. developed machine learning algorithms that could predict the type of conditioning the worms received based on their neuronal responses. The models made better predictions if information from more neuron types (up to five) was provided. Principal component analysis, which helps to pinpoint patterns in large datasets, further supported the idea that different task parameters (conditioning odor, valence, and starvation experience) create distinct activity profiles across the sensory circuit.
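The intuition behind this decoding result can be illustrated with a toy example: when information about the training condition is spread across several neurons, pooling them gives a classifier more signal to work with. The sketch below uses synthetic data and a simple nearest-centroid decoder; it is not the authors' model or data, only a demonstration of the principle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the real recordings: trial x neuron response
# matrices for two conditioning groups (0 = aversive, 1 = appetitive),
# with a small condition-dependent shift spread over all five neurons.
def make_trials(n_trials, n_neurons, shift):
    return rng.normal(shift, 1.0, size=(n_trials, n_neurons))

X = np.vstack([make_trials(50, 5, 0.0), make_trials(50, 5, 1.0)])
y = np.array([0] * 50 + [1] * 50)

def nearest_centroid_accuracy(X, y):
    """Leave-one-out decoding: assign each trial to the closer class centroid."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
        correct += pred == y[i]
    return correct / len(y)

# Decoding from one neuron vs. all five: pooling neurons typically
# improves accuracy when the signal is distributed.
acc_one = nearest_centroid_accuracy(X[:, :1], y)
acc_five = nearest_centroid_accuracy(X, y)
print(acc_one, acc_five)
```

This mirrors the paper's finding qualitatively: a distributed code means no single neuron suffices, but a handful together decode the condition well.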

Next, Pritz et al. demonstrated that modulation of the sensory neurons also impacted the interneurons that they project onto, and which relay sensory information to the rest of the nervous system (Figure 1). Three classes of interneurons were examined: while one of them mainly responded to the mock training, the others showed conditioning-specific responses. Unlike sensory neurons, however, all three interneurons could encode both short- and long-term memories. While these initial findings are intriguing, additional work on larger datasets is probably needed to confirm whether short- versus long-term memory processes are generally allocated to specific classes of cells.

Figure 1. Conditioning induces a distributed modulation of neuronal responses to odor.

Worms were exposed to the odor butanone while food was absent (aversive conditioning; left) or present (appetitive conditioning; right); the animals formed memories of these associations, which led them to show respectively decreased or increased preference for butanone over an alternative odor, diacetyl. Both types of conditioning modulate a wide range of sensory neurons (circles on outer edge; ASI, AWC and other three-letter labels refer to various neuronal classes), which increase (pink) or decrease (green) their activity to varying degrees. The sensory neurons, in turn, alter the activity of three classes of interneurons (AIY, AIA and RIA) that they connect to via gap junctions or chemical synapses. The interneurons then feed information to other parts of the nervous system.

Finally, Pritz et al. used statistical modelling to examine how various sensory neurons shaped the activity of the AIY interneuron, which receives most of its inputs from these cells. The results suggest that AIY modulation was provided by different combinations of sensory neurons depending on the type of training: changes in AIY activity were driven by a single class of neurons after appetitive conditioning, but by a complex circuit of several neuronal classes after aversive conditioning. The memory of different treatment experiences is therefore retained in variable degrees of distribution (the number of classes of neurons involved); whether this could be underpinned by complex changes in the strength of the connections between sensory neurons and interneurons is an exciting hypothesis for future studies.
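The general logic of such modelling can be sketched as a regression of the interneuron's activity trace on candidate sensory-neuron traces, then asking which predictors carry weight. Everything below (neuron names, weights, data) is synthetic and only illustrates the approach, not the authors' actual model, which is described in their paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic traces: regress an interneuron signal ("AIY") on four
# candidate sensory-neuron traces. Here one sensory input truly drives
# the interneuron, mimicking the "single driver" appetitive scenario.
T = 200
sensory = rng.normal(size=(T, 4))           # four sensory-neuron traces
true_w = np.array([1.5, 0.0, 0.0, 0.0])     # only the first input matters
aiy = sensory @ true_w + rng.normal(0, 0.1, T)

# Ordinary least-squares fit recovers which inputs shape the interneuron.
w_hat, *_ = np.linalg.lstsq(sensory, aiy, rcond=None)
driver = int(np.argmax(np.abs(w_hat)))
print(w_hat.round(2), driver)
```

In the aversive case, by contrast, one would expect several coefficients to be substantially non-zero, reflecting a more distributed drive onto the interneuron.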

In conclusion, the work by Pritz et al. adds to existing evidence showing that neuronal signals are distributed across the nervous system in a wide range of organisms – from reactions to stimuli and movement control in C. elegans, to memory in animals with larger brains (Lin et al., 2023; Kato et al., 2015; Owald and Waddell, 2015; Tonegawa et al., 2015). As research on C. elegans can now also examine larger neuronal networks, it should provide new insights into how the nervous system computes and yields behavior in this and other animals. Advanced machine learning algorithms could help in this effort, as they are uniquely placed to ‘decode’ the signals embedded in large neural activity datasets – for example, which odor a worm is smelling. However, the way that algorithms process that information does not necessarily match the underlying neuronal mechanisms and biological processes accurately. Addressing these problems will require further developing computational and experimental approaches alongside one another.

Article and author information

Author details

  1. Itamar Lev

    Itamar Lev is in the Department of Neuroscience and Developmental Biology, Vienna Biocenter, University of Vienna, Vienna, Austria

    Competing interests
    No competing interests declared
ORCID: 0000-0002-9100-5100
  2. Manuel Zimmer

    Manuel Zimmer is in the Department of Neuroscience and Developmental Biology, Vienna Biocenter, University of Vienna and the Research Institute of Molecular Pathology, Vienna Biocenter, Vienna, Austria

    For correspondence
    manuel.zimmer@univie.ac.at
    Competing interests
    No competing interests declared
ORCID: 0000-0002-8072-787X

Publication history

  1. Version of Record published: May 4, 2023 (version 1)

Copyright

© 2023, Lev and Zimmer

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


  1. Itamar Lev
  2. Manuel Zimmer
(2023)
Memory: When neurons split the load
eLife 12:e87861.
https://doi.org/10.7554/eLife.87861