Male rodent perirhinal cortex, but not ventral hippocampus, inhibition induces approach bias under object-based approach-avoidance conflict
Abstract
Neural models of approach-avoidance (AA) conflict behaviour and its dysfunction have traditionally focused on the hippocampus, with the assumption that this medial temporal lobe (MTL) structure plays a ubiquitous role in arbitrating AA conflict. We challenge this perspective by using three different AA behavioural tasks in conjunction with optogenetics to demonstrate that a neighbouring region in male rats, the perirhinal cortex, is also critically involved, but only when conflicting motivational values are associated with objects rather than contextual information. The ventral hippocampus, in contrast, was not essential for object-associated AA conflict, suggesting its preferential involvement in context-associated conflict. We propose that stimulus type can influence MTL involvement during AA conflict and that a more nuanced understanding of MTL contributions to impaired AA behaviour (e.g., anxiety) is required. These findings expand the established functions of the perirhinal cortex and introduce behavioural paradigms that permit the assessment of different facets of AA conflict behaviour.
Data availability
All data generated in this study have been deposited in the Open Science Framework database and are available at https://osf.io/9h7wr/
Article and author information
Author details
Funding
Canadian Institutes of Health Research (156070)
- Andy CH Lee
- Rutsuko Ito
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: All experiments were conducted in accordance with the regulations of the Canadian Council on Animal Care and approved by the University and Local Animal Care Committee of the University of Toronto (Protocol no. 20012479).
Reviewing Editor
- Mario A Penzo, National Institute of Mental Health, United States
Version history
- Received: June 29, 2022
- Preprint posted: July 2, 2022
- Accepted: June 6, 2023
- Accepted Manuscript published: June 14, 2023 (version 1)
- Version of Record published: June 26, 2023 (version 2)
Copyright
© 2023, Dhawan et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- Page views: 459
- Downloads: 34
- Citations: 1
Article citation count generated by polling the highest count across the following sources: PubMed Central, Crossref, Scopus.
Further reading
- Neuroscience
The computational principles underlying attention allocation in complex goal-directed tasks remain elusive. Goal-directed reading, that is, reading a passage to answer a question in mind, is a common real-world task that strongly engages attention. Here, we investigate what computational models can explain attention distribution in this complex task. We show that the reading time on each word is predicted by the attention weights in transformer-based deep neural networks (DNNs) optimized to perform the same reading task. Eye tracking further reveals that readers separately attend to basic text features and question-relevant information during first-pass reading and rereading, respectively. Similarly, text features and question relevance separately modulate attention weights in shallow and deep DNN layers. Furthermore, when readers scan a passage without a question in mind, their reading time is predicted by DNNs optimized for a word prediction task. Therefore, we offer a computational account of how task optimization modulates attention distribution during real-world reading.
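As a rough illustration of the kind of analysis this abstract describes, the sketch below extracts per-token attention weights from a pretrained transformer and correlates them with reading times. This is not the authors' code: the model name, the head/query averaging, and the reading-time values are placeholder assumptions.

```python
# Minimal sketch (not the study's pipeline): relate how much attention each token
# receives in a pretrained transformer to per-word reading times.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

passage = "The experimenter asked the participants to read the passage carefully."
enc = tokenizer(passage, return_tensors="pt")
with torch.no_grad():
    out = model(**enc)

# out.attentions: one (batch, heads, query, key) tensor per layer.
# Average over heads and queries to get, per layer, the attention each token receives.
attn_received = [a.mean(dim=1).mean(dim=1).squeeze(0).numpy() for a in out.attentions]

# Hypothetical per-token reading times (ms); in practice these come from eye tracking.
reading_times = np.random.default_rng(0).uniform(150, 400, size=attn_received[0].shape[0])

# Correlate attention received with reading time, layer by layer.
for layer, weights in enumerate(attn_received):
    r = np.corrcoef(weights, reading_times)[0, 1]
    print(f"layer {layer:2d}: r = {r:+.2f}")
```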
- Neuroscience
Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: 1) Is there a significant neural representation corresponding to this predictor variable? And if so, 2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different cognitive levels to brain responses through computational models with appropriate linking hypotheses.
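To make the time-lagged regression described above concrete, here is a minimal sketch of mTRF estimation as ridge regression on a lagged design matrix, written in plain NumPy rather than with the Eelbrain toolkit (whose own estimator is based on boosting); the sampling rate, lag range, and simulated signals are illustrative assumptions.

```python
# Minimal sketch of a multivariate TRF (mTRF) via time-lagged ridge regression.
import numpy as np

rng = np.random.default_rng(0)
fs = 100                      # sampling rate (Hz), assumed
n = 5000                      # number of samples
lags = np.arange(0, 40)       # 0-390 ms of lags at 100 Hz

# Two time-continuous predictors (e.g., acoustic envelope and word onsets), simulated.
x = rng.standard_normal((n, 2))

# Simulate an EEG-like response: each predictor convolved with its own kernel, plus noise.
true_trf = np.stack([np.exp(-lags / 10.0), np.sin(lags / 6.0)], axis=1)  # (lags, predictors)
y = sum(np.convolve(x[:, j], true_trf[:, j], mode="full")[:n] for j in range(2))
y += 0.5 * rng.standard_normal(n)

# Build the lagged design matrix: one column per (predictor, lag) pair.
X = np.zeros((n, 2 * lags.size))
for j in range(2):
    for k, lag in enumerate(lags):
        X[lag:, j * lags.size + k] = x[: n - lag, j]

# Ridge solution beta = (X'X + lambda*I)^-1 X'y, reshaped into one TRF per predictor.
lam = 1e2
beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
mtrf = beta.reshape(2, lags.size)     # (predictor, lag)
print("estimated TRF peak lags (ms):", (mtrf.argmax(axis=1) / fs) * 1000)
```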