Decision Making: Remembering to choose the future
From the philosophers of ancient Greece to the self-help books of today, humans have long been interested in choice. For millennia, philosophers and ethicists have debated what goals we ought to choose, and for a century or more economists and psychologists have studied what goals we will choose. However, neuroscience has only recently begun to systematically address how we choose.
Whether we are pondering life-defining decisions about love, career or commitment to a cause, or simply picking which snacks to buy in the grocery store, it is still unclear what regions of the brain are involved in making choices, and what information those regions encode. In everyday language, we often talk about ‘value’ (or in economic terms, ‘utility’) as the driver of such decisions: we consider our options, and select the one with the highest value. Hundreds of functional MRI (or fMRI) studies in healthy humans have identified a consistent set of brain regions which seem to process signals associated with subjective values; this suggests that value is indeed a concept that has biological roots (Bartra et al., 2013). However, the nature of the information that contributes to the neural signals related to value remains a matter of debate (O'Doherty, 2014). In other words, it is not clear what we think about when we think about value.
In fact, scientists know far less about choices based on value than they do about perceptual decisions (such as assessing if a noisy array of moving dots is trending more to the left or to the right; Shadlen and Kiani, 2013). During perceptual choices, external information is repeatedly sampled and the neural representation of this evidence accumulates until a threshold is crossed and a decision is triggered. These tasks are associated with well-known behavioral phenomena – for instance, choices with less perceptual evidence take longer to resolve – which are captured by drift diffusion models (Ratcliff and McKoon, 2008).
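As a rough illustration of this accumulate-to-threshold idea, here is a minimal drift diffusion simulation (a sketch for illustration only, not code from any of the studies cited here; the function name, the parameter values, and the mapping of drift rate to evidence strength are all assumptions of the example):

```python
import numpy as np

def simulate_ddm(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Accumulate noisy evidence until it crosses +threshold (choose A)
    or -threshold (choose B); return the choice and the decision time."""
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("A" if x > 0 else "B"), t

rng = np.random.default_rng(0)
for drift in (1.5, 0.5):  # stronger vs. weaker perceptual evidence
    times = [simulate_ddm(drift, rng=rng)[1] for _ in range(500)]
    print(f"drift {drift}: mean decision time {np.mean(times):.2f} s")
```

Running this reproduces the behavioral signature described above: the weaker drift rate, standing in for a noisier dot display, yields systematically longer decision times.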
It has been proposed that value-based decisions might occur in a similar way (Rangel et al., 2008). However, while it is obvious what knowledge is accumulating as a person gazes at a screen filled with moving dots, it is less clear what information might be sampled to support a decision based on value. Now, in eLife, Akram Bakkour of Columbia University and colleagues report that, at least in part, we may be thinking about past experiences (Bakkour et al., 2019).
Their work makes a strong case that value-based deliberation engages the hippocampus, a small structure within the brain that is involved in long-term memory. Although past experiences are a likely source of relevant information in value-based decisions, to date researchers have focused mostly on other regions of the brain such as the ventral prefrontal cortex and the striatum.
Bakkour et al. – who are based at Columbia and the Memory Disorders Research Center – first used fMRI to establish that activity in the hippocampus is greater for longer deliberations during value-based choice. They then harnessed the power of a lesion experiment to infer that the structure is necessary for such choices (Vaidya et al., 2019). Patients with hippocampal damage were slower to make decisions, and somewhat more variable in what they chose. These hippocampal effects were specific to value-based decisions. Deliberation time in a classic perceptual decision task did not relate to hippocampal signal, nor was it influenced by hippocampal damage. While perceptual decisions involve sampling external evidence, Bakkour et al. propose that deliberation during value-based choice requires sampling internal evidence. This includes – although is presumably not limited to – using the hippocampus to conjure up past experiences with similar options. Ultimately, these results will help to broaden the anatomical scope of decision neuroscience.
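To make the internal-sampling idea concrete, one can imagine the same accumulator fed not by sensory input but by retrieved memories: each step recalls one past experience with each option and adds the difference in remembered value to a running total. The sketch below is speculative and is not the model tested by Bakkour et al.; the 'memories', values, and threshold are invented for the example:

```python
import numpy as np

def memory_based_choice(memories_a, memories_b, threshold=3.0,
                        max_samples=10_000, rng=None):
    """Each step recalls one past experience with each option and
    accumulates the difference in remembered value until a bound is hit."""
    rng = rng if rng is not None else np.random.default_rng()
    x, samples = 0.0, 0
    while abs(x) < threshold and samples < max_samples:
        x += rng.choice(memories_a) - rng.choice(memories_b)
        samples += 1
    return ("A" if x > 0 else "B"), samples

rng = np.random.default_rng(1)
snack_a = [0.8, 1.0, 0.9, 1.1]   # consistently pleasant past experiences
snack_b = [1.8, -0.2, 0.1, 1.5]  # mixed experiences with a similar mean
choice, n = memory_based_choice(snack_a, snack_b, rng=rng)
print(f"chose {choice} after sampling {n} memories")
```

In this toy version, options whose remembered values are closely matched or highly variable take longer to separate, which echoes the longer deliberations observed for harder value-based choices.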
Studies have already shown that ‘attention’, while intuitive and attractive as a holistic concept, is in fact composed of dozens of distinct processes with definable characteristics that rely on different neural circuits. It is likely that ‘value’ will also require further decomposition. Armed with this knowledge, it may become possible to better understand how the brain carries out the important value-based decisions that define us as individuals and shape the directions of our societies.
References
- O'Doherty JP (2014) The problem with value. Neuroscience & Biobehavioral Reviews 43:259–268. https://doi.org/10.1016/j.neubiorev.2014.03.027
- Rangel A, Camerer C, Montague PR (2008) A framework for studying the neurobiology of value-based decision making. Nature Reviews Neuroscience 9:545–556. https://doi.org/10.1038/nrn2357
- Vaidya AR, Pujara MS, Petrides M, Murray EA, Fellows LK (2019) Lesion studies in contemporary neuroscience. Trends in Cognitive Sciences 23:653–671. https://doi.org/10.1016/j.tics.2019.05.009