Neuroscience: Watching the brain in action

Bradford Z Mahon (corresponding author), University of Rochester, United States

In our daily lives, we interact with a vast array of objects—from pens and cups to hammers and cars. Whenever we recognize and use an object, our brain automatically accesses a wealth of background knowledge about the object’s structure, properties and functions, and about the movements associated with its use. We are also constantly observing our own actions as we engage with objects, as well as those of others. A key question is: how are these distinct types of information, which are distributed across different regions of the brain, integrated in the service of everyday behavior? Addressing this question involves specifying the internal organizational structure of the representations of each type of information, as well as the way in which information is exchanged or combined across different regions. Now, in eLife, Jody Culham at the University of Western Ontario (UWO) and co-workers report a significant advance in our understanding of these ‘big picture’ issues by showing how a specific type of information about object-directed actions is coded across the brain (Gallivan et al., 2013b).

A great deal is known about which brain regions represent and process different types of knowledge about objects and actions (Martin, 2007). For instance, visual information about the structure and form of objects, and of body parts, is represented in ventral and lateral temporal-occipital regions (Goodale and Milner, 1992). Visuomotor processing in support of object-directed actions, such as reaching and grasping, is carried out in dorsal occipital and posterior parietal regions (Culham et al., 2003). Knowledge about how to manipulate objects according to their function is represented in inferior-lateral parietal cortex and in premotor regions of the frontal lobe (Culham et al., 2003; Johnson-Frey, 2004).

Summary of the networks of brain regions that code for movements of hands and tools.

By comparing brain activation as subjects prepared to reach towards or grasp an object using their hands or a tool, Gallivan et al. identified four networks that code for distinct components of object-directed actions. Some brain regions code for planned actions that involve the hands but not tools (red), and others for actions that involve tools but not the hands (blue). A third set of regions codes for actions involving either the hands or tools, but uses different neural representations for each (pink). A final set of areas codes the type of action to be performed, distinguishing between reaching towards an object as opposed to grasping it, irrespective of whether a tool or the hands alone are used (purple). The red lines represent the frontoparietal network implicated in hand actions, with the short dashes showing the subnetwork involved in reaching, and the long dashes the subnetwork involved in grasping. The blue solid lines show the network implicated in tool use, while the green line connects areas comprising a subset of the perception network.

Figure credit: image adapted from Figure 7 in Gallivan et al., 2013b.

Culham and colleagues—who are based at the UWO, Queen's University and the University of Missouri, and include Jason Gallivan as first author—focus their investigation on the neural substrates that underlie our ability to grasp objects. They used functional magnetic resonance imaging (fMRI) to scan the brains of subjects performing a task in which they had to alternate between using their hands or a set of pliers to reach towards or grasp an object. Ingeniously, the pliers were reverse pliers—constructed so that the business end opens when you close your fingers, and closes when your fingers open. This made it possible to dissociate the goal of each action (e.g., ‘grasp’) from the movements involved in its execution (since in the case of the pliers, ‘grasping’ is accomplished by opening the hand).

Gallivan et al. used multivariate analyses to test whether the pattern of responses elicited across a set of voxels (or points in the brain) when participants reach to touch an object can be distinguished from the pattern elicited across the same voxels when they grasp it. In addition, they sought to identify three classes of brain regions: those that code grasping of objects with the hand (but not the pliers), those that code grasping of objects with the pliers (but not the hand), and those that have a common code for grasping with both the hand and the pliers (that is, a code for grasping that is independent of the specific movements involved).
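To make the logic of this pattern-classification approach concrete, the sketch below shows a minimal, illustrative version of such an analysis in Python with scikit-learn. The data, region, and dimensions are invented placeholders (random numbers standing in for trial-wise voxel patterns), not the authors' actual pipeline; the point is only the structure of the test: train a linear classifier to separate reach from grasp trials and evaluate it on held-out scanner runs.

```python
# Minimal sketch of voxel-pattern decoding (illustrative assumptions throughout).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

n_runs, trials_per_run, n_voxels = 8, 20, 200        # hypothetical experiment size
n_trials = n_runs * trials_per_run

# X: one activity estimate per voxel in a region of interest for each trial;
# y: the action performed on that trial; runs: the scanner run each trial came from.
X = rng.standard_normal((n_trials, n_voxels))         # stand-in for real fMRI patterns
y = rng.integers(0, 2, n_trials)                      # 0 = reach, 1 = grasp
runs = np.repeat(np.arange(n_runs), trials_per_run)

# Leave-one-run-out cross-validation: train on all runs but one, test on the
# held-out run, and ask whether reach vs grasp can be decoded above chance
# from the spatial pattern of activity in this region.
clf = LinearSVC()
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

With random data the accuracy should hover around chance; in a region that carries information about the planned action, it would be reliably above 0.50.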

One thing that makes this study particularly special is that Gallivan et al. performed their analyses on the fMRI data collected just before the participants made an overt movement. In other words, they examined where in the brain the ‘intention’ to move is represented. Specifically, they asked: which brain regions distinguish between intentions corresponding to different types of object-directed actions? They found that certain regions decode upcoming actions of the hand but not the pliers (superior parieto-occipital cortex and lateral occipital cortex), whereas other regions decode upcoming actions involving the pliers but not the hand (supramarginal gyrus and left posterior middle temporal gyrus). A third set of regions uses a common code for upcoming actions of both the hands and the pliers (subregions of the intraparietal sulcus and premotor regions of the frontal lobe).
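The test for a common, movement-independent code can likewise be sketched as a cross-effector generalization analysis: if a region represents the upcoming action in a way that abstracts away from the specific movements, a classifier trained on planned hand actions should transfer to planned tool actions. Again, everything below (data, sizes, names) is an illustrative assumption rather than the authors' implementation, and the simulated patterns are meant to stand in for estimates from the planning (pre-movement) phase of each trial.

```python
# Illustrative cross-effector decoding sketch (assumed, simulated data).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 80, 200

# Planning-phase patterns for hand trials and for reverse-pliers trials,
# each labelled 0 = reach, 1 = grasp.
X_hand, y_hand = rng.standard_normal((n_trials, n_voxels)), rng.integers(0, 2, n_trials)
X_tool, y_tool = rng.standard_normal((n_trials, n_voxels)), rng.integers(0, 2, n_trials)

clf = LinearSVC().fit(X_hand, y_hand)      # train on planned hand actions
transfer = clf.score(X_tool, y_tool)       # test on planned tool actions
print(f"hand-to-tool transfer accuracy: {transfer:.2f} (chance = 0.50)")
```

Above-chance transfer in a region would be the signature of the third class of areas described above, whereas regions coding only hand or only tool actions would show within-effector decoding but no transfer.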

The work of Gallivan et al. significantly advances our understanding of how the brain codes upcoming actions involving the hands. Research by a number of teams is converging to suggest that such actions activate regions of lateral occipital cortex that also respond to images of hands (Astafiev et al., 2004; Peelen and Downing, 2005; Orlov et al., 2010; Bracci et al., 2012). Moreover, a previous paper from Gallivan and colleagues reported that upcoming hand actions (grasping versus reaching with the fingers) can be decoded in regions of ventral and lateral temporal-occipital cortex that were independently defined by their differential BOLD responses to different visual categories (e.g., objects, scenes, body parts; Gallivan et al., 2013a). Furthermore, the regions of lateral occipital cortex that respond specifically to images of hands are directly adjacent to those that respond specifically to images of tools, and also exhibit strong functional connectivity with areas of somatomotor cortex (Bracci et al., 2012).

Taken together, these latest results and the existing literature point toward a model in which the connections between visual areas and somatomotor regions help to organize high-level visual areas (Mahon and Caramazza, 2011), and to integrate visual and motor information online to support object-directed action. An exciting issue raised by this study is the degree to which tools may have multiple levels of representation across different brain regions: some regions seem to represent tools as extensions of the human body (Iriki et al., 1996), while other regions represent them as discrete objects to be acted upon by the body. The work of Gallivan et al. suggests a new way of understanding how these different representations of tools are combined in the service of everyday behavior.

References

    Peelen MV, Downing PE (2005) Is the extrastriate body area involved in motor actions? Nat Neurosci 8:125; author reply 125–126. https://doi.org/10.1038/nn0205-125a

Article and author information

Author details

  1. Bradford Z Mahon

    Department of Brain and Cognitive Sciences, and the Department of Neurosurgery, University of Rochester, Rochester, United States
    For correspondence: mahon@rcbi.rochester.edu
    Competing interests: The author declares that no competing interests exist.

Copyright

© 2013, Mahon

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

Bradford Z Mahon (2013) Neuroscience: Watching the brain in action. eLife 2:e00866. https://doi.org/10.7554/eLife.00866