1. Hillary Hadley, corresponding author
2. Lisa Scott, corresponding author

University of Massachusetts Amherst, United States

Human faces are an important part of social interactions. We use them to recognize a friend, to gauge someone's mood, or to figure out where to direct our attention. But before engaging in any of these activities, we must first identify a face as a face.

The fundamental question of how a perceptual system, such as the one underlying face recognition, becomes organized in the brain is important for understanding how changes in the brain lead to changes in behavior. Studying face perception in developing infants could help us to understand the parts of the brain that contribute to adult face perception. It might also reveal how face-processing abilities can be impaired in some populations, such as people with autism.

In adults, the right hemisphere of the brain is critical for recognizing faces. Damage to the right hemisphere, but not the left hemisphere, can impair face recognition. Moreover, the right hemisphere produces larger brain responses than the left hemisphere when a face is seen. This has been observed using two different neuroimaging methods. The first, called functional magnetic resonance imaging (fMRI), measures the flow of blood around the brain and relates this to brain activity (e.g., Kanwisher et al., 1997). The second directly measures event-related potentials (ERPs)—the electrical response of a brain region to a stimulus (e.g., Rossion et al., 2003).

In children, there is also evidence that the right and left hemispheres of the brain respond differently to faces (Scherf et al., 2007). Recently it was reported that the response of the right hemisphere to faces is intricately linked to changes that occur in the left hemisphere when children learn to read (Dundas et al., 2014). To date, the majority of studies on infants (who are too young to read) have found no significant differences in how the two sides of the brain respond to faces (e.g., de Haan and Nelson, 1999; Gliga and Dehaene-Lambertz, 2007). However, one group did find response differences between hemispheres when comparing faces to markedly less complex stimuli (patterns of colored dots) (Tzourio-Mazoyer et al., 2002).

Now, in eLife, Adélaïde de Heering and Bruno Rossion from the University of Louvain have used a fast periodic visual stimulation (FPVS) approach to explore face perception in a group of infants aged between four and six months (de Heering and Rossion, 2015). This approach involves presenting images at a rapid, fixed rate in order to induce brain responses that occur at the same rate (often defined as ‘steady-state visual evoked potentials’, SSVEPs; Regan, 1966; for a review, see Norcia et al., 2015).
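The logic behind frequency tagging can be sketched in a few lines of Python. This is a simplified illustration of the general SSVEP analysis idea, not the authors' actual pipeline: a brain response locked to the stimulation rate appears as a sharp peak in the amplitude spectrum at exactly that frequency, which can be quantified relative to the noise in neighboring frequency bins. The sampling rate, recording length, and 1.2 Hz tag frequency below are illustrative values.

```python
import numpy as np

def frequency_tagged_response(signal, fs, tag_hz, n_neighbors=10):
    """Amplitude at the tagged frequency, expressed as a z-score
    relative to neighboring frequency bins (a common SSVEP metric)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    tag_bin = int(np.argmin(np.abs(freqs - tag_hz)))
    # Neighboring bins (excluding the tagged bin itself) estimate the noise floor.
    lo, hi = tag_bin - n_neighbors, tag_bin + n_neighbors + 1
    neighbors = np.r_[spectrum[lo:tag_bin], spectrum[tag_bin + 1:hi]]
    return (spectrum[tag_bin] - neighbors.mean()) / neighbors.std()

# Simulated 20 s recording at 250 Hz: a periodic 1.2 Hz response buried in noise.
rng = np.random.default_rng(0)
fs, dur, tag_hz = 250, 20, 1.2
t = np.arange(fs * dur) / fs
eeg = 0.5 * np.sin(2 * np.pi * tag_hz * t) + rng.normal(0.0, 1.0, t.size)
z = frequency_tagged_response(eeg, fs, tag_hz)
print(round(z, 1))  # a large z-score marks a reliable tagged response
```

Because the response of interest is confined to known frequency bins, this analysis requires no subjective choice of time windows or components, which is part of what makes the FPVS approach objective.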

de Heering and Rossion report that, in the brains of these infants, faces are represented as a distinct category of objects, separate from other categories of objects such as plants or man-made objects. This distinction can be seen most prominently in the response recorded over the right occipito-temporal brain region, which is near the back of the brain. Importantly, faces that vary in size, viewpoint and features (such as the expression and the gender of the faces) are all categorized as faces. This is even the case when the images include naturalistic backgrounds.

These findings provide evidence that by the time they are six months old, infants possess a relatively robust ability to identify that faces are different from objects, and can do so in a realistic context. Moreover, the larger face-related brain responses recorded over the right hemisphere suggest that the right hemisphere of the brain has begun to preferentially respond to faces by six months of age. These findings also complement previous behavioral and ERP work suggesting that infants can distinguish between faces and objects in the first year of life (for a review, see Scott and Nelson, 2004).

This technique has been successfully used in infant studies of low-level vision (e.g., Braddick et al., 1986) and in a variety of adult investigations, but to our knowledge only one other published study reports results from this method with infants (Farzin et al., 2012). de Heering and Rossion are thus among the first researchers to demonstrate the effectiveness of the FPVS technique using complex images in infant research. This is an important addition to the developmental scientist's toolbox and will greatly expand our ability to characterize brain development in infants even before they begin to talk.

The fact that the FPVS technique can be applied to infant populations has a number of benefits for researchers. Infants can be exposed to hundreds of trials and several conditions within minutes, and no verbal or motor response is required. This large amount of data, collected in a short period of time, results in a higher proportion of data suitable for analysis than in studies using behavior or standard ERP approaches. The increased number of trials also allows researchers to use a variety of visual stimuli that vary in shape, size and orientation, leading to conclusions that are more generalizable and relevant to real-world situations. Relative to other methods, the FPVS method measures infant brain responses objectively, allowing for precise testing of predictions and easy comparisons across investigations. Finally, the FPVS method measures how the brain tells the difference between various stimulus conditions and provides a direct link between this response and the behavioral tasks commonly used to study infant perception, learning and memory.

References

Kanwisher N, McDermott J, Chun MM (1997) The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience 17:4302–4311.

Scott LS, Nelson CA (2004) Review of Psychiatry Series, Volume 23. American Psychiatric Publishing.

Article and author information

Author details

  1. Hillary Hadley

    Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, United States
    For correspondence
    hhadley@psych.umass.edu
    Competing interests
    The authors declare that no competing interests exist.
  2. Lisa Scott

    Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, United States
    For correspondence
    lscott@psych.umass.edu
    Competing interests
    The authors declare that no competing interests exist.


Copyright

© 2015, Hadley and Scott

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

Hillary Hadley, Lisa Scott (2015) Face Recognition: Babies get it right. eLife 4:e08232. https://doi.org/10.7554/eLife.08232
