At a glance, you can recognize a friend’s face, even if their expression shifts from joy to frustration. How does the brain make the match?
Researchers at Carnegie Mellon University are closer than ever before to understanding the neural basis of facial identification. In a study published in the Proceedings of the National Academy of Sciences, they used sophisticated brain imaging tools and computational methods to measure the real-time brain processes that convert the appearance of a face into the recognition of an individual.
The research team hopes the findings can be used to locate the exact point at which the visual perception system breaks down in different disorders and injuries, ranging from developmental dyslexia to prosopagnosia, or face blindness.
“Our results provide a step toward understanding the stages of information processing that begin when an image of a face first enters a person’s eye and unfold over the next few hundred milliseconds, until the person is able to recognize the identity of the face,” says Mark D. Vida, a postdoctoral research fellow in psychology.
91 different faces with 2 expressions
To determine how the brain rapidly distinguishes faces, the researchers scanned the brains of four people using magnetoencephalography (MEG). MEG allowed them to measure ongoing activity throughout the brain, millisecond by millisecond, while the participants viewed images of 91 different people with two facial expressions each: happy and neutral.
The participants indicated when they recognized that the same individual's face had repeated, regardless of expression.
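To make the paradigm concrete, the sketch below shows the kind of time-resolved decoding that such millisecond-by-millisecond MEG data supports: fit a classifier at every timepoint and ask when identity becomes decodable. This is a minimal illustration, not the authors' pipeline; the array shapes, the random stand-in data, and the nearest-centroid classifier are all assumptions.

```python
# Minimal sketch of time-resolved MEG decoding (illustrative shapes and data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(0)
n_ids, n_reps, n_sensors, n_times = 91, 4, 64, 60          # assumed dimensions
X = rng.normal(size=(n_ids * n_reps, n_sensors, n_times))  # stand-in for MEG epochs
y = np.tile(np.arange(n_ids), n_reps)                      # identity label per epoch

# Score a separate classifier at every timepoint: accuracy above chance at
# time t means the sensor pattern at t carries identity information.
accuracy = np.empty(n_times)
for t in range(n_times):
    accuracy[t] = cross_val_score(NearestCentroid(), X[:, :, t], y, cv=4).mean()
print(f"peak accuracy: {accuracy.max():.3f} (chance = {1 / n_ids:.3f})")
```

With real epoched recordings in place of the random arrays, the timepoints where accuracy climbs above chance (about 1/91 here) would mark when face information first becomes available in the signal.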
The MEG scans allowed the researchers to map out, at each of many points in time, which parts of the brain encode appearance-based information and which encode identity-based information. The team also compared the neural data to behavioral judgments of the face images made by human observers, which were based mainly on identity-based information.
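One way to tease the two kinds of information apart, in the spirit of the study's design though not necessarily its exact analysis, is cross-expression decoding: train an identity classifier on responses to the happy faces and test it on responses to the neutral faces. A pattern that transfers across expressions reflects who the face is, not merely how a particular image looks. The sketch below assumes averaged responses per identity; all names and shapes are illustrative.

```python
# Sketch: dissociating appearance from identity via cross-expression decoding.
import numpy as np
from sklearn.neighbors import NearestCentroid

rng = np.random.default_rng(1)
n_ids, n_sensors, n_times = 91, 64, 60
X_happy = rng.normal(size=(n_ids, n_sensors, n_times))    # mean response per identity (happy)
X_neutral = rng.normal(size=(n_ids, n_sensors, n_times))  # mean response per identity (neutral)
identities = np.arange(n_ids)

# Train on one expression, test on the other: above-chance transfer at time t
# suggests the pattern at t encodes identity rather than image-specific appearance.
transfer = np.empty(n_times)
for t in range(n_times):
    clf = NearestCentroid().fit(X_happy[:, :, t], identities)
    transfer[t] = clf.score(X_neutral[:, :, t], identities)
```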
They then validated the results by comparing the neural data to the information present in different parts of a computational simulation: an artificial neural network trained to recognize individuals from the same face images.
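A standard way to carry out such a brain-to-model comparison is representational similarity analysis: build a dissimilarity matrix over the 91 identities from the MEG pattern at each timepoint, build the same matrix from a network layer's activations, and rank-correlate the two. The sketch below assumes that approach, with random stand-ins for both the brain data and the layer activations; it is not a claim about the authors' exact method.

```python
# Sketch: representational similarity analysis (RSA) between MEG data and a
# face-recognition network layer (random stand-in activations).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_ids, n_sensors, n_times, n_units = 91, 64, 60, 512
meg = rng.normal(size=(n_ids, n_sensors, n_times))  # mean MEG pattern per identity
layer = rng.normal(size=(n_ids, n_units))           # stand-in network-layer activations

# Condensed dissimilarity vectors: one entry per pair of identities.
layer_rdm = pdist(layer, metric="correlation")
similarity = np.empty(n_times)
for t in range(n_times):
    meg_rdm = pdist(meg[:, :, t], metric="correlation")
    rho, _ = spearmanr(meg_rdm, layer_rdm)  # rank-correlate the two geometries
    similarity[t] = rho
```

Peaks in the resulting time course mark the moments when the brain's representational geometry most resembles that of the chosen network layer.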
“Combining the detailed timing information from MEG imaging with computational models of how the visual system works has the potential to provide insight into the real-time brain processes underlying many other abilities beyond face recognition,” says David C. Plaut, professor of psychology.
Marlene Behrmann of Carnegie Mellon and Adrian Nestor of the University of Toronto Scarborough also participated in the study. The Natural Sciences and Engineering Research Council, the Pennsylvania Department of Health's Commonwealth Universal Research Enhancement Program, and the National Science Foundation funded the work.
Source: Carnegie Mellon University