This is an Inside Science story.
Recognizing the face of a family member or good friend is something that happens almost instantly and effortlessly. Years of exposure to loved ones’ faces allow the brain to easily pick them out of a crowd, despite variables like different hairstyles or emotional expressions.
But what about someone who isn’t close, like a person you sat next to on the plane or a cashier at the grocery store? How does the brain do when it comes to identifying the face of a stranger? The short answer is, not very well.
“With unknown faces, we are not particularly good at telling them apart,” said Géza Gergely Ambrus, a postdoctoral researcher at Friedrich Schiller University Jena in Germany. “If you take two photos with different cameras, and the person shaved or is not wearing glasses in one of them, it is surprisingly difficult to recognize with high accuracy people we are unfamiliar with.”
The enormous rift between our recognition of unfamiliar versus familiar faces led Ambrus and his colleagues to wonder about how the brain’s response evolves when a person goes from unknown to known. Their study, published May 24 in The Journal of Neuroscience, found that getting to know strangers by sorting photos of two unfamiliar people into their separate identities did not produce significant signs of familiarity in brain scans. However, watching videos of new people or interacting with them in person led to enhanced brain activation upon seeing their faces later on. Unsurprisingly, in-person interaction resulted in much stronger signs of recognition than just watching videos of them.
The findings underscore the importance of in-person interaction; virtual, one-way observation has less of an impact on facial recognition.
“This paper opens a whole array of interesting questions linked to what generates the difference between media versus in-person familiarity patterns,” said David Acunzo, a postdoctoral researcher at the U.K.’s University of Birmingham, who was not involved in the study. “We have fake characters versus real people, made-up situations versus genuine engagement, observation of behavior versus interaction, purely audio-visual input versus fully multimodal input, etc.”
Previous research measuring electrical activity in the brain with electrodes attached to the scalp, a technique known as EEG, has shown areas that consistently light up upon viewing any human face. However, the pattern and strength of signals differ depending on the familiarity of the face, with more robust signals being tied to well-known faces.
Ambrus and his colleagues compiled EEG recordings of 42 participants for the photo sorting task, 24 participants for the video exposure task, and 23 participants for the in-person interaction task. They then trained a machine-learning algorithm to identify the EEG patterns that correspond to different levels of face familiarity and used this model to interpret their experimental data.
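The decoding idea behind this kind of analysis can be sketched in a few lines of code. The sketch below is a hypothetical illustration only: the data is simulated, and the feature dimensions, class separation, and nearest-centroid classifier are my own simplifying assumptions, not the study's actual pipeline.

```python
import numpy as np

# Illustrative sketch: train a classifier on EEG response patterns and test
# whether it can tell "familiar" from "unfamiliar" faces. Everything here is
# simulated; real analyses use recorded EEG and more careful cross-validation.

rng = np.random.default_rng(0)

N_TRIALS, N_FEATURES = 200, 64  # e.g., 64 EEG channels at one time point

# Simulate trials: familiar faces evoke a slightly shifted response pattern.
familiar = rng.normal(loc=0.5, scale=1.0, size=(N_TRIALS, N_FEATURES))
unfamiliar = rng.normal(loc=0.0, scale=1.0, size=(N_TRIALS, N_FEATURES))

X = np.vstack([familiar, unfamiliar])
y = np.array([1] * N_TRIALS + [0] * N_TRIALS)  # 1 = familiar, 0 = unfamiliar

# Shuffle and split into train/test halves.
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]
half = len(y) // 2
X_train, y_train = X[:half], y[:half]
X_test, y_test = X[half:], y[half:]

# Nearest-centroid classifier: label each test trial by the closer class mean.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
predictions = dists.argmin(axis=1)

accuracy = (predictions == y_test).mean()
print(f"decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```

When decoding accuracy reliably exceeds chance, the brain's response to the two categories must differ in some consistent way — the logic behind reading "signs of familiarity" out of EEG patterns.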
The photo sorting task asked subjects to divide 30 photos of two previously unknown women into two piles, with each one representing a different person. Immediately afterward, they observed photos of the women, along with other people who looked fairly similar. No significant difference was seen in the EEG pattern when viewing the women versus strangers, suggesting that simply looking at photos in this manner does not create familiarity.
In the video exposure experiment, subjects viewed photos of actors they had never seen before during a baseline EEG measurement. They then watched one whole season of a TV series (“The Americans” or “The Bridge”) in which the actors played leading roles. An EEG test performed after they completed the season showed a strong indicator of familiarity, suggesting that, unlike photo sorting, video exposure does build reliable representations of face familiarity.
The last experiment showed participants photos of four unknown women during a baseline EEG measurement. Then, in the days that followed, they got to know two of these women by meeting with them in person.
“They spent an hour at our lab for three consecutive days, drinking coffee and playing a quiz game. They met personally and were free to chat about anything they wanted,” said Ambrus. “All the participants met at the same place with the same people for the same amount of time.”
An EEG test performed the next day, in which participants again viewed photos of all four women, demonstrated a drastic change from the baseline EEG. The participants’ brains clearly recognized the two women’s faces as familiar, and the signals were much stronger than those seen in the video exposure task. In-person interaction clearly matters when it comes to our brains and face recognition.
Acunzo, who studies sensory perception and visual attention, said the researchers went beyond the standard techniques used in face perception research by making people meet and interact in person.
“They found that media familiarization generates reliable familiarization representations, but not as strong as in-person familiarization,” said Acunzo. “It is a result that was to be expected, but it is nice to have.”
While the researchers didn’t investigate the impact of virtual interaction through applications like Zoom, Ambrus suspects from personal experience that it still wouldn’t equate to in-person interaction, something that became clear to him after teaching online during the pandemic.
“When I eventually met some of those students face-to-face, I was oftentimes quite surprised regarding the discrepancy between the ‘expectations’ in my mind and the real-world experience,” he said. “I wonder how many times we walk by people on the street who we know from online interactions but don’t recognize in person.”
Inside Science is an editorially independent nonprofit print, electronic and video journalism news service owned and operated by the American Institute of Physics.