Although we’ve gotten quite good at teaching machines to recognize human faces, we still can’t quite figure out how we do it ourselves.
Neuroscientists know that facial recognition is unlike any other kind of object perception in our brains. Within moments of birth, we learn to distinguish faces from other images—likely because other people are vital to our survival. What gives us that ability, though, and how exactly we refine these skills over time—how we learn to tell one specific face from another, for example—is a bit of a mystery, because so many neurological faculties seem to go into it. (This is why prosopagnosia, or face blindness, comes in varying degrees and has no known cure.)
Conventional wisdom is that the brain tissue we’re born with gets chiseled down to just what we need as adults. But new research led by neuroscientists at Stanford University suggests that when it comes to facial recognition, it’s just the opposite: the fusiform gyrus, a sliver of lower brain matter thought to play a role in discerning faces, appears to become larger and develop more complexly arranged neurons (relative to total brain size) as we go through adolescence into adulthood. These changes likely translate into being better at recognizing faces as we age.
“What’s surprising here is that the changes involve a different mechanism, expansion not pruning,” Kalanit Grill-Spector, a cognitive neuroscientist at Stanford and lead author of the paper, told New Scientist. Their work was published (paywall) on Jan. 6 in the journal Science.
For the study, the researchers had 22 children (aged 5 to 12) and 25 adults (aged 22 to 28) sit in an advanced MRI machine that tracked the density and composition of their brain tissue. (The kids needed some extra coaching on sitting still, Gizmodo reports.) While in the machine, participants were asked to look at two sets of images: one of different faces, the other of different places.
The MRI showed the researchers which areas of the brain were called upon for each task—face recognition and place recognition—and gave them a reading of the makeup of those lit-up regions. In the adults, the fusiform gyrus was larger—12.6% bigger when accounting for differences in overall brain size—and had a more complex structure than in the kids. “You can imagine a 10-foot-by-10-foot garden, and it has some number of flowers in there,” Jesse Gomez, a graduate student in neuroscience at Stanford and co-author of the paper, told NPR. “The number of flowers isn’t changing, but their stems and branches and leaves are getting more complex.” But in the part of the brain that lit up for place recognition (the collateral sulcus), things looked basically the same in kids and adults.
The exact mechanisms of facial recognition are still murky: scientists are not sure how exactly this part of the brain works, and each person’s brain undoubtedly varies. Aside from our neurological hardware, a number of other factors, like social and environmental cues, help us recognize those we know. But from a sociological standpoint, the theory that facial recognition improves from childhood to young adulthood makes sense: as we make more friends and gain coworkers and bosses, being able to distinguish more faces becomes important.
Grill-Spector and her team were unable to compare their scans with those of older adults to see whether these structures gain even more complexity as we age; that’s something they hope to tackle in future work. But since our friend groups tend to shrink after we hit 25, we may not have as many new faces to learn beyond then, so it’s possible the trend halts or even reverses in middle and old age.