The goal of roboticists has long been to make A.I. as efficient as the human brain, and researchers at the Massachusetts Institute of Technology just brought them one step closer.
In a recent paper published in the journal Current Biology, scientists successfully trained a neural network to recognize faces at different angles by feeding it several face templates, each presented in a range of orientations. Although this initially gave the network only a rough form of invariance (the ability to process data regardless of form), over training the network came to exhibit full "mirror symmetry": through its learning algorithm, the neural network mimicked the human brain's ability to recognize that an object is the same despite its orientation or rotation.
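The core idea, pooling a unit's responses over every stored view of a template, can be sketched in a few lines. This is a toy illustration of the general invariance mechanism, not the paper's actual model: a "view change" is stood in for by a circular shift of a feature vector, so that all views of a face form a closed orbit, and the pooled signature comes out identical no matter which view is presented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16  # feature dimension (illustrative)

# Hypothetical stand-in for a 3-D rotation: a "view change" is a circular
# shift of the feature vector, so the set of all views of a face is closed
# under the transformation (it forms an orbit).
def view(face, angle):
    return np.roll(face, angle)

# Templates whose full orbits the network has seen during training.
templates = [rng.standard_normal(n) for _ in range(3)]

def signature(image):
    """One number per template: the maximum response over every stored
    view of that template (max-pooling over the template's orbit)."""
    return np.array([max(view(t, a) @ image for a in range(n))
                     for t in templates])

face = rng.standard_normal(n)
# Pooling over the whole orbit makes the signature view-invariant:
assert np.allclose(signature(view(face, 2)), signature(view(face, 9)))
```

The invariance here is exact because the toy transformations form a group; for real 3-D face rotations the pooled signature is only approximately invariant, which is why the network in the study initially achieved invariance only roughly.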
Thanks to three distinct processing layers, each tuned to specific viewing angles, the human visual system can decipher images such as faces regardless of orientation, even when it has never seen a particular image before. While the first and last layers respond only to certain ranges of angles, the intermediate stage allows the brain to translate an image regardless of its angle or rotation, giving humans the ability to recognize "mirror-like" images and understand that they represent the same object.
“We were able to show if you assume a particular form of Hebbian learning, which is known to exist in cortex, then, when you plug in our theory and our algorithms, out pops this property of mirror-symmetric view tuning,” says Tomaso Poggio, a professor of brain and cognitive sciences at MIT and director of the Center for Brains, Minds, and Machines.
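Hebbian learning is the principle that connections strengthen when input and output fire together. A standard stabilized version is Oja's rule, which adds a decay term so the weights stay bounded and converge to the direction of greatest variance in the input. The sketch below is a generic illustration of that kind of rule on toy two-dimensional data, not the paper's face model; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy input stream whose variance is largest along a known direction.
axis = np.array([3.0, 1.0]) / np.sqrt(10.0)
perp = np.array([-1.0, 3.0]) / np.sqrt(10.0)
samples = [2.0 * rng.standard_normal() * axis
           + 0.3 * rng.standard_normal() * perp
           for _ in range(5000)]

# Oja's rule: Hebbian growth (eta * y * x) minus a normalizing decay
# term (eta * y^2 * w) that keeps the weight vector from blowing up.
w = rng.standard_normal(2)
eta = 0.01
for x in samples:
    y = w @ x                    # postsynaptic response
    w += eta * y * (x - y * w)   # Hebbian update with decay

# The weights converge to the principal direction of the input:
w_hat = w / np.linalg.norm(w)
assert abs(w_hat @ axis) > 0.95
```

In the team's theory, it is this kind of unsupervised, variance-seeking learning, applied to responses to face views, that produces the mirror-symmetric tuning described in the paper.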
The result, which brings scientists one step closer to understanding how to recreate the human brain, was an unexpected one for the team.
“It was not a model that was trying to explain mirror symmetry,” Poggio says. “This model was trying to explain invariance, and in the process, there is this other property that pops out.”
But despite the advancement, scientists aren’t entirely sure how to recreate the effect. Future versions of the network may not develop the same ability to handle facial angles, and the mathematics behind it remains hypothetical.
“This is not a proof that we understand what’s going on,” says Poggio. “Models are kind of cartoons of reality, especially in biology. So I would be surprised if things turn out to be this simple. But I think it’s strong evidence that we are on the right track.”
This isn’t the first time a neural network has taught itself to do something previously unheard of — on Monday, MIT researchers announced a prototype of a neural network that could predict what a scene will look like seconds into the future. What makes Poggio’s research so incredible is that the neural network is recreating an elaborate neurological process that is both essential to human cognition and still not fully understood by biologists.
“I think it’s a significant step forward,” says Christof Koch, president and chief scientific officer at the Allen Institute for Brain Science. “In this day and age, when everything is dominated by either big data or huge computer simulations, this shows you how a principled understanding of learning can explain some puzzling findings.”