
Algorithms Learned How to Size Up What You Look Like and Find Your Twin

The robots of the future will probably make our lives easier by taking over rote tasks like folding laundry or driving us home from the office. But efforts are also underway to teach them to recreate more intrinsically human skills with less obvious real-world applications, like finding faces in clouds.

Nothing showcased this more than the Google Arts and Culture application that went viral almost a year ago, in January 2018.

This is #18 on Inverse’s list of the 20 Ways A.I. Became More Human in 2018.

The smartphone app used machine learning to match a selfie to one of a few thousand famous art pieces. This simple game leveraged the power of artificial intelligence to introduce users to an array of new art, had them share their results with friends, and showed them what the future of A.I. holds. But there was more to what at first seemed like a marketing gimmick than met the eye.

Kumail Nanjiani tries out the Google Arts and Culture app

To find your fine art doppelganger, the app creates a “faceprint” from the selfie you feed it and identifies unique characteristics from your eyes, nose, mouth, and ears. It then matches that information to a massive database of characteristics pulled from its set of artwork. Creating the pairings requires a process that’s a lot like what humans do with their eyes and brain when interpreting visual information.
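Under the hood, that matching step amounts to a nearest-neighbor search over “faceprint” vectors. The short Python sketch below illustrates the idea using cosine similarity; the vectors, artwork titles, and dimensionality are invented for illustration, since Google has not published the app’s internals.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two faceprints; closer to 1.0 means a closer match."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_art_twin(selfie_print: np.ndarray,
                  artwork_prints: dict[str, np.ndarray]) -> str:
    """Return the artwork whose precomputed faceprint is most similar
    to the selfie's faceprint."""
    return max(artwork_prints,
               key=lambda title: cosine_similarity(selfie_print, artwork_prints[title]))

# Toy usage with made-up 4-dimensional faceprints; a real system would use
# much longer vectors produced by a face-embedding neural network.
selfie = np.array([0.9, 0.1, 0.3, 0.7])
gallery = {
    "Portrait A": np.array([0.8, 0.2, 0.4, 0.6]),
    "Portrait B": np.array([0.1, 0.9, 0.8, 0.2]),
}
print(find_art_twin(selfie, gallery))  # -> "Portrait A"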

We learn people’s faces by both seeing them and interacting with them, sometimes with the help of a mnemonic device. The brain stores all of that information as memories, which also helps explain why, when we encounter a visually similar drawing or rock formation, we can’t “unsee” our friend, relative, or favorite pop singer.

This hilarious mobile app is clear evidence that A.I. is quickly gaining the ability to mimic even the subtlest human characteristics, with huge ramifications for emerging fields like facial recognition.
