Computer scientists from the University at Buffalo have created a new system for detecting deepfakes — highly realistic, computer-generated images of people that are hard to distinguish from real faces. The new method is remarkably simple, too.
While some tools for combating the fakery look for specific artifacts like artificially smooth skin, this new tool looks at how light is reflected in the eyes. The cornea has a mirror-like surface that produces reflective patterns when looking toward a light source. Because both eyes see the same scene, the reflection is usually similar in each.
Deepfake videos of Tom Cruise that recently went viral shed light on how dangerous the technology could be to society if seeing is no longer believing. It was later learned that the videos were created by a team highly skilled in visual effects. But as the technology gets better at duping people, companies like Microsoft are racing to keep up and develop detection methods.
Subtle inconsistencies — Deepfakes fail to capture this resemblance because of how they’re constructed. Artificial intelligence programs analyze many photos and videos of a person to learn unique facial movements, such as how they look when they smile or frown. Synthesized composites can then be generated that mimic the person and make them appear to say or do anything. But deepfake programs struggle with the eyes’ reflections, often displaying inconsistencies such as a mismatch in the locations of the reflections where they should match in both eyes.
The scientists say this “striking” inconsistency stems from the technology’s lack of understanding of human facial anatomy and the relationships between facial features. The programs also don’t know how to represent a face as it interacts with the physical world — for instance, if a bright light is shining on a face, deepfake programs wouldn’t know to render that light in the eyes’ reflections. Algorithms follow rules and look for patterns; they don’t have very good logic-based intelligence.
According to the researchers, their tool generates a score that serves as a similarity metric. The smaller the score, the more likely the image is a deepfake. In testing, the tool was 94 percent effective at detecting deepfake images from the site This Person Does Not Exist.
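The article doesn’t spell out the exact metric, but the idea can be sketched in a few lines. The sketch below is illustrative, not the researchers’ implementation: it assumes the two eyes have already been detected and cropped to grayscale patches, extracts the bright specular highlight from each with a simple brightness threshold (the `0.9` cutoff is an assumption), and scores their overlap with intersection-over-union — identical reflections score 1.0, mismatched ones score near 0.0, so a low score flags a possible deepfake.

```python
import numpy as np

def highlight_mask(eye, threshold=0.9):
    """Binarize an eye crop: pixels at or above `threshold` (0..1 grayscale)
    are treated as part of the corneal specular highlight.
    The threshold is an illustrative choice, not a published value."""
    return eye >= threshold

def reflection_similarity(left_eye, right_eye):
    """Hypothetical similarity metric: intersection-over-union of the two
    highlight masks. 1.0 means identical reflections; values near 0.0
    mean mismatched reflections, which would suggest a deepfake."""
    a, b = highlight_mask(left_eye), highlight_mask(right_eye)
    union = np.logical_or(a, b).sum()
    if union == 0:  # no visible highlight in either eye
        return 0.0
    return float(np.logical_and(a, b).sum() / union)

# Toy 5x5 "eye crops" with a bright highlight patch.
left = np.zeros((5, 5)); left[1:3, 1:3] = 1.0
right_real = np.zeros((5, 5)); right_real[1:3, 1:3] = 1.0  # same spot
right_fake = np.zeros((5, 5)); right_fake[3:5, 3:5] = 1.0  # shifted

print(reflection_similarity(left, right_real))  # 1.0
print(reflection_similarity(left, right_fake))  # 0.0
```

In a real pipeline, the eye crops would come from a face-landmark detector rather than hand-built arrays, and the comparison would need to tolerate small, natural differences between the two eyes.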
Future considerations — There are some shortcomings with the tool. The researchers say that the inconsistent reflections can be manually fixed in post-processing. If one eye isn’t visible, or a person isn’t looking at the camera, the tool doesn’t work either. They hope to investigate these issues and find ways to improve the effectiveness of their method. But for less sophisticated deepfakes, the tool should help.