Deepfake technology is definitely an example of letting the genie out of the bottle. There’s really no turning back from its existence now, and so the question has become less “How to stop it?” and more “How to regulate and use it as responsibly as possible?” Of course, we’ve got no answer to that horrifying dilemma, but at least some major companies are beginning to think about it. Sort of.
During yesterday’s annual MAX conference, Adobe unveiled something called Project Morpheus — essentially its Neural Filters tool first unveiled last year, but now for videos instead of only images. In a three-minute video (anchored by an extremely out-of-place Kenan Thompson), Project Morpheus software developer Han Gao showed off some early potential uses of the feature. There’s still some pretty deep uncanny valley going on in the footage, but it’s a good example of how even the biggest names in visual tech are incorporating deepfake-style machine learning into their products... even if they won’t call it “deepfakes,” per se.
A heavily restricted deepfake tool — Right off the bat, it’s clear that Adobe is reining in just what you can and can’t do with its neural video-editing program. Audiences are only shown relatively simple edits like augmenting a smile or adding facial hair, and as The Verge points out, you can’t do more intensive tasks like face-swapping between individuals. As presented, Adobe seems to intend this as a touch-up tool rather than anything heavier-duty — the kind of capability that, in the world of deepfakes, can quickly turn dangerous or exploitative.
We may never see it — On top of all that, there’s apparently a decent chance the public will never get its hands on Project Morpheus. Gao makes clear in the video that this is merely something Adobe is experimenting with at the moment, with no guarantee it will ever ship in a product like Photoshop. Still, it shows just how seriously companies are taking the rapidly developing world of deepfake tech.