As the 2020 election heats up, many are worried deepfakes featuring the presidential candidates could be used to sway voters. With this in mind, Facebook just announced it will be removing some types of deepfakes it discovers on its platform, but this new policy is pretty limited in scope.
Facebook announced in a blog post that the company has been consulting with “50 global experts with technical, policy, media, legal, civic and academic backgrounds” to develop a plan to address deepfakes and other methods of manipulating media. The social media giant says it will now remove videos that have been edited to make it appear that someone said words they didn’t say, as well as deepfakes created using artificial intelligence or machine learning.
“This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words,” the company writes.
This narrowly tailored policy has already been criticized by many who say it doesn’t go far enough to combat the spread of disinformation. A spokesman for Democratic presidential candidate Joe Biden called it “an incredibly low floor in combating disinformation.”
Considering Facebook was a hotbed for spreading disinformation during the 2016 election, it’s not surprising that the company is being met with skepticism. Brooke Binkowski, managing editor at the fact-checking website Truth or Fiction and an expert on disinformation, tells Inverse that she’s not satisfied with the new policy.
“No matter what Facebook announces it should be regarded with a critical eye, because of their track record of caring more about optics than reality,” Binkowski says. “Yes, deepfakes are a problem, but so is everything else used to radicalize vulnerable people.”
Facebook has been widely criticized for refusing to take down political ads that feature outright lies, and it appears the company is taking a similar approach to deepfakes. The social media company told CNN that it would not take down a deepfake that was featured in a politician’s ad. Facebook’s policy also won’t ban videos that have been edited in a misleading way but aren’t technically “deepfakes,” such as the viral video of House Speaker Nancy Pelosi that was slowed down to make her appear drunk.
“What exactly are they trying to pull?” Binkowski asks. “Sure looks like they’re working toward specific political goals using a combination of obfuscation, misdirection and disinformation to me.”
Though deepfakes are starting to become a problem that could influence elections, it’s still quite difficult to make a convincing deepfake without a sophisticated team working on it. That said, creating deepfakes will become cheaper and easier as the technology progresses.
As things stand, most disinformation is spread through viral edited images, misleading memes and manipulated videos that won’t be banned under Facebook’s new policy. No one wants Facebook to censor legitimate political speech, but the company could certainly do more to fight the spread of false information.