The solution to stopping deepfakes might be staring us in the face

A misinformation expert tells Inverse that deepfakes have a people problem.

Deepfakes. They are a problem that could fool Americans during elections, disrupt the lives of the people being faked, and, more broadly, screw with our perception of reality in an increasingly digital future.

In an effort to curb this seemingly intractable problem, lawmakers in California have passed two laws aimed at combating deepfakes.

A deepfake is generally defined as a digital video clip created using artificial intelligence to superimpose someone’s likeness and voice onto another person’s body in a highly convincing way. Smart policy reforms are needed to combat this modern phenomenon, but some feel the underlying issues can’t be solved by laws alone.

Brooke Binkowski is one of the best people to talk to when it comes to online fakery. The former managing editor of Snopes now works at Truth Or Fiction, and tells Inverse that deepfake laws are needed, but they won’t save us by themselves.

“I think that this is a well-intentioned law, and perhaps it can be retroactively adjusted to make room for quickly changing technologies,” Binkowski says. “I don’t disagree that [deepfakes] will be a problem as more people catch on to these smear attempts.”

Binkowski says everyone thought Photoshop would be the end of people trusting photographs, yet we adapted to that change in the media landscape. She adds that the larger problem is one of people, not technology: there simply aren’t enough people working to fight disinformation.

“We don’t have enough sources for vetted and thorough information, and it’s getting worse all the time,” Binkowski says. “We need more trained journalists and others with expertise working on these issues, so that we can combat disinformation in all its forms together.

“If you have a lot of accessible, vetted, responsibly presented information to overwhelm the fakery, it doesn’t matter how high-tech the disinformation is; it will still be clear that it is disinformation.”

A technological fix is needed, too. Spotting deepfakes and tracing where they came from has to become easier, Binkowski says: “Some sort of irrefutable way to encode their origins into the metadata,” she suggests, adding that social media platforms need to make sure this data can’t be stripped from the video. A.I. and machine learning have also been cited as tools for confronting deepfakes.
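To make that idea concrete, here is a minimal sketch (not from Binkowski or the article) of what binding origin information to a video might look like. It signs the clip’s contents together with its stated origin, so that stripping or altering either one breaks verification. The key handling, field names, and use of a symmetric HMAC are all simplifying assumptions; a real provenance scheme would use public-key signatures so anyone, not just the publisher, could check the record, and platforms would have to preserve it when a clip is re-encoded or reshared.

import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-held-secret"  # assumption: a key only the original publisher holds

def sign_origin(video_bytes: bytes, origin: dict) -> dict:
    # Bind the origin metadata to the exact video content with an HMAC.
    payload = json.dumps(origin, sort_keys=True).encode() + hashlib.sha256(video_bytes).digest()
    return {
        "origin": origin,
        "signature": hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify_origin(video_bytes: bytes, record: dict) -> bool:
    # True only if both the video and its claimed origin are unchanged.
    payload = json.dumps(record["origin"], sort_keys=True).encode() + hashlib.sha256(video_bytes).digest()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

clip = b"...raw video bytes..."
record = sign_origin(clip, {"source": "example-newsroom", "captured": "2019-10-14"})
print(verify_origin(clip, record))                # True: clip and origin match the signature
print(verify_origin(clip + b"tampered", record))  # False: any edit breaks the check

The point of the sketch is the property Binkowski describes as “irrefutable”: the origin claim only verifies against the exact footage it was attached to, so a doctored copy can’t silently inherit the original’s provenance.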

“Most of all, we need to elevate and support the work and the voices of those doing fact-checking and explaining how the technology works, actual experts in their field, and flood social media with vetted stories that show their work so that they are fakery-proof,” Binkowski says.

Essentially, the technology is here, and we need to gear up for a battle. Deepfakes will continue to make their way around the internet, and some people will believe them, so we need people working to refute these fraudulent videos.

“We are in a new world now, and it requires heart and courage to call out bullshit in the face of pearl-clutching, threats, cherry-picking, gaslighting, and other abusive techniques,” Binkowski says. “We need to understand what we are up against, and we need to be able to fight against it together.”
