
A.I. created the madness of deepfakes, but who can save us from it?

This problem is only going to get bigger, and we need to rethink everything.


Google just released a large database of deepfake videos so software engineers can develop ways to identify them, often using artificial intelligence. But some argue that a technical fix won't be enough to save us from the negative effects of deepfake videos.

It appears we're approaching a point where we will no longer be able to trust the videos we come across on the internet. As deepfakes improve, it will soon be impossible to tell the difference between a real video and a fraudulent one.

That's the future deepfake deception promises. So how do we address it?

It's a monumental problem that will affect politics, pop culture, and even our private lives. Many in the tech world are working to use artificial intelligence and machine learning to spot these fakes so videos can be verified, but a new report from the nonprofit research institute Data & Society suggests this kind of technology won't save us from deepfakes.

The report argues that a technical fix to the deepfake problem is necessary, but social solutions will also be needed to address it. Report authors Britt Paris and Joan Donovan write that deepfakes are just the most recent iteration of media manipulation.

“What coverage of this deepfake phenomenon often misses is that the ‘truth’ of audiovisual content has never been stable — truth is socially, politically, and culturally determined,” the report reads.

Looking back in history, the report explains that photographic evidence has been "made admissible in courts on a case-by-case basis since the 1850s," with 19th-century juries often preferring witness testimony, a sign of the public's distrust of the new technology.

Moving forward to the use of video in court cases, the authors describe how defense attorneys slowed the video of Rodney King's 1991 beating during the officers' trial, which "made King's involuntary physical reactions to blows appear as if he were attempting to get up," and the officers were acquitted.


Images and videos have long been framed or manipulated to influence public opinion. Outside the courtroom, the report notes, visual evidence plays a "key role" in journalism.

“Within journalism, visual evidence plays a key role in the construction of public opinion and the arrangement of political power,” the report reads. “Journalists also serve as experts in the politics of evidence, deciding how to frame media as representative of truth to the public.”

"We need more broad, sweeping changes in terms of how we interact with technology."

The authors outline multiple occasions in which journalists have deceptively used images to create a certain narrative that didn’t accurately reflect the reality of what they were reporting on.

Deepfakes won't only affect public figures. A deepfake of someone you know might be created, spread on social media, and ultimately prove impossible to get rid of. The internet never forgets, as the saying goes.

The authors argue that, beyond technical solutions, society needs to work against the dissemination of deepfakes and reconsider how we approach life on the internet.

“The social solutions are multifaceted and would need to address the entrenched issues of systematic inequality online,” Paris tells Inverse. “We need more broad, sweeping changes in terms of how we interact with technology.”

Paris says lawmakers, platforms, and individuals need to recognize that certain groups are disproportionately targeted by online harassment, and that they need to work on that underlying problem rather than assume a technical fix for deepfakes will solve everything. Often, she explains, these videos are used to reinforce misogyny and racism.

“Not a whole lot is happening at the platform level, and I think this is where we need to intervene,” Paris says.

Mark Zuckerberg after testifying before Congress in April


In terms of what platforms can do, the authors suggest content moderators could be empowered to create a "community that fosters pro-social values that go beyond profit motives," and that tech companies could "foster digital citizenship" by building ways to discourage users from sharing false or misleading content.

“We need to not just increase human content moderation, but we also need to improve conditions for those moderators,” Paris says.

When it comes down to it, there's no silver bullet for a problem like the spread of deepfakes. Instead, the authors point to different ways of thinking about the issue: recognizing who is being harmed, and fostering an online community that supports those who are most vulnerable.

“We need to think about how evidence is wielded through technology in ways that reproduce systemic inequalities,” Paris says. “It’s about rethinking society’s relationship with technology and data.”
