2035's biggest A.I. threat is already here

The robot apocalypse has more to do with us than technology.

As if 2020 wasn't going badly enough, a team of academics, policy experts, and private-sector stakeholders warn there is trouble on the horizon. They've identified and ranked the 18 artificial intelligence threats we should be worried about over the next 15 years.

While science fiction and popular culture would have us believe that intelligent robot uprisings will be our undoing, a forthcoming study in Crime Science reveals that the top threat may actually have more to do with us than with A.I. itself.

Rating threats on their potential harm, profitability, achievability, and defeatability, the group identified deep fakes, a technology that already exists and is spreading, as posing the highest level of threat.

Unlike a robot siege, which might damage property, deep fakes do their harm by eroding trust in people and in society itself.

The threat of A.I. may seem forever stuck in the future — after all, how can A.I. harm us when my Alexa can't even give a correct weather report? — but Shane Johnson, Director of the Dawes Centre for Future Crime at UCL, which funded the study, explains that these threats will only grow in sophistication and become more entangled in our daily lives.

"We live in an ever-changing world which creates new opportunities - good and bad," Johnson warns. "As such, it is imperative that we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new 'crime harvests' occur."

While the authors concede that the judgments made in this study are inherently speculative and influenced by our current political and technical landscape, they argue that the future of these technologies cannot be separated from those environments either.

How did they do it — To make these futuristic judgments, the researchers gathered a team of 14 academics in related fields, seven experts from the private sector, and 10 experts from the public sector.

These 31 experts were divided into groups of four to six people and given a list of potential A.I. crimes, ranging from physical threats (like an autonomous drone attack) to digital ones (like phishing schemes). To make their judgments, the teams considered four main features of each attack:

  • Harm
  • Profitability
  • Achievability
  • Defeatability

Harm, in this case, can be physical, mental, or social. The study authors further specify that these threats can cause harm either by defeating an A.I. (e.g., evading facial recognition) or by using an A.I. to commit a crime (e.g., blackmailing people with a deep fake video).

While these factors cannot truly be separated from one another (e.g., an attack's harm might, in reality, hinge on its achievability), the experts were asked to consider each criterion independently. The teams' scores were then sorted to rank the most harmful A.I. attacks of the coming 15 years.
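To make that last aggregation step concrete, here is a minimal sketch of how ratings like these could be rolled up into a single ranking. The threat names, the 1-to-5 scores, and the equal-weight averaging are all illustrative assumptions, not the study's actual data or method.

    # A hypothetical sketch of ranking A.I. threats from expert ratings.
    # The threats, the 1-5 scores, and the equal-weight average are
    # illustrative assumptions, not figures from the Crime Science study.

    CRITERIA = ("harm", "profitability", "achievability", "defeatability")

    # One rating per criterion. Following the study, "defeatability" means
    # difficulty of defeat, so a higher score always means a bigger threat.
    ratings = {
        "deep fakes":                {"harm": 5, "profitability": 4, "achievability": 5, "defeatability": 5},
        "driverless-vehicle attack": {"harm": 5, "profitability": 2, "achievability": 4, "defeatability": 4},
        "A.I.-authored fake news":   {"harm": 4, "profitability": 3, "achievability": 5, "defeatability": 4},
        "burglar robots":            {"harm": 2, "profitability": 2, "achievability": 2, "defeatability": 1},
        "forgery":                   {"harm": 1, "profitability": 2, "achievability": 2, "defeatability": 2},
    }

    def overall(scores):
        # Collapse the four criteria with an equal-weight mean (an assumption).
        return sum(scores[c] for c in CRITERIA) / len(CRITERIA)

    # Sort from most to least threatening, mirroring the study's ranking.
    for threat, scores in sorted(ratings.items(), key=lambda kv: overall(kv[1]), reverse=True):
        print(f"{threat:>26}  {overall(scores):.2f}")

In the study itself, each group debated and scored the criteria separately before the results were pooled; an equal-weight mean is just one plausible way to collapse four ratings into a single rank, and reweighting it (say, toward harm) would reorder the list.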

From forgery on the lowest end to deep fakes on the highest, A.I. threats are sure to be a force to be reckoned with in coming years. Credit: Crime Science

What were the results — Comparing 18 different types of A.I. threats, the group determined that video and audio manipulations in the form of deep fakes were the greatest overall threat.

"Humans have a strong tendency to believe their own eyes and ears, so audio and video evidence has traditionally been given a great deal of credence (and often legal force), despite the long history of photographic trickery," explain the authors. "But recent developments in deep learning [and deep fakes] have significantly increased the scope for the generation of fake content."

The authors say the potential impact of these manipulations ranges from individuals scamming the elderly by impersonating a family member to videos designed to impersonate public and governmental figures and sow distrust in them. They also add that these attacks are hard for individuals (and even, in certain cases, for experts) to detect, making them difficult to stop.

"Changes in citizen behavior might, therefore, be the only effective defense," write the authors.

Other top threats included autonomous cars used as remote weapons, echoing the vehicle-based terrorist attacks of recent years, and A.I.-authored fake news. Interestingly, the group judged burglar robots (small robots that climb through people's cat flaps to steal keys and aid human burglars) to be among the lowest threats.

So ... are we doomed? — No, but there is some work for us to do. Popular depictions of A.I. threats imagine that we'll have a single red button to press that can make all nefarious robots and computers stop in their tracks. In reality, the threat isn't so much the robots themselves but how we use them to manipulate and harm each other.

Understanding this potential for harm, and getting ahead of it through information literacy and community building, can be a powerful defense against this more realistic robot apocalypse.

Abstract: A review was conducted to identify possible applications of artificial intelligence and related technologies in the perpetration of crime. The collected examples were used to devise an approximate taxonomy of criminal applications for the purpose of assessing their relative threat levels. The exercise culminated in a two-day workshop on ‘AI & Future Crime’ with representatives from academia, police, defence, government and the private sector. The workshop remit was (i) to catalogue potential criminal and terror threats arising from increasing adoption and power of artificial intelligence, and (ii) to rank these threats in terms of expected victim harm, criminal profit, criminal achievability and difficulty of defeat. Eighteen categories of threat were identified and rated. Five of the six highest-rated had a broad societal impact, such as those involving AI-generated fake content, or could operate at scale through use of AI automation; the sixth was abuse of driverless vehicle technology for terrorist attack.
