7 AI threats worth worrying about

A.I. is all around us, from our homes to our cars.

Experts say we might face serious threats from this technology in the next 15 years.

Here they are, from least worrisome to most.

Burglar Bots: When it comes to itty-bitty robots squeezing into our homes through cat flaps and stealing our keys or jewelry, experts aren’t too worried. Attacks like these would have low impact and low overall profitability.

Online Eviction: Much of our lives is lived online, and “online eviction,” a denial of access to essential online services or platforms, is considered a medium threat by experts.

This could look like deliberately triggering terms-of-service violations against a target’s account, and it might be used for extortion.

Data Poisoning: Also considered a medium threat, data poisoning involves feeding deliberately biased data to a machine learning algorithm so that it makes skewed decisions.

Biased A.I. has already led to the incarceration of innocent BIPOC people and could be used to manipulate public discourse and trust.

AI Snake Oil (fake AI): It seems like every startup these days has an A.I. or machine learning component to its product, and that hype could be exploited in the future to pass off non-A.I. technology as A.I. This is also considered a medium threat.

Disrupting AI-controlled Systems: As A.I. infrastructure becomes more integrated in our cities and towns, researchers see a high-level risk to our everyday lives if these systems are nefariously targeted.

Traffic gridlock and power failures are a couple of examples. This kind of attack is highly profitable and hard to stop, but researchers say it's also hard to implement widely.

Social Engineering: Social engineering is a form of hacking that requires little technical know-how and uses espionage-like deception to trick targets into handing over information such as a bank account number or password.

These attacks, which include phishing, were rated a high-level threat by researchers because they’re highly profitable and difficult to stop.

Deep Fakes: Number one on the researchers’ list as well as ours, deep fakes are what experts believe will be the biggest A.I. threat of the next 15 years. These video and audio manipulations play on our instinct to believe what we see and hear with our own eyes and ears.

Deep fakes are already being used for blackmail and to sow political dissent, and researchers say they’re difficult to stop because even experts have a hard time spotting the difference.
