
7 Ways People Are Wrong About the Future of A.I.


Rodney Brooks is tired of misinformation. The Australian roboticist helped bring the Roomba to life, chairs Rethink Robotics and used to serve as director of the MIT Artificial Intelligence Lab. So when he says that his field of expertise gets misrepresented in the media, you know there is a serious issue that needs addressing.

In a blog post published this month, Brooks highlights four common assumptions about artificial intelligence development that lead to questionable predictions. There is the notion of a general-purpose A.I., one that could act as a super-intelligent agent with an independent existence. This is linked to the idea of the singularity, where a general A.I. would exceed our intelligence and start building ever-smarter versions of itself. There's the fear that such a system could have values that differ from ours, which in turn leads to the doomsday prediction that A.I. could choose to wipe out humanity.

These four issues weave their way into narratives and create inaccurate ideas around robots and the future. Here are the seven major ways Brooks sees A.I. as regularly misrepresented:

Over and underestimating

This is summed up by an adage known as Amara's Law, coined by futurist Roy Amara: "We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run."

Essentially, a new technology never quite works out as expected. Brooks cites the computer as an example: people in the fifties worried the machines could take over jobs, a fear captured in the 1957 film Desk Set.

When that didn’t materialize, there was less concern over the effect they may have on everyday lives, until the nineties and subsequent years when computer chips suddenly found their way into all manner of devices. While bold A.I. predictions make for good headlines, the biggest changes will arrive over the long term.

Imagining magic

Brooks cites science fiction writer Arthur C. Clarke’s third law as an explanation for why some artificial intelligence predictions are borderline impossible: “Any sufficiently advanced technology is indistinguishable from magic.”

Predicting how an advanced machine could change our lives is incredibly tricky, even more so if we have no idea what the limitations of such a device are.

It would be like going back in time to show an Amazon Kindle to Johannes Gutenberg. He might have been impressed by the device's ability to change the words on its page without using his printing press, but he might also have assumed it could display the same words forever, like a regular printed page. Without any concept of charging, batteries, or electricity, the limits of the system are unclear. In a similar vein, discussing the limits of a hypothetical general A.I. is near-impossible.


Performance versus competence

When a person displays knowledge of a subject, you make certain assumptions about their broader competence. If you're walking down the street with a friend and they point to a group of people and say, "oh look, people playing football," you assume they know what football is, what a game is, and that they could answer basic questions about what they're watching.

With artificial intelligence, there's a similar assumption that if a machine can perform a certain task, it has the surrounding competence. But while an image recognition system may identify Corinthian columns with ease, it probably can't tell you anything about their history, and it may fail entirely at something a person would find far easier, such as identifying an animal.
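To make that gap concrete, here is a purely illustrative Python sketch; the classifier, feature names, and weights are all invented for this example and stand in for no real system. The point is that the "model" is nothing but a mapping from inputs to label scores, with no further competence to query:

```python
# Toy sketch of "performance without competence": the classifier is nothing
# but a mapping from input features to label scores.
def classify(features: dict) -> dict:
    """Hypothetical image classifier: returns label scores and nothing else."""
    # Hand-picked stand-in weights; a real network would learn millions of them.
    weights = {
        "fluted_shaft":  {"corinthian_column": 0.9},
        "acanthus_leaf": {"corinthian_column": 0.8},
        "fur_texture":   {"cat": 0.9},
    }
    scores = {"corinthian_column": 0.0, "cat": 0.0}
    for feature, value in features.items():
        for label, weight in weights.get(feature, {}).items():
            scores[label] += weight * value
    return {label: round(score, 2) for label, score in scores.items()}

print(classify({"fluted_shaft": 1.0, "acanthus_leaf": 1.0}))
# {'corinthian_column': 1.7, 'cat': 0.0} -- a confident, correct label, yet
# there is nothing here you could ask about when the Corinthian order arose.

print(classify({"trunk": 1.0, "big_ears": 1.0}))  # an elephant, say
# {'corinthian_column': 0.0, 'cat': 0.0} -- features outside its training
# vocabulary score zero everywhere.
```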

“Suitcase words”

These are words that pack in a variety of meanings, a term coined by the scientist Marvin Minsky, and they can spell trouble for A.I. coverage.

South Korean professional Go player Lee Sedol watches as Google DeepMind's Aja Huang places the first stone on behalf of the artificial intelligence program AlphaGo during their match on March 12, 2016 in Seoul, South Korea.

For example, when someone says that a machine "played" a game of Go against a human and won, it may conjure up a slightly inaccurate image. When Lee Sedol lost to AlphaGo last year, he lost to a cold collection of wires and code with no concept of what a game is, what playing feels like, or whether it was enjoying itself.

Similar issues arise with words like "thinking" or "learning." Applying human concepts to machines may work as a shorthand for what a computer has achieved, but the shorthand is packed with assumptions that can mislead readers.
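As a hedged illustration of what the "learning" suitcase usually hides, here is a minimal Python sketch, with invented toy data, of a machine "learning" a straight line: the whole process is nudging two numbers to shrink an error score, with no understanding anywhere in the loop.

```python
# "Learning" here means fitting y = w*x + b to toy data by gradient descent.
data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]  # invented (x, y) pairs
w, b = 0.0, 0.0

for _ in range(2000):
    # Average gradient of the squared error over the data points.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= 0.05 * grad_w  # the entire act of "learning" is these two updates
    b -= 0.05 * grad_b

print(f"w={w:.2f}, b={b:.2f}")  # roughly w=2.05, b=0.97
```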

Exponentials

This is where people extrapolate from a given set of data points. It's easy to assume that because a technology has advanced at a certain pace, it will continue to advance at that pace. The top-of-the-range iPod in 2002 stored 10 gigabytes of data, while the top model in 2007 stored 160 gigabytes. But the pattern didn't continue: the most recent iPhone tops out at just 256 gigabytes, as the shift to flash memory and a lack of consumer demand removed the incentive to keep expanding storage.
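A quick back-of-the-envelope sketch in Python shows how the extrapolation goes wrong, using the iPod figures above and the 256-gigabyte iPhone from the same paragraph as the comparison point:

```python
# Capacity (GB) of the top-of-the-range iPod at two observed points.
observed = {2002: 10, 2007: 160}

# Annual growth factor implied by those points: 16x over five years.
rate = (observed[2007] / observed[2002]) ** (1 / (2007 - 2002))  # ~1.74x/year

# Naive extrapolation: assume the same rate holds for another decade.
forecast_2017 = observed[2007] * rate ** (2017 - 2007)

print(f"Implied growth: {rate:.2f}x per year")
print(f"Naive 2017 forecast: {forecast_2017:,.0f} GB")  # 40,960 GB, about 40 TB
# The top iPhone of the era actually shipped with 256 GB -- 160 times less
# than the straight-line exponential forecast.
```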

Artificial intelligence has similar issues. It may seem like today's advanced machines will be twice as advanced after double the development time, but what if there's no need for that? Will we see a general-purpose artificial intelligence just because we can build one, or will the desire to invent one fade first?

Hollywood scenarios

This is less about being too bold with future visions, and more about not appreciating how technologies feed into each other. The development of the modern smartphone brought the tablet, which changed how many people read on trains. Yet in a lot of sci-fi films, the main character babbles away to a hyper-smart robot in a world that otherwise looks identical to ours. By the time artificial intelligence like that arrives, the world around it will be almost unrecognizable.

Brooks points to economist Tim Harford’s recent blog post as a good summary:

Blade Runner is a magnificent film, but there’s something odd about it. The heroine, Rachael, seems to be a beautiful young woman. In reality, she’s a piece of technology — an organic robot designed by the Tyrell Corporation. She has a lifelike mind, imbued with memories extracted from a human being. So sophisticated is Rachael that she is impossible to distinguish from a human without specialised equipment; she even believes herself to be human. Los Angeles police detective Rick Deckard knows otherwise; in Rachael, Deckard is faced with an artificial intelligence so beguiling, he finds himself falling in love. Yet when he wants to invite Rachael out for a drink, what does he do?
He calls her up from a payphone.

Speed of deployment

There’s a mismatch between how fast software and hardware get deployed. While an update to Facebook can go live immediately, a new machine could take months before businesses start using it.

“I regularly see decades-old equipment in factories around the world,” Brooks said. “I even see PCs running Windows 3.0 in factories, a software version released in 1990. The thinking is that ‘if it ain’t broke, don’t fix it.’ Those PCs and their software have been running the same application doing the same task reliably for over two decades.”

While it may seem logical that a piece of artificial intelligence software could go live straight away, in practice it could take years to find its way into everyday use. An A.I.-powered self-driving car means buying a whole new car, something families do incredibly rarely, so the change could take many years to reach the wider public.
