How UC Berkeley's New Center Could Prevent a Military A.I. Apocalypse

"If both sides develop this technology, nobody is ahead."

The University of California, Berkeley just added its name to a growing list of reputable institutions working to prevent malevolent artificial intelligence from destroying humanity. Its new Center for Human-Compatible Artificial Intelligence, announced Monday, is meant to ensure that A.I. remains beneficial and to preclude an A.I. apocalypse.

Berkeley’s own A.I. expert, Professor Stuart Russell, is leading the venture. The Open Philanthropy Project provided a $5.5 million grant to the Center, and others chipped in as well. Several other experts from Berkeley, Cornell University, and the University of Michigan will serve as co-principal investigators.

Cornell Professor Bart Selman told Inverse about the Center, A.I.s that we should fear, and why the world probably isn’t headed to hell in a handbasket — yet.

What do you hope to accomplish with the Center for Human-Compatible A.I.?

These centers are a relatively new development. People are making investments in them (the Future of Life Institute, for example, received funding from Elon Musk of Tesla) to deal with questions concerning the future of life, and possible threats to the future of life and society. The new center at Berkeley is another example of such an institute.

It’s inspired by the changing role of artificial intelligence and computing in our society. These technologies are moving into our society, and will have an effect on us. These institutes are there to bring researchers together to study possible negative impacts, or how to make sure that there are no negative impacts.

The Center’s very title suggests that A.I. and humanity are inherently incompatible. Is that your view?

That’s the first reaction people have, that these things could be incompatible. We actually believe that, with the proper precautions, and the proper guidelines and research, we can really align the interests and make sure that these technologies are beneficial. A.I. technology has tremendous positive sides, and the goal for society is to make sure that we stay on the positive sides. The researchers involved in the institute believe that it’s feasible.

If you could adopt a doomsayer perspective, what would the most likely A.I.-induced doomsday scenario look like?

A clear risk would be a military development of artificial intelligence, one that could lead to an A.I.-weapons arms race between countries: Autonomous weapons, autonomous drones that follow their own commands, developed by different countries that are hostile to each other. That’s the most concrete risk I see. I think A.I. developed for self-driving cars, for example — for regular roles in society — will be very beneficial. But the military aspect is a worrisome component.

Is that already an uphill battle?

Countries are actually considering this issue. In a certain sense, it's an uphill battle: The military is pushing forward to develop A.I. technology. But there have been agreements, with biological weapons probably the most well-known case, where countries at some point decided that it would benefit everyone not to develop those weapons on a large scale. We're hoping that countries will realize that the downside, for everybody, is too large to go into an all-out A.I. arms race.

If both sides develop this technology, nobody is ahead.

Is there anything else I should know about the Center, or about malevolent A.I. in general?

It’s important to give the positive side. It’s easy to emphasize the risks — and, of course, that’s partly why these centers are established — but it’s also important to point out that a lot of researchers feel that this risk is manageable, and that the upsides will be tremendous.

You already gave the doomsday perspective. What are some potential positive uses of A.I. that you’re really excited about?

The positive side will be a society where we are assisted by this technology. Simultaneous speech translation, between Chinese and English, for example, where people around the world can converse with each other as if each were speaking their native language. Elderly care: People can stay in their homes as long as they want, because they have a household assistant robot that helps them live independently.

That will probably have a pretty dramatic impact on life in general, in terms of employment and what we prioritize.

Yes. There’s the employment debate, which is sort of a separate debate going on. There will be sufficient resources to support everyone, so there is the issue of how we make sure that everyone benefits from these developments. There will be a re-realization of what it means to work, what kind of work people will do. But in principle, people will have more freedom, and more time to do activities that they enjoy doing, because machines will help with many other more mundane, work-type things. It will take some time, but we believe society will adapt, and slowly incorporate these changes.

This interview has been edited for clarity and brevity.
