The Crowd Sees Tomorrow Coming

Physicist Anthony Aguirre wants us all to consider the probabilities that will define our collective future.

Physicist Anthony Aguirre worries about artificial intelligence bringing about the apocalypse. Not, like, constantly, but enough that he wonders why the rest of us aren’t paying more attention. He chalks up complacency to the lack of credible predictions about the future. When people don’t know if something is plausible, they don’t gear up for a fight. Humans are invariably defeated by what they didn’t see coming.

Aguirre’s solution is also a call to action. Metaculus, an online prediction platform, was devised as a means of crowdsourcing futurism: it pools the wisdom of crowds to make better predictions. There’s a ton of evidence to support the idea that a hive mind can out-think an expert, even if that expert is Ray Kurzweil. Metaculus is also a subtle enough system to account for the fact that some people are simply better at making predictions than others: members with a proven track record of predictions that come true have their future predictions weighted more heavily. Sometimes crowds are smart in predictable ways.

What sets Metaculus apart from even the most astute futurologists is the volume of data it generates. If you look at 100 different events that the crowd determined had a 70 percent likelihood of happening and 70 of them did, there’s reason to take the system seriously. Nostradamus types can’t make predictions in bulk, so it’s harder to know whether they should be taken at their word.
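
To make that bulk-calibration check concrete, here is a minimal sketch in Python. The binning scheme and the numbers are illustrative assumptions, not Metaculus’s actual scoring code:

```python
from collections import defaultdict

def calibration_table(predictions):
    """Group (forecast probability, outcome) pairs into bins and
    compare each stated probability with the observed frequency."""
    bins = defaultdict(list)
    for prob, happened in predictions:
        bins[round(prob, 1)].append(happened)  # bin to the nearest 10%
    for prob in sorted(bins):
        outcomes = bins[prob]
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {prob:.0%}: observed {observed:.0%} over {len(outcomes)} events")

# Hypothetical data: 100 events all forecast at 70 percent, of which 70 occurred.
data = [(0.7, True)] * 70 + [(0.7, False)] * 30
calibration_table(data)  # -> stated 70%: observed 70% over 100 events
```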

If it proves effective over time, Metaculus still won’t be able to predict events before they happen. However, it will be able to provide probabilities associated with different outcomes, allowing policy makers, inventors, and everyone else to prepare for both likely and unlikely outcomes. It will facilitate a sort of survivalist triage for the human race. It will give Aguirre both more to worry about and less. Inverse spoke to him about how he hopes to use the information he’s collecting.

Why is having a whole bunch of predictors better than just a few experts?

Everybody has access to different data sources, and a lot of it is teasing a signal out of a very noisy set of information. When you do an experiment, you don’t just try to take a little bit of data because it’s easy, you take as much data as you possibly can, because the two things that are affecting your accuracy are noise uncertainty and systematic errors, and the more data you have, the more you’re able to account for both of those things. Systematic errors are always harder; noise just requires getting more data.
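
His experiment analogy is easy to simulate. The toy Python sketch below, with invented numbers, shows why piling up forecasts washes out independent noise while a shared systematic bias survives:

```python
import random

random.seed(0)
TRUE_VALUE = 0.30   # the quantity every forecaster is trying to estimate
BIAS = 0.05         # a shared systematic error: everyone leans a bit high
NOISE = 0.15        # independent noise in each individual forecast

def forecast():
    return TRUE_VALUE + BIAS + random.gauss(0, NOISE)

for n in (5, 50, 5000):
    mean = sum(forecast() for _ in range(n)) / n
    print(f"n={n:>4}: crowd mean = {mean:.3f} (truth = {TRUE_VALUE})")

# As n grows, the crowd mean settles near 0.35 rather than 0.30: the noise
# averages away, but the shared bias remains however much data you take.
```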

Isaac Asimov was just one guy.

OK, but if there’s a systematic error, how can you account for that?

Of all the things that participants say are 99 percent certain, in fact only about 85 or 90 percent happen. So people in general are a little bit overconfident. Weirdly, it doesn’t happen at the other end: of the things that people say are one percent or five percent likely, about one or five percent of them actually happen.

If you know that people generally tend to be overconfident, you can correct for that. And that’s not something an individual predictor can really do well, because they don’t have a big enough set of data on their own predictions to do that. Even if they were self-aware and keeping track of them all and all that stuff, it’s pretty hard to do.
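
One way to picture that correction is as a lookup against the historical record: map each stated probability to the frequency actually observed at that confidence level. This is an illustrative sketch, not Metaculus’s actual method; the table values simply echo the pattern Aguirre describes:

```python
def recalibrate(stated, table):
    """Map a stated probability to the frequency historically observed
    for forecasts at that level, interpolating between known points."""
    pts = sorted(table.items())
    for (p0, f0), (p1, f1) in zip(pts, pts[1:]):
        if p0 <= stated <= p1:
            w = (stated - p0) / (p1 - p0)
            return f0 + w * (f1 - f0)
    return stated  # outside the table's range: leave the forecast alone

# Hypothetical calibration history: the low end is accurate,
# the high end is overconfident, as described above.
history = {0.01: 0.01, 0.05: 0.05, 0.50: 0.50, 0.99: 0.87}
print(recalibrate(0.99, history))  # -> 0.87
print(recalibrate(0.75, history))  # -> about 0.69, shrunk toward the record
```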

How does Metaculus help us understand events that have a low probability of occurring, but would be really bad if they did, like, say, robots taking over the world?

That’s an example, yeah, or nuclear war, or plague. I’ve thought quite a lot about all these.

Those are particularly hard to get a handle on because they tend to be, we hope, pretty low probability. Is a nuclear war 0.01 percent likely each year, or two percent likely each year? That’s a huge difference, but even for people who think clearly about nuclear weapons and geopolitics and numbers, it’s pretty hard to pick apart. How do you differentiate those two? So having lots of data and calibrating things well is important.

Also what I think will be important is taking those questions apart, like, what are the different steps that would lead up to a nuclear war? What are the different ways that it could happen? And sort of map out that space, and have lots of questions about it. So, what’s the probability that Russia will invade Crimea, or something? And what’s the probability that then the U.S. would respond, in this way, if they did? You can then pick those things apart into some more digestible things to make predictions on, and things that we can actually test.
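
The decomposition Aguirre describes amounts to chaining conditional probabilities along one pathway to the outcome. A toy Python version, with every number invented purely for illustration, might look like this:

```python
# One hypothetical pathway to nuclear war, broken into testable steps.
p_invasion = 0.10     # P(the invasion happens)
p_response = 0.30     # P(the U.S. responds militarily, given the invasion)
p_escalation = 0.05   # P(escalation to nuclear use, given that response)

p_war_via_pathway = p_invasion * p_response * p_escalation
print(f"{p_war_via_pathway:.4f}")  # 0.0015, i.e. 0.15% along this one route
```

A full map of the space would sum contributions like this over the many different pathways, each built from sub-questions the crowd can actually be scored on.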

Part of it is threat assessment. Like, how worried really should we be about nuclear war versus climate change versus A.I. apocalypse, etc.? It may be that we’re spending tons of money, appropriately, on worrying about climate change. If an A.I. apocalypse is 10 percent as likely to cause catastrophe as climate change, we should be spending like 10 percent of the money on that.

Buckminster Fuller was just one guy.

What do you think is the most undervalued risk right now?

You can read into this what you will, but I am getting very worried about authoritarian governments. The ability of a government 50 or 100 or 1,000 years ago to actually control a huge population was there, but it was pretty limited. Technology has now gotten to the point where you can pretty well imagine scenarios where there’s just no escape. The ubiquity of electronic surveillance, and the ability to process those huge amounts of data, is pretty scary even in the right hands, if you imagine that there are some right hands for it to be in.

I would have said A.I. until recently, and it still may be, but I’m getting a little bit more worried now about a surveillance state far more effective than Big Brother’s. A lot of the technology is in place for that, and it’s not that far off to have more. People are worried about it, but it’s not something that anybody feels they can do much about.

What are you worried about, in terms of A.I.?

There’s a lot of debate in the community about when real, powerful A.I., the kind that can do pretty much all the things humans can do, will arrive. Is that 10 or 15 years away? Those are kind of the low end of the spectrum that I’ve seen. Or is it 200 years away? There’s no real consensus about that, but the idea of it being 15 to 40 years away, which most people would have dismissed five or 10 years ago, is not so easily dismissed now.

Anyone who feels like, when we have machines that are as intelligent as us — or potentially vastly more so — on Earth, that things aren’t going to change in a very, very fundamental way, I think is just fooling themselves. And whether that’s disaster or utopia or something in the middle is unclear, but it is a complete phase change, like the invention of agriculture, or societies, or money, or something. It’s going to be that big.

Some of the disaster scenarios are pretty silly, but anyone who thinks that it’s going to be business as usual if true A.I. comes really needs to think harder about the issue, because it’s hard to imagine that that would be true. It’s the entire future of the human race that’s going to radically change — shouldn’t we be thinking hard about that?

What are the signals that true A.I. might be coming in the not so distant future?

Things that we felt were beyond A.I.’s capabilities, like winning at Go, recognizing images, beating CAPTCHAs, or driving cars: these really were things that 20 years ago were just like, oh man, we’re going to need some major conceptual breakthroughs in order to do those things. And we didn’t. There were no conceptual breakthroughs. There was much more data, and much faster processing. Now that doesn’t mean that more data and more processing is going to get us to human-level A.I. I don’t think it will; I think there probably will have to be some breakthroughs. But it does indicate that intelligence may not be as hard to engineer as we thought.

Talking about specific worries and scenarios and stuff is a little bit less important than generally recognizing that this is coming on some timescale and that we’re completely unprepared for it, and still we’re devoting most of our time and attention to somewhat irrelevant things. Arguing about whether we should have more or less skilled immigration into the U.S. in the face of the fact that huge swaths of the economy are going to be automated on a 10 or 20-year time scale is kind of off the mark.

Part of it is having a basis to take the predictions seriously. If you say, “My report that I just wrote says that 30 percent of jobs are going to be automated in the next 15 years,” people will say, “OK that’s interesting, but it could be five percent, and is that such a big deal? And won’t these 30 percent of people just find new work? They always have in the past,” and so on. Ideology quickly comes in, and people twist the predictions to suit their agenda in a lot of ways. There isn’t a strong reason to believe that those predictions are actually true, other than the reasoning that somebody has put into making those predictions, which is always criticizable. You can always get different answers by putting in different assumptions, and tweaking things in various ways.

What I think is really crucial is that the predictions made on Metaculus, or something like it, are accountable. People make the prediction, and we check, again and again and again, what actually happened and compare it to their prediction. So, five years from now, if the site continues to grow and work, when this system says something is 90 percent likely, you’re really going to know, as well as you’re ever going to know, that nine out of ten times that thing is going to happen.
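
That kind of accountability reduces to simple arithmetic once the outcomes are in. One standard way to score a track record, used here purely as an illustration since the interview doesn’t specify Metaculus’s scoring rule, is the Brier score:

```python
def brier_score(predictions):
    """Mean squared error between forecast probabilities and outcomes:
    0.0 is perfect; always guessing 50 percent scores 0.25."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical track record: (forecast probability, outcome as 0 or 1).
record = [(0.9, 1), (0.9, 1), (0.9, 0), (0.2, 0), (0.2, 0)]
print(brier_score(record))  # -> 0.182; lower means a more trustworthy forecaster
```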

If you go to someone in government and say, “Look, automation, it’s big — don’t worry about illegal immigrants,” they’ll say “OK, what am I going to do about it, and how big, exactly?” But I think if you’re able to go to a policy maker and say, “Look, this set of people, with 90 percent probability, is simply going to be out of work and have nothing else to do,” that is a piece of information that you can act on. It’s not vague. The precision and the quantitative aspects of it are what’s different and new, and hopefully more effective. I don’t know. People can make terrible decisions no matter what data they have, unfortunately, but you have to at least try, right? Otherwise you’re just choosing at random.

This interview has been edited for brevity and clarity.
