
How to Protect Humanity From the Invention That Inadvertently Kills Us All

What happens if we keep opening Pandora's box?


Nick Bostrom is concerned that if humans keep putting their hand into the cookie jar of technology, one day we won’t like what we pull out. The founder of the University of Oxford’s Future of Humanity Institute, and the mind behind the famous Matrix-like simulation argument, argues in a new paper that the world is not prepared for the development of an easily accessible technology that could destroy modern civilization.

“Picture a big urn filled with balls, representing ideas, methods, possible technologies,” Bostrom said on stage at the TED conference in Vancouver, Canada, on Wednesday, as part of a conversation with TED head Chris Anderson. “You can think of the history of human creativity as the process of reaching into this urn and pulling up one ball after another. And the net effect so far has been hugely beneficial, right? We’ve extracted a great many white balls, some various shades of grey, mixed blessings. We haven’t, so far, pulled out the black ball, a technology that invariably destroys the civilization that discovers it.”

In this scenario, the black ball could take the form of a biological weapon capable of wreaking havoc on the world, say, or a military breakthrough like killer robots. Perhaps Bostrom’s black ball could even take the form of some unforeseen economic incentive that pushes humans to act in harmful ways.

“I think there might well be various black balls in the urn,” Bostrom said. “There might also be some golden balls that would help us protect against black balls. And I don’t know which order they will come out.”

Bostrom’s black ball is distinct from other existential threats that could end life as we know it. In his framing, for example, climate change is not a “black ball.” But it could have been, he argues, if temperatures had risen much faster, renewable energy had been harder to develop, or there had been more fossil fuels in the ground. Similarly, humanity was lucky that nuclear weapons did not turn out to be a “black ball,” since they are essentially impossible for the average person to build.

A nuclear power station. (Unsplash / Frédéric Paulussen)

So what do we do when we discover this black ball? Bostrom proposes two key systems.

The first would require stronger global governance that goes further than the current international system. This would enable states to agree to outlaw the use of the technology quickly enough to avert total catastrophe, because the international community could move faster than it has been able to in the past. Bostrom suggests in his paper that such a government could also retain nuclear weapons to protect against an outbreak or serious breach.

The second system is more dystopian, and would require significantly more surveillance than humans are used to. Bostrom describes a kind of “freedom tag,” fitted to everyone, that transmits encrypted audio and video and spots signs of undesirable behavior. This would be necessary, he argues, for future governance systems to intervene preemptively before a potentially history-altering crime is committed. The paper notes that at $140 per tag, it would cost less than one percent of global gross domestic product to fit everyone with the tag and potentially avoid a species-ending event.
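For scale, here is a rough back-of-envelope reading of that figure. This is only a sketch: the $140 per-tag cost is the number cited from the paper, but the world-population and gross-world-product values below are our own illustrative assumptions, not figures from the article or the paper.

```python
# Back-of-envelope check of the tag-cost claim.
WORLD_POPULATION = 7.7e9      # assumed, roughly the 2019 figure
GROSS_WORLD_PRODUCT = 85e12   # assumed, roughly 2019 world output in USD
COST_PER_TAG = 140            # USD per tag, figure cited in the paper

total_cost = WORLD_POPULATION * COST_PER_TAG        # about $1.1 trillion
share_of_output = total_cost / GROSS_WORLD_PRODUCT  # about 1.3 percent

print(f"One-off hardware cost: ${total_cost / 1e12:.2f} trillion")
print(f"Share of gross world product: {share_of_output:.1%}")
```

Whether the result lands just under or just over one percent depends on the population and output figures assumed; the point is the order of magnitude, a roughly trillion-dollar one-off cost.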

It is a chilling set of proposals, particularly in a post-Snowden world. Perhaps the best response is to simply hope that humanity never discovers an easily accessible technology that would require such a heavy-handed response.

“Obviously, there are huge downsides, and indeed, massive risks, both to mass surveillance and to global governance,” Bostrom said. “I’m just pointing out that if we are lucky, the world could be such that these would be the only way you could survive a black ball.”

Read the paper’s abstract below:

Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.