
Will A.I. Be Able to Self-Program the Randomness of Life?

There are no easy or obvious answers.

Getty Images / Ken Ishii

Facebook’s news feed has recently been taken to task for the explosion of “fake news” that can have real consequences. Google returns different search results depending on who’s asking and other factors. Music curation has moved from record labels and radio stations to increasingly AI-based algorithms.

Over the past decade, we have grown accustomed to this intrusion of machine intelligence into our lives: systems watch our behavior, try to infer what we want, and combine that information with data from other sources to create a consumer profile valuable to advertisers.

What happens when the next generation of intelligent systems mediate our environments completely? Or, more to the point, what happens when we delegate the curation of all aspects of our lives, as well as the actual physical appearance of the world, to a pervasive network that learns about us and continually adapts to our needs and desires?

We don’t really know, because such a fantastical capability has always been the realm of science fiction. However, a collection of technologies is poised to merge over the next generation and take the form of a sentient physical environment in which it becomes increasingly difficult to discern physical reality from software-generated illusion.

Today we call it augmented reality; tomorrow it will be a universal method of changing the appearance of the physical world, including loading it with instantaneously curated content from a variety of sources. Today we call it the Internet of Things; tomorrow it will be an invisible wiring harness of sensors, embedded everywhere and continuously gathering data to be integrated in useful ways. Today we call it big data, or 5G networks, or body computing, or blockchain, or any number of other technology areas currently seen as discrete; tomorrow, these and other technologies will converge to animate the physical environment and integrate our bodies with the internet of everything else.

An A.I.-based backbone for this continuous sensing and responding, this constant interaction between individuals and a superimposed illusion connected to an ocean of data, would ultimately make more and more decisions about our day-to-day lives. It would learn from a wealth of behavioral data and adjust its responses.

This raises at least two big questions: first, what kind of customized world would each of us choose to create if we had the power? And second, what kinds of A.I.-based products are most likely to be commercially successful – the ones that ease the path through life, or the ones that offer challenges?

In other words, left to our own devices, would we create worlds for ourselves that stripped away all of the friction of life? Would we ever choose to encounter political, cultural, or religious points of view we didn’t agree with? Would we ever venture out of the bubble of the familiar in any aspect of life, or would personal comfort become the dominant criterion?

'Ex Machina', 2015.

We are surrounded by anecdotal evidence that no matter how adventurous we think we are, we tend to find comfort in patterns, and we tend to act in ways that reduce the friction of offensive ideas or people, not increase it. Furthermore, it would be hard to imagine a commercially successful product in such an A.I.-based world promising to provide customers with unpleasantness because it’s good for them.

So, how can such a highly curated world not lead to a suburbanization of the mind, an intellectually and emotionally gated community? In such an A.I.-mediated environment, what happens to the chance occurrence, the happy accident, the spontaneous and serendipitous, on which so much of human life depends?

Even more seriously, what happens to one’s worldview (and, consequently, to culture and politics) when life becomes an illusion of one’s own making, and much more homogeneous than the actual world is? Could an A.I.-mediated, illusion-based environment encourage narrow-mindedness and distrust of innovation or any other deviation from the norm (which in such an environment means simply “different from me”)?

'Her', 2013.

It doesn’t have to play out that way. In fact, this uniquely powerful collection of technologies could be used to expose us to what is out of reach – the experiences of people living under circumstances very different from our own, realistic virtual travel to otherwise inaccessible places and cultures, and virtual experiences that let us feel what others feel. But this constructive approach will not happen by itself.

So, how can a developer of next-generation A.I.-based systems encode the randomness of life, good and bad, into those systems?

There are no easy or obvious answers, but either the community will collectively make these decisions, or the technology will be allowed to evolve according to strictly commercial criteria. Perhaps the most durable and powerful approach lies in the realm of broad-based consensus, standards, best practices, and ethically aligned design principles.

This collection of technologies is qualitatively different from what has come before. Perhaps a new approach is necessary to make sure these technologies develop in a way that fulfills their positive potential and avoids the pitfalls. Perhaps new cross-industry standards and agreements could help lay an ethical groundwork for this development; openness, transparency, and broad consensus could be the guiding principles that assure this new sentient environment will be used to bring new and unimaginable benefits to humanity.

Jay Iorio is the Director of Innovation for the IEEE Standards Association. He will speak on a panel discussion, “AI & The Suburbanization of the Mind,” at South by Southwest in March.
