Google, Facebook, and Microsoft Want to Make A.I. Serve Humanity

Five giants of the tech industry have joined forces to create the “Partnership on Artificial Intelligence to Benefit People and Society,” a group that wants to conduct research into A.I. ethics and how the technology can best work alongside humans. Google (through its DeepMind subsidiary), Facebook, Microsoft, IBM, and Amazon have all committed themselves to a project that aims “to study and formulate best practices on A.I. technologies, to advance the public’s understanding of A.I., and to serve as an open platform for discussion and engagement about A.I. and its influences on people and society.”

Although it sounds like the group could fall into some fusty arguments about the finer points of obscure software, the partnership does place at the center of its website a mantra that shows it understands the exciting possibilities A.I. can hold for humanity: “We believe that artificial intelligence technologies hold great promise for raising the quality of people’s lives and can be leveraged to help humanity address important global challenges such as climate change, food, inequality, health, and education.”

[Photo: South Korean professional Go player Lee Se-dol places a stone against AlphaGo, the A.I. program developed by Google’s DeepMind, during their five-game match in Seoul on March 13, 2016. Getty Images / Handout]

The partnership promises open dialog on A.I. ethics, but there’s no denying that a consortium of big players gives it a certain degree of power over the direction of the conversation. Beyond shutting out startups, this could affect big names like Tesla, which isn’t part of the group but has a key interest in A.I. ethics through its development of Autopilot software.

Interest in defining a code of ethics has slowly ramped up as A.I. systems are given responsibility for life-or-death choices. How should a self-driving car act, for example, in an emergency? Germany has outlined the basic laws it would expect autonomous vehicles to follow, like never classifying people as a way of deciding whom to prioritize in an accident.

More general sets of rules have come from organizations like the British Standards Institution (BSI). These rules, which cover ground like not designing robots that can hurt people, aim to develop a universal code. The partnership promises that, like the BSI’s code, it will be open to external dialog, but time will tell whether that’s enough to satisfy A.I.’s other big players.