
Google Employees Feared "Lethal Outcomes" for Pentagon A.I. Project — Here's Why

The company might listen to its employees.

Responding to backlash, resignations, and a company-wide petition, Google will not renew its contract to provide artificial intelligence research to the US Department of Defense. Employees feared Google’s Project Maven would be used for “lethal” purposes, warning the work would “irreparably damage” the company’s brand while advancing the weaponization of A.I.

Google Cloud CEO Diane Greene announced the decision at Weather Report, a weekly company meeting, Gizmodo reported on Friday. Citing the overwhelming internal backlash, Google has decided not to pursue a follow-up contract once the current Pentagon contract expires in 2019. Instead, the company plans to unveil a new set of ethical principles for the use of A.I. next week.

“Google should not be in the business of war,” employees wrote in a letter addressed to chief executive Sundar Pichai and signed by more than 3,000 staffers. Project Maven gave the Pentagon an A.I. surveillance engine that used “Wide Area Motion Imagery” data captured by drones to detect vehicles and track their movements. Google employees began to fear that once this technology was in military hands, the company’s own A.I. could be used to help operate drones and launch weapons.

The employee letter demanding Google drop its contract with the Pentagon was addressed to chief executive Sundar Pichai.


While Google’s motto has long been “Don’t be evil,” once its technology is handed off to third parties, the company can no longer control how it is used. Employees demanded that Google discontinue a contract to improve military surveillance in ways they believed could lead to “lethal outcomes.”

The letter’s choice of words was a direct reference to Defense Secretary Jim Mattis’s frequent insistence that the US military’s central goal is to increase the “lethality” of American forces. Even without knowing the internal dialogue between Google Cloud and the Pentagon, the current military leadership has been clear about the impetus for its A.I. research. Once Google provided the capabilities, it would be at the Pentagon’s discretion whether to use autonomous weapons systems without the oversight of a human operator.

Furthermore, Google did not disclose the extent to which it would be helping the Pentagon reach this goal. Initially, Greene told worried employees that the company’s contract with the Department of Defense was worth only $9 million. However, emails first reviewed by Gizmodo revealed that the initial contract was closer to $15 million and that the project’s budget was expected to grow to as much as $250 million. The emails also revealed that Google was providing the Pentagon with machine learning algorithms with the explicit goal of building a system sophisticated enough to “surveil entire cities.”

Google says it will publish its new set of ethical principles governing the use of its A.I. technology next week. The guidelines are expected to reevaluate partnerships with agencies whose ambitions for A.I. seem to contradict the values implied by Google’s “Don’t be evil” motto.

The choice to discontinue the Pentagon partnership seemed inevitable, given the wave of employee resignations and the potentially irreparable damage to Google’s brand. But despite the new ethical principles underway, the company remains bound by the current contract until 2019, so Google won’t be slowing its work providing key A.I. research and capabilities to the Pentagon anytime soon.
