
War Algorithms Will Save Lives Unless They Kill Us All

When is a weapon “fully autonomous”?

A close-up of a black laptop keyboard
Christiaan Colen

Imagine, if you can, that you’re a navy sailor out at sea. Your country’s at war, and suddenly an enemy missile is headed toward your ship. In spite of the crew’s thorough training, the prospect of imminent death at sea has everyone on edge. In moments like these, decision-making might be best left to dispassionate, fast-acting artificial intelligence, especially if it means saving lives. In this scenario, A.I.-powered technology could intercept the incoming missile without authorization from a real person. That’s because it would run on what researchers at Harvard call a “war algorithm”: A.I. coded to operate in the context of armed conflict.

“‘War algorithm’ is not a legal term of art or a technical term of art, but we proposed it because we thought it would be a useful concept through which to frame forms of technology that we think should be scrutinized and considered for regulation,” says senior researcher Dustin Lewis of the Harvard Law School Program on International Law and Armed Conflict.

There are three main components of a war algorithm, Lewis tells Inverse. First, it must be expressed through computer code. Second, it must be “given effect through a manufactured platform that can both gather information and help make a choice that is at least partly algorithmically derived,” meaning that it doesn’t require human intervention to make a decision. Third, it must be capable of operating in armed conflict, even if that’s not what it was designed for.
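To make those three criteria a bit more concrete, here is a minimal, purely hypothetical Python sketch: the logic is expressed as code, it turns information gathered by a platform into a choice that is at least partly algorithmically derived, and that choice could matter in armed conflict. Every name and threshold below is invented for illustration and does not describe any real system.

```python
# Hypothetical illustration of Lewis's three criteria, not a real weapon system.
from dataclasses import dataclass

@dataclass
class SensorReading:
    speed_mps: float        # closing speed of the detected object, in meters per second
    radar_signature: float  # 0.0-1.0 confidence that the radar return looks like a missile

def recommend_action(reading: SensorReading) -> str:
    """Derive a choice, at least partly algorithmically, from gathered information."""
    if reading.radar_signature > 0.9 and reading.speed_mps > 300:
        return "intercept"      # no human intervention required for this recommendation
    return "keep_tracking"

print(recommend_action(SensorReading(speed_mps=680.0, radar_signature=0.97)))
```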

“Part of the reason we thought it was important to focus on the algorithm, itself, is that more and more, these algorithms operate in ways that might challenge some of the fundamental legal and accountability concepts underpinning the regulation of armed conflicts,” Lewis says. “These algorithms and algorithmic systems, especially learning algorithms, may challenge these legal concepts in ways that should be spotlighted.”

When the researchers examine autonomous weapons systems, Lewis says, their main concern is accountability. Focusing on weapons alone could obscure the modular qualities of the underlying tech, he says, which can be adapted for various uses in wartime: not only helping to identify, select, engage, and attack a target in the conduct of hostilities, but also carrying out tasks such as providing medical care and supplies or treating a captive from the enemy side.

Nonetheless, whether a weapon is truly “fully autonomous” is often debated, which is why the researchers focus on the underlying algorithms. “They cut across all the systems, whether you identify them as autonomous or not,” says Lewis.

The Navy's new guided missile destroyer DDG 1000 USS Zumwalt is moored to a dock on October 13, 2016 in Baltimore.

Getty Images / Mark Wilson

So let’s return to the example of the ship at sea threatened by an inbound missile. Through a system of algorithms, the weapon can weigh input from various detectors to determine whether the incoming object is, in fact, a missile and, if so, how to intercept it.
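As a rough illustration of that decision chain, here is a hedged Python sketch: several hypothetical detectors each report a confidence that the inbound object is a missile, the scores are fused into one estimate, and only then is a simple time-to-impact figure computed. The sensor names, weights, and straight-line math are assumptions made purely for illustration, not a description of any actual shipboard system.

```python
# Hypothetical sensor-fusion sketch for the shipboard example in the article.

def fuse_detections(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-sensor confidences (each between 0.0 and 1.0)."""
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

def time_to_impact(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact, assuming a constant closing speed."""
    return range_m / closing_speed_mps

scores = {"radar": 0.95, "infrared": 0.88, "velocity_profile": 0.91}
weights = {"radar": 0.5, "infrared": 0.2, "velocity_profile": 0.3}

confidence = fuse_detections(scores, weights)
if confidence > 0.85:  # classification step: is the object a missile?
    seconds_left = time_to_impact(range_m=12_000, closing_speed_mps=680)
    print(f"Intercept recommended, about {seconds_left:.1f} seconds to impact")
```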

“Focusing on algorithms [underlying autonomous weaponry] allows you to concentrate on what we consider to be a key ingredient when it comes to increasingly sophisticated technologies that some see as making human-like choices in war,” Lewis says. “From there, it can be decided whether and how to regulate [the war algorithms], and to consider even whether to pursue a ban on related technologies. For us, the core concern is whether these advances in technology are susceptible to regulation, and if so, whether and how they should be regulated or possibly even banned. War algorithms thus raise a vital question: whether you can and should regulate, and even ban, certain technologies that are increasingly conceived as replacing elements of human judgment and that can be used in armed conflict.”

He notes that the Harvard program is not an advocacy organization and does not endorse a particular regulatory approach to, or a ban on, war algorithms; rather, it explores challenges in contemporary armed conflict that international law can help elucidate.

“Autonomous weapons systems occupy this very weird space between weapon and combatant,” says Rebecca Crootof, executive director of the Information Society Project, research scholar, and lecturer at Yale Law School. “There are aspects of autonomous weapon systems that suggest a need for a third category of law.” Weapons can’t act autonomously, so we regulate their design. Combatants may act lawfully or unlawfully, so the law regulates their behavior with training on the front end and punishment on the back end, says Crootof.

“Unlike conventional weapons, autonomous weapon systems have the ability to make independent decisions; unlike combatants, you can’t threaten them with punishment,” she says. Moreover, they can lose their autonomy entirely with the flip of a switch or a hacker takeover, Crootof adds. These unique characteristics suggest the need for a distinct legal category to regulate both the design and the behavior of autonomous weapon systems.

“It’s worth recognizing that autonomous weapons don’t necessarily need to be embodied,” Crootof says. War algorithms can underlie both cyber and physical weaponry. This past April, for example, Deputy Secretary of Defense Robert O. Work discussed plans to drop “cyberbombs” on ISIS in an effort to disrupt the Islamic State’s ability to disseminate its message. Stuxnet, the American-Israeli computer worm featured in the documentary Zero Days, is another example of a cyber weapon; it targeted Iran’s nuclear centrifuges. “There will be different levels of cyber weapons just as there are different levels of conventional weapons,” says Crootof.

“Part of the issue is also how you define these weapon systems. A common definition you’ll see is a weapon system that can select and engage targets without human involvement,” says military professor Christopher Ford of the U.S. Naval War College, speaking in his own capacity. “What that means is also a matter of debate.”

The Phalanx

U.S. Navy photo by Photographer's Mate 2nd Class Christopher Mobley

The Phalanx, for example, is a kind of autonomous weapon used on American naval ships to target incoming missiles. Once it detects a missile, it switches on automatically and destroys anything in its path. It could engage four or five missiles in half a second, without the operator having to go through and look at each target, Ford says.
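To illustrate the behavior Ford describes, rather than the Phalanx’s actual control logic, here is a hypothetical Python sketch: once the system is active, every inbound track in its queue is engaged in priority order, closest threat first, with no per-target human review. The track names and timings are invented.

```python
# Hypothetical engagement-queue sketch; not the Phalanx's real control software.
import heapq

def engage_all(tracks: list[tuple[float, str]]) -> list[str]:
    """Engage every queued track, smallest time-to-impact first."""
    heapq.heapify(tracks)          # priority queue keyed on time-to-impact (seconds)
    order = []
    while tracks:
        _, track_id = heapq.heappop(tracks)
        order.append(track_id)     # fire on this track, then move straight to the next
    return order

inbound = [(4.2, "track-03"), (1.1, "track-01"), (2.7, "track-02")]
print(engage_all(inbound))         # ['track-01', 'track-02', 'track-03']
```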

Another example is the semi-autonomous Harpy, a “fire-and-forget” drone made by Israel Aerospace Industries that loiters over an area to detect and destroy radar emitters. For instance, when the United States invaded Iraq in 2003, the country’s radar-based anti-aircraft systems threatened any aircraft that entered its airspace; the Harpy helped find and destroy those radar systems so American pilots could fly into Iraqi airspace without being shot down, Ford says.

The Harpy

Alex Jilitsky/Flickr

The Samsung SGR-1, another autonomous weapon, is stationed in the Demilitarized Zone between North and South Korea and is designed to identify and shoot intruders from two miles away. “It has the ability to distinguish between a person who is surrendering and not surrendering,” Ford says, depending on the position of their hands or whether they’re charging with a gun.
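The distinction Ford describes can be sketched as a simple rule, purely for illustration; a real system would rely on computer vision, and the fields and thresholds below are invented rather than drawn from the SGR-1 itself.

```python
# Hypothetical rule-based sketch of the surrender/hostile distinction described above.
from dataclasses import dataclass

@dataclass
class PoseEstimate:
    hands_raised: bool         # both hands visibly above the head
    holding_weapon: bool
    approach_speed_mps: float  # how fast the person is moving toward the sensor

def classify(pose: PoseEstimate) -> str:
    if pose.hands_raised and not pose.holding_weapon:
        return "surrendering"  # hold fire, alert a human operator
    if pose.holding_weapon and pose.approach_speed_mps > 2.0:
        return "charging"      # treated as hostile in this sketch
    return "unknown"           # default to human review

print(classify(PoseEstimate(hands_raised=True, holding_weapon=False, approach_speed_mps=0.3)))
```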

With at least 30 countries around the world now using automated weaponry, the global community needs to develop regulations that address how far war algorithms can go, the potential of machine learning, and safeguards against inaccuracies.

“The algorithm is just a computer program. Are there problems with algorithms? No,” says Ford. “Any number of our systems use and have used algorithms for decades. Where it gets really interesting is when we start talking about machine learning and artificial intelligence. If you’ve got a machine that is learning what the enemy is, or what the bad guy looks like and doing it through a computer program, that brings up all kinds of questions.”
