
Will Autonomous Weapons Say "No" to Illegal Orders?

The U.S. airstrike on a Doctors Without Borders hospital in Afghanistan raises new questions about human error in war crimes and whether automation is the answer.


Baynazar Mohammad Nazar was unconscious on the operating table when the ceiling began to collapse on him. The father of four had checked into the hospital the previous day after being shot in the leg, and was undergoing his second operation in two days to fix his injury. When the Americans began to destroy the building, the doctors working on him had no choice but to escape on their own as fast as they could.

Andrew Quilty at Foreign Policy tells the story of Baynazar’s life and death in an article that includes a photograph of his body covered in debris on the operating table. Baynazar was one of 31 people the United States killed when it struck the hospital run by Doctors Without Borders (also called MSF) in Kunduz, Afghanistan, on October 2nd.

After high-profile, high-civilian-casualty strikes, politicians and pundits ask how such a thing could happen, and what can be done to ensure it doesn’t happen again. Among proponents of autonomous weapons systems, sometimes called “killer robots,” one popular argument is that human error (or malice) is responsible for a large share of the crimes committed during wartime. It’s theoretically possible, they say, that robots could be more precise in their targeting and less prone to mistakes than humans are.

“In fact, human judgment can prove less reliable than technical indicators in the heat of battle,” writes Michael N. Schmitt, a professor at the U.S. Naval War College. “Those who believe otherwise have not experienced the fog of war.”

The question, then, is whether you can program the tools of war to constrain human behavior, making strikes like the Kunduz hospital bombing impossible, or at least less likely.

Probably not – at least not in the near future. But some artificial intelligence researchers have designed a robot that can say no to humans. The experimental design is simple: a human tells the robot to walk forward off a table, which the robot initially refuses to do. When the human says he’ll catch it, the robot accepts the order.
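
To make the underlying idea concrete, here is a minimal sketch of how such a refusal check might work, written in Python. The names (WorldModel, is_safe, respond) and the single safety condition are hypothetical illustrations of the logic described above, not the researchers’ actual code.

```python
# Minimal sketch of an order-refusal check, loosely modeled on the
# table-edge experiment described above. All names are hypothetical.

from dataclasses import dataclass


@dataclass
class WorldModel:
    at_table_edge: bool = True           # the robot believes a fall lies ahead
    human_promises_to_catch: bool = False


def is_safe(command: str, world: WorldModel) -> bool:
    """Return True if the commanded action passes the robot's safety check."""
    if command == "walk_forward" and world.at_table_edge:
        # Walking forward would cause a fall unless the human has
        # committed to catching the robot.
        return world.human_promises_to_catch
    return True


def respond(command: str, world: WorldModel) -> str:
    if is_safe(command, world):
        return f"Executing: {command}"
    return f"Refusing: {command} (it would cause me to fall)"


if __name__ == "__main__":
    world = WorldModel()
    print(respond("walk_forward", world))   # Refusing: walk_forward ...
    world.human_promises_to_catch = True    # the human says "I will catch you"
    print(respond("walk_forward", world))   # Executing: walk_forward
```

The interesting design question is not the check itself but who gets to define the conditions under which the machine overrides its operator, and whether the operator can override it back.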

That’s a long way from a semi-autonomous attack helicopter telling its human crew that it can’t carry out an airstrike against a hospital because it would be a war crime, but the underlying premise is largely the same. As others have pointed out, human anxiety about this very kind of development in robots is common in science fiction – think of HAL 9000 saying “I’m sorry, Dave. I’m afraid I can’t do that” as it locks the human outside the spacecraft in 2001: A Space Odyssey.

As to the specifics of the Kunduz strike, many of the facts around the attack remain disputed. MSF has demanded an independent investigation, which the United States government opposes, instead promising to carry out its own reviews.

Some parts of one U.S. investigation were made public earlier this month, and found human and mechanical errors responsible for the strike. But earlier this week, two service members came forward to contradict the report’s findings. They say the strike wasn’t a mistake. In their account, first reported by the AP, U.S. special operations forces called in the strike because they thought the hospital was being used as a Taliban command and control center.

In the official version, a mechanical failure led to the crew of the AC-130 gunship initially receiving coordinates for an empty field. The crew then searched for a building in the area that fit the physical description they’d been given, and opened fire. When the instruments recalibrated, they gave the crew the correct coordinates for the target, but the crew continued to fire on the hospital anyway.

If this account is true – that the computer was ultimately accurate and the humans ignored it – it gives some credence to supporters of greater autonomy in weapons systems. That said, the U.S. war on terror is littered with examples of the military or CIA hitting the “right” target and still killing huge numbers of civilians. Automation won’t solve bad intelligence, and attempts to program an approximation of morality will not end war crimes.

There is a strong temptation in the United States to sterilize war, and automation, by removing Americans from harm’s way, promises to change the very definition of it. Obama’s preference for drone killing, and the accompanying assurances that drones are the most precise weapons ever created, are the clearest manifestation of those aims. “They have been precise, precision strikes against al Qaeda and their affiliates,” Obama said in a 2012 Google hangout.

A 2013 government study, however, contradicts those claims. It found that drone strikes in Afghanistan caused 10 times as many civilian deaths as strikes by manned aircraft. “Drones aren’t magically better at avoiding civilians than fighter jets,” Sarah Holewinski, a co-author of the study, told The Guardian. “When pilots flying jets were given clear directives and training on civilian protection, they were able to lower civilian casualty rates.”

The military is spending millions on developing human-robot teaming systems, further blurring the lines between missions carried out by manned or unmanned weapons. “What we want to do on human-machine combat teaming is to take it to the next level, to look at things like swarming tactics,” Deputy Defense Secretary Bob Work told the official DoD science blog. “Can an F-35 go into battle with four unmanned wingmen?”

Will those wingmen say no if the human pilot gives them an order that’s analogous to walking off a table? What about an order to destroy a hospital or school? The fog of war is going to apply in either case. And if we decide to override machines, the phrase “human error” will be one of the more frightening terms in conflicts of the future.
