Automatons and androids have been slaughtering fictional humans since Czech writer Karel Čapek popularized the term “robot” in his 1920 science fiction play Rossum’s Universal Robots. Čapek explored an issue at the heart of many modern debates: Who is responsible for a robot that kills? The Czech writer seemed to finger Rossum, his Frankensteinian engineer, but the legal reality is a bit more complicated.

Tech changes, but intention is still everything.

If an engineer creates an army of deathbots for the express purpose of death-dealing (think: Ultron’s hordes), then he or she is responsible for what happens when those machines start swinging. According to 18 U.S. Code § 1111, murder is:

The unlawful killing of a human being with malice aforethought. Every murder perpetrated by poison, lying in wait, or any other kind of willful, deliberate, malicious, and premeditated killing… or perpetrated from a premeditated design unlawfully and maliciously to effect the death of any human being other than him who is killed, is murder in the first degree.

“Any other kind” of willful killing leaves the window wide open for killer robots, which amount to a sort of mechanical poison.

Even if the hypothetical bot-builder merely programs a dangerous — though not expressly murderous — machine, there’s precedent suggesting he’d be responsible for any lethal outcomes. When guard dogs have fatally attacked bystanders, juries have found the dogs’ owners guilty of murder. And what, beyond the fact that one method involves command codes and the other spoken commands, is the difference between training a robot and training a dog?

If the weaponized robots happen to be a little smarter than a German Shepherd — say, a fully autonomous gun-bot — Rossum is still not off the hook. Groups like Human Rights Watch predict that “commanders or operators could be found guilty if they intentionally deployed a fully autonomous weapon to commit a crime.” These groups — and human rights organizations generally — may have an unrealistically rosy view of the judicial system, but one of the compelling reasons to keep Predator drones under remote control is to ensure that attacks come from the “U.S. Army,” so military scientists don’t end their European vacations with an all-expenses-paid tour of The Hague.

The issue gets a little murkier when robots assembled for one task make a bloody pivot. In Čapek’s story, Rossum assembled his robots for manual labor, not murder. But murder happened (note the intransitive verb). Were Rossum tried in court, his lack of adherence to quality control-slash-Asimov’s laws would get him in trouble. When a faulty product kills civilians, the manufacturer pays, which is what prognosticators say will happen to Google should its automated cars break the law. Needless to say, a number of tech firms already have lawyers weighing their liability exposure.

The fact that robots can’t be charged presents the opportunity for much under-the-bus throwing. The question our best legal minds are trying to answer? Who should wind up tangled in those axles?

Photos via Flickr.com/San Diego Shooter