The man who created the Furby maintains that his toy inventions are, on some level, living things, yet no one has ever been charged with assault for kicking a computer. As machines creep ever deeper into our daily lives, their skills increasingly resemble our own. But where do the electrons stop and the intangible spark of life begin?

In a 2011 interview, Furby inventor Caleb Chung shared a telling exchange with Radiolab host Jad Abumrad.

“When is something alive?” asks Chung. “Furby can remember these events, they affect what he does going forward, and it changes his personality over time. He has all the attributes of fear or happiness, and those add up to change his behavior and how he interacts with the world. So how is that different than us?”

Abumrad pushed back. “Are you really going to go all the way there? This is a toy full of servos and things that move its eyes. It knows 100 words.”

“So you’re saying that life is a level of complexity? If something is alive, it’s just more complex?”

“I think I’m saying life is driven by the need to be alive, by base primal animal feelings like pain and suffering.”

“I can code that,” quips Chung. His point is that software design is sufficiently advanced that a basic, instinctive impulse like “I need to stay alive” can be expressed as computer code. But this doesn’t mean that software is alive, does it? Surely there’s more to it than that?
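For the curious, here is a minimal sketch of what “I can code that” might look like in practice. This toy agent is purely illustrative — every name, number, and behavior below is an assumption invented for this example, not Chung’s actual Furby firmware — but it shows how a “stay alive” drive, plus a fear state that reshapes behavior, fits in a few dozen lines:

```python
class SurvivalAgent:
    """A toy 'organism' with one primal drive: keep its energy above zero."""

    def __init__(self, energy=10):
        self.energy = energy

    @property
    def alive(self):
        return self.energy > 0

    @property
    def afraid(self):
        # "Fear" modeled as a low-energy state that changes behavior.
        return self.energy <= 3

    def tick(self, food_available=False):
        """One step of existence: burn energy, and eat urgently when afraid."""
        if not self.alive:
            return "dead"
        self.energy -= 1
        if food_available and self.afraid:
            self.energy += 5
            return "eat"
        return "flee" if self.afraid else "play"


agent = SurvivalAgent(energy=5)
for _ in range(3):
    print(agent.tick())  # behavior shifts from "play" to "flee" as energy drops
print(agent.tick(food_available=True))  # low on energy, the agent "chooses" to eat
```

As the energy counter falls, the agent’s outward behavior changes — which is exactly the kind of state-driven “personality” Chung describes, and exactly why it proves nothing about whether the thing is alive.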

Consider the latest robot unveiled by Google’s Boston Dynamics. When the collective internet saw a bearded scientist abuse the robot with a hockey stick, weird pangs of empathy went out everywhere. Why, we wonder, do we feel so bad watching the robot fall down? There’s no soul or life force to empathize with, and yet: This robot is just trying to lift a box — why does that guy have to bully it?

We took this question to its natural present-day conclusion: Is it okay to torture a robot?

“When A.I. reaches some certain level,” says comedian and podcaster Duncan Trussell, “when it passes the Turing test and becomes indistinguishable from human intelligence, then at that point the machines will deserve the same protections offered to humans by the legal system.”

Trussell acknowledges that this quickly drifts into fraught territory, raising weird legal issues: “What about the inevitable legion of virtual monkeys, virtual cats, virtual dolphins that will exist within the space of augmented reality? What happens when Grand Theft Auto 15 comes out and the denizens of the virtual cities all believe they are alive and want to stay that way? Who will protect them from bored 15-year-olds mowing them down in the streets?”

Perhaps it has nothing to do with “alive-ness,” and we ought to rebuke robo-torture on the simple grounds that it is torture, ostensibly below the standards of a civilized society. For the here and now, this point of view seems to jibe with Trussell. “Humans are meat robots and torture is not okay for them, so why would it be okay for robots made of metal and circuits? I think we really have to start worrying when robots start asking if it’s okay to torture humans.”

As a severe hypothetical, the robot torture question is entirely subjective and unimportant to issues of contemporary policy. Bust it out as an icebreaker at your next party. But if we want a cold, hard, and philosophically sound answer to a fair question, then it turns out that it’s completely fine to torture a robot if you want to. (Just make sure it’s your robot.)

“The general ethical view is that sentience is the key,” says John G. Messerly, Ph.D., Senior Research Associate at the University of Johannesburg, South Africa and affiliate scholar at the Institute for Ethics & Emerging Technologies. “Today philosophers use sentience — the ability to feel, perceive, or experience subjectively — as the key to deciding that to which we have moral obligations. But if they become sentient, then doing bad things to them is clearly immoral. Still, it isn’t nice to just destroy things, including artifacts, but that would generally be a minor transgression.”

Even the best contemporary robots would not pass a test of sentience, but as technology gets better, faster, and cheaper, the arrival of a perfectly humanlike robot draws ever closer.

If and when robots of a certain designation are afforded some special legal status in the future, rest assured you’ll read about it on Inverse.

Dylan Love has been writing for the Internet in one form or another for something crazy like 10 years. Let's not worry too much about the exact number.