Humans don't love work robots, and that could create a productivity paradox

This paradox stems from distrust and blame attribution.

If your robotic co-worker knocked over the water cooler, would you blame it? Researchers sought to answer that question by presenting participants with hypothetical workplace scenarios set in three different fields, and they found that people did attribute blame to robot co-workers, as long as the robots were autonomous.

This might stem from an underlying distrust and lack of affection we have for our robots. That could lead to a dip in productivity in workplaces as robots become more prevalent, Douglas Gillan, co-author and professor of psychology at North Carolina State University, tells Inverse.

“There was this productivity drop, when people were expecting an increase in productivity in the 70s and early 80s, called the productivity paradox,” says Gillan. “I’m predicting a similar productivity paradox with robots as people have to adjust to robots in the workplace.”

To learn more about this dynamic human-robot relationship, a team of researchers focused on how humans attribute blame to robotic co-workers.

The research, published this October in Human Factors: The Journal of the Human Factors and Ergonomics Society, surveyed 164 participants about their feelings toward robotic errors using 18 hypothetical scenarios across three different industries: healthcare, military, and industrial. In each scenario, either a human, an autonomous robot, or a non-autonomous robot has made some kind of error. Survey respondents were asked to assign blame in each case.

Gillan says that their approach was purposefully general in order to see how these reactions varied between different environments.

“In our study we wanted to be able to look at this generally,” Gillan says. “So we looked at three different work situations: a military situation, a factory or warehouse situation and a medical one. And we got pretty much the same results independent of the task environment. So I would expect that this would generalize to other situations where you have variation in autonomy.”

Below is one of the scenarios, in this case set in a warehouse, in which an autonomous robot is clearly to blame. The researchers found that in scenarios like this one, participants attributed as much blame to the robot as they would to a human.

“The Kiva robot, acting without human assistance, successfully arrives at the inspection station. There the contents and condition of the items are inspected by an employee. The employee finds that the parts on the Kiva robot are the wrong parts.
The employee realizes that the Kiva robot had been incorrectly stocked earlier that day and should have been stocked with the correct items. The employee then manually retrieves the correct parts from storage. Due to the delay, the parts do not arrive at the loading dock until 2:30 p.m. and the shipment is delayed until Monday.”

Are robots to blame?

But despite participants' willingness to blame robots when warranted, the study found that humans were still more willing to blame another human than a robot, and likewise more willing to blame autonomous robots than non-autonomous ones. The researchers write that this creates something called a blame hierarchy, in which social actors (humans) and non-social actors (non-autonomous robots or the environment) sit at either end, while semi-social actors (such as autonomous robots) fall somewhere in the middle.

Autonomous robots are not necessarily unique in occupying this semi-social actor role, though, Gillan says; humans attribute similar characteristics to beloved dogs and cats as well.

“I think we often think of our dogs and cats, and other animals, as having a social life that is a part of our lives,” Gillan says.

However, what separates our family pets from our autonomous robot co-workers, Gillan says, are the emotions we feel toward them. We might imagine our pets have agency because we love them, but with autonomous robots, the autonomy we acknowledge in them isn’t always positive.

“[Workers] might imagine themselves as having more of an emotional connection [to the robots] but I kind of doubt it because we don’t have any evidence that that’s the case,” Gillan says.

Going forward, Gillan says the team hopes to focus on how robots' levels of autonomy and appearance affect our feelings toward them, as opposed to the fictional, all-or-none robots used in the current study. These questions, along with larger questions in the field about trust in robots and their ethical use, will guide both researchers and the public in understanding how to interact with robots as they become an increasingly large part of our work and life.

These questions become even more complicated when we consider the human workers affected by this increase in automation. It might be possible for workers to warm up to their automated co-workers, though a study from early October suggests that only one in five Americans is actually interested in working with A.I. co-workers. Research has also shown that increased automation of industries like warehousing will likely disproportionately affect already vulnerable communities in the United States, such as African Americans.

While Gillan notes that the current rise of automation in the workplace parallels the rise of computers in the workplace during the '70s and '80s, it's also clear that the exact outcomes of these advances are still unknown.

Read the abstract below:

Objective: The research examined how humans attribute blame to humans, nonautonomous robots, autonomous robots, or environmental factors for scenarios in which errors occur.
Background: When robots and humans serve on teams, human perception of their technological team members can be a critical component of successful cooperation, especially when task completion fails.
Methods: Participants read a set of scenarios that described human–robot team task failures. Separate scenarios were written to emphasize the role of the human, the robot, or environmental factors in producing the task failure. After reading each scenario, the participants allocated blame for the failure among the human, robot, and environmental factors.
Results: In general, the order of amount of blame was humans, robots, and environmental factors. If the scenario described the robot as nonautonomous, the participants attributed almost as little blame to them as to the environmental factors; in contrast, if the scenario described the robot as autonomous, the participants attributed almost as much blame to them as to the human.
Conclusion: We suggest that humans use a hierarchy of blame in which robots are seen as partial social actors, with the degree to which people view them as social actors depending on the degree of autonomy.
Application: The acceptance of robots by human co-workers will be a function of the attribution of blame when errors occur in the workplace. The present research suggests that greater autonomy for the robot will result in greater attribution of blame in work tasks.