Artificial intelligence needs transparency so humans can hold it to account, a researcher has argued. Virginia Dignum, Associate Professor at Delft University of Technology, told an audience at New York University on Friday that if we don’t understand why machines act the way they do, we won’t be able to judge their decisions.

Dignum cited a story by David Berreby, a science writer and researcher, that was published in Psychology Today: “Evidence suggests that when people work with machines, they feel less sense of agency than they do when they work alone or with other people.”

The “trolley problem,” Dignum explained, is an area where people may place blind faith in a machine to choose the right outcome. The question is whether to pull a lever that diverts a hypothetical runaway trolley so that it kills one person instead of five. People expect machines to solve the problem in the most rational way possible. That might not always be the case, though, and transparency would help explain how the machine came to its decision.

“It’s not just a very deep, neural network chain of events that no one can understand, but to make those explanations in a way that people can understand,” she said.

A.I. that makes its workings clear is an area DARPA has been exploring. The agency posted an announcement in August that it was looking for teams interested in explainable A.I. projects, known as XAI. These systems will help researchers understand why an A.I. made the decision that it did, giving them more scope to act on that information rather than blindly trusting the machine.

With machine learning, Dignum noted, transparency is more crucial than ever. “We cannot expect the systems, and especially the machine learning machines, to learn, and to know it all right away,” she said. “We don’t expect our drivers, when driving, to be fully understanding of the traffic laws. In many countries, they use those ‘L’ plates to show, ‘I’m learning, excuse me for the mistakes I might make.’” Watching A.I., understanding how it reaches certain decisions, and acting on that understanding will be crucial to stopping machines that are still learning from making bad decisions.