
DARPA Has Decided It Wants to Actually Understand A.I.

It's a big change for the agency.


The Pentagon wants to find out what makes artificial intelligence tick. The Defense Advanced Research Projects Agency (DARPA) announced on Thursday that it was extremely interested in explainable A.I. projects, a field known as XAI. The news is a sharp departure for an agency that has previously prioritized A.I. effectiveness over actually understanding how systems arrive at their decisions.

DARPA is looking to support A.I. projects that make it clear to the end user why something happened. For example, DARPA wants intelligence analysts to understand why the recommendations their A.I. sends them were chosen. If a human analyst is looking into A.I.-chosen data, they should understand why the computer put that data in front of them and not something else. An A.I. like the ones in the agency-funded Cyber Grand Challenge would need to be designed in a more transparent way, but DARPA's announcement said that it would consider a variety of user interfaces (it's not letting on many details as to how an XAI project should explain itself).
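DARPA isn't prescribing what those explanations should look like, but a toy example helps fix the idea. The sketch below is purely illustrative, not anything DARPA has proposed: the feature names, the weights, and the explain_recommendation function are all hypothetical. It scores a document with a simple linear model and reports each feature's contribution to the score, the kind of "why" an analyst might be shown next to a recommendation.

# A minimal sketch of one way a recommendation system could justify its
# output. Everything here is hypothetical -- the feature names, weights,
# and function are illustrative, not DARPA's design.

FEATURE_WEIGHTS = {
    "keyword_match": 2.0,       # overlap with the analyst's query terms
    "source_reliability": 1.5,  # trust score of the document's source
    "recency": 1.0,             # how fresh the document is
}

def explain_recommendation(doc_features):
    """Score a document and break the score down feature by feature."""
    contributions = {
        name: FEATURE_WEIGHTS[name] * value
        for name, value in doc_features.items()
        if name in FEATURE_WEIGHTS
    }
    return sum(contributions.values()), contributions

score, why = explain_recommendation(
    {"keyword_match": 0.9, "source_reliability": 0.4, "recency": 0.7}
)
for name, value in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {value:+.2f}")  # e.g. keyword_match: +1.80
print(f"total score: {score:.2f}")    # 3.10

Explaining a simple linear model like this is easy; explaining the decisions of deep networks is an open research problem, which is part of why DARPA is funding the work. But even this basic breakdown shows the shape of the output an analyst would need: not just a ranking, but the reasons behind it.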

Abstracts for potential projects are due by September 1, and DARPA anticipates making multiple awards, including procurement contracts and cooperative agreements, though not grants. The agency is also looking for teams that can research how an A.I. should explain itself; in other words, researchers need to study an A.I. much the way psychologists study how the human mind processes information in various circumstances.
