A new framework would allow users to understand the rationale behind artificial intelligence decisions.
“One thing that sets our framework apart is that we make these interpretability elements part of the AI training process,” says Tianfu Wu, first author of the paper and an assistant professor of computer engineering at North Carolina State University.
“For example, under our framework, when an AI program is learning how to identify objects in images, it is also learning to localize the target object within an image, and to parse what it is about that locality that meets the target object criteria. This information is then presented alongside the result.”
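The paper's actual architecture is not detailed in this article, but the idea of returning localization evidence alongside a prediction can be illustrated with a generic occlusion-sensitivity sketch: mask out patches of the input and record how much each patch's removal lowers the model's confidence. Everything here (the `detect_with_explanation` helper, the toy scoring function) is a hypothetical stand-in, not the authors' method.

```python
import numpy as np

def detect_with_explanation(image, score_fn, patch=4):
    """Return a detection score plus an occlusion-based saliency map.

    score_fn(image) -> scalar confidence for the target class.
    The saliency map marks regions whose removal most reduces the
    score, roughly localizing "what about this image meets the
    target object criteria".
    """
    base = score_fn(image)
    h, w = image.shape
    saliency = np.zeros((h, w), dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0  # mask one patch
            # Score drop attributed to this patch.
            saliency[y:y + patch, x:x + patch] = base - score_fn(occluded)
    return base, saliency

# Toy "model": confidence is the mean brightness of the centre region.
def toy_score(img):
    return float(img[3:7, 3:7].mean())

img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0  # a bright "object" in the centre
score, saliency = detect_with_explanation(img, toy_score)
# saliency peaks on the centre patches that drive the score.
```

The difference stressed in the article is that the authors build such interpretability signals into training itself, rather than probing a finished model after the fact as this post-hoc sketch does.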
In a proof-of-concept experiment, the researchers incorporated the framework into the widely used R-CNN object identification system. They then ran the system on two well-established benchmark data sets.
The researchers found that incorporating the interpretability framework into the AI system did not hurt the system’s performance in terms of either time or accuracy.
“We think this is a significant step toward achieving fully transparent AI,” Wu says. “However, there are outstanding issues to address.
“For example, the framework currently has the AI show us the location of an object and highlight those aspects of the image that it considers to be distinguishing features of the target object. That’s qualitative. We’re working on ways to make this quantitative, incorporating a confidence score into the process.”
The researchers will present the paper at the International Conference on Computer Vision in Seoul, South Korea.
Support for the work came from the US Army Research Office and the Defense University Research Instrumentation Program, as well as from the National Science Foundation.
Source: NC State