To make sense of A.I. decisions, ‘peek under the hood’

Now that humans have programmed computers to learn, we want to know exactly what they’ve learned and how they make decisions after their learning process is complete. The answers to such questions could shed light on our own decision-making processes.

Kate Saenko, an associate professor of computer science at Boston University, asked humans to look at dozens of pictures depicting steps a computer may have taken on its way to a decision and to identify its most likely path.

The humans gave answers that made sense, but there was a problem: they made sense to humans, and humans, Saenko knew, have biases. In fact, humans don’t even understand how they themselves make decisions. How in the world then could they figure out how a neural network, with millions of neurons and billions of connections, makes decisions?

Saenko did a second experiment, using computers instead of people to help determine exactly what learning machines learned.

“What we learned that’s really important is that, despite the extreme complexity of these algorithms, it’s possible to peek under the hood and understand their decision-making process, and that we can actually ask humans to explain it to us,” says Saenko. “So we think it’s possible to teach humans how machines make predictions.”

From random to revealing

Computer scientists know in general terms how neural networks develop. After all, they write the training programs that direct a computer’s so-called neurons to connect to other neurons, which are actually mathematical functions.

Each neuron processes one piece of information, and each builds on the information from the neurons that precede it. Over time, the connections evolve from random to revealing, and the network “learns” to do things like identify enemy stations in satellite images or spot evidence of cancer long before a human radiologist could see it. Such networks identify faces. They drive cars.
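As a rough illustration of that idea (a sketch of a generic artificial neuron, not code from Saenko’s research), each “neuron” simply weights its inputs, sums them, and passes the result through a nonlinearity; training gradually adjusts those weights from random values toward useful ones.

```python
import numpy as np

# Minimal sketch of a single artificial "neuron" (illustrative only).
rng = np.random.default_rng(0)

def neuron(inputs, weights, bias):
    """A weighted sum of the inputs, passed through a ReLU nonlinearity."""
    return max(0.0, float(np.dot(weights, inputs) + bias))

weights = rng.normal(size=3)      # connections start out random...
bias = 0.0
x = np.array([0.2, 0.9, 0.4])     # one piece of information per input

print(neuron(x, weights, bias))
# ...training then nudges `weights` and `bias`, example by example, until the
# network's outputs match the correct answers: the connections go from random
# to revealing.
```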

That’s the good news. The disconcerting news, says Saenko, is that as artificial intelligence plays an increasingly important role in the lives of humans, its learning processes are becoming increasingly obscure. Just when we really need to trust them, they have become inscrutable. That’s a problem.

“The more we rely on artificial intelligence systems to make decisions, like autonomously driving cars, filtering newsfeed, or diagnosing disease, the more critical it is that the AI systems can be held accountable,” says Stan Sclaroff, a professor of computer science at Boston University.

He continues, “One aspect of that is that all AI systems should be able to explain how they make decisions in a way that humans can understand. We should be able to see what evidence is used by an AI algorithm and examine the means by which the algorithm produced its answers or actions. It’s important for society that AI algorithms can be explainable, so that they can be held accountable for the decisions they make.”

Decisions, decisions

“We have to come up with ways to evaluate explainable models of their decision-making process,” says Saenko. “We need to know that the explanation reflects the true underlying process, and that the network is not just giving us what we want to hear.”

Saenko’s first effort, conducted with co-researcher Trevor Darrell of the University of California, Berkeley, used humans to infer the network’s decision-making process by studying pictures depicting its various stages, or modules.

Saenko and Darrell showed participants images of modules, the steps in the computer’s decision-making process that involved recognizing certain objects. After viewing varying series of modules, the participants, who knew what question the network had been asked but did not see its answer, were asked to predict the likelihood that the network would get the answer right. The researchers also asked how well participants could understand the network’s internal reasoning process, and how clear it was (clear, mostly clear, somewhat unclear, or unclear) what the model was doing at each step.

The researchers reasoned that if humans predicted the model’s success or failure better than chance, then they understood at least something about the model’s decision process. They presented their findings at the European Conference on Computer Vision in Munich.

“What I’m really interested in,” Saenko says, “is if humans can understand how machines work, especially with such complex algorithms. At this point there is no true explanation of why the network made the decision. For example, to decide if an image contains a dog, the network could be looking for ears, eyes, and tail. ‘Because I found two eyes and a tail’ is the explanation. If we had this gold standard explanation, then we could compare other explanations to it, to evaluate how correct they are.

“But the problem is how to get this truth. We could ask a human to guess it, but they’ll probably just guess how they would make that decision. So we need to come up with other ways of evaluating the explanations. We don’t know how a neural network really thinks, except to write down all of the millions of mathematical computations it is doing. But that’s not useful to a human user,” she explains.

Saenko says there’s one big reason why that won’t be easy. “It’s the same reason that we don’t understand how people think,” she says. “You could ask me why I wore this shirt today and I could come up with some rationalization, but who knows how my thinking really works? I don’t know what my brain process was really like.”

“We can rationalize how we think,” she says, “and we can teach machines to rationalize how they think. For example, we can ask it why it thinks something is a dog and it will say, ‘Oh, because it has ears and a tail and fur,’ but that may not actually be the reason that it predicted dog. There could be some other reason. Maybe it learned that all white objects are dogs.”

With simple machines (that is, machines that use basic decision trees) humans can easily explain what’s going on. But when there are millions of operations involved in a decision, researchers need a more abstract way of explaining things.
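To see why such simple machines are easy to explain, consider a toy decision tree (a hypothetical illustration echoing the dog example above, not anything from the research): every rule is written out explicitly, so a human can read off exactly why a given answer was produced.

```python
# Toy decision tree (hypothetical illustration): each branch is an explicit,
# human-readable rule, unlike the millions of operations inside a deep network.
def is_dog(has_fur: bool, has_ears: bool, has_tail: bool) -> bool:
    if not has_fur:
        return False              # rule 1: no fur -> not a dog
    if has_ears and has_tail:
        return True               # rule 2: fur, ears, and tail -> dog
    return False                  # rule 3: anything else -> not a dog

print(is_dog(has_fur=True, has_ears=True, has_tail=True))   # True, and we can say exactly why
```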

“That’s what we’re doing. We are finding an abstract way to explain it,” she says. “We are trying to learn if the process really reflects the underlying decision. If it does, then humans should be able to predict what’s going to happen next. We count how many times the human annotators were able to predict if the machine got the answer right or wrong and we compare that with previous methods explaining neural networks. We compare which learning models and previous neural networks lead to a higher accuracy in predicting what the model will say.”
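In concrete terms, that comparison boils down to a simple accuracy count. The sketch below uses made-up example data (the variable names and values are hypothetical, not the study’s) to show the kind of tally being described:

```python
# Illustrative sketch only (not the study's analysis code): score how often
# human annotators correctly predicted whether the model would be right or wrong.
human_predictions = [True, True, False, True, False, True]   # annotator: "model will be right?"
model_was_correct = [True, False, False, True, False, True]  # what the model actually did

agreement = sum(h == m for h, m in zip(human_predictions, model_was_correct))
accuracy = agreement / len(model_was_correct)

print(f"Human prediction accuracy: {accuracy:.2f} (chance would be about 0.50)")
# Comparing this accuracy across different explanation methods indicates which
# one gives humans a more faithful picture of the model's decision process.
```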

The human factor

Saenko’s work with humans is one way to validate the decision-making process, but, she says, because it does involve humans, it is subject to human biases. And those biases may fail to recognize the merit of a neural network’s processes.

“Let’s say we have a neural network that learned ‘woman’ whenever it saw a kitchen,” she says. “That would be a logical decision if most of the pictures of kitchens it was trained on had a woman in them. Now, if we had a very good explanation of that model it would understand that this is why the network said ‘woman.’ But if you ask a human to evaluate that, the human would say that’s a terrible explanation. The [focus] should be on the woman. But the network actually has a good explanation. It’s just that the model is not making a decision the same way a human would make it. So a biased human might say, ‘That’s not how I would make a decision, so it’s incorrect.’ The human would be wrong.”

To avoid such problems, Saenko and her fellow researchers designed a second set of experiments that relied solely on computers. They presented a paper outlining their findings at the 29th British Machine Vision Conference at Northumbria University.

“This time we didn’t have any humans in the loop,” says Saenko. “Instead, we had another computer program evaluate the first program’s explanations.

“The experiment works like this: The first program, the neural network, provides an explanation of why it made the decision by highlighting parts of the image that it used as evidence. The second program, the evaluator, uses this to obscure the important parts, and feeds the obscured image back to the first program,” Saenko explains. “If the first program can no longer make the same decision, then the obscured parts were actually important, and the explanation is a good one. However, if it still makes the same decision, even with the obscured regions, then the explanation is judged to be insufficient.”
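A rough sketch of that evaluation loop, assuming the explanation takes the form of a per-pixel importance (saliency) map and the model returns a class label (the function names and threshold here are hypothetical, not taken from the paper):

```python
import numpy as np

def evaluate_explanation(model, image, saliency, threshold=0.8):
    """Sketch of the automated explanation check (illustrative, not the paper's code).

    `model(image)` is assumed to return a predicted class label, and `saliency`
    is assumed to be a per-pixel importance map produced as the explanation.
    """
    original_prediction = model(image)

    # Obscure the regions the explanation claims were most important.
    important = saliency >= np.quantile(saliency, threshold)
    obscured = image.copy()
    obscured[important] = 0  # e.g. black out the highlighted pixels

    # If the prediction changes once the "evidence" is hidden, the explanation
    # really did point at the parts the model relied on.
    new_prediction = model(obscured)
    return new_prediction != original_prediction  # True = explanation judged good
```

The key point is that the evaluator never has to understand the explanation itself; it only checks whether hiding the highlighted evidence changes the first program’s answer.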

Which method does a better job of explaining a network’s decision-making process?

Saenko is reluctant to pick a winner: “I would say that we don’t know which is better because we need both kinds of evaluations. The computer doesn’t have human biases, so it’s a better evaluator in that sense. But we still do the evaluation with humans in the loop because in the end we know how humans interact with the machine.”

The more important questions, according to Saenko, include the following: Does this type of evaluation increase human trust in neural networks? Does it improve the human experience or the system’s performance? If you had a self-driving car that could explain why it was driving a certain way, would that actually help you?

“I would say ‘yes,’” says Saenko. “But I would also say we need a lot more research.”

Defense Advanced Research Projects Agency (DARPA) grants supported the research.

Source: Boston University