Making the AI make sense

After a couple of hours’ reading yesterday, I think I understand the difference between an AI that’s explainable and an AI that’s interpretable.

I didn’t need to spend any time mulling things over to grok the black-box problem, however. If I could vote, I’d choose Cynthia Rudin’s interpretable AI over the confabulating robot we’ve got now:

Today, at least 581 AI models involved in medical decisions have received authorization from the Food and Drug Administration. . . . 

Many of these algorithms are black boxes — either because they’re proprietary or because they’re too complicated for a human to understand. “It makes me very nervous,” Rudin said. “The whole framework of machine learning just needs [to] be changed when you’re working with something higher-stakes.” 

But changed to what? Recently, Rudin and her team set out to prove that even the most complex machine learning models, neural networks doing computer vision tasks, can be transformed into interpretable glass boxes that show their work to doctors. . . . 

RUDIN: If you want to trust a prediction, you need to understand how all the computations work. For example, in health care, you need to know if the model even applies to your patient. And it’s really hard to troubleshoot models if you don’t know what’s in them. . . . 

[M]y student Alina [Barnett] roped us into studying [AI models that examine] mammograms. Then I realized, OK, hold on. They’re not using interpretable models. They’re using just these black boxes; then they’re trying to explain their results. . . .

So we decided we would try to prove that you could construct interpretable models for mammography that did not lose accuracy over their black box counterparts. . . .

How do you make a radiology AI that shows its work?

We decided to use case-based reasoning. That’s where you say, “Well, I think this thing looks like this other thing that I’ve seen before.” It’s like what Dr. House does with his patients in the TV show. Like: “This patient has a heart condition, and I’ve seen her condition before in a patient 20 years ago. This patient is a young woman, and that patient was an old man, but the heart condition is similar.” And so I can reason about this case in terms of that other case.

We decided to do that with computer vision: “Well, this part of the image looks like that part of that image that I’ve seen before.” This would explain the reasoning process in a way that is similar to how a human might explain their reasoning about an image to another human. . . . 
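The case-based idea Rudin describes — "this looks like that case I've seen before" — can be sketched in a few lines. This is a minimal illustration, not her team's actual model; the feature vectors, class labels, and the name `classify_by_case` are all made up for the example, and a real system would compare learned embeddings of image patches rather than toy vectors.

```python
import numpy as np

# Hypothetical stored "cases": feature vectors of image parts seen in
# training, each tagged with the class it came from.
prototypes = {
    "benign":    [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.1])],
    "malignant": [np.array([0.1, 0.9, 0.3]), np.array([0.2, 0.8, 0.4])],
}

def classify_by_case(query: np.ndarray) -> tuple[str, float]:
    """Return the class of the most similar stored case plus the similarity.

    The prediction is interpretable by construction: "this looks like
    that case I've seen before" is the entire computation.
    """
    best_class, best_sim = "", -1.0
    for label, cases in prototypes.items():
        for proto in cases:
            # Cosine similarity between the query and a stored case.
            sim = float(query @ proto /
                        (np.linalg.norm(query) * np.linalg.norm(proto)))
            if sim > best_sim:
                best_class, best_sim = label, sim
    return best_class, best_sim

label, sim = classify_by_case(np.array([0.85, 0.15, 0.05]))
print(label)  # → "benign": the nearest stored case is a benign one
```

The point of the design is that the explanation and the prediction are the same object: the model can show the doctor the specific prior case it matched.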

Are there other ways to figure out what a neural network is doing?

Around 2017, people started working on “explainability,” which was explaining the predictions of a black box. So you have some complicated function — like a neural network. You can think about these explanation methods as trying to approximate these functions. Or they might try to pick out which variables are important for a specific prediction.

And that work has some serious problems with it. The explanations have to be wrong, because if their explanations were always right, you could just replace the black box with the explanations. And so the fact that the explainability people casually claim the same kinds of guarantees that the interpretability people are actually providing made me very uncomfortable, especially when it came to high-stakes decisions. Even with an explanation, you could have your freedom denied if you were a prisoner and truly not understand why. Or you could be denied a loan that would give you a house, and again, you wouldn’t be able to know why. They could give you some crappy explanation, and there’s nothing you could do about it, really.

“The Computer Scientist Peering Inside AI’s Black Boxes” by Allison Parshall, Quanta Magazine, 4/27/2023
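Rudin's argument that post-hoc explanations "have to be wrong" is easy to demonstrate: an explanation method approximates the black box with something simpler, and the approximation cannot match a nonlinear function everywhere. Here is a toy sketch in the spirit of local-surrogate methods (the black-box function, sample counts, and variable names are all illustrative, not any real library's API):

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(x: np.ndarray) -> np.ndarray:
    # Some opaque nonlinear model whose prediction we want "explained".
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

# Perturb inputs around the single point we want explained.
point = np.array([0.5, 0.5])
samples = point + 0.1 * rng.standard_normal((200, 2))
targets = black_box(samples)

# Fit a local linear surrogate by least squares; its coefficients play
# the role of "which variables mattered" in the explanation.
X = np.hstack([samples, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(X, targets, rcond=None)

# The surrogate disagrees with the black box on its own training samples.
max_err = np.max(np.abs(X @ coef - targets))
print(max_err > 0)  # the linear "explanation" never matches exactly
```

If the surrogate did match everywhere, you could throw the black box away and keep the surrogate — which is exactly Rudin's argument for building the interpretable model in the first place.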

Two thoughts:

  1. Having the AI reason by analogy makes all the sense in the world to me given that we humans reason by analogy. At least, I think we do. (The brain is its own black box.) So I wonder whether shifting the AI to analogies might help with the hallucination problem. People get things wrong all the time, but not the way the AI does. Could case-based reasoning endow the AI with more human-like errors?
  2. Are we just taking it for granted that artificial intelligence is going to be making major legal and life decisions for us? Because if so, I object.

