Published on November 16, 2021
By Clearbox AI

How does Explainable AI work?


Have you ever run into Google’s ‘People also ask’ section and found the questions you were about to ask? Once a month we answer the most researched questions about AI and Machine Learning with our guests on Clearbox AI’s new interview series ‘People also Ask...and we answer’. Enjoy!

In our first episode we talked about how to ensure trust between humans and machines with Luca Gilli, CTO at Clearbox AI, and Tim Schrills, researcher in Engineering Psychology and Cognitive Ergonomics at the University of Lübeck.

Introducing our guest

Tim Schrills comes from Germany. He studied Psychology and Computer Engineering, and his research focuses on the interface between psychology, computer science and human-AI interaction: what do humans need to interact with AI, and what does AI need so that humans are able to interact with it? He is currently working on several research projects that look into the interaction between people and intelligent systems, and into how interfaces for these systems can be designed to enable efficient and trustworthy cooperation between humans and machines. He has worked in the automotive and energy sectors, but also in the medical field, which he particularly loves.

How does Explainable AI work and why is it important?

Luca: I would like to start with some kind of definition because, in my opinion, Explainable AI has a very broad connotation. It can mean a lot of things and sometimes it's also a bit misused. Let's start with the definition of Explainable AI: when we talk about Artificial intelligence, we usually talk about an intelligent agent which is designed to solve a very specific problem. This can go from finding the best way to vacuum your room to translating a text from English to German.

So, when we talk about Explainable AI in this context, we really mean: given that our agent came up with a solution, what is the explanation for this solution? For example, if I used AI to design an antenna and it came up with a very strange shape that turns out to be very efficient, can the algorithm explain why the antenna is shaped that way?

However, right now, especially in academia, Explainable AI tends to mean a narrower subfield of this concept, because we are mostly talking about supervised machine learning and about interpreting machine learning models. During the last 5-10 years there has been an explosion in the popularity of deep learning, and these techniques come with a lot of complexity: there is a real need to interpret and explain these kinds of models. So nowadays, in academia, Explainable AI mostly means the interpretation of machine learning models, even though the concept can be a bit broader.

It's a concept that is as important as it is complicated, because these models are getting more powerful but also more opaque and difficult to understand. There are different ways to approach the problem. We can work on models that are interpretable by design: instead of only optimising for accuracy and performance, one develops models that are (ideally) just as accurate but structured so that they can provide an explanation themselves. Or we can extract explanations post-hoc: given a very performant and efficient model, can we extract explanations from it afterwards?
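To make that distinction concrete, here is a minimal sketch using scikit-learn and a synthetic dataset (both are illustrative assumptions, not tools discussed in the interview). A shallow decision tree stands in for a model that is interpretable by design, while permutation importance is one possible post-hoc explanation extracted from a more opaque model.

```python
# Sketch: interpretable-by-design vs. post-hoc explanation (illustrative only).
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic tabular data, purely as a placeholder.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(5)]

# 1) Interpretable by design: a shallow tree whose decision rules can be read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# 2) Post-hoc: train a more opaque model first, then extract an explanation afterwards.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: permutation importance = {score:.3f}")
```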

These explanations can be at the global level (why the model behaves in a certain way across the whole dataset) or at a more local level (why this particular prediction was decided one way instead of another). The set of available techniques is literally exploding: every day there is a new paper on how to generate explanations.
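On the local side, a hedged sketch in the same spirit: nudge each feature of a single instance and record how much that one prediction moves. The model, data and perturbation size below are all placeholders, and real local techniques such as LIME or SHAP are more principled, but the question answered is the same: "why this prediction?" as opposed to the global "how does the model behave overall?".

```python
# Sketch: a crude local explanation of one prediction vs. a global summary (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

x = X[0]                                   # the single instance we want explained
base = model.predict_proba([x])[0, 1]      # its predicted probability for class 1

# Local view: how much does this particular prediction move when we nudge each feature?
for i in range(X.shape[1]):
    nudged = x.copy()
    nudged[i] += 0.5                       # arbitrary perturbation size (an assumption)
    delta = model.predict_proba([nudged])[0, 1] - base
    print(f"feature_{i}: shifts this prediction by {delta:+.3f}")

# Global view: an average summary of behaviour over the whole dataset.
print("global feature importances:", np.round(model.feature_importances_, 3))
```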

One thing I would also like to mention, especially because Tim is here and he is an expert in human-computer interaction, is that generating explanations with mathematical techniques is one thing, but making people actually understand those explanations is a completely different issue. For example, Tim designs interfaces to create this bridge between explaining and understanding models.

Going back to 'why it's important', there are several aspects to take into consideration. For example, it's important for developers while developing a model: by knowing why the model behaves in a certain way instead of another, they can create better models, so it's a way of driving development in an efficient manner.

When models are working in specific contexts, especially when humans are interacting with them, explainability and interpretability play an important role because they increase trust between the human and the model. It's really important to create this bridge between explanation and end user, and this bridge is built through interfaces between humans and computers.

Tim: When we speak about designing human-AI interfaces I would like to broaden our perspective. We really design the whole interaction, so it's not only about rearranging a few pixels, which is an important factor, but also about what questions we can ask a system, what the system informs me about, and which information it delivers to me and in which way. When we try to make people cooperate with a system, it's crucial for us to know how they understand what the system is doing.

For example: I have type 1 diabetes, and there is smart insulin pump technology that can calculate how much insulin you need from your current glucose level. For that to work, I also need to provide some information to the system, like what my physical activity was, what my food intake was and so on. If I want to interact and cooperate with the system correctly, I need to understand how it is processing this information. That's why the interface needs to show me, the end user, that the food intake is very important, because it makes the system behave differently. Sometimes this information is too complicated for the system to explain or, on the other hand, the system gives back information that needs further explanation before the user can understand it.
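As a purely illustrative sketch of why that food-intake input matters (a simplified, textbook-style bolus calculation with made-up parameters, not the logic of any real insulin pump): the same glucose reading leads to very different suggestions depending on the reported carbohydrates, which is exactly the dependency the interface should make visible.

```python
# Toy sketch: how reported food intake changes a suggested insulin dose.
# Simplified textbook-style formula with invented parameters; NOT a real device's algorithm.
def suggest_bolus(glucose_mg_dl, carbs_g, carb_ratio=10.0, correction_factor=50.0, target=110.0):
    """Return a suggested insulin dose in units (purely illustrative)."""
    meal_part = carbs_g / carb_ratio                                           # cover the meal
    correction_part = max(0.0, (glucose_mg_dl - target) / correction_factor)   # correct high glucose
    return round(meal_part + correction_part, 1)

# Same glucose reading, very different suggestions depending on reported food intake:
print(suggest_bolus(glucose_mg_dl=160, carbs_g=0))    # correction only -> 1.0
print(suggest_bolus(glucose_mg_dl=160, carbs_g=60))   # meal + correction -> 7.0
```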

When we design for interaction between humans and intelligent systems, explanations are our way of making this cooperation effective, efficient and trustworthy for both parties because they know which information is processed in which way, they can anticipate it and they can change their own behaviour accordingly.

This has been a crucial issue in automation for many decades, so it's nothing new for AI, but it is a special challenge because of what Luca already explained, for example the opaque techniques being developed, and we need to find solutions for exactly this piece of technology.

What is a black box in machine learning?

Luca: Well, it's becoming a very hot topic because of the issues we mentioned so far. Basically, with black box machine learning we mean all the situations in which we have a model and we are not able to understand its inner decision process. For example, we're dealing with a deep neural network and the network is so complex that we don't know why it came up with one decision instead of another.

It's a concept that is becoming more and more prominent for different reasons, mostly because in the past there used to be a lot of handwork involved in machine learning algorithms. A lot of feature extraction was done by hand, and engineers were supposed to design a way to, for example, tell an algorithm how many edges or corners there were in an image. Right now, deep learning is taking that job away from engineers and managing the extraction procedure by itself. This creates a lot of potential because it can really scale up the amount of information you can extract from data. On the other hand, it creates issues because of the complexity that arises from this kind of approach.

We can talk about black boxes in two main circumstances. The first is when developing a model, like the huge NLP models such as GPT-3 that literally require weeks and weeks to train and involve a lot of energy consumption, computational cost and so on. If, through explanations, we are able to understand why a certain architecture works better than another one, or to detect that a part of the architecture is not really needed for performance, say that 10% of the network is not that important for the final outcome, then we can save 10% of the cost, which can mean a huge impact when projected onto weeks of training and tons of CO2.
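As a rough illustration of that idea, the sketch below removes the 10% smallest-magnitude weights from a placeholder weight matrix. Magnitude-based pruning is only a stand-in here for whatever importance analysis an explanation technique would actually provide.

```python
# Sketch: pruning the ~10% of weights that an analysis suggests barely matter (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256))      # placeholder weight matrix for one layer

prune_fraction = 0.10
threshold = np.quantile(np.abs(weights), prune_fraction)
mask = np.abs(weights) >= threshold        # keep only the larger-magnitude weights
pruned = weights * mask

kept = mask.mean()
print(f"kept {kept:.0%} of the weights, removed {1 - kept:.0%}")
# Fewer active weights can mean less compute per training or inference step,
# which adds up over weeks of training.
```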

The second circumstance is about using models in real life: once developed and put in production, models need to be used by humans, who may need to be accountable for their decisions. Therefore, issues arise from the black box problem, for example in terms of bias. We need to make sure that the model is not biased against a certain segment of the population and, if we aren't able to understand what's going on inside the model, that can be tricky to keep under control. The objective of a decision support system is to make a process more efficient, not to exist for its own sake. We want a model that helps a clinician speed up the number of prognoses or diagnoses they perform during a day. If the model slows processes down instead of making them faster, because the clinician doesn't trust the predictions or discards some of them, then the line between an efficient model and a model that slows down processes becomes very thin.

The black box problem affects several stages of the machine learning lifecycle, and especially the last one, related to human-computer interaction. In terms of AI adoption, it can really determine whether machine learning models will be adopted in a field or not.

Where is Explainable AI most used?

Tim: I think it would be interesting to answer this question on two levels: where it is used at the moment and where it should be used. My feeling is that a huge portion of the research that has been done in XAI still focuses on very technical users. Like Luca said before, XAI is able to support users like developers by identifying problems in the model or in the algorithm, and by supporting them in fixing those problems or in understanding which data they come from, and so on.

From my perspective, this is one major application being developed at the moment, but when we look more at end users, or at products for people with less education in computer science, I think most people have already seen some kind of explanation from AI systems. For example, when a song is recommended on a music service like Spotify, or any other service, 'because other users have listened to the same one' or 'because you have listened to it before', that is already a kind of explanation. Or when we ask for a mortgage calculation: behind that there are sometimes AI systems that explain what your score should have been for the mortgage to be approved or why a credit was declined.

But where we really need to use it more, from my point of view, is in the field of medicine and personal healthcare. Here personalisation can have a huge impact on what we can do. For example, we have some projects going on in the field of nutrition, and personalisation has been super helpful because it can inform people. However, people also need to understand what they are doing and why they are getting specific recommendations or feedback on their behaviour. We can also reflect on the black box question that Luca was answering: we have different issues and we need XAI to tackle them. One of the important challenges in the medical sector is for people to be able to rely on XAI: they need to know it's trustworthy and fair.

We also have a big opportunity that we should not miss: XAI can help us understand things. When my diabetes is being treated by an AI, that AI should not only treat it, but also give me directions, for example: 'you really need to be cautious in the morning, and you could improve so much if you ate less at that moment' or 'do some activity in the morning because, when you do, those are your best days'. This is a huge opportunity, but when these applications are black box models, people don't have access to knowledge that was generated, maybe, from their own data. In conclusion, XAI is used at the moment by developers to fix models, but it has a huge impact on society when it helps people understand their own life, behaviour and actions.

How is Explainable AI used in medicine and what are the main challenges?

Luca: As Tim was saying in his previous answer, I think it's really important to start using Explainable AI in medicine because it's a very sensitive field where an increase of trust between models and humans would really make a difference. I agree with Tim that right now it is mostly used for debugging models, since it allows us to efficiently detect problems, for example when a model is not generalising properly, is overfitting, or there is data leakage. So if the model is acting stupidly, Explainable AI can really help us detect the misbehaviour.

On the other hand, there is still a big gap when it comes to using Explainable AI in human-computer interfaces. It's a topic that has grown a lot in recent years, and there is a plethora of techniques that try to explain models. These techniques can be based on local decisions or on the model's global behaviour; they can start from the neurons of the neural network or be more model-agnostic. The problem is that there is still a lot of work to be done in terms of personalising explanations to the specific context, because different people understand different information. Not every end user can digest the same information when trying to understand a model, so we need to find a way to tailor the explanation to the end user. The explanation that should serve a doctor is different from the one that should serve a nurse, which is different again from the one for an engineer. This requires a lot of customisation that, in my opinion, is not really standardised yet.

In terms of methodology, there are still a lot of issues in finding robust ways to explain models. I notice it also in big data science and machine learning platforms, where tools and libraries require some knowledge about the explanation techniques themselves: we basically need to calibrate the explanations and tune them to the specific model and type of data, and this is often neglected. Sometimes we end up with tools that output explanations, but these explanations are not really robust. For example, you can have two individuals who are very similar and obtain two completely different explanations for the same decision, because a small change in their properties may affect the explanation.
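One simple way to probe that kind of robustness is sketched below, with a synthetic dataset and a crude perturbation-based attribution standing in for any real local explanation method: explain two nearly identical individuals and measure how much the two explanations disagree.

```python
# Sketch: a stability check for local explanations (illustrative only).
# The perturbation-based attribution is a placeholder for any local method (e.g. LIME, SHAP).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def local_attribution(x, eps=0.5):
    """Crude local attribution: how much each feature shifts this one prediction."""
    base = model.predict_proba([x])[0, 1]
    shifts = []
    for i in range(len(x)):
        nudged = x.copy()
        nudged[i] += eps
        shifts.append(model.predict_proba([nudged])[0, 1] - base)
    return np.array(shifts)

x_a = X[0]                                                                # one individual
x_b = x_a + np.random.default_rng(1).normal(scale=0.01, size=x_a.shape)  # a near-identical one

attr_a, attr_b = local_attribution(x_a), local_attribution(x_b)
print("explanation for A:", np.round(attr_a, 3))
print("explanation for B:", np.round(attr_b, 3))
print("max disagreement: ", np.abs(attr_a - attr_b).max())
# Large disagreement for near-identical inputs is a sign the explanation is not robust.
```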

I think there's still a lot of work to be done on making these processes more robust, both from the methodology and from the design point of view. We need to find design patterns to understand which problem is suited to which kind of explanation. In this respect, the work of Tim and his colleagues is very important.

Tim: Thank you Luca for going first, because I had a hard time thinking about how to keep the answer short. This is such a big question, I could talk for hours about it.

First, I agree that at the moment XAI is mainly a debugging tool when we talk about medicine, and this is also based on the fact that, for good reasons, the skepticism and hesitation to use intelligent algorithms in diagnoses, operations and therapy decisions is high. If the system that recommends us a song makes an error, it isn't such a big deal: we've wasted two minutes of our lifetime listening to a song we didn't want to hear. On the contrary, we don't want an algorithm to make a mistake in diagnosing diseases. Building trust in such an algorithm requires much more effort from the developers than for many other algorithms. Also, the data we use here is far more sensitive: we need to be more cautious about which data we can use for explanation and which we cannot, because we can't handle medical data as easily as other kinds of data.

There are many challenges in using AI in medicine, and XAI is definitely needed here, but it needs to be far more tailored to the people who are expected to collaborate with these systems.

It's very important to have XAI algorithms that help a developer debug and understand a model in order to build better algorithms, but we also need XAI that enables patients, doctors and nurses to understand what they are actually doing, because intelligent systems are there to support their work and to make decisions that affect not only their own lives, but maybe other lives too.

Here it's important that we understand: what questions do these people have? What do they want to understand about the algorithms? We worked on a project where we really went to the doctors and asked them what questions they would ask the algorithm. XAI is exactly this: answering those questions to give people the chance to find out for themselves whether the algorithm is trustworthy enough, whether they understand it enough to use it, and whether they would use it in a matter as sensitive as medical diagnosis or operations.

I think this is the main challenge: people should have the chance to collaborate with AI rather than just use it as a tool, but we really need to enable them to understand whether this is something they can work together with, and to comprehend why the system decided that way. This is the main challenge we have for all AI systems, but especially in medicine, and we need to make it easy for people. No doctor, nurse or anyone else in the medical field wants to spend many hours on computer science. It needs to be easily controllable, and we want to be able to address our questions as clearly as possible to our algorithms.

Tags:

interview
'People also ask...and we answer!' is the new interview series by Clearbox AI. We try to answer Google's most researched questions on a specific topic about Artificial Intelligence or Machine Learning with our guests. Enjoy!