Explainable AI: an introduction to the core concepts

Written by
Willem Prisse
Sometimes the computer says no… but do we ever ask WHY the computer says no? The idea of asking for an explanation is as old as the human race. So, in this emerging era of AI systems that assist in increasingly important decisions affecting our daily lives, it becomes equally important to understand what preceded that decision or advice.

What is the black-box phenomenon of an AI system? Is it even possible to understand an AI system (spoiler: it is), and why should we want to? Those are some of the questions this article will try to answer. At DEUS, explainability plays a big role in making AI more humanity-centred. A big part of this is making it more understandable, relatable and trustworthy, and thus more explainable for everyone.

In the past few months I have been doing research on explainable AI (XAI) at DEUS, more specifically on which factors influence the need for explainability in AI. In the coming weeks I will be posting blogs on several topics that will help you understand what XAI is, why it is important and how it can help your business. In this initial article I will give a brief introduction to XAI and outline what will be discussed in the following ‘chapters’.


The black-box phenomenon in AI

AI systems are, pretty much without exception, developed using machine learning methods. How AI and machine learning relate to each other is another story in and of itself. A great explanation on that can be found here. For now, what’s important to realise is that the process of machine learning can lead to an AI system that is something of a ‘black-box’ in terms of how it works. This is especially the case with more complicated machine learning methods such as deep learning.


The black-box phenomenon isn’t limited to AI systems. Its human counterpart would be asking a contractor to build you something. After close inspection of your request and some calculations, the contractor sends you a quote. The contractor in this case is the AI system, and the quote is comparable to its output. What went on in the contractor’s mind is unknown to you, and how they came up with that price is unclear. Sometimes the output alone, in this case the quote, is all you need as an end user. However, sometimes this is not sufficient, either because you need to re-explain the quote to another person or because you need additional information to ‘believe’ and ‘trust’ the quote the contractor gave you. In order to understand the output (the quote) you would need additional information such as material costs and estimated man hours: in short, an explanation of why the quote is what it is. The same can be said for the outputs of AI systems.

The ‘black-box’ phenomenon occurs because the more complicated an AI system gets, the less understandable its decision-making process becomes for humans. Simpler machine learning methods, for instance linear regression, can be easily understood by humans, because there aren’t many ‘rules’ that dictate how the model works. However, if we look at convolutional neural networks, which can contain hundreds of thousands of nodes (decision points) interacting on different levels, it becomes difficult for a human to conceptualise the model and “understand” its output. This is where XAI comes in.
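
To make the contrast concrete, here is a minimal sketch (assuming scikit-learn, with synthetic data and made-up feature names purely for illustration) of why a linear model is easy to explain: its handful of learned coefficients can be read directly as the effect of each input on the output, something that has no simple equivalent in a deep network with hundreds of thousands of parameters.

```python
# Minimal sketch: a linear model's coefficients double as an explanation.
# Assumes scikit-learn; the data and feature names are made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                                  # 200 samples, 3 features
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)
for name, coef in zip(["income", "age", "tenure"], model.coef_):
    print(f"{name}: {coef:+.2f}")                              # each coefficient ≈ that feature's effect
```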


What is XAI?

In the simplest of words, XAI is a subdomain of artificial intelligence that strives to address the above-mentioned black-box phenomenon and shed light on how decisions are made by an AI system. The main question it tries to answer is: “Why, and based on what, does the AI system output what it outputs?” The main goal is to understand AI systems and the inner workings of their models, and to make them understandable for those who need to collaborate with the AI system, in order to create trust in the collaboration between human and machine.

Why XAI?

Why XAI is important can be broken down into several sub-reasons. At its basis, however, the final application of the AI system determines whether it really matters that the AI is not explainable to humans. For instance, AI systems used in spam filters simply need to work as well as they possibly can. However, there are situations in which it isn’t enough for an AI system to deliver just the output and no further details.

Imagine the case of Laura the oncologist, who has an AI system to aid her in diagnosing her patients. X-rays, blood sample values and other parameters are uploaded into a computer, which uses an AI system to produce an output based on those inputs. The main advantage for Laura of having an AI system at hand is its computing power, which gives it the capacity to sift through, compare and base its output on vast amounts of data.

However, Laura will need more than just an output. The AI system must be able to somehow explain its output to her. Besides that, a certain degree of trust needs to exist between Laura and the AI system to exploit the computing power that the AI system can offer as support. If there isn’t any trust, Laura will very likely ignore the output of the AI system and stick with her own choice, rendering the AI system useless to her and her patients. Explainability in this case is a necessity to enable a synergetic relationship between the user and the AI system, thus influencing the usefulness and value of that AI system directly.

How XAI?

There are several ways to achieve such explainability. In general all of those can be divided into three subdomains:

  • A priori XAI (pre-modelling phase)
  • Proprius XAI (modelling phase)
  • Post-hoc XAI (post-modelling phase)

A priori XAI refers to the phase before the model is built, also known as the pre-modelling phase. Proprius XAI covers the modelling phase itself: here the eventual AI model is made inherently more explainable. Post-hoc XAI refers to the phase in which the model has already been built and its output is explained after the fact. The majority of academic research and literature focusses on post-hoc XAI. It is important to realise that all three phases can contribute to overall explainability; they are not mutually exclusive.

A priori XAI

Firstly, explainability can be worked towards even before the model has been built. In this phase the only thing that can be analysed is the data, as there isn’t a model yet. Because machine learning will be used to build the AI system, its behaviour can be traced back to the data that was used to train it, which makes it possible for data scientists to explain the AI system’s behaviour. This concept is well illustrated with an example. If you train an AI system to determine whether or not cattle are in a picture, it could very well perform well enough. However, it could actually be determining whether there is green grass in the photo, as grass and cattle are often found together in photos, meaning that it is making predictions based on the wrong indicators. This is called data spilling. By analysing the data, you could prevent such a faux-causality.
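
As a minimal sketch of such a pre-modelling check, suppose (purely hypothetically) that the training labels come with a metadata flag indicating whether grass is visible in each photo; a simple cross-tabulation would already reveal whether ‘grass’ could serve as a shortcut for ‘cattle’.

```python
# A priori sketch: inspect the data before any model exists.
# The file and column names here are hypothetical, purely for illustration.
import pandas as pd

labels = pd.read_csv("training_labels.csv")   # columns: contains_grass, contains_cattle (0/1)

# If cattle and grass almost always co-occur, a model could learn
# "grass" as a shortcut for "cattle" and still score well on this data.
print(pd.crosstab(labels["contains_grass"], labels["contains_cattle"], normalize="all"))
print("correlation:", labels["contains_grass"].corr(labels["contains_cattle"]))
```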

Proprius XAI

Secondly, there is a simple XAI method that has already been hinted at above: “simply” use more inherently understandable machine learning algorithms, for example decision trees or Bayesian classifiers. For simpler problems this seems like a good solution. However, for more intricate use cases, such as photo analysis or other situations involving vast amounts of data, deep learning models structurally outperform simpler machine learning models.
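
As a minimal sketch of what “inherently understandable” means in practice (assuming scikit-learn and using its bundled Iris dataset purely for illustration), a shallow decision tree can be printed as a set of human-readable rules:

```python
# Proprius sketch: an inherently interpretable model whose learned rules
# can be read directly. Assumes scikit-learn; the Iris data is illustrative only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text renders the tree as nested if/else rules over the input features.
print(export_text(tree, feature_names=list(data.feature_names)))
```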

It might then seem that explainable modelling is synonymous with limiting your choice of model. However, this is not the case. Hybrid models combine simple (explainable) models with complicated black-box models. Self-Explaining Neural Networks and Contextual Explanation Networks are also examples of how the simplicity of explainable models and the analysing power of black-box models can be combined.

Post-hoc XAI

Finally, we arrive at post-hoc XAI. This phase focusses on methods that make existing AI systems more explainable. This means that explainability is not factored in a priori or proprius, but only once the model is finished and testing commences.

Local Interpretable Model-Agnostic Explanations (LIME) is a good and widely known example from this phase. This method strives to explain which attributes were most ‘important’ for the output that the AI delivered. However, it can only do this for one output at a time (hence ‘local’). It often does so by fitting a simpler model around that single instance to explain it.
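
As a minimal sketch of what that looks like in code (assuming the third-party lime package is installed; the random forest and Iris data here are only stand-ins for whatever black-box model you actually use):

```python
# Post-hoc sketch: explain one prediction of a black-box model with LIME.
# Assumes the 'lime' package; the model and data are illustrative stand-ins.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Which features weighed most for this single (local) prediction?
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=3)
print(explanation.as_list())   # [(feature condition, weight), ...]
```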

More to come

Hopefully this brief overview shed some light on what XAI is. As you can see, XAI encompasses a lot, too much to elaborate on in one article. Therefore, we will divide it up into several chapters that guide you through the world of XAI, diving deeper into the following subjects that we at DEUS believe are of importance for XAI:

More examples and explanation of:

  • A priori XAI
  • Proprius XAI
  • Post-hoc XAI

Elaboration on Why XAI:

  • Traceability
  • Trustworthiness
  • AI Bias
  • Ethical AI