Explaining the Explainable AI – Part 1

These days, machine learning models are extensively used in medicine, banking, credit underwriting, and many other fields. Despite this increased usage, there is a shortage of techniques that can explain and interpret why a model made a particular decision. Why, then, should machine learning models be trusted blindly? When we can justify our trust in an ML model, we gain better insight into its predictions and can improve our decision-making.

Traditionally, machine learning models have been stigmatized as ‘black boxes’: even though they can produce accurate predictions, we cannot clearly explain or identify the logic behind those predictions. A model’s accuracy was long considered sufficient evidence that it could be trusted. We can see what goes into the black box, e.g., a photograph of a dog, and what comes out of the AI system, e.g., a labeled version of the photograph, but we do not easily understand how or why the system made that determination.

But how do we go about extracting essential insights from these models? What should be kept in mind, and what features or tools will we need to achieve this? These are the critical questions that come to mind when the issue of model explainability is raised.


Explainable AI is a developing field in machine learning that focuses on making the black-box decisions of AI systems understandable.

Explainable AI is a process to:

  • Understand the predictions of an ML model. Making the model as interpretable as possible helps test its reliability and the causal role of its features, and helps organizations and economies make better decisions and operate more efficiently.
  • Enable human users to understand, appropriately trust, and effectively manage AI systems. As humans, we must be able to fully understand how decisions are being made in order to trust the decisions of AI systems. This is especially critical as we progress to third-wave AI systems, where machines can understand context and adapt accordingly.

The use of Explainable AI is best understood with the help of a comparison between today’s systems and tomorrow’s explainable AI systems.

We can see from the figure that with tomorrow’s explainable AI systems, users can better understand why and how the output is produced.


Today’s systems provide an output that may or may not be useful to humans, such as ‘a dog’ or ‘not a dog’ in the figure above. Explainable AI adds an interface that explains the inner workings of a machine learning model. By exposing some of the decision-making process, it helps humans understand how and why the model made a specific decision.

Explainability can help build trust

For instance, suppose a person applies for a bank loan and the application is rejected. The person is entitled to an explanation of why the loan was rejected and what can be done to get it approved. The loan officer who rejected the application can also point out where the applicant can improve, such as income level or employment history. By being able to access and understand an algorithm’s decision-making process, people will more openly accept the technology and be able to trust it.
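The loan scenario above can be sketched in code. This is a minimal, hypothetical illustration: the model is a hand-set logistic regression, and the feature names, weights, and applicant values are all assumptions, not real underwriting data. With such an interpretable model, each feature’s contribution to the decision score can be read off directly and relayed to the applicant.

```python
import math

# Hypothetical weights a simple interpretable loan model might learn
# (positive pushes toward approval, negative toward rejection).
weights = {
    "income_level": 0.8,
    "employment_years": 0.5,
    "existing_debt": -1.2,
}
bias = -0.5

# Standardized feature values for one (made-up) applicant.
applicant = {
    "income_level": 0.2,
    "employment_years": 0.1,
    "existing_debt": 1.5,
}

# Per-feature contribution to the decision score: weight * value.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())
probability = 1 / (1 + math.exp(-score))  # sigmoid of the score
decision = "approved" if probability >= 0.5 else "rejected"

# Rank features by how strongly they pushed the decision down.
# The lowest-ranked feature is the 'explanation' a loan officer
# could relay to the applicant.
reasons = sorted(contributions.items(), key=lambda kv: kv[1])
print(decision, reasons[0][0])
```

Here the large negative contribution from `existing_debt` dominates the score, so the sketch both rejects the application and names the feature most responsible, which is exactly the kind of answer the applicant is entitled to.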

Explainability can help ensure comparability

For instance, suppose two models produce similar results: one offers accurate but uninterpretable decisions, while the other offers a degree of explainability for its processes. A data scientist would prefer the model that can explain its predictions over the one that cannot. Such explainability helps data scientists compare results and makes users feel confident in the system’s outputs and the underlying algorithm. For example, suppose a model places great importance on a feature that the data scientists and business users consider less important. In that case, the data scientist can tweak the model, making it more robust for decision-making.
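The mismatch check described above can be sketched as follows. This is an assumed, simplified setup: the feature names, importance scores, and the expert ranking are all invented for illustration, and the comparison rule (flag a feature the model ranks at least two places higher than the experts do) is one arbitrary choice among many.

```python
# Hypothetical learned importances from an explainable model.
model_importance = {
    "zip_code": 0.45,         # model leans heavily on this feature
    "income_level": 0.30,
    "payment_history": 0.25,
}

# Ranking that (made-up) domain experts consider sensible,
# from most to least important.
expected_rank = ["payment_history", "income_level", "zip_code"]

# Rank features by the model's learned importance, highest first.
learned_rank = sorted(model_importance, key=model_importance.get, reverse=True)

# Flag features the model ranks at least two positions higher
# than the experts do; these are candidates for investigation
# before the model is trusted in production.
flags = [
    f for f in learned_rank
    if learned_rank.index(f) < expected_rank.index(f) - 1
]
print(flags)
```

In this toy example the model’s heavy reliance on `zip_code` gets flagged, which is precisely the signal a data scientist would use to revisit and tweak the model before deploying it.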

Explainability can help satisfy governance and compliance requirements

The European Union’s General Data Protection Regulation (GDPR) gives an individual (for example, the person whose loan was rejected) a ‘right to explanation’ for decisions based solely on automated processing. A business using personal data for automated processing must be able to explain how its system came to a decision; failing to respond to such a request would put it out of compliance with the GDPR. A model’s explainability therefore plays a significant role when the model is used in production. Companies generally have more confidence in explainable models, as they not only help with regulatory compliance but also save costs in the long term and lead to better customer satisfaction.


A model’s explainability does not make it fully transparent, but it is a step toward humans understanding the model’s decision-making process. Achieving transparency will allow a wider audience to trust AI and ML models and will help build unbiased models.

Additional details on explainability in AI will follow in subsequent articles.


