
Explainability For NLP


AI is now a technological reality across industries. To attain high-accuracy predictions, businesses increasingly adopt complex AI systems, or “black-box” models, but this comes at the cost of interpretability. As the need to trust AI’s decision-making process grows, Explainable AI (XAI) has become increasingly popular. Simply put, XAI is a set of tools and frameworks for explaining the internal workings of black-box models.

The field of XAI emerged to address the following issues and make AI systems more transparent:

  • Decision-makers require deep understanding and insight into the mechanics of AI systems, due to the critical and direct influence that AI has on our everyday lives.
  • Backend algorithms are mostly “black-box” models, such as highly sophisticated deep neural networks, which are only understood by technical experts.
  • Lack of interpretability is one of the key barriers to communication between technical and non-technical practitioners.


Natural Language Processing (NLP) is a subset of AI that enables computers to manipulate and interpret human language. In simple words, it allows computers to understand human language through the grammatical structure of sentences and the meaning of individual words. A few NLP applications we use every day are smart digital assistants (Siri, Alexa and Google Assistant), spell-checkers and language translation.

Powerful NLP tools that businesses currently adopt to improve their performance are:

  • Sentiment Analysis
  • ChatBots
  • Text Extraction
  • Topic Classification
  • Text Summarisation
  • Language Translation

These applications of NLP can be implemented using deep learning and language-embedding techniques, which are primarily “black-box” models.

Predactica offers CX Studio (Customer Experience), a deep-learning-based text analytics platform that delivers highly effective insights into customers’ perceptions and sentiments towards products, services and brand. We provide powerful tools such as sentiment analysis, intent prediction, emotion detection and Twitter CX, which allow businesses to take proactive measures to improve CX.


Because they deal with text, NLP models solve tasks that demand human-comprehensible interpretation, such as summarising long texts or contracts and translating international meetings. XAI is therefore essential for making NLP models interpretable and explainable.

Different components of NLP models need to be investigated. In particular, some important questions that need to be answered are: What linguistic knowledge is captured by neural networks? Why do they make certain predictions? How do they represent language? Are they robust? How do they fail?


Sentiment analysis is one of the most common NLP tools used by businesses to understand how their customers perceive them across social platforms. It is implemented by identifying, extracting and classifying text into classes: positive, negative or neutral. XNLP in sentiment analysis helps data scientists understand why and how the classifier makes its predictions, so that misclassification errors can be diagnosed and avoided.
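To make the identify/extract/classify pipeline concrete, here is a minimal sketch of a sentiment classifier. It uses a tiny hand-made lexicon (the word lists are illustrative assumptions, not a production resource); real systems would use deep learning or embeddings as described above.

```python
# Toy lexicon-based sentiment classifier -- an illustrative sketch of the
# identify/extract/classify steps, with made-up word lists.
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"bad", "terrible", "hate", "poor", "awful"}

def classify_sentiment(text: str) -> str:
    tokens = text.lower().split()
    # score = (# positive words) - (# negative words)
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_sentiment("I love this great product"))    # positive
print(classify_sentiment("terrible service, very poor"))  # negative
```

A model this simple is transparent by construction; the need for XAI arises precisely when such rules are replaced by a neural network whose scoring cannot be read off directly.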

The figure above shows a local explanation generated by LIME for sentiment analysis.

For a detailed understanding of sentiment analysis, check out my colleague’s blog – https://www.linkedin.com/pulse/sentiment-analysis-understanding-exploring-rishabh-bhatia/

Predactica’s XNLP tool provides local and global explanations using the LIME and SHAP libraries to explain and interpret how each feature (in the case of NLP, each ‘word’) affects the prediction of an instance and the impact it has on the classifier. Our tool offers many visualisations, such as saliency maps, for unambiguous explanations.

SHAP is a model-agnostic global explanation method based on game theory’s Shapley values. A detailed explanation of SHAP can be found here – https://www.linkedin.com/pulse/explaining-explainable-ai-part-2-shapley-values-aditi-dutt/
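The idea behind SHAP can be shown with exact Shapley values on a tiny example. The sketch below enumerates all coalitions of features directly (the SHAP library instead approximates this efficiently); the scoring function over two words is a hypothetical model invented for illustration.

```python
from itertools import combinations
from math import factorial

# Exact Shapley values by brute force: average each feature's marginal
# contribution over all coalitions of the other features.
def shapley_values(features, value_fn):
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for r in range(n):
            for coalition in combinations(others, r):
                s = len(coalition)
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                total += weight * (value_fn(set(coalition) | {f}) - value_fn(set(coalition)))
        phi[f] = total
    return phi

# Hypothetical sentiment model: "great" adds 0.6, "not" subtracts 0.5,
# and the pair interacts ("not great") for an extra -0.3.
def score(words):
    v = 0.0
    if "great" in words:
        v += 0.6
    if "not" in words:
        v -= 0.5
    if {"great", "not"} <= words:
        v -= 0.3
    return v

print(shapley_values(["great", "not"], score))
```

Note the efficiency property: the Shapley values sum to the difference between the full-model score and the empty-coalition score, which is what makes them attractive for attributing a prediction across words.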

LIME is a model agnostic local explanation method based on surrogate models. Detailed explanation of LIME can be found here – https://www.linkedin.com/pulse/explaining-explainable-ai-part3-lime-aditi-dutt/
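The surrogate idea behind LIME can be sketched for text in a few lines: perturb a sentence by masking words, query the black box on each perturbation, and fit a weighted linear model whose coefficients serve as local word importances. This is an illustrative sketch, not the lime library's API, and the black-box scorer below is a made-up stand-in for a real classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(words):
    # Hypothetical model to be explained: likes "love", dislikes "slow".
    score = 0.0
    if "love" in words:
        score += 0.8
    if "slow" in words:
        score -= 0.6
    return score

def lime_explain(sentence, n_samples=500):
    tokens = sentence.split()
    n = len(tokens)
    # Random binary masks: 1 = keep the word, 0 = drop it.
    masks = rng.integers(0, 2, size=(n_samples, n))
    preds = np.array([black_box([t for t, m in zip(tokens, row) if m])
                      for row in masks])
    # Weight samples by closeness to the original (all-words-present) instance.
    weights = np.exp(-(n - masks.sum(axis=1)) / n)
    sw = np.sqrt(weights)
    X = np.hstack([masks, np.ones((n_samples, 1))])  # add intercept column
    # Weighted least squares via sqrt-weight rescaling.
    coef, *_ = np.linalg.lstsq(sw[:, None] * X, sw * preds, rcond=None)
    return dict(zip(tokens, coef[:n]))

print(lime_explain("i love this but delivery was slow"))
```

Because the toy black box happens to be linear in the word indicators, the surrogate recovers the importances exactly ("love" ≈ +0.8, "slow" ≈ −0.6, the rest ≈ 0); for a real neural classifier the fit is only a local approximation, which is the point of LIME.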

By Sarjhana Ragunathan Brindha
Data Science Intern, Predactica