
Explainable AI – Part 2 – Shapley Values

Now that we know what explainable AI is and how it can enhance a user's understanding of a model's predictions, let's look at how we can quantify the model's behavior.

This section introduces Shapley values, the theory behind them, and how they are used to understand model behavior. Shapley values are a model-agnostic explanation method.

Game Theory

Game theory is the study of mathematical models of strategic interaction among rational decision-makers. It has applications in all fields of social science, as well as in logic, systems science, and computer science.

The Shapley value is a concept from game theory. But game theory needs at least two things: a game and some players. How does this apply to machine learning explainability? Imagine that we have a predictive model; then:

  • the “game” is reproducing the outcome of the model,
  • the “players” are the features included in the model.

What the Shapley value does is quantify the contribution that each player brings to the game.

Let’s try explaining this with an example.

For instance, consider a model that predicts a person's income given their age, gender, and job type. The average income prediction across all the people in our dataset is 50K.

Take two specific instances. If a person's age is 60, their job is full-time, and their gender is male, the prediction for this instance could be 70K. Similarly, if a person's age is 30, their job is part-time, and their gender is female, the prediction could be 40K.

The goal of Shapley values is to explain the difference between each actual prediction and the average prediction of 50K: +20K for instance 1 (70K) and -10K for instance 2 (40K).

  • For instance 1, the answer could be that age contributed +10K, sex contributed +6K, and job type +4K, so the prediction is 50K + 20K = 70K.
  • For instance 2, the answer could be that age contributed -5K, sex contributed -2K, and job type -3K, so the prediction is 50K - 10K = 40K.
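This illustrates the additivity property of Shapley values: the feature contributions plus the average prediction reconstruct the instance's prediction exactly. As a worked equation with the illustrative numbers above:

$$\hat{f}(x) \;=\; \underbrace{E[\hat{f}(X)]}_{\text{average prediction, 50K}} \;+\; \sum_{i=1}^{F} \phi_i(x)$$

For instance 1: 70K = 50K + (10K + 6K + 4K); for instance 2: 40K = 50K + (-5K - 2K - 3K).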

HOW DO WE CALCULATE THE SHAPLEY VALUE FOR ONE FEATURE?

The Shapley value is the average marginal contribution of a feature value across all possible coalitions. By adding a feature to a coalition, we can see the contribution its value makes. For example, to evaluate the coalition of only Age and Sex, we randomly draw another person's data and use that person's value of Job Type in place of the instance's own; this lets us measure what a feature value contributes to the coalition.

Take the second instance as an example, where Age = 30 and Sex = Female, with a randomly drawn Job Type value, say Full-Time; we make a prediction (50K). Second, for the same coalition we also replace Sex with a randomly sampled value, say Sex = Male (it could just as well have been Female, as in the previous step), and make the prediction again (70K). The contribution of Sex = Female is then 50K - 70K = -20K. This estimate depends on the randomly drawn person that served as a "donor" for the Job Type and Sex feature values, so we will get better estimates if we repeat this sampling step and average the contributions.
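To make this concrete, here is a minimal sketch of the sampling procedure, assuming the model's prediction function accepts a plain dict of feature values; `predict`, `X`, and `x` are hypothetical placeholders rather than any particular library's API:

```python
import random

def sample_contribution(predict, X, x, feature, n_iter=1000):
    """Monte Carlo estimate of `feature`'s Shapley value for instance x.

    predict: function mapping a feature dict to a prediction
    X:       list of feature dicts (the background dataset)
    x:       the instance to explain (a feature dict)
    """
    features = list(x.keys())
    total = 0.0
    for _ in range(n_iter):
        donor = random.choice(X)                        # random donor instance
        order = random.sample(features, len(features))  # random feature ordering
        pos = order.index(feature)
        # Features up to and including `feature` keep x's values,
        # the remaining ones take the donor's values
        with_f = {f: (x[f] if i <= pos else donor[f]) for i, f in enumerate(order)}
        # Same coalition, but `feature` also taken from the donor
        without_f = dict(with_f)
        without_f[feature] = donor[feature]
        total += predict(with_f) - predict(without_f)
    return total / n_iter
```

Averaging over many random orderings and donors converges to the exact Shapley value, at a fraction of the exponential cost of enumerating every coalition.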

Shapley values are based on the idea that the outcome of each possible combination (or coalition) of players should be considered to determine the importance of a single player. In our case, this corresponds to each possible combination of f features, with f going from 0 to F, where F is the total number of features (F = 3 in our example).

In math, this is called a “power set” and can be represented as a tree.

  • Each node represents a coalition of features.
  • Each edge represents the inclusion of a feature not present in the previous coalition.

Using each of these coalitions, we predict the value of income. We repeat this computation for all possible coalitions; the Shapley value of a feature is the average of its marginal contributions across all of them. Note that the computation time increases exponentially with the number of features.
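As a small sketch, the rows of this tree can be enumerated with Python's itertools, using the three features of our running example:

```python
from itertools import combinations

features = ["Age", "Sex", "Job Type"]

# Row f of the power-set tree holds all coalitions with exactly f features
for f in range(len(features) + 1):
    print(f, list(combinations(features, f)))
# 0 [()]
# 1 [('Age',), ('Sex',), ('Job Type',)]
# 2 [('Age', 'Sex'), ('Age', 'Job Type'), ('Sex', 'Job Type')]
# 3 [('Age', 'Sex', 'Job Type')]
```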

Two nodes connected by an edge differ by just one feature: the lower node has the same features as the upper one, plus one additional feature that the upper one does not have. The gap between the predictions of two connected nodes can therefore be attributed to the effect of that additional feature. This is called the "marginal contribution" of the feature, so each edge represents the marginal contribution a feature brings to the model. Each node is a prediction made using the features in that node's coalition, and the edges that introduce Age (shown as red edges in the figure) are the marginal contributions made by Age to each of those predictions.

So for an instance x₀, the Shapley value for Age is aggregated through a weighted average of these marginal contributions.
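Writing f(S) for the model's prediction when only the features in coalition S are known, the weighted average for our 3-feature example reads:

$$
\begin{aligned}
\phi_{\text{Age}}(x_0) ={}& w_1\,\big[f(\{\text{Age}\}) - f(\emptyset)\big] \\
&+ w_2\,\big[f(\{\text{Age},\text{Sex}\}) - f(\{\text{Sex}\})\big] \\
&+ w_3\,\big[f(\{\text{Age},\text{Job}\}) - f(\{\text{Job}\})\big] \\
&+ w_4\,\big[f(\{\text{Age},\text{Sex},\text{Job}\}) - f(\{\text{Sex},\text{Job}\})\big]
\end{aligned}
$$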

The idea behind calculating the weights is:

  • The sum of the weights of all the marginal contributions to 1-feature-nodes should be equal to the sum of the weights of all the marginal contributions to 2-feature-nodes, and so on.

i.e., w₁ = w₂ + w₃ = w₄

  • All the weights of marginal contributions to f-feature-nodes should be equal to each other.

i.e., w₂ = w₃

Keeping in mind that the sum of these weights should equal 1, the solution is:

  • w₁ = 1/3
  • w₂ = 1/6
  • w₃ = 1/6
  • w₄ = 1/3

From the above figure and conditions, we can derive a general rule for determining the weights: the weight of an edge is the reciprocal of the total number of edges in the same "row". Equivalently, the weight of a marginal contribution to an f-feature-node is the reciprocal of the number of possible marginal contributions to all the f-feature-nodes.

Each f-feature-node has f marginal contributions flowing into it (one per feature it contains). Thus, it is enough to count the number of possible f-feature-nodes and multiply it by f. The problem now boils down to counting the number of possible f-feature-nodes, given f and knowing that the total number of features is F; this is simply the binomial coefficient C(F, f) = F! / (f!(F - f)!).

Putting things together, the number of all the marginal contributions of all the f-feature-nodes — in other words, the number of edges in each “row” — is:
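$$f \cdot \binom{F}{f} = f \cdot \frac{F!}{f!\,(F-f)!}$$

The weight of each such marginal contribution is its reciprocal:

$$w_f = \frac{1}{f\binom{F}{f}} = \frac{(f-1)!\,(F-f)!}{F!}$$

For F = 3 this gives 3, 6, and 3 edges in rows 1, 2, and 3, reproducing the weights w₁ = w₄ = 1/3 and w₂ = w₃ = 1/6 found above.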

We have now built the full formula for calculating the Shapley value of Age in a 3-feature model.
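As a sketch of how that formula translates into code, the following enumerates every coalition for a small feature set. Here `predict_coalition` is a hypothetical helper that returns the model's expected prediction when only the given coalition of features is known (for example, by averaging the model's output over random values of the remaining features):

```python
from itertools import combinations
from math import factorial

def shapley_value(predict_coalition, features, target):
    """Exact Shapley value of `target` by enumerating all coalitions.

    predict_coalition: function mapping a frozenset of feature names to
                       the model's expected prediction given only those features
    """
    others = [f for f in features if f != target]
    F = len(features)
    phi = 0.0
    for size in range(len(others) + 1):
        for coalition in combinations(others, size):
            S = frozenset(coalition)
            # weight = |S|! * (F - |S| - 1)! / F!, the w_f derived above
            w = factorial(size) * factorial(F - size - 1) / factorial(F)
            phi += w * (predict_coalition(S | {target}) - predict_coalition(S))
    return phi
```

Because the loop visits every subset, the cost grows as 2^F, which is why sampling approximations like the one sketched earlier, and the optimizations implemented in SHAP, matter in practice.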

CONCLUSION

Now that we know how Shapley values are calculated, we can explore how to apply them to a real-world problem. In the next section, we will use open-source packages to understand a model's behavior. We will work with SHAP (SHapley Additive exPlanations), a game-theory-based approach to explaining model behavior. Check out the GitHub repository for shap, developed by Scott M. Lundberg and Su-In Lee.

CITATIONS

  • Samuele Mazzanti, "SHAP Values Explained Exactly How You Wished Someone Explained to You", Jan 3, 2020. Link
  • Scott Lundberg and Su-In Lee, "A Unified Approach to Interpreting Model Predictions", May 22, 2017. Link