SHAP (SHapley Additive exPlanations) is a model explainability framework built on the game-theoretic concept of Shapley values. Because computing Shapley values exactly is intractable for all but the smallest feature sets, SHAP addresses the computational challenge with a combination of clever approximations and insights from game theory. Here's the gist:

  1. Baseline Prediction: Establish a baseline using the average prediction across all data instances.
  2. Feature Permutations: Consider the possible orderings (permutations) in which features can be introduced. Order matters: a feature's marginal contribution depends on which features are already present.
  3. Marginal Contributions: For each ordering, add features one-by-one and compare the model's prediction before and after each feature is introduced; the difference is that feature's marginal contribution in that specific order. In practice SHAP does not retrain the model for each coalition; "absent" features are masked, e.g. replaced with baseline or background values.
  4. SHAP Value Approximation: Average these marginal contributions over orderings. Enumerating all d! orderings recovers the exact Shapley values but is intractable beyond a handful of features, so SHAP samples orderings (or exploits model structure, as TreeSHAP does for tree ensembles) to approximate them efficiently.
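The steps above can be sketched in a few lines. This is an illustrative toy, not the `shap` library's implementation: the linear `model`, the instance `x`, and the all-zeros `baseline` are stand-ins, and absent features are masked with the baseline (the interventional simplification) rather than retraining anything.

```python
import itertools
import numpy as np

# Stand-in for any predictor: f(x) = 2*x0 + 1*x1 - 3*x2.
def model(X):
    return X @ np.array([2.0, 1.0, -3.0])

def shapley_values(model, x, baseline, n_perms=None, seed=0):
    """Estimate Shapley values by averaging each feature's marginal
    contribution over feature orderings. Features outside the current
    coalition are masked with the baseline instead of retraining.
    With n_perms=None all d! orderings are enumerated (exact but
    only feasible for small d); otherwise orderings are sampled."""
    rng = np.random.default_rng(seed)
    d = len(x)
    perms = (list(itertools.permutations(range(d))) if n_perms is None
             else [rng.permutation(d) for _ in range(n_perms)])
    phi = np.zeros(d)
    for perm in perms:
        z = baseline.copy()                  # coalition starts empty
        prev = model(z[None, :])[0]
        for j in perm:
            z[j] = x[j]                      # add feature j to the coalition
            curr = model(z[None, :])[0]
            phi[j] += curr - prev            # marginal contribution of j
            prev = curr
    return phi / len(perms)

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros(3)
phi = shapley_values(model, x, baseline)
print(phi)  # linear model, zero baseline -> [2.0, 2.0, -9.0]
```

For a linear model the attributions reduce to weight times feature offset from the baseline, and they sum to the gap between the prediction for `x` and the baseline prediction, which is the local-accuracy property that makes Shapley values attractive.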

Strengths

  - Solid theoretical grounding: Shapley values are the unique additive attribution satisfying local accuracy, missingness, and consistency.
  - Model-agnostic, with fast exact variants for specific model classes (e.g. TreeSHAP for tree ensembles).
  - Local, per-prediction explanations that can also be aggregated into global feature-importance summaries.

Weaknesses

  - Computationally expensive for high-dimensional data or slow models; approximation quality depends on the sampling budget.
  - Masking absent features with baseline values implicitly assumes feature independence, which can distort attributions for correlated features.
  - Attributions describe the model's behavior, not causal effects in the data, and are easy to over-interpret.