Welcome to the SHAP documentation. SHAP (SHapley Additive exPlanations) is a game-theoretic approach to explaining the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see the papers for details and citations).
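For orientation, here is a minimal sketch of typical usage. It assumes shap, xgboost, and scikit-learn are installed; the dataset and model choices (California housing, XGBRegressor) are illustrative, not prescribed by the documentation.

```python
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Fit a simple model (illustrative dataset/model choice).
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# Explain the model's predictions with Shapley values.
explainer = shap.Explainer(model)
shap_values = explainer(X)

# Summarize how each feature contributes across the dataset.
shap.plots.beeswarm(shap_values)
```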
shap.Explainer: The primary explainer interface for the SHAP library. It takes any combination of a model and masker and returns a callable subclass object that implements the particular estimation algorithm that was chosen.
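A sketch of pairing a model with an explicit masker through this interface; the dataset, model, and max_samples value are illustrative assumptions.

```python
import shap
import xgboost
from sklearn.datasets import load_diabetes

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

# The masker defines how hidden features are perturbed when estimating SHAP values.
masker = shap.maskers.Independent(X, max_samples=100)

# shap.Explainer picks an estimation algorithm for this model/masker pair and
# returns a callable object; calling it on data yields an Explanation.
explainer = shap.Explainer(model.predict, masker)
shap_values = explainer(X.iloc[:50])
```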
decision plot: Decision plots support SHAP interaction values, the first-order interactions estimated from tree-based models. While SHAP dependence plots are the best way to visualize individual interactions, a decision plot can display the cumulative effect of main effects and interactions for one or more observations.
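A sketch of feeding tree-model interaction values into a decision plot; the dataset, model, and slice sizes are illustrative.

```python
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)
# First-order interactions: one (n_features, n_features) matrix per observation.
interaction_values = explainer.shap_interaction_values(X.iloc[:20])

# The decision plot accumulates main effects and interactions per observation.
shap.decision_plot(explainer.expected_value, interaction_values, X.iloc[:20])
```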
shap.DeepExplainer: class shap.DeepExplainer(model, data, session=None, learning_phase_flags=None). Meant to approximate SHAP values for deep learning models. This is an enhanced version of the DeepLIFT algorithm (Deep SHAP) where, similar to Kernel SHAP, we approximate the conditional expectations of SHAP values using a selection of background samples.
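A sketch of DeepExplainer on a small Keras network; the architecture, random data, and background-sample size are illustrative, and exact behavior depends on the TensorFlow and SHAP versions in use.

```python
import numpy as np
import shap
import tensorflow as tf

# A tiny Keras model on random data, purely for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
X = np.random.randn(200, 10).astype("float32")

# A background sample is used to approximate the conditional expectations.
background = X[np.random.choice(len(X), 50, replace=False)]
explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(X[:10])
```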
shap.Explanation: A sliceable set of parallel arrays representing a SHAP explanation. Note that instance methods such as max() return new Explanation objects with the operation applied, while class methods such as Explanation.max return OpChain objects that represent a set of dot-chained operations without actually running them.
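A sketch contrasting slicing, an instance method, and a class-method OpChain; the model and data setup are illustrative.

```python
import shap
import xgboost
from sklearn.datasets import load_diabetes

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)
shap_values = shap.Explainer(model)(X)

# Explanations slice like parallel arrays: by row, then by feature name.
first_row = shap_values[0]
bmi_column = shap_values[:, "bmi"]

# Instance method: applied immediately, returning a new Explanation.
row_max = shap_values[0].max()

# Class method: builds a deferred OpChain, e.g. to define a plot ordering.
order = shap.Explanation.abs.mean(0)
shap.plots.beeswarm(shap_values, order=order)
```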
shap.TreeExplainer: Uses Tree SHAP algorithms to explain the output of ensemble tree models. Tree SHAP is a fast and exact method to estimate SHAP values for tree models and ensembles of trees, under several different possible assumptions about feature dependence.
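A sketch of TreeExplainer with an explicit feature-dependence assumption; the random-forest model, background slice, and "interventional" setting are illustrative choices.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# "interventional" estimates SHAP values against a background dataset;
# "tree_path_dependent" instead uses the cover information stored in the trees.
explainer = shap.TreeExplainer(
    model, data=X.iloc[:100], feature_perturbation="interventional"
)
shap_values = explainer.shap_values(X)
```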
Release notes: A new shap.maskers module separates the various ways to mask (i.e., perturb or hide) features from the algorithms themselves. A new shap.explainers.Partition explainer can explain any text or image model very quickly.
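A sketch wiring a text masker to the Partition explainer; the regex tokenizer and the stand-in scoring function are illustrative assumptions, not a real model.

```python
import numpy as np
import shap

# A text masker hides words; the regex tokenizer splits on non-word characters.
masker = shap.maskers.Text(r"\W+")

def score(texts):
    # Stand-in for a real text model: scores each string by its word count,
    # so the sketch stays self-contained.
    return np.array([[len(t.split())] for t in texts])

# The Partition explainer exploits a hierarchy over the masked features,
# which is what makes it fast for text and image models.
explainer = shap.explainers.Partition(score, masker)
shap_values = explainer(["SHAP separates maskers from explanation algorithms"])
```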