SHAP (SHapley Additive exPlanations)
- Publisher: shap
- Status: active
- Version: 0.50.0
- Release Date: 2025-11-11
- Date Added: 2025-11-24
- Source URL: https://github.com/shap/shap
Summary
SHAP is an AI explainability framework based on Shapley values that quantifies how individual features contribute to a model’s output. It fits in the model-interpretability layer of the AI stack, helping teams understand, debug, and audit complex ML systems. Typical use cases include risk assessments, fairness evaluations, detection of anomalous model behavior, and providing human-interpretable explanations for regulated environments.
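For reference, the Shapley value underlying SHAP assigns feature i its average marginal contribution across all feature subsets S drawn from the full feature set F (f denotes the model's output restricted to a subset):

```latex
\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \left[ f(S \cup \{i\}) - f(S) \right]
```

The attributions are additive: the per-instance values φ_i plus the model's expected output sum to the prediction for that instance.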
Key Takeaways
- Provides unified, model-agnostic explanations using game-theoretic Shapley values
- Supports local and global interpretability for both instance-level and system-level analysis
- Integrates with major ML/DL frameworks (XGBoost, LightGBM, PyTorch, scikit-learn)
- Helps uncover biased, unstable, or security-relevant model behaviors
Related Code
Additional Sources
Tags
explainability, model-interpretability, risk-assessment, fairness, auditability
License
MIT