LIME (Local Interpretable Model-Agnostic Explanations)
- Publisher: marcotcr
- Status: active
- Version: 0.2.0.0
- Release Date: 2020-04-03
- Date Added: 2025-11-24
- Source URL: https://github.com/marcotcr/lime
Summary
LIME is an explainability framework that approximates a complex model's behavior locally, around a specific prediction, using a simple, interpretable surrogate model. Within the AI security and assurance stack, LIME supports transparency, debugging, and validation of model decisions by providing clear, human-readable feature attributions for individual predictions. Typical use cases include auditing high-risk systems, investigating fairness concerns, and ensuring traceability in regulated or safety-critical domains.
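The sketch below illustrates this workflow on tabular data using LIME's Python API, assuming scikit-learn and lime are installed; the iris dataset, random forest model, and `num_features=4` are illustrative choices for the example, not part of LIME itself.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train any classifier; LIME only needs a probability function later on.
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# The explainer perturbs the chosen instance and fits a weighted linear
# surrogate model in its local neighborhood.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction: as_list() returns per-feature contributions
# of the locally fitted surrogate.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
for feature, weight in exp.as_list():
    print(f"{feature}: {weight:+.3f}")
```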
Key Takeaways
- Produces local, instance-level explanations using interpretable surrogate models
- Model-agnostic: works with any classifier or regressor (see the sketch after this list)
- Helps identify misleading, biased, or unstable model behaviors near decision boundaries
- Useful for debugging model predictions in safety-critical or regulated workflows
- Complements global interpretability tools like SHAP by offering fine-grained local views
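To illustrate the model-agnostic point above: LIME only requires a callable that maps raw inputs to class probabilities, so an ordinary scikit-learn pipeline can be explained as-is. The toy phishing/benign texts and class names below are assumptions made for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy training data (illustrative only).
texts = [
    "urgent verify your account password now",
    "click this link to claim your prize",
    "meeting notes attached for tomorrow",
    "lunch at noon with the project team",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = benign

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

# pipeline.predict_proba accepts a list of strings and returns class
# probabilities, which is all LIME requires of the underlying model.
explainer = LimeTextExplainer(class_names=["benign", "phishing"])
exp = explainer.explain_instance(
    "please verify your password at this link",
    pipeline.predict_proba,
    num_features=5,
)
print(exp.as_list())
```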
Related Code
TBD
Additional Sources
- XAI Using LIME (GeeksforGeeks overview)
- "Why Should I Trust You?": Explaining the Predictions of Any Classifier (original arXiv paper)
Tags
explainability, model-interpretability, XAI, auditability, risk-assessment
License
BSD-2-Clause