Model Interpretability and Explainability.
This module provides tools for understanding model predictions through feature attribution and explanation methods.
§Methods
- SHAP (SHapley Additive exPlanations): Computes feature importance using Shapley values from cooperative game theory.
- Permutation Importance: Measures feature importance by shuffling each feature and measuring the resulting change in model performance (a standalone sketch of the idea follows this list).
- Feature Contributions: Decomposes predictions into per-feature contributions.
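Permutation importance is straightforward to sketch without the crate: score the model on held-out data, permute one feature column, re-score, and report the increase in error. The following is a standalone, dependency-free illustration of that idea (the toy model, the data, and the deterministic column rotation used in place of a random shuffle are hypothetical stand-ins, not aprender APIs):
// Standalone sketch (not part of aprender's API): permutation importance computed by hand.
fn main() {
    // Toy "model": the prediction depends strongly on feature 0 and weakly on feature 1.
    let model = |x: &[f64; 2]| 3.0 * x[0] + 0.1 * x[1];

    // Small evaluation set: rows of [feature_0, feature_1] with matching targets.
    let xs: Vec<[f64; 2]> = vec![[0.0, 1.0], [1.0, 0.0], [2.0, 2.0], [3.0, 1.0], [4.0, 3.0]];
    let ys: Vec<f64> = xs.iter().map(|x| model(x)).collect();

    // Mean squared error of the model over a feature matrix.
    let mse = |data: &[[f64; 2]]| -> f64 {
        data.iter()
            .zip(&ys)
            .map(|(x, y)| (model(x) - y).powi(2))
            .sum::<f64>()
            / data.len() as f64
    };
    let baseline = mse(xs.as_slice());

    // Importance of feature j = error increase after permuting column j.
    for j in 0..2 {
        let mut permuted = xs.clone();
        // Deterministic stand-in for a random shuffle: rotate column j by one row.
        let col: Vec<f64> = permuted.iter().map(|x| x[j]).collect();
        for (i, row) in permuted.iter_mut().enumerate() {
            row[j] = col[(i + 1) % col.len()];
        }
        println!("feature {j}: importance = {:.3}", mse(permuted.as_slice()) - baseline);
    }
}
With this toy model, feature 0 dominates the output, so its reported importance is far larger than feature 1's.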
§Example
use aprender::interpret::{KernelSHAP, Explainer};
// Create explainer with trained model
let explainer = KernelSHAP::new(model, background_data);
// Explain a prediction
let shap_values = explainer.explain(&sample);
// shap_values[i] = contribution of feature i to the prediction
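The attributions satisfy the SHAP additivity property: the baseline (the model's mean prediction over the background data) plus the per-feature contributions reconstructs the prediction being explained. A minimal standalone check of that bookkeeping, using made-up numbers rather than values produced by the crate:
// Illustrative numbers only, not output from the crate: SHAP attributions are additive,
// so baseline + sum(contributions) equals the model's prediction for the explained sample.
let baseline = 0.30_f64;                     // assumed mean prediction over background_data
let contributions = [0.12_f64, -0.05, 0.08]; // assumed per-feature attributions
let prediction = 0.45_f64;                   // assumed model output for `sample`
let reconstructed = baseline + contributions.iter().sum::<f64>();
assert!((reconstructed - prediction).abs() < 1e-9);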
§References
- Lundberg, S. M., & Lee, S. I. (2017). A Unified Approach to Interpreting Model Predictions. NeurIPS.
- Ribeiro, M. T., et al. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier (LIME). KDD.
Structs§
- CounterfactualExplainer - Counterfactual explanation generator.
- CounterfactualResult - Result of counterfactual explanation search.
- FeatureContributions - Feature contribution analysis.
- IntegratedGradients - Integrated Gradients for neural network attribution.
- LIME - LIME (Local Interpretable Model-agnostic Explanations).
- LIMEExplanation - LIME explanation result.
- PermutationImportance - Permutation feature importance.
- SaliencyMap - Saliency Maps for neural network visualization.
- ShapExplainer - SHAP (SHapley Additive exPlanations) values for feature attribution.
Traits§
- Explainer - Trait for model explainers.