Module interpret

Model Interpretability and Explainability.

This module provides tools for understanding model predictions through feature attribution and explanation methods.

§Methods

  • SHAP (SHapley Additive exPlanations): Computes feature importance using Shapley values from cooperative game theory.
  • Permutation Importance: Measures feature importance by permuting one feature at a time and recording the resulting change in prediction error (see the sketch below).
  • Feature Contributions: Decomposes a prediction into per-feature contributions.

The module also provides LIME, Integrated Gradients, Saliency Maps, and counterfactual explanations; see the structs listed below.

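Permutation importance is simple enough to show end to end. The sketch below is independent of this crate's PermutationImportance type: it works on plain slices, uses a closure in place of a fitted model, and substitutes a deterministic column reversal for a random shuffle. The idea is to score the model, break one feature's relationship to the target by permuting its column, and report the increase in error.

/// Mean squared error between predictions and targets.
fn mse(pred: &[f64], target: &[f64]) -> f64 {
    let n = pred.len() as f64;
    pred.iter().zip(target).map(|(p, t)| (p - t).powi(2)).sum::<f64>() / n
}

/// Permutation importance: for each feature, permute its column and record
/// how much the model's error grows relative to the baseline error.
fn permutation_importance<F>(predict: F, x: &[Vec<f64>], y: &[f64], n_features: usize) -> Vec<f64>
where
    F: Fn(&[Vec<f64>]) -> Vec<f64>,
{
    let baseline = mse(&predict(x), y);
    (0..n_features)
        .map(|j| {
            // A column reversal stands in for a random permutation so the
            // sketch needs no RNG dependency.
            let mut permuted = x.to_vec();
            let mut col: Vec<f64> = permuted.iter().map(|row| row[j]).collect();
            col.reverse();
            for (row, v) in permuted.iter_mut().zip(col) {
                row[j] = v;
            }
            // Importance = error increase once feature j is decoupled from y.
            mse(&predict(permuted.as_slice()), y) - baseline
        })
        .collect()
}

fn main() {
    // Toy model that only looks at feature 0.
    let predict = |rows: &[Vec<f64>]| rows.iter().map(|r| 2.0 * r[0]).collect::<Vec<f64>>();
    let x = vec![vec![1.0, 5.0], vec![2.0, 3.0], vec![3.0, 1.0]];
    let y = vec![2.0, 4.0, 6.0];
    let importances = permutation_importance(predict, &x, &y, 2);
    println!("{importances:?}"); // feature 0 > 0, feature 1 == 0
}

A real implementation repeats the shuffle several times with a random permutation and averages the resulting importances.
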
§Example

use aprender::interpret::{KernelSHAP, Explainer};

// Create explainer with trained model
let explainer = KernelSHAP::new(model, background_data);

// Explain a prediction
let shap_values = explainer.explain(&sample);

// shap_values[i] = contribution of feature i to the prediction
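
The values reported above estimate Shapley values from cooperative game theory (see the Lundberg & Lee reference below). For a handful of features they can be computed exactly, and the sketch below does so by brute-force enumeration of feature coalitions. It deliberately avoids this crate's explainer types: a closure stands in for the model and a single background sample fills in "absent" features, so the names here are illustrative only.

/// v(S): the model's output when features in the coalition `mask` are taken
/// from the sample being explained and all other features are taken from a
/// background (reference) sample.
fn coalition_value<F>(predict: &F, sample: &[f64], background: &[f64], mask: u32) -> f64
where
    F: Fn(&[f64]) -> f64,
{
    let mixed: Vec<f64> = sample
        .iter()
        .zip(background)
        .enumerate()
        .map(|(i, (s, b))| if mask & (1 << i) != 0 { *s } else { *b })
        .collect();
    predict(mixed.as_slice())
}

fn factorial(k: u32) -> f64 {
    (1..=k).map(f64::from).product()
}

/// Exact Shapley value of each feature: the weighted average of its marginal
/// contribution v(S ∪ {i}) - v(S) over every coalition S that excludes i.
/// Cost is O(2^n) coalition evaluations, which is why SHAP approximates it.
fn shapley_values<F>(predict: &F, sample: &[f64], background: &[f64]) -> Vec<f64>
where
    F: Fn(&[f64]) -> f64,
{
    let n = sample.len() as u32;
    (0..n)
        .map(|i| {
            let mut phi = 0.0;
            for mask in 0u32..(1 << n) {
                if mask & (1 << i) != 0 {
                    continue; // only coalitions without feature i
                }
                let s = mask.count_ones();
                // Shapley weight |S|! (n - |S| - 1)! / n!
                let weight = factorial(s) * factorial(n - s - 1) / factorial(n);
                phi += weight
                    * (coalition_value(predict, sample, background, mask | (1 << i))
                        - coalition_value(predict, sample, background, mask));
            }
            phi
        })
        .collect()
}

fn main() {
    // Linear toy model: f(x) = 3*x0 + 1*x1.
    let predict = |x: &[f64]| 3.0 * x[0] + 1.0 * x[1];
    let phi = shapley_values(&predict, &[2.0, 2.0], &[0.0, 0.0]);
    println!("{phi:?}"); // [6.0, 2.0]: attributions sum to f(sample) - f(background)
}

Kernel SHAP replaces the 2^n enumeration with a weighted linear regression over sampled coalitions, but the quantity it estimates is exactly this weighted average of marginal contributions.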

§References

  • Lundberg, S. M., & Lee, S. I. (2017). A Unified Approach to Interpreting Model Predictions. NeurIPS.
  • Ribeiro, M. T., et al. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier (LIME). KDD.

Structs§

CounterfactualExplainer
Counterfactual explanation generator.
CounterfactualResult
Result of counterfactual explanation search.
FeatureContributions
Feature contribution analysis.
IntegratedGradients
Integrated Gradients for neural network attribution.
LIME
LIME (Local Interpretable Model-agnostic Explanations).
LIMEExplanation
LIME explanation result.
PermutationImportance
Permutation feature importance.
SaliencyMap
Saliency Maps for neural network visualization.
ShapExplainer
SHAP (SHapley Additive exPlanations) values for feature attribution.

Traits§

Explainer
Trait for model explainers.