Function robustness_eval 

pub fn robustness_eval(
    model: &dyn AttackModel,
    inputs: &[Vec<f64>],
    labels: &[Vec<f64>],
    config: &AttackConfig,
    seed: u64,
) -> Result<f64, AdversarialError>

Evaluate the model’s adversarial robustness on a set of samples.

For each sample the PGD attack is run. A sample is considered “robust” if, for classification, the argmax prediction does not change after the attack, or, for regression, if the adversarial loss does not exceed the clean loss. Note that these are two distinct criteria applied depending on the task, not equivalent conditions.

Returns the fraction of robust samples, in the range [0, 1].
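The classification criterion above can be sketched in plain Rust. This is a minimal illustration of the robustness fraction using hypothetical stand-in prediction vectors; `argmax` and `robust_fraction` are illustrative helpers, not part of this crate, and the PGD attack itself is elided.

```rust
// Index of the largest element of a prediction vector (illustrative helper).
fn argmax(v: &[f64]) -> usize {
    v.iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

// Fraction of samples whose argmax is unchanged between clean and
// adversarial predictions (the classification robustness criterion).
fn robust_fraction(clean: &[Vec<f64>], adversarial: &[Vec<f64>]) -> f64 {
    let robust = clean
        .iter()
        .zip(adversarial)
        .filter(|(c, a)| argmax(c) == argmax(a))
        .count();
    robust as f64 / clean.len() as f64
}

fn main() {
    // Two samples: the first keeps its argmax after the attack, the second flips.
    let clean = vec![vec![0.1, 0.9], vec![0.8, 0.2]];
    let adv = vec![vec![0.2, 0.8], vec![0.4, 0.6]];
    println!("{}", robust_fraction(&clean, &adv)); // prints 0.5
}
```

In the real function the adversarial predictions come from running the configured PGD attack on each input, and the returned value is this fraction.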