rust_metrics 0.1.1

rust_metrics is a Rust-native evaluation toolkit that provides incremental metrics for classification and text generation pipelines. Every metric implements the same incremental Metric trait, so you can feed it batched predictions over time and request the final score when ready.
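To illustrate the update-then-compute pattern, here is a minimal sketch of what such an incremental trait might look like. The trait definition, associated types, and the RunningAccuracy struct below are illustrative assumptions, not the crate's actual API:

```rust
// Hypothetical sketch of an incremental metric trait; the names `Input`,
// `update`, and `compute` mirror the usage pattern described above but are
// not taken from the crate itself.
trait Metric {
    type Input;
    /// Fold one batch of data into the running state.
    fn update(&mut self, batch: Self::Input);
    /// Return the score accumulated so far, or None if no data was seen.
    fn compute(&self) -> Option<f64>;
}

/// A toy accuracy metric that accumulates correct/total counts across batches.
struct RunningAccuracy {
    correct: u64,
    total: u64,
}

impl Metric for RunningAccuracy {
    type Input = (Vec<u8>, Vec<u8>);

    fn update(&mut self, (preds, targets): Self::Input) {
        for (p, t) in preds.iter().zip(&targets) {
            if p == t {
                self.correct += 1;
            }
            self.total += 1;
        }
    }

    fn compute(&self) -> Option<f64> {
        (self.total > 0).then(|| self.correct as f64 / self.total as f64)
    }
}

fn main() {
    let mut acc = RunningAccuracy { correct: 0, total: 0 };
    acc.update((vec![0, 1, 1, 0], vec![0, 1, 0, 0])); // 3 of 4 correct
    acc.update((vec![1, 1], vec![1, 0]));             // 1 of 2 correct
    println!("{:?}", acc.compute());                  // 4/6 across both batches
}
```

The point of the pattern is that state stays small (here, two counters) no matter how many batches stream through, and compute can be called at any point for an interim score.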

Getting started

Add the crate to your project:

cargo add rust_metrics
# or enable the BERT-based similarity metric
cargo add rust_metrics --features text-bert

Evaluate batched predictions:

use rust_metrics::{BinaryAccuracy, BinaryAuroc, Metric};

let predictions = [0, 1, 1, 0];
let targets = [0, 1, 0, 0];

let mut accuracy = BinaryAccuracy::new();
accuracy.update((&predictions, &targets)).unwrap();
assert_eq!(accuracy.compute(), Some(0.75));

let scores = [0.2, 0.9, 0.6, 0.1]; // the positive example (index 1) scores highest
let mut auroc = BinaryAuroc::new(0); // 0 => compute exact ROC AUC
auroc.update((&scores, &targets.map(|t| t as f64))).unwrap();
assert!(auroc.compute().unwrap() > 0.9);

Available metrics

Classification

  • BinaryAccuracy, MulticlassAccuracy, MultilabelAccuracy
  • BinaryPrecision, BinaryRecall
  • BinaryHinge
  • BinaryAuroc (exact or binned ROC AUC)
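For intuition about the "exact" mode of BinaryAuroc: exact ROC AUC equals the Mann–Whitney U statistic, the fraction of (positive, negative) pairs in which the positive example scores higher, with ties counting as half. The self-contained sketch below demonstrates that definition; it is not the crate's implementation:

```rust
/// Exact ROC AUC as the fraction of (positive, negative) score pairs
/// ranked correctly, counting ties as half a correct pair.
/// Labels are 1 for positive, 0 for negative.
fn exact_auroc(scores: &[f64], labels: &[u8]) -> Option<f64> {
    let mut correct = 0.0;
    let mut pairs = 0.0;
    for (&s_pos, &l_pos) in scores.iter().zip(labels) {
        if l_pos != 1 {
            continue;
        }
        for (&s_neg, &l_neg) in scores.iter().zip(labels) {
            if l_neg != 0 {
                continue;
            }
            pairs += 1.0;
            if s_pos > s_neg {
                correct += 1.0;
            } else if s_pos == s_neg {
                correct += 0.5;
            }
        }
    }
    (pairs > 0.0).then(|| correct / pairs)
}

fn main() {
    let scores = [0.2, 0.9, 0.6, 0.1];
    let labels = [0, 1, 0, 0];
    // The single positive outranks every negative, so the AUC is 1.0.
    println!("{:?}", exact_auroc(&scores, &labels));
}
```

The binned variant trades this O(P·N) pairwise exactness for a fixed number of threshold buckets, which keeps memory constant on large streams.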

Text

  • Bleu with optional smoothing and arbitrary n-gram depth
  • EditDistance with sum or mean reduction
  • SentenceEmbeddingSimilarity (requires the text-bert feature), backed by fastembed. This metric embeds each sentence pair with a lightweight BERT model and reports the cosine similarity of the embeddings.
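As an illustration of what the edit-distance metric measures, Levenshtein distance between two strings can be computed with the classic single-row dynamic program. This is a standalone sketch of the algorithm, not the crate's code:

```rust
/// Levenshtein edit distance via the classic dynamic program.
/// `dp[j]` holds the distance from the processed prefix of `a` to `b[..j]`.
fn edit_distance(a: &str, b: &str) -> usize {
    let (a, b): (Vec<char>, Vec<char>) = (a.chars().collect(), b.chars().collect());
    let mut dp: Vec<usize> = (0..=b.len()).collect(); // distance from "" to b[..j]
    for i in 1..=a.len() {
        let mut prev = dp[0]; // carries dp[i-1][j-1] (the diagonal)
        dp[0] = i; // distance from a[..i] to ""
        for j in 1..=b.len() {
            let cur = dp[j];
            dp[j] = if a[i - 1] == b[j - 1] {
                prev // characters match: no edit needed
            } else {
                // 1 + min(substitution, deletion, insertion)
                1 + prev.min(dp[j]).min(dp[j - 1])
            };
            prev = cur;
        }
    }
    dp[b.len()]
}

fn main() {
    // The textbook example: substitute k->s, substitute e->i, insert g.
    println!("{}", edit_distance("kitten", "sitting")); // 3
}
```

With sum reduction an accumulated metric would add these distances across pairs; with mean reduction it would divide by the number of pairs seen.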

Feature flags

Feature    Default  Description
---------  -------  ---------------------------------------------------------
text-bert  no       Enables BERT sentence embedding similarity via fastembed.