# sklears-metrics
Comprehensive, high-performance evaluation metrics for machine learning in Rust, offering 10-50x speedup over scikit-learn with GPU acceleration support.
Latest release: 0.1.0-beta.1 (January 1, 2026). See the workspace release notes for highlights and upgrade guidance.
## Overview
sklears-metrics provides a complete suite of evaluation metrics including:
- Classification Metrics: Accuracy, Precision, Recall, F1, ROC-AUC, PR-AUC, and more
- Regression Metrics: MSE, MAE, R², MAPE, Huber, Quantile regression metrics
- Clustering Metrics: Silhouette, Davies-Bouldin, Calinski-Harabasz, V-measure
- Advanced Features: GPU acceleration, uncertainty quantification, streaming computation
- Specialized Domains: Computer vision, NLP, survival analysis, time series
## Quick Start
```rust
// Note: import paths and exact signatures below are reconstructed and may
// differ slightly from the published API.
use sklears_metrics::prelude::*;
use ndarray::array;

// Basic classification metrics
let y_true = array![0, 1, 1, 0, 1, 1];
let y_pred = array![0, 1, 0, 0, 1, 1];
let y_score = array![0.1, 0.9, 0.4, 0.2, 0.8, 0.7]; // predicted probabilities

let acc = accuracy_score(&y_true, &y_pred)?;
let (precision, recall, f1) = precision_recall_fscore(&y_true, &y_pred)?;
let auc = roc_auc_score(&y_true, &y_score)?;

println!("Accuracy: {acc:.3}");
println!("Precision: {precision:.3}, Recall: {recall:.3}, F1: {f1:.3}");
println!("ROC-AUC: {auc:.3}");
```
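The regression metrics listed in the overview follow the same call pattern. A minimal sketch, assuming function names such as `mean_squared_error`, `mean_absolute_error`, and `r2_score` that mirror their scikit-learn counterparts (these names are illustrative; check the crate documentation for the exact API):

```rust
use ndarray::array;

// Hypothetical function names mirroring scikit-learn; verify against the crate docs.
let y_true = array![3.0, -0.5, 2.0, 7.0];
let y_pred = array![2.5, 0.0, 2.0, 8.0];

let mse = mean_squared_error(&y_true, &y_pred)?;
let mae = mean_absolute_error(&y_true, &y_pred)?;
let r2 = r2_score(&y_true, &y_pred)?;
println!("MSE: {mse:.3}, MAE: {mae:.3}, R²: {r2:.3}");
```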
## Features

### Core Capabilities
- Comprehensive Coverage: 100+ metrics across all ML domains
- Type Safety: Compile-time validation with phantom types (see the sketch after this list)
- Performance: SIMD optimizations, GPU acceleration, parallel processing
- Memory Efficiency: Streaming metrics, compressed storage, lazy evaluation
- Production Ready: 393/393 crate tests passing, and included in the 11,292 passing workspace-wide checks for 0.1.0-beta.1
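The phantom-type approach mentioned above can be illustrated with a generic sketch (not the crate's actual types): label containers carry their task kind in the type, so a binary-only metric cannot be called on multiclass labels.

```rust
use std::marker::PhantomData;

// Marker types describing the label kind at the type level.
struct Binary;
struct Multiclass;

// Label container tagged with its kind; the tag costs nothing at runtime.
struct Labels<K> {
    values: Vec<u32>,
    _kind: PhantomData<K>,
}

impl<K> Labels<K> {
    fn new(values: Vec<u32>) -> Self {
        Labels { values, _kind: PhantomData }
    }
}

// Accepts only binary labels; passing `Labels<Multiclass>` is a compile error.
fn binary_accuracy(y_true: &Labels<Binary>, y_pred: &Labels<Binary>) -> f64 {
    let correct = y_true
        .values
        .iter()
        .zip(&y_pred.values)
        .filter(|(a, b)| a == b)
        .count();
    correct as f64 / y_true.values.len() as f64
}

fn main() {
    let y_true: Labels<Binary> = Labels::new(vec![0, 1, 1, 0]);
    let y_pred: Labels<Binary> = Labels::new(vec![0, 1, 0, 0]);
    println!("accuracy = {}", binary_accuracy(&y_true, &y_pred));
}
```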
### Advanced Features

#### GPU Acceleration
```rust
// Reconstructed example; module path and argument list are assumptions.
use sklears_metrics::gpu::{gpu_accuracy, GpuContext};

let ctx = GpuContext::new()?;
let accuracy = gpu_accuracy(&ctx, &y_true, &y_pred)?;
```
#### Uncertainty Quantification
```rust
// Reconstructed example; module path and argument lists are assumptions.
use sklears_metrics::uncertainty::{bootstrap_confidence_interval, conformal_prediction};

let (lower, upper) = bootstrap_confidence_interval(&y_true, &y_pred, accuracy_score, 0.95)?;
let prediction_sets = conformal_prediction(&calibration_scores, &test_scores, 0.1)?;
```
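For intuition, the percentile bootstrap behind such an interval can be written in a few lines of plain Rust. This generic sketch (using the `rand` crate, and not the crate's own implementation) resamples the predictions and reports an interval for accuracy:

```rust
use rand::Rng;

// Percentile bootstrap confidence interval for accuracy (generic sketch).
fn bootstrap_accuracy_ci(y_true: &[u32], y_pred: &[u32], n_boot: usize, alpha: f64) -> (f64, f64) {
    let mut rng = rand::thread_rng();
    let n = y_true.len();
    let mut scores: Vec<f64> = (0..n_boot)
        .map(|_| {
            // Resample indices with replacement and score the resample.
            let correct = (0..n)
                .map(|_| rng.gen_range(0..n))
                .filter(|&i| y_true[i] == y_pred[i])
                .count();
            correct as f64 / n as f64
        })
        .collect();
    scores.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let lo = scores[((alpha / 2.0) * n_boot as f64) as usize];
    let hi = scores[(((1.0 - alpha / 2.0) * n_boot as f64) as usize).min(n_boot - 1)];
    (lo, hi)
}
```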
#### Streaming Metrics
```rust
// Reconstructed example; the update method and its arguments are assumptions.
use sklears_metrics::streaming::StreamingMetrics;

let mut metrics = StreamingMetrics::new();
for batch in data_stream {
    metrics.update(&batch.y_true, &batch.y_pred)?;
}
let final_scores = metrics.compute()?;
```
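The idea behind streaming computation is to keep running sufficient statistics instead of buffering the stream, so memory stays constant in the stream length. A minimal sketch of streaming accuracy (illustrative only, not the crate's implementation):

```rust
// Streaming accuracy via running counts; O(1) memory regardless of stream length.
#[derive(Default)]
struct StreamingAccuracy {
    correct: u64,
    total: u64,
}

impl StreamingAccuracy {
    fn update(&mut self, y_true: &[u32], y_pred: &[u32]) {
        self.correct += y_true.iter().zip(y_pred).filter(|(a, b)| a == b).count() as u64;
        self.total += y_true.len() as u64;
    }

    fn compute(&self) -> f64 {
        self.correct as f64 / self.total as f64
    }
}
```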
## Performance

Benchmarks against scikit-learn show the following representative timings:
| Metric | scikit-learn | sklears-metrics | Speedup |
|---|---|---|---|
| Accuracy | 1.2ms | 0.05ms | 24x |
| ROC-AUC | 8.5ms | 0.3ms | 28x |
| Clustering | 15ms | 0.8ms | 19x |
| GPU Accuracy | N/A | 0.01ms | >100x |
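Timings like these are normally collected with a statistical benchmark harness. A sketch of how a single-metric measurement could be reproduced with the `criterion` crate (illustrative; this is not the project's actual benchmark suite):

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

// A plain accuracy function as the benchmark target; swap in any metric under test.
fn accuracy(y_true: &[u32], y_pred: &[u32]) -> f64 {
    let correct = y_true.iter().zip(y_pred).filter(|(a, b)| a == b).count();
    correct as f64 / y_true.len() as f64
}

fn bench_accuracy(c: &mut Criterion) {
    let y_true: Vec<u32> = (0..100_000).map(|i| i % 2).collect();
    let y_pred: Vec<u32> = (0..100_000).map(|i| (i / 3) % 2).collect();
    c.bench_function("accuracy_100k", |b| {
        b.iter(|| accuracy(black_box(&y_true), black_box(&y_pred)))
    });
}

criterion_group!(benches, bench_accuracy);
criterion_main!(benches);
```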
## Specialized Domains

### Computer Vision
```rust
// Reconstructed example; module path and argument lists are assumptions.
use sklears_metrics::vision::{iou_score, psnr, ssim};

let iou = iou_score(&pred_mask, &true_mask)?;
let similarity = ssim(&image_a, &image_b)?;
let peak_snr = psnr(&original, &reconstructed)?;
```
### Natural Language Processing
```rust
// Reconstructed example; module path and argument lists are assumptions.
use sklears_metrics::nlp::{bleu_score, perplexity, rouge_scores};

let bleu = bleu_score(&references, &candidate)?;
let rouge = rouge_scores(&references, &candidate)?;
let ppl = perplexity(&log_probabilities)?;
```
### Time Series
```rust
// Reconstructed example; module path and argument lists are assumptions.
use sklears_metrics::timeseries::{directional_accuracy, mase, smape};

let mase_score = mase(&y_true, &y_pred, &y_train)?;
let smape_score = smape(&y_true, &y_pred)?;
let da = directional_accuracy(&y_true, &y_pred)?;
```
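For reference, the standard definitions (with $\hat{y}_t$ the forecast of $y_t$ over $n$ test points, and the naive one-step forecast on the training series of length $T$ as the MASE denominator):

$$
\mathrm{sMAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \frac{2\,\lvert y_t - \hat{y}_t\rvert}{\lvert y_t\rvert + \lvert \hat{y}_t\rvert},
\qquad
\mathrm{MASE} = \frac{\tfrac{1}{n}\sum_{t=1}^{n} \lvert y_t - \hat{y}_t\rvert}{\tfrac{1}{T-1}\sum_{t=2}^{T} \lvert y_t - y_{t-1}\rvert}
$$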
## Advanced Usage

### Multi-Objective Optimization
```rust
// Reconstructed example; module path and argument lists are assumptions.
use sklears_metrics::multiobjective::{pareto_frontier, topsis_ranking};

let frontier = pareto_frontier(&objective_scores)?;
let rankings = topsis_ranking(&objective_scores, &weights)?;
```
### Federated Learning
```rust
// Reconstructed example; module path and argument lists are assumptions.
use sklears_metrics::federated::{privacy_preserving_metrics, secure_aggregation};

let global_metrics = secure_aggregation(&client_metrics)?;
let private_accuracy = privacy_preserving_metrics(&y_true, &y_pred, epsilon)?;
```
### Calibration
```rust
// Reconstructed example; module path and argument lists are assumptions.
use sklears_metrics::calibration::{calibration_curve, expected_calibration_error};

let (prob_true, prob_pred) = calibration_curve(&y_true, &y_prob, 10)?;
let ece = expected_calibration_error(&y_true, &y_prob, 10)?;
```
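Expected calibration error bins predictions by confidence and averages the gap between empirical accuracy and mean confidence per bin, weighted by bin size:

$$
\mathrm{ECE} = \sum_{b=1}^{B} \frac{n_b}{N}\,\bigl\lvert \mathrm{acc}(b) - \mathrm{conf}(b) \bigr\rvert
$$

where $n_b$ is the number of samples falling in bin $b$ and $N$ is the total sample count.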
## Architecture
The crate is organized into modules:
```text
sklears-metrics/
├── classification/   # Binary and multiclass metrics
├── regression/       # Continuous target metrics
├── clustering/       # Unsupervised evaluation
├── ranking/          # Information retrieval metrics
├── uncertainty/      # Confidence and uncertainty
├── streaming/        # Online and incremental metrics
├── gpu/              # CUDA-accelerated computations
├── visualization/    # Plotting and reporting
└── specialized/      # Domain-specific metrics
```
## Fluent API
```rust
// Reconstructed example; constructor and option arguments are assumptions.
use sklears_metrics::MetricsBuilder;

let results = MetricsBuilder::new(&y_true, &y_pred)
    .accuracy()
    .precision()
    .recall()
    .f1_score()
    .roc_auc()
    .with_confidence_intervals(0.95)
    .with_gpu_acceleration(true)
    .compute()?;
```
## Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT license (LICENSE-MIT)
## Citation