Learning to Rank for Rust: differentiable ranking, LTR losses, trainers, and IR evaluation metrics.
rankit provides everything needed to train and evaluate ranking models:
- Differentiable ranking: sigmoid-based soft ranking with multiple method variants (NeuralSort, SoftRank, SmoothI). O(n^2) complexity, suitable for n < 1000.
- LTR losses: RankNet, LambdaLoss (NDCG-weighted), ApproxNDCG, ListNet, ListMLE. Pairwise and listwise paradigms.
- Trainers: LambdaRank and Ranking SVM with query normalization, cost sensitivity, and score normalization options.
- Evaluation (feature eval): NDCG, MAP, MRR, Precision/Recall@K, ERR, RBP, F-measure. TREC format parsing. Batch evaluation. Statistical testing (paired t-test, confidence intervals, Cohen’s d).
§Quick start
use rankit::{soft_rank, ranknet_loss};
// Differentiable ranking
let scores = vec![5.0, 1.0, 2.0, 4.0, 3.0];
let ranks = soft_rank(&scores, 1.0);
// ranks[0] is highest (~4.0), ranks[1] is lowest (~0.0)
// RankNet pairwise loss
let predictions = vec![0.8, 0.3, 0.6];
let relevance = vec![2.0, 0.0, 1.0];
let loss = ranknet_loss(&predictions, &relevance);
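The listwise losses (listnet_loss, listmle_loss) are also re-exported at the crate root. The sketch below assumes they take the same (&predictions, &relevance) arguments shown for ranknet_loss above; the actual signatures may differ.
use rankit::{listnet_loss, listmle_loss};
// Listwise losses (sketch: call shape assumed to mirror ranknet_loss)
let predictions = vec![0.8, 0.3, 0.6];
let relevance = vec![2.0, 0.0, 1.0];
let listnet = listnet_loss(&predictions, &relevance); // ListNet: cross-entropy between top-one distributions
let listmle = listmle_loss(&predictions, &relevance); // ListMLE: negative log-likelihood of the relevance-sorted permutation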
§Feature flags
| Feature | Default | What it adds |
|---|---|---|
| eval | yes | IR evaluation metrics, TREC parsing, batch eval, statistics |
| losses | yes | LTR loss functions (RankNet, LambdaLoss, ApproxNDCG, ListNet, ListMLE) |
| gumbel | no | Gumbel-Softmax, relaxed top-k (requires rand) |
| parallel | no | Rayon parallelization for batch operations |
| serde | no | Serialization for eval result types |
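As one illustration (not taken from the crate’s own docs), the non-default features could be enabled in Cargo.toml roughly as follows; the version number is a placeholder.
[dependencies]
# placeholder version; use the current release from crates.io
rankit = { version = "0.1", features = ["gumbel", "parallel", "serde"] }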
Re-exports§
pub use batch::soft_rank_batch;
pub use batch::spearman_loss_batch;
pub use gradients::compute_lambdarank_gradients;
pub use gradients::compute_ranking_svm_gradients;
pub use gradients::fisher_information_softmax;
pub use gradients::natural_gradient_softmax;
pub use gradients::ndcg_at_k;
pub use gradients::pairwise_hinge_loss;
pub use gradients::sigmoid_derivative;
pub use gradients::soft_rank_gradient;
pub use gradients::spearman_loss_gradient;
pub use gradients::with_natural_gradient;
pub use gradients::GradientError;
pub use gradients::LambdaRankParams;
pub use gradients::LambdaRankTrainer;
pub use gradients::RankingSVMParams;
pub use gradients::RankingSVMTrainer;
pub use methods::soft_rank_neural_sort;
pub use methods::soft_rank_probabilistic;
pub use methods::soft_rank_sigmoid;
pub use methods::soft_rank_smooth_i;
pub use methods::RankingMethod;
pub use optimized::soft_rank_gradient_sparse;
pub use optimized::soft_rank_optimized;
pub use rank::soft_rank;
pub use topk::differentiable_topk;
pub use optimized::soft_rank_batch_parallel;
pub use losses::approx_ndcg;
pub use losses::approx_ndcg_loss;
pub use losses::lambda_loss;
pub use losses::listmle_loss;
pub use losses::listnet_loss;
pub use losses::ranknet_loss;
pub use losses::soft_rank_softsort;
pub use sampling::gumbel_attention_mask;
pub use sampling::gumbel_softmax;
pub use sampling::relaxed_topk_gumbel;
pub use eval::batch as eval_batch;
pub use eval::binary;
pub use eval::export;
pub use eval::graded;
pub use eval::statistics;
pub use eval::trec;
pub use eval::validation;
Modules§
- batch
- Batch processing utilities for efficient multi-query ranking.
- eval
- IR evaluation metrics, TREC parsing, batch evaluation, statistical testing.
- gradients
- Analytical gradient computation for differentiable ranking operations and Spearman loss.
- losses
- LTR loss functions and advanced ranking operations.
- methods
- Multiple differentiable ranking methods from research papers.
- optimized
- Performance-optimized implementations.
- pipeline
- End-to-end retrieval pipeline: tokenize, index, score, rank.
- rank
- Differentiable ranking operations using smooth relaxation (sigmoid-based, O(n^2)).
- sampling
- Gumbel-Softmax sampling and relaxed top-k.
- topk
- Differentiable top-k selection.
- topk_ce
- Top-k cross-entropy loss for classification.
Functions§
- spearman_loss
- Re-export of fynch’s Spearman correlation loss.