docs.rs failed to build rankit-0.1.1
Please check the build logs for more information.
See Builds for ideas on how to fix a failed build, or Metadata for how to configure docs.rs builds.
If you believe this is docs.rs' fault, open an issue.
# rankit
Learning to Rank for Rust: differentiable ranking, LTR losses, trainers, and IR evaluation metrics.
## What it does
- Differentiable ranking -- sigmoid-based soft ranking with multiple method variants from the literature (NeuralSort, SoftRank/Probabilistic, SmoothI). O(n^2) complexity, suitable for lists up to ~1000 items.
- LTR loss functions -- RankNet (Burges 2005), LambdaLoss (NDCG-weighted pairwise), ApproxNDCG (Qin & Liu 2010), ListNet (ICML 2007), ListMLE (ICML 2008).
- Gradient trainers -- LambdaRank and Ranking SVM with configurable query normalization, cost sensitivity, and score normalization.
- IR evaluation metrics -- NDCG, MAP, MRR, Precision@K, Recall@K, ERR, RBP, F-measure, R-Precision, Success@K. Binary and graded relevance.
- TREC format parsing -- load standard TREC run files and qrels, batch evaluate, export CSV/JSON.
- Statistical testing -- paired t-test, confidence intervals, Cohen's d effect size.
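As a reference point for what the graded-relevance metrics above compute, here is a minimal from-scratch NDCG@k in plain Rust. This is an illustration of the metric, not rankit's API; the function names are ours.

```rust
/// DCG@k with the common gain 2^rel - 1 and 1/log2(rank + 2) discount.
fn dcg_at_k(rels: &[f64], k: usize) -> f64 {
    rels.iter()
        .take(k)
        .enumerate()
        .map(|(i, &r)| (2f64.powf(r) - 1.0) / (i as f64 + 2.0).log2())
        .sum()
}

/// NDCG@k: DCG of the given ordering divided by DCG of the ideal
/// (relevance-sorted) ordering, so a perfect ranking scores 1.0.
fn ndcg_at_k(rels: &[f64], k: usize) -> f64 {
    let mut ideal = rels.to_vec();
    ideal.sort_by(|a, b| b.partial_cmp(a).unwrap());
    let idcg = dcg_at_k(&ideal, k);
    if idcg == 0.0 { 0.0 } else { dcg_at_k(rels, k) / idcg }
}
```

Passing the relevance grades in ranked order, `ndcg_at_k(&[3.0, 2.0, 1.0, 0.0], 4)` returns 1.0, while any worse ordering of the same grades scores strictly less.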
## Quick start

```rust
use rankit::{soft_rank, ranknet_loss};

// Differentiable ranking (example values; the rendered docs show the exact signatures)
let scores = vec![5.0, 0.5, 2.0, 3.0, 1.0];
let ranks = soft_rank(&scores, 0.1); // second argument assumed to be a temperature
// ranks[0] ≈ 4.0 (highest), ranks[1] ≈ 0.0 (lowest)

// RankNet pairwise loss
let predictions = vec![0.8, 0.2, 0.5];
let relevance = vec![1.0, 0.0, 1.0];
let loss = ranknet_loss(&predictions, &relevance);
```
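Sigmoid-based soft ranking of this kind can be sketched in a few lines. The following is a from-scratch illustration of the O(n^2) pairwise-sigmoid construction, not rankit's actual implementation; the name `soft_rank_sigmoid` and the `tau` parameter are ours.

```rust
/// Pairwise-sigmoid soft rank:
///   rank_i = sum over j != i of sigmoid((s_i - s_j) / tau)
/// As tau -> 0 this approaches the hard rank (0 = lowest, n-1 = highest),
/// and it stays differentiable in the scores for any tau > 0.
fn soft_rank_sigmoid(scores: &[f64], tau: f64) -> Vec<f64> {
    let n = scores.len();
    (0..n)
        .map(|i| {
            (0..n)
                .filter(|&j| j != i)
                .map(|j| 1.0 / (1.0 + (-(scores[i] - scores[j]) / tau).exp()))
                .sum::<f64>()
        })
        .collect()
}
```

With a small temperature, `soft_rank_sigmoid(&[5.0, 0.5, 2.0, 3.0, 1.0], 0.01)` is close to the hard ranks `[4, 0, 2, 3, 1]`; larger temperatures smooth the ranks toward n/2 and yield more useful gradients.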
## Feature flags

| Feature | Default | Description |
|---|---|---|
| `eval` | yes | IR evaluation metrics, TREC parsing, batch eval, statistics |
| `losses` | yes | LTR loss functions (RankNet, LambdaLoss, ApproxNDCG, ListNet, ListMLE) |
| `gumbel` | no | Gumbel-Softmax sampling, relaxed top-k (requires `rand`) |
| `parallel` | no | Rayon parallelization for batch operations |
| `serde` | no | Serialization for eval result types |
## Crate topology

rankit builds on fynch (Fenchel-Young losses, differentiable sorting primitives). Related crates:

- rankfns -- scoring functions (BM25, TF-IDF, DPH, language models)
- rankops -- ranked list operations (RBO, Kendall tau, fusion, interleaving)
## References
- Burges et al. "Learning to Rank using Gradient Descent" (ICML 2005) -- RankNet
- Qin & Liu. "A General Approximation Framework for Direct Optimization of Information Retrieval Measures" (2010) -- ApproxNDCG
- Cao et al. "Learning to Rank: From Pairwise Approach to Listwise Approach" (ICML 2007) -- ListNet
- Xia et al. "Listwise Approach to Learning to Rank" (ICML 2008) -- ListMLE
- Blondel et al. "Fast Differentiable Sorting and Ranking" (ICML 2020) -- soft ranking methods
## License
MIT OR Apache-2.0