# rankit

[![crates.io](https://img.shields.io/crates/v/rankit.svg)](https://crates.io/crates/rankit)
[![Documentation](https://docs.rs/rankit/badge.svg)](https://docs.rs/rankit)
[![CI](https://github.com/arclabs561/rankit/actions/workflows/ci.yml/badge.svg)](https://github.com/arclabs561/rankit/actions/workflows/ci.yml)

Learning to Rank for Rust: differentiable ranking, LTR losses, trainers, and IR evaluation metrics.

## What it does

- **Differentiable ranking** -- sigmoid-based soft ranking with multiple method variants from the literature (NeuralSort, SoftRank/Probabilistic, SmoothI). O(n^2) complexity, suitable for lists up to ~1000 items.
- **LTR loss functions** -- RankNet (Burges 2005), LambdaLoss (NDCG-weighted pairwise), ApproxNDCG (Qin & Liu 2010), ListNet (ICML 2007), ListMLE (ICML 2008).
- **Gradient trainers** -- LambdaRank and Ranking SVM with configurable query normalization, cost sensitivity, and score normalization.
- **IR evaluation metrics** -- NDCG, MAP, MRR, Precision@K, Recall@K, ERR, RBP, F-measure, R-Precision, Success@K. Binary and graded relevance.
- **TREC format parsing** -- load standard TREC run files and qrels, batch evaluate, export CSV/JSON.
- **Statistical testing** -- paired t-test, confidence intervals, Cohen's d effect size.

## Quick start

```rust
use rankit::{soft_rank, ranknet_loss};

// Differentiable ranking
let scores = vec![5.0, 1.0, 2.0, 4.0, 3.0];
let ranks = soft_rank(&scores, 1.0);
// ranks[0] ≈ 4.0 (highest), ranks[1] ≈ 0.0 (lowest)

// RankNet pairwise loss
let predictions = vec![0.8, 0.3, 0.6];
let relevance = vec![2.0, 0.0, 1.0];
let loss = ranknet_loss(&predictions, &relevance);
```
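For intuition about what `ranknet_loss` computes, the RankNet objective (Burges 2005) is a pairwise logistic loss: for every pair where item *i* is more relevant than item *j*, the loss is the cross-entropy `log(1 + exp(-(s_i - s_j)))`. The sketch below implements that math from scratch, averaged over preference pairs; `rankit`'s normalization may differ, so treat the exact value as illustrative.

```rust
// From-scratch sketch of the RankNet pairwise loss (Burges 2005).
// Not rankit's implementation; shown only to make the objective concrete.
fn ranknet_loss_sketch(scores: &[f64], rels: &[f64]) -> f64 {
    let mut total = 0.0;
    let mut pairs = 0u32;
    for i in 0..scores.len() {
        for j in 0..scores.len() {
            // Only pairs where item i should outrank item j.
            if rels[i] > rels[j] {
                // Target P(i > j) = 1, so the cross-entropy reduces to
                // -log sigma(s_i - s_j) = log(1 + exp(-(s_i - s_j))).
                total += (1.0 + (-(scores[i] - scores[j])).exp()).ln();
                pairs += 1;
            }
        }
    }
    if pairs == 0 { 0.0 } else { total / pairs as f64 }
}

fn main() {
    let predictions = [0.8, 0.3, 0.6];
    let relevance = [2.0, 0.0, 1.0];
    println!("loss = {:.4}", ranknet_loss_sketch(&predictions, &relevance));
}
```

The loss shrinks as the score margins on correctly ordered pairs grow, which is what makes it a smooth surrogate for pairwise ranking accuracy.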

## Feature flags

| Feature    | Default | Description |
|------------|---------|-------------|
| `eval`     | yes     | IR evaluation metrics, TREC parsing, batch eval, statistics |
| `losses`   | yes     | LTR loss functions (RankNet, LambdaLoss, ApproxNDCG, ListNet, ListMLE) |
| `gumbel`   | no      | Gumbel-Softmax sampling, relaxed top-k (requires `rand`) |
| `parallel` | no      | Rayon parallelization for batch operations |
| `serde`    | no      | Serialization for eval result types |
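For example, to pull in the optional Gumbel-Softmax and Rayon features on top of the defaults, a typical `Cargo.toml` entry looks like this (adjust the version as needed):

```toml
[dependencies]
rankit = { version = "0.1", features = ["gumbel", "parallel"] }
```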

## Crate topology

`rankit` builds on [`fynch`](https://crates.io/crates/fynch) (Fenchel-Young losses, differentiable sorting primitives). Related crates:

- [`rankfns`](https://crates.io/crates/rankfns) -- scoring functions (BM25, TF-IDF, DPH, language models)
- [`rankops`](https://crates.io/crates/rankops) -- ranked list operations (RBO, Kendall tau, fusion, interleaving)

## References

- Burges et al. "Learning to Rank using Gradient Descent" (ICML 2005) -- RankNet
- Qin & Liu. "A General Approximation Framework for Direct Optimization of Information Retrieval Measures" (2010) -- ApproxNDCG
- Cao et al. "Learning to Rank: From Pairwise Approach to Listwise Approach" (ICML 2007) -- ListNet
- Xia et al. "Listwise Approach to Learning to Rank" (ICML 2008) -- ListMLE
- Blondel et al. "Fast Differentiable Sorting and Ranking" (ICML 2020) -- soft ranking methods

## License

MIT OR Apache-2.0