//! Learning to Rank for Rust: differentiable ranking, LTR losses, trainers,
//! and IR evaluation metrics.
//!
//! rankit provides everything needed to train and evaluate ranking models:
//!
//! - **Differentiable ranking**: sigmoid-based soft ranking with multiple method
//!   variants (NeuralSort, SoftRank, SmoothI). O(n^2) complexity, suitable for
//!   n < 1000.
//! - **LTR losses**: RankNet, LambdaLoss (NDCG-weighted), ApproxNDCG, ListNet,
//!   ListMLE. Pairwise and listwise paradigms.
//! - **Trainers**: LambdaRank and Ranking SVM with query normalization, cost
//!   sensitivity, and score normalization options.
//! - **Evaluation** (feature `eval`): NDCG, MAP, MRR, Precision/Recall@K, ERR,
//!   RBP, F-measure. TREC format parsing. Batch evaluation. Statistical testing
//!   (paired t-test, confidence intervals, Cohen's d).
//!
//! # Quick start
//!
//! ```rust
//! use rankit::{soft_rank, ranknet_loss};
//!
//! // Differentiable ranking
//! let scores = vec![5.0, 1.0, 2.0, 4.0, 3.0];
//! let ranks = soft_rank(&scores, 1.0);
//! // ranks[0] is highest (~4.0), ranks[1] is lowest (~0.0)
//!
//! // RankNet pairwise loss
//! let predictions = vec![0.8, 0.3, 0.6];
//! let relevance = vec![2.0, 0.0, 1.0];
//! let loss = ranknet_loss(&predictions, &relevance);
//! ```
//!
//! # Feature flags
//!
//! | Feature | Default | What it adds |
//! |---------|---------|-------------|
//! | `eval` | yes | IR evaluation metrics, TREC parsing, batch eval, statistics |
//! | `losses` | yes | LTR loss functions (RankNet, LambdaLoss, ApproxNDCG, ListNet, ListMLE) |
//! | `gumbel` | no | Gumbel-Softmax, relaxed top-k (requires `rand`) |
//! | `parallel` | no | Rayon parallelization for batch operations |
//! | `serde` | no | Serialization for eval result types |
//!
//! # Module overview
//!
//! - Differentiable ranking operations (sigmoid-based, O(n^2)), with multiple
//!   ranking method variants from research papers.
//! - Analytical gradient computation for soft ranking and the Spearman loss.
//! - Batch processing utilities and performance-optimized implementations.
//! - LTR loss functions and advanced ranking operations, including
//!   differentiable top-k selection and a top-k cross-entropy loss.
//! - Gumbel-Softmax sampling and relaxed top-k (feature `gumbel`).
//! - IR evaluation metrics, TREC parsing, batch evaluation, and statistical
//!   testing (feature `eval`).
//! - End-to-end retrieval pipeline: tokenize, index, score, rank.
// --- Re-exports: core ---
pub use ;
pub use ;
pub use ;
pub use ;
pub use soft_rank;
pub use differentiable_topk;
pub use soft_rank_batch_parallel;
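// The re-exported `soft_rank` implements the sigmoid-based soft ranking the
// crate docs describe. As an illustration of the technique only — a minimal
// sketch, where `soft_rank_sketch` is a hypothetical name and `rankit`'s own
// conventions and signature may differ:

```rust
/// Soft rank sketch: r_i = Σ_{j≠i} σ((s_i − s_j) / tau), i.e. each element's
/// rank is a smooth count of how many elements score below it.
/// As tau → 0 this approaches the hard 0-based ascending rank; O(n^2).
pub fn soft_rank_sketch(scores: &[f64], tau: f64) -> Vec<f64> {
    let sigmoid = |x: f64| 1.0 / (1.0 + (-x).exp());
    scores
        .iter()
        .map(|&si| {
            // Sum over all j, then subtract the self-term σ(0) = 0.5.
            scores.iter().map(|&sj| sigmoid((si - sj) / tau)).sum::<f64>() - 0.5
        })
        .collect()
}

fn main() {
    let ranks = soft_rank_sketch(&[5.0, 1.0, 2.0, 4.0, 3.0], 0.1);
    // With a small temperature the soft ranks approach [4, 0, 1, 3, 2].
    println!("{ranks:?}");
}
```

// As `tau` grows, every soft rank blurs toward the mean rank (n − 1) / 2;
// that smoothing is what makes the operator differentiable end to end.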
// --- Re-exports: losses ---
pub use ;
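// RankNet is the simplest of the pairwise losses listed in the crate docs.
// A self-contained sketch of the quantity it computes — illustrative only,
// under a hypothetical name; the crate's exported `ranknet_loss` may weight
// or average pairs differently:

```rust
/// RankNet-style pairwise loss sketch: for every ordered pair with
/// relevance[i] > relevance[j], accumulate the logistic loss
/// ln(1 + e^{-(s_i - s_j)}) = -ln σ(s_i - s_j), then average over pairs.
pub fn ranknet_loss_sketch(scores: &[f64], relevance: &[f64]) -> f64 {
    let mut total = 0.0;
    let mut pairs = 0u32;
    for i in 0..scores.len() {
        for j in 0..scores.len() {
            if relevance[i] > relevance[j] {
                total += (1.0 + (-(scores[i] - scores[j])).exp()).ln();
                pairs += 1;
            }
        }
    }
    if pairs == 0 { 0.0 } else { total / pairs as f64 }
}

fn main() {
    // Same inputs as the quick-start example: well ordered, so the loss is low.
    let loss = ranknet_loss_sketch(&[0.8, 0.3, 0.6], &[2.0, 0.0, 1.0]);
    println!("{loss:.4}"); // ≈ 0.5422
}
```

// Swapping any correctly ordered pair raises the loss, which is exactly the
// signal a pairwise trainer descends on.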
// --- Re-exports: gumbel ---
pub use ;
// --- Re-exports from fynch (primitives layer) ---
/// Re-export fynch's Spearman loss.
pub use spearman_loss;
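// A Spearman loss is conventionally 1 − ρ, where ρ is the Spearman rank
// correlation between predicted and target orderings; fynch's differentiable
// version presumably substitutes soft ranks for hard ranks. A sketch of the
// underlying quantity using hard ranks — hypothetical name, assumes no ties,
// not fynch's actual implementation:

```rust
/// Spearman loss sketch: 1 − ρ, where ρ is the Pearson correlation of the
/// two (hard, 0-based) rank vectors. Ranges from 0 (same order) to 2
/// (exactly reversed order). Assumes no tied values.
pub fn spearman_loss_sketch(pred: &[f64], target: &[f64]) -> f64 {
    // Hard 0-based ascending rank: count of strictly smaller elements.
    let rank = |v: &[f64]| -> Vec<f64> {
        v.iter()
            .map(|&x| v.iter().filter(|&&y| y < x).count() as f64)
            .collect()
    };
    let pearson = |a: &[f64], b: &[f64]| -> f64 {
        let n = a.len() as f64;
        let (ma, mb) = (a.iter().sum::<f64>() / n, b.iter().sum::<f64>() / n);
        let cov: f64 = a.iter().zip(b).map(|(x, y)| (x - ma) * (y - mb)).sum();
        let va: f64 = a.iter().map(|x| (x - ma).powi(2)).sum();
        let vb: f64 = b.iter().map(|y| (y - mb).powi(2)).sum();
        cov / (va * vb).sqrt()
    };
    1.0 - pearson(&rank(pred), &rank(target))
}

fn main() {
    // Perfectly concordant orderings → ρ = 1 → loss = 0.
    println!("{}", spearman_loss_sketch(&[0.1, 0.5, 0.9], &[1.0, 2.0, 3.0])); // 0
}
```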
// --- Re-exports: eval ---
pub use ;
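// Of the eval metrics, NDCG is worth spelling out, since several of the losses
// above (LambdaLoss, ApproxNDCG) optimize it directly. A self-contained sketch
// using the common exponential-gain formulation — `ndcg_at_k_sketch` is a
// hypothetical name, and the `eval` module's gain/discount conventions may
// differ:

```rust
/// NDCG@k sketch: DCG@k = Σ_{i<k} (2^{rel_i} − 1) / log2(i + 2), normalized
/// by the DCG of the ideal (relevance-sorted) ordering.
/// `relevance_by_rank` lists graded relevance in the current ranked order.
pub fn ndcg_at_k_sketch(relevance_by_rank: &[f64], k: usize) -> f64 {
    let dcg = |rels: &[f64]| -> f64 {
        rels.iter()
            .take(k)
            .enumerate()
            .map(|(i, &r)| (2f64.powf(r) - 1.0) / (i as f64 + 2.0).log2())
            .sum()
    };
    // Ideal ordering: relevance sorted descending.
    let mut ideal = relevance_by_rank.to_vec();
    ideal.sort_by(|a, b| b.partial_cmp(a).unwrap());
    let idcg = dcg(&ideal);
    if idcg == 0.0 { 0.0 } else { dcg(relevance_by_rank) / idcg }
}

fn main() {
    // A perfect ranking scores 1.0; any misordering scores below it.
    println!("{}", ndcg_at_k_sketch(&[3.0, 2.0, 1.0, 0.0], 4)); // 1
    println!("{}", ndcg_at_k_sketch(&[0.0, 1.0, 2.0, 3.0], 4));
}
```

// Returning 0.0 when IDCG is zero (no relevant documents) is one common
// convention; implementations differ on that edge case.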